[jira] [Assigned] (PHOENIX-6092) Optionally queue DDL requests issued while metadata upgrade is in progress and replay on upgrade failure

2021-03-17 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6092:
-

Assignee: (was: Chinmay Kulkarni)

> Optionally queue DDL requests issued while metadata upgrade is in progress 
> and replay on upgrade failure
> 
>
> Key: PHOENIX-6092
> URL: https://issues.apache.org/jira/browse/PHOENIX-6092
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Critical
> Fix For: 4.16.1, 4.17.0
>
>
> Currently, if a metadata upgrade is in progress (either triggered by an 
> explicit "EXECUTE UPGRADE" command or by a new client with autoUpgrade 
> enabled), in-flight DDLs will generally go through and work as expected. 
> However, if the upgrade happens to fail, we restore the snapshot of 
> SYSTEM.CATALOG (and with 
> [PHOENIX-6086|https://issues.apache.org/jira/browse/PHOENIX-6086] even other 
> SYSTEM tables) to represent its state before the upgrade started. Due to 
> this, any DDLs issued after the upgrade began are lost.
> There are upgrade steps that need to iterate over each table/index/view in 
> the cluster, and multiple steps that need full table scans on SYSTEM.CATALOG, 
> so the time window in which we could potentially lose client DDLs is not 
> negligible (it could be on the order of minutes).
> This Jira is to discuss ways to tackle this problem. Perhaps we can introduce 
> a configuration which, when enabled, uses some sort of write-ahead log to 
> store DDLs issued while the upgrade is in progress and replays those DDLs in 
> case we need to restore SYSTEM tables from their snapshots.
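> A rough, illustrative sketch of that idea (the class name {{DdlReplayLog}} and 
> the durability details are hypothetical, not a committed design): DDL text is 
> appended to a log while the upgrade runs and re-executed over JDBC only if the 
> SYSTEM table snapshots have to be restored.
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.SQLException;
> import java.sql.Statement;
> import java.util.List;
> import java.util.concurrent.CopyOnWriteArrayList;
> 
> // Hypothetical sketch: queue DDL statements while a metadata upgrade is in
> // progress and replay them after a snapshot restore. A real implementation
> // would persist the log durably instead of keeping it in memory.
> public class DdlReplayLog {
>     private final List<String> pendingDdl = new CopyOnWriteArrayList<>();
> 
>     // Called for every DDL that arrives while the upgrade is running.
>     public void record(String ddl) {
>         pendingDdl.add(ddl);
>     }
> 
>     // Called only if the upgrade failed and the SYSTEM tables were restored
>     // from their pre-upgrade snapshots.
>     public void replay(String jdbcUrl) throws SQLException {
>         try (Connection conn = DriverManager.getConnection(jdbcUrl);
>              Statement stmt = conn.createStatement()) {
>             for (String ddl : pendingDdl) {
>                 stmt.execute(ddl); // re-issue the DDL that would otherwise be lost
>             }
>         }
>         pendingDdl.clear();
>     }
> }
> {code}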



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6134) Provide a server-side configuration to disallow all DDLs

2021-03-17 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6134:
-

Assignee: (was: Chinmay Kulkarni)

> Provide a server-side configuration to disallow all DDLs
> 
>
> Key: PHOENIX-6134
> URL: https://issues.apache.org/jira/browse/PHOENIX-6134
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Critical
> Fix For: 4.16.1, 4.17.0
>
>
> Having such an option is useful when doing a Phoenix rollback on a cluster, 
> doing metadata fixes such as manual changes to SYSTEM.CATALOG, etc.
> It gives server-side operators more power if they are able to explicitly 
> block DDLs in order to prevent metadata from ending up in an inconsistent 
> state.
> All changes for this should be on the server-side so that we can block DDLs 
> even from older clients.
>  
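> A rough sketch of how the server-side switch could work follows (the property 
> name {{phoenix.ddl.disallowed}} and the helper class are hypothetical; the real 
> key would be decided as part of this Jira). Because the flag is read from the 
> server-side configuration inside the metadata endpoint, the check would apply 
> to old clients too:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> 
> public class DdlGuard {
>     // Hypothetical configuration key.
>     public static final String DDL_DISALLOWED_ATTRIB = "phoenix.ddl.disallowed";
> 
>     // Invoked from the server-side DDL path (create/alter/drop) before any
>     // SYSTEM.CATALOG mutation is applied.
>     public static void checkDdlAllowed(Configuration conf) throws IOException {
>         if (conf.getBoolean(DDL_DISALLOWED_ATTRIB, false)) {
>             throw new IOException(
>                 "DDL statements are currently disallowed on this cluster");
>         }
>     }
> }
> {code}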



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5837) Table Level Phoenix Metrics Implementation.

2021-02-08 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-5837:
-

Assignee: vikas meka

> Table Level Phoenix Metrics Implementation.
> ---
>
> Key: PHOENIX-5837
> URL: https://issues.apache.org/jira/browse/PHOENIX-5837
> Project: Phoenix
>  Issue Type: Task
>Reporter: vikas meka
>Assignee: vikas meka
>Priority: Major
>  Labels: metric-collector, metrics
>
> Currently, Global Client Metrics provide aggregated information about the 
> usage of the application in terms of the total number of reads, writes, 
> failures, etc. In some scenarios these counters do not provide much useful 
> insight into Phoenix usage patterns. The task is to add table-level Phoenix 
> metrics on top of the global Phoenix metrics, which would be helpful in 
> analyzing the reads/writes/deletes on a table.
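> A conceptual sketch of table-level counters layered on top of the global ones 
> (class, enum, and metric names here are hypothetical and only illustrate the 
> shape of the change):
> {code:java}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.atomic.LongAdder;
> 
> // Hypothetical: one counter set per table name, updated alongside the existing
> // global counters so per-table read/write/delete activity can be analyzed.
> public class TableLevelMetrics {
>     public enum Metric { READS, WRITES, DELETES, FAILURES }
> 
>     private static final Map<String, Map<Metric, LongAdder>> METRICS =
>             new ConcurrentHashMap<>();
> 
>     public static void increment(String tableName, Metric metric, long delta) {
>         METRICS.computeIfAbsent(tableName, t -> new ConcurrentHashMap<>())
>                .computeIfAbsent(metric, m -> new LongAdder())
>                .add(delta);
>     }
> 
>     public static long get(String tableName, Metric metric) {
>         Map<Metric, LongAdder> perTable = METRICS.get(tableName);
>         LongAdder counter = (perTable == null) ? null : perTable.get(metric);
>         return (counter == null) ? 0L : counter.sum();
>     }
> }
> {code}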



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5586) Add documentation for Splittable SYSTEM.CATALOG

2021-02-04 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5586:
--
Fix Version/s: (was: 4.16.0)
   4.16.1

> Add documentation for Splittable SYSTEM.CATALOG
> ---
>
> Key: PHOENIX-5586
> URL: https://issues.apache.org/jira/browse/PHOENIX-5586
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.1, 4.16.1
>
>
> There are many changes after PHOENIX-3534, especially for backwards 
> compatibility. There are additional configurations such as 
> "phoenix.allow.system.catalog.rollback", which allows rollback of splittable 
> SYSTEM.CATALOG, etc. We should document these changes.
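> For example, the rollback switch mentioned above is just a boolean key read 
> from the HBase configuration; a minimal illustration (the default value of 
> {{false}} used here is an assumption made for this example):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> 
> public class SplittableSyscatConfigExample {
>     public static void main(String[] args) {
>         // Loads hbase-site.xml from the classpath.
>         Configuration conf = HBaseConfiguration.create();
>         // Property named in this issue; the default used here is an assumption.
>         boolean rollbackAllowed =
>                 conf.getBoolean("phoenix.allow.system.catalog.rollback", false);
>         System.out.println("SYSTEM.CATALOG rollback allowed: " + rollbackAllowed);
>     }
> }
> {code}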



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5435) Annotate HBase WALs with Phoenix Metadata

2020-12-16 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5435:
--
Fix Version/s: 4.16.0
   5.1.0

> Annotate HBase WALs with Phoenix Metadata
> -
>
> Key: PHOENIX-5435
> URL: https://issues.apache.org/jira/browse/PHOENIX-5435
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5435-4.x.patch
>
>
> HBase write-ahead-logs (WALs) drive not only failure recovery, but HBase 
> replication and some HBase backup frameworks. The WALs contain HBase-level 
> metadata such as table and region, but lack Phoenix-level metadata. That 
> means that it's quite difficult to build correct logic that needs to know 
> about Phoenix-level constructs such as multi-tenancy, views, or indexes. 
> HBASE-22622 and HBASE-22623 add the ability for coprocessors to annotate 
> extra key/value pairs of metadata into the HBase WAL. We should have the 
> option to annotate the relevant Phoenix metadata tuple, or some hashed form 
> from which that tuple can be reconstructed, into the WAL. It should have a 
> feature toggle so operators who don't need it don't bear the slight extra 
> storage cost.
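> A sketch of how a coprocessor could attach such an annotation, assuming the 
> hooks introduced by HBASE-22622/HBASE-22623 ({{RegionObserver#preWALAppend}} 
> and {{WALKey#addExtendedAttribute}}); the attribute key and the metadata 
> lookup below are placeholders, not the actual design:
> {code:java}
> import java.io.IOException;
> import java.util.Optional;
> import org.apache.hadoop.hbase.coprocessor.ObserverContext;
> import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
> import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
> import org.apache.hadoop.hbase.coprocessor.RegionObserver;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.apache.hadoop.hbase.wal.WALEdit;
> import org.apache.hadoop.hbase.wal.WALKey;
> 
> // Sketch only: annotates each WAL entry with Phoenix-level metadata using the
> // extended WALKey attributes assumed from HBASE-22622/22623.
> public class PhoenixWalAnnotationObserver implements RegionCoprocessor, RegionObserver {
> 
>     @Override
>     public Optional<RegionObserver> getRegionObserver() {
>         return Optional.of(this);
>     }
> 
>     @Override
>     public void preWALAppend(ObserverContext<RegionCoprocessorEnvironment> ctx,
>                              WALKey key, WALEdit edit) throws IOException {
>         // Placeholder lookup of the Phoenix-level metadata for this mutation.
>         String logicalTableName = resolvePhoenixTableName(ctx);
>         if (logicalTableName != null) {
>             key.addExtendedAttribute("phoenix.table", Bytes.toBytes(logicalTableName));
>         }
>     }
> 
>     private String resolvePhoenixTableName(
>             ObserverContext<RegionCoprocessorEnvironment> ctx) {
>         // Placeholder: a real implementation would derive tenant/view/index
>         // information from Phoenix metadata for the mutated rows.
>         return null;
>     }
> }
> {code}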



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2020-12-14 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-5404:
-

Assignee: (was: Chinmay Kulkarni)

> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 5.1.0, 4.16.1
>
>
> Remove the {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetaDataEndpointImpl.createTable.
> While creating a table or view, we need to ensure that there are no child 
> views that haven't been cleaned up by the DropChildView task yet. Move this 
> check to the client (issue a scan against SYSTEM.CHILD_LINK to see if a 
> single linking row exists).
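> A sketch of the proposed client-side check (the way the row-key prefix is 
> built from the parent's tenant/schema/table, and the physical 
> SYSTEM.CHILD_LINK table name under namespace mapping, are assumptions here):
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.filter.PageFilter;
> 
> // Sketch: look for a single child-link row under the parent's key prefix.
> public class ChildViewCheck {
>     public static boolean hasChildViews(Connection hbaseConn, byte[] parentKeyPrefix)
>             throws IOException {
>         try (Table childLink =
>                 hbaseConn.getTable(TableName.valueOf("SYSTEM.CHILD_LINK"))) {
>             Scan scan = new Scan();
>             scan.setRowPrefixFilter(parentKeyPrefix); // rows for this parent only
>             scan.setFilter(new PageFilter(1));        // one linking row is enough
>             try (ResultScanner scanner = childLink.getScanner(scan)) {
>                 Result first = scanner.next();
>                 return first != null && !first.isEmpty();
>             }
>         }
>     }
> }
> {code}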



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6256) Fix MaxConcurrentConnectionsIT test flapper

2020-12-14 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6256:
--
Attachment: PHOENIX-6256.4.x.v1.patch

> Fix MaxConcurrentConnectionsIT test flapper
> ---
>
> Key: PHOENIX-6256
> URL: https://issues.apache.org/jira/browse/PHOENIX-6256
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Xinyi Yan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-6256.4.x.v1.patch
>
>
> MaxConcurrentConnectionsIT failed 3 out of the last 10 runs.
>  
> h3. Error Message
> Found 1 connections still open. expected:<0> but was:<1>
> h3. Stacktrace
> java.lang.AssertionError: Found 1 connections still open. expected:<0> but 
> was:<1> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.apache.phoenix.query.MaxConcurrentConnectionsIT.testDeleteRuntimeFailureClosesConnections(MaxConcurrentConnectionsIT.java:122)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4763) Changing a base table property value should be reflected in child views (if the property wasn't changed)

2020-12-12 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-4763:
--
Fix Version/s: 4.15.0

> Changing a base table property value should be reflected in child views (if 
> the property wasn't changed)
> 
>
> Key: PHOENIX-4763
> URL: https://issues.apache.org/jira/browse/PHOENIX-4763
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4763-4.x-HBase-1.3.patch, PHOENIX-4763.patch, 
> PHOENIX-4763_v2.patch
>
>
> .. for properties that are valid on views. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (PHOENIX-4763) Changing a base table property value should be reflected in child views (if the property wasn't changed)

2020-12-12 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni closed PHOENIX-4763.
-

> Changing a base table property value should be reflected in child views (if 
> the property wasn't changed)
> 
>
> Key: PHOENIX-4763
> URL: https://issues.apache.org/jira/browse/PHOENIX-4763
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4763-4.x-HBase-1.3.patch, PHOENIX-4763.patch, 
> PHOENIX-4763_v2.patch
>
>
> .. for properties that are valid on views. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6256) Fix MaxConcurrentConnectionsIT test flapper

2020-12-10 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6256:
--
Fix Version/s: (was: 5.1.0)

> Fix MaxConcurrentConnectionsIT test flapper
> ---
>
> Key: PHOENIX-6256
> URL: https://issues.apache.org/jira/browse/PHOENIX-6256
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Xinyi Yan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.16.0
>
>
> MaxConcurrentConnectionsIT failed 3 out of the last 10 runs.
>  
> h3. Error Message
> Found 1 connections still open. expected:<0> but was:<1>
> h3. Stacktrace
> java.lang.AssertionError: Found 1 connections still open. expected:<0> but 
> was:<1> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.apache.phoenix.query.MaxConcurrentConnectionsIT.testDeleteRuntimeFailureClosesConnections(MaxConcurrentConnectionsIT.java:122)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6256) Fix MaxConcurrentConnectionsIT test flapper

2020-12-10 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6256:
--
Affects Version/s: (was: 5.0.0)

> Fix MaxConcurrentConnectionsIT test flapper
> ---
>
> Key: PHOENIX-6256
> URL: https://issues.apache.org/jira/browse/PHOENIX-6256
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Xinyi Yan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.16.0
>
>
> MaxConcurrentConnectionsIT failed 3 out of the last 10 runs.
>  
> h3. Error Message
> Found 1 connections still open. expected:<0> but was:<1>
> h3. Stacktrace
> java.lang.AssertionError: Found 1 connections still open. expected:<0> but 
> was:<1> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.apache.phoenix.query.MaxConcurrentConnectionsIT.testDeleteRuntimeFailureClosesConnections(MaxConcurrentConnectionsIT.java:122)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6256) Fix MaxConcurrentConnectionsIT test flapper

2020-12-10 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6256:
--
Affects Version/s: 5.0.0
   4.15.0

> Fix MaxConcurrentConnectionsIT test flapper
> ---
>
> Key: PHOENIX-6256
> URL: https://issues.apache.org/jira/browse/PHOENIX-6256
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Xinyi Yan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.16.0
>
>
> MaxConcurrentConnectionsIT failed 3 out of the last 10 runs.
>  
> h3. Error Message
> Found 1 connections still open. expected:<0> but was:<1>
> h3. Stacktrace
> java.lang.AssertionError: Found 1 connections still open. expected:<0> but 
> was:<1> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.apache.phoenix.query.MaxConcurrentConnectionsIT.testDeleteRuntimeFailureClosesConnections(MaxConcurrentConnectionsIT.java:122)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6256) Fix MaxConcurrentConnectionsIT test flapper

2020-12-10 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6256:
--
Fix Version/s: 5.1.0

> Fix MaxConcurrentConnectionsIT test flapper
> ---
>
> Key: PHOENIX-6256
> URL: https://issues.apache.org/jira/browse/PHOENIX-6256
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Xinyi Yan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> MaxConcurrentConnectionsIT failed 3 out of the last 10 runs.
>  
> h3. Error Message
> Found 1 connections still open. expected:<0> but was:<1>
> h3. Stacktrace
> java.lang.AssertionError: Found 1 connections still open. expected:<0> but 
> was:<1> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.apache.phoenix.query.MaxConcurrentConnectionsIT.testDeleteRuntimeFailureClosesConnections(MaxConcurrentConnectionsIT.java:122)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6256) Fix MaxConcurrentConnectionsIT test flapper

2020-12-10 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6256:
-

Assignee: Chinmay Kulkarni

> Fix MaxConcurrentConnectionsIT test flapper
> ---
>
> Key: PHOENIX-6256
> URL: https://issues.apache.org/jira/browse/PHOENIX-6256
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.16.0
>
>
> MaxConcurrentConnectionsIT failed 3 out of the last 10 runs.
>  
> h3. Error Message
> Found 1 connections still open. expected:<0> but was:<1>
> h3. Stacktrace
> java.lang.AssertionError: Found 1 connections still open. expected:<0> but 
> was:<1> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.apache.phoenix.query.MaxConcurrentConnectionsIT.testDeleteRuntimeFailureClosesConnections(MaxConcurrentConnectionsIT.java:122)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
>  at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-6086) Take a snapshot of all SYSTEM tables before attempting to upgrade them

2020-12-01 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reopened PHOENIX-6086:
---

> Take a snapshot of all SYSTEM tables before attempting to upgrade them
> --
>
> Key: PHOENIX-6086
> URL: https://issues.apache.org/jira/browse/PHOENIX-6086
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6086.4.x.000.patch, 
> PHOENIX-6086.master.000.patch, PHOENIX-6086.master.002.patch, 
> PHOENIX-6086.master.003.patch
>
>
> Currently we only take a snapshot of SYSTEM.CATALOG before attempting to 
> upgrade it (see 
> [this|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3718]).
>  From 4.15 onwards we also store critical metadata in other SYSTEM tables 
> like SYSTEM.CHILD_LINK, so henceforth it is beneficial to also snapshot those 
> tables before upgrading them.
> We also currently don't take a snapshot of SYSTEM.CATALOG on receiving an 
> [UpgradeRequiredException|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3685-L3707]
>  which we should do.
> In case of any errors during the upgrade, we restore SYSTEM.CATALOG from this 
> snapshot and we should extend this to all tables. In cases where the table 
> didn't exist before the upgrade, we need to ensure it is dropped so that a 
> subsequent upgrade attempt can start afresh.
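> A minimal sketch of the snapshot/restore/drop flow with the standard HBase 
> Admin API (snapshot naming and the way the SYSTEM table list is obtained are 
> illustrative, not the actual implementation):
> {code:java}
> import java.io.IOException;
> import java.util.List;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> 
> public class SystemTableSnapshots {
> 
>     // Take a snapshot of every SYSTEM table before the upgrade starts.
>     public static void snapshotAll(Admin admin, List<TableName> systemTables)
>             throws IOException {
>         for (TableName table : systemTables) {
>             // Naming scheme is an assumption made for this sketch.
>             String snapshotName = "UPGRADE-" + table.getNameAsString().replace(':', '-')
>                     + "-" + System.currentTimeMillis();
>             admin.snapshot(snapshotName, table);
>         }
>     }
> 
>     // Roll a single SYSTEM table back after a failed upgrade.
>     public static void restore(Admin admin, TableName table, String snapshotName,
>             boolean existedBeforeUpgrade) throws IOException {
>         if (existedBeforeUpgrade) {
>             // restoreSnapshot requires the table to be disabled first.
>             if (admin.isTableEnabled(table)) {
>                 admin.disableTable(table);
>             }
>             admin.restoreSnapshot(snapshotName);
>             admin.enableTable(table);
>         } else if (admin.tableExists(table)) {
>             // The table was created only by the failed upgrade: drop it so a
>             // subsequent upgrade attempt can start afresh.
>             admin.disableTable(table);
>             admin.deleteTable(table);
>         }
>     }
> }
> {code}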



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6230) IT suite hangs on ViewConcurrencyAndFailureIT

2020-11-20 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6230:
--
Affects Version/s: 4.15.0

> IT suite hangs on ViewConcurrencyAndFailureIT
> -
>
> Key: PHOENIX-6230
> URL: https://issues.apache.org/jira/browse/PHOENIX-6230
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Istvan Toth
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> The ASF Jenkins postcommit job timed out 5 times out of the last 6 runs on 
> the master branch.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6230) IT suite hangs on ViewConcurrencyAndFailureIT

2020-11-20 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6230:
--
Fix Version/s: 4.16.0

> IT suite hangs on ViewConcurrencyAndFailureIT
> -
>
> Key: PHOENIX-6230
> URL: https://issues.apache.org/jira/browse/PHOENIX-6230
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: Istvan Toth
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> The ASF Jenkins postcommit job timed out 5 times out of the last 6 runs on 
> the master branch.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6230) IT suite hangs on ViewConcurrencyAndFailureIT

2020-11-19 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6230:
--
Fix Version/s: 5.1.0

> IT suite hangs on ViewConcurrencyAndFailureIT
> -
>
> Key: PHOENIX-6230
> URL: https://issues.apache.org/jira/browse/PHOENIX-6230
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: Istvan Toth
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0
>
>
> The ASF Jenkins postcommit job timed out 5 times out of the last 6 runs on 
> the master branch.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6230) IT suite hangs on ViewConcurrencyAndFailureIT

2020-11-19 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6230:
--
Affects Version/s: (was: 5.1.0)
   5.0.0

> IT suite hangs on ViewConcurrencyAndFailureIT
> -
>
> Key: PHOENIX-6230
> URL: https://issues.apache.org/jira/browse/PHOENIX-6230
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: Istvan Toth
>Assignee: Chinmay Kulkarni
>Priority: Critical
>
> The ASF Jenkins postcommit job timed out 5 times out of the last 6 runs on 
> the master branch.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6230) IT suite hangs on ViewConcurrencyAndFailureIT

2020-11-19 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6230:
-

Assignee: Chinmay Kulkarni

> IT suite hangs on ViewConcurrencyAndFailureIT
> -
>
> Key: PHOENIX-6230
> URL: https://issues.apache.org/jira/browse/PHOENIX-6230
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>Reporter: Istvan Toth
>Assignee: Chinmay Kulkarni
>Priority: Critical
>
> The ASF Jenkins postcommit job timed out 5 times out of the last 6 runs on 
> the master branch.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6191) Creating a view which has its own new columns should also do checkAndPut checks on SYSTEM.MUTEX

2020-11-18 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni resolved PHOENIX-6191.
---
Resolution: Fixed

> Creating a view which has its own new columns should also do checkAndPut 
> checks on SYSTEM.MUTEX
> ---
>
> Key: PHOENIX-6191
> URL: https://issues.apache.org/jira/browse/PHOENIX-6191
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> Currently, when creating a view we do conditional writes with a checkAndPut 
> to SYSTEM.MUTEX for the keys:
> (, ,  name>)
> for each column in the view WHERE clause. Similarly, when issuing an ALTER 
> TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
> the key:
> (, ,  the column to add/drop>)
> to prevent conflicting modifications between a base table/view and its child 
> views. However, if we create a view with its own new columns, for ex:
> {code:sql}
> CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE 
> B = 10;
> {code}
> we will not do a checkAndPut for the new columns being added to the view 
> (NEW_COL1 and NEW_COL2), and thus conflicting concurrent mutations may be 
> applied to a parent, for example a simultaneous ALTER TABLE/VIEW of the 
> parent which adds NEW_COL1 as a VARCHAR. This will lead to data that cannot 
> be read properly.
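> A sketch of the missing conditional write (the SYSTEM.MUTEX row-key layout and 
> the column family/qualifier used here are placeholders, not Phoenix's actual 
> layout): each new view column would get the same checkAndPut treatment as the 
> WHERE-clause columns.
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class ViewColumnMutexSketch {
>     private static final byte[] FAMILY = Bytes.toBytes("0");        // placeholder
>     private static final byte[] QUALIFIER = Bytes.toBytes("MUTEX"); // placeholder
> 
>     // Returns true if the mutex cell was acquired, false if another client
>     // already holds it (i.e. a conflicting concurrent DDL is in flight).
>     public static boolean acquireColumnMutex(Connection conn, byte[] mutexRowKey)
>             throws IOException {
>         try (Table mutex = conn.getTable(TableName.valueOf("SYSTEM.MUTEX"))) {
>             Put put = new Put(mutexRowKey);
>             put.addColumn(FAMILY, QUALIFIER, Bytes.toBytes(true));
>             // A checkAndPut with a null expected value succeeds only when the
>             // cell does not exist yet, giving conditional-write semantics.
>             return mutex.checkAndPut(mutexRowKey, FAMILY, QUALIFIER, null, put);
>         }
>     }
> }
> {code}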



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6225) fix the dependency issue on the master branch

2020-11-16 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6225:
-

Assignee: Xinyi Yan

> fix the dependency issue on the master branch
> -
>
> Key: PHOENIX-6225
> URL: https://issues.apache.org/jira/browse/PHOENIX-6225
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6225.master.patch
>
>
> The master branch build failed locally because some files import 
> com.google.common instead of phoenix.thirdparty.com.google.common.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6225) fix the dependency issue on the master branch

2020-11-16 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6225:
--
Issue Type: Bug  (was: Improvement)

> fix the dependency issue on the master branch
> -
>
> Key: PHOENIX-6225
> URL: https://issues.apache.org/jira/browse/PHOENIX-6225
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Xinyi Yan
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6225.master.patch
>
>
> The master branch build failed locally because some files import 
> com.google.common instead of phoenix.thirdparty.com.google.common.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-6123) Old clients cannot query a view if the parent has an index

2020-11-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reopened PHOENIX-6123:
---

> Old clients cannot query a view if the parent has an index
> --
>
> Key: PHOENIX-6123
> URL: https://issues.apache.org/jira/browse/PHOENIX-6123
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> Steps to repro:
> 1. Start a 4.16 cluster and run the following with a 4.16 client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS S.T (A INTEGER NOT NULL PRIMARY KEY, B INTEGER, C 
> INTEGER);
> CREATE INDEX IF NOT EXISTS IDX ON S.T(B);
> CREATE VIEW IF NOT EXISTS V1 AS SELECT * FROM S.T WHERE C > 1;
> {code}
> 2. From a 4.14 client, try to query the newly created view:
> {code:sql}
> 0: jdbc:phoenix:> SELECT * FROM V1;
> Error: ERROR 504 (42703): Undefined column. columnName=S.IDX.0:C 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=S.IDX.0:C
>   at 
> org.apache.phoenix.schema.PTableImpl.getColumnForColumnName(PTableImpl.java:828)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:477)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.resolveColumn(WhereCompiler.java:197)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:183)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:170)
>   at 
> org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:60)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at 
> org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:45)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:96)
>   at 
> org.apache.phoenix.util.IndexUtil.rewriteViewStatement(IndexUtil.java:535)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addIndexesFromParentTable(MetaDataClient.java:918)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addTableToCache(MetaDataClient.java:4036)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:680)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:539)
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:573)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:391)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:593)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:567)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:330)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:315)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$600(PhoenixStatement.java:238)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:382)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:315)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:307)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1947)
>   at sqlline.Commands.execute(Commands.java:814)
>   at sqlline.Commands.sql(Commands.java:754)
>   at sqlline.SqlLine.dispatch(SqlLine.java:646)
>   at sqlline.SqlLine.begin(SqlLine.java:510)
>   at sqlline.SqlLine.start(SqlLine.java:233)
>   at sqlline.SqlLine.main(SqlLine.java:175)
> {code}
> The same happens if the view is created on top of a view that has an index. 
> Similarly, a view creation 

[jira] [Resolved] (PHOENIX-6123) Old clients cannot query a view if the parent has an index

2020-11-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni resolved PHOENIX-6123.
---
Resolution: Fixed

> Old clients cannot query a view if the parent has an index
> --
>
> Key: PHOENIX-6123
> URL: https://issues.apache.org/jira/browse/PHOENIX-6123
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> Steps to repro:
> 1. Start a 4.16 cluster and run the following with a 4.16 client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS S.T (A INTEGER NOT NULL PRIMARY KEY, B INTEGER, C 
> INTEGER);
> CREATE INDEX IF NOT EXISTS IDX ON S.T(B);
> CREATE VIEW IF NOT EXISTS V1 AS SELECT * FROM S.T WHERE C > 1;
> {code}
> 2. From a 4.14 client, try to query the newly created view:
> {code:sql}
> 0: jdbc:phoenix:> SELECT * FROM V1;
> Error: ERROR 504 (42703): Undefined column. columnName=S.IDX.0:C 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=S.IDX.0:C
>   at 
> org.apache.phoenix.schema.PTableImpl.getColumnForColumnName(PTableImpl.java:828)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:477)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.resolveColumn(WhereCompiler.java:197)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:183)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:170)
>   at 
> org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:60)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at 
> org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:45)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:96)
>   at 
> org.apache.phoenix.util.IndexUtil.rewriteViewStatement(IndexUtil.java:535)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addIndexesFromParentTable(MetaDataClient.java:918)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addTableToCache(MetaDataClient.java:4036)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:680)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:539)
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:573)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:391)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:593)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:567)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:330)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:315)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$600(PhoenixStatement.java:238)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:382)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:315)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:307)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1947)
>   at sqlline.Commands.execute(Commands.java:814)
>   at sqlline.Commands.sql(Commands.java:754)
>   at sqlline.SqlLine.dispatch(SqlLine.java:646)
>   at sqlline.SqlLine.begin(SqlLine.java:510)
>   at sqlline.SqlLine.start(SqlLine.java:233)
>   at sqlline.SqlLine.main(SqlLine.java:175)
> {code}
> The same happens if the view is created on top of a view that has an index. 
> 

[jira] [Resolved] (PHOENIX-6123) Old clients cannot query a view if the parent has an index

2020-11-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni resolved PHOENIX-6123.
---
Resolution: Cannot Reproduce

Looks like this is no longer an issue. Must have been fixed with other fixes 
along this code path :)
Closing.

> Old clients cannot query a view if the parent has an index
> --
>
> Key: PHOENIX-6123
> URL: https://issues.apache.org/jira/browse/PHOENIX-6123
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> Steps to repro:
> 1. Start a 4.16 cluster and run the following with a 4.16 client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS S.T (A INTEGER NOT NULL PRIMARY KEY, B INTEGER, C 
> INTEGER);
> CREATE INDEX IF NOT EXISTS IDX ON S.T(B);
> CREATE VIEW IF NOT EXISTS V1 AS SELECT * FROM S.T WHERE C > 1;
> {code}
> 2. From a 4.14 client, try to query the newly created view:
> {code:sql}
> 0: jdbc:phoenix:> SELECT * FROM V1;
> Error: ERROR 504 (42703): Undefined column. columnName=S.IDX.0:C 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=S.IDX.0:C
>   at 
> org.apache.phoenix.schema.PTableImpl.getColumnForColumnName(PTableImpl.java:828)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:477)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.resolveColumn(WhereCompiler.java:197)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:183)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:170)
>   at 
> org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:60)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at 
> org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:45)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:96)
>   at 
> org.apache.phoenix.util.IndexUtil.rewriteViewStatement(IndexUtil.java:535)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addIndexesFromParentTable(MetaDataClient.java:918)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addTableToCache(MetaDataClient.java:4036)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:680)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:539)
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:573)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:593)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:567)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:330)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:315)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$600(PhoenixStatement.java:238)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:382)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:315)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:307)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1947)
>   at sqlline.Commands.execute(Commands.java:814)
>   at sqlline.Commands.sql(Commands.java:754)
>   at sqlline.SqlLine.dispatch(SqlLine.java:646)
>   at sqlline.SqlLine.begin(SqlLine.java:510)
>   at sqlline.SqlLine.start(SqlLine.java:233)
>   at 

[jira] [Resolved] (PHOENIX-6212) Improve SystemCatalogIT.testSystemTableSplit() to ensure no splitting occurs when splitting is disabled

2020-11-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni resolved PHOENIX-6212.
---
Resolution: Fixed

Thanks for the review, [~yanxinyi]. Committed to master and 4.x.

> Improve SystemCatalogIT.testSystemTableSplit() to ensure no splitting occurs 
> when splitting is disabled
> ---
>
> Key: PHOENIX-6212
> URL: https://issues.apache.org/jira/browse/PHOENIX-6212
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6212.master.patch
>
>
> We should fix 
> [SystemCatalogIT.testSystemTableSplit()|https://github.com/apache/phoenix/blob/8aa243d1e35a2a9ac85c6c30f1e958289973c214/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java#L65]
>  so that it ensures that splitting SYSTEM.CATALOG fails if either 
> phoenix.allow.system.catalog.rollback=true and/or
> phoenix.system.catalog.splittable=false
> Currently this test is not really ensuring that a split fails (see [this 
> comment|https://github.com/apache/phoenix/pull/949#discussion_r516276781] for 
> more details)
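> For illustration, a minimal JUnit-style sketch of the strengthened assertion 
> (assumes the usual HBase client and JUnit imports; getUtility() is the 
> mini-cluster helper, and the Admin method names differ between HBase 1.x and 2.x):
> {code:java}
> @Test
> public void testSystemCatalogDoesNotSplit() throws Exception {
>     TableName sysCat = TableName.valueOf("SYSTEM.CATALOG");
>     try (Admin admin = getUtility().getAdmin()) {
>         int regionsBefore = admin.getRegions(sysCat).size();
>         try {
>             admin.split(sysCat);  // request a split; it must not take effect
>         } catch (IOException expected) {
>             // the split request may also be rejected outright
>         }
>         Thread.sleep(5000);  // give any async split attempt time to surface
>         assertEquals(regionsBefore, admin.getRegions(sysCat).size());
>     }
> }
> {code}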



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6212) Improve SystemCatalogIT.testSystemTableSplit() to ensure no splitting occurs when splitting is disabled

2020-11-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6212:
--
Attachment: PHOENIX-6212.master.patch

> Improve SystemCatalogIT.testSystemTableSplit() to ensure no splitting occurs 
> when splitting is disabled
> ---
>
> Key: PHOENIX-6212
> URL: https://issues.apache.org/jira/browse/PHOENIX-6212
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6212.master.patch
>
>
> We should fix 
> [SystemCatalogIT.testSystemTableSplit()|https://github.com/apache/phoenix/blob/8aa243d1e35a2a9ac85c6c30f1e958289973c214/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java#L65]
>  so that it ensures that splitting SYSTEM.CATALOG fails if either 
> phoenix.allow.system.catalog.rollback=true and/or
> phoenix.system.catalog.splittable=false
> Currently this test is not really ensuring that a split fails (see [this 
> comment|https://github.com/apache/phoenix/pull/949#discussion_r516276781] for 
> more details)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6212) Improve SystemCatalogIT.testSystemTableSplit() to ensure no splitting occurs when splitting is disabled

2020-11-10 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6212:
--
Description: 
We should fix 
[SystemCatalogIT.testSystemTableSplit()|https://github.com/apache/phoenix/blob/8aa243d1e35a2a9ac85c6c30f1e958289973c214/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java#L65]
 so that it ensures that splitting SYSTEM.CATALOG fails if either 
phoenix.allow.system.catalog.rollback=true and/or
phoenix.system.catalog.splittable=false

Currently this test is not really ensuring that a split fails (see [this 
comment|https://github.com/apache/phoenix/pull/949#discussion_r516276781] for 
more details)

  was:
We should fix 
[SystemCatalogIT.testSystemTableSplit()|https://github.com/apache/phoenix/blob/8aa243d1e35a2a9ac85c6c30f1e958289973c214/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java#L65]
 so that it ensures that splitting SYSTEM.CATALOG fails if either 
phoenix.allow.system.catalog.rollback=true and/or
phoenix.system.catalog.splittable=false

Currently this test is not really ensuring that a split goes through (see [this 
comment|https://github.com/apache/phoenix/pull/949#discussion_r516276781] for 
more details)


> Improve SystemCatalogIT.testSystemTableSplit() to ensure no splitting occurs 
> when splitting is disabled
> ---
>
> Key: PHOENIX-6212
> URL: https://issues.apache.org/jira/browse/PHOENIX-6212
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> We should fix 
> [SystemCatalogIT.testSystemTableSplit()|https://github.com/apache/phoenix/blob/8aa243d1e35a2a9ac85c6c30f1e958289973c214/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java#L65]
>  so that it ensures that splitting SYSTEM.CATALOG fails if either 
> phoenix.allow.system.catalog.rollback=true and/or
> phoenix.system.catalog.splittable=false
> Currently this test is not really ensuring that a split fails (see [this 
> comment|https://github.com/apache/phoenix/pull/949#discussion_r516276781] for 
> more details)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6032) When phoenix.allow.system.catalog.rollback=true, a view still sees data from a column that was dropped

2020-11-02 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6032:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
47s{color} | {color:blue} phoenix-core in master has 969 extant spotbugs 
warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 56s{color} 
| {color:red} phoenix-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-PHOENIX-Build/149/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | PHOENIX-6032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13014595/PHOENIX-6032.master.v5.patch
 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti 
checkstyle compile |
| uname | Linux cb5dd8111da4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev/phoenix-personality.sh |
| git revision | master / e828ef7 |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| unit | 
https://ci-hadoop.apache.org/job/PreCommit-PHOENIX-Build/149/artifact/patchprocess/patch-unit-phoenix-core.txt
 |
|  Test Results | 
https://ci-hadoop.apache.org/job/PreCommit-PHOENIX-Build/149/testReport/ |
| Max. process+thread count | 6952 (vs. ulimit of 3) |
| modules | C: phoenix-core U: phoenix-core |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-PHOENIX-Build/149/console |
| versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.

)

> When phoenix.allow.system.catalog.rollback=true, a view still sees data from 
> a column that was dropped
> 

[jira] [Created] (PHOENIX-6212) Improve SystemCatalogIT.testSystemTableSplit() to ensure no splitting occurs when splitting is disabled

2020-11-02 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-6212:
-

 Summary: Improve SystemCatalogIT.testSystemTableSplit() to ensure 
no splitting occurs when splitting is disabled
 Key: PHOENIX-6212
 URL: https://issues.apache.org/jira/browse/PHOENIX-6212
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.15.0, 5.0.0
Reporter: Chinmay Kulkarni
Assignee: Chinmay Kulkarni
 Fix For: 5.1.0, 4.16.0


We should fix 
[SystemCatalogIT.testSystemTableSplit()|https://github.com/apache/phoenix/blob/8aa243d1e35a2a9ac85c6c30f1e958289973c214/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java#L65]
 so that it ensures that splitting SYSTEM.CATALOG fails if either 
phoenix.allow.system.catalog.rollback=true and/or
phoenix.system.catalog.splittable=false

Currently this test is not really ensuring that a split goes through (see [this 
comment|https://github.com/apache/phoenix/pull/949#discussion_r516276781] for 
more details)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6032) When phoenix.allow.system.catalog.rollback=true, a view still sees data from a column that was dropped

2020-11-02 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6032:
--
Attachment: PHOENIX-6032.master.v6.patch

> When phoenix.allow.system.catalog.rollback=true, a view still sees data from 
> a column that was dropped
> --
>
> Key: PHOENIX-6032
> URL: https://issues.apache.org/jira/browse/PHOENIX-6032
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6032.master.v1.patch, 
> PHOENIX-6032.master.v2.patch, PHOENIX-6032.master.v3.patch, 
> PHOENIX-6032.master.v4.patch, PHOENIX-6032.master.v5.patch, 
> PHOENIX-6032.master.v6.patch
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- drop a parent column from the view
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> +--++--+--+-+--+
> |  C   | A  |  B   |  D   | VA  |  VB  |
> +--++--+--+-+--+
> | def  | 2  | 200  | -20  | 91  | 101  |
> +--++--+--+-+--+
> -- Column C can still be seen and its ordering is changed for some reason. If 
> you run the drop column again, it is actually dropped
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> ++--+--+-+--+
> | A  |  B   |  D   | VA  |  VB  |
> ++--+--+-+--+
> | 2  | 200  | -20  | 91  | 101  |
> ++--+--+-+--+
> -- Gets dropped when drop column is run a second time.
> {code}
> When splittable SYSTEM.CATALOG rollback is enabled, we store the parent's 
> column metadata along with the view as well. After the first drop column 
> command, metadata for column 'C' of the parent is removed from the view's 
> metadata rows; however, it is not marked diverged, nor is an EXCLUDED_COLUMN 
> entry made for that column in the view metadata rows.
> Because of this, when resolving the view we potentially keep combining the 
> parent table columns and still get column 'C'. Only when the second drop 
> column command is issued do we actually add an EXCLUDED_COLUMN linking row for 
> 'C' in the view metadata.
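> For debugging, the view's metadata rows can be inspected directly to see whether 
> an EXCLUDED_COLUMN link row for 'C' exists yet; a rough query sketch, not part 
> of the original repro:
> {code:sql}
> -- Per this bug, no EXCLUDED_COLUMN row for 'C' shows up after the first
> -- DROP COLUMN; it only appears after the second one.
> SELECT COLUMN_NAME, COLUMN_FAMILY, LINK_TYPE
> FROM SYSTEM.CATALOG
> WHERE TABLE_NAME = 'V' AND TENANT_ID IS NULL;
> {code}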



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5980) MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete mutations

2020-11-02 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-5980:
-

Assignee: (was: Chinmay Kulkarni)

> MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete 
> mutations
> -
>
> Key: PHOENIX-5980
> URL: https://issues.apache.org/jira/browse/PHOENIX-5980
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: metrics, phoenix-hardening, quality-improvement
> Fix For: 4.16.0
>
>
> In the conn.commit() path, we get the number of mutations that failed to be 
> committed in the catch block of MutationState.sendMutations() (see 
> [here|https://github.com/apache/phoenix/blob/dcc88af8acc2ba8df10d2e9d498ab3646fdf0a78/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1195-L1198]).
>  
> In case of delete mutations, the uncommittedStatementIndexes.length always 
> resolves to 1 and we always update the metric value by 1 in this case, even 
> though the actual mutation list corresponds to multiple DELETE mutations 
> which failed. In case of upserts, using uncommittedStatementIndexes.length is 
> fine since each upsert query corresponds to 1 Put. We should fix the logic 
> for deletes/mixed delete + upsert mutation batch failures.
> This wrong value is propagated to global client metrics as well as 
> MutationMetricQueue metrics.
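> A rough sketch of the intended accounting (illustrative names, not the actual 
> MutationState fields):
> {code:java}
> // Count the rows in the failed batch itself rather than the statement indexes.
> long failedRowCount = failedMutationBatch.size();   // one Put/Delete per failed row
> mutationBatchFailedSize.update(failedRowCount);     // hypothetical metric handle
> // ...instead of uncommittedStatementIndexes.length, which is 1 for a
> // multi-row DELETE and therefore under-counts the failure.
> {code}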



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5980) MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete mutations

2020-11-02 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-5980:
-

Assignee: Chinmay Kulkarni

> MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete 
> mutations
> -
>
> Key: PHOENIX-5980
> URL: https://issues.apache.org/jira/browse/PHOENIX-5980
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
>  Labels: metrics, phoenix-hardening, quality-improvement
> Fix For: 4.16.0
>
>
> In the conn.commit() path, we get the number of mutations that failed to be 
> committed in the catch block of MutationState.sendMutations() (see 
> [here|https://github.com/apache/phoenix/blob/dcc88af8acc2ba8df10d2e9d498ab3646fdf0a78/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1195-L1198]).
>  
> In case of delete mutations, the uncommittedStatementIndexes.length always 
> resolves to 1 and we always update the metric value by 1 in this case, even 
> though the actual mutation list corresponds to multiple DELETE mutations 
> which failed. In case of upserts, using uncommittedStatementIndexes.length is 
> fine since each upsert query corresponds to 1 Put. We should fix the logic 
> for deletes/mixed delete + upsert mutation batch failures.
> This wrong value is propagated to global client metrics as well as 
> MutationMetricQueue metrics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6032) When phoenix.allow.system.catalog.rollback=true, a view still sees data from a column that was dropped

2020-11-02 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6032:
--
Attachment: PHOENIX-6032.master.v5.patch

> When phoenix.allow.system.catalog.rollback=true, a view still sees data from 
> a column that was dropped
> --
>
> Key: PHOENIX-6032
> URL: https://issues.apache.org/jira/browse/PHOENIX-6032
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6032.master.v1.patch, 
> PHOENIX-6032.master.v2.patch, PHOENIX-6032.master.v3.patch, 
> PHOENIX-6032.master.v4.patch, PHOENIX-6032.master.v5.patch
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- drop a parent column from the view
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> +--++--+--+-+--+
> |  C   | A  |  B   |  D   | VA  |  VB  |
> +--++--+--+-+--+
> | def  | 2  | 200  | -20  | 91  | 101  |
> +--++--+--+-+--+
> -- Column C can still be seen and its ordering is changed for some reason. If 
> you run the drop column again, it is actually dropped
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> ++--+--+-+--+
> | A  |  B   |  D   | VA  |  VB  |
> ++--+--+-+--+
> | 2  | 200  | -20  | 91  | 101  |
> ++--+--+-+--+
> -- Gets dropped when drop column is run a second time.
> {code}
> When splittable SYSTEM.CATALOG rollback is enabled, we store the parent's 
> column metadata along with the view as well. After the first drop column 
> command, metadata for column 'C' of the parent is removed from the view's 
> metadata rows; however, it is not marked diverged, nor is an EXCLUDED_COLUMN 
> entry made for that column in the view metadata rows.
> Because of this, when resolving the view we potentially keep combining the 
> parent table columns and still get column 'C'. Only when the second drop 
> column command is issued do we actually add an EXCLUDED_COLUMN linking row for 
> 'C' in the view metadata.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-5945) TaskRegionObserver can kick off the same task multiple times if SYSTEM.TASK has split

2020-10-30 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni resolved PHOENIX-5945.
---
Resolution: Duplicate

> TaskRegionObserver can kick off the same task multiple times if SYSTEM.TASK 
> has split
> -
>
> Key: PHOENIX-5945
> URL: https://issues.apache.org/jira/browse/PHOENIX-5945
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.1, 4.17.0
>
>
> We don't specify a split policy for 
> [SYSTEM.TASK|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java#L381],
>  so by default it will be allowed to split. Now if SYSTEM.TASK spans multiple 
> regions, each region's 
> [postOpen|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TaskRegionObserver.java#L137]
>  schedules the SelfHealingTask at the specified interval and so [each region 
> will run a FTS on the 
> table|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TaskRegionObserver.java#L159]
>  and try to kick-off all the incomplete and non-failed tasks.
> This can lead to the same tasks being kicked off multiple times as a corner 
> race condition in spite of [this 
> check|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TaskRegionObserver.java#L187-L195]
>  (which is another FTS) and also lead to unnecessary extra load on the server.
> We do not explicitly outline that tasks need to be idempotent, so we should 
> handle this properly in the TaskRegionObserver so that each region is only 
> responsible for tasks lying within its boundaries.
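> A rough sketch of the boundary-aware variant (illustrative only; env is the 
> RegionCoprocessorEnvironment, and the HBase 1.x Scan API uses 
> setStartRow/setStopRow instead):
> {code:java}
> // Restrict the task lookup to rows owned by this region so that a split
> // SYSTEM.TASK does not make every region re-run every task.
> byte[] startKey = env.getRegion().getRegionInfo().getStartKey();
> byte[] endKey = env.getRegion().getRegionInfo().getEndKey();
> Scan scan = new Scan().withStartRow(startKey);
> if (endKey.length > 0) {
>     scan.withStopRow(endKey);  // an empty end key means this is the last region
> }
> // ...only tasks whose row keys fall in [startKey, endKey) are kicked off here
> {code}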



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6032) When phoenix.allow.system.catalog.rollback=true, a view still sees data from a column that was dropped

2020-10-30 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6032:
--
Attachment: PHOENIX-6032.master.v4.patch

> When phoenix.allow.system.catalog.rollback=true, a view still sees data from 
> a column that was dropped
> --
>
> Key: PHOENIX-6032
> URL: https://issues.apache.org/jira/browse/PHOENIX-6032
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6032.master.v1.patch, 
> PHOENIX-6032.master.v2.patch, PHOENIX-6032.master.v3.patch, 
> PHOENIX-6032.master.v4.patch
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- drop a parent column from the view
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> +--++--+--+-+--+
> |  C   | A  |  B   |  D   | VA  |  VB  |
> +--++--+--+-+--+
> | def  | 2  | 200  | -20  | 91  | 101  |
> +--++--+--+-+--+
> -- Column C can still be seen and its ordering is changed for some reason. If 
> you run the drop column again, it is actually dropped
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> ++--+--+-+--+
> | A  |  B   |  D   | VA  |  VB  |
> ++--+--+-+--+
> | 2  | 200  | -20  | 91  | 101  |
> ++--+--+-+--+
> -- Gets dropped when drop column is run a second time.
> {code}
> When splittable SYSTEM.CATALOG rollback is enabled, we store the parent's 
> column metadata along with the view as well. After the first drop column 
> command, metadata for column 'C' of the parent is removed from the view's 
> metadata rows; however, it is not marked diverged, nor is an EXCLUDED_COLUMN 
> entry made for that column in the view metadata rows.
> Because of this, when resolving the view we potentially keep combining the 
> parent table columns and still get column 'C'. Only when the second drop 
> column command is issued do we actually add an EXCLUDED_COLUMN linking row for 
> 'C' in the view metadata.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6030) When phoenix.allow.system.catalog.rollback=true, a view still sees data for columns that were dropped from its parent view

2020-10-30 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni resolved PHOENIX-6030.
---
Resolution: Fixed

> When phoenix.allow.system.catalog.rollback=true, a view still sees data for 
> columns that were dropped from its parent view
> --
>
> Key: PHOENIX-6030
> URL: https://issues.apache.org/jira/browse/PHOENIX-6030
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- below view can be either a tenant-specific view or a global view, as long 
> as its parent is V.
> CREATE VIEW V_t001 AS SELECT * FROM V;
> ALTER VIEW V DROP COLUMN VA;
> SELECT * FROM V;
> ++--+--+--+--+
> | A  |  B   |  C   |  D   |  VB  |
> ++--+--+--+--+
> | 2  | 200  | def  | -20  | 101  |
> ++--+--+--+--+
> -- We shouldn't see VA below since it was dropped from the parent
> SELECT * FROM V_T001;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> {code}
> If rollback is enabled, we prevent adding/dropping a column to/from a *table* 
> that has child views (see 
> [this|https://github.com/apache/phoenix/blob/2fcb8541c9dd7317e62239bd208ff4377ba794e2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2602-L2615]).
>  However, we don't prevent adding/dropping columns to/from a *view* that has 
> child views (see 
> [here|https://github.com/apache/phoenix/blob/2fcb8541c9dd7317e62239bd208ff4377ba794e2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2591]).
>  Either we should also prevent column mutations in case of views that have 
> children or make sure that dropped columns don't show up when querying a 
> child view.
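> As a rough aid for the first option, the parent->child link rows can show whether 
> V has children before allowing a column mutation (a sketch; the exact layout of 
> the link rows may differ by version):
> {code:sql}
> -- Any rows returned here mean V has child views, so the column mutation on V
> -- would need to be blocked (or the children kept consistent).
> SELECT COLUMN_NAME, COLUMN_FAMILY, LINK_TYPE
> FROM SYSTEM.CHILD_LINK
> WHERE TABLE_NAME = 'V' AND TENANT_ID IS NULL;
> {code}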



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6032) When phoenix.allow.system.catalog.rollback=true, a view still sees data from a column that was dropped

2020-10-30 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6032:
--
Attachment: PHOENIX-6032.master.v3.patch

> When phoenix.allow.system.catalog.rollback=true, a view still sees data from 
> a column that was dropped
> --
>
> Key: PHOENIX-6032
> URL: https://issues.apache.org/jira/browse/PHOENIX-6032
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6032.master.v1.patch, 
> PHOENIX-6032.master.v2.patch, PHOENIX-6032.master.v3.patch
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- drop a parent column from the view
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> +--++--+--+-+--+
> |  C   | A  |  B   |  D   | VA  |  VB  |
> +--++--+--+-+--+
> | def  | 2  | 200  | -20  | 91  | 101  |
> +--++--+--+-+--+
> -- Column C can still be seen and its ordering is changed for some reason. If 
> you run the drop column again, it is actually dropped
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> ++--+--+-+--+
> | A  |  B   |  D   | VA  |  VB  |
> ++--+--+-+--+
> | 2  | 200  | -20  | 91  | 101  |
> ++--+--+-+--+
> -- Gets dropped when drop column is run a second time.
> {code}
> When splittable SYSTEM.CATALOG rollback is enabled, we store the parent's 
> column metadata along with the view as well. After the first drop column 
> command, metadata for column 'C' of the parent is removed from the view's 
> metadata rows; however, it is not marked diverged, nor is an EXCLUDED_COLUMN 
> entry made for that column in the view metadata rows.
> Because of this, when resolving the view we potentially keep combining the 
> parent table columns and still get column 'C'. Only when the second drop 
> column command is issued do we actually add an EXCLUDED_COLUMN linking row for 
> 'C' in the view metadata.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6032) When phoenix.allow.system.catalog.rollback=true, a view still sees data from a column that was dropped

2020-10-29 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6032:
--
Attachment: PHOENIX-6032.master.v2.patch

> When phoenix.allow.system.catalog.rollback=true, a view still sees data from 
> a column that was dropped
> --
>
> Key: PHOENIX-6032
> URL: https://issues.apache.org/jira/browse/PHOENIX-6032
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6032.master.v1.patch, 
> PHOENIX-6032.master.v2.patch
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- drop a parent column from the view
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> +--++--+--+-+--+
> |  C   | A  |  B   |  D   | VA  |  VB  |
> +--++--+--+-+--+
> | def  | 2  | 200  | -20  | 91  | 101  |
> +--++--+--+-+--+
> -- Column C can still be seen and its ordering is changed for some reason. If 
> you run the drop column again, it is actually dropped
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> ++--+--+-+--+
> | A  |  B   |  D   | VA  |  VB  |
> ++--+--+-+--+
> | 2  | 200  | -20  | 91  | 101  |
> ++--+--+-+--+
> -- Gets dropped when drop column is run a second time.
> {code}
> When splittable SYSTEM.CATALOG rollback is enabled, we store the parent's 
> column metadata along with the view as well. After the first drop column 
> command, metadata for column 'C' of the parent is removed from the view's 
> metadata rows; however, it is not marked diverged, nor is an EXCLUDED_COLUMN 
> entry made for that column in the view metadata rows.
> Because of this, when resolving the view we potentially keep combining the 
> parent table columns and still get column 'C'. Only when the second drop 
> column command is issued do we actually add an EXCLUDED_COLUMN linking row for 
> 'C' in the view metadata.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5998) Paged server side ungrouped aggregate operations

2020-10-28 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5998:
--
Fix Version/s: (was: 4.x)
   4.16.0

> Paged server side ungrouped aggregate operations 
> -
>
> Key: PHOENIX-5998
> URL: https://issues.apache.org/jira/browse/PHOENIX-5998
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5998.4.x.001.patch, PHOENIX-5998.4.x.002.patch, 
> PHOENIX-5998.4.x.003.patch
>
>
> Phoenix provides the option of performing upsert select and delete query 
> operations on the client or server side.  This is decided by the Phoenix 
> optimizer based on configuration parameters. For the server side option, the 
> table operation (upsert select/delete query) is parallelized such that 
> multiple table regions are scanned and the mutations derived from these scans 
> can also be executed in parallel on the server side. However, currently there 
> is no paging capability and the server side operation can take long enough to 
> lead to HBase client timeouts. When this happens, Phoenix can return failure 
> to its applications and the rest of the parallel scans and mutations on the 
> server side can still continue since  Phoenix has no mechanism in place to 
> stop these operations before returning failure to applications. This can 
> create unexpected race conditions between these left-over operations and the 
> new operations issued by applications. Putting a limit on the number of rows 
> to be processed within a single RPC call (i.e., the next operation on the 
> scanner) on the server side using a Phoenix level paging is highly desirable 
> and a required step to prevent the possible race conditions. This paging 
> mechanism has been already implemented for index rebuild and verification 
> operations and proven to be effective to prevent timeouts. This paging can be 
> implemented for all server side operations including aggregates, upsert 
> selects, delete queries and so on.
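> A minimal sketch of the paging idea at the region-scanner level (illustrative 
> only; pageSizeInRows and applyMutationsFor are placeholders, and the real 
> implementation threads this through the Phoenix coprocessors):
> {code:java}
> // Process at most pageSizeInRows rows per client round trip so the region
> // server returns before HBase RPC/scanner timeouts can fire.
> List<Cell> cells = new ArrayList<>();
> long rowsProcessed = 0;
> boolean hasMore = true;
> while (hasMore && rowsProcessed < pageSizeInRows) {
>     cells.clear();
>     hasMore = regionScanner.next(cells);  // read the next row
>     applyMutationsFor(cells);             // placeholder: upsert-select / delete work for this row
>     rowsProcessed++;
> }
> // hasMore tells the client whether to ask for another page
> {code}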



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6030) When phoenix.allow.system.catalog.rollback=true, a view still sees data for columns that were dropped from its parent view

2020-10-28 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6030:
-

Assignee: Chinmay Kulkarni

> When phoenix.allow.system.catalog.rollback=true, a view still sees data for 
> columns that were dropped from its parent view
> --
>
> Key: PHOENIX-6030
> URL: https://issues.apache.org/jira/browse/PHOENIX-6030
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- below view can be either a tenant-specific view or a global view, as long 
> as its parent is V.
> CREATE VIEW V_t001 AS SELECT * FROM V;
> ALTER VIEW V DROP COLUMN VA;
> SELECT * FROM V;
> ++--+--+--+--+
> | A  |  B   |  C   |  D   |  VB  |
> ++--+--+--+--+
> | 2  | 200  | def  | -20  | 101  |
> ++--+--+--+--+
> -- We shouldn't see VA below since it was dropped from the parent
> SELECT * FROM V_T001;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> {code}
> If rollback is enabled, we prevent adding/dropping a column to/from a *table* 
> that has child views (see 
> [this|https://github.com/apache/phoenix/blob/2fcb8541c9dd7317e62239bd208ff4377ba794e2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2602-L2615]).
>  However, we don't prevent adding/dropping columns to/from a *view* that has 
> child views (see 
> [here|https://github.com/apache/phoenix/blob/2fcb8541c9dd7317e62239bd208ff4377ba794e2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2591]).
>  Either we should also prevent column mutations in case of views that have 
> children or make sure that dropped columns don't show up when querying a 
> child view.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6032) When phoenix.allow.system.catalog.rollback=true, a view still sees data from a column that was dropped

2020-10-28 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6032:
-

Assignee: Chinmay Kulkarni

> When phoenix.allow.system.catalog.rollback=true, a view still sees data from 
> a column that was dropped
> --
>
> Key: PHOENIX-6032
> URL: https://issues.apache.org/jira/browse/PHOENIX-6032
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> Start a 4.x server with phoenix.allow.system.catalog.rollback=true, 
> phoenix.system.catalog.splittable=false. Connect to it from a 4.x client with 
> phoenix.allow.system.catalog.rollback=true. Run the following from the 4.x 
> client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> SELECT * FROM T;
> ++--+--+--+
> | A  |  B   |  C   |  D   |
> ++--+--+--+
> | 2  | 200  | def  | -20  |
> ++--+--+--+
> SELECT * FROM V;
> ++--+--+--+-+--+
> | A  |  B   |  C   |  D   | VA  |  VB  |
> ++--+--+--+-+--+
> | 2  | 200  | def  | -20  | 91  | 101  |
> ++--+--+--+-+--+
> -- as expected
> -- drop a parent column from the view
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> +--++--+--+-+--+
> |  C   | A  |  B   |  D   | VA  |  VB  |
> +--++--+--+-+--+
> | def  | 2  | 200  | -20  | 91  | 101  |
> +--++--+--+-+--+
> -- Column C can still be seen and its ordering is changed for some reason. If 
> you run the drop column again, it is actually dropped
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> ++--+--+-+--+
> | A  |  B   |  D   | VA  |  VB  |
> ++--+--+-+--+
> | 2  | 200  | -20  | 91  | 101  |
> ++--+--+-+--+
> -- Gets dropped when drop column is run a second time.
> {code}
> When splittable SYSTEM.CATALOG rollback is enabled, we store the parent's 
> column metadata along with the view as well. After the first drop column 
> command, metadata for column 'C' of the parent is removed from the view's 
> metadata rows; however, it is not marked diverged, nor is an EXCLUDED_COLUMN 
> entry made for that column in the view metadata rows.
> Because of this, when resolving the view we potentially keep combining the 
> parent table columns and still get column 'C'. Only when the second drop 
> column command is issued do we actually add an EXCLUDED_COLUMN linking row for 
> 'C' in the view metadata.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6127) Prevent unnecessary HBase admin API calls in ViewUtil.getSystemTableForChildLinks() and act lazily instead

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6127:
-

Assignee: Richard Antal

> Prevent unnecessary HBase admin API calls in 
> ViewUtil.getSystemTableForChildLinks() and act lazily instead
> --
>
> Key: PHOENIX-6127
> URL: https://issues.apache.org/jira/browse/PHOENIX-6127
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Richard Antal
>Priority: Major
>  Labels: phoenix-hardening, quality-improvement
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6127.master.v1.patch
>
>
> In order to handle the case of older clients connecting to a 4.16 cluster 
> that has old metadata (no SYSTEM.CHILD_LINK table yet), we call 
> ViewUtil.getSystemTableForChildLinks() to figure out whether to use 
> SYSTEM.CHILD_LINK or SYSTEM.CATALOG to look up parent->child linking rows.
> Here we do HBase table existence checks using HBase admin APIs (see 
> [this|https://github.com/apache/phoenix/blob/e3c7b4bdce2524eb4fd1e7eb0ccd3454fcca81ce/phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java#L265-L269])
>  which can be avoided. In almost all cases once we've called this API, we 
> later go on and retrieve the Table object anyhow, so we can instead try to 
> always get the SYSTEM.CHILD_LINK table and if that fails, try to get 
> SYSTEM.CATALOG. This will avoid additional admin API calls.
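> A rough sketch of the lazy fallback (hypothetical helper, not the actual ViewUtil 
> code; assumes non-namespace-mapped system table names and the HBase client 
> Connection/Table API):
> {code:java}
> Result getChildLinkRow(Connection hbaseConn, Get get) throws IOException {
>     // Try SYSTEM.CHILD_LINK first; a missing table only surfaces on the first
>     // real access, so no separate admin existence check is needed.
>     try (Table childLink = hbaseConn.getTable(TableName.valueOf("SYSTEM.CHILD_LINK"))) {
>         return childLink.get(get);
>     } catch (TableNotFoundException e) {
>         // Older metadata: parent->child links still live in SYSTEM.CATALOG.
>         try (Table sysCat = hbaseConn.getTable(TableName.valueOf("SYSTEM.CATALOG"))) {
>             return sysCat.get(get);
>         }
>     }
> }
> {code}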



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6082) No need to do checkAndPut when altering properties for a table or view with column-encoding enabled

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6082:
--
Fix Version/s: (was: 4.16.0)
   4.17.0
   4.16.1

> No need to do checkAndPut when altering properties for a table or view with 
> column-encoding enabled
> ---
>
> Key: PHOENIX-6082
> URL: https://issues.apache.org/jira/browse/PHOENIX-6082
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: performance, phoenix-hardening, quality-improvement
> Fix For: 5.1.0, 4.16.1, 4.17.0
>
>
> ALTER TABLE/VIEW SET <properties> follows the same code path as an add column. 
> Thus, when column-encoding is enabled on the physical table, we will do a 
> checkAndPut (see 
> [this|https://github.com/apache/phoenix/blob/4ddbe2688b78645bc73857141cec12cb1c08993b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L3940-L3947]).
> This makes sense when we are adding a column since this causes an update to 
> the encoded column qualifier counter of the base table and we want to prevent 
> any concurrent changes to this field. However, when setting properties, we 
> don't update the column qualifier counter so this extra checkAndPut is 
> unnecessary. The server-side [write-lock on the table header 
> row|https://github.com/apache/phoenix/blob/4ddbe2688b78645bc73857141cec12cb1c08993b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2619]
>  followed by a [sequence number 
> check|https://github.com/apache/phoenix/blob/4ddbe2688b78645bc73857141cec12cb1c08993b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2684-L2691]
>  should be sufficient and is already done.
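> A rough HBase-level illustration of the difference (headerRowKey, family and the 
> qualifier/value names are placeholders, not the actual MetaDataClient code):
> {code:java}
> // ADD COLUMN: guard the encoded column qualifier counter so two concurrent
> // ADD COLUMN calls cannot both claim the same qualifier.
> boolean counterUnchanged = sysCatTable.checkAndPut(headerRowKey, family,
>     columnQualifierCounterQualifier, expectedCounterValue, putForAddColumn);
> // ALTER ... SET: no counter is touched, so a plain put under the existing
> // server-side row lock and sequence number check is sufficient.
> sysCatTable.put(putForSetProperty);
> {code}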



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6033) Unable to add back a parent column that was earlier dropped from a view

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6033:
--
Fix Version/s: 4.17.0
   4.16.1

> Unable to add back a parent column that was earlier dropped from a view
> ---
>
> Key: PHOENIX-6033
> URL: https://issues.apache.org/jira/browse/PHOENIX-6033
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0, 4.16.1, 4.17.0
>
>
> In 4.14.3, we allowed adding a column (with the same name as a column 
> inherited from the parent) back to a view, which was dropped in the past. In 
> 4.x this is no longer allowed.
> Start 4.x server and run the following with a 4.x client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D 
> INTEGER);
> -- create view
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE 
> B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> ++--+--+-+--+
> | A  |  B   |  D   | VA  |  VB  |
> ++--+--+-+--+
> | 2  | 200  | -20  | 91  | 101  |
> ++--+--+-+--+
> ALTER VIEW C ADD C VARCHAR;
> -- The above add column step throws an error. It used to work before 4.15.
> {code}
> The stack trace for the error thrown is:
> {code:java}
> Error: ERROR 1012 (42M03): Table undefined. tableName=C 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=C
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:777)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:442)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:434)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:425)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:277)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3627)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1488)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:415)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:397)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:396)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:384)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1886)
>   at sqlline.Commands.execute(Commands.java:814)
>   at sqlline.Commands.sql(Commands.java:754)
>   at sqlline.SqlLine.dispatch(SqlLine.java:646)
>   at sqlline.SqlLine.begin(SqlLine.java:510)
>   at sqlline.SqlLine.start(SqlLine.java:233)
>   at sqlline.SqlLine.main(SqlLine.java:175)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5945) TaskRegionObserver can kick off the same task multiple times if SYSTEM.TASK has split

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5945:
--
Fix Version/s: (was: 4.16.0)
   4.17.0
   4.16.1

> TaskRegionObserver can kick off the same task multiple times if SYSTEM.TASK 
> has split
> -
>
> Key: PHOENIX-5945
> URL: https://issues.apache.org/jira/browse/PHOENIX-5945
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.1, 4.17.0
>
>
> We don't specify a split policy for 
> [SYSTEM.TASK|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java#L381],
>  so by default it will be allowed to split. Now if SYSTEM.TASK spans multiple 
> regions, each region's 
> [postOpen|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TaskRegionObserver.java#L137]
>  schedules the SelfHealingTask at the specified interval and so [each region 
> will run a FTS on the 
> table|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TaskRegionObserver.java#L159]
>  and try to kick-off all the incomplete and non-failed tasks.
> This can lead to the same tasks being kicked off multiple times as a corner 
> race condition in spite of [this 
> check|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TaskRegionObserver.java#L187-L195]
>  (which is another FTS) and also lead to unnecessary extra load on the server.
> We do not explicitly outline that tasks need to be idempotent, so we should 
> handle this properly in the TaskRegionObserver so that each region is only 
> responsible for tasks lying within its boundaries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5944) Modify the PK of SYSTEM.TASK to avoid full table scans

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5944:
--
Fix Version/s: (was: 4.16.0)
   4.17.0
   4.16.1

> Modify the PK of SYSTEM.TASK to avoid full table scans
> --
>
> Key: PHOENIX-5944
> URL: https://issues.apache.org/jira/browse/PHOENIX-5944
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.1, 4.17.0
>
>
> The PK of SYSTEM.TASK is (TASK_TYPE, TASK_TS, TENANT_ID, TABLE_SCHEM, 
> TABLE_NAME) and the Task.queryTaskTable methods 
> [1|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/schema/task/Task.java#L181]
>  and 
> [2|https://github.com/apache/phoenix/blob/5f9364db7e4925229704706e148e62f4cf4ec4c2/phoenix-core/src/main/java/org/apache/phoenix/schema/task/Task.java#L226]
>  do a full table scan as mentioned.
> Can we reorder/modify the PK to switch this to a range scan instead of a FTS? 
> Let's discuss if this is possible. Based on PHOENIX-5943, this change may be 
> to either or both of SYSTEM.TASK_QUEUE and SYSTEM.TASK_HISTORY.
> Note that since this is a PK change, we need to be careful when handling this 
> in the upgrade path.
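
A rough illustration of the range-scan goal (not the actual Task.queryTaskTable code): assuming a hypothetical reordered PK such as (TENANT_ID, TABLE_SCHEM, TABLE_NAME, TASK_TYPE, TASK_TS), the lookup below would become a PK-prefix range scan, whereas today's key order forces a full scan of SYSTEM.TASK:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TaskLookupSketch {
    public static void main(String[] args) throws Exception {
        // EXPLAIN shows whether the predicate is served by a RANGE SCAN or a FULL SCAN.
        // With the current PK (TASK_TYPE, TASK_TS, ...) this lookup cannot use a key
        // prefix; with the hypothetical reordering above it could.
        String sql = "EXPLAIN SELECT TASK_TYPE, TASK_TS FROM SYSTEM.TASK "
                   + "WHERE TENANT_ID IS NULL AND TABLE_SCHEM = 'S' AND TABLE_NAME = 'T'";
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // one plan line per row
            }
        }
    }
}
{code}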



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5498) When dropping a view, send delete mutations for parent->child links from client to server rather than doing server-server RPCs

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5498:
--
Fix Version/s: (was: 4.16.0)
   (was: 4.15.1)
   4.17.0
   4.16.1

> When dropping a view, send delete mutations for parent->child links from 
> client to server rather than doing server-server RPCs
> --
>
> Key: PHOENIX-5498
> URL: https://issues.apache.org/jira/browse/PHOENIX-5498
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.1, 4.16.1, 4.17.0
>
>
> Once we are able to generate delete mutations using the child view and parent 
> PTable, we should send the mutations directly from the client to the endpoint 
> coprocessor on SYSTEM.CHILD_LINK rather than doing a server-server RPC from 
> the SYSTEM.CATALOG region to the SYSTEM.CHILD_LINK region.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5586) Add documentation for Splittable SYSTEM.CATALOG

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5586:
--
Priority: Blocker  (was: Major)

> Add documentation for Splittable SYSTEM.CATALOG
> ---
>
> Key: PHOENIX-5586
> URL: https://issues.apache.org/jira/browse/PHOENIX-5586
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 4.15.1, 5.1.1, 4.16.0
>
>
> There are many changes after PHOENIX-3534, especially for backwards 
> compatibility. There are additional configurations such as 
> "phoenix.allow.system.catalog.rollback" which allows rollback of splittable 
> SYSTEM.CATALOG, etc. We should document these changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5497) When dropping a view, use the PTable for generating delete mutations for links rather than scanning SYSTEM.CATALOG

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5497:
--
Fix Version/s: (was: 4.16.0)
   4.17.0
   4.16.1

> When dropping a view, use the PTable for generating delete mutations for 
> links rather than scanning SYSTEM.CATALOG
> --
>
> Key: PHOENIX-5497
> URL: https://issues.apache.org/jira/browse/PHOENIX-5497
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.1, 5.1.1, 4.16.1, 4.17.0
>
>
> When dropping a view, we should generate the delete markers for the 
> parent->child links using the view and parent's PTable rather than by issuing 
> a scan on SYSTEM.CATALOG (see 
> [this|https://github.com/apache/phoenix/blob/207ab526ee511a19ac287f61fbd2cef268c5038d/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2310]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6154) Move the check for existence of child views and task addition to drop those child views to the client side when dropping a table/view

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6154:
--
Fix Version/s: (was: 4.16.0)
   4.16.1

> Move the check for existence of child views and task addition to drop those 
> child views to the client side when dropping a table/view
> -
>
> Key: PHOENIX-6154
> URL: https://issues.apache.org/jira/browse/PHOENIX-6154
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.1
>
>
> When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has 
> child views (and {{CASCADE}} is provided), we add a 
> {{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
>  in the {{SYSTEM.TASK}} table (see 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
>  This means that *while holding the row lock* for the table/view’s header row 
> ([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
>  we do the following:
>  # Make an 
> [RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find 
> child views.
>  # If any child views are found in the step above, we make additional RPCs to 
> the region hosting {{SYSTEM.TASK}} to 
> {{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
>  a {{DropChildViewsTask}} for immediate child views.
>  # We [send remote 
> mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
>  to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.
> Of the above extra RPCs, note that even if the table/view has no child views 
> or if {{CASCADE}} is not provided, we will still do the first RPC from the 
> server while holding a row lock.
> We should move this check to the client (issue a scan against 
> SYSTEM.CHILD_LINK to see if a single linking row exists) and also add the 
> task from the client.
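
A hedged sketch of the proposed client-side existence check over Phoenix JDBC; it assumes SYSTEM.CHILD_LINK exposes the same key columns as SYSTEM.CATALOG and that LINK_TYPE 4 marks parent->child links, which is an assumption here rather than a documented contract:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public final class ChildViewExistenceCheck {

    /** Returns true if at least one parent->child linking row exists for the table. */
    static boolean hasChildViews(Connection conn, String schema, String table) throws Exception {
        // LIMIT 1: we only need to know whether a single linking row exists, so the
        // scan can stop at the first match instead of reading every link row.
        String sql = "SELECT 1 FROM SYSTEM.CHILD_LINK "
                   + "WHERE TABLE_SCHEM = ? AND TABLE_NAME = ? AND LINK_TYPE = 4 LIMIT 1";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, schema);
            stmt.setString(2, table);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            System.out.println(hasChildViews(conn, "MY_SCHEMA", "MY_TABLE"));
        }
    }
}
{code}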



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5404:
--
Fix Version/s: (was: 4.16.0)
   4.16.1

> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.1
>
>
> Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetdataEndpointImpl.createTable
> While creating a table or view we need to ensure that there are no child views 
> that haven't been cleaned up by the DropChildViewsTask yet. Move this check to 
> the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
> row exists).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6182) IndexTool to verify and repair every index row

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6182:
--
Fix Version/s: 4.16.1

> IndexTool to verify and repair every index row
> --
>
> Key: PHOENIX-6182
> URL: https://issues.apache.org/jira/browse/PHOENIX-6182
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Kadir OZDEMIR
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 4.16.1
>
>
> IndexTool rebuilds and verifies every index row pointed to by the data table. 
> However, IndexTool cannot clean up index rows that are not referenced by 
> the data table if there are such index rows. In order to do that, it needs to 
> scan index table regions and make sure that every index row is valid. For 
> example, we can add an option called source table (as in IndexScrutinyTool) to 
> repair and verify index rows.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5877) Add indexing support in backward compat test framework

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5877:
--
Fix Version/s: 4.17.0

> Add indexing support in backward compat test framework
> --
>
> Key: PHOENIX-5877
> URL: https://issues.apache.org/jira/browse/PHOENIX-5877
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Sandeep Guggilam
>Priority: Major
> Fix For: 4.17.0
>
>
> We need to add the indexing support as part of backward compatibility test 
> framework introduced as part of PHOENIX-5607 to catch any indexing related 
> bugs when an old client connects to an updated server



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5882) Automate Release Candidate sign-off process

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5882:
--
Fix Version/s: 4.16.0

> Automate Release Candidate sign-off process
> ---
>
> Key: PHOENIX-5882
> URL: https://issues.apache.org/jira/browse/PHOENIX-5882
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: quality-improvement
> Fix For: 4.16.0
>
>
> HBase has 
> https://github.com/apache/hbase/blob/master/dev-support/hbase-vote.sh which 
> runs basic things that need to be verified when checking the validity of an 
> RC. We should incorporate this in Phoenix as well. This will help standardize 
> what is expected in each release (rather than it being dependent on the 
> voter) and will potentially save cycles for the RM when they carry out the RC 
> votes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5954) LocalImmutableNonTxIndexIT is a flapper

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5954:
--
Fix Version/s: 4.16.0

> LocalImmutableNonTxIndexIT is a flapper
> ---
>
> Key: PHOENIX-5954
> URL: https://issues.apache.org/jira/browse/PHOENIX-5954
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.16.0
>
>
> The following error is frequently seen in Phoenix Jenkins runs:
> 17:40:24 [ERROR]   
> LocalImmutableNonTxIndexIT>BaseIndexIT.testCreateIndexAfterUpsertStarted:263->BaseIndexIT.testCreateIndexAfterUpsertStarted:338
>  expected:<4> but was:<3>
> 17:40:24 [ERROR]   
> LocalImmutableNonTxIndexIT>BaseIndexIT.testCreateIndexAfterUpsertStarted:263->BaseIndexIT.testCreateIndexAfterUpsertStarted:338
>  expected:<4> but was:<3>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5955) OrphanViewToolIT is flapping

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5955:
--
Fix Version/s: 4.16.0

> OrphanViewToolIT is flapping
> 
>
> Key: PHOENIX-5955
> URL: https://issues.apache.org/jira/browse/PHOENIX-5955
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.16.0
>
>
> In several Jenkins runs over the past few days, we've been getting failures 
> from the OrphanViewToolIT suite, which I believe had been well-behaved until 
> recently.
> {code:java}
> 17:40:24 [ERROR]   OrphanViewToolIT.testDeleteBaseTableRows:279
> 17:40:24 [ERROR]   OrphanViewToolIT.testDeleteBaseTableRows:279
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteChildParentLinkRows:402->verifyOrphanFileLineCounts:255->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteChildParentLinkRows:402->verifyOrphanFileLineCounts:255->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteChildViewRows:315->verifyOrphanFileLineCounts:255->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteChildViewRows:315->verifyOrphanFileLineCounts:255->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteGrandchildViewRows:344->verifyOrphanFileLineCounts:256->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteGrandchildViewRows:344->verifyOrphanFileLineCounts:255->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteParentChildLinkRows:374->verifyOrphanFileLineCounts:255->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeleteParentChildLinkRows:374->verifyOrphanFileLineCounts:255->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeletePhysicalTableLinks:429->verifyLineCount:209
> 17:40:24 [ERROR]   
> OrphanViewToolIT.testDeletePhysicalTableLinks:424->verifyCountQuery:218
> 17:40:24 [ERROR]   
> LocalImmutableNonTxIndexIT>BaseIndexIT.testCreateIndexAfterUpsertStarted:2
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6047) Release phoenix-queryserver jars compatible with 4.15 and create a workflow for future 4.x releases

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6047:
--
Fix Version/s: queryserver-6.0.0

> Release phoenix-queryserver jars compatible with 4.15 and create a workflow 
> for future 4.x releases
> ---
>
> Key: PHOENIX-6047
> URL: https://issues.apache.org/jira/browse/PHOENIX-6047
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Assignee: Istvan Toth
>Priority: Major
> Fix For: queryserver-6.0.0
>
>
> We should create a Jenkins build for the phoenix-queryserver repository, 
> release the first queryserver jars which depend on 4.15 and put in place a 
> workflow for future 4.x based releases of the queryserver, which includes 
> updating release documentation, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6081) Improvements to snapshot based MR input format

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6081:
--
Affects Version/s: (was: master)
   5.0.0

> Improvements to snapshot based MR input format
> --
>
> Key: PHOENIX-6081
> URL: https://issues.apache.org/jira/browse/PHOENIX-6081
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Bharath Vissapragada
>Priority: Major
>
> Recently we switched an MR application from scanning live tables to scanning 
> snapshots (PHOENIX-3744). We ran into a severe performance issue, which 
> turned out to be a correctness issue due to overlapping scan split generation. 
> After some debugging we figured out that it had been fixed via PHOENIX-4997. Even 
> with that fix there are quite a few things we could improve about the 
> snapshot based input format. Listing them here, perhaps we can break them 
> into subtasks as needed.
> - Do not restore the snapshot per map task. Currently we restore the snapshot 
> once per map task into a temp directory. For large tables on big clusters, 
> this creates a storm of NN RPCs. We can do this once per job and let all the 
> map tasks operate on the same restored snapshot. HBase already did this via 
> HBASE-18806, we can do something similar.
> - Disable 
> [cacheBlocks|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setCacheBlocks-boolean-]
>  on scans generated by input format. In our experiments block cache took a 
> lot of memory for MR jobs. For full table scans this isn't of much use and 
> can save a lot of memory.
> - Short circuit live-table codepaths when snapshots are enabled. Currently 
> some codepaths make live table HBase RPCs to get a bunch of data. For example
> {noformat}
> private List<InputSplit> generateSplits(final QueryPlan qplan, Configuration 
> config) throws IOException {
> // We must call this in order to initialize the scans and splits from the 
> query plan
>   
> // Get the RegionSizeCalculator
> try(org.apache.hadoop.hbase.client.Connection connection =
> 
> HBaseFactoryProvider.getHConnectionFactory().createConnection(config)) {
> RegionLocator regionLocator = 
> connection.getRegionLocator(TableName.valueOf(tableName));
> RegionSizeCalculator sizeCalculator = new RegionSizeCalculator(regionLocator, 
> connection
> .getAdmin()); {noformat}
> This defeats the purpose of using snapshots. Refactor the code in a way that 
> the snapshot based codepaths do minimal HBase RPCs and rely solely on 
> snapshot manifest. Even better, improve locality of task scheduling based on 
> snapshot's hfile block locations.
> - Disable indexes for query plan for scanning over snapshots. If there is an 
> index based access path, getScans() can potentially return index based splits 
> which is not what we want for snapshots.
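
For the cacheBlocks item in the list above, a minimal sketch of what the input format could do to each generated scan; this is illustrative, not the actual PhoenixInputFormat change:
{code:java}
import org.apache.hadoop.hbase.client.Scan;

public final class SnapshotScanTuning {

    /**
     * One-shot full scans from MR tasks get little benefit from cached blocks, and
     * caching them costs a lot of memory, so turn block caching off on every scan
     * the input format generates before handing it to the record reader.
     */
    static Scan tuneForSnapshotJob(Scan scan) {
        scan.setCacheBlocks(false);
        return scan;
    }
}
{code}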



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6081) Improvements to snapshot based MR input format

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6081:
--
Fix Version/s: 4.16.1

> Improvements to snapshot based MR input format
> --
>
> Key: PHOENIX-6081
> URL: https://issues.apache.org/jira/browse/PHOENIX-6081
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Bharath Vissapragada
>Priority: Major
> Fix For: 4.16.1
>
>
> Recently we switched an MR application from scanning live tables to scanning 
> snapshots (PHOENIX-3744). We ran into a severe performance issue, which 
> turned out to be a correctness issue due to overlapping scan split generation. 
> After some debugging we figured out that it had been fixed via PHOENIX-4997. Even 
> with that fix there are quite a few things we could improve about the 
> snapshot based input format. Listing them here, perhaps we can break them 
> into subtasks as needed.
> - Do not restore the snapshot per map task. Currently we restore the snapshot 
> once per map task into a temp directory. For large tables on big clusters, 
> this creates a storm of NN RPCs. We can do this once per job and let all the 
> map tasks operate on the same restored snapshot. HBase already did this via 
> HBASE-18806, we can do something similar.
> - Disable 
> [cacheBlocks|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setCacheBlocks-boolean-]
>  on scans generated by input format. In our experiments block cache took a 
> lot of memory for MR jobs. For full table scans this isn't of much use and 
> can save a lot of memory.
> - Short circuit live-table codepaths when snapshots are enabled. Currently 
> some codepaths make live table HBase RPCs to get a bunch of data. For example
> {noformat}
> private List<InputSplit> generateSplits(final QueryPlan qplan, Configuration 
> config) throws IOException {
> // We must call this in order to initialize the scans and splits from the 
> query plan
>   
> // Get the RegionSizeCalculator
> try(org.apache.hadoop.hbase.client.Connection connection =
> 
> HBaseFactoryProvider.getHConnectionFactory().createConnection(config)) {
> RegionLocator regionLocator = 
> connection.getRegionLocator(TableName.valueOf(tableName));
> RegionSizeCalculator sizeCalculator = new RegionSizeCalculator(regionLocator, 
> connection
> .getAdmin()); {noformat}
> This defeats the purpose of using snapshots. Refactor the code in a way that 
> the snapshot based codepaths do minimal HBase RPCs and rely solely on 
> snapshot manifest. Even better, improve locality of task scheduling based on 
> snapshot's hfile block locations.
> - Disable indexes for query plan for scanning over snapshots. If there is an 
> index based access path, getScans() can potentially return index based splits 
> which is not what we want for snapshots.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6081) Improvements to snapshot based MR input format

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6081:
--
Fix Version/s: 4.17.0

> Improvements to snapshot based MR input format
> --
>
> Key: PHOENIX-6081
> URL: https://issues.apache.org/jira/browse/PHOENIX-6081
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Bharath Vissapragada
>Priority: Major
> Fix For: 4.16.1, 4.17.0
>
>
> Recently we switched an MR application from scanning live tables to scanning 
> snapshots (PHOENIX-3744). We ran into a severe performance issue, which 
> turned out to be a correctness issue due to overlapping scan split generation. 
> After some debugging we figured out that it had been fixed via PHOENIX-4997. Even 
> with that fix there are quite a few things we could improve about the 
> snapshot based input format. Listing them here, perhaps we can break them 
> into subtasks as needed.
> - Do not restore the snapshot per map task. Currently we restore the snapshot 
> once per map task into a temp directory. For large tables on big clusters, 
> this creates a storm of NN RPCs. We can do this once per job and let all the 
> map tasks operate on the same restored snapshot. HBase already did this via 
> HBASE-18806, we can do something similar.
> - Disable 
> [cacheBlocks|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setCacheBlocks-boolean-]
>  on scans generated by input format. In our experiments block cache took a 
> lot of memory for MR jobs. For full table scans this isn't of much use and 
> can save a lot of memory.
> - Short circuit live-table codepaths when snapshots are enabled. Currently 
> some codepaths make live table HBase RPCs to get a bunch of data. For example
> {noformat}
> private List<InputSplit> generateSplits(final QueryPlan qplan, Configuration 
> config) throws IOException {
> // We must call this in order to initialize the scans and splits from the 
> query plan
>   
> // Get the RegionSizeCalculator
> try(org.apache.hadoop.hbase.client.Connection connection =
> 
> HBaseFactoryProvider.getHConnectionFactory().createConnection(config)) {
> RegionLocator regionLocator = 
> connection.getRegionLocator(TableName.valueOf(tableName));
> RegionSizeCalculator sizeCalculator = new RegionSizeCalculator(regionLocator, 
> connection
> .getAdmin()); {noformat}
> This defeats the purpose of using snapshots. Refactor the code in a way that 
> the snapshot based codepaths do minimal HBase RPCs and rely solely on 
> snapshot manifest. Even better, improve locality of task scheduling based on 
> snapshot's hfile block locations.
> - Disable indexes for query plan for scanning over snapshots. If there is an 
> index based access path, getScans() can potentially return index based splits 
> which is not what we want for snapshots.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6081) Improvements to snapshot based MR input format

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6081:
--
Affects Version/s: (was: 4.15.1)

> Improvements to snapshot based MR input format
> --
>
> Key: PHOENIX-6081
> URL: https://issues.apache.org/jira/browse/PHOENIX-6081
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 4.14.3, master
>Reporter: Bharath Vissapragada
>Priority: Major
>
> Recently we switched an MR application from scanning live tables to scanning 
> snapshots (PHOENIX-3744). We ran into a severe performance issue, which 
> turned out to be a correctness issue due to overlapping scan split generation. 
> After some debugging we figured out that it had been fixed via PHOENIX-4997. Even 
> with that fix there are quite a few things we could improve about the 
> snapshot based input format. Listing them here, perhaps we can break them 
> into subtasks as needed.
> - Do not restore the snapshot per map task. Currently we restore the snapshot 
> once per map task into a temp directory. For large tables on big clusters, 
> this creates a storm of NN RPCs. We can do this once per job and let all the 
> map tasks operate on the same restored snapshot. HBase already did this via 
> HBASE-18806, we can do something similar.
> - Disable 
> [cacheBlocks|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setCacheBlocks-boolean-]
>  on scans generated by input format. In our experiments block cache took a 
> lot of memory for MR jobs. For full table scans this isn't of much use and 
> can save a lot of memory.
> - Short circuit live-table codepaths when snapshots are enabled. Currently 
> some codepaths make live table HBase RPCs to get a bunch of data. For example
> {noformat}
> private List<InputSplit> generateSplits(final QueryPlan qplan, Configuration 
> config) throws IOException {
> // We must call this in order to initialize the scans and splits from the 
> query plan
>   
> // Get the RegionSizeCalculator
> try(org.apache.hadoop.hbase.client.Connection connection =
> 
> HBaseFactoryProvider.getHConnectionFactory().createConnection(config)) {
> RegionLocator regionLocator = 
> connection.getRegionLocator(TableName.valueOf(tableName));
> RegionSizeCalculator sizeCalculator = new RegionSizeCalculator(regionLocator, 
> connection
> .getAdmin()); {noformat}
> This defeats the purpose of using snapshots. Refactor the code in a way that 
> the snapshot based codepaths do minimal HBase RPCs and rely solely on 
> snapshot manifest. Even better, improve locality of task scheduling based on 
> snapshot's hfile block locations.
> - Disable indexes for query plan for scanning over snapshots. If there is an 
> index based access path, getScans() can potentially return index based splits 
> which is not what we want for snapshots.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6118) Multi Tenant Workloads using PHERF

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6118:
--
Fix Version/s: 4.16.0

> Multi Tenant Workloads using PHERF
> --
>
> Key: PHOENIX-6118
> URL: https://issues.apache.org/jira/browse/PHOENIX-6118
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Features like PHOENIX_TTL and Splittable SYSCAT need to be tested for a large 
> number of tenant views.
> In the absence of a generic framework for dynamically creating a large number 
> of tenant views (including multi-level views) and querying them, teams have to 
> write custom logic to replay/run functional and perf testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6171:
--
Fix Version/s: 4.16.0

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.x
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-6171.4.x.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6171:
--
Affects Version/s: 4.15.0

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 4.x
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-6171.4.x.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6181) IndexRepairRegionScanner to verify and repair every global index row

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6181:
--
Fix Version/s: 4.16.0

> IndexRepairRegionScanner to verify and repair every global index row
> 
>
> Key: PHOENIX-6181
> URL: https://issues.apache.org/jira/browse/PHOENIX-6181
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-6181-Addendum.4.x.001.patch, 
> PHOENIX-6181.4.x.001.patch, PHOENIX-6181.4.x.002.patch, 
> PHOENIX-6181.master.001.patch, PHOENIX-6181.master.002.patch
>
>
> IndexRebuildRegionScanner is the server side engine to rebuild and verify 
> every index row pointed to by the data table. IndexRebuildRegionScanner runs on 
> data table regions and scans every data table rows locally, and then rebuilds 
> and verifies index table rows referenced by the data table rows over 
> server-to-server RPCs using the HBase client installed on region servers. 
> However, IndexRebuildRegionScanner cannot clean up the index rows that are 
> not referenced by the data table if there are such index rows. In order to do 
> that we need another region scanner that scans index table regions and makes 
> sure that every index row is valid. This region scanner will be called 
> IndexRepairRegionScanner.
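
A schematic of the repair step (not the actual IndexRepairRegionScanner code): for each index row, look up the data row it should point to and delete the index row if that data row no longer exists. The KeyTranslator hook is hypothetical; the real index-to-data key translation lives in Phoenix's index maintainers.
{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;

public final class IndexRepairSketch {

    /**
     * Verifies one index row against the data table and deletes it if nothing
     * references it. The translator is a hypothetical stand-in for Phoenix's real
     * index-to-data row key translation.
     */
    static void verifyIndexRow(List<Cell> indexRow, Table dataTable, Table indexTable,
                               KeyTranslator translator) throws IOException {
        byte[] indexRowKey = CellUtil.cloneRow(indexRow.get(0));
        byte[] dataRowKey = translator.dataRowKeyFromIndexRowKey(indexRowKey);
        if (!dataTable.exists(new Get(dataRowKey))) {
            // Orphan index row: no data row references it, so remove it.
            indexTable.delete(new Delete(indexRowKey));
        }
    }

    /** Hypothetical translation hook, named here only for illustration. */
    interface KeyTranslator {
        byte[] dataRowKeyFromIndexRowKey(byte[] indexRowKey);
    }
}
{code}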



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6204) Provide a way to preserve HBase cell timestamps when running UPSERT SELECT statements

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6204:
--
Fix Version/s: 4.17.0

> Provide a way to preserve HBase cell timestamps when running UPSERT SELECT 
> statements
> -
>
> Key: PHOENIX-6204
> URL: https://issues.apache.org/jira/browse/PHOENIX-6204
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.17.0
>
>
> Today when we run an UPSERT SELECT statement, the data is upserted with the 
> current wall clock time rather than using the timestamp of the cells being 
> read via the SELECT statement. In some cases this is favorable, but in others 
> it is not.
> Providing a way to do an UPSERT SELECT in which upserts use the HBase 
> timestamp of the cells being read is a useful feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6207) Paged server side grouped aggregate operations

2020-10-27 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6207:
--
Fix Version/s: 4.16.0

> Paged server side grouped aggregate operations
> --
>
> Key: PHOENIX-6207
> URL: https://issues.apache.org/jira/browse/PHOENIX-6207
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.16.0
>
>
> Phoenix provides the option of performing query operations on the client or 
> server side. This is decided by the Phoenix optimizer based on configuration 
> parameters. For the server side option, the table operation is parallelized 
> such that multiple table regions are scanned. However, currently there is no 
> paging capability and the server side operation can take long enough to lead to 
> HBase client timeouts. Putting a limit on the number of rows to be processed 
> within a single RPC call (i.e., the next operation on the scanner) on the 
> server side using a Phoenix level paging is highly desirable. This paging 
> mechanism has already been implemented for index rebuild and verification 
> operations and has proven effective at preventing timeouts. This Jira is for 
> implementing this paging for the server side grouped aggregate operations. 
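
A schematic of the paging idea described above, not the actual Phoenix scanner classes: the server-side loop stops after a page worth of work and reports hasMore, so the client keeps calling next() instead of timing out. The page limits are illustrative parameters, not real Phoenix configuration names.
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public final class PagedAggregateSketch {

    /**
     * Consumes at most pageSizeRows rows (or pageSizeMs of wall time) from the
     * underlying region scanner per call, then returns control to the RPC layer.
     * The grouped-aggregate bookkeeping itself is elided; in the real scanner an
     * intermediate "keep going" cell is emitted so the client calls next() again
     * while the return value is true.
     */
    static boolean nextPage(RegionScanner delegate, int pageSizeRows, long pageSizeMs)
            throws IOException {
        long deadline = System.currentTimeMillis() + pageSizeMs;
        List<Cell> row = new ArrayList<>();
        boolean hasMore = true;
        int rowsInPage = 0;
        while (hasMore && rowsInPage < pageSizeRows && System.currentTimeMillis() < deadline) {
            row.clear();
            hasMore = delegate.next(row); // read one data row from the region
            // ... feed 'row' into the aggregate state for its group here ...
            rowsInPage++;
        }
        return hasMore; // true means the page ended before the region was exhausted
    }
}
{code}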



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6204) Provide a way to preserve HBase cell timestamps when running UPSERT SELECT statements

2020-10-22 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-6204:
-

 Summary: Provide a way to preserve HBase cell timestamps when 
running UPSERT SELECT statements
 Key: PHOENIX-6204
 URL: https://issues.apache.org/jira/browse/PHOENIX-6204
 Project: Phoenix
  Issue Type: New Feature
Affects Versions: 4.15.0
Reporter: Chinmay Kulkarni


Today when we run an UPSERT SELECT statement, the data is upserted with the 
current wall clock time rather than using the timestamp of the cells being read 
via the SELECT statement. In some cases this is favorable, but in others it is 
not.

Providing a way to do an UPSERT SELECT in which upserts use the HBase timestamp 
of the cells being read is a useful feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6142) Make DDL operations resilient to orphan parent->child linking rows in SYSTEM.CHILD_LINK

2020-10-19 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6142:
--
Attachment: PHOENIX-6142.master.v1.patch

> Make DDL operations resilient to orphan parent->child linking rows in 
> SYSTEM.CHILD_LINK
> ---
>
> Key: PHOENIX-6142
> URL: https://issues.apache.org/jira/browse/PHOENIX-6142
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6142-4.x-v1.patch, PHOENIX-6142.4.x.v2.patch, 
> PHOENIX-6142.4.x.v3.patch, PHOENIX-6142.4.x.v4.patch, 
> PHOENIX-6142.master.v1.patch
>
>
> We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
> making DDL operations resilient to orphan parent->child linking rows. DDL 
> operations identified which can fail due to orphan rows are:
>  # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
> are orphan links from T to some already dropped view. This happens because 
> the call to 
> [MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
>  from 
> [MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2592]
>  fails with a TableNotFoundException.
>  # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
> actually no child views since the orphan rows wrongly indicate that there are 
> child views.
>  # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), 
> we will just ignore any orphan views (for ex, see 
> [this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
>  but the call to UpgradeUtil.upgradeTable() will fail with a 
> TableNotFoundException for each orphan view.
> # During a CREATE TABLE/VIEW, we try to drop any views from the previous life 
> of that table/view, however we might end up dropping a legitimate view (with 
> the same name) which is on another table/view because of this.
> Before dropping any views that we see from a parent->child link, we need to 
> ensure that the view is in fact a child view of the same table/view we think 
> it is an orphan of.
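
For the last point, a hedged sketch of the "is this really my child?" guard using PhoenixRuntime.getTable and PTable.getParentName; resolving the view through a client connection here is only for illustration, since the real fix sits in the server-side drop path:
{code:java}
import java.sql.Connection;
import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.schema.TableNotFoundException;
import org.apache.phoenix.util.PhoenixRuntime;

public final class OrphanLinkGuard {

    /**
     * Returns true only if the view named by a parent->child link still exists and
     * really names expectedParentFullName as its parent; otherwise the link is an
     * orphan and the view must not be dropped on this parent's behalf.
     */
    static boolean isGenuineChild(Connection conn, String childFullName,
                                  String expectedParentFullName) {
        try {
            PTable child = PhoenixRuntime.getTable(conn, childFullName);
            return child.getParentName() != null
                    && expectedParentFullName.equals(child.getParentName().getString());
        } catch (TableNotFoundException e) {
            return false; // the view is already gone; the linking row itself is the orphan
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
{code}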



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6192) UpgradeUtil.syncUpdateCacheFreqAllIndexes() does not use tenant-specific connection to resolve tenant views

2020-10-15 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-6192:
-

 Summary: UpgradeUtil.syncUpdateCacheFreqAllIndexes() does not use 
tenant-specific connection to resolve tenant views
 Key: PHOENIX-6192
 URL: https://issues.apache.org/jira/browse/PHOENIX-6192
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 5.0.0
Reporter: Chinmay Kulkarni
 Fix For: 5.1.0, 4.16.0


In UpgradeUtil.syncUpdateCacheFreqAllIndexes(), we try to retrieve all child 
views of each table to make all the view index UPDATE_CACHE_FREQUENCY property 
values in sync with the view.

Here however, when iterating over the parent->child link results, we don't use 
a tenant-specific connection to retrieve a tenant view leading to the PTable 
resolution failing (see 
[this|https://github.com/apache/phoenix/blob/264310bd1e6c14996c3cfb11557fc66a012cb01b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1369])
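
A small sketch of the tenant-specific resolution the fix needs; "TenantId" is the standard Phoenix connection property, while the tenant and view names are made up:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.util.PhoenixRuntime;

public class TenantViewResolutionSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:localhost";
        String tenantId = "tenant1";           // taken from the parent->child link row
        String tenantViewName = "TENANT_VIEW"; // made-up view name

        // A global connection cannot resolve a tenant-owned view; the upgrade code
        // needs a connection scoped to that tenant before calling upgradeTable().
        Properties props = new Properties();
        props.setProperty("TenantId", tenantId);
        try (Connection tenantConn = DriverManager.getConnection(url, props)) {
            PTable view = PhoenixRuntime.getTable(tenantConn, tenantViewName);
            System.out.println("Resolved " + view.getName().getString());
        }
    }
}
{code}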



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6191) Creating a view which has its own new columns should also do checkAndPut checks on SYSTEM.MUTEX

2020-10-15 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6191:
--
Description: 
Currently, when creating a view we do conditional writes with a checkAndPut to 
SYSTEM.MUTEX for the keys:
(, , )

for each column in the view WHERE clause. Similarly, when issuing an ALTER 
TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
the key:
(, , )

to prevent conflicting modifications between a base table/view and its child 
views. However, if we create a view with its own new columns, for ex:
{code:sql}
CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE B 
= 10;
{code}
we will not do a checkAndPut with the new columns being added to the view 
(NEW_COL1 and NEW_COL2) thus conflicting concurrent mutations may occur to a 
parent in this case, for ex: a simultaneous ALTER TABLE/VIEW of the parent 
which adds NEW_COL1 as a VARCHAR. This will lead to data being unable to be 
read properly.


  was:
Currently, when creating a view we do conditional writes with a checkAndPut to 
SYSTEM.MUTEX for the keys:
(, , )

for each column in the view WHERE clause. Similarly, when issuing an ALTER 
TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
the key:
(, , )

to prevent conflicting modifications between a base table/view and its child 
views. However, if we create a view with its own new columns, for ex:
{code:sql}
CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE B 
= 10;
{code}
we will not do a checkAndPut with the new columns being added to the view 
(NEW_COL1 and NEW_COL2) thus conflicting concurrent mutations may occur to a 
parent in this case. 



> Creating a view which has its own new columns should also do checkAndPut 
> checks on SYSTEM.MUTEX
> ---
>
> Key: PHOENIX-6191
> URL: https://issues.apache.org/jira/browse/PHOENIX-6191
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> Currently, when creating a view we do conditional writes with a checkAndPut 
> to SYSTEM.MUTEX for the keys:
> (, ,  name>)
> for each column in the view WHERE clause. Similarly, when issuing an ALTER 
> TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
> the key:
> (, ,  the column to add/drop>)
> to prevent conflicting modifications between a base table/view and its child 
> views. However, if we create a view with its own new columns, for ex:
> {code:sql}
> CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE 
> B = 10;
> {code}
> we will not do a checkAndPut with the new columns being added to the view 
> (NEW_COL1 and NEW_COL2) thus conflicting concurrent mutations may occur to a 
> parent in this case, for ex: a simultaneous ALTER TABLE/VIEW of the parent 
> which adds NEW_COL1 as a VARCHAR. This will lead to data being unable to be 
> read properly.
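
A rough sketch of extending the mutex acquisition to the view's own new columns using the plain HBase checkAndPut primitive; the row-key composition and the column family/qualifier used here are placeholders, not the real SYSTEM.MUTEX layout:
{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class ViewColumnMutexSketch {

    private static final byte[] FAMILY = Bytes.toBytes("0");        // placeholder
    private static final byte[] QUALIFIER = Bytes.toBytes("MUTEX"); // placeholder

    /**
     * Acquires one mutex cell per *new* view column (NEW_COL1, NEW_COL2, ...) in
     * addition to the WHERE-clause columns, so a concurrent ALTER on the parent
     * adding the same column name with a different type fails the conditional write.
     * The row-key composition is a stand-in for however the real key is built.
     */
    static boolean acquireColumnMutexes(Table sysMutex, byte[] parentKeyPrefix,
                                        List<String> newViewColumns) throws IOException {
        for (String column : newViewColumns) {
            byte[] row = Bytes.add(parentKeyPrefix, Bytes.toBytes(column));
            Put put = new Put(row).addColumn(FAMILY, QUALIFIER, Bytes.toBytes(1L));
            // checkAndPut succeeds only if nobody else holds the cell (value == null).
            if (!sysMutex.checkAndPut(row, FAMILY, QUALIFIER, null, put)) {
                return false; // conflicting concurrent DDL; the caller should abort
            }
        }
        return true;
    }
}
{code}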



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6191) Creating a view which has its own new columns should also do checkAndPut checks on SYSTEM.MUTEX

2020-10-15 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6191:
-

Assignee: Chinmay Kulkarni

> Creating a view which has its own new columns should also do checkAndPut 
> checks on SYSTEM.MUTEX
> ---
>
> Key: PHOENIX-6191
> URL: https://issues.apache.org/jira/browse/PHOENIX-6191
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> Currently, when creating a view we do conditional writes with a checkAndPut 
> to SYSTEM.MUTEX for the keys:
> (, ,  name>)
> for each column in the view WHERE clause. Similarly, when issuing an ALTER 
> TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
> the key:
> (, ,  the column to add/drop>)
> to prevent conflicting modifications between a base table/view and its child 
> views. However, if we create a view with its own new columns, for ex:
> {code:sql}
> CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE 
> B = 10;
> {code}
> we will not do a checkAndPut with the new columns being added to the view 
> (NEW_COL1 and NEW_COL2) thus conflicting concurrent mutations may occur to a 
> parent in this case, for ex: a simultaneous ALTER TABLE/VIEW of the parent 
> which adds NEW_COL1 as a VARCHAR. This will lead to data being unable to be 
> read properly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6191) Creating a view which has its own new columns should also do checkAndPut checks on SYSTEM.MUTEX

2020-10-15 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6191:
--
Summary: Creating a view which has its own new columns should also do 
checkAndPut checks on SYSTEM.MUTEX  (was: We should also do checkAndPut checks 
on SYSTEM.MUTEX when creating a view which has its own new columns)

> Creating a view which has its own new columns should also do checkAndPut 
> checks on SYSTEM.MUTEX
> ---
>
> Key: PHOENIX-6191
> URL: https://issues.apache.org/jira/browse/PHOENIX-6191
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> Currently, when creating a view we do conditional writes with a checkAndPut 
> to SYSTEM.MUTEX for the keys:
> (, ,  name>)
> for each column in the view WHERE clause. Similarly, when issuing an ALTER 
> TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
> the key:
> (, ,  the column to add/drop>)
> to prevent conflicting modifications between a base table/view and its child 
> views. However, if we create a view with its own new columns, for ex:
> {code:sql}
> CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE 
> B = 10;
> {code}
> we will not do a checkAndPut with the new columns being added to the view 
> (NEW_COL1 and NEW_COL2) thus conflicting concurrent mutations may occur to a 
> parent in this case. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6191) We should also do checkAndPut checks on SYSTEM.MUTEX when creating a view which has its own new columns

2020-10-15 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6191:
--
Priority: Critical  (was: Major)

> We should also do checkAndPut checks on SYSTEM.MUTEX when creating a view 
> which has its own new columns
> ---
>
> Key: PHOENIX-6191
> URL: https://issues.apache.org/jira/browse/PHOENIX-6191
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> Currently, when creating a view we do conditional writes with a checkAndPut 
> to SYSTEM.MUTEX for the keys:
> (, ,  name>)
> for each column in the view WHERE clause. Similarly, when issuing an ALTER 
> TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
> the key:
> (, ,  the column to add/drop>)
> to prevent conflicting modifications between a base table/view and its child 
> views. However, if we create a view with its own new columns, for ex:
> {code:sql}
> CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE 
> B = 10;
> {code}
> we will not do a checkAndPut with the new columns being added to the view 
> (NEW_COL1 and NEW_COL2) thus conflicting concurrent mutations may occur to a 
> parent in this case. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6191) We should also do checkAndPut checks on SYSTEM.MUTEX when creating a view which has its own new columns

2020-10-15 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-6191:
-

 Summary: We should also do checkAndPut checks on SYSTEM.MUTEX when 
creating a view which has its own new columns
 Key: PHOENIX-6191
 URL: https://issues.apache.org/jira/browse/PHOENIX-6191
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 5.0.0
Reporter: Chinmay Kulkarni
 Fix For: 5.1.0, 4.16.0


Currently, when creating a view we do conditional writes with a checkAndPut to 
SYSTEM.MUTEX for the keys:
(, , )

for each column in the view WHERE clause. Similarly, when issuing an ALTER 
TABLE/VIEW, we do a conditional write with a checkAndPut to SYSTEM.MUTEX for 
the key:
(, , )

to prevent conflicting modifications between a base table/view and its child 
views. However, if we create a view with its own new columns, for ex:
{code:sql}
CREATE VIEW V1 (NEW_COL1 INTEGER, NEW_COL2 INTEGER) AS SELECT * FROM T1 WHERE B 
= 10;
{code}
we will not do a checkAndPut with the new columns being added to the view 
(NEW_COL1 and NEW_COL2) thus conflicting concurrent mutations may occur to a 
parent in this case. 




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6190) Race condition in view creation may allow conflicting changes for pre-4.15 clients and for scenarios with phoenix.allow.system.catalog.rollback=true

2020-10-15 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-6190:
-

 Summary: Race condition in view creation may allow conflicting 
changes for pre-4.15 clients and for scenarios with 
phoenix.allow.system.catalog.rollback=true
 Key: PHOENIX-6190
 URL: https://issues.apache.org/jira/browse/PHOENIX-6190
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 5.0.0
Reporter: Chinmay Kulkarni
 Fix For: 5.1.0, 4.16.0


For pre-4.15 clients and in scenarios where 
phoenix.allow.system.catalog.rollback=true, we have to block adding/dropping a 
column to/from a parent table/view as we no longer lock the parent on the 
server side while creating a child view to prevent conflicting changes. This is 
handled on the client side from 4.15 onwards.

However, there is a slight race condition here where a view may be created 
between the time we find all children of the parent and the time we do this 
check (see 
[this|https://github.com/apache/phoenix/blob/264310bd1e6c14996c3cfb11557fc66a012cb01b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2592]).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6142) Make DDL operations resilient to orphan parent->child linking rows in SYSTEM.CHILD_LINK

2020-10-15 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6142:
--
Attachment: PHOENIX-6142.4.x.v4.patch

> Make DDL operations resilient to orphan parent->child linking rows in 
> SYSTEM.CHILD_LINK
> ---
>
> Key: PHOENIX-6142
> URL: https://issues.apache.org/jira/browse/PHOENIX-6142
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6142-4.x-v1.patch, PHOENIX-6142.4.x.v2.patch, 
> PHOENIX-6142.4.x.v3.patch, PHOENIX-6142.4.x.v4.patch
>
>
> We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
> making DDL operations resilient to orphan parent->child linking rows. DDL 
> operations identified which can fail due to orphan rows are:
>  # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
> are orphan links from T to some already dropped view. This happens because 
> the call to 
> [MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
>  from 
> [MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2592]
>  fails with a TableNotFoundException.
>  # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
> actually no child views since the orphan rows wrongly indicate that there are 
> child views.
>  # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), 
> we will just ignore any orphan views (for ex, see 
> [this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
>  but the call to UpgradeUtil.upgradeTable() will fail with a 
> TableNotFoundException for each orphan view.
> # During a CREATE TABLE/VIEW, we try to drop any views from the previous life 
> of that table/view, however we might end up dropping a legitimate view (with 
> the same name) which is on another table/view because of this.
> Before dropping any views that we see from a parent->child link, we need to 
> ensure that the view is in fact a child view of the same table/view we think 
> it is an orphan of.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6142) Make DDL operations resilient to orphan parent->child linking rows in SYSTEM.CHILD_LINK

2020-10-14 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6142:
--
Attachment: PHOENIX-6142.4.x.v3.patch

> Make DDL operations resilient to orphan parent->child linking rows in 
> SYSTEM.CHILD_LINK
> ---
>
> Key: PHOENIX-6142
> URL: https://issues.apache.org/jira/browse/PHOENIX-6142
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6142-4.x-v1.patch, PHOENIX-6142.4.x.v2.patch, 
> PHOENIX-6142.4.x.v3.patch
>
>
> We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
> making DDL operations resilient to orphan parent->child linking rows. DDL 
> operations identified which can fail due to orphan rows are:
>  # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
> are orphan links from T to some already dropped view. This happens because 
> the call to 
> [MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
>  from 
> [MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2592]
>  fails with a TableNotFoundException.
>  # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
> actually no child views since the orphan rows wrongly indicate that there are 
> child views.
>  # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), 
> we will just ignore any orphan views (for ex, see 
> [this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
>  but the call to UpgradeUtil.upgradeTable() will fail with a 
> TableNotFoundException for each orphan view.
> # During a CREATE TABLE/VIEW, we try to drop any views from the previous life 
> of that table/view, however we might end up dropping a legitimate view (with 
> the same name) which is on another table/view because of this.
> Before dropping any views that we see from a parent->child link, we need to 
> ensure that the view is in fact a child view of the same table/view we think 
> it is an orphan of.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6142) Make DDL operations resilient to orphan parent->child linking rows in SYSTEM.CHILD_LINK

2020-10-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6142:
--
Description: 
We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
making DDL operations resilient to orphan parent->child linking rows. DDL 
operations identified which can fail due to orphan rows are:
 # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
are orphan links from T to some already dropped view. This happens because the 
call to 
[MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
 from 
[MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2592]
 fails with a TableNotFoundException.
 # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
actually no child views since the orphan rows wrongly indicate that there are 
child views.
 # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), we 
will just ignore any orphan views (for ex, see 
[this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
 but the call to UpgradeUtil.upgradeTable() will fail with a 
TableNotFoundException for each orphan view.
# During a CREATE TABLE/VIEW, we try to drop any views from the previous life 
of that table/view; however, we might end up dropping a legitimate view (with 
the same name) which is on another table/view because of this.

Before dropping any views that we see from a parent->child link, we need to 
ensure that the view is in fact a child view of the same table/view we think it 
is an orphan of.

  was:
We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
making DDL operations resilient to orphan parent->child linking rows. DDL 
operations identified which can fail due to orphan rows are:
 # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
are orphan links from T to some already dropped view. This happens because the 
call to 
[MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
 from 
[MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L259]
 fails with a TableNotFoundException.
 # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
actually no child views since the orphan rows wrongly indicate that there are 
child views.
 # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), we 
will just ignore any orphan views (for ex, see 
[this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
 but the call to UpgradeUtil.upgradeTable() will fail with a 
TableNotFoundException for each orphan view.
# During a CREATE TABLE/VIEW, we try to drop any views from the previous life 
of that table/view; however, we might end up dropping a legitimate view (with 
the same name) which is on another table/view because of this.

Before dropping any views that we see from a parent->child link, we need to 
ensure that the view is in fact a child view of the same table/view we think it 
is an orphan of.


> Make DDL operations resilient to orphan parent->child linking rows in 
> SYSTEM.CHILD_LINK
> ---
>
> Key: PHOENIX-6142
> URL: https://issues.apache.org/jira/browse/PHOENIX-6142
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6142-4.x-v1.patch, PHOENIX-6142.4.x.v2.patch
>
>
> We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
> making DDL operations resilient to orphan parent->child linking rows. DDL 
> operations identified which can fail due to orphan rows are:
>  # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
> are orphan links from T to some already dropped view. This happens because 
> the call to 
> 

[jira] [Updated] (PHOENIX-6169) IT suite never finishes on 4.x with HBase 1.3 or 1.4

2020-10-01 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6169:
--
Fix Version/s: (was: 5.1.0)

> IT suite never finishes on 4.x with HBase 1.3 or 1.4
> 
>
> Key: PHOENIX-6169
> URL: https://issues.apache.org/jira/browse/PHOENIX-6169
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.0
> Environment: ASF Jenkins (at least)
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Blocker
> Fix For: 4.16.0
>
>
> Running {{mvn verify}} on the current 4.x branch hangs indefinitely.
> Apart from making it impossible to run all tests and get a successful test 
> run, this also prevents Yetus from posting precommit check results.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6169) IT suite never finishes on 4.x with HBase 1.3 or 1.4

2020-10-01 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6169:
--
Fix Version/s: 4.16.0
   5.1.0

> IT suite never finishes on 4.x with HBase 1.3 or 1.4
> 
>
> Key: PHOENIX-6169
> URL: https://issues.apache.org/jira/browse/PHOENIX-6169
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.0
> Environment: ASF Jenkins (at least)
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> Running {{mvn verify}} on the current 4.x branch hangs indefinitely.
> Apart from making it impossible to run all tests and get a successful test 
> run, this also prevents Yetus from posting precommit check results.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6142) Make DDL operations resilient to orphan parent->child linking rows in SYSTEM.CHILD_LINK

2020-09-30 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6142:
--
Attachment: PHOENIX-6142.4.x.v2.patch

> Make DDL operations resilient to orphan parent->child linking rows in 
> SYSTEM.CHILD_LINK
> ---
>
> Key: PHOENIX-6142
> URL: https://issues.apache.org/jira/browse/PHOENIX-6142
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6142-4.x-v1.patch, PHOENIX-6142.4.x.v2.patch
>
>
> We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
> making DDL operations resilient to orphan parent->child linking rows. DDL 
> operations identified which can fail due to orphan rows are:
>  # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
> are orphan links from T to some already dropped view. This happens because 
> the call to 
> [MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
>  from 
> [MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L259]
>  fails with a TableNotFoundException.
>  # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
> actually no child views since the orphan rows wrongly indicate that there are 
> child views.
>  # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), 
> we will just ignore any orphan views (for ex, see 
> [this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
>  but the call to UpgradeUtil.upgradeTable() will fail with a 
> TableNotFoundException for each orphan view.
> # During a CREATE TABLE/VIEW, we try to drop any views from the previous life 
> of that table/view; however, we might end up dropping a legitimate view (with 
> the same name) which is on another table/view because of this.
> Before dropping any views that we see from a parent->child link, we need to 
> ensure that the view is in fact a child view of the same table/view we think 
> it is an orphan of.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6153) Table Map Reduce job after a Snapshot based job fails with CorruptedSnapshotException

2020-09-28 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6153:
--
Fix Version/s: 5.1.0

> Table Map Reduce job after a Snapshot based job fails with 
> CorruptedSnapshotException
> -
>
> Key: PHOENIX-6153
> URL: https://issues.apache.org/jira/browse/PHOENIX-6153
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.15.0, 4.14.3, master
>Reporter: Saksham Gangwar
>Assignee: Saksham Gangwar
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6153.master.v1.patch, 
> PHOENIX-6153.master.v2.patch, PHOENIX-6153.master.v3.patch, 
> PHOENIX-6153.master.v4.patch, PHOENIX-6153.master.v5.patch
>
>
> For the different MR job requests that reach [MapReduceParallelScanGrouper 
> getRegionBoundaries|https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65],
>  we currently rely on a Configuration shared among jobs to figure out 
> snapshot names.
> Example job sequence: the first two jobs work over snapshots and the third job 
> over a regular table.
> Printing the hashcode of the relevant objects when entering 
> [https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65]:
> *Job 1:* (over snapshot of  *ABC_TABLE_1* and is successful)
> context.getConnection(): 521093916
>  ConnectionQueryServices: 1772519705
>  *Configuration conf: 813285994*
>      conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY):*ABC_TABLE_1*
>  
> *Job 2:* (over snapshot of *ABC_TABLE_2* and is successful)
> context.getConnection(): 1928017473
>  ConnectionQueryServices: 961279422
>  *Configuration conf: 813285994*
>      conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2*
>  
> *Job 3:* (over the table *ABC_TABLE_3*; fails with 
> CorruptedSnapshotException even though it has nothing to do with snapshots)
> context.getConnection(): 28889670
>  ConnectionQueryServices: 424389847
>  *Configuration: 813285994*
>      conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2*
>  
> Exception which we get:
>  [2020:08:18 20:56:17.409] [MigrationRetryPoller-Executor-1] [ERROR] 
> [c.s.hgrate.mapreduce.MapReduceImpl] - Error submitting M/R job for Job 3
>  java.lang.RuntimeException: 
> org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Couldn't read 
> snapshot info 
> from:hdfs://.../hbase/.hbase-snapshot/ABC_TABLE_2_1597687413477/.snapshotinfo
>  at 
> org.apache.phoenix.iterate.MapReduceParallelScanGrouper.getRegionBoundaries(MapReduceParallelScanGrouper.java:81)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:541)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:893)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:641)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:511)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:367) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:218) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:213) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.setupParallelScansWithScanGrouper(PhoenixInputFormat.java:252)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  
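
One way to avoid the stale-key problem described above is to give each MR job a private copy of the Configuration and clear any snapshot name left over from a previous snapshot-based job before configuring a plain table job. A rough sketch, assuming the literal value of PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY and simplified job-setup helpers:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PerJobConfig {

    // Assumed value of PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY; quoted here only for illustration.
    private static final String SNAPSHOT_NAME_KEY = "phoenix.mapreduce.snapshot.name";

    /** Private copy for a plain-table job: nothing written by an earlier snapshot job can leak in. */
    public static Configuration forTableJob(Configuration shared) {
        Configuration conf = new Configuration(shared); // copy instead of mutating the shared object
        conf.unset(SNAPSHOT_NAME_KEY);                  // "Job 3" must not see "Job 2"'s snapshot name
        return conf;
    }

    /** Private copy for a snapshot-based job, carrying its own snapshot name. */
    public static Configuration forSnapshotJob(Configuration shared, String snapshotName) {
        Configuration conf = new Configuration(shared);
        conf.set(SNAPSHOT_NAME_KEY, snapshotName);
        return conf;
    }
}
{code}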

[jira] [Updated] (PHOENIX-6153) Table Map Reduce job after a Snapshot based job fails with CorruptedSnapshotException

2020-09-28 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6153:
--
Affects Version/s: (was: 4.x)
   4.15.0

> Table Map Reduce job after a Snapshot based job fails with 
> CorruptedSnapshotException
> -
>
> Key: PHOENIX-6153
> URL: https://issues.apache.org/jira/browse/PHOENIX-6153
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.15.0, 4.14.3, master
>Reporter: Saksham Gangwar
>Assignee: Saksham Gangwar
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-6153.master.v1.patch, 
> PHOENIX-6153.master.v2.patch, PHOENIX-6153.master.v3.patch, 
> PHOENIX-6153.master.v4.patch, PHOENIX-6153.master.v5.patch
>
>
> For the different MR job requests that reach [MapReduceParallelScanGrouper 
> getRegionBoundaries|https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65],
>  we currently rely on a Configuration shared among jobs to figure out 
> snapshot names.
> Example job sequence: the first two jobs work over snapshots and the third job 
> over a regular table.
> Printing the hashcode of the relevant objects when entering 
> [https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65]:
> *Job 1:* (over snapshot of  *ABC_TABLE_1* and is successful)
> context.getConnection(): 521093916
>  ConnectionQueryServices: 1772519705
>  *Configuration conf: 813285994*
>      conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY):*ABC_TABLE_1*
>  
> *Job 2:* (over snapshot of *ABC_TABLE_2* and is successful)
> context.getConnection(): 1928017473
>  ConnectionQueryServices: 961279422
>  *Configuration conf: 813285994*
>      conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2*
>  
> *Job 3:* (over the table *ABC_TABLE_3*; fails with 
> CorruptedSnapshotException even though it has nothing to do with snapshots)
> context.getConnection(): 28889670
>  ConnectionQueryServices: 424389847
>  *Configuration: 813285994*
>      conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2*
>  
> Exception which we get:
>  [2020:08:18 20:56:17.409] [MigrationRetryPoller-Executor-1] [ERROR] 
> [c.s.hgrate.mapreduce.MapReduceImpl] - Error submitting M/R job for Job 3
>  java.lang.RuntimeException: 
> org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Couldn't read 
> snapshot info 
> from:hdfs://.../hbase/.hbase-snapshot/ABC_TABLE_2_1597687413477/.snapshotinfo
>  at 
> org.apache.phoenix.iterate.MapReduceParallelScanGrouper.getRegionBoundaries(MapReduceParallelScanGrouper.java:81)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:541)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:893)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:641)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:511)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>  
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:367) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:218) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:213) 
> ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
>  at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.setupParallelScansWithScanGrouper(PhoenixInputFormat.java:252)
>  
> 

[jira] [Updated] (PHOENIX-6124) Block adding/dropping a column on a parent view for clients <4.15 and for clients that have phoenix.allow.system.catalog.rollback=true

2020-09-23 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6124:
--
Priority: Blocker  (was: Major)

> Block adding/dropping a column on a parent view for clients <4.15 and for 
> clients that have phoenix.allow.system.catalog.rollback=true
> --
>
> Key: PHOENIX-6124
> URL: https://issues.apache.org/jira/browse/PHOENIX-6124
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> For pre-4.15 clients, we have to block adding/dropping a column on a table if 
> it has child views, since we can’t do checkAndPut-based distributed locking (see 
> [this|https://github.com/apache/phoenix/blob/6ecc66738e576a5349605c2f5b20003df03f95de/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2595-L2616]).
>  
> However, this is only prevented if the parent is a table and not if the 
> parent is a view. We should extend [the 
> condition|https://github.com/apache/phoenix/blob/6ecc66738e576a5349605c2f5b20003df03f95de/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2591]
>  to also cover views since conflicting mutations on its children can also 
> lead to inconsistencies.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
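
Conceptually the fix is to widen the existing guard from "parent is a table" to "parent is a table or a view". A hedged sketch, in which the version constant, parent-type enum and method shape are simplified placeholders for the real MetaDataEndpointImpl logic:

{code:java}
public class MutateColumnGuard {

    enum ParentType { TABLE, VIEW, INDEX }

    // Placeholder for the real minimum-client-version constant (MIN_SPLITTABLE_SYSTEM_CATALOG).
    private static final long MIN_SPLITTABLE_SYSTEM_CATALOG_CLIENT = 15L;

    /**
     * True when the ADD/DROP COLUMN request must be rejected: the client is too
     * old (or catalog rollback is allowed) to safely mutate a parent with child
     * views, and that parent is a base table OR a view -- the "OR a view" part
     * being the extension this issue asks for.
     */
    static boolean mustBlockColumnMutation(long clientVersion,
                                           boolean allowSystemCatalogRollback,
                                           ParentType parentType,
                                           boolean parentHasChildViews) {
        boolean lockingUnavailable =
                clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG_CLIENT || allowSystemCatalogRollback;
        boolean parentCanHaveChildViews =
                parentType == ParentType.TABLE || parentType == ParentType.VIEW;
        return lockingUnavailable && parentCanHaveChildViews && parentHasChildViews;
    }
}
{code}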


[jira] [Updated] (PHOENIX-6142) Make DDL operations resilient to orphan parent->child linking rows in SYSTEM.CHILD_LINK

2020-09-23 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6142:
--
Description: 
We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
making DDL operations resilient to orphan parent->child linking rows. DDL 
operations identified which can fail due to orphan rows are:
 # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
are orphan links from T to some already dropped view. This happens because the 
call to 
[MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
 from 
[MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L259]
 fails with a TableNotFoundException.
 # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
actually no child views since the orphan rows wrongly indicate that there are 
child views.
 # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), we 
will just ignore any orphan views (for ex, see 
[this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
 but the call to UpgradeUtil.upgradeTable() will fail with a 
TableNotFoundException for each orphan view.
# During a CREATE TABLE/VIEW, we try to drop any views from the previous life 
of that table/view; however, we might end up dropping a legitimate view (with 
the same name) which is on another table/view because of this.

Before dropping any views that we see from a parent->child link, we need to 
ensure that the view is in fact a child view of the same table/view we think it 
is an orphan of.

  was:
We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
making DDL operations resilient to orphan parent->child linking rows. DDL 
operations identified which can fail due to orphan rows are:
 # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
are orphan links from T to some already dropped view. This happens because the 
call to 
[MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
 from 
[MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L259]
 fails with a TableNotFoundException.
 # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
actually no child views since the orphan rows wrongly indicate that there are 
child views.
 # During the upgrade path for UpgradeUtil.syncUpdateCacheFreqAllIndexes(), we 
will just ignore any orphan views (for ex, see 
[this|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L1368-L1374]),
 but the call to UpgradeUtil.upgradeTable() will fail with a 
TableNotFoundException for each orphan view.


> Make DDL operations resilient to orphan parent->child linking rows in 
> SYSTEM.CHILD_LINK
> ---
>
> Key: PHOENIX-6142
> URL: https://issues.apache.org/jira/browse/PHOENIX-6142
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> We are targeting PHOENIX-6141 for 4.17. Until we have it, we should aim at 
> making DDL operations resilient to orphan parent->child linking rows. DDL 
> operations identified which can fail due to orphan rows are:
>  # Any ALTER TABLE ADD/DROP/SET calls on the base table T will fail if there 
> are orphan links from T to some already dropped view. This happens because 
> the call to 
> [MetaDataEndpointImpl.findAllChildViews()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2142]
>  from 
> [MetaDataEndpointImpl.mutateColumn()|https://github.com/apache/phoenix/blob/fece8e69b9c03c80db7a0801d99e5de31fe15ffa/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L259]
>  fails with a TableNotFoundException.
>  # Any DROP TABLE/VIEW call without CASCADE will fail even though there are 
> actually no child views since the orphan 

[jira] [Assigned] (PHOENIX-6154) Move the check for existence of child views and task addition to drop those child views to the client side when dropping a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6154:
-

Assignee: (was: Chinmay Kulkarni)

> Move the check for existence of child views and task addition to drop those 
> child views to the client side when dropping a table/view
> -
>
> Key: PHOENIX-6154
> URL: https://issues.apache.org/jira/browse/PHOENIX-6154
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has 
> child views (and {{CASCADE}} is provided), we add a 
> {{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
>  in the {{SYSTEM.TASK}} table (see 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
>  This means that *while holding the row lock* for the table/view’s header row 
> ([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
>  we do the following:
>  # Make an 
> [RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find 
> child views.
>  # If any child views are found in the step above, we make additional RPCs to 
> the region hosting {{SYSTEM.TASK}} to 
> {{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
>  a {{DropChildViewsTask}} for immediate child views.
>  # We [send remote 
> mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
>  to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.
> Of the above extra RPCs, note that even if the table/view has no child views 
> or if {{CASCADE}} is not provided, we will still do the first RPC from the 
> server while holding a row lock.
> We should move this check to the client (issue a scan against 
> SYSTEM.CHILD_LINK to see if a single linking row exists) and also add the 
> task from the client.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
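
A rough JDBC sketch of the proposed client-side existence check: scan SYSTEM.CHILD_LINK for a single linking row under the parent's key instead of doing that scan on the server while the header row lock is held. The column names and LINK_TYPE value below are assumptions, not verified schema details:

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ChildLinkCheck {

    /**
     * Returns true if at least one parent->child linking row exists for the
     * given table/view, i.e. it still appears to have child views. LIMIT 1
     * keeps the scan cheap: only existence is needed, not the full list.
     * Column names and LINK_TYPE = 4 (parent->child link) are assumptions;
     * tenant and null-schema handling are omitted to keep the sketch short.
     */
    public static boolean hasChildViews(Connection conn, String schema, String table)
            throws SQLException {
        String sql = "SELECT 1 FROM SYSTEM.CHILD_LINK"
                + " WHERE TABLE_SCHEM = ? AND TABLE_NAME = ? AND LINK_TYPE = 4 LIMIT 1";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, schema);
            ps.setString(2, table);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
{code}

If the check comes back true, the client can add the DropChildViewsTask itself before sending the DROP, so the server no longer needs the extra RPCs under the row lock.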


[jira] [Updated] (PHOENIX-6154) Move the check for existence of child views and task addition to drop those child views to the client side when dropping a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6154:
--
Summary: Move the check for existence of child views and task addition to 
drop those child views to the client side when dropping a table/view  (was: 
Move check to see if there are any child views that need to be dropped and task 
addition to drop those child views to the client side when dropping a 
table/view)

> Move the check for existence of child views and task addition to drop those 
> child views to the client side when dropping a table/view
> -
>
> Key: PHOENIX-6154
> URL: https://issues.apache.org/jira/browse/PHOENIX-6154
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has 
> child views (and {{CASCADE}} is provided), we add a 
> {{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
>  in the {{SYSTEM.TASK}} table (see 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
>  This means that *while holding the row lock* for the table/view’s header row 
> ([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
>  we do the following:
>  # Make an 
> [RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find 
> child views.
>  # If any child views are found in the step above, we make additional RPCs to 
> the region hosting {{SYSTEM.TASK}} to 
> {{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
>  a {{DropChildViewsTask}} for immediate child views.
>  # We [send remote 
> mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
>  to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.
> Of the above extra RPCs, note that even if the table/view has no child views 
> or if {{CASCADE}} is not provided, we will still do the first RPC from the 
> server while holding a row lock.
> We should move this check to the client (issue a scan against 
> SYSTEM.CHILD_LINK to see if a single linking row exists) and also add the 
> task from the client.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6155) Prevent doing direct upserts into SYSTEM.TASK from the client

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6155:
--
Fix Version/s: 4.16.0
   5.1.0

> Prevent doing direct upserts into SYSTEM.TASK from the client
> -
>
> Key: PHOENIX-6155
> URL: https://issues.apache.org/jira/browse/PHOENIX-6155
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> In environments with namespace-mapping enabled, we will have to grant write 
> access to clients in order to make direct upserts into SYSTEM.TASK. Currently 
> we add a task from the client-side 
> [here|https://github.com/apache/phoenix/blob/4.x/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L4654].
>  In order to implement other Jiras like 
> [PHOENIX-6154|https://issues.apache.org/jira/browse/PHOENIX-6154] we also may 
> need to interact with the SYSTEM.TASK table from the client-side.
> Instead of doing direct upserts into this table, we should add an endpoint on 
> SYSTEM.TASK and clients should interact with that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
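
The gist of the proposal: the client hands the task to a server-side endpoint on SYSTEM.TASK rather than writing the row itself, so no write grant on SYSTEM.TASK is needed. A hedged sketch contrasting the two approaches, with a hypothetical endpoint interface and an abbreviated column list:

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TaskSubmission {

    /**
     * What clients effectively do today (simplified; the real SYSTEM.TASK schema
     * has more columns, e.g. TASK_TS and TASK_STATUS): a direct UPSERT, which
     * requires granting the client write access to SYSTEM.TASK under namespace mapping.
     */
    static void upsertTaskDirectly(Connection conn, int taskType, String schema, String table)
            throws SQLException {
        String sql = "UPSERT INTO SYSTEM.TASK (TASK_TYPE, TABLE_SCHEM, TABLE_NAME) VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, taskType);
            ps.setString(2, schema);
            ps.setString(3, table);
            ps.executeUpdate();
        }
        conn.commit();
    }

    /** Hypothetical interface; the real change would be a coprocessor endpoint plus protobuf messages. */
    interface TaskEndpoint {
        void addTask(int taskType, String tenantId, String schema, String table);
    }

    /** Endpoint-based alternative: only server-side code touches SYSTEM.TASK, so no client write grant is needed. */
    static void submitViaEndpoint(TaskEndpoint endpoint, int taskType, String schema, String table) {
        endpoint.addTask(taskType, null, schema, table);
    }
}
{code}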


[jira] [Updated] (PHOENIX-6155) Prevent doing direct upserts into SYSTEM.TASK from the client

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6155:
--
Affects Version/s: 5.0.0
   4.15.0

> Prevent doing direct upserts into SYSTEM.TASK from the client
> -
>
> Key: PHOENIX-6155
> URL: https://issues.apache.org/jira/browse/PHOENIX-6155
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
>
> In environments with namespace-mapping enabled, we will have to grant write 
> access to clients in order to make direct upserts into SYSTEM.TASK. Currently 
> we add a task from the client-side 
> [here|https://github.com/apache/phoenix/blob/4.x/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L4654].
>  In order to implement other Jiras like 
> [PHOENIX-6154|https://issues.apache.org/jira/browse/PHOENIX-6154] we also may 
> need to interact with the SYSTEM.TASK table from the client-side.
> Instead of doing direct upserts into this table, we should add an endpoint on 
> SYSTEM.TASK and clients should interact with that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6155) Prevent doing direct upserts into SYSTEM.TASK from the client

2020-09-22 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-6155:
-

 Summary: Prevent doing direct upserts into SYSTEM.TASK from the 
client
 Key: PHOENIX-6155
 URL: https://issues.apache.org/jira/browse/PHOENIX-6155
 Project: Phoenix
  Issue Type: Improvement
Reporter: Chinmay Kulkarni


In environments with namespace-mapping enabled, we will have to grant write 
access to clients in order to make direct upserts into SYSTEM.TASK. Currently 
we add a task from the client-side 
[here|https://github.com/apache/phoenix/blob/4.x/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L4654].
 In order to implement other Jiras like 
[PHOENIX-6154|https://issues.apache.org/jira/browse/PHOENIX-6154] we also may 
need to interact with the SYSTEM.TASK table from the client-side.

Instead of doing direct upserts into this table, we should add an endpoint on 
SYSTEM.TASK and clients should interact with that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6154) Move check to see if there are any child views that need to be dropped and task addition to drop those child views to the client side when dropping a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6154:
--
Summary: Move check to see if there are any child views that need to be 
dropped and task addition to drop those child views to the client side when 
dropping a table/view  (was: Move check to client side to see if there are any 
child views that need to be dropped while dropping a table/view)

> Move check to see if there are any child views that need to be dropped and 
> task addition to drop those child views to the client side when dropping a 
> table/view
> 
>
> Key: PHOENIX-6154
> URL: https://issues.apache.org/jira/browse/PHOENIX-6154
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has 
> child views (and {{CASCADE}} is provided), we add a 
> {{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
>  in the {{SYSTEM.TASK}} table (see 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
>  This means that *while holding the row lock* for the table/view’s header row 
> ([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
>  we do the following:
>  # Make an 
> [RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find 
> child views.
>  # If any child views are found in the step above, we make additional RPCs to 
> the region hosting {{SYSTEM.TASK}} to 
> {{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
>  a {{DropChildViewsTask}} for immediate child views.
>  # We [send remote 
> mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
>  to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.
> Of the above extra RPCs, note that even if the table/view has no child views 
> or if {{CASCADE}} is not provided, we will still do the first RPC from the 
> server while holding a row lock.
> We should move this check to the client (issue a scan against 
> SYSTEM.CHILD_LINK to see if a single linking row exists) and also add the 
> task from the client.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6154) Move check to see if there are any child views that need to be dropped and task addition to drop those child views to the client side when dropping a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-6154:
-

Assignee: Chinmay Kulkarni

> Move check to see if there are any child views that need to be dropped and 
> task addition to drop those child views to the client side when dropping a 
> table/view
> 
>
> Key: PHOENIX-6154
> URL: https://issues.apache.org/jira/browse/PHOENIX-6154
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has 
> child views (and {{CASCADE}} is provided), we add a 
> {{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
>  in the {{SYSTEM.TASK}} table (see 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
>  This means that *while holding the row lock* for the table/view’s header row 
> ([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
>  we do the following:
>  # Make an 
> [RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find 
> child views.
>  # If any child views are found in the step above, we make additional RPCs to 
> the region hosting {{SYSTEM.TASK}} to 
> {{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
>  a {{DropChildViewsTask}} for immediate child views.
>  # We [send remote 
> mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
>  to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.
> Of the above extra RPCs, note that even if the table/view has no child views 
> or if {{CASCADE}} is not provided, we will still do the first RPC from the 
> server while holding a row lock.
> We should move this check to the client (issue a scan against 
> SYSTEM.CHILD_LINK to see if a single linking row exists) and also add the 
> task from the client.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5404:
--
Description: 
Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, tableName);}} 
call in MetaDataEndpointImpl.createTable

While creating a table or view, we need to ensure that there are no child views 
that haven't been cleaned up by the DropChildViewsTask yet. Move this check to 
the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
row exists).

  was:
When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has child 
views (and {{CASCADE}} is provided), we add a 
{{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
 in the {{SYSTEM.TASK}} table (see 
[this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
 This means that *while holding the row lock* for the table/view’s header row 
([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
 we do the following:
 # Make an 
[RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
 to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find child 
views.
 # If any child views are found in the step above, we make additional RPCs to 
the region hosting {{SYSTEM.TASK}} to 
{{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
 a {{DropChildViewsTask}} for immediate child views.
 # We [send remote 
mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
 to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.

Of the above extra RPCs, note that even if the table/view has no child views or 
if {{CASCADE}} is not provided, we will still do the first RPC from the server 
while holding a row lock.

We should move this check to the client (issue a scan against SYSTEM.CHILD_LINK 
to see if a single linking row exists) and also add the task from the client.


> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetaDataEndpointImpl.createTable
> While creating a table or view, we need to ensure that there are no child views 
> that haven't been cleaned up by the DropChildViewsTask yet. Move this check to 
> the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
> row exists).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
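
For the CREATE path specifically, the same kind of client-side existence scan can be used to fail fast rather than silently dropping leftover views on the server. A hedged sketch under those assumptions (the probe interface stands in for the SYSTEM.CHILD_LINK check sketched for PHOENIX-6154 above, and the error is a plain SQLException rather than a real Phoenix error code):

{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class CreateWithStaleLinkCheck {

    /** Stands in for the SYSTEM.CHILD_LINK existence scan sketched earlier. */
    interface ChildLinkProbe {
        boolean hasChildViews(Connection conn, String schema, String table) throws SQLException;
    }

    /**
     * Before running CREATE TABLE/VIEW, fail fast if linking rows from a
     * previous life of this name are still awaiting cleanup by the
     * DropChildViewsTask, instead of letting the server silently drop views
     * that might actually belong to a different, legitimate parent.
     */
    public static void createSafely(Connection conn, ChildLinkProbe probe, String schema,
                                    String table, String createDdl) throws SQLException {
        if (probe.hasChildViews(conn, schema, table)) {
            throw new SQLException("Pending child view cleanup for " + schema + "." + table
                    + "; retry after the DropChildViewsTask has run");
        }
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(createDdl);
        }
    }
}
{code}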


[jira] [Updated] (PHOENIX-6154) Move check to client side to see if there are any child views that need to be dropped while dropping a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6154:
--
Description: 
When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has child 
views (and {{CASCADE}} is provided), we add a 
{{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
 in the {{SYSTEM.TASK}} table (see 
[this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
 This means that *while holding the row lock* for the table/view’s header row 
([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
 we do the following:
 # Make an 
[RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
 to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find child 
views.
 # If any child views are found in the step above, we make additional RPCs to 
the region hosting {{SYSTEM.TASK}} to 
{{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
 a {{DropChildViewsTask}} for immediate child views.
 # We [send remote 
mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
 to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.

Of the above extra RPCs, note that even if the table/view has no child views or 
if {{CASCADE}} is not provided, we will still do the first RPC from the server 
while holding a row lock.

We should move this check to the client (issue a scan against SYSTEM.CHILD_LINK 
to see if a single linking row exists) and also add the task from the client.

  was:
Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, tableName);}} 
call in MetaDataEndpointImpl.createTable

While creating a table or view, we need to ensure that there are no child views 
that haven't been cleaned up by the DropChildViewsTask yet. Move this check to 
the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
row exists).


> Move check to client side to see if there are any child views that need to be 
> dropped while dropping a table/view
> -
>
> Key: PHOENIX-6154
> URL: https://issues.apache.org/jira/browse/PHOENIX-6154
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has 
> child views (and {{CASCADE}} is provided), we add a 
> {{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
>  in the {{SYSTEM.TASK}} table (see 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
>  This means that *while holding the row lock* for the table/view’s header row 
> ([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
>  we do the following:
>  # Make an 
> [RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find 
> child views.
>  # If any child views are found in the step above, we make additional RPCs to 
> the region hosting {{SYSTEM.TASK}} to 
> {{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
>  a {{DropChildViewsTask}} for immediate child views.
>  # We [send remote 
> mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
>  to drop parent→child links from the 

[jira] [Updated] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5404:
--
Description: 
When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has child 
views (and {{CASCADE}} is provided), we add a 
{{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
 in the {{SYSTEM.TASK}} table (see 
[this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
 This means that *while holding the row lock* for the table/view’s header row 
([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
 we do the following:
 # Make an 
[RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
 to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find child 
views.
 # If any child views are found in the step above, we make additional RPCs to 
the region hosting {{SYSTEM.TASK}} to 
{{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
 a {{DropChildViewsTask}} for immediate child views.
 # We [send remote 
mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
 to drop parent→child links from the {{SYSTEM.CHILD_LINK}} table.

Of the above extra RPCs, note that even if the table/view has no child views or 
if {{CASCADE}} is not provided, we will still do the first RPC from the server 
while holding a row lock.

We should move this check to the client (issue a scan against SYSTEM.CHILD_LINK 
to see if a single linking row exists) and also add the task from the client.

  was:
Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, tableName);}} 
call in MetaDataEndpointImpl.createTable

While creating a table or view, we need to ensure that there are no child views 
that haven't been cleaned up by the DropChildViewsTask yet. Move this check to 
the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
row exists).


> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> When we issue a {{DROP TABLE/VIEW}}, if the table/view being dropped has 
> child views (and {{CASCADE}} is provided), we add a 
> {{[DropChildViewsTask|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/tasks/DropChildViewsTask.java]}}
>  in the {{SYSTEM.TASK}} table (see 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479]).
>  This means that *while holding the row lock* for the table/view’s header row 
> ([here|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2253])
>  we do the following:
>  # Make an 
> [RPC|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  to the region hosting {{SYSTEM.CHILD_LINK}} to scan it in order to find 
> child views.
>  # If any child views are found in the step above, we make additional RPCs to 
> the region hosting {{SYSTEM.TASK}} to 
> {{[UPSERT|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484]}}
>  a {{DropChildViewsTask}} for immediate child views.
>  # We [send remote 
> mutations|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2298-L2302]
>  to drop 

[jira] [Updated] (PHOENIX-6154) Move check to client side to see if there are any child views that need to be dropped while dropping a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-6154:
--
Reporter: Chinmay Kulkarni  (was: Thomas D'Silva)

> Move check to client side to see if there are any child views that need to be 
> dropped while dropping a table/view
> -
>
> Key: PHOENIX-6154
> URL: https://issues.apache.org/jira/browse/PHOENIX-6154
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetaDataEndpointImpl.createTable
> While creating a table or view, we need to ensure that there are no child views 
> that haven't been cleaned up by the DropChildViewsTask yet. Move this check to 
> the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
> row exists).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6154) Move check to client side to see if there are any child views that need to be dropped while dropping a table/view

2020-09-22 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-6154:
-

 Summary: Move check to client side to see if there are any child 
views that need to be dropped while dropping a table/view
 Key: PHOENIX-6154
 URL: https://issues.apache.org/jira/browse/PHOENIX-6154
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0, 4.15.0
Reporter: Thomas D'Silva
Assignee: Chinmay Kulkarni
 Fix For: 5.1.0, 4.16.0


Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, tableName);}} 
call in MetaDataEndpointImpl.createTable

While creating a table or view, we need to ensure that there are no child views 
that haven't been cleaned up by the DropChildViewsTask yet. Move this check to 
the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
row exists).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

