[jira] [Created] (PHOENIX-4795) Fix failing pherf tests in 5.x branch

2018-06-25 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4795:


 Summary: Fix failing pherf tests in 5.x branch
 Key: PHOENIX-4795
 URL: https://issues.apache.org/jira/browse/PHOENIX-4795
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0


Pherf tests are failing in the 5.x branch.
https://builds.apache.org/view/All/job/Phoenix-5.x-HBase-2.0/4/testReport/






[jira] [Updated] (PHOENIX-4795) Fix failing pherf tests in 5.x branch

2018-06-25 Thread Rajeshbabu Chintaguntla (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4795:
-
Description: 
Pherf tests are failing in the 5.x branch, mostly because of a port issue during 
HBase cluster startup in the test cases.
https://builds.apache.org/view/All/job/Phoenix-5.x-HBase-2.0/4/testReport/


  was:
Pherf tests are failing in 5.x branch.
https://builds.apache.org/view/All/job/Phoenix-5.x-HBase-2.0/4/testReport/



> Fix failing pherf tests in 5.x branch
> -
>
> Key: PHOENIX-4795
> URL: https://issues.apache.org/jira/browse/PHOENIX-4795
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 5.0.0
>
>
> Pherf tests are failing in the 5.x branch, mostly because of a port issue during 
> HBase cluster startup in the test cases.
> https://builds.apache.org/view/All/job/Phoenix-5.x-HBase-2.0/4/testReport/





[jira] [Commented] (PHOENIX-4688) Add kerberos authentication to python-phoenixdb

2018-06-25 Thread Lev Bronshtein (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523112#comment-16523112
 ] 

Lev Bronshtein commented on PHOENIX-4688:
-

Installing directly from source ensures that only the extra packages are downloaded:

pip install file:///Users/lbronshtein/DEV/phoenix/python/requests-kerberos

>>> import requests_kerberos
>>> requests_kerberos.__version__
'0.13.0.dev0-phoenixdb'

 

So we can just install requests-kerberos locally, and then install phoenixdb.
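
For illustration, here is a minimal sketch of how the locally installed requests-kerberos could be attached to the HTTP layer that talks to the Phoenix Query Server. The PQS URL, the request body, and the idea of putting the auth object on a requests session are assumptions for illustration, not the actual phoenixdb patch.

{code:python}
# Hedged sketch: attach SPNEGO/Kerberos auth to the session used for the
# Avatica HTTP calls. The host below is a placeholder.
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

session = requests.Session()
session.auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)

# Avatica requests are POSTs of serialized request payloads; the body is
# elided here -- the point is that every call carries the Kerberos ticket.
response = session.post("http://pqs.example.com:8765/", data=b"...")
print(response.status_code)
{code}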

> Add kerberos authentication to python-phoenixdb
> ---
>
> Key: PHOENIX-4688
> URL: https://issues.apache.org/jira/browse/PHOENIX-4688
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Priority: Minor
>
> In its current state python-phoenixdb does not support kerberos 
> authentication.  Using a modern Python HTTP library such as requests or 
> urllib it would be simple (if not trivial) to add this support.





[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-06-25 Thread Karan Mehta (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Mehta updated PHOENIX-4781:
-
Description: 
`maven-deploy-plugin` is used to deploy built artifacts to the repository 
configured in the `distributionManagement` tag. The names of the files to be 
uploaded are either derived from the project's pom file, or the plugin 
generates a temporary one on its own.

For the `phoenix-client` project, we essentially create a shaded uber jar that 
contains all dependencies and provide the project pom file for the plugin to 
work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
essentially packages the jar. The final name of the shaded jar is defined as 
`phoenix-${project.version}\-client`, which differs from the standard Maven 
convention derived from the pom file (artifact id and version), 
`phoenix-client-${project.version}`.

This causes `maven-deploy-plugin` to fail, since it is unable to find any 
artifacts to publish.

`maven-install-plugin` works correctly and hence installs the correct jar in 
the local repo.

The same applies to the `phoenix-pig` project as well. However, we do require 
the jar for that project in the repo. I am not even sure why we create a 
shaded jar for that project.

I will put up a three-line patch for this.

Any thoughts? [~sergey.soldatov] [~elserj]

Files before change (first col is size):
{code:java}
103487701 Jun 13 22:47 phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
Files after change (first col is size):
{code:java}
3640 Jun 13 21:23 original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
103487702 Jun 13 21:24 phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}

  was:
`maven-deploy-plugin` is used for deploying built artifacts to repository 
provided by `distributionManagement` tag. The name of files that need to be 
uploaded are either derived from pom file of the project or it generates an 
temporary one on its own.

For `phoenix-client` project, we essentially create a shaded uber jar that 
contains all dependencies and provide the project pom file for the plugin to 
work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
essentially packages the jar. The final name of the shaded jar is defined as 
`phoenix-${project.version}-client`, which is different from how the standard 
maven convention based on pom file (artifact and group id) is 
`phoenix-client-${project.version}`

This causes `maven-deploy-plugin` to fail since it is unable to find any 
artifacts to be published.

`maven-install-plugin` works correctly and hence it installs correct jar in 
local repo.

The same is effective for `phoenix-pig` project as well. However we require the 
require jar for that project in the repo. I am not even sure why we create 
shaded jar for that project.

I will put up a 3 liner patch for the same.

Any thoughts? [~sergey.soldatov] [~elserj]

Files before change (first col is size):
{code:java}
103487701 Jun 13 22:47 
phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
Files after change (first col is size):
{code:java}
3640 Jun 13 21:23 
original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
103487702 Jun 13 21:24 
phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}


> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4781.001.patch
>
>
> `maven-deploy-plugin` is used to deploy built artifacts to the repository 
> configured in the `distributionManagement` tag. The names of the files to be 
> uploaded are either derived from the project's pom file, or the plugin 
> generates a temporary one on its own.
> For the `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which differs from the standard Maven 
> convention derived from the pom file (artifact id and version), 
> `phoenix-client-${project.version}`.
> This causes `maven-deploy-plugin` to fail, since it is unable to find any 
> artifacts to publish.
> `maven-install-plugin` works correctly and hence installs the correct jar in 
> the local repo.
> The same applies to the `phoenix-pig` project as well. However, we do require 
> the jar for that project in the repo. I am not even sure why we create a 
> shaded jar for that project.

Re: [DISCUSS] Docker images for Phoenix

2018-06-25 Thread Francis Chuang
I think it would make things easier if we reduce the scope of the Docker 
images to just an all-in-one HBase + Phoenix testing image with all Phoenix 
features enabled, to simplify things for now.


Since Phoenix has tags for $HBASE_VER:$PHOENIX_VER for each release, we 
should be able to use build hooks (see 
https://github.com/docker/hub-feedback/issues/508).


We simply write a shell script to parse the tag, split it into its 
constituent parts, and pass them to the docker build command as build args. 
The Dockerfile would then reference these build args, and an image would be 
built for each tag.
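
For illustration, a rough sketch of the tag-parsing logic such a hook could 
implement (shown in Python rather than shell; the tag separator, the default 
values, and the DOCKER_TAG/IMAGE_NAME environment variables are assumptions 
based on how Docker Hub automated builds typically expose build metadata):

# Hedged sketch of a Docker Hub build hook: split the tag into the HBase and
# Phoenix versions and pass them to docker build as build args.
import os
import subprocess

# Assumed tag format: "<hbase_ver>-<phoenix_ver>", e.g. "2.0.0-5.0.0".
tag = os.environ.get("DOCKER_TAG", "2.0.0-5.0.0")
hbase_ver, phoenix_ver = tag.split("-", 1)

subprocess.check_call([
    "docker", "build",
    "--build-arg", "HBASE_VERSION=" + hbase_ver,
    "--build-arg", "PHOENIX_VERSION=" + phoenix_ver,
    "-t", os.environ.get("IMAGE_NAME", "example/hbase-phoenix:" + tag),
    ".",
])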


Francis

On 26/06/2018 3:52 AM, Josh Elser wrote:
Moving this over to the dev list since this is a thing for developers 
to make the call on. Would ask users who have interest to comment over 
there as well :)


I think having a "one-button" Phoenix environment is a big win, 
especially for folks who want to do one-off testing with a specific 
version.


My biggest hesitation (as you probably know) is integration with the 
rest of Apache infrastructure. That's a problem we can work on solving 
though (I think, just automation around publishing).


On 6/21/18 9:24 PM, Francis Chuang wrote:

Hi all,

I currently maintain a HBase + Phoenix all-in-one docker image[1]. 
The image is currently used to test Phoenix support for the Avatica 
Go SQL driver[2]. Judging by the number of pulls on docker hub 
(10k+), there are probably other people using it.


The image spins up HBase server with local storage, using the bundled 
Zookeeper with Phoenix support. The Phoenix query server is also 
started on port 8765.


While the image is definitely not suitable for production use, I 
think the test image still has valid use-cases and offers a lot of 
convenience. It's also possible to update the image in the future so 
that it can be used to spin up production clusters as well as testing 
instances (similar to what Ceph has done[3]).


Would the Phoenix community be interested in accepting the Dockerfile + 
related files and making them part of Phoenix? The added benefit of 
this is that it would be possible to configure some automation and 
have the Docker images published directly to Docker Hub as an 
automated build for each release.


Francis

[1] https://github.com/Boostport/hbase-phoenix-all-in-one

[2] https://github.com/apache/calcite-avatica-go

[3] https://github.com/ceph/ceph-container





[jira] [Commented] (PHOENIX-4688) Add kerberos authentication to python-phoenixdb

2018-06-25 Thread Lev Bronshtein (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522874#comment-16522874
 ] 

Lev Bronshtein commented on PHOENIX-4688:
-

Rather than renaming the package, I decided to change the version string:
-__version__ = '0.13.0.dev0'
+__version__ = '0.13.0.dev0-phoenixdb'

> Add kerberos authentication to python-phoenixdb
> ---
>
> Key: PHOENIX-4688
> URL: https://issues.apache.org/jira/browse/PHOENIX-4688
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Priority: Minor
>
> In its current state python-phoenixdb does not support kerberos 
> authentication.  Using a modern Python HTTP library such as requests or 
> urllib it would be simple (if not trivial) to add this support.





Re: [DISCUSS] Docker images for Phoenix

2018-06-25 Thread Josh Elser
Moving this over to the dev list since this is a thing for developers to 
make the call on. Would ask users who have interest to comment over 
there as well :)


I think having a "one-button" Phoenix environment is a big win, 
especially for folks who want to do one-off testing with a specific version.


My biggest hesitation (as you probably know) is integration with the 
rest of Apache infrastructure. That's a problem we can work on solving 
though (I think, just automation around publishing).


On 6/21/18 9:24 PM, Francis Chuang wrote:

Hi all,

I currently maintain a HBase + Phoenix all-in-one docker image[1]. The 
image is currently used to test Phoenix support for the Avatica Go SQL 
driver[2]. Judging by the number of pulls on docker hub (10k+), there 
are probably other people using it.


The image spins up HBase server with local storage, using the bundled 
Zookeeper with Phoenix support. The Phoenix query server is also started 
on port 8765.


While the image is definitely not suitable for production use, I think 
the test image still has valid use-cases and offers a lot of 
convenience. It's also possible to update the image in the future so 
that it can be used to spin up production clusters as well as testing 
instances (similar to what Ceph has done[3]).


Would the Phoenix community be interested in accepting the Dockerfile + 
related files and making them part of Phoenix? The added benefit of this 
is that it would be possible to configure some automation and have the 
Docker images published directly to Docker Hub as an automated build for 
each release.


Francis

[1] https://github.com/Boostport/hbase-phoenix-all-in-one

[2] https://github.com/apache/calcite-avatica-go

[3] https://github.com/ceph/ceph-container



[jira] [Assigned] (PHOENIX-4794) PhoenixStorageHandler broken with Hive 3.1

2018-06-25 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-4794:
---

Assignee: Jesus Camacho Rodriguez  (was: Josh Elser)

> PhoenixStorageHandler broken with Hive 3.1
> --
>
> Key: PHOENIX-4794
> URL: https://issues.apache.org/jira/browse/PHOENIX-4794
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-4794.001.patch
>
>
> [~jcamachorodriguez] put together a nice patch on the heels of HIVE-12192 
> (date/timestamp handling in Hive) which fixes Phoenix. Without this patch, 
> we'll see both compilation and runtime failures in the PhoenixStorageHandler 
> with Hive 3.1.0-SNAPSHOT.
> Sadly, we need to wait for a Hive 3.1.0 to get this shipped in Phoenix.





[jira] [Updated] (PHOENIX-4794) PhoenixStorageHandler broken with Hive 3.1

2018-06-25 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4794:

Attachment: PHOENIX-4794.001.patch

> PhoenixStorageHandler broken with Hive 3.1
> --
>
> Key: PHOENIX-4794
> URL: https://issues.apache.org/jira/browse/PHOENIX-4794
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-4794.001.patch
>
>
> [~jcamachorodriguez] put together a nice patch on the heels of HIVE-12192 
> (date/timestamp handling in Hive) which fixes Phoenix. Without this patch, 
> we'll see both compilation and runtime failures in the PhoenixStorageHandler 
> with Hive 3.1.0-SNAPSHOT.
> Sadly, we need to wait for a Hive 3.1.0 to get this shipped in Phoenix.





[jira] [Created] (PHOENIX-4794) PhoenixStorageHandler broken with Hive 3.1

2018-06-25 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4794:
---

 Summary: PhoenixStorageHandler broken with Hive 3.1
 Key: PHOENIX-4794
 URL: https://issues.apache.org/jira/browse/PHOENIX-4794
 Project: Phoenix
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 5.1.0


[~jcamachorodriguez] put together a nice patch on the heels of HIVE-12192 
(date/timestamp handling in Hive) which fixes Phoenix. Without this patch, 
we'll see both compilation and runtime failures in the PhoenixStorageHandler 
with Hive 3.1.0-SNAPSHOT.

Sadly, we need to wait for a Hive 3.1.0 to get this shipped in Phoenix.





[jira] [Commented] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2018-06-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522396#comment-16522396
 ] 

ASF GitHub Bot commented on PHOENIX-3534:
-

Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/303#discussion_r197829637
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/replication/SystemCatalogWALEntryFilter.java
 ---
@@ -35,20 +35,18 @@
  * during cluster upgrades. However, tenant-owned data such as 
tenant-owned views need to
  * be copied. This WALEntryFilter will only allow tenant-owned rows in 
SYSTEM.CATALOG to
  * be replicated. Data from all other tables is automatically passed. It 
will also copy
- * child links in SYSTEM.CATALOG that are globally-owned but point to 
tenant-owned views.
+ * child links in SYSTEM.CHILD_LINK that are globally-owned but point to 
tenant-owned views.
  *
  */
 public class SystemCatalogWALEntryFilter implements WALEntryFilter {
 
-  private static byte[] CHILD_TABLE_BYTES =
-  new byte[]{PTable.LinkType.CHILD_TABLE.getSerializedValue()};
-
   @Override
   public WAL.Entry filter(WAL.Entry entry) {
 
-//if the WAL.Entry's table isn't System.Catalog, it auto-passes this 
filter
+//if the WAL.Entry's table isn't System.Catalog or System.Child_Link, 
it auto-passes this filter
 //TODO: when Phoenix drops support for pre-1.3 versions of HBase, redo 
as a WALCellFilter
-if (!SchemaUtil.isMetaTable(entry.getKey().getTablename().getName())){
+byte[] tableName = entry.getKey().getTablename().getName();
+   if (!SchemaUtil.isMetaTable(tableName) && 
!SchemaUtil.isChildLinkTable(tableName)){
--- End diff --

SYSTEM.CHILD_LINK contains the parent->child linking rows and the cells we use 
to detect race conditions (e.g. a column of conflicting type being added at the 
same time to a parent and a child). 
The latter cells are written with a short TTL. 
I think we can use HBase replication for SYSTEM.CHILD_LINK. All the 
tenant-specific view metadata rows in SYSTEM.CATALOG start with the tenant id. 
I will modify this filter back to how it was before PHOENIX-4229. 
@gjacoby126, thanks for the suggestion.


> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-3534-wip.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table is a single region, 
> based on the server-side row locks being held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for its change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.





[GitHub] phoenix pull request #303: PHOENIX-3534 Support multi region SYSTEM.CATALOG ...

2018-06-25 Thread twdsilva
Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/303#discussion_r197829637
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/replication/SystemCatalogWALEntryFilter.java
 ---
@@ -35,20 +35,18 @@
  * during cluster upgrades. However, tenant-owned data such as 
tenant-owned views need to
  * be copied. This WALEntryFilter will only allow tenant-owned rows in 
SYSTEM.CATALOG to
  * be replicated. Data from all other tables is automatically passed. It 
will also copy
- * child links in SYSTEM.CATALOG that are globally-owned but point to 
tenant-owned views.
+ * child links in SYSTEM.CHILD_LINK that are globally-owned but point to 
tenant-owned views.
  *
  */
 public class SystemCatalogWALEntryFilter implements WALEntryFilter {
 
-  private static byte[] CHILD_TABLE_BYTES =
-  new byte[]{PTable.LinkType.CHILD_TABLE.getSerializedValue()};
-
   @Override
   public WAL.Entry filter(WAL.Entry entry) {
 
-//if the WAL.Entry's table isn't System.Catalog, it auto-passes this 
filter
+//if the WAL.Entry's table isn't System.Catalog or System.Child_Link, 
it auto-passes this filter
 //TODO: when Phoenix drops support for pre-1.3 versions of HBase, redo 
as a WALCellFilter
-if (!SchemaUtil.isMetaTable(entry.getKey().getTablename().getName())){
+byte[] tableName = entry.getKey().getTablename().getName();
+   if (!SchemaUtil.isMetaTable(tableName) && 
!SchemaUtil.isChildLinkTable(tableName)){
--- End diff --

SYSTEM.CHILD_LINK contains the parent->child linking rows and the cells we use 
to detect race conditions (e.g. a column of conflicting type being added at the 
same time to a parent and a child). 
The latter cells are written with a short TTL. 
I think we can use HBase replication for SYSTEM.CHILD_LINK. All the 
tenant-specific view metadata rows in SYSTEM.CATALOG start with the tenant id. 
I will modify this filter back to how it was before PHOENIX-4229. 
@gjacoby126, thanks for the suggestion.


---


[jira] [Commented] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2018-06-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522356#comment-16522356
 ] 

ASF GitHub Bot commented on PHOENIX-3534:
-

Github user gjacoby126 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/303#discussion_r197821955
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/replication/SystemCatalogWALEntryFilter.java
 ---
@@ -35,20 +35,18 @@
  * during cluster upgrades. However, tenant-owned data such as 
tenant-owned views need to
  * be copied. This WALEntryFilter will only allow tenant-owned rows in 
SYSTEM.CATALOG to
  * be replicated. Data from all other tables is automatically passed. It 
will also copy
- * child links in SYSTEM.CATALOG that are globally-owned but point to 
tenant-owned views.
+ * child links in SYSTEM.CHILD_LINK that are globally-owned but point to 
tenant-owned views.
  *
  */
 public class SystemCatalogWALEntryFilter implements WALEntryFilter {
 
-  private static byte[] CHILD_TABLE_BYTES =
-  new byte[]{PTable.LinkType.CHILD_TABLE.getSerializedValue()};
-
   @Override
   public WAL.Entry filter(WAL.Entry entry) {
 
-//if the WAL.Entry's table isn't System.Catalog, it auto-passes this 
filter
+//if the WAL.Entry's table isn't System.Catalog or System.Child_Link, 
it auto-passes this filter
 //TODO: when Phoenix drops support for pre-1.3 versions of HBase, redo 
as a WALCellFilter
-if (!SchemaUtil.isMetaTable(entry.getKey().getTablename().getName())){
+byte[] tableName = entry.getKey().getTablename().getName();
+   if (!SchemaUtil.isMetaTable(tableName) && 
!SchemaUtil.isChildLinkTable(tableName)){
--- End diff --

Would it be safe to turn on normal HBase replication on the new 
System.CHILD_LINK? (That is, is there any unwanted data in System.CHILD_LINK 
that this WALFilter wouldn't copy but normal HBase replication would?)

If normal HBase replication works for System.CHILD_LINK, and all view data 
left in System.Catalog starts with tenant_id, then the logic here can be 
greatly simplified, similar to how it was before PHOENIX-4229.


> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-3534-wip.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table is a single region, 
> based on the server-side row locks being held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for its change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.





[GitHub] phoenix pull request #303: PHOENIX-3534 Support multi region SYSTEM.CATALOG ...

2018-06-25 Thread gjacoby126
Github user gjacoby126 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/303#discussion_r197821955
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/replication/SystemCatalogWALEntryFilter.java
 ---
@@ -35,20 +35,18 @@
  * during cluster upgrades. However, tenant-owned data such as 
tenant-owned views need to
  * be copied. This WALEntryFilter will only allow tenant-owned rows in 
SYSTEM.CATALOG to
  * be replicated. Data from all other tables is automatically passed. It 
will also copy
- * child links in SYSTEM.CATALOG that are globally-owned but point to 
tenant-owned views.
+ * child links in SYSTEM.CHILD_LINK that are globally-owned but point to 
tenant-owned views.
  *
  */
 public class SystemCatalogWALEntryFilter implements WALEntryFilter {
 
-  private static byte[] CHILD_TABLE_BYTES =
-  new byte[]{PTable.LinkType.CHILD_TABLE.getSerializedValue()};
-
   @Override
   public WAL.Entry filter(WAL.Entry entry) {
 
-//if the WAL.Entry's table isn't System.Catalog, it auto-passes this 
filter
+//if the WAL.Entry's table isn't System.Catalog or System.Child_Link, 
it auto-passes this filter
 //TODO: when Phoenix drops support for pre-1.3 versions of HBase, redo 
as a WALCellFilter
-if (!SchemaUtil.isMetaTable(entry.getKey().getTablename().getName())){
+byte[] tableName = entry.getKey().getTablename().getName();
+   if (!SchemaUtil.isMetaTable(tableName) && 
!SchemaUtil.isChildLinkTable(tableName)){
--- End diff --

Would it be safe to turn on normal HBase replication on the new 
System.CHILD_LINK? (That is, is there any unwanted data in System.CHILD_LINK 
that this WALFilter wouldn't copy but normal HBase replication would?)

If normal HBase replication works for System.CHILD_LINK, and all view data 
left in System.Catalog starts with tenant_id, then the logic here can be 
greatly simplified, similar to how it was before PHOENIX-4229.


---