[jira] [Commented] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410803#comment-16410803
 ] 

Hudson commented on PHOENIX-4662:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #73 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/73/])
PHOENIX-4662 Avoid NPE when server-caches are null (Csaba Skrabak) (elserj: rev 
d1c48b6d7a903f3a7a87222bcf1cf514b062f83c)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java


> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This 
> turned up when we ran a previous version of HashJoinIT with PHOENIX-4010 
> backported. The caches field is initialized to null and may be dereferenced 
> after "Retrying when Hash Join cache is not found on the server ,by sending 
> the cache again".
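The guard such a fix applies can be sketched in isolation as follows. This is an
illustrative, self-contained example with hypothetical names (CacheHolder,
addCache); it is not Phoenix's actual TableResultIterator code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a lazily-initialized cache list guarded against null,
// so a retry path ("send the cache again") cannot dereference a null field.
public class CacheHolder {
    private List<String> caches; // starts null, populated on first use

    // Return the list, creating it on demand. The NPE-prone pattern is
    // calling caches.add(...) on the retry path without this guard.
    public List<String> getCaches() {
        if (caches == null) {
            caches = new ArrayList<>();
        }
        return caches;
    }

    public void addCache(String cacheId) {
        getCaches().add(cacheId); // safe even before any cache was sent
    }
}
```

The point is only the shape of the fix: every dereference of the field goes
through the accessor that initializes it on demand.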



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410798#comment-16410798
 ] 

Hudson commented on PHOENIX-4662:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1841 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1841/])
PHOENIX-4662 Avoid NPE when server-caches are null (Csaba Skrabak) (elserj: rev 
70e1be931881222705a47f8738ec423dc41a28fe)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java


> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This 
> turned up when we ran a previous version of HashJoinIT with PHOENIX-4010 
> backported. The caches field is initialized to null and may be dereferenced 
> after "Retrying when Hash Join cache is not found on the server ,by sending 
> the cache again".





[GitHub] phoenix pull request #295: PHOENIX-4579: Add a config to conditionally creat...

2018-03-22 Thread ChinmaySKulkarni
Github user ChinmaySKulkarni commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176617368
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

@JamesRTaylor @twdsilva also, do we really want to avoid calling 
ensureTableCreated from createTable when it is a system table? Apart from 
actually ensuring that the table is created and performing the client-server 
compatibility checks, this method also modifies the table according to a new 
descriptor (I don't have enough background on why we do this here). Any 
advice?


---


[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410662#comment-16410662
 ] 

ASF GitHub Bot commented on PHOENIX-4579:
-

Github user ChinmaySKulkarni commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176617368
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

@JamesRTaylor @twdsilva also, do we really want to avoid calling 
ensureTableCreated from createTable when it is a system table? Apart from 
actually ensuring that the table is created and performing the client-server 
compatibility checks, this method also modifies the table according to a new 
descriptor (I don't have enough background on why we do this here). Any 
advice?


> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on first client connection. 
> Adding a property to make it configurable (with default true as it is 
> currently implemented).
> With this property set to false, it will avoid lockstep upgrade requirement 
> for all clients when changing meta properties using PHOENIX-4575 as this 
> property can be flipped back on once all the clients are upgraded.





[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410616#comment-16410616
 ] 

ASF GitHub Bot commented on PHOENIX-4579:
-

Github user ChinmaySKulkarni commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176611190
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

@JamesRTaylor A few follow-up questions from your comment:

- If we were to call _ensureTableCreated_ at this point inside the init 
method, we don't have the values of the other arguments, such as the families, 
splits, etc., which we would need for the admin.createTable call. Instead I 
propose splitting the _ensureTableCreated_ method and adding another private 
method which just checks for the existence of the system catalog and does the 
client-server compatibility checks. We can reuse this stub inside 
_ensureTableCreated_ and make sure we don't do this again when called via 
_createTable_.
-  If we throw an UpgradeRequiredException when an upgrade is required, 
the user will not get the connection object, right? This also prevents the user 
from being able to run "EXECUTE UPGRADE". Instead I suggest we call 
_setUpgradeRequired_, log an error, and return the connection object. This will 
disallow the user from running anything except "EXECUTE UPGRADE", since we have 
this snippet in place:
`
if (conn.getQueryServices().isUpgradeRequired() && !conn.isRunningUpgrade()
&& stmt.getOperation() != Operation.UPGRADE) {
throw new UpgradeRequiredException();
}
`
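The alternative flow being proposed can be sketched with a standalone mock.
Everything below is a simplified stand-in (Conn, Services, Operation are not
Phoenix's actual classes beyond the names quoted in the snippet above):

```java
// Illustrative mock of the proposal: instead of throwing during init,
// record that an upgrade is needed and still hand back the connection;
// the per-statement guard then blocks everything except EXECUTE UPGRADE.
public class UpgradeFlow {
    static class Services {
        private boolean upgradeRequired;
        void setUpgradeRequired() { upgradeRequired = true; }
        boolean isUpgradeRequired() { return upgradeRequired; }
    }

    static class Conn {
        final Services services = new Services();
        boolean runningUpgrade;
    }

    enum Operation { QUERY, UPGRADE }

    // Init-time: flag the condition and log, rather than throwing,
    // so the caller still receives a usable connection object.
    static Conn init(boolean upgradeNeeded) {
        Conn conn = new Conn();
        if (upgradeNeeded) {
            conn.services.setUpgradeRequired(); // was: throw UpgradeRequiredException
        }
        return conn;
    }

    // Statement-time guard, mirroring the quoted snippet.
    static void checkStatement(Conn conn, Operation op) {
        if (conn.services.isUpgradeRequired() && !conn.runningUpgrade
                && op != Operation.UPGRADE) {
            throw new IllegalStateException("UpgradeRequiredException");
        }
    }
}
```

With this shape, `init(true)` still returns a connection, UPGRADE statements
pass the guard, and any other operation is rejected.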



> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on first client connection. 
> Adding a property to make it configurable (with default true as it is 
> currently implemented).
> With this property set to false, it will avoid lockstep upgrade requirement 
> for all clients when changing meta properties using PHOENIX-4575 as this 
> property can be flipped back on once all the clients are upgraded.





[GitHub] phoenix pull request #295: PHOENIX-4579: Add a config to conditionally creat...

2018-03-22 Thread ChinmaySKulkarni
Github user ChinmaySKulkarni commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176611190
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

@JamesRTaylor A few follow-up questions from your comment:

- If we were to call _ensureTableCreated_ at this point inside the init 
method, we don't have the values of the other arguments, such as the families, 
splits, etc., which we would need for the admin.createTable call. Instead I 
propose splitting the _ensureTableCreated_ method and adding another private 
method which just checks for the existence of the system catalog and does the 
client-server compatibility checks. We can reuse this stub inside 
_ensureTableCreated_ and make sure we don't do this again when called via 
_createTable_.
-  If we throw an UpgradeRequiredException when an upgrade is required, 
the user will not get the connection object, right? This also prevents the user 
from being able to run "EXECUTE UPGRADE". Instead I suggest we call 
_setUpgradeRequired_, log an error, and return the connection object. This will 
disallow the user from running anything except "EXECUTE UPGRADE", since we have 
this snippet in place:
`
if (conn.getQueryServices().isUpgradeRequired() && !conn.isRunningUpgrade()
&& stmt.getOperation() != Operation.UPGRADE) {
throw new UpgradeRequiredException();
}
`



---


[jira] [Commented] (PHOENIX-4668) Remove unnecessary table descriptor modification for SPLIT_POLICY column

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410360#comment-16410360
 ] 

ASF GitHub Bot commented on PHOENIX-4668:
-

Github user twdsilva commented on the issue:

https://github.com/apache/phoenix/pull/296
  
+1


> Remove unnecessary table descriptor modification for SPLIT_POLICY column
> 
>
> Key: PHOENIX-4668
> URL: https://issues.apache.org/jira/browse/PHOENIX-4668
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> Inside _ConnectionQueryServicesImpl.ensureTableCreated()_, we modify the 
> table descriptor with 
> newDesc.setValue(HTableDescriptor.SPLIT_POLICY, 
> MetaDataSplitPolicy.class.getName()). However, we already specify this in 
> the CREATE statement DDL for system tables, so we can remove it.
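The redundancy described above can be illustrated with a small standalone
sketch. The helper name and DDL fragment below are hypothetical stand-ins,
not Phoenix's actual code:

```java
// Hypothetical helper: returns true when a CREATE TABLE DDL string already
// pins SPLIT_POLICY, which is what makes a post-hoc table-descriptor
// modification (newDesc.setValue(SPLIT_POLICY, ...)) redundant.
public class SplitPolicyCheck {
    static boolean ddlDeclaresSplitPolicy(String ddl) {
        return ddl.toUpperCase().contains("SPLIT_POLICY");
    }
}
```

When the DDL already declares the split policy, re-setting it on the
descriptor afterwards only triggers an unnecessary table modification.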





[GitHub] phoenix issue #296: PHOENIX-4668: Remove unnecessary table descriptor modifi...

2018-03-22 Thread twdsilva
Github user twdsilva commented on the issue:

https://github.com/apache/phoenix/pull/296
  
+1


---


[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-22 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410338#comment-16410338
 ] 

Josh Elser commented on PHOENIX-4231:
-

Just noticed this via some broken tests at $dayjob. Nice change to stumble upon 
though. Any chance you could update [https://phoenix.apache.org/tuning.html] 
with this new configuration property, [~ckulkarni]? :)

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.
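A restriction of this kind can be sketched as a simple path check. The class
and method names below are hypothetical, and this is only one possible shape
for the configuration check, not Phoenix's implementation:

```java
import java.net.URI;

// Hypothetical sketch: allow a remote UDF jar only when its path sits under
// the configured jars directory (mirroring the intent of
// hbase.dynamic.jars.dir, but applied to hdfs:// URIs as well).
public class JarSourceCheck {
    static boolean isAllowed(String allowedDir, String jarUri) {
        URI uri = URI.create(jarUri);
        String path = uri.getPath();
        String prefix = allowedDir.endsWith("/") ? allowedDir : allowedDir + "/";
        return path != null && path.startsWith(prefix);
    }
}
```

A jar under the configured directory passes; one from an arbitrary HDFS
location is rejected.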





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410309#comment-16410309
 ] 

ASF GitHub Bot commented on PHOENIX-4361:
-

Github user ChinmaySKulkarni closed the pull request at:

https://github.com/apache/phoenix/pull/282


> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map<String, List<Pair<String, 
> Object>>> properties, Set<String> colFamiliesForPColumnsToBeAdded, 
> List<Pair<byte[], Map<String, Object>>> families, Map<String, Object> 
> tableProps) 
> The families argument was never used.





[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410286#comment-16410286
 ] 

ASF GitHub Bot commented on PHOENIX-4579:
-

Github user ChinmaySKulkarni commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176568690
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

@JamesRTaylor thanks for the review. I've submitted a patch for 
PHOENIX-4668 and will go ahead with the changes you've mentioned for this 
patch. I'm not sure about the status of PHOENIX-4575 (@mujtabachohan was 
working on this before his vacation), so don't want to make any changes to it 
without understanding its scope.
Thanks.



> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on first client connection. 
> Adding a property to make it configurable (with default true as it is 
> currently implemented).
> With this property set to false, it will avoid lockstep upgrade requirement 
> for all clients when changing meta properties using PHOENIX-4575 as this 
> property can be flipped back on once all the clients are upgraded.





[GitHub] phoenix pull request #295: PHOENIX-4579: Add a config to conditionally creat...

2018-03-22 Thread ChinmaySKulkarni
Github user ChinmaySKulkarni commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176568690
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

@JamesRTaylor thanks for the review. I've submitted a patch for 
PHOENIX-4668 and will go ahead with the changes you've mentioned for this 
patch. I'm not sure about the status of PHOENIX-4575 (@mujtabachohan was 
working on this before his vacation), so don't want to make any changes to it 
without understanding its scope.
Thanks.



---


[jira] [Commented] (PHOENIX-4668) Remove unnecessary table descriptor modification for SPLIT_POLICY column

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410226#comment-16410226
 ] 

ASF GitHub Bot commented on PHOENIX-4668:
-

GitHub user ChinmaySKulkarni opened a pull request:

https://github.com/apache/phoenix/pull/296

PHOENIX-4668: Remove unnecessary table descriptor modification for 
SPLIT_POLICY column

Removed the code that removed the SPLIT_POLICY from the system catalog
table descriptor, then re-added it and modified the table.
This was originally done as a workaround for HBASE-12570.
We already add the SPLIT_POLICY explicitly when creating system tables,
as part of the patch for PHOENIX-1674, hence this is no longer required.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ChinmaySKulkarni/phoenix PHOENIX-4668

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/296.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #296


commit 55df92b9df24e7ae6999f78aef5274c03e052e27
Author: Chinmay Kulkarni 
Date:   2018-03-22T20:14:10Z

PHOENIX-4668: Remove unnecessary table descriptor modification for 
SPLIT_POLICY column

Removed the code that removed the SPLIT_POLICY from the system catalog
table descriptor, then re-added it and modified the table.
This was originally done as a workaround for HBASE-12570.
We already add the SPLIT_POLICY explicitly when creating system tables,
as part of the patch for PHOENIX-1674, hence this is no longer required.




> Remove unnecessary table descriptor modification for SPLIT_POLICY column
> 
>
> Key: PHOENIX-4668
> URL: https://issues.apache.org/jira/browse/PHOENIX-4668
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> Inside _ConnectionQueryServicesImpl.ensureTableCreated()_, we modify the 
> table descriptor with 
> newDesc.setValue(HTableDescriptor.SPLIT_POLICY, 
> MetaDataSplitPolicy.class.getName()). However, we already specify this in 
> the CREATE statement DDL for system tables, so we can remove it.





[jira] [Commented] (PHOENIX-4668) Remove unnecessary table descriptor modification for SPLIT_POLICY column

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410227#comment-16410227
 ] 

ASF GitHub Bot commented on PHOENIX-4668:
-

Github user ChinmaySKulkarni commented on the issue:

https://github.com/apache/phoenix/pull/296
  
@twdsilva @JamesRTaylor please review. Thanks.


> Remove unnecessary table descriptor modification for SPLIT_POLICY column
> 
>
> Key: PHOENIX-4668
> URL: https://issues.apache.org/jira/browse/PHOENIX-4668
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> Inside _ConnectionQueryServicesImpl.ensureTableCreated()_, we modify the 
> table descriptor with 
> newDesc.setValue(HTableDescriptor.SPLIT_POLICY, 
> MetaDataSplitPolicy.class.getName()). However, we already specify this in 
> the CREATE statement DDL for system tables, so we can remove it.





[GitHub] phoenix pull request #296: PHOENIX-4668: Remove unnecessary table descriptor...

2018-03-22 Thread ChinmaySKulkarni
GitHub user ChinmaySKulkarni opened a pull request:

https://github.com/apache/phoenix/pull/296

PHOENIX-4668: Remove unnecessary table descriptor modification for 
SPLIT_POLICY column

Removed the code that removed the SPLIT_POLICY from the system catalog
table descriptor, then re-added it and modified the table.
This was originally done as a workaround for HBASE-12570.
We already add the SPLIT_POLICY explicitly when creating system tables,
as part of the patch for PHOENIX-1674, hence this is no longer required.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ChinmaySKulkarni/phoenix PHOENIX-4668

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/296.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #296


commit 55df92b9df24e7ae6999f78aef5274c03e052e27
Author: Chinmay Kulkarni 
Date:   2018-03-22T20:14:10Z

PHOENIX-4668: Remove unnecessary table descriptor modification for 
SPLIT_POLICY column

Removed the code that removed the SPLIT_POLICY from the system catalog
table descriptor, then re-added it and modified the table.
This was originally done as a workaround for HBASE-12570.
We already add the SPLIT_POLICY explicitly when creating system tables,
as part of the patch for PHOENIX-1674, hence this is no longer required.




---


[GitHub] phoenix issue #296: PHOENIX-4668: Remove unnecessary table descriptor modifi...

2018-03-22 Thread ChinmaySKulkarni
Github user ChinmaySKulkarni commented on the issue:

https://github.com/apache/phoenix/pull/296
  
@twdsilva @JamesRTaylor please review. Thanks.


---


[jira] [Commented] (PHOENIX-4619) Process transactional updates to local index on server-side

2018-03-22 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410176#comment-16410176
 ] 

Lars Hofhansl commented on PHOENIX-4619:


This needs to go to master as well, right?

> Process transactional updates to local index on server-side
> ---
>
> Key: PHOENIX-4619
> URL: https://issues.apache.org/jira/browse/PHOENIX-4619
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4619_v1.patch
>
>
> For local indexes, we'll want to continue to process updates on the 
> server-side. After PHOENIX-4278, updates even for local indexes are occurring 
> on the client-side. The reason is that we know the updates to the index table 
> will be a local write and we can generate the write on the server side. 
> Having a separate RPC and sending the updates across the wire would be 
> tremendously inefficient. On top of that, we need the region boundary 
> information which we have already in the coprocessor, but would need to 
> retrieve it on the client side (with a likely race condition too if a split 
> occurs after we retrieve it).
> To fix this, we need to modify PhoenixTxnIndexMutationGenerator such that it 
> can be used on the server-side as well. The main change will be to change this 
> method signature to pass through an IndexMaintainer instead of a PTable 
> (which isn't available on the server-side):
> {code}
> public List<Mutation> getIndexUpdates(final PTable table, PTable index, 
> List<Mutation> dataMutations) throws IOException, SQLException {
> {code}
> I think this can be changed to the following instead and be used both client 
> and server side:
> {code}
> public List<Mutation> getIndexUpdates(final IndexMaintainer maintainer, 
> byte[] dataTableName, List<Mutation> dataMutations) throws IOException, 
> SQLException {
> {code}
> We can tweak the code that makes PhoenixTransactionalIndexer a noop for 
> clients >= 4.14 to have it execute if the index is a local index. The one 
> downside is that if there's a mix of local and global indexes on the same 
> table, the index update calculation will be done twice. I think having a mix 
> of index types would be rare, though, and we should advise against it.
> There's also this code in UngroupedAggregateRegionObserver which needs to be 
> updated to write shadow cells for Omid:
> {code}
> } else if (buildLocalIndex) {
>     for (IndexMaintainer maintainer : indexMaintainers) {
>         if (!results.isEmpty()) {
>             result.getKey(ptr);
>             ValueGetter valueGetter = maintainer.createGetterFromKeyValues(
>                     ImmutableBytesPtr.copyBytesIfNecessary(ptr), results);
>             Put put = maintainer.buildUpdateMutation(kvBuilder,
>                     valueGetter, ptr, results.get(0).getTimestamp(),
>                     env.getRegion().getRegionInfo().getStartKey(),
>                     env.getRegion().getRegionInfo().getEndKey());
>             indexMutations.add(put);
>         }
>     }
>     result.setKeyValues(results);
> {code}
> This is the code that builds a local index initially (unlike the global index 
> code path which executes an UPSERT SELECT on the client side to do this 
> initial population).





[jira] [Commented] (PHOENIX-4619) Process transactional updates to local index on server-side

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410104#comment-16410104
 ] 

Hudson commented on PHOENIX-4619:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #71 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/71/])
PHOENIX-4619 Process transactional updates to local index on server-side 
(jtaylor: rev 5814fcbe0529efb9e1d308c09870cefbf1872a95)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixTransactionalProcessor.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/Indexer.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexCodec.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/builder/BaseIndexCodec.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexMetaData.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexMetaDataBuilder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixTransactionalIndexer.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/NonTxIndexBuilderTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/TableState.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/PhoenixTransactionContext.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TephraTransactionContext.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/LocalTableStateTest.java
* (delete) 
phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxnIndexMutationGenerator.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/IndexCodec.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/NonTxIndexBuilder.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/TestCoveredColumnIndexCodec.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionContext.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxIndexMutationGenerator.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseIndexIT.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/CoveredColumnIndexCodec.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/builder/BaseIndexBuilder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/CoveredIndexCodecForTesting.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/LocalTableState.java


> Process transactional updates to local index on server-side
> ---
>
> Key: PHOENIX-4619
> URL: https://issues.apache.org/jira/browse/PHOENIX-4619
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4619_v1.patch
>
>
> For local indexes, we'll want to continue to process updates on the 
> server-side. After PHOENIX-4278, updates even for local indexes are occurring 
> on the client-side. The reason is that we know the updates to the index table 
> will be a local write and we can generate the write on the server side. 
> Having a separate RPC and sending the updates across the wire would be 
> tremendously inefficient. On top of that, we need the region boundary 
> information which we have already in the coprocessor, but would need to 
> retrieve it on the client side (with a likely race condition too if a split 
> occurs after we retrieve it).
> To fix this, we need to modify PhoenixTxnIndexMutationGenerator such that it 
> can be used on the server-side as well. The main change will be to change this 
> method signature to pass through an IndexMaintainer instead of a PTable 
> (which isn't available on the server-side):
> {code}
> public List<Mutation> getIndexUpdates(final PTable table, PTable index, 
> List<Mutation> dataMutations) throws IOException, SQLException {
> {code}
> I think this can be changed to the following instead and be used both client 
> and server side:
> {code}
> 

[jira] [Commented] (PHOENIX-4660) Use TransactionProvider interface

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410106#comment-16410106
 ] 

Hudson commented on PHOENIX-4660:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #71 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/71/])
PHOENIX-4660 Use TransactionProvider interface (jtaylor: rev 
45e75de7664e315219fe65c643f08c20b44a01aa)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TransactionFactory.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TransactionProvider.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/TransactionUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/IndexMetaDataCacheFactory.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TephraTransactionProvider.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixTransactionalProcessor.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxIndexMutationGenerator.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/tx/FlappingTransactionIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexMetaDataBuilder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionlessQueryServicesImpl.java


> Use TransactionProvider interface
> -
>
> Key: PHOENIX-4660
> URL: https://issues.apache.org/jira/browse/PHOENIX-4660
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4660_v1.patch
>
>
> Instead of having a bunch of switch statements in TransactionFactory, let's 
> create a TransactionProvider interface that can be implemented by our 
> transaction providers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4659) Use coprocessor API to write local transactional indexes

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410105#comment-16410105
 ] 

Hudson commented on PHOENIX-4659:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #71 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/71/])
PHOENIX-4659 Use coprocessor API to write local transactional indexes (jtaylor: 
rev b61e72f007e46199ef27bd3308bec4f4bf448c67)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TransactionFactory.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/TxWriteFailureIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/TransactionUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixTransactionalIndexer.java


> Use coprocessor API to write local transactional indexes
> 
>
> Key: PHOENIX-4659
> URL: https://issues.apache.org/jira/browse/PHOENIX-4659
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4659_v1.patch
>
>
> Instead of writing local indexes through a separate thread pool, use the 
> coprocessor API to write them so that they are row-level transactionally 
> consistent.





Re: Almost ready for HBaseCon+PhoenixCon 2018 SanJose CFP

2018-03-22 Thread Josh Elser

Thanks Josh for doing this.

Do they have to be conflated so? Each community is doing their own
conference. This page/announcement makes it look like they have been
squashed together.

Thanks,
S


Do you have any concrete suggestions I can change?



You have hbasecon+phoenixcon which to me reads as a combined conference. I 
thought we wanted to avoid this sort of messaging. Probably best to have 
separate announcement pages.


I was originally intending to have separate pages, but I scrapped that 
because:

* I didn't have the time to make two sites (one took longer than I expected)
* I wasn't seeing content differentiation between the two

I'm hoping that, without getting word-y, there's a way that I can better 
express this? I definitely struggled with how to refer to these.


Would something like having the HBase "version" read only "HBaseCon", 
and the Phoenix "version", "PhoenixCon" make you happier? Does the 
"About" section read as to what you were expecting or would you like to 
see more separation there too?




Just to bring this full-circle, I've pushed an update to both websites 
to try to further clarify this if others want to re-look.


[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410018#comment-16410018
 ] 

ASF GitHub Bot commented on PHOENIX-4579:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176520222
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

I'd also recommend committing PHOENIX-4668 and PHOENIX-4575 first as 
that'll simplify what you need to do here a bit.


> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on first client connection. 
> Adding a property to make it configurable (with default true as it is 
> currently implemented).
> With this property set to false, it will avoid lockstep upgrade requirement 
> for all clients when changing meta properties using PHOENIX-4575 as this 
> property can be flipped back on once all the clients are upgraded.





[GitHub] phoenix pull request #295: PHOENIX-4579: Add a config to conditionally creat...

2018-03-22 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176520222
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

I'd also recommend committing PHOENIX-4668 and PHOENIX-4575 first as 
that'll simplify what you need to do here a bit.


---


[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409992#comment-16409992
 ] 

ASF GitHub Bot commented on PHOENIX-4579:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176506809
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

The flow of this code is very confusing. Let's do the compatibility check 
only once in ensureTableCreated by making the following changes:
- do not call ensureTableCreated from createTable if the table is a system 
table
- call ensureTableCreated instead here and remove this entire try block
- do all checks required in ensureTableCreated where we already determine 
if the table exists or not
- in ensureTableCreated, if SYSTEM.CATALOG doesn't exist and 
!isAutoUpgradeEnabled or isDoNotUpgradePropSet, then throw our standard 
UpgradeRequiredException immediately without creating system catalog metadata.
- the only call to checkClientServerCompatibility should be in 
ensureTableCreated (when isMetaTable is true)
- make sure to be defensive in the creation of the SYSTEM namespace and 
moving of SYSTEM tables as it's possible that multiple clients may be 
attempting to do that.

I think this will improve the maintainability of this code (and fix this issue 
too).
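The decision order being suggested can be sketched as a single up-front check (a simplified illustration; the boolean flags and the class name here are hypothetical stand-ins for the real ConnectionQueryServicesImpl state):

```java
// Simplified sketch of the suggested ensureTableCreated flow: decide once,
// up front, whether an upgrade is required, before any SYSTEM.CATALOG
// metadata gets created. The flags stand in for real Phoenix state.
class EnsureTableCreatedSketch {
    static String decide(boolean systemCatalogExists,
                         boolean isAutoUpgradeEnabled,
                         boolean isDoNotUpgradePropSet) {
        if (!systemCatalogExists && (!isAutoUpgradeEnabled || isDoNotUpgradePropSet)) {
            // Fail fast with the standard exception instead of creating metadata.
            return "throw UpgradeRequiredException";
        }
        // Otherwise this is the single place the compatibility check runs.
        return systemCatalogExists
                ? "checkClientServerCompatibility"
                : "create SYSTEM.CATALOG metadata";
    }
}
```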


> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on first client connection. 
> Adding a property to make it configurable (with default true as it is 
> currently implemented).
> With this property set to false, it will avoid lockstep upgrade requirement 
> for all clients when changing meta properties using PHOENIX-4575 as this 
> property can be flipped back on once all the clients are upgraded.





[GitHub] phoenix pull request #295: PHOENIX-4579: Add a config to conditionally creat...

2018-03-22 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/295#discussion_r176506809
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2405,16 +2413,26 @@ public Void call() throws Exception {
 openConnection();
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = 
UpgradeUtil.isNoUpgradeSet(props);
+boolean doesSystemCatalogAlreadyExist = false;
--- End diff --

The flow of this code is very confusing. Let's do the compatibility check 
only once in ensureTableCreated by making the following changes:
- do not call ensureTableCreated from createTable if the table is a system 
table
- call ensureTableCreated instead here and remove this entire try block
- do all checks required in ensureTableCreated where we already determine 
if the table exists or not
- in ensureTableCreated, if SYSTEM.CATALOG doesn't exist and 
!isAutoUpgradeEnabled or isDoNotUpgradePropSet, then throw our standard 
UpgradeRequiredException immediately without creating system catalog metadata.
- the only call to checkClientServerCompatibility should be in 
ensureTableCreated (when isMetaTable is true)
- make sure to be defensive in the creation of the SYSTEM namespace and 
moving of SYSTEM tables as it's possible that multiple clients may be 
attempting to do that.

I think this will improve the maintainability of this code (and fix this issue 
too).


---


Re: Almost ready for HBaseCon+PhoenixCon 2018 SanJose CFP

2018-03-22 Thread Josh Elser
I'll pass along the concern to them. I don't control the content on that 
website, so I'm not sure if they'll immediately grok what an ideal fix 
is. I'll try to express the concern to them.


I imagine this will require some back and forth.

On 3/22/18 11:49 AM, Dima Spivak wrote:

+1. DataWorks Summit is a vendor-hosted event. Putting a community
conference under its umbrella suggests community endorsement that we may
want to avoid.

On Wed, Mar 21, 2018 at 5:53 PM 张铎(Duo Zhang)  wrote:


It is just a bit confusing to people who want to attend these events
because it is placed in the agenda section... It looks like that
HBaseCon/PhoenixCon is part of the DataWorks Summit, so maybe some
attendees may want to use the ticket for the DataWorks Summit to enter the
HBaseCon/PhoenixCon?

So advertise is good, but maybe move it to another section on the page?

Thanks.

2018-03-22 0:29 GMT+08:00 Josh Elser :


Hey Duo,

Thanks for digging into this. I am not surprised by it -- last I talked to 
the folks in charge of the website, they mentioned that they would 
cross-advertise for us as well. Seems like their web-staff is a bit faster 
than I am though. I was planning to point them to our event page after we 
had made our announcement, and hopefully they will just link back to us.

Any specific concerns? I think free-advertising is good, but the intent is 
not for HBaseCon/PhoenixCon to be "a part of" DataWorks Summit. I think 
perhaps adding something like "HBaseCon and PhoenixCon (community events)" 
would help? Give me some concrete suggestions please :)


On 3/21/18 4:13 AM, 张铎(Duo Zhang) wrote:


https://dataworkssummit.com/san-jose-2018/

Here, in the Agenda section, HBaseCon and PhoenixCon are also included.

Monday, June 18
8:30 AM - 5:00 PM

Pre-event Training
8:30 AM - 5:00 PM

HBaseCon and PhoenixCon
12:00 PM – 7:00 PM

Registration
6:00 PM – 8:00 PM

Meetups

Is this intentional?

2018-03-21 14:59 GMT+08:00 Stack :

On Tue, Mar 20, 2018 at 7:51 PM, Josh Elser  wrote:


Hi all,


I've published a new website for the upcoming event in June in California 
at [1][2] for the HBase and Phoenix websites, respectively. 1 & 2 are 
identical.

I've not yet updated any links on either website to link to the new page. 
I'd appreciate if folks can give their feedback on anything outwardly 
wrong, incorrect, etc. If folks are happy, then I'll work on linking from 
the main websites, and coordinating an official announcement via mail 
lists, social media, etc.

The website is generated from [3]. If you really want to be my
best-friend, let me know about the above things which are wrong via
pull-request ;)

- Josh

[1] https://hbase.apache.org/hbasecon-phoenixcon-2018/
[2] https://phoenix.apache.org/hbasecon-phoenixcon-2018/
[3] https://github.com/joshelser/hbasecon-jekyll




Thanks Josh for doing this.

Do they have to be conflated so? Each community is doing their own
conference. This page/announcement makes it look like they have been
squashed together.

Thanks,
S








Re: Almost ready for HBaseCon+PhoenixCon 2018 SanJose CFP

2018-03-22 Thread Dima Spivak
+1. DataWorks Summit is a vendor-hosted event. Putting a community
conference under its umbrella suggests community endorsement that we may
want to avoid.

On Wed, Mar 21, 2018 at 5:53 PM 张铎(Duo Zhang)  wrote:

> It is just a bit confusing to people who want to attend these events
> because it is placed in the agenda section... It looks like that
> HBaseCon/PhoenixCon is part of the DataWorks Summit, so maybe some
> attendees may want to use the ticket for the DataWorks Summit to enter the
> HBaseCon/PhoenixCon?
>
> So advertise is good, but maybe move it to another section on the page?
>
> Thanks.
>
> 2018-03-22 0:29 GMT+08:00 Josh Elser :
>
> > Hey Duo,
> >
> > Thanks for digging into this. I am not surprised by it -- last I talked
> to
> > the folks in charge of the website, they mentioned that they would
> > cross-advertise for us as well. Seems like their web-staff is a bit
> faster
> > than I am though. I was planning to point them to our event page after we
> > had made our announcement, and hopefully they will just link back to us.
> >
> > Any specific concerns? I think free-advertising is good, but the intent
> is
> > not for HBaseCon/PhoenixCon to be "a part of" DataWorks Summit. I think
> > perhaps adding something like "HBaseCon and PhoenixCon (community
> events)"
> > would help? Give me some concrete suggestions please :)
> >
> >
> > On 3/21/18 4:13 AM, 张铎(Duo Zhang) wrote:
> >
> >> https://dataworkssummit.com/san-jose-2018/
> >>
> >> Here, in the Agenda section, HBaseCon and PhoenixCon are also included.
> >>
> >> Monday, June 18
> >> 8:30 AM - 5:00 PM
> >>
> >> Pre-event Training
> >> 8:30 AM - 5:00 PM
> >>
> >> HBaseCon and PhoenixCon
> >> 12:00 PM – 7:00 PM
> >>
> >> Registration
> >> 6:00 PM – 8:00 PM
> >>
> >> Meetups
> >>
> >> Is this intentional?
> >>
> >> 2018-03-21 14:59 GMT+08:00 Stack :
> >>
> >> On Tue, Mar 20, 2018 at 7:51 PM, Josh Elser  wrote:
> >>>
> >>> Hi all,
> 
>  I've published a new website for the upcoming event in June in
>  California
>  at [1][2] for the HBase and Phoenix websites, respectively. 1 & 2 are
>  identical.
> 
>  I've not yet updated any links on either website to link to the new
>  page.
>  I'd appreciate if folks can give their feedback on anything outwardly
>  wrong, incorrect, etc. If folks are happy, then I'll work on linking
>  from
>  the main websites, and coordinating an official announcement via mail
>  lists, social media, etc.
> 
>  The website is generated from [3]. If you really want to be my
>  best-friend, let me know about the above things which are wrong via
>  pull-request ;)
> 
>  - Josh
> 
>  [1] https://hbase.apache.org/hbasecon-phoenixcon-2018/
>  [2] https://phoenix.apache.org/hbasecon-phoenixcon-2018/
>  [3] https://github.com/joshelser/hbasecon-jekyll
> 
> 
> >>>
> >>> Thanks Josh for doing this.
> >>>
> >>> Do they have to be conflated so? Each community is doing their own
> >>> conference. This page/announcement makes it look like they have been
> >>> squashed together.
> >>>
> >>> Thanks,
> >>> S
> >>>
> >>>
> >>
>
-- 
-Dima


[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409747#comment-16409747
 ] 

James Taylor commented on PHOENIX-4658:
---

In addition to your change, let's introduce a new FORWARD_SCAN hint that forces 
a forward scan (thus causing loadColumnFamiliesOnDemand to *not* be disabled). 
That way a particular use case could override the behavior of this patch if it 
has a severe impact on perf.

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}





[jira] [Updated] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-22 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4669:
--
Attachment: PHOENIX-4669-UT.patch

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at 

[jira] [Commented] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-22 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409721#comment-16409721
 ] 

Josh Elser commented on PHOENIX-4662:
-

Until my laptop shut down, I didn't see any test failures. +1 from me ;)

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. It turned 
> up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".
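The defensive pattern the fix applies can be illustrated with a minimal sketch (a hypothetical class, not the actual TableResultIterator code): guard the lazily initialized collection with a null check before using it on the retry path.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch (hypothetical, not the actual TableResultIterator) of the
// null-guard the fix adds: 'caches' starts out null and is only dereferenced
// after an explicit null check on the cache-resend retry path.
class CacheResendGuard {
    private List<String> caches; // remains null until a server cache is added

    void addCache(String cacheId) {
        if (caches == null) {
            caches = new ArrayList<>(); // lazy init, as in the original field
        }
        caches.add(cacheId);
    }

    int resendAll() {
        if (caches == null) {
            return 0; // nothing to resend; dereferencing here would NPE
        }
        return caches.size();
    }
}
```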





[jira] [Created] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-22 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created PHOENIX-4669:
-

 Summary: NoSuchColumnFamilyException when creating index on views 
that are built on tables which have named column family
 Key: PHOENIX-4669
 URL: https://issues.apache.org/jira/browse/PHOENIX-4669
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki


Steps to reproduce are as follows:

1. Create a table with specified column family
{code}
CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
{code}
2. Upsert data into the table
{code}
UPSERT INTO TBL VALUES ('AAA','BBB')
{code}
3. Create a view on the table
{code}
CREATE VIEW VW AS SELECT * FROM TBL
{code}
4. Create an index on the view
{code}
CREATE INDEX IDX ON VW (CF.COL2)
{code}

By following the above steps, I faced the following error.
{code}
Exception in thread "main" org.apache.phoenix.execute.CommitException: 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
Column family 0 does not exist in region 
_IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table '_IDX_TBL', 
{TABLE_ATTRIBUTES => {coprocessor$1 => 
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 
=> 
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', 
coprocessor$3 => 
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
coprocessor$4 => 
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
coprocessor$5 => 
'|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
 METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER => 
'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 'NONE', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'}
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
: 1 time, servers with issues: 10.0.1.3,57208,1521731670016
at 
org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
at 
org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
at 
org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
at 
org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
at 
org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
at 
org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
at Case00162740.main(Case00162740.java:20)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
Failed 1 action: 
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family 
0 does not exist in region 
_IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table '_IDX_TBL', 
{TABLE_ATTRIBUTES => {coprocessor$1 => 
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 
=> 

[jira] [Commented] (PHOENIX-4655) Upgrade to latest version from version <4.8.0 may fail with TableNotFoundException if we have local indexes but no multi-tenant tables.

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409484#comment-16409484
 ] 

Hudson commented on PHOENIX-4655:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #70 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/70/])
PHOENIX-4655 Upgrade to latest version from version <4.8.0 may fail with 
(rajeshbabu: rev d2d10ca2d532e2f8b02e87082cd7da130420be05)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java


> Upgrade to latest version from version <4.8.0 may fail with 
> TableNotFoundException if we have local indexes but no multi-tenant tables.
> ---
>
> Key: PHOENIX-4655
> URL: https://issues.apache.org/jira/browse/PHOENIX-4655
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4655.patch
>
>
> If we upgrade from a version earlier than 4.8.0 to the latest version while we 
> have local indexes but no multi-tenant tables, the upgrade may fail with 
> TableNotFoundException.
> {noformat}
> Caused by: org.apache.hadoop.hbase.TableNotFoundException: 
> org.apache.hadoop.hbase.TableNotFoundException: _IDX_PERFORMANCE_100
> at 
> org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.prepareDisable(DisableTableProcedure.java:220)
> at 
> org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.executeFromState(DisableTableProcedure.java:87)
> at 
> org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.executeFromState(DisableTableProcedure.java:39)
> at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:180)
> at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1453)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1222)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1741)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:93)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:83)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:349)
> at 
> org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:102)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3048)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3040)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:896)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:890)
> at 
> org.apache.phoenix.util.UpgradeUtil.disableViewIndexes(UpgradeUtil.java:610)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:2838)
> ... 21 more
> {noformat}
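The stack trace above shows UpgradeUtil.disableViewIndexes calling HBaseAdmin.disableTable on a view-index table (_IDX_PERFORMANCE_100) that was never created. A minimal sketch of the kind of existence guard the fix needs, using a plain Set in place of the real HBase Admin.tableExists call (hypothetical helper names; the actual patch in UpgradeUtil.java may differ):

```java
import java.util.Set;

public class DisableGuard {
    // Sketch only: disabling a table that does not exist raises
    // TableNotFoundException, so check existence before calling disableTable.
    static boolean shouldDisable(Set<String> existingTables, String indexTable) {
        return existingTables.contains(indexTable);
    }

    public static void main(String[] args) {
        Set<String> tables = Set.of("PERFORMANCE_100");
        // No multi-tenant tables were created, so the _IDX_ table is absent:
        System.out.println(shouldDisable(tables, "_IDX_PERFORMANCE_100")); // false
    }
}
```

With the real API the same check would be `if (admin.tableExists(name)) admin.disableTable(name);` instead of throwing on the missing table.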



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4655) Upgrade to latest version from version <4.8.0 may fail with TableNotFoundException if we have local indexes but no multi-tenant tables.

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409453#comment-16409453
 ] 

Hudson commented on PHOENIX-4655:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1840 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1840/])
PHOENIX-4655 Upgrade to latest version from version <4.8.0 may fail with 
(rajeshbabu: rev 421620138761ca1cbe08b7c79e608c6ed753bfca)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java




[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409377#comment-16409377
 ] 

Hudson commented on PHOENIX-4661:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #69 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/69/])
PHOENIX-4661 Handled deleted PTables in the MetadataCache(Addendum) 
(ankitsinghal59: rev 1cce283d47660ef3efbae3b4e408b40b6deb63ed)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemTablePermissionsIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java


> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch, PHOENIX-4661_v2_addendum.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)

[jira] [Resolved] (PHOENIX-4655) Upgrade to latest version from version <4.8.0 may fail with TableNotFoundException if we have local indexes but no multi-tenant tables.

2018-03-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-4655.
--
Resolution: Fixed

Committed to 5.x and 4.x branches. Thanks for reviews.



[jira] [Resolved] (PHOENIX-4497) Fix Local Index IT tests

2018-03-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-4497.
--
Resolution: Fixed

> Fix Local Index IT tests
> 
>
> Key: PHOENIX-4497
> URL: https://issues.apache.org/jira/browse/PHOENIX-4497
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>






[jira] [Resolved] (PHOENIX-4440) Local index split/merge IT tests are failing

2018-03-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-4440.
--
Resolution: Fixed

Committed. PHOENIX-4576 also handled as part of this.

> Local index split/merge IT tests are failing
> 
>
> Key: PHOENIX-4440
> URL: https://issues.apache.org/jira/browse/PHOENIX-4440
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4440.patch, PHOENIX-4440_v2.patch, 
> PHOENIX-4440_v3.patch
>
>
> IndexHalfStoreFileReaderGenerator#preStoreFileReaderOpen is not getting 
> called, so the default behaviour is used and split/merge does not work.





[jira] [Commented] (PHOENIX-4440) Local index split/merge IT tests are failing

2018-03-22 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409234#comment-16409234
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4440:
--

The v3 patch fixes all the local index tests. Going to commit it.



[jira] [Updated] (PHOENIX-4440) Local index split/merge IT tests are failing

2018-03-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4440:
-
Attachment: PHOENIX-4440_v3.patch



[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-22 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409175#comment-16409175
 ] 

Ankit Singhal commented on PHOENIX-4661:


{quote}I have a strong feeling that changes in PhoenixAccessController.java are 
not related to this issue and need to be reverted.
{quote}
Yes, it is not related to this issue; I could have done it in another JIRA 
(PHOENIX-4545). The change was necessary in 5.x because, after the connection 
changes in HBase 2.0, we can't use ObserverContext without 
MasterCoprocessorEnvironment for the preGetTableDescriptor API.

I have committed the addendum to 4.x-HBase-1.2, 4.x-HBase-1.3 and master only, 
as the other branches do not have PHOENIX-672 and so are not affected.
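The underlying failure comes from MetaDataEndpointImpl.dropTable passing an empty qualifier to TableName.valueOf, which rejects it with IllegalArgumentException. A minimal sketch of the defensive shape of the check (hypothetical helper; the committed patch may guard differently):

```java
public class QualifierGuard {
    // Sketch only: when the table was already dropped, the qualifier looked
    // up on the second DROP comes back empty, and building a TableName from
    // it throws "Table qualifier must not be empty". Check first.
    static boolean hasQualifier(byte[] qualifier) {
        return qualifier != null && qualifier.length > 0;
    }

    public static void main(String[] args) {
        System.out.println(hasQualifier(new byte[0]));        // false: skip TableName.valueOf
        System.out.println(hasQualifier("JOSH".getBytes()));  // true: safe to build the TableName
    }
}
```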


[jira] [Resolved] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-22 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4661.

Resolution: Fixed


[jira] [Updated] (PHOENIX-4609) Error Occurs while selecting a specific set of columns : ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 2

2018-03-22 Thread Aman Jha (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Jha updated PHOENIX-4609:
--
Priority: Blocker  (was: Critical)

> Error Occurs while selecting a specific set of columns : ERROR 201 (22000): 
> Illegal data. Expected length of at least 8 bytes, but had 2
> 
>
> Key: PHOENIX-4609
> URL: https://issues.apache.org/jira/browse/PHOENIX-4609
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Aman Jha
>Priority: Blocker
> Attachments: DML_DDL.sql, SelectStatement.sql, TestPhoenix.java
>
>
> While selecting columns from a table, an Illegal Data error occurs:
> h3. _*ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 2*_
> The data is read and written only through the Phoenix client.
> Moreover, this error occurs only when running queries via a Java program, 
> not through the SQuirreL SQL client. Is there any other way to access the 
> results from the ResultSet that is returned by the Phoenix client?
>  
> *Environment Details* : 
> *HBase Version* : _1.2.6 on Hadoop 2.8.2_
> *Phoenix Version* : _4.11.0-HBase-1.2_
> *OS*: _LINUX(RHEL)_
>  
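The error text means a column decoder that needs a fixed 8-byte value (e.g. a BIGINT) received only 2 bytes. An illustrative, self-contained analogy using java.nio (this is not Phoenix's actual decoder, just the same length contract):

```java
import java.nio.ByteBuffer;

public class LengthCheck {
    // Illustrative only: an 8-byte decoder handed a 2-byte value fails the
    // same way reading a long from a 2-byte buffer would, so the length
    // must be validated first ("Expected length of at least 8 bytes, but had 2").
    static Long readLongIfPossible(byte[] bytes) {
        if (bytes == null || bytes.length < Long.BYTES) {
            return null; // too short to hold a long
        }
        return ByteBuffer.wrap(bytes).getLong();
    }

    public static void main(String[] args) {
        System.out.println(readLongIfPossible(new byte[2]));                      // null
        System.out.println(readLongIfPossible(new byte[]{0, 0, 0, 0, 0, 0, 0, 42})); // 42
    }
}
```

A mismatch of this shape typically points at a schema/projection disagreement, e.g. a shorter fixed-width value being read through an 8-byte column expression.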
> The following error is caused when selecting columns via a Java Program
> {code:java}
> ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 
> 2; nested exception is java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 8 bytes, but had 2
> at 
> org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:419)
> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:474)
> at 
> com.zycus.qe.service.impl.PhoenixHBaseDAOImpl.fetchAggregationResult(PhoenixHBaseDAOImpl.java:752)
> ... 14 common frames omitted
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
> length of at least 8 bytes, but had 2
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:213)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175)
> at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:260)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:255)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.OrderedAggregatingResultIterator.next(OrderedAggregatingResultIterator.java:51)
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
> at 
> org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:62)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> 

[jira] [Updated] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-22 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4661:
---
Attachment: PHOENIX-4661_v2_addendum.patch

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch, PHOENIX-4661_v2_addendum.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> 

[jira] [Resolved] (PHOENIX-2717) Unable to login if no "create" permission in HBase

2018-03-22 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-2717.

   Resolution: Fixed
Fix Version/s: 4.11.0

Fixed as part of https://issues.apache.org/jira/browse/PHOENIX-3756.

> Unable to login if no "create" permission in HBase
> --
>
> Key: PHOENIX-2717
> URL: https://issues.apache.org/jira/browse/PHOENIX-2717
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: HDP 2.3.4
>Reporter: mathias kluba
>Priority: Blocker
> Fix For: 4.11.0
>
>
> I'm using HBase with Ranger, but I guess we could have the same issue with 
> the internal HBase permission system.
> When I try to connect to HBase using the Phoenix client, it crashes with an 
> "Access Denied" exception.
> The Phoenix client tries to create the SYSTEM.CATALOG table (and other 
> SYSTEM tables) and catches only two exceptions: 
> NewerTableAlreadyExistsException and TableAlreadyExistsException.
> It doesn't catch the "access denied" exception.
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L2279
> In the end, I'm not able to connect to HBase through Phoenix for read 
> purposes; I don't need to be able to create these SYSTEM tables.
> I think the code is a little dirty: it should check for the table's 
> existence instead of trying to create it and catching the exception.
> I have a workaround for now: I grant the "create" permission in Ranger on 
> the "SYSTEM.*" tables. They already exist before the user tries to connect, 
> so it's not a problem to give users this access.
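The check-then-create pattern the reporter suggests can be sketched as below, with the JDK file API standing in for the HBase admin calls. This is purely illustrative: `ensureExists` is a hypothetical helper, and the actual fix was delivered via PHOENIX-3756, not necessarily in this form.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CheckThenCreate {
    // Check existence up front and only attempt creation when the resource
    // is missing. A caller without create rights can then still proceed
    // when the resource already exists, instead of hitting an access-denied
    // exception from an unconditional create-and-catch.
    static void ensureExists(Path p) throws IOException {
        if (!Files.exists(p)) {
            Files.createFile(p); // only reached when creation is really needed
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("system-catalog-", ".marker");
        ensureExists(p); // already exists: no create call is issued
        System.out.println(Files.exists(p));
        Files.delete(p);
    }
}
```

The same idea applied to Phoenix would mean probing for SYSTEM.CATALOG before issuing the create, rather than relying on NewerTableAlreadyExistsException/TableAlreadyExistsException alone.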



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409144#comment-16409144
 ] 

Hudson commented on PHOENIX-4661:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1816 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1816/])
PHOENIX-4661 Handled deleted PTables in the MetadataCache (elserj: rev 
e9324cc811e8763f81475af15c46fd739dec26a4)
* (add) phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java


> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>  

[jira] [Commented] (PHOENIX-4636) Include python-phoenixdb into Phoenix

2018-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409145#comment-16409145
 ] 

Hudson commented on PHOENIX-4636:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1816 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1816/])
PHOENIX-4636 Add the python PQS driver (elserj: rev 
1b72808b9f232a8d8029f76fc2a4f1708e48)
* (add) python/ci/phoenix/Dockerfile
* (add) python/doc/api.rst
* (add) python/examples/basic.py
* (add) python/phoenixdb/avatica/proto/__init__.py
* (add) python/phoenixdb/tests/test_connection.py
* (add) python/phoenixdb/avatica/__init__.py
* (add) python/phoenixdb/tests/test_errors.py
* (add) python/ci/phoenix/docker-entrypoint.sh
* (add) python/phoenixdb/errors.py
* (add) python/doc/Makefile
* (add) python/phoenixdb/tests/test_dbapi20.py
* (add) python/examples/shell.py
* (add) python/phoenixdb/avatica/proto/requests_pb2.py
* (add) python/ci/build-env/Dockerfile
* (add) python/RELEASING.rst
* (add) python/.gitignore
* (add) python/.gitlab-ci.yml
* (add) python/phoenixdb/__init__.py
* (add) python/setup.cfg
* (add) python/docker-compose.yml
* (add) python/phoenixdb/cursor.py
* (add) python/phoenixdb/types.py
* (add) python/phoenixdb/avatica/client.py
* (add) python/phoenixdb/tests/__init__.py
* (add) python/phoenixdb/avatica/proto/responses_pb2.py
* (edit) dev/make_rc.sh
* (add) python/README.rst
* (add) python/phoenixdb/tests/test_avatica.py
* (add) python/phoenixdb/connection.py
* (add) python/phoenixdb/avatica/proto/common_pb2.py
* (add) python/phoenixdb/tests/test_db.py
* (add) python/doc/index.rst
* (add) python/doc/versions.rst
* (add) python/tox.ini
* (add) python/ci/phoenix/hbase-site.xml
* (add) python/doc/conf.py
* (add) python/gen-protobuf.sh
* (add) python/phoenixdb/tests/test_types.py
* (add) python/setup.py
* (add) python/NEWS.rst
* (edit) pom.xml
* (add) python/requirements.txt
* (add) python/phoenixdb/tests/dbapi20.py


> Include python-phoenixdb into Phoenix
> -
>
> Key: PHOENIX-4636
> URL: https://issues.apache.org/jira/browse/PHOENIX-4636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
>
> Include [https://github.com/lalinsky/python-phoenixdb] in Phoenix.
> Details about the library can be found at:-
>  [http://python-phoenixdb.readthedocs.io/en/latest/]
> Discussion thread:-
> [https://www.mail-archive.com/dev@phoenix.apache.org/msg45424.html]
> commit:-
> [https://github.com/lalinsky/python-phoenixdb/commit/1bb34488dd530ca65f91b29ef16aa7b71f26b806]
>  
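Basic usage of the bundled driver, adapted from the python-phoenixdb README, looks like this. It assumes a Phoenix Query Server is reachable at http://localhost:8765/ (adjust the URL to your deployment); without a running PQS the connect call will fail.

```python
import phoenixdb

# URL of a running Phoenix Query Server (assumption: default port on localhost)
database_url = 'http://localhost:8765/'
conn = phoenixdb.connect(database_url, autocommit=True)

cursor = conn.cursor()
cursor.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username VARCHAR)")
cursor.execute("UPSERT INTO users VALUES (?, ?)", (1, 'admin'))
cursor.execute("SELECT * FROM users")
print(cursor.fetchall())
```

The driver follows the Python DB-API 2.0 interface, so the cursor/execute/fetch pattern above is the same one used with other DB-API drivers.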


