[jira] [Created] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-01-31 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4576:


 Summary: Fix LocalIndexSplitMergeIT tests failing in master branch
 Key: PHOENIX-4576
 URL: https://issues.apache.org/jira/browse/PHOENIX-4576
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.14.0


Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is failing 
in the master branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4575) Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven

2018-01-31 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348034#comment-16348034
 ] 

Mujtaba Chohan commented on PHOENIX-4575:
-

Thanks for the review [~jamestaylor]. I’ll work on the changes. 

> Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven
> --
>
> Key: PHOENIX-4575
> URL: https://issues.apache.org/jira/browse/PHOENIX-4575
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Major
> Attachments: PHOENIX-4575.patch
>
>
> This is to cater for circumstances where we need to alter state of 
> KEEP_DELETED_CELLS/VERSION on Phoenix meta tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2018-01-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347979#comment-16347979
 ] 

Hudson commented on PHOENIX-4130:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1924 (See 
[https://builds.apache.org/job/Phoenix-master/1924/])
PHOENIX-4130 Avoid server retries for mutable indexes (Addendum) (vincentpoon: 
rev 0ef77b18cc200afe097ab803338af33997e3935d)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/MultiIndexWriteFailureException.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/SingleIndexWriteFailureException.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java


> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4130.addendum.master.patch, 
> PHOENIX-4130.v1.master.patch, PHOENIX-4130.v10.master.patch, 
> PHOENIX-4130.v2.master.patch, PHOENIX-4130.v3.master.patch, 
> PHOENIX-4130.v4.master.patch, PHOENIX-4130.v5.master.patch, 
> PHOENIX-4130.v6.master.patch, PHOENIX-4130.v7.master.patch, 
> PHOENIX-4130.v8.master.patch, PHOENIX-4130.v9.master.patch
>
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # Now the handler thread is done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to its 
> caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-31 Thread Lev Bronshtein (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347980#comment-16347980
 ] 

Lev Bronshtein commented on PHOENIX-4533:
-

Fixed the tests as well.  Also it looks like I incorrectly generated the last 
patch, so I created a new one and attached it.

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different key tabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-31 Thread Lev Bronshtein (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lev Bronshtein updated PHOENIX-4533:

Attachment: PHOENIX-4533.1.patch

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different key tabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-31 Thread Lev Bronshtein (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lev Bronshtein updated PHOENIX-4533:

Attachment: (was: PHOENIX-4533.1.patch)

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different key tabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2018-01-31 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347819#comment-16347819
 ] 

James Taylor commented on PHOENIX-4130:
---

+1. Thanks, [~vincentpoon].

> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4130.addendum.master.patch, 
> PHOENIX-4130.v1.master.patch, PHOENIX-4130.v10.master.patch, 
> PHOENIX-4130.v2.master.patch, PHOENIX-4130.v3.master.patch, 
> PHOENIX-4130.v4.master.patch, PHOENIX-4130.v5.master.patch, 
> PHOENIX-4130.v6.master.patch, PHOENIX-4130.v7.master.patch, 
> PHOENIX-4130.v8.master.patch, PHOENIX-4130.v9.master.patch
>
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # Now the handler thread is done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to its 
> caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4556.
--
Resolution: Fixed

Committed 519cca954..9994059a0

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind - and version up to 
> 4.14.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado closed PHOENIX-4556.


> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind - and version up to 
> 4.14.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4575) Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven

2018-01-31 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347793#comment-16347793
 ] 

James Taylor commented on PHOENIX-4575:
---

Thanks for the patch. Here's some feedback:
- Can we define these new property attributes in QueryServices and their 
defaults in QueryServicesOptions instead? (See the constants sketch at the end 
of this list.)
- How about instead of META_DATA_KEEP_DELETED_CELLS we use a name of 
DEFAULT_SYSTEM_KEEP_DELETED_CELLS_ATTRIB 
(phoenix.system.default.keep.deleted.cells) as the property name and 
DEFAULT_SYSTEM_KEEP_DELETED_CELLS = false?
- How about a name of DEFAULT_SYSTEM_MAX_VERSIONS_ATTRIB 
(phoenix.system.default.max.versions) as the property name and 
DEFAULT_SYSTEM_MAX_VERSIONS = 1?
- I think you want to set the value of KEEP_DELETED_CELLS to the value of the 
new DEFAULT_SYSTEM_KEEP_DELETED_CELLS_ATTRIB property (i.e. you don't want to 
set it to the property name string):
{code}
-HConstants.VERSIONS + "=" + 
MetaDataProtocol.DEFAULT_MAX_META_DATA_VERSIONS + ",\n" +
-HColumnDescriptor.KEEP_DELETED_CELLS + "="  + 
MetaDataProtocol.DEFAULT_META_DATA_KEEP_DELETED_CELLS + ",\n"+
+HConstants.VERSIONS + "=" + 
MetaDataProtocol.MAX_META_DATA_VERSIONS + ",\n" +
+HColumnDescriptor.KEEP_DELETED_CELLS + "="  + 
MetaDataProtocol.META_DATA_KEEP_DELETED_CELLS + ",\n"+
{code}
- Instead, put %s placeholders in the QueryConstants constant like this:
{code}
public static final String CREATE_TABLE_METADATA =
// Do not use IF NOT EXISTS as we sometimes catch the 
TableAlreadyExists
// exception and add columns to the SYSTEM.TABLE dynamically.
"CREATE TABLE " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_CATALOG_TABLE + "\"(\n" +
...
HConstants.VERSIONS + "=%s,\n" +
HColumnDescriptor.KEEP_DELETED_CELLS + "=%s,\n" +
...
{code}
and modify this method in ConnectionQueryServicesImpl (and 
ConnectionlessQueryServicesImpl):
{code}
// Available for testing
protected String getSystemCatalogDML() {
return String.format(QueryConstants.CREATE_TABLE_METADATA, 
props.getInt(DEFAULT_SYSTEM_MAX_VERSIONS_ATTRIB, 
DEFAULT_SYSTEM_MAX_VERSIONS),
props.getBoolean(DEFAULT_SYSTEM_KEEP_DELETED_CELLS_ATTRIB, 
DEFAULT_SYSTEM_KEEP_DELETED_CELLS));
}
{code}
- Do the same as above for the CREATE_FUNCTION_METADATA constant.
- Remove KEEP_DELETED_CELLS and VERSIONS from CREATE_SEQUENCE_METADATA and 
CREATE_STATS_TABLE_METADATA as it doesn't make sense for keep deleted cells to 
be on for these tables.
- Remove this code from ConnectionQueryServicesImpl as it'll likely cause 
issues (and doesn't make sense):
{code}
if(props.get(QueryServices.DEFAULT_KEEP_DELETED_CELLS_ATTRIB) != 
null){
columnDesc.setKeepDeletedCells(props.getBoolean(
QueryServices.DEFAULT_KEEP_DELETED_CELLS_ATTRIB, 
QueryServicesOptions.DEFAULT_KEEP_DELETED_CELLS));
}
{code}
- Remove the QueryServices.DEFAULT_KEEP_DELETED_CELLS_ATTRIB and 
QueryServicesOptions.DEFAULT_KEEP_DELETED_CELLS as they won't be referenced any 
more.
- Remove these properties from MetaDataProtocol:
{code}
public static final int DEFAULT_MAX_META_DATA_VERSIONS = 1000;
public static final boolean DEFAULT_META_DATA_KEEP_DELETED_CELLS = true;
public static final int DEFAULT_MAX_STAT_DATA_VERSIONS = 1;
public static final boolean DEFAULT_STATS_KEEP_DELETED_CELLS = false;
{code} 
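- For reference, a minimal sketch of how those constants could look, using the 
names and defaults suggested above (illustrative only, not the actual patch):
{code}
// In QueryServices (proposed attribute names)
public static final String DEFAULT_SYSTEM_KEEP_DELETED_CELLS_ATTRIB = 
        "phoenix.system.default.keep.deleted.cells";
public static final String DEFAULT_SYSTEM_MAX_VERSIONS_ATTRIB = 
        "phoenix.system.default.max.versions";

// In QueryServicesOptions (proposed defaults)
public static final boolean DEFAULT_SYSTEM_KEEP_DELETED_CELLS = false;
public static final int DEFAULT_SYSTEM_MAX_VERSIONS = 1;
{code}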


> Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven
> --
>
> Key: PHOENIX-4575
> URL: https://issues.apache.org/jira/browse/PHOENIX-4575
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Major
> Attachments: PHOENIX-4575.patch
>
>
> This is to cater for circumstances where we need to alter state of 
> KEEP_DELETED_CELLS/VERSION on Phoenix meta tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4575) Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven

2018-01-31 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347730#comment-16347730
 ] 

Mujtaba Chohan commented on PHOENIX-4575:
-

Attached patch adds 2 new properties that can be set in hbase-site.xml: 
 # phoenix.metadata.keep.deleted.cells
 # phoenix.metadata.max.versions
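For reference, a minimal sketch of how these properties would typically be read 
from hbase-site.xml via the HBase Configuration API (the fallback defaults here 
just mirror the current MetaDataProtocol values; the exact wiring in the patch 
may differ):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MetadataPropsExample {
    public static void main(String[] args) {
        // Picks up hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        boolean keepDeletedCells = 
                conf.getBoolean("phoenix.metadata.keep.deleted.cells", true); // current default: true
        int maxVersions = 
                conf.getInt("phoenix.metadata.max.versions", 1000); // current default: 1000
        System.out.println("KEEP_DELETED_CELLS=" + keepDeletedCells 
                + ", VERSIONS=" + maxVersions);
    }
}
{code}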

 

> Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven
> --
>
> Key: PHOENIX-4575
> URL: https://issues.apache.org/jira/browse/PHOENIX-4575
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Priority: Major
> Attachments: PHOENIX-4575.patch
>
>
> This is to cater for circumstances where we need to alter state of 
> KEEP_DELETED_CELLS/VERSION on Phoenix meta tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4575) Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven

2018-01-31 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan reassigned PHOENIX-4575:
---

Assignee: Mujtaba Chohan

> Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven
> --
>
> Key: PHOENIX-4575
> URL: https://issues.apache.org/jira/browse/PHOENIX-4575
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Major
> Attachments: PHOENIX-4575.patch
>
>
> This is to cater for circumstances where we need to alter state of 
> KEEP_DELETED_CELLS/VERSION on Phoenix meta tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4575) Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven

2018-01-31 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-4575:
---

 Summary: Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should 
be property driven
 Key: PHOENIX-4575
 URL: https://issues.apache.org/jira/browse/PHOENIX-4575
 Project: Phoenix
  Issue Type: New Feature
Reporter: Mujtaba Chohan
 Attachments: PHOENIX-4575.patch

This is to cater for circumstances where we need to alter state of 
KEEP_DELETED_CELLS/VERSION on Phoenix meta tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4575) Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven

2018-01-31 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-4575:

Attachment: PHOENIX-4575.patch

> Phoenix metadata KEEP_DELETED_CELLS and VERSIONS should be property driven
> --
>
> Key: PHOENIX-4575
> URL: https://issues.apache.org/jira/browse/PHOENIX-4575
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Priority: Major
> Attachments: PHOENIX-4575.patch
>
>
> This is to cater for circumstances where we need to alter state of 
> KEEP_DELETED_CELLS/VERSION on Phoenix meta tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2018-01-31 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347719#comment-16347719
 ] 

Vincent Poon commented on PHOENIX-4130:
---

Actually it turned out not to be that bad, I just swap the local index names 
for the table names if it's a local index.  See attached addendum - I'll put it 
in master as well to keep the branches more consistent.

 

As for namespaces, I'm relying on HTableInterfaceReference#toString() and 
parsing using commas, so things should work - if not, even the old code would 
have issues.
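For anyone following along, here is how HBase renders namespaced vs. 
non-namespaced table names, which is what the separator question comes down to 
(an illustrative sketch, not code from the patch):
{code:java}
import org.apache.hadoop.hbase.TableName;

public class TableNameSeparatorExample {
    public static void main(String[] args) {
        // Without namespace mapping, Phoenix schema and table live in the
        // qualifier itself, joined with a '.'.
        TableName plain = TableName.valueOf("MY_SCHEMA.MY_TABLE");
        // With namespace mapping, the namespace and qualifier are separated
        // by ':' in the string form.
        TableName namespaced = TableName.valueOf("MY_SCHEMA", "MY_TABLE");

        System.out.println(plain.getNameAsString());      // MY_SCHEMA.MY_TABLE
        System.out.println(namespaced.getNameAsString()); // MY_SCHEMA:MY_TABLE
    }
}
{code}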

> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4130.addendum.master.patch, 
> PHOENIX-4130.v1.master.patch, PHOENIX-4130.v10.master.patch, 
> PHOENIX-4130.v2.master.patch, PHOENIX-4130.v3.master.patch, 
> PHOENIX-4130.v4.master.patch, PHOENIX-4130.v5.master.patch, 
> PHOENIX-4130.v6.master.patch, PHOENIX-4130.v7.master.patch, 
> PHOENIX-4130.v8.master.patch, PHOENIX-4130.v9.master.patch
>
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # Now the handler thread is done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to its 
> caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4130) Avoid server retries for mutable indexes

2018-01-31 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4130:
--
Attachment: PHOENIX-4130.addendum.master.patch

> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4130.addendum.master.patch, 
> PHOENIX-4130.v1.master.patch, PHOENIX-4130.v10.master.patch, 
> PHOENIX-4130.v2.master.patch, PHOENIX-4130.v3.master.patch, 
> PHOENIX-4130.v4.master.patch, PHOENIX-4130.v5.master.patch, 
> PHOENIX-4130.v6.master.patch, PHOENIX-4130.v7.master.patch, 
> PHOENIX-4130.v8.master.patch, PHOENIX-4130.v9.master.patch
>
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # Now the handler thread is done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to its 
> caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Phoenix committer: Pedro Boado

2018-01-31 Thread Pedro Boado
Thanks everyone for the warm welcome!

On 31 January 2018 at 19:38, Vincent Poon  wrote:

> Congrats Pedro!
>
> On Wed, Jan 31, 2018 at 6:25 AM, Ankit Singhal 
> wrote:
>
> > Congrats and welcome, Pedro!
> >
> > On Wed, Jan 31, 2018 at 4:35 AM, Flavio Pompermaier <
> pomperma...@okkam.it>
> > wrote:
> >
> > > Congratulations and thanks for the great work!
> > >
> > > On 30 Jan 2018 23:45, "James Taylor"  wrote:
> > >
> > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Pedro
> > > Boado has accepted our invitation to become a committer. He's jumped in
> > and
> > > saved the HBase 1.2 branch from EOL by volunteering to be the release
> > > manager [1], has diligently kept the branch in sync with master [2],
> > > revived our community's desire for CDH compatibility by porting the
> > > required changes to the new CDH branch [3], modified our release
> scripts
> > to
> > > build a parcels directory for easy consumption by Cloudera Manager [4],
> > and
> > > successfully RMed our first CDH-compatible release [5]. Fantastic job,
> > > Pedro!
> > >
> > > Please give Pedro a warm welcome to the team!
> > >
> > > James
> > >
> > > [1]
> > > https://lists.apache.org/thread.html/e4312e2cb329a35979576f132d0d72
> > > 3e0b022e45ce9083b3cae4abe5@%3Cuser.phoenix.apache.org%3E
> > > [2] https://issues.apache.org/jira/browse/PHOENIX-4461
> > > [3] https://issues.apache.org/jira/browse/PHOENIX-4372
> > > [4] https://issues.apache.org/jira/browse/PHOENIX-
> > > [5]
> > > https://lists.apache.org/thread.html/e51576bf9bac5100e0f12ddb93e469
> > > cb35bfa9c80e73922df635ad12@%3Cuser.phoenix.apache.org%3E
> > >
> >
>


[jira] [Closed] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado closed PHOENIX-4554.


> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: 
> 0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch, 
> 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
>
>
> Syncing 4.x-HBase-1.2 with master (two commits missing).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4554.
--
Resolution: Fixed

Done, 878a264e5..afe21dc72

> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: 
> 0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch, 
> 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
>
>
> Syncing 4.x-HBase-1.2 with master (two commits missing).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4574) Disable failing local indexing ITs

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4574:

Attachment: PHOENIX-4574.2.patch

> Disable failing local indexing ITs
> --
>
> Key: PHOENIX-4574
> URL: https://issues.apache.org/jira/browse/PHOENIX-4574
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4574.2.patch, PHOENIX-4574.patch
>
>
> [~rajeshbabu] still has some work ongoing to fix up local indexing for HBase 
> 2.
> Temporarily disable related tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4561) Temporarily disable transactional tests

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4561:

Attachment: PHOENIX-4561.addendum.patch

> Temporarily disable transactional tests
> ---
>
> Key: PHOENIX-4561
> URL: https://issues.apache.org/jira/browse/PHOENIX-4561
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4561.001.patch, PHOENIX-4561.addendum.patch
>
>
> All 5.x transactional table tests are failing because of a necessary Tephra 
> release which is pending.
> Let's disable these tests so we have a better idea of the state of the build.
> FYI [~an...@apache.org]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4574) Disable failing local indexing ITs

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4574:

Attachment: PHOENIX-4574.patch

> Disable failing local indexing ITs
> --
>
> Key: PHOENIX-4574
> URL: https://issues.apache.org/jira/browse/PHOENIX-4574
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4574.patch
>
>
> [~rajeshbabu] still has some work ongoing to fix up local indexing for HBase 
> 2.
> Temporarily disable related tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4565) IndexScrutinyToolIT is failing

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4565:

Attachment: PHOENIX-4565.patch

> IndexScrutinyToolIT is failing
> --
>
> Key: PHOENIX-4565
> URL: https://issues.apache.org/jira/browse/PHOENIX-4565
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4565.patch
>
>
> {noformat}
> [ERROR] 
> testScrutinyWhileTakingWrites[0](org.apache.phoenix.end2end.IndexScrutinyToolIT)
>   Time elapsed: 12.494 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1000> but was:<996>
>     at 
> org.apache.phoenix.end2end.IndexScrutinyToolIT.testScrutinyWhileTakingWrites(IndexScrutinyToolIT.java:253)
> [ERROR] 
> testScrutinyWhileTakingWrites[1](org.apache.phoenix.end2end.IndexScrutinyToolIT)
>   Time elapsed: 7.437 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1000> but was:<997>
>     at 
> org.apache.phoenix.end2end.IndexScrutinyToolIT.testScrutinyWhileTakingWrites(IndexScrutinyToolIT.java:253)
> [ERROR] 
> testScrutinyWhileTakingWrites[2](org.apache.phoenix.end2end.IndexScrutinyToolIT)
>   Time elapsed: 12.195 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1000> but was:<999>
>     at 
> org.apache.phoenix.end2end.IndexScrutinyToolIT.testScrutinyWhileTakingWrites(IndexScrutinyToolIT.java:253)
> {noformat}
> Saw this on a {{mvn verify}} of 5.x. I don't know if we expect this one to be 
> broken or not -- I didn't see an open issue tracking it.
> Is this one we should get fixed before shipping an alpha/beta? My opinion 
> would be: unless it is a trivial/simple fix, we should get it for the next 
> release.
> [~sergey.soldatov], [~an...@apache.org], [~rajeshbabu].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Phoenix committer: Pedro Boado

2018-01-31 Thread Vincent Poon
Congrats Pedro!

On Wed, Jan 31, 2018 at 6:25 AM, Ankit Singhal 
wrote:

> Congrats and welcome, Pedro!
>
> On Wed, Jan 31, 2018 at 4:35 AM, Flavio Pompermaier 
> wrote:
>
> > Congratulations and thanks for the great work!
> >
> > On 30 Jan 2018 23:45, "James Taylor"  wrote:
> >
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Pedro
> > Boado has accepted our invitation to become a committer. He's jumped in
> and
> > saved the HBase 1.2 branch from EOL by volunteering to be the release
> > manager [1], has diligently kept the branch in sync with master [2],
> > revived our community's desire for CDH compatibility by porting the
> > required changes to the new CDH branch [3], modified our release scripts
> to
> > build a parcels directory for easy consumption by Cloudera Manager [4],
> and
> > successfully RMed our first CDH-compatible release [5]. Fantastic job,
> > Pedro!
> >
> > Please give Pedro a warm welcome to the team!
> >
> > James
> >
> > [1]
> > https://lists.apache.org/thread.html/e4312e2cb329a35979576f132d0d72
> > 3e0b022e45ce9083b3cae4abe5@%3Cuser.phoenix.apache.org%3E
> > [2] https://issues.apache.org/jira/browse/PHOENIX-4461
> > [3] https://issues.apache.org/jira/browse/PHOENIX-4372
> > [4] https://issues.apache.org/jira/browse/PHOENIX-
> > [5]
> > https://lists.apache.org/thread.html/e51576bf9bac5100e0f12ddb93e469
> > cb35bfa9c80e73922df635ad12@%3Cuser.phoenix.apache.org%3E
> >
>


[jira] [Updated] (PHOENIX-4482) Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4482:

Attachment: PHOENIX-4482.patch

> Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException
> --
>
> Key: PHOENIX-4482
> URL: https://issues.apache.org/jira/browse/PHOENIX-4482
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4482.patch
>
>
> {noformat}
> ERROR] 
> testReplayEditsWrittenViaHRegion(org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT)
>   Time elapsed: 82.455 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL cannot be cast to 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.createWAL(WALReplayWithIndexWritesAndCompressedWALIT.java:274)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.testReplayEditsWrittenViaHRegion(WALReplayWithIndexWritesAndCompressedWALIT.java:192)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4482) Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException

2018-01-31 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347432#comment-16347432
 ] 

Josh Elser commented on PHOENIX-4482:
-

Forcing FSHLog doesn't make the test pass immediately. Disabling it and moving 
on.
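For reference, forcing the FSHLog-backed WAL in HBase 2 generally comes down to 
pinning the WAL provider in the test configuration; a minimal sketch (not 
necessarily the exact change that was tried here):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ForceFsHLogExample {
    public static Configuration walConf() {
        // HBase 2 defaults to AsyncFSWAL ("asyncfs"); the "filesystem"
        // provider is the FSHLog-backed implementation the IT casts to.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "filesystem");
        return conf;
    }
}
{code}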

> Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException
> --
>
> Key: PHOENIX-4482
> URL: https://issues.apache.org/jira/browse/PHOENIX-4482
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> ERROR] 
> testReplayEditsWrittenViaHRegion(org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT)
>   Time elapsed: 82.455 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL cannot be cast to 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.createWAL(WALReplayWithIndexWritesAndCompressedWALIT.java:274)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.testReplayEditsWrittenViaHRegion(WALReplayWithIndexWritesAndCompressedWALIT.java:192)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4574) Disable failing local indexing ITs

2018-01-31 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4574:
---

 Summary: Disable failing local indexing ITs
 Key: PHOENIX-4574
 URL: https://issues.apache.org/jira/browse/PHOENIX-4574
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 5.0.0


[~rajeshbabu] still has some work ongoing to fix up local indexing for HBase 2.

Temporarily disable related tests.
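A typical way to do that is a class-level JUnit {{@Ignore}} on the affected ITs; 
a minimal sketch (the class name below is hypothetical, and the actual patch may 
disable the tests differently):
{code:java}
import org.junit.Ignore;

// Hypothetical example class; the real patch targets the existing local index ITs.
@Ignore("Local indexing does not yet work on HBase 2; re-enable under PHOENIX-4574")
public class SomeLocalIndexIT {
    // test methods unchanged; the whole class is skipped by the JUnit runner
}
{code}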



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4459) Region assignments are failing for the test cases with extended clocks to support SCN

2018-01-31 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347254#comment-16347254
 ] 

Josh Elser commented on PHOENIX-4459:
-

Parking a patch which disables some of the tests which are failing (until we 
get this fix in).

> Region assignments are failing for the test cases with extended clocks to 
> support SCN
> -
>
> Key: PHOENIX-4459
> URL: https://issues.apache.org/jira/browse/PHOENIX-4459
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4459-v1.patch, 
> PHOENIX-4459.test-disabling.patch, jstack_PHOENIX-4459
>
>
> There are test cases using their own clock that are failing with 
> TableNotFoundException during region assignment. The reason is the meta scan 
> is not giving any results because of the past timestamps. Need to check in 
> more detail. Because of the region assignment failures during the create 
> table procedure, the hbase client waits for 30 mins, so it is not able to 
> continue running the other tests as well.
> {noformat}
> 2017-12-14 16:48:03,153 ERROR [ProcExecWrkr-9] 
> org.apache.hadoop.hbase.master.TableStateManager(135): Unable to get table 
> T08 state
> org.apache.hadoop.hbase.TableNotFoundException: T08
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:175)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:132)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure.startTransition(AssignProcedure.java:161)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:294)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:85)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1452)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1221)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:77)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1731)
> {noformat}
> List of tests hanging because of this:-
> ExplainPlanWithStatsEnabledIT#testBytesRowsForSelectOnTenantViews
> ConcurrentMutationsIT
> PartialIndexRebuilderIT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4459) Region assignments are failing for the test cases with extended clocks to support SCN

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4459:

Attachment: PHOENIX-4459.test-disabling.patch

> Region assignments are failing for the test cases with extended clocks to 
> support SCN
> -
>
> Key: PHOENIX-4459
> URL: https://issues.apache.org/jira/browse/PHOENIX-4459
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4459-v1.patch, 
> PHOENIX-4459.test-disabling.patch, jstack_PHOENIX-4459
>
>
> There are test cases using their own clock that are failing with 
> TableNotFoundException during region assignment. The reason is the meta scan 
> is not giving any results because of the past timestamps. Need to check in 
> more detail. Because of the region assignment failures during the create 
> table procedure, the hbase client waits for 30 mins, so it is not able to 
> continue running the other tests as well.
> {noformat}
> 2017-12-14 16:48:03,153 ERROR [ProcExecWrkr-9] 
> org.apache.hadoop.hbase.master.TableStateManager(135): Unable to get table 
> T08 state
> org.apache.hadoop.hbase.TableNotFoundException: T08
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:175)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:132)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure.startTransition(AssignProcedure.java:161)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:294)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:85)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1452)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1221)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:77)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1731)
> {noformat}
> List of tests hanging because of this:-
> ExplainPlanWithStatsEnabledIT#testBytesRowsForSelectOnTenantViews
> ConcurrentMutationsIT
> PartialIndexRebuilderIT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4570) Phoenix pherf tests fail with Mockito dependency issue.

2018-01-31 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347242#comment-16347242
 ] 

Josh Elser commented on PHOENIX-4570:
-

Missed a few other places that needed to move from classifier:tests to 
type:test-jar, per the Maven guide: 
https://maven.apache.org/guides/mini/guide-attached-tests.html
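The change that guide calls for is declaring the test-jar dependency by type 
rather than by classifier; a sketch of the dependency block (module and version 
handling shown here are illustrative):
{code:xml}
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-core</artifactId>
  <version>${project.version}</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
{code}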

> Phoenix pherf tests fail with Mockito dependency issue.
> ---
>
> Key: PHOENIX-4570
> URL: https://issues.apache.org/jira/browse/PHOENIX-4570
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4570.002.patch, PHOENIX-4570.patch
>
>
> {noformat}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.006 s <<< 
> FAILURE! - in org.apache.phoenix.pherf.SchemaReaderIT
> org.apache.phoenix.pherf.SchemaReaderIT  Time elapsed: 0.005 s  <<< ERROR!
> java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
> Caused by: java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.004 s <<< 
> FAILURE! - in org.apache.phoenix.pherf.PherfMainIT
> org.apache.phoenix.pherf.PherfMainIT  Time elapsed: 0.004 s  <<< ERROR!
> java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
> Caused by: java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.004 s <<< 
> FAILURE! - in org.apache.phoenix.pherf.DataIngestIT
> org.apache.phoenix.pherf.DataIngestIT  Time elapsed: 0.003 s  <<< ERROR!
> java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
> Caused by: java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
> {noformat}
> Looks like a pretty easy dependency issue to fix up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4570) Phoenix pherf tests fail with Mockito dependency issue.

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4570:

Attachment: PHOENIX-4570.002.patch

> Phoenix pherf tests fail with Mockito dependency issue.
> ---
>
> Key: PHOENIX-4570
> URL: https://issues.apache.org/jira/browse/PHOENIX-4570
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4570.002.patch, PHOENIX-4570.patch
>
>
> {noformat}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.006 s <<< 
> FAILURE! - in org.apache.phoenix.pherf.SchemaReaderIT
> org.apache.phoenix.pherf.SchemaReaderIT  Time elapsed: 0.005 s  <<< ERROR!
> java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
> Caused by: java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.004 s <<< 
> FAILURE! - in org.apache.phoenix.pherf.PherfMainIT
> org.apache.phoenix.pherf.PherfMainIT  Time elapsed: 0.004 s  <<< ERROR!
> java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
> Caused by: java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.004 s <<< 
> FAILURE! - in org.apache.phoenix.pherf.DataIngestIT
> org.apache.phoenix.pherf.DataIngestIT  Time elapsed: 0.003 s  <<< ERROR!
> java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
> Caused by: java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
> {noformat}
> Looks like a pretty easy dependency issue to fix up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4573) Temporarily disable phoenix-hive ITs

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4573:

Attachment: PHOENIX-4573.patch

> Temporarily disable phoenix-hive ITs
> 
>
> Key: PHOENIX-4573
> URL: https://issues.apache.org/jira/browse/PHOENIX-4573
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4573.patch
>
>
> phoenix-hive ITs aren't working with the newer version of Hive. Ankit and I 
> have been trying to make this work in PHOENIX-4423, but the work is not yet 
> done.
> Disable them for now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4573) Temporarily disable phoenix-hive ITs

2018-01-31 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4573:
---

 Summary: Temporarily disable phoenix-hive ITs
 Key: PHOENIX-4573
 URL: https://issues.apache.org/jira/browse/PHOENIX-4573
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 5.0.0


phoenix-hive ITs aren't working with the newer version of Hive. Ankit and I 
have been trying to make this work in PHOENIX-4423, but the work is not yet 
done.

Disable them for now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4572) Phoenix-queryserver ITs failing on hbase2

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4572:

Attachment: PHOENIX-4572.patch

> Phoenix-queryserver ITs failing on hbase2
> -
>
> Key: PHOENIX-4572
> URL: https://issues.apache.org/jira/browse/PHOENIX-4572
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Priority: Major
> Attachments: PHOENIX-4572.patch
>
>
> {noformat}
> 5694 2018-01-30 15:55:47,530 WARN  [Thread-67] datanode.BPServiceActor(849): 
> Unexpected exception in block pool Block pool  (Datanode Uuid 
> e3f356a4-76ff-4aee-a98f-b316672a35e7) service to localhost/127.0.0.1:57626
> 5695 java.lang.NoClassDefFoundError: Could not initialize class 
> com.fasterxml.jackson.databind.ObjectMapper
> 5696   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.(FsVolumeImpl.java:105)
> 5697   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImplBuilder.build(FsVolumeImplBuilder.java:70)
> 5698   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:428)
> 5699   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.(FsDatasetImpl.java:318)
> 5700   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
> 5701   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
> 5702   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1703)
> 5703   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1650)
> 5704   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:376)
> 5705   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
> 5706   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
> 5707   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Looks like a jackson version conflict is causing the MiniDFSCluster to crash. 
> There's a new shaded jar out of Hadoop3 which should help this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4572) Phoenix-queryserver ITs failing on hbase2

2018-01-31 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-4572:
---

Assignee: Josh Elser

> Phoenix-queryserver ITs failing on hbase2
> -
>
> Key: PHOENIX-4572
> URL: https://issues.apache.org/jira/browse/PHOENIX-4572
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: PHOENIX-4572.patch
>
>
> {noformat}
> 5694 2018-01-30 15:55:47,530 WARN  [Thread-67] datanode.BPServiceActor(849): 
> Unexpected exception in block pool Block pool  (Datanode Uuid 
> e3f356a4-76ff-4aee-a98f-b316672a35e7) service to localhost/127.0.0.1:57626
> 5695 java.lang.NoClassDefFoundError: Could not initialize class 
> com.fasterxml.jackson.databind.ObjectMapper
> 5696   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.(FsVolumeImpl.java:105)
> 5697   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImplBuilder.build(FsVolumeImplBuilder.java:70)
> 5698   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:428)
> 5699   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.(FsDatasetImpl.java:318)
> 5700   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
> 5701   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
> 5702   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1703)
> 5703   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1650)
> 5704   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:376)
> 5705   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
> 5706   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
> 5707   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Looks like a jackson version conflict is causing the MiniDFSCluster to crash. 
> There's a new shaded jar out of Hadoop3 which should help this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2018-01-31 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347046#comment-16347046
 ] 

James Taylor commented on PHOENIX-4130:
---

How about for HBase 1.2 and below we retry on server for local indexes? We 
don’t really recommend local indexes on pre 1.3 versions, so perhaps best to 
keep it simple? Otherwise, I’d go with your option #1 and pass more state 
through RPC. One unrelated question: if namespaces are in use, does your means 
of parsing out the table names still work? Not sure if there will be a : 
separator or a . separator.

> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4130.v1.master.patch, 
> PHOENIX-4130.v10.master.patch, PHOENIX-4130.v2.master.patch, 
> PHOENIX-4130.v3.master.patch, PHOENIX-4130.v4.master.patch, 
> PHOENIX-4130.v5.master.patch, PHOENIX-4130.v6.master.patch, 
> PHOENIX-4130.v7.master.patch, PHOENIX-4130.v8.master.patch, 
> PHOENIX-4130.v9.master.patch
>
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # Now the handler thread is done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to its 
> caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-31 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346936#comment-16346936
 ] 

Josh Elser commented on PHOENIX-4533:
-

bq. Actually I think I already figured it out (though not clear how this 
affects other components).  It looks like the login is done externally.  Just 
need to make sure the avatica server will still do SPNEGO auth

Yup, you got it. That was meant to disable Avatica from trying to login when 
we already did the login in the test setup.

As long as you have {{kerberos}} set as the value for 
{{QueryServices.QUERY_SERVER_HBASE_SECURITY_CONF_ATTRIB}}, PQS should end up 
calling {{withSpnegoAuth(..)}} which is what forces the SPNEGO authentication 
to happen.
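A minimal sketch of that test-side setup (property values as discussed above; 
the exact IT code may differ):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.phoenix.query.QueryServices;

public class PqsKerberosConfExample {
    public static Configuration secureConf() {
        Configuration conf = HBaseConfiguration.create();
        // "kerberos" makes PQS call withSpnegoAuth(..) on the Avatica server.
        conf.set(QueryServices.QUERY_SERVER_HBASE_SECURITY_CONF_ATTRIB, "kerberos");
        // The ITs log in externally, so Avatica's own Kerberos login is disabled.
        conf.setBoolean(QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN, true);
        return conf;
    }
}
{code}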

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different key tabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Phoenix committer: Pedro Boado

2018-01-31 Thread Ankit Singhal
Congrats and welcome, Pedro!

On Wed, Jan 31, 2018 at 4:35 AM, Flavio Pompermaier 
wrote:

> Congratulations and thanks for the great work!
>
> On 30 Jan 2018 23:45, "James Taylor"  wrote:
>
> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Pedro
> Boado has accepted our invitation to become a committer. He's jumped in and
> saved the HBase 1.2 branch from EOL by volunteering to be the release
> manager [1], has diligently kept the branch in sync with master [2],
> revived our community's desire for CDH compatibility by porting the
> required changes to the new CDH branch [3], modified our release scripts to
> build a parcels directory for easy consumption by Cloudera Manager [4], and
> successfully RMed our first CDH-compatible release [5]. Fantastic job,
> Pedro!
>
> Please give Pedro a warm welcome to the team!
>
> James
>
> [1]
> https://lists.apache.org/thread.html/e4312e2cb329a35979576f132d0d723e0b022e45ce9083b3cae4abe5@%3Cuser.phoenix.apache.org%3E
> [2] https://issues.apache.org/jira/browse/PHOENIX-4461
> [3] https://issues.apache.org/jira/browse/PHOENIX-4372
> [4] https://issues.apache.org/jira/browse/PHOENIX-
> [5]
> https://lists.apache.org/thread.html/e51576bf9bac5100e0f12ddb93e469cb35bfa9c80e73922df635ad12@%3Cuser.phoenix.apache.org%3E
>


[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-31 Thread Lev Bronshtein (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346859#comment-16346859
 ] 

Lev Bronshtein commented on PHOENIX-4533:
-

Actually I think I already figured it out (though not clear how this affects 
other components). It looks like the login is done externally. Just need to 
make sure the Avatica server will still do SPNEGO auth.

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different key tabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-31 Thread Lev Bronshtein (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346840#comment-16346840
 ] 

Lev Bronshtein commented on PHOENIX-4533:
-

Josh, I am having some trouble understanding why this line is being set in both 
tests
{code:java}
conf.setBoolean(QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN, true);
{code}
Especially since this seems to turn off the specific parts we want to test


{code:java}
final boolean disableLogin = getConf().getBoolean(
        QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN,
        QueryServicesOptions.DEFAULT_QUERY_SERVER_DISABLE_KERBEROS_LOGIN);

...

if (isKerberos && !disableSpnego && !disableLogin) {
    hostname = Strings.domainNamePointerToHostName(DNS.getDefaultHost(
            getConf().get(QueryServices.QUERY_SERVER_DNS_INTERFACE_ATTRIB, "default"),
            getConf().get(QueryServices.QUERY_SERVER_DNS_NAMESERVER_ATTRIB, "default")));
    if (LOG.isDebugEnabled()) {
        LOG.debug("Login to " + hostname + " using "
                + getConf().get(QueryServices.QUERY_SERVER_KEYTAB_FILENAME_ATTRIB)
                + " and principal "
                + getConf().get(QueryServices.QUERY_SERVER_KERBEROS_PRINCIPAL_ATTRIB) + ".");
    }
    SecurityUtil.login(getConf(), QueryServices.QUERY_SERVER_KEYTAB_FILENAME_ATTRIB,
            QueryServices.QUERY_SERVER_KERBEROS_PRINCIPAL_ATTRIB, hostname);
    LOG.info("Login successful.");
} else {
    hostname = InetAddress.getLocalHost().getHostName();
    LOG.info(" Kerberos is off and hostname is : " + hostname);
}
{code}
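
For context, a hypothetical fragment of the pattern being discussed: the test performs the Kerberos login itself and then tells PQS to skip its own login (the principal, keytab path, and class name are placeholders, not the actual Phoenix IT code):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.phoenix.query.QueryServices;

public class ExternalLoginSketch {
    static void setUpSecurity(Configuration conf) throws Exception {
        // 1) the test harness performs the Kerberos login itself ("externally")
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("phoenixqs/localhost@EXAMPLE.COM",
                "/tmp/keytabs/phoenixqs.keytab");
        // 2) so QueryServer is told not to attempt a second login of its own
        conf.setBoolean(QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN, true);
    }
}
{code}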

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different keytabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-31 Thread Lev Bronshtein (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346840#comment-16346840
 ] 

Lev Bronshtein edited comment on PHOENIX-4533 at 1/31/18 1:43 PM:
--

Josh, I am having some trouble understanding why this line is being set in both 
tests
{code:java}
conf.setBoolean(QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN, true);
{code}
Especially since this seems to turn off the specific parts we want to test in 

*phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java*
{code:java}
final boolean disableLogin = getConf().getBoolean(
        QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN,
        QueryServicesOptions.DEFAULT_QUERY_SERVER_DISABLE_KERBEROS_LOGIN);

...

if (isKerberos && !disableSpnego && !disableLogin) {
    hostname = Strings.domainNamePointerToHostName(DNS.getDefaultHost(
            getConf().get(QueryServices.QUERY_SERVER_DNS_INTERFACE_ATTRIB, "default"),
            getConf().get(QueryServices.QUERY_SERVER_DNS_NAMESERVER_ATTRIB, "default")));
    if (LOG.isDebugEnabled()) {
        LOG.debug("Login to " + hostname + " using "
                + getConf().get(QueryServices.QUERY_SERVER_KEYTAB_FILENAME_ATTRIB)
                + " and principal "
                + getConf().get(QueryServices.QUERY_SERVER_KERBEROS_PRINCIPAL_ATTRIB) + ".");
    }
    SecurityUtil.login(getConf(), QueryServices.QUERY_SERVER_KEYTAB_FILENAME_ATTRIB,
            QueryServices.QUERY_SERVER_KERBEROS_PRINCIPAL_ATTRIB, hostname);
    LOG.info("Login successful.");
} else {
    hostname = InetAddress.getLocalHost().getHostName();
    LOG.info(" Kerberos is off and hostname is : " + hostname);
}
{code}


was (Author: lbronshtein):
Josh, I am having some trouble understanding why this line is being set in both 
tests
{code:java}
conf.setBoolean(QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN, true);
{code}
Especially since this seems to turn off the specific parts we want to test


{code:java}
final boolean disableLogin = getConf().getBoolean(
        QueryServices.QUERY_SERVER_DISABLE_KERBEROS_LOGIN,
        QueryServicesOptions.DEFAULT_QUERY_SERVER_DISABLE_KERBEROS_LOGIN);

...

if (isKerberos && !disableSpnego && !disableLogin) {
    hostname = Strings.domainNamePointerToHostName(DNS.getDefaultHost(
            getConf().get(QueryServices.QUERY_SERVER_DNS_INTERFACE_ATTRIB, "default"),
            getConf().get(QueryServices.QUERY_SERVER_DNS_NAMESERVER_ATTRIB, "default")));
    if (LOG.isDebugEnabled()) {
        LOG.debug("Login to " + hostname + " using "
                + getConf().get(QueryServices.QUERY_SERVER_KEYTAB_FILENAME_ATTRIB)
                + " and principal "
                + getConf().get(QueryServices.QUERY_SERVER_KERBEROS_PRINCIPAL_ATTRIB) + ".");
    }
    SecurityUtil.login(getConf(), QueryServices.QUERY_SERVER_KEYTAB_FILENAME_ATTRIB,
            QueryServices.QUERY_SERVER_KERBEROS_PRINCIPAL_ATTRIB, hostname);
    LOG.info("Login successful.");
} else {
    hostname = InetAddress.getLocalHost().getHostName();
    LOG.info(" Kerberos is off and hostname is : " + hostname);
}
{code}

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different keytabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-01-31 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda updated PHOENIX-4418:
--
Comment: was deleted

(was: [~jamestaylor] a patch for your perusal.)

> UPPER() and LOWER() functions should be locale-aware
> 
>
> Key: PHOENIX-4418
> URL: https://issues.apache.org/jira/browse/PHOENIX-4418
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4418_v1.patch
>
>
> Correct conversion of a string to upper or lower case depends on Locale.
> Java's upper case and lower case conversion routines allow passing in a 
> locale.
> It should be possible to pass in a locale to UPPER() and LOWER() in Phoenix 
> so that locale-specific case conversion can be supported in Phoenix.
> See java.lang.String#toUpperCase()
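
For illustration, a small self-contained Java example of the JDK behavior this proposal builds on; Turkish is the classic locale where default and locale-aware case mapping differ:

{code:java}
import java.util.Locale;

public class LocaleCaseDemo {
    public static void main(String[] args) {
        String s = "i";
        // Root/English mapping: 'i' -> 'I'
        System.out.println(s.toUpperCase(Locale.ROOT));             // prints "I"
        // Turkish mapping: 'i' -> dotted capital 'İ' (U+0130)
        System.out.println(s.toUpperCase(new Locale("tr", "TR")));  // prints "İ"
    }
}
{code}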



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-01-31 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda updated PHOENIX-4418:
--
Attachment: PHOENIX-4418_v1.patch

> UPPER() and LOWER() functions should be locale-aware
> 
>
> Key: PHOENIX-4418
> URL: https://issues.apache.org/jira/browse/PHOENIX-4418
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4418_v1.patch
>
>
> Correct conversion of a string to upper or lower case depends on Locale.
> Java's upper case and lower case conversion routines allow passing in a 
> locale.
> It should be possible to pass in a locale to UPPER() and LOWER() in Phoenix 
> so that locale-specific case conversion can be supported in Phoenix.
> See java.lang.String#toUpperCase()



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-01-31 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda reassigned PHOENIX-4418:
-

 Assignee: Shehzaad Nakhoda
Fix Version/s: 4.14.0

> UPPER() and LOWER() functions should be locale-aware
> 
>
> Key: PHOENIX-4418
> URL: https://issues.apache.org/jira/browse/PHOENIX-4418
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.14.0
>
>
> Correct conversion of a string to upper or lower case depends on Locale.
> Java's upper case and lower case conversion routines allow passing in a 
> locale.
> It should be possible to pass in a locale to UPPER() and LOWER() in Phoenix 
> so that locale-specific case conversion can be supported in Phoenix.
> See java.lang.String#toUpperCase()



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-01-31 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346556#comment-16346556
 ] 

Shehzaad Nakhoda commented on PHOENIX-4418:
---

[~jamestaylor] a patch for your perusal.

> UPPER() and LOWER() functions should be locale-aware
> 
>
> Key: PHOENIX-4418
> URL: https://issues.apache.org/jira/browse/PHOENIX-4418
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4418_v1.patch
>
>
> Correct conversion of a string to upper or lower case depends on Locale.
> Java's upper case and lower case conversion routines allow passing in a 
> locale.
> It should be possible to pass in a locale to UPPER() and LOWER() in Phoenix 
> so that locale-specific case conversion can be supported in Phoenix.
> See java.lang.String#toUpperCase()



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)