[ANNOUNCE] Apache Phoenix 4.8.1 is available for download

2016-09-27 Thread larsh
The Phoenix Team is pleased to announce the immediate release of Apache Phoenix 
4.8.1.
Download it from your favorite Apache mirror [1].

Apache Phoenix 4.8.1 is a bug fix release for the Phoenix 4.8 release line, 
compatible with Apache HBase 0.98, 1.0, 1.1 & 1.2.

This release fixes the following 43 issues:
    [PHOENIX-1367] - VIEW derived from another VIEW doesn't use parent VIEW 
indexes
    [PHOENIX-3195] - Slight safety improvement for using DistinctPrefixFilter
    [PHOENIX-3228] - Index tables should not be configured with a 
custom/smaller MAX_FILESIZE
    [PHOENIX-930] - duplicated columns cause query exception and drop table 
exception
    [PHOENIX-1647] - Correctly return that Phoenix supports schema name 
references in DatabaseMetaData
    [PHOENIX-2336] - Queries with small case column-names return empty 
result-set when working with Spark Datasource Plugin
    [PHOENIX-2474] - Cannot round to a negative precision (to the left of the 
decimal)
    [PHOENIX-2641] - Implicit wildcard in LIKE predicate search pattern
    [PHOENIX-2645] - Wildcard characters do not match newline characters
    [PHOENIX-2853] - Delete Immutable rows from View does not work if immutable 
index(secondary index) exists
    [PHOENIX-2944] - DATE Comparison Broken
    [PHOENIX-2946] - Projected comparison between date and timestamp columns 
always returns true
    [PHOENIX-2995] - Write performance severely degrades with large number of 
views
    [PHOENIX-3046] - NOT LIKE with wildcard unexpectedly returns results
    [PHOENIX-3054] - Counting zero null rows returns an empty result set
    [PHOENIX-3072] - Deadlock on region opening with secondary index recovery
    [PHOENIX-3148] - Reduce size of PTable so that more tables can be cached in 
the metadata cache.
    [PHOENIX-3162] - TableNotFoundException might be thrown when an index is 
dropped while upserting.
    [PHOENIX-3164] - PhoenixConnection leak in PQS with security enabled
    [PHOENIX-3170] - Remove the futuretask from the list if 
StaleRegionBoundaryCacheException is thrown while initializing the scanners
    [PHOENIX-3175] - Unnecessary UGI proxy user impersonation check
    [PHOENIX-3185] - Error: ERROR 514 (42892): A duplicate column name was 
detected in the object definition or ALTER TABLE statement. 
columnName=TEST_TABLE.C1 (state=42892,code=514)
    [PHOENIX-3189] - HBase/ZooKeeper connection leaks when providing 
principal/keytab in JDBC url
    [PHOENIX-3203] - Tenant cache lookup in Global Cache fails in certain 
conditions
    [PHOENIX-3207] - Fix compilation failure on 4.8-HBase-1.2, 4.8-HBase-1.1 
and 4.8-HBase-1.0 branches after PHOENIX-3148
    [PHOENIX-3210] - Exception trying to cast Double to BigDecimal in 
UpsertCompiler
    [PHOENIX-3223] - Add hadoop classpath to PQS classpath
    [PHOENIX-3230] - Upgrade code running concurrently on different JVMs could 
make clients unusable
    [PHOENIX-3237] - Automatic rebuild of disabled index will fail if indexes 
of two tables are disabled at the same time
    [PHOENIX-3246] - U+2002 (En Space) not handled as whitespace in grammar
    [PHOENIX-3260] - MetadataRegionObserver.postOpen() can prevent region 
server from shutting down for a long duration
    [PHOENIX-3268] - Upgrade to Tephra 0.9.0
    [PHOENIX-3280] - Automatic attempt to rebuild all disabled index
    [PHOENIX-3291] - Do not throw return value of Throwables#propagate call
    [PHOENIX-3307] - Backward compatibility fails for tables with index (4.7.0 
client - 4.8.1 server)
    [PHOENIX-3323] - make_rc script fails to build the RC
    [PHOENIX-2785] - Do not store NULLs for immutable tables
    [PHOENIX-3081] - MIsleading exception on async stats update after major 
compaction
    [PHOENIX-3116] - Support incompatible HBase 1.1.5 and HBase 1.2.2
    [PHOENIX-808] - Create snapshot of SYSTEM.CATALOG prior to upgrade and 
restore on any failure
    [PHOENIX-2990] - Ensure documentation on "time/date" datatypes/functions 
acknowledge lack of JDBC compliance
    [PHOENIX-2991] - Add missing documentation for functions
    [PHOENIX-3255] - Increase test coverage for TIMESTAMP

See also the full release notes [2].

Yours,
The Apache Phoenix Team

[1] http://www.apache.org/dyn/closer.lua/phoenix/
[2] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12337964



Making upgrade to a new minor release a manual step

2016-09-27 Thread Samarth Jain
Hello Phoenix folks,

The purpose of this email is twofold:
1) to introduce the new optional upgrade process and,
2) to get consensus on what the default behavior of the process should
be.

As you may already know, when a new minor release is rolled out, in order
to support new features Phoenix needs to update its internal metadata. This
update is done as part of the upgrade process that is automatically kicked
off when a Phoenix client for a new minor release connects to the HBase
cluster.

To provide more control over when this upgrade should run, we added a new
feature that optionally makes the upgrade a manual step (see
https://issues.apache.org/jira/browse/PHOENIX-3174 for details). The
upgrade behavior is controlled by a client-side config named
phoenix.autoupgrade.enabled. If the config is set to false, then Phoenix
won't kick off the upgrade process automatically. When ready to upgrade,
users can kick off the upgrade by issuing the EXECUTE UPGRADE SQL
command.

Keep in mind that until the upgrade is run, Phoenix won't let you execute
any other SQL commands using the new minor release client. Clients running
older versions of Phoenix, though, will continue to work as before.
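
A minimal sketch of the manual flow, for illustration ("localhost" stands
in for your ZooKeeper quorum; the config name and statement are as
introduced in PHOENIX-3174):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class ManualUpgrade {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Client-side config from PHOENIX-3174: suppress the automatic upgrade.
        props.setProperty("phoenix.autoupgrade.enabled", "false");
        // "localhost" stands in for your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement()) {
            // Until this runs, the new client rejects any other SQL command.
            stmt.execute("EXECUTE UPGRADE");
        }
    }
}
{code}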

I propose that we have the config phoenix.autoupgrade.enabled set to false
by default. Providing users more control and making the upgrade process an
explicit manual step is the right thing to do, IMHO.

Interested to know what you all think.

Thanks,
Samarth


[jira] [Comment Edited] (PHOENIX-3259) Create fat jar for transaction manager

2016-09-27 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528453#comment-15528453
 ] 

Francis Chuang edited comment on PHOENIX-3259 at 9/28/16 5:22 AM:
--

Are there any workarounds for this with Phoenix 4.8.1-rc0 in the meantime?


was (Author: francischuang):
Are there any workarounds for this in the meantime?

> Create fat jar for transaction manager
> --
>
> Key: PHOENIX-3259
> URL: https://issues.apache.org/jira/browse/PHOENIX-3259
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Due to the incompatible guava version in HBase (12 instead of 13), the 
> transaction manager will not work by just pointing it to the HBase lib dir. A 
> reasonable alternative would be for Phoenix to build another fat jar 
> specifically for the transaction manager which includes all necessary 
> dependencies (namely guava 13). Then the bin/tephra script (we should rename 
> that to tephra.sh perhaps?) would simply need to have this jar on the 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3259) Create fat jar for transaction manager

2016-09-27 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528453#comment-15528453
 ] 

Francis Chuang commented on PHOENIX-3259:
-

Are there any workarounds for this in the meantime?

> Create fat jar for transaction manager
> --
>
> Key: PHOENIX-3259
> URL: https://issues.apache.org/jira/browse/PHOENIX-3259
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> [issue description snipped; quoted in full in the previous message]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] The first rc (RC0) for Phoenix 4.8.1 is available

2016-09-27 Thread larsh
With 4 +1's (including mine) and no -1's, the vote passes. I will release RC0 as 
Phoenix 4.8.1.
On to 4.8.2 and/or 4.9.0 :)
-- Lars
  From: James Taylor 
 To: "dev@phoenix.apache.org" ; lars hofhansl 
 
 Sent: Monday, September 26, 2016 3:54 PM
 Subject: Re: [VOTE] The first rc (RC0) for Phoenix 4.8.1 is available
   
+1. Downloaded, compiled, and verified all unit tests pass.
    James
On Sat, Sep 24, 2016 at 8:55 PM,  wrote:

My key is in the KEYS file in the dev directory only.
Will load it into the KEYS file you mention.
My key's fingerprint is A1A7 5143 64FB 05BC C9A1  5A9D D0BE B8C5 C7CF E328.
-- Lars

      From: Josh Elser 
 To: dev@phoenix.apache.org
 Sent: Friday, September 23, 2016 2:48 PM
 Subject: Re: [VOTE] The first rc (RC0) for Phoenix 4.8.1 is available

+1 (binding)

However, it looks like your GPG key isn't present in the included KEYS
file nor in subversion[1], Lars. I don't think the fact that it's not in
the source is a problem, but it definitely needs to be present in dist.a.o
before this is released.

For src's

* sigs/xsums OK
* L&N look OK
* apache-rat:check OK
* mvn verify -DskipITs OK (ITs started crashing on my poor little laptop)

For bin's:

* sigs/xsums OK
* L&N look OK
* Verified no new dependencies were added since 4.8.0

- Josh

[1] https://dist.apache.org/repos/dist/release/phoenix/KEYS

la...@apache.org wrote:
> Hello Fellow Phoenix'ers,
>
> The first RC for Apache Phoenix 4.8.1 is available. This is a patch release 
> for the Phoenix 4.8 release line,
> compatible with Apache HBase 0.98, 1.0, 1.1 & 1.2.
>
> This release fixes the following 43 issues:
> [43-issue list snipped; identical to the release announcement at the top of this digest]

[jira] [Commented] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528330#comment-15528330
 ] 

Hudson commented on PHOENIX-3334:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1421 (See 
[https://builds.apache.org/job/Phoenix-master/1421/])
PHOENIX-3334 ConnectionQueryServicesImpl should close HConnection if (samarth: 
rev 58596bbc416ce577d3407910f1a127150d8c)
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/exception/RetriableUpgradeException.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/UpgradeRequiredException.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/UpgradeNotRequiredException.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/UpgradeInProgressException.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch, PHOENIX-3334_4.x-HBase-0.98.patch
>
>
> We are seeing ZK connection leaks when there's an error during Phoenix 
> connection creation.  ConnectionQueryServicesImpl grabs an HConnection during 
> init, which creates a ZK ClientCnxn which starts two threads (EventThread, 
> SendThread).  Later in the Phoenix connection init, there's an exception (in 
> our case, incorrect server jar version).  Phoenix bubbles up the exception 
> but never explicitly calls close on the HConnection, so the ZK threads stay 
> alive.
> This was perhaps partially by design as the HConnectionImplementation is 
> supposed to have a DelayedClosing reaper thread that reaps any stale ZK 
> connections.  However, because of HBASE-11354, that reaper never gets 
> started. (we are running HBase 0.98)
> In any case, this reaper stuff was deprecated in HBASE-6778, so clients 
> should close the connection themselves.
> {code}
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1167)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1034)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1370)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2116)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:828) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2275)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2244)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> 
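
For readers skimming the digest: the fix applies the usual close-on-failed-init
pattern. A hypothetical sketch (not the actual patch; names follow the HBase
0.98-era API shown in the trace above):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

// Sketch only: close the HConnection when a later init step throws, so the
// ZK ClientCnxn threads (EventThread/SendThread) are not leaked.
class InitCleanupSketch {
    private HConnection connection;

    void init(Configuration conf) throws IOException {
        connection = HConnectionManager.createConnection(conf);
        try {
            checkClientServerCompatibility(); // may throw, e.g. on a jar version mismatch
        } catch (IOException | RuntimeException e) {
            connection.close(); // explicit close; the deprecated reaper won't do it
            throw e;
        }
    }

    private void checkClientServerCompatibility() throws IOException {
        // placeholder for the compatibility check that failed in the report
    }
}
{code}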

[jira] [Commented] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528148#comment-15528148
 ] 

Hudson commented on PHOENIX-3334:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #31 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/31/])
PHOENIX-3334 ConnectionQueryServicesImpl should close HConnection if (samarth: 
rev e9c65009d2cbf6fcc73e45eca7a3856d732d8dd7)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch, PHOENIX-3334_4.x-HBase-0.98.patch
>
>
> [issue description and stack trace snipped; quoted in full in the first PHOENIX-3334 message above]

[jira] [Resolved] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-3334.
---
Resolution: Fixed

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch, PHOENIX-3334_4.x-HBase-0.98.patch
>
>
> [issue description and stack trace snipped; quoted in full in the first PHOENIX-3334 message above]

[jira] [Updated] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3334:
--
Attachment: PHOENIX-3334_4.x-HBase-0.98.patch

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch, PHOENIX-3334_4.x-HBase-0.98.patch
>
>
> [issue description and stack trace snipped; quoted in full in the first PHOENIX-3334 message above]

[jira] [Updated] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3334:
--
Attachment: (was: PHOENIX-3334_4.x-HBase-0.98.patch)

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch
>
>
> [issue description and stack trace snipped; quoted in full in the first PHOENIX-3334 message above]

[jira] [Updated] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3334:
--
Attachment: PHOENIX-3334_4.x-HBase-0.98.patch

Slightly different patch for the 4.x and master branches.

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch, PHOENIX-3334_4.x-HBase-0.98.patch
>
>
> [issue description and stack trace snipped; quoted in full in the first PHOENIX-3334 message above]

[jira] [Commented] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527803#comment-15527803
 ] 

James Taylor commented on PHOENIX-3334:
---

+1

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch
>
>
> [issue description and stack trace snipped; quoted in full in the first PHOENIX-3334 message above]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3334:
--
Attachment: PHOENIX-3334.patch

[~jamestaylor], please review.

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3334.patch
>
>
> [issue description and stack trace snipped; quoted in full in the first PHOENIX-3334 message above]

[jira] [Commented] (PHOENIX-3153) Convert join-related IT tests to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527432#comment-15527432
 ] 

Hudson commented on PHOENIX-3153:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1420 (See 
[https://builds.apache.org/job/Phoenix-master/1420/])
PHOENIX-3153 Convert join-related IT tests to be derived from (jamestaylor: rev 
c6e703dd24f8f47bb7cb610143fca4967fdfacdf)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/SubqueryIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelRunListener.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinLocalIndexIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryWithLimitIT.java
* (edit) pom.xml
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinIT.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/query/BaseConnectionlessQueryTest.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SubqueryUsingSortMergeJoinIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/UserDefinedFunctionsIT.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/JoinQueryCompilerTest.java
* (add) phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseJoinIT.java


> Convert join-related IT tests to be derived from 
> BaseHBaseManagedTimeTableReuseIT
> -
>
> Key: PHOENIX-3153
> URL: https://issues.apache.org/jira/browse/PHOENIX-3153
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
>Priority: Minor
> Attachments: PHOENIX-3153.patch
>
>
> The following 5 test cases follow the same pattern (initJoinTableValues method 
> for table generation) and are still extending BaseHBaseManagedTimeIT:
> HashJoinIT
> SortMergeJoinIT
> HashJoinLocalIndexIT
> SubqueryIT
> SubqueryUsingSortMergeJoinIT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2016-09-27 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527244#comment-15527244
 ] 

Kevin Liew edited comment on PHOENIX-476 at 9/27/16 8:08 PM:
-

Thanks James, will do. I didn’t have the right context to understand the 
interactions between the coprocessor and driver; I was too focused on the 
individual components but I’m starting to see how it fits together.


was (Author: kliew):
Thanks James, will do. I didn’t have the right context; I was too focused on 
the individual parts but I’m starting to see how it fits together.

> Support declaration of DEFAULT in CREATE statement
> --
>
> Key: PHOENIX-476
> URL: https://issues.apache.org/jira/browse/PHOENIX-476
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: enhancement
> Attachments: PHOENIX-476.2.patch, PHOENIX-476.3.patch, 
> PHOENIX-476.patch
>
>
> Support the declaration of a default value in the CREATE TABLE/VIEW statement 
> like this:
> CREATE TABLE Persons (
> Pid int NOT NULL PRIMARY KEY,
> LastName varchar(255) NOT NULL,
> FirstName varchar(255),
> Address varchar(255),
> City varchar(255) DEFAULT 'Sandnes'
> )
> To implement this, we'd need to:
> 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
> the value when the table is created (in MetaDataClient).
> 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
> is present, since the column will never be null.
> 3. add a getDefaultValue() accessor in PColumn
> 4.  for a row key column, during UPSERT use the default value if no value was 
> specified for that column. This could be done in the PTableImpl.newKey method.
> 5.  for a key value column with a default value, we can get away without 
> incurring any storage cost. Although this takes a little more effort than 
> persisting the default value on an UPSERT for key value columns, this approach 
> has the benefit of not incurring any storage cost for a default value.
> * serialize any default value into KeyValueColumnExpression
> * in the evaluate method of KeyValueColumnExpression, conditionally use 
> the default value if the column value is not present. If doing partial 
> evaluation, you should not yet return the default value, as we may not have 
> encountered the KeyValue for the column yet (since a filter evaluates 
> each time it sees each KeyValue, and there may be more than one KeyValue 
> referenced in the expression). Partial evaluation is determined by calling 
> Tuple.isImmutable(), where false means it is NOT doing partial evaluation, 
> while true means it is.
> * modify EvaluateOnCompletionVisitor by adding a visitor method for 
> RowKeyColumnExpression and KeyValueColumnExpression to set 
> evaluateOnCompletion to true if they have a default value specified. This 
> will cause filter evaluation to execute one final time after all KeyValues 
> for a row have been seen, since it's at this time we know we should use the 
> default value.
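
To make step 5 concrete, here is a minimal sketch of the conditional-default
evaluation described above. It is an illustration under assumptions, not the
attached patch: the class shape is invented, and only the Tuple and
ImmutableBytesWritable types are Phoenix's own.

{code}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.phoenix.schema.tuple.Tuple;

// Hypothetical sketch (not PHOENIX-476's patch): fall back to a serialized
// default only once the row is known to be complete.
public class DefaultingColumnExpression {
    private final byte[] family;
    private final byte[] qualifier;
    private final byte[] defaultValue; // serialized default, or null if none

    public DefaultingColumnExpression(byte[] family, byte[] qualifier,
            byte[] defaultValue) {
        this.family = family;
        this.qualifier = qualifier;
        this.defaultValue = defaultValue;
    }

    public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
        if (tuple.getValue(family, qualifier, ptr)) {
            return true; // a stored value exists; use it
        }
        if (!tuple.isImmutable()) {
            // Partial evaluation: the KeyValue for this column may still
            // arrive, so do not surface the default yet.
            return false;
        }
        if (defaultValue != null) {
            ptr.set(defaultValue); // row is complete: surface the default
            return true;
        }
        return false; // no value and no default: evaluates to null
    }
}
{code}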



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2016-09-27 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527244#comment-15527244
 ] 

Kevin Liew commented on PHOENIX-476:


Thanks James, will do. I didn’t have the right context; I was too focused on 
the individual parts but I’m starting to see how it fits together.

> Support declaration of DEFAULT in CREATE statement
> --
>
> Key: PHOENIX-476
> URL: https://issues.apache.org/jira/browse/PHOENIX-476
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: enhancement
> Attachments: PHOENIX-476.2.patch, PHOENIX-476.3.patch, 
> PHOENIX-476.patch
>
>
> Support the declaration of a default value in the CREATE TABLE/VIEW statement 
> like this:
> CREATE TABLE Persons (
> Pid int NOT NULL PRIMARY KEY,
> LastName varchar(255) NOT NULL,
> FirstName varchar(255),
> Address varchar(255),
> City varchar(255) DEFAULT 'Sandnes'
> )
> To implement this, we'd need to:
> 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
> the value when the table is created (in MetaDataClient).
> 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
> is present, since the column will never be null.
> 3. add a getDefaultValue() accessor in PColumn
> 4.  for a row key column, during UPSERT use the default value if no value was 
> specified for that column. This could be done in the PTableImpl.newKey method.
> 5.  for a key value column with a default value, we can get away without 
> incurring any storage cost. This takes a little more effort than persisting 
> the default value on an UPSERT for key value columns, but it avoids storing 
> the default value entirely.
> * serialize any default value into KeyValueColumnExpression
> * in the evaluate method of KeyValueColumnExpression, conditionally use 
> the default value if the column value is not present. If doing partial 
> evaluation, you should not yet return the default value, as we may not have 
> encountered the KeyValue for the column yet (since a filter evaluates 
> each time it sees each KeyValue, and there may be more than one KeyValue 
> referenced in the expression). Partial evaluation is determined by calling 
> Tuple.isImmutable(), where false means it is NOT doing partial evaluation, 
> while true means it is.
> * modify EvaluateOnCompletionVisitor by adding a visitor method for 
> RowKeyColumnExpression and KeyValueColumnExpression to set 
> evaluateOnCompletion to true if they have a default value specified. This 
> will cause filter evaluation to execute one final time after all KeyValues 
> for a row have been seen, since it's at this time we know we should use the 
> default value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3273) Support both "!=" and "<>" in Calcite-Phoenix

2016-09-27 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue resolved PHOENIX-3273.
--
Resolution: Fixed

Now with CALCITE-1374, we can allow "!=" as an alternative to "<>" in 
Calcite-Phoenix by changing our default SQL conformance level to ORACLE_10. 
I'll revert the previous check-in which replaced all occurrences of "!=" in SQL 
queries with "<>".

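For reference, a small sketch of what flipping the conformance level looks
like against Calcite's parser API. The names below assume Calcite 1.9-era
APIs; this is an illustration, not the actual Calcite-Phoenix change.

{code}
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.calcite.sql.validate.SqlConformanceEnum;

// Illustration: ORACLE_10 conformance accepts "!=" as an alternative to "<>";
// the default conformance rejects it at parse time.
public class BangEqualDemo {
    public static void main(String[] args) throws Exception {
        SqlParser parser = SqlParser.create(
            "SELECT * FROM t WHERE a != 5",
            SqlParser.configBuilder()
                .setConformance(SqlConformanceEnum.ORACLE_10)
                .build());
        System.out.println(parser.parseQuery()); // parses successfully
    }
}
{code}
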
> Support both "!=" and "<>" in Calcite-Phoenix
> -
>
> Key: PHOENIX-3273
> URL: https://issues.apache.org/jira/browse/PHOENIX-3273
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: calcite
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3273) Support both "!=" and "<>" in Calcite-Phoenix

2016-09-27 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-3273:
-
Summary: Support both "!=" and "<>" in Calcite-Phoenix  (was: Replace "!=" 
with "<>" in all test cases)

> Support both "!=" and "<>" in Calcite-Phoenix
> -
>
> Key: PHOENIX-3273
> URL: https://issues.apache.org/jira/browse/PHOENIX-3273
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: calcite
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526997#comment-15526997
 ] 

Samarth Jain commented on PHOENIX-3334:
---

Sure, let me take a look.

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
>
> We are seeing ZK connection leaks when there's an error during Phoenix 
> connection creation.  ConnectionQueryServicesImpl grabs an HConnection during 
> init, which creates a ZK ClientCnxn which starts two threads (EventThread, 
> SendThread).  Later in the Phoenix connection init, there's an exception (in 
> our case, incorrect server jar version).  Phoenix bubbles up the exception 
> but never explicitly calls close on the HConnection, so the ZK threads stay 
> alive.
> This was perhaps partially by design as the HConnectionImplementation is 
> supposed to have a DelayedClosing reaper thread that reaps any stale ZK 
> connections.  However, because of HBASE-11354, that reaper never gets 
> started. (we are running HBase 0.98)
> In any case, this reaper stuff was deprecated in HBASE-6778, so clients 
> should close the connection themselves.
> {code}
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1167)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1034)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1370)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2116)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:828) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2275)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2244)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2244)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at java.sql.DriverManager.getConnection(DriverManager.java:664) 
> ~[na:1.8.0_60]
>   at java.sql.DriverManager.getConnection(DriverManager.java:270) 
> ~[na:1.8.0_60]
> {code}
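
A minimal sketch of the kind of cleanup the issue calls for, using the
HBase 0.98-era client API. The init structure below is invented for
illustration; it is not the actual Phoenix patch.

{code}
import java.io.IOException;
import java.sql.SQLException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

// Hypothetical sketch: close the HConnection if any later init step fails,
// so the ZK EventThread/SendThread do not outlive the failed connection.
public class SafeInit {
    public static HConnection init(Configuration config)
            throws SQLException, IOException {
        HConnection connection = HConnectionManager.createConnection(config);
        boolean success = false;
        try {
            initPhoenixMetadata(connection); // may throw, e.g. version mismatch
            success = true;
            return connection;
        } finally {
            if (!success) {
                connection.close(); // stops the ZK client threads
            }
        }
    }

    private static void initPhoenixMetadata(HConnection connection)
            throws SQLException {
        // stand-in for checkClientServerCompatibility(), ensureTableCreated(), ...
    }
}
{code}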



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526992#comment-15526992
 ] 

James Taylor commented on PHOENIX-3334:
---

How about a quick fix on this one, [~samarthjain]?

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
>
> We are seeing ZK connection leaks when there's an error during Phoenix 
> connection creation.  ConnectionQueryServicesImpl grabs an HConnection during 
> init, which creates a ZK ClientCnxn which starts two threads (EventThread, 
> SendThread).  Later in the Phoenix connection init, there's an exception (in 
> our case, incorrect server jar version).  Phoenix bubbles up the exception 
> but never explicitly calls close on the HConnection, so the ZK threads stay 
> alive.
> This was perhaps partially by design as the HConnectionImplementation is 
> supposed to have a DelayedClosing reaper thread that reaps any stale ZK 
> connections.  However, because of HBASE-11354, that reaper never gets 
> started. (we are running HBase 0.98)
> In any case, this reaper stuff was deprecated in HBASE-6778, so clients 
> should close the connection themselves.
> {code}
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1167)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1034)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1370)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2116)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:828) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2275)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2244)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2244)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at java.sql.DriverManager.getConnection(DriverManager.java:664) 
> ~[na:1.8.0_60]
>   at java.sql.DriverManager.getConnection(DriverManager.java:270) 
> ~[na:1.8.0_60]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (PHOENIX-3334) ConnectionQueryServicesImpl should close HConnection if init fails

2016-09-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3334:
--
Fix Version/s: 4.8.2

> ConnectionQueryServicesImpl should close HConnection if init fails
> --
>
> Key: PHOENIX-3334
> URL: https://issues.apache.org/jira/browse/PHOENIX-3334
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Vincent Poon
>Assignee: Samarth Jain
> Fix For: 4.9.0, 4.8.2
>
>
> We are seeing ZK connection leaks when there's an error during Phoenix 
> connection creation.  ConnectionQueryServicesImpl grabs an HConnection during 
> init, which creates a ZK ClientCnxn which starts two threads (EventThread, 
> SendThread).  Later in the Phoenix connection init, there's an exception (in 
> our case, incorrect server jar version).  Phoenix bubbles up the exception 
> but never explicitly calls close on the HConnection, so the ZK threads stay 
> alive.
> This was perhaps partially by design as the HConnectionImplementation is 
> supposed to have a DelayedClosing reaper thread that reaps any stale ZK 
> connections.  However, because of HBASE-11354, that reaper never gets 
> started. (we are running HBase 0.98)
> In any case, this reaper stuff was deprecated in HBASE-6778, so clients 
> should close the connection themselves.
> {code}
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1167)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1034)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1370)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2116)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:828) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2275)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2244)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2244)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
>  ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202) 
> ~[phoenix-core-4.7.0-sfdc-1.0.8.jar:4.7.0-sfdc-1.0.8]
>   at java.sql.DriverManager.getConnection(DriverManager.java:664) 
> ~[na:1.8.0_60]
>   at java.sql.DriverManager.getConnection(DriverManager.java:270) 
> ~[na:1.8.0_60]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526974#comment-15526974
 ] 

Josh Elser commented on PHOENIX-3240:
-

Tentative "yes", [~jamestaylor]. I'll grab the assignee for this, but anyone 
else having cycle to work on this can nab it from me if they'd like :)

> ClassCastException from Pig loader
> --
>
> Key: PHOENIX-3240
> URL: https://issues.apache.org/jira/browse/PHOENIX-3240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: YoungWoo Kim
> Fix For: 4.9.0
>
>
> I'm loading data from a Hive table into a Phoenix table using the Phoenix-Pig 
> integration. My Pig script looks like the following:
> {code}
> T = LOAD 'mydb.$TBL' USING org.apache.hive.hcatalog.pig.HCatLoader();
> STORE T into 'hbase://MYSCHEMA.$TBL' using
> org.apache.phoenix.pig.PhoenixHBaseStorage('i004,i005,i006','-batchSize 
> 1000');
> {code}
> If the source table has a timestamp column, the MapReduce job for the Pig script does not work:
> {noformat}
> ERROR 0: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast 
> to org.apache.phoenix.shaded.org.joda.time.DateTime
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to 
> org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
>   at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
>   at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>   at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>   at org.apache.pig.PigServer.execute(PigServer.java:1364)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>   at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>   at org.apache.pig.Main.run(Main.java:502)
>   at org.apache.pig.Main.main(Main.java:177)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be 
> cast to org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:201)
>   at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:260)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-3240:
---

Assignee: Josh Elser

> ClassCastException from Pig loader
> --
>
> Key: PHOENIX-3240
> URL: https://issues.apache.org/jira/browse/PHOENIX-3240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: YoungWoo Kim
>Assignee: Josh Elser
> Fix For: 4.9.0
>
>
> I'm loading data from a Hive table into a Phoenix table using the Phoenix-Pig 
> integration. My Pig script looks like the following:
> {code}
> T = LOAD 'mydb.$TBL' USING org.apache.hive.hcatalog.pig.HCatLoader();
> STORE T into 'hbase://MYSCHEMA.$TBL' using
> org.apache.phoenix.pig.PhoenixHBaseStorage('i004,i005,i006','-batchSize 
> 1000');
> {code}
> If the source table has a timestamp column, the MapReduce job for the Pig script does not work:
> {noformat}
> ERROR 0: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast 
> to org.apache.phoenix.shaded.org.joda.time.DateTime
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to 
> org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
>   at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
>   at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>   at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>   at org.apache.pig.PigServer.execute(PigServer.java:1364)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>   at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>   at org.apache.pig.Main.run(Main.java:502)
>   at org.apache.pig.Main.main(Main.java:177)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be 
> cast to org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:201)
>   at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:260)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526964#comment-15526964
 ] 

James Taylor commented on PHOENIX-3240:
---

Would you have some spare cycles in the next few weeks to fix this in time for 
4.9.0, [~elserj]?

> ClassCastException from Pig loader
> --
>
> Key: PHOENIX-3240
> URL: https://issues.apache.org/jira/browse/PHOENIX-3240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: YoungWoo Kim
> Fix For: 4.9.0
>
>
> I'm loading data from a Hive table into a Phoenix table using the Phoenix-Pig 
> integration. My Pig script looks like the following:
> {code}
> T = LOAD 'mydb.$TBL' USING org.apache.hive.hcatalog.pig.HCatLoader();
> STORE T into 'hbase://MYSCHEMA.$TBL' using
> org.apache.phoenix.pig.PhoenixHBaseStorage('i004,i005,i006','-batchSize 
> 1000');
> {code}
> If the source table has a timestamp column, the MapReduce job for the Pig script does not work:
> {noformat}
> ERROR 0: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast 
> to org.apache.phoenix.shaded.org.joda.time.DateTime
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to 
> org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
>   at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
>   at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>   at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>   at org.apache.pig.PigServer.execute(PigServer.java:1364)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>   at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>   at org.apache.pig.Main.run(Main.java:502)
>   at org.apache.pig.Main.main(Main.java:177)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be 
> cast to org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:201)
>   at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:260)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526887#comment-15526887
 ] 

Josh Elser commented on PHOENIX-3240:
-

There are a couple of options I can think of quickly:

1) We turn the phoenix-pig jar into a self-contained artifact instead of using 
phoenix-client and phoenix-pig (which I think is presently the case)
2) phoenix-pig should bundle the dependencies that it needs instead of relying 
on pulling them from phoenix-client

Ultimately, they're both solving the problem in the same way: The phoenix-pig 
artifact should not expect to use bundled dependencies from the phoenix-client 
jar. The isolation the shaded client jar is *intended* to give us just 
backfired on us :)
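
To make the failure mode concrete, a tiny illustration (assuming both the
plain joda-time jar and the shaded phoenix-client jar are on the classpath)
of why the cast in TypeUtil fails:

{code}
// Illustration only: after shading, the relocated DateTime is an unrelated
// class, so a cast from the unshaded one always throws ClassCastException.
public class ShadingDemo {
    public static void main(String[] args) {
        Object pigValue = new org.joda.time.DateTime(); // what Pig hands over
        org.apache.phoenix.shaded.org.joda.time.DateTime phoenixValue =
            (org.apache.phoenix.shaded.org.joda.time.DateTime) pigValue; // CCE
        System.out.println(phoenixValue);
    }
}
{code}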

> ClassCastException from Pig loader
> --
>
> Key: PHOENIX-3240
> URL: https://issues.apache.org/jira/browse/PHOENIX-3240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: YoungWoo Kim
> Fix For: 4.9.0
>
>
> I'm loading data from a Hive table into a Phoenix table using the Phoenix-Pig 
> integration. My Pig script looks like the following:
> {code}
> T = LOAD 'mydb.$TBL' USING org.apache.hive.hcatalog.pig.HCatLoader();
> STORE T into 'hbase://MYSCHEMA.$TBL' using
> org.apache.phoenix.pig.PhoenixHBaseStorage('i004,i005,i006','-batchSize 
> 1000');
> {code}
> If the source table has a timestamp column, the MapReduce job for the Pig script does not work:
> {noformat}
> ERROR 0: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast 
> to org.apache.phoenix.shaded.org.joda.time.DateTime
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to 
> org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
>   at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
>   at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>   at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>   at org.apache.pig.PigServer.execute(PigServer.java:1364)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>   at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>   at org.apache.pig.Main.run(Main.java:502)
>   at org.apache.pig.Main.main(Main.java:177)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be 
> cast to org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:201)
>   at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:260)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: REGEXP_REPLACE does not replace to ESC character

2016-09-27 Thread James Taylor
Please file a JIRA, ideally with a unit test that reproduces the issue.

Thanks,
James

On Tue, Sep 27, 2016 at 3:27 AM, Sangeeta Soman 
wrote:

> Hi,
>
> We are facing an issue where we are trying to replace a particular pattern
> to "Esc" character by using the REGEXP_REPLACE function.
>
> We see that in the output we get only "e" and not the "Esc" character.
>
> Steps to reproduce:
>
> 1. Create a test table
>
> CREATE TABLE TEST (ID INTEGER PRIMARY KEY, DATA VARCHAR);
>
>
> 2. Upsert sample data
>
> UPSERT INTO TEST VALUES (1, '#@@##!@@#SANJAY');
>
>
> 3. Replace a character sequence with "Esc" (\e)
>
>  SELECT REGEXP_REPLACE(DATA, '#!@@#', '\e') FROM TEST;
>
>
> 4. Result:
>
> +--------------------------------------+
> | REGEXP_REPLACE(DATA, '#!@@#', '\e')  |
> +--------------------------------------+
> | #@@#eSANJAY                          |
> +--------------------------------------+
>
>
>
> We are using Phoenix on Amazon EMR. The EMR version is 4.7.2, and the
> Phoenix version that EMR runs is 4.7.0.
>
>
>
> Best Regards,
> Sangeeta Soman
>


REGEXP_REPLACE does not replace to ESC character

2016-09-27 Thread Sangeeta Soman
Hi,

We are facing an issue where we are trying to replace a particular pattern
to "Esc" character by using the REGEXP_REPLACE function.

We see that in the output we get only "e" and not the "Esc" character.

Steps to reproduce:

1. Create a test table

CREATE TABLE TEST (ID INTEGER PRIMARY KEY, DATA VARCHAR);


2. Upsert sample data

UPSERT INTO TEST VALUES (1, '#@@##!@@#SANJAY');


3. Replace a character sequence with "Esc" (\e)

 SELECT REGEXP_REPLACE(DATA, '#!@@#', '\e') FROM TEST;


4. Result:

+--------------------------------------+
| REGEXP_REPLACE(DATA, '#!@@#', '\e')  |
+--------------------------------------+
| #@@#eSANJAY                          |
+--------------------------------------+



We are using Phoenix on Amazon EMR. The EMR version is 4.7.2, and the
Phoenix version that EMR runs is 4.7.0.



Best Regards,
Sangeeta Soman

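The reported output matches plain Java replacement semantics, which Phoenix's
string-based REGEXP_REPLACE builds on: in a Java replacement string a
backslash merely escapes the following character, so '\e' collapses to a
literal 'e' rather than the ESC control character (\u001B). A stand-alone
check, JDK only:

{code}
public class ReplacementDemo {
    public static void main(String[] args) {
        // The replacement string is the two characters \ and e, exactly what
        // the SQL literal '\e' passes through; the backslash escapes the 'e'.
        String out = "#@@##!@@#SANJAY".replaceAll("#!@@#", "\\e");
        System.out.println(out); // prints #@@#eSANJAY, as in the report
    }
}
{code}
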

[jira] [Commented] (PHOENIX-3335) Improve documentation of unsigned_long type mapping

2016-09-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526762#comment-15526762
 ] 

James Taylor commented on PHOENIX-3335:
---

Agreed, [~yhxx511]. If you want to put together a documentation patch, that'd 
be much appreciated. See https://phoenix.apache.org/building_website.html

> Improve documentation of unsigned_long type mapping
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
>
> When I use the increment function in HBase 2.0.x to push a value like 
> '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase, and then create a Phoenix 
> table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> declaring the pv column as BIGINT maps the value to 
> -9223372036854775805,
> whereas the UNSIGNED_LONG type works correctly.
> It looks a little strange.
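
The mismatch is a serialization detail: Phoenix's BIGINT flips the sign bit
so that negative values sort before positive ones, while HBase's Increment
stores a plain big-endian long, which is exactly what UNSIGNED_LONG expects.
A hedged sketch of the arithmetic (the XOR below simplifies Phoenix's actual
decode path):

{code}
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: decode a raw HBase counter both ways. The value 3 reproduces the
// number in the report (the counter had presumably been incremented to 3).
public class LongMappingDemo {
    public static void main(String[] args) {
        byte[] counter = Bytes.toBytes(3L); // what HBase Increment stores
        long asUnsignedLong = Bytes.toLong(counter);             // 3
        long asBigint = Bytes.toLong(counter) ^ Long.MIN_VALUE;  // sign bit flipped
        System.out.println(asUnsignedLong); // 3
        System.out.println(asBigint);       // -9223372036854775805
    }
}
{code}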



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3335) Improve documentation of unsigned_long type mapping

2016-09-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3335:
--
Summary: Improve documentation of unsigned_long type mapping  (was: long type 
in hbase mapping pheonix type error)

> Improve documentation of unsigned_long type mapping
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
>
> When I use the increment function in HBase 2.0.x to push a value like 
> '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase, and then create a Phoenix 
> table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> declaring the pv column as BIGINT maps the value to 
> -9223372036854775805,
> whereas the UNSIGNED_LONG type works correctly.
> It looks a little strange.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3335) long type in hbase mapping pheonix type error

2016-09-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3335:
--
Issue Type: Task  (was: Improvement)

> long type in hbase mapping pheonix type error
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
>
> When I use the increment function in HBase 2.0.x to push a value like 
> '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase, and then create a Phoenix 
> table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> declaring the pv column as BIGINT maps the value to 
> -9223372036854775805,
> whereas the UNSIGNED_LONG type works correctly.
> It looks a little strange.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526745#comment-15526745
 ] 

James Taylor commented on PHOENIX-3240:
---

WDYT, [~elserj] & [~sergey.soldatov]?

> ClassCastException from Pig loader
> --
>
> Key: PHOENIX-3240
> URL: https://issues.apache.org/jira/browse/PHOENIX-3240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: YoungWoo Kim
> Fix For: 4.9.0
>
>
> I'm loading data from a Hive table into a Phoenix table using the Phoenix-Pig 
> integration. My Pig script looks like the following:
> {code}
> T = LOAD 'mydb.$TBL' USING org.apache.hive.hcatalog.pig.HCatLoader();
> STORE T into 'hbase://MYSCHEMA.$TBL' using
> org.apache.phoenix.pig.PhoenixHBaseStorage('i004,i005,i006','-batchSize 
> 1000');
> {code}
> If the source table has a timestamp column, the MapReduce job for the Pig script does not work:
> {noformat}
> ERROR 0: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast 
> to org.apache.phoenix.shaded.org.joda.time.DateTime
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to 
> org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
>   at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
>   at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>   at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>   at org.apache.pig.PigServer.execute(PigServer.java:1364)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>   at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>   at org.apache.pig.Main.run(Main.java:502)
>   at org.apache.pig.Main.main(Main.java:177)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be 
> cast to org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:201)
>   at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:260)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3240:
--
Fix Version/s: 4.9.0

> ClassCastException from Pig loader
> --
>
> Key: PHOENIX-3240
> URL: https://issues.apache.org/jira/browse/PHOENIX-3240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: YoungWoo Kim
> Fix For: 4.9.0
>
>
> I'm loading data from a Hive table into a Phoenix table using the Phoenix-Pig 
> integration. My Pig script looks like the following:
> {code}
> T = LOAD 'mydb.$TBL' USING org.apache.hive.hcatalog.pig.HCatLoader();
> STORE T into 'hbase://MYSCHEMA.$TBL' using
> org.apache.phoenix.pig.PhoenixHBaseStorage('i004,i005,i006','-batchSize 
> 1000');
> {code}
> If the source table has a timestamp column, the MapReduce job for the Pig script does not work:
> {noformat}
> ERROR 0: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast 
> to org.apache.phoenix.shaded.org.joda.time.DateTime
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to 
> org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
>   at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
>   at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>   at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>   at org.apache.pig.PigServer.execute(PigServer.java:1364)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>   at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>   at org.apache.pig.Main.run(Main.java:502)
>   at org.apache.pig.Main.main(Main.java:177)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be 
> cast to org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:201)
>   at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:260)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread Dimitri Capitaine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526693#comment-15526693
 ] 

Dimitri Capitaine commented on PHOENIX-3240:


I had the same problem. I unshaded the joda-time library.

> ClassCastException from Pig loader
> --
>
> Key: PHOENIX-3240
> URL: https://issues.apache.org/jira/browse/PHOENIX-3240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: YoungWoo Kim
>
> I'm loading data from a Hive table into a Phoenix table using the Phoenix-Pig 
> integration. My Pig script looks like the following:
> {code}
> T = LOAD 'mydb.$TBL' USING org.apache.hive.hcatalog.pig.HCatLoader();
> STORE T into 'hbase://MYSCHEMA.$TBL' using
> org.apache.phoenix.pig.PhoenixHBaseStorage('i004,i005,i006','-batchSize 
> 1000');
> {code}
> If the source table has a timestamp column, the MapReduce job for the Pig script does not work:
> {noformat}
> ERROR 0: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast 
> to org.apache.phoenix.shaded.org.joda.time.DateTime
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to 
> org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
>   at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
>   at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>   at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>   at org.apache.pig.PigServer.execute(PigServer.java:1364)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>   at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>   at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>   at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>   at org.apache.pig.Main.run(Main.java:502)
>   at org.apache.pig.Main.main(Main.java:177)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be 
> cast to org.apache.phoenix.shaded.org.joda.time.DateTime
>   at 
> org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:201)
>   at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:260)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread Dimitri Capitaine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526693#comment-15526693
 ] 

Dimitri Capitaine edited comment on PHOENIX-3240 at 9/27/16 4:46 PM:
-

I had the same problem. I unshaded the joda-time library.


was (Author: pirion):
i had the same problem. i unshaded joda-time library




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-3240) ClassCastException from Pig loader

2016-09-27 Thread Dimitri Capitaine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526693#comment-15526693
 ] 

Dimitri Capitaine edited comment on PHOENIX-3240 at 9/27/16 4:45 PM:
-

I had the same problem. I unshaded the joda-time library.


was (Author: pirion):
i had the same problem. i unshaded joda time




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


bulk-upsert spark phoenix

2016-09-27 Thread Antonio Murgia

Hi,

I would like to perform a bulk insert into HBase using Apache Phoenix from 
Spark. I tried the Apache Spark Phoenix library but, as far as I can tell 
from the code, it performs a JDBC batch of upserts (am I right?). Instead, I 
want to perform a bulk load like the one described in this blog post 
(https://zeyuanxy.github.io/HBase-Bulk-Loading/), while taking advantage of 
the automatic conversion from Java/Scala types to bytes.


I'm currently using Phoenix 4.5.2, so I cannot use Hive to manipulate the 
Phoenix table, and if possible I want to avoid spawning a MapReduce job that 
reads data from CSV (https://phoenix.apache.org/bulk_dataload.html). In 
short, I want to do what the CSV loader does with MapReduce, but 
programmatically from Spark (since the data I want to persist is already in 
memory).


Thank you all!
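
For reference, a sketch of one possible direction (everything here is an 
assumption: the JDBC URL jdbc:phoenix:zkhost, table MYSCHEMA.T, and columns 
PK and PV are placeholders, and it has not been verified against 4.5.2): run 
the UPSERTs on an uncommitted Phoenix connection, then harvest the KeyValues 
Phoenix would have written and hand them to an HFile writer for a bulk load, 
instead of committing a JDBC batch.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.phoenix.util.PhoenixRuntime;

public class PhoenixKeyValueSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost");
        conn.setAutoCommit(false); // keep the mutations on the client side
        PreparedStatement ps =
            conn.prepareStatement("UPSERT INTO MYSCHEMA.T (PK, PV) VALUES (?, ?)");
        ps.setString(1, "row1");
        ps.setLong(2, 42L);
        ps.executeUpdate();
        // Phoenix exposes the KeyValues it would have written; sorted, these
        // could be fed to HFileOutputFormat2 and bulk-loaded into HBase.
        Iterator<Pair<byte[], List<KeyValue>>> kvs =
            PhoenixRuntime.getUncommittedDataIterator(conn);
        while (kvs.hasNext()) {
            for (KeyValue kv : kvs.next().getSecond()) {
                System.out.println(kv); // replace with an HFile writer
            }
        }
        conn.rollback(); // discard the client-side mutations
        conn.close();
    }
}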



[jira] [Comment Edited] (PHOENIX-3335) long type in hbase mapping pheonix type error

2016-09-27 Thread William Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525803#comment-15525803
 ] 

William Yang edited comment on PHOENIX-3335 at 9/27/16 11:17 AM:
-

This is exactly how it works; see 
http://phoenix.apache.org/language/datatypes.html
BIGINT is a signed data type. The first bit of a negative value is 1, while 
for a positive value it is 0, so negative values compare 'greater than' 
positive values in byte-wise (dictionary) order. To keep all values sorted 
correctly, Phoenix flips the first bit of signed values so that negative 
values sort before positive ones. For unsigned types this is not needed.

So this is not a bug. If you want to map an existing HBase table to Phoenix, 
make sure you use the unsigned types for integers.

Since so many people have encountered this problem, I think we should add an 
explicit and detailed explanation here: 
http://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
[~giacomotaylor]


was (Author: yhxx511):
This is exactly how it works. see 
http://phoenix.apache.org/language/datatypes.html
BIGINT is a signed data type. As you know, the first bit of a negative value is 
1, but positive value 0. So negative values will be 'greater than' positive 
values in dictionary order. In order to let all values sorted in the right way, 
PHOENIX flip the first bit for signed values, so that negative values will sort 
before positive values. But for unsigned types, this is not needed.

So, this is not a bug. If you want to map hbase table to phoenix, make sure 
that your are using unsigned types for integers.

> long type in hbase mapping pheonix type error
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
>
> When I use the increment function in HBase 2.0.x,
> I push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase, 
> then I create a Phoenix table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> With the pv column typed as BIGINT, the value maps to 
> -9223372036854775805,
> and when I use the UNSIGNED_LONG type it works fine.
> It looks a little strange.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3335) long type in hbase mapping pheonix type error

2016-09-27 Thread William Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525803#comment-15525803
 ] 

William Yang commented on PHOENIX-3335:
---

This is exactly how it works; see 
http://phoenix.apache.org/language/datatypes.html
BIGINT is a signed data type. The first bit of a negative value is 1, while 
for a positive value it is 0, so negative values compare 'greater than' 
positive values in byte-wise (dictionary) order. To keep all values sorted 
correctly, Phoenix flips the first bit of signed values so that negative 
values sort before positive ones. For unsigned types this is not needed.

So this is not a bug. If you want to map an existing HBase table to Phoenix, 
make sure you use the unsigned types for integers.
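
To make the flip concrete, here is a minimal sketch (an illustration; 
Phoenix's real encoding lives in its PDataType implementations) of encoding 
a BIGINT so that unsigned byte-wise order matches numeric order, and of why 
a raw HBase increment value then decodes as a large negative number:

{code}
// Flip the sign bit, then write big-endian: byte-wise comparison of the
// result now matches the numeric order of the original signed longs.
static byte[] encodeBigint(long v) {
    long flipped = v ^ Long.MIN_VALUE; // flip the top (sign) bit
    byte[] b = new byte[8];
    for (int i = 7; i >= 0; i--) {
        b[i] = (byte) flipped;
        flipped >>>= 8;
    }
    return b;
}

// Decoding undoes the flip. A raw HBase Increment stores plain big-endian,
// so decoding its bytes for 1L "undoes" a flip that never happened and
// yields Long.MIN_VALUE + 1 -- a huge negative number, as reported here.
static long decodeBigint(byte[] b) {
    long v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | (b[i] & 0xFF);
    }
    return v ^ Long.MIN_VALUE;
}
{code}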




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3335) long type in hbase mapping pheonix type error

2016-09-27 Thread Kui Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kui Xiang updated PHOENIX-3335:
---
Description: 
When I use the increment function in HBase 2.0.x,

I push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase, 

then I create a Phoenix table like:

create table click_pv(pk varchar primary key,"default"."min_time" 
VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);

With the pv column typed as BIGINT, the value maps to -9223372036854775805,

and when I use the UNSIGNED_LONG type it works fine.

It looks a little strange.


  was:
when i use the function of increament in hbase 2.0.x

push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' in hbase 

then i create pheonix table like:

create table click_pv(pk varchar primary key,"default"."min_time" 
VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);

the pv column 's type use BIGINT will mapping the value to -9223372036854775805

and when i use UNSIGNED_LONG type ,it will works ok.






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (PHOENIX-3335) long type in hbase mapping pheonix type error

2016-09-27 Thread Kui Xiang (JIRA)
Kui Xiang created PHOENIX-3335:
--

 Summary: long type in hbase mapping pheonix type error
 Key: PHOENIX-3335
 URL: https://issues.apache.org/jira/browse/PHOENIX-3335
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.8.0
Reporter: Kui Xiang


When I use the increment function in HBase 2.0.x,

I push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase, 

then I create a Phoenix table like:

create table click_pv(pk varchar primary key,"default"."min_time" 
VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);

With the pv column typed as BIGINT, the value maps to -9223372036854775805,

and when I use the UNSIGNED_LONG type it works fine.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3254) IndexId Sequence is incremented even if index exists already.

2016-09-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525597#comment-15525597
 ] 

ASF GitHub Bot commented on PHOENIX-3254:
-

GitHub user ankitsinghal opened a pull request:

https://github.com/apache/phoenix/pull/211

PHOENIX-3254 IndexId Sequence is incremented even if index exists already

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ankitsinghal/phoenix PHOENIX-3254

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/211.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #211


commit 5268a90d38c21e8f93b8b2782f2d9e146642a463
Author: Ankit Singhal 
Date:   2016-09-27T09:29:11Z

PHOENIX-3254 IndexId Sequence is incremented even if index exists already




> IndexId Sequence is incremented even if index exists already.
> -
>
> Key: PHOENIX-3254
> URL: https://issues.apache.org/jira/browse/PHOENIX-3254
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3254.patch, PHOENIX-3254_wip.patch
>
>
> We increment the sequence at the client even when we are not going to 
> create an index (because the index already exists and the user issued 
> CREATE INDEX IF NOT EXISTS) or the DDL fails at a later stage (because the 
> parent table is not found, or similar).
> If this keeps happening, the user may reach the limit of Short.MAX_VALUE, 
> and a TOO_MANY_INDEXES exception will be thrown when they try to create a 
> new index.
> To prevent this, we should increment the sequence only when we are about 
> to create the index on the server.
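
As an illustration of the report (a hypothetical reproduction; the 
connection URL, table, and index names are made up), repeated IF NOT EXISTS 
DDL would consume a sequence value on every call even though only the first 
call creates anything:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexIdSequenceRepro {
    public static void main(String[] args) throws Exception {
        // Hypothetical reproduction: each call would bump the index-id
        // sequence, but only the first actually creates the index.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost");
             Statement stmt = conn.createStatement()) {
            for (int i = 0; i < 3; i++) {
                stmt.execute(
                    "CREATE LOCAL INDEX IF NOT EXISTS MY_IDX ON MY_TABLE(COL1)");
            }
        }
    }
}
{code}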



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-27 Thread ankitsinghal
GitHub user ankitsinghal opened a pull request:

https://github.com/apache/phoenix/pull/211

PHOENIX-3254 IndexId Sequence is incremented even if index exists already

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ankitsinghal/phoenix PHOENIX-3254

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/211.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #211


commit 5268a90d38c21e8f93b8b2782f2d9e146642a463
Author: Ankit Singhal 
Date:   2016-09-27T09:29:11Z

PHOENIX-3254 IndexId Sequence is incremented even if index exists already




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #210: PHOENIX 2890 Extend IndexTool to allow incrementa...

2016-09-27 Thread ankitsinghal
GitHub user ankitsinghal opened a pull request:

https://github.com/apache/phoenix/pull/210

PHOENIX 2890 Extend IndexTool to allow incremental index rebuilds



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ankitsinghal/phoenix PHOENIX-2890

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/210.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #210


commit 5cbf908184900e3708f5cf9471a529b8cf579e97
Author: Ankit Singhal 
Date:   2016-09-27T09:25:59Z

PHOENIX 2890 Extend IndexTool to allow incremental index rebuilds




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (PHOENIX-3153) Convert join-related IT tests to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3153:
--
Attachment: PHOENIX-3153.patch

> Convert join-related IT tests to be derived from 
> BaseHBaseManagedTimeTableReuseIT
> -
>
> Key: PHOENIX-3153
> URL: https://issues.apache.org/jira/browse/PHOENIX-3153
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
>Priority: Minor
> Attachments: PHOENIX-3153.patch
>
>
> The following five test classes follow the same pattern (the 
> initJoinTableValues method for table generation) and still extend 
> BaseHBaseManagedTimeIT:
> HashJoinIT
> SortMergeJoinIT
> HashJoinLocal index
> SubqueryIT
> SubqueryUsingSortMergeJoinIT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3153) Convert join-related IT tests to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3153:
-

Assignee: James Taylor

> Convert join-related IT tests to be derived from 
> BaseHBaseManagedTimeTableReuseIT
> -
>
> Key: PHOENIX-3153
> URL: https://issues.apache.org/jira/browse/PHOENIX-3153
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
>Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)