[jira] [Commented] (PHOENIX-2976) PKs included in index cause ArrayIndexOutOfBoundsException on ScanUtil

2016-09-22 Thread Saurabh Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515497#comment-15515497
 ] 

Saurabh Seth commented on PHOENIX-2976:
---

The patch actually includes additional tests in ScanQueryIT but the QA bot 
doesn't detect this. The javadoc warnings and zombie test are unrelated to this 
patch.

> PKs included in index cause ArrayIndexOutOfBoundsException on ScanUtil
> --
>
> Key: PHOENIX-2976
> URL: https://issues.apache.org/jira/browse/PHOENIX-2976
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Hoonmin Kim
>Assignee: Saurabh Seth
> Attachments: PHOENIX-2976.patch
>
>
> ScanUtil seems to throw an ArrayIndexOutOfBoundsException iff:
> - There are more than one PK column INCLUDED in the index.
> - And there's IN clause with PKs (indexed columns).
> - And there's a non-PK column comparison.
> I could avoid the exception by modifying ScanUtil.java to check the array 
> length and offset, but that is only a temporary workaround.
> *To reproduce my problem:*
> Table & Index
> {code}
> CREATE TABLE INCLUDED_PK (id1 SMALLINT NOT NULL, id2 SMALLINT NOT NULL, tid1 
> SMALLINT NOT NULL, liked SMALLINT CONSTRAINT pk PRIMARY KEY (id1, id2, tid1))
> CREATE INDEX INCLUDED_PK_INDEX ON INCLUDED_PK(id1 ASC, id2 ASC, liked ASC) 
> INCLUDE (tid1)
> {code}
> Query throwing an exception
> {code}
> SELECT id1, id2, tid1, liked FROM INCLUDED_PK WHERE ((id1,id2) IN ((1,1), 
> (2,2), (3,3))) AND liked > 10 limit 10
> {code}
> Testing the query with QueryPlanTest
> {code}
> 2016-06-09 14:18:44,589 DEBUG [main] 
> org.apache.phoenix.util.ReadOnlyProps(317): Creating new ReadOnlyProps due to 
> phoenix.query.dateFormat with yyyy-MM-dd HH:mm:ss.SSS!=yyyy-MM-dd
> 2016-06-09 14:18:44,597 DEBUG [main] 
> org.apache.phoenix.jdbc.PhoenixStatement(1402): Execute query: EXPLAIN SELECT 
> id1, id2, tid1, liked FROM INCLUDED_PK WHERE ((id1,id2) IN ((1,1), (2,2), 
> (3,3))) AND liked > 10 limit 10
> 2016-06-09 14:18:44,601 DEBUG [main] 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver(568): Re-resolved 
> stale table INCLUDED_PK with seqNum 0 at timestamp 0 with 4 columns: [ID1, 
> ID2, TID1, 0.LIKED]
> 2016-06-09 14:18:44,683 DEBUG [main] 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver(568): Re-resolved 
> stale table INCLUDED_PK_INDEX with seqNum 0 at timestamp 0 with 4 columns: 
> [:ID1, :ID2, 0:LIKED, :TID1]
> 2016-06-09 14:18:44,684 DEBUG [main] 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver(568): Re-resolved 
> stale table INCLUDED_PK with seqNum 0 at timestamp 0 with 4 columns: [ID1, 
> ID2, TID1, 0.LIKED]
> java.lang.ArrayIndexOutOfBoundsException: 6
>   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:417)
>   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:348)
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:320)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:293)
>   at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:130)
>   at 
> org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:302)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:149)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:100)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:559)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:510)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:205)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:157)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:236)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:484)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:464)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:443)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:271)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
>   at 
> 
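The reporter's temporary workaround (checking the array length and offset in ScanUtil) can be illustrated with a minimal sketch. This is a hypothetical stand-in for the real ScanUtil internals; `totalSpan`, `slotSpan`, and the slot-walk shape are illustrative, not the actual Phoenix code:

```java
// Hypothetical sketch of a defensive bounds check, in the spirit of the
// reporter's workaround. The real ScanUtil.setKey() walks key slots via an
// offset into a slot-span array; with PK columns INCLUDEd in the index the
// caller-supplied slot count can run past the end of that array, which is
// where the ArrayIndexOutOfBoundsException: 6 above comes from.
public class ScanKeySketch {
    // Sum the spans of the slots that actually exist, clamping the walk to
    // the array length instead of trusting the caller-supplied slot count.
    static int totalSpan(int[] slotSpan, int offset, int slotCount) {
        int total = 0;
        // Guard: stop at the end of slotSpan even if slotCount says otherwise.
        for (int i = offset; i < offset + slotCount && i < slotSpan.length; i++) {
            total += slotSpan[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] slotSpan = {1, 1, 1};            // three single-column slots
        // A naive walk with slotCount = 6 would index slotSpan[3] and throw;
        // the clamped walk simply stops at the array end.
        System.out.println(totalSpan(slotSpan, 0, 6)); // prints 3
    }
}
```

As the reporter notes, clamping like this only hides the symptom; the proper fix is to compute a consistent slot count in the first place.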

[VOTE] The first rc (RC0) for Phoenix 4.8.1 is available

2016-09-22 Thread larsh
Hello Fellow Phoenix'ers,

The first RC for Apache Phoenix 4.8.1 is available. This is a patch release for 
the Phoenix 4.8 release line,
compatible with Apache HBase 0.98, 1.0, 1.1 & 1.2.

This release fixes the following 43 issues:
    [PHOENIX-1367] - VIEW derived from another VIEW doesn't use parent VIEW 
indexes
    [PHOENIX-3195] - Slight safety improvement for using DistinctPrefixFilter
    [PHOENIX-3228] - Index tables should not be configured with a 
custom/smaller MAX_FILESIZE
    [PHOENIX-930] - duplicated columns cause query exception and drop table 
exception
    [PHOENIX-1647] - Correctly return that Phoenix supports schema name 
references in DatabaseMetaData
    [PHOENIX-2336] - Queries with small case column-names return empty 
result-set when working with Spark Datasource Plugin
    [PHOENIX-2474] - Cannot round to a negative precision (to the left of the 
decimal)
    [PHOENIX-2641] - Implicit wildcard in LIKE predicate search pattern
    [PHOENIX-2645] - Wildcard characters do not match newline characters
    [PHOENIX-2853] - Delete Immutable rows from View does not work if immutable 
index(secondary index) exists
    [PHOENIX-2944] - DATE Comparison Broken
    [PHOENIX-2946] - Projected comparison between date and timestamp columns 
always returns true
    [PHOENIX-2995] - Write performance severely degrades with large number of 
views
    [PHOENIX-3046] - NOT LIKE with wildcard unexpectedly returns results
    [PHOENIX-3054] - Counting zero null rows returns an empty result set
    [PHOENIX-3072] - Deadlock on region opening with secondary index recovery
    [PHOENIX-3148] - Reduce size of PTable so that more tables can be cached in 
the metadata cache.
    [PHOENIX-3162] - TableNotFoundException might be thrown when an index 
dropped while upserting.
    [PHOENIX-3164] - PhoenixConnection leak in PQS with security enabled
    [PHOENIX-3170] - Remove the futuretask from the list if 
StaleRegionBoundaryCacheException is thrown while initializing the scanners
    [PHOENIX-3175] - Unnecessary UGI proxy user impersonation check
    [PHOENIX-3185] - Error: ERROR 514 (42892): A duplicate column name was 
detected in the object definition or ALTER TABLE statement. 
columnName=TEST_TABLE.C1 (state=42892,code=514)
    [PHOENIX-3189] - HBase/ZooKeeper connection leaks when providing 
principal/keytab in JDBC url
    [PHOENIX-3203] - Tenant cache lookup in Global Cache fails in certain 
conditions
    [PHOENIX-3207] - Fix compilation failure on 4.8-HBase-1.2, 4.8-HBase-1.1 
and 4.8-HBase-1.0 branches after PHOENIX-3148
    [PHOENIX-3210] - Exception trying to cast Double to BigDecimal in 
UpsertCompiler
    [PHOENIX-3223] - Add hadoop classpath to PQS classpath
    [PHOENIX-3230] - Upgrade code running concurrently on different JVMs could 
make clients unusable
    [PHOENIX-3237] - Automatic rebuild of disabled index will fail if indexes 
of two tables are disabled at the same time
    [PHOENIX-3246] - U+2002 (En Space) not handled as whitespace in grammar
    [PHOENIX-3260] - MetadataRegionObserver.postOpen() can prevent region 
server from shutting down for a long duration
    [PHOENIX-3268] - Upgrade to Tephra 0.9.0
    [PHOENIX-3280] - Automatic attempt to rebuild all disabled index
    [PHOENIX-3291] - Do not throw return value of Throwables#propagate call
    [PHOENIX-3307] - Backward compatibility fails for tables with index (4.7.0 
client - 4.8.1 server)
    [PHOENIX-3323] - make_rc script fails to build the RC
    [PHOENIX-2785] - Do not store NULLs for immutable tables
    [PHOENIX-3081] - Misleading exception on async stats update after major 
compaction
    [PHOENIX-3116] - Support incompatible HBase 1.1.5 and HBase 1.2.2
    [PHOENIX-808] - Create snapshot of SYSTEM.CATALOG prior to upgrade and 
restore on any failure
    [PHOENIX-2990] - Ensure documentation on "time/date" datatypes/functions 
acknowledge lack of JDBC compliance
    [PHOENIX-2991] - Add missing documentation for functions
    [PHOENIX-3255] - Increase test coverage for TIMESTAMP

(the release notes are also available here: 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12337964)

The release includes both a source-only release and a convenience binary 
release for each supported HBase version.

The source tarball for supported HBase versions, including signatures, digests, 
etc can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.1-HBase-1.2-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.1-HBase-1.1-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.1-HBase-1.0-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.1-HBase-0.98-rc0/src/

The binary artifacts for supported HBase versions can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.1-HBase-1.2-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.1-HBase-1.1-rc0/bin/

[jira] [Commented] (PHOENIX-3273) Replace "!=" with "<>" in all test cases

2016-09-22 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515401#comment-15515401
 ] 

Maryann Xue commented on PHOENIX-3273:
--

The question is whether we should stop supporting "!=". I checked online and 
found that most databases actually support both, although "!=" is not in the 
SQL-92 standard.

> Replace "!=" with "<>" in all test cases
> 
>
> Key: PHOENIX-3273
> URL: https://issues.apache.org/jira/browse/PHOENIX-3273
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: calcite
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3323) make_rc script fails to build the RC

2016-09-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-3323.

   Resolution: Fixed
Fix Version/s: (was: 4.9.0)

> make_rc script fails to build the RC
> 
>
> Key: PHOENIX-3323
> URL: https://issues.apache.org/jira/browse/PHOENIX-3323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.8.1
>
>
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar
> The fix is to exclude the phoenix-\{core}|\{test} jars in Hive's dependency 
> directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3323) make_rc script fails to build the RC

2016-09-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-3323:
--

Assignee: Lars Hofhansl

> make_rc script fails to build the RC
> 
>
> Key: PHOENIX-3323
> URL: https://issues.apache.org/jira/browse/PHOENIX-3323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
>
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar
> The fix is to exclude the phoenix-\{core}|\{test} jars in Hive's dependency 
> directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Skip scan optimization failed for multi pk columns

2016-09-22 Thread William
Hi all,
   This is a simple scenario, there are two tables:
   create table t1 (pk integer primary key, a integer);
   create table t2 (pk1 integer not null, pk2 integer not null, a integer 
constraint pk primary key (pk1, pk2));


   Do the following selects:
  1. explain select * from t1 where (pk > 10 and pk < 20) or (pk > 30 and pk < 
40);
  results: SKIP SCAN ON 2 RANGES OVER T1 [11] - [40]
  
  2. explain select * from t2 where (pk1 > 10 and pk1 < 20) or (pk1 > 30 and 
pk1 < 40);
  results: FULL SCAN OVER T2 
   SERVER FILTER BY ((PK1 > 10 AND PK1 < 20) OR (PK1 > 30 AND 
PK1 < 40))


Apparently, the 2nd SELECT statement should use a skip scan instead of a full 
table scan, but T2 has two PK columns and the WhereOptimizer fails to optimize 
it. I went through the code and made a small improvement in 
WhereOptimizer#KeyExpressionVisitor#orKeySlots(); see the attached patch file 
for details. The main idea is that we allow a slot in childSlots to be null 
only if all slots after it are null too. So the following statements are 
still rejected:
select * from t2 where (pk1 > 10 and pk1 < 20)  or (pk2 > 30 and pk2 < 40)
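
That rule can be sketched as a small predicate. This is an illustrative stand-in for the check inside orKeySlots(); the class and method names are made up, and the real code operates on key-slot objects rather than a plain list:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration of the acceptance rule: a null slot among the
// child slots of an OR is tolerable only when every slot after it is null
// as well, i.e. only a trailing gap in the row key is allowed. A leading
// or interior gap means the scan cannot be bounded, so it is rejected.
public class OrKeySlotRule {
    static boolean onlyTrailingNulls(List<Object> childSlots) {
        boolean sawNull = false;
        for (Object slot : childSlots) {
            if (slot == null) {
                sawNull = true;
            } else if (sawNull) {
                return false; // non-null slot after a null: reject
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Range on pk1, nothing on pk2: trailing null only -> optimizable.
        System.out.println(onlyTrailingNulls(Arrays.asList("pk1-range", null)));
        // Nothing on pk1, range on pk2: leading null -> still rejected.
        System.out.println(onlyTrailingNulls(Arrays.asList(null, "pk2-range")));
    }
}
```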


Please review this. Thanks.
William.
   

Re: Architectural Understanding

2016-09-22 Thread James Taylor
On Thu, Sep 22, 2016 at 12:47 PM, John Leach  wrote:

> Can you validate my understanding?
>
> 1.  Importing Data: Online load via python and offline load via MapReduce.
>

There are many ways to import data. Since Phoenix stays true to the basic
HBase data model, you can import data in any way that HBase supports,
independent of Phoenix APIs, as long as you define your Phoenix schema to
be compatible with the serialization format you used for your cell values
and row key. Many users operate in this way. For example, there's a Storm
bolt that imports data directly.

Specifically using Phoenix-backed APIs, you can import using:
- CSV bulk import (MR-based)
- Hive (using our Phoenix Hive Storage Handler)
- Pig scripts (using our Phoenix StoreFunc)
- MR directly (using our Phoenix RecordWriter)
- Flume (using our Phoenix Flume integration)
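
Besides these batch paths, online imports go through the standard JDBC UPSERT path. A minimal sketch follows; the table, columns, and connection URL are hypothetical, and the helper only builds the parameterized UPSERT string that would be handed to a PreparedStatement:

```java
import java.util.Collections;
import java.util.List;

// Sketch of the online (JDBC) import path. Phoenix uses UPSERT rather than
// INSERT; this helper builds the parameterized statement text.
public class UpsertSketch {
    static String buildUpsert(String table, List<String> columns) {
        String cols = String.join(", ", columns);
        String params = String.join(", ", Collections.nCopies(columns.size(), "?"));
        return "UPSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")";
    }

    public static void main(String[] args) {
        String sql = buildUpsert("T1", List.of("PK", "A"));
        System.out.println(sql); // UPSERT INTO T1 (PK, A) VALUES (?, ?)
        // Against a live cluster (ZooKeeper quorum assumed) this would become:
        // try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        //      PreparedStatement ps = conn.prepareStatement(sql)) {
        //     ps.setInt(1, 1); ps.setInt(2, 42); ps.executeUpdate();
        //     conn.commit(); // Phoenix batches mutations until commit
        // }
    }
}
```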


> 2.  Transactional System: Tephra (Centralized Transactional System based
> on Yahoo’s Omid)
>

Yes. Stay tuned - we may have an Apache Omid one in the future too.


> 3.  Analytical Engine: HBase Coprocessors and JDBC Server/Client (i.e.
> where do you do aggregations and handle intermediate results)
>

We also have Spark, Hive, and Pig integration for analytics.


> 4.  Yarn Support: No except for MapReduce and Index Creation Bits
>

See above - many of those APIs tie into YARN. The standard queries you do
in our JDBC driver do not, though.


> 5.  Resource Management: ?  Thread pool w/ first in first out?
>

We have some configurations that drive resource management, most around
memory usage. For example, you can restrict a tenant to use a percentage of
available memory on the server-side. Outside of that we rely on HBase to do
resource management to a large extent. For example, a schema in Phoenix
maps to a namespace in HBase, which allows various ways of doing resource
management.
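
Those memory knobs are passed as connection properties. A small sketch under the assumption that the keys below match Phoenix's QueryServices names (`phoenix.query.maxGlobalMemoryPercentage` and `phoenix.query.maxTenantMemoryPercentage`); the values are arbitrary examples, not recommendations:

```java
import java.util.Properties;

// Sketch of the memory-related configuration mentioned above.
public class MemoryConfigSketch {
    static Properties memoryProps(String globalPct, String tenantPct) {
        Properties props = new Properties();
        // Max percentage of the region server heap Phoenix may use overall
        // (assumed key name).
        props.setProperty("phoenix.query.maxGlobalMemoryPercentage", globalPct);
        // Max percentage of that global budget a single tenant may use
        // (assumed key name).
        props.setProperty("phoenix.query.maxTenantMemoryPercentage", tenantPct);
        return props;
    }

    public static void main(String[] args) {
        Properties props = memoryProps("15", "100");
        // These would accompany
        // DriverManager.getConnection("jdbc:phoenix:localhost", props);
        System.out.println(props.getProperty("phoenix.query.maxGlobalMemoryPercentage"));
    }
}
```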


>
> Is this accurate?
>
> Regards,
> John Leach


[jira] [Commented] (PHOENIX-3323) make_rc script fails to build the RC

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515244#comment-15515244
 ] 

Hudson commented on PHOENIX-3323:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #30 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/30/])
PHOENIX-3323 make_rc script fails to build the RC. (larsh: rev 
39d54a10f664d32f2ae0e45aa3b53e40ae515af7)
* (edit) dev/make_rc.sh


> make_rc script fails to build the RC
> 
>
> Key: PHOENIX-3323
> URL: https://issues.apache.org/jira/browse/PHOENIX-3323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
>
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar
> The fix is to exclude the phoenix-\{core}|\{test} jars in Hive's dependency 
> directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Architectural Understanding

2016-09-22 Thread John Leach
Can you validate my understanding?

1.  Importing Data: Online load via python and offline load via MapReduce.
2.  Transactional System: Tephra (Centralized Transactional System based on 
Yahoo’s Omid)
3.  Analytical Engine: HBase Coprocessors and JDBC Server/Client (i.e. where do 
you do aggregations and handle intermediate results)
4.  Yarn Support: No except for MapReduce and Index Creation Bits
5.  Resource Management: ?  Thread pool w/ first in first out?

Is this accurate?

Regards,
John Leach

[jira] [Commented] (PHOENIX-3273) Replace "!=" with "<>" in all test cases

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515049#comment-15515049
 ] 

James Taylor commented on PHOENIX-3273:
---

[~maryannxue] - feel free to make this change in phoenix 4.x and master 
branches to reduce any merge headaches. Also, please make a note of this in 
PHOENIX-3283 if we plan to not support !=.

> Replace "!=" with "<>" in all test cases
> 
>
> Key: PHOENIX-3273
> URL: https://issues.apache.org/jira/browse/PHOENIX-3273
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: calcite
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3253) Make changes to tests to support method level parallelization

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515030#comment-15515030
 ] 

Hudson commented on PHOENIX-3253:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1417 (See 
[https://builds.apache.org/job/Phoenix-master/1417/])
PHOENIX-3253 Make changes to tests to support method level (jamestaylor: rev 
bebb5cedf761b132b78db675a19ede5849f6ea94)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerExceptionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/DisableLocalIndexIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/DynamicFamilyIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/EvaluationOfORIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/trace/PhoenixTracingEndToEndIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ChildViewsUseParentViewIndexIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantIdTypeIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/tx/TxCheckpointIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/StatementHintsIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ReverseFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsCollectorIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ArraysWithNullsIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CSVCommonsLoaderIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/LastValueFunctionIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/PercentileIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertValuesIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/MappingTableDataTypeIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/NamespaceSchemaMappingIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/SaltedIndexIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/NthValueFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexExpressionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/MutableRollbackIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/MultiCfQueryExecIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificTablesDMLIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ModulusExpressionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/RegexpSubstrFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CoalesceFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/AbsFunctionEnd2EndIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificTablesDDLIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/FirstValueFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ToDateFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/RegexpSplitFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/PowerFunctionEnd2EndIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TimezoneOffsetFunctionIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/ReadOnlyIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/RTrimFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/DynamicUpsertIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanAfterManualSplitIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ToCharFunctionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SqrtFunctionEnd2EndIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/IsNullIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayAppendFunctionIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseViewIT.java
* (edit) 

[jira] [Updated] (PHOENIX-3273) Replace "!=" with "<>" in all test cases

2016-09-22 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-3273:
-
Summary: Replace "!=" with "<>" in all test cases  (was: Support "!=" as an 
alternative to "<>" in Calcite parser)

> Replace "!=" with "<>" in all test cases
> 
>
> Key: PHOENIX-3273
> URL: https://issues.apache.org/jira/browse/PHOENIX-3273
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: calcite
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3323) make_rc script fails to build the RC

2016-09-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3323:
---
Summary: make_rc script fails to build the RC  (was: make_rc script failes 
to build the RC)

> make_rc script fails to build the RC
> 
>
> Key: PHOENIX-3323
> URL: https://issues.apache.org/jira/browse/PHOENIX-3323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
>
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar
> The fix is to exclude the phoenix-\{core}|\{test} jars in Hive's dependency 
> directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3323) make_rc script failes to build the RC

2016-09-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3323:
---
Fix Version/s: 4.8.1
   4.9.0

> make_rc script failes to build the RC
> -
>
> Key: PHOENIX-3323
> URL: https://issues.apache.org/jira/browse/PHOENIX-3323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
>
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar
> The fix is to exclude the phoenix-\{core}|\{test} jars in Hive's dependency 
> directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3323) make_rc script failes to build the RC

2016-09-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3323:
---
Description: 
cp: will not overwrite just-created 
'/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
 with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
cp: will not overwrite just-created 
'/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
 with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar

The fix is to exclude the phoenix-\{core}|\{test} jars in Hive's dependency 
directory.

  was:
cp: will not overwrite just-created 
'/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
 with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
cp: will not overwrite just-created 
'/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
 with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar

The fix is to exclude the phoenix-{core}|{test} jars in Hive's dependency 
directory.


> make_rc script failes to build the RC
> -
>
> Key: PHOENIX-3323
> URL: https://issues.apache.org/jira/browse/PHOENIX-3323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
> cp: will not overwrite just-created 
> '/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
>  with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar
> The fix is to exclude the phoenix-\{core}|\{test} jars in Hive's dependency 
> directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3323) make_rc script failes to build the RC

2016-09-22 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-3323:
--

 Summary: make_rc script failes to build the RC
 Key: PHOENIX-3323
 URL: https://issues.apache.org/jira/browse/PHOENIX-3323
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl


cp: will not overwrite just-created 
'/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98.jar'
 with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98.jar'
cp: will not overwrite just-created 
'/tmp/phoenix/dev/../release/apache-phoenix-4.8.1-HBase-0.98/apache-phoenix-4.8.1-HBase-0.98-bin/phoenix-core-4.8.1-HBase-0.98-tests.jar'
 with './phoenix-core/target/phoenix-core-4.8.1-HBase-0.98-tests.jar

The fix is to exclude the phoenix-{core}|{test} jars in Hive's dependency 
directory.
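
The exclusion described can be sketched with a simple name filter; the file names and the `phoenix-core-` prefix test below are illustrative stand-ins for whatever pattern the make_rc script actually uses:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative sketch of the fix: when gathering Hive's dependency jars for
// the release tarball, skip the phoenix-core jars (including the -tests
// variant) so cp never sees a second copy of a just-created file.
public class JarFilterSketch {
    static final Predicate<String> EXCLUDED =
            name -> name.startsWith("phoenix-core-");

    static List<String> jarsToCopy(List<String> candidates) {
        return candidates.stream()
                .filter(EXCLUDED.negate())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> jars = List.of(
                "phoenix-core-4.8.1-HBase-0.98.jar",
                "phoenix-core-4.8.1-HBase-0.98-tests.jar",
                "hive-exec-1.2.1.jar");
        System.out.println(jarsToCopy(jars)); // [hive-exec-1.2.1.jar]
    }
}
```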



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3253) Make changes to tests to support method level parallelization

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3253:
--
Attachment: PHOENIX-3253_addendum3.patch

Rename the method that generates unique table names and have it use an 
AtomicLong. Also, move flaky tests to a test class that turns off 
parallelization and fix some miscellaneous tests.

> Make changes to tests to support method level parallelization
> -
>
> Key: PHOENIX-3253
> URL: https://issues.apache.org/jira/browse/PHOENIX-3253
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3253.patch, PHOENIX-3253_addendum1.patch, 
> PHOENIX-3253_addendum2.patch, PHOENIX-3253_addendum3.patch
>
>
> Changes are necessary in our BaseHBaseTimeManagedIT tests to ensure that they 
> can run in parallel:
> - Using unique table names in all tests
> - Removing any usage of BaseTest.ensureTableCreated()
> - Change static methods and member variables to be instance level
> - Remove any required state dependencies between tests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514860#comment-15514860
 ] 

Hudson commented on PHOENIX-3307:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #29 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/29/])
PHOENIX-3307 Backward compatibility fails for tables with index (4.7.0 
(tdsilva: rev fa993d685875a82a0a0bd7172818814a1a162e61)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* (edit) phoenix-protocol/src/main/PTable.proto


> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create index on table that contain 
> schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:258)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:257)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1297)
> Note: table name was SCH.T and not SCH.SCH.T
> {noformat}
> Following commit caused it:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=71b0b62d98c96870db585f9a232dfb63db3a698d
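The SCH.SCH.T name in the stack trace above is consistent with a schema prefix being applied to a table name that is already fully qualified. A minimal illustration of that failure mode (not Phoenix's actual code; the class and method names are hypothetical):

```java
// Hedged sketch of how SCH.T can become SCH.SCH.T: unconditionally
// prefixing the schema onto a name that is already qualified.
public class QualifyBug {
    // Buggy variant: always prepends the schema.
    static String qualifyAlways(String schema, String name) {
        return schema + "." + name;
    }

    // Fixed variant: prepends only when the name is not already qualified.
    static String qualifyIfNeeded(String schema, String name) {
        return name.contains(".") ? name : schema + "." + name;
    }
}
```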





[jira] [Commented] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514841#comment-15514841
 ] 

Hudson commented on PHOENIX-3307:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1416 (See 
[https://builds.apache.org/job/Phoenix-master/1416/])
PHOENIX-3307 Backward compatibility fails for tables with index (4.7.0 
(tdsilva: rev 16f0da3d94506126d5ae8e36389e1590f7357ff0)
* (edit) phoenix-protocol/src/main/PTable.proto
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java


> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create index on table that contain 
> schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:258)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:257)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1297)
> Note: table name was SCH.T and not SCH.SCH.T
> {noformat}
> Following commit caused it:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=71b0b62d98c96870db585f9a232dfb63db3a698d





tracking progress of calcite branch

2016-09-22 Thread James Taylor
For those of you interested in tracking the progress of the Phoenix Calcite
integration we're doing over on the calcite branch, you can monitor the
automatic Jenkins builds we do with each check-in here:
https://builds.apache.org/job/Phoenix-calcite/

For example, the last check-in reduced the failed tests by three as shown
here: https://builds.apache.org/job/Phoenix-calcite/13/ and here:
https://builds.apache.org/job/Phoenix-calcite/13/testReport/

Thanks,
James


[jira] [Commented] (PHOENIX-3264) Allow TRUE and FALSE to be used as literal constants

2016-09-22 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514734#comment-15514734
 ] 

Maryann Xue commented on PHOENIX-3264:
--

Thank you, [~julianhyde], for the solution! Actually it's exactly what we do in 
ToExpressionTest 
(phoenix-core/src/test/java/org/apache/phoenix/calcite/ToExpressionTest.java), 
which verifies that a SQL string parsed by Calcite is converted into the same 
Phoenix Expression as it would be if parsed through old Phoenix. The only 
difference for this issue is that we won't have to go through the SQL string 
and parser again, because we already have the SqlNode; we just need to wrap it 
in a SqlSelect. But the rest of the work should be pretty much the same as in 
ToExpressionTest.

> Allow TRUE and FALSE to be used as literal constants
> 
>
> Key: PHOENIX-3264
> URL: https://issues.apache.org/jira/browse/PHOENIX-3264
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Eric Lomore
> Attachments: Sql2RelImplementation.png, SqlLiteral.png, 
> SqlNodeToRexConverterImpl.png, SqlOptionNode.png, objectdependencies.png, 
> objectdependencies2.png, stacktrace.png
>
>
> Phoenix supports TRUE and FALSE as boolean literals, but perhaps Calcite 
> doesn't? Looks like this is leading to a fair number of failures.





[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514729#comment-15514729
 ] 

James Taylor commented on PHOENIX-3174:
---

bq. FWIW, this is a 4.9 feature. So you won't run into this exception on 
4.8.1.
Ah, yes - I forgot. Was thinking this was a blocker for 4.8.1 RC, but it's not. 
Never mind about using 4.7 against 4.8.1.

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> in an automated manner (on first connection), we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.
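The connection-time gate described above can be sketched in a few lines. This is a hypothetical illustration of the behavior, not Phoenix's actual implementation; the class, method, and message text are assumptions.

```java
import java.sql.SQLException;

// Hedged sketch: a newer client is refused until the manual
// system-table upgrade step has been run on the server.
public class UpgradeGate {
    static void checkClientServerCompatibility(int clientMinorVersion,
                                               int systemTableMinorVersion)
            throws SQLException {
        if (clientMinorVersion > systemTableMinorVersion) {
            throw new SQLException(
                "SYSTEM tables are at minor version " + systemTableMinorVersion
                + "; run the manual upgrade step before connecting with a "
                + "minor version " + clientMinorVersion + " client");
        }
    }
}
```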





[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514727#comment-15514727
 ] 

Mujtaba Chohan commented on PHOENIX-3174:
-

Restart didn't help.

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> in an automated manner (on first connection), we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.





[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514721#comment-15514721
 ] 

Samarth Jain commented on PHOENIX-3174:
---

FWIW, this is a 4.9 feature. So you won't run into this exception on 4.8.1. 

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> in an automated manner (on first connection), we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.





[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514705#comment-15514705
 ] 

Samarth Jain commented on PHOENIX-3174:
---

[~mujtabachohan] - do you know if restarting the region server that hosts 
SYSTEM.CATALOG after you have restored it using the snapshot helps? 

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> in an automated manner (on first connection), we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.





[jira] [Commented] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514677#comment-15514677
 ] 

Hadoop QA commented on PHOENIX-3307:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12829928/PHOENIX-3307.patch
  against master branch at commit b1682ddd541031437d2731c570a54fc6494c9801.
  ATTACHMENT ID: 12829928

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
38 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  new java.lang.String[] { "SchemaNameBytes", 
"TableNameBytes", "TableType", "IndexState", "SequenceNumber", "TimeStamp", 
"PkNameBytes", "BucketNum", "Columns", "Indexes", "IsImmutableRows", 
"DataTableNameBytes", "DefaultFamilyName", "DisableWAL", "MultiTenant", 
"ViewType", "ViewStatement", "PhysicalNames", "TenantId", "ViewIndexId", 
"IndexType", "StatsTimeStamp", "StoreNulls", "BaseColumnCount", 
"RowKeyOrderOptimizable", "Transactional", "UpdateCacheFrequency", 
"IndexDisableTimestamp", "IsNamespaceMapped", "AutoParititonSeqName", 
"IsAppendOnlySchema", "ParentNameBytes", });
+parentSchemaName = 
PNameFactory.newName(SchemaUtil.getSchemaNameFromFullName((table.getParentNameBytes().toByteArray(;
+parentTableName = 
PNameFactory.newName(SchemaUtil.getTableNameFromFullName(table.getParentNameBytes().toByteArray()));

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/592//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/592//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/592//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/592//console

This message is automatically generated.

> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create index on table that contain 
> schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> 

[jira] [Commented] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514643#comment-15514643
 ] 

Mujtaba Chohan commented on PHOENIX-3307:
-

It's fixed.

> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create index on table that contain 
> schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:258)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:257)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1297)
> Note: table name was SCH.T and not SCH.SCH.T
> {noformat}
> Following commit caused it:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=71b0b62d98c96870db585f9a232dfb63db3a698d





[jira] [Resolved] (PHOENIX-3313) Commit missing changes to 4.x and master branches

2016-09-22 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-3313.
---
Resolution: Fixed

> Commit missing changes to 4.x and master branches
> -
>
> Key: PHOENIX-3313
> URL: https://issues.apache.org/jira/browse/PHOENIX-3313
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3313-4.x-HBase-0.98.patch
>
>
> Looks like parts of the following commit were never committed to 
> 4.x-HBase-1.1 and master (which I stumbled on when I hit a merge conflict 
> on a change I made): 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a6301c198bd71cb76b3805b43b50ecf1ee29c69d
> Also, it looks like there are tabs in the file. Please fix.





[jira] [Commented] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514610#comment-15514610
 ] 

James Taylor commented on PHOENIX-3307:
---

Please let us know if this fixes the b/w compat issue, [~mujtabachohan].

> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create index on table that contain 
> schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:258)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:257)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1297)
> Note: table name was SCH.T and not SCH.SCH.T
> {noformat}
> Following commit caused it:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=71b0b62d98c96870db585f9a232dfb63db3a698d





[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514607#comment-15514607
 ] 

James Taylor commented on PHOENIX-3174:
---

Let's stick to released versions for this upgrade testing, please. Can you try 
with 4.7.0 and 4.8.1? You'll likely get the same error.

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> in an automated manner (on first connection), we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.





[jira] [Commented] (PHOENIX-3322) TPCH 100 query 2 exceeds size of hash cache

2016-09-22 Thread Aaron Molitor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514567#comment-15514567
 ] 

Aaron Molitor commented on PHOENIX-3322:


Thanks [~jamestaylor], the second error is with 
{{phoenix.query.maxServerCacheBytes = 2147483648}}. I'll have to try the query 
hint.  
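
A hint is applied by rewriting the query text: Phoenix reads hints from the comment immediately after SELECT, and USE_SORT_MERGE_JOIN is a real Phoenix hint. The helper below is hypothetical, just a sketch of injecting such a hint:

```java
// Hypothetical helper: inject a Phoenix join-strategy hint (e.g.
// USE_SORT_MERGE_JOIN) into the hint comment slot right after SELECT.
public class HintUtil {
    static String withHint(String query, String hint) {
        return query.replaceFirst("(?i)^\\s*SELECT", "SELECT /*+ " + hint + " */");
    }
}
```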

Here's the explain plan that was generated for this query (should have included 
originally):
{noformat}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 16:22:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 16:22:18 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  EXPLAIN SELECT 
S_ACCTBAL, 
S_NAME, 
N_NAME, 
P_PARTKEY, 
P_MFGR, 
S_ADDRESS, 
S_PHONE, 
S_COMMENT 
FROM 
TPCH.PART, 
TPCH.SUPPLIER, 
TPCH.PARTSUPP, 
TPCH.NATION, 
TPCH.REGION 
WHERE 
P_PARTKEY = PS_PARTKEY 
AND S_SUPPKEY = PS_SUPPKEY 
AND P_SIZE = 15 
AND P_TYPE LIKE '%BRASS' 
AND S_NATIONKEY = N_NATIONKEY 
AND N_REGIONKEY = R_REGIONKEY 
AND R_NAME = 'EUROPE' 
AND PS_SUPPLYCOST = ( 
SELECT MIN(PS_SUPPLYCOST) 
FROM 
TPCH.PARTSUPP, 
TPCH.SUPPLIER, 
TPCH.NATION, 
TPCH.REGION 
WHERE 
P_PARTKEY = PS_PARTKEY 
AND S_SUPPKEY = PS_SUPPKEY 
AND S_NATIONKEY = N_NATIONKEY 
AND N_REGIONKEY = R_REGIONKEY 
AND R_NAME = 'EUROPE' 
) 
ORDER BY 
S_ACCTBAL DESC, 
N_NAME, 
S_NAME, 
P_PARTKEY 
LIMIT 100 
;
+--+
| PLAN |
+--+
| CLIENT 26-CHUNK 19045195 ROWS 7549747668 BYTES PARALLEL 26-WAY FULL SCAN OVER TPCH.PART |
|     SERVER FILTER BY (P_SIZE = 15 AND P_TYPE LIKE '%BRASS') |
|     SERVER TOP 100 ROWS SORTED BY [TPCH.SUPPLIER.S_ACCTBAL DESC, TPCH.NATION.N_NAME, TPCH.SUPPLIER.S_NAME, TPCH.PART.P_PARTKEY] |
| CLIENT MERGE SORT |
|     PARALLEL INNER-JOIN TABLE 0 |
|         CLIENT 2-CHUNK 879771 ROWS 314572800 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.SUPPLIER |
|     PARALLEL INNER-JOIN TABLE 1(DELAYED EVALUATION) |
|         CLIENT 81-CHUNK 74110723 ROWS 21076381005 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.PARTSUPP |
|     PARALLEL INNER-JOIN TABLE 2(DELAYED EVALUATION) |
|         CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.NATION |
|     PARALLEL INNER-JOIN TABLE 3(DELAYED EVALUATION) (SKIP MERGE) |
|         CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.REGION |
|             SERVER FILTER BY R_NAME = 'EUROPE' |
|     PARALLEL INNER-JOIN TABLE 4(DELAYED EVALUATION) (SKIP MERGE) |
|         CLIENT 81-CHUNK 74110723 ROWS 21076381005 BYTES PARALLEL 1-WAY FULL SCAN OVER TPCH.PARTSUPP |
|             SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY [TPCH.PARTSUPP.PS_PARTKEY] |
|             PARALLEL INNER-JOIN TABLE 0 |
|                 CLIENT 2-CHUNK 879771 ROWS 314572800 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.SUPPLIER |
|             PARALLEL INNER-JOIN TABLE 1(DELAYED EVALUATION) |
|                 CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.NATION |
| 

[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514570#comment-15514570
 ] 

Mujtaba Chohan commented on PHOENIX-3174:
-

I mean the current head of 4.x (4.9 snapshot). Yes, I was expecting it would 
auto-upgrade again.

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> in an automated manner (on first connection), we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.





[jira] [Commented] (PHOENIX-3263) Allow comma before CONSTRAINT to be optional

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514561#comment-15514561
 ] 

James Taylor commented on PHOENIX-3263:
---

+1. Thanks, [~lomoree]. I committed this on your behalf.

> Allow comma before CONSTRAINT to be optional
> 
>
> Key: PHOENIX-3263
> URL: https://issues.apache.org/jira/browse/PHOENIX-3263
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Eric Lomore
> Attachments: PHOENIX-3263.patch
>
>
> In Phoenix, the comma before the CONSTRAINT is optional (which matches 
> Oracle). Can this be supported in Calcite Phoenix?
> For example, this is ok in Phoenix:
> {code}
> CREATE TABLE T (
> K VARCHAR
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> as is this:
> {code}
> CREATE TABLE T (
> K VARCHAR,
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> If this is not feasible, we could require the comma and change the tests. 
> This is leading to a lot of failures.





[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514553#comment-15514553
 ] 

James Taylor commented on PHOENIX-3174:
---

You mean a 4.8 client, right, [~mujtabachohan]? Could this be due to the caching 
of SYSTEM.CATALOG on the server side, [~samarthjain]? In theory, were you 
expecting that it would auto-upgrade you back to what's on the server side once 
you connect again after restoring the snapshot, Mujtaba?

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> on first connection, we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.





[jira] [Commented] (PHOENIX-3263) Allow comma before CONSTRAINT to be optional

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514535#comment-15514535
 ] 

Hadoop QA commented on PHOENIX-3263:


{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12829925/PHOENIX-3263.patch
  against master branch at commit b1682ddd541031437d2731c570a54fc6494c9801.
  ATTACHMENT ID: 12829925

{color:green}+1 @author{color}.  The patch does not contain any @author tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/593//console

This message is automatically generated.

> Allow comma before CONSTRAINT to be optional
> 
>
> Key: PHOENIX-3263
> URL: https://issues.apache.org/jira/browse/PHOENIX-3263
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Eric Lomore
> Attachments: PHOENIX-3263.patch
>
>
> In Phoenix, the comma before the CONSTRAINT is optional (which matches 
> Oracle). Can this be supported in Calcite Phoenix?
> For example, this is ok in Phoenix:
> {code}
> CREATE TABLE T (
> K VARCHAR
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> as is this:
> {code}
> CREATE TABLE T (
> K VARCHAR,
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> If this is not feasible, we could require the comma and change the tests. 
> This is leading to a lot of failures.





[jira] [Commented] (PHOENIX-3174) Make minor upgrade a manual step

2016-09-22 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514519#comment-15514519
 ] 

Mujtaba Chohan commented on PHOENIX-3174:
-

[~samarthjain] After upgrading from 4.7 -> 4.9 and then restoring 
SYSTEM.CATALOG snapshot, re-connecting 4.9 client leads to the following error 
(all default Phoenix properties used):

{noformat}
Error: Operation not allowed since cluster hasn't been upgraded. Call EXECUTE 
UPGRADE. (state=INT13,code=2011)
org.apache.phoenix.exception.UpgradeRequiredException: Operation not allowed 
since cluster hasn't been upgraded. Call EXECUTE UPGRADE. 
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:275)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:267)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:266)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1424)
at 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.getColumns(PhoenixDatabaseMetaData.java:529)
 
{noformat}
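For reference, the manual step the error message points at can be run from the same client; a minimal sketch, assuming a 4.9 client and default settings:

```sql
-- Run once against the cluster to upgrade the SYSTEM tables; until it
-- completes, other statements fail with UpgradeRequiredException as above.
EXECUTE UPGRADE;
```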

> Make minor upgrade a manual step
> 
>
> Key: PHOENIX-3174
> URL: https://issues.apache.org/jira/browse/PHOENIX-3174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3174.patch, PHOENIX-3174_v2.patch, 
> PHOENIX-3174_v3_master.patch, PHOENIX-3174_v4_master.patch, 
> PHOENIX-3174_v5_master.patch
>
>
> Instead of automatically performing minor release upgrades to system tables 
> on first connection, we should make upgrade a 
> separate manual step. If a newer client attempts to connect to a newer server 
> without the upgrade step having occurred, we'd fail the connection.
> Though not as automated, this would give users more control and visibility 
> into when an upgrade over system tables occurs.





[jira] [Commented] (PHOENIX-3322) TPCH 100 query 2 exceeds size of hash cache

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514507#comment-15514507
 ] 

James Taylor commented on PHOENIX-3322:
---

Two options to fix this:
- Add the /*+ USE_SORT_MERGE_JOIN */ after SELECT.
- Increase the allowed size of the hash cache through the client-side 
phoenix.query.maxServerCacheBytes setting in your hbase-site.xml
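As a sketch of the first option (the hint goes immediately after the SELECT keyword; the rest of TPC-H Q2 is unchanged):

```sql
-- Hedged sketch: forcing a sort-merge join for TPC-H Q2 so no hash
-- cache needs to be built on the server side.
SELECT /*+ USE_SORT_MERGE_JOIN */
    S_ACCTBAL, S_NAME, N_NAME, P_PARTKEY,
    P_MFGR, S_ADDRESS, S_PHONE, S_COMMENT
FROM TPCH.PART, TPCH.SUPPLIER, TPCH.PARTSUPP, TPCH.NATION, TPCH.REGION
WHERE P_PARTKEY = PS_PARTKEY
  AND S_SUPPKEY = PS_SUPPKEY
  -- ... remaining predicates, subquery, ORDER BY and LIMIT as in the
  -- original query ...
;
```

For the second option, a phoenix.query.maxServerCacheBytes value larger than the 104857600-byte limit reported in the exception would go in the client-side hbase-site.xml.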

> TPCH 100 query 2 exceeds size of hash cache
> ---
>
> Key: PHOENIX-3322
> URL: https://issues.apache.org/jira/browse/PHOENIX-3322
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: HDP 2.4.2 + 4.0.8 binary download
>Reporter: Aaron Molitor
>
> Executing  TPC-H query 2 results in the following error:
> h5. output from sqlline:
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 16/09/13 20:35:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/13 20:35:30 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1  SELECT 
> S_ACCTBAL, 
> S_NAME, 
> N_NAME, 
> P_PARTKEY, 
> P_MFGR, 
> S_ADDRESS, 
> S_PHONE, 
> S_COMMENT 
> FROM 
> TPCH.PART, 
> TPCH.SUPPLIER, 
> TPCH.PARTSUPP, 
> TPCH.NATION, 
> TPCH.REGION 
> WHERE 
> P_PARTKEY = PS_PARTKEY 
> AND S_SUPPKEY = PS_SUPPKEY 
> AND P_SIZE = 15  
> AND P_TYPE LIKE '%BRASS' 
> AND S_NATIONKEY = N_NATIONKEY 
> AND N_REGIONKEY = R_REGIONKEY 
> AND R_NAME = 'EUROPE' 
> AND PS_SUPPLYCOST = ( 
> SELECT MIN(PS_SUPPLYCOST) 
> FROM 
> TPCH.PARTSUPP, 
> TPCH.SUPPLIER, 
> TPCH.NATION, 
> TPCH.REGION 
> WHERE 
> P_PARTKEY = PS_PARTKEY 
> AND S_SUPPKEY = PS_SUPPKEY 
> AND S_NATIONKEY = N_NATIONKEY 
> AND N_REGIONKEY = R_REGIONKEY 
> AND R_NAME = 'EUROPE' 
> ) 
> ORDER BY  
> S_ACCTBAL DESC, 
> N_NAME, 
> S_NAME, 
> P_PARTKEY 
> LIMIT 100 
> ;
> Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:143)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.runCommands(SqlLine.java:1710)
> at sqlline.Commands.run(Commands.java:1285)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:803)
> at sqlline.SqlLine.initArgs(SqlLine.java:613)
> at sqlline.SqlLine.begin(SqlLine.java:656)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size 
> of hash cache (104857615 bytes) exceeds the maximum allowed size (104857600 
> bytes)
> at 
> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:110)
> at 
> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
> at 
> org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:385)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:167)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
> at 
> 

[jira] [Created] (PHOENIX-3322) TPCH 100 query 2 exceeds size of hash cache

2016-09-22 Thread Aaron Molitor (JIRA)
Aaron Molitor created PHOENIX-3322:
--

 Summary: TPCH 100 query 2 exceeds size of hash cache
 Key: PHOENIX-3322
 URL: https://issues.apache.org/jira/browse/PHOENIX-3322
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
 Environment: HDP 2.4.2 + 4.0.8 binary download
Reporter: Aaron Molitor


Executing  TPC-H query 2 results in the following error:

h5. output from sqlline:
{noformat}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:35:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:35:30 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
S_ACCTBAL, 
S_NAME, 
N_NAME, 
P_PARTKEY, 
P_MFGR, 
S_ADDRESS, 
S_PHONE, 
S_COMMENT 
FROM 
TPCH.PART, 
TPCH.SUPPLIER, 
TPCH.PARTSUPP, 
TPCH.NATION, 
TPCH.REGION 
WHERE 
P_PARTKEY = PS_PARTKEY 
AND S_SUPPKEY = PS_SUPPKEY 
AND P_SIZE = 15  
AND P_TYPE LIKE '%BRASS' 
AND S_NATIONKEY = N_NATIONKEY 
AND N_REGIONKEY = R_REGIONKEY 
AND R_NAME = 'EUROPE' 
AND PS_SUPPLYCOST = ( 
SELECT MIN(PS_SUPPLYCOST) 
FROM 
TPCH.PARTSUPP, 
TPCH.SUPPLIER, 
TPCH.NATION, 
TPCH.REGION 
WHERE 
P_PARTKEY = PS_PARTKEY 
AND S_SUPPKEY = PS_SUPPKEY 
AND S_NATIONKEY = N_NATIONKEY 
AND N_REGIONKEY = R_REGIONKEY 
AND R_NAME = 'EUROPE' 
) 
ORDER BY  
S_ACCTBAL DESC, 
N_NAME, 
S_NAME, 
P_PARTKEY 
LIMIT 100 
;
Error: Encountered exception in sub plan [0] execution. (state=,code=0)
java.sql.SQLException: Encountered exception in sub plan [0] execution.
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:143)
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.runCommands(SqlLine.java:1710)
at sqlline.Commands.run(Commands.java:1285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:613)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of 
hash cache (104857615 bytes) exceeds the maximum allowed size (104857600 bytes)
at 
org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:110)
at 
org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
at 
org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:385)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:167)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:163)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Aborting command set because "force" is false and command failed: "SELECT 
S_ACCTBAL, 
S_NAME, 
N_NAME, 
P_PARTKEY, 
P_MFGR, 
S_ADDRESS, 
S_PHONE, 
S_COMMENT 
FROM 
TPCH.PART, 
TPCH.SUPPLIER, 
TPCH.PARTSUPP, 
TPCH.NATION, 
TPCH.REGION 
WHERE 
P_PARTKEY = PS_PARTKEY 
AND S_SUPPKEY = PS_SUPPKEY 
AND P_SIZE = 15  
AND P_TYPE LIKE '%BRASS' 
AND S_NATIONKEY = N_NATIONKEY 
AND N_REGIONKEY = R_REGIONKEY 
AND R_NAME = 'EUROPE' 
AND 

[jira] [Commented] (PHOENIX-3318) TPCH 100G: Query 13 Cannot Parse

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514443#comment-15514443
 ] 

James Taylor commented on PHOENIX-3318:
---

Have you tried this in our calcite branch, [~jleach]? FWIW, this is where we'd 
add support for any missing syntax. FYI, [~maryannxue].

Also, Phoenix doesn't expect a trailing semicolon.
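The parse error ("Missing EOF" at the line containing the derived column list) suggests a hedged rewrite of Q13 that avoids both unsupported bits, assuming Phoenix accepts column aliases inside the subquery instead:

```sql
-- Hedged sketch: alias the aggregate inside the subquery rather than
-- using a derived column list after the subquery alias, and drop the
-- trailing semicolon.
SELECT C_COUNT, COUNT(*) AS CUSTDIST
FROM (
    SELECT C_CUSTKEY, COUNT(O_ORDERKEY) AS C_COUNT
    FROM TPCH.CUSTOMER
    LEFT OUTER JOIN TPCH.ORDERS ON
        C_CUSTKEY = O_CUSTKEY
        AND O_COMMENT NOT LIKE '%SPECIAL%REQUESTS%'
    GROUP BY C_CUSTKEY
) AS C_ORDERS
GROUP BY C_COUNT
ORDER BY CUSTDIST DESC, C_COUNT DESC
```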

> TPCH 100G: Query 13 Cannot Parse
> 
>
> Key: PHOENIX-3318
> URL: https://issues.apache.org/jira/browse/PHOENIX-3318
> Project: Phoenix
>  Issue Type: Bug
>Reporter: John Leach
>
> {NOFORMAT}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 16/09/13 20:45:31 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/13 20:45:31 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1  SELECT 
> C_COUNT, 
> COUNT(*) AS CUSTDIST 
> FROM 
> ( 
> SELECT 
> C_CUSTKEY, 
> COUNT(O_ORDERKEY) 
> FROM 
> TPCH.CUSTOMER 
> LEFT OUTER JOIN TPCH.ORDERS ON 
> C_CUSTKEY = O_CUSTKEY 
> AND O_COMMENT NOT LIKE '%SPECIAL%REQUESTS%' 
> GROUP BY 
> C_CUSTKEY 
> ) AS C_ORDERS 
> (C_CUSTKEY, C_COUNT) 
> GROUP BY 
> C_COUNT 
> ORDER BY 
> CUSTDIST DESC, 
> C_COUNT DESC 
> ;
> Error: ERROR 602 (42P00): Syntax error. Missing "EOF" at line 17, column 1. 
> (state=42P00,code=602)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): 
> Syntax error. Missing "EOF" at line 17, column 1.
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>   at sqlline.Commands.run(Commands.java:1285)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:803)
>   at sqlline.SqlLine.initArgs(SqlLine.java:613)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: MissingTokenException(inserted [@-1,0:0='',<-1>,17:0] 
> at ()
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:358)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:518)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 18 more
> Aborting command set because "force" is false and command failed: "SELECT 
> C_COUNT, 
> COUNT(*) AS CUSTDIST 
> FROM 
> ( 
> SELECT 
> C_CUSTKEY, 
> COUNT(O_ORDERKEY) 
> FROM 
> TPCH.CUSTOMER 
> LEFT OUTER JOIN TPCH.ORDERS ON 
> C_CUSTKEY = O_CUSTKEY 
> AND O_COMMENT NOT LIKE '%SPECIAL%REQUESTS%' 
> GROUP BY 
> C_CUSTKEY 
> ) AS C_ORDERS 
> (C_CUSTKEY, C_COUNT) 
> GROUP BY 
> C_COUNT 
> ORDER BY 
> CUSTDIST DESC, 
> C_COUNT DESC 
> ;"
> Closing: org.apache.phoenix.jdbc.PhoenixConnection
> {NOFORMAT}





[jira] [Commented] (PHOENIX-3320) TPCH 100G: Query 20 Execution Exception

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514437#comment-15514437
 ] 

James Taylor commented on PHOENIX-3320:
---

Have you tried this in our calcite branch, [~jleach]? FWIW, this is where we'd 
add support for any missing syntax. FYI, [~maryannxue].

> TPCH 100G: Query 20 Execution Exception
> ---
>
> Key: PHOENIX-3320
> URL: https://issues.apache.org/jira/browse/PHOENIX-3320
> Project: Phoenix
>  Issue Type: Bug
>Reporter: John Leach
>
> {NOFORMAT}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 16/09/13 20:52:21 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/13 20:52:22 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1  SELECT 
> S_NAME, 
> S_ADDRESS 
> FROM 
> TPCH.SUPPLIER, 
> TPCH.NATION 
> WHERE 
> S_SUPPKEY IN ( 
> SELECT PS_SUPPKEY 
> FROM 
> TPCH.PARTSUPP 
> WHERE 
> PS_PARTKEY IN ( 
> SELECT P_PARTKEY 
> FROM 
> TPCH.PART 
> WHERE 
> P_NAME LIKE 'FOREST%' 
> ) 
> AND PS_AVAILQTY > ( 
> SELECT 0.5 * SUM(L_QUANTITY) 
> FROM 
> TPCH.LINEITEM 
> WHERE 
> L_PARTKEY = PS_PARTKEY 
> AND L_SUPPKEY = PS_SUPPKEY 
> AND L_SHIPDATE >= TO_DATE('1994-01-01') 
> AND L_SHIPDATE < TO_DATE('1995-01-01') 
> ) 
> ) 
> AND S_NATIONKEY = N_NATIONKEY 
> AND N_NAME = 'CANADA' 
> ORDER BY 
> S_NAME 
> ;
> 16/09/13 20:55:51 WARN client.ScannerCallable: Ignore, probably already closed
> org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to 
> stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 failed on local 
> exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: 
> Connection to stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 is closing. 
> Call id=411932, waitTime=149848
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:275)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:318)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:356)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:196)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:144)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:258)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:241)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:534)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:126)
>   at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:254)
>   at 
> org.apache.phoenix.iterate.OrderedResultIterator.peek(OrderedResultIterator.java:277)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> 

[jira] [Commented] (PHOENIX-3319) TPCH 100G: Query 15 with view cannot parse

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514439#comment-15514439
 ] 

James Taylor commented on PHOENIX-3319:
---

Have you tried this in our calcite branch, [~jleach]? FWIW, this is where we'd 
add support for any missing syntax. FYI, [~maryannxue].

> TPCH 100G: Query 15 with view cannot parse
> --
>
> Key: PHOENIX-3319
> URL: https://issues.apache.org/jira/browse/PHOENIX-3319
> Project: Phoenix
>  Issue Type: Bug
>Reporter: John Leach
>
> {NOFORMAT}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 16/09/13 20:45:42 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/13 20:45:43 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 1/5  CREATE VIEW REVENUE0 (SUPPLIER_NO INTEGER, TOTAL_REVENUE 
> DECIMAL(15,2)) AS 
> SELECT 
> L_SUPPKEY, 
> SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT)) 
> FROM 
> TPCH.LINEITEM 
> WHERE 
> L_SHIPDATE >= TO_DATE('1996-01-01') 
> AND L_SHIPDATE < TO_DATE('1996-04-01') 
> GROUP BY 
> L_SUPPKEY;
> Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting 
> "ASTERISK", got "L_SUPPKEY" at line 3, column 1. (state=42P00,code=604)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): 
> Syntax error. Mismatched input. Expecting "ASTERISK", got "L_SUPPKEY" at line 
> 3, column 1.
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>   at sqlline.Commands.run(Commands.java:1285)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:803)
>   at sqlline.SqlLine.initArgs(SqlLine.java:613)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: MismatchedTokenException(99!=13)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:360)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.create_view_node(PhoenixSQLParser.java:1336)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:834)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:508)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 18 more
> Aborting command set because "force" is false and command failed: "CREATE 
> VIEW REVENUE0 (SUPPLIER_NO INTEGER, TOTAL_REVENUE DECIMAL(15,2)) AS 
> SELECT 
> L_SUPPKEY, 
> SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT)) 
> FROM 
> TPCH.LINEITEM 
> WHERE 
> L_SHIPDATE >= TO_DATE('1996-01-01') 
> AND L_SHIPDATE < TO_DATE('1996-04-01') 
> GROUP BY 
> L_SUPPKEY;"
> Closing: org.apache.phoenix.jdbc.PhoenixConnection
> {NOFORMAT}
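The "Expecting ASTERISK" error suggests Phoenix 4.x's CREATE VIEW grammar only accepts the SELECT * form; a hedged sketch of a view it would accept (the view name SHIPPED_1996Q1 is hypothetical):

```sql
-- Hedged sketch: Phoenix 4.x views select * over a single table, with
-- an optional WHERE clause; a projected/aggregated view like REVENUE0
-- above would need to be expressed as a query instead.
CREATE VIEW SHIPPED_1996Q1 AS
SELECT * FROM TPCH.LINEITEM
WHERE L_SHIPDATE >= TO_DATE('1996-01-01')
  AND L_SHIPDATE < TO_DATE('1996-04-01');
```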





[jira] [Comment Edited] (PHOENIX-3321) TPCH 100G: Query 21 Missing Equi-Join Support

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514434#comment-15514434
 ] 

James Taylor edited comment on PHOENIX-3321 at 9/22/16 8:44 PM:


Have you tried this in our calcite branch, [~jleach]? FWIW, this is where we'd 
add support for any missing syntax. FYI, [~maryannxue].


was (Author: jamestaylor):
Have you tried this in our calcite branch, [~jleach]? FWIW, this is where we'd 
add support for any missing syntax.


[jira] [Commented] (PHOENIX-3321) TPCH 100G: Query 21 Missing Equi-Join Support

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514434#comment-15514434
 ] 

James Taylor commented on PHOENIX-3321:
---

Have you tried this in our calcite branch, [~jleach]? FWIW, this is where we'd 
add support for any missing syntax.

> TPCH 100G: Query 21 Missing Equi-Join Support
> -
>
> Key: PHOENIX-3321
> URL: https://issues.apache.org/jira/browse/PHOENIX-3321
> Project: Phoenix
>  Issue Type: Bug
>Reporter: John Leach
>
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 16/09/13 20:55:54 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/13 20:55:54 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1  SELECT 
> S_NAME, 
> COUNT(*) AS NUMWAIT 
> FROM 
> TPCH.SUPPLIER, 
> TPCH.LINEITEM L1, 
> TPCH.ORDERS, 
> TPCH.NATION 
> WHERE 
> S_SUPPKEY = L1.L_SUPPKEY 
> AND O_ORDERKEY = L1.L_ORDERKEY 
> AND O_ORDERSTATUS = 'F' 
> AND L1.L_RECEIPTDATE > L1.L_COMMITDATE 
> AND EXISTS( 
> SELECT * 
> FROM 
> TPCH.LINEITEM L2 
> WHERE 
> L2.L_ORDERKEY = L1.L_ORDERKEY 
> AND L2.L_SUPPKEY <> L1.L_SUPPKEY 
> ) 
> AND NOT EXISTS( 
> SELECT * 
> FROM 
> TPCH.LINEITEM L3 
> WHERE 
> L3.L_ORDERKEY = L1.L_ORDERKEY 
> AND L3.L_SUPPKEY <> L1.L_SUPPKEY 
> AND L3.L_RECEIPTDATE > L3.L_COMMITDATE 
> ) 
> AND S_NATIONKEY = N_NATIONKEY 
> AND N_NAME = 'SAUDI ARABIA' 
> GROUP BY 
> S_NAME 
> ORDER BY 
> NUMWAIT DESC, 
> S_NAME 
> LIMIT 100 
> ;
> Error: Does not support non-standard or non-equi correlated-subquery 
> conditions. (state=,code=0)
> java.sql.SQLFeatureNotSupportedException: Does not support non-standard or 
> non-equi correlated-subquery conditions.
>   at 
> org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.leaveBooleanNode(SubqueryRewriter.java:485)
>   at 
> org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:505)
>   at 
> org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:411)
>   at 
> org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:47)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at org.apache.phoenix.parse.AndParseNode.accept(AndParseNode.java:47)
>   at 
> org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:168)
>   at 
> org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:70)
>   at 
> org.apache.phoenix.parse.ExistsParseNode.accept(ExistsParseNode.java:53)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at org.apache.phoenix.parse.AndParseNode.accept(AndParseNode.java:47)
>   at 
> org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:48)
>   at 
> org.apache.phoenix.compile.SubqueryRewriter.transform(SubqueryRewriter.java:84)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:399)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:271)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>   at sqlline.Commands.run(Commands.java:1285)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

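The stack trace above comes down to SubqueryRewriter$JoinConditionExtractor rejecting any correlated comparison other than equality. A minimal sketch of the rejected pattern (hypothetical tables T1/T2, not from the report):

{code}
-- The correlated <> condition is non-equi, so SubqueryRewriter cannot
-- extract it as a join key when flattening the EXISTS into a semi-join,
-- and compilation fails with SQLFeatureNotSupportedException.
SELECT A FROM T1
WHERE EXISTS (
    SELECT 1 FROM T2
    WHERE T2.K1 = T1.K1   -- equi condition: extractable
      AND T2.K2 <> T1.K2  -- non-equi correlated condition: rejected
);
{code}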
[jira] [Commented] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514413#comment-15514413
 ] 

James Taylor commented on PHOENIX-3307:
---

+1. Thanks, [~tdsilva]. Please commit to 4.8, 4.x, and master branches.

> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create an index on a table that 
> has a schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.<init>(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:258)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:257)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1297)
> Note: table name was SCH.T and not SCH.SCH.T
> {noformat}
> The following commit caused it:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=71b0b62d98c96870db585f9a232dfb63db3a698d



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3321) TPCH 100G: Query 21 Missing Equi-Join Support

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3321:
---

 Summary: TPCH 100G: Query 21 Missing Equi-Join Support
 Key: PHOENIX-3321
 URL: https://issues.apache.org/jira/browse/PHOENIX-3321
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{noformat}

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:55:54 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:55:54 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
S_NAME, 
COUNT(*) AS NUMWAIT 
FROM 
TPCH.SUPPLIER, 
TPCH.LINEITEM L1, 
TPCH.ORDERS, 
TPCH.NATION 
WHERE 
S_SUPPKEY = L1.L_SUPPKEY 
AND O_ORDERKEY = L1.L_ORDERKEY 
AND O_ORDERSTATUS = 'F' 
AND L1.L_RECEIPTDATE > L1.L_COMMITDATE 
AND EXISTS( 
SELECT * 
FROM 
TPCH.LINEITEM L2 
WHERE 
L2.L_ORDERKEY = L1.L_ORDERKEY 
AND L2.L_SUPPKEY <> L1.L_SUPPKEY 
) 
AND NOT EXISTS( 
SELECT * 
FROM 
TPCH.LINEITEM L3 
WHERE 
L3.L_ORDERKEY = L1.L_ORDERKEY 
AND L3.L_SUPPKEY <> L1.L_SUPPKEY 
AND L3.L_RECEIPTDATE > L3.L_COMMITDATE 
) 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'SAUDI ARABIA' 
GROUP BY 
S_NAME 
ORDER BY 
NUMWAIT DESC, 
S_NAME 
LIMIT 100 
;
Error: Does not support non-standard or non-equi correlated-subquery 
conditions. (state=,code=0)
java.sql.SQLFeatureNotSupportedException: Does not support non-standard or 
non-equi correlated-subquery conditions.
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.leaveBooleanNode(SubqueryRewriter.java:485)
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:505)
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:411)
at 
org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:47)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.AndParseNode.accept(AndParseNode.java:47)
at 
org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:168)
at 
org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:70)
at 
org.apache.phoenix.parse.ExistsParseNode.accept(ExistsParseNode.java:53)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.AndParseNode.accept(AndParseNode.java:47)
at 
org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:48)
at 
org.apache.phoenix.compile.SubqueryRewriter.transform(SubqueryRewriter.java:84)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:399)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:378)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:271)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.runCommands(SqlLine.java:1710)
at sqlline.Commands.run(Commands.java:1285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:613)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Aborting command set because "force" is false and command failed: "SELECT 
S_NAME, 
COUNT(*) AS NUMWAIT 
FROM 
TPCH.SUPPLIER, 
TPCH.LINEITEM L1, 
TPCH.ORDERS, 
TPCH.NATION 
WHERE 
S_SUPPKEY = 

[jira] [Updated] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3307:

Attachment: PHOENIX-3307.patch

[~jamestaylor]

Can you please review?

> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create an index on a table that 
> has a schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.<init>(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:258)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:257)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1297)
> Note: table name was SCH.T and not SCH.SCH.T
> {noformat}
> The following commit caused it:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=71b0b62d98c96870db585f9a232dfb63db3a698d





[jira] [Assigned] (PHOENIX-3307) Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 server)

2016-09-22 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-3307:
---

Assignee: Thomas D'Silva

> Backward compatibility fails for tables with index (4.7.0 client - 4.8.1 
> server)
> 
>
> Key: PHOENIX-3307
> URL: https://issues.apache.org/jira/browse/PHOENIX-3307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.8.1
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3307.patch
>
>
> Steps: 
> * With Phoenix 4.7.0 client and server, create an index on a table that 
> has a schema name
> * Upgrade only server side to Phoenix 4.8.1 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=0bac1025f37f8c695246e42c47546acfb46c79ef)
> {noformat}
> Error: ERROR 1012 (42M03): Table undefined. tableName=SCH.SCH.T 
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=SCH.SCH.T
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:414)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:199)
>   at 
> org.apache.phoenix.parse.IndexExpressionParseNodeRewriter.<init>(IndexExpressionParseNodeRewriter.java:45)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:233)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
>   at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:263)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:258)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:257)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1297)
> Note: table name was SCH.T and not SCH.SCH.T
> {noformat}
> The following commit caused it:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=71b0b62d98c96870db585f9a232dfb63db3a698d





[jira] [Created] (PHOENIX-3320) TPCH 100G: Query 20 Execution Exception

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3320:
---

 Summary: TPCH 100G: Query 20 Execution Exception
 Key: PHOENIX-3320
 URL: https://issues.apache.org/jira/browse/PHOENIX-3320
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{noformat}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:52:21 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:52:22 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
S_NAME, 
S_ADDRESS 
FROM 
TPCH.SUPPLIER, 
TPCH.NATION 
WHERE 
S_SUPPKEY IN ( 
SELECT PS_SUPPKEY 
FROM 
TPCH.PARTSUPP 
WHERE 
PS_PARTKEY IN ( 
SELECT P_PARTKEY 
FROM 
TPCH.PART 
WHERE 
P_NAME LIKE 'FOREST%' 
) 
AND PS_AVAILQTY > ( 
SELECT 0.5 * SUM(L_QUANTITY) 
FROM 
TPCH.LINEITEM 
WHERE 
L_PARTKEY = PS_PARTKEY 
AND L_SUPPKEY = PS_SUPPKEY 
AND L_SHIPDATE >= TO_DATE('1994-01-01') 
AND L_SHIPDATE < TO_DATE('1995-01-01') 
) 
) 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'CANADA' 
ORDER BY 
S_NAME 
;
16/09/13 20:55:51 WARN client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to 
stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 failed on local exception: 
org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to 
stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 is closing. Call id=411932, 
waitTime=149848
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:275)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:318)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
at 
org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:356)
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:196)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:144)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at 
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at 
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:258)
at 
org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:241)
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:534)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
at 
org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:126)
at 
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:254)
at 
org.apache.phoenix.iterate.OrderedResultIterator.peek(OrderedResultIterator.java:277)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: 
Connection to stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 is closing. 
Call id=411932, waitTime=149848
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1057)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:856)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:575)
16/09/13 20:55:51 

[jira] [Created] (PHOENIX-3319) TPCH 100G: Query 15 with view cannot parse

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3319:
---

 Summary: TPCH 100G: Query 15 with view cannot parse
 Key: PHOENIX-3319
 URL: https://issues.apache.org/jira/browse/PHOENIX-3319
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{noformat}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:45:42 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:45:43 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/5  CREATE VIEW REVENUE0 (SUPPLIER_NO INTEGER, TOTAL_REVENUE 
DECIMAL(15,2)) AS 
SELECT 
L_SUPPKEY, 
SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT)) 
FROM 
TPCH.LINEITEM 
WHERE 
L_SHIPDATE >= TO_DATE('1996-01-01') 
AND L_SHIPDATE < TO_DATE('1996-04-01') 
GROUP BY 
L_SUPPKEY;
Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "ASTERISK", 
got "L_SUPPKEY" at line 3, column 1. (state=42P00,code=604)
org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): Syntax 
error. Mismatched input. Expecting "ASTERISK", got "L_SUPPKEY" at line 3, 
column 1.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.runCommands(SqlLine.java:1710)
at sqlline.Commands.run(Commands.java:1285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:613)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MismatchedTokenException(99!=13)
at 
org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:360)
at 
org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.phoenix.parse.PhoenixSQLParser.create_view_node(PhoenixSQLParser.java:1336)
at 
org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:834)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:508)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 18 more
Aborting command set because "force" is false and command failed: "CREATE VIEW 
REVENUE0 (SUPPLIER_NO INTEGER, TOTAL_REVENUE DECIMAL(15,2)) AS 
SELECT 
L_SUPPKEY, 
SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT)) 
FROM 
TPCH.LINEITEM 
WHERE 
L_SHIPDATE >= TO_DATE('1996-01-01') 
AND L_SHIPDATE < TO_DATE('1996-04-01') 
GROUP BY 
L_SUPPKEY;"
Closing: org.apache.phoenix.jdbc.PhoenixConnection

{noformat}




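The parser stops at the view's select list: Phoenix's CREATE VIEW grammar only accepts SELECT * over an existing table or view, so the projected, aggregating definition above cannot be expressed as a view. A sketch of the shape the grammar does accept (filtering only; an assumption based on the reported parse error, not a full workaround for the TPC-H query):

{code}
-- Accepted shape: SELECT * plus an optional WHERE clause; projections and
-- aggregates in the view definition are what trigger the "ASTERISK" error.
CREATE VIEW REVENUE0 AS
SELECT * FROM TPCH.LINEITEM
WHERE L_SHIPDATE >= TO_DATE('1996-01-01')
  AND L_SHIPDATE < TO_DATE('1996-04-01');
{code}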

[jira] [Created] (PHOENIX-3318) TPCH 100G: Query 13 Cannot Parse

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3318:
---

 Summary: TPCH 100G: Query 13 Cannot Parse
 Key: PHOENIX-3318
 URL: https://issues.apache.org/jira/browse/PHOENIX-3318
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{noformat}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:45:31 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:45:31 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
C_COUNT, 
COUNT(*) AS CUSTDIST 
FROM 
( 
SELECT 
C_CUSTKEY, 
COUNT(O_ORDERKEY) 
FROM 
TPCH.CUSTOMER 
LEFT OUTER JOIN TPCH.ORDERS ON 
C_CUSTKEY = O_CUSTKEY 
AND O_COMMENT NOT LIKE '%SPECIAL%REQUESTS%' 
GROUP BY 
C_CUSTKEY 
) AS C_ORDERS 
(C_CUSTKEY, C_COUNT) 
GROUP BY 
C_COUNT 
ORDER BY 
CUSTDIST DESC, 
C_COUNT DESC 
;
Error: ERROR 602 (42P00): Syntax error. Missing "EOF" at line 17, column 1. 
(state=42P00,code=602)
org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): Syntax 
error. Missing "EOF" at line 17, column 1.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.runCommands(SqlLine.java:1710)
at sqlline.Commands.run(Commands.java:1285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:613)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MissingTokenException(inserted [@-1,0:0='',<-1>,17:0] 
at ()
at 
org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:358)
at 
org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:518)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 18 more
Aborting command set because "force" is false and command failed: "SELECT 
C_COUNT, 
COUNT(*) AS CUSTDIST 
FROM 
( 
SELECT 
C_CUSTKEY, 
COUNT(O_ORDERKEY) 
FROM 
TPCH.CUSTOMER 
LEFT OUTER JOIN TPCH.ORDERS ON 
C_CUSTKEY = O_CUSTKEY 
AND O_COMMENT NOT LIKE '%SPECIAL%REQUESTS%' 
GROUP BY 
C_CUSTKEY 
) AS C_ORDERS 
(C_CUSTKEY, C_COUNT) 
GROUP BY 
C_COUNT 
ORDER BY 
CUSTDIST DESC, 
C_COUNT DESC 
;"
Closing: org.apache.phoenix.jdbc.PhoenixConnection
{NOFORMAT}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2827) Support OFFSET in Calcite-Phoenix

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2827:
--
Assignee: Eric Lomore  (was: Maryann Xue)

> Support OFFSET in Calcite-Phoenix
> -
>
> Key: PHOENIX-2827
> URL: https://issues.apache.org/jira/browse/PHOENIX-2827
> Project: Phoenix
>  Issue Type: Task
>Reporter: Maryann Xue
>Assignee: Eric Lomore
>  Labels: calcite
>






[jira] [Updated] (PHOENIX-3298) Create Table: Single column primary key may not be null

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3298:
--
Assignee: Eric Lomore

> Create Table: Single column primary key may not be null
> ---
>
> Key: PHOENIX-3298
> URL: https://issues.apache.org/jira/browse/PHOENIX-3298
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Eric Lomore
>Assignee: Eric Lomore
>
> Create table statements with a single column currently must include the "NOT 
> NULL" qualifier to pass tests.
> Running this statement results in failure:
> {code}CREATE TABLE t (k VARCHAR PRIMARY KEY DESC){code}
> while this allows tests to pass:
> {code}CREATE TABLE t (k VARCHAR NOT NULL PRIMARY KEY DESC){code}
> We must either enforce the NOT NULL condition and update the test cases, or 
> apply a fix.





[jira] [Created] (PHOENIX-3317) TPCH 100G: Query 11 Could Not Find Hash Cache For Join ID

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3317:
---

 Summary: TPCH 100G: Query 11 Could Not Find Hash Cache For Join ID
 Key: PHOENIX-3317
 URL: https://issues.apache.org/jira/browse/PHOENIX-3317
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{NOFORMAT}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/14 04:54:31 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/14 04:54:32 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
PS_PARTKEY, 
SUM(PS_SUPPLYCOST * PS_AVAILQTY) AS VAL 
FROM 
TPCH.PARTSUPP, 
TPCH.SUPPLIER, 
TPCH.NATION 
WHERE 
PS_SUPPKEY = S_SUPPKEY 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'GERMANY' 
GROUP BY 
PS_PARTKEY 
HAVING 
SUM(PS_SUPPLYCOST * PS_AVAILQTY) > ( 
SELECT SUM(PS_SUPPLYCOST * PS_AVAILQTY) * 0.01 
FROM 
TPCH.PARTSUPP, 
TPCH.SUPPLIER, 
TPCH.NATION 
WHERE 
PS_SUPPKEY = S_SUPPKEY 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'GERMANY' 
) 
ORDER BY 
VAL DESC 
;
16/09/14 04:55:05 WARN execute.HashJoinPlan: Hash plan [0] execution seems too 
slow. Earlier hash cache(s) might have expired on servers.
Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
joinId: �(��˻�. The cache might have expired and have been removed.
at 
org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:99)
at 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:148)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
joinId: �(��˻�. The cache might have expired and have been removed.
at 
org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:99)
at 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:148)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at 

[jira] [Updated] (PHOENIX-3263) Allow comma before CONSTRAINT to be optional

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3263:
--
Assignee: Eric Lomore

> Allow comma before CONSTRAINT to be optional
> 
>
> Key: PHOENIX-3263
> URL: https://issues.apache.org/jira/browse/PHOENIX-3263
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Eric Lomore
> Attachments: PHOENIX-3263.patch
>
>
> In Phoenix, the comma before the CONSTRAINT is optional (which matches 
> Oracle). Can this be supported in Calcite Phoenix?
> For example, this is ok in Phoenix:
> {code}
> CREATE TABLE T (
> K VARCHAR
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> as is this:
> {code}
> CREATE TABLE T (
> K VARCHAR,
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> If this is not feasible, we could require the comma and change the tests. 
> This is leading to a lot of failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3263) Allow comma before CONSTRAINT to be optional

2016-09-22 Thread Eric Lomore (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lomore updated PHOENIX-3263:
-
Attachment: PHOENIX-3263.patch

> Allow comma before CONSTRAINT to be optional
> 
>
> Key: PHOENIX-3263
> URL: https://issues.apache.org/jira/browse/PHOENIX-3263
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: PHOENIX-3263.patch
>
>
> In Phoenix, the comma before the CONSTRAINT is optional (which matches 
> Oracle). Can this be supported in Calcite Phoenix?
> For example, this is ok in Phoenix:
> {code}
> CREATE TABLE T (
> K VARCHAR
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> as is this:
> {code}
> CREATE TABLE T (
> K VARCHAR,
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> If this is not feasible, we could require the comma and change the tests. 
> This is leading to a lot of failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3314:
--
Assignee: (was: James Taylor)

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}





[jira] [Commented] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514245#comment-15514245
 ] 

James Taylor commented on PHOENIX-3314:
---

[~rajeshbabu] - can you take a look to see if there's a bug lurking here?

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}





[jira] [Created] (PHOENIX-3316) Don't allow calcite enumerable convention to be used for Phoenix queries

2016-09-22 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3316:
-

 Summary: Don't allow calcite enumerable convention to be used for 
Phoenix queries
 Key: PHOENIX-3316
 URL: https://issues.apache.org/jira/browse/PHOENIX-3316
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Maryann Xue


We should disable use of the default Calcite enumerable convention when 
running queries so that we
- more easily recognize in our tests where Phoenix implementations are missing
- don't compile down to a plan that won't scale well (though it may work 
functionally)

Once the Calcite Phoenix integration is complete, we can potentially go back 
and allow the default Calcite enumerable convention on a case-by-case basis.





[jira] [Commented] (PHOENIX-3264) Allow TRUE and FALSE to be used as literal constants

2016-09-22 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513988#comment-15513988
 ] 

Julian Hyde commented on PHOENIX-3264:
--

Hopefully you can understand why I am pushing back and do not want to make 
every class and method public. Increased API surface area means more things to 
break.

Some methods are currently public (even though they are members of private 
classes) not because we intended them to be public some day, but because in 
Java methods that implement interface methods HAVE to be public.

Suppose you have a SqlNode e, an expression that you want to convert. Wrap it in 
a SqlSelect, so that it becomes the select clause of a dummy query, {{select e 
from (values 1)}}. Now use SqlToRelConverter to convert this query to a 
RelNode, which will naturally be a project, and take the sole item of its 
project list. That may be a very long way around, but it will work.

> Allow TRUE and FALSE to be used as literal constants
> 
>
> Key: PHOENIX-3264
> URL: https://issues.apache.org/jira/browse/PHOENIX-3264
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: Sql2RelImplementation.png, SqlLiteral.png, 
> SqlNodeToRexConverterImpl.png, SqlOptionNode.png, objectdependencies.png, 
> objectdependencies2.png, stacktrace.png
>
>
> Phoenix supports TRUE and FALSE as boolean literals, but perhaps Calcite 
> doesn't? Looks like this is leading to a fair number of failures.





[jira] [Commented] (PHOENIX-3313) Commit missing changes to 4.x and master branches

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513877#comment-15513877
 ] 

James Taylor commented on PHOENIX-3313:
---

+1

> Commit missing changes to 4.x and master branches
> -
>
> Key: PHOENIX-3313
> URL: https://issues.apache.org/jira/browse/PHOENIX-3313
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3313-4.x-HBase-0.98.patch
>
>
> Looks like part(s) of the following commits here were never committed to 
> 4.x-HBase-1.1 and master (which I stumbled on when getting a merge conflict 
> on a change I made): 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a6301c198bd71cb76b3805b43b50ecf1ee29c69d
> Also, it looks like there are tabs in the file. Please fix.





[jira] [Updated] (PHOENIX-3313) Commit missing changes to 4.x and master branches

2016-09-22 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3313:
--
Attachment: PHOENIX-3313-4.x-HBase-0.98.patch

Reverting the change to isolate test directories for concurrently running 
transaction tests for now, since the latest master build is successful without 
it. 

> Commit missing changes to 4.x and master branches
> -
>
> Key: PHOENIX-3313
> URL: https://issues.apache.org/jira/browse/PHOENIX-3313
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3313-4.x-HBase-0.98.patch
>
>
> Looks like part(s) of the following commits here were never committed to 
> 4.x-HBase-1.1 and master (which I stumbled on when getting a merge conflict 
> on a change I made): 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a6301c198bd71cb76b3805b43b50ecf1ee29c69d
> Also, it looks like there are tabs in the file. Please fix.





[jira] [Resolved] (PHOENIX-3308) Shutdown minicluster after parallel tests complete

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3308.
---
Resolution: Fixed

> Shutdown minicluster after parallel tests complete
> --
>
> Key: PHOENIX-3308
> URL: https://issues.apache.org/jira/browse/PHOENIX-3308
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3308_addendum.patch, 
> PHOENIX-3308_addendum2.patch, PHOENIX-3308_addendum3.patch
>
>
> We're currently not shutting down the mini cluster for tests that don't run 
> in their own mini cluster. This might be the cause of the hangs we're seeing.





[jira] [Commented] (PHOENIX-3313) Commit missing changes to 4.x and master branches

2016-09-22 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513839#comment-15513839
 ] 

Samarth Jain commented on PHOENIX-3313:
---

Oops, sorry about that. Will fix it right away.

> Commit missing changes to 4.x and master branches
> -
>
> Key: PHOENIX-3313
> URL: https://issues.apache.org/jira/browse/PHOENIX-3313
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
>
> Looks like part(s) of the following commits here were never committed to 
> 4.x-HBase-1.1 and master (which I stumbled on when getting a merge conflict 
> on a change I made): 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a6301c198bd71cb76b3805b43b50ecf1ee29c69d
> Also, it looks like there are tabs in the file. Please fix.





[jira] [Commented] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513747#comment-15513747
 ] 

Hudson commented on PHOENIX-3314:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1415 (See 
[https://builds.apache.org/job/Phoenix-master/1415/])
PHOENIX-3314 ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is 
(jamestaylor: rev b1682ddd541031437d2731c570a54fc6494c9801)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java


> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}





[jira] [Comment Edited] (PHOENIX-3264) Allow TRUE and FALSE to be used as literal constants

2016-09-22 Thread Eric Lomore (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513741#comment-15513741
 ] 

Eric Lomore edited comment on PHOENIX-3264 at 9/22/16 4:30 PM:
---

The exprConverter field in SqlToRelConverter is the only instance of 
SqlNodeToRexConverterImpl. It is a private field, and the only way to use it is 
through this hidden method, or through other methods that require instances of 
Blackboard (which is also not possible, since there is no public way to 
instantiate Blackboard).


was (Author: lomoree):
The exprConverter field in SqlToRelConverter is the only instance of 
SqlNodeToRexConverterImpl. It is a private field, and the only way to use it is 
through this protected method, or through other methods that require instances 
of Blackboard (which is also not possible, since there is no public way to 
instantiate Blackboard).

> Allow TRUE and FALSE to be used as literal constants
> 
>
> Key: PHOENIX-3264
> URL: https://issues.apache.org/jira/browse/PHOENIX-3264
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: Sql2RelImplementation.png, SqlLiteral.png, 
> SqlNodeToRexConverterImpl.png, SqlOptionNode.png, objectdependencies.png, 
> objectdependencies2.png, stacktrace.png
>
>
> Phoenix supports TRUE and FALSE as boolean literals, but perhaps Calcite 
> doesn't? Looks like this is leading to a fair number of failures.





[jira] [Commented] (PHOENIX-3264) Allow TRUE and FALSE to be used as literal constants

2016-09-22 Thread Eric Lomore (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513741#comment-15513741
 ] 

Eric Lomore commented on PHOENIX-3264:
--

The exprConverter field in SqlToRelConverter is the only instance of 
SqlNodeToRexConverterImpl. It is a private field, and the only way to use it is 
through this protected method, or through other methods that require instances 
of Blackboard (which is also not possible, since there is no public way to 
instantiate Blackboard).

> Allow TRUE and FALSE to be used as literal constants
> 
>
> Key: PHOENIX-3264
> URL: https://issues.apache.org/jira/browse/PHOENIX-3264
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: Sql2RelImplementation.png, SqlLiteral.png, 
> SqlNodeToRexConverterImpl.png, SqlOptionNode.png, objectdependencies.png, 
> objectdependencies2.png, stacktrace.png
>
>
> Phoenix supports TRUE and FALSE as boolean literals, but perhaps Calcite 
> doesn't? Looks like this is leading to a fair number of failures.





[jira] [Commented] (PHOENIX-3263) Allow comma before CONSTRAINT to be optional

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513709#comment-15513709
 ] 

James Taylor commented on PHOENIX-3263:
---

[~lomoree] - would you mind attaching a patch file so we can get this fix 
committed? To generate a patch file:
- commit your fix to your local repo with a commit message of the form 
"PHOENIX-#### <summary>", so for this one it'd be "PHOENIX-3263 Allow comma 
before CONSTRAINT to be optional"
- create a patch file by doing {{git format-patch --stdout HEAD^ > 
PHOENIX-<jira#>.patch}}, so for this one it'd be {{git format-patch --stdout 
HEAD^ > PHOENIX-3263.patch}}
- attach the patch file to this JIRA using the More->Attach Files menu

For more on this process, see https://phoenix.apache.org/contributing.html
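The steps above can be run end-to-end. Here is a minimal sketch using a 
throwaway repository; the JIRA number, file name, and user identity below are 
just placeholders, not anything prescribed by the Phoenix contribution guide:

```shell
set -e
# Work in a throwaway repository so the demo doesn't touch a real checkout.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"
git commit -q --allow-empty -m "baseline"

# 1. Commit the fix with a "PHOENIX-<jira#> <summary>" style message.
echo "fix" > Fix.java
git add Fix.java
git commit -q -m "PHOENIX-3263 Allow comma before CONSTRAINT to be optional"

# 2. Turn the last commit into a patch file named after the JIRA.
git format-patch --stdout HEAD^ > PHOENIX-3263.patch

# 3. This file is what gets attached to the JIRA via More->Attach Files.
grep "PHOENIX-3263" PHOENIX-3263.patch | head -1
```

Since {{--stdout}} is used, the full mbox-formatted patch lands in a single 
file regardless of how many commits sit between {{HEAD^}} and {{HEAD}}.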

> Allow comma before CONSTRAINT to be optional
> 
>
> Key: PHOENIX-3263
> URL: https://issues.apache.org/jira/browse/PHOENIX-3263
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> In Phoenix, the comma before the CONSTRAINT is optional (which matches 
> Oracle). Can this be supported in Calcite Phoenix?
> For example, this is ok in Phoenix:
> {code}
> CREATE TABLE T (
> K VARCHAR
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> as is this:
> {code}
> CREATE TABLE T (
> K VARCHAR,
> CONSTRAINT PK PRIMARY KEY (K));
> {code}
> If this is not feasible, we could require the comma and change the tests. 
> This is leading to a lot of failures.





[jira] [Assigned] (PHOENIX-3312) Dividing int constant values should result in decimal

2016-09-22 Thread Kevin Liew (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Liew reassigned PHOENIX-3312:
---

Assignee: Kevin Liew

> Dividing int constant values should result in decimal
> -
>
> Key: PHOENIX-3312
> URL: https://issues.apache.org/jira/browse/PHOENIX-3312
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: noam bulvik
>Assignee: Kevin Liew
> Fix For: 4.9.0
>
>
> When dividing int constants, the result is an int, while it should be decimal 
> (e.g. 1/3 = 0 and not 0.33).
> There is a workaround (write 1/3.0 instead), but this is not a solution.
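For context, the behavior described mirrors the usual integer-division rule in 
Java, Phoenix's implementation language. A standalone illustration (plain Java, 
not Phoenix code):

```java
// Illustration only: dividing two ints truncates toward zero, while making
// either operand a double promotes the division to floating point.
public class IntDivisionDemo {
    public static void main(String[] args) {
        int truncated = 1 / 3;        // both operands int -> integer division
        double workaround = 1 / 3.0;  // one double operand -> fp division
        System.out.println(truncated);   // 0
        System.out.println(workaround);  // 0.3333333333333333
    }
}
```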





[jira] [Assigned] (PHOENIX-2536) Return of aggregation functions do not have the correct data type and precision

2016-09-22 Thread Kevin Liew (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Liew reassigned PHOENIX-2536:
---

Assignee: Kevin Liew  (was: Sergey Soldatov)

> Return of aggregation functions do not have the correct data type and 
> precision
> ---
>
> Key: PHOENIX-2536
> URL: https://issues.apache.org/jira/browse/PHOENIX-2536
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: function, phoenix
> Fix For: 4.9.0
>
>
> ANSI SQL specifies that 
> {quote}If SUM is specified and DT is exact numeric with scale
>   S, then the data type of the result is exact numeric with
>   implementation-defined precision and scale S.
> ...
> If DT is approximate numeric, then the data type of the
>   result is approximate numeric with implementation-defined
>   precision not less than the precision of DT.{quote}
> However, when summing integer types (first operand) with float or double 
> (second operand), Phoenix returns the value with the same data type as the 
> first operand. 
> Doing a sum with the first operand being a FLOAT or DOUBLE will return a 
> DECIMAL with a fixed scale of 4. 
> Doing any multiplication or division will also result in a DECIMAL with scale 
> 4.
> In all of the cases outlined above, the return data type does not meet the 
> ANSI standard.





[jira] [Commented] (PHOENIX-3264) Allow TRUE and FALSE to be used as literal constants

2016-09-22 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513622#comment-15513622
 ] 

Julian Hyde commented on PHOENIX-3264:
--

Probably convertLiteral is intentionally private and there is a public method 
that can handle any expression. What is the existing API?

> Allow TRUE and FALSE to be used as literal constants
> 
>
> Key: PHOENIX-3264
> URL: https://issues.apache.org/jira/browse/PHOENIX-3264
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: Sql2RelImplementation.png, SqlLiteral.png, 
> SqlNodeToRexConverterImpl.png, SqlOptionNode.png, objectdependencies.png, 
> objectdependencies2.png, stacktrace.png
>
>
> Phoenix supports TRUE and FALSE as boolean literals, but perhaps Calcite 
> doesn't? Looks like this is leading to a fair number of failures.





[jira] [Updated] (PHOENIX-3264) Allow TRUE and FALSE to be used as literal constants

2016-09-22 Thread Eric Lomore (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lomore updated PHOENIX-3264:
-
Attachment: Sql2RelImplementation.png

> Allow TRUE and FALSE to be used as literal constants
> 
>
> Key: PHOENIX-3264
> URL: https://issues.apache.org/jira/browse/PHOENIX-3264
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: Sql2RelImplementation.png, SqlLiteral.png, 
> SqlNodeToRexConverterImpl.png, SqlOptionNode.png, objectdependencies.png, 
> objectdependencies2.png, stacktrace.png
>
>
> Phoenix supports TRUE and FALSE as boolean literals, but perhaps Calcite 
> doesn't? Looks like this is leading to a fair number of failures.





[jira] [Commented] (PHOENIX-3264) Allow TRUE and FALSE to be used as literal constants

2016-09-22 Thread Eric Lomore (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513605#comment-15513605
 ] 

Eric Lomore commented on PHOENIX-3264:
--

Thanks for the help everyone! 
There is one last piece missing: it appears Calcite lacks a public 
implementation for convertLiteral. Since Blackboard is a protected inner class, 
there is currently no way to access its convertLiteral implementation. It 
appears that a public access method was never added to its parent class, 
SqlToRelConverter.
I've attached a sample implementation below (untested, but the actual 
instantiation details of Blackboard shouldn't matter, as none of the passed 
parameters are used in the convertLiteral process).

!Sql2RelImplementation.png!

> Allow TRUE and FALSE to be used as literal constants
> 
>
> Key: PHOENIX-3264
> URL: https://issues.apache.org/jira/browse/PHOENIX-3264
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: Sql2RelImplementation.png, SqlLiteral.png, 
> SqlNodeToRexConverterImpl.png, SqlOptionNode.png, objectdependencies.png, 
> objectdependencies2.png, stacktrace.png
>
>
> Phoenix supports TRUE and FALSE as boolean literals, but perhaps Calcite 
> doesn't? Looks like this is leading to a fair number of failures.





[jira] [Updated] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3314:
--
Description: 
The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
for local indexes and when it is, it fails with the following errors:

{code}
Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
<<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 314.079 sec  <<< ERROR!
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)

testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 310.882 sec  <<< ERROR!
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
{code}


  was:


Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
<<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 314.079 sec  <<< ERROR!
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)

testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 310.882 sec  <<< ERROR!
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)




> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}





[jira] [Updated] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3314:
--
Summary: ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing 
for local indexes  (was: Ignore 
ImmutableIndexIT.testDropIfImmutableKeyValueColumn)

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> Change ImmutableIndexIT.testDropIfImmutableKeyValueColumn() back to being 
> ignored as this is how it was checked in here: 
> https://github.com/apache/phoenix/commit/a138cfe0f3df47091a0d9fe0285a8e572d76b252#diff-014cc89e6a6fea73cfc8274de9fe3e8b
> I think it never worked.





[jira] [Updated] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3314:
--
Description: 


Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
<<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 314.079 sec  <<< ERROR!
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)

testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 310.882 sec  <<< ERROR!
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)



  was:
Change ImmutableIndexIT.testDropIfImmutableKeyValueColumn() back to being 
ignored as this is how it was checked in here: 
https://github.com/apache/phoenix/commit/a138cfe0f3df47091a0d9fe0285a8e572d76b252#diff-014cc89e6a6fea73cfc8274de9fe3e8b

I think it never worked.


> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)





[jira] [Updated] (PHOENIX-3242) Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration

2016-09-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3242:
-
Attachment: PHOENIX-3242_v6.patch

Here is the patch rebased on the latest calcite branch.

> Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration
> -
>
> Key: PHOENIX-3242
> URL: https://issues.apache.org/jira/browse/PHOENIX-3242
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3242-wip.patch, PHOENIX-3242_v1.patch, 
> PHOENIX-3242_v2.patch, PHOENIX-3242_v3.patch, PHOENIX-3242_v4.patch, 
> PHOENIX-3242_v5.patch, PHOENIX-3242_v6.patch
>
>






[jira] [Updated] (PHOENIX-3315) property object is being modified

2016-09-22 Thread Prabhjyot Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhjyot Singh updated PHOENIX-3315:
-
Description: 
Creating this JIRA from mail thread 
https://lists.apache.org/thread.html/5029f1f09c95a76b6e60a0f80e6f145dedf0b51cfdc08b964fb3b060@%3Cuser.phoenix.apache.org%3E


I'm calling DriverManager.getConnection(url, properties) with the following
properties:

{code}
url -> "jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization 
=PROTOBUF"  
{code}
{code}
properties -> 
0 = {java.util.Hashtable$Entry@1491} "user" -> "phoenixuser" 
1 = {java.util.Hashtable$Entry@1492} "password" -> 
2 = {java.util.Hashtable$Entry@1493} "url" -> 
"jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization =PROTOBUF" 
3 = {java.util.Hashtable$Entry@1494} "hbase.client.retries.number" -> "4" 
4 = {java.util.Hashtable$Entry@1495} "driver" -> 
"org.apache.phoenix.jdbc.PhoenixDriver"  
{code}

With the above property/setting/config it returns a connection to the
specified URL, but it also modifies my properties object to the following:

{code}
properties -> 
0 = {java.util.Hashtable$Entry@2361} "serialization" -> "PROTOBUF" 
1 = {java.util.Hashtable$Entry@2362} "user" -> "phoenixuser" 
2 = {java.util.Hashtable$Entry@2363} "password" -> 
*3 = {java.util.Hashtable$Entry@2364} "url" -> "http://prabhu-3.novalocal:8765 
"* 
4 = {java.util.Hashtable$Entry@2365} "hbase.client.retries.number" -> "4" 
5 = {java.util.Hashtable$Entry@2366} "driver" -> 
"org.apache.phoenix.jdbc.PhoenixDriver"   
{code}

The above only happens when I'm using the *thin client*. Is this the expected
behaviour?

I plan to use this "properties" object after getting the connection for 
something else. 

Also, I'm using the following Maven dependency:
"org.apache.phoenix:phoenix-server-client:4.7.0-HBase-1.1" 

  was:
Creating this JIRA from mail thread 
https://lists.apache.org/thread.html/5029f1f09c95a76b6e60a0f80e6f145dedf0b51cfdc08b964fb3b060@%3Cuser.phoenix.apache.org%3E


I'm calling DriverManager.getConnection(url, properties) with the following
properties:

url -> "jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization 
=PROTOBUF"  

properties -> 
0 = {java.util.Hashtable$Entry@1491} "user" -> "phoenixuser" 
1 = {java.util.Hashtable$Entry@1492} "password" -> 
2 = {java.util.Hashtable$Entry@1493} "url" -> 
"jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization =PROTOBUF" 
3 = {java.util.Hashtable$Entry@1494} "hbase.client.retries.number" -> "4" 
4 = {java.util.Hashtable$Entry@1495} "driver" -> 
"org.apache.phoenix.jdbc.PhoenixDriver"  

With the above property/setting/config it returns a connection to the
specified URL, but it also modifies my properties object to the following:

properties -> 
0 = {java.util.Hashtable$Entry@2361} "serialization" -> "PROTOBUF" 
1 = {java.util.Hashtable$Entry@2362} "user" -> "phoenixuser" 
2 = {java.util.Hashtable$Entry@2363} "password" -> 
*3 = {java.util.Hashtable$Entry@2364} "url" -> "http://prabhu-3.novalocal:8765 
"* 
4 = {java.util.Hashtable$Entry@2365} "hbase.client.retries.number" -> "4" 
5 = {java.util.Hashtable$Entry@2366} "driver" -> 
"org.apache.phoenix.jdbc.PhoenixDriver"   

The above only happens when I'm using the *thin client*. Is this the expected
behaviour?

I plan to use this "properties" object after getting the connection for 
something else. 

Also, I'm using the following Maven dependency:
"org.apache.phoenix:phoenix-server-client:4.7.0-HBase-1.1" 


> property object is being modified
> -
>
> Key: PHOENIX-3315
> URL: https://issues.apache.org/jira/browse/PHOENIX-3315
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Prabhjyot Singh
>
> Creating this JIRA from mail thread 
> https://lists.apache.org/thread.html/5029f1f09c95a76b6e60a0f80e6f145dedf0b51cfdc08b964fb3b060@%3Cuser.phoenix.apache.org%3E
> I'm calling DriverManager.getConnection(url, properties) with the following
> properties:
> {code}
> url -> "jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization 
> =PROTOBUF"  
> {code}
> {code}
> properties -> 
> 0 = {java.util.Hashtable$Entry@1491} "user" -> "phoenixuser" 
> 1 = {java.util.Hashtable$Entry@1492} "password" -> 
> 2 = {java.util.Hashtable$Entry@1493} "url" -> 
> "jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization 
> =PROTOBUF" 
> 3 = {java.util.Hashtable$Entry@1494} "hbase.client.retries.number" -> "4" 
> 4 = {java.util.Hashtable$Entry@1495} "driver" -> 
> "org.apache.phoenix.jdbc.PhoenixDriver"  
> {code}
> With the above property/setting/config it returns a connection to the
> specified URL, but it also modifies my properties object to the following:
> {code}
> properties -> 
> 0 = {java.util.Hashtable$Entry@2361} "serialization" -> "PROTOBUF" 
> 1 = 

[jira] [Commented] (PHOENIX-3314) Ignore ImmutableIndexIT.testDropIfImmutableKeyValueColumn

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512852#comment-15512852
 ] 

Hudson commented on PHOENIX-3314:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1414 (See 
[https://builds.apache.org/job/Phoenix-master/1414/])
PHOENIX-3314 Ignore ImmutableIndexIT.testDropIfImmutableKeyValueColumn 
(jamestaylor: rev f50b5eca6da7fe0e790825985d57393eb93a681e)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java


> Ignore ImmutableIndexIT.testDropIfImmutableKeyValueColumn
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> Change ImmutableIndexIT.testDropIfImmutableKeyValueColumn() back to being 
> ignored as this is how it was checked in here: 
> https://github.com/apache/phoenix/commit/a138cfe0f3df47091a0d9fe0285a8e572d76b252#diff-014cc89e6a6fea73cfc8274de9fe3e8b
> I think it never worked.





[jira] [Created] (PHOENIX-3315) property object is being modified

2016-09-22 Thread Prabhjyot Singh (JIRA)
Prabhjyot Singh created PHOENIX-3315:


 Summary: property object is being modified
 Key: PHOENIX-3315
 URL: https://issues.apache.org/jira/browse/PHOENIX-3315
 Project: Phoenix
  Issue Type: Bug
Reporter: Prabhjyot Singh


Creating this JIRA from mail thread 
https://lists.apache.org/thread.html/5029f1f09c95a76b6e60a0f80e6f145dedf0b51cfdc08b964fb3b060@%3Cuser.phoenix.apache.org%3E


I'm calling DriverManager.getConnection(url, properties) with the following
properties:

url -> "jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization 
=PROTOBUF"  

properties -> 
0 = {java.util.Hashtable$Entry@1491} "user" -> "phoenixuser" 
1 = {java.util.Hashtable$Entry@1492} "password" -> 
2 = {java.util.Hashtable$Entry@1493} "url" -> 
"jdbc:phoenix:thin:url=http://prabhu-3.novalocal:8765;serialization =PROTOBUF" 
3 = {java.util.Hashtable$Entry@1494} "hbase.client.retries.number" -> "4" 
4 = {java.util.Hashtable$Entry@1495} "driver" -> 
"org.apache.phoenix.jdbc.PhoenixDriver"  

With the above property/setting/config it returns a connection to the
specified URL, but it also modifies my properties object to the following:

properties -> 
0 = {java.util.Hashtable$Entry@2361} "serialization" -> "PROTOBUF" 
1 = {java.util.Hashtable$Entry@2362} "user" -> "phoenixuser" 
2 = {java.util.Hashtable$Entry@2363} "password" -> 
*3 = {java.util.Hashtable$Entry@2364} "url" -> "http://prabhu-3.novalocal:8765 
"* 
4 = {java.util.Hashtable$Entry@2365} "hbase.client.retries.number" -> "4" 
5 = {java.util.Hashtable$Entry@2366} "driver" -> 
"org.apache.phoenix.jdbc.PhoenixDriver"   

The above only happens when I'm using the *thin client*. Is this the expected
behaviour?

I plan to use this "properties" object after getting the connection for 
something else. 

Also, I'm using the following Maven dependency:
"org.apache.phoenix:phoenix-server-client:4.7.0-HBase-1.1" 
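A common defensive workaround for a driver that mutates its Properties argument is to hand it a copy, so any in-place edits (here, the added "serialization" entry and the rewritten "url") never reach the caller's object. This is a sketch of that pattern; the PropertiesGuard class and copyOf helper below are hypothetical names, not Phoenix API:

```java
import java.util.Properties;

// Sketch: pass the driver a defensive copy so its in-place edits to the
// Properties argument never leak back into the caller's object.
public class PropertiesGuard {
    public static Properties copyOf(Properties original) {
        Properties copy = new Properties();
        copy.putAll(original); // shallow copy of all entries
        return copy;
    }
}
```

Usage would then be `DriverManager.getConnection(url, PropertiesGuard.copyOf(props))`, after which `props` retains exactly the entries the caller put in it.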





[jira] [Commented] (PHOENIX-3242) Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration

2016-09-22 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512529#comment-15512529
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3242:
--

Uploaded a patch addressing the review comments.
[~julianhyde]
bq. In this case "ALTER SYSTEM ADD JAR ..." seems about right.
I tried to change the COPY JAR query to ALTER SYSTEM ADD JAR, but query
parsing fails because Calcite already has ALTER SYSTEM statements to set/reset
connection properties. So for now I changed it to UPLOAD JARS, as
[~jamestaylor] suggested.
bq. I'm not sure I like the idea of copy-pasting SqlParserTest from Calcite 
into Phoenix. It will break fairly often. Is it possible to create a sub-class 
of the test instead? 
In the latest patch I extended SqlParserTest and added tests for the DDL
statements.

If I get a +1, I will rebase to the latest code and commit it.
Ping [~maryannxue]

> Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration
> -
>
> Key: PHOENIX-3242
> URL: https://issues.apache.org/jira/browse/PHOENIX-3242
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3242-wip.patch, PHOENIX-3242_v1.patch, 
> PHOENIX-3242_v2.patch, PHOENIX-3242_v3.patch, PHOENIX-3242_v4.patch, 
> PHOENIX-3242_v5.patch
>
>






[jira] [Updated] (PHOENIX-3242) Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration

2016-09-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3242:
-
Attachment: PHOENIX-3242_v5.patch

> Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration
> -
>
> Key: PHOENIX-3242
> URL: https://issues.apache.org/jira/browse/PHOENIX-3242
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3242-wip.patch, PHOENIX-3242_v1.patch, 
> PHOENIX-3242_v2.patch, PHOENIX-3242_v3.patch, PHOENIX-3242_v4.patch, 
> PHOENIX-3242_v5.patch
>
>






[jira] [Updated] (PHOENIX-3242) Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration

2016-09-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3242:
-
Attachment: (was: PHOENIX-3163_v4.patch)

> Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration
> -
>
> Key: PHOENIX-3242
> URL: https://issues.apache.org/jira/browse/PHOENIX-3242
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3242-wip.patch, PHOENIX-3242_v1.patch, 
> PHOENIX-3242_v2.patch, PHOENIX-3242_v3.patch, PHOENIX-3242_v4.patch
>
>






[jira] [Updated] (PHOENIX-3242) Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration

2016-09-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3242:
-
Attachment: PHOENIX-3242_v4.patch

> Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration
> -
>
> Key: PHOENIX-3242
> URL: https://issues.apache.org/jira/browse/PHOENIX-3242
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3242-wip.patch, PHOENIX-3242_v1.patch, 
> PHOENIX-3242_v2.patch, PHOENIX-3242_v3.patch, PHOENIX-3242_v4.patch
>
>






[jira] [Updated] (PHOENIX-3242) Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration

2016-09-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3242:
-
Attachment: PHOENIX-3163_v4.patch

> Support CREATE OR REPLACE FUNCTION in Phoenix-Calcite Integration
> -
>
> Key: PHOENIX-3242
> URL: https://issues.apache.org/jira/browse/PHOENIX-3242
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3163_v4.patch, PHOENIX-3242-wip.patch, 
> PHOENIX-3242_v1.patch, PHOENIX-3242_v2.patch, PHOENIX-3242_v3.patch
>
>






[jira] [Created] (PHOENIX-3314) Ignore ImmutableIndexIT.testDropIfImmutableKeyValueColumn

2016-09-22 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3314:
-

 Summary: Ignore ImmutableIndexIT.testDropIfImmutableKeyValueColumn
 Key: PHOENIX-3314
 URL: https://issues.apache.org/jira/browse/PHOENIX-3314
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.9.0


Change ImmutableIndexIT.testDropIfImmutableKeyValueColumn() back to being 
ignored as this is how it was checked in here: 
https://github.com/apache/phoenix/commit/a138cfe0f3df47091a0d9fe0285a8e572d76b252#diff-014cc89e6a6fea73cfc8274de9fe3e8b

I think it never worked.





[jira] [Commented] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512471#comment-15512471
 ] 

Hadoop QA commented on PHOENIX-3181:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12829768/PHOENIX-3181_v5.patch
  against master branch at commit ad07077c7271398a17d14dc4cd11b8bee3b7d4d9.
  ATTACHMENT ID: 12829768

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
38 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-[LocalIndexIT_isNamespaceMapped=false]
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AlterTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.tx.TransactionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.trace.PhoenixTracingEndToEndIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ImmutableIndexIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/591//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/591//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/591//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/591//console

This message is automatically generated.

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.
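The precondition for running test methods concurrently is that each method creates tables under names that cannot collide. That idea can be sketched as follows; the UniqueNames class and its generator are illustrative stand-ins, not Phoenix's actual helper:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: every test method derives its own table name, so methods can run
// in parallel without DDL collisions on a shared mini cluster.
public class UniqueNames {
    private static final AtomicInteger COUNTER = new AtomicInteger();

    public static String generateUniqueTableName() {
        // The counter keeps names unique across threads even if two calls
        // land on the same nanoTime tick; the timestamp keeps them unique
        // across reruns against a cluster that wasn't torn down.
        return "T" + System.nanoTime() + "_" + COUNTER.incrementAndGet();
    }
}
```

Each test then does `String tableName = UniqueNames.generateUniqueTableName();` instead of hard-coding a shared name.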





[jira] [Commented] (PHOENIX-2930) Cannot resolve columns aliased to its own name

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512451#comment-15512451
 ] 

Hudson commented on PHOENIX-2930:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1413 (See 
[https://builds.apache.org/job/Phoenix-master/1413/])
PHOENIX-2930 Cannot resolve columns aliased to its own name (maryannxue: rev 
2aa2d1f5b6cc3c2e71e73982bea58380f24d3f87)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/parse/ParseNodeRewriter.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/GroupByCaseIT.java


> Cannot resolve columns aliased to its own name
> --
>
> Key: PHOENIX-2930
> URL: https://issues.apache.org/jira/browse/PHOENIX-2930
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Kevin Liew
>Assignee: Ankit Singhal
>  Labels: alias
> Fix For: 4.9.0
>
> Attachments: PHOENIX-2930.2.patch, PHOENIX-2930.patch
>
>
> Tableau generates queries that alias a fully-qualified column name to its 
> shortened name.
> Similar to:
> {code}
> create table test (pk integer primary key);
> select test.pk as pk from test group by pk;
> {code}
> Phoenix reports:
> {code}
> org.apache.calcite.avatica.proto.Responses$ErrorResponse
> java.lang.RuntimeException: 
> org.apache.phoenix.schema.AmbiguousColumnException: ERROR 502 (42702): Column 
> reference ambiguous or duplicate names. columnName=PK
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:671)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:695)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:208)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1157)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1131)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>   at 
> org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.phoenix.schema.AmbiguousColumnException: ERROR 502 
> (42702): Column reference ambiguous or duplicate names. columnName=PK
>   at 
> org.apache.phoenix.parse.ParseNodeRewriter.visit(ParseNodeRewriter.java:406)
>   at 
> org.apache.phoenix.compile.StatementNormalizer.visit(StatementNormalizer.java:177)
>   at 
> org.apache.phoenix.compile.StatementNormalizer.visit(StatementNormalizer.java:58)
>   at 
> org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
>   at 
> org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:111)
>   at 
> org.apache.phoenix.compile.StatementNormalizer.normalize(StatementNormalizer.java:107)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.getMetaData(PhoenixPreparedStatement.java:223)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:689)
>   ... 15 more
> {code}





[jira] [Commented] (PHOENIX-3308) Shutdown minicluster after parallel tests complete

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512452#comment-15512452
 ] 

Hudson commented on PHOENIX-3308:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1413 (See 
[https://builds.apache.org/job/Phoenix-master/1413/])
PHOENIX-3308 Shutdown minicluster after parallel tests complete (jamestaylor: 
rev ad07077c7271398a17d14dc4cd11b8bee3b7d4d9)
* (edit) 
phoenix-pherf/src/it/java/org/apache/phoenix/pherf/ResultBaseTestIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelRunListener.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java
* (edit) pom.xml
* (edit) phoenix-pherf/src/it/java/org/apache/phoenix/pherf/SchemaReaderIT.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* (edit) 
phoenix-flume/src/it/java/org/apache/phoenix/flume/RegexEventSerializerIT.java
* (edit) phoenix-pig/src/it/java/org/apache/phoenix/pig/BasePigIT.java
* (edit) 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
* (delete) 
phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndUncompressedWALInHBase_094_9_IT.java
* (edit) phoenix-flume/src/it/java/org/apache/phoenix/flume/PhoenixSinkIT.java


> Shutdown minicluster after parallel tests complete
> --
>
> Key: PHOENIX-3308
> URL: https://issues.apache.org/jira/browse/PHOENIX-3308
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3308_addendum.patch, 
> PHOENIX-3308_addendum2.patch, PHOENIX-3308_addendum3.patch
>
>
> We're currently not shutting down the mini cluster for tests that don't run 
> in their own mini cluster. This might be the cause of the hangs we're seeing.





[jira] [Updated] (PHOENIX-3313) Commit missing changes to 4.x and master branches

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3313:
--
Fix Version/s: 4.9.0

> Commit missing changes to 4.x and master branches
> -
>
> Key: PHOENIX-3313
> URL: https://issues.apache.org/jira/browse/PHOENIX-3313
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.9.0
>
>
> Looks like part(s) of the following commit were never committed to 
> 4.x-HBase-1.1 and master (which I stumbled on when I got a merge conflict 
> on a change I made): 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a6301c198bd71cb76b3805b43b50ecf1ee29c69d
> Also, it looks like there are tabs in the file. Please fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3313) Commit missing changes to 4.x and master branches

2016-09-22 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3313:
-

 Summary: Commit missing changes to 4.x and master branches
 Key: PHOENIX-3313
 URL: https://issues.apache.org/jira/browse/PHOENIX-3313
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain


Looks like part(s) of the following commit were never committed to 
4.x-HBase-1.1 and master (which I stumbled on when I got a merge conflict on 
a change I made): 
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a6301c198bd71cb76b3805b43b50ecf1ee29c69d

Also, it looks like there are tabs in the file. Please fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3304) Tracing tests failing

2016-09-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3304.
---
Resolution: Fixed

> Tracing tests failing
> -
>
> Key: PHOENIX-3304
> URL: https://issues.apache.org/jira/browse/PHOENIX-3304
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3304.patch
>
>
> Sample run:
> https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/lastCompletedBuild/testReport/
> I reverted my changes for PHOENIX-3174 since that was the last commit that 
> modified PhoenixMetricsSink, but that didn't fix the test failures. It may 
> have to do with the refactoring to enable running test methods in parallel. 
> FYI, [~jamestaylor]. I am able to reproduce the failure locally on the 
> 4.x-HBase-1.1 branch too. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3312) Dividing int constant values should result in decimal

2016-09-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512288#comment-15512288
 ] 

James Taylor commented on PHOENIX-3312:
---

[~kliew] - would you have some spare cycles for this one?

> Dividing int constant values should result in decimal
> -
>
> Key: PHOENIX-3312
> URL: https://issues.apache.org/jira/browse/PHOENIX-3312
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: noam bulvik
> Fix For: 4.9.0
>
>
> When dividing int constants, the result is an int, while it should be a 
> decimal (e.g. 1/3 = 0 rather than 0.33). 
> There is a workaround of writing 1/3.0, but this is not a solution.
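The truncation described in the report matches standard integer-division semantics, which the following standalone Java snippet mirrors. This is only an illustration of the behavior and the 1/3.0 workaround; it does not show Phoenix's own expression-evaluation code.

```java
// Demonstrates the integer-division truncation reported in PHOENIX-3312
// and the "promote one operand to a decimal type" workaround.
public class IntDivisionDemo {
    public static void main(String[] args) {
        // Both operands are ints, so the division truncates toward zero.
        int truncated = 1 / 3;          // 0, mirroring the reported 1/3 = 0
        // Writing one operand as a decimal literal forces decimal division,
        // which is the 1/3.0 workaround mentioned in the issue.
        double decimal = 1 / 3.0;       // 0.333...
        System.out.println(truncated);
        System.out.println(decimal);
    }
}
```

The proposed fix is for the query compiler to treat a division of integer constants as producing a decimal result directly, so users no longer need to rewrite literals by hand.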



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)