[jira] [Commented] (PHOENIX-2734) Literal expressions for UNSIGNED_DATE/UNSIGNED_TIME/etc

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196746#comment-15196746
 ] 

James Taylor commented on PHOENIX-2734:
---

Thanks, [~sergey.soldatov]. Looks good. Just wanted to confirm that all the 
tests pass with the change before it gets committed.

> Literal expressions for UNSIGNED_DATE/UNSIGNED_TIME/etc
> ---
>
> Key: PHOENIX-2734
> URL: https://issues.apache.org/jira/browse/PHOENIX-2734
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2734-2.patch, PHOENIX-2734.patch
>
>
> At the moment LiteralExpression accepts conversion from varchar to PDate, 
> PTimestamp, and PTime, but not for the PUnsigned... types. Is there any 
> reason for that, or can we just add an additional check for the Unsigned 
> types as well? The problem is that there is no way to load values into 
> unsigned date/time types using bulk load (for regular upserts there is a 
> workaround of using the TO_TIME/etc. functions). [~pctony] what do you think?
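For illustration only, the shape of the change under discussion might look like the toy sketch below. The class and the string type names are hypothetical stand-ins; the real check compares PDataType instances inside LiteralExpression.newConstant.

```java
// Toy model: the set of temporal types a varchar literal may be coerced
// to, with the unsigned variants proposed by this issue included.
public class TemporalCoercionCheck {
    public static boolean isCoercibleFromVarchar(String typeName) {
        switch (typeName) {
            case "DATE":
            case "TIME":
            case "TIMESTAMP":
            // proposed additions for PHOENIX-2734:
            case "UNSIGNED_DATE":
            case "UNSIGNED_TIME":
            case "UNSIGNED_TIMESTAMP":
                return true;
            default:
                return false;
        }
    }
}
```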



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2178) Tracing - total time listed for a certain trace does not correlate with query wall clock time

2016-03-15 Thread Ashish Tiwari (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196740#comment-15196740
 ] 

Ashish Tiwari commented on PHOENIX-2178:


[~mujtabachohan] [~giacomotaylor]

Hi All,

My name is Ashish and I am very interested in working on this project. First 
of all, I would like to thank you for mentoring the project. I have started 
working on the proposal and would like some input on it:

1. I would like to reproduce the inconsistencies in SQL trace results. I am 
setting up my local dev environment by following 
https://phoenix.apache.org/develop.html. Are there any performance tests 
that I can use to reproduce this?

2. Is there a doc on how the tracing works? Which modules are responsible 
for reporting stats, and how are these stats collected?

3. How should I go about dividing the tasks for this project?

About me:
I have an undergraduate degree in computer science and am starting my 
master's degree in computer science at Arizona State University. I also have 
5 years of professional experience developing enterprise web applications in 
J2EE.

Thanks,
Ashish

> Tracing - total time listed for a certain trace does not correlate with query 
> wall clock time
> -
>
> Key: PHOENIX-2178
> URL: https://issues.apache.org/jira/browse/PHOENIX-2178
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.5.0
>Reporter: Mujtaba Chohan
>  Labels: gsoc2016, tracing
>
> Wall clock time for a count(*) over a large table is 3+ms; however, the 
> total sum(end_time - start_time) is less than 250ms for the trace_id 
> generated for this count(*) query.
> {code}
> Output of trace table:
> select sum(end_time  - start_time),count(*), description from 
> SYSTEM.TRACING_STATS WHERE TRACE_ID=X group by description;
> +------------------------------+----------+--------------------------+
> | SUM((END_TIME - START_TIME)) | COUNT(1) | DESCRIPTION              |
> +------------------------------+----------+--------------------------+
> | 0                            | 3        | ClientService.Scan       |
> | 240                          | 253879   | HFileReaderV2.readBlock  |
> | 1                            | 1        | Scanner opened on server |
> +------------------------------+----------+--------------------------+
> {code}





[jira] [Commented] (PHOENIX-2714) Correct byte estimate in BaseResultIterators and expose as interface

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196688#comment-15196688
 ] 

James Taylor commented on PHOENIX-2714:
---

Are you still planning on getting us a patch soon on this one, [~maryannxue]?

> Correct byte estimate in BaseResultIterators and expose as interface
> 
>
> Key: PHOENIX-2714
> URL: https://issues.apache.org/jira/browse/PHOENIX-2714
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: statistics
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2714.patch
>
>
> The bytes are accumulated even if the range intersect is empty (produces a 
> null scan).
> {code}
> while (guideIndex < gpsSize && 
> (currentGuidePost.compareTo(endKey) <= 0 || endKey.length == 0)) {
> Scan newScan = scanRanges.intersectScan(scan, 
> currentKeyBytes, currentGuidePostBytes, keyOffset,
> false);
> estimatedRows += gps.getRowCounts().get(guideIndex);
> estimatedSize += gps.getByteCounts().get(guideIndex);
> scans = addNewScan(parallelScans, scans, newScan, 
> currentGuidePostBytes, false, regionLocation);
> currentKeyBytes = currentGuidePost.copyBytes();
> currentGuidePost = PrefixByteCodec.decode(decoder, input);
> currentGuidePostBytes = currentGuidePost.copyBytes();
> guideIndex++;
> }
> {code}
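As a runnable toy model of the likely fix (an assumption on my part: skip the accumulation whenever the intersection yields a null scan, i.e. guard it with newScan != null):

```java
// Toy model of the guarded accumulation: a guidepost's byte count is
// added only when its range intersection produced a scan (the boolean
// stands in for newScan != null in BaseResultIterators).
public class GuidePostEstimate {
    public static long estimateSize(long[] byteCounts, boolean[] producedScan) {
        long estimatedSize = 0;
        for (int i = 0; i < byteCounts.length; i++) {
            if (producedScan[i]) {
                estimatedSize += byteCounts[i];
            }
        }
        return estimatedSize;
    }
}
```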





[jira] [Commented] (PHOENIX-2772) Upgrade pom to sqlline 1.1.9

2016-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196644#comment-15196644
 ] 

Hudson commented on PHOENIX-2772:
-

FAILURE: Integrated in Phoenix-master #1174 (See 
[https://builds.apache.org/job/Phoenix-master/1174/])
PHOENIX-2772 Upgrade pom to sqlline 1.1.19 (jtaylor: rev 
6eadadd2a4faa640fec4b141c357f7d4e2180dcc)
* pom.xml


> Upgrade pom to sqlline 1.1.9
> 
>
> Key: PHOENIX-2772
> URL: https://issues.apache.org/jira/browse/PHOENIX-2772
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
>
> Upgrade to sqlline 1.1.9, which is hosted in Maven Central now.





[jira] [Commented] (PHOENIX-2773) Remove unnecessary repos in poms

2016-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196643#comment-15196643
 ] 

Hudson commented on PHOENIX-2773:
-

FAILURE: Integrated in Phoenix-master #1174 (See 
[https://builds.apache.org/job/Phoenix-master/1174/])
PHOENIX-2773 Remove unnecessary repos in poms (jtaylor: rev 
e7bcfe4f2b889575f7a20e0e2d9f76175282108c)
* pom.xml


> Remove unnecessary repos in poms
> 
>
> Key: PHOENIX-2773
> URL: https://issues.apache.org/jira/browse/PHOENIX-2773
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2773.patch
>
>
> It seems we have a lot of cruft in our pom in terms of repos. Now that 
> sqlline is hosted in maven central (see PHOENIX-2772), it seems like we 
> should only need the one repo 
> {{https://repository.apache.org/content/repositories/releases}}.
> {code}
>   <repositories>
>     <repository>
>       <id>apache release</id>
>       <url>https://repository.apache.org/content/repositories/releases/</url>
>     </repository>
>     <repository>
>       <id>conjars.org</id>
>       <url>http://conjars.org/repo</url>
>     </repository>
>     <repository>
>       <id>apache snapshot</id>
>       <url>https://repository.apache.org/content/repositories/snapshots/</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>     <repository>
>       <id>sonatype-nexus-snapshots</id>
>       <name>Sonatype Nexus Snapshots</name>
>       <url>https://oss.sonatype.org/content/repositories/snapshots</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>   </repositories>
> {code}





[jira] [Commented] (PHOENIX-2768) Add test for case sensitive table with index hint

2016-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196642#comment-15196642
 ] 

Hudson commented on PHOENIX-2768:
-

FAILURE: Integrated in Phoenix-master #1174 (See 
[https://builds.apache.org/job/Phoenix-master/1174/])
PHOENIX-2768 Add test for case sensitive table with index hint (jtaylor: rev 
d8b45bcb3d923124455788d27718aafd3c0158c1)
* phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java


> Add test for case sensitive table with index hint
> -
>
> Key: PHOENIX-2768
> URL: https://issues.apache.org/jira/browse/PHOENIX-2768
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2768.patch
>
>






[jira] [Commented] (PHOENIX-2774) MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196641#comment-15196641
 ] 

James Taylor commented on PHOENIX-2774:
---

I like it, [~churromorales]. WDYT, [~jesse_yates]? Are we losing anything by 
not deriving from NonLazyKeyValueScanner and, in particular, by not having 
this code:
{code}
  @Override
  public boolean requestSeek(Cell kv, boolean forward, boolean useBloom)
  throws IOException {
return doRealSeek(this, kv, forward);
  }

  public static boolean doRealSeek(KeyValueScanner scanner,
  Cell kv, boolean forward) throws IOException {
return forward ? scanner.reseek(kv) : scanner.seek(kv);
  }
{code}

Minor nit - maybe rename PhoenixKeyValueScanner to ReseekableScanner or 
something like that?

> MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner
> 
>
> Key: PHOENIX-2774
> URL: https://issues.apache.org/jira/browse/PHOENIX-2774
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: PHOENIX-2774.patch
>
>
> Relates to PHOENIX-2756, trying to remove all dependencies on @Private 
> interfaces in HBase.





[jira] [Commented] (PHOENIX-2734) Literal expressions for UNSIGNED_DATE/UNSIGNED_TIME/etc

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196621#comment-15196621
 ] 

James Taylor commented on PHOENIX-2734:
---

IllegalFormatException won't be caught by that catch, so it'd have the same 
behavior as before.

> Literal expressions for UNSIGNED_DATE/UNSIGNED_TIME/etc
> ---
>
> Key: PHOENIX-2734
> URL: https://issues.apache.org/jira/browse/PHOENIX-2734
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2734.patch
>
>
> At the moment LiteralExpression accepts conversion from varchar to PDate, 
> PTimestamp, and PTime, but not for the PUnsigned... types. Is there any 
> reason for that, or can we just add an additional check for the Unsigned 
> types as well? The problem is that there is no way to load values into 
> unsigned date/time types using bulk load (for regular upserts there is a 
> workaround of using the TO_TIME/etc. functions). [~pctony] what do you think?





[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196616#comment-15196616
 ] 

James Taylor commented on PHOENIX-2743:
---

[~warwithin] - yes, it's a dup, but both have implementations against them. 
Hopefully they'll talk and we'll get the best of both worlds. :-)

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join and sort-merge join, but it does not handle 
> big-to-big joins well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables 
> on HBase using HiveQL.
> The hive-phoenix-handler is much faster than the hive-hbase-handler because 
> it applies predicate push down.
> I am publishing the source code to GitHub for contribution; it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.





[jira] [Updated] (PHOENIX-2769) Order by desc return wrong results

2016-03-15 Thread Joseph Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Sun updated PHOENIX-2769:

Affects Version/s: 4.7.0
   4.6.0
  Environment: 3 CentOS servers

> Order by desc return wrong results
> --
>
> Key: PHOENIX-2769
> URL: https://issues.apache.org/jira/browse/PHOENIX-2769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.7.0
> Environment: 3 CentOS servers
>Reporter: Joseph Sun
>  Labels: test
>
> create table access_logs (
>   event_time date not null,
>   uuid varchar(36) not null,
>   event_type varchar(32),
>   CONSTRAINT pk PRIMARY KEY (event_time,uuid)
> ) VERSIONS=1,SALT_BUCKETS=6,IMMUTABLE_ROWS=true;
> I inserted 2,000,000 records into access_logs, with event_time between 
> 2016-01-06 and 2016-03-15.
> I executed this SQL:
> >select event_time from access_logs  order by event_time asc limit 10;
> +--------------------------+
> |        EVENT_TIME        |
> +--------------------------+
> | 2016-01-06 18:41:54.000  |
> | 2016-01-06 19:56:46.000  |
> | 2016-01-06 20:25:12.000  |
> | 2016-01-06 20:41:37.000  |
> | 2016-01-06 20:46:20.000  |
> | 2016-01-06 20:53:10.000  |
> | 2016-01-06 21:04:09.000  |
> | 2016-01-07 01:22:57.000  |
> | 2016-01-07 10:59:11.000  |
> | 2016-01-07 12:52:56.000  |
> +--------------------------+
>  
> > select event_time from access_logs order by event_time desc limit 10;
> +--------------------------+
> |        EVENT_TIME        |
> +--------------------------+
> | 2016-02-11 13:07:25.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:22.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:20.000  |
> | 2016-02-11 13:07:20.000  |
> > select event_time from access_logs where event_time>to_date('2016-02-11 
> > 13:07:25') order by event_time desc limit 10;
> +--------------------------+
> |        EVENT_TIME        |
> +--------------------------+
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> +--------------------------+
> Checking the returned results, 'order by event_time desc' does not return 
> the correct results.





[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-03-15 Thread YoungWoo Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196603#comment-15196603
 ] 

YoungWoo Kim commented on PHOENIX-2743:
---

Dup of PHOENIX-331?

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join and sort-merge join, but it does not handle 
> big-to-big joins well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables 
> on HBase using HiveQL.
> The hive-phoenix-handler is much faster than the hive-hbase-handler because 
> it applies predicate push down.
> I am publishing the source code to GitHub for contribution; it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.





[jira] [Commented] (PHOENIX-1523) Make it easy to provide a tab literal as separator for CSV imports

2016-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196587#comment-15196587
 ] 

Hadoop QA commented on PHOENIX-1523:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12793628/PHOENIX-1523-3.patch
  against master branch at commit e7bcfe4f2b889575f7a20e0e2d9f76175282108c.
  ATTACHMENT ID: 12793628

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
20 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:64)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/276//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/276//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/276//console

This message is automatically generated.

> Make it easy to provide a tab literal as separator for CSV imports
> --
>
> Key: PHOENIX-1523
> URL: https://issues.apache.org/jira/browse/PHOENIX-1523
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Gabriel Reid
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-1523-2.patch, PHOENIX-1523-3.patch, 
> PHOENIX-1523.patch
>
>
> When importing CSV data that is tab-separated, it's quite inconvenient to 
> supply a tab literal on the command line to override the default separator 
> (you need to press Ctrl-V, followed by Tab, which isn't well known).
> It would be better if it were possible to supply something that looks like 
> a tab literal, i.e. '\t', or something else like '', to reduce the 
> complexity of supplying a tab separator.
> Regardless of the approach, the way to supply a tab character should also be 
> added to the documentation (or to the command line docs provided by the tool).
> This should be done for both the MapReduce bulk loader and psql.
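A minimal sketch of what the option handling could look like (class and method names are hypothetical, not the bulk loader's actual API): translate the two-character sequence backslash-t into a real tab before using it as the separator.

```java
// Hypothetical sketch: accept "\t" (a backslash followed by 't') on the
// command line and translate it into a real tab character; otherwise
// require a single-character separator.
public class SeparatorOption {
    public static char parseSeparator(String opt) {
        if ("\\t".equals(opt)) {
            return '\t';
        }
        if (opt.length() != 1) {
            throw new IllegalArgumentException(
                    "Separator must be a single character: " + opt);
        }
        return opt.charAt(0);
    }
}
```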





[jira] [Resolved] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2765.
---
Resolution: Not A Problem

> PhoenixConnection commit throws ArrayIndexOutOfBoundsException after 
> upserting multiple rows with dynamic columns
> -
>
> Key: PHOENIX-2765
> URL: https://issues.apache.org/jira/browse/PHOENIX-2765
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Clifford Ker
>Assignee: Thomas D'Silva
>
> 1. Create a Phoenix table with dynamic columns.
> 2. Obtain a PhoenixConnection and PreparedStatement.
> 3. Use the PreparedStatement to upsert multiple records using dynamic columns.
> 4. Commit the transaction.
> 5. Phoenix will throw ArrayIndexOutOfBoundsException.
> java.lang.ArrayIndexOutOfBoundsException: 14
> at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:384)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:419)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:476)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:473)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:473)
> at 
> phoenix.connection.ProtectedPhoenixConnection.commit(ProtectedPhoenixConnection.java:41
> Pseudo Code and DML
> CREATE TABLE IF NOT EXISTS FOO_RECORD
> (
> KEY1 char(15) not null,
> KEY2 char(15) not null,
> KEY3 SMALLINT not null,
> STATUS TINYINT not null,  
> KEY4 varchar(30), 
> KEY5 char(15) not null,   
> FOO_RECORD_ID char(15),   
> SYSMODSTAMP TIMESTAMP,
> "av"."_" char(1), -- Column family av
> "rv"."_" char(1), -- Column family rv
> "fd"."_" char(1), -- Column family dv
> CONSTRAINT PK PRIMARY KEY (
>   KEY1,
>   KEY2,
>   KEY3,
>   STATUS,
>   KEY4,
> KEY5
> )
> ) VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1
> try {
> PreparedStatement preparedStmt = 
> phoenixConnection.prepareStatement(upsertStatement);
> for (DARecord record : daRecords) {
> prepareToUpsertFooRecord(preparedStmt, fieldInfo, record);
> }
> preparedStmt.close();
> phoenixConnection.commit();
> } finally {
> phoenixConnection.close();
> }
> UPSERT INTO FOO_RECORD(
> KEY1, KEY2, KEY3, STATUS, KEY4, KEY5, SYSMODSTAMP, "fa"."FIELD1" VARCHAR, 
> "av"."FIELD2" VARCHAR, "av"."FIELD3" VARCHAR, "av"."FIELD4" VARCHAR, 
> "rv"."FIELD1" VARCHAR, "rv"."FIELD2" VARCHAR, "rv"."FIELD3" VARCHAR, 
> "rv"."FIELD4" VARCHAR, "rv"."FIELD5" VARCHAR, "rv"."FIELD6" VARCHAR, 
> "rv"."FIELD7" VARCHAR, "rv"."FIELD8" VARCHAR, "rv"."FIELD9" VARCHAR, 
> "rv"."FIELD10" VARCHAR, "rv"."FIELD11" VARCHAR, "rv"."FIELD12" VARCHAR, 
> "rv"."FIELD13" VARCHAR, "rv"."FIELD14" VARCHAR, "rv"."FIELD15" VARCHAR, 
> "fd"."FIELD1" TINYINT, "fd"."FIELD2" TINYINT, "fd"."FIELD3" TINYINT, 
> "fd"."FIELD4" TINYINT, "fd"."FIELD5" TINYINT, "fd"."FIELD6" TINYINT, 
> "fd"."FIELD7" TINYINT, "fd"."FIELD8" TINYINT, "fd"."FIELD9" TINYINT, 
> "fd"."FIELD10" TINYINT, "fd"."FIELD11" TINYINT, "fd"."FIELD12" TINYINT, 
> "fd"."FIELD13" TINYINT, "fd"."FIELD13" TINYINT, "fd"."FIELD15" TINYINT) 
> VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)





[jira] [Commented] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread Clifford Ker (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196572#comment-15196572
 ] 

Clifford Ker commented on PHOENIX-2765:
---

Upgrading to 4.7 definitely fixed the problem. Thanks to everyone for jumping in!

> PhoenixConnection commit throws ArrayIndexOutOfBoundsException after 
> upserting multiple rows with dynamic columns
> -
>
> Key: PHOENIX-2765
> URL: https://issues.apache.org/jira/browse/PHOENIX-2765
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Clifford Ker
>Assignee: Thomas D'Silva
>
> 1. Create a Phoenix table with dynamic columns.
> 2. Obtain a PhoenixConnection and PreparedStatement.
> 3. Use the PreparedStatement to upsert multiple records using dynamic columns.
> 4. Commit the transaction.
> 5. Phoenix will throw ArrayIndexOutOfBoundsException.
> java.lang.ArrayIndexOutOfBoundsException: 14
> at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:384)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:419)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:476)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:473)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:473)
> at 
> phoenix.connection.ProtectedPhoenixConnection.commit(ProtectedPhoenixConnection.java:41
> Pseudo Code and DML
> CREATE TABLE IF NOT EXISTS FOO_RECORD
> (
> KEY1 char(15) not null,
> KEY2 char(15) not null,
> KEY3 SMALLINT not null,
> STATUS TINYINT not null,  
> KEY4 varchar(30), 
> KEY5 char(15) not null,   
> FOO_RECORD_ID char(15),   
> SYSMODSTAMP TIMESTAMP,
> "av"."_" char(1), -- Column family av
> "rv"."_" char(1), -- Column family rv
> "fd"."_" char(1), -- Column family dv
> CONSTRAINT PK PRIMARY KEY (
>   KEY1,
>   KEY2,
>   KEY3,
>   STATUS,
>   KEY4,
> KEY5
> )
> ) VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1
> try {
> PreparedStatement preparedStmt = 
> phoenixConnection.prepareStatement(upsertStatement);
> for (DARecord record : daRecords) {
> prepareToUpsertFooRecord(preparedStmt, fieldInfo, record);
> }
> preparedStmt.close();
> phoenixConnection.commit();
> } finally {
> phoenixConnection.close();
> }
> UPSERT INTO FOO_RECORD(
> KEY1, KEY2, KEY3, STATUS, KEY4, KEY5, SYSMODSTAMP, "fa"."FIELD1" VARCHAR, 
> "av"."FIELD2" VARCHAR, "av"."FIELD3" VARCHAR, "av"."FIELD4" VARCHAR, 
> "rv"."FIELD1" VARCHAR, "rv"."FIELD2" VARCHAR, "rv"."FIELD3" VARCHAR, 
> "rv"."FIELD4" VARCHAR, "rv"."FIELD5" VARCHAR, "rv"."FIELD6" VARCHAR, 
> "rv"."FIELD7" VARCHAR, "rv"."FIELD8" VARCHAR, "rv"."FIELD9" VARCHAR, 
> "rv"."FIELD10" VARCHAR, "rv"."FIELD11" VARCHAR, "rv"."FIELD12" VARCHAR, 
> "rv"."FIELD13" VARCHAR, "rv"."FIELD14" VARCHAR, "rv"."FIELD15" VARCHAR, 
> "fd"."FIELD1" TINYINT, "fd"."FIELD2" TINYINT, "fd"."FIELD3" TINYINT, 
> "fd"."FIELD4" TINYINT, "fd"."FIELD5" TINYINT, "fd"."FIELD6" TINYINT, 
> "fd"."FIELD7" TINYINT, "fd"."FIELD8" TINYINT, "fd"."FIELD9" TINYINT, 
> "fd"."FIELD10" TINYINT, "fd"."FIELD11" TINYINT, "fd"."FIELD12" TINYINT, 
> "fd"."FIELD13" TINYINT, "fd"."FIELD13" TINYINT, "fd"."FIELD15" TINYINT) 
> VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)





[jira] [Commented] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196560#comment-15196560
 ] 

James Taylor commented on PHOENIX-2765:
---

That exception is correct. You can't use setBytes to bind a data type that is a 
TINYINT. Please let us know how it works when you use the setInt call instead, 
[~c...@salesforce.com].
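As a runnable toy model of the bind-time rule (illustrative only; Phoenix's actual check raises TypeMismatchException from LiteralExpression.newConstant): an in-range int binds to a TINYINT parameter, while a byte[] is treated as VARBINARY and rejected.

```java
// Toy model: a TINYINT parameter accepts an in-range int (setInt); a
// byte[] (VARBINARY in Phoenix terms) is a type mismatch (setBytes).
public class BindCheck {
    public static byte bindTinyint(Object value) {
        if (value instanceof Integer) {
            int v = (Integer) value;
            if (v < Byte.MIN_VALUE || v > Byte.MAX_VALUE) {
                throw new IllegalArgumentException("Out of TINYINT range: " + v);
            }
            return (byte) v;
        }
        throw new IllegalArgumentException(
                "Type mismatch: TINYINT and " + value.getClass().getSimpleName());
    }
}
```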

> PhoenixConnection commit throws ArrayIndexOutOfBoundsException after 
> upserting multiple rows with dynamic columns
> -
>
> Key: PHOENIX-2765
> URL: https://issues.apache.org/jira/browse/PHOENIX-2765
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Clifford Ker
>Assignee: Thomas D'Silva
>
> 1. Create a Phoenix table with dynamic columns.
> 2. Obtain a PhoenixConnection and PreparedStatement.
> 3. Use the PreparedStatement to upsert multiple records using dynamic columns.
> 4. Commit the transaction.
> 5. Phoenix will throw ArrayIndexOutOfBoundsException.
> java.lang.ArrayIndexOutOfBoundsException: 14
> at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:384)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:419)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:476)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:473)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:473)
> at 
> phoenix.connection.ProtectedPhoenixConnection.commit(ProtectedPhoenixConnection.java:41
> Pseudo Code and DML
> CREATE TABLE IF NOT EXISTS FOO_RECORD
> (
> KEY1 char(15) not null,
> KEY2 char(15) not null,
> KEY3 SMALLINT not null,
> STATUS TINYINT not null,  
> KEY4 varchar(30), 
> KEY5 char(15) not null,   
> FOO_RECORD_ID char(15),   
> SYSMODSTAMP TIMESTAMP,
> "av"."_" char(1), -- Column family av
> "rv"."_" char(1), -- Column family rv
> "fd"."_" char(1), -- Column family dv
> CONSTRAINT PK PRIMARY KEY (
>   KEY1,
>   KEY2,
>   KEY3,
>   STATUS,
>   KEY4,
> KEY5
> )
> ) VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1
> try {
> PreparedStatement preparedStmt = 
> phoenixConnection.prepareStatement(upsertStatement);
> for (DARecord record : daRecords) {
> prepareToUpsertFooRecord(preparedStmt, fieldInfo, record);
> }
> preparedStmt.close();
> phoenixConnection.commit();
> } finally {
> phoenixConnection.close();
> }
> UPSERT INTO FOO_RECORD(
> KEY1, KEY2, KEY3, STATUS, KEY4, KEY5, SYSMODSTAMP, "fa"."FIELD1" VARCHAR, 
> "av"."FIELD2" VARCHAR, "av"."FIELD3" VARCHAR, "av"."FIELD4" VARCHAR, 
> "rv"."FIELD1" VARCHAR, "rv"."FIELD2" VARCHAR, "rv"."FIELD3" VARCHAR, 
> "rv"."FIELD4" VARCHAR, "rv"."FIELD5" VARCHAR, "rv"."FIELD6" VARCHAR, 
> "rv"."FIELD7" VARCHAR, "rv"."FIELD8" VARCHAR, "rv"."FIELD9" VARCHAR, 
> "rv"."FIELD10" VARCHAR, "rv"."FIELD11" VARCHAR, "rv"."FIELD12" VARCHAR, 
> "rv"."FIELD13" VARCHAR, "rv"."FIELD14" VARCHAR, "rv"."FIELD15" VARCHAR, 
> "fd"."FIELD1" TINYINT, "fd"."FIELD2" TINYINT, "fd"."FIELD3" TINYINT, 
> "fd"."FIELD4" TINYINT, "fd"."FIELD5" TINYINT, "fd"."FIELD6" TINYINT, 
> "fd"."FIELD7" TINYINT, "fd"."FIELD8" TINYINT, "fd"."FIELD9" TINYINT, 
> "fd"."FIELD10" TINYINT, "fd"."FIELD11" TINYINT, "fd"."FIELD12" TINYINT, 
> "fd"."FIELD13" TINYINT, "fd"."FIELD13" TINYINT, "fd"."FIELD15" TINYINT) 
> VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)





[jira] [Commented] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196558#comment-15196558
 ] 

Thomas D'Silva commented on PHOENIX-2765:
-

I get an exception when I try to use setBytes. Can you try using setInt?

org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
mismatch. SMALLINT and VARBINARY for [B@5633ed82
at 
org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:53)
at 
org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:192)
at 
org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:172)
at 
org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:159)
at 
org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:979)
at 
org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:1)
at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:832)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:1)
at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:331)
at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:325)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:199)
at 
org.apache.phoenix.end2end.DynamicColumnIT.testManyDynamicCols(DynamicColumnIT.java:302)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)

> PhoenixConnection commit throws ArrayIndexOutOfBoundsException after 
> upserting multiple rows with dynamic columns
> -
>
> Key: PHOENIX-2765
> URL: https://issues.apache.org/jira/browse/PHOENIX-2765
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Clifford Ker
>Assignee: Thomas D'Silva
>
> 1. Create a Phoenix table with dynamic columns.
> 2. Obtain a PhoenixConnection and PreparedStatement.
> 3. Use the PreparedStatement to upsert multiple records using dynamic columns.
> 4. Commit the transaction.
> 5. Phoenix will throw ArrayIndexOutOfBoundsException.
> java.lang.ArrayIndexOutOfBoundsException: 14
> at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:384)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:419)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:476)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:473)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:473)
> at 
> phoenix.connection.ProtectedPhoenixConnection.commit(ProtectedPhoenixConnection.java:41
> Pseudo Code and DML
> CREATE TABLE IF NOT EXISTS FOO_RECORD
> (
> KEY1 char(15) not null,
> KEY2 char(15) not null,
> KEY3 SMALLINT not null,
> STATUS TINYINT not null,  
> KEY4 varchar(30), 
> KEY5 char(15) not null,   
> FOO_RECORD_ID char(15),   
> SYSMODSTAMP TIMESTAMP,
> "av"."_" char(1), -- Column family av
> "rv"."_" char(1), -- Column family rv
> "fd"."_" char(1), -- Column family fd
> CONSTRAINT PK PRIMARY KEY (
>   KEY1,
>   KEY2,
>   KEY3,
>   STATUS,
>   KEY4,
> KEY5
> )
> ) VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1
> try {
> PreparedStatement preparedStmt = 
> phoenixConnection.prepareStatement(upsertStatement);
> for (DARecord record : daRecords) {
> prepareToUpserFooRecord(preparedStmt, fieldInfo, record);
> }
> preparedStmt.close();
> phoenixConnection.commit();
> } finally {
> phoenixConnection.close();
> }
> UPSERT INTO FOO_RECORD(
> KEY1, KEY2, KEY3, STATUS, KEY4, KEY5, SYSMODSTAMP, "fa"."FIELD1" 

[jira] [Resolved] (PHOENIX-2768) Add test for case sensitive table with index hint

2016-03-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2768.
---
Resolution: Fixed

> Add test for case sensitive table with index hint
> -
>
> Key: PHOENIX-2768
> URL: https://issues.apache.org/jira/browse/PHOENIX-2768
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2768.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2772) Upgrade pom to sqlline 1.1.9

2016-03-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2772.
---
   Resolution: Fixed
 Assignee: James Taylor
Fix Version/s: 4.8.0

> Upgrade pom to sqlline 1.1.9
> 
>
> Key: PHOENIX-2772
> URL: https://issues.apache.org/jira/browse/PHOENIX-2772
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
>
> Upgrade to sqlline 1.1.9, which is now hosted in Maven Central.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2773) Remove unnecessary repos in poms

2016-03-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2773.
---
Resolution: Fixed
  Assignee: James Taylor

> Remove unnecessary repos in poms
> 
>
> Key: PHOENIX-2773
> URL: https://issues.apache.org/jira/browse/PHOENIX-2773
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2773.patch
>
>
> It seems we have a lot of cruft in our pom in terms of repos. Now that 
> sqlline is hosted in maven central (see PHOENIX-2772), it seems like we 
> should only need the one repo 
> {{https://repository.apache.org/content/repositories/releases}}.
> {code}
>   <repositories>
>     <repository>
>       <id>apache release</id>
>       <url>https://repository.apache.org/content/repositories/releases/</url>
>     </repository>
>     <repository>
>       <id>conjars.org</id>
>       <url>http://conjars.org/repo</url>
>     </repository>
>     <repository>
>       <id>apache snapshot</id>
>       <url>https://repository.apache.org/content/repositories/snapshots/</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>     <repository>
>       <id>sonatype-nexus-snapshots</id>
>       <name>Sonatype Nexus Snapshots</name>
>       <url>https://oss.sonatype.org/content/repositories/snapshots</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>   </repositories>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196493#comment-15196493
 ] 

Samarth Jain commented on PHOENIX-2765:
---

[~c...@salesforce.com] - just to be sure, are you seeing this error with the 4.6 
or the 4.7 version of Phoenix? Looking at the "affects version" on this JIRA, I 
am assuming you are running into this problem on 4.6. Can you try with 4.7? 

> PhoenixConnection commit throws ArrayIndexOutOfBoundsException after 
> upserting multiple rows with dynamic columns
> -
>
> Key: PHOENIX-2765
> URL: https://issues.apache.org/jira/browse/PHOENIX-2765
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Clifford Ker
>Assignee: Thomas D'Silva
>
> 1. Create a Phoenix table with dynamic columns.
> 2. Obtain a PhoenixConnection and PreparedStatement.
> 3. Use the PreparedStatement to upsert multiple records using dynamic columns.
> 4. Commit the transaction.
> 5. Phoenix will throw ArrayIndexOutOfBoundsException.
> java.lang.ArrayIndexOutOfBoundsException: 14
> at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:384)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:419)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:476)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:473)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:473)
> at 
> phoenix.connection.ProtectedPhoenixConnection.commit(ProtectedPhoenixConnection.java:41
> Pseudo Code and DML
> CREATE TABLE IF NOT EXISTS FOO_RECORD
> (
> KEY1 char(15) not null,
> KEY2 char(15) not null,
> KEY3 SMALLINT not null,
> STATUS TINYINT not null,  
> KEY4 varchar(30), 
> KEY5 char(15) not null,   
> FOO_RECORD_ID char(15),   
> SYSMODSTAMP TIMESTAMP,
> "av"."_" char(1), -- Column family av
> "rv"."_" char(1), -- Column family rv
> "fd"."_" char(1), -- Column family fd
> CONSTRAINT PK PRIMARY KEY (
>   KEY1,
>   KEY2,
>   KEY3,
>   STATUS,
>   KEY4,
> KEY5
> )
> ) VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1
> try {
> PreparedStatement preparedStmt = 
> phoenixConnection.prepareStatement(upsertStatement);
> for (DARecord record : daRecords) {
> prepareToUpserFooRecord(preparedStmt, fieldInfo, record);
> }
> preparedStmt.close();
> phoenixConnection.commit();
> } finally {
> phoenixConnection.close();
> }
> UPSERT INTO FOO_RECORD(
> KEY1, KEY2, KEY3, STATUS, KEY4, KEY5, SYSMODSTAMP, "fa"."FIELD1" VARCHAR, 
> "av"."FIELD2" VARCHAR, "av"."FIELD3" VARCHAR, "av"."FIELD4" VARCHAR, 
> "rv"."FIELD1" VARCHAR, "rv"."FIELD2" VARCHAR, "rv"."FIELD3" VARCHAR, 
> "rv"."FIELD4" VARCHAR, "rv"."FIELD5" VARCHAR, "rv"."FIELD6" VARCHAR, 
> "rv"."FIELD7" VARCHAR, "rv"."FIELD8" VARCHAR, "rv"."FIELD9" VARCHAR, 
> "rv"."FIELD10" VARCHAR, "rv"."FIELD11" VARCHAR, "rv"."FIELD12" VARCHAR, 
> "rv"."FIELD13" VARCHAR, "rv"."FIELD14" VARCHAR, "rv"."FIELD15" VARCHAR, 
> "fd"."FIELD1" TINYINT, "fd"."FIELD2" TINYINT, "fd"."FIELD3" TINYINT, 
> "fd"."FIELD4" TINYINT, "fd"."FIELD5" TINYINT, "fd"."FIELD6" TINYINT, 
> "fd"."FIELD7" TINYINT, "fd"."FIELD8" TINYINT, "fd"."FIELD9" TINYINT, 
> "fd"."FIELD10" TINYINT, "fd"."FIELD11" TINYINT, "fd"."FIELD12" TINYINT, 
> "fd"."FIELD13" TINYINT, "fd"."FIELD13" TINYINT, "fd"."FIELD15" TINYINT) 
> VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread Clifford Ker (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196472#comment-15196472
 ] 

Clifford Ker edited comment on PHOENIX-2765 at 3/15/16 11:43 PM:
-

[~tdsilva] It's very similar to our code. The only difference is we call 
setByte() where you call setInt(), and executeBatch() vs executeUpdate().

I modified my code to use executeUpdate() and still see the same exception.


was (Author: c...@salesforce.com):
[~tdsilva] It's very similar to our code. The only difference is we call 
setByte() where you call setInt(), and executeBatch() vs executeUpdate()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread Clifford Ker (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196472#comment-15196472
 ] 

Clifford Ker commented on PHOENIX-2765:
---

[~tdsilva] It's very similar to our code. The only difference is we call 
setByte() where you call setInt(), and executeBatch() vs executeUpdate()




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2765) PhoenixConnection commit throws ArrayIndexOutOfBoundsException after upserting multiple rows with dynamic columns

2016-03-15 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196420#comment-15196420
 ] 

Thomas D'Silva commented on PHOENIX-2765:
-

[~c...@salesforce.com]
I am unable to repro the exception using the steps you outlined. The test I ran 
is below; can you let me know if you are doing something else?

{code}
 @Test
public void testManyDynamicCols() throws Exception {
Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
conn.setAutoCommit(false);
String ddl = "CREATE TABLE FOO_RECORD ("
+ "KEY1 char(15) not null,"
+ "KEY2 char(15) not null,"
+ "KEY3 SMALLINT not null,"
+ "STATUS TINYINT not null, "
+ "KEY4 varchar(30),"
+ "KEY5 char(15) not null,  "
+ "FOO_RECORD_ID char(15),  "
+ "SYSMODSTAMP TIMESTAMP,   "
+ "\"fa\".\"_\" char(1),"
+ "\"av\".\"_\" char(1),"
+ "\"rv\".\"_\" char(1),"
+ "\"fd\".\"_\" char(1), "
                + "CONSTRAINT PK PRIMARY KEY (KEY1, KEY2, KEY3, STATUS, KEY4, KEY5)"
                + ") VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1";
        Statement stmt = conn.createStatement();
        stmt.execute(ddl);

        String upsert = "UPSERT INTO FOO_RECORD(KEY1, KEY2, KEY3, STATUS, KEY4, KEY5, SYSMODSTAMP, "
                + "\"fa\".\"FIELD1\" VARCHAR, \"av\".\"FIELD2\" VARCHAR, \"av\".\"FIELD3\" VARCHAR, \"av\".\"FIELD4\" VARCHAR, \"rv\".\"FIELD1\" VARCHAR, "
                + "\"rv\".\"FIELD2\" VARCHAR, \"rv\".\"FIELD3\" VARCHAR, \"rv\".\"FIELD4\" VARCHAR, \"rv\".\"FIELD5\" VARCHAR, \"rv\".\"FIELD6\" VARCHAR, "
                + "\"rv\".\"FIELD7\" VARCHAR, \"rv\".\"FIELD8\" VARCHAR, \"rv\".\"FIELD9\" VARCHAR, \"rv\".\"FIELD10\" VARCHAR, \"rv\".\"FIELD11\" VARCHAR, "
                + "\"rv\".\"FIELD12\" VARCHAR, \"rv\".\"FIELD13\" VARCHAR, \"rv\".\"FIELD14\" VARCHAR, \"rv\".\"FIELD15\" VARCHAR, \"fd\".\"FIELD1\" TINYINT, "
                + "\"fd\".\"FIELD2\" TINYINT, \"fd\".\"FIELD3\" TINYINT, \"fd\".\"FIELD4\" TINYINT, \"fd\".\"FIELD5\" TINYINT, \"fd\".\"FIELD6\" TINYINT, "
                + "\"fd\".\"FIELD7\" TINYINT, \"fd\".\"FIELD8\" TINYINT, \"fd\".\"FIELD9\" TINYINT, \"fd\".\"FIELD10\" TINYINT, \"fd\".\"FIELD11\" TINYINT, "
                + "\"fd\".\"FIELD12\" TINYINT, \"fd\".\"FIELD13\" TINYINT, \"fd\".\"FIELD14\" TINYINT, \"fd\".\"FIELD15\" TINYINT) "
                + "VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";

PreparedStatement pstmt = conn.prepareStatement(upsert);
for (int i=0; i<1000; ++i) {
pstmt.setString(1, "000");
pstmt.setString(2, "000");
pstmt.setInt(3, i);
pstmt.setInt(4, 1);
pstmt.setString(5, "varchar");
pstmt.setString(6, "000");
Date date = DateUtil.parseDate("2015-01-01 00:00:00");
pstmt.setDate(7, date);
pstmt.setString(8, "varchar");
pstmt.setString(9, "varchar");
pstmt.setString(10, "varchar");
pstmt.setString(11, "varchar");
pstmt.setString(12, "varchar");
pstmt.setString(13, "varchar");
pstmt.setString(14, "varchar");
pstmt.setString(15, "varchar");
pstmt.setString(16, "varchar");
pstmt.setString(17, "varchar");
pstmt.setString(18, "varchar");
pstmt.setString(19, "varchar");
pstmt.setString(20, "varchar");
pstmt.setString(21, "varchar");
pstmt.setString(22, "varchar");
pstmt.setString(23, "varchar");
pstmt.setString(24, "varchar");
pstmt.setString(25, "varchar");
pstmt.setString(26, "varchar");
pstmt.setInt(27, 1);
pstmt.setInt(28, 1);
pstmt.setInt(29, 1);
pstmt.setInt(30, 1);
  

[jira] [Updated] (PHOENIX-1523) Make it easy to provide a tab literal as separator for CSV imports

2016-03-15 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-1523:
-
Attachment: PHOENIX-1523-3.patch

Added the same check to PhoenixRuntime

> Make it easy to provide a tab literal as separator for CSV imports
> --
>
> Key: PHOENIX-1523
> URL: https://issues.apache.org/jira/browse/PHOENIX-1523
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Gabriel Reid
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-1523-2.patch, PHOENIX-1523-3.patch, 
> PHOENIX-1523.patch
>
>
> When importing CSV data that is tab-separated, it's quite inconvenient to 
> supply a tab literal on the command line to override the default separator 
> (you need to press Ctrl-V, followed by Tab, which isn't well known).
> It would be better if it were possible to supply something that looks like a 
> tab literal, i.e. '\t', or something else like '', to reduce the 
> complexity of supplying a tab separator.
> Regardless of the approach, the way to supply a tab character should also be 
> added to the documentation (or to the command line docs provided by the tool).
> This should be done for both the map reduce bulk loader, and psql.
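The idea can be sketched in a few lines (a sketch only, not the actual patch; the class and method names below are invented): map the two-character escape "\t" typed on the command line to a real tab character, so users no longer need the Ctrl-V, Tab trick.

```java
public final class SeparatorOption {
    private SeparatorOption() {}

    /**
     * Maps the command-line separator argument to the actual delimiter char.
     * Accepts a single literal character, or the two-character escape "\t",
     * so users need not type a raw tab (Ctrl-V, then Tab) in the shell.
     */
    public static char resolve(String arg) {
        if ("\\t".equals(arg)) {   // the user typed backslash followed by 't'
            return '\t';
        }
        if (arg.length() != 1) {
            throw new IllegalArgumentException(
                    "Separator must be a single character or \\t, got: " + arg);
        }
        return arg.charAt(0);
    }

    public static void main(String[] args) {
        // Prints the code point of the resolved separator for "\t"
        System.out.println((int) resolve("\\t"));
    }
}
```

The same mapping would apply in both the MapReduce bulk loader and psql, wherever the separator option is parsed.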



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2734) Literal expressions for UNSIGNED_DATE/UNSIGNED_TIME/etc

2016-03-15 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196115#comment-15196115
 ] 

Sergey Soldatov commented on PHOENIX-2734:
--

Hmm, there is still a problem. If the IllegalDataException was thrown, it can be 
caused by an invalid format as well (e.g. a malformed date, or something like 
that). But with this approach, the user will get a misleading exception saying 
that there is a type mismatch.
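The distinction being argued for can be shown with a toy parser (this is an illustration only, not Phoenix's actual code; the class name and messages are invented): a literal that fails to parse should surface a format error, while a well-formed value outside the unsigned range is the genuine type problem.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public final class UnsignedDateLiteral {
    private UnsignedDateLiteral() {}

    /**
     * Parses a date literal for an UNSIGNED_DATE-like type: the value must
     * both parse cleanly and be non-negative as epoch milliseconds.
     */
    public static long parse(String literal) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        fmt.setLenient(false);                          // reject garbage input
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));   // deterministic epoch math
        Date d;
        try {
            d = fmt.parse(literal);
        } catch (ParseException e) {
            // Malformed input: report a format error, not a type mismatch.
            throw new IllegalArgumentException("Illegal date format: " + literal, e);
        }
        long millis = d.getTime();
        if (millis < 0) {
            // Well-formed but negative: this one really is a range/type problem.
            throw new IllegalArgumentException(
                    "Value out of range for unsigned type: " + literal);
        }
        return millis;
    }
}
```

Keeping the two failure paths separate is what prevents the misleading "type mismatch" message for inputs that were merely mistyped.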

> Literal expressions for UNSIGNED_DATE/UNSIGNED_TIME/etc
> ---
>
> Key: PHOENIX-2734
> URL: https://issues.apache.org/jira/browse/PHOENIX-2734
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2734.patch
>
>
> At the moment LiteralExpression accepts conversion from VARCHAR to PDate, 
> PTimestamp and PTime, but not for the PUnsigned... types. Is there any reason 
> for that, or can we just add an additional check for the Unsigned types as well? 
> The problem is that there is no way to load values into the Unsigned date/time 
> types using bulk load (for regular upsert there is a workaround of using the 
> TO_TIME/etc functions). [~pctony] what do you think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2775) Phoenix tracing webapp runnable jar missing in the binary distribution

2016-03-15 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-2775:
--

 Summary: Phoenix tracing webapp runnable jar missing in the binary 
distribution
 Key: PHOENIX-2775
 URL: https://issues.apache.org/jira/browse/PHOENIX-2775
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.6.0
Reporter: Biju Nair
Priority: Minor


The Phoenix tracing webapp runnable jar is not included in the binary 
distribution, so the web app fails to start. The failure is due to the pattern 
used in 
[phoenix_utils.py|https://github.com/apache/phoenix/blob/4.x-HBase-0.98/bin/phoenix_utils.py#L74]
 not finding the jar file.
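The failure mode can be reproduced in miniature with a glob check (a sketch only; the jar names and pattern below are illustrative assumptions, and the real pattern lives in phoenix_utils.py):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public final class TracingJarLocator {
    private TracingJarLocator() {}

    /**
     * Returns true if the file name matches the given glob pattern. If the
     * pattern never matches the jar actually shipped (or the jar is missing
     * from the distribution), the lookup silently finds nothing and the
     * web-app start script fails later.
     */
    public static boolean matches(String pattern, String fileName) {
        PathMatcher m = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
        Path name = Paths.get(fileName).getFileName();
        return m.matches(name);
    }

    public static void main(String[] args) {
        // A pattern anchored on "-runnable.jar" matches a webapp runnable jar
        // but not a plain module jar.
        System.out.println(matches("phoenix-*-runnable.jar",
                "phoenix-tracing-webapp-4.6.0-HBase-1.1-runnable.jar"));
    }
}
```

So the fix has two halves: ship the runnable jar in the binary distribution, and make sure the search pattern actually matches its name.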



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2756) FilteredKeyValueScanner should not implement KeyValueScanner

2016-03-15 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2756.
-
Resolution: Fixed

> FilteredKeyValueScanner should not implement KeyValueScanner
> 
>
> Key: PHOENIX-2756
> URL: https://issues.apache.org/jira/browse/PHOENIX-2756
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2756-4.x-HBase-0.98.patch, 
> PHOENIX-2756-addendum.patch, PHOENIX-2756.patch
>
>
> In HBASE-14355, the API for KeyValueScanner changed. More specifically, the 
> method shouldUseScanner() had a signature change. Phoenix has a class, 
> FilteredKeyValueScanner, which implements KeyValueScanner. For HBase 0.98, I 
> will put up a patch that doesn't change the API signature (a little hacky), 
> but this signature change is already in HBase 1.2+. Alternatively, we can 
> raise the scope of KeyValueScanner to @LimitedPrivate in HBase land; right 
> now it's @Private, so people don't assume that external services are 
> depending on the API. I'll look into how we can make things work in Phoenix land.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2771) Improve the performance of IndexTool by building the index mutations at reducer side

2016-03-15 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196091#comment-15196091
 ] 

Sergey Soldatov commented on PHOENIX-2771:
--

Sure

> Improve the performance of IndexTool by building the index mutations at 
> reducer side
> 
>
> Key: PHOENIX-2771
> URL: https://issues.apache.org/jira/browse/PHOENIX-2771
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
>
> Instead of writing the full index mutations to the map output at the mapper, 
> we can just write a combined value of the indexed column values and prepare 
> the proper key values at the reducer, same as PHOENIX-1973.
> [~sergey.soldatov] can you take this up?
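The proposed split of work can be sketched without Hadoop types (all names below are invented for illustration; the real job would use the MapReduce framework and Phoenix's index maintainers): the mapper emits only the data row key plus a compact concatenation of the indexed column values, and the reducer reconstructs the index row key from that payload.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public final class IndexBuildSketch {
    private IndexBuildSketch() {}

    /**
     * Mapper side: emit the data row key with just the indexed column values
     * joined by a separator, instead of full index mutations. This keeps the
     * shuffled map output small.
     */
    public static Map.Entry<String, String> map(String dataRowKey, String[] indexedValues) {
        return new SimpleEntry<>(dataRowKey, String.join("\u0000", indexedValues));
    }

    /**
     * Reducer side: build the index row key (indexed values followed by the
     * data row key), the ordering an index table uses to support lookups.
     */
    public static String reduce(Map.Entry<String, String> mapOutput) {
        return mapOutput.getValue() + "\u0000" + mapOutput.getKey();
    }
}
```

The payoff is that the expensive key-value construction happens once per row at the reducer, rather than inflating the intermediate map output.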



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2719) RS crashed and HBase is not recovering during log split

2016-03-15 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195991#comment-15195991
 ] 

Jonathan Hsieh commented on PHOENIX-2719:
-

Since we started shipping Phoenix releases in Cloudera Labs, we have been 
trying to release a new version around the time we release a CDH HBase that is 
incompatible with the current Labs parcel. That is the plan going forward as well.

The latest version of the code we shipped is available on GitHub here: 
https://github.com/cloudera-labs/phoenix/commits/phoenix1-4.5.2_1.2.0



> RS crashed and HBase is not recovering during log split
> ---
>
> Key: PHOENIX-2719
> URL: https://issues.apache.org/jira/browse/PHOENIX-2719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
> Environment: We are using phoenix 4.5.2 in CDH 5.5 as a parcel.
>Reporter: Gokhan Cagrici
>Priority: Blocker
>
> Hi,
> Several RSs crashed, and now HBase is trying to recover, but the log-splitting 
> phase is getting an exception:
> Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.parseCell(IndexedWALEditCodec.java:98)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.parseCell(IndexedWALEditCodec.java:85)
>   at 
> org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:67)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.WALEdit.readFromCells(WALEdit.java:244)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:343)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:104)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:87)
>   at 
> org.apache.hadoop.hbase.wal.WALSplitter.getNextLogLine(WALSplitter.java:799)
>   at 
> org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:332)
>   at 
> org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:242)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> I believe this bug is related to PHOENIX-2629; however, we cannot build the jar 
> file from GitHub since this is a CDH parcel, and it definitely needs some 
> intervention.
> Our system is completely down at the moment.
> What needs to be done?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2773) Remove unnecessary repos in poms

2016-03-15 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195948#comment-15195948
 ] 

Thomas D'Silva commented on PHOENIX-2773:
-

+1
If we bump the sqlline version to 1.1.9 (PHOENIX-2772), we don't need the 
conjars.org repo. I think we don't need the other two repos either; I can 
build the Phoenix jars without them.

> Remove unnecessary repos in poms
> 
>
> Key: PHOENIX-2773
> URL: https://issues.apache.org/jira/browse/PHOENIX-2773
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2773.patch
>
>
> It seems we have a lot of cruft in our pom in terms of repos. Now that 
> sqlline is hosted in maven central (see PHOENIX-2772), it seems like we 
> should only need the one repo 
> {{https://repository.apache.org/content/repositories/releases}}.
> {code}
>   <repositories>
>     <repository>
>       <id>apache release</id>
>       <url>https://repository.apache.org/content/repositories/releases/</url>
>     </repository>
>     <repository>
>       <id>conjars.org</id>
>       <url>http://conjars.org/repo</url>
>     </repository>
>     <repository>
>       <id>apache snapshot</id>
>       <url>https://repository.apache.org/content/repositories/snapshots/</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>     <repository>
>       <id>sonatype-nexus-snapshots</id>
>       <name>Sonatype Nexus Snapshots</name>
>       <url>https://oss.sonatype.org/content/repositories/snapshots</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>   </repositories>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2772) Upgrade pom to sqlline 1.1.9

2016-03-15 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2772:

Description: Upgrade sqlline 1.1.9 which is hosted in maven central now.  
(was: Upgrade sqlline 1.1.19 which is hosted in maven central now.)

> Upgrade pom to sqlline 1.1.9
> 
>
> Key: PHOENIX-2772
> URL: https://issues.apache.org/jira/browse/PHOENIX-2772
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Upgrade sqlline 1.1.9 which is hosted in maven central now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2772) Upgrade pom to sqlline 1.1.9

2016-03-15 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2772:

Summary: Upgrade pom to sqlline 1.1.9  (was: Upgrade pom to sqlline 1.1.19)

> Upgrade pom to sqlline 1.1.9
> 
>
> Key: PHOENIX-2772
> URL: https://issues.apache.org/jira/browse/PHOENIX-2772
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Upgrade sqlline 1.1.19 which is hosted in maven central now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2774) MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner

2016-03-15 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195872#comment-15195872
 ] 

churro morales commented on PHOENIX-2774:
-

[~jamestaylor], [~tdsilva] Here is a patch getting rid of the KeyValueScanner 
dependencies in MemStoreScanner and KeyValueStore.  Would like a review. 

> MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner
> 
>
> Key: PHOENIX-2774
> URL: https://issues.apache.org/jira/browse/PHOENIX-2774
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: PHOENIX-2774.patch
>
>
> Relates to PHOENIX-2756, trying to remove all dependencies on @Private 
> interfaces in HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2774) MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner

2016-03-15 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated PHOENIX-2774:

Description: Relates to PHOENIX-2756, trying to remove all dependencies on 
@Private interfaces in HBase.  (was: Relates to HBASE-14355, trying to remove 
all dependencies on @Private interfaces in HBase.)

> MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner
> 
>
> Key: PHOENIX-2774
> URL: https://issues.apache.org/jira/browse/PHOENIX-2774
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: PHOENIX-2774.patch
>
>
> Relates to PHOENIX-2756, trying to remove all dependencies on @Private 
> interfaces in HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2774) MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner

2016-03-15 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated PHOENIX-2774:

Attachment: PHOENIX-2774.patch

Here is an attempt at getting rid of NonLazyKeyValueScanner dependencies. 

> MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner
> 
>
> Key: PHOENIX-2774
> URL: https://issues.apache.org/jira/browse/PHOENIX-2774
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: PHOENIX-2774.patch
>
>
> Relates to HBASE-14355, trying to remove all dependencies on @Private 
> interfaces in HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2773) Remove unnecessary repos in poms

2016-03-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2773:
--
Attachment: PHOENIX-2773.patch

Please review, [~tdsilva]. We don't need any of those extra repos, correct?

> Remove unnecessary repos in poms
> 
>
> Key: PHOENIX-2773
> URL: https://issues.apache.org/jira/browse/PHOENIX-2773
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2773.patch
>
>
> It seems we have a lot of cruft in our pom in terms of repos. Now that 
> sqlline is hosted in maven central (see PHOENIX-2772), it seems like we 
> should only need the one repo 
> {{https://repository.apache.org/content/repositories/releases}}.
> {code}
>   <repositories>
>     <repository>
>       <id>apache release</id>
>       <url>https://repository.apache.org/content/repositories/releases/</url>
>     </repository>
>     <repository>
>       <id>conjars.org</id>
>       <url>http://conjars.org/repo</url>
>     </repository>
>     <repository>
>       <id>apache snapshot</id>
>       <url>https://repository.apache.org/content/repositories/snapshots/</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>     <repository>
>       <id>sonatype-nexus-snapshots</id>
>       <name>Sonatype Nexus Snapshots</name>
>       <url>https://oss.sonatype.org/content/repositories/snapshots</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
>     </repository>
>   </repositories>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2774) MemStoreScanner and KeyValueStore should not be aware of KeyValueScanner

2016-03-15 Thread churro morales (JIRA)
churro morales created PHOENIX-2774:
---

 Summary: MemStoreScanner and KeyValueStore should not be aware of 
KeyValueScanner
 Key: PHOENIX-2774
 URL: https://issues.apache.org/jira/browse/PHOENIX-2774
 Project: Phoenix
  Issue Type: Task
Affects Versions: 4.8.0
Reporter: churro morales
Assignee: churro morales


Relates to HBASE-14355, trying to remove all dependencies on @Private 
interfaces in HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2760) Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.1

2016-03-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-2760:

Attachment: PHOENIX-2760.001.patch

Here's a patch (essentially what James did), but includes the modification to 
the phoenix-server assembly.

Also, relies on Avatica 1.7.1 (which should come out this week). We should not 
use Avatica 1.7.0.

> Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.1
> -
>
> Key: PHOENIX-2760
> URL: https://issues.apache.org/jira/browse/PHOENIX-2760
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2760.001.patch, PHOENIX-2760.patch
>
>
> Once Avatica 1.7.0 is released, we should update our poms to use this as a 
> dependency instead of all of Calcite (on which we don't yet depend).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2760) Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.1

2016-03-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195617#comment-15195617
 ] 

Josh Elser edited comment on PHOENIX-2760 at 3/15/16 4:46 PM:
--

Here's a patch (essentially what James did), but includes the modification to 
the phoenix-server assembly.

Also, relies on Avatica 1.7.1 (which should come out this week). We should not 
use Avatica 1.7.0. Thus, not marking this one as Patch Available until things 
are in place upstream.


was (Author: elserj):
Here's a patch (essentially what James did), but includes the modification to 
the phoenix-server assembly.

Also, relies on Avatica 1.7.1 (which should come out this week). We should not 
use Avatica 1.7.0.

> Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.1
> -
>
> Key: PHOENIX-2760
> URL: https://issues.apache.org/jira/browse/PHOENIX-2760
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2760.001.patch, PHOENIX-2760.patch
>
>
> Once Avatica 1.7.0 is released, we should update our poms to use this as a 
> dependency instead of all of Calcite (on which we don't yet depend).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2760) Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.1

2016-03-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-2760:

Summary: Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.1  
(was: Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.0)

> Upgrade phoenix-server and phoenix-server-client to Avatica 1.7.1
> -
>
> Key: PHOENIX-2760
> URL: https://issues.apache.org/jira/browse/PHOENIX-2760
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2760.patch
>
>
> Once Avatica 1.7.0 is released, we should update our poms to use this as a 
> dependency instead of all of Calcite (on which we don't yet depend).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2773) Remove unnecessary repos in poms

2016-03-15 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2773:
-

 Summary: Remove unnecessary repos in poms
 Key: PHOENIX-2773
 URL: https://issues.apache.org/jira/browse/PHOENIX-2773
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Fix For: 4.8.0


It seems we have a lot of cruft in our pom in terms of repos. Now that sqlline 
is hosted in maven central (see PHOENIX-2772), it seems like we should only 
need the one repo 
{{https://repository.apache.org/content/repositories/releases}}.
{code}
  <repositories>
    <repository>
      <id>apache release</id>
      <url>https://repository.apache.org/content/repositories/releases/</url>
    </repository>
    <repository>
      <id>conjars.org</id>
      <url>http://conjars.org/repo</url>
    </repository>
    <repository>
      <id>apache snapshot</id>
      <url>https://repository.apache.org/content/repositories/snapshots/</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>sonatype-nexus-snapshots</id>
      <name>Sonatype Nexus Snapshots</name>
      <url>https://oss.sonatype.org/content/repositories/snapshots</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
  </repositories>
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2772) Upgrade pom to sqlline 1.1.19

2016-03-15 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2772:
-

 Summary: Upgrade pom to sqlline 1.1.19
 Key: PHOENIX-2772
 URL: https://issues.apache.org/jira/browse/PHOENIX-2772
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


Upgrade sqlline 1.1.19 which is hosted in maven central now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2746) Delete on the table with immutable rows may fail with INVALID_FILTER_ON_IMMUTABLE_ROWS error code.

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195516#comment-15195516
 ] 

James Taylor commented on PHOENIX-2746:
---

Sorry, ignore the above. Your change is fine, but minor nit - I think it would 
read slightly better like this:
{code}
-    private List<QueryPlan> orderPlansBestToWorst(SelectStatement select, List<QueryPlan> plans) {
+    private List<QueryPlan> orderPlansBestToWorst(SelectStatement select, List<QueryPlan> plans, boolean stopAtBestPlan) {
         final QueryPlan dataPlan = plans.get(0);
         if (plans.size() == 1) {
             return plans;
@@ -330,9 +330,13 @@ public class QueryOptimizer {
          * keys), then favor those first.
          */
         List<QueryPlan> candidates = Lists.newArrayListWithExpectedSize(plans.size());
-        for (QueryPlan plan : plans) {
-            if (plan.getContext().getScanRanges().isPointLookup()) {
-                candidates.add(plan);
+        if (stopAtBestPlan) { // If we're stopping at the best plan, only consider point lookups if there are any
+            for (QueryPlan plan : plans) {
+                if (plan.getContext().getScanRanges().isPointLookup()) {
+                    candidates.add(plan);
+                }
+            }
+        } else {
+            candidates.addAll(plans);
         }
     }
{code}

> Delete on the table with immutable rows may fail with 
> INVALID_FILTER_ON_IMMUTABLE_ROWS error code.
> --
>
> Key: PHOENIX-2746
> URL: https://issues.apache.org/jira/browse/PHOENIX-2746
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2746.patch, PHOENIX-2746_v2.patch
>
>
> Sometimes a delete on a table with immutable rows fails with the error below, even 
> when all the indexes contain the column used in the where condition. If the 
> condition is on primary key columns, it always fails.
> {noformat}
> 0: jdbc:phoenix:localhost> delete from t2 where a='raj1';
> Error: ERROR 1027 (42Y86): All columns referenced in a WHERE clause must be 
> available in every index for a table with immutable rows. tableName=T2 
> (state=42Y86,code=1027)
> java.sql.SQLException: ERROR 1027 (42Y86): All columns referenced in a WHERE 
> clause must be available in every index for a table with immutable rows. 
> tableName=T2
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:546)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:534)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1247)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:808)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> The reason is that we collect the non-disabled indexes and add them to a list twice:
> 1) Once after resolving the data table.
> 2) Once after running the select built from the delete's where condition.
> So the index table object references will differ between the two passes if the 
> cache is updated the second time.
> {noformat}
> immutableIndex = getNonDisabledImmutableIndexes(tableRefToBe);
> {noformat}
> So when we remove a table from the immutableIndex list we should compare 
> references: PTable doesn't implement equals or hashCode, so no index is removed 
> from the list and we throw a SQLException.
> {noformat}
> while (plans.hasNext()) {
> QueryPlan plan = plans.next();
> PTable table = plan.getTableRef().getTable();
> if (table.getType() == PTableType.INDEX) { // index plans
> tableRefs[i++] = plan.getTableRef();
> immutableIndex.remove(table);
> } else { // data plan
> /*
>  * If we have immutable 
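The equals/hashCode pitfall the issue describes can be demonstrated in isolation. A minimal sketch, assuming nothing about Phoenix itself (PTableLike is a hypothetical stand-in for PTable, not the real class):

```java
import java.util.ArrayList;
import java.util.List;

public class ReferenceRemovalDemo {
    // Hypothetical stand-in for PTable: no equals()/hashCode() override,
    // so List.remove(Object) falls back to reference equality.
    static final class PTableLike {
        final String name;
        PTableLike(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        List<PTableLike> immutableIndexes = new ArrayList<>();
        PTableLike firstResolution = new PTableLike("IDX1");
        immutableIndexes.add(firstResolution);

        // Resolving the same index a second time yields a different object.
        PTableLike secondResolution = new PTableLike("IDX1");

        // remove() uses equals(); without an override this compares references,
        // so the logically identical index is NOT removed.
        System.out.println(immutableIndexes.remove(secondResolution)); // false
        System.out.println(immutableIndexes.size());                   // 1

        // Removing by the original reference works.
        System.out.println(immutableIndexes.remove(firstResolution));  // true
    }
}
```

This is why the list stays non-empty after the second resolution pass and the INVALID_FILTER_ON_IMMUTABLE_ROWS error is thrown even though the "same" index was seen.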

[jira] [Commented] (PHOENIX-2746) Delete on the table with immutable rows may fail with INVALID_FILTER_ON_IMMUTABLE_ROWS error code.

2016-03-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195466#comment-15195466
 ] 

James Taylor commented on PHOENIX-2746:
---

Thanks, [~rajeshbabu]. Thinking about this more, I think the 
orderPlansBestToWorst should always iterate through all plans and add the point 
lookup plans first, like this (so no need to pass through the stopAtBestPlan 
argument):
{code}
    private List<QueryPlan> orderPlansBestToWorst(SelectStatement select, List<QueryPlan> plans) {
        final QueryPlan dataPlan = plans.get(0);
        if (plans.size() == 1) {
            return plans;
@@ -330,9 +330,13 @@ public class QueryOptimizer {
         * keys), then favor those first.
         */
        List<QueryPlan> candidates = Lists.newArrayListWithExpectedSize(plans.size());
-        for (QueryPlan plan : plans) {
-            if (plan.getContext().getScanRanges().isPointLookup()) {
-                candidates.add(plan);
+        for (QueryPlan plan : plans) {
+            if (plan.getContext().getScanRanges().isPointLookup()) {
+                candidates.add(plan);
+            }
{code}
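The "point lookups first" ordering being discussed can be illustrated generically. A minimal sketch with a hypothetical Plan record (not Phoenix's actual QueryPlan API):

```java
import java.util.ArrayList;
import java.util.List;

public class PlanOrderingDemo {
    // Hypothetical stand-in for a query plan with a point-lookup flag.
    record Plan(String name, boolean pointLookup) {}

    // Stable ordering: point-lookup plans first, all others after,
    // each group keeping its original relative order.
    static List<Plan> orderBestToWorst(List<Plan> plans) {
        List<Plan> candidates = new ArrayList<>(plans.size());
        for (Plan p : plans) {
            if (p.pointLookup()) candidates.add(p);
        }
        for (Plan p : plans) {
            if (!p.pointLookup()) candidates.add(p);
        }
        return candidates;
    }

    public static void main(String[] args) {
        List<Plan> plans = List.of(
            new Plan("fullScan", false),
            new Plan("pkLookup", true),
            new Plan("indexScan", false));
        // pkLookup first, then the two scans in their original order
        System.out.println(orderBestToWorst(plans));
    }
}
```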

> Delete on the table with immutable rows may fail with 
> INVALID_FILTER_ON_IMMUTABLE_ROWS error code.
> --
>
> Key: PHOENIX-2746
> URL: https://issues.apache.org/jira/browse/PHOENIX-2746
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2746.patch, PHOENIX-2746_v2.patch
>
>
> Sometimes a delete on a table with immutable rows fails with the error below, even 
> when all the indexes contain the column used in the where condition. If the 
> condition is on primary key columns, it always fails.
> {noformat}
> 0: jdbc:phoenix:localhost> delete from t2 where a='raj1';
> Error: ERROR 1027 (42Y86): All columns referenced in a WHERE clause must be 
> available in every index for a table with immutable rows. tableName=T2 
> (state=42Y86,code=1027)
> java.sql.SQLException: ERROR 1027 (42Y86): All columns referenced in a WHERE 
> clause must be available in every index for a table with immutable rows. 
> tableName=T2
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:546)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:534)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1247)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:808)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> The reason is that we collect the non-disabled indexes and add them to a list twice:
> 1) Once after resolving the data table.
> 2) Once after running the select built from the delete's where condition.
> So the index table object references will differ between the two passes if the 
> cache is updated the second time.
> {noformat}
> immutableIndex = getNonDisabledImmutableIndexes(tableRefToBe);
> {noformat}
> So when we remove a table from the immutableIndex list we should compare 
> references: PTable doesn't implement equals or hashCode, so no index is removed 
> from the list and we throw a SQLException.
> {noformat}
> while (plans.hasNext()) {
> QueryPlan plan = plans.next();
> PTable table = plan.getTableRef().getTable();
> if (table.getType() == PTableType.INDEX) { // index plans
> tableRefs[i++] = plan.getTableRef();
> immutableIndex.remove(table);
> } else { // data plan
> /*
>  * If we have immutable indexes that we need to maintain, 
> don't execute the data plan
>  * as we can save a query by piggy-backing on any of the 
> other index queries, since the
> 

[jira] [Commented] (PHOENIX-1311) HBase namespaces surfaced in phoenix

2016-03-15 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195333#comment-15195333
 ] 

Ankit Singhal commented on PHOENIX-1311:


Thanks, [~samarthjain], for the review comments. I'll incorporate them and look 
forward to more comments.
bq. Is this change needed in IndexIT.java
{code}
props.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
{code}
No, it is not needed. I'll enable it for some tests to check whether 
namespaces and indexes are created properly.

bq. Make sure your code is formatted and coding guidelines followed. I see 
wrong indentation, missing spaces, etc. in several places.
Please ignore the formatting for now; I'll fix it while rebasing the patch.

> HBase namespaces surfaced in phoenix
> 
>
> Key: PHOENIX-1311
> URL: https://issues.apache.org/jira/browse/PHOENIX-1311
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: nicolas maillard
>Assignee: Ankit Singhal
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1311.docx, PHOENIX-1311_v1.patch, 
> PHOENIX-1311_wip.patch, PHOENIX-1311_wip_2.patch
>
>
> Hbase (HBASE-8015) has the concept of namespaces in the form of 
> myNamespace:MyTable it would be great if Phoenix leveraged this feature to 
> give a database like feature on top of the table.
> Maybe to stay close to Hbase it could also be a create DB:Table...
> or DB.Table which is a more standard annotation?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2722) support mysql "limit,offset" clauses

2016-03-15 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2722:
---
Attachment: PHOENIX-2722.patch

Thanks, [~jamestaylor], for the comments. 
Can you please review the attached patch?

The attached patch includes both of your inputs:
* Pushing the offset to the server for serial queries (non-order-by and non-aggregate)
* SQL syntax compatible with Calcite.



> support mysql "limit,offset" clauses 
> -
>
> Key: PHOENIX-2722
> URL: https://issues.apache.org/jira/browse/PHOENIX-2722
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
> Attachments: PHOENIX-2722.patch
>
>
> For a serial query (a query with the "serial" hint, or "limit" without "order by"), we 
> can limit each scan (using a page filter) to "limit+offset" instead of just "limit" 
> as before.
> Then, for all queries, we can forward the relevant client iterators past the 
> provided offset and return the result.
> WDYT, [~jamestaylor]
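The client-side "forward the iterator past the offset" step described above can be sketched as follows (limitOffset is a hypothetical helper for illustration, not Phoenix's actual iterator API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class OffsetDemo {
    // Skip `offset` rows from the iterator, then return up to `limit` rows.
    static <T> List<T> limitOffset(Iterator<T> rows, int offset, int limit) {
        for (int i = 0; i < offset && rows.hasNext(); i++) {
            rows.next(); // discard rows before the offset
        }
        List<T> result = new ArrayList<>();
        for (int i = 0; i < limit && rows.hasNext(); i++) {
            result.add(rows.next());
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4, 5, 6, 7);
        // LIMIT 3 OFFSET 2 over the row stream
        System.out.println(limitOffset(rows.iterator(), 2, 3)); // [3, 4, 5]
    }
}
```

On the server side the idea is complementary: for serial scans the page filter is sized to limit+offset, so each scan still returns enough rows for the client to skip the first offset of them.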



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2723) Make BulkLoad able to load several tables at once

2016-03-15 Thread YoungWoo Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195204#comment-15195204
 ] 

YoungWoo Kim commented on PHOENIX-2723:
---

+0 from me :-) because the use cases are different, but I'm on the same side as 
Gabriel. Thanks!

> Make BulkLoad able to load several tables at once
> -
>
> Key: PHOENIX-2723
> URL: https://issues.apache.org/jira/browse/PHOENIX-2723
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2723-1.patch
>
>
> It turns out that a bulk load is usually required for more than one table, and it is 
> usually done by running jobs one by one. The idea is to provide lists of 
> tables and corresponding input sources to the MR BulkLoad job. The syntax could be 
> something like:
> yarn ... CsvBulkLoadTool -t table1,table2,table3 --input input1,input2,input3
> Having a map of tableName => input during the map phase, we can determine which 
> table the current split belongs to and produce the necessary tableRowKeyPair. 
> Any thoughts, suggestions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2769) Order by desc return wrong results

2016-03-15 Thread Joseph Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195198#comment-15195198
 ] 

Joseph Sun commented on PHOENIX-2769:
-

I'm using 4.6.0-HBase-1.1 and 4.7.0-HBase-1.1; both have the same problem.


> Order by desc return wrong results
> --
>
> Key: PHOENIX-2769
> URL: https://issues.apache.org/jira/browse/PHOENIX-2769
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Joseph Sun
>  Labels: test
>
> create table access_logs (
>   event_time date not null,
>   uuid varchar(36) not null,
>   event_type varchar(32),
>   CONSTRAINT pk PRIMARY KEY (event_time,uuid)
> ) VERSIONS=1,SALT_BUCKETS=6,IMMUTABLE_ROWS=true;
> I inserted 2,000,000 records into access_logs, with event_time between 2016-01-06 
> and 2016-03-15.
> I execute this SQL:
> >select event_time from access_logs  order by event_time asc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-01-06 18:41:54.000  |
> | 2016-01-06 19:56:46.000  |
> | 2016-01-06 20:25:12.000  |
> | 2016-01-06 20:41:37.000  |
> | 2016-01-06 20:46:20.000  |
> | 2016-01-06 20:53:10.000  |
> | 2016-01-06 21:04:09.000  |
> | 2016-01-07 01:22:57.000  |
> | 2016-01-07 10:59:11.000  |
> | 2016-01-07 12:52:56.000  |
> +--+
>  
> > select event_time from access_logs order by event_time desc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-02-11 13:07:25.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:22.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:20.000  |
> | 2016-02-11 13:07:20.000  |
> > select event_time from access_logs where event_time>to_date('2016-02-11 
> > 13:07:25') order by event_time desc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> +--+
> Checking the returned results, the 'order by event_time desc' query does not 
> return correct results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2723) Make BulkLoad able to load several tables at once

2016-03-15 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195060#comment-15195060
 ] 

Gabriel Reid commented on PHOENIX-2723:
---

{quote}
Well, the logic is quite simple. If there are several input files and one table 
name, all those files will be loaded into that table. Otherwise the number of 
tables needs to equal the number of inputs.
{quote}

This sounds like the semantics of one input parameter would then be changed by the 
contents of other input parameters, which I'm personally not in favor of.

I think that sticking with a single invocation for loading a single table is 
the best way to stay in line with the [Principle of least 
astonishment|https://en.wikipedia.org/wiki/Principle_of_least_astonishment] 
(mostly because it is in line with how most other tools work), and the 
advantages of not having to write shell scripts around it and the reduced start-up 
time don't feel like a big enough win to compromise on simplicity here. That's 
just my opinion, of course.

> Make BulkLoad able to load several tables at once
> -
>
> Key: PHOENIX-2723
> URL: https://issues.apache.org/jira/browse/PHOENIX-2723
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2723-1.patch
>
>
> It turns out that a bulk load is usually required for more than one table, and it is 
> usually done by running jobs one by one. The idea is to provide lists of 
> tables and corresponding input sources to the MR BulkLoad job. The syntax could be 
> something like:
> yarn ... CsvBulkLoadTool -t table1,table2,table3 --input input1,input2,input3
> Having a map of tableName => input during the map phase, we can determine which 
> table the current split belongs to and produce the necessary tableRowKeyPair. 
> Any thoughts, suggestions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2723) Make BulkLoad able to load several tables at once

2016-03-15 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-2723:
-
Attachment: PHOENIX-2723-1.patch

> Make BulkLoad able to load several tables at once
> -
>
> Key: PHOENIX-2723
> URL: https://issues.apache.org/jira/browse/PHOENIX-2723
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2723-1.patch
>
>
> It turns out that a bulk load is usually required for more than one table, and it is 
> usually done by running jobs one by one. The idea is to provide lists of 
> tables and corresponding input sources to the MR BulkLoad job. The syntax could be 
> something like:
> yarn ... CsvBulkLoadTool -t table1,table2,table3 --input input1,input2,input3
> Having a map of tableName => input during the map phase, we can determine which 
> table the current split belongs to and produce the necessary tableRowKeyPair. 
> Any thoughts, suggestions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2723) Make BulkLoad able to load several tables at once

2016-03-15 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194981#comment-15194981
 ] 

Sergey Soldatov commented on PHOENIX-2723:
--

Well, the logic is quite simple. If there are several input files and one table 
name, all those files will be loaded into that table. Otherwise the number of 
tables needs to equal the number of inputs. The advantage is avoiding iterating 
shell scripts, reducing job creation and scheduling time, and theoretically 
spreading load better across the cluster.
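The pairing rule described here (one table fans in all inputs; otherwise the counts must match) can be sketched as follows (TableInputPairing is a hypothetical helper for illustration, not the actual CsvBulkLoadTool code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TableInputPairing {
    // Build a table -> input mapping from comma-separated -t and --input values.
    static Map<String, String> pair(String tablesArg, String inputsArg) {
        String[] tables = tablesArg.split(",");
        String[] inputs = inputsArg.split(",");
        Map<String, String> mapping = new LinkedHashMap<>();
        if (tables.length == 1) {
            // One table: all inputs are loaded into it.
            mapping.put(tables[0], String.join(",", inputs));
        } else if (tables.length == inputs.length) {
            // One input per table, paired by position.
            for (int i = 0; i < tables.length; i++) {
                mapping.put(tables[i], inputs[i]);
            }
        } else {
            throw new IllegalArgumentException(
                "Number of tables must equal number of inputs");
        }
        return mapping;
    }

    public static void main(String[] args) {
        System.out.println(pair("table1,table2", "input1,input2"));
        // {table1=input1, table2=input2}
        System.out.println(pair("table1", "input1,input2"));
        // {table1=input1,input2}
    }
}
```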

> Make BulkLoad able to load several tables at once
> -
>
> Key: PHOENIX-2723
> URL: https://issues.apache.org/jira/browse/PHOENIX-2723
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>
> It turns out that a bulk load is usually required for more than one table, and it is 
> usually done by running jobs one by one. The idea is to provide lists of 
> tables and corresponding input sources to the MR BulkLoad job. The syntax could be 
> something like:
> yarn ... CsvBulkLoadTool -t table1,table2,table3 --input input1,input2,input3
> Having a map of tableName => input during the map phase, we can determine which 
> table the current split belongs to and produce the necessary tableRowKeyPair. 
> Any thoughts, suggestions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2769) Order by desc return wrong results

2016-03-15 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194941#comment-15194941
 ] 

Ankit Singhal commented on PHOENIX-2769:


Can you please confirm which Phoenix version you are using?

> Order by desc return wrong results
> --
>
> Key: PHOENIX-2769
> URL: https://issues.apache.org/jira/browse/PHOENIX-2769
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Joseph Sun
>  Labels: test
>
> create table access_logs (
>   event_time date not null,
>   uuid varchar(36) not null,
>   event_type varchar(32),
>   CONSTRAINT pk PRIMARY KEY (event_time,uuid)
> ) VERSIONS=1,SALT_BUCKETS=6,IMMUTABLE_ROWS=true;
> I inserted 2,000,000 records into access_logs, with event_time between 2016-01-06 
> and 2016-03-15.
> I execute this SQL:
> >select event_time from access_logs  order by event_time asc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-01-06 18:41:54.000  |
> | 2016-01-06 19:56:46.000  |
> | 2016-01-06 20:25:12.000  |
> | 2016-01-06 20:41:37.000  |
> | 2016-01-06 20:46:20.000  |
> | 2016-01-06 20:53:10.000  |
> | 2016-01-06 21:04:09.000  |
> | 2016-01-07 01:22:57.000  |
> | 2016-01-07 10:59:11.000  |
> | 2016-01-07 12:52:56.000  |
> +--+
>  
> > select event_time from access_logs order by event_time desc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-02-11 13:07:25.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:22.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:20.000  |
> | 2016-02-11 13:07:20.000  |
> > select event_time from access_logs where event_time>to_date('2016-02-11 
> > 13:07:25') order by event_time desc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> +--+
> Checking the returned results, the 'order by event_time desc' query does not 
> return the correct results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2769) Order by desc return wrong results

2016-03-15 Thread Joseph Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194928#comment-15194928
 ] 

Joseph Sun commented on PHOENIX-2769:
-

I cleared the SYSTEM.STATS table and restarted HBase; it's back to normal.

> Order by desc return wrong results
> --
>
> Key: PHOENIX-2769
> URL: https://issues.apache.org/jira/browse/PHOENIX-2769
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Joseph Sun
>  Labels: test
>
> create table access_logs (
>   event_time date not null,
>   uuid varchar(36) not null,
>   event_type varchar(32),
>   CONSTRAINT pk PRIMARY KEY (event_time,uuid)
> ) VERSIONS=1,SALT_BUCKETS=6,IMMUTABLE_ROWS=true;
> I inserted 2,000,000 records into access_logs, with event_time between 2016-01-06 
> and 2016-03-15.
> I execute SQL. 
> >select event_time from access_logs  order by event_time asc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-01-06 18:41:54.000  |
> | 2016-01-06 19:56:46.000  |
> | 2016-01-06 20:25:12.000  |
> | 2016-01-06 20:41:37.000  |
> | 2016-01-06 20:46:20.000  |
> | 2016-01-06 20:53:10.000  |
> | 2016-01-06 21:04:09.000  |
> | 2016-01-07 01:22:57.000  |
> | 2016-01-07 10:59:11.000  |
> | 2016-01-07 12:52:56.000  |
> +--+
>  
> > select event_time from access_logs order by event_time desc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-02-11 13:07:25.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:24.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:23.000  |
> | 2016-02-11 13:07:22.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:21.000  |
> | 2016-02-11 13:07:20.000  |
> | 2016-02-11 13:07:20.000  |
> > select event_time from access_logs where event_time>to_date('2016-02-11 
> > 13:07:25') order by event_time desc limit 10;
> +--+
> |EVENT_TIME|
> +--+
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:17.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:16.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:15.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> | 2016-02-25 18:34:14.000  |
> +--+
> Checking the returned results, the 'order by event_time desc' query does not 
> return the correct results.





[jira] [Updated] (PHOENIX-2771) Improve the performance of IndexTool by building the index mutations at reducer side

2016-03-15 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2771:
-
Fix Version/s: 4.8.0

> Improve the performance of IndexTool by building the index mutations at 
> reducer side
> 
>
> Key: PHOENIX-2771
> URL: https://issues.apache.org/jira/browse/PHOENIX-2771
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
>
> Instead of writing the full index mutations to the map output at the mapper, we 
> can just write the combined value of the indexed column values and prepare the 
> proper key values at the reducer, the same as in PHOENIX-1973.
> [~sergey.soldatov] can you take this up?





[jira] [Created] (PHOENIX-2771) Improve the performance of IndexTool by building the index mutations at reducer side

2016-03-15 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-2771:


 Summary: Improve the performance of IndexTool by building the 
index mutations at reducer side
 Key: PHOENIX-2771
 URL: https://issues.apache.org/jira/browse/PHOENIX-2771
 Project: Phoenix
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Sergey Soldatov


Instead of writing the full index mutations to the map output at the mapper, we 
can just write the combined value of the indexed column values and prepare the 
proper key values at the reducer, the same as in PHOENIX-1973.

[~sergey.soldatov] can you take this up?
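A minimal sketch of the proposed split, with purely illustrative names and a NUL separator as an assumption (Phoenix has its own row-key encoding, and these are not Phoenix APIs): the mapper writes only the packed indexed column values, and the reducer rebuilds the index row key from those values plus the data row key.

```java
import java.util.Arrays;
import java.util.List;

public class IndexToolSketch {

    // Mapper side: pack the indexed column values into one compact value
    // instead of emitting the full index mutation.
    static String packIndexedValues(List<String> indexedValues) {
        return String.join("\0", indexedValues);
    }

    // Reducer side: append the data row key to form the index row key,
    // mirroring the PHOENIX-1973 approach.
    static String buildIndexRowKey(String packed, String dataRowKey) {
        return packed + "\0" + dataRowKey;
    }

    public static void main(String[] args) {
        String packed = packIndexedValues(Arrays.asList("colA", "colB"));
        // Replace the NUL separators so the key is printable.
        System.out.println(buildIndexRowKey(packed, "row1").replace('\0', '|'));
        // prints colA|colB|row1
    }
}
```

Shifting the mutation construction to the reducer keeps the map output small, which is where the shuffle cost is paid.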





[jira] [Commented] (PHOENIX-2062) Support COUNT DISTINCT with multiple arguments

2016-03-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194906#comment-15194906
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-2062:
-

So currently we don't do COUNT DISTINCT on a combination of columns. The logic 
to find the unique elements is done only for a single column?

> Support COUNT DISTINCT with multiple arguments
> --
>
> Key: PHOENIX-2062
> URL: https://issues.apache.org/jira/browse/PHOENIX-2062
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>  Labels: gsoc2016
>
> I have a situation where I want to count the distinct combination of a couple 
> of columns.
> When I try the following:-
> select count(distinct a.col1, b.col2)
> from table tab1 a
> inner join tab2 b on b.joincol = a.joincol
> where a.col3 = ‘some condition’
> and b.col4 = ‘some other condition';
> I get the following error:-
> Error: ERROR 605 (42P00): Syntax error. Unknown function: "DISTINCT_COUNT". 
> (state=42P00,code=605)
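Conceptually, COUNT(DISTINCT col1, col2) counts distinct (col1, col2) tuples rather than distinct values of a single column. A small stand-alone Java sketch of that semantics (not Phoenix code; the class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

public class DistinctTupleCount {

    // Count distinct column combinations by putting each row's tuple of
    // values into a set; List.equals compares element-wise, so two rows
    // with the same combination collapse to one entry.
    static int countDistinct(List<List<String>> rows) {
        return new HashSet<>(rows).size();
    }

    public static void main(String[] args) {
        List<List<String>> rows = Arrays.asList(
            Arrays.asList("a", "1"),
            Arrays.asList("a", "1"),   // duplicate combination
            Arrays.asList("a", "2"));
        System.out.println(countDistinct(rows)); // prints 2
    }
}
```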





[jira] [Updated] (PHOENIX-2746) Delete on the table with immutable rows may fail with INVALID_FILTER_ON_IMMUTABLE_ROWS error code.

2016-03-15 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2746:
-
Attachment: PHOENIX-2746_v2.patch

Thanks for the review, [~jamestaylor]. Handled the review comment. Raised 
PHOENIX-2770 to add equals and hashCode to compare PTableImpl objects.

> Delete on the table with immutable rows may fail with 
> INVALID_FILTER_ON_IMMUTABLE_ROWS error code.
> --
>
> Key: PHOENIX-2746
> URL: https://issues.apache.org/jira/browse/PHOENIX-2746
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2746.patch, PHOENIX-2746_v2.patch
>
>
> Sometimes a delete on a table with immutable rows fails with the below error, 
> even when all the indexes contain the column in the where condition. If there 
> is a condition on primary key columns, it always fails.
> {noformat}
> 0: jdbc:phoenix:localhost> delete from t2 where a='raj1';
> Error: ERROR 1027 (42Y86): All columns referenced in a WHERE clause must be 
> available in every index for a table with immutable rows. tableName=T2 
> (state=42Y86,code=1027)
> java.sql.SQLException: ERROR 1027 (42Y86): All columns referenced in a WHERE 
> clause must be available in every index for a table with immutable rows. 
> tableName=T2
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:546)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:534)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1247)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:808)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292
> {noformat}
> The reason is that we collect the non-disabled indexes and add them to a list: 
> 1) Once after resolving the data table.
> 2) Once after running the select built from the delete's where condition. 
> So the references of the index table objects will differ between the two times 
> if the cache has been updated again in the meantime.
> {noformat}
> immutableIndex = getNonDisabledImmutableIndexes(tableRefToBe);
> {noformat}
> So here, when removing a table from the immutableIndex list, we end up 
> comparing references because PTable doesn't have equals or hashCode 
> implementations; as a result no index is removed from the list and we throw a 
> SQLException.
> {noformat}
> while (plans.hasNext()) {
> QueryPlan plan = plans.next();
> PTable table = plan.getTableRef().getTable();
> if (table.getType() == PTableType.INDEX) { // index plans
> tableRefs[i++] = plan.getTableRef();
> immutableIndex.remove(table);
> } else { // data plan
> /*
>  * If we have immutable indexes that we need to maintain, 
> don't execute the data plan
>  * as we can save a query by piggy-backing on any of the 
> other index queries, since the
>  * PK columns that we need are always in each index row.
>  */
> plans.remove();
> }
> {noformat}
> If the where condition is on a PK column, the compiler returns only one plan 
> because we pass the USE_DATA_OVER_INDEX_TABLE hint. Even then the 
> immutableIndex list is not empty, and we still throw the exception.
> {noformat}
> noQueryReqd = !hasLimit;
> // Can't run on same server for transactional data, as we 
> need the row keys for the data
> // that is being upserted for conflict detection purposes.
> runOnServer = isAutoCommit && noQueryReqd && 
> !table.isTransactional();
> HintNode hint = delete.getHint();

[jira] [Created] (PHOENIX-2770) Add equal and hashCode methods to PTableImpl to check the equality of its objects

2016-03-15 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-2770:


 Summary: Add equal and hashCode methods to PTableImpl to check the 
equality of its objects
 Key: PHOENIX-2770
 URL: https://issues.apache.org/jira/browse/PHOENIX-2770
 Project: Phoenix
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.8.0








[jira] [Created] (PHOENIX-2769) Order by desc return wrong results

2016-03-15 Thread Joseph Sun (JIRA)
Joseph Sun created PHOENIX-2769:
---

 Summary: Order by desc return wrong results
 Key: PHOENIX-2769
 URL: https://issues.apache.org/jira/browse/PHOENIX-2769
 Project: Phoenix
  Issue Type: Bug
Reporter: Joseph Sun


create table access_logs (
event_time date not null,
uuid varchar(36) not null,
event_type varchar(32),
CONSTRAINT pk PRIMARY KEY (event_time,uuid)
) VERSIONS=1,SALT_BUCKETS=6,IMMUTABLE_ROWS=true;


I inserted 2,000,000 records into access_logs, with event_time between 2016-01-06 
and 2016-03-15.

I execute SQL. 

>select event_time from access_logs  order by event_time asc limit 10;
+--+
|EVENT_TIME|
+--+
| 2016-01-06 18:41:54.000  |
| 2016-01-06 19:56:46.000  |
| 2016-01-06 20:25:12.000  |
| 2016-01-06 20:41:37.000  |
| 2016-01-06 20:46:20.000  |
| 2016-01-06 20:53:10.000  |
| 2016-01-06 21:04:09.000  |
| 2016-01-07 01:22:57.000  |
| 2016-01-07 10:59:11.000  |
| 2016-01-07 12:52:56.000  |
+--+
 
> select event_time from access_logs order by event_time desc limit 10;
+--+
|EVENT_TIME|
+--+
| 2016-02-11 13:07:25.000  |
| 2016-02-11 13:07:24.000  |
| 2016-02-11 13:07:24.000  |
| 2016-02-11 13:07:23.000  |
| 2016-02-11 13:07:23.000  |
| 2016-02-11 13:07:22.000  |
| 2016-02-11 13:07:21.000  |
| 2016-02-11 13:07:21.000  |
| 2016-02-11 13:07:20.000  |
| 2016-02-11 13:07:20.000  |

> select event_time from access_logs where event_time>to_date('2016-02-11 
> 13:07:25') order by event_time desc limit 10;
+--+
|EVENT_TIME|
+--+
| 2016-02-25 18:34:17.000  |
| 2016-02-25 18:34:17.000  |
| 2016-02-25 18:34:16.000  |
| 2016-02-25 18:34:16.000  |
| 2016-02-25 18:34:15.000  |
| 2016-02-25 18:34:15.000  |
| 2016-02-25 18:34:14.000  |
| 2016-02-25 18:34:14.000  |
| 2016-02-25 18:34:14.000  |
| 2016-02-25 18:34:14.000  |
+--+


Checking the returned results, the 'order by event_time desc' query does not 
return the correct results.


