[jira] [Resolved] (PHOENIX-4445) Modify all ITs to not use CurrentSCN or CURRENT_SCN

2018-03-23 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak resolved PHOENIX-4445.

Resolution: Fixed

> Modify all ITs to not use CurrentSCN or CURRENT_SCN
> ---
>
> Key: PHOENIX-4445
> URL: https://issues.apache.org/jira/browse/PHOENIX-4445
> Project: Phoenix
>  Issue Type: Test
>Reporter: Csaba Skrabak
>Priority: Major
> Fix For: 4.12.0
>
>
> This is a collection of "Modify ...IT to not use CurrentSCN" issues. Look for 
> your test name here to see whether it is covered and, if so, by which issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-4662:
--

Assignee: Csaba Skrabak

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Major
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This turned 
> up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4662:
---
Priority: Major  (was: Blocker)

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Priority: Major
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This turned 
> up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4662:
---
Priority: Blocker  (was: Major)

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Blocker
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This turned 
> up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-4662:
--

Assignee: (was: Csaba Skrabak)

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Priority: Blocker
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This turned 
> up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4662:
---
Attachment: PHOENIX-4662.patch

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Major
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This turned 
> up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4010) Hash Join cache may not be send to all regionservers when we have stale HBase meta cache

2018-03-20 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406005#comment-16406005
 ] 

Csaba Skrabak commented on PHOENIX-4010:


Oh, it's released. :( Got it, [~an...@apache.org]. Added link to the new 
PHOENIX-4662.

> Hash Join cache may not be send to all regionservers when we have stale HBase 
> meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.addendum.patch, PHOENIX-4010.patch, 
> PHOENIX-4010_v1.patch, PHOENIX-4010_v2.patch, PHOENIX-4010_v2_rebased.patch, 
> PHOENIX-4010_v2_rebased_1.patch
>
>
> If the region locations have changed and our HBase meta cache has not been updated, 
> we might not send the hash join cache to all of the region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload =false;
> while (true) {
> try {
> // We could surface the package projected 
> HConnectionImplementation.getNumberOfCachedRegionLocations
> // to get the sizing info we need, but this would require a 
> new class in the same package and a cast
> // to this implementation class, so it's probably not worth 
> it.
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
> HRegionLocation regionLocation = 
> connection.getRegionLocation(
> TableName.valueOf(tableName), currentKey, reload);
> locations.add(regionLocation);
> currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = 
> services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> 
> .
>  if ( ! servers.contains(entry) && 
> keyRanges.intersectRegion(regionStartKey, 
> regionEndKey,
> cacheUsingTable.getIndexType() == 
> IndexType.LOCAL)) {  
> // Call RPC once per server
> servers.add(entry);
> {code}
> For example, table ’T’ has two regions, R1 and R2, originally hosted on 
> regionserver RS1. 
> While the Phoenix/HBase connection is still active, R2 is transitioned to RS2, 
> but the stale meta cache will still give the old region locations, i.e. R1 and R2 on 
> RS1. When we start copying the hash table, we copy it for R1 and skip R2 because they 
> appear to be hosted on the same regionserver. So a query on the table will fail, as it 
> will be unable to find the hash table cache on RS2 when processing region R2.
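For illustration only (not from the attached patches), here is a minimal, self-contained Java sketch of the "send the cache once per server" loop described above, showing how a stale region-to-server view causes one server to be skipped. All names in it (HashCacheDistributionSketch, sendCacheToServers, the RS1/RS2 strings) are hypothetical.

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the dedup-by-server loop described in this issue. If the
// region->server mapping comes from a stale meta cache, a region that has actually
// moved to a new server is skipped, so that server never receives the hash join cache.
class HashCacheDistributionSketch {
    static void sendCacheToServers(List<String> regionServersByRegion) {
        Set<String> alreadySent = new HashSet<>();
        for (String server : regionServersByRegion) {
            // Call the (pretend) RPC once per distinct server, as described above.
            if (alreadySent.add(server)) {
                System.out.println("sending hash join cache to " + server);
            }
        }
    }

    public static void main(String[] args) {
        // Stale view: both R1 and R2 appear to live on RS1, so RS2 is never contacted
        // even though R2 has in fact moved there.
        sendCacheToServers(List.of("RS1", "RS1"));
    }
}
{code}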



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4662:
--

 Summary: NullPointerException in TableResultIterator.java on cache 
resend
 Key: PHOENIX-4662
 URL: https://issues.apache.org/jira/browse/PHOENIX-4662
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Csaba Skrabak
Assignee: Csaba Skrabak


In the fix for PHOENIX-4010, there is a potential null dereference. This turned up 
when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.

The caches field is initialized to null and may be dereferenced after "Retrying 
when Hash Join cache is not found on the server ,by sending the cache again".
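For illustration only (this is not the attached PHOENIX-4662.patch), a minimal, self-contained Java sketch of the pattern described above: a collection field that starts out null is only safe on the retry path if it is either initialized to an empty collection or guarded by a null check before iteration. The class and field names below are hypothetical.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the null-dereference pattern and two possible guards.
class CacheResendSketch {
    // Initializing to an empty list (rather than null) makes the retry path safe.
    private List<String> caches = new ArrayList<>();

    void retryWithCacheResend() {
        if (caches != null) {                 // defensive guard; harmless once the field is non-null
            for (String cacheId : caches) {
                System.out.println("re-sending hash join cache " + cacheId);
            }
        }
    }
}
{code}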



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4010) Hash Join cache may not be send to all regionservers when we have stale HBase meta cache

2018-03-19 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405029#comment-16405029
 ] 

Csaba Skrabak edited comment on PHOENIX-4010 at 3/19/18 4:01 PM:
-

Running an older version of HashJoinIT revealed that the fix introduced a 
potential null dereference. Please check [^PHOENIX-4010.addendum.patch]


was (Author: cskrabak):
Running an older version of HashJoinIT revealed that the fix introduced a 
potential null dereference.

> Hash Join cache may not be send to all regionservers when we have stale HBase 
> meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Csaba Skrabak
>Priority: Major
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.addendum.patch, PHOENIX-4010.patch, 
> PHOENIX-4010_v1.patch, PHOENIX-4010_v2.patch, PHOENIX-4010_v2_rebased.patch, 
> PHOENIX-4010_v2_rebased_1.patch
>
>
> If the region locations have changed and our HBase meta cache has not been updated, 
> we might not send the hash join cache to all of the region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload =false;
> while (true) {
> try {
> // We could surface the package projected 
> HConnectionImplementation.getNumberOfCachedRegionLocations
> // to get the sizing info we need, but this would require a 
> new class in the same package and a cast
> // to this implementation class, so it's probably not worth 
> it.
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
> HRegionLocation regionLocation = 
> connection.getRegionLocation(
> TableName.valueOf(tableName), currentKey, reload);
> locations.add(regionLocation);
> currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = 
> services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> 
> .
>  if ( ! servers.contains(entry) && 
> keyRanges.intersectRegion(regionStartKey, 
> regionEndKey,
> cacheUsingTable.getIndexType() == 
> IndexType.LOCAL)) {  
> // Call RPC once per server
> servers.add(entry);
> {code}
> For example, table ’T’ has two regions, R1 and R2, originally hosted on 
> regionserver RS1. 
> While the Phoenix/HBase connection is still active, R2 is transitioned to RS2, 
> but the stale meta cache will still give the old region locations, i.e. R1 and R2 on 
> RS1. When we start copying the hash table, we copy it for R1 and skip R2 because they 
> appear to be hosted on the same regionserver. So a query on the table will fail, as it 
> will be unable to find the hash table cache on RS2 when processing region R2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-4010) Hash Join cache may not be send to all regionservers when we have stale HBase meta cache

2018-03-19 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reopened PHOENIX-4010:

  Assignee: Csaba Skrabak  (was: Ankit Singhal)

Running an older version of HashJoinIT revealed that the fix introduced a 
potential null dereference.

> Hash Join cache may not be send to all regionservers when we have stale HBase 
> meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Csaba Skrabak
>Priority: Major
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.addendum.patch, PHOENIX-4010.patch, 
> PHOENIX-4010_v1.patch, PHOENIX-4010_v2.patch, PHOENIX-4010_v2_rebased.patch, 
> PHOENIX-4010_v2_rebased_1.patch
>
>
> If the region locations have changed and our HBase meta cache has not been updated, 
> we might not send the hash join cache to all of the region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload =false;
> while (true) {
> try {
> // We could surface the package projected 
> HConnectionImplementation.getNumberOfCachedRegionLocations
> // to get the sizing info we need, but this would require a 
> new class in the same package and a cast
> // to this implementation class, so it's probably not worth 
> it.
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
> HRegionLocation regionLocation = 
> connection.getRegionLocation(
> TableName.valueOf(tableName), currentKey, reload);
> locations.add(regionLocation);
> currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = 
> services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> 
> .
>  if ( ! servers.contains(entry) && 
> keyRanges.intersectRegion(regionStartKey, 
> regionEndKey,
> cacheUsingTable.getIndexType() == 
> IndexType.LOCAL)) {  
> // Call RPC once per server
> servers.add(entry);
> {code}
> For example, table ’T’ has two regions, R1 and R2, originally hosted on 
> regionserver RS1. 
> While the Phoenix/HBase connection is still active, R2 is transitioned to RS2, 
> but the stale meta cache will still give the old region locations, i.e. R1 and R2 on 
> RS1. When we start copying the hash table, we copy it for R1 and skip R2 because they 
> appear to be hosted on the same regionserver. So a query on the table will fail, as it 
> will be unable to find the hash table cache on RS2 when processing region R2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4010) Hash Join cache may not be send to all regionservers when we have stale HBase meta cache

2018-03-19 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4010:
---
Attachment: PHOENIX-4010.addendum.patch

> Hash Join cache may not be send to all regionservers when we have stale HBase 
> meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Csaba Skrabak
>Priority: Major
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.addendum.patch, PHOENIX-4010.patch, 
> PHOENIX-4010_v1.patch, PHOENIX-4010_v2.patch, PHOENIX-4010_v2_rebased.patch, 
> PHOENIX-4010_v2_rebased_1.patch
>
>
> If the region locations have changed and our HBase meta cache has not been updated, 
> we might not send the hash join cache to all of the region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload =false;
> while (true) {
> try {
> // We could surface the package projected 
> HConnectionImplementation.getNumberOfCachedRegionLocations
> // to get the sizing info we need, but this would require a 
> new class in the same package and a cast
> // to this implementation class, so it's probably not worth 
> it.
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
> HRegionLocation regionLocation = 
> connection.getRegionLocation(
> TableName.valueOf(tableName), currentKey, reload);
> locations.add(regionLocation);
> currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = 
> services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> 
> .
>  if ( ! servers.contains(entry) && 
> keyRanges.intersectRegion(regionStartKey, 
> regionEndKey,
> cacheUsingTable.getIndexType() == 
> IndexType.LOCAL)) {  
> // Call RPC once per server
> servers.add(entry);
> {code}
> For example, table ’T’ has two regions, R1 and R2, originally hosted on 
> regionserver RS1. 
> While the Phoenix/HBase connection is still active, R2 is transitioned to RS2, 
> but the stale meta cache will still give the old region locations, i.e. R1 and R2 on 
> RS1. When we start copying the hash table, we copy it for R1 and skip R2 because they 
> appear to be hosted on the same regionserver. So a query on the table will fail, as it 
> will be unable to find the hash table cache on RS2 when processing region R2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4139) select distinct with identical aggregations return weird values

2018-01-30 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4139:
---
Attachment: PHOENIX-4139_v3.patch

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch, 
> PHOENIX-4139_v3.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--------------+---------------------+---------------------+---------------------+
> | test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
> +--------------+---------------------+---------------------+---------------------+
> | harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
> +--------------+---------------------+---------------------+---------------------+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4139) select distinct with identical aggregations return weird values

2018-01-30 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345338#comment-16345338
 ] 

Csaba Skrabak edited comment on PHOENIX-4139 at 1/30/18 4:39 PM:
-

Oh. I have to revert a change to the test case. Uploading 
[^PHOENIX-4139_v3.patch] that contains the old test with the one-line fix.


was (Author: cskrabak):
Oh. I have to revert a change to the test case. Uploading v3 that contains the 
old test with the one-line fix.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch, 
> PHOENIX-4139_v3.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--------------+---------------------+---------------------+---------------------+
> | test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
> +--------------+---------------------+---------------------+---------------------+
> | harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
> +--------------+---------------------+---------------------+---------------------+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2018-01-30 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345338#comment-16345338
 ] 

Csaba Skrabak commented on PHOENIX-4139:


Oh. I have to revert a change to the test case. Uploading v3 that contains the 
old test with the one-line fix.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--------------+---------------------+---------------------+---------------------+
> | test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
> +--------------+---------------------+---------------------+---------------------+
> | harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
> +--------------+---------------------+---------------------+---------------------+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2018-01-30 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345322#comment-16345322
 ] 

Csaba Skrabak commented on PHOENIX-4139:


Wow, it failed on my side, now double checking...

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--------------+---------------------+---------------------+---------------------+
> | test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
> +--------------+---------------------+---------------------+---------------------+
> | harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
> +--------------+---------------------+---------------------+---------------------+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4139) select distinct with identical aggregations return weird values

2018-01-30 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345299#comment-16345299
 ] 

Csaba Skrabak edited comment on PHOENIX-4139 at 1/30/18 4:25 PM:
-

[~jamestaylor], [^PHOENIX-4139_v2.patch] already has a kind of fix that makes 
the test case pass. I think it would be nice to include this fix.

But we should still keep a Jira ticket open because the internal behavior of 
the code is not as designed. I'm trying to produce an error case but have not 
found one yet. I'll clone this ticket in case you commit my current fix and 
close this one; then we can track the final solution in the clone.


was (Author: cskrabak):
[~jamestaylor], my v2 patch already has a kind of fix that makes the test case 
pass. I think it would be nice to include this fix.

But we should still keep a Jira ticket open because the internal behavior of 
the code is not as designed. I'm trying to produce an error case but have not 
found one yet. I'll clone this ticket in case you commit my current fix and 
close this one; then we can track the final solution in the clone.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--------------+---------------------+---------------------+---------------------+
> | test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
> +--------------+---------------------+---------------------+---------------------+
> | harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
> +--------------+---------------------+---------------------+---------------------+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4568) Duplicate entries in the GroupBy structure when running AggregateIT.testTrimDistinct

2018-01-30 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4568:
---
Description: 
The AggregateIT.testTrimDistinct case was introduced in the fix for PHOENIX-4139.

Trace-debugging the test reveals that the GroupBy class may store duplicates of 
accessor objects in its list fields, keyExpressions and expressions.

"Since the second trim expression is the same a the first one, the group by 
(district turns into a group by) of the second one should be ignored as it 
serves no purpose. That is what occurs when you do a select without the 
distinct. Perhaps this logic is missing from GroupByCompiler?" 
https://issues.apache.org/jira/browse/PHOENIX-4139?focusedCommentId=16274531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16274531

I have not yet found a test case in which this internal behavior causes an 
error, but it should still be fixed.

  was:
The AggregateIT.testTrimDistinct case was introduced in the fix for PHOENIX-4139.

Trace-debugging the test reveals that the GroupBy class may store duplicates of 
accessor objects in its list fields, keyExpressions and expressions.

"Since the second trim expression is the same a the first one, the group by 
(district turns into a group by) of the second one should be ignored as it 
serves no purpose. That is what occurs when you do a select without the 
distinct. Perhaps this logic is missing from GroupByCompiler?" 
https://issues.apache.org/jira/browse/PHOENIX-4139?focusedCommentId=16274531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16274531


> Duplicate entries in the GroupBy structure when running 
> AggregateIT.testTrimDistinct
> 
>
> Key: PHOENIX-4568
> URL: https://issues.apache.org/jira/browse/PHOENIX-4568
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
>
> The AggregateIT.testTrimDistinct case was introduced in the fix for PHOENIX-4139.
> Trace-debugging the test reveals that the GroupBy class may store duplicates 
> of accessor objects in its list fields, keyExpressions and expressions.
> "Since the second trim expression is the same a the first one, the group by 
> (district turns into a group by) of the second one should be ignored as it 
> serves no purpose. That is what occurs when you do a select without the 
> distinct. Perhaps this logic is missing from GroupByCompiler?" 
> https://issues.apache.org/jira/browse/PHOENIX-4139?focusedCommentId=16274531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16274531
> I have not yet found a test case in which this internal behavior causes an 
> error, but it should still be fixed.
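For illustration only (not a committed Phoenix change), a minimal, self-contained Java sketch of the kind of de-duplication the quoted comment suggests for GroupByCompiler: identical expressions in the DISTINCT/GROUP BY list can be collapsed while keeping their original order. The class and method names below are hypothetical.

{code}
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Hypothetical sketch: collapse duplicate group-by expressions, preserving order.
class GroupByDedupSketch {
    static List<String> dedupeExpressions(List<String> expressions) {
        // LinkedHashSet drops duplicates but keeps first-seen order.
        return new ArrayList<>(new LinkedHashSet<>(expressions));
    }

    public static void main(String[] args) {
        // TRIM(NAM) appearing twice should only be grouped on once.
        System.out.println(dedupeExpressions(
                List.of("'harshit'", "TRIM(NAM)", "TRIM(NAM)")));
    }
}
{code}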



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4568) Duplicate entries in the GroupBy structure when running AggregateIT.testTrimDistinct

2018-01-30 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4568:
---
Description: 
The AggregateIT.testTrimDistinct case was introduced in the fix for PHOENIX-4139.

Trace-debugging the test reveals that the GroupBy class may store duplicates of 
accessor objects in its list fields, keyExpressions and expressions.

"Since the second trim expression is the same a the first one, the group by 
(district turns into a group by) of the second one should be ignored as it 
serves no purpose. That is what occurs when you do a select without the 
distinct. Perhaps this logic is missing from GroupByCompiler?" 
https://issues.apache.org/jira/browse/PHOENIX-4139?focusedCommentId=16274531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16274531

  was:This is a follow-up ticket after PHOENIX-4139. The 


> Duplicate entries in the GroupBy structure when running 
> AggregateIT.testTrimDistinct
> 
>
> Key: PHOENIX-4568
> URL: https://issues.apache.org/jira/browse/PHOENIX-4568
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
>
> The AggregateIT.testTrimDistinct case was introduced in the fix for PHOENIX-4139.
> Trace-debugging the test reveals that the GroupBy class may store duplicates 
> of accessor objects in its list fields, keyExpressions and expressions.
> "Since the second trim expression is the same a the first one, the group by 
> (district turns into a group by) of the second one should be ignored as it 
> serves no purpose. That is what occurs when you do a select without the 
> distinct. Perhaps this logic is missing from GroupByCompiler?" 
> https://issues.apache.org/jira/browse/PHOENIX-4139?focusedCommentId=16274531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16274531



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4568) Duplicate entries in the GroupBy structure when running AggregateIT.testTrimDistinct

2018-01-30 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4568:
---
Description: This is a follow-up ticket after PHOENIX-4139. The   (was: 
From sme-hbase hipchat room:
Pulkit Bhardwaj·10:31

i'm seeing a weird issue with phoenix, appreciate some thoughts

Created a simple table in phoenix
{noformat}
0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
VARCHAR(20), id BIGINT
. . . . . . . . > constraint my_pk primary key (id));

0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
values('pulkit','badaun',1);

0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| pulkit  | badaun   | 1   |
+-+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
test_select;
+--+-+
| test_column  |   NAM   |
+--+-+
| harshit  | pulkit  |
+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam) from test_select;
+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| harshit  | pulkitpulkit  | pulkitpulkit  |
+--+++
{noformat}

When I apply a trim on the nam column and use it multiple times, the output has 
the cell data duplicated!
{noformat}
0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;
+--------------+---------------------+---------------------+---------------------+
| test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
+--------------+---------------------+---------------------+---------------------+
| harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
+--------------+---------------------+---------------------+---------------------+
{noformat}

Wondering if someone has seen this before??

One thing to note is, if I remove the —— distinct 'harshit' as "test_column" —— 
 The issue is not seen
{noformat}
0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| pulkit | pulkit | pulkit |
++++
{noformat})

> Duplicate entries in the GroupBy structure when running 
> AggregateIT.testTrimDistinct
> 
>
> Key: PHOENIX-4568
> URL: https://issues.apache.org/jira/browse/PHOENIX-4568
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
>
> This is a follow-up ticket after PHOENIX-4139. The 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4568) Duplicate entries in the GroupBy structure when running AggregateIT.testTrimDistinct

2018-01-30 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4568:
--

 Summary: Duplicate entries in the GroupBy structure when running 
AggregateIT.testTrimDistinct
 Key: PHOENIX-4568
 URL: https://issues.apache.org/jira/browse/PHOENIX-4568
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
 Environment: minicluster
Reporter: Csaba Skrabak
Assignee: Csaba Skrabak
 Fix For: 4.14.0


From sme-hbase hipchat room:
Pulkit Bhardwaj·10:31

i'm seeing a weird issue with phoenix, appreciate some thoughts

Created a simple table in phoenix
{noformat}
0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
VARCHAR(20), id BIGINT
. . . . . . . . > constraint my_pk primary key (id));

0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
values('pulkit','badaun',1);

0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| pulkit  | badaun   | 1   |
+-+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
test_select;
+--+-+
| test_column  |   NAM   |
+--+-+
| harshit  | pulkit  |
+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam) from test_select;
+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| harshit  | pulkitpulkit  | pulkitpulkit  |
+--+++
{noformat}

When I apply a trim on the nam column and use it multiple times, the output has 
the cell data duplicated!
{noformat}
0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;
+--------------+---------------------+---------------------+---------------------+
| test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
+--------------+---------------------+---------------------+---------------------+
| harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
+--------------+---------------------+---------------------+---------------------+
{noformat}

Wondering if someone has seen this before??

One thing to note is, if I remove the —— distinct 'harshit' as "test_column" —— 
 The issue is not seen
{noformat}
0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| pulkit | pulkit | pulkit |
++++
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2018-01-30 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345299#comment-16345299
 ] 

Csaba Skrabak commented on PHOENIX-4139:


[~jamestaylor], my v2 patch already has a kind of fix that makes the test case 
pass. I think it would be nice to include this fix.

But we should still keep a Jira ticket open because the internal behavior of 
the code is not as designed. I'm trying to produce an error case but have not 
found one yet. I'll clone this ticket in case you commit my current fix and 
close this one; then we can track the final solution in the clone.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--------------+---------------------+---------------------+---------------------+
> | test_column  |      TRIM(NAM)      |      TRIM(NAM)      |      TRIM(NAM)      |
> +--------------+---------------------+---------------------+---------------------+
> | harshit      | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  |
> +--------------+---------------------+---------------------+---------------------+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4447) Modify PointInTimeQueryIT to not use CurrentSCN

2017-12-11 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak resolved PHOENIX-4447.

Resolution: Not A Problem

Then something else must have made the test fail, sorry for the alarm.

> Modify PointInTimeQueryIT to not use CurrentSCN
> ---
>
> Key: PHOENIX-4447
> URL: https://issues.apache.org/jira/browse/PHOENIX-4447
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Csaba Skrabak
> Fix For: 4.12.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4447) Modify PointInTimeQueryIT to not use CurrentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283831#comment-16283831
 ] 

Csaba Skrabak commented on PHOENIX-4447:


It seemed to me that no test should use CurrentSCN, so should this one be ignored 
altogether?
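For context, this is the connection property the "not use CurrentSCN" sub-tasks remove from the integration tests; a sketch of how a test would typically set it (the JDBC URL below is a placeholder, and the property key is the value of PhoenixRuntime.CURRENT_SCN_ATTRIB):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Sketch only: open a Phoenix connection pinned to a fixed SCN timestamp, which is
// exactly what these test conversions stop doing.
class CurrentScnSketch {
    static Connection connectAtTimestamp(long ts) throws Exception {
        Properties props = new Properties();
        props.setProperty("CurrentSCN", Long.toString(ts)); // PhoenixRuntime.CURRENT_SCN_ATTRIB
        return DriverManager.getConnection("jdbc:phoenix:localhost", props);
    }
}
{code}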

> Modify PointInTimeQueryIT to not use CurrentSCN
> ---
>
> Key: PHOENIX-4447
> URL: https://issues.apache.org/jira/browse/PHOENIX-4447
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Csaba Skrabak
> Fix For: 4.12.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4445) Modify all ITs to not use CurrentSCN or CURRENT_SCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-4445:
--

Assignee: (was: James Taylor)

> Modify all ITs to not use CurrentSCN or CURRENT_SCN
> ---
>
> Key: PHOENIX-4445
> URL: https://issues.apache.org/jira/browse/PHOENIX-4445
> Project: Phoenix
>  Issue Type: Test
>Reporter: Csaba Skrabak
> Fix For: 4.12.0
>
>
> This is a collection of "Modify ...IT to not use CurrentSCN" issues. Look for 
> your test name here to see whether it is covered and, if so, by which issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4445) Modify all ITs to not use CurrentSCN or CURRENT_SCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4445:
---
Description: This is a collection of "Modify ...IT to not use CurrentSCN" 
issues. Look for your test name here to see whether it is covered and, if so, 
by which issue.  (was: Converting misc tests not to use CURRENT_SCN)

> Modify all ITs to not use CurrentSCN or CURRENT_SCN
> ---
>
> Key: PHOENIX-4445
> URL: https://issues.apache.org/jira/browse/PHOENIX-4445
> Project: Phoenix
>  Issue Type: Test
>Reporter: Csaba Skrabak
>Assignee: James Taylor
> Fix For: 4.12.0
>
>
> This is a collection of "Modify ...IT to not use CurrentSCN" issues. Look for 
> your test name here to see whether it is covered and, if so, by which issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4445) Modify all ITs to not use CurrentSCN or CURRENT_SCN

2017-12-08 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283294#comment-16283294
 ] 

Csaba Skrabak commented on PHOENIX-4445:


Tests covered in PHOENIX-4208:
 .../org/apache/phoenix/end2end/DropSchemaIT.java   | 80 +++---
 .../java/org/apache/phoenix/end2end/GroupByIT.java | 26 ---
 .../org/apache/phoenix/end2end/MutableQueryIT.java |  4 +-
 .../phoenix/end2end/ReadIsolationLevelIT.java  | 46 +++--
 .../end2end/RebuildIndexConnectionPropsIT.java |  2 -
 .../org/apache/phoenix/end2end/ScanQueryIT.java| 21 ++
 .../org/apache/phoenix/end2end/StoreNullsIT.java   |  5 +-
 .../java/org/apache/phoenix/rpc/UpdateCacheIT.java | 30 +++-
 .../apache/phoenix/rpc/UpdateCacheWithScnIT.java   | 49 -

PHOENIX-4180:
./phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayIT.java
./phoenix-core/src/it/java/org/apache/phoenix/end2end/ClientTimeArithmeticQueryIT.java
./phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnProjectionOptimizationIT.java
./phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
./phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorWithRowValueConstructorIT.java

PHOENIX-4175:
CreateSchemaIT, CustomEntityDataIT, and UpsertSelectIT

> Modify all ITs to not use CurrentSCN or CURRENT_SCN
> ---
>
> Key: PHOENIX-4445
> URL: https://issues.apache.org/jira/browse/PHOENIX-4445
> Project: Phoenix
>  Issue Type: Test
>Reporter: Csaba Skrabak
>Assignee: James Taylor
> Fix For: 4.12.0
>
>
> Converting misc tests not to use CURRENT_SCN



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4447) Modify PointInTimeQueryIT to not use CurrentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4447:
--

 Summary: Modify PointInTimeQueryIT to not use CurrentSCN
 Key: PHOENIX-4447
 URL: https://issues.apache.org/jira/browse/PHOENIX-4447
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Csaba Skrabak






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4175) Convert tests using CURRENT_SCN to not use it when possible

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4175:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Convert tests using CURRENT_SCN to not use it when possible
> ---
>
> Key: PHOENIX-4175
> URL: https://issues.apache.org/jira/browse/PHOENIX-4175
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4175_1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4180) Modify tests to generate unique table names and not use CURRENT_SCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4180:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify tests to generate unique table names and not use CURRENT_SCN
> ---
>
> Key: PHOENIX-4180
> URL: https://issues.apache.org/jira/browse/PHOENIX-4180
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.12.0
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4180.patch
>
>
> Here is the update provided by [~jamestaylor] (a sketch of this conversion follows the file list below):
> - switch from using hard coded table names to generated table names (using 
> the BaseTest.generateUniqueName() function).
> - remove the setting of the CURRENT_SCN property name
> - verify the tests still pass
> Here's an example commit of the conversion of one of them: 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commitdiff;h=814276d4b4b08be0681f1c402cfb3cc35f01fa0a;hp=b46cbd375e3d2ee9a11644825c13937572c027cd
> Here's the list of tests that need to be converted:
> ./phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayIT.java
> ./phoenix-core/src/it/java/org/apache/phoenix/end2end/ClientTimeArithmeticQueryIT.java
> ./phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnProjectionOptimizationIT.java
> ./phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
> ./phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorWithRowValueConstructorIT.java
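For illustration only (not taken from the linked commit), a minimal, self-contained Java sketch of the conversion described in the list above: generate a unique table name per test instead of hard-coding one, and open the connection without setting CURRENT_SCN. The stubbed generateUniqueName()/getUrl() helpers stand in for the BaseTest utilities mentioned above.

{code}
import java.sql.Connection;
import java.sql.DriverManager;

// Hypothetical sketch of the test conversion: unique table name, no CurrentSCN property.
class UniqueNameConversionSketch {
    static String generateUniqueName() { return "T_" + System.nanoTime(); } // stand-in for BaseTest
    static String getUrl() { return "jdbc:phoenix:localhost"; }             // stand-in for BaseTest

    static void createTestTable() throws Exception {
        String tableName = generateUniqueName();                        // instead of a hard-coded name
        try (Connection conn = DriverManager.getConnection(getUrl())) { // no CURRENT_SCN property set
            conn.createStatement().execute(
                "CREATE TABLE " + tableName + " (id BIGINT PRIMARY KEY, nam VARCHAR(20))");
        }
    }
}
{code}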



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4185) Convert PercentileIT and ProductMetricsIT to not use CURRENT_SCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4185:
---
Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-4445

> Convert PercentileIT and ProductMetricsIT to not use CURRENT_SCN
> 
>
> Key: PHOENIX-4185
> URL: https://issues.apache.org/jira/browse/PHOENIX-4185
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.12.0
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4185-v1.patch
>
>
> Converting tests for the 4.12 change that disables DDL/DML with a customized SCN, 
> i.e. customized SCN timestamp assignment is disallowed.
> ./phoenix-core/src/it/java/org/apache/phoenix/end2end/PercentileIT.java
> ./phoenix-core/src/it/java/org/apache/phoenix/end2end/ProductMetricsIT.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4205) Modify OutOfOrderMutationsIT to not use CURRENT_SCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4205:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify OutOfOrderMutationsIT to not use CURRENT_SCN
> ---
>
> Key: PHOENIX-4205
> URL: https://issues.apache.org/jira/browse/PHOENIX-4205
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4205.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4218) Remove usage of current scn from UserDefinedFunctionsIT.testUDFsWithLatestTimestamp

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4218:
---
Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-4445

> Remove usage of current scn from 
> UserDefinedFunctionsIT.testUDFsWithLatestTimestamp
> ---
>
> Key: PHOENIX-4218
> URL: https://issues.apache.org/jira/browse/PHOENIX-4218
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4218.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4194) Modify RoundFloorCeilFuncIT, RowValueConstructorIT, SaltedTableIT, TenantIdTypeIT, StoreNullsIT and RebuildIndexConnectionPropsIT to not use CurrentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4194:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify RoundFloorCeilFuncIT, RowValueConstructorIT, SaltedTableIT, 
> TenantIdTypeIT, StoreNullsIT and RebuildIndexConnectionPropsIT to not use 
> CurrentSCN
> ---
>
> Key: PHOENIX-4194
> URL: https://issues.apache.org/jira/browse/PHOENIX-4194
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4194.patch, PHOENIX-4194_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4199) Modify SequenceIT.java to not use CurrentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4199:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify SequenceIT.java to not use CurrentSCN
> 
>
> Key: PHOENIX-4199
> URL: https://issues.apache.org/jira/browse/PHOENIX-4199
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
> Attachments: PHOENIX-4199.patch, PHOENIX-4199_v2.patch, 
> PHOENIX-4199_v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4213) Modify ExtendedQueryExecIT and FunkyNamesIT to not use currentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4213:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify ExtendedQueryExecIT and FunkyNamesIT to not use currentSCN
> -
>
> Key: PHOENIX-4213
> URL: https://issues.apache.org/jira/browse/PHOENIX-4213
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4213.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4204) Modify SequenceBulkAllocationIT.java to not use currentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4204:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify SequenceBulkAllocationIT.java to not use currentSCN
> --
>
> Key: PHOENIX-4204
> URL: https://issues.apache.org/jira/browse/PHOENIX-4204
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
> Attachments: PHOENIX-4204.patch, PHOENIX-4204_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4186) Modify NativeHBaseTypesIT to not use CurrentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4186:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify NativeHBaseTypesIT to not use CurrentSCN
> ---
>
> Key: PHOENIX-4186
> URL: https://issues.apache.org/jira/browse/PHOENIX-4186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4186-v2.patch, PHOENIX-4186.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4208) Modify tests to not use CurrentSCN

2017-12-08 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4208:
---
Issue Type: Sub-task  (was: Test)
Parent: PHOENIX-4445

> Modify tests to not use CurrentSCN
> --
>
> Key: PHOENIX-4208
> URL: https://issues.apache.org/jira/browse/PHOENIX-4208
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4208.patch
>
>
> Converting misc tests not to use CURRENT_SCN



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4445) Modify all ITs to not use CurrentSCN or CURRENT_SCN

2017-12-08 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4445:
--

 Summary: Modify all ITs to not use CurrentSCN or CURRENT_SCN
 Key: PHOENIX-4445
 URL: https://issues.apache.org/jira/browse/PHOENIX-4445
 Project: Phoenix
  Issue Type: Test
Reporter: Csaba Skrabak
Assignee: James Taylor
 Fix For: 4.12.0


Converting misc tests not to use CURRENT_SCN



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3636) CurrentSCN doesn't work with phoenix-spark plugin

2017-12-08 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283247#comment-16283247
 ] 

Csaba Skrabak commented on PHOENIX-3636:


[~jmahonin], isn't this a duplicate of PHOENIX-2429?

> CurrentSCN doesn't work with phoenix-spark plugin
> -
>
> Key: PHOENIX-3636
> URL: https://issues.apache.org/jira/browse/PHOENIX-3636
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Suhas Nalapure
>
> Specifying CurrentSCN property while creating a Spark DataFrame as shown 
> below doesn't give the expected results.
> E.g. below code doesn't return those records from Hbase that have row 
> timestamp > current system timestamp  
> Map params = new HashMap();
> params.put("table", tableName);
> params.put("zkUrl", zkUrl);
> Calendar cal = Calendar.getInstance();
> cal.set(2017, 3, 15);
> params.put("CurrentSCN", Long.toString(cal.getTime().getTime()));
>   df = 
> sqlContext.read().format(hbaseFormat).options(params).load();
> df.show()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-12-01 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274513#comment-16274513
 ] 

Csaba Skrabak commented on PHOENIX-4139:


While [^PHOENIX-4139_v2.patch] will fix the test, I'm still unhappy because the 
"accessor" of the second TRIM(NAM) is the same as that of the first one. The 
first occurrence gets the correct index, but the second one ends up with the 
index of the first expression. What is the accessor for if we don't care what 
exactly it accesses? The information it holds can be derived; in fact it IS 
derived from the Expression object in the wrapGroupByExpression method. The 
fact that the index it holds may be right or wrong may lead to other issues.
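
To make the failure mode concrete, here is a minimal, self-contained sketch 
(plain Java, not Phoenix code; the names are illustrative only) of how 
List.indexOf maps two equal expressions to the same position, which is why both 
TRIM(NAM) columns end up sharing one accessor index:

{code:java}
import java.util.Arrays;
import java.util.List;

public class IndexOfPitfall {
    public static void main(String[] args) {
        // Stand-ins for the compiled GROUP BY expressions:
        // 'harshit', TRIM(NAM), TRIM(NAM)
        List<String> groupByExpressions =
                Arrays.asList("'harshit'", "TRIM(NAM)", "TRIM(NAM)");

        // indexOf returns the FIRST matching element, so both TRIM(NAM)
        // occurrences resolve to index 1, although the second occurrence
        // should map to position 2 of the row key.
        int sharedIndex = groupByExpressions.indexOf("TRIM(NAM)");      // 1
        int wantedIndex = groupByExpressions.lastIndexOf("TRIM(NAM)");  // 2

        System.out.println("index used for both occurrences: " + sharedIndex);
        System.out.println("index the second occurrence needs: " + wantedIndex);
    }
}
{code}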

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-12-01 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4139:
---
Attachment: PHOENIX-4139_v2.patch

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-11-10 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243646#comment-16243646
 ] 

Csaba Skrabak edited comment on PHOENIX-4139 at 11/10/17 9:50 AM:
--

In the ExpressionCompiler.wrapGroupByExpression(Expression) method, there is an 
indexOf call:
int index = groupBy.getExpressions().indexOf(expression);

If there are two equal expressions in the groupBy, they should get different 
indexes in their accessors, but both receive the return value of the above 
indexOf (which by design gives the first matching element) a few lines below:

RowKeyValueAccessor accessor = new RowKeyValueAccessor(groupBy.getKeyExpressions(), index);
expression = new RowKeyColumnExpression(expression, accessor, groupBy.getKeyExpressions().get(index).getDataType());

This makes me think that the GroupBy fields should have more powerful data 
structures than Lists to store keyExpressions and expressions. But I'm not yet 
sure what the whole GroupBy class is really used for, and I don't want to 
tinker with it and maybe break the design until I understand it.

The list is, I think, what the GROUP BY runs over, but its elements are wrong.


was (Author: cskrabak):
In ExpressionCompiler.wrapGroupByExpression(Expression) method, there is an 
indexOf call:
int index = groupBy.getExpressions().indexOf(expression);

If there are two equal expressions in the groupBy, they should have different 
index in their accessors but both get the return from an indexOf (which by 
design gives the first found element just a few lines below,)

RowKeyValueAccessor accessor = new 
RowKeyValueAccessor(groupBy.getKeyExpressions(), index);
expression = new RowKeyColumnExpression(expression, accessor, 
groupBy.getKeyExpressions().get(index).getDataType());

This makes me think that the GroupBy fields should have more powerful data 
structures than Lists to store keyExpressions and expressions. But now I'm not 
sure what the whole GroupBy class is really used for. I don't want to tinker 
with it and maybe break the design until I understand.

So the list is what we're doing a GROUP BY over I think but its elements are 
wrong.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: 

[jira] [Comment Edited] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-11-08 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243646#comment-16243646
 ] 

Csaba Skrabak edited comment on PHOENIX-4139 at 11/8/17 10:05 AM:
--

In ExpressionCompiler.wrapGroupByExpression(Expression) method, there is an 
indexOf call:
int index = groupBy.getExpressions().indexOf(expression);

If there are two equal expressions in the groupBy, they should have different 
index in their accessors but both get the return from an indexOf (which by 
design gives the first found element just a few lines below,)

RowKeyValueAccessor accessor = new 
RowKeyValueAccessor(groupBy.getKeyExpressions(), index);
expression = new RowKeyColumnExpression(expression, accessor, 
groupBy.getKeyExpressions().get(index).getDataType());

This makes me think that the GroupBy fields should have more powerful data 
structures than Lists to store keyExpressions and expressions. But now I'm not 
sure what the whole GroupBy class is really used for. I don't want to tinker 
with it and maybe break the design until I understand.

So the list is what we're doing a GROUP BY over I think but its elements are 
wrong.


was (Author: cskrabak):
In ExpressionCompiler.wrapGroupByExpression(Expression) method, there is an 
indexOf call:
int index = groupBy.getExpressions().indexOf(expression);

If there are two equal expressions in the groupBy, they should have different 
index in their accessors but both get the return from an indexOf (which by 
design gives the first found element just a few lines below:)

RowKeyValueAccessor accessor = new 
RowKeyValueAccessor(groupBy.getKeyExpressions(), index);
expression = new RowKeyColumnExpression(expression, accessor, 
groupBy.getKeyExpressions().get(index).getDataType());

This makes me think that the GroupBy fields should have more powerful data 
structures than Lists to store keyExpressions and expressions. But now I'm not 
sure what the whole GroupBy class is really used for. I don't want to tinker 
with it and maybe break the design until I understand.

So the list is what we're doing a GROUP BY over I think but its elements are 
wrong.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> 

[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-11-08 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243646#comment-16243646
 ] 

Csaba Skrabak commented on PHOENIX-4139:


In ExpressionCompiler.wrapGroupByExpression(Expression) method, there is an 
indexOf call:
int index = groupBy.getExpressions().indexOf(expression);

If there are two equal expressions in the groupBy, they should have different 
index in their accessors but both get the return from an indexOf (which by 
design gives the first found element just a few lines below:)

RowKeyValueAccessor accessor = new 
RowKeyValueAccessor(groupBy.getKeyExpressions(), index);
expression = new RowKeyColumnExpression(expression, accessor, 
groupBy.getKeyExpressions().get(index).getDataType());

This makes me think that the GroupBy fields should have more powerful data 
structures than Lists to store keyExpressions and expressions. But now I'm not 
sure what the whole GroupBy class is really used for. I don't want to tinker 
with it and maybe break the design until I understand.

So the list is what we're doing a GROUP BY over I think but its elements are 
wrong.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-10-20 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212859#comment-16212859
 ] 

Csaba Skrabak commented on PHOENIX-4139:


If you modify the select like this:
select distinct 'harshit' as "test_column", trim(nam), trim(nam), lower(nam) 
from test_select;
...then accessor.hasSeparator will be true and the result is correct!

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.13.0
>
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-10-20 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212846#comment-16212846
 ] 

Csaba Skrabak edited comment on PHOENIX-4139 at 10/20/17 4:32 PM:
--

The NUL-separated value is generated in 
GroupedAggregateRegionObserver.scanUnordered, but that is intentional; it is 
called from the HBase scan for each row. 
RowKeyColumnExpression.evaluate is called by ExpressionProjector.getValue on 
each Tuple generated by scanUnordered. In the error case this evaluate returns 
(sets ptr to) a string containing the NUL: its accessor.getOffset and 
accessor.getLength do not return the correct values. accessor.hasSeparator 
should be true, but it is false!
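
As a rough illustration of why the missing separator handling corrupts the 
decoded value, assume the grouped row key concatenates the variable-length key 
values with a zero-byte separator (a simplified assumption for this sketch, not 
the exact Phoenix encoding):

{code:java}
import java.nio.charset.StandardCharsets;

public class SeparatorSketch {
    public static void main(String[] args) {
        // Simplified stand-in for a grouped row key that holds two VARCHAR
        // group-by values separated by a zero byte: "pulkit" NUL "pulkit"
        byte[] rowKey = "pulkit\u0000pulkit".getBytes(StandardCharsets.UTF_8);

        // Correct decoding: stop at the separator.
        int sep = indexOfZeroByte(rowKey);
        String correct = new String(rowKey, 0, sep, StandardCharsets.UTF_8);

        // Decoding as if hasSeparator were false: read to the end of the key,
        // so the NUL and the second copy leak into the value. In a terminal
        // the NUL is invisible, so it looks like "pulkitpulkit".
        String buggy = new String(rowKey, 0, rowKey.length, StandardCharsets.UTF_8);

        System.out.println("correct: [" + correct + "]");
        System.out.println("buggy:   [" + buggy + "]");
    }

    private static int indexOfZeroByte(byte[] bytes) {
        for (int i = 0; i < bytes.length; i++) {
            if (bytes[i] == 0) {
                return i;
            }
        }
        return bytes.length;
    }
}
{code}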


was (Author: cskrabak):
The NUL-separated value is generated in 
GroupedAggregateRegionObserver.scanUnordered but it is intentional. It's called 
from hbase scan for each row. 
RowKeyColumnExpression.evaluate is called by ExpressionProjector.getValue on 
each Tuple generated by scanUnordered. In the error case this evaluate returns 
(sets ptr to) a string containing the NUL. Its accessor.getOffset and 
accessor.getLength does not return correct value.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.13.0
>
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-10-20 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212846#comment-16212846
 ] 

Csaba Skrabak commented on PHOENIX-4139:


The NUL-separated value is generated in 
GroupedAggregateRegionObserver.scanUnordered but it is intentional. It's called 
from hbase scan for each row. 
RowKeyColumnExpression.evaluate is called by ExpressionProjector.getValue on 
each Tuple generated by scanUnordered. In the error case this evaluate returns 
(sets ptr to) a string containing the NUL. Its accessor.getOffset and 
accessor.getLength does not return correct value.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.13.0
>
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3708) Tests introduced in PHOENIX-3346 doesn't work well with failsafe plugin

2017-09-20 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16172864#comment-16172864
 ] 

Csaba Skrabak commented on PHOENIX-3708:


Hi [~sergey.soldatov], did the reenabled tests succeed eventually? They give me 
exceptions in local runs:
{noformat}
[2017-09-19 16:20:51,440 WARN  [main] 
org.apache.hadoop.util.NativeCodeLoader(62): Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
2017-09-19 16:20:52,659 DEBUG [main] 
org.apache.commons.configuration.ConfigurationUtils(447): 
ConfigurationUtils.locate(): base is null, name is 
hadoop-metrics2-namenode.properties
2017-09-19 16:20:52,662 DEBUG [main] 
org.apache.commons.configuration.ConfigurationUtils(447): 
ConfigurationUtils.locate(): base is null, name is hadoop-metrics2.properties
2017-09-19 16:20:52,662 DEBUG [main] 
org.apache.commons.configuration.ConfigurationUtils(580): Loading configuration 
from the context classpath (hadoop-metrics2.properties)
2017-09-19 16:20:53,314 WARN  [main] 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem(705): Encountered exception 
loading fsimage
java.io.FileNotFoundException: No valid image files found
at 
org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.getLatestImages(FSImageTransactionalStorageInspector.java:165)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:618)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:289)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1045)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1155)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1030)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:754)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:624)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims.getMiniDfs(Hadoop23Shims.java:512)
at org.apache.phoenix.hive.HiveTestUtil.<init>(HiveTestUtil.java:303)
at org.apache.phoenix.hive.HiveTestUtil.<init>(HiveTestUtil.java:261)
at 
org.apache.phoenix.hive.BaseHivePhoenixStoreIT.setup(BaseHivePhoenixStoreIT.java:82)
at org.apache.phoenix.hive.HiveTezIT.setUpBeforeClass(HiveTezIT.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
2017-09-19 16:20:53,336 ERROR [main] 
org.apache.hadoop.hdfs.MiniDFSCluster(828): IOE creating namenodes. Permissions 
dump:
path 'build/test/data/dfs/data': 

absolute:/Users/cskrabak/git/phoenix/phoenix-hive/build/test/data/dfs/data
permissions: drwx
path 'build/test/data/dfs': 

[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145598#comment-16145598
 ] 

Csaba Skrabak commented on PHOENIX-4139:


org.apache.phoenix.jdbc.PhoenixResultSet#getString(int) calls getValue on a 
projector object that looks the same as the other column's projector. A 
ColumnProjector object identifies itself by table name and column name only, 
so the column index information is lost. 
org.apache.phoenix.compile.ExpressionProjector#getValue returns the weird 
string containing all matching columns separated by zero bytes.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145578#comment-16145578
 ] 

Csaba Skrabak edited comment on PHOENIX-4139 at 8/29/17 4:20 PM:
-

[^PHOENIX-4139.patch] Patch contains the test only, which reproduced the issue.


was (Author: cskrabak):
Patch contains the test only, which reproduced the issue.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4139:
---
Attachment: PHOENIX-4139.patch

Patch contains the test only, which reproduced the issue.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4139:
--

 Summary: select distinct with identical aggregations return weird 
values 
 Key: PHOENIX-4139
 URL: https://issues.apache.org/jira/browse/PHOENIX-4139
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
 Environment: minicluster
Reporter: Csaba Skrabak
Assignee: Csaba Skrabak
Priority: Minor


From sme-hbase hipchat room:
Pulkit Bhardwaj·10:31

i'm seeing a weird issue with phoenix, appreciate some thoughts

Created a simple table in phoenix
{noformat}
0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
VARCHAR(20), id BIGINT
. . . . . . . . > constraint my_pk primary key (id));

0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
values('pulkit','badaun',1);

0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| pulkit  | badaun   | 1   |
+-+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
test_select;
+--+-+
| test_column  |   NAM   |
+--+-+
| harshit  | pulkit  |
+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam) from test_select;
+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| harshit  | pulkitpulkit  | pulkitpulkit  |
+--+++
{noformat}

When I apply a trim on the nam column and use it multiple times, the output has 
the cell data duplicated!
{noformat}
0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;
+--+---+---+---+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
TRIM(NAM)   |
+--+---+---+---+
| harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit 
 |
+--+---+---+---+
{noformat}

Wondering if someone has seen this before??

One thing to note is, if I remove the —— distinct 'harshit' as "test_column" —— 
 The issue is not seen
{noformat}
0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| pulkit | pulkit | pulkit |
++++
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4120) Can't get the records by select when the operator is "!=" and the specified column value is null

2017-08-24 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139754#comment-16139754
 ] 

Csaba Skrabak commented on PHOENIX-4120:


I think this is the expected behavior in SQL in general. According to 
three-valued logic, "name"!='alex' means NOT("name" = 'alex'), which evaluates 
to NOT(NULL = 'alex') in the second row of your example. NULL = 'alex' 
evaluates to Unknown, its negation is also Unknown, and a row whose WHERE 
clause evaluates to Unknown is filtered out. If you also want the rows where 
"name" is NULL, you have to ask for them explicitly, e.g. WHERE "name" != 
'alex' OR "name" IS NULL. Or am I wrong? See 
[https://en.wikipedia.org/wiki/Null_(SQL)#Effect_of_Unknown_in_WHERE_clauses]

> Can't get the records by select  when the operator is "!="  and the specified 
> column value is null
> --
>
> Key: PHOENIX-4120
> URL: https://issues.apache.org/jira/browse/PHOENIX-4120
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: phoenix-4.8.1-HBase-1.0-SNAPSHOT-server.jar
>Reporter: alexBai
>  Labels: null-values
>
> {code:java}
> 0: jdbc:phoenix:> create table alex ("id" BIGINT, "name" varchar(10), "age" 
> BIGINT, constraint pk primary key("id"));
> 2 rows affected (1.257 seconds)
> 0: jdbc:phoenix:> upsert into alex values(1, 'alex', 28);
> 1 row affected (0.071 seconds)
> 0: jdbc:phoenix:> upsert into alex values(2, null, 28);
> 1 row affected (0.012 seconds)
> 0: jdbc:phoenix:> select * from alex;
> +-+---+--+
> | id  | name  | age  |
> +-+---+--+
> | 1   | alex  | 28|
> | 2   |   | 28|
> +-+---+--+
> 2 rows selected (0.063 seconds)
> 0: jdbc:phoenix:> select * from alex where "name"!='alex';
> +-+---+--+
> | id  | name  | age  |
> +-+---+--+
> +-+---+--+
> No rows selected (0.053 seconds)
> 0: jdbc:phoenix:> select * from alex where "name" is null;
> +-+---+--+
> | id  | name  | age  |
> +-+---+--+
> | 2   |   | 28   |
> +-+---+--+
> {code}
> Does phoenix just design like that?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-23 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136845#comment-16136845
 ] 

Csaba Skrabak edited comment on PHOENIX-2048 at 8/23/17 11:45 AM:
--

Help, @hadoopqa lies:
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ToCharFunctionIT.java 
_is_ a test, and a modification of it is included.
* There is no Javadoc warning about the modified files in the linked txt.
* Jenkins reported "Test Result (no failures)".
* Yes, I can break the long lines.


was (Author: cskrabak):
Help, @hadoopqa lies:
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ToCharFunctionIT.java 
_is_ a test and modification of it is included.
* There is no Javadoc warning about the modified files in the linked txt.
* Yes, I can break the long lines.

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2048.patch, PHOENIX-2048_v2.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.
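
For reference, a minimal sketch of the difference between the two rounding 
modes with java.text.DecimalFormat (plain JDK behavior, not the Phoenix patch 
itself):

{code:java}
import java.math.RoundingMode;
import java.text.DecimalFormat;

public class RoundingModes {
    public static void main(String[] args) {
        DecimalFormat halfEven = new DecimalFormat("0"); // HALF_EVEN is the default
        DecimalFormat halfUp = new DecimalFormat("0");
        halfUp.setRoundingMode(RoundingMode.HALF_UP);

        // 2.5 and 3.5 are exactly representable doubles, so the difference
        // below comes purely from the rounding mode.
        System.out.println(halfEven.format(2.5) + " " + halfEven.format(3.5)); // 2 4
        System.out.println(halfUp.format(2.5) + " " + halfUp.format(3.5));     // 3 4
    }
}
{code}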



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-08-23 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2370:
---
Attachment: PHOENIX-2370_v2.patch

org.apache.phoenix.end2end.NotQueryIT ran OK on my side; is it intermittent? 
I broke the long lines in [^PHOENIX-2370_v2.patch].

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>Assignee: Csaba Skrabak
>  Labels: newbie, verify
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2370.patch, PHOENIX-2370_v2.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.
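
A small JDBC sketch for checking what the driver reports, assuming a local 
Phoenix connection and the SERGIO table from the description above (the 
connection URL is an assumption for illustration, and the Phoenix client jar 
must be on the classpath):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class DisplaySizeCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT I, V10, VHUGE, V, VB10, VBHUGE, VB FROM SERGIO")) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                // Expectation from this report: VB10 -> 10,
                // VBHUGE, VB, VHUGE and V -> 2147483647.
                System.out.println(md.getColumnName(i) + " -> "
                        + md.getColumnDisplaySize(i));
            }
        }
    }
}
{code}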



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-22 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2048:
---
Attachment: PHOENIX-2048_v2.patch

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2048.patch, PHOENIX-2048_v2.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-22 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136845#comment-16136845
 ] 

Csaba Skrabak commented on PHOENIX-2048:


Help, @hadoopqa lies:
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ToCharFunctionIT.java 
_is_ a test and modification of it is included.
* There is no Javadoc warning about the modified files in the linked txt.
* Yes, I can break the long lines.

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2048:
---
Attachment: (was: phoenix-2048.patch)

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2048:
---
Attachment: PHOENIX-2048.patch

Whitespace errors fixed in [^PHOENIX-2048.patch].

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: phoenix-2048.patch, PHOENIX-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-2370:
--

Assignee: Csaba Skrabak

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>Assignee: Csaba Skrabak
>  Labels: newbie, verify
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2370.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.
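For illustration, a minimal JDBC sketch (not taken from the issue) that prints the
display sizes reported for the table above; the connection URL is an assumption and
must be adjusted to the actual cluster:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class DisplaySizeCheck {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection string; point it at your own ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT I, V10, VHUGE, V, VB10, VBHUGE, VB FROM SERGIO LIMIT 1")) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                // Prints each column name with the display size reported by the driver.
                System.out.println(md.getColumnName(i) + " -> " + md.getColumnDisplaySize(i));
            }
        }
    }
}
{code}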



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-2048:
--

Assignee: Csaba Skrabak

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: phoenix-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-17 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130746#comment-16130746
 ] 

Csaba Skrabak commented on PHOENIX-4088:


[~elserj], thanks.

> SQLExceptionCode.java code beauty and typos
> ---
>
> Key: PHOENIX-4088
> URL: https://issues.apache.org/jira/browse/PHOENIX-4088
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Trivial
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4088.patch
>
>
> * Fix typos in log message strings
> * Fix typo in enum constant name introduced in PHOENIX-2862
> * Organize line breaks around the last enum constants like they are in the 
> top ones



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-16 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4088:
---
Attachment: PHOENIX-4088.patch

> SQLExceptionCode.java code beauty and typos
> ---
>
> Key: PHOENIX-4088
> URL: https://issues.apache.org/jira/browse/PHOENIX-4088
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Csaba Skrabak
>Priority: Trivial
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4088.patch
>
>
> * Fix typos in log message strings
> * Fix typo in enum constant name introduced in PHOENIX-2862
> * Organize line breaks around the last enum constants like they are in the 
> top ones



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-16 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4088:
--

 Summary: SQLExceptionCode.java code beauty and typos
 Key: PHOENIX-4088
 URL: https://issues.apache.org/jira/browse/PHOENIX-4088
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
Reporter: Csaba Skrabak
Priority: Trivial


* Fix typos in log message strings
* Fix typo in enum constant name introduced in PHOENIX-2862
* Organize line breaks around the last enum constants like they are in the top 
ones




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-07-17 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089808#comment-16089808
 ] 

Csaba Skrabak commented on PHOENIX-2048:


We found nothing about rounding in the Oracle online documentation. We tried it in 
Live SQL, and it rounded with HALF_UP. We are still not sure whether there is a config 
value in Oracle that changes the rounding mode, so the question remains the same: do 
you think we need to do this in a configurable way?

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: phoenix-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-07-14 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087325#comment-16087325
 ] 

Csaba Skrabak commented on PHOENIX-2370:


[~jamestaylor], what do you think about the 40 vs 2147483647?

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>  Labels: newbie, verify
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2370.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-07-14 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087322#comment-16087322
 ] 

Csaba Skrabak commented on PHOENIX-2048:


[~elserj] [~asinghal] [~enis], do you think we need to do this in a 
configurable way?

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: phoenix-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-07-14 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2370:
---
Attachment: PHOENIX-2370.patch

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>  Labels: newbie, verify
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2370.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-07-14 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087305#comment-16087305
 ] 

Csaba Skrabak commented on PHOENIX-2370:


I have added a test on the current master branch and concluded that 
getColumnDisplaySize now returns 10, 2147483647 and 40 for the vb10, vbhuge and 
vb columns, respectively, as defined in the description. Assuming that returning 
40 for columns without an explicit size is intentional and expected, there is 
nothing left to do here.

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>  Labels: newbie, verify
> Fix For: 4.12.0
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-07-14 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087213#comment-16087213
 ] 

Csaba Skrabak commented on PHOENIX-2370:


The 40 returned as the varchar display size is not a random number; it was introduced 
as a default in PHOENIX-1394. Do we want to change that decision now?

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>  Labels: newbie, verify
> Fix For: 4.12.0
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3893) Using the TO_TIMESTAMP function in a WHERE statement causes an error.

2017-06-19 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053723#comment-16053723
 ] 

Csaba Skrabak commented on PHOENIX-3893:


I have tried the failing query with the WHERE clause on Phoenix version 4.7 and did 
not get any NullPointerException.

With the 4.10 code, the stack trace was as follows:

java.lang.NullPointerException
at 
org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.resolveTable(JoinCompiler.java:181)
at 
org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:218)
at 
org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:175)
at 
org.apache.phoenix.parse.DerivedTableNode.accept(DerivedTableNode.java:49)
at 
org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:195)
at 
org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:175)
at org.apache.phoenix.parse.JoinTableNode.accept(JoinTableNode.java:81)
at 
org.apache.phoenix.compile.JoinCompiler.compile(JoinCompiler.java:135)
at 
org.apache.phoenix.compile.JoinCompiler.optimize(JoinCompiler.java:1164)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:194)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:157)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:420)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:394)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:280)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:270)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:269)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1479)

> Using the TO_TIMESTAMP function in a WHERE statement causes an error.
> -
>
> Key: PHOENIX-3893
> URL: https://issues.apache.org/jira/browse/PHOENIX-3893
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: seoungho park
>
> {code}
> SELECT 
>   "type", COUNT(DISTINCT "offset")
> FROM "log"
> LEFT OUTER JOIN (SELECT TO_TIMESTAMP('2017-05-29 02:53:00') AS "start") AS 
> "s" ON 1=1
> LEFT OUTER JOIN (SELECT TO_TIMESTAMP('2017-05-29 02:55:00') AS "end") AS "e" 
> ON 1=1
> WHERE "date" >= "start" AND "date" < "end"
> GROUP BY "type"
> {code}
> -> It return the correct result
> {code}
> SELECT 
>   "type", COUNT(DISTINCT "offset")
> FROM "log"
> WHERE "date" >= TO_TIMESTAMP('2017-05-29 02:53:00') AND "date" < 
> TO_TIMESTAMP('2017-05-29 02:55:00')
> GROUP BY "type"
> {code}
> -> It return the java.lang.NullPointerException
> The second query should return the same result as above, and why is this 
> issue happening?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3710) Cannot use lowername data table name with indextool

2017-04-24 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981088#comment-15981088
 ] 

Csaba Skrabak commented on PHOENIX-3710:


The following workaround should work:
{noformat}
--data-table \"\"my_lowcase_table\"\"
{noformat}
The backslashes prevent bash from interpreting the quotation marks. The outermost 
pair of double quotes is swallowed by commons-cli, but one pair of double quotes 
remains, so the Phoenix code then interprets the name as a case-sensitive table name.
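
To illustrate the quote handling, here is a minimal commons-cli sketch (assuming the
commons-cli 1.2 stripping behaviour described above; the class name and option wiring
are made up and are not the IndexTool's actual parsing code):
{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class QuoteStrippingDemo {
    public static void main(String[] args) throws ParseException {
        Options options = new Options();
        options.addOption("dt", "data-table", true, "data table name");

        // Single pair of quotes: commons-cli 1.2 strips them, so Phoenix sees an
        // unquoted name and upper-cases it (MY_LOWCASE_TABLE in the report above).
        CommandLine single = new GnuParser().parse(options,
                new String[] { "--data-table", "\"my_lowcase_table\"" });
        System.out.println(single.getOptionValue("data-table"));

        // Doubled quotes (the workaround above): one pair is swallowed, one pair
        // survives, so Phoenix treats the name as case sensitive.
        CommandLine doubled = new GnuParser().parse(options,
                new String[] { "--data-table", "\"\"my_lowcase_table\"\"" });
        System.out.println(doubled.getOptionValue("data-table"));
    }
}
{code}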

> Cannot use lowername data table name with indextool
> ---
>
> Key: PHOENIX-3710
> URL: https://issues.apache.org/jira/browse/PHOENIX-3710
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Matthew Shipton
>Priority: Minor
>
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> \"my_lowcase_table\" --index-table INDEX_TABLE --output-path /tmp/some_path
> {code}
> results in:
> {code}
> java.lang.IllegalArgumentException:  INDEX_TABLE is not an index table for 
> MY_LOWCASE_TABLE
> {code}
> This is despite the data table being explictly lowercased.
> Appears to be referring to the lowcase table, not the uppercase version.
> Workaround exists by changing the tablename, but this is not always feasible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3710) Cannot use lowername data table name with indextool

2017-04-21 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979096#comment-15979096
 ] 

Csaba Skrabak commented on PHOENIX-3710:


It is because commons-cli cannot pass quoted arguments through unchanged. Phoenix 
depends on commons-cli version 1.2, where the issue is present and not addressed. 
Version 1.3 already addresses the quoting issue with a fix for CLI-185, but that is 
still not good enough for this case. I have opened bug CLI-275.

> Cannot use lowername data table name with indextool
> ---
>
> Key: PHOENIX-3710
> URL: https://issues.apache.org/jira/browse/PHOENIX-3710
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Matthew Shipton
>Priority: Minor
>
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> \"my_lowcase_table\" --index-table INDEX_TABLE --output-path /tmp/some_path
> {code}
> results in:
> {code}
> java.lang.IllegalArgumentException:  INDEX_TABLE is not an index table for 
> MY_LOWCASE_TABLE
> {code}
> This is despite the data table being explictly lowercased.
> Appears to be referring to the lowcase table, not the uppercase version.
> Workaround exists by changing the tablename, but this is not always feasible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3736) ArithmeticQueryIT.testDecimalUpsertSelect fails

2017-03-24 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940492#comment-15940492
 ] 

Csaba Skrabak edited comment on PHOENIX-3736 at 3/24/17 5:02 PM:
-

With HBase 1.2.3, the test passes as well.

The actual SQL error code 6000 means timeout, and the query really does take a long 
time with HBase 1.2.4.

The expected SQLException with code 206 is thrown under 1.2.4 as well, but since the 
surrounding DoNotRetryIOException is hidden inside an UnknownScannerException's 
cause, the client keeps retrying forever and the timeout eventually happens.

Related:
* HBASE-16604 (which is present in rel/1.2.4 but not yet in 1.2.3)
* HBASE-17187 (which is neither present in 1.2.4 nor 1.2.3)
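
For illustration, a tiny sketch (a hypothetical helper, not Phoenix code) of walking the
cause chain to find the buried DoNotRetryIOException, which is what a client would have
to do in order to give up immediately in this situation:
{code}
import org.apache.hadoop.hbase.DoNotRetryIOException;

public final class CauseChainCheck {
    // Returns true if a DoNotRetryIOException is hidden anywhere in the cause chain,
    // e.g. below the UnknownScannerException shown in the log excerpt that follows.
    static boolean hasDoNotRetryCause(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof DoNotRetryIOException) {
                return true;
            }
        }
        return false;
    }
}
{code}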

{noformat}
2017-03-24 16:50:55,118 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=0,queue=0,port=60631: callId: 8270 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:55)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:256)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:282)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2563)
... 6 more
Caused by: org.apache.phoenix.exception.DataExceedsCapacityException: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:651)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:237)
... 8 more
Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
capacity for the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:476)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.exception.DataExceedsCapacityException.&lt;init&gt;(DataExceedsCapacityException.java:37)
... 10 more
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(471): Starting 
ungrouped coprocessor scan 
{"loadColumnFamiliesOnDemand":true,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":2,"maxResultSize":2097152,"families":{"0":["\\x00\\x00\\x00\\x00","\\x80\\x0B"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,1490370475524]}
 {ENCODED => 2b5fe29b201777af35213d8329049ca2, NAME => 
'T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.', STARTKEY => '', 
ENDKEY => ''}
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=3,queue=0,port=60631: callId: 8272 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 

[jira] [Comment Edited] (PHOENIX-3736) ArithmeticQueryIT.testDecimalUpsertSelect fails

2017-03-24 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940492#comment-15940492
 ] 

Csaba Skrabak edited comment on PHOENIX-3736 at 3/24/17 4:25 PM:
-

With hbase 1.2.3, test also passes.

Actual SQL error code 6000 is timeout. Query really lasts long with HBase 1.2.4.

The expected SQLException with the code 206 is thrown in 1.2.4, too. But since 
the surrounding DoNotRetryIOException is hidden inside an 
UnknownScannerException's cause, client keeps retrying forever. Finally the 
timeout happens.

Related: HBASE-16604 HBASE-17187

{noformat}
2017-03-24 16:50:55,118 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=0,queue=0,port=60631: callId: 8270 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:55)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:256)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:282)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2563)
... 6 more
Caused by: org.apache.phoenix.exception.DataExceedsCapacityException: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:651)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:237)
... 8 more
Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
capacity for the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:476)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.exception.DataExceedsCapacityException.&lt;init&gt;(DataExceedsCapacityException.java:37)
... 10 more
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(471): Starting 
ungrouped coprocessor scan 
{"loadColumnFamiliesOnDemand":true,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":2,"maxResultSize":2097152,"families":{"0":["\\x00\\x00\\x00\\x00","\\x80\\x0B"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,1490370475524]}
 {ENCODED => 2b5fe29b201777af35213d8329049ca2, NAME => 
'T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.', STARTKEY => '', 
ENDKEY => ''}
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=3,queue=0,port=60631: callId: 8272 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at 

[jira] [Comment Edited] (PHOENIX-3736) ArithmeticQueryIT.testDecimalUpsertSelect fails

2017-03-24 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940492#comment-15940492
 ] 

Csaba Skrabak edited comment on PHOENIX-3736 at 3/24/17 4:19 PM:
-

With hbase 1.2.3, test also passes.

Actual SQL error code 6000 is timeout. Query really lasts long with HBase 1.2.4.

The expected SQLException with the code 206 is thrown in 1.2.4, too. But since 
the surrounding DoNotRetryIOException is hidden inside an 
UnknownScannerException's cause, client keeps retrying forever. Finally the 
timeout happens.

Related: HBASE-16604 HBASE-17187

```
2017-03-24 16:50:55,118 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=0,queue=0,port=60631: callId: 8270 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:55)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:256)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:282)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2563)
... 6 more
Caused by: org.apache.phoenix.exception.DataExceedsCapacityException: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:651)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:237)
... 8 more
Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
capacity for the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:476)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.exception.DataExceedsCapacityException.&lt;init&gt;(DataExceedsCapacityException.java:37)
... 10 more
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(471): Starting 
ungrouped coprocessor scan 
{"loadColumnFamiliesOnDemand":true,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":2,"maxResultSize":2097152,"families":{"0":["\\x00\\x00\\x00\\x00","\\x80\\x0B"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,1490370475524]}
 {ENCODED => 2b5fe29b201777af35213d8329049ca2, NAME => 
'T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.', STARTKEY => '', 
ENDKEY => ''}
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=3,queue=0,port=60631: callId: 8272 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at 

[jira] [Comment Edited] (PHOENIX-3736) ArithmeticQueryIT.testDecimalUpsertSelect fails

2017-03-24 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940492#comment-15940492
 ] 

Csaba Skrabak edited comment on PHOENIX-3736 at 3/24/17 4:05 PM:
-

With hbase 1.2.3, test also passes.

Actual SQL error code 6000 is timeout. Query really lasts long with HBase 1.2.4.

The expected SQLException with the code 206 is thrown in 1.2.4, too. But since 
the surrounding DoNotRetryIOException is hidden inside an 
UnknownScannerException's cause, client keeps retrying forever. Finally the 
timeout happens.

Related: HBASE-16604

```
2017-03-24 16:50:55,118 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=0,queue=0,port=60631: callId: 8270 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:55)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:256)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:282)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2563)
... 6 more
Caused by: org.apache.phoenix.exception.DataExceedsCapacityException: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:651)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:237)
... 8 more
Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
capacity for the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:476)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.exception.DataExceedsCapacityException.&lt;init&gt;(DataExceedsCapacityException.java:37)
... 10 more
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(471): Starting 
ungrouped coprocessor scan 
{"loadColumnFamiliesOnDemand":true,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":2,"maxResultSize":2097152,"families":{"0":["\\x00\\x00\\x00\\x00","\\x80\\x0B"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,1490370475524]}
 {ENCODED => 2b5fe29b201777af35213d8329049ca2, NAME => 
'T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.', STARTKEY => '', 
ENDKEY => ''}
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=3,queue=0,port=60631: callId: 8272 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

[jira] [Comment Edited] (PHOENIX-3736) ArithmeticQueryIT.testDecimalUpsertSelect fails

2017-03-24 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940492#comment-15940492
 ] 

Csaba Skrabak edited comment on PHOENIX-3736 at 3/24/17 4:01 PM:
-

With hbase 1.2.3, test also passes.

Actual SQL error code 6000 is timeout. Query really lasts long with HBase 1.2.4.

The expected SQLException with the code 206 is thrown in 1.2.4, too. But since 
the surrounding DoNotRetryIOException is hidden inside an 
UnknownScannerException's cause, client keeps retrying forever. Finally the 
timeout happens.

```
2017-03-24 16:50:55,118 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=0,queue=0,port=60631: callId: 8270 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:55)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:256)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:282)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2563)
... 6 more
Caused by: org.apache.phoenix.exception.DataExceedsCapacityException: 
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:651)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:237)
... 8 more
Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
capacity for the data type. COL4 DECIMAL(4,4) value=100.12
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:476)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.exception.DataExceedsCapacityException.&lt;init&gt;(DataExceedsCapacityException.java:37)
... 10 more
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(471): Starting 
ungrouped coprocessor scan 
{"loadColumnFamiliesOnDemand":true,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":2,"maxResultSize":2097152,"families":{"0":["\\x00\\x00\\x00\\x00","\\x80\\x0B"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,1490370475524]}
 {ENCODED => 2b5fe29b201777af35213d8329049ca2, NAME => 
'T01,,1490370472874.2b5fe29b201777af35213d8329049ca2.', STARTKEY => '', 
ENDKEY => ''}
2017-03-24 16:50:55,119 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=60631] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=3,queue=0,port=60631: callId: 8272 service: 
ClientService methodName: Scan size: 34 connection: 10.200.51.63:60706
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2661)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at 

[jira] [Commented] (PHOENIX-3631) Null pointer error when case expression returns null

2017-03-10 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905206#comment-15905206
 ] 

Csaba Skrabak commented on PHOENIX-3631:


I just tried to reproduce this in a sandbox with HDP-2.5.0.0. No luck with your 
example: it returned the expected empty rows (as many as the tversion table 
contains, with a single column called null). Do you happen to have the full 
exception text with the stack trace?


> Null pointer error when case expression returns null
> 
>
> Key: PHOENIX-3631
> URL: https://issues.apache.org/jira/browse/PHOENIX-3631
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Hortonworks 2.5.3
>Reporter: N Campbell
>Priority: Minor
>
> phoenix-4.7.0.2.5.3.0-37-client
> select case when 1 = 1 then null end  from TVERSION
> Remote driver error: NullPointerException: (null exception message)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3071) Surface more information on failed locations

2017-03-10 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904870#comment-15904870
 ] 

Csaba Skrabak commented on PHOENIX-3071:


Which method exactly is throwing RetriesExhaustedWithDetailsException, and in which 
case? I'm looking for easy things to fix, but this one is not clear enough to me.

> Surface more information on failed locations
> 
>
> Key: PHOENIX-3071
> URL: https://issues.apache.org/jira/browse/PHOENIX-3071
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Priority: Minor
>
> Phoenix is throwing a RetriesExhaustedWithDetailsException out of 
> MutationState. RetriesExhaustedWithDetailsException carries a lot of 
> interesting information accessible via type specific methods. Instead of just 
> rethrowing RetriesExhaustedWithDetailsException, MutationState should 
> construct a CommitException that enumerates rows and last known server 
> location using RetriesExhaustedWithDetailsException#getRow(int) and 
> RetriesExhaustedWithDetailsException#getHostnamePort(int). 
> Consider alerts of the form (prettily formatted):
> {noformat}
> CommitException: RetriesExhaustedWithDetailsException: 
> Failed 88 actions: IOException: 88 times, at MutationState [
>   Table: TEST_TABLE [
> Row: testRowc1900d75072f9fc8217735631dda44a3
> Location: host1234.domain.company.com,
> Row:  ... 
>]
> ]
> {noformat}
> The additional information in the exception message can significantly aid 
> debugging and remediation efforts because the location(s) causing issues for 
> the client will be known up front.
> Probably want to stop enumerating after a configurable number of locations 
> and skip an entry if the preceding entry also has the same row and location.
> Should do the same wherever rethrowing or bubbling up 
> RetriesExhaustedWithDetailsException.
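
A minimal sketch of the kind of message construction the description asks for, using
only the accessors named above (getRow(int), getHostnamePort(int)) plus
getNumExceptions(); the class and method names are hypothetical, not the proposed
MutationState change itself:
{code}
import java.util.Objects;

import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.util.Bytes;

public final class CommitExceptionMessageBuilder {
    // Stop enumerating after a configurable number of locations, as suggested above.
    private static final int MAX_ENTRIES = 20;

    static String describe(String table, RetriesExhaustedWithDetailsException e) {
        StringBuilder sb = new StringBuilder("Failed ").append(e.getNumExceptions())
                .append(" actions, at MutationState [ Table: ").append(table).append(" [");
        String prevRow = null;
        String prevHost = null;
        for (int i = 0; i < e.getNumExceptions() && i < MAX_ENTRIES; i++) {
            String row = Bytes.toStringBinary(e.getRow(i).getRow());
            String host = e.getHostnamePort(i);
            // Skip an entry if the preceding entry has the same row and location.
            if (Objects.equals(row, prevRow) && Objects.equals(host, prevHost)) {
                continue;
            }
            sb.append(" Row: ").append(row).append(", Location: ").append(host).append(";");
            prevRow = row;
            prevHost = host;
        }
        return sb.append(" ] ]").toString();
    }
}
{code}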



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-02-01 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848114#comment-15848114
 ] 

Csaba Skrabak commented on PHOENIX-2048:


[^phoenix-2048.patch] is the solution that always uses HALF_UP. It is an 
_incompatible_ change.
I am not sure about the configurability ideas. Is there any example in Phoenix of a 
function that behaves in different ways depending on configuration?

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Priority: Minor
> Fix For: 4.10.0
>
> Attachments: phoenix-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-01-31 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2048:
---
Attachment: phoenix-2048.patch

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Priority: Minor
> Fix For: 4.10.0
>
> Attachments: phoenix-2048.patch
>
>
> to_char() function uses the default rounding mode in java DecimalFormat, 
> which is a strange one called HALF_EVEN, which rounds a '5' in the last 
> position either up or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior; e.g. globally or as a client 
> config, or an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)