[jira] [Commented] (PHOENIX-4622) Phoenix 4.13 order by issue

2018-02-27 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379684#comment-16379684
 ] 

chenglei commented on PHOENIX-4622:
---

[~mini666] , I see, thank you. I'll try to reproduce it and provide a patch. In the 
meantime, I think you can add the KEEP_DELETED_CELLS property to your column 
family to bypass this bug temporarily.
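
For reference, a minimal sketch of that workaround, assuming the table from this 
report (TEST2), a local ZooKeeper quorum, and that Phoenix passes 
KEEP_DELETED_CELLS through to the underlying HBase column family:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// A sketch only: enable KEEP_DELETED_CELLS via Phoenix DDL so deleted cells are
// retained. Table name and ZooKeeper quorum are examples, not from the report.
public class KeepDeletedCellsWorkaround {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("ALTER TABLE TEST2 SET KEEP_DELETED_CELLS = true");
        }
    }
}
{code}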

> Phoenix 4.13 order by issue
> ---
>
> Key: PHOENIX-4622
> URL: https://issues.apache.org/jira/browse/PHOENIX-4622
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
> Environment: phoenix 4.13
> hbase 1.2.5
>Reporter: tom thmas
>Priority: Critical
>
> *1.create table and insert data.*
> create table test2
> (
>  id varchar(200) primary key,
>  cardid varchar(200),
>  ctime date 
> )
> upsert into test2 (id,cardid,ctime) values('a1','123',to_date('2017-12-01 
> 17:42:45'))
> *2.query sql like this:*
> select id,ctime from test2  where cardid='123' order by ctime
> error log:
> {color:#FF}org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST2,,1519221167250.813e4ce0510965a7a7898413da2a17ad.: null{color}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4631) PhoenixInputFormat should close connection after generateSplits()

2018-02-27 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379681#comment-16379681
 ] 

Hui Zheng commented on PHOENIX-4631:


It may be related to 
[PHOENIX-4319|https://issues.apache.org/jira/browse/PHOENIX-4319] .

> PhoenixInputFormat should close connection after generateSplits()
> -
>
> Key: PHOENIX-4631
> URL: https://issues.apache.org/jira/browse/PHOENIX-4631
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Hui Zheng
>Priority: Major
>
> In our Spark Streaming use case, which loads a Phoenix table as a Dataset, it 
> leaks ZooKeeper connections (3 connections per batch) and leads to an OOM 
> exception in the driver process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4631) PhoenixInputFormat should close connection after generateSplits()

2018-02-27 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379674#comment-16379674
 ] 

Hui Zheng commented on PHOENIX-4631:


phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
I think the connection should be closed after the splits are generated here.
{code:java}
private List<InputSplit> generateSplits(final QueryPlan qplan, final List<KeyRange> splits,
        Configuration config) throws IOException {
    Preconditions.checkNotNull(qplan);
    Preconditions.checkNotNull(splits);
    // Get the RegionSizeCalculator
    try (org.apache.hadoop.hbase.client.Connection connection =
            HBaseFactoryProvider.getHConnectionFactory().createConnection(config)) {
        RegionLocator regionLocator = connection.getRegionLocator(TableName.valueOf(qplan
                .getTableRef().getTable().getPhysicalName().toString()));
        RegionSizeCalculator sizeCalculator = new RegionSizeCalculator(regionLocator,
                connection.getAdmin());

{code}
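
For illustration only, a sketch of the shape of the proposed fix (not the actual 
patch): scope the Phoenix connection used to build the query plan with 
try-with-resources so it is released once the splits are generated. The 
buildQueryPlan() and toInputSplits() helpers below are hypothetical placeholders, 
not Phoenix APIs:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Collections;
import java.util.List;

// Illustrative sketch only: close the Phoenix connection as soon as the splits
// have been generated so its ZooKeeper resources are released per batch.
public final class SplitGenerationSketch {
    static List<?> generateSplitsSketch(String jdbcUrl, String selectStatement) throws Exception {
        try (Connection phoenixConnection = DriverManager.getConnection(jdbcUrl)) {
            Object queryPlan = buildQueryPlan(phoenixConnection, selectStatement); // hypothetical
            return toInputSplits(queryPlan);                                       // hypothetical
        } // connection (and its ZooKeeper resources) released here
    }

    private static Object buildQueryPlan(Connection conn, String sql) { return sql; }

    private static List<?> toInputSplits(Object plan) { return Collections.emptyList(); }
}
{code}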

> PhoenixInputFormat should close connection after generateSplits()
> -
>
> Key: PHOENIX-4631
> URL: https://issues.apache.org/jira/browse/PHOENIX-4631
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Hui Zheng
>Priority: Major
>
> In our Spark Streaming use case, which loads a Phoenix table as a Dataset, it 
> leaks ZooKeeper connections (3 connections per batch) and leads to an OOM 
> exception in the driver process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4631) PhoenixInputFormat should close connection after generateSplits()

2018-02-27 Thread Hui Zheng (JIRA)
Hui Zheng created PHOENIX-4631:
--

 Summary: PhoenixInputFormat should close connection after 
generateSplits()
 Key: PHOENIX-4631
 URL: https://issues.apache.org/jira/browse/PHOENIX-4631
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
Reporter: Hui Zheng


In our Spark Streaming use case, which loads a Phoenix table as a Dataset, it 
leaks ZooKeeper connections (3 connections per batch) and leads to an OOM 
exception in the driver process.
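
For context, a sketch of the kind of Spark driver code that exercises this path, 
assuming the phoenix-spark connector with its "table" and "zkUrl" options; the 
table name and ZooKeeper URL below are examples:
{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Sketch of loading a Phoenix table as a Dataset from a Spark driver; each load
// goes through PhoenixInputFormat.generateSplits(), where the connection in
// question is opened.
public class PhoenixDatasetLoadSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("phoenix-load-sketch").getOrCreate();
        Dataset<Row> rows = spark.read()
                .format("org.apache.phoenix.spark")
                .option("table", "APP_LOG")
                .option("zkUrl", "zk-host:2181")
                .load();
        System.out.println("rows loaded: " + rows.count());
        spark.stop();
    }
}
{code}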



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4630) Reverse scan does not work

2018-02-27 Thread JeongMin Ju (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JeongMin Ju updated PHOENIX-4630:
-
Description: 
In version 4.13, if the query plan is a reverse scan, an error or incorrect 
data is returned.

This is a problem that occurs when the query plan is a reverse range scan in 
the case of an "order by desc" query for the row key.

The table schema is as follows.
{code:java}
create table if not exists app_log (
  app_tag varchar not null,
  timestamp date not null,
  uuid varchar not null,
  log varchar
  constraint pk primary key(app_tag, timestamp row_timestamp, uuid)
)
data_block_encoding='FAST_DIFF',
compression='LZ4',
update_cache_frequency=60,
column_encoded_bytes = 1,
ttl=2592000,
salt_buckets=50
;
{code}
The current data is as follows.
{code:java}
upsert into app_log values ('test', now(), 'test', 'test');
...

select * from app_log order by timestamp;
+---+--+---+---+
|  APP_TAG  |TIMESTAMP | UUID  |  LOG  |
+---+--+---+---+
| test  | 2018-02-28 01:02:16.985  | test  | test  |
| test  | 2018-02-28 01:02:19.472  | test  | test  |
| test  | 2018-02-28 01:02:21.568  | test  | test  |
| test  | 2018-02-28 01:02:23.332  | test  | test  |
| test  | 2018-02-28 01:02:25.200  | test  | test  |
| test  | 2018-02-28 01:02:27.055  | test  | test  |
| test  | 2018-02-28 01:02:29.008  | test  | test  |
| test  | 2018-02-28 01:02:30.911  | test  | test  |
| test  | 2018-02-28 01:02:32.775  | test  | test  |
| test  | 2018-02-28 01:02:34.663  | test  | test  |
+---+--+---+---+
{code}
You can see errors if you run a simple query after adding some data.

Depending on the data, an error may occur and incorrect data may be output.
{code:java}
select * from app_log where app_tag = 'test' and timestamp between 
to_date('2018-02-28 01:02:16') and to_date('2018-02-28 01:02:34') order by 
timestamp desc;

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
APP_LOG,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1519778082466.6dd30a7d7a26a38c5c06d63008bbff3d.:
 seekToPreviousRow must not be called on a non-reversed scanner
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2561)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
Caused by: org.apache.commons.lang.NotImplementedException: seekToPreviousRow 
must not be called on a non-reversed scanner
at 
org.apache.hadoop.hbase.regionserver.NonReversedNonLazyKeyValueScanner.seekToPreviousRow(NonReversedNonLazyKeyValueScanner.java:44)
at 
org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.seekToPreviousRow(ReversedKeyValueHeap.java:89)
at 
org.apache.hadoop.hbase.regionserver.ReversedRegionScannerImpl.nextRow(ReversedRegionScannerImpl.java:71)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5938)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5673)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5659)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 9 more (state=08000,code=101)
{code}
 The query plan is as follows.
{code:java}
explain select * from app_log where app_tag = 'test' and timestamp between 
to_date('2018-02-28 01:02:20') and to_date('2018-02-28 01:02:30') order by 
timestamp desc;
+------------------------------------------------------------+-----------------+----------------+--------------+
|                            PLAN                             | EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS  |

[jira] [Commented] (PHOENIX-4622) Phoenix 4.13 order by issue

2018-02-27 Thread JeongMin Ju (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379615#comment-16379615
 ] 

JeongMin Ju commented on PHOENIX-4622:
--

OK, I will.

See PHOENIX-4630.

> Phoenix 4.13 order by issue
> ---
>
> Key: PHOENIX-4622
> URL: https://issues.apache.org/jira/browse/PHOENIX-4622
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
> Environment: phoenix 4.13
> hbase 1.2.5
>Reporter: tom thmas
>Priority: Critical
>
> *1.create table and insert data.*
> create table test2
> (
>  id varchar(200) primary key,
>  cardid varchar(200),
>  ctime date 
> )
> upsert into test2 (id,cardid,ctime) values('a1','123',to_date('2017-12-01 
> 17:42:45'))
> *2.query sql like this:*
> select id,ctime from test2  where cardid='123' order by ctime
> error log:
> {color:#FF}org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST2,,1519221167250.813e4ce0510965a7a7898413da2a17ad.: null{color}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4630) Reverse scan does not work

2018-02-27 Thread JeongMin Ju (JIRA)
JeongMin Ju created PHOENIX-4630:


 Summary: Reverse scan does not work
 Key: PHOENIX-4630
 URL: https://issues.apache.org/jira/browse/PHOENIX-4630
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.0, 4.13.1, 4.13.2, 4.13.2-cdh5.11.2
Reporter: JeongMin Ju


In version 4.13, if the query plan is a reverse scan, an error or incorrect 
data is returned.

This is a problem that occurs when the query plan is a reverse range scan in 
the case of an "order by desc" query for the row key.

The table schema is as follows.
{code:java}
create table if not exists app_log (
  app_tag varchar not null,
  timestamp date not null,
  uuid varchar not null,
  log varchar
  constraint pk primary key(app_tag, timestamp row_timestamp, uuid)
)
data_block_encoding='FAST_DIFF',
compression='LZ4',
update_cache_frequency=60,
column_encoded_bytes = 1,
ttl=2592000,
salt_buckets=50
;
{code}
The current data is as follows.
{code:java}
upsert into app_log values ('test', now(), 'test', 'test');
...

select * from app_log order by timestamp;
+---+--+---+---+
|  APP_TAG  |TIMESTAMP | UUID  |  LOG  |
+---+--+---+---+
| test  | 2018-02-28 01:02:16.985  | test  | test  |
| test  | 2018-02-28 01:02:19.472  | test  | test  |
| test  | 2018-02-28 01:02:21.568  | test  | test  |
| test  | 2018-02-28 01:02:23.332  | test  | test  |
| test  | 2018-02-28 01:02:25.200  | test  | test  |
| test  | 2018-02-28 01:02:27.055  | test  | test  |
| test  | 2018-02-28 01:02:29.008  | test  | test  |
| test  | 2018-02-28 01:02:30.911  | test  | test  |
| test  | 2018-02-28 01:02:32.775  | test  | test  |
| test  | 2018-02-28 01:02:34.663  | test  | test  |
+---+--+---+---+
{code}
You can see errors if you run a simple query after adding some data.

Depending on the data, an error may occur and incorrect data may be output.
{code:java}
select * from app_log where app_tag = 'test' and timestamp between 
to_date('2018-02-28 01:02:16') and to_date('2018-02-28 01:02:34') order by 
timestamp desc;

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
KEMI:REAL_RECENT_LOG,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1519778082466.6dd30a7d7a26a38c5c06d63008bbff3d.:
 seekToPreviousRow must not be called on a non-reversed scanner
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2561)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
Caused by: org.apache.commons.lang.NotImplementedException: seekToPreviousRow 
must not be called on a non-reversed scanner
at 
org.apache.hadoop.hbase.regionserver.NonReversedNonLazyKeyValueScanner.seekToPreviousRow(NonReversedNonLazyKeyValueScanner.java:44)
at 
org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.seekToPreviousRow(ReversedKeyValueHeap.java:89)
at 
org.apache.hadoop.hbase.regionserver.ReversedRegionScannerImpl.nextRow(ReversedRegionScannerImpl.java:71)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5938)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5673)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5659)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 9 more (state=08000,code=101)
{code}
 The query plan is as follows.
{code:java}
explain select * from app_log where app_tag = 'test' and timestamp between 
to_date('2018-02-28 01:02:20') and to_date('2018-02-28 01:02:30') order by 
timestamp desc;
+------------------------------------------------------------+-----------------+----------------+--------------+
|                            PLAN                             | EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS  |

[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379466#comment-16379466
 ] 

ASF GitHub Bot commented on PHOENIX-4231:
-

Github user aertoria commented on the issue:

https://github.com/apache/phoenix/pull/292
  
Sent out another patch for review. Let's close this PR.


> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.
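
For illustration, a sketch of the load path being discussed, assuming Phoenix's
CREATE FUNCTION ... USING JAR syntax and the phoenix.functions.allowUserDefinedFunctions
setting; the class name, jar URI, and ZooKeeper quorum are examples only:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch only: with user-defined functions enabled, a UDF jar can currently be
// pulled from an arbitrary hdfs:// URI, which hbase.dynamic.jars.dir does not
// restrict. Names and paths are examples, not part of the issue.
public class RemoteUdfLoadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE FUNCTION my_reverse(varchar) RETURNS varchar "
                    + "AS 'com.example.udf.MyReverse' "
                    + "USING JAR 'hdfs://some-namenode:8020/user/alice/udfs/my-udfs.jar'");
        }
    }
}
{code}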



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] phoenix issue #292: PHOENIX-4231: Support restriction of remote UDF load sou...

2018-02-27 Thread aertoria
Github user aertoria commented on the issue:

https://github.com/apache/phoenix/pull/292
  
Sent out another patch for review. Let's close this PR.


---


[jira] [Comment Edited] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379456#comment-16379456
 ] 

Ethan Wang edited comment on PHOENIX-4231 at 2/27/18 11:10 PM:
---

Please review the patch. [~rajeshbabu]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni] [~jamestaylor] [~apurtell]


was (Author: aertoria):
Please review the patch. [~apurtell] [~rajeshbabu] [~jamestaylor]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379456#comment-16379456
 ] 

Ethan Wang edited comment on PHOENIX-4231 at 2/27/18 11:09 PM:
---

Please review the patch. [~apurtell] [~rajeshbabu] [~jamestaylor]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]


was (Author: aertoria):
Please review the patch. [~apurtell] [~rajeshbabu]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379456#comment-16379456
 ] 

Ethan Wang commented on PHOENIX-4231:
-

Please review the patch. [~apurtell] [~rajeshbabu]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: PHOENIX-4231.patch

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: pom.xml

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: (was: pom.xml)

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4628) Allow min time between update stats to be configurable separately from stats cache TTL

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379436#comment-16379436
 ] 

Hudson commented on PHOENIX-4628:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #48 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/48/])
PHOENIX-4628 Allow min time between update stats to be configurable (jtaylor: 
rev a22c8de6a0479745a2c861f1f5c553f219e9466c)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java


> Allow min time between update stats to be configurable separately from stats 
> cache TTL
> --
>
> Key: PHOENIX-4628
> URL: https://issues.apache.org/jira/browse/PHOENIX-4628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4628_v1.patch
>
>
> We only have a single default config that controls both how long stats are 
> cached and how often we allow UPDATE STATISTICS to be called. We should have 
> separate property values for those two distinct things.
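
For reference, a sketch of the client-issued command whose minimum re-run interval
this issue decouples from the stats cache TTL; the table name and ZooKeeper quorum
are examples:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch only: UPDATE STATISTICS is the command that the new, separate
// "minimum time between updates" property governs.
public class UpdateStatisticsSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("UPDATE STATISTICS APP_LOG ALL");
        }
    }
}
{code}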



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4333) Incorrect estimate when stats are updated on a tenant specific view

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379434#comment-16379434
 ] 

Hudson commented on PHOENIX-4333:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #48 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/48/])
PHOENIX-4333 Incorrect estimate when stats are updated on a tenant (jtaylor: 
rev db656fbaf6e130fde942f5edd121040e0a5f70f9)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java


> Incorrect estimate when stats are updated on a tenant specific view
> ---
>
> Key: PHOENIX-4333
> URL: https://issues.apache.org/jira/browse/PHOENIX-4333
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>Priority: Major
>  Labels: SFDC, stats
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4333_test.patch, PHOENIX-4333_v1.patch, 
> PHOENIX-4333_v2.patch, PHOENIX-4333_v3.patch, PHOENIX-4333_wip1.patch, 
> PHOENIX-4333_wip2.patch, PHOENIX-4333_wip3.patch, PHOENIX-4333_wip4.patch
>
>
> Consider two tenants A, B with tenant specific view on 2 separate 
> regions/region servers.
> {noformat}
> Region 1 keys:
> A,1
> A,2
> B,1
> Region 2 keys:
> B,2
> B,3
> {noformat}
> When stats are updated on tenant A view. Querying stats on tenant B view 
> yield partial results (only contains stats for B,1) which are incorrect even 
> though it shows updated timestamp as current.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4626) Increase time allowed for partial index rebuild to complete

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379435#comment-16379435
 ] 

Hudson commented on PHOENIX-4626:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #48 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/48/])
PHOENIX-4626 Increase time allowed for partial index rebuild to complete 
(jtaylor: rev 4110f0830fec85ee9d6337a2cb5603a32f81cce2)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java


> Increase time allowed for partial index rebuild to complete
> ---
>
> Key: PHOENIX-4626
> URL: https://issues.apache.org/jira/browse/PHOENIX-4626
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4626_v1.patch
>
>
> Currently a mutable index is marked as disabled if it cannot be caught up by 
> the partial index rebuilder after 30 minutes. This is too short a time. 
> Instead, we should allow 24 hours.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4628) Allow min time between update stats to be configurable separately from stats cache TTL

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379432#comment-16379432
 ] 

Hudson commented on PHOENIX-4628:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1822 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1822/])
PHOENIX-4628 Allow min time between update stats to be configurable (jtaylor: 
rev e56d92fe95ba4a4dec024b08578c42c45201541f)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Allow min time between update stats to be configurable separately from stats 
> cache TTL
> --
>
> Key: PHOENIX-4628
> URL: https://issues.apache.org/jira/browse/PHOENIX-4628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4628_v1.patch
>
>
> We only have a single default config that controls both how long stats are 
> cached and how often we allow UPDATE STATISTICS to be called. We should have 
> separate property values for those two distinct things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4626) Increase time allowed for partial index rebuild to complete

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379431#comment-16379431
 ] 

Hudson commented on PHOENIX-4626:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1822 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1822/])
PHOENIX-4626 Increase time allowed for partial index rebuild to complete 
(jtaylor: rev 86985b638a9de4840ebfc67089ad3d53bb6a1e6b)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java


> Increase time allowed for partial index rebuild to complete
> ---
>
> Key: PHOENIX-4626
> URL: https://issues.apache.org/jira/browse/PHOENIX-4626
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4626_v1.patch
>
>
> Currently a mutable index is marked as disabled if it cannot be caught up by 
> the partial index rebuilder after 30 minutes. This is too short a time. 
> Instead, we should allow 24 hours.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4333) Incorrect estimate when stats are updated on a tenant specific view

2018-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379430#comment-16379430
 ] 

Hudson commented on PHOENIX-4333:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1822 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1822/])
PHOENIX-4333 Incorrect estimate when stats are updated on a tenant (jtaylor: 
rev 48e4980b7efd8202fa36270eb7d9827f9cde828e)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java


> Incorrect estimate when stats are updated on a tenant specific view
> ---
>
> Key: PHOENIX-4333
> URL: https://issues.apache.org/jira/browse/PHOENIX-4333
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>Priority: Major
>  Labels: SFDC, stats
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4333_test.patch, PHOENIX-4333_v1.patch, 
> PHOENIX-4333_v2.patch, PHOENIX-4333_v3.patch, PHOENIX-4333_wip1.patch, 
> PHOENIX-4333_wip2.patch, PHOENIX-4333_wip3.patch, PHOENIX-4333_wip4.patch
>
>
> Consider two tenants A, B with tenant specific view on 2 separate 
> regions/region servers.
> {noformat}
> Region 1 keys:
> A,1
> A,2
> B,1
> Region 2 keys:
> B,2
> B,3
> {noformat}
> When stats are updated on tenant A view. Querying stats on tenant B view 
> yield partial results (only contains stats for B,1) which are incorrect even 
> though it shows updated timestamp as current.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4626) Increase time allowed for partial index rebuild to complete

2018-02-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4626.
---
   Resolution: Fixed
Fix Version/s: 5.0.0

> Increase time allowed for partial index rebuild to complete
> ---
>
> Key: PHOENIX-4626
> URL: https://issues.apache.org/jira/browse/PHOENIX-4626
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4626_v1.patch
>
>
> Currently a mutable index is marked as disabled if it cannot be caught up by 
> the partial index rebuilder after 30 minutes. This is too short a time. 
> Instead, we should allow 24 hours.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4333) Incorrect estimate when stats are updated on a tenant specific view

2018-02-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4333.
---
   Resolution: Fixed
Fix Version/s: 5.0.0

> Incorrect estimate when stats are updated on a tenant specific view
> ---
>
> Key: PHOENIX-4333
> URL: https://issues.apache.org/jira/browse/PHOENIX-4333
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>Priority: Major
>  Labels: SFDC, stats
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4333_test.patch, PHOENIX-4333_v1.patch, 
> PHOENIX-4333_v2.patch, PHOENIX-4333_v3.patch, PHOENIX-4333_wip1.patch, 
> PHOENIX-4333_wip2.patch, PHOENIX-4333_wip3.patch, PHOENIX-4333_wip4.patch
>
>
> Consider two tenants A, B with tenant specific view on 2 separate 
> regions/region servers.
> {noformat}
> Region 1 keys:
> A,1
> A,2
> B,1
> Region 2 keys:
> B,2
> B,3
> {noformat}
> When stats are updated on tenant A view. Querying stats on tenant B view 
> yield partial results (only contains stats for B,1) which are incorrect even 
> though it shows updated timestamp as current.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4628) Allow min time between update stats to be configurable separately from stats cache TTL

2018-02-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4628.
---
   Resolution: Fixed
Fix Version/s: 5.0.0

> Allow min time between update stats to be configurable separately from stats 
> cache TTL
> --
>
> Key: PHOENIX-4628
> URL: https://issues.apache.org/jira/browse/PHOENIX-4628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4628_v1.patch
>
>
> We only have a single default config that controls both how long stats are 
> cached and how often we allow UPDATE STATISTICS to be called. We should have 
> separate property values for those two distinct things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4628) Allow min time between update stats to be configurable separately from stats cache TTL

2018-02-27 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379247#comment-16379247
 ] 

Thomas D'Silva commented on PHOENIX-4628:
-

+1

> Allow min time between update stats to be configurable separately from stats 
> cache TTL
> --
>
> Key: PHOENIX-4628
> URL: https://issues.apache.org/jira/browse/PHOENIX-4628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4628_v1.patch
>
>
> We only have a single default config that controls both how long stats are 
> cached and how often we allow UPDATE STATISTICS to be called. We should have 
> separate property values for those two distinct things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4628) Allow min time between update stats to be configurable separately from stats cache TTL

2018-02-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379244#comment-16379244
 ] 

James Taylor commented on PHOENIX-4628:
---

Pretty trivial patch - please review, [~tdsilva].

> Allow min time between update stats to be configurable separately from stats 
> cache TTL
> --
>
> Key: PHOENIX-4628
> URL: https://issues.apache.org/jira/browse/PHOENIX-4628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4628_v1.patch
>
>
> We only have a single default config that controls both how long stats are 
> cached and how often we allow UPDATE STATISTICS to be called. We should have 
> separate property values for those two distinct things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4628) Allow min time between update stats to be configurable separately from stats cache TTL

2018-02-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4628:
--
Attachment: PHOENIX-4628_v1.patch

> Allow min time between update stats to be configurable separately from stats 
> cache TTL
> --
>
> Key: PHOENIX-4628
> URL: https://issues.apache.org/jira/browse/PHOENIX-4628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4628_v1.patch
>
>
> We only have a single default config that controls both how long stats are 
> cached and how often we allow UPDATE STATISTICS to be called. We should have 
> separate property values for those two distinct things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1890) Provide queries for adding/deleting jars to/from common place in hdfs which is used by dynamic class loader

2018-02-27 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378825#comment-16378825
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-1890:
--

[~aertoria]
bq. I see. and it is not supporting something like:
Yes, we are not supporting this. 
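
For reference, a sketch of the jar-management statements this sub-task covers,
issued over the Phoenix JDBC driver; the local jar path and ZooKeeper quorum are
examples:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of the ADD JARS / LIST JARS / DELETE JAR statements from this sub-task.
public class DynamicJarManagementSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("ADD JARS '/tmp/my-udfs.jar'");   // copy the jar into hbase.dynamic.jars.dir
            try (ResultSet rs = stmt.executeQuery("LIST JARS")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));   // registered jar locations
                }
            }
            stmt.execute("DELETE JAR '/tmp/my-udfs.jar'"); // remove it again
        }
    }
}
{code}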

> Provide queries for adding/deleting jars to/from common place in hdfs which 
> is used by dynamic class loader
> ---
>
> Key: PHOENIX-1890
> URL: https://issues.apache.org/jira/browse/PHOENIX-1890
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.5.0
>
> Attachments: PHOENIX-1890.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378690#comment-16378690
 ] 

Ankit Singhal commented on PHOENIX-4423:


Thanks [~elserj], HIVE-15680 is the root cause. After setting the properties you 
mentioned, the issue was resolved on the local cluster. Let me test with the 
test cases and upload the patch.

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch, 
> PHOENIX-4423_wip2.patch
>
>
> HIVE-15167 removed an interface which we're using in Phoenix which obviously 
> fails compilation. Will need to figure out how to work with Hive 1.x, <2.3.0, 
> and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378479#comment-16378479
 ] 

Ankit Singhal edited comment on PHOENIX-4423 at 2/27/18 2:43 PM:
-

[~sergey.soldatov], attached the wip2 patch without a dependency on the hive-it 
artifact (QTestUtil is cloned). Still, only the JOIN tests are failing; the inner 
join is returning the same row multiple times:
{code}
java.lang.AssertionError: Unexpected exception java.lang.Exception: 
java.lang.AssertionError: Client Execution succeeded but contained differences 
(error code = 1) after executing testJoin 1,4d0
< Warning: Shuffle Join JOIN[8][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Stage-1:MAPRED' is a cross product
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
{code}

This doesn't seem to be a test issue as it is happening on the local cluster as 
well. 



was (Author: an...@apache.org):
[~sergey.soldatov], Attached wip2 patch without dependency on hive-it 
artifact(QTestUtil is cloned). Still only JOIN tests are failing, it seems join 
condition "ON" is not getting passed somehow to hive as we are getting 
following warning for a cross-product.
{code}
java.lang.AssertionError: Unexpected exception java.lang.Exception: 
java.lang.AssertionError: Client Execution succeeded but contained differences 
(error code = 1) after executing testJoin 1,4d0
< Warning: Shuffle Join JOIN[8][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Stage-1:MAPRED' is a cross product
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
{code}

This doesn't seem to be a test issue as it is happening on the local cluster as 
well. 


> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch, 
> PHOENIX-4423_wip2.patch
>
>
> HIVE-15167 removed an interface which we're using in Phoenix which obviously 
> fails compilation. Will need to figure out how to work with Hive 1.x, <2.3.0, 
> and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378683#comment-16378683
 ] 

Josh Elser commented on PHOENIX-4423:
-

[~an...@apache.org], were you using the latest from Hive's master branch? You 
might have run into the change from HIVE-15680. [~thejas] pinged me yesterday 
about a failure in the AccumuloStorageHandler. What you're describing certainly 
sounds like what I briefly read about yesterday.

https://issues.apache.org/jira/browse/HIVE-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377592#comment-16377592
 has a suggestion to try:
{code}
set hive.optimize.index.filter=false;
set hive.optimize.ppd=false;
{code}

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch, 
> PHOENIX-4423_wip2.patch
>
>
> HIVE-15167 removed an interface which we're using in Phoenix which obviously 
> fails compilation. Will need to figure out how to work with Hive 1.x, <2.3.0, 
> and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378479#comment-16378479
 ] 

Ankit Singhal edited comment on PHOENIX-4423 at 2/27/18 2:35 PM:
-

[~sergey.soldatov], Attached wip2 patch without dependency on hive-it 
artifact(QTestUtil is cloned). Still only JOIN tests are failing, it seems join 
condition "ON" is not getting passed somehow to hive as we are getting 
following warning for a cross-product.
{code}
java.lang.AssertionError: Unexpected exception java.lang.Exception: 
java.lang.AssertionError: Client Execution succeeded but contained differences 
(error code = 1) after executing testJoin 1,4d0
< Warning: Shuffle Join JOIN[8][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Stage-1:MAPRED' is a cross product
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
{code}

This doesn't seem to be a test issue as it is happening on the local cluster as 
well. 



was (Author: an...@apache.org):
[~sergey.soldatov], Attached wip2 patch without dependency on hive-it 
artifact(QTestUtil is cloned). Still only JOIN tests are failing, it seems join 
condition "ON" is not getting passed somehow to hive as we are getting 
following warning for a cross-product.
{code}
java.lang.AssertionError: Unexpected exception java.lang.Exception: 
java.lang.AssertionError: Client Execution succeeded but contained differences 
(error code = 1) after executing testJoin 1,4d0
< Warning: Shuffle Join JOIN[8][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Stage-1:MAPRED' is a cross product
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
{code}

Need to check if this is just with the tests or it's happening on the cluster 
as well. 


> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch, 
> PHOENIX-4423_wip2.patch
>
>
> HIVE-15167 removed an interface which we're using in Phoenix which obviously 
> fails compilation. Will need to figure out how to work with Hive 1.x, <2.3.0, 
> and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4622) Phoenix 4.13 order by issue

2018-02-27 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378454#comment-16378454
 ] 

chenglei edited comment on PHOENIX-4622 at 2/27/18 1:34 PM:


[~mini666], it seems that this is a serious bug and a different issue from the 
current jira. Can you open a new jira and provide your DDL and sample data? We 
cannot simply add {{scan.isReversed()}} to the if statement.


was (Author: comnetwork):
[~mini666] ,seems that it is a serious bug and  is a different issue from 
current jira, can you open a new jira and give your DDL and sample data ? we 
can not simply add {{scan.isReversed ()}}  to the if statement

> Phoenix 4.13 order by issue
> ---
>
> Key: PHOENIX-4622
> URL: https://issues.apache.org/jira/browse/PHOENIX-4622
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
> Environment: phoenix 4.13
> hbase 1.2.5
>Reporter: tom thmas
>Priority: Critical
>
> *1.create table and insert data.*
> create table test2
> (
>  id varchar(200) primary key,
>  cardid varchar(200),
>  ctime date 
> )
> upsert into test2 (id,cardid,ctime) values('a1','123',to_date('2017-12-01 
> 17:42:45'))
> *2.query sql like this:*
> select id,ctime from test2  where cardid='123' order by ctime
> error log:
> {color:#FF}org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST2,,1519221167250.813e4ce0510965a7a7898413da2a17ad.: null{color}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4622) Phoenix 4.13 order by issue

2018-02-27 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378454#comment-16378454
 ] 

chenglei edited comment on PHOENIX-4622 at 2/27/18 1:33 PM:


[~mini666] ,seems that it is a serious bug and  is a different issue from 
current jira, can you open a new jira and give your DDL and sample data ? we 
can not simply add {{scan.isReverse}} 


was (Author: comnetwork):
[~mini666] ,seems that it is a serious bug and  is a different issue from 
current jira, can you open a new jira and give your DDL and sample data ?

> Phoenix 4.13 order by issue
> ---
>
> Key: PHOENIX-4622
> URL: https://issues.apache.org/jira/browse/PHOENIX-4622
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
> Environment: phoenix 4.13
> hbase 1.2.5
>Reporter: tom thmas
>Priority: Critical
>
> *1.create table and insert data.*
> create table test2
> (
>  id varchar(200) primary key,
>  cardid varchar(200),
>  ctime date 
> )
> upsert into test2 (id,cardid,ctime) values('a1','123',to_date('2017-12-01 
> 17:42:45'))
> *2.query sql like this:*
> select id,ctime from test2  where cardid='123' order by ctime
> error log:
> {color:#FF}org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST2,,1519221167250.813e4ce0510965a7a7898413da2a17ad.: null{color}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4622) Phoenix 4.13 order by issue

2018-02-27 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378454#comment-16378454
 ] 

chenglei edited comment on PHOENIX-4622 at 2/27/18 1:33 PM:


[~mini666] ,seems that it is a serious bug and  is a different issue from 
current jira, can you open a new jira and give your DDL and sample data ? we 
can not simply add {{scan.isReversed ()}}  to the if statement


was (Author: comnetwork):
[~mini666] ,seems that it is a serious bug and  is a different issue from 
current jira, can you open a new jira and give your DDL and sample data ? we 
can not simply add {{scan.isReverse}} 

> Phoenix 4.13 order by issue
> ---
>
> Key: PHOENIX-4622
> URL: https://issues.apache.org/jira/browse/PHOENIX-4622
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
> Environment: phoenix 4.13
> hbase 1.2.5
>Reporter: tom thmas
>Priority: Critical
>
> *1.create table and insert data.*
> create table test2
> (
>  id varchar(200) primary key,
>  cardid varchar(200),
>  ctime date 
> )
> upsert into test2 (id,cardid,ctime) values('a1','123',to_date('2017-12-01 
> 17:42:45'))
> *2.query sql like this:*
> select id,ctime from test2  where cardid='123' order by ctime
> error log:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST2,,1519221167250.813e4ce0510965a7a7898413da2a17ad.: null
>  





[jira] [Comment Edited] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378479#comment-16378479
 ] 

Ankit Singhal edited comment on PHOENIX-4423 at 2/27/18 12:13 PM:
--

[~sergey.soldatov], attached the wip2 patch without the dependency on the 
hive-it artifact (QTestUtil is cloned). Still, only the JOIN tests are failing; 
it seems the join condition "ON" is somehow not getting passed to Hive, as we 
are getting the following warning about a cross product.
{code}
java.lang.AssertionError: Unexpected exception java.lang.Exception: 
java.lang.AssertionError: Client Execution succeeded but contained differences 
(error code = 1) after executing testJoin 1,4d0
< Warning: Shuffle Join JOIN[8][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Stage-1:MAPRED' is a cross product
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
{code}

Need to check whether this happens only in the tests or on the cluster as 
well.
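
To illustrate what a missing "ON" condition means here, below is a minimal, 
hypothetical JDBC-over-HiveServer2 sketch (not taken from the test suite): the 
first query carries an explicit {{ON}} clause, while the second is the 
degenerate form that makes Hive plan a cartesian product and log the "Shuffle 
Join ... is a cross product" warning. The connection URL, the table names 
({{joinTable1}}, {{joinTable2}}) and the columns are assumptions; only standard 
JDBC plus the hive-jdbc driver on the classpath is required.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical sketch: table and column names are made up for illustration.
public class HiveJoinSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical HiveServer2 endpoint; needs hive-jdbc on the classpath.
        String url = "jdbc:hive2://localhost:10000/default";

        // Join with an explicit ON condition: no cross-product warning expected.
        String joinWithOn =
            "SELECT a.id, b.description FROM joinTable1 a JOIN joinTable2 b ON a.id = b.id";

        // If the ON condition is lost on the way to Hive, the same join degenerates
        // into a cartesian product, which triggers the warning quoted above.
        String joinWithoutOn =
            "SELECT a.id, b.description FROM joinTable1 a JOIN joinTable2 b";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(joinWithOn)) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getString(2));
            }
        }
        System.out.println("Cross-product variant (not executed): " + joinWithoutOn);
    }
}
{code}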



was (Author: an...@apache.org):
[~sergey.soldatov], attached the wip2 patch without the dependency on the 
hive-it artifact (QTestUtil is cloned). Still, only the JOIN tests are failing; 
it seems the join condition "ON" is somehow not getting passed to Hive, as we 
are getting the following warning about a cross product.
{code}
java.lang.AssertionError: Unexpected exception java.lang.Exception: 
java.lang.AssertionError: Client Execution succeeded but contained differences 
(error code = 1) after executing testJoin 1,4d0
< Warning: Shuffle Join JOIN[8][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Stage-1:MAPRED' is a cross product
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
{code}



> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch, 
> PHOENIX-4423_wip2.patch
>
>
> HIVE-15167 removed an interface that we're using in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, 
> <2.3.0, and >=2.3.0.
> FYI [~sergey.soldatov]





[jira] [Updated] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4423:
---
Attachment: PHOENIX-4423_wip2.patch

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch, 
> PHOENIX-4423_wip2.patch
>
>
> HIVE-15167 removed an interface that we're using in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, 
> <2.3.0, and >=2.3.0.
> FYI [~sergey.soldatov]





[jira] [Commented] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378479#comment-16378479
 ] 

Ankit Singhal commented on PHOENIX-4423:


[~sergey.soldatov], attached the wip2 patch without the dependency on the 
hive-it artifact (QTestUtil is cloned). Still, only the JOIN tests are failing; 
it seems the join condition "ON" is somehow not getting passed to Hive, as we 
are getting the following warning about a cross product.
{code}
java.lang.AssertionError: Unexpected exception java.lang.Exception: 
java.lang.AssertionError: Client Execution succeeded but contained differences 
(error code = 1) after executing testJoin 1,4d0
< Warning: Shuffle Join JOIN[8][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Stage-1:MAPRED' is a cross product
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
< 10part2   foodesc 200.0   2.0 -1  10  part2   foodesc 200.0   
2.0 -1
{code}



> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch
>
>
> HIVE-15167 removed an interface that we're using in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, 
> <2.3.0, and >=2.3.0.
> FYI [~sergey.soldatov]





[jira] [Updated] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4423:
---
Attachment: (was: PHOENIX-4423_wip2.patch)

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch
>
>
> HIVE-15167 removed an interface that we're using in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, 
> <2.3.0, and >=2.3.0.
> FYI [~sergey.soldatov]





[jira] [Updated] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4423:
---
Attachment: PHOENIX-4423_wip2.patch

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch, 
> PHOENIX-4423_wip2.patch
>
>
> HIVE-15167 removed an interface that we're using in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, 
> <2.3.0, and >=2.3.0.
> FYI [~sergey.soldatov]





[jira] [Commented] (PHOENIX-4622) Phoenix 4.13 order by issue

2018-02-27 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378454#comment-16378454
 ] 

chenglei commented on PHOENIX-4622:
---

[~mini666], it seems this is a serious bug and a different issue from the 
current jira. Can you open a new jira and give your DDL and sample data?

> Phoenix 4.13 order by issue
> ---
>
> Key: PHOENIX-4622
> URL: https://issues.apache.org/jira/browse/PHOENIX-4622
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
> Environment: phoenix 4.13
> hbase 1.2.5
>Reporter: tom thmas
>Priority: Critical
>
> *1.create table and insert data.*
> create table test2
> (
>  id varchar(200) primary key,
>  cardid varchar(200),
>  ctime date 
> )
> upsert into test2 (id,cardid,ctime) values('a1','123',to_date('2017-12-01 
> 17:42:45'))
> *2.query sql like this:*
> select id,ctime from test2  where cardid='123' order by ctime
> error log:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST2,,1519221167250.813e4ce0510965a7a7898413da2a17ad.: null
>  


