[jira] [Commented] (PHOENIX-6600) Replace deprecated getCall with updated getRpcCall

2021-11-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449436#comment-17449436
 ] 

ASF GitHub Bot commented on PHOENIX-6600:
-----------------------------------------

kiran-maturi commented on pull request #1356:
URL: https://github.com/apache/phoenix/pull/1356#issuecomment-979763440


   @virajjasani thanks for reviewing. The tests are flaky in my local environment 
as well; sometimes they fail and sometimes they pass.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace deprecated getCall with updated getRpcCall
> --------------------------------------------------
>
> Key: PHOENIX-6600
> URL: https://issues.apache.org/jira/browse/PHOENIX-6600
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.1.1, 5.1.2
> Reporter: Kiran Kumar Maturi
> Assignee: Kiran Kumar Maturi
> Priority: Critical
>
> If we are using PhoenixRpcScheduler, it currently fails with a NoSuchMethodError 
> for getCall. This is due to CallRunner.getCall() being deprecated as part of 
> https://issues.apache.org/jira/browse/HBASE-17221
> {code:java}
> 2021-11-22 05:36:35,161 TRACE [-EventLoopGroup-1-49] ipc.NettyRpcServer - 
> Connection /10.231.90.110:55489; caught unexpected downstream exception.
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hbase.ipc.CallRunner.getCall()Lorg/apache/hadoop/hbase/ipc/ServerCall;
> at 
> org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler.dispatch(PhoenixRpcScheduler.java:84)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:720)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:457)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:344)
> at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:87)
> at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:63)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
> at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
> at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.lang.Thread.run(Thread.java:748)
> {code}
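
For context, a minimal sketch of the change the issue title describes, under the
assumption of an HBase 2.x classpath (this is not the actual PR-1356 patch, and the
class and method names below are hypothetical): a custom RpcScheduler can read the
queued call through CallRunner.getRpcCall() instead of the deprecated getCall() that
triggers the NoSuchMethodError shown above.

{code:java}
// Hedged sketch (not the Phoenix patch): obtain the RPC priority of a queued
// call via the non-deprecated accessor. Assumes HBase 2.x on the classpath.
import org.apache.hadoop.hbase.ipc.CallRunner;
import org.apache.hadoop.hbase.ipc.RpcCall;

public class RpcCallAccessSketch {
    /** Reads the call priority without using the deprecated CallRunner.getCall(). */
    static int priorityOf(CallRunner callTask) {
        RpcCall rpcCall = callTask.getRpcCall(); // replacement for callTask.getCall()
        return rpcCall.getHeader().getPriority();
    }
}
{code}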



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

[GitHub] [phoenix] kiran-maturi commented on pull request #1356: PHOENIX-6600 Replace deprecated getCall with updated getRpcCall

2021-11-25 Thread GitBox


kiran-maturi commented on pull request #1356:
URL: https://github.com/apache/phoenix/pull/1356#issuecomment-979763440


   @virajjasani thanks for reviewing. The tests are flaky in my local environment 
as well; sometimes they fail and sometimes they pass.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (PHOENIX-6600) PhoenixRPCScheduler fails with NoSuchMethod getCall() for hbase-2

2021-11-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449397#comment-17449397
 ] 

ASF GitHub Bot commented on PHOENIX-6600:
-----------------------------------------

virajjasani commented on pull request #1356:
URL: https://github.com/apache/phoenix/pull/1356#issuecomment-979732541


   Test results for 2nd build: 
https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1356/2/testReport/
   
   Btw, this PR and Jira title can be changed to "Replace deprecated getCall 
with updated getRpcCall", because "method not found" seems irrelevant.
   And I will trigger one more build to ensure the recent test failures are not 
repeated (although they are known flaky tests).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [phoenix] virajjasani commented on pull request #1356: PHOENIX-6600 fixes PhoenixRpcScheduler.dispatch getCall method not found

2021-11-25 Thread GitBox


virajjasani commented on pull request #1356:
URL: https://github.com/apache/phoenix/pull/1356#issuecomment-979732541


   Test results for 2nd build: 
https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1356/2/testReport/
   
   Btw, this PR and Jira title can be changed to "Replace deprecated getCall 
with updated getRpcCall", because "method not found" seems irrelevant.
   And I will trigger one more build to ensure the recent test failures are not 
repeated (although they are known flaky tests).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (PHOENIX-6604) Index not used for wildcard topN query on salted table

2021-11-25 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449277#comment-17449277
 ] 

Lars Hofhansl commented on PHOENIX-6604:


With a test. (Can't run tests locally, but I think this is the right test).

 

> Index not used for wildcard topN query on salted table
> -------------------------------------------------------
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.1.2
> Reporter: Lars Hofhansl
> Priority: Major
> Fix For: 5.1.3
>
> Attachments: 6604-1.5.1.3, 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44),
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +------------------------------------------------------------------------------------+
> |                                         PLAN                                        |
> +------------------------------------------------------------------------------------+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN OVER LI   |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                           |
> | CLIENT MERGE SORT                                                                    |
> | CLIENT LIMIT 1                                                                       |
> +------------------------------------------------------------------------------------+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +------------------------------------------------------------------------------------+
> |                                         PLAN                                        |
> +------------------------------------------------------------------------------------+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1]         |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT,    |
> |     SERVER FILTER BY FIRST KEY ONLY                                                 |
> |     SERVER 1 ROW LIMIT                                                              |
> | CLIENT MERGE SORT                                                                    |
> | CLIENT 1 ROW LIMIT                                                                   |
> +------------------------------------------------------------------------------------+
> 6 rows selected (2.736 seconds){code}
>  
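
For anyone who wants to repeat the plan comparison above outside sqlline, a hedged
JDBC sketch follows; the connection URL (jdbc:phoenix:localhost) and the class name
are assumptions, not taken from the report.

{code:java}
// Prints the EXPLAIN plan for the wildcard and the explicit-column variants so
// the index choice can be compared. Adjust the URL for your cluster.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainPlanCheck {
    public static void main(String[] args) throws Exception {
        String[] queries = {
            "EXPLAIN SELECT * FROM lineitem ORDER BY shipdate LIMIT 1",
            "EXPLAIN SELECT orderkey, shipdate FROM lineitem ORDER BY shipdate LIMIT 1"
        };
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            for (String sql : queries) {
                System.out.println(sql);
                try (ResultSet rs = stmt.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println("  " + rs.getString(1)); // one plan step per row
                    }
                }
            }
        }
    }
}
{code}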



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2021-11-25 Thread Jira


[ https://issues.apache.org/jira/browse/PHOENIX-5066 ]


Michał Wieleba deleted comment on PHOENIX-5066:
-----------------------------------------------

was (Author: majsiel):
Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter, especially when it comes to switching between winter 
time and summer time. Right now CET is GMT+1, but Phoenix shows it as GMT+2:
{code:java}
select TO_CHAR(date1, 'yyyy-MM-dd HH:mm:ss') DATE, timezone_offset('GMT', 
date1) GMT, timezone_offset('CET', date1) CET from (...)

| DATE                | GMT | CET |
| 2020-10-01 22:00:00 | 0   | 120 |
{code}
 

> The TimeZone is incorrectly used during writing or reading data
> ----------------------------------------------------------------
>
> Key: PHOENIX-5066
> URL: https://issues.apache.org/jira/browse/PHOENIX-5066
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.0.0, 4.14.1
> Reporter: Jaanai Zhang
> Assignee: Istvan Toth
> Priority: Critical
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: DateTest.java, PHOENIX-5066.4x.v1.patch, 
> PHOENIX-5066.4x.v2.patch, PHOENIX-5066.4x.v3.patch, 
> PHOENIX-5066.master.v1.patch, PHOENIX-5066.master.v2.patch, 
> PHOENIX-5066.master.v3.patch, PHOENIX-5066.master.v4.patch, 
> PHOENIX-5066.master.v5.patch, PHOENIX-5066.master.v6.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have two methods to write data when using the JDBC API:
> #1. Use the _executeUpdate_ method to execute a string that is an UPSERT SQL statement.
> #2. Use the _prepareStatement_ method to set some objects and execute.
> The _string_ data needs to be converted to a new object according to the schema 
> information of the tables. We use date formatters to convert string data to 
> objects for the Date/Time/Timestamp types when writing data, and the same 
> formatters are used when reading data as well.
>  
> *Test with the default time zone*
> Writing 3 records in three different ways:
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47') 
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
> {code:java}
> 1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
> {code}
> Reading the table with the getString method:
> {code:java}
> 1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 
> 07:45:07.660
> {code}
> *Test with the GMT+8 time zone*
> Writing 3 records in three different ways:
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
> {code:java}
> 1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
> 2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
> 3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
> Reading the table with the getString method:
> {code:java}
>  1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 
> 23:40:47.000
> 2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 
> 15:40:47.000
> 3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 
> 15:40:47.106
> {code}
>  
> We have a historical problem: in #1 we parse the string into Date/Time/Timestamp 
> objects using the time zone, which means the actual data is changed when it is 
> stored in the HBase table.
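
A self-contained, hedged version of the two write paths quoted above (it is not the
attached DateTest.java; the JDBC URL and table layout are assumptions, and the
to_date/to_time/to_timestamp variant is omitted for brevity):

{code:java}
// Writes one row with string literals (#1) and one with JDBC objects (#2),
// then reads both back so the getTimestamp()/getString() discrepancy can be seen.
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Time;
import java.sql.Timestamp;

public class DateTimeZoneRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(true);
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS date_test "
                    + "(id INTEGER PRIMARY KEY, d DATE, t TIME, ts TIMESTAMP)");
                // #1: string literals, parsed with the configured date-format time zone.
                stmt.execute("UPSERT INTO date_test VALUES "
                    + "(1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')");
            }
            // #2: java.sql objects bound through a PreparedStatement.
            long millis = Timestamp.valueOf("2018-12-10 15:40:47").getTime();
            try (PreparedStatement ps =
                     conn.prepareStatement("UPSERT INTO date_test VALUES (?, ?, ?, ?)")) {
                ps.setInt(1, 3);
                ps.setDate(2, new Date(millis));
                ps.setTime(3, new Time(millis));
                ps.setTimestamp(4, new Timestamp(millis));
                ps.executeUpdate();
            }
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, ts FROM date_test")) {
                while (rs.next()) {
                    // The two renderings can disagree when the write paths applied
                    // different time-zone conversions.
                    System.out.println(rs.getInt(1) + " | " + rs.getTimestamp(2)
                        + " | " + rs.getString(2));
                }
            }
        }
    }
}
{code}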



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2021-11-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449000#comment-17449000
 ] 

Michał Wieleba edited comment on PHOENIX-5066 at 11/25/21, 9:09 AM:


Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter, especially when it comes to switching between winter 
time and summer time. Right now CET is GMT+1, but Phoenix shows it as GMT+2:
{code:java}
select TO_CHAR(date1, 'yyyy-MM-dd HH:mm:ss') DATE, timezone_offset('GMT', 
date1) GMT, timezone_offset('CET', date1) CET from (...)

| DATE                | GMT | CET |
| 2020-10-01 22:00:00 | 0   | 120 |
{code}
 


was (Author: majsiel):
Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter, especially when it comes to switching between winter 
time and summer time. Right now CET is GMT+1, but Phoenix shows it as GMT+2:
{code:java}
select TO_CHAR(date1, 'yyyy-MM-dd HH:mm:ss'), timezone_offset('GMT', date1), 
timezone_offset('CET', date1) from (...)
+---------------------------+------------------------------------------+------------------------------------------+
| TO_CHAR(TO_DATE("date1")) | TIMEZONE_OFFSET('GMT', TO_DATE("date1")) | TIMEZONE_OFFSET('CET', TO_DATE("date1")) |
+---------------------------+------------------------------------------+------------------------------------------+
| 2020-10-01 22:00:00       | 0                                        | 120                                      |
{code}
 


[jira] [Comment Edited] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2021-11-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449000#comment-17449000
 ] 

Michał Wieleba edited comment on PHOENIX-5066 at 11/25/21, 9:08 AM:


Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter, especially when it comes to switching between winter 
time and summer time. Right now CET is GMT+1, but Phoenix shows it as GMT+2:
{code:java}
select TO_CHAR(date1, 'yyyy-MM-dd HH:mm:ss'), timezone_offset('GMT', date1), 
timezone_offset('CET', date1) from (...)
+---------------------------+------------------------------------------+------------------------------------------+
| TO_CHAR(TO_DATE("date1")) | TIMEZONE_OFFSET('GMT', TO_DATE("date1")) | TIMEZONE_OFFSET('CET', TO_DATE("date1")) |
+---------------------------+------------------------------------------+------------------------------------------+
| 2020-10-01 22:00:00       | 0                                        | 120                                      |
{code}
 


was (Author: majsiel):
Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter, especially when it comes to switching between winter 
time and summer time.


[jira] [Comment Edited] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2021-11-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449000#comment-17449000
 ] 

Michał Wieleba edited comment on PHOENIX-5066 at 11/25/21, 9:04 AM:


Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter, especially when it comes to switching between winter 
time and summer time.


was (Author: majsiel):
Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter.




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2021-11-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449000#comment-17449000
 ] 

Michał Wieleba commented on PHOENIX-5066:
-----------------------------------------

Guys, do you have any precise guidelines on how to deal with this issue? Working 
with time zones in Apache Phoenix can be very confusing, and the documentation 
only mentions one parameter.
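
A hedged illustration of setting that single parameter, assuming it is
phoenix.query.dateFormatTimeZone and assuming a Phoenix quorum at
jdbc:phoenix:localhost (both are assumptions, not taken from the comment):

{code:java}
// Pins the time zone Phoenix uses when parsing and formatting date/time strings
// on this connection to GMT instead of the JVM default.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class DateFormatTimeZoneExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("phoenix.query.dateFormatTimeZone", "GMT");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
{code}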




--
This message was sent by Atlassian Jira
(v8.20.1#820001)