Re: [ANNOUNCE] New Phoenix committer: Chinmay Kulkarni

2018-12-10 Thread Karan Mehta
Congrats Chinmay!

On Mon, Dec 10, 2018 at 10:10 PM Geoffrey Jacoby  wrote:

> Congratulations, Chinmay!
>
> Geoffrey Jacoby
>
> On Mon, Dec 10, 2018 at 1:45 PM Thomas D'Silva  wrote:
>
> > On behalf of the Apache Phoenix PMC, I am pleased to announce that
> Chinmay
> > Kulkarni has accepted our invitation to become a committer. Chinmay has
> > contributed several metadata management improvements. He has also worked
> on
> > improving Phoenix code quality and fixed many bugs [1].
> >
> > Please welcome him to the Apache Phoenix team.
> >
> > Thank you,
> > Thomas
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Dckulkarni%20AND%20status%3DResolved
> >
>


Re: [ANNOUNCE] New Phoenix committer: Jaanai Zhang

2018-12-10 Thread Geoffrey Jacoby
Congratulations, Jaanai!

On Mon, Dec 10, 2018 at 2:05 PM Thomas D'Silva  wrote:

> On behalf of the Apache Phoenix PMC, I am pleased to announce that Jaanai
> Zhang has accepted our invitation to become a committer. He has found and
> fixed several bugs [1]. He is also very active on the mailing list helping
> out users and providing feedback on new features.
>
> We are looking forward to more great work.
>
> Thank you,
> Thomas
>
> [1]
> https://issues.apache.org/jira/browse/PHOENIX-4974?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Djaanai%20%20
>


Re: [ANNOUNCE] New Phoenix committer: Chinmay Kulkarni

2018-12-10 Thread Geoffrey Jacoby
Congratulations, Chinmay!

Geoffrey Jacoby

On Mon, Dec 10, 2018 at 1:45 PM Thomas D'Silva  wrote:

> On behalf of the Apache Phoenix PMC, I am pleased to announce that Chinmay
> Kulkarni has accepted our invitation to become a committer. Chinmay has
> contributed several metadata management improvements. He has also worked on
> improving Phoenix code quality and fixed many bugs [1].
>
> Please welcome him to the Apache Phoenix team.
>
> Thank you,
> Thomas
>
> [1]
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Dckulkarni%20AND%20status%3DResolved
>


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Attachment: PHOENIX-5055-4.x-HBase-1.4-v4.patch

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: ConcurrentTest.java, 
> PHOENIX-5055-4.x-HBase-1.4-v2.patch, PHOENIX-5055-4.x-HBase-1.4-v3.patch, 
> PHOENIX-5055-4.x-HBase-1.4-v4.patch, PHOENIX-5055-v4.x-HBase-1.4.patch
>
>
> To improve performance, we split the list of mutations into multiple
> batches in MutationState. An upsert SQL statement with some null values
> produces two types of KeyValues (Put and DeleteColumn); these KeyValues
> should have the same timestamp so that the operation stays atomic for the
> corresponding row key.
> [^ConcurrentTest.java] generates random upsert/delete SQL statements and
> executes them concurrently; some SQL snippets follow:
> {code:java}
> 1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
> ('3826','2563','3052','3170','3767');
> 1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('2563','4926','3526','678',null,null,'1617');
> 2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('1052','2563','1120','2314','1456',null,null);
> 2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
> ('1922','146',null,'469','2563');
> 2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563';
> {code}
> Incorrect index data was found in the index tables via sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
> Debugging the mutation batches on the server side showed that the
> DeleteColumns and Puts from a single upsert were split into different
> batches, and the DeleteFamily was executed by another thread. As a result,
> under multiple threads the DeleteColumn's timestamp ends up larger than
> the DeleteFamily's.
> !https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate(
>     "CREATE TABLE " + tableName + " ("
>         + "A VARCHAR NOT NULL PRIMARY KEY,"
>         + "B VARCHAR,"
>         + "C VARCHAR,"
>         + "D VARCHAR) COLUMN_ENCODED_BYTES = 0");
> conn.createStatement().executeUpdate(
>     "CREATE INDEX " + indexName + " on " + tableName + " (C) INCLUDE(D)");
> conn.createStatement().executeUpdate(
>     "UPSERT INTO " + tableName + "(A,B,C,D) VALUES ('A2','B2','C2','D2')");
> conn.createStatement().executeUpdate(
>     "UPSERT INTO " + tableName + "(A,B,C,D) VALUES ('A3','B3', 'C3', null)");
> {code}
> Dump of the IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  
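> A minimal sketch, assuming hypothetical class and method names (this is not
> the actual MutationState code), of the idea a fix could follow: group
> mutations by row key before splitting them into batches, so the Put and
> DeleteColumn produced by one upsert always land in the same batch and keep
> the same timestamp.
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Map;
> import java.util.TreeMap;
> import org.apache.hadoop.hbase.client.Mutation;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class RowAwareBatchSplitter {
>     /** Splits mutations into batches of at most batchSize rows, never
>      *  separating mutations that share a row key. */
>     public static List<List<Mutation>> split(List<Mutation> mutations,
>                                              int batchSize) {
>         // Group all mutations for the same row key together first.
>         Map<byte[], List<Mutation>> byRow =
>             new TreeMap<>(Bytes.BYTES_COMPARATOR);
>         for (Mutation m : mutations) {
>             byRow.computeIfAbsent(m.getRow(), k -> new ArrayList<>()).add(m);
>         }
>         List<List<Mutation>> batches = new ArrayList<>();
>         List<Mutation> current = new ArrayList<>();
>         int rowsInBatch = 0;
>         for (List<Mutation> rowMutations : byRow.values()) {
>             if (rowsInBatch == batchSize) {
>                 batches.add(current);
>                 current = new ArrayList<>();
>                 rowsInBatch = 0;
>             }
>             current.addAll(rowMutations); // a whole row stays in one batch
>             rowsInBatch++;
>         }
>         if (!current.isEmpty()) {
>             batches.add(current);
>         }
>         return batches;
>     }
> }
> {code}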



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.01.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> This issue is about the region server being killed when one region is
> closing while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level
> cached/shared connections, which can interrupt the index and index-state
> updates of other regions.
> -- Region 1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
>  
> -- Region 2: Writing index updates
> Index updates fail because the connections are closed, which leads to a
> RejectedExecutionException or a null connection. This triggers
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get
> the syscat table using the cached connections. Since it cannot reach
> SYSCAT, it triggers KillServerFailurePolicy. The check in
> CoprocessorHConnectionTableFactory#getTable():
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  
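> A minimal sketch, not the actual ServerUtil code, of one way to make the
> shared connection safer: hand out the region-server-level connection behind
> a reference count, so a closing region releases only its own reference and
> the underlying connection is closed when the last user releases it. All
> names below are hypothetical.
> {code:java}
> import java.io.IOException;
> import java.util.concurrent.atomic.AtomicInteger;
> import org.apache.hadoop.hbase.client.Connection;
>
> public class RefCountedConnection {
>     private final Connection delegate;
>     private final AtomicInteger refCount = new AtomicInteger();
>
>     public RefCountedConnection(Connection delegate) {
>         this.delegate = delegate;
>     }
>
>     /** Each region/writer that starts using the connection calls this. */
>     public Connection acquire() {
>         refCount.incrementAndGet();
>         return delegate;
>     }
>
>     /** Called on writer shutdown; closes only when no other user remains. */
>     public void release() throws IOException {
>         if (refCount.decrementAndGet() == 0) {
>             delegate.close();
>         }
>     }
> }
> {code}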



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993-4.x-HBase-1.3.01.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> This issue is about the region server being killed when one region is
> closing while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level
> cached/shared connections, which can interrupt the index and index-state
> updates of other regions.
> -- Region 1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
>  
> -- Region 2: Writing index updates
> Index updates fail because the connections are closed, which leads to a
> RejectedExecutionException or a null connection. This triggers
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get
> the syscat table using the cached connections. Since it cannot reach
> SYSCAT, it triggers KillServerFailurePolicy. The check in
> CoprocessorHConnectionTableFactory#getTable():
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5025) Tool to clean up orphan views

2018-12-10 Thread Kadir OZDEMIR (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5025:
---
Attachment: PHOENIX-5025.master.0001.patch

> Tool to clean up orphan views
> -
>
> Key: PHOENIX-5025
> URL: https://issues.apache.org/jira/browse/PHOENIX-5025
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Attachments: PHOENIX-5025.master.0001.patch, PHOENIX-5025.master.patch
>
>
> A view without its base table is an orphan view. Since views are virtual
> tables whose data is stored in their base tables, they become useless once
> they are orphaned. A base table can have child views, grandchild views, and
> so on. Due to past bugs, views were not always properly cleaned up when a
> base table was dropped; for example, the drop-table code did not support
> cleaning up grandchild views. This has recently been fixed by PHOENIX-4764.
> Although PHOENIX-4764 prevents new orphan views from table drop operations,
> it does not clean up existing orphan views. It is also believed that a past
> bug that split the system catalog table contributed to creating orphan
> views, since Phoenix did not support a splittable system catalog at the
> time. Therefore, Phoenix needs a tool to clean up orphan views.
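> A minimal sketch, not the attached tool, of one way to flag candidate
> orphan views from a client: enumerate views through standard JDBC metadata
> and mark those whose trivial query fails because the base table is gone.
> The JDBC URL is a placeholder.
> {code:java}
> import java.sql.Connection;
> import java.sql.DatabaseMetaData;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> public class OrphanViewDetector {
>     public static void main(String[] args) throws SQLException {
>         try (Connection conn =
>                  DriverManager.getConnection("jdbc:phoenix:localhost")) {
>             DatabaseMetaData md = conn.getMetaData();
>             // Enumerate all views known to the Phoenix catalog.
>             try (ResultSet rs = md.getTables(null, null, null,
>                                              new String[] { "VIEW" })) {
>                 while (rs.next()) {
>                     String schema = rs.getString("TABLE_SCHEM");
>                     String name = rs.getString("TABLE_NAME");
>                     String fullName =
>                         (schema == null || schema.isEmpty())
>                             ? name : schema + "." + name;
>                     try (Statement stmt = conn.createStatement()) {
>                         // A view whose base table was dropped fails here.
>                         stmt.executeQuery(
>                             "SELECT 1 FROM " + fullName + " LIMIT 1");
>                     } catch (SQLException e) {
>                         System.out.println("Possible orphan view: "
>                             + fullName + " (" + e.getMessage() + ")");
>                     }
>                 }
>             }
>         }
>     }
> }
> {code}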



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4951) Rename viewIndexType to viewIndexIdType in PTableImpl

2018-12-10 Thread Chinmay Kulkarni (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni resolved PHOENIX-4951.
---
Resolution: Fixed

> Rename viewIndexType to viewIndexIdType in PTableImpl
> -
>
> Key: PHOENIX-4951
> URL: https://issues.apache.org/jira/browse/PHOENIX-4951
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[ANNOUNCE] New Phoenix committer: Jaanai Zhang

2018-12-10 Thread Thomas D'Silva
On behalf of the Apache Phoenix PMC, I am pleased to announce that Jaanai
Zhang has accepted our invitation to become a committer. He has found and
fixed several bugs [1]. He is also very active on the mailing list helping
out users and providing feedback on new features.

We are looking forward to more great work.

Thank you,
Thomas

[1]
https://issues.apache.org/jira/browse/PHOENIX-4974?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Djaanai%20%20


[ANNOUNCE] New Phoenix committer: Chinmay Kulkarni

2018-12-10 Thread Thomas D'Silva
On behalf of the Apache Phoenix PMC, I am pleased to announce that Chinmay
Kulkarni has accepted our invitation to become a committer. Chinmay has
contributed several metadata management improvements. He has also worked on
improving Phoenix code quality and fixed many bugs [1].

Please welcome him to the Apache Phoenix team.

Thank you,
Thomas

[1]
https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Dckulkarni%20AND%20status%3DResolved


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993-v1.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> This issue is about the region server being killed when one region is
> closing while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level
> cached/shared connections, which can interrupt the index and index-state
> updates of other regions.
> -- Region 1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
>  
> -- Region 2: Writing index updates
> Index updates fail because the connections are closed, which leads to a
> RejectedExecutionException or a null connection. This triggers
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get
> the syscat table using the cached connections. Since it cannot reach
> SYSCAT, it triggers KillServerFailurePolicy. The check in
> CoprocessorHConnectionTableFactory#getTable():
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.01.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-v1.patch
>
>
> This issue is about the region server being killed when one region is
> closing while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level
> cached/shared connections, which can interrupt the index and index-state
> updates of other regions.
> -- Region 1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
>  
> -- Region 2: Writing index updates
> Index updates fail because the connections are closed, which leads to a
> RejectedExecutionException or a null connection. This triggers
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get
> the syscat table using the cached connections. Since it cannot reach
> SYSCAT, it triggers KillServerFailurePolicy. The check in
> CoprocessorHConnectionTableFactory#getTable():
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Description: 
We have two ways to write data when using the JDBC API:
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to new objects according to the
table's schema information. We use date formatters to convert string data to
objects for the Date/Time/Timestamp types when writing data, and the same
formatters are used when reading data.

 

*Uses default timezone test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}

*Uses GMT+8 test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}

We have a historical problem: in #1 we parse the string into
Date/Time/Timestamp objects with the time zone applied, which means the
actual data is changed when it is stored in the HBase table.
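A minimal, self-contained sketch of the two write paths described above; the
table schema and the JDBC URL are placeholders, not part of the original
test code.
{code:java}
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.sql.Time;
import java.sql.Timestamp;

public class DateWriteTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost")) {
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS date_test ("
                    + "id INTEGER PRIMARY KEY, d DATE, t TIME, ts TIMESTAMP)");
                // #1: string literals; the client parses them with the
                // configured date formatters, so a time zone is applied.
                stmt.executeUpdate("UPSERT INTO date_test VALUES "
                    + "(1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', "
                    + "'2018-12-10 15:40:47')");
            }
            // #2: typed objects via PreparedStatement; no string parsing,
            // so no formatter time zone is involved.
            long now = System.currentTimeMillis();
            try (PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO date_test VALUES (?, ?, ?, ?)")) {
                ps.setInt(1, 3);
                ps.setDate(2, new Date(now));
                ps.setTime(3, new Time(now));
                ps.setTimestamp(4, new Timestamp(now));
                ps.executeUpdate();
            }
            conn.commit(); // Phoenix connections do not auto-commit by default
        }
    }
}
{code}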

  was:
We have two methods to write data when uses JDBC API.
 #1. Uses _the exceuteUpdate_ method to execute a string that is an upsert SQL.
 #2. Uses the _prepareStatement_ method to set some objects and execute.

The _string_ data needs to convert to a new object by the schema information of 
tables. we'll use some date formatters to convert string data to object for 
Date/Time/Timestamp types when writes data and the formatters are used when 
reads data as well.

  

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
Reading the table by the getString methods 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 *Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 

[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Description: 
We have two ways to write data when using the JDBC API:
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to new objects according to the
table's schema information. We use date formatters to convert string data to
objects for the Date/Time/Timestamp types when writing data, and the same
formatters are used when reading data.

 

*Uses default timezone test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}

*Uses GMT+8 test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}

We have a historical problem: in #1 we parse the string into
Date/Time/Timestamp objects with the time zone applied, which means the
actual data is changed when it is stored in the HBase table.

  was:
We have two methods to write data when uses JDBC API.
#1. Uses _the exceuteUpdate_ method to execute a string that is an upsert SQL.
#2. Uses the _prepareStatement_ method to set some object and execute.

The _string_ data needs to convert to a new object by the schema information of 
tables. we'll use some date formatters to convert string data to object for 
Date/Time/Timestamp types when writes data and the formatters are used when 
reads data as well.

  

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
Reading the table by the getString methods 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 *Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-

[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Attachment: DateTest.java

> The TimeZone is incorrectly used during writing or reading data
> ---
>
> Key: PHOENIX-5066
> URL: https://issues.apache.org/jira/browse/PHOENIX-5066
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 4.15.0, 5.1
>
> Attachments: DateTest.java
>
>
> We have two ways to write data when using the JDBC API:
> #1. Use the _executeUpdate_ method to execute an upsert SQL string.
> #2. Use the _prepareStatement_ method to set objects and execute.
> The _string_ data needs to be converted to new objects according to the
> table's schema information. We use date formatters to convert string data
> to objects for the Date/Time/Timestamp types when writing data, and the
> same formatters are used when reading data.
>   
> *Uses default timezone test*
>  Writing 3 records in three different ways:
> {code:java}
> UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
> {code}
> Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
> {code:java}
> 1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
> 2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
> 3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66
> {code}
> Reading the table with the getString method:
> {code:java}
> 1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
> 2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
> 3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
> {code}
>  
>  *Uses GMT+8 test*
>  Writing 3 records in three different ways:
> {code:java}
> UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
> {code}
> Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
> {code:java}
> 1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0
> 2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0
> 3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106
> {code}
> Reading the table with the getString method:
> {code:java}
> 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
> 2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
> 3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
> {code}
>  
> We have a historical problem: in #1 we parse the string into
> Date/Time/Timestamp objects with the time zone applied, which means the
> actual data is changed when it is stored in the HBase table.
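> A minimal sketch of the read side behind the tables above, assuming the
> date_test schema from this description; the JDBC URL is a placeholder.
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> public class DateReadTest {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn =
>                  DriverManager.getConnection("jdbc:phoenix:localhost");
>              Statement stmt = conn.createStatement();
>              ResultSet rs = stmt.executeQuery(
>                  "SELECT id, d, t, ts FROM date_test")) {
>             while (rs.next()) {
>                 // Typed getters return Date/Time/Timestamp objects.
>                 System.out.println(rs.getInt(1) + " | " + rs.getDate(2)
>                     + " | " + rs.getTime(3) + " | " + rs.getTimestamp(4));
>                 // getString renders through the date formatters, which is
>                 // where the time-zone difference shows up.
>                 System.out.println(rs.getInt(1) + " | " + rs.getString(2)
>                     + " | " + rs.getString(3) + " | " + rs.getString(4));
>             }
>         }
>     }
> }
> {code}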



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-v1.patch
>
>
> This issue is about the region server being killed when one region is
> closing while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level
> cached/shared connections, which can interrupt the index and index-state
> updates of other regions.
> -- Region 1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
>  
> -- Region 2: Writing index updates
> Index updates fail because the connections are closed, which leads to a
> RejectedExecutionException or a null connection. This triggers
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get
> the syscat table using the cached connections. Since it cannot reach
> SYSCAT, it triggers KillServerFailurePolicy. The check in
> CoprocessorHConnectionTableFactory#getTable():
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-v1.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-v1.patch, PHOENIX-4993.patch
>
>
> This issue is about the region server being killed when one region is
> closing while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level
> cached/shared connections, which can interrupt the index and index-state
> updates of other regions.
> -- Region 1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
>  
> -- Region 2: Writing index updates
> Index updates fail because the connections are closed, which leads to a
> RejectedExecutionException or a null connection. This triggers
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get
> the syscat table using the cached connections. Since it cannot reach
> SYSCAT, it triggers KillServerFailurePolicy. The check in
> CoprocessorHConnectionTableFactory#getTable():
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Description: 
We have two ways to write data when using the JDBC API:
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to new objects according to the
table's schema information. We use date formatters to convert string data to
objects for the Date/Time/Timestamp types when writing data, and the same
formatters are used when reading data.

 

*Uses default timezone test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}

*Uses GMT+8 test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}

We have a historical problem: in #1 we parse the string into
Date/Time/Timestamp objects with the time zone applied, which means the
actual data is changed when it is stored in the HBase table.

  was:
We have two methods to write data when uses JDBC API.
#1. Uses _the exceuteUpdate_ method to execute a string that is an upsert SQL.
#2. Uses the _prepareStatement_ method to set some object and execute.

The _string_ data needs to convert to a new object by the schema information of 
tables. we'll use some date formatters to convert string data to object for 
Date/Time/Timestamp types when writes data and the formatters are used when 
reads data as well.

 

 

## Uses default timezone test

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.

 
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
 

Reading the table by the getString methods

 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 

## Uses GMT+8 test

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000

[jira] [Created] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)
Jaanai created PHOENIX-5066:
---

 Summary: The TimeZone is incorrectly used during writing or 
reading data
 Key: PHOENIX-5066
 URL: https://issues.apache.org/jira/browse/PHOENIX-5066
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
Reporter: Jaanai
Assignee: Jaanai
 Fix For: 4.15.0, 5.1


We have two ways to write data when using the JDBC API:
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to new objects according to the
table's schema information. We use date formatters to convert string data to
objects for the Date/Time/Timestamp types when writing data, and the same
formatters are used when reading data.

 

*Uses default timezone test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}

*Uses GMT+8 test*

Writing 3 records in three different ways:
{code:java}
UPSERT INTO date_test VALUES (1, '2018-12-10 15:40:47', '2018-12-10 15:40:47', '2018-12-10 15:40:47')
UPSERT INTO date_test VALUES (2, to_date('2018-12-10 15:40:47'), to_time('2018-12-10 15:40:47'), to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3); stmt.setDate(2, date); stmt.setTime(3, time); stmt.setTimestamp(4, ts);
{code}
Reading the table with the getObject (getDate/getTime/getTimestamp) methods:
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106
{code}
Reading the table with the getString method:
{code:java}
1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}

We have a historical problem: in #1 we parse the string into
Date/Time/Timestamp objects with the time zone applied, which means the
actual data is changed when it is stored in the HBase table.
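A small sketch of pinning the formatter time zone on the client, assuming
the phoenix.query.dateFormatTimeZone client property behaves as documented;
the JDBC URL is a placeholder.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class DateFormatTimeZoneExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Ask the client to parse date/time string literals in GMT
        // instead of the JVM default time zone.
        props.setProperty("phoenix.query.dateFormatTimeZone", "GMT");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:phoenix:localhost", props)) {
            // writes/reads on this connection now use consistent parsing
        }
    }
}
{code}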



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)