[jira] [Created] (PHOENIX-5891) Ensure that code coverage does not drop with subsequent commits

2020-05-11 Thread Chinmay Kulkarni (Jira)
Chinmay Kulkarni created PHOENIX-5891:
-

 Summary: Ensure that code coverage does not drop with subsequent 
commits
 Key: PHOENIX-5891
 URL: https://issues.apache.org/jira/browse/PHOENIX-5891
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.3, 4.15.0
Reporter: Chinmay Kulkarni


With [PHOENIX-5842|https://issues.apache.org/jira/browse/PHOENIX-5842], we 
added Jacoco code coverage to Hadoop QA precommit runs. We should add a check 
to test-patch.sh to ensure that the code coverage numbers do not drop when 
applying a new patch. 

This check can also ensure that overall code coverage stays above a fixed 
threshold.
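A minimal sketch of what such a check could look like, assuming the Jacoco CSV report layout; the file names, helper names, and the 80% threshold are illustrative assumptions, not the actual test-patch.sh logic:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the proposed check: compare overall instruction
# coverage from two Jacoco CSV reports (pre-patch vs. patched) and fail
# when coverage drops or sits below a fixed threshold.
set -euo pipefail

# Overall instruction coverage (%) from a jacoco.csv report; columns 4 and 5
# are INSTRUCTION_MISSED and INSTRUCTION_COVERED.
coverage_pct() {
  awk -F, 'NR > 1 { missed += $4; covered += $5 }
           END { printf "%.2f", ((missed + covered) ? 100 * covered / (missed + covered) : 0) }' "$1"
}

# Exits non-zero if coverage dropped or is under the threshold.
check_coverage() {           # usage: check_coverage before.csv after.csv
  local threshold=80         # assumed fixed threshold
  local before after
  before=$(coverage_pct "$1")
  after=$(coverage_pct "$2")
  echo "coverage: before=${before}% after=${after}%"
  awk -v b="$before" -v a="$after" -v t="$threshold" \
      'BEGIN { exit !(a >= b && a >= t) }'
}

# Demo with synthetic reports: coverage rises from 80% to 90%, so the check passes.
printf 'GROUP,PACKAGE,CLASS,MISSED,COVERED\nphoenix,pkg,A,20,80\n' > before.csv
printf 'GROUP,PACKAGE,CLASS,MISSED,COVERED\nphoenix,pkg,A,10,90\n' > after.csv
check_coverage before.csv after.csv && echo "+1 code coverage"
```

In a precommit setting the "-1"/"+1" result would be appended to the Hadoop QA comment like the other test-patch.sh checks.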



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5891) Ensure that code coverage does not drop with subsequent commits

2020-05-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5891:
--
Labels: quality-improvement  (was: )

> Ensure that code coverage does not drop with subsequent commits
> ---
>
> Key: PHOENIX-5891
> URL: https://issues.apache.org/jira/browse/PHOENIX-5891
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 4.14.3
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: quality-improvement
>
> With [PHOENIX-5842|https://issues.apache.org/jira/browse/PHOENIX-5842], we 
> added Jacoco code coverage to Hadoop QA precommit runs. We should add a check 
> to test-patch.sh to ensure that the code coverage numbers do not drop when 
> applying a new patch. 
> This check can also ensure that overall code coverage stays above a fixed 
> threshold.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5891) Ensure that code coverage does not drop with subsequent commits

2020-05-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5891:
--
Fix Version/s: 4.16.0

> Ensure that code coverage does not drop with subsequent commits
> ---
>
> Key: PHOENIX-5891
> URL: https://issues.apache.org/jira/browse/PHOENIX-5891
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: quality-improvement
> Fix For: 4.16.0
>
>
> With [PHOENIX-5842|https://issues.apache.org/jira/browse/PHOENIX-5842], we 
> added Jacoco code coverage to Hadoop QA precommit runs. We should add a check 
> to test-patch.sh to ensure that the code coverage numbers do not drop when 
> applying a new patch. 
> This check can also ensure that overall code coverage stays above a fixed 
> threshold.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5891) Ensure that code coverage does not drop with subsequent commits

2020-05-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5891:
--
Fix Version/s: 5.1.0

> Ensure that code coverage does not drop with subsequent commits
> ---
>
> Key: PHOENIX-5891
> URL: https://issues.apache.org/jira/browse/PHOENIX-5891
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: quality-improvement
> Fix For: 5.1.0, 4.16.0
>
>
> With [PHOENIX-5842|https://issues.apache.org/jira/browse/PHOENIX-5842], we 
> added Jacoco code coverage to Hadoop QA precommit runs. We should add a check 
> to test-patch.sh to ensure that the code coverage numbers do not drop when 
> applying a new patch. 
> This check can also ensure that overall code coverage stays above a fixed 
> threshold.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5891) Ensure that code coverage does not drop with subsequent commits

2020-05-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5891:
--
Affects Version/s: 5.0.0

> Ensure that code coverage does not drop with subsequent commits
> ---
>
> Key: PHOENIX-5891
> URL: https://issues.apache.org/jira/browse/PHOENIX-5891
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: quality-improvement
>
> With [PHOENIX-5842|https://issues.apache.org/jira/browse/PHOENIX-5842], we 
> added Jacoco code coverage to Hadoop QA precommit runs. We should add a check 
> to test-patch.sh to ensure that the code coverage numbers do not drop when 
> applying a new patch. 
> This check can also ensure that overall code coverage stays above a fixed 
> threshold.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Are we embracing Java 8?

2020-05-11 Thread Chinmay Kulkarni
Sorry to slightly digress, but based on these discussions, shouldn't we be
setting JAVA_HOME to point to JDK 7 in all our builds? Especially based on
Andrew's comment above:
"*Java 7 is not forwards compatible with Java 8, especially with respect
to JRE internals. What that means is if you compile something with Java 8,
even if you are not using Java 8 language features, you won't be able to
run it on a Java 7 runtime.*"

Our 4.x build is configured with Java 1.8:
[image: 4.x.png]
In our master build, we don't even specify JAVA_HOME, so essentially it
picks up the latest version. Precommit gets JAVA_HOME set from
$PHOENIX/dev/jenkensEnv.sh.

There have been issues where master builds failed because they used JDK 11 (
https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/71/consoleFull),
search for "JAVA VERSION" in the log file. The same build does not fail if
run with JDK 8.

Also, doesn't this mean we are already compiling with Java 8 (at least in
the 4.x branches) or am I missing something?
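One way to check what a build actually targeted, independent of what the pom or JAVA_HOME claims, is to read the class-file major version (51 = Java 7, 52 = Java 8, 55 = Java 11) straight out of the compiled classes. A small sketch; the synthetic Demo.class header below is fabricated for illustration:

```shell
# Print the class-file major version of a compiled class:
# 51 = Java 7, 52 = Java 8, 55 = Java 11.
class_major_version() {
  # Bytes 0-3 of a .class file are the CAFEBABE magic, bytes 4-5 the minor
  # version, and bytes 6-7 the big-endian major version.
  od -An -j6 -N2 -tu1 "$1" | awk '{ print $1 * 256 + $2 }'
}

# Demo with a synthetic header: magic CA FE BA BE, minor 0, major 0x34 (52).
printf '\xca\xfe\xba\xbe\x00\x00\x00\x34' > Demo.class
class_major_version Demo.class   # prints 52, i.e. a Java 8-targeted class
```

Running this against classes extracted from a phoenix-server jar built on the 4.x branch would show whether it really was compiled for Java 7 or Java 8.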



On Wed, Apr 22, 2020 at 8:11 PM Ankit Singhal 
wrote:

> Just linking a discussion we had one and a half years back on the same
> topic [1]. Considering nothing has changed since then (Java 7 was EOLed in
> July 2015 and Phoenix 5.0 was also out), it mainly comes down to the
> inconvenience of working with the old code style and the extra effort
> required to create patches for each branch.
>
> If we decide to upgrade to Java 8 (Option 3), don't we require the
> following changes?
>
> * Major version upgrade from 4.x-HBase-1.x to 5.x-HBase-1.x: since upgrading
> 4.x branches to Java 8 breaks compatibility with the HBase and Java runtime,
> we need to ensure that we adhere to dependency compatibility between minor
> releases, as a Phoenix minor upgrade is expected to be just a server jar
> drop and a restart (even though we don't explicitly cover the Java runtime
> in our backward compatibility guarantees [2] as HBase does [3], people are
> used to it by now).
>
> * Release notes and convenience libraries: though we would say that it is
> compatible with HBase 1.x, it would now require Java 8, so we need to be
> explicit with our convenience libraries as well, e.g. append "jdk8" to the
> name or something similar (phoenix-server-HBase-1.x-jdk8.jar), and also
> provide clarity on the version upgrade.
>
> * Avoiding issues during runtime: though Java 7 and Java 8 are said to be
> binary compatible and Java 7 classes are expected to run fine on a Java 8
> runtime, it has been called out that there are rare instances that can
> cause issues due to some incompatibilities between the JRE and JDK [4]
> (as Andrew also called out, and he might have observed the same).
>
>
> Though I agree with Istvan that creating multiple patches and dealing with
> a change in code style every time we switch branches puts additional load
> on the contributor, IMHO we should wait for HBase-1.x to upgrade Java
> before we do, to avoid some of the issues listed above related to Option 3.
>
> Option 2 would be preferred once we decide whether we want to diverge from
> feature parity in our major releases and do only limited fixes for the 4.x
> branch. So basically I'm also in favor of Option 1 (continuing Java 7 for
> HBase-1.x, and its code style as much as possible for 5.x).
>
> [1]
>
> https://lists.apache.org/thread.html/970db0b5cb0560c49654e450aafb8fb759596384cefe4f02808e80cc%40%3Cdev.phoenix.apache.org%3E
> [2]http://phoenix.apache.org/upgrading.html
> [3] https://hbase.apache.org/book.html#hbase.versioning
> [4]
>
> https://www.oracle.com/technetwork/java/javase/8-compatibility-guide-2156366.html#A999387
>
> Regards,
> Ankit Singhal
>


-- 
Chinmay Kulkarni


[jira] [Assigned] (PHOENIX-5891) Ensure that code coverage does not drop with subsequent commits

2020-05-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-5891:
-

Assignee: Chinmay Kulkarni

> Ensure that code coverage does not drop with subsequent commits
> ---
>
> Key: PHOENIX-5891
> URL: https://issues.apache.org/jira/browse/PHOENIX-5891
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
>  Labels: quality-improvement
> Fix For: 5.1.0, 4.16.0
>
>
> With [PHOENIX-5842|https://issues.apache.org/jira/browse/PHOENIX-5842], we 
> added Jacoco code coverage to Hadoop QA precommit runs. We should add a check 
> to test-patch.sh to ensure that the code coverage numbers do not drop when 
> applying a new patch. 
> This check can also ensure that overall code coverage stays above a fixed 
> threshold.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-5842) Code Coverage tool for Phoenix

2020-05-11 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reopened PHOENIX-5842:
---

> Code Coverage tool for Phoenix
> --
>
> Key: PHOENIX-5842
> URL: https://issues.apache.org/jira/browse/PHOENIX-5842
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Sandeep Guggilam
>Assignee: Sandeep Guggilam
>Priority: Major
>  Labels: quality-improvement
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5842.4.x.v1.patch, PHOENIX-5842.4.x.v2.patch, 
> PHOENIX-5842.master.v1.patch, PHOENIX-5842_addendum.patch
>
>
> Currently we don't have any code coverage tool for Phoenix. One is required 
> for us to measure test coverage, and it helps us improve the test suite 
> further until we reach, say, 80% coverage.
> The test coverage results can also be integrated into the Hadoop QA run of 
> the precommit build so that reviewers can look at the report and see whether 
> the added code has enough coverage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5842) Code Coverage tool for Phoenix

2020-05-11 Thread Sandeep Guggilam (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Guggilam updated PHOENIX-5842:
--
Attachment: PHOENIX-5842_addendum.patch

> Code Coverage tool for Phoenix
> --
>
> Key: PHOENIX-5842
> URL: https://issues.apache.org/jira/browse/PHOENIX-5842
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Sandeep Guggilam
>Assignee: Sandeep Guggilam
>Priority: Major
>  Labels: quality-improvement
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5842.4.x.v1.patch, PHOENIX-5842.4.x.v2.patch, 
> PHOENIX-5842.master.v1.patch, PHOENIX-5842_addendum.patch
>
>
> Currently we don't have any code coverage tool for Phoenix. One is required 
> for us to measure test coverage, and it helps us improve the test suite 
> further until we reach, say, 80% coverage.
> The test coverage results can also be integrated into the Hadoop QA run of 
> the precommit build so that reviewers can look at the report and see whether 
> the added code has enough coverage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5889) row_timestamp wrong column values for on duplicate key update

2020-05-11 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariusz Szpatuśko updated PHOENIX-5889:
---
Description: 
{code:java}
CREATE TABLE STORE.TEST1(
    eventStartTimestamp timestamp not null,
    val1 varchar,
    val2 varchar
    CONSTRAINT pk PRIMARY KEY (eventStartTimestamp desc ROW_TIMESTAMP)
) VERSIONS=1, DATA_BLOCK_ENCODING='FAST_DIFF', COMPRESSION='SNAPPY',
BLOOMFILTER='ROW', UPDATE_CACHE_FREQUENCY=90, SALT_BUCKETS=32;

hbase(main):054:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW                     COLUMN+CELL
0 row(s) in 0.0520 seconds

hbase(main):037:0> describe 'STORE.TEST1'
Table STORE.TEST1 is ENABLED
STORE.TEST1, {TABLE_ATTRIBUTES => {coprocessor$1 =>
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
coprocessor$2 =>
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
coprocessor$3 =>
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|',
coprocessor$4 =>
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|',
coprocessor$5 =>
'|org.apache.phoenix.hbase.index.IndexRegionObserver|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder'}
COLUMN FAMILIES DESCRIPTION
{NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false',
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF',
TTL => 'FOREVER', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0',
BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}

UPSERT INTO STORE.TEST1(EVENTSTARTTIMESTAMP, VAL1, VAL2) values ('2002-05-30 
09:30:10','a','b');

{code}
For table with row_timestamp key
{code:java}
hbase(main):055:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW   COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x00\x00\x00\x00, timestamp=102275101, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0B, timestamp=102275101, value=a
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0C, timestamp=102275101, value=b
1 row(s) in 0.0340 seconds

upsert into store.test1(EVENTSTARTTIMESTAMP, VAL2) values('2002-05-30
09:30:10','test222') on duplicate key update val1='testa',val2='testb';

hbase(main):058:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW   COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x00\x00\x00\x00, timestamp=1589202202193, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0B, timestamp=1589202202193, value=testa
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0C, timestamp=1589202202193, value=testb
1 row(s) in 0.0510 seconds

{code}
When a row is upserted and then updated via ON DUPLICATE KEY UPDATE for the
same key, the cell timestamps are updated as well.

This means ON DUPLICATE KEY UPDATE does not work correctly for ROW_TIMESTAMP
tables:
{code:java}
select * from store.test1 where eventstarttimestamp
{code}


[jira] [Updated] (PHOENIX-5889) row_timestamp wrong column values for on duplicate key update

2020-05-11 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariusz Szpatuśko updated PHOENIX-5889:
---
Description: 
{code:java}
CREATE TABLE STORE.TEST1(
    eventStartTimestamp timestamp not null,
    val1 varchar,
    val2 varchar
    CONSTRAINT pk PRIMARY KEY (eventStartTimestamp desc ROW_TIMESTAMP)
) VERSIONS=1, DATA_BLOCK_ENCODING='FAST_DIFF', COMPRESSION='SNAPPY',
BLOOMFILTER='ROW', UPDATE_CACHE_FREQUENCY=90, SALT_BUCKETS=32;

hbase(main):037:0> describe 'STORE.TEST1'
Table STORE.TEST1 is ENABLED
STORE.TEST1, {TABLE_ATTRIBUTES => {coprocessor$1 =>
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
coprocessor$2 =>
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
coprocessor$3 =>
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|',
coprocessor$4 =>
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|',
coprocessor$5 =>
'|org.apache.phoenix.hbase.index.IndexRegionObserver|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder'}
COLUMN FAMILIES DESCRIPTION
{NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false',
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF',
TTL => 'FOREVER', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0',
BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}

UPSERT INTO STORE.TEST1(EVENTSTARTTIMESTAMP, VAL1, VAL2) values ('2002-05-30 
09:30:10','a','b');

{code}
For table with row_timestamp key
{code:java}
hbase(main):055:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW   COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x00\x00\x00\x00, timestamp=102275101, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0B, timestamp=102275101, value=a
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0C, timestamp=102275101, value=b
1 row(s) in 0.0340 seconds

upsert into store.test1(EVENTSTARTTIMESTAMP, VAL2) values('2002-05-30
09:30:10','test222') on duplicate key update val1='testa',val2='testb';

hbase(main):058:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW   COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x00\x00\x00\x00, timestamp=1589202202193, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0B, timestamp=1589202202193, value=testa
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0C, timestamp=1589202202193, value=testb
1 row(s) in 0.0510 seconds

{code}
When a row is upserted and then updated via ON DUPLICATE KEY UPDATE for the
same key, the cell timestamps are updated as well.

This means ON DUPLICATE KEY UPDATE does not work correctly for ROW_TIMESTAMP
tables:
{code:java}
select * from store.test1 where eventstarttimestamp
{code}

[jira] [Created] (PHOENIX-5889) row_timestamp wrong column values for on duplicate key update

2020-05-11 Thread Jira
Mariusz Szpatuśko created PHOENIX-5889:
--

 Summary: row_timestamp wrong column values for on duplicate key 
update
 Key: PHOENIX-5889
 URL: https://issues.apache.org/jira/browse/PHOENIX-5889
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.14.3
Reporter: Mariusz Szpatuśko


{code:java}
CREATE TABLE STORE.TEST1(
    eventStartTimestamp timestamp not null,
    val1 varchar,
    val2 varchar
    CONSTRAINT pk PRIMARY KEY (eventStartTimestamp desc ROW_TIMESTAMP)
) VERSIONS=1, DATA_BLOCK_ENCODING='FAST_DIFF', COMPRESSION='SNAPPY',
BLOOMFILTER='ROW', UPDATE_CACHE_FREQUENCY=90, SALT_BUCKETS=32;

hbase(main):054:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW                     COLUMN+CELL
0 row(s) in 0.0520 seconds

hbase(main):037:0> describe 'STORE.TEST1'
Table STORE.TEST1 is ENABLED
STORE.TEST1, {TABLE_ATTRIBUTES => {coprocessor$1 =>
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
coprocessor$2 =>
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
coprocessor$3 =>
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|',
coprocessor$4 =>
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|',
coprocessor$5 =>
'|org.apache.phoenix.hbase.index.IndexRegionObserver|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder'}
COLUMN FAMILIES DESCRIPTION
{NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false',
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF',
TTL => 'FOREVER', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0',
BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}

UPSERT INTO STORE.TEST1(EVENTSTARTTIMESTAMP, VAL1, VAL2) values ('2002-05-30
09:30:10','a','b');

hbase(main):055:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW                     COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   column=0:\x00\x00\x00\x00, timestamp=102275101, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   column=0:\x80\x0B, timestamp=102275101, value=a
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   column=0:\x80\x0C, timestamp=102275101, value=b
1 row(s) in 0.0340 seconds

upsert into store.test1(EVENTSTARTTIMESTAMP, VAL2) values('2002-05-30
09:30:10','test222') on duplicate key update val1='testa',val2='testb';

hbase(main):058:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW                     COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   column=0:\x00\x00\x00\x00, timestamp=1589202202193, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   column=0:\x80\x0B, timestamp=1589202202193, value=testa
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   column=0:\x80\x0C, timestamp=1589202202193, value=testb
1 row(s) in 0.0510 seconds
{code}
When a row is upserted and then updated via ON DUPLICATE KEY UPDATE for the
same key, the cell timestamps are updated as well.

This means ON DUPLICATE KEY UPDATE does not work correctly for ROW_TIMESTAMP
tables:
{code:java}
select * from store.test1 where eventstarttimestamp
{code}

[jira] [Updated] (PHOENIX-5889) row_timestamp wrong column values for on duplicate key update

2020-05-11 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariusz Szpatuśko updated PHOENIX-5889:
---
Description: 
{code:java}
CREATE TABLE STORE.TEST1(
    eventStartTimestamp timestamp not null,
    val1 varchar,
    val2 varchar
    CONSTRAINT pk PRIMARY KEY (eventStartTimestamp desc ROW_TIMESTAMP)
) VERSIONS=1, DATA_BLOCK_ENCODING='FAST_DIFF', COMPRESSION='SNAPPY',
BLOOMFILTER='ROW', UPDATE_CACHE_FREQUENCY=90, SALT_BUCKETS=32;

hbase(main):037:0> describe 'STORE.TEST1'
Table STORE.TEST1 is ENABLED
STORE.TEST1, {TABLE_ATTRIBUTES => {coprocessor$1 =>
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
coprocessor$2 =>
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
coprocessor$3 =>
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|',
coprocessor$4 =>
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|',
coprocessor$5 =>
'|org.apache.phoenix.hbase.index.IndexRegionObserver|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder'}
COLUMN FAMILIES DESCRIPTION
{NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false',
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF',
TTL => 'FOREVER', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0',
BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}

UPSERT INTO STORE.TEST1(EVENTSTARTTIMESTAMP, VAL1, VAL2) values ('2002-05-30 
09:30:10','a','b');

{code}
For a table with a ROW_TIMESTAMP key:
{code:java}
hbase(main):055:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW   COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x00\x00\x00\x00, timestamp=102275101, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0B, timestamp=102275101, value=a
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0C, timestamp=102275101, value=b
1 row(s) in 0.0340 seconds

upsert into store.test1(EVENTSTARTTIMESTAMP, VAL2) values('2002-05-30 
09:30:10','test222') on duplicate key update val1='testa',val2='testb';

hbase(main):058:0> scan 'STORE.TEST1',{RAW=>true,VERSIONS=>3}
ROW   COLUMN+CELL
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x00\x00\x00\x00, timestamp=1589202202193, value=x
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0B, timestamp=1589202202193, value=testa
 \x1A\x7F\xFF\xFF\x11\xDFJ\x13/\xFF\xFF\xFF\xFF   
column=0:\x80\x0C, timestamp=1589202202193, value=testb
1 row(s) in 0.0510 seconds

{code}
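As a cross-check, the DESC-encoded row key shown in the scans can be decoded by hand. This is only a sketch under two assumptions: that the leading byte is the salt bucket (the table uses SALT_BUCKETS=32), and that the next 8 bytes are the millisecond portion of the DESC-encoded TIMESTAMP key.

```python
# Hypothetical decode of the row key from the scan output above.
# Assumed layout: 1 salt byte, then 8 DESC-encoded bytes of epoch millis.
key = b"\x1a\x7f\xff\xff\x11\xdf\x4a\x13\x2f\xff\xff\xff\xff"

salt = key[0]                                     # salt bucket
asc = bytes(b ^ 0xFF for b in key[1:9])           # undo DESC byte inversion
millis = int.from_bytes(asc, "big") ^ (1 << 63)   # undo sign-bit flip
print(salt, millis)  # 26 1022751010000
```

The decoded value is the epoch-millis form of '2002-05-30 09:30:10' (UTC), i.e. the row key still carries the original ROW_TIMESTAMP value even after the cell timestamps were rewritten.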
When data is upserted and then updated via ON DUPLICATE KEY UPDATE for the same key, the cell timestamp is updated as well.

This means ON DUPLICATE KEY UPDATE does not work correctly for ROW_TIMESTAMP columns.
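The timestamps in the scans can be checked outside HBase. This assumes the values are UTC epoch milliseconds; the first-write timestamp appears line-wrapped in the shell output above, so only its leading digits are visible.

```python
from datetime import datetime, timezone

def epoch_millis(ts: str) -> int:
    """Convert 'YYYY-MM-DD HH:MM:SS' to epoch milliseconds, assuming UTC."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

# Cell timestamp expected from the ROW_TIMESTAMP value of the first UPSERT:
print(epoch_millis("2002-05-30 09:30:10"))  # 1022751010000

# Cell timestamp observed after ON DUPLICATE KEY UPDATE:
print(datetime.fromtimestamp(1589202202193 / 1000, tz=timezone.utc).date())  # 2020-05-11
```

So the first write carried the ROW_TIMESTAMP value, while the ODKU write stamped the cells with the current (2020) wall-clock time instead.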
{code:java}
select * from store.test1 where eventstarttimestamp ...
{code}

[jira] [Created] (PHOENIX-5890) Port PHOENIX-5799 to master

2020-05-11 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-5890:


 Summary: Port PHOENIX-5799 to master
 Key: PHOENIX-5890
 URL: https://issues.apache.org/jira/browse/PHOENIX-5890
 Project: Phoenix
  Issue Type: Improvement
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby






--
This message was sent by Atlassian Jira
(v8.3.4#803005)