[jira] [Commented] (PHOENIX-3799) Error on tracing query with "union all"

2017-06-08 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043949#comment-16043949
 ] 

Karan Mehta commented on PHOENIX-3799:
--

Hey Marco, 
Could you confirm the Phoenix version number as well as the HTrace version 
number that it is using? So far, Phoenix has been using a version of HTrace 
that doesn't throw an exception on this error.

[~samarthjain] I looked into the issue. For a union query, we trace the 
individual iterators of all the queries involved. Since this happens in a 
single thread, the parent span of each new iterator ends up being the previous 
iterator's span, but it should not be tied to it, since the iterators are 
independent of each other. For example, in the query {{SELECT K FROM 
TABLEA UNION ALL SELECT K FROM TABLEB UNION ALL SELECT K FROM TABLEC}}, three 
separate iterators are created for the three queries in the same thread. 
Hence, the spans look something like this:
{code}
Parent Span
  + TABLEA Iterator Span
  + TABLEB Iterator Span
  + TABLEC Iterator Span
+ HBase Spans for TABLEC
+ HBase Spans for TABLEB
+ HBase Spans for TABLEA
{code}
When these iterators are closed, the span held in the thread-local state does 
not match the span in the current scope, resulting in this exception. 

What we really want is something like this:
{code}
Parent Span
  + TABLEA Iterator Span
  + HBase Spans for TABLEA
  + TABLEB Iterator Span
  + HBase Spans for TABLEB
  + TABLEC Iterator Span
  + HBase Spans for TABLEC
{code}
However, such a thing is not possible unless we initialize the iterators in 
parallel. Please suggest how we should handle this. For now, we don't have any 
parent span with a TraceScope covering the complete query. 
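
For reference, here is a minimal standalone sketch of the out-of-order close 
(assuming the org.apache.htrace 3.x API that appears in the stack trace below; 
the class and span names are made up for illustration):
{code}
import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class UnionTraceRepro {
    public static void main(String[] args) {
        // Parent scope for the whole statement; sample always so spans exist.
        TraceScope parent = Trace.startSpan("Parent Span", Sampler.ALWAYS);

        // Each UNION ALL branch opens its iterator span on the same thread,
        // so every new span becomes a child of the previously opened one.
        TraceScope tableA = Trace.startSpan("TABLEA Iterator Span");
        TraceScope tableB = Trace.startSpan("TABLEB Iterator Span");
        TraceScope tableC = Trace.startSpan("TABLEC Iterator Span");

        // Closing the iterators in creation order closes TABLEA's scope while
        // TABLEC's span is still the current span for the thread, which is
        // exactly the "Tried to close trace span ... but it is not the current
        // span" client error reported on this issue.
        tableA.close();
        tableB.close();
        tableC.close();
        parent.close();
    }
}
{code}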

> Error on tracing query with "union all" 
> 
>
> Key: PHOENIX-3799
> URL: https://issues.apache.org/jira/browse/PHOENIX-3799
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix on Cloudera 5.8
>Reporter: Marco
>  Labels: TRACING
>
> When I try to enable tracing for a query with a "union all" clause, I receive 
> an error and the process stops execution.
> Error: 
> 0: jdbc:phoenix:x> select sum(1) as num from (
> . . . . . . . . . . . . . . > select count(1) as num from my_tab where 
> meas_ym ='201601'
> . . . . . . . . . . . . . . > union all
> . . . . . . . . . . . . . . > select count(1) as num from my_tab where 
> meas_ym ='201602');
> 17/04/20 15:39:38 ERROR htrace.Tracer: Tried to close trace span 
> {"i":"7a2caddba3cc1d5d","s":"f262306696ff7120","b":1492702777540,"d":"Creating
>  basic query for [CLIENT 10-CHUNK 9560319 ROWS 2516584015 BYTES PARALLEL 
> 1-WAY RANGE SCAN OVER MY_TAB ['201601'], SERVER FILTER BY FIRST KEY ONLY, 
> SERVER AGGREGATE INTO SINGLE 
> ROW]","p":["f6e9e018136584b0"],"t":[{"t":1492702777542,"m":"First request 
> completed"}]} but it is not the current span for the main thread.  You have 
> probably forgotten to close or detach 
> {"i":"7a2caddba3cc1d5d","s":"f1a3a546476f1c94","b":1492702777541,"d":"Creating
>  basic query for [CLIENT 36-CHUNK 40590914 ROWS 10380911994 BYTES PARALLEL 
> 1-WAY RANGE SCAN OVER MY_TAB ['201602'], SERVER FILTER BY FIRST KEY ONLY, 
> SERVER AGGREGATE INTO SINGLE ROW]","p":["f262306696ff7120"]}
> java.lang.RuntimeException: Tried to close trace span 
> {"i":"7a2caddba3cc1d5d","s":"f262306696ff7120","b":1492702777540,"d":"Creating
>  basic query for [CLIENT 10-CHUNK 9560319 ROWS 2516584015 BYTES PARALLEL 
> 1-WAY RANGE SCAN OVER MY_TAB ['201601'], SERVER FILTER BY FIRST KEY ONLY, 
> SERVER AGGREGATE INTO SINGLE 
> ROW]","p":["f6e9e018136584b0"],"t":[{"t":1492702777542,"m":"First request 
> completed"}]} but it is not the current span for the main thread.  You have 
> probably forgotten to close or detach 
> {"i":"7a2caddba3cc1d5d","s":"f1a3a546476f1c94","b":1492702777541,"d":"Creating
>  basic query for [CLIENT 36-CHUNK 40590914 ROWS 10380911994 BYTES PARALLEL 
> 1-WAY RANGE SCAN OVER MY_TAB ['201602'], SERVER FILTER BY FIRST KEY ONLY, 
> SERVER AGGREGATE INTO SINGLE ROW]","p":["f262306696ff7120"]}
> at org.apache.htrace.Tracer.clientError(Tracer.java:60)
> at org.apache.htrace.TraceScope.close(TraceScope.java:90)
> at 
> org.apache.phoenix.trace.TracingIterator.close(TracingIterator.java:46)
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.close(DelegateResultIterator.java:39)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:104)
> at 
> 

[jira] [Commented] (PHOENIX-3898) Empty result set after split with local index on multi-tenant table

2017-06-08 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043824#comment-16043824
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3898:
--

The local index column family is there, but the local index data was not 
written back: during compaction we might be seeking with the wrong key, which 
is what causes the rewrite of the data to be skipped. Looking into finding the 
root cause.

> Empty result set after split with local index on multi-tenant table
> ---
>
> Key: PHOENIX-3898
> URL: https://issues.apache.org/jira/browse/PHOENIX-3898
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.11.0
>
>
> While testing I encountered this (seems related to PHOENIX-3832):
> {code}
> CREATE TABLE IF NOT EXISTS TM (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT 
> NULL,PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID 
> CHAR(15), CREATED_BY_ID VARCHAR,FH VARCHAR, DT VARCHAR, OS VARCHAR, NS 
> VARCHAR, OFN VARCHAR CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI 
> ))  VERSIONS=1 ,MULTI_TENANT=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON TM (PKF, CRD, PKP, EHI);
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 30|
> +---+
> {code}
> {code}
> hbase(main):002:0> split 'TM'
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 0 |
> +---+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3898) Empty result set after split with local index on multi-tenant table

2017-06-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043816#comment-16043816
 ] 

Ankit Singhal commented on PHOENIX-3898:


bq. That's a bit strange to query the index table directly, Ankit Singhal 
(though still a bug, of course).
Yeah, this is just for demonstration; I could show it with and without the 
NO_INDEX hint too.

bq. Do you get the same behaviour if you select from the data table?
If I do count(*) on the data table with the NO_INDEX hint, I get the correct 
data, which is present in the data table column family.

bq. Any difference if you quit sqlline, start it again, and reissue the count 
query
We checked after the split that the data for the local index column family is 
not even present in the physical HBase table. 

> Empty result set after split with local index on multi-tenant table
> ---
>
> Key: PHOENIX-3898
> URL: https://issues.apache.org/jira/browse/PHOENIX-3898
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.11.0
>
>
> While testing I encountered this (seems related to PHOENIX-3832):
> {code}
> CREATE TABLE IF NOT EXISTS TM (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT 
> NULL,PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID 
> CHAR(15), CREATED_BY_ID VARCHAR,FH VARCHAR, DT VARCHAR, OS VARCHAR, NS 
> VARCHAR, OFN VARCHAR CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI 
> ))  VERSIONS=1 ,MULTI_TENANT=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON TM (PKF, CRD, PKP, EHI);
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 30|
> +---+
> {code}
> {code}
> hbase(main):002:0> split 'TM'
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 0 |
> +---+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3928) Consider retrying once after any SQLException

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043733#comment-16043733
 ] 

James Taylor commented on PHOENIX-3928:
---

WDYT, [~sukuna...@gmail.com]? Is this something you might have spare cycles to 
pursue? An example of a test would be 
QueryCompilerTest.testOnDupKeyWithGlobalIndex(), where the first client drops 
the index and then the second client attempts an UPSERT with an ON DUPLICATE 
KEY clause (see the sketch below). There are many other examples of this among 
the negative tests that depend on the state of the metadata.
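
A rough JDBC sketch of that scenario (the table, index, and connection URL are 
made up for illustration; the real test lives in QueryCompilerTest):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class OnDupKeyAfterIndexDropSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn1 = DriverManager.getConnection("jdbc:phoenix:localhost");
             Connection conn2 = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn1.createStatement().execute(
                "CREATE TABLE T (K VARCHAR PRIMARY KEY, V INTEGER)");
            conn1.createStatement().execute("CREATE INDEX IDX ON T (V)");

            // Second client caches the metadata for T (including IDX).
            conn2.createStatement().executeQuery("SELECT * FROM T").close();

            // First client drops the index out from under the second client.
            conn1.createStatement().execute("DROP INDEX IDX ON T");

            // The second client's atomic upsert now runs against stale
            // metadata; with a retry after the SQLException it would succeed
            // once the metadata cache is refreshed and the index is gone.
            conn2.createStatement().execute(
                "UPSERT INTO T (K, V) VALUES ('a', 1) ON DUPLICATE KEY UPDATE V = V + 1");
            conn2.commit();
        }
    }
}
{code}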

> Consider retrying once after any SQLException
> -
>
> Key: PHOENIX-3928
> URL: https://issues.apache.org/jira/browse/PHOENIX-3928
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.12.0
>
>
> There are more cases in which a retry would successfully execute than just 
> when a MetaDataEntityNotFoundException is thrown. For example, certain error 
> cases that depend on the state of the metadata would work on retry if the 
> metadata had changed. We may want to retry on any SQLException and simply 
> loop through the tables involved (plan.getSourceRefs().iterator()), and if 
> any metadata was updated, go ahead and retry once.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3928) Consider retrying once after any SQLException

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3928:
--
Fix Version/s: 4.12.0

> Consider retrying once after any SQLException
> -
>
> Key: PHOENIX-3928
> URL: https://issues.apache.org/jira/browse/PHOENIX-3928
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.12.0
>
>
> There are more cases in which a retry would successfully execute than just 
> when a MetaDataEntityNotFoundException is thrown. For example, certain error 
> cases that depend on the state of the metadata would work on retry if the 
> metadata had changed. We may want to retry on any SQLException and simply 
> loop through the tables involved (plan.getSourceRefs().iterator()), and if 
> any metadata was updated, go ahead and retry once.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3928) Consider retrying once after any SQLException

2017-06-08 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3928:
-

 Summary: Consider retrying once after any SQLException
 Key: PHOENIX-3928
 URL: https://issues.apache.org/jira/browse/PHOENIX-3928
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


There are more cases in which a retry would successfully execute than just 
when a MetaDataEntityNotFoundException is thrown. For example, certain error 
cases that depend on the state of the metadata would work on retry if the 
metadata had changed. We may want to retry on any SQLException and simply loop 
through the tables involved (plan.getSourceRefs().iterator()), and if any 
metadata was updated, go ahead and retry once.
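
A rough sketch of that retry-once idea (QueryPlan, TableRef, and ResultIterator 
are existing Phoenix types; refreshCacheAndCheckChanged() is a hypothetical 
helper standing in for whatever refreshes the client-side metadata cache):
{code}
import java.sql.SQLException;

import org.apache.phoenix.compile.QueryPlan;
import org.apache.phoenix.iterate.ResultIterator;
import org.apache.phoenix.schema.TableRef;

class RetryOnceSketch {
    ResultIterator executeWithOneRetry(QueryPlan plan) throws SQLException {
        try {
            return plan.iterator();
        } catch (SQLException e) {
            boolean metadataChanged = false;
            // Loop through the tables involved in the plan.
            for (TableRef ref : plan.getSourceRefs()) {
                metadataChanged |= refreshCacheAndCheckChanged(ref); // hypothetical
            }
            if (metadataChanged) {
                return plan.iterator(); // retry once with the refreshed metadata
            }
            throw e;
        }
    }

    private boolean refreshCacheAndCheckChanged(TableRef ref) {
        // Placeholder: would re-resolve the table and report whether its
        // metadata changed since the plan was compiled.
        return false;
    }
}
{code}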



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3918) Ensure all function implementations handle null args correctly

2017-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043725#comment-16043725
 ] 

Hadoop QA commented on PHOENIX-3918:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12872168/PHOENIX-3918-v2.patch
  against master branch at commit b9bb918610c04e21b27df8d3fe1c42df508a96f0.
  ATTACHMENT ID: 12872168

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
50 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 5 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("CREATE TABLE " + tableName + "(k1 
decimal, k2 decimal, constraint pk primary key (k1))");
+ResultSet rs = conn.createStatement().executeQuery("SELECT 
coalesce(null, null) FROM " + tableName);
+String ddl = "CREATE TABLE " + tableName + " ( pk VARCHAR(10) NOT 
NULL, val INTEGER CONSTRAINT PK PRIMARY KEY (pk))";
+PreparedStatement ps = conn.prepareStatement("UPSERT INTO " + 
tableName + " (pk,val) VALUES (?,?)");
+ResultSet rs = conn.createStatement().executeQuery("SELECT ENCODE(val, 
'BASE62') FROM " + tableName);
+.executeQuery("SELECT OCTET_LENGTH(vb1), 
OCTET_LENGTH(b), OCTET_LENGTH(vb2) FROM " + TABLE_NAME);
+conn.createStatement().execute("CREATE TABLE " + tableName + " (k 
CHAR(3) PRIMARY KEY, v1 VARCHAR, v2 INTEGER)");
+if (ptr.getLength()!=0 && 
!secondChild.getDataType().isCoercibleTo(firstChild.getDataType(), 
secondChild.getDataType().toObject(ptr))) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CreateTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CountDistinctCompressionIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1054//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1054//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1054//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1054//console

This message is automatically generated.

> Ensure all function implementations handle null args correctly
> --
>
> Key: PHOENIX-3918
> URL: https://issues.apache.org/jira/browse/PHOENIX-3918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3918.patch, PHOENIX-3918-v2.patch
>
>
> {code}
> testBothParametersNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.272 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.testBothParametersNull(TimezoneOffsetFunctionIT.java:130)
> timezoneParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.273 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.timezoneParameterNull(TimezoneOffsetFunctionIT.java:151)
> dateParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  Time 
> elapsed: 2.254 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 0
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.dateParameterNull(TimezoneOffsetFunctionIT.java:172)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3927) Upgrade to surefire/failsafe version 2.20

2017-06-08 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3927:
-

 Summary: Upgrade to surefire/failsafe version 2.20
 Key: PHOENIX-3927
 URL: https://issues.apache.org/jira/browse/PHOENIX-3927
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.11.0


We should upgrade to surefire/failsafe version 2.20 as it fixes a number of 
issues when tests are parallelized.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043707#comment-16043707
 ] 

Samarth Jain commented on PHOENIX-3926:
---

Not too sure what is going on. I see the individual test cases passing, though.
https://builds.apache.org/job/Phoenix-master/1652/testReport/junit/org.apache.phoenix.end2end/UpsertSelectIT/
https://builds.apache.org/job/Phoenix-master/1652/testReport/junit/org.apache.phoenix.end2end/QueryDatabaseMetaDataIT/
So likely a failure happened in the tearDown method? 

{code}
@AfterClass
public static void doTeardown() throws Exception {
dropNonSystemTables();
}
{code}

One other possibility is that for some reason HBase is throwing a retriable 
IOException and Phoenix just eventually gives up after retrying n (35?) times, 
resulting in a SQL operation timed out exception. I have seen that happen at 
least a couple of times on the QA runs, one example being 
ArithmeticQueryIT#testDecimalUpsertSelect. It doesn't always happen, though. 

> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3926.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3917:
--
Attachment: PHOENIX-3917_v2.patch

Any reason this simpler patch won't fix the issue, [~gsbiju]?

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-3917_v2.patch
>
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for the 
> query {{SELECT A_ID FROM TABLE}}, where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}}, where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value of 
> 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043682#comment-16043682
 ] 

James Taylor commented on PHOENIX-3926:
---

We seem to be seeing this exception more frequently: 
java.sql.SQLTimeoutException: Operation timed out. Did anything change wrt 
timeouts recently? Should we increase 
QueryServicesTestImpl.DEFAULT_THREAD_TIMEOUT_MS? It looks like it's set to 5 
minutes, which seems like it should be more than enough.

Any ideas, [~samarthjain]?

> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3926.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3918) Ensure all function implementations handle null args correctly

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043672#comment-16043672
 ] 

James Taylor commented on PHOENIX-3918:
---

Thanks for the updated patch, [~tdsilva]. It's good that we're identifying 
built-in functions that are potentially not handling null correctly. The patch 
doesn't look right, though. Let me try to explain the difference between 
returning false versus true and handling null:
- a function should only return false if any child expressions return false. 
This means "I don't have enough information to calculate a result". This can 
really only happen when executing on the server side during filter evaluation. 
In this case, the expression evaluation is only seeing partial state: 
essentially each Cell is fed into the expression and an attempt is made to 
evaluate it. For example {{WHERE A + B < 5}} might see the Cell for A first, 
but not yet have seen B, so false would be returned for the + expression and 
subsequently by the < expression. Once B is seen, then the expression can be 
evaluated.
- in the case that a child returns true, it may have evaluated to null. This 
is the case when ptr.getLength() == 0. When a child returns a value, it will 
always return the same value, so there's no need to continue evaluating it 
again and again. There are compound expressions such as AND and OR that take 
advantage of this. If false is returned, though, these compound expressions 
would be evaluated again and again. So this code isn't correct:
{code}
-if (!offsetExpr.evaluate(tuple, ptr)) return false;
+if (!offsetExpr.evaluate(tuple, ptr) || ptr.getLength() == 0) 
return false;
{code}
- For most (but not all) built-in functions, when null is encountered (i.e. a 
child expression evaluated successfully, returning true with ptr.getLength() == 
0), the function can immediately return null. So most often, the code will look 
like this (a fuller sketch follows this list):
{code}
if (!offsetExpr.evaluate(tuple, ptr)) return false;
if (ptr.getLength() == 0) return true;
{code}
- The exceptions are expressions like || and ARRAY_CAT, which combine children 
together, simply skipping null children.
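
To make that pattern concrete, here is a hedged sketch of an evaluate() 
implementation for a made-up single-argument built-in following the rules 
above (Expression, Tuple, and ImmutableBytesWritable are the types the 
existing built-ins already use):
{code}
@Override
public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
    Expression arg = getChildren().get(0);
    // Child couldn't be evaluated yet (partial state during server-side filter
    // evaluation): return false so evaluation is attempted again later.
    if (!arg.evaluate(tuple, ptr)) {
        return false;
    }
    // Child evaluated to null: for most built-ins the function result is also
    // null, which is signalled by returning true with an empty ptr.
    if (ptr.getLength() == 0) {
        return true;
    }
    // ... normal evaluation writes the result into ptr here ...
    return true;
}
{code}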

> Ensure all function implementations handle null args correctly
> --
>
> Key: PHOENIX-3918
> URL: https://issues.apache.org/jira/browse/PHOENIX-3918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3918.patch, PHOENIX-3918-v2.patch
>
>
> {code}
> testBothParametersNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.272 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.testBothParametersNull(TimezoneOffsetFunctionIT.java:130)
> timezoneParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.273 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.timezoneParameterNull(TimezoneOffsetFunctionIT.java:151)
> dateParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  Time 
> elapsed: 2.254 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 0
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.dateParameterNull(TimezoneOffsetFunctionIT.java:172)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043668#comment-16043668
 ] 

Hudson commented on PHOENIX-3926:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1652 (See 
[https://builds.apache.org/job/Phoenix-master/1652/])
PHOENIX-3926 Do not use EncodedColumnQualifierCellsList optimization (samarth: 
rev b9bb918610c04e21b27df8d3fe1c42df508a96f0)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/util/EncodedColumnsUtil.java


> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3926.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-3898) Empty result set after split with local index on multi-tenant table

2017-06-08 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reassigned PHOENIX-3898:


Assignee: Rajeshbabu Chintaguntla
Priority: Blocker  (was: Major)

> Empty result set after split with local index on multi-tenant table
> ---
>
> Key: PHOENIX-3898
> URL: https://issues.apache.org/jira/browse/PHOENIX-3898
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.11.0
>
>
> While testing I encountered this (seems related to PHOENIX-3832):
> {code}
> CREATE TABLE IF NOT EXISTS TM (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT 
> NULL,PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID 
> CHAR(15), CREATED_BY_ID VARCHAR,FH VARCHAR, DT VARCHAR, OS VARCHAR, NS 
> VARCHAR, OFN VARCHAR CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI 
> ))  VERSIONS=1 ,MULTI_TENANT=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON TM (PKF, CRD, PKP, EHI);
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 30|
> +---+
> {code}
> {code}
> hbase(main):002:0> split 'TM'
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 0 |
> +---+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3817) VerifyReplication using SQL

2017-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043578#comment-16043578
 ] 

Hadoop QA commented on PHOENIX-3817:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12872140/PHOENIX-3817.v2.patch
  against master branch at commit 9b402043896fdeb78a236542bddf88e4a7f300e7.
  ATTACHMENT ID: 12872140

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
49 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 6 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+PreparedStatement sourceStmt = 
conn.prepareStatement(String.format(UPSERT_USER, sourceTableName));
+PreparedStatement targetStmt = 
conn.prepareStatement(String.format(UPSERT_USER, targetTableName));
+private void upsertData(PreparedStatement stmt, String tenantId, String 
userId, int age) throws SQLException {
+Path outputDir = new 
Path(job.getConfiguration().get("mapreduce.output.fileoutputformat.outputdir"));
+public RecordReader 
createRecordReader(InputSplit inputSplit,
+GOODROWS, BADROWS, ONLY_IN_SOURCE_TABLE_ROWS, 
ONLY_IN_TARGET_TABLE_ROWS, CONTENT_DIFFERENT_ROWS
+throw new IllegalArgumentException("Unexpected extra parameters: " 
+ cmdLine.getArgList());
+final String currentScnValue = 
configuration.get(PhoenixConfigurationUtil.CURRENT_SCN_VALUE);
+// since we can't set a scn on connections with txn set TX_SCN 
attribute so that the max time range is set by BaseScannerRegionObserver
+scan.setAttribute(BaseScannerRegionObserver.TX_SCN, 
Bytes.toBytes(Long.valueOf(txnScnValue)));

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.mapreduce.VerifyReplicationToolIT

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1048//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1048//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1048//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1048//console

This message is automatically generated.

> VerifyReplication using SQL
> ---
>
> Key: PHOENIX-3817
> URL: https://issues.apache.org/jira/browse/PHOENIX-3817
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different 
> table or cluster. For example, application topologies may map data for 
> specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an 
> SQL query, a target table, and an optional target cluster. The tool would 
> compare data returned by the query on the different tables and update various 
> result counters (similar to HBase's VerifyReplication).
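
As a hedged illustration of the comparison idea only (connection URLs and the 
query are made up; the actual tool in the attached patch runs as a MapReduce 
job and aligns rows on keys rather than by position):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class VerifyReplicationSketch {
    public static void main(String[] args) throws SQLException {
        String query = "SELECT TENANT_ID, USER_ID, AGE FROM USERS ORDER BY TENANT_ID, USER_ID";
        long good = 0, different = 0, onlyInSource = 0, onlyInTarget = 0;
        try (Connection src = DriverManager.getConnection("jdbc:phoenix:source-zk");
             Connection tgt = DriverManager.getConnection("jdbc:phoenix:target-zk");
             ResultSet s = src.createStatement().executeQuery(query);
             ResultSet t = tgt.createStatement().executeQuery(query)) {
            boolean hasS = s.next(), hasT = t.next();
            // Simplified: rows are paired by position, which assumes both
            // result sets are identically ordered.
            while (hasS && hasT) {
                if (rowsEqual(s, t)) { good++; } else { different++; }
                hasS = s.next();
                hasT = t.next();
            }
            while (hasS) { onlyInSource++; hasS = s.next(); }
            while (hasT) { onlyInTarget++; hasT = t.next(); }
        }
        System.out.printf("GOODROWS=%d CONTENT_DIFFERENT_ROWS=%d "
                + "ONLY_IN_SOURCE_TABLE_ROWS=%d ONLY_IN_TARGET_TABLE_ROWS=%d%n",
                good, different, onlyInSource, onlyInTarget);
    }

    private static boolean rowsEqual(ResultSet a, ResultSet b) throws SQLException {
        ResultSetMetaData md = a.getMetaData();
        for (int i = 1; i <= md.getColumnCount(); i++) {
            Object va = a.getObject(i);
            Object vb = b.getObject(i);
            if (va == null ? vb != null : !va.equals(vb)) {
                return false;
            }
        }
        return true;
    }
}
{code}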



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: RC for 4.11.0 by end of week?

2017-06-08 Thread rajeshb...@apache.org
Hi James,

Ankit pointed out PHOENIX-3898. Looking into it. Seems like a very bad bug.

Thanks,
Rajeshbabu.

On Thu, Jun 8, 2017 at 3:07 PM, James Taylor  wrote:

> Unless anyone knows of any outstanding, required JIRAs for 4.11.0, I think
> we're ready for an RC.
>
> Thanks,
> James
>
> On Wed, May 31, 2017 at 2:28 PM, Samarth Jain 
> wrote:
>
> > I would like to get a fix for
> > https://issues.apache.org/jira/browse/PHOENIX-3836 in too. Taking a look
> > at
> > it right now.
> >
> > On Wed, May 31, 2017 at 1:37 PM, Josh Elser 
> wrote:
> >
> > > https://issues.apache.org/jira/browse/PHOENIX-3891 would be nice to
> get
> > > in. If a user runs into this, things would blow up fairly quickly :).
> > Test
> > > provided, just needs a review.
> > >
> > > Let me get a patch up for 3895.
> > >
> > >
> > > On 5/30/17 8:07 PM, James Taylor wrote:
> > >
> > >> We've got a bunch of good bug fixes in our 4.x branches *and* we have
> > >> support for HBase 1.3 which is great. How about we shoot for an RC by
> > the
> > >> end of the week? Here's a few JIRAs that we can potentially include:
> > >>
> > >> Must Haves
> > >> 
> > >> PHOENIX-3797 Local Index - Compaction fails on table with local index
> > due
> > >> to non-increasing bloom keys
> > >> PHOENIX-3870 Backward compatibility fails between v4.9.0 and head of
> 4.x
> > >> PHOENIX-3896 Fix test failures related to tracing changes
> > >>
> > >> Nice to Haves
> > >> ---
> > >> PHOENIX-3819 Reduce Phoenix load on RS hosting SYSTEM.CATALOG region
> > >> PHOENIX-3815 Only disable indexes on which write failures occurred
> > >> PHOENIX-3895 Update to Apache Calcite Avatica 1.10.0
> > >> PHOENIX-3773 Implement FIRST_VALUES aggregate function
> > >> PHOENIX-3612 Make tracking of max allowed number of mutations bytes
> > based
> > >> instead of row based
> > >>
> > >> Is there other pending work we should consider for 4.11.0?
> > >>
> > >> Thanks,
> > >> James
> > >>
> > >>
> >
>


[jira] [Commented] (PHOENIX-3925) Disallow usage of ON DUPLICATE KEY clause on tables with global secondary indexes

2017-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043549#comment-16043549
 ] 

Hudson commented on PHOENIX-3925:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1651 (See 
[https://builds.apache.org/job/Phoenix-master/1651/])
PHOENIX-3925 Disallow usage of ON DUPLICATE KEY clause on tables with 
(jamestaylor: rev 9b402043896fdeb78a236542bddf88e4a7f300e7)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java


> Disallow usage of ON DUPLICATE KEY clause on tables with global secondary 
> indexes
> -
>
> Key: PHOENIX-3925
> URL: https://issues.apache.org/jira/browse/PHOENIX-3925
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3925.patch
>
>
> For reasons of service protection, rather than just documenting that you 
> shouldn't use the ON DUPLICATE KEY clause on tables with global secondary 
> indexes, we should instead throw an exception if this is attempted. See the 
> reasons listed here: 
> https://phoenix.apache.org/atomic_upsert.html#Limitations



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043539#comment-16043539
 ] 

Hadoop QA commented on PHOENIX-3926:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12872148/PHOENIX-3926.patch
  against master branch at commit 9b839b56ff54881a8627ab64fd440898ff0cad94.
  ATTACHMENT ID: 12872148

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 5 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
com.datatorrent.stram.StramMiniClusterTest.testAddAttributeToArgs(StramMiniClusterTest.java:623)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1053//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1053//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1053//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1053//console

This message is automatically generated.

> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3926.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: RC for 4.11.0 by end of week?

2017-06-08 Thread James Taylor
Unless anyone knows of any outstanding, required JIRAs for 4.11.0, I think
we're ready for an RC.

Thanks,
James

On Wed, May 31, 2017 at 2:28 PM, Samarth Jain 
wrote:

> I would like to get a fix for
> https://issues.apache.org/jira/browse/PHOENIX-3836 in too. Taking a look
> at
> it right now.
>
> On Wed, May 31, 2017 at 1:37 PM, Josh Elser  wrote:
>
> > https://issues.apache.org/jira/browse/PHOENIX-3891 would be nice to get
> > in. If a user runs into this, things would blow up fairly quickly :).
> Test
> > provided, just needs a review.
> >
> > Let me get a patch up for 3895.
> >
> >
> > On 5/30/17 8:07 PM, James Taylor wrote:
> >
> >> We've got a bunch of good bug fixes in our 4.x branches *and* we have
> >> support for HBase 1.3 which is great. How about we shoot for an RC by
> the
> >> end of the week? Here's a few JIRAs that we can potentially include:
> >>
> >> Must Haves
> >> 
> >> PHOENIX-3797 Local Index - Compaction fails on table with local index
> due
> >> to non-increasing bloom keys
> >> PHOENIX-3870 Backward compatibility fails between v4.9.0 and head of 4.x
> >> PHOENIX-3896 Fix test failures related to tracing changes
> >>
> >> Nice to Haves
> >> ---
> >> PHOENIX-3819 Reduce Phoenix load on RS hosting SYSTEM.CATALOG region
> >> PHOENIX-3815 Only disable indexes on which write failures occurred
> >> PHOENIX-3895 Update to Apache Calcite Avatica 1.10.0
> >> PHOENIX-3773 Implement FIRST_VALUES aggregate function
> >> PHOENIX-3612 Make tracking of max allowed number of mutations bytes
> based
> >> instead of row based
> >>
> >> Is there other pending work we should consider for 4.11.0?
> >>
> >> Thanks,
> >> James
> >>
> >>
>


[jira] [Resolved] (PHOENIX-3910) Tests in UpgradeIT failing after PHOENIX-3823

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3910.
---
Resolution: Fixed

> Tests in UpgradeIT failing after PHOENIX-3823
> -
>
> Key: PHOENIX-3910
> URL: https://issues.apache.org/jira/browse/PHOENIX-3910
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Maddineni Sukumar
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3910.patch, PHOENIX-3910.v2.patch
>
>
> [~sukuna...@gmail.com], can you please take a look? 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3910) Tests in UpgradeIT failing after PHOENIX-3823

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043521#comment-16043521
 ] 

James Taylor commented on PHOENIX-3910:
---

Reviewed already and checked into master and 4.x branches. Thanks for the fix, 
[~sukuna...@gmail.com]!

> Tests in UpgradeIT failing after PHOENIX-3823
> -
>
> Key: PHOENIX-3910
> URL: https://issues.apache.org/jira/browse/PHOENIX-3910
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Maddineni Sukumar
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3910.patch, PHOENIX-3910.v2.patch
>
>
> [~sukuna...@gmail.com], can you please take a look? 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3910) Tests in UpgradeIT failing after PHOENIX-3823

2017-06-08 Thread Maddineni Sukumar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043512#comment-16043512
 ] 

Maddineni Sukumar commented on PHOENIX-3910:


[~jamestaylor] , please review this patch when ever you have some free time. 
Thanks. 

> Tests in UpgradeIT failing after PHOENIX-3823
> -
>
> Key: PHOENIX-3910
> URL: https://issues.apache.org/jira/browse/PHOENIX-3910
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Maddineni Sukumar
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3910.patch, PHOENIX-3910.v2.patch
>
>
> [~sukuna...@gmail.com], can you please take a look? 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043481#comment-16043481
 ] 

James Taylor commented on PHOENIX-3926:
---

+1

> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3926.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3926:
--
Attachment: PHOENIX-3926.patch

It turns out HBase doesn't allow setting columns on raw scans. This is why 
index rebuilding was failing with this exception:
{code}
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Cannot specify any column for a 
raw scan
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:193)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2130)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5744)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5716)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5721)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2669)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2649)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2631)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2625)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2491)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2753)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

{code}

This patch basically disables using the encoded list optimization when the scan 
is a raw scan.

[~jamestaylor], please review.
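
A hedged sketch of the kind of guard this implies (class and method names are 
made up for illustration; the actual change is in EncodedColumnsUtil and 
BaseResultIterators):
{code}
import org.apache.hadoop.hbase.client.Scan;

final class RawScanGuardSketch {
    static boolean useEncodedQualifierListOptimization(Scan scan, boolean tableUsesEncodedColumns) {
        // HBase rejects raw scans that specify columns ("Cannot specify any
        // column for a raw scan"), so skip the EncodedColumnQualifierCellsList
        // optimization whenever the scan is raw.
        if (scan.isRaw()) {
            return false;
        }
        return tableUsesEncodedColumns;
    }
}
{code}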

> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3926.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reassigned PHOENIX-3926:
-

Assignee: Samarth Jain

> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3926.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3926) Do not use EncodedColumnQualifierCellsList optimization when doing raw scans

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3926:
--
Summary: Do not use EncodedColumnQualifierCellsList optimization when doing 
raw scans  (was: Fix failing MutableIndexFailureIT tests)

> Do not use EncodedColumnQualifierCellsList optimization when doing raw scans
> 
>
> Key: PHOENIX-3926
> URL: https://issues.apache.org/jira/browse/PHOENIX-3926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.11.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3923) TimezoneOffsetFunctionIT failing after PHOENIX-3913

2017-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043432#comment-16043432
 ] 

Hudson commented on PHOENIX-3923:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1650 (See 
[https://builds.apache.org/job/Phoenix-master/1650/])
PHOENIX-3923 TimezoneOffsetFunctionIT failing after PHOENIX-3913 (jamestaylor: 
rev 7e26add9c75beefdf260bed0e9180673bc3be136)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/TimezoneOffsetFunction.java


> TimezoneOffsetFunctionIT failing after PHOENIX-3913
> ---
>
> Key: PHOENIX-3923
> URL: https://issues.apache.org/jira/browse/PHOENIX-3923
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3923.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3913) Support PArrayDataType.appendItemToArray to append item to array when null or empty

2017-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043433#comment-16043433
 ] 

Hudson commented on PHOENIX-3913:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1650 (See 
[https://builds.apache.org/job/Phoenix-master/1650/])
PHOENIX-3923 TimezoneOffsetFunctionIT failing after PHOENIX-3913 (jamestaylor: 
rev 7e26add9c75beefdf260bed0e9180673bc3be136)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/TimezoneOffsetFunction.java


> Support PArrayDataType.appendItemToArray to append item to array when null or 
> empty 
> 
>
> Key: PHOENIX-3913
> URL: https://issues.apache.org/jira/browse/PHOENIX-3913
> Project: Phoenix
>  Issue Type: Task
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3913_4.x-HBase-0.98.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3817) VerifyReplication using SQL

2017-06-08 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043434#comment-16043434
 ] 

Andrew Purtell commented on PHOENIX-3817:
-

FWIW, HBASE-17448 is not in a shipping version of HBase and isn't expected 
until 2.0.0 or 1.4.0.

> VerifyReplication using SQL
> ---
>
> Key: PHOENIX-3817
> URL: https://issues.apache.org/jira/browse/PHOENIX-3817
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different 
> table or cluster. For example, application topologies may map data for 
> specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an 
> SQL query, a target table, and an optional target cluster. The tool would 
> compare data returned by the query on the different tables and update various 
> result counters (similar to HBase's VerifyReplication).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043435#comment-16043435
 ] 

Hudson commented on PHOENIX-3917:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1650 (See 
[https://builds.apache.org/job/Phoenix-master/1650/])
Revert "PHOENIX-3917 RowProjector#getEstimatedRowByteSize() returns (samarth: 
rev 75401fcfc8c98f8894d49a66b96cd726d7aba925)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/ProjectionCompiler.java


> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for the 
> query {{SELECT A_ID FROM TABLE}}, where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}}, where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value of 
> 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3817) VerifyReplication using SQL

2017-06-08 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-3817:
-
Attachment: PHOENIX-3817.v2.patch

Tested v1 patch with HBase 1.3.1 clusters and security enabled. Found a few 
issues:
# HBASE-17448 made hbase-hadoop2-compat a client dependency, but the jar is not 
added as a dependency for MapReduce jobs. [~apurtell] committed a fix 
(HBASE-18184), but it has not been released. I created PHOENIX-3919 and 
attached a patch that will allow the jar to be added manually from Phoenix as a 
workaround.
# Token auth needed to be explicitly set up for reading from a target cluster 
that has security enabled.
# MultiTableRecordReader was swallowing SQLException when initializing. This 
was preventing fail-fast behavior for MapReduce jobs when the client was not 
able to talk to the target cluster.

Attaching v2 patch with the following:
- Rebased to pull in changes from PHOENIX-3744
- Fixed #2 and #3 above

We'll need to commit PHOENIX-3919 and add a workaround for #1 until HBASE-18184 
is released.

Would appreciate a review when you have some spare cycles [~jamestaylor].

> VerifyReplication using SQL
> ---
>
> Key: PHOENIX-3817
> URL: https://issues.apache.org/jira/browse/PHOENIX-3817
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different 
> table or cluster. For example, application topologies may map data for 
> specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an 
> SQL query, a target table, and an optional target cluster. The tool would 
> compare data returned by the query on the different tables and update various 
> result counters (similar to HBase's VerifyReplication).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3923) TimezoneOffsetFunctionIT failing after PHOENIX-3913

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3923.
---
Resolution: Fixed

> TimezoneOffsetFunctionIT failing after PHOENIX-3913
> ---
>
> Key: PHOENIX-3923
> URL: https://issues.apache.org/jira/browse/PHOENIX-3923
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3923.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3925) Disallow usage of ON DUPLICATE KEY clause on tables with global secondary indexes

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3925.
---
Resolution: Fixed

> Disallow usage of ON DUPLICATE KEY clause on tables with global secondary 
> indexes
> -
>
> Key: PHOENIX-3925
> URL: https://issues.apache.org/jira/browse/PHOENIX-3925
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3925.patch
>
>
> For reasons of service protection, rather than just documenting that you 
> shouldn't use the ON DUPLICATE KEY clause on tables with global secondary 
> indexes, we should instead throw an exception if this is attempted. See the 
> reasons listed here: 
> https://phoenix.apache.org/atomic_upsert.html#Limitations



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3918) Ensure all function implementations handle null args correctly

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043357#comment-16043357
 ] 

James Taylor commented on PHOENIX-3918:
---

Let's hold off on completing this until 4.12.0 as I'm not 100% sure about all 
these changes. Let's just fix any that are leading to test failures for 4.11.0.

> Ensure all function implementations handle null args correctly
> --
>
> Key: PHOENIX-3918
> URL: https://issues.apache.org/jira/browse/PHOENIX-3918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3918.patch
>
>
> {code}
> testBothParametersNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.272 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.testBothParametersNull(TimezoneOffsetFunctionIT.java:130)
> timezoneParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.273 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.timezoneParameterNull(TimezoneOffsetFunctionIT.java:151)
> dateParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  Time 
> elapsed: 2.254 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 0
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.dateParameterNull(TimezoneOffsetFunctionIT.java:172)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3918) Ensure all function implementations handle null args correctly

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3918:
--
Fix Version/s: 4.12.0

> Ensure all function implementations handle null args correctly
> --
>
> Key: PHOENIX-3918
> URL: https://issues.apache.org/jira/browse/PHOENIX-3918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3918.patch
>
>
> {code}
> testBothParametersNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.272 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.testBothParametersNull(TimezoneOffsetFunctionIT.java:130)
> timezoneParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.273 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.timezoneParameterNull(TimezoneOffsetFunctionIT.java:151)
> dateParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  Time 
> elapsed: 2.254 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 0
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.dateParameterNull(TimezoneOffsetFunctionIT.java:172)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3925) Disallow usage of ON DUPLICATE KEY clause on tables with global secondary indexes

2017-06-08 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043344#comment-16043344
 ] 

Geoffrey Jacoby commented on PHOENIX-3925:
--

+1, thanks [~jamestaylor]. 

> Disallow usage of ON DUPLICATE KEY clause on tables with global secondary 
> indexes
> -
>
> Key: PHOENIX-3925
> URL: https://issues.apache.org/jira/browse/PHOENIX-3925
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3925.patch
>
>
> For reasons of service protection, rather than just documenting that you 
> shouldn't use the ON DUPLICATE KEY clause on tables with global secondary 
> indexes, we should instead throw an exception if this is attempted. See the 
> reasons listed here: 
> https://phoenix.apache.org/atomic_upsert.html#Limitations



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3925) Disallow usage of ON DUPLICATE KEY clause on tables with global secondary indexes

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3925:
--
Attachment: PHOENIX-3925.patch

Please review, [~gjacoby].

> Disallow usage of ON DUPLICATE KEY clause on tables with global secondary 
> indexes
> -
>
> Key: PHOENIX-3925
> URL: https://issues.apache.org/jira/browse/PHOENIX-3925
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3925.patch
>
>
> For the reasons of service protection, rather than just documenting that you 
> shouldn't use the ON DUPLICATE KEY clause on tables with global secondary 
> indexes, we should instead throw an exception if this is attempted. See 
> reasons listed here for the reason: 
> https://phoenix.apache.org/atomic_upsert.html#Limitations



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3926) Fix failing MutableIndexFailureIT tests

2017-06-08 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3926:
-

 Summary: Fix failing MutableIndexFailureIT tests
 Key: PHOENIX-3926
 URL: https://issues.apache.org/jira/browse/PHOENIX-3926
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Fix For: 4.11.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3925) Disallow usage of ON DUPLICATE KEY clause on tables with global secondary indexes

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043304#comment-16043304
 ] 

James Taylor commented on PHOENIX-3925:
---

Agreed. We'll update the docs when 4.11 is released.

> Disallow usage of ON DUPLICATE KEY clause on tables with global secondary 
> indexes
> -
>
> Key: PHOENIX-3925
> URL: https://issues.apache.org/jira/browse/PHOENIX-3925
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
>
> For reasons of service protection, rather than just documenting that you 
> shouldn't use the ON DUPLICATE KEY clause on tables with global secondary 
> indexes, we should instead throw an exception if this is attempted. See the 
> reasons listed here: 
> https://phoenix.apache.org/atomic_upsert.html#Limitations



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3925) Disallow usage of ON DUPLICATE KEY clause on tables with global secondary indexes

2017-06-08 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043297#comment-16043297
 ] 

Geoffrey Jacoby commented on PHOENIX-3925:
--

We should probably also make the language on the website stronger. Right now it 
says it's "supported" but "not recommended", as opposed to the other 
limitations, which use the stronger "may not". For 4.11 and up Phoenix will 
enforce this restriction, but it would be good for users of older versions 
reading the docs to get more clarity too. 

> Disallow usage of ON DUPLICATE KEY clause on tables with global secondary 
> indexes
> -
>
> Key: PHOENIX-3925
> URL: https://issues.apache.org/jira/browse/PHOENIX-3925
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
>
> For reasons of service protection, rather than just documenting that you 
> shouldn't use the ON DUPLICATE KEY clause on tables with global secondary 
> indexes, we should instead throw an exception if this is attempted. See the 
> reasons listed here: 
> https://phoenix.apache.org/atomic_upsert.html#Limitations



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043271#comment-16043271
 ] 

Hudson commented on PHOENIX-3917:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1649 (See 
[https://builds.apache.org/job/Phoenix-master/1649/])
PHOENIX-3917 RowProjector#getEstimatedRowByteSize() returns incorrect (samarth: 
rev 402f99ddc82ac49020b2a871377d6aabf3f9fa72)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/ProjectionCompiler.java


> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value 
> of 0 is incorrect.
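
To make the report easier to reproduce, here is a rough client-side sketch of how the value 
can be observed (an illustration only, not a committed test: the connection URL, table, and 
column names are assumptions, and it relies on unwrapping the statement to PhoenixStatement 
to get at the query plan):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.phoenix.compile.QueryPlan;
import org.apache.phoenix.jdbc.PhoenixStatement;

public class EstimatedRowByteSizeRepro {
    public static void main(String[] args) throws Exception {
        // Connection URL and table are placeholders for illustration only.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS T (A_ID VARCHAR PRIMARY KEY, A_DATA VARCHAR)");
            PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
            QueryPlan plan = stmt.optimizeQuery("SELECT A_ID, A_DATA FROM T");
            // Per this issue, this prints 0 rather than a positive estimate.
            System.out.println(plan.getProjector().getEstimatedRowByteSize());
        }
    }
}
{code}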



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3922) Update driver version to 4.11.0

2017-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043272#comment-16043272
 ] 

Hudson commented on PHOENIX-3922:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1649 (See 
[https://builds.apache.org/job/Phoenix-master/1649/])
PHOENIX-3922 Update driver version to 4.11.0 (samarth: rev 
9e085f905b39e9fb5c6936a2bcf41d209bcb46d1)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java


> Update driver version to 4.11.0
> ---
>
> Key: PHOENIX-3922
> URL: https://issues.apache.org/jira/browse/PHOENIX-3922
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3922.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3726) Error while upgrading system tables

2017-06-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043270#comment-16043270
 ] 

Ankit Singhal edited comment on PHOENIX-3726 at 6/8/17 7:17 PM:


[~bhaveshvv109], you can follow the steps below
{code}
* Clone the repository locally (git clone https://github.com/apache/phoenix.git)
* apply the attached patch (patch -p1 < PHOENIX-3726.patch)
{code}

> Error while upgrading system tables
> ---
>
> Key: PHOENIX-3726
> URL: https://issues.apache.org/jira/browse/PHOENIX-3726
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3726_addendum.patch, PHOENIX-3726.patch
>
>
> {code}
> Error: java.lang.IllegalArgumentException: Expected 4 system table only but 
> found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS] (state=,code=0)
> java.sql.SQLException: java.lang.IllegalArgumentException: Expected 4 system 
> table only but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, 
> SYSTEM.SEQUENCE, SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:661)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.lang.IllegalArgumentException: Expected 4 system table only 
> but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.java:3091)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$600(ConnectionQueryServicesImpl.java:260)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2418)
>   ... 20 more
> {code}
> ping [~giacomotaylor]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3726) Error while upgrading system tables

2017-06-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043270#comment-16043270
 ] 

Ankit Singhal commented on PHOENIX-3726:


[~bhaveshvv109], you can follow the steps below
{code}
* Clone the repository locally (git clone https://github.com/apache/phoenix.git)
* apply the attached patch (patch -p1 < PHOENIX-3726.patch)
{code}

> Error while upgrading system tables
> ---
>
> Key: PHOENIX-3726
> URL: https://issues.apache.org/jira/browse/PHOENIX-3726
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3726_addendum.patch, PHOENIX-3726.patch
>
>
> {code}
> Error: java.lang.IllegalArgumentException: Expected 4 system table only but 
> found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS] (state=,code=0)
> java.sql.SQLException: java.lang.IllegalArgumentException: Expected 4 system 
> table only but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, 
> SYSTEM.SEQUENCE, SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:661)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.lang.IllegalArgumentException: Expected 4 system table only 
> but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.java:3091)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$600(ConnectionQueryServicesImpl.java:260)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2418)
>   ... 20 more
> {code}
> ping [~giacomotaylor]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3917:
--
Fix Version/s: (was: 4.11.0)

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value 
> of 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reopened PHOENIX-3917:
---

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value 
> of 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043267#comment-16043267
 ] 

Samarth Jain commented on PHOENIX-3917:
---

Looks like this change is causing test failures.

{code}
testCoveredColumnUpdates[MutableIndexIT_localIndex=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.MutableIndexIT)
  Time elapsed: 5.984 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=B
at 
org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumnUpdates(MutableIndexIT.java:186)

Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 78.983 sec - 
in org.apache.phoenix.tx.ParameterizedTransactionIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.269 sec - 
in org.apache.phoenix.tx.TxCheckpointIT
Tests run: 304, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 1,023.291 sec 
<<< FAILURE! - in org.apache.phoenix.end2end.index.IndexIT
testSelectCF[IndexIT_localIndex=true,mutable=false,transactional=false,columnEncoded=false](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 5.826 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)

testSelectCF[IndexIT_localIndex=true,mutable=false,transactional=false,columnEncoded=true](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 5.654 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)

testSelectCF[IndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 5.702 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)

testSelectCF[IndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 5.424 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)

testSelectCF[IndexIT_localIndex=true,mutable=true,transactional=false,columnEncoded=false](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 5.422 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)

testSelectCF[IndexIT_localIndex=true,mutable=true,transactional=false,columnEncoded=true](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 5.439 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)

testSelectCF[IndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 5.834 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)

testSelectCF[IndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.IndexIT)
  Time elapsed: 7.504 sec  <<< ERROR!
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
Undefined column family. familyName=A
at 
org.apache.phoenix.end2end.index.IndexIT.testSelectCF(IndexIT.java:693)
{code}

 I will revert it for now since the overall effect of this change is minor. 
[~gsbiju], please take a look at these failures. We can target this in the next 
patch release.

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value 
> of 0 is incorrect.

[jira] [Created] (PHOENIX-3925) Disallow usage of ON DUPLICATE KEY clause on tables with global secondary indexes

2017-06-08 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3925:
-

 Summary: Disallow usage of ON DUPLICATE KEY clause on tables with 
global secondary indexes
 Key: PHOENIX-3925
 URL: https://issues.apache.org/jira/browse/PHOENIX-3925
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.11.0


For reasons of service protection, rather than just documenting that you 
shouldn't use the ON DUPLICATE KEY clause on tables with global secondary 
indexes, we should instead throw an exception if this is attempted. See the 
reasons listed here: 
https://phoenix.apache.org/atomic_upsert.html#Limitations
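
To illustrate the intent, a rough sketch of the kind of statement that should start failing 
once the check is in place (table and index names below are made up, and the exact 
SQLException/error code is whatever the patch settles on):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class OnDuplicateKeyWithGlobalIndex {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE T (K VARCHAR PRIMARY KEY, V BIGINT)");
            stmt.execute("CREATE INDEX IDX ON T (V)"); // global secondary index
            try {
                // With this change, the atomic upsert below should be rejected
                // instead of silently running against a globally indexed table.
                stmt.execute("UPSERT INTO T (K, V) VALUES ('a', 1) " +
                             "ON DUPLICATE KEY UPDATE V = V + 1");
            } catch (SQLException e) {
                System.out.println("Rejected as expected: " + e.getMessage());
            }
        }
    }
}
{code}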



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3863) Document DAYOFWEEK, DAYOFYEAR, and any other missing functions

2017-06-08 Thread Peter Conrad (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Conrad updated PHOENIX-3863:
--
Attachment: dayofweekandyear.diff

Added DAYOFWEEK() and DAYOFYEAR() to phoenix.csv

> Document DAYOFWEEK, DAYOFYEAR, and any other missing functions
> --
>
> Key: PHOENIX-3863
> URL: https://issues.apache.org/jira/browse/PHOENIX-3863
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Peter Conrad
> Attachments: dayofweekandyear.diff
>
>
> Looks like the above functions committed in PHOENIX-3201 were never 
> documented here: https://phoenix.apache.org/language/functions.html. We 
> should check if there are others too.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3863) Document DAYOFWEEK, DAYOFYEAR, and any other missing functions

2017-06-08 Thread Peter Conrad (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043208#comment-16043208
 ] 

Peter Conrad commented on PHOENIX-3863:
---

Update: taking a look now.

> Document DAYOFWEEK, DAYOFYEAR, and any other missing functions
> --
>
> Key: PHOENIX-3863
> URL: https://issues.apache.org/jira/browse/PHOENIX-3863
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Peter Conrad
>
> Looks like the above functions committed in PHOENIX-3201 were never 
> documented here: https://phoenix.apache.org/language/functions.html. We 
> should check if there are others too.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3924) Do not disable local indexes on write failure

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3924.
---
Resolution: Not A Problem

Actually, no change is necessary. Since local index writes won't go through our 
thread pool for updates, we'd never disable them.

> Do not disable local indexes on write failure
> -
>
> Key: PHOENIX-3924
> URL: https://issues.apache.org/jira/browse/PHOENIX-3924
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
>
> For our HBase-1.3 releases, we should not disable local indexes when a global 
> index write fails since updates to local indexes are atomic with data table 
> updates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3918) Ensure all function implementations handle null args correctly

2017-06-08 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043187#comment-16043187
 ] 

Thomas D'Silva commented on PHOENIX-3918:
-

I have attached a patch that fixes the functions up to OctetLengthFunction.
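
For reviewers, the general shape of the null handling being applied is roughly the following 
(a generic sketch of the convention, not an excerpt from the patch; the function class below 
is hypothetical):
{code}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.phoenix.expression.Expression;
import org.apache.phoenix.expression.function.ScalarFunction;
import org.apache.phoenix.schema.tuple.Tuple;
import org.apache.phoenix.schema.types.PDataType;
import org.apache.phoenix.schema.types.PVarchar;

// Hypothetical function used only to illustrate the null-argument convention:
// if a child expression evaluates to null (zero-length ptr), return null
// instead of trying to decode the bytes and failing.
public class ExampleNullSafeFunction extends ScalarFunction {
    @Override
    public String getName() {
        return "EXAMPLE_NULL_SAFE";
    }

    @Override
    public PDataType getDataType() {
        return PVarchar.INSTANCE;
    }

    @Override
    public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
        Expression arg = getChildren().get(0);
        if (!arg.evaluate(tuple, ptr)) {
            return false;   // argument not available yet
        }
        if (ptr.getLength() == 0) {
            return true;    // null in -> null out, skip decoding entirely
        }
        // ... decode ptr via arg.getDataType() and write the result back to ptr ...
        return true;
    }
}
{code}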

> Ensure all function implementations handle null args correctly
> --
>
> Key: PHOENIX-3918
> URL: https://issues.apache.org/jira/browse/PHOENIX-3918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-3918.patch
>
>
> {code}
> testBothParametersNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.272 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.testBothParametersNull(TimezoneOffsetFunctionIT.java:130)
> timezoneParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.273 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.timezoneParameterNull(TimezoneOffsetFunctionIT.java:151)
> dateParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  Time 
> elapsed: 2.254 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 0
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.dateParameterNull(TimezoneOffsetFunctionIT.java:172)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3924) Do not disable local indexes on write failure

2017-06-08 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3924:
-

 Summary: Do not disable local indexes on write failure
 Key: PHOENIX-3924
 URL: https://issues.apache.org/jira/browse/PHOENIX-3924
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.11.0


For our HBase-1.3 releases, we should not disable local indexes when a global 
index write fails since updates to local indexes are atomic with data table 
updates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3923) TimezoneOffsetFunctionIT failing after PHOENIX-3913

2017-06-08 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043186#comment-16043186
 ] 

Thomas D'Silva commented on PHOENIX-3923:
-

+1

> TimezoneOffsetFunctionIT failing after PHOENIX-3913
> ---
>
> Key: PHOENIX-3923
> URL: https://issues.apache.org/jira/browse/PHOENIX-3923
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3923.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3918) Ensure all function implementations handle null args correctly

2017-06-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3918:

Attachment: PHOENIX-3918.patch

> Ensure all function implementations handle null args correctly
> --
>
> Key: PHOENIX-3918
> URL: https://issues.apache.org/jira/browse/PHOENIX-3918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-3918.patch
>
>
> {code}
> testBothParametersNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.272 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.testBothParametersNull(TimezoneOffsetFunctionIT.java:130)
> timezoneParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  
> Time elapsed: 2.273 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Unknown timezone 
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.timezoneParameterNull(TimezoneOffsetFunctionIT.java:151)
> dateParameterNull(org.apache.phoenix.end2end.TimezoneOffsetFunctionIT)  Time 
> elapsed: 2.254 sec  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 0
>   at 
> org.apache.phoenix.end2end.TimezoneOffsetFunctionIT.dateParameterNull(TimezoneOffsetFunctionIT.java:172)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3923) TimezoneOffsetFunctionIT failing after PHOENIX-3913

2017-06-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3923:
--
Attachment: PHOENIX-3923.patch

Please review, [~tdsilva].

> TimezoneOffsetFunctionIT failing after PHOENIX-3913
> ---
>
> Key: PHOENIX-3923
> URL: https://issues.apache.org/jira/browse/PHOENIX-3923
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3923.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3923) TimezoneOffsetFunctionIT failing after PHOENIX-3913

2017-06-08 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3923:
-

 Summary: TimezoneOffsetFunctionIT failing after PHOENIX-3913
 Key: PHOENIX-3923
 URL: https://issues.apache.org/jira/browse/PHOENIX-3923
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.11.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3922) Update driver version to 4.11.0

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-3922.
---
Resolution: Fixed

> Update driver version to 4.11.0
> ---
>
> Key: PHOENIX-3922
> URL: https://issues.apache.org/jira/browse/PHOENIX-3922
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3922.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3922) Update driver version to 4.11.0

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043157#comment-16043157
 ] 

James Taylor commented on PHOENIX-3922:
---

+1

> Update driver version to 4.11.0
> ---
>
> Key: PHOENIX-3922
> URL: https://issues.apache.org/jira/browse/PHOENIX-3922
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3922.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3922) Update driver version to 4.11.0

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3922:
--
Attachment: PHOENIX-3922.patch

> Update driver version to 4.11.0
> ---
>
> Key: PHOENIX-3922
> URL: https://issues.apache.org/jira/browse/PHOENIX-3922
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3922.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3922) Update driver version to 4.11.0

2017-06-08 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-3922:
-

 Summary: Update driver version to 4.11.0
 Key: PHOENIX-3922
 URL: https://issues.apache.org/jira/browse/PHOENIX-3922
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain
 Fix For: 4.11.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3920) Stats collection doesn't always create a guide post for last remaining chunk

2017-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043148#comment-16043148
 ] 

Hadoop QA commented on PHOENIX-3920:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12872099/PHOENIX-3920_wip.patch
  against master branch at commit 7cb16d4dd7f5fe11a10dfe4a58eb2bced313b6c1.
  ATTACHMENT ID: 12872099

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
51 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 5 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+@Parameters(name="mutable = {0}, transactional = {1}, 
isUserTableNamespaceMapped = {2}, columnEncoded = {3}")
+"SELECT 
COLUMN_FAMILY,SUM(GUIDE_POSTS_ROW_COUNT),COUNT(*) from \"SYSTEM\".STATS where 
PHYSICAL_NAME = '"
+private void updateStats(final List results, boolean 
scannerHasMoreRows) throws IOException {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TimezoneOffsetFunctionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IndexExtendedIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.QueryWithOffsetIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1042//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1042//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1042//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1042//console

This message is automatically generated.

> Stats collection doesn't always create a guide post for last remaining chunk
> 
>
> Key: PHOENIX-3920
> URL: https://issues.apache.org/jira/browse/PHOENIX-3920
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-3920_wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3920) Stats collection doesn't always create a guide post for last remaining chunk

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3920:
--
Fix Version/s: (was: 4.11.0)

> Stats collection doesn't always create a guide post for last remaining chunk
> 
>
> Key: PHOENIX-3920
> URL: https://issues.apache.org/jira/browse/PHOENIX-3920
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-3920_wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-3917.
---
   Resolution: Fixed
Fix Version/s: 4.11.0

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Fix For: 4.11.0
>
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value 
> of 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3726) Error while upgrading system tables

2017-06-08 Thread Bhavesh Vadaliya (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043051#comment-16043051
 ] 

Bhavesh Vadaliya commented on PHOENIX-3726:
---

Hello Everyone,

I am new to Apache Phoenix and I am hitting this issue with apache-phoenix-4.8.2.
Could you please let me know how I can apply this patch to apache-phoenix-4.8.2?
I would appreciate it if you could provide a step-by-step guide.

Thanks,
Bhavesh

> Error while upgrading system tables
> ---
>
> Key: PHOENIX-3726
> URL: https://issues.apache.org/jira/browse/PHOENIX-3726
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3726_addendum.patch, PHOENIX-3726.patch
>
>
> {code}
> Error: java.lang.IllegalArgumentException: Expected 4 system table only but 
> found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS] (state=,code=0)
> java.sql.SQLException: java.lang.IllegalArgumentException: Expected 4 system 
> table only but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, 
> SYSTEM.SEQUENCE, SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:661)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.lang.IllegalArgumentException: Expected 4 system table only 
> but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.java:3091)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$600(ConnectionQueryServicesImpl.java:260)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2418)
>   ... 20 more
> {code}
> ping [~giacomotaylor]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3920) Stats collection doesn't always create a guide post for last remaining chunk

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3920:
--
Attachment: PHOENIX-3920_wip.patch

> Stats collection doesn't always create a guide post for last remaining chunk
> 
>
> Key: PHOENIX-3920
> URL: https://issues.apache.org/jira/browse/PHOENIX-3920
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3920_wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3920) Stats collection doesn't always create a guide post for last remaining chunk

2017-06-08 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042969#comment-16042969
 ] 

Samarth Jain commented on PHOENIX-3920:
---

I think it makes sense to punt on this change for now. I am running into test 
failures with local indexes because of this change. Some other tests are 
failing because earlier there was no guide post, but now, with this change, 
there is always at least one guide post. The latter is mostly a test-only 
issue. The former needs some closer inspection, and I'm not sure it's worth 
the effort. I will park my WIP patch here in case someone wants to take a look later.

> Stats collection doesn't always create a guide post for last remaining chunk
> 
>
> Key: PHOENIX-3920
> URL: https://issues.apache.org/jira/browse/PHOENIX-3920
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3920) Stats collection doesn't always create a guide post for last remaining chunk

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3920:
--
Attachment: (was: PHOENIX-3920.patch)

> Stats collection doesn't always create a guide post for last remaining chunk
> 
>
> Key: PHOENIX-3920
> URL: https://issues.apache.org/jira/browse/PHOENIX-3920
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3920) Stats collection doesn't always create a guide post for last remaining chunk

2017-06-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3920:
--
Attachment: (was: PHOENIX-3920_4.x-HBase-0.98.patch)

> Stats collection doesn't always create a guide post for last remaining chunk
> 
>
> Key: PHOENIX-3920
> URL: https://issues.apache.org/jira/browse/PHOENIX-3920
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3920) Stats collection doesn't always create a guide post for last remaining chunk

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042957#comment-16042957
 ] 

James Taylor commented on PHOENIX-3920:
---

Actually, let's talk more about this one. We purposely don't create a guidepost 
at the region boundary. The parallelization code uses the guideposts merged 
with the region boundaries, so we don't need this. Not having this bit of 
information in our stats is by design - stats are an estimate, and users can 
set the guidepost width small enough that this becomes unimportant.

> Stats collection doesn't always create a guide post for last remaining chunk
> 
>
> Key: PHOENIX-3920
> URL: https://issues.apache.org/jira/browse/PHOENIX-3920
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3920_4.x-HBase-0.98.patch, PHOENIX-3920.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3860) Implement TAL functionality for Omid

2017-06-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042886#comment-16042886
 ] 

James Taylor commented on PHOENIX-3860:
---

bq. Does the coprocessor update the index with the tombstone information?
Yes. Our IndexMaintainer knows how to interpret the tombstone.
bq. I assume that Tephra uses tombstone and a delete operation during a 
transaction is actually a put.
Yes, Tephra uses an empty byte array for the column qualifier to represent a 
family delete marker to ensure it sorts first. This works well for Phoenix 
because it's not possible to have an empty byte array as the column name.

You can see the code that implements the family delete marker in 
TransactionVisibilityFilter. There's a compaction hook in TransactionProcessor 
that uses this filter so that deleted rows are not written out during compaction.
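
To make that convention concrete, a family delete marker in this scheme is just a Put whose 
column qualifier (and value) are empty byte arrays at the transaction's write timestamp; a 
minimal sketch (the row key, family, and timestamp below are placeholders):
{code}
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyDeleteMarkerSketch {
    public static Put familyDeleteMarker(byte[] rowKey, byte[] family, long writeTimestamp) {
        // The empty qualifier sorts before every real column in the family,
        // and Phoenix never uses an empty column name, so there is no collision.
        Put marker = new Put(rowKey);
        marker.addColumn(family, HConstants.EMPTY_BYTE_ARRAY, writeTimestamp,
                HConstants.EMPTY_BYTE_ARRAY);
        return marker;
    }

    public static void main(String[] args) {
        Put marker = familyDeleteMarker(Bytes.toBytes("row1"), Bytes.toBytes("0"), 1L);
        System.out.println(marker);
    }
}
{code}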

> Implement TAL functionality for Omid
> 
>
> Key: PHOENIX-3860
> URL: https://issues.apache.org/jira/browse/PHOENIX-3860
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>
> Implement TAL functionality for Omid in order to be able to use Omid as 
> Phoenix's transaction processing engine. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3921) ScanUtil#unsetReversed doesn't seem to unset reversal of Scan

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042658#comment-16042658
 ] 

ASF GitHub Bot commented on PHOENIX-3921:
-

GitHub user bijugs opened a pull request:

https://github.com/apache/phoenix/pull/258

PHOENIX-3921 Change the condition checking in ScanUtil#isReversed

The current logic returns ``isReversed`` as ``true`` whether the 
``BaseScannerRegionObserver.REVERSE_SCAN`` attribute is set to 
``PDataType.TRUE_BYTES`` or ``PDataType.FALSE_BYTES``. The PR changes it 
to return ``true`` only if the ``BaseScannerRegionObserver.REVERSE_SCAN`` 
attribute is set to ``PDataType.TRUE_BYTES``.
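
In code, the intended check looks roughly like the sketch below (intent only, not the 
literal diff):
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
import org.apache.phoenix.schema.types.PDataType;

public final class ReverseScanCheck {
    // Treat the scan as reversed only when the attribute is present
    // AND equal to TRUE_BYTES, so unsetReversed (FALSE_BYTES) is honored.
    public static boolean isReversed(Scan scan) {
        byte[] reversed = scan.getAttribute(BaseScannerRegionObserver.REVERSE_SCAN);
        return reversed != null && Bytes.compareTo(reversed, PDataType.TRUE_BYTES) == 0;
    }
}
{code}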

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bijugs/phoenix PHOENIX-3921

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/258.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #258


commit b2e845467802b20cf013976e8bdead7f7769cae6
Author: Biju Nair 
Date:   2017-06-08T13:00:20Z

PHOENIX-3921 Change the condition checking in ScanUtil#isReversed




> ScanUtil#unsetReversed doesn't seem to unset reversal of Scan
> -
>
> Key: PHOENIX-3921
> URL: https://issues.apache.org/jira/browse/PHOENIX-3921
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> Created a new iterator with a {{scan}} object set to be non-reversed using 
> {{ScanUtil.unsetReversed(scan)}}. But the iteration moves in the reverse 
> order. {{BaseResultIterators.java}} has the condition check
> {code}
> boolean isReverse = ScanUtil.isReversed(scan);
> {code}
> Looking at 
> [ScanUtil.java|https://github.com/apache/phoenix/blob/2cb617f352048179439d242d1165a9ffb39ad81c/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java#L609]
>  the {{isReversed}} method is defined as
> {code}
> return scan.getAttribute(BaseScannerRegionObserver.REVERSE_SCAN) != null;
> {code}
> Do we need to change the condition check to compare to 
> {{PDataType.TRUE_BYTES}}?
> The current logic will return {{isReversed}} as {{true}} whether the 
> {{BaseScannerRegionObserver.REVERSE_SCAN}} attribute is set to 
> {{PDataType.TRUE_BYTES}} or {{PDataType.FALSE_BYTES}}, which corresponds to 
> the values set in the {{setReversed}} and {{unsetReversed}} methods.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] phoenix pull request #258: PHOENIX-3921 Change the condition checking in Sca...

2017-06-08 Thread bijugs
GitHub user bijugs opened a pull request:

https://github.com/apache/phoenix/pull/258

PHOENIX-3921 Change the condition checking in ScanUtil#isReversed

The current logic returns ``isReversed`` as ``true`` whether the 
``BaseScannerRegionObserver.REVERSE_SCAN`` attribute is set to 
``PDataType.TRUE_BYTES`` or ``PDataType.FALSE_BYTES``. The PR changes it 
to return ``true`` only if the ``BaseScannerRegionObserver.REVERSE_SCAN`` 
attribute is set to ``PDataType.TRUE_BYTES``.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bijugs/phoenix PHOENIX-3921

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/258.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #258


commit b2e845467802b20cf013976e8bdead7f7769cae6
Author: Biju Nair 
Date:   2017-06-08T13:00:20Z

PHOENIX-3921 Change the condition checking in ScanUtil#isReversed




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042531#comment-16042531
 ] 

Biju Nair commented on PHOENIX-3917:


Thanks [~ankit.singhal], [~samarthjain] for the review.

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value 
> of 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3860) Implement TAL functionality for Omid

2017-06-08 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042471#comment-16042471
 ] 

Ohad Shacham commented on PHOENIX-3860:
---

Thanks [~giacomotaylor].

How does deletion propagate to the index? Does the coprocessor update the 
index with the tombstone information?
I assume that Tephra uses a tombstone, and that a delete operation during a 
transaction is actually a put.

In the same context, what about garbage collection of the index? Do you apply 
a coprocessor to GC the index the same way GC is done on the data table?

Thx
Ohad

> Implement TAL functionality for Omid
> 
>
> Key: PHOENIX-3860
> URL: https://issues.apache.org/jira/browse/PHOENIX-3860
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>
> Implement TAL functionality for Omid in order to be able to use Omid as 
> Phoenix's transaction processing engine. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-3802) NPE with PRowImpl.toRowMutations(PTableImpl.java)

2017-06-08 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Loknath Priyatham Teja Singamsetty  reassigned PHOENIX-3802:


Assignee: Loknath Priyatham Teja Singamsetty 

> NPE with PRowImpl.toRowMutations(PTableImpl.java)
> -
>
> Key: PHOENIX-3802
> URL: https://issues.apache.org/jira/browse/PHOENIX-3802
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0, 4.10.1
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
> Fix For: 4.10.0, 4.11.0, 4.10.1
>
>
> Caused by: org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 
> actions: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON 
> DUPLICATE IGNORE for 
> COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x00D5B00SMgzx):
>  null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.phoenix.schema.PTableImpl$PRowImpl.toRowMutations(PTableImpl.java:910)
>   at 
> org.apache.phoenix.index.PhoenixIndexBuilder.executeAtomicOp(PhoenixIndexBuilder.java:246)
>   at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.executeAtomicOp(IndexBuildManager.java:187)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:213)
>   ... 15 more
> : 1 time, org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON 
> DUPLICATE IGNORE for 
> COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x000TOB00010ic0D5B00SMgzx):
>  null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)