[jira] [Updated] (PHOENIX-4585) Prune local index regions used for join queries

2018-02-15 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-4585:
-
Attachment: PHOENIX-4585.patch

> Prune local index regions used for join queries
> ---
>
> Key: PHOENIX-4585
> URL: https://issues.apache.org/jira/browse/PHOENIX-4585
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Maryann Xue
>Priority: Major
> Attachments: PHOENIX-4585.patch
>
>
> Some remaining work from PHOENIX-3941: we currently do not capture the data 
> plan as part of the index plan due to the way in which we rewrite the 
> statement during join processing. See comment here for more detail: 
https://issues.apache.org/jira/browse/PHOENIX-3941?focusedCommentId=16351017&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16351017



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4333) Stats - Incorrect estimate when stats are updated on a tenant specific view

2018-02-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366496#comment-16366496
 ] 

James Taylor edited comment on PHOENIX-4333 at 2/16/18 6:18 AM:


Attached WIP patch with all the above implemented. Still need to fix 
ExplainPlanWithStatsEnabledIT. Here are the current failures:
{code}
[ERROR] Tests run: 24, Failures: 3, Errors: 2, Skipped: 0, Time elapsed: 64.285 
s <<< FAILURE! - in org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT
[ERROR] 
testSelectQueriesWithStatsForParallelizationOn(org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT)
  Time elapsed: 2.387 s  <<< FAILURE!
java.lang.AssertionError: expected:<10> but was:<9>
at 
org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT.testSelectQueriesWithFilters(ExplainPlanWithStatsEnabledIT.java:669)
at 
org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT.testSelectQueriesWithStatsForParallelizationOn(ExplainPlanWithStatsEnabledIT.java:629)

[ERROR] 
testBytesRowsForSelectWhenKeyOutOfRange(org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT)
  Time elapsed: 0.012 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT.testBytesRowsForSelectWhenKeyOutOfRange(ExplainPlanWithStatsEnabledIT.java:116)

[ERROR] 
testBytesRowsForSelectOnTenantViews(org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT)
  Time elapsed: 4.654 s  <<< FAILURE!
java.lang.AssertionError: expected:<2000> but was:
at 
org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT.testBytesRowsForSelectOnTenantViews(ExplainPlanWithStatsEnabledIT.java:426)

[ERROR] 
testSelectQueriesWithStatsForParallelizationOff(org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT)
  Time elapsed: 2.322 s  <<< FAILURE!
java.lang.AssertionError: expected:<10> but was:<9>
at 
org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT.testSelectQueriesWithFilters(ExplainPlanWithStatsEnabledIT.java:669)
at 
org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT.testSelectQueriesWithStatsForParallelizationOff(ExplainPlanWithStatsEnabledIT.java:624)

[ERROR] 
testEstimatesForAggregateQueries(org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT)
  Time elapsed: 2.324 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT.testEstimatesForAggregateQueries(ExplainPlanWithStatsEnabledIT.java:560)

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 155.149 
s - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.108 
s - in org.apache.phoenix.end2end.TenantSpecificTablesDMLIT
[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 280.504 
s - in org.apache.phoenix.end2end.ViewIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   ExplainPlanWithStatsEnabledIT.testBytesRowsForSelectOnTenantViews:426 
expected:<2000> but was:
[ERROR]   
ExplainPlanWithStatsEnabledIT.testSelectQueriesWithStatsForParallelizationOff:624->testSelectQueriesWithFilters:669
 expected:<10> but was:<9>
[ERROR]   
ExplainPlanWithStatsEnabledIT.testSelectQueriesWithStatsForParallelizationOn:629->testSelectQueriesWithFilters:669
 expected:<10> but was:<9>
[ERROR] Errors: 
[ERROR]   
ExplainPlanWithStatsEnabledIT.testBytesRowsForSelectWhenKeyOutOfRange:116 
NullPointer
[ERROR]   ExplainPlanWithStatsEnabledIT.testEstimatesForAggregateQueries:560 
NullPointer
[INFO] 
{code}


was (Author: jamestaylor):
Attached WIP patch with all the above implemented. Still need to fix 
ExplainPlanWithStatsEnabledIT.

> Stats - Incorrect estimate when stats are updated on a tenant specific view
> ---
>
> Key: PHOENIX-4333
> URL: https://issues.apache.org/jira/browse/PHOENIX-4333
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4333_test.patch, PHOENIX-4333_v1.patch, 
> PHOENIX-4333_v2.patch, PHOENIX-4333_wip1.patch
>
>
> Consider two tenants, A and B, with tenant-specific views on 2 separate 
> regions/region servers.
> {noformat}
> Region 1 keys:
> A,1
> A,2
> B,1
> Region 2 keys:
> B,2
> B,3
> {noformat}
> When stats are updated on tenant A's view, querying stats on tenant B's view 
> yields partial results (only containing stats for B,1), which are incorrect even 
> though the updated timestamp shows as current.





[jira] [Created] (PHOENIX-4614) Make HashJoinPlan.iterator() runnable in connectionless tests

2018-02-15 Thread Maryann Xue (JIRA)
Maryann Xue created PHOENIX-4614:


 Summary: Make HashJoinPlan.iterator() runnable in connectionless 
tests
 Key: PHOENIX-4614
 URL: https://issues.apache.org/jira/browse/PHOENIX-4614
 Project: Phoenix
  Issue Type: Improvement
Reporter: Maryann Xue
Assignee: Maryann Xue


Right now HashJoinPlan has to call {{HashCacheClient.addHashCache()}} when 
initializing the iterator, and that method in turn throws an Exception as it 
tries to call the join sub-plan's iterators' next() method. To avoid this 
Exception, we can probably create an interface for HashCacheClient and have a 
dummy implementation in connectionless tests.
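The proposed refactoring might look roughly like the sketch below. The interface and method names here are illustrative only, not Phoenix's actual API: the real addHashCache() serializes the join sub-plan's rows and ships them to region servers, which is exactly what a connectionless test cannot do.

```java
// Hypothetical sketch: extract an interface from HashCacheClient so that
// connectionless tests can substitute a no-op implementation that never
// executes the join sub-plan's iterators.
interface HashCache {
    // Illustrative signature; the real method takes plan/scan arguments.
    String addHashCache(String joinId);
}

// Dummy implementation for connectionless tests.
class NoOpHashCache implements HashCache {
    @Override
    public String addHashCache(String joinId) {
        // Return a fake cache id without ever running the sub-plan.
        return "dummy-cache-" + joinId;
    }
}

public class HashCacheSketch {
    public static void main(String[] args) {
        // HashJoinPlan would depend on the interface, so tests can
        // inject the no-op variant while production code keeps the
        // real client.
        HashCache cache = new NoOpHashCache();
        System.out.println(cache.addHashCache("join-0"));
    }
}
```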





[jira] [Commented] (PHOENIX-4333) Stats - Incorrect estimate when stats are updated on a tenant specific view

2018-02-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366496#comment-16366496
 ] 

James Taylor commented on PHOENIX-4333:
---

Attached WIP patch with all the above implemented. Still need to fix 
ExplainPlanWithStatsEnabledIT.

> Stats - Incorrect estimate when stats are updated on a tenant specific view
> ---
>
> Key: PHOENIX-4333
> URL: https://issues.apache.org/jira/browse/PHOENIX-4333
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4333_test.patch, PHOENIX-4333_v1.patch, 
> PHOENIX-4333_v2.patch, PHOENIX-4333_wip1.patch
>
>
> Consider two tenants, A and B, with tenant-specific views on 2 separate 
> regions/region servers.
> {noformat}
> Region 1 keys:
> A,1
> A,2
> B,1
> Region 2 keys:
> B,2
> B,3
> {noformat}
> When stats are updated on tenant A's view, querying stats on tenant B's view 
> yields partial results (only containing stats for B,1), which are incorrect even 
> though the updated timestamp shows as current.





[jira] [Updated] (PHOENIX-4333) Stats - Incorrect estimate when stats are updated on a tenant specific view

2018-02-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4333:
--
Attachment: PHOENIX-4333_wip1.patch

> Stats - Incorrect estimate when stats are updated on a tenant specific view
> ---
>
> Key: PHOENIX-4333
> URL: https://issues.apache.org/jira/browse/PHOENIX-4333
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4333_test.patch, PHOENIX-4333_v1.patch, 
> PHOENIX-4333_v2.patch, PHOENIX-4333_wip1.patch
>
>
> Consider two tenants, A and B, with tenant-specific views on 2 separate 
> regions/region servers.
> {noformat}
> Region 1 keys:
> A,1
> A,2
> B,1
> Region 2 keys:
> B,2
> B,3
> {noformat}
> When stats are updated on tenant A's view, querying stats on tenant B's view 
> yields partial results (only containing stats for B,1), which are incorrect even 
> though the updated timestamp shows as current.





[jira] [Commented] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2018-02-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366426#comment-16366426
 ] 

Hudson commented on PHOENIX-2566:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #43 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/43/])
PHOENIX-2566 Support NOT NULL constraint for any column for immutable (jtaylor: 
rev 82ba1417fdd69a0ac57cbcf2f2327d4aa371bcd9)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2566_v1.patch
>
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non PK columns.





[jira] [Updated] (PHOENIX-4601) Perform server-side retries if client version < 4.14

2018-02-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4601:
--
Description: 
The client version is now available on the server side when index maintenance 
is being performed. Given that this information is available, we should 
conditionally retry on the server depending on the client version (instead of 
relying on the operator to manually update the config after clients have been 
upgraded). 

With PHOENIX-4613, the client version has been threaded through to the 
IndexCommitter.write() method. All that's left to do is:
- Always set the config on the server side to have no HBase retries.
- Add catch of IOException and conditionally call the retrying exception 
handler code based on clientVersion < 4.14.0 in 
TrackingParallelWriterIndexCommitter and ParallelWriterIndexCommitter.
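The version gate described above might be sketched as follows. The version encoding and method names are assumptions for illustration; Phoenix encodes client versions with its own utilities, and the real handler lives in the IndexCommitter implementations.

```java
public class RetrySketch {
    // Hypothetical version encoding for illustration only; Phoenix has
    // its own version-packing scheme.
    static int encode(int major, int minor, int patch) {
        return major * 1_000_000 + minor * 1_000 + patch;
    }

    static final int MIN_CLIENT_RETRY_VERSION = encode(4, 14, 0);

    // Sketch of the proposed behavior: with HBase retries disabled on
    // the server, fall back to the retrying exception handler only for
    // clients older than 4.14.0; newer clients retry on their own side.
    static String onWriteFailure(int clientVersion) {
        return clientVersion < MIN_CLIENT_RETRY_VERSION
                ? "invoke-retrying-handler"
                : "rethrow-IOException";
    }

    public static void main(String[] args) {
        System.out.println(onWriteFailure(encode(4, 13, 2))); // old client
        System.out.println(onWriteFailure(encode(4, 14, 0))); // new client
    }
}
```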

  was:
The client version is now available on the server side when index maintenance 
is being performed. Given that this information is available, we should 
conditionally retry on the server depending on the client version (instead of 
relying on the operator to manually update the config after clients have been 
upgraded). 

Here's what I believe needs to be done:
- Always set the config on the server side to have no retries.
- Move getClientVersion method declaration from PhoenixIndexMetaData to 
IndexMetaData
- Add getIndexMetaData() method in IndexBuilder and retrieve clientVersion in 
preBatchMutateWithExceptions like this:
{code}
builder.getIndexMetaData(miniBatchOp).getClientVersion();
{code}
- Set clientVersion on BatchMutateContext so it can be accessed later in 
postBatchMutateIndispensably.
- In postBatchMutateIndispensably, access clientVersion through 
BatchMutateContext and pass through IndexWriter.writeAndKillYourselfOnFailure() 
and into the writer.write() method. 
- Add catch of IOException and call to handle the retries in the table.batch 
call in TrackingParallelWriterIndexCommitter.


> Perform server-side retries if client version < 4.14
> 
>
> Key: PHOENIX-4601
> URL: https://issues.apache.org/jira/browse/PHOENIX-4601
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> The client version is now available on the server side when index maintenance 
> is being performed. Given that this information is available, we should 
> conditionally retry on the server depending on the client version (instead of 
> relying on the operator to manually update the config after clients have been 
> upgraded). 
> With PHOENIX-4613, the client version has been threaded through to the 
> IndexCommitter.write() method. All that's left to do is:
> - Always set the config on the server side to have no HBase retries.
> - Add catch of IOException and conditionally call the retrying exception 
> handler code based on clientVersion < 4.14.0 in 
> TrackingParallelWriterIndexCommitter and ParallelWriterIndexCommitter.





[jira] [Commented] (PHOENIX-4613) Thread clientVersion through to IndexCommitter implementors

2018-02-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366405#comment-16366405
 ] 

James Taylor commented on PHOENIX-4613:
---

Please review, [~tdsilva] or [~vincentpoon].

> Thread clientVersion through to IndexCommitter implementors
> ---
>
> Key: PHOENIX-4613
> URL: https://issues.apache.org/jira/browse/PHOENIX-4613
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4613_v1.patch
>
>
> In support of PHOENIX-4601, thread the client version into the implementors 
> of IndexCommitter.





[jira] [Updated] (PHOENIX-4613) Thread clientVersion through to IndexCommitter implementors

2018-02-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4613:
--
Attachment: PHOENIX-4613_v1.patch

> Thread clientVersion through to IndexCommitter implementors
> ---
>
> Key: PHOENIX-4613
> URL: https://issues.apache.org/jira/browse/PHOENIX-4613
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4613_v1.patch
>
>
> In support of PHOENIX-4601, thread the client version into the implementors 
> of IndexCommitter.





[jira] [Created] (PHOENIX-4613) Thread clientVersion through to IndexCommitter implementors

2018-02-15 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4613:
-

 Summary: Thread clientVersion through to IndexCommitter 
implementors
 Key: PHOENIX-4613
 URL: https://issues.apache.org/jira/browse/PHOENIX-4613
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.14.0


In support of PHOENIX-4601, thread the client version into the implementors of 
IndexCommitter.





Re: Post 5.0.0-alpha

2018-02-15 Thread James Taylor
Sounds like a good plan, Josh. Thanks for the explanation.

On Thu, Feb 15, 2018 at 2:28 PM, Josh Elser  wrote:

> So the official release version that I set was "5.0.0-alpha" (with the
> HBase-2.0 kind of being superfluous since we weren't targeting any other
> HBase versions). To match this Maven version, I switched around the
> fixVersions on JIRA to match:
>
> Resolved issues that were previously 5.0, 5.x, or 5.0.0 are now
> 5.0.0-alpha. Everything else is now 5.0.0 -- which is what we can use for
> future changes.
>
> My hope was that this would avoid us having to announce something like
> "beware! 5.0.0 isn't stable!" (the "alpha" designation doing that for us).
>
> Happy to do something else if this isn't clear to others like it is to me
> :)
>
>
> On 2/15/18 3:08 PM, James Taylor wrote:
>
>> Thanks, Josh. Great job on the release.
>>
>> One question on the fixVersion. I've pushed a few commits for fixes after
>> the 5.0.0 release went out. Since they aren't in the release, I set the
>> fixVersion to 5.1.0 (anticipating that that'll be the next release
>> version). Should we use a different fixVersion than that for these "to be
>> released" fixes?
>>
>> On Thu, Feb 15, 2018 at 12:04 PM, Josh Elser 
>> wrote:
>>
>> Hiya,
>>>
>>> Thanks to everyone who helped get 5.0.0-alpha out the door. A few
>>> book-keeping things:
>>>
>>> * I've updated JIRA, nuking the 5.x and 5.0 fixVersions, renaming 5.0.0
>>> to
>>> 5.0.0-alpha, and renaming 5.1.0 to 5.0.0. 5.0.0 is the fixVersion you
>>> want.
>>> * Announcement has been sent out and I think I've cleaned everything else
>>> up.
>>> * 5.x-HBase-2.0 is open for your commits (with a Maven version of
>>> 5.0.0-SNAPSHOT)
>>> * Rajeshbabu is going to look at making sure the 5.x branch is synced up
>>> with the rest of the changes from 4.x which might have been skipped in
>>> PHOENIX-4610.
>>> * I'll continue to monitor the 5.0.0 fixVersion, trying to keep us on a
>>> trajectory to make that release happen sooner than later.
>>>
>>> - Josh
>>>
>>>
>>


Re: Post 5.0.0-alpha

2018-02-15 Thread Josh Elser
So the official release version that I set was "5.0.0-alpha" (with the 
HBase-2.0 kind of being superfluous since we weren't targeting any other 
HBase versions). To match this Maven version, I switched around the 
fixVersions on JIRA to match:


Resolved issues that were previously 5.0, 5.x, or 5.0.0 are now 
5.0.0-alpha. Everything else is now 5.0.0 -- which is what we can use 
for future changes.


My hope was that this would avoid us having to announce something like 
"beware! 5.0.0 isn't stable!" (the "alpha" designation doing that for us).


Happy to do something else if this isn't clear to others like it is to me :)

On 2/15/18 3:08 PM, James Taylor wrote:

Thanks, Josh. Great job on the release.

One question on the fixVersion. I've pushed a few commits for fixes after
the 5.0.0 release went out. Since they aren't in the release, I set the
fixVersion to 5.1.0 (anticipating that that'll be the next release
version). Should we use a different fixVersion than that for these "to be
released" fixes?

On Thu, Feb 15, 2018 at 12:04 PM, Josh Elser  wrote:


Hiya,

Thanks to everyone who helped get 5.0.0-alpha out the door. A few
book-keeping things:

* I've updated JIRA, nuking the 5.x and 5.0 fixVersions, renaming 5.0.0 to
5.0.0-alpha, and renaming 5.1.0 to 5.0.0. 5.0.0 is the fixVersion you want.
* Announcement has been sent out and I think I've cleaned everything else
up.
* 5.x-HBase-2.0 is open for your commits (with a Maven version of
5.0.0-SNAPSHOT)
* Rajeshbabu is going to look at making sure the 5.x branch is synced up
with the rest of the changes from 4.x which might have been skipped in
PHOENIX-4610.
* I'll continue to monitor the 5.0.0 fixVersion, trying to keep us on a
trajectory to make that release happen sooner than later.

- Josh





[jira] [Commented] (PHOENIX-1160) Allow an index to be declared as immutable

2018-02-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366238#comment-16366238
 ] 

James Taylor commented on PHOENIX-1160:
---

We should also fix PHOENIX-4612 by just disallowing the immutability of a table 
to change. Otherwise, we wouldn't know if an index was declared as immutable or 
if its data table was declared as immutable.

> Allow an index to be declared as immutable
> --
>
> Key: PHOENIX-1160
> URL: https://issues.apache.org/jira/browse/PHOENIX-1160
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Priority: Major
>
> Currently, a table must be marked as immutable, through the 
> IMMUTABLE_ROWS=true property specified at creation time. In this case, all 
> indexes added to the table are immutable, while without this property, all 
> indexes are mutable.
> Instead, we should support a mix of immutable and mutable indexes. We already 
> have an INDEX_TYPE field on our metadata row. We can add a new IMMUTABLE 
> keyword and specify an index is immutable like this:
> {code}
> CREATE IMMUTABLE INDEX foo ON bar(c2, c1);
> {code}
> It would be up to the application developer to ensure that only columns that 
> don't mutate are part of an immutable index (we already rely on this anyway).





[jira] [Updated] (PHOENIX-4612) Index immutability doesn't change when data table immutable changes

2018-02-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4612:
--
Description: 
The immutability of an index should change when the data table's immutability 
changes. Probably best to not allow table immutability to change as part of 
PHOENIX-1160.

Here's a test that currently fails:
{code}
private static void assertImmutability(Connection conn, String tableName, 
boolean expectedImmutableRows) throws Exception {
ResultSet rs = conn.createStatement().executeQuery("SELECT /*+ NO_INDEX 
*/ v FROM " + tableName);
rs.next();
PTable table = 
conn.unwrap(PhoenixConnection.class).getMetaDataCache().getTableRef(new 
PTableKey(null, tableName)).getTable();
assertEquals(expectedImmutableRows, table.isImmutableRows());
PhoenixStatement stmt = 
conn.createStatement().unwrap(PhoenixStatement.class);
rs = stmt.executeQuery("SELECT v FROM " + tableName);
rs.next();
assertTrue(stmt.getQueryPlan().getTableRef().getTable().getType() == 
PTableType.INDEX);
table = 
conn.unwrap(PhoenixConnection.class).getMetaDataCache().getTableRef(new 
PTableKey(null, tableName)).getTable();
assertEquals(expectedImmutableRows, table.isImmutableRows());
for (PTable index : table.getIndexes()) {
assertEquals(expectedImmutableRows, index.isImmutableRows());
}
}

@Test
public void testIndexImmutabilityChangesWithTable() throws Exception {
Connection conn = DriverManager.getConnection(getUrl());
String tableName = generateUniqueName();
String indexName = generateUniqueName();
conn.createStatement().execute("CREATE IMMUTABLE TABLE " + tableName + 
"(k VARCHAR PRIMARY KEY, v VARCHAR) COLUMN_ENCODED_BYTES=NONE, 
IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN");
conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + 
tableName + "(v)");
assertImmutability(conn, tableName, true);
conn.createStatement().execute("ALTER TABLE " + tableName + " SET 
IMMUTABLE_ROWS=false");
assertImmutability(conn, tableName, false);
}
{code}

  was:See QueryDatabaseMetaDataIT.testIndexImmutabilityChangesWithTable() which 
currently fails. Probably best to not allow table immutability to change as 
part of PHOENIX-1160.


> Index immutability doesn't change when data table immutable changes
> ---
>
> Key: PHOENIX-4612
> URL: https://issues.apache.org/jira/browse/PHOENIX-4612
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> The immutability of an index should change when the data table's immutability 
> changes. Probably best to not allow table immutability to change as part of 
> PHOENIX-1160.
> Here's a test that currently fails:
> {code}
> private static void assertImmutability(Connection conn, String tableName, 
> boolean expectedImmutableRows) throws Exception {
> ResultSet rs = conn.createStatement().executeQuery("SELECT /*+ 
> NO_INDEX */ v FROM " + tableName);
> rs.next();
> PTable table = 
> conn.unwrap(PhoenixConnection.class).getMetaDataCache().getTableRef(new 
> PTableKey(null, tableName)).getTable();
> assertEquals(expectedImmutableRows, table.isImmutableRows());
> PhoenixStatement stmt = 
> conn.createStatement().unwrap(PhoenixStatement.class);
> rs = stmt.executeQuery("SELECT v FROM " + tableName);
> rs.next();
> assertTrue(stmt.getQueryPlan().getTableRef().getTable().getType() == 
> PTableType.INDEX);
> table = 
> conn.unwrap(PhoenixConnection.class).getMetaDataCache().getTableRef(new 
> PTableKey(null, tableName)).getTable();
> assertEquals(expectedImmutableRows, table.isImmutableRows());
> for (PTable index : table.getIndexes()) {
> assertEquals(expectedImmutableRows, index.isImmutableRows());
> }
> }
> 
> @Test
> public void testIndexImmutabilityChangesWithTable() throws Exception {
> Connection conn = DriverManager.getConnection(getUrl());
> String tableName = generateUniqueName();
> String indexName = generateUniqueName();
> conn.createStatement().execute("CREATE IMMUTABLE TABLE " + tableName 
> + "(k VARCHAR PRIMARY KEY, v VARCHAR) COLUMN_ENCODED_BYTES=NONE, 
> IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN");
> conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + 
> tableName + "(v)");
> assertImmutability(conn, tableName, true);
> conn.createStatement().execute("ALTER TABLE " + tableName + " SET 
> IMMUTABLE_ROWS=false");
> assertImmutability(conn, tableName, false);
> }
> {code}




[jira] [Created] (PHOENIX-4612) Index immutability doesn't change when data table immutable changes

2018-02-15 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4612:
-

 Summary: Index immutability doesn't change when data table 
immutable changes
 Key: PHOENIX-4612
 URL: https://issues.apache.org/jira/browse/PHOENIX-4612
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


See QueryDatabaseMetaDataIT.testIndexImmutabilityChangesWithTable() which 
currently fails. Probably best to not allow table immutability to change as 
part of PHOENIX-1160.





[jira] [Updated] (PHOENIX-4600) Add retry logic for partial index rebuilder writes

2018-02-15 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4600:
--
Priority: Blocker  (was: Major)

> Add retry logic for partial index rebuilder writes
> --
>
> Key: PHOENIX-4600
> URL: https://issues.apache.org/jira/browse/PHOENIX-4600
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Blocker
>
> A little bit of follow-up work is necessary as part of PHOENIX-4130. It looks 
> like the partial index rebuilder writes 
> (UngroupedAggregateRegionObserver.rebuildIndices()) do not have the new retry 
> logic that's necessary. It's somewhat unfortunate that the logic isn't shared 
> between the commits that happen in the loop of 
> UngroupedAggregateRegionObserver.doPostScannerOpen() and rebuildIndices() as 
> it'd be almost identical (except we know that all writes will be local 
> writes).





[jira] [Created] (PHOENIX-4611) Not nullable column impact on join query plans

2018-02-15 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4611:
-

 Summary: Not nullable column impact on join query plans
 Key: PHOENIX-4611
 URL: https://issues.apache.org/jira/browse/PHOENIX-4611
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


With PHOENIX-2566, there's a subtle change in projected tables in that a column 
may end up being not nullable whereas before it was nullable when the family 
name is not null. I've kept the old behavior with 
[this|https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java;h=fccded2a896855a2a01d727b992f954a1d3fa8ab;hp=d0b900c1a9c21609b89065307433a0d37b12b72a;hb=82ba1417fdd69a0ac57cbcf2f2327d4aa371bcd9;hpb=e126dd1dda5aa80e8296d3b0c84736b22b658999]
 commit, but would you mind confirming what the right thing to do is, 
[~maryannxue]?

Without this change, the explain plan changes in 
SortMergeJoinMoreIT.testBug2894() and the assert fails. Looks like the compiler 
ends up changing the row ordering.







Re: Post 5.0.0-alpha

2018-02-15 Thread James Taylor
Thanks, Josh. Great job on the release.

One question on the fixVersion. I've pushed a few commits for fixes after
the 5.0.0 release went out. Since they aren't in the release, I set the
fixVersion to 5.1.0 (anticipating that that'll be the next release
version). Should we use a different fixVersion than that for these "to be
released" fixes?

On Thu, Feb 15, 2018 at 12:04 PM, Josh Elser  wrote:

> Hiya,
>
> Thanks to everyone who helped get 5.0.0-alpha out the door. A few
> book-keeping things:
>
> * I've updated JIRA, nuking the 5.x and 5.0 fixVersions, renaming 5.0.0 to
> 5.0.0-alpha, and renaming 5.1.0 to 5.0.0. 5.0.0 is the fixVersion you want.
> * Announcement has been sent out and I think I've cleaned everything else
> up.
> * 5.x-HBase-2.0 is open for your commits (with a Maven version of
> 5.0.0-SNAPSHOT)
> * Rajeshbabu is going to look at making sure the 5.x branch is synced up
> with the rest of the changes from 4.x which might have been skipped in
> PHOENIX-4610.
> * I'll continue to monitor the 5.0.0 fixVersion, trying to keep us on a
> trajectory to make that release happen sooner than later.
>
> - Josh
>


Post 5.0.0-alpha

2018-02-15 Thread Josh Elser

Hiya,

Thanks to everyone who helped get 5.0.0-alpha out the door. A few 
book-keeping things:


* I've updated JIRA, nuking the 5.x and 5.0 fixVersions, renaming 5.0.0 
to 5.0.0-alpha, and renaming 5.1.0 to 5.0.0. 5.0.0 is the fixVersion you 
want.
* Announcement has been sent out and I think I've cleaned everything 
else up.
* 5.x-HBase-2.0 is open for your commits (with a Maven version of 
5.0.0-SNAPSHOT)
* Rajeshbabu is going to look at making sure the 5.x branch is synced up 
with the rest of the changes from 4.x which might have been skipped in 
PHOENIX-4610.
* I'll continue to monitor the 5.0.0 fixVersion, trying to keep us on a 
trajectory to make that release happen sooner than later.


- Josh


[ANNOUNCE] Apache Phoenix 5.0.0-alpha released

2018-02-15 Thread Josh Elser
The Apache Phoenix PMC is happy to announce the release of Phoenix 
5.0.0-alpha for Apache Hadoop 3 and Apache HBase 2.0. The release is 
available for download here[1].


Apache Phoenix enables OLTP and operational analytics in Hadoop for low 
latency applications by combining the power of standard SQL and JDBC 
APIs with full ACID transaction capabilities as well as the flexibility 
of late-bound, schema-on-read capabilities provided by HBase.


This is a "preview" release of Apache Phoenix 5.0.0. This release is 
specifically designed for users who want to use the newest versions of 
Hadoop and HBase while the quality of Phoenix is still incubating. This 
release should be of sufficient quality for most users, but it is not of 
the same quality as most Phoenix releases.


Please refer to the release notes[2] of this release for a full list of 
known issues. The Phoenix developers would be extremely receptive to any 
and all who use this release and report any issues, as this will 
directly increase the quality of the 5.0.0 release.


-- The Phoenix PMC

[1] https://phoenix.apache.org/download.html
[2] https://phoenix.apache.org/release_notes.html


[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-02-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366142#comment-16366142
 ] 

Andrew Purtell commented on PHOENIX-4576:
-

I don't remember the exact Jira, but I do remember a change like this: without 
it, the store file locking changes introduced after 1.3 might lead to an FNFE 
and a regionserver abort. The call to next is needed to initialize something.

I don't know Phoenix's needs well enough to suggest a workaround.

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2018-02-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365950#comment-16365950
 ] 

James Taylor commented on PHOENIX-2566:
---

Sorry about that, [~elserj]. Let me fix it up.

> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2566_v1.patch
>
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non PK columns.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4459) Region assignments are failing for the test cases with extended clocks to support SCN

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4459:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Region assignments are failing for the test cases with extended clocks to 
> support SCN
> -
>
> Key: PHOENIX-4459
> URL: https://issues.apache.org/jira/browse/PHOENIX-4459
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4459-v1.patch, 
> PHOENIX-4459.test-disabling.patch, jstack_PHOENIX-4459
>
>
> Test cases that use their own clock are failing with TableNotFoundException 
> during region assignment. The reason is that the meta scan returns no 
> results because of the past timestamps. This needs to be investigated in 
> more detail. Because of the region assignment failures during the create 
> table procedure, the hbase client waits for 30 minutes, so the other tests 
> cannot continue running either.
> {noformat}
> 2017-12-14 16:48:03,153 ERROR [ProcExecWrkr-9] 
> org.apache.hadoop.hbase.master.TableStateManager(135): Unable to get table 
> T08 state
> org.apache.hadoop.hbase.TableNotFoundException: T08
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:175)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:132)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure.startTransition(AssignProcedure.java:161)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:294)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:85)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1452)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1221)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:77)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1731)
> {noformat}
> List of tests hanging because of this:-
> ExplainPlanWithStatsEnabledIT#testBytesRowsForSelectOnTenantViews
> ConcurrentMutationsIT
> PartialIndexRebuilderIT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4565) IndexScrutinyToolIT is failing

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4565:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> IndexScrutinyToolIT is failing
> --
>
> Key: PHOENIX-4565
> URL: https://issues.apache.org/jira/browse/PHOENIX-4565
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4565.2.patch, PHOENIX-4565.patch
>
>
> {noformat}
> [ERROR] 
> testScrutinyWhileTakingWrites[0](org.apache.phoenix.end2end.IndexScrutinyToolIT)
>   Time elapsed: 12.494 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1000> but was:<996>
>     at 
> org.apache.phoenix.end2end.IndexScrutinyToolIT.testScrutinyWhileTakingWrites(IndexScrutinyToolIT.java:253)
> [ERROR] 
> testScrutinyWhileTakingWrites[1](org.apache.phoenix.end2end.IndexScrutinyToolIT)
>   Time elapsed: 7.437 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1000> but was:<997>
>     at 
> org.apache.phoenix.end2end.IndexScrutinyToolIT.testScrutinyWhileTakingWrites(IndexScrutinyToolIT.java:253)
> [ERROR] 
> testScrutinyWhileTakingWrites[2](org.apache.phoenix.end2end.IndexScrutinyToolIT)
>   Time elapsed: 12.195 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1000> but was:<999>
>     at 
> org.apache.phoenix.end2end.IndexScrutinyToolIT.testScrutinyWhileTakingWrites(IndexScrutinyToolIT.java:253)
> {noformat}
> Saw this on a {{mvn verify}} of 5.x. I don't know if we expect this one to be 
> broken or not -- I didn't see an open issue tracking it.
> Is this one we should get fixed before shipping an alpha/beta? My opinion 
> would be: unless it is a trivial/simple fix, we should get it for the next 
> release.
> [~sergey.soldatov], [~an...@apache.org], [~rajeshbabu].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4321) Replace deprecated HBaseAdmin with Admin

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4321:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Replace deprecated HBaseAdmin with Admin
> 
>
> Key: PHOENIX-4321
> URL: https://issues.apache.org/jira/browse/PHOENIX-4321
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4321.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4561) Temporarily disable transactional tests

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4561:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Temporarily disable transactional tests
> ---
>
> Key: PHOENIX-4561
> URL: https://issues.apache.org/jira/browse/PHOENIX-4561
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4561.001.patch, PHOENIX-4561.addendum.patch
>
>
> All 5.x transactional table tests are failing because of a necessary Tephra 
> release which is pending.
> Let's disable these tests so we have a better idea of the state of the build.
> FYI [~an...@apache.org]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4396) Use setPriority method instead of relying on RpcController configuration

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4396:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Use setPriority method instead of relying on RpcController configuration
> 
>
> Key: PHOENIX-4396
> URL: https://issues.apache.org/jira/browse/PHOENIX-4396
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> use setPriority method(implemented as a part of HBASE-15816) for RPC calls.
> Related to PHOENIX-3994



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4298) refactoring to avoid using deprecated API for Put/Delete

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4298:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> refactoring to avoid using deprecated API for Put/Delete
> 
>
> Key: PHOENIX-4298
> URL: https://issues.apache.org/jira/browse/PHOENIX-4298
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4298.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4470) Fix tests of type ParallelStatsDisabledTest

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4470:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Fix tests of type ParallelStatsDisabledTest 
> 
>
> Key: PHOENIX-4470
> URL: https://issues.apache.org/jira/browse/PHOENIX-4470
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4496:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4496.patch
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}
> {noformat}
> ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 79.381 s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
> [ERROR] 
> testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
>   Time elapsed: 4.504 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4518) Test phoenix-kafka integration

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4518:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Test phoenix-kafka integration
> --
>
> Key: PHOENIX-4518
> URL: https://issues.apache.org/jira/browse/PHOENIX-4518
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
>
> Need to test:
> * Does the Kafka integration still work?
> * Should the Kafka dependency version be changed?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4440) Local index split/merge IT tests are failing

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4440:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Local index split/merge IT tests are failing
> 
>
> Key: PHOENIX-4440
> URL: https://issues.apache.org/jira/browse/PHOENIX-4440
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4440.patch, PHOENIX-4440_v2.patch
>
>
> IndexHalfStoreFileReaderGenerator#preStoreFileReaderOpen is not getting 
> called, so the default behaviour is used and split/merge does not work.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4524) Remove Flume dependency from Phoenix-Kafka plugin

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4524:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Remove Flume dependency from Phoenix-Kafka plugin
> -
>
> Key: PHOENIX-4524
> URL: https://issues.apache.org/jira/browse/PHOENIX-4524
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Artem Ervits
>Priority: Blocker
> Fix For: 5.0.0
>
>
> The Phoenix Kafka plugin heavily depends on the Phoenix Flume plugin, which 
> in turn depends on Apache Flume. This jira proposes removing the Flume 
> dependency from the Phoenix-Kafka plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4303) Replace HTableInterface,HConnection with Table,Connection interfaces respectively

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4303:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Replace HTableInterface,HConnection with Table,Connection interfaces 
> respectively
> -
>
> Key: PHOENIX-4303
> URL: https://issues.apache.org/jira/browse/PHOENIX-4303
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4297_addendum2.patch, PHOENIX-4303.patch, 
> PHOENIX-4303_addendum.patch, PHOENIX-4303_v2.patch
>
>
> In the latest versions of HBase, HTableInterface and HConnection are replaced 
> with Table and Connection respectively. We can make use of the new interfaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4567) Re-enable Phoenix transactional table tests

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4567:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Re-enable Phoenix transactional table tests
> ---
>
> Key: PHOENIX-4567
> URL: https://issues.apache.org/jira/browse/PHOENIX-4567
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
>
> PHOENIX-4561 disabled the transactional tests because a new Tephra release is 
> needed.
> TEPHRA-272 is tracking this work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4482) Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4482:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException
> --
>
> Key: PHOENIX-4482
> URL: https://issues.apache.org/jira/browse/PHOENIX-4482
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4482.patch
>
>
> {noformat}
> ERROR] 
> testReplayEditsWrittenViaHRegion(org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT)
>   Time elapsed: 82.455 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL cannot be cast to 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.createWAL(WALReplayWithIndexWritesAndCompressedWALIT.java:274)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.testReplayEditsWrittenViaHRegion(WALReplayWithIndexWritesAndCompressedWALIT.java:192)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4365) Make use RegionCoprocessorEnvironment than CoprocessorEnvironment because it's private scope

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4365:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Make use RegionCoprocessorEnvironment than CoprocessorEnvironment because 
> it's private scope
> 
>
> Key: PHOENIX-4365
> URL: https://issues.apache.org/jira/browse/PHOENIX-4365
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4365.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4494) Fix PhoenixTracingEndToEndIT

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4494:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Fix PhoenixTracingEndToEndIT
> 
>
> Key: PHOENIX-4494
> URL: https://issues.apache.org/jira/browse/PHOENIX-4494
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHEONXI-4494.001.patch
>
>
> {code}
> [ERROR] Tests run: 8, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 
> 148.175 s <<< FAILURE! - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
> [ERROR] 
> testScanTracingOnServer(org.apache.phoenix.trace.PhoenixTracingEndToEndIT)  
> Time elapsed: 64.484 s  <<< FAILURE!
> java.lang.AssertionError: Didn't get expected updates to trace table
> at 
> org.apache.phoenix.trace.PhoenixTracingEndToEndIT.testScanTracingOnServer(PhoenixTracingEndToEndIT.java:304)
> [ERROR] 
> testClientServerIndexingTracing(org.apache.phoenix.trace.PhoenixTracingEndToEndIT)
>   Time elapsed: 22.346 s  <<< FAILURE!
> java.lang.AssertionError: Never found indexing updates
> at 
> org.apache.phoenix.trace.PhoenixTracingEndToEndIT.testClientServerIndexingTracing(PhoenixTracingEndToEndIT.java:193)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4408) Figure out Hadoop version compatibility

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4408:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Figure out Hadoop version compatibility
> ---
>
> Key: PHOENIX-4408
> URL: https://issues.apache.org/jira/browse/PHOENIX-4408
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 5.0.0
>
>
> The 5.x-HBase-2.0 branch is presently incompatible with both Hadoop 2.7 and 
> Hadoop 3.0.
> This stems from PhoenixMetricsSink and LoggingSink both of which extend 
> MetricsSink from Hadoop. Hadoop leaks a commons-configuration class into a 
> method signature and changes the dependency across versions.
> This makes it extremely annoying to work around downstream (we would have to 
> create multiple maven modules to shim around this). We should figure out what 
> compatibility we want to have.  Post PHOENIX-4405, it's only Hadoop3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4580) Upgrade to Tephra 0.14.0-incubating for HBase 2.0 support

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4580:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Upgrade to Tephra  0.14.0-incubating  for HBase 2.0 support
> ---
>
> Key: PHOENIX-4580
> URL: https://issues.apache.org/jira/browse/PHOENIX-4580
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 5.0.0
>
>
> TEPHRA-272 has the necessary changes that Phoenix needs but we need to get a 
> release from the Tephra folks first.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4497) Fix Local Index IT tests

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4497:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Fix Local Index IT tests
> 
>
> Key: PHOENIX-4497
> URL: https://issues.apache.org/jira/browse/PHOENIX-4497
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4423:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch
>
>
> HIVE-15167 removed an interface which we use in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, 
> <2.3.0, and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4378) Unable to set KEEP_DELETED_CELLS to true on RS scanner

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4378:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Unable to set KEEP_DELETED_CELLS to true on RS scanner
> --
>
> Key: PHOENIX-4378
> URL: https://issues.apache.org/jira/browse/PHOENIX-4378
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> [~jamestaylor], 
> It seems we may need to fix PHOENIX-4277 differently for HBase 2.0 as we can 
> only update TTL and maxVersions now in preStoreScannerOpen and cannot return 
> a new StoreScanner with updated scanInfo.
> for reference:
> [1]https://issues.apache.org/jira/browse/PHOENIX-4318?focusedCommentId=16249943=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16249943



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4537) RegionServer initiating compaction can trigger schema migration and deadlock the system

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4537:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> RegionServer initiating compaction can trigger schema migration and deadlock 
> the system
> ---
>
> Key: PHOENIX-4537
> URL: https://issues.apache.org/jira/browse/PHOENIX-4537
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4537.001.patch
>
>
> [~sergey.soldatov] has been doing some great digging around a test failure 
> we've been seeing at $dayjob. The situation goes like this.
> 0. Run some arbitrary load
> 1. Stop HBase
> 2. Enable schema mapping ({{phoenix.schema.isNamespaceMappingEnabled=true}} 
> and {{phoenix.schema.mapSystemTablesToNamespace=true}} in hbase-site.xml)
> 3. Start HBase
> 4. Circumstantially, have the SYSTEM.CATALOG table need a compaction to run 
> before a client first connects
> When the RegionServer initiates the compaction, it will end up running 
> {{UngroupedAggregateRegionObserver.clearTsOnDisabledIndexes}} which opens a 
> Phoenix connection. While the RegionServer won't upgrade system tables, it 
> *will* try to migrate them into the schema mapped variants (e.g. 
> SYSTEM.CATALOG to SYSTEM:CATALOG).
> However, one of the first steps in the schema migration is to disable the 
> SYSTEM.CATALOG table. However, the SYSTEM.CATALOG table can't be disabled 
> until the region is CLOSED, and the region cannot be CLOSED until the 
> compaction is finished. *deadlock*
> The "obvious" fix is to avoid RegionServers from triggering system table 
> migrations, but Sergey and [~elserj] both think that this will end badly 
> (RegionServers falling over because they expect the tables to be migrated and 
> they aren't).
> Thoughts? [~ankit.singhal], [~jamestaylor], any others?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4513) Fix the recursive call in ExecutableExplainStatement#getOperation

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4513:

Fix Version/s: (was: 5.0.0-alpha)
   5.0.0

> Fix the recursive call in ExecutableExplainStatement#getOperation
> -
>
> Key: PHOENIX-4513
> URL: https://issues.apache.org/jira/browse/PHOENIX-4513
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Priority: Major
> Fix For: 4.13.0, 5.0.0
>
> Attachments: PHOENIX-4513.v0.patch
>
>
> {code}
> @Override
> public Operation getOperation() {
>   return this.getOperation();
> }
> {code}
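The quoted method returns the result of calling itself, so any call recurses until a StackOverflowError. The fix is to delegate to the wrapped statement rather than to `this`. A minimal sketch of the bug and its fix, using simplified stand-in types rather than Phoenix's actual class hierarchy:

```java
// Simplified stand-ins for illustration; not Phoenix's real classes.
enum Operation { QUERY, UPSERT }

interface CompilableStatement {
    Operation getOperation();
}

class ExecutableExplainStatement implements CompilableStatement {
    private final CompilableStatement delegate;

    ExecutableExplainStatement(CompilableStatement delegate) {
        this.delegate = delegate;
    }

    @Override
    public Operation getOperation() {
        // The buggy version returned this.getOperation(), which calls this
        // same method again and recurses forever. Forwarding to the wrapped
        // statement terminates and returns its operation.
        return delegate.getOperation();
    }
}
```

This is the classic decorator-pattern pitfall: an overriding method intended to forward to the wrapped instance accidentally invokes itself.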



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4534:

Fix Version/s: (was: 5.0)
   5.1.0

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete a row and then upsert it again, the corresponding index row has 
> a null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4535) Index in array data type is inconsistent.

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4535:

Fix Version/s: (was: 5.0)
   5.1.0

> Index in array data type is inconsistent. 
> --
>
> Key: PHOENIX-4535
> URL: https://issues.apache.org/jira/browse/PHOENIX-4535
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0, 4.13.0
>Reporter: Romil Choksi
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
>
> By accident, we allow using index zero for elements of an array column, 
> which returns null. 
> A simple test case to reproduce:
> {noformat}
> Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
> Connection conn = DriverManager.getConnection(getUrl(), props);
> conn.createStatement().execute("create table A(ID INTEGER PRIMARY 
> KEY, array_id VARCHAR[])");
> conn.createStatement().execute("upsert into A values (1, 
> ARRAY['test','test2','test3'])");
> conn.commit();
> ResultSet rs = conn.createStatement().executeQuery("select 
> array_id[0], array_id[1] from A");
> while(rs.next()) {
> System.out.println(rs.getString(1));
> System.out.println(rs.getString(2));
> }
> {noformat}
> The result for 4.x branches would be {null, 'test'} and it would fail with an 
> exception for 5.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3533) Remove IMMUTABLE_ROWS TableProperty

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3533:

Fix Version/s: (was: 5.0)
   5.1.0

> Remove IMMUTABLE_ROWS TableProperty
> ---
>
> Key: PHOENIX-3533
> URL: https://issues.apache.org/jira/browse/PHOENIX-3533
> Project: Phoenix
>  Issue Type: Task
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 5.1.0
>
>
> Using CREATE IMMUTABLE TABLE ... should be the only way to specify a table is 
> immutable, and this property cannot be altered. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-868) Make Time, Date, and Timestamp handling JDBC-compliant

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-868:
---
Fix Version/s: (was: 5.0)
   5.1.0

> Make Time, Date, and Timestamp handling JDBC-compliant
> --
>
> Key: PHOENIX-868
> URL: https://issues.apache.org/jira/browse/PHOENIX-868
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gabriel Reid
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.1.0
>
>
> From what I understand from the JDBC documentation, the way that a 
> java.sql.Date should be handled via JDBC is simply as a day, month, and year, 
> despite the fact that it is internally represented as a timestamp (the same 
> kind of thing applies to Time objects, which are a triple of hours, minutes, 
> and seconds).
> Further, my understanding is that it is the responsibility of a JDBC driver 
> to do normalization of incoming Date and Time (and maybe Timestamp) objects 
> to interpret them as being in the current time zone, and remove the extra 
> components (i.e. time components for a Date, and date components for a Time) 
> before storing the value.
> This means that today, if I insert a column value consisting of 'new 
> Date(System.currentTimeMillis())', then I should be able to retrieve that 
> same value with a filter on 'Date.valueOf(“2014-03-18”)’. Additionally, that 
> filter should work regardless of my own local timezone.
> It also means that if I store ‘Time.valueOf("07:00:00”)’ in a TIME field in a 
> database in my current timezone, someone should get “07:00:00” if they run 
> 'ResultSet#getTime(1).toString()’ on that value, even if they’re in a 
> different timezone than me.
> From what I can see right now, Phoenix doesn’t currently exhibit this 
> behavior. Instead, the full long representation of Date, Time, and Timestamps 
> is stored directly in HBase, without dropping the extra date fields or doing 
> timezone conversion.
> From the current analysis, what is required for Phoenix to be JDBC-compliant 
> in terms of time/date/timestamp handling is:
> * All incoming time-style values should be interpreted in the local timezone 
> of the driver, then be normalized and converted to UTC before serialization 
> (unless a Calendar is supplied) in PreparedStatement calls
> * All outgoing time-style values should be converted from UTC into the local 
> timezone (unless a Calendar is supplied) in ResultSet calls
> * Supplying a Calendar to PreparedStatement methods should cause the time 
> value to be converted from the local timezone to the timezone of the calendar 
> (instead of UTC) before being serialized
> * Supplying a Calendar to ResultSet methods should cause the time value from 
> the database to be interpreted as if it was serialized in the timezone of the 
> Calendar, instead of UTC.
> Making the above changes would mean breaking backwards compatibility with 
> existing Phoenix installs (unless some kind of backwards-compatibility mode 
> is introduced or something similar). 
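The normalization steps above can be sketched in isolation. The helper below drops the time-of-day component of an epoch-millis value in a given time zone, the way a JDBC driver would normalize a java.sql.Date before serialization; it is an illustrative sketch, not Phoenix or driver code:

```java
import java.util.Calendar;
import java.util.TimeZone;

public class JdbcDateNormalization {
    // Interpret the epoch-millis value in the given time zone and zero out
    // the time-of-day fields, keeping only day, month, and year.
    static long normalizeToMidnight(long epochMillis, TimeZone tz) {
        Calendar cal = Calendar.getInstance(tz);
        cal.setTimeInMillis(epochMillis);
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        return cal.getTimeInMillis();
    }
}
```

A Time value would be normalized the same way, except the date fields rather than the time fields are zeroed.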





[jira] [Updated] (PHOENIX-4200) QualifierFilter from HBase to be implemented

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4200:

Fix Version/s: (was: 5.0)
   5.1.0

> QualifierFilter from HBase to be implemented
> 
>
> Key: PHOENIX-4200
> URL: https://issues.apache.org/jira/browse/PHOENIX-4200
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.9.0, 4.10.0, 4.11.0
>Reporter: Priyansh Saxena
>Priority: Major
> Fix For: 5.1.0
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> Currently, it is not possible to filter columns from a table using an 
> SQL-like query in Phoenix. This impairs the use of Phoenix as an SQL layer 
> over HBase where a wide column-family approach is used and/or the 
> column names are dynamic.
> Need to determine how to handle and display dynamic column-names.
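For comparison, raw HBase can already prune columns server-side with a QualifierFilter, whose semantics amount to a predicate over column qualifiers. The toy sketch below models that behavior in plain Java (illustrative only, not the HBase API):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Predicate;

public class QualifierFilterSketch {
    // Keep only the cells whose column qualifier satisfies the predicate,
    // mimicking what HBase's QualifierFilter does during a scan.
    static Map<String, byte[]> filterRow(Map<String, byte[]> row,
                                         Predicate<String> qualifierMatches) {
        Map<String, byte[]> out = new TreeMap<>();
        for (Map.Entry<String, byte[]> cell : row.entrySet()) {
            if (qualifierMatches.test(cell.getKey())) {
                out.put(cell.getKey(), cell.getValue());
            }
        }
        return out;
    }
}
```

Exposing something equivalent through SQL is exactly what this issue asks for.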





[jira] [Updated] (PHOENIX-4508) Order-by not optimized in sort-merge-join on salted tables

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4508:

Fix Version/s: (was: 5.0)
   5.0.0-alpha

> Order-by not optimized in sort-merge-join on salted tables
> --
>
> Key: PHOENIX-4508
> URL: https://issues.apache.org/jira/browse/PHOENIX-4508
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2
>Reporter: Flavio Pompermaier
>Assignee: Maryann Xue
>Priority: Major
>  Labels: planner, query
> Fix For: 4.13.0, 5.0.0-alpha, 4.14.0
>
> Attachments: PHOENIX-4508.patch
>
>
> In my Phoenix tables I found that one query runs successfully while another 
> one, logically equivalent, does not (unless I apply some tuning to 
> timeouts).
> The two queries extract the same data but, while the first query terminates, 
> the second does not.
> PS: without the USE_SORT_MERGE_JOIN hint, neither query was working
> 
> h2. First query
> {code:sql}
> SELECT /*+ USE_SORT_MERGE_JOIN */ COUNT(*) 
> FROM PEOPLE ds JOIN MYTABLE l ON ds.PERSON_ID = l.LOCALID
> WHERE l.EID IS NULL AND l.DSID = 'PEOPLE' AND l.HAS_CANDIDATES = FALSE;
> {code}
> +------------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
> | PLAN                                                                                                 | EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS    |
> +------------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
> | SORT-MERGE-JOIN (INNER) TABLES                                                                       | 14155777900     | 12077867       | 1513754378759  |
> |     CLIENT 42-CHUNK 6168903 ROWS 1132461 BYTES PARALLEL 3-WAY FULL SCAN OVER PEOPLE                  | 14155777900     | 12077867       | 1513754378759  |
> |         SERVER FILTER BY FIRST KEY ONLY                                                              | 14155777900     | 12077867       | 1513754378759  |
> |     CLIENT MERGE SORT                                                                                | 14155777900     | 12077867       | 1513754378759  |
> | AND (SKIP MERGE)                                                                                     | 14155777900     | 12077867       | 1513754378759  |
> |     CLIENT 15-CHUNK 5908964 ROWS 2831155679 BYTES PARALLEL 15-WAY RANGE SCAN OVER MYTABLE [0] - [2]  | 14155777900     | 12077867       | 1513754378759  |
> |         SERVER FILTER BY (EID IS NULL AND DSID = 'PEOPLE' AND HAS_CANDIDATES = false)                | 14155777900     | 12077867       | 1513754378759  |
> |         SERVER SORTED BY [L.LOCALID]                                                                 | 14155777900     | 12077867       | 1513754378759  |
> |     CLIENT MERGE SORT                                                                                | 14155777900     | 12077867       | 1513754378759  |
> | CLIENT AGGREGATE INTO SINGLE ROW                                                                     | 14155777900     | 12077867       | 1513754378759  |
> +------------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
> 10 rows selected (0.041 seconds)
> 
> h2. Second query
> {code:sql}
> SELECT /*+ USE_SORT_MERGE_JOIN */ COUNT(*) 
> FROM (SELECT LOCALID FROM MYTABLE
> WHERE EID IS NULL AND DSID = 'PEOPLE' AND HAS_CANDIDATES = FALSE) l JOIN 
> PEOPLE  ds ON ds.PERSON_ID = l.LOCALID;
> {code}
> +------------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
> | PLAN                                                                                                 | EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS    |
> +------------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
> | SORT-MERGE-JOIN (INNER) TABLES                                                                       | 14155777900     | 12077867       | 1513754378759  |
> |     CLIENT 15-CHUNK 5908964 ROWS 2831155679 BYTES PARALLEL 3-WAY RANGE SCAN OVER MYTABLE [0] - [2]   | 14155777900     | 12077867       | 1513754378759  |
> |         SERVER FILTER BY (EID IS NULL AND DSID = 'PEOPLE' AND HAS_CANDIDATES 

[jira] [Updated] (PHOENIX-4536) Change getWAL usage due HBASE-19751

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4536:

Fix Version/s: (was: 5.0)
   5.0.0-alpha

> Change getWAL usage due HBASE-19751
> ---
>
> Key: PHOENIX-4536
> URL: https://issues.apache.org/jira/browse/PHOENIX-4536
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0-alpha
>
> Attachments: PHOENIX-4536.patch
>
>
> In HBASE-19751, the WALFactory#getWAL signature was changed from taking a 
> byte[] to taking a RegionInfo. 





[jira] [Updated] (PHOENIX-4373) Local index variable length key can have trailing nulls while upserting

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4373:

Fix Version/s: (was: 5.0)
   5.0.0-alpha

> Local index variable length key can have trailing nulls while upserting
> ---
>
> Key: PHOENIX-4373
> URL: https://issues.apache.org/jira/browse/PHOENIX-4373
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 5.0.0-alpha, 4.14.0
>
> Attachments: PHOENIX-4373.v1.master.patch
>
>
> In UpsertCompiler#setValues(), if it's a local index, the key is prefixed 
> with the regionPrefix. During that process, ptr.get() is called to get the 
> base key, and the code assumes the entire array should be used. However, if 
> it's a variable-length key, we could have trailing nulls, since the base 
> key ptr array size is just an estimate. 
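The usual fix for this class of bug is to honor the pointer's offset and length instead of the backing array's full size. A schematic illustration with plain byte arrays (names are illustrative, not Phoenix's ImmutableBytesWritable API):

```java
public class KeySliceSketch {
    // Build the prefixed key from only the valid [offset, offset+length)
    // window of the base-key buffer; copying the whole backing array would
    // drag trailing garbage (e.g. nulls) past the logical key.
    static byte[] prefixKey(byte[] regionPrefix, byte[] baseBuf, int offset, int length) {
        byte[] key = new byte[regionPrefix.length + length];
        System.arraycopy(regionPrefix, 0, key, 0, regionPrefix.length);
        System.arraycopy(baseBuf, offset, key, regionPrefix.length, length);
        return key;
    }
}
```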





[jira] [Updated] (PHOENIX-3563) Ensure we release ZooKeeper resources allocated by the Tephra client embedded in the Phoenix connection

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3563:

Fix Version/s: (was: 5.0)
   5.0.0-alpha

> Ensure we release ZooKeeper resources allocated by the Tephra client embedded 
> in the Phoenix connection
> ---
>
> Key: PHOENIX-3563
> URL: https://issues.apache.org/jira/browse/PHOENIX-3563
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 4.9.1, 4.10.0, 5.0.0-alpha
>
> Attachments: PHOENIX-3563.patch, PHOENIX-3563.patch, 
> PHOENIX-3563.patch
>
>
> When transactions are enabled the Phoenix client will create some Tephra 
> client objects, including TephraZKClientService, which embeds a ZooKeeper 
> instance. Ensure that ZooKeeper instance is properly shut down via 
> ZooKeeper#close. 
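The general remedy is to tie the embedded client's lifecycle to its owner's: whatever the connection allocates must be released when the connection closes. A generic sketch of the ownership pattern (names are illustrative, not Tephra's API):

```java
public class CloseOnShutdown implements AutoCloseable {
    // Stand-in for an embedded client (e.g. a ZooKeeper handle) that must
    // be released when its owning connection is closed.
    static class EmbeddedClient implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    final EmbeddedClient client = new EmbeddedClient();

    @Override
    public void close() {
        // Release owned resources unconditionally on owner shutdown.
        client.close();
    }
}
```

With try-with-resources on the owner, the embedded client is guaranteed to be shut down even when the enclosing scope exits via an exception.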





[jira] [Updated] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2018-02-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4382:

Fix Version/s: (was: 5.0)
   5.0.0-alpha

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 5.0.0-alpha, 4.14.0, 4.13.2, 4.13.2-cdh5.11.2
>
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> PHOENIX-4382.v4.master.patch, UpsertBigValuesIT.java
>
>
> For immutable tables, upsert of some values like Short.MAX_VALUE results in a 
> null value in query resultsets.  Mutable tables are not affected.  I tried 
> with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls. For nulls, we write 
> out [separatorByte, #_of_nulls]. However, since some data values, like 
> Short.MAX_VALUE, start with separatorByte, we can't distinguish between a 
> null and these values. Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See the attached tests: testShort(), testBigInt()
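The ambiguity is easy to model: if null runs are written as [separatorByte, count], then any value whose first data byte happens to equal the separator decodes as a null run. A simplified sketch (the 0x7F separator is illustrative, not Phoenix's real serialization):

```java
public class SeparatorAmbiguity {
    static final byte SEP = (byte) 0x7F;  // illustrative separator byte

    // Naive decoder: a leading SEP is assumed to start a null run, so a
    // genuine value beginning with SEP is misread as null.
    static boolean looksLikeNullRun(byte[] encodedValue) {
        return encodedValue.length > 0 && encodedValue[0] == SEP;
    }
}
```

A fix has to disambiguate the two cases, e.g. by escaping data bytes that collide with the separator or by recording value boundaries elsewhere.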





[jira] [Created] (PHOENIX-4610) Converge 4.x and 5.x branches

2018-02-15 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4610:
---

 Summary: Converge 4.x and 5.x branches
 Key: PHOENIX-4610
 URL: https://issues.apache.org/jira/browse/PHOENIX-4610
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.1.0


We have quite a few improvements which have landed on the 4.x branches but 
have missed the 5.x branch due to its earlier instability. Rajeshbabu 
volunteered offline to start this onerous task.





[jira] [Commented] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2018-02-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365819#comment-16365819
 ] 

Josh Elser commented on PHOENIX-2566:
-

[~jamestaylor], looks like this is causing compilation issues on 5.x. I'm not 
sure if this is because of the divergence of 4.x and 5.x or just an accidental 
oversight.

> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.1.0
>
> Attachments: PHOENIX-2566_v1.patch
>
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non-PK columns.





[jira] [Updated] (PHOENIX-4609) Error Occurs while selecting a specific set of columns : ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 2

2018-02-15 Thread Aman Jha (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Jha updated PHOENIX-4609:
--
Summary: Error Occurs while selecting a specific set of columns : ERROR 201 
(22000): Illegal data. Expected length of at least 8 bytes, but had 2  (was: 
Error Occurs while selecting a specific set of columns)

> Error Occurs while selecting a specific set of columns : ERROR 201 (22000): 
> Illegal data. Expected length of at least 8 bytes, but had 2
> 
>
> Key: PHOENIX-4609
> URL: https://issues.apache.org/jira/browse/PHOENIX-4609
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Aman Jha
>Priority: Critical
> Attachments: DML_DDL.sql, SelectStatement.sql, TestPhoenix.java
>
>
> While selecting columns from a table, an "Illegal data" error occurs:
> h3. _*ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 2*_
> The data is read and written only through the Phoenix client.
> Moreover, this error occurs only when running queries via a Java program, 
> not through the SQuirreL SQL client. Is there any other way to access 
> results from the ResultSet that is returned from the Phoenix client? 
>  
> *Environment Details* : 
> *HBase Version* : _1.2.6 on Hadoop 2.8.2_
> *Phoenix Version* : _4.11.0-HBase-1.2_
> *OS*: _LINUX(RHEL)_
>  
> The following error is caused when selecting columns via a Java Program
> {code:java}
> ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 
> 2; nested exception is java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 8 bytes, but had 2
> at 
> org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:419)
> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:474)
> at 
> com.zycus.qe.service.impl.PhoenixHBaseDAOImpl.fetchAggregationResult(PhoenixHBaseDAOImpl.java:752)
> ... 14 common frames omitted
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
> length of at least 8 bytes, but had 2
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:213)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175)
> at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:260)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:255)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.OrderedAggregatingResultIterator.next(OrderedAggregatingResultIterator.java:51)
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
> at 
> org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:62)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> 

[jira] [Updated] (PHOENIX-4609) Error Occurs while selecting a specific set of columns

2018-02-15 Thread Aman Jha (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Jha updated PHOENIX-4609:
--
Description: 
While selecting columns from a table, an "Illegal data" error occurs:
h3. _*ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but 
had 2*_

The data is read and written only through the Phoenix client.

Moreover, this error occurs only when running queries via a Java program, not 
through the SQuirreL SQL client. Is there any other way to access results from 
the ResultSet that is returned from the Phoenix client? 

 

*Environment Details* : 

*HBase Version* : _1.2.6 on Hadoop 2.8.2_

*Phoenix Version* : _4.11.0-HBase-1.2_

*OS*: _LINUX(RHEL)_

 

The following error is caused when selecting columns via a Java Program
{code:java}
ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 
2; nested exception is java.sql.SQLException: ERROR 201 (22000): Illegal data. 
Expected length of at least 8 bytes, but had 2
at 
org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102)
at 
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at 
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at 
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:419)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:474)
at 
com.zycus.qe.service.impl.PhoenixHBaseDAOImpl.fetchAggregationResult(PhoenixHBaseDAOImpl.java:752)
... 14 common frames omitted
Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
length of at least 8 bytes, but had 2
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:213)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
at 
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:260)
at 
org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:255)
at 
org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
at 
org.apache.phoenix.iterate.OrderedAggregatingResultIterator.next(OrderedAggregatingResultIterator.java:51)
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at 
org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:62)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
... 16 common frames omitted
{code}
 

[jira] [Commented] (PHOENIX-4609) Error Occurs while selecting a specific set of columns

2018-02-15 Thread Aman Jha (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365790#comment-16365790
 ] 

Aman Jha commented on PHOENIX-4609:
---

[~jamestaylor] [~samarthjain] Is there a workaround for this issue?

> Error Occurs while selecting a specific set of columns
> --
>
> Key: PHOENIX-4609
> URL: https://issues.apache.org/jira/browse/PHOENIX-4609
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Aman Jha
>Priority: Critical
> Attachments: DML_DDL.sql, SelectStatement.sql, TestPhoenix.java
>
>
> While selecting columns from a table, an "Illegal data" error occurs:
> h3. _*ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 2*_
>  
> Moreover, this error occurs only when running queries via a Java program, 
> not through the SQuirreL SQL client. Is there any other way to access 
> results from the ResultSet that is returned from the Phoenix client? 
>  
> *Environment Details* : 
> *HBase Version* : _1.2.6 on Hadoop 2.8.2_
> *Phoenix Version* : _4.11.0-HBase-1.2_
> *OS*: _LINUX(RHEL)_
>  
> The following error is caused when selecting columns via a Java Program
> {code:java}
> ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 
> 2; nested exception is java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 8 bytes, but had 2
> at 
> org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:419)
> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:474)
> at 
> com.zycus.qe.service.impl.PhoenixHBaseDAOImpl.fetchAggregationResult(PhoenixHBaseDAOImpl.java:752)
> ... 14 common frames omitted
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
> length of at least 8 bytes, but had 2
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:213)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175)
> at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:260)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:255)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.OrderedAggregatingResultIterator.next(OrderedAggregatingResultIterator.java:51)
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
> at 
> org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:62)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> 

[jira] [Updated] (PHOENIX-4609) Error Occurs while selecting a specific set of columns

2018-02-15 Thread Aman Jha (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Jha updated PHOENIX-4609:
--
Attachment: SelectStatement.sql

> Error Occurs while selecting a specific set of columns
> --
>
> Key: PHOENIX-4609
> URL: https://issues.apache.org/jira/browse/PHOENIX-4609
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Aman Jha
>Priority: Critical
> Attachments: DML_DDL.sql, SelectStatement.sql, TestPhoenix.java
>
>
> While selecting columns from a table, an "Illegal data" error occurs:
> h3. _*ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 2*_
>  
> Moreover, this error occurs only when running queries via a Java program, 
> not through the SQuirreL SQL client. Is there any other way to access 
> results from the ResultSet that is returned from the Phoenix client? 
>  
> *Environment Details* : 
> *HBase Version* : _1.2.6 on Hadoop 2.8.2_
> *Phoenix Version* : _4.11.0-HBase-1.2_
> *OS*: _LINUX(RHEL)_
>  
> The following error is caused when selecting columns via a Java Program
> {code:java}
> ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 
> 2; nested exception is java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 8 bytes, but had 2
> at 
> org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102)
> at 

[jira] [Updated] (PHOENIX-4609) Error Occurs while selecting a specific set of columns

2018-02-15 Thread Aman Jha (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Jha updated PHOENIX-4609:
--
Attachment: DML_DDL.sql

> Error Occurs while selecting a specific set of columns
> --
>
> Key: PHOENIX-4609
> URL: https://issues.apache.org/jira/browse/PHOENIX-4609
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Aman Jha
>Priority: Critical
> Attachments: DML_DDL.sql, SelectStatement.sql, TestPhoenix.java
>
>
> While selecting columns from a table, an Illegal Data error occurs.
> h3. _*ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 2*_
>  
> Moreover, this error occurs only when running queries via a Java program, 
> not through the SQuirreL SQL client. Is there another way to access 
> results from the ResultSet returned by the Phoenix client? 
>  
> *Environment Details* : 
> *HBase Version* : _1.2.6 on Hadoop 2.8.2_
> *Phoenix Version* : _4.11.0-HBase-1.2_
> *OS*: _LINUX(RHEL)_
>  
> The following error occurs when selecting columns via a Java program:
> {code:java}
> ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 
> 2; nested exception is java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 8 bytes, but had 2
> at 
> org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at 
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:419)
> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:474)
> at 
> com.zycus.qe.service.impl.PhoenixHBaseDAOImpl.fetchAggregationResult(PhoenixHBaseDAOImpl.java:752)
> ... 14 common frames omitted
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
> length of at least 8 bytes, but had 2
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:213)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171)
> at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175)
> at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:260)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:255)
> at 
> org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
> at 
> org.apache.phoenix.iterate.OrderedAggregatingResultIterator.next(OrderedAggregatingResultIterator.java:51)
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
> at 
> org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:62)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at 
> 

[jira] [Updated] (PHOENIX-4609) Error Occurs while selecting a specific set of columns

2018-02-15 Thread Aman Jha (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Jha updated PHOENIX-4609:
--
Attachment: TestPhoenix.java


[jira] [Created] (PHOENIX-4609) Error Occurs while selecting a specific set of columns

2018-02-15 Thread Aman Jha (JIRA)
Aman Jha created PHOENIX-4609:
-

 Summary: Error Occurs while selecting a specific set of columns
 Key: PHOENIX-4609
 URL: https://issues.apache.org/jira/browse/PHOENIX-4609
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.11.0, 4.13.0
Reporter: Aman Jha


While selecting columns from a table, an Illegal Data error occurs.
h3. _*ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but 
had 2*_

 

Moreover, this error occurs only when running queries via a Java program, 
not through the SQuirreL SQL client. Is there another way to access 
results from the ResultSet returned by the Phoenix client? 

 

*Environment Details* : 

*HBase Version* : _1.2.6 on Hadoop 2.8.2_

*Phoenix Version* : _4.11.0-HBase-1.2_

*OS*: _LINUX(RHEL)_

 

The following error occurs when selecting columns via a Java program:
{code:java}
ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 
2; nested exception is java.sql.SQLException: ERROR 201 (22000): Illegal data. 
Expected length of at least 8 bytes, but had 2
at 
org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102)
at 
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at 
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at 
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:419)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:474)
at 
com.zycus.qe.service.impl.PhoenixHBaseDAOImpl.fetchAggregationResult(PhoenixHBaseDAOImpl.java:752)
... 14 common frames omitted
Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
length of at least 8 bytes, but had 2
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:213)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
at 
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:260)
at 
org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:255)
at 
org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
at 
org.apache.phoenix.iterate.OrderedAggregatingResultIterator.next(OrderedAggregatingResultIterator.java:51)
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at 
org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:62)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
at 

[jira] [Commented] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-02-15 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365311#comment-16365311
 ] 

Ohad Shacham commented on PHOENIX-4605:
---

Lazily sounds good. I wouldn't initialize the Omid client if we don't have a 
transaction manager running.
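The lazy initialization suggested above could be sketched with standard double-checked locking (the class and method names here are hypothetical placeholders, not the actual Omid client API):

```java
// Illustrative sketch: create the client only on first use, so nothing is
// initialized when no transaction manager is running. 'Object' stands in
// for the real Omid client type.
class LazyOmidClient {
    private volatile Object client;

    Object get() {
        Object c = client;              // single volatile read on the fast path
        if (c == null) {
            synchronized (this) {
                c = client;             // re-check under the lock
                if (c == null) {
                    client = c = createClient();  // runs at most once
                }
            }
        }
        return c;
    }

    private Object createClient() {
        return new Object();            // placeholder for real client construction
    }
}
```

The `volatile` field is what makes the double-checked pattern safe under the Java memory model; without it, another thread could observe a partially constructed client.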

> Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using 
> boolean
> --
>
> Key: PHOENIX-4605
> URL: https://issues.apache.org/jira/browse/PHOENIX-4605
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> We should deprecate QueryServices.DEFAULT_TABLE_ISTRANSACTIONAL_ATTRIB and 
> instead have a QueryServices.DEFAULT_TRANSACTION_PROVIDER now that we'll have 
> two transaction providers: Tephra and Omid. Along the same lines, we should 
> add a TRANSACTION_PROVIDER column to SYSTEM.CATALOG  and stop using the 
> IS_TRANSACTIONAL table property. For backwards compatibility, we can assume 
> the provider is Tephra if the existing properties are set to true.
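The proposed backwards-compatible resolution might look roughly like this (all names below are illustrative, not actual Phoenix config keys or classes):

```java
// Hypothetical sketch: prefer the new provider property; fall back to the
// deprecated boolean, which implies Tephra for backwards compatibility.
enum TransactionProvider { TEPHRA, OMID }

class ProviderResolution {
    static TransactionProvider resolveProvider(String providerProp,
                                               boolean legacyIsTransactional) {
        if (providerProp != null) {
            // New-style config: TRANSACTION_PROVIDER names the provider directly.
            return TransactionProvider.valueOf(providerProp.toUpperCase());
        }
        // Old-style config: IS_TRANSACTIONAL=true means Tephra, false means none.
        return legacyIsTransactional ? TransactionProvider.TEPHRA : null;
    }

    public static void main(String[] args) {
        // Legacy boolean set, no provider property: resolves to TEPHRA.
        System.out.println(resolveProvider(null, true));
    }
}
```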



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-15 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365218#comment-16365218
 ] 

Sergey Soldatov commented on PHOENIX-4608:
--

[~jamestaylor] That was an UPSERT SELECT, and it might be a clone of 
PHOENIX-4588. I will recheck tomorrow with the recent master to be sure. 

> Concurrent modification of bitset in ProjectedColumnExpression
> --
>
> Key: PHOENIX-4608
> URL: https://issues.apache.org/jira/browse/PHOENIX-4608
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4608-v1.patch
>
>
> In ProjectedColumnExpression we use an instance of ValueBitSet to track 
> nulls during evaluate() calls. We use a single instance of 
> ProjectedColumnExpression per column across all threads running in parallel, 
> so it may happen that one thread calls bitSet.clear() while another thread is 
> using it in isNull() at the same time, making the wrong assumption that the value 
> is null. We saw this problem with a query like 
> {noformat}
> upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
> group by ID) as B join T2 as A on A.ID = B.ID;  
> {noformat}
> During execution, the condition mentioned earlier occurs: we don't advance 
> from the char column (A.ID) to the long column (B.B) and get an exception like
> {noformat}
> Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
> cannot be cast to Integer without changing its value (state=22000,code=201) 
> java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
> -6908486506036322272 cannot be cast to Integer without changing its value 
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
>  
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
>  
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>  
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>  
> at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
>  
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
> at sqlline.Commands.execute(Commands.java:822) 
> at sqlline.Commands.sql(Commands.java:732) 
> at sqlline.SqlLine.dispatch(SqlLine.java:808) 
> at sqlline.SqlLine.begin(SqlLine.java:681) 
> at sqlline.SqlLine.start(SqlLine.java:398) 
> at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> Fortunately, bitSet is the only field we continuously modify in that class, 
> so we can fix this problem by making it ThreadLocal. 
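The ThreadLocal fix described above can be sketched as follows (illustrative class and method names, not Phoenix's actual ValueBitSet or ProjectedColumnExpression API):

```java
import java.util.BitSet;

// Each thread gets its own BitSet via ThreadLocal, so one thread's clear()
// can no longer corrupt another thread's null check on the shared expression.
public class ThreadLocalBitSetDemo {
    private static final ThreadLocal<BitSet> NULLS =
            ThreadLocal.withInitial(BitSet::new);

    // Mimics the evaluate()/isNull() pattern: clear, optionally set, then test.
    static boolean isNull(int column, boolean markNull) {
        BitSet bits = NULLS.get();   // per-thread instance
        bits.clear();
        if (markNull) {
            bits.set(column);
        }
        return bits.get(column);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        final boolean[] consistent = {true};
        for (int i = 0; i < threads.length; i++) {
            final int col = i;
            threads[i] = new Thread(() -> {
                for (int n = 0; n < 100_000; n++) {
                    // With a single shared BitSet, a concurrent clear() could
                    // flip these answers mid-check; with ThreadLocal it cannot.
                    if (!isNull(col, true) || isNull(col, false)) {
                        consistent[0] = false;
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(consistent[0] ? "consistent" : "race detected");
    }
}
```

The trade-off is one BitSet allocation per thread instead of per expression, which is cheap relative to the cost of the race.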



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)