[jira] [Updated] (PHOENIX-5676) Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-14 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5676:

Attachment: PHOENIX-5676-4.x-HBase-1.5.patch

> Inline-verification from IndexTool does not handle TTL/row-expiry
> -
>
> Key: PHOENIX-5676
> URL: https://issues.apache.org/jira/browse/PHOENIX-5676
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.1, 4.16.0
>
> Attachments: PHOENIX-5676-4.x-HBase-1.5.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If a data-table has TTL set, its indexes inherit the TTL too. Hence when 
> we run IndexTool with verification on such tables and their indexes, rows that 
> are near expiry will successfully get rebuilt, but may not be returned by the 
> verification read due to expiry. This will result in spurious index verification 
> failures and may also fail the rebuild job.
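The race described above can be sketched as a small predicate: if a row's timestamp plus the table's TTL has already passed by the time the verification read runs, a missing index row is explainable by expiry and should not be counted as a verification failure. This is an illustrative sketch only; all names are invented, not Phoenix APIs.

```java
// Hypothetical helper, not Phoenix code: decide whether a miss during
// inline verification can be explained by TTL expiry.
public class TtlAwareVerification {

    // True if a cell written at cellTimestampMs on a table with ttlMs
    // could have expired by verifyTimeMs; a verification miss for such a
    // row should be skipped rather than reported as an inconsistency.
    static boolean mayHaveExpired(long cellTimestampMs, long ttlMs, long verifyTimeMs) {
        return verifyTimeMs >= cellTimestampMs + ttlMs;
    }

    public static void main(String[] args) {
        long writtenAt = 1_000L, ttl = 5_000L;
        // Verified well inside the TTL window: a miss is a real problem.
        System.out.println(mayHaveExpired(writtenAt, ttl, 3_000L)); // false
        // Verified after the window: the miss is explainable by expiry.
        System.out.println(mayHaveExpired(writtenAt, ttl, 6_500L)); // true
    }
}
```

A fix along these lines would treat near-expiry rows as unverifiable rather than failed, which is what keeps them from failing the rebuild job.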



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5682) IndexTool can just update empty_column with verified if rest of index row matches

2020-01-14 Thread Priyank Porwal (Jira)
Priyank Porwal created PHOENIX-5682:
---

 Summary: IndexTool can just update empty_column with verified if 
rest of index row matches
 Key: PHOENIX-5682
 URL: https://issues.apache.org/jira/browse/PHOENIX-5682
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.3, 4.15.1
Reporter: Priyank Porwal
 Fix For: 4.15.1, 4.14.4, 4.16.0


When upgrading from the old indexing design to the new consistent indexing, 
IndexUpgradeTool kicks off IndexTool to rebuild the index. This rebuild 
rewrites all index rows. If an index row was already consistent, it is still 
rewritten and its empty_column is updated with the verified flag. 

IndexTool could instead update only the empty_column when the rest of the index 
row matches the data row. This would avoid the massive writes to the underlying 
DFS, as well as the side effects of replicating those writes.
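The proposed optimization reduces to a per-row decision, sketched below with invented names (this is not the IndexTool implementation, and the empty-column qualifier and flag value are assumptions): compare the rebuilt cells against the stored ones and, when they match, write only the empty column.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: when the rebuilt index row matches what is
// already stored, emit a mutation touching only the empty column instead
// of rewriting every cell. Column/flag names are hypothetical.
public class EmptyColumnOnlyRewrite {
    static final String EMPTY_COLUMN = "_EMPTY";  // hypothetical qualifier
    static final String VERIFIED = "verified";    // hypothetical flag value

    // Returns the cells that actually need to be written for this index row.
    static Map<String, String> mutationFor(Map<String, String> existingDataCells,
                                           Map<String, String> rebuiltDataCells) {
        Map<String, String> put = new HashMap<>();
        if (!existingDataCells.equals(rebuiltDataCells)) {
            put.putAll(rebuiltDataCells);         // inconsistent row: full rewrite
        }
        put.put(EMPTY_COLUMN, VERIFIED);          // always mark the row verified
        return put;
    }

    public static void main(String[] args) {
        Map<String, String> existing = new HashMap<>();
        existing.put("0:NAME", "alice");
        // Consistent row: the mutation carries only the empty column.
        System.out.println(mutationFor(existing, new HashMap<>(existing)).size()); // 1
    }
}
```

For an already-consistent table this turns a full rewrite of every row into a single small cell write per row, which is where the DFS and replication savings come from.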



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5674) IndexTool to not write already correct index rows

2020-01-14 Thread Priyank Porwal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyank Porwal updated PHOENIX-5674:

Summary: IndexTool to not write already correct index rows  (was: IndexTool 
to not write already correct index rows/CFs)

> IndexTool to not write already correct index rows
> -
>
> Key: PHOENIX-5674
> URL: https://issues.apache.org/jira/browse/PHOENIX-5674
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.1, 4.14.4
>
> Attachments: PHOENIX-5674.master.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> IndexTool can avoid writing index rows if they are already consistent with 
> the data table. This will be especially useful when rebuilding an index on a DR 
> site where indexes are already replicated, but a rebuild might be needed for catch-up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5674) IndexTool to not write already correct index rows/CFs

2020-01-14 Thread Priyank Porwal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyank Porwal updated PHOENIX-5674:

Description: IndexTool can avoid writing index rows if they are already 
consistent with the data table. This will be especially useful when rebuilding an 
index on a DR site where indexes are already replicated, but a rebuild might be 
needed for catch-up.  (was: IndexTool can avoid writing index rows if they are 
already consistent with the data table. This will be especially useful when 
rebuilding an index on a DR site where indexes are already replicated, but a 
rebuild might be needed for catch-up.

Likewise, during upgrades from the old indexing scheme to the new consistent 
indexing scheme, if the index data columns are already consistent, IndexTool 
should only rewrite the EmptyColumn to mark the row as verified instead of 
writing the data columns too.)

> IndexTool to not write already correct index rows/CFs
> -
>
> Key: PHOENIX-5674
> URL: https://issues.apache.org/jira/browse/PHOENIX-5674
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.1, 4.14.4
>
> Attachments: PHOENIX-5674.master.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> IndexTool can avoid writing index rows if they are already consistent with 
> the data table. This will be especially useful when rebuilding an index on a DR 
> site where indexes are already replicated, but a rebuild might be needed for catch-up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-5680) remove psql.py from phoenix-queryserver

2020-01-14 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-5680.
--
Fix Version/s: 5.1.0
   Resolution: Fixed

Merged.

Thanks for the review [~elserj]

> remove psql.py from phoenix-queryserver
> ---
>
> Key: PHOENIX-5680
> URL: https://issues.apache.org/jira/browse/PHOENIX-5680
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 5.1.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The phoenix-queryserver repo duplicates the bin/psql.py file from the core 
> phoenix repo, for no apparent reason.
> Remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5656) Make Phoenix scripts work with Python 3

2020-01-14 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5656:
---
Attachment: 5656-4.x-HBase-1.5-v4.txt

> Make Phoenix scripts work with Python 3
> ---
>
> Key: PHOENIX-5656
> URL: https://issues.apache.org/jira/browse/PHOENIX-5656
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 5656-4.x-HBase-1.5-untested.txt, 
> 5656-4.x-HBase-1.5-v3.txt, 5656-4.x-HBase-1.5-v4.txt
>
>
> Python 2 is being retired in some environments now. We should make sure that 
> the Phoenix scripts work with Python 2 and 3.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-1295) Add testing utility for table creation, population, and checking query results

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-1295:
-

Assignee: (was: Viraj Jasani)

> Add testing utility for table creation, population, and checking query results
> --
>
> Key: PHOENIX-1295
> URL: https://issues.apache.org/jira/browse/PHOENIX-1295
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Gabriel Reid
>Priority: Major
>  Labels: phoenix-hardening
> Attachments: PHOENIX-1295-WIP1.patch
>
>
> Mostly due to the way JDBC is structured in general, it's relatively 
> painful to create a simple test case that just creates a simple table, 
> populates it with a couple of rows, and checks the output of a query.
> Adding to this is the fact that there isn't really a single "right way" to 
> write simple unit tests in Phoenix. Some tests try to cleanly close 
> Statements, ResultSets, and Connections, while others don't. New tests of 
> this sort are often created by first copying an existing test.
> The end result is that a couple of simple test cases for a new built-in 
> function often end up being mostly wrestling with JDBC, with the actual test 
> case getting largely hidden in the noise.
> The purpose of this ticket is to propose a utility to simplify creating 
> tables, populating them, and verifying the output.
> The general API I have in mind would look like this:
> {code}
>  QueryTestUtil.on(jdbcUrl)
>   .createTable("testtable",
>   "id integer not null primary key",
>   "name varchar")
>   .withRows(
>   1, "name1",
>   2, "name2",
>   3, "othername")
>   .verifyQueryResults(
>   "select id, name from testtable where name like 'name%'",
>   1, "name1",
>   2, "name2");
> {code}
> The intention is to make it much less painful to write tests, and also to 
> convert enough of the existing test code to this pattern so that new tests 
> created based on existing code will also follow it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Committers please look at the Phoenix tests and fix your failures

2020-01-14 Thread James Taylor
How about we require the tests to pass as a prerequisite for commit?

On Tue, Jan 14, 2020 at 3:16 PM la...@apache.org  wrote:

>  And I cannot stress enough how important this is for the project. As an
> example: we had the tests fail for just a few days, and during that time we
> had check-ins that broke other tests; now it's quite hard to figure out
> which recent change broke the other tests.
> We need the test suite *always* passing. It's impossible to maintain a
> stable code base the size of Phoenix otherwise.
> -- Lars
> On Tuesday, January 14, 2020, 10:04:12 AM PST, la...@apache.org <
> la...@apache.org> wrote:
>
>   I spent a lot of time making QA better. It can be better, but it's
> stable enough. There are now very few excuses. "Test failure seems
> unrelated" is not an excuse anymore. (4.x-HBase-1.3 has some issue where
> HBase can't seem to start a cluster reliably... but all others are pretty
> stable.)
> After chatting with Andrew Purtell, one thing I was going to offer is to
> simply revert any change that breaks a test. Period. I'd volunteer some of
> my time (hey, isn't that what a Chief Architect in a Fortune 100 company
> should do?!)
> With their changes reverted, people will presumably start to care. :) If I
> hear no objections, I'll start doing that for a while.
> Cheers.
> -- Lars
> On Monday, January 13, 2020, 06:23:01 PM PST, Josh Elser <
> els...@apache.org> wrote:
>
>  How do we keep getting into this mess: unreliable QA, people ignoring
> QA, or something else?
>
> On 1/12/20 9:24 PM, la...@apache.org wrote:
> > ... Not much else to say here...
> > The tests have been failing again for a while... I will NOT fix them
> again this time! Sorry folks.
> >
> > -- Lars
> >
> >
>


[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: PHOENIX-5678.master.000.patch

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of those classes have a lot of logic in overridden methods. We 
> should convert them to named inner classes and use objects of those classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: (was: PHOENIX-5678.master.000.patch)

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of those classes have a lot of logic in overridden methods. We 
> should convert them to named inner classes and use objects of those classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)



[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: PHOENIX-5678.master.000.patch

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch, 
> PHOENIX-5678.master.000.patch, PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of those classes have a lot of logic in overridden methods. We 
> should convert them to named inner classes and use objects of those classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5645) BaseScannerRegionObserver should prevent compaction from purging very recently deleted cells

2020-01-14 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5645:
-
Attachment: PHOENIX-5645-4.14-HBase-1.4.patch

> BaseScannerRegionObserver should prevent compaction from purging very 
> recently deleted cells
> 
>
> Key: PHOENIX-5645
> URL: https://issues.apache.org/jira/browse/PHOENIX-5645
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-5645-4.14-HBase-1.4.patch, 
> PHOENIX-5645-4.x-HBase-1.5-v2.patch, PHOENIX-5645-4.x-HBase-1.5.patch, 
> PHOENIX-5645-4.x-HBase-1.5.v3.patch, PHOENIX-5645-addendum-4.x-HBase-1.5.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Phoenix's SCN feature has some problems, because HBase major compaction can 
> remove Cells that have been deleted or whose TTL or max-versions limit has 
> caused them to expire. 
> For example, IndexTool rebuilds and index scrutiny can both give strange, 
> incorrect results if a major compaction occurs in the middle of their run. In 
> the rebuild case, that's because we're rewriting "history" on the index at the 
> same time that compaction is rewriting "history" by purging deleted and 
> expired cells. 
> Create a new configuration property called "max lookback age", which declares 
> that no data written more recently than the max lookback age can be 
> compacted away. The max lookback age must be smaller than the TTL, and it 
> should not be legal for a user to look back further into the past than the 
> table's TTL. 
> Max lookback age will not be set by default, and the current behavior will be 
> preserved. But if max lookback age is set, it will be enforced by the 
> BaseScannerRegionObserver for all tables. 
> In the future, this should be contributed as a general feature to HBase for 
> arbitrary tables. See HBASE-23602.
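The retention rule proposed in the description can be sketched as a predicate consulted during compaction, plus the validation constraint that the lookback window stays below the TTL. The class and method names below are invented for illustration and do not match the actual Phoenix patch.

```java
// Hedged sketch of the "max lookback age" idea; names are hypothetical.
public class MaxLookbackPolicy {

    // During compaction, a deleted or expired cell must still be retained
    // if it was written within the max lookback window, so SCN queries and
    // index rebuilds can still see that recent "history".
    static boolean retainDespitePurge(long cellTimestampMs, long nowMs, long maxLookbackMs) {
        return nowMs - cellTimestampMs <= maxLookbackMs;
    }

    // The ticket requires the lookback window to be smaller than the table's TTL.
    static void validate(long maxLookbackMs, long ttlMs) {
        if (maxLookbackMs >= ttlMs) {
            throw new IllegalArgumentException("max lookback age must be smaller than TTL");
        }
    }

    public static void main(String[] args) {
        validate(300_000L, 86_400_000L); // 5 min lookback, 1 day TTL: ok
        System.out.println(retainDespitePurge(9_800L, 10_000L, 1_000L)); // recent: true
        System.out.println(retainDespitePurge(1_000L, 10_000L, 1_000L)); // old: false
    }
}
```

In the real feature this check would sit in the region observer's compaction scanner so that cells inside the window survive even if a delete marker or TTL would otherwise purge them.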



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5645) BaseScannerRegionObserver should prevent compaction from purging very recently deleted cells

2020-01-14 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5645:
-
Attachment: PHOENIX-5645-addendum-4.x-HBase-1.5.patch

> BaseScannerRegionObserver should prevent compaction from purging very 
> recently deleted cells
> 
>
> Key: PHOENIX-5645
> URL: https://issues.apache.org/jira/browse/PHOENIX-5645
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-5645-4.x-HBase-1.5-v2.patch, 
> PHOENIX-5645-4.x-HBase-1.5.patch, PHOENIX-5645-4.x-HBase-1.5.v3.patch, 
> PHOENIX-5645-addendum-4.x-HBase-1.5.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Phoenix's SCN feature has some problems, because HBase major compaction can 
> remove Cells that have been deleted or whose TTL or max-versions limit has 
> caused them to expire. 
> For example, IndexTool rebuilds and index scrutiny can both give strange, 
> incorrect results if a major compaction occurs in the middle of their run. In 
> the rebuild case, that's because we're rewriting "history" on the index at the 
> same time that compaction is rewriting "history" by purging deleted and 
> expired cells. 
> Create a new configuration property called "max lookback age", which declares 
> that no data written more recently than the max lookback age can be 
> compacted away. The max lookback age must be smaller than the TTL, and it 
> should not be legal for a user to look back further into the past than the 
> table's TTL. 
> Max lookback age will not be set by default, and the current behavior will be 
> preserved. But if max lookback age is set, it will be enforced by the 
> BaseScannerRegionObserver for all tables. 
> In the future, this should be contributed as a general feature to HBase for 
> arbitrary tables. See HBASE-23602.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Moving Phoenix master to Hbase 2.2

2020-01-14 Thread Andrew Purtell
Take PhoenixAccessController as an example. Over time the HBase interfaces 
change in minor ways, so you'll need different compilation units for this class 
to be able to compile it across a wide range of 1.x releases. However, the 
essential Phoenix functionality does not change. The logic that makes up the 
method bodies can be factored into a class that groups together static helper 
methods containing this common logic. The common class can remain in the core 
module. Then all you have in the version-specific modules is scaffolding, and in 
that scaffolding, calls to the static methods in core. It's not a clever 
refactor, but it is DRY. Over time this can be made cleaner case by case where 
the naive transformation has a distasteful result. 


> On Jan 14, 2020, at 6:40 PM, Andrew Purtell  wrote:
> 


Re: Moving Phoenix master to Hbase 2.2

2020-01-14 Thread Andrew Purtell
It’s not necessary to abstract the HBase interfaces into a compatibility layer, 
at least not to start. At each bump from one minor release to another a fix up 
typically touches a handful of files. The jump from 1.x to 2.x is a bigger deal 
but maybe there should still be separate branches for major HBase versions? 

Anyway, let's assume for now you want to unify all the branches for HBase 1.x. 
Start with the lowest HBase version you want to support, then iterate up to the 
highest HBase version you want to support. Whenever you run into compile 
problems, make a new version-specific Maven module and add logic to the parent 
POM that chooses the right one. Then, for each implicated file, move it into the 
version-specific Maven modules, duplicating as needed, and finally fix it up 
where needed. 

Over time you can iterate over the duplicated files and reduce duplication, but 
there is no need to take that on up front, so the task is not insurmountable. 
It can be incremental. 


> On Jan 14, 2020, at 4:46 PM, Josh Elser  wrote:
> 
> Still not having looked at what Tephra does -- I'm intrigued by what Istvan 
> has in-progress. Waiting to see what he comes up with would be my suggestion 
> :)
> 
>> On 1/14/20 1:12 PM, la...@apache.org wrote:
>> Does somebody volunteer to take this up?
>> I can see whether I can get a resource where I work, but it's highly uncertain.
>> It would need a bit of digging and design work to see how we would abstract 
>> the HBase interface in the most effective way.
>> As mentioned below, Tephra did a good job at this and could serve as an 
>> example here. (Not dinging OMID; OMID does most of its work client-side and 
>> doesn't need these abstractions.)
>> -- Lars
>> On Tuesday, January 14, 2020, 01:13:36 AM PST, István Tóth wrote:
>> Yes, the HBase API signatures change between versions, so we need to
>> compile each compat module against a specific HBase.
>> Whether I can define an internal compatibility API that is switchable at
>> run (startup) time without a performance hit remains to be seen.
>> István
>>> On Tue, Jan 14, 2020 at 3:21 AM Josh Elser  wrote:
>>> Agree that trying to wrangle branches is just too frustrating and
>>> error-prone.
>>> 
>>> It would also be great if we could have a single Phoenix jar that works
>>> across HBase versions, but would not die on that hill :)
>>> 
>>> On 12/20/19 5:04 AM, la...@apache.org wrote:
>>> I said _provided_ they can be isolated easily :) (I meant it in the
>>> sense of assuming it's easy).
>>> As I said though, Tephra has a similar problem and they did a really
>>> good job isolating HBase versions. We can learn from them. Sometimes they
>>> isolate the change only, and sometimes the class needs to be copied, but
>>> even then it's the one class that is copied, not another branch that needs
>>> to be kept in sync.
>>>
>>> This may also drive the desperately necessary refactoring of Phoenix to
>>> make these things easier to isolate, or to reduce the copying to a minimum.
>>> And we'd need to think through testing carefully.
>>>
>>> The branch per Phoenix and HBase version is too complex, IMHO. And the
>>> complex branch to HBase version mapping that Istvan outlines below confirms
>>> that.
>>>
>>> We should all take a brief look at the Tephra solution and see whether
>>> we can apply that. (And since Tephra is part of the fold now, perhaps
>>> someone can help there...?)
>>> Cheers.
>>> -- Lars
>>>
>>> On Thursday, December 19, 2019, 8:34:15 PM GMT+1, Geoffrey Jacoby <
>>> gjac...@gmail.com> wrote:
>>>
>>> Lars,
>>>
>>> I'm curious why you say the differences are easily isolated -- many of the
>>> core classes of Phoenix either directly inherit HBase classes or implement
>>> HBase interfaces, and those can vary between minor versions. (See my above
>>> example of a new coprocessor hook on BaseRegionObserver.)
>>>
>>> Geoffrey
>>>
>>> On Thu, Dec 19, 2019 at 10:54 AM la...@apache.org wrote:
 
> Yep. The differences are pretty minimal - provided they can be
>>> isolated
> easily.
> Tephra might be a pretty good model. It supports various versions of
>>> HBase
> in a single branch and has similar issues as Phoenix (coprocessors,
>>> etc).
> -- Lars
>   On Thursday, December 19, 2019, 7:07:51 PM GMT+1, Josh Elser <
> els...@apache.org> wrote:
> 
> To clarify, you think that compat modules are better than that
> separate-branches model in 4.x?
> 
> On 12/18/19 11:29 AM, la...@apache.org wrote:
>> This is really hard to follow.
>> 
>> I think we should do the same with HBase dependencies in Phoenix that
> HBase does with Hadoop dependencies.
>> 
>> That is:  We could have a maven module with the specific HBase version
> dependent code.
>> Btw. Tephra does the same... A module for HBase version specific code.
>> -- Lars
>> 
>> On Tuesday, December 17, 

[jira] [Reopened] (PHOENIX-5644) IndexUpgradeTool should sleep only once if there is at least one immutable table provided

2020-01-14 Thread Swaroopa Kadam (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reopened PHOENIX-5644:
-

> IndexUpgradeTool should sleep only once if there is at least one immutable 
> table provided
> -
>
> Key: PHOENIX-5644
> URL: https://issues.apache.org/jira/browse/PHOENIX-5644
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 5.1.0, 4.15.1, 4.14.4, 4.16.0
>
> Attachments: PHOENIX-5644.4.x-HBase-1.3.add.patch, 
> PHOENIX-5644.4.x-HBase-1.3.add.v1.patch, 
> PHOENIX-5644.4.x-HBase-1.3.addv1.patch, 
> PHOENIX-5644.4.x-HBase-1.3.addv2.patch, PHOENIX-5644.4.x-HBase-1.3.patch, 
> PHOENIX-5644.4.x-HBase-1.3.v1.patch, PHOENIX-5644.4.x-HBase-1.3.v2.patch, 
> PHOENIX-5644.4.x-HBase-1.3.v3.patch, PHOENIX-5644.v1.patch
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-1295) Add testing utility for table creation, population, and checking query results

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-1295:
-

Assignee: Viraj Jasani

> Add testing utility for table creation, population, and checking query results
> --
>
> Key: PHOENIX-1295
> URL: https://issues.apache.org/jira/browse/PHOENIX-1295
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Gabriel Reid
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: phoenix-hardening
> Attachments: PHOENIX-1295-WIP1.patch
>
>
> Mostly due to the way JDBC is structured in general, it's relatively 
> painful to create a simple test case that just creates a simple table, 
> populates it with a couple of rows, and checks the output of a query.
> Adding to this is the fact that there isn't really a single "right way" to 
> write simple unit tests in Phoenix. Some tests try to cleanly close 
> Statements, ResultSets, and Connections, while others don't. New tests of 
> this sort are often created by first copying an existing test.
> The end result is that a couple of simple test cases for a new built-in 
> function often end up being mostly wrestling with JDBC, with the actual test 
> case getting largely hidden in the noise.
> The purpose of this ticket is to propose a utility to simplify creating 
> tables, populating them, and verifying the output.
> The general API I have in mind would look like this:
> {code}
>  QueryTestUtil.on(jdbcUrl)
>   .createTable("testtable",
>   "id integer not null primary key",
>   "name varchar")
>   .withRows(
>   1, "name1",
>   2, "name2",
>   3, "othername")
>   .verifyQueryResults(
>   "select id, name from testtable where name like 'name%'",
>   1, "name1",
>   2, "name2");
> {code}
> The intention is to make it much less painful to write tests, and also to 
> convert enough of the existing test code to this pattern so that new tests 
> created based on existing code will also follow it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Python2 EOL

2020-01-14 Thread Josh Elser

Ah, that would make sense. Thanks Gabriel.

I can't say I've seen much adoption of Phoenix on Windows. The folks I 
have seen are using it either via the JDBC driver we provide or the ODBC 
driver from Simba.


On 1/14/20 2:29 AM, Gabriel Reid wrote:

My recollection (or maybe it was just my assumption) was that they
were created to provide/facilitate compatibility with Windows.

- Gabriel

On Tue, Jan 14, 2020 at 3:22 AM Josh Elser  wrote:


Do you recall why they were converted? I can also go digging in
Jira/Mail archives.

On 1/12/20 9:11 PM, la...@apache.org wrote:

Heh.
They used to be shell scripts and then we converted them to Python. Personally I 
was not a fan of that back then, but anyway.
In any case there's some work to do.

-- Lars

On Friday, January 10, 2020, 7:55:43 AM PST, Josh Elser wrote:

   I think converting them to Bash is the right thing to do. We're not
doing anything fancy.

On 1/9/20 5:10 PM, Andrew Purtell wrote:

Some of the Python scripts are glorified shell scripts and could be
rewritten as such - for example, the launch scripts for psql, sqlline, and
PQS. I get that Python is and was trendier than Bash, but sometimes the
right tool for the job is the right tool for the job. Unlike Python, Bash
has a very stable grammar.

On Thu, Jan 9, 2020 at 12:34 PM la...@apache.org  wrote:


Hi all,

python2 is officially EOL'd. No more changes, improvements, or fixes will
be done by the developers.
Some Linux distributions stopped shipping Python2.

It turns out our scripts do not work with Python3, see: [PHOENIX-5656]
Make Phoenix scripts work with Python 3 - ASF JIRA.


So what should we do?

As outlined in the jira we have 3 options:

1. Do nothing. Phoenix will only work with the EOL'd Python 2.
2. Try to make all the scripts work with both Python 2 and 3. That's
actually not possible in some cases, but we can get close... And it's a lot of
work and experimentation.
3. Convert all scripts to Python 3. There's a tool (2to3) to do that
automatically. Phoenix will then _only_ work with Python 3.

Option 2 is some work - some of it not trivial - that someone would need
to pick up. Perhaps we could maintain two versions of all scripts, figure out
the version of Python, and use the right one?

Let's discuss on the jira. I can't be the only one interested in this :)

Cheers.

-- Lars









Re: Moving Phoenix master to Hbase 2.2

2020-01-14 Thread Josh Elser
Still not having looked at what Tephra does -- I'm intrigued by what 
Istvan has in progress. Waiting to see what he comes up with would be my 
suggestion :)


On 1/14/20 1:12 PM, la...@apache.org wrote:

  Does somebody volunteer to take this up?
I can see whether I can get a resource where I work, but it's highly uncertain.
It would need a bit of digging and design work to see how we would abstract the 
HBase interface in the most effective way.
As mentioned below, Tephra did a good job at this and could serve as an example 
here. (Not dinging OMID; OMID does most of its work client-side and doesn't 
need these abstractions.)
-- Lars

 On Tuesday, January 14, 2020, 01:13:36 AM PST, István Tóth 
 wrote:
  
  Yes, the HBase API signatures change between versions, so we need to
compile each compat module against a specific HBase.

Whether I can define an internal compatibility API that is switchable at
run (startup) time without a performance hit remains to be seen.
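A minimal sketch of such an internal compatibility API (class and method names here are assumptions for illustration, not the actual in-progress code): each implementation would live in a Maven module compiled against one HBase line, and a factory would pick one at startup, so hot paths pay only a virtual call rather than per-call reflection.

```java
// Sketch only: real compat modules would wrap the version-specific HBase calls.
interface HBaseCompat {
    String targetHBaseVersion();
}

class HBase22Compat implements HBaseCompat {
    public String targetHBaseVersion() { return "2.2"; }
}

class HBase23Compat implements HBaseCompat {
    public String targetHBaseVersion() { return "2.3"; }
}

public class HBaseCompatFactory {
    // Chosen once at startup; afterwards callers go through the interface,
    // so there is no per-operation reflection cost.
    public static HBaseCompat forVersion(String detectedVersion) {
        if (detectedVersion.startsWith("2.3")) {
            return new HBase23Compat();
        }
        return new HBase22Compat();
    }

    public static void main(String[] args) {
        System.out.println(forVersion("2.3.0").targetHBaseVersion());
    }
}
```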

István

On Tue, Jan 14, 2020 at 3:21 AM Josh Elser  wrote:


Agree that trying to wrangle branches is just too frustrating and
error-prone.

It would also be great if we could have a single Phoenix jar that works
across HBase versions, but would not die on that hill :)

On 12/20/19 5:04 AM, la...@apache.org wrote:

   I said _provided_ they can be isolated easily :) (I meant it in the
sense of assuming it's easy).

As I said though, Tephra has a similar problem and they did a really
good job isolating HBase versions. We can learn from them. Sometimes they
isolate the change only, and sometimes the class needs to be copied, but
even then it's the one class that is copied, not another branch that needs
to be kept in sync.


This may also drive the desperately necessary refactoring of Phoenix to
make these things easier to isolate, or to reduce the copying to a minimum.
And we'd need to think through testing carefully.


The branch per Phoenix and HBase version is too complex, IMHO. And the
complex branch to HBase version mapping that Istvan outlines below confirms
that.


We should all take a brief look at the Tephra solution and see whether
we can apply that. (And since Tephra is part of the fold now, perhaps
someone can help there...?)

Cheers.
-- Lars

       On Thursday, December 19, 2019, 8:34:15 PM GMT+1, Geoffrey Jacoby <

gjac...@gmail.com> wrote:


   Lars,

I'm curious why you say the differences are easily isolated -- many of the
core classes of Phoenix either directly inherit HBase classes or implement
HBase interfaces, and those can vary between minor versions. (See my above
example of a new coprocessor hook on BaseRegionObserver.)

Geoffrey

On Thu, Dec 19, 2019 at 10:54 AM la...@apache.org 

wrote:



     Yep. The differences are pretty minimal - provided they can be
isolated easily.
Tephra might be a pretty good model. It supports various versions of HBase
in a single branch and has similar issues as Phoenix (coprocessors, etc).

-- Lars
       On Thursday, December 19, 2019, 7:07:51 PM GMT+1, Josh Elser <
els...@apache.org> wrote:

     To clarify, you think that compat modules are better than the
separate-branches model in 4.x?

On 12/18/19 11:29 AM, la...@apache.org wrote:

This is really hard to follow.

I think we should do the same with HBase dependencies in Phoenix that
HBase does with Hadoop dependencies.


That is: We could have a maven module with the specific HBase version
dependent code.

Btw. Tephra does the same... A module for HBase version specific code.
-- Lars

         On Tuesday, December 17, 2019, 10:00:31 AM GMT+1, Istvan Toth <

st...@apache.org> wrote:


     What do you think about tying the minor releases to HBase minor
releases (not necessarily one-to-one)?

for example (provided 5.1 is 2020H1)

5.0.0 -> HB 2.0
5.1.0 -> HB 2.2.2 (and whatever 2.1 is API compatible with it)
5.1.x -> HB 2.2.x (treat as maintenance branch, no major new features)
5.2.0 -> HB 2.3.0 (if released by that time)
5.2.x -> HB 2.3.x (treat as maintenance branch, no major new features)
5.3.0 -> HB 2.3.x (if there is no new major/minor HBase release)
master -> latest released HBase version

Alternatively, we could stick with the same HBase version for patch
releases that we used for the first minor release.

This would limit the number of branches that we have to maintain in
parallel, while providing maintenance branches for older releases, and
timely-ish Phoenix releases.

The drawback is that users of old HBase versions won't get the latest
features, on the other hand they can expect more polish.

Istvan

On Thu, Dec 12, 2019 at 8:05 PM Geoffrey Jacoby 

wrote:



Since HBase 2.0 is EOM'ed, I'm +1 for not worrying about 2.0.x
compatibility with the 5.x branch going forward.

Given how coupled Phoenix is to the implementation details of HBase though,
I'm not sure trying to abstract those away to keep one Phoenix branch per
HBase major version is practical, however. At the least, it would be really


[jira] [Updated] (PHOENIX-5265) [UMBRELLA] Phoenix Test should use gold files for result comparison instead of using hard-coded comparison.

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5265:
--
Summary: [UMBRELLA] Phoenix Test should use gold files for result 
comparison instead of using hard-coded comparison.  (was: (Umbrella)Phoenix 
Test should use gold files for result comparison instead of using hard-coded 
comparison.)

> [UMBRELLA] Phoenix Test should use gold files for result comparison instead 
> of using hard-coded comparison.
> 
>
> Key: PHOENIX-5265
> URL: https://issues.apache.org/jira/browse/PHOENIX-5265
> Project: Phoenix
>  Issue Type: Improvement
> Environment: {code:java}
> // code placeholder
> @Test
> public void testWithMultiCF() throws Exception {
> int nRows = 20;
> Connection conn = getConnection(0);
> PreparedStatement stmt;
> conn.createStatement().execute(
> "CREATE TABLE " + fullTableName
> + "(k VARCHAR PRIMARY KEY, a.v INTEGER, b.v INTEGER, c.v 
> INTEGER NULL, d.v INTEGER NULL) "
> + tableDDLOptions );
> stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + " 
> VALUES(?,?, ?, ?, ?)");
> byte[] val = new byte[250];
> for (int i = 0; i < nRows; i++) {
> stmt.setString(1, Character.toString((char)('a' + i)) + 
> Bytes.toString(val));
> stmt.setInt(2, i);
> stmt.setInt(3, i);
> stmt.setInt(4, i);
> stmt.setInt(5, i);
> stmt.executeUpdate();
> }
> conn.commit();
> stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + "(k, c.v, 
> d.v) VALUES(?,?,?)");
> for (int i = 0; i < 5; i++) {
> stmt.setString(1, Character.toString((char)('a' + 'z' + i)) + 
> Bytes.toString(val));
> stmt.setInt(2, i);
> stmt.setInt(3, i);
> stmt.executeUpdate();
> }
> conn.commit();
> ResultSet rs;
> String actualExplainPlan;
> collectStatistics(conn, fullTableName);
> List keyRanges = getAllSplits(conn, fullTableName);
> assertEquals(26, keyRanges.size());
> rs = conn.createStatement().executeQuery("EXPLAIN SELECT * FROM " + 
> fullTableName);
> actualExplainPlan = QueryUtil.getExplainPlan(rs);
> assertEquals(
> "CLIENT 26-CHUNK 25 ROWS " + (columnEncoded ? ( mutable ? "12530" 
> : "14190" ) : 
> (TransactionFactory.Provider.OMID.name().equals(transactionProvider)) ? 
> "25320" : "12420") +
> " BYTES PARALLEL 1-WAY FULL SCAN OVER " + 
> physicalTableName,
> actualExplainPlan);
> ConnectionQueryServices services = 
> conn.unwrap(PhoenixConnection.class).getQueryServices();
> List regions = 
> services.getAllTableRegions(Bytes.toBytes(physicalTableName));
> assertEquals(1, regions.size());
> collectStatistics(conn, fullTableName, Long.toString(1000));
> keyRanges = getAllSplits(conn, fullTableName);
> boolean oneCellPerColFamilyStorageScheme = !mutable && columnEncoded;
> boolean hasShadowCells = 
> TransactionFactory.Provider.OMID.name().equals(transactionProvider);
> assertEquals(oneCellPerColFamilyStorageScheme ? 14 : hasShadowCells ? 24 
> : 13, keyRanges.size());
> rs = conn
> .createStatement()
> .executeQuery(
> "SELECT 
> COLUMN_FAMILY,SUM(GUIDE_POSTS_ROW_COUNT),SUM(GUIDE_POSTS_WIDTH),COUNT(*) from 
> \"SYSTEM\".STATS where PHYSICAL_NAME = '"
> + physicalTableName + "' GROUP BY COLUMN_FAMILY 
> ORDER BY COLUMN_FAMILY");
> assertTrue(rs.next());
> assertEquals("A", rs.getString(1));
> assertEquals(25, rs.getInt(2));
> assertEquals(columnEncoded ? ( mutable ? 12530 : 14190 ) : hasShadowCells 
> ? 25320 : 12420, rs.getInt(3));
> assertEquals(oneCellPerColFamilyStorageScheme ? 13 : hasShadowCells ? 23 
> : 12, rs.getInt(4));
> assertTrue(rs.next());
> assertEquals("B", rs.getString(1));
> assertEquals(oneCellPerColFamilyStorageScheme ? 25 : 20, rs.getInt(2));
> assertEquals(columnEncoded ? ( mutable ? 5600 : 7260 ) : hasShadowCells ? 
> 11260 : 5540, rs.getInt(3));
> assertEquals(oneCellPerColFamilyStorageScheme ? 7 : hasShadowCells ? 10 : 
> 5, rs.getInt(4));
> assertTrue(rs.next());
> assertEquals("C", rs.getString(1));
> assertEquals(25, rs.getInt(2));
> assertEquals(columnEncoded ? ( mutable ? 7005 : 7280 ) : hasShadowCells ? 
> 14085 : 6930, rs.getInt(3));
> assertEquals(hasShadowCells ? 13 : 7, rs.getInt(4));
> assertTrue(rs.next());
> assertEquals("D", rs.getString(1));
> assertEquals(25, rs.getInt(2));
> assertEquals(columnEncoded ? ( mutable ? 7005 : 7280 ) : hasShadowCells ? 
> 14085 : 6930, rs.getInt(3));
> assertEquals(hasShadowCells ? 13 : 7, rs.getInt(4));

[jira] [Updated] (PHOENIX-5265) (Umbrella)Phoenix Test should use gold files for result comparison instead of using hard-coded comparison.

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5265:
--
Summary: (Umbrella)Phoenix Test should use gold files for result comparison 
instead of using hard-coded comparison.  (was: Phoenix Test should use gold 
files for result comparison instead of using hard-coded comparison.)

> (Umbrella)Phoenix Test should use gold files for result comparison instead of 
> using hard-coded comparison.
> ---
>
> Key: PHOENIX-5265
> URL: https://issues.apache.org/jira/browse/PHOENIX-5265
> Project: Phoenix
>  Issue Type: Improvement

[jira] [Updated] (PHOENIX-5674) IndexTool to not write already correct index rows/CFs

2020-01-14 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5674:
---
Attachment: PHOENIX-5674.master.001.patch

> IndexTool to not write already correct index rows/CFs
> -
>
> Key: PHOENIX-5674
> URL: https://issues.apache.org/jira/browse/PHOENIX-5674
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.1, 4.14.4
>
> Attachments: PHOENIX-5674.master.001.patch
>
>
> IndexTool can avoid writing index rows if they are already consistent with 
> the data-table. This will be especially useful when rebuilding an index on a DR 
> site where indexes are replicated already, but a rebuild might be needed for catch-up.
> Likewise, during upgrades from old indexing scheme to new consistent indexing 
> scheme, if the index data columns are consistent already, IndexTool should 
> only rewrite the EmptyColumn to mark the row as verified instead of writing 
> the data columns too.
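A sketch of the per-row decision the description implies (class, enum, and method names here are illustrative, not taken from the patch):

```java
public class RebuildDecision {
    public enum Action { SKIP, SET_VERIFIED_ONLY, REWRITE_ROW }

    /**
     * Decide what a rebuild should write for one index row, given whether its
     * data columns already match the data table and whether its empty column
     * is already marked verified.
     */
    public static Action decide(boolean dataColumnsMatch, boolean alreadyVerified) {
        if (dataColumnsMatch && alreadyVerified) {
            return Action.SKIP;              // nothing to write at all
        }
        if (dataColumnsMatch) {
            return Action.SET_VERIFIED_ONLY; // only flip the empty column
        }
        return Action.REWRITE_ROW;           // full index row rewrite
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true)); // prints SKIP
    }
}
```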



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Committers please look at the Phoenix tests and fix your failures

2020-01-14 Thread la...@apache.org
 And I cannot stress enough how important this is for the project. As an 
example: we had the tests fail for just a few days; during that time we have 
had check-ins that broke other tests; now it's quite hard to figure out which 
recent change broke the other tests.
We need the test suite *always* passing. It's impossible to maintain a stable 
code base the size of Phoenix otherwise.
-- Lars
On Tuesday, January 14, 2020, 10:04:12 AM PST, la...@apache.org 
 wrote:  
 
  I spent a lot of time making QA better. It can be better, but it's stable 
enough. There are now very few excuses. "Test failure seems unrelated" is not 
an excuse anymore. (4.x-HBase-1.3 has some issue where HBase can't seem to start 
a cluster reliably... but all others are pretty stable.)
After chatting with Andrew Purtell, one thing I was going to offer is to 
simply revert any change that breaks a test. Period. I'd volunteer some of my 
time (hey, isn't that what a Chief Architect in a Fortune 100 company should 
do?!)
With their changes reverted, people will presumably start to care. :) If I hear 
no objections, I'll start doing that for a while.
Cheers.
-- Lars
    On Monday, January 13, 2020, 06:23:01 PM PST, Josh Elser 
 wrote:  
 
 How do we keep getting into this mess: unreliable QA, people ignoring 
QA, or something else?

On 1/12/20 9:24 PM, la...@apache.org wrote:
> ... Not much else to say here...
> The tests have been failing again for a while... I will NOT fix them again 
> this time! Sorry folks.
> 
> -- Lars
> 
> 
    

[jira] [Assigned] (PHOENIX-5265) Phoenix Test should use gold files for result comparison instead of using hard-coded comparison.

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-5265:
-

Assignee: Viraj Jasani

> Phoenix Test should use gold files for result comparison instead of using 
> hard-coded comparison.
> -
>
> Key: PHOENIX-5265
> URL: https://issues.apache.org/jira/browse/PHOENIX-5265
> Project: Phoenix
>  Issue Type: Improvement
[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: PHOENIX-5678.master.000.patch

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch, 
> PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of them have lots of logic placed in overridden methods. We 
> should convert them to inner classes and use objects of those extended classes.





[jira] [Updated] (PHOENIX-5645) BaseScannerRegionObserver should prevent compaction from purging very recently deleted cells

2020-01-14 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5645:
-
Description: 
Phoenix's SCN feature has some problems, because HBase major compaction can 
remove Cells that have been deleted or whose TTL or max-versions setting has 
caused them to expire. 

For example, IndexTool rebuilds and index scrutiny can both give strange, 
incorrect results if a major compaction occurs in the middle of their run. In 
the rebuild case, it's because we're rewriting "history" on the index at the 
same time that compaction is rewriting "history" by purging deleted and expired 
cells. 

Create a new configuration property called "max lookback age", which declares 
that no data written more recently than the max lookback age will be compacted 
away. The max lookback age must be smaller than the TTL, and it should not be 
legal for a user to look back further in the past than the table's TTL. 

Max lookback age by default will not be set, and the current behavior will be 
preserved. But if max lookback age is set, it will be enforced by the 
BaseScannerRegionObserver for all tables. 

In the future, this should be contributed as a general feature to HBase for 
arbitrary tables. See HBASE-23602.

  was:
IndexTool rebuilds and index scrutiny can both give strange, incorrect results 
if a major compaction occurs in the middle of their run. In the rebuild case, 
it's because we're rewriting "history" on the index at the same time that 
compaction is rewriting "history" by purging deleted and expired cells. 

In the case of scrutiny, it's because it does an SCN-based lookback, and if 
versions are purged on the index before their equivalent data table rows, you 
can get false errors. 

Since in the new indexing path we already have a coprocessor on each index, it 
should override the compaction hook to shield rows newer than some configurable 
age from being purged during a major compaction.

In the future, this should be contributed as a general feature to HBase for 
arbitrary tables. 

Summary: BaseScannerRegionObserver should prevent compaction from 
purging very recently deleted cells  (was: GlobalIndexChecker should prevent 
compaction from purging very recently deleted cells)

> BaseScannerRegionObserver should prevent compaction from purging very 
> recently deleted cells
> 
>
> Key: PHOENIX-5645
> URL: https://issues.apache.org/jira/browse/PHOENIX-5645
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-5645-4.x-HBase-1.5-v2.patch, 
> PHOENIX-5645-4.x-HBase-1.5.patch, PHOENIX-5645-4.x-HBase-1.5.v3.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Phoenix's SCN feature has some problems, because HBase major compaction can 
> remove Cells that have been deleted or whose TTL or max-versions setting has 
> caused them to expire. 
> For example, IndexTool rebuilds and index scrutiny can both give strange, 
> incorrect results if a major compaction occurs in the middle of their run. In 
> the rebuild case, it's because we're rewriting "history" on the index at the 
> same time that compaction is rewriting "history" by purging deleted and 
> expired cells. 
> Create a new configuration property called "max lookback age", which declares 
> that no data written more recently than the max lookback age will be 
> compacted away. The max lookback age must be smaller than the TTL, and it 
> should not be legal for a user to look back further in the past than the 
> table's TTL. 
> Max lookback age by default will not be set, and the current behavior will be 
> preserved. But if max lookback age is set, it will be enforced by the 
> BaseScannerRegionObserver for all tables. 
> In the future, this should be contributed as a general feature to HBase for 
> arbitrary tables. See HBASE-23602.
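The retention rule described above can be sketched as a pure timestamp check (this illustrates the rule only; the actual code hooks HBase's compaction scanner, and the names below are assumptions):

```java
public class MaxLookbackCheck {
    private final long maxLookbackAgeMs;

    public MaxLookbackCheck(long maxLookbackAgeMs) {
        this.maxLookbackAgeMs = maxLookbackAgeMs;
    }

    /**
     * True if a cell written at cellTsMs is inside the max-lookback window at
     * time nowMs, and therefore must survive major compaction even if it is
     * deleted or past max versions.
     */
    public boolean mustRetain(long cellTsMs, long nowMs) {
        return nowMs - cellTsMs <= maxLookbackAgeMs;
    }

    public static void main(String[] args) {
        MaxLookbackCheck check = new MaxLookbackCheck(60_000L); // 1-minute window
        long now = 1_000_000L;
        System.out.println(check.mustRetain(now - 30_000L, now));  // recent cell: retained
        System.out.println(check.mustRetain(now - 120_000L, now)); // old cell: purgeable
    }
}
```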





[jira] [Updated] (PHOENIX-5636) Improve the error message when client connects to server with higher major version

2020-01-14 Thread Christine Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Feng updated PHOENIX-5636:

Attachment: PHOENIX-5636.master.v4.patch

> Improve the error message when client connects to server with higher major 
> version
> --
>
> Key: PHOENIX-5636
> URL: https://issues.apache.org/jira/browse/PHOENIX-5636
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Sandeep Guggilam
>Assignee: Christine Feng
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 4.15.1
>
> Attachments: PHOENIX-5636.master.v1.patch, 
> PHOENIX-5636.master.v2.patch, PHOENIX-5636.master.v3.patch, 
> PHOENIX-5636.master.v4.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a 4.14 client connects to a 5.0 server, it errors out saying " Outdated 
> jars. Newer Phoenix clients can't communicate with older Phoenix servers"
> It should probably error out with "Major version of client is less than that 
> of the server"





[jira] [Updated] (PHOENIX-5630) MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED SQLExceptions should print existing size

2020-01-14 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5630:
--
Priority: Minor  (was: Major)

> MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED SQLExceptions 
> should print existing size
> 
>
> Key: PHOENIX-5630
> URL: https://issues.apache.org/jira/browse/PHOENIX-5630
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Neha Gupta
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 4.15.1
>
> Attachments: PHOENIX-5630.patch, PHOENIX-5630.v1.patch, 
> PHOENIX-5630.v2.patch
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> These exceptions do not print the existing size of the MutationState. We 
> should add the existing size to the exception message to help with debugging. 
> For example:
> {code:java}
> Caused by: java.sql.SQLException: ERROR 730 (LIM02): MutationState size is 
> bigger than maximum allowed number of bytes
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.execute.MutationState.throwIfTooBig(MutationState.java:371)
> at org.apache.phoenix.execute.MutationState.join(MutationState.java:471)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:409)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
> at 
> org.apache.phoenix.jdbc.DelegatePreparedStatement.execute(DelegatePreparedStatement.java:284)
> {code}
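A sketch of the improved message (the exact wording and helper name here are a suggestion, not the committed patch): pass the current and maximum sizes into the exception text.

```java
public class MutationSizeMessages {
    /** Build an error message that includes the actual and allowed sizes. */
    public static String tooManyBytes(long actualBytes, long maxBytes) {
        return "MutationState size of " + actualBytes
            + " bytes is bigger than maximum allowed number of bytes "
            + maxBytes;
    }

    public static void main(String[] args) {
        System.out.println(tooManyBytes(1_048_576L, 524_288L));
    }
}
```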





[jira] [Updated] (PHOENIX-5671) Add tests for ViewUtil

2020-01-14 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5671:
---
Attachment: (was: PHOENIX-5671.patch)

> Add tests for ViewUtil
> --
>
> Key: PHOENIX-5671
> URL: https://issues.apache.org/jira/browse/PHOENIX-5671
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.16.0
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-5671.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Adding tests for ViewUtil to verify that hasChildViews, 
> getSystemTableForChildLinks, and other APIs are working as expected.





[jira] [Updated] (PHOENIX-5671) Add tests for ViewUtil

2020-01-14 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5671:
---
Attachment: PHOENIX-5671.patch

> Add tests for ViewUtil
> --
>
> Key: PHOENIX-5671
> URL: https://issues.apache.org/jira/browse/PHOENIX-5671
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.16.0
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-5671.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Adding tests for ViewUtil to verify that hasChildViews, 
> getSystemTableForChildLinks, and other APIs are working as expected.





[jira] [Updated] (PHOENIX-5681) SYSCAT VIEW_STATEMENT column doesn't store entire DDL for VIEW

2020-01-14 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5681:
---
Priority: Minor  (was: Major)

> SYSCAT VIEW_STATEMENT column doesn't store entire DDL for VIEW
> -
>
> Key: PHOENIX-5681
> URL: https://issues.apache.org/jira/browse/PHOENIX-5681
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Xinyi Yan
>Priority: Minor
> Attachments: Screen Shot 2020-01-14 at 10.56.49 AM.png
>
>
> !Screen Shot 2020-01-14 at 10.56.49 AM.png!
>  
> VIEW_STATEMENT column only stores partial DDL when we create a view.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5681) SYSCAT VIEW_STATEMENT column doesn't store entire DDL for VIEW

2020-01-14 Thread Xinyi Yan (Jira)
Xinyi Yan created PHOENIX-5681:
--

 Summary: SYSCAT VIEW_STATEMENT column doesn't store entire DDL 
for VIEW
 Key: PHOENIX-5681
 URL: https://issues.apache.org/jira/browse/PHOENIX-5681
 Project: Phoenix
  Issue Type: Bug
Reporter: Xinyi Yan
 Attachments: Screen Shot 2020-01-14 at 10.56.49 AM.png

!Screen Shot 2020-01-14 at 10.56.49 AM.png!

 

VIEW_STATEMENT column only stores partial DDL when we create a view.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5634) Use 'phoenix.default.update.cache.frequency' from connection properties at query time

2020-01-14 Thread Nitesh Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitesh Maheshwari updated PHOENIX-5634:
---
Affects Version/s: 5.1.0

> Use 'phoenix.default.update.cache.frequency' from connection properties at 
> query time
> -
>
> Key: PHOENIX-5634
> URL: https://issues.apache.org/jira/browse/PHOENIX-5634
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Minor
> Fix For: 4.15.1
>
> Attachments: PHOENIX-5634.master.v1.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> We have the config 'phoenix.default.update.cache.frequency' which specifies 
> the time a client should wait before it refreshes its metadata cache entry 
> for a table by fetching the latest metadata from system catalog. This value 
> could be set for a table in the following ways (in the following preference 
> order):
>  # Specifying UPDATE_CACHE_FREQUENCY in table creation DDL
>  # Specifying the connection property 'phoenix.default.update.cache.frequency'
>  # Using the default 'phoenix.default.update.cache.frequency'
> At query time, we look at whether UPDATE_CACHE_FREQUENCY was specified for 
> the table and decide based on that value if the latest metadata for a table 
> should be fetched from system catalog to update the cache. However, when the 
> table doesn't have UPDATE_CACHE_FREQUENCY specified we should look at the 
> connection property 'phoenix.default.update.cache.frequency' (or the default 
> 'phoenix.default.update.cache.frequency' when the connection level property 
> is not set) to make that decision. The support for the latter is missing; this 
> Jira is intended to add it.
> This will aid existing installations where the tables were created without a 
> specified UPDATE_CACHE_FREQUENCY, and thus always hit the system catalog to 
> get the latest metadata when referenced. With this support, we will be able 
> to reduce the load on system catalog by specifying a connection level 
> property for all tables referenced from the connection (as against UPSERTing 
> each table entry in system catalog to set an UPDATE_CACHE_FREQUENCY value).
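
The preference order described above can be sketched as a small resolution function. This is illustrative Python only, not Phoenix's actual client code; the function and parameter names are assumptions for the sketch:

```python
# Hypothetical sketch of the UPDATE_CACHE_FREQUENCY resolution order.
DEFAULT_UPDATE_CACHE_FREQUENCY = 0  # 0 ms: always refetch metadata (legacy behavior)

def effective_cache_frequency(table_frequency, connection_property):
    """Return the update-cache frequency (ms) to honor for a table."""
    if table_frequency is not None:        # 1. value from the CREATE TABLE DDL
        return table_frequency
    if connection_property is not None:    # 2. 'phoenix.default.update.cache.frequency'
        return connection_property         #    set on the connection
    return DEFAULT_UPDATE_CACHE_FREQUENCY  # 3. global default

def should_fetch_metadata(ms_since_last_refresh, table_frequency, connection_property):
    """Decide whether to hit system catalog for fresh table metadata."""
    return ms_since_last_refresh >= effective_cache_frequency(
        table_frequency, connection_property)
```

With a connection-level frequency set, tables created without UPDATE_CACHE_FREQUENCY fall through to step 2 instead of hammering system catalog on every query.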



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5634) Use 'phoenix.default.update.cache.frequency' from connection properties at query time

2020-01-14 Thread Nitesh Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitesh Maheshwari updated PHOENIX-5634:
---
Fix Version/s: 5.1.0

> Use 'phoenix.default.update.cache.frequency' from connection properties at 
> query time
> -
>
> Key: PHOENIX-5634
> URL: https://issues.apache.org/jira/browse/PHOENIX-5634
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Minor
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5634.master.v1.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> We have the config 'phoenix.default.update.cache.frequency' which specifies 
> the time a client should wait before it refreshes its metadata cache entry 
> for a table by fetching the latest metadata from system catalog. This value 
> could be set for a table in the following ways (in the following preference 
> order):
>  # Specifying UPDATE_CACHE_FREQUENCY in table creation DDL
>  # Specifying the connection property 'phoenix.default.update.cache.frequency'
>  # Using the default 'phoenix.default.update.cache.frequency'
> At query time, we look at whether UPDATE_CACHE_FREQUENCY was specified for 
> the table and decide based on that value if the latest metadata for a table 
> should be fetched from system catalog to update the cache. However, when the 
> table doesn't have UPDATE_CACHE_FREQUENCY specified we should look at the 
> connection property 'phoenix.default.update.cache.frequency' (or the default 
> 'phoenix.default.update.cache.frequency' when the connection level property 
> is not set) to make that decision. The support for the latter is missing; this 
> Jira is intended to add it.
> This will aid existing installations where the tables were created without a 
> specified UPDATE_CACHE_FREQUENCY, and thus always hit the system catalog to 
> get the latest metadata when referenced. With this support, we will be able 
> to reduce the load on system catalog by specifying a connection level 
> property for all tables referenced from the connection (as against UPSERTing 
> each table entry in system catalog to set an UPDATE_CACHE_FREQUENCY value).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Moving Phoenix master to Hbase 2.2

2020-01-14 Thread la...@apache.org
 Does somebody volunteer to take this up?
I can see whether I can get a resource where I work, but it's highly uncertain.
It would need a bit of digging and design work to see how we would abstract the 
HBase interface in the most effective way.
As mentioned below, Tephra did a good job at this and could serve as an example 
here. (Not dinging OMID; OMID does most of its work client side and doesn't 
need these abstractions.)
-- Lars

On Tuesday, January 14, 2020, 01:13:36 AM PST, István Tóth 
 wrote:  
 
 Yes, the HBase API signatures change between versions, so we need to
compile each compat module against a specific HBase.

Whether I can define an internal compatibility API that is switchable at
run (startup) time without a performance hit remains to be seen.

István

On Tue, Jan 14, 2020 at 3:21 AM Josh Elser  wrote:

> Agree that trying to wrangle branches is just too frustrating and
> error-prone.
>
> It would also be great if we could have a single Phoenix jar that works
> across HBase versions, but would not die on that hill :)
>
> On 12/20/19 5:04 AM, la...@apache.org wrote:
> >  I said _provided_ they can be isolated easily :) (I meant it in the
> sense of assuming it's easy).
> > As I said though, Tephra has a similar problem and they did a really
> good job isolating HBase versions. We can learn from them. Sometimes they
> isolate the change only, and sometimes the class needs to be copied, but
> even then it's the one class that is copied, not another branch that needs
> to be kept in sync.
> >
> > This may also drive the desperately necessary refactoring of Phoenix to
> make these things easier to isolate, or to reduce the copying to a minimum.
> And we'd need to think through testing carefully.
> >
> > The branch per Phoenix and HBase version is too complex, IMHO. And the
> complex branch to HBase version mapping that Istvan outlines below confirms
> that.
> >
> > We should all take a brief look at the Tephra solution and see whether
> we can apply that. (And since Tephra is part of the fold now, perhaps
> someone can help there...?)
> > Cheers.
> > -- Lars
> >
> >      On Thursday, December 19, 2019, 8:34:15 PM GMT+1, Geoffrey Jacoby <
> gjac...@gmail.com> wrote:
> >
> >  Lars,
> >
> > I'm curious why you say the differences are easily isolated -- many of
> the
> > core classes of Phoenix either directly inherit HBase classes or
> implement
> > HBase interfaces, and those can vary between minor versions. (See my
> above
> > example of a new coprocessor hook on BaseRegionObserver.)
> >
> > Geoffrey
> >
> > On Thu, Dec 19, 2019 at 10:54 AM la...@apache.org 
> wrote:
> >
> >>    Yep. The differences are pretty minimal - provided they can be
> isolated
> >> easily.
> >> Tephra might be a pretty good model. It supports various versions of
> HBase
> >> in a single branch and has similar issues as Phoenix (coprocessors,
> etc).
> >> -- Lars
> >>      On Thursday, December 19, 2019, 7:07:51 PM GMT+1, Josh Elser <
> >> els...@apache.org> wrote:
> >>
> >>    To clarify, you think that compat modules are better than that
> >> separate-branches model in 4.x?
> >>
> >> On 12/18/19 11:29 AM, la...@apache.org wrote:
> >>> This is really hard to follow.
> >>>
> >>> I think we should do the same with HBase dependencies in Phoenix that
> >> HBase does with Hadoop dependencies.
> >>>
> >>> That is:  We could have a maven module with the specific HBase version
> >> dependent code.
> >>> Btw. Tephra does the same... A module for HBase version specific code.
> >>> -- Lars
> >>>
> >>>        On Tuesday, December 17, 2019, 10:00:31 AM GMT+1, Istvan Toth <
> >> st...@apache.org> wrote:
> >>>
> >>>    What do you think about tying the minor releases to Hbase minor
> releases
> >>> (not necessarily one-to-one)
> >>>
> >>> for example (provided 5.1 is 2020H1)
> >>>
> >>> 5.0.0 -> HB 2.0
> >>> 5.1.0 -> HB 2.2.2 (and whatever 2.1 is API compatible with it)
> >>> 5.1.x -> HB 2.2.x (treat as maintenance branch, no major new features)
> >>> 5.2.0 -> HB 2.3.0 (if released by that time)
> >>> 5.2.x -> HB 2.3.x (treat as maintenance branch, no major new features)
> >>> 5.3.0 -> HB 2.3.x (if there is no new major/minor Hbase release)
> >>> master -> latest released HBase version
> >>>
> >>> Alternatively, we could stick with the same HBase version for patch
> >>> releases that we used for the first minor release.
> >>>
> >>> This would limit the number of branches that we have to maintain in
> >>> parallel, while providing maintenance branches for older releases, and
> >>> timely-ish Phoenix releases.
> >>>
> >>> The drawback is that users of old HBase versions won't get the latest
> >>> features, on the other hand they can expect more polish.
> >>>
> >>> Istvan
> >>>
> >>> On Thu, Dec 12, 2019 at 8:05 PM Geoffrey Jacoby 
> >> wrote:
> >>>
>  Since HBase 2.0 is EOM'ed, I'm +1 for not worrying about 2.0.x
>  compatibility with the 5.x branch going forward.
> 
>  Given how coupled Phoenix is to the 

[jira] [Updated] (PHOENIX-5677) Replace System.currentTimeMillis with EnvironmentEdgeManager in non-test code

2020-01-14 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5677:
-
Attachment: PHOENIX-5677-4.x-HBase-1.3.patch

> Replace System.currentTimeMillis with EnvironmentEdgeManager in non-test code
> -
>
> Key: PHOENIX-5677
> URL: https://issues.apache.org/jira/browse/PHOENIX-5677
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-5677-4.x-HBase-1.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Phoenix is inconsistent in using either the system clock or 
> EnvironmentEdgeManager to get the current time. The EnvironmentEdgeManager is 
> very useful in tests to control time deterministically without 
> needing to sleep. Direct references to System.currentTimeMillis in non-test 
> code should be switched over. 
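
The pattern behind this is a process-wide injectable clock. Here is a minimal Python analogue of it; Phoenix's real classes are Java, and these names only mirror them for illustration:

```python
import time

class EnvironmentEdge:
    """Default edge: delegates to the real system clock."""
    def current_time_millis(self):
        return int(time.time() * 1000)

class ManualEnvironmentEdge(EnvironmentEdge):
    """Test edge: time only moves when the test says so."""
    def __init__(self, start=0):
        self.value = start

    def increment(self, ms):
        self.value += ms  # advance "time" deterministically, no sleeping

    def current_time_millis(self):
        return self.value

class EnvironmentEdgeManager:
    """Single indirection point all production code reads time through."""
    _edge = EnvironmentEdge()

    @classmethod
    def inject_edge(cls, edge):
        cls._edge = edge

    @classmethod
    def current_time_millis(cls):
        return cls._edge.current_time_millis()
```

Production code calls `EnvironmentEdgeManager.current_time_millis()` everywhere; a test injects a `ManualEnvironmentEdge` and increments it to simulate the passage of time.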



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Committers please look at the Phoenix tests and fix your failures

2020-01-14 Thread la...@apache.org
 I spent a lot of time making QA better. It can be better, but it's stable 
enough. There are now very few excuses. "Test failure seems unrelated" is not 
an excuse anymore. (4.x-HBase-1.3 has some issue where HBase can't seem to start 
a cluster reliably... but all others are pretty stable.)
After chatting with Andrew Purtell, one thing I was going to offer is to 
simply revert any change that breaks a test. Period. I'd volunteer some of my 
time (hey, isn't that what a Chief Architect in a Fortune 100 company should 
do?!)
With their changes reverted, people will presumably start to care. :) If I hear 
no objections I'll start doing that in a while.
Cheers.
-- Lars
On Monday, January 13, 2020, 06:23:01 PM PST, Josh Elser 
 wrote:  
 
 How do we keep getting into this mess: unreliable QA, people ignoring 
QA, or something else?

On 1/12/20 9:24 PM, la...@apache.org wrote:
> ... Not much else to say here...
> The tests have been failing again for a while... I will NOT fix them again 
> this time! Sorry folks.
> 
> -- Lars
> 
> 
  

[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-01-14 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: PHOENIX-5601.master.001.patch

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5601.4.x-HBase-1.3.001.patch, 
> PHOENIX-5601.master.001.patch
>
>
>  * Add a new coprocessor - a ViewTTLAware coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether the row's TTL 
> has expired, and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.
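
The masking rule amounts to comparing the empty column's cell timestamp against the TTL window. A hedged sketch of that rule follows; it is illustrative Python, not Phoenix's actual scanner (which operates on HBase Cells), and the names are assumptions:

```python
def is_expired(empty_column_ts_ms, ttl_ms, now_ms):
    """A row has expired once its empty-column timestamp is older than the TTL."""
    return now_ms - empty_column_ts_ms >= ttl_ms

def mask_expired_rows(rows, ttl_ms, now_ms, delete_expired=False):
    """rows: list of (row_key, empty_column_ts_ms) pairs.

    Returns (visible, deleted): expired rows are masked from results,
    and collected for deletion only when the delete flag (standing in
    for DELETE_VIEW_TTL_EXPIRED) is set.
    """
    visible, deleted = [], []
    for key, ts in rows:
        if is_expired(ts, ttl_ms, now_ms):
            if delete_expired:
                deleted.append(key)   # expired row queued for delete
        else:
            visible.append((key, ts))  # still within TTL, return to client
    return visible, deleted
```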



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: PHOENIX-5678.master.000.patch

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of those classes have a lot of logic in their overridden 
> methods. We should convert them to named inner classes and instantiate those instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: (was: PHOENIX-5678.master.000.patch)

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of those classes have a lot of logic in their overridden 
> methods. We should convert them to named inner classes and instantiate those instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: (was: PHOENIX-5678.master.000.patch)

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of those classes have a lot of logic in their overridden 
> methods. We should convert them to named inner classes and instantiate those instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5680) remove psql.py from phoenix-queryserver

2020-01-14 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5680:
-
Description: 
The phoenix-queryserver repo duplicates the bin/psql.py file from the core 
phoenix repo, for no apparent reason.

Remove it.

  was:
The phoenix-queryserver repo duplicates the bin/pqs.py file from the core 
phoenix repo, for no apparent reason.

Remove it.


> remove psql.py from phoenix-queryserver
> ---
>
> Key: PHOENIX-5680
> URL: https://issues.apache.org/jira/browse/PHOENIX-5680
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
>
> The phoenix-queryserver repo duplicates the bin/psql.py file from the core 
> phoenix repo, for no apparent reason.
> Remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5680) remove psql.py from phoenix-queryserver

2020-01-14 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-5680:


 Summary: remove psql.py from phoenix-queryserver
 Key: PHOENIX-5680
 URL: https://issues.apache.org/jira/browse/PHOENIX-5680
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.1.0
Reporter: Istvan Toth
Assignee: Istvan Toth


The phoenix-queryserver repo duplicates the bin/pqs.py file from the core 
phoenix repo, for no apparent reason.

Remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5454) Phoenix scripts start foreground java processes as child processes

2020-01-14 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5454:
-
Fix Version/s: 4.16.0
Affects Version/s: 4.15.0

Backported to the 4.x branches as well, in case we stick with python.

> Phoenix scripts start foreground java processes as child processes
> --
>
> Key: PHOENIX-5454
> URL: https://issues.apache.org/jira/browse/PHOENIX-5454
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5454.master.v1.patch, 
> PHOENIX-5454.master.v2.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the phoenix scripts in python start the java process via 
> subprocess.call() or subprocess.Popen() even when the java process has to run 
> in the foreground and no cleanup is required.
> I propose that in these cases, we start java via os.exec*(). This has the 
> following advantages:
>  * There is no python process idling waiting for the java process to end, 
> reducing process count and memory consumption
>  * Signal handling is simplified (signals sent to the starting script are 
> received by the java process started)
>  * Return code handling is simplified (no need to check for and return error 
> codes from java in the startup script)
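
The os.exec*() proposal in miniature: a sketch, not the actual script change. The `_exec` parameter is a hypothetical injection point added here purely so the behavior can be demonstrated without actually replacing the process:

```python
import os

def exec_foreground(cmd, args, _exec=os.execvp):
    """Replace the current (Python) process image with `cmd`.

    On success os.execvp never returns: no Python parent is left idling,
    signals go straight to the java process, and its exit code becomes
    the script's exit code -- the three advantages listed above.
    """
    _exec(cmd, [cmd] + list(args))
```

Contrast with `subprocess.call(["java", ...])`, which keeps the Python interpreter alive for the lifetime of the java process just to relay signals and the return code.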



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Moving Phoenix master to Hbase 2.2

2020-01-14 Thread István Tóth
Yes, the HBase API signatures change between versions, so we need to
compile each compat module against a specific HBase.

Whether I can define an internal compatibility API that is switchable at
run (startup) time without a performance hit remains to be seen.

István

On Tue, Jan 14, 2020 at 3:21 AM Josh Elser  wrote:

> Agree that trying to wrangle branches is just too frustrating and
> error-prone.
>
> It would also be great if we could have a single Phoenix jar that works
> across HBase versions, but would not die on that hill :)
>
> On 12/20/19 5:04 AM, la...@apache.org wrote:
> >   I said _provided_ they can be isolated easily :) (I meant it in the
> sense of assuming it's easy).
> > As I said though, Tephra has a similar problem and they did a really
> good job isolating HBase versions. We can learn from them. Sometimes they
> isolate the change only, and sometimes the class needs to be copied, but
> even then it's the one class that is copied, not another branch that needs
> to be kept in sync.
> >
> > This may also drive the desperately necessary refactoring of Phoenix to
> make these things easier to isolate, or to reduce the copying to a minimum.
> And we'd need to think through testing carefully.
> >
> > The branch per Phoenix and HBase version is too complex, IMHO. And the
> complex branch to HBase version mapping that Istvan outlines below confirms
> that.
> >
> > We should all take a brief look at the Tephra solution and see whether
> we can apply that. (And since Tephra is part of the fold now, perhaps
> someone can help there...?)
> > Cheers.
> > -- Lars
> >
> >  On Thursday, December 19, 2019, 8:34:15 PM GMT+1, Geoffrey Jacoby <
> gjac...@gmail.com> wrote:
> >
> >   Lars,
> >
> > I'm curious why you say the differences are easily isolated -- many of
> the
> > core classes of Phoenix either directly inherit HBase classes or
> implement
> > HBase interfaces, and those can vary between minor versions. (See my
> above
> > example of a new coprocessor hook on BaseRegionObserver.)
> >
> > Geoffrey
> >
> > On Thu, Dec 19, 2019 at 10:54 AM la...@apache.org 
> wrote:
> >
> >>Yep. The differences are pretty minimal - provided they can be
> isolated
> >> easily.
> >> Tephra might be a pretty good model. It supports various versions of
> HBase
> >> in a single branch and has similar issues as Phoenix (coprocessors,
> etc).
> >> -- Lars
> >>  On Thursday, December 19, 2019, 7:07:51 PM GMT+1, Josh Elser <
> >> els...@apache.org> wrote:
> >>
> >>To clarify, you think that compat modules are better than that
> >> separate-branches model in 4.x?
> >>
> >> On 12/18/19 11:29 AM, la...@apache.org wrote:
> >>> This is really hard to follow.
> >>>
> >>> I think we should do the same with HBase dependencies in Phoenix that
> >> HBase does with Hadoop dependencies.
> >>>
> >>> That is:  We could have a maven module with the specific HBase version
> >> dependent code.
> >>> Btw. Tephra does the same... A module for HBase version specific code.
> >>> -- Lars
> >>>
> >>>On Tuesday, December 17, 2019, 10:00:31 AM GMT+1, Istvan Toth <
> >> st...@apache.org> wrote:
> >>>
> >>>What do you think about tying the minor releases to Hbase minor
> releases
> >>> (not necessarily one-to-one)
> >>>
> >>> for example (provided 5.1 is 2020H1)
> >>>
> >>> 5.0.0 -> HB 2.0
> >>> 5.1.0 -> HB 2.2.2 (and whatever 2.1 is API compatible with it)
> >>> 5.1.x -> HB 2.2.x (treat as maintenance branch, no major new features)
> >>> 5.2.0 -> HB 2.3.0 (if released by that time)
> >>> 5.2.x -> HB 2.3.x (treat as maintenance branch, no major new features)
> >>> 5.3.0 -> HB 2.3.x (if there is no new major/minor Hbase release)
> >>> master -> latest released HBase version
> >>>
> >>> Alternatively, we could stick with the same HBase version for patch
> >>> releases that we used for the first minor release.
> >>>
> >>> This would limit the number of branches that we have to maintain in
> >>> parallel, while providing maintenance branches for older releases, and
> >>> timely-ish Phoenix releases.
> >>>
> >>> The drawback is that users of old HBase versions won't get the latest
> >>> features, on the other hand they can expect more polish.
> >>>
> >>> Istvan
> >>>
> >>> On Thu, Dec 12, 2019 at 8:05 PM Geoffrey Jacoby 
> >> wrote:
> >>>
>  Since HBase 2.0 is EOM'ed, I'm +1 for not worrying about 2.0.x
>  compatibility with the 5.x branch going forward.
> 
>  Given how coupled Phoenix is to the implementation details of HBase
> >> though,
>  I'm not sure trying to abstract those away to keep one Phoenix branch
> >> per
>  HBase major version is practical, however. At the least, it would be
> >> really
>  complex.
> 
>  For example, in the new year I plan to return to working on the change
> >> data
>  capture and Phoenix-level replication features, both of which depend
> on
>  WALKey interface changes and a new RegionObserver coprocessor hook
>  introduced in HBASE-22622 and 

[jira] [Updated] (PHOENIX-5678) Cleanup anonymous inner classes used for BaseMutationPlan

2020-01-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5678:
--
Attachment: PHOENIX-5678.master.000.patch

> Cleanup anonymous inner classes used for BaseMutationPlan
> -
>
> Key: PHOENIX-5678
> URL: https://issues.apache.org/jira/browse/PHOENIX-5678
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.15.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5678.master.000.patch, 
> PHOENIX-5678.master.000.patch
>
>
> BaseMutationPlan has been extended as an anonymous inner class in multiple 
> places, and some of those classes have a lot of logic in their overridden 
> methods. We should convert them to named inner classes and instantiate those instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)