Re: [ANNOUNCE] Lokesh Khurana as Phoenix Committer

2024-01-17 Thread Geoffrey Jacoby
Congratulations, Lokesh!

On Wed, Jan 17, 2024 at 12:25 PM Viraj Jasani  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Lokesh
> Khurana has accepted the PMC's invitation to become a committer on Apache
> Phoenix.
>
> We appreciate all of the great contributions Lokesh has made to the
> community thus far and we look forward to his continued involvement.
>
> Congratulations and Welcome, Lokesh!
>


Re: [DISCUSS] 5.2.0 priority : PHOENIX-7106 Data Integrity Issues

2024-01-03 Thread Geoffrey Jacoby
I agree that data integrity issues are a higher priority than feature
development, so I also support the decision. The fact that several of the
major remaining 5.2 features are currently being developed in long-running
feature branches also helps, as work can continue there at the cost of a
rebase later.

How does this affect 5.1.4, which is also listed as a Fix Version for
PHOENIX-7106? From the bug description it also sounds like 5.1.3 and the
forthcoming .4 are affected, since we have server-side paging in 5.1. (Feel
free to move that to a separate thread if you feel it should be a separate
discussion.) Should this be a blocker for releasing 5.1.4?

Geoffrey


On Wed, Jan 3, 2024 at 5:06 PM Kadir Ozdemir 
wrote:

> Being a database, Phoenix has to make sure that the data stays on disk
> intact and its queries return correct data. In this case, Phoenix fails to
> return correct data for some queries if their scans experience region
> movement. Now that we know about these data integrity issues and how to
> reproduce them, fixing them should be our first priority. So, I fully
> support this proposal.
>
> On Wed, Jan 3, 2024 at 10:58 PM Viraj Jasani  wrote:
>
> > Hello,
> >
> > I would like to bring PHOENIX-7106
> >  to everyone's
> > attention here and give a brief overview of the data integrity issues
> > that we have in various coprocessors. The majority of the issues are
> > related to the fact that we do not return a valid rowkey for certain
> > queries. If any region moves in the middle of the scan, the HBase client
> > relies on the last returned rowkey and accordingly changes the scan
> > boundaries while the scanner is getting reset to continue the scan
> > operation. If the region does not move, the scan is not expected to
> > return invalid data; however, if the region moves in the middle of an
> > ongoing scan operation, the scan would return invalid/incorrect data,
> > causing data integrity issues.
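
To make the failure mode concrete, here is a minimal sketch of that
resumption step using public HBase 2.x client APIs; the class and method
names are illustrative, not Phoenix's actual scanner code. If the last
returned rowkey is invalid, the resumed scan silently skips or repeats rows.

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// Illustrative sketch only; not Phoenix's actual scanner code.
public class ScanResumeSketch {

    // After a region move, the HBase client re-opens the scanner just past
    // the last rowkey the server returned. If a coprocessor returned an
    // invalid rowkey, the resumed scan silently skips or repeats rows.
    static ResultScanner resume(Table table, Scan original, byte[] lastRowkey)
            throws IOException {
        Scan resumed = new Scan(original);       // same filters, columns, ranges
        resumed.withStartRow(lastRowkey, false); // exclusive: start after it
        return table.getScanner(resumed);
    }
}
{code}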
> >
> > Given the critical nature of these issues, I would like to propose that
> > we treat this as a high priority for the upcoming 5.2.0 release, and not
> > include any other feature or big change in the master branch until we
> > merge this. The PR is still not ready, as additional changes are still
> > in my local branch and require a rebase against the current master.
> >
> > I will get back to this discussion thread as soon as the PR and the doc
> > are updated with the latest findings. The changes touch many of our
> > coproc scanner implementations and hence will require significant review
> > as well.
> > It would be great if we can hold off on merging any feature or big
> > change to the master branch until this gets in, so as not to complicate
> > merging/rebasing. Once this is merged to the master branch, I would like
> > to cut the 5.2 branch from master and we can move forward with the 5.2.0
> > release.
> >
> > Please let me know if this looks good or if you have any other high
> > priority work for 5.2.0.
> >
>


Re: [DISCUSS] Client-Server Code Separation (PHOENIX-6053) has been Merged and Next Steps

2023-12-14 Thread Geoffrey Jacoby
My opinion depends on what we think the timeframe for releasing 5.2 would
be.

If it's soon, then we can focus on that and let 5.1 be more for bug fixes
and security updates after the 5.1.4 release.

If it's not soon (and I know there are still large ongoing feature branches
for JSON support, CDC, and the new Metadata API) then the ongoing
maintenance burden of having two very different release branches probably
makes the backport of the client/server change worthwhile.

Geoffrey

On Wed, Dec 13, 2023 at 9:03 PM Istvan Toth 
wrote:

> Thanks.
>
> Sure. In case we decide to backport, I'd still want to wait a few weeks to
> shake out any problems before backporting.
>
> Istvan
>
> On Thu, Dec 14, 2023 at 3:09 AM Viraj Jasani  wrote:
>
> > Sure, 5.2.0 sounds good.
> >
> > Regarding the backport to the 5.1 branch, I am in a bit of a dilemma.
> > Let's wait some time for more opinions?
> >
> >
> > Thanks,
> > Viraj
> >
> >
> > On Wed, Dec 13, 2023 at 2:31 AM Istvan Toth 
> > wrote:
> >
> > > Thanks for responding, Viraj
> > >
> > > On compatibility:
> > >
> > > I am confident that this patch does not affect compatibility at all.
> > > The wire protocol remains the same, we are using the same protobuf
> > > definitions, and we use them identically.
> > > The classes which are referenced from the HBase configuration or HBase
> > > metadata (coprocessors, SplitPolicy, etc.)
> > > have retained their names and their behaviour.
> > >
> > > The only way this change can cause problems is:
> > >
> > > - We have made a mistake during refactoring, and changed behaviour.
> > > This would be a bug that can be fixed.
> > > - An application uses a refactored internal class directly. This is
> > > unlikely, and even if it happens, this can happen with any patch.
> > >
> > > About 99 percent of the changes are one of these two things (see the
> > > sketch below):
> > > - Move the string constants out of the coprocessors into helper
> > > classes in the client module
> > > - Split the static utility classes that contain methods used both from
> > > the server and client side into two classes.
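
A minimal sketch of the second kind of change, with purely hypothetical
class and method names (the real refactored classes in PHOENIX-6053
differ): a mixed static utility class is split so that the client module
no longer pulls in server-side-only code.

{code:java}
// Hypothetical names throughout; shown as two files in two modules.

// --- ScanUtilClient.java (phoenix-core-client) ---
public final class ScanUtilClient {
    private ScanUtilClient() { }

    // Purely client-side helper; needs no server-side (coprocessor) classes.
    public static byte[] concatRowKey(byte[] prefix, byte[] suffix) {
        byte[] out = new byte[prefix.length + suffix.length];
        System.arraycopy(prefix, 0, out, 0, prefix.length);
        System.arraycopy(suffix, 0, out, prefix.length, suffix.length);
        return out;
    }
}

// --- ScanUtilServer.java (phoenix-core-server) ---
public final class ScanUtilServer {
    private ScanUtilServer() { }

    // Server-side helper; only coprocessor code calls it, so the client
    // module no longer depends on it.
    public static boolean hasServerAttribute(
            org.apache.hadoop.hbase.client.Scan scan, String attributeName) {
        return scan.getAttribute(attributeName) != null;
    }
}
{code}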
> > >
> > > The remaining one percent was somewhat more complex, where the client
> > > and server side code was more intertwined, and
> > > required actual thought on how to solve it.
> > >
> > > If you ignore the class (and sometimes method) name changes, then both
> > > the client and server should execute exactly the same code.
> > > Aron has made some minor optimizations in a handful of cases; if those
> > > turn out to be incorrect, then we can fix or revert them.
> > >
> > > I would compare this change to the one where we added the
> > > compatibility modules.
> > > We changed the Maven project structure heavily, touched a lot of
> > > files, and added interfaces and abstract classes to handle this, but
> > > there was zero change in the behaviour of the code.
> > >
> > > Regarding the 5.2/6.0 version question:
> > >
> > > This is more of an aesthetic question. The last major version change
> was
> > > for HBase 2.0.
> > > I am hopeful that we will be able to find a way to support HBase 3.x
> > > without branching the code base.
> > > I don't really see the need for a new major release. We dropped the
> > > ball when we did not release more minor versions during the last 2+
> > > years. We should be talking about releasing 5.4 or 5.5 by now.
> > > The new release doesn't break compatibility, so I see no technical
> > > reason to go to 6.0.
> > > Of course, your point about having a lot of changes is valid; if the
> > > community agrees, then going with 6.0 is also fine.
> > >
> > > Regarding the artifacts:
> > >
> > > We have found a last-minute solution to minimize the visible changes
> > > both for the consumers of the Maven artifacts and the shaded JARs.
> > > By retaining phoenix-core, and making it depend on both of the new
> > > modules, downstream applications should not need to make any changes
> > > in their dependencies.
> > > (Of course, it is recommended for JDBC users to depend on
> > > phoenix-core-client instead.)
> > > The client and server JARs also contain exactly the same code, as they
> > > depend on phoenix-core and phoenix-core-server, and both include both
> > > the client and server side code, exactly as they did before, with
> > > exactly the same relocations.
> > > (phoenix-core-server depends on phoenix-core-client, so depending on
> > > it is effectively the same as depending on phoenix-core.)
> > > My original proposal included making changes in phoenix-server, but the
> > > committed change does not include that.
> > > The shaded JARs with different content will be new jars, with new names
> > > (see my first email)
> > >
> > >
> > > Regarding 5.1:
> > >
> > > I hope that the above can sway your opinion.
> > >
> > > If you have any more questions and concerns then I'm more than happy to
> > > discuss them.
> > >
> > > Best Regards
> > > Istvan
> > >
> > > On Wed, Dec 13, 2023 at 6:19 AM 

Re: [DISCUSS] Remove Omid coprocessors from phoenix-server

2023-12-07 Thread Geoffrey Jacoby
Cong,

Tephra support has already been removed from Phoenix 5.2. See
https://issues.apache.org/jira/browse/PHOENIX-6627.

It had a number of dependencies that were out of date, no longer being
maintained, and carrying known CVEs, and it didn't seem to have many (if
any) users relying on it.

Geoffrey

On Wed, Dec 6, 2023 at 6:51 PM Cong Luo  wrote:

> And then,
> is it possible to remove the Tephra components (e.g. Processor, Context,
> Provider...) from the Phoenix codebase starting with 5.2? Because Omid
> became production-ready earlier than Tephra, and Tephra maintenance is
> diminishing, we may no longer need Tephra.
>
> On 2023/12/06 09:13:30 Istvan Toth wrote:
> > Hi!
> >
> > We are currently including the Omid coprocessors in Phoenix-server.
> >
> > I think that this is problematic.
> >
> > In order to use Omid, users have to install and configure the TSO server
> on
> > the cluster.
> >
> > The Omid install instructions (correctly) include the steps to install
> the
> > Omid coprocessors to HBase.
> >
> > So if the user follows all instructions, then they are going to end up
> with
> > two instances of the coprocessors in the HBase classpath.
> >
> > I propose removing the coprocessors from phoenix-server on master (not on
> > 5.1).
> >
> > What do you think ?
> >
> > Istvan
> >
>


Re: [ANNOUNCE] Rushabh Shah as Phoenix Committer

2023-08-15 Thread Geoffrey Jacoby
Congrats, Rushabh!

On Tue, Aug 15, 2023 at 3:28 PM Kadir Ozdemir
 wrote:

> Congratulations Rushabh!
>
> On Tue, Aug 15, 2023 at 10:46 AM Viraj Jasani  wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Rushabh
> > Shah has accepted the PMC's invitation to become a committer on Apache
> > Phoenix.
> >
> > We appreciate all of the great contributions Rushabh has made to the
> > community thus far and we look forward to his continued involvement.
> >
> > Congratulations and Welcome, Rushabh!
> >
>


[jira] [Assigned] (PHOENIX-6981) Bump Jackson version to 2.14.1

2023-06-16 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6981:


Assignee: Krzysztof Sobolewski

> Bump Jackson version to 2.14.1
> --
>
> Key: PHOENIX-6981
> URL: https://issues.apache.org/jira/browse/PHOENIX-6981
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Krzysztof Sobolewski
>Assignee: Krzysztof Sobolewski
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> A never-ending quest to stamp out CVEs and such.





[jira] [Resolved] (PHOENIX-6981) Bump Jackson version to 2.14.1

2023-06-16 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6981.
--
Fix Version/s: 5.2.0
   5.1.4
   Resolution: Fixed

Merged to master and cherry-picked to 5.1. Thanks for the patch, [~kudivuhadi] 
and welcome!

> Bump Jackson version to 2.14.1
> --
>
> Key: PHOENIX-6981
> URL: https://issues.apache.org/jira/browse/PHOENIX-6981
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Krzysztof Sobolewski
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> A never-ending quest to stamp out CVEs and such.





[jira] [Updated] (PHOENIX-6941) Remove Phoenix Flume connector

2023-04-25 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6941:
-
Description: 
The Phoenix flume connector uses an ancient version of Flume.
We do not have volunteers to maintain it.

Remove it.
If/when someone volunteers to maintain it, we can add it back later.

  was:
The Phoenix flume connector uses an ancient version of Phoenix.
We do not have volunteers to maintain it.

Remove it.
If/when someone volunteers to maintain it, we can add it back later.


> Remove Phoenix Flume connector
> --
>
> Key: PHOENIX-6941
> URL: https://issues.apache.org/jira/browse/PHOENIX-6941
> Project: Phoenix
>  Issue Type: Task
>Reporter: Istvan Toth
>Priority: Major
>
> The Phoenix flume connector uses an ancient version of Flume.
> We do not have volunteers to maintain it.
> Remove it.
> If/when someone volunteers to maintain it, we can add it back later.





Re: [DISCUSS] Working towards a connectors release

2023-04-24 Thread Geoffrey Jacoby
I've never encountered someone using phoenix-flume. Did a little
digging and found that:

1. Phoenix-flume hasn't been touched since phoenix-connectors was split
off from the main Phoenix project in 2019, aside from the pom refactoring
work to support Phoenix 5.
2. It depends on Flume 1.4, which was released in *2013*. That's older than
Phoenix's Apache incubation in 2014.

Since the Flume project is still active, I also have no objection to
keeping it if someone steps up wanting to maintain it (this would at a
minimum mean upgrading to a modern version of Flume -- 1.11.0 seems to be
the current stable release, from October 2022.)

But in the absence of a maintainer, I'm +1 to removing it.

Geoffrey

On Mon, Apr 24, 2023 at 2:07 AM Istvan Toth  wrote:

> Hi!
>
> What should we do with phoenix-flume ?
> There has been so little (zero) activity on it that I have completely
> forgotten even about its existence.
> However, Flume itself is still maintained, and it doesn't really seem to
> cause any problems either.
> On the other hand, I have no idea whether it works on a production system.
> Should we keep it, or should we drop it ?
> I am leaning towards dropping it, as without an active maintainer (or at
> least a known user)
> we don't know if it even works properly.
> Just as with Kafka, we could add it back if someone volunteers to maintain
> it.
>
> Istvan
>
> On Wed, Apr 19, 2023 at 9:57 PM Geoffrey Jacoby 
> wrote:
>
> > +1.
> >
> > At $dayjob we have a legacy feature that uses phoenix-pig, but I believe
> > that usage is scheduled for deprecation soon and we can maintain it in
> our
> > internal fork until then. Pig hasn't had a release in 6 years and last I
> > heard doesn't support Hadoop 3; no reason to keep supporting it.
> >
> > Geoffrey
> >
> >
> >
> > On Tue, Apr 18, 2023 at 1:37 AM Istvan Toth  wrote:
> >
> > > Hi!
> > >
> > > We've never had a connectors release, because of multiple unsolved
> > > problems.
> > > Some, like Java 11/17 support, are relatively straightforward and
> > > don't really need discussion, but some are more impactful.
> > >
> > > I propose the following plan, which should give us a chance to have a
> > > release in the foreseeable future:
> > > Disclosure: at $dayjob, we only support the Spark and HBase connectors,
> > and
> > > those are the ones we can dedicate resources to.
> > >
> > > *- Drop the connectors for Phoenix 4.x*
> > > 4.x is EOL, and it complicates the project structure, build time, etc.
> > > We've never had a release for 4.x either.
> > >
> > > *- Drop the Kafka connector*
> > > It has CVEs, and only works with an ancient Kafka version.
> > > I have also seen zero developer or user interest in it.
> > > If someone volunteers to update and maintain it, we can always add it
> > back
> > > later
> > >
> > >
> > > *- Drop the Pig connector*
> > > This doesn't have critical problems, but I have seen zero interest in
> > > it.
> > > The shaded artifact doesn't use maven-shade-plugin, and I suspect that
> it
> > > would have classpath conflict issues.
> > > Fixing up the shading to be on par with the rest of the connectors
> would
> > be
> > > a non-trivial amount of work.
> > > If someone volunteers to update and maintain it, we can always add it
> > back
> > > later.
> > >
> > > *- Re-shade the Hive 3 connector for hbase-shaded*
> > > HBase in Hive 3 is very broken; we already need to replace the shipped
> > > HBase jars anyway.
> > > To avoid conflict with the included HBase jars, we want to avoid
> > > duplicating them.
> > > The solution is to omit the HBase and Hadoop JARs from the shaded
> > > connector, and change the relocations
> > > to handle the binary incompatibilities between the shaded and
> > > non-shaded HBase API.
> > > We already do this for Spark, and this is also how the Hive 4 connector
> > > will have to work.
> > > (This already works well at $dayjob )
> > >
> > > This would leave us with only three connectors, but those would at
> least
> > be
> > > released, and easier to support:
> > > Spark 2
> > > Spark 3
> > > Hive 3
> > >
> > > Please share your thoughts!
> > >
> > > Istvan
> > >
> >
>


Re: [DISCUSS] Working towards a connectors release

2023-04-19 Thread Geoffrey Jacoby
+1.

At $dayjob we have a legacy feature that uses phoenix-pig, but I believe
that usage is scheduled for deprecation soon and we can maintain it in our
internal fork until then. Pig hasn't had a release in 6 years and last I
heard doesn't support Hadoop 3; no reason to keep supporting it.

Geoffrey



On Tue, Apr 18, 2023 at 1:37 AM Istvan Toth  wrote:

> Hi!
>
> We've never had a connectors release, because of multiple unsolved
> problems.
> Some, like Java 11/17 support, are relatively straightforward and don't
> really need discussion, but some are more impactful.
>
> I propose the following plan, which should give us a chance to have a
> release in the foreseeable future:
> Disclosure: at $dayjob, we only support the Spark and HBase connectors, and
> those are the ones we can dedicate resources to.
>
> *- Drop the connectors for Phoenix 4.x*
> 4.x is EOL, and it complicates the project structure, build time, etc.
> We've never had a release for 4.x either.
>
> *- Drop the Kafka connector*
> It has CVEs, and only works with an ancient Kafka version.
> I have also seen zero developer or user interest in it.
> If someone volunteers to update and maintain it, we can always add it back
> later
>
>
> *- Drop the Pig connector*
> This doesn't have critical problems, but I have seen zero interest in it.
> The shaded artifact doesn't use maven-shade-plugin, and I suspect that it
> would have classpath conflict issues.
> Fixing up the shading to be on par with the rest of the connectors would be
> a non-trivial amount of work.
> If someone volunteers to update and maintain it, we can always add it back
> later.
>
> *- Re-shade the Hive 3 connector for hbase-shaded*
> HBase in Hive 3 is very broken; we already need to replace the shipped
> HBase jars anyway.
> To avoid conflict with the included HBase jars, we want to avoid
> duplicating them.
> The solution is to omit the HBase and Hadoop JARs from the shaded
> connector, and change the relocations
> to handle the binary incompatibilities between the shaded and non-shaded
> HBase API.
> We already do this for Spark, and this is also how the Hive 4 connector
> will have to work.
> (This already works well at $dayjob )
>
> This would leave us with only three connectors, but those would at least be
> released, and easier to support:
> Spark 2
> Spark 3
> Hive 3
>
> Please share your thoughts!
>
> Istvan
>


[jira] [Resolved] (PHOENIX-6918) ScanningResultIterator should not retry when the query times out

2023-04-18 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6918.
--
Fix Version/s: 5.2.0
   5.1.4
   Resolution: Fixed

Merged to master and cherry-picked back to 5.1. Thanks for the patch, 
[~lokiore]!

> ScanningResultIterator should not retry when the query times out
> 
>
> Key: PHOENIX-6918
> URL: https://issues.apache.org/jira/browse/PHOENIX-6918
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> ScanningResultIterator drops dummy results and retries Result#next() in a 
> loop as part of the Phoenix server paging feature.
> ScanningResultIterator currently does not check whether the query has 
> already timed out. This means that ScanningResultIterator lets the server 
> keep working on the scan even though the Phoenix query has already timed 
> out. ScanningResultIterator should check whether the query behind the scan 
> has timed out and, if so, return an operation timeout exception, as 
> BaseResultIterators does.
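
In sketch form (the field and method names below are hypothetical, not the
actual ScanningResultIterator members), the fix amounts to checking the
query's deadline on every dummy-result retry instead of looping
unconditionally:

{code:java}
import java.sql.SQLTimeoutException;

// Illustrative sketch only; field and method names are hypothetical.
final class TimeoutCheckSketch {
    private final long startTimeMillis = System.currentTimeMillis();
    private final long queryTimeoutMillis;

    TimeoutCheckSketch(long queryTimeoutMillis) {
        this.queryTimeoutMillis = queryTimeoutMillis;
    }

    // Called each time the scanner returns a dummy result and is about to
    // retry: fail fast once the query-level timeout has elapsed, rather
    // than letting the server keep paging through the scan.
    void checkQueryTimeout() throws SQLTimeoutException {
        long elapsed = System.currentTimeMillis() - startTimeMillis;
        if (elapsed > queryTimeoutMillis) {
            throw new SQLTimeoutException("Query timed out after " + elapsed
                + " ms (limit " + queryTimeoutMillis + " ms)");
        }
    }
}
{code}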





Re: [DISCUSS] Removing HBase 2.3 support from master

2023-02-10 Thread Geoffrey Jacoby
+1 to removing 2.3 support from 5.2 and subsequent releases (and keeping it
for any subsequent 5.1 releases) as Istvan suggests.

Geoffrey

On Fri, Feb 10, 2023 at 4:17 PM Viraj Jasani  wrote:

> I can't think of any issues with the removal of HBase 2.3 support from
> master (the future 5.2 release).
> Also, the 2.6 release should start sometime soon. We should be good to
> remove 2.3 support.
>
>
> On Fri, Feb 10, 2023 at 11:27 AM Istvan Toth  wrote:
>
> > Hi!
> >
> > HBase 2.3 has been EOL for 15 months.
> > 5.2 is taking longer than planned, and 2.3 will be quite old by the time
> we
> > release it.
> >
> > With the recent 5.1.3 release, and continued maintenance of the 5.1
> branch
> > we have any HBase 2.3 users covered.
> >
> > I propose removing HBase 2.3 support from the master branch.
> >
> > Please share your thoughts.
> >
> > regards
> > Istvan
> >
>


[jira] [Resolved] (PHOENIX-4863) Setup Travis CI to automatically run all the integration tests when a PR is created on github.com/apache/phoenix

2023-02-08 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-4863.
--
Resolution: Won't Fix

Apache infra no longer supports TravisCI

> Setup Travis CI to automatically run all the integration tests when a PR is 
> created on github.com/apache/phoenix
> 
>
> Key: PHOENIX-4863
> URL: https://issues.apache.org/jira/browse/PHOENIX-4863
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Assignee: Priyank Porwal
>Priority: Major
>
> Apache Tephra does this (see 
> https://travis-ci.org/apache/incubator-tephra/jobs/278449357) 
> It would be convenient if the tests ran automatically when a PR is 
> created, instead of the contributor having to manually create a patch file, 
> attach it to the JIRA, and click the submit button. 
> See https://docs.travis-ci.com/user/getting-started





[jira] [Updated] (PHOENIX-6865) Move CI to Apache Yetus for phoenix-omid

2023-02-06 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6865:
-
Description: 
TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
must switch to some other form of CI. 

We should switch phoenix-omid to use the same Yetus CI we use in the main 
project, as suggested in PHOENIX-6145. 

  was:
TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
must switch to some other form of CI. 

We should switch it to use the same Yetus CI we use in the main project, as 
suggested in PHOENIX-6145. 


> Move CI to Apache Yetus for phoenix-omid
> 
>
> Key: PHOENIX-6865
> URL: https://issues.apache.org/jira/browse/PHOENIX-6865
> Project: Phoenix
>  Issue Type: Sub-task
>  Components: omid
>        Reporter: Geoffrey Jacoby
>Priority: Major
>
> TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
> must switch to some other form of CI. 
> We should switch phoenix-omid to use the same Yetus CI we use in the main 
> project, as suggested in PHOENIX-6145. 





[jira] [Created] (PHOENIX-6865) Move CI to Apache Yetus for phoenix-omid

2023-02-06 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6865:


 Summary: Move CI to Apache Yetus for phoenix-omid
 Key: PHOENIX-6865
 URL: https://issues.apache.org/jira/browse/PHOENIX-6865
 Project: Phoenix
  Issue Type: Sub-task
  Components: omid
Reporter: Geoffrey Jacoby


TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
must switch to some other form of CI. 

We should switch it to use the same Yetus CI we use in the main project, as 
suggested in PHOENIX-6145. 





Re: [VOTE] Release of Apache Phoenix 5.1.3 RC1

2022-12-22 Thread Geoffrey Jacoby
+1 (binding)

Verified checksums: OK
Verified signatures: OK
Verified RAT: OK
mvn clean package builds: OK
mvn clean verify for HBase 2.5.2: (?) Got a weird UnknownFormatException
(as in String.format) from HDFS on every test that used the minicluster.
Reached out to Tanuj, who could not repro the issue, so I assume it's a
problem with my local machine.
mvn clean verify for HBase 2.4.15: (OK with asterisks) The usual flapper on
PermissionNSDisabledIT, but also a reproducible-but-flappy failure on
AlterTableWithViewsIT that I filed as PHOENIX-6850. It seems to be a test
bug, so I'm OK with fixing it in the next release. It is not reproducible
on 5.1.2, however.

Geoffrey



On Wed, Dec 21, 2022 at 12:50 PM Istvan Toth 
wrote:

> +1 (binding)
>
> Checksum for source distribution: OK
> Signature for source distribution: OK
> Checksum for HBase 2.4 binary distribution: OK
> Signature for HBase 2.4 binary distribution: OK
> Release notes and changes files look good: OK
> mvn clean apache-rat:check: OK
> mvn clean package: OK
> eyeballed the contents of the binary assembly: OK
> eyeballed the contents of the maven repo: OK
> started phoenix_sandbox and ran smoke test: OK
>
> regards
> Istvan
>
> On Wed, Dec 21, 2022 at 5:36 AM Tanuj Khurana  wrote:
>
> > Please vote on this Apache phoenix release candidate,
> > Phoenix-5.1.3RC1
> >
> > The VOTE will remain open for at least 72 hours.
> >
> > [ ] +1 Release this package as Apache phoenix 5.1.3
> > [ ] -1 Do not release this package because ...
> >
> > The tag to be voted on is 5.1.3RC1:
> >
> > https://github.com/apache/phoenix/tree/5.1.3RC1
> >
> > The release files, including signatures, digests, as well as CHANGES.md
> > and RELEASENOTES.md included in this RC can be found at:
> >
> > https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.1.3RC1/
> >
> > Maven artifacts are available in a staging repository at:
> >
> https://repository.apache.org/content/repositories/orgapachephoenix-1248/
> >
> > Artifacts were signed with the 0x47BBA537 key which can be found in:
> >
> > https://dist.apache.org/repos/dist/release/phoenix/KEYS
> >
> > To learn more about Apache phoenix, please see
> >
> > https://phoenix.apache.org/
> >
> > Thanks,
> > Tanuj Khurana
> >
>
>
> --
> *István Tóth* | Sr. Staff Software Engineer
> *Email*: st...@cloudera.com
> cloudera.com 
>


[jira] [Updated] (PHOENIX-6850) AlterTableWithViewsIT CreateView Props Test Flaps

2022-12-22 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6850:
-
Labels: beginner starter  (was: )

> AlterTableWithViewsIT CreateView Props Test Flaps
> -
>
> Key: PHOENIX-6850
> URL: https://issues.apache.org/jira/browse/PHOENIX-6850
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0, 5.1.3
>        Reporter: Geoffrey Jacoby
>Priority: Major
>  Labels: beginner, starter
>
> When running IT tests on the 5.1.3 RC1, and on the master (5.2) HEAD, I get 
> flappy behavior on 
> AlterTableWithViewsIT.testCreateViewWithPropsMaintainsOwnProps. When a 
> particular param set is run standalone, it seems to consistently pass. 
> However, when run in concert with different param iterations, it sometimes 
> generates an NPE on
> {code:java}
> assertFalse(viewTable1.useStatsForParallelization());
> {code}
> This is because viewTable1 had previously been unset for 
> useStatsForParallelization, so it returns null if it doesn't pick up the 
> change to the base table properly.
> This seems to be a caching problem -- populating viewTable1 and viewTable2 
> from a call to PhoenixRuntime.getTableNoCache seems to fix it. 
> However, since the test updates the base table from a global connection, and 
> then tries to access views on that table from a separate tenant connection, 
> it's not obvious to me that the cache for the tenant connection _should_ be 
> expired in this situation, so I'm not sure the caching behavior counts as a 
> bug itself. 
> Interestingly though, 5.1.2 doesn't seem to have this issue. 





[jira] [Created] (PHOENIX-6850) AlterTableWithViewsIT CreateView Props Test Flaps

2022-12-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6850:


 Summary: AlterTableWithViewsIT CreateView Props Test Flaps
 Key: PHOENIX-6850
 URL: https://issues.apache.org/jira/browse/PHOENIX-6850
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.2.0, 5.1.3
Reporter: Geoffrey Jacoby


When running IT tests on the 5.1.3 RC1, and on the master (5.2) HEAD, I get 
flappy behavior on 
AlterTableWithViewsIT.testCreateViewWithPropsMaintainsOwnProps. When a 
particular param set is run standalone, it seems to consistently pass. However, 
when run in concert with different param iterations, it sometimes generates an 
NPE on

{code:java}
assertFalse(viewTable1.useStatsForParallelization());
{code}

This is because viewTable1 had previously been unset for 
useStatsForParallelization, so it returns null if it doesn't pick up the change 
to the base table properly.

This seems to be a caching problem -- populating viewTable1 and viewTable2 from 
a call to PhoenixRuntime.getTableNoCache seems to fix it. 

However, since the test updates the base table from a global connection, and 
then tries to access views on that table from a separate tenant connection, 
it's not obvious to me that the cache for the tenant connection _should_ be 
expired in this situation, so I'm not sure the caching behavior counts as a bug 
itself. 

Interestingly though, 5.1.2 doesn't seem to have this issue. 
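
A minimal sketch of the workaround described above, with a hypothetical
JDBC URL and view name (this is not the actual test code): re-read the
view's PTable while bypassing the client cache before asserting on the
inherited property.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.util.PhoenixRuntime;

// Illustrative sketch of the workaround, not the actual test code.
class ViewPropsCheckSketch {
    static void assertViewSeesBaseTableChange(String jdbcUrl, String viewName)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            // Bypass the client-side metadata cache, so the PTable reflects
            // the ALTER TABLE executed on the base table via another
            // connection.
            PTable view = PhoenixRuntime.getTableNoCache(conn, viewName);
            Boolean useStats = view.useStatsForParallelization();
            // With a stale cached PTable this could be null -> NPE on unboxing.
            if (useStats == null || useStats.booleanValue()) {
                throw new AssertionError(
                    "expected useStatsForParallelization to be false");
            }
        }
    }
}
{code}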





Re: [VOTE] Release of Apache Phoenix 5.1.3 RC0

2022-11-22 Thread Geoffrey Jacoby
Viraj, is that a binding vote of -1?

I haven't had a chance to test the RC yet, but if the IT test was failing
in 5.1 similarly to how it was failing in master due to PHOENIX-6776, I
concur.

-1 (binding) for me.

On Tue, Nov 22, 2022 at 3:21 AM Viraj Jasani  wrote:

> Since PHOENIX-6776 is reverted, we will need new RC for 5.1.3.
>
>
> On Sun, Nov 13, 2022 at 12:51 PM Tanuj Khurana 
> wrote:
>
> > Please vote on this Apache phoenix release candidate, phoenix-5.1.3RC0
> >
> > The VOTE will remain open for at least 72 hours.
> >
> > [ ] +1 Release this package as Apache phoenix 5.1.3
> > [ ] -1 Do not release this package because ...
> >
> > The tag to be voted on is 5.1.3RC0:
> >
> >   https://github.com/apache/phoenix/tree/5.1.3RC0
> >
> > The release files, including signatures, digests, as well as CHANGES.md
> > and RELEASENOTES.md included in this RC can be found at:
> >
> >   https://dist.apache.org/repos/dist/dev/phoenix/5.1.3RC0/
> >
> > Maven artifacts are available in a staging repository at:
> >
> >   https://repository.apache.org/content/repositories//
> >
> > Artifacts were signed with the 0x47BBA537 key which can be found in:
> >
> >   https://dist.apache.org/repos/dist/release/phoenix/KEYS
> >
> > To learn more about Apache phoenix, please see
> >
> >   https://phoenix.apache.org/
> >
> > Thanks,
> > Tanuj Khurana
> >
>


[jira] [Updated] (PHOENIX-792) Support UPSERT SET command

2022-10-28 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-792:

Attachment: (was: 198416324-4ede2959-0d83-4f64-80e1-5d975d235a13.jpg)

> Support UPSERT SET command
> --
>
> Key: PHOENIX-792
> URL: https://issues.apache.org/jira/browse/PHOENIX-792
> Project: Phoenix
>  Issue Type: Task
>Reporter: James R. Taylor
>Assignee: thrylokya
>
> Support setting values in a table through a new UPSERT SET command like this:
> UPSERT my_table SET title = 'CEO'
> WHERE name = 'John Doe'
> UPSERT my_table SET pay_by_quarter = ARRAY[25000,25000,27000,27000]
> WHERE name = 'Carol';
> UPSERT my_table SET pay_by_quarter[4] = 15000
> WHERE name = 'Carol';
> This would essentially be syntactic sugar and use the same UpsertCompiler, 
> mapping to an UPSERT SELECT command that simply fills in the primary key 
> columns like this:
> UPSERT FROM my_table(name,title) 
> SELECT name,'CEO' FROM my_table
> WHERE name = 'John Doe'
> UPSERT FROM my_table(name, pay_by_quarter[4]) 
> SELECT name,15000 FROM my_table
> WHERE name = 'Carol';





[jira] [Resolved] (PHOENIX-6824) Jarvis AI voice Assistant

2022-10-28 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6824.
--
Fix Version/s: (was: thirdparty-2.0.1)
 Release Note:   (was: Haiti Business Network Jarvis AI Smart Voice 
Assistant Investment Programming Sitting System On Hasbro Studio Online AI 
voice Assistant Customer Service Provider Robotics agent Connects )
   Resolution: Invalid

> Jarvis AI voice Assistant 
> --
>
> Key: PHOENIX-6824
> URL: https://issues.apache.org/jira/browse/PHOENIX-6824
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors
>Affects Versions: thirdparty-2.0.0
>Reporter: Evens Max Pierrelouis 
>Priority: Major
>  Labels: auto-deprioritized-major
> Attachments: jarvis AI Assistant .pdf
>

[jira] [Resolved] (PHOENIX-6806) Protobufs don't compile on ARM-based Macs (Apple Silicon)

2022-10-10 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6806.
--
Fix Version/s: 5.2.0
   5.1.3
   Resolution: Fixed

> Protobufs don't compile on ARM-based Macs (Apple Silicon)
> -
>
> Key: PHOENIX-6806
> URL: https://issues.apache.org/jira/browse/PHOENIX-6806
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Geoffrey Jacoby
>        Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> This is similar to PHOENIX-6475 for 64-bit Linux ARM. Maven will fail looking 
> for an osx-aarch64 version of protoc 2.5.0.
> However, unlike in the Linux case, we have a good workaround that lets us 
> keep using an official 2.5.0 binary. 
> MacOS versions that support Apple's ARM processors can run x64 code through a 
> translation layer (with a perf hit). Therefore, we can change the 
> phoenix-core pom to use the MacOS x86_64 version of protoc if it detects 
> we're running osx-aarch64. 
> Unlike running _all_ local development through an x64 JDK, which is very 
> slow, protobuf compilation isn't a big part of the build / test time, so the 
> perf hit for just emulating the protobuf compilation shouldn't be too bad. 





[jira] [Assigned] (PHOENIX-6806) Protobufs don't compile on ARM-based Macs (Apple Silicon)

2022-10-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6806:


Assignee: Geoffrey Jacoby

> Protobufs don't compile on ARM-based Macs (Apple Silicon)
> -
>
> Key: PHOENIX-6806
> URL: https://issues.apache.org/jira/browse/PHOENIX-6806
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Geoffrey Jacoby
>        Assignee: Geoffrey Jacoby
>Priority: Major
>
> This is similar to PHOENIX-6475 for 64-bit Linux ARM. Maven will fail looking 
> for an osx-aarch64 version of protoc 2.5.0.
> However, unlike in the Linux case, we have a good workaround that lets us 
> keep using an official 2.5.0 binary. 
> MacOS versions that support Apple's ARM processors can run x64 code through a 
> translation layer (with a perf hit). Therefore, we can change the 
> phoenix-core pom to use the MacOS x86_64 version of protoc if it detects 
> we're running osx-aarch64. 
> Unlike running _all_ local development through an x64 JDK, which is very 
> slow, protobuf compilation isn't a big part of the build / test time, so the 
> perf hit for just emulating the protobuf compilation shouldn't be too bad. 





Re: [VOTE] Release of Apache Phoenix Omid 1.1.0RC0

2022-10-06 Thread Geoffrey Jacoby
+1 (binding)
Signature: ok
Checksum: ok
rat-check: ok

Built and ran Omid unit and IT tests (Oracle x64 JDK 8) : ok
(TestTransactionCleanup's minicluster broke several times in the full run
but ran fine separately)

Tweaked the Phoenix and phoenix-core poms to use Omid 1.1.0 and built / ran
Phoenix IT tests (Azul osx-aarch64 JDK 1.8.0_345).
There were some random minicluster test failures unrelated to Omid, but
the transactional tests all passed ok.

Geoffrey

On Thu, Oct 6, 2022 at 4:49 PM Andrew Purtell  wrote:

> Thanks Geoffrey.
> Let me note also that I owe the Phoenix community HBASE-27359, and have not
> forgotten about it, although it has been deprioritized due to $dayjob
> stuff.
>
> On Thu, Oct 6, 2022 at 1:22 PM Geoffrey Jacoby  wrote:
>
> > Andrew -- if you run phoenix-omid/dev-support/rebuild_hbase.sh <HBase
> > version> it will recompile the HBase dependency for Hadoop 3 and let a
> > subsequent mvn test run work.
> >
> > I should have my vote up later today.
> >
> > Geoffrey
> >
> > On Thu, Oct 6, 2022 at 3:32 PM Andrew Purtell 
> wrote:
> >
> > > +1 (binding)
> > >
> > > * Signature: ok
> > > * Checksum : ok
> > > * Rat check (1.8.0_332): ok
> > >  - mvn clean apache-rat:check
> > > * Built from source (1.8.0_332): ok
> > >  - mvn clean install  -DskipTests
> > > * Unit tests pass (1.8.0_332): failed
> > >  - mvn clean package -Dsurefire.rerunFailingTestsCount=3
> > >
> > > This release has the problem, like Phoenix in general, where the HBase
> 2
> > > dependency is not compiled for Hadoop 3 so all the unit tests which
> > require
> > > the HBase minicluster fail out of the box. I assume this is expected.
> > >
> > > java.lang.IncompatibleClassChangeError: Found interface
> > > org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
> > >
> > >
> > >
> > > On Thu, Oct 6, 2022 at 7:36 AM Istvan Toth  wrote:
> > >
> > > > Please vote on this Apache Phoenix Omid release candidate,
> > > > phoenix-omid-1.1.0RC0
> > > >
> > > > The VOTE will remain open for at least 72 hours.
> > > >
> > > > [ ] +1 Release this package as Apache phoenix omid 1.1.0
> > > > [ ] -1 Do not release this package because ...
> > > >
> > > > The tag to be voted on is 1.1.0RC0:
> > > >
> > > >   https://github.com/apache/phoenix-omid/tree/1.1.0RC0
> > > >
> > > > The release files, including signatures, digests, as well as
> CHANGES.md
> > > > and RELEASENOTES.md included in this RC can be found at:
> > > >
> > > >
> > https://dist.apache.org/repos/dist/dev/phoenix/phoenix-omid-1.1.0RC0/
> > > >
> > > > Maven artifacts are available in the orgapachephoenix-1245 staging
> > > > repository at:
> > > >
> > > >   https://repository.apache.org/#stagingRepositories
> > > >
> > > > Artifacts were signed with the st...@apache.org key which can be
> found
> > > in:
> > > >
> > > >   https://dist.apache.org/repos/dist/release/phoenix/KEYS
> > > >
> > > > To learn more about Apache phoenix omid, please see
> > > >
> > > >   https://phoenix.apache.org/
> > > >
> > > > To test the RC with Phoenix, apply the PR from PHOENIX-6715 to
> Phoenix
> > > (and
> > > > remove -SNAPSHOT from the Omid version).
> > > >
> > > > Thanks,
> > > > Istvan
> > > >
> > >
> > >
> > > --
> > > Best regards,
> > > Andrew
> > >
> > > Unrest, ignorance distilled, nihilistic imbeciles -
> > > It's what we’ve earned
> > > Welcome, apocalypse, what’s taken you so long?
> > > Bring us the fitting end that we’ve been counting on
> > >- A23, Welcome, Apocalypse

[jira] [Created] (PHOENIX-6806) Protobufs don't compile on ARM-based Macs (Apple Silicon)

2022-10-06 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6806:


 Summary: Protobufs don't compile on ARM-based Macs (Apple Silicon)
 Key: PHOENIX-6806
 URL: https://issues.apache.org/jira/browse/PHOENIX-6806
 Project: Phoenix
  Issue Type: Bug
Reporter: Geoffrey Jacoby


This is similar to PHOENIX-6475 for 64-bit Linux ARM. Maven will fail looking 
for an osx-aarch64 version of protoc 2.5.0.

However, unlike in the Linux case, we have a good workaround that lets us keep 
using an official 2.5.0 binary. 

MacOS versions that support Apple's ARM processors can run x64 code through a 
translation layer (with a perf hit). Therefore, we can change the phoenix-core 
pom to use the MacOS x86_64 version of protoc if it detects we're running 
osx-aarch64. 

Unlike running _all_ local development through an x64 JDK, which is very slow, 
protobuf compilation isn't a big part of the build / test time, so the perf hit 
for just emulating the protobuf compilation shouldn't be too bad. 





Re: [VOTE] Release of Apache Phoenix Omid 1.1.0RC0

2022-10-06 Thread Geoffrey Jacoby
Andrew -- if you run phoenix-omid/dev-support/rebuild_hbase.sh <HBase
version> it will recompile the HBase dependency for Hadoop 3 and let a
subsequent mvn test run work.

I should have my vote up later today.

Geoffrey

On Thu, Oct 6, 2022 at 3:32 PM Andrew Purtell  wrote:

> +1 (binding)
>
> * Signature: ok
> * Checksum : ok
> * Rat check (1.8.0_332): ok
>  - mvn clean apache-rat:check
> * Built from source (1.8.0_332): ok
>  - mvn clean install  -DskipTests
> * Unit tests pass (1.8.0_332): failed
>  - mvn clean package -Dsurefire.rerunFailingTestsCount=3
>
> This release has the problem, like Phoenix in general, where the HBase 2
> dependency is not compiled for Hadoop 3 so all the unit tests which require
> the HBase minicluster fail out of the box. I assume this is expected.
>
> java.lang.IncompatibleClassChangeError: Found interface
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
>
>
>
> On Thu, Oct 6, 2022 at 7:36 AM Istvan Toth  wrote:
>
> > Please vote on this Apache Phoenix Omid release candidate,
> > phoenix-omid-1.1.0RC0
> >
> > The VOTE will remain open for at least 72 hours.
> >
> > [ ] +1 Release this package as Apache phoenix omid 1.1.0
> > [ ] -1 Do not release this package because ...
> >
> > The tag to be voted on is 1.1.0RC0:
> >
> >   https://github.com/apache/phoenix-omid/tree/1.1.0RC0
> >
> > The release files, including signatures, digests, as well as CHANGES.md
> > and RELEASENOTES.md included in this RC can be found at:
> >
> >   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-omid-1.1.0RC0/
> >
> > Maven artifacts are available in the orgapachephoenix-1245 staging
> > repository at:
> >
> >   https://repository.apache.org/#stagingRepositories
> >
> > Artifacts were signed with the st...@apache.org key which can be found
> in:
> >
> >   https://dist.apache.org/repos/dist/release/phoenix/KEYS
> >
> > To learn more about Apache phoenix omid, please see
> >
> >   https://phoenix.apache.org/
> >
> > To test the RC with Phoenix, apply the PR from PHOENIX-6715 to Phoenix
> (and
> > remove -SNAPSHOT from the Omid version).
> >
> > Thanks,
> > Istvan
> >
>
>
> --
> Best regards,
> Andrew
>
> Unrest, ignorance distilled, nihilistic imbeciles -
> It's what we’ve earned
> Welcome, apocalypse, what’s taken you so long?
> Bring us the fitting end that we’ve been counting on
>- A23, Welcome, Apocalypse


[DISCUSS] Phoenix 5.2 Status

2022-10-03 Thread Geoffrey Jacoby
Over the past few months a lot of progress has been made toward a 5.2
release, and we've gone from approximately 50 open JIRAs targeted to 5.2
down to 4.

I also spent some time today using Viraj's excellent JIRA/git diff tool in
dev/misc_utils to harmonize the JIRAs that were missing a Fix Version.

The remaining JIRAs that aren't resolved and have a Fix Version of 5.2 are:
PHOENIX-6752 - Perf fix for Queries with lots of ORs (Jacob Isaac) -- In
review, with non-trivial suggestions to the tests still to go
PHOENIX-6715 - Update to Omid 1.1 (Istvan Toth) - This is waiting on the
release of Omid 1.1
PHOENIX-5586 - Documentation for splittable syscat (No owner)
PHOENIX-6082 - Documentation for HA client (No owner)

Some other JIRAs which have open PRs but no Fix Version:
PHOENIX-5422 (Richard Antal) - Use Java8 APIs instead of joda-time
PHOENIX-5066 (Istvan Toth) - Time Zones used incorrectly (Draft) (Still
seems in early stages?)
PHOENIX-6761 (Palash Chauhan / Kadir Ozdemir) - Phoenix Client Metadata
Caching Improvement (Some open questions on the PR, and the engineers tell
me that they still have some perf testing to do)

Are any of the above open JIRAs with no Fix Version intended for 5.2? Are
there any changes that are critical for 5.2 that don't have open PRs yet?

Thanks,

Geoffrey Jacoby


[jira] [Created] (PHOENIX-6802) HA Client Documentation

2022-10-03 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6802:


 Summary: HA Client Documentation
 Key: PHOENIX-6802
 URL: https://issues.apache.org/jira/browse/PHOENIX-6802
 Project: Phoenix
  Issue Type: Task
Reporter: Geoffrey Jacoby
 Fix For: 5.2.0


The Phoenix HA client is being released as part of Phoenix 5.2. This will need 
documentation on the Phoenix site explaining how to use it, what use cases it's 
suited for, and use cases (such as mutable tables) for which it isn't. 





[jira] [Updated] (PHOENIX-6740) Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6740:
-
Fix Version/s: (was: 5.2.0)

> Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile
> -
>
> Key: PHOENIX-6740
> URL: https://issues.apache.org/jira/browse/PHOENIX-6740
> Project: Phoenix
>  Issue Type: Task
>    Reporter: Geoffrey Jacoby
>        Assignee: Geoffrey Jacoby
>Priority: Major
>
> HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and 
> we have a similar request from dependabot. 





[jira] [Updated] (PHOENIX-6732) PherfMainIT and DataIngestIT have failing tests

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6732:
-
Fix Version/s: (was: 5.2.0)

> PherfMainIT and DataIngestIT have failing tests
> ---
>
> Key: PHOENIX-6732
> URL: https://issues.apache.org/jira/browse/PHOENIX-6732
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>        Reporter: Geoffrey Jacoby
>Assignee: Jacob Isaac
>Priority: Blocker
>
> PherfMainIT and DataIngestIT have consistently failing IT tests, which can 
> be reproduced both locally and in Yetus. (This was shown recently in the test 
> run for PHOENIX-6554, which is a pherf improvement.)
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 69.393 s <<< FAILURE! - in org.apache.phoenix.pherf.DataIngestIT
> [ERROR] org.apache.phoenix.pherf.DataIngestIT.testColumnRulesApplied  Time 
> elapsed: 0.369 s  <<< FAILURE!
> java.lang.AssertionError: Expected 100 rows to have been inserted 
> expected:<30> but was:<31>
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testQueryTimeout  Time elapsed: 
> 15.531 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-32_detail.csv (No such file or 
> directory)
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testNoQueryTimeout  Time 
> elapsed: 9.339 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-23_detail.csv (No such file or 
> directory)





[jira] [Updated] (PHOENIX-6396) PChar illegal data exception should not contain value

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6396:
-
Fix Version/s: 5.2.0

> PChar illegal data exception should not contain value
> -
>
> Key: PHOENIX-6396
> URL: https://issues.apache.org/jira/browse/PHOENIX-6396
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: 5.1.1, 4.16.1, 5.2.0
>
>






[jira] [Updated] (PHOENIX-6702) ConcurrentMutationsExtendedIT and PartialIndexRebuilderIT fail on Hbase 2.4.11+

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6702:
-
Fix Version/s: (was: 5.2.0)
   (was: 5.1.3)

> ConcurrentMutationsExtendedIT and PartialIndexRebuilderIT fail on Hbase 
> 2.4.11+
> ---
>
> Key: PHOENIX-6702
> URL: https://issues.apache.org/jira/browse/PHOENIX-6702
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Kadir Ozdemir
>Priority: Blocker
> Attachments: bisect.sh
>
>
> On my local machine
> ConcurrentMutationsExtendedIT.testConcurrentUpserts failed 6 out 10 times 
> while PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild failed 10 out 
> of 10 times with HBase 2.4.11 (the default build)
>  The same tests succeeded 3 out of 3 times with HBase 2.3.7.
> Either HBase 2.4 has a bug, or our compatibility modules need to be fixed.





[jira] [Updated] (PHOENIX-6388) Add sampled logging for read repairs

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6388:
-
Fix Version/s: 5.2.0

> Add sampled logging for read repairs
> 
>
> Key: PHOENIX-6388
> URL: https://issues.apache.org/jira/browse/PHOENIX-6388
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Fix For: 5.1.1, 4.16.1, 4.17.0, 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6388) Add sampled logging for read repairs

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6388:
-
Fix Version/s: (was: 4.17.0)

> Add sampled logging for read repairs
> 
>
> Key: PHOENIX-6388
> URL: https://issues.apache.org/jira/browse/PHOENIX-6388
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Fix For: 5.1.1, 4.16.1, 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6462) Index build mapper that failed should not be logging into the PHOENIX_INDEX_TOOL_RESULT table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6462.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

> Index build mapper that failed should not be logging into the 
> PHOENIX_INDEX_TOOL_RESULT table
> -
>
> Key: PHOENIX-6462
> URL: https://issues.apache.org/jira/browse/PHOENIX-6462
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>
> Today, if a mapper fails, it still logs the region into the 
> PHOENIX_INDEX_TOOL_RESULT table. This mistakenly makes it look as though 
> the mapper succeeded, so incremental rebuilds will be misled.
>  
> [~swaroopa] [~tkhurana]
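
A minimal sketch of the intended guard, assuming the mapper tracks its own success; the result-writer helper and the key/value types below are hypothetical, not the actual IndexTool code:
{code:java}
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch, not the actual IndexTool mapper.
public class GuardedIndexRebuildMapper
        extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

    private boolean mapperSucceeded;

    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        try {
            while (context.nextKeyValue()) {
                map(context.getCurrentKey(), context.getCurrentValue(), context);
            }
            mapperSucceeded = true; // reached only if every record was processed
        } finally {
            cleanup(context);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Record the region in PHOENIX_INDEX_TOOL_RESULT only on success;
        // a failed mapper must leave no row, or incremental rebuilds would
        // treat the region as already rebuilt.
        if (mapperSucceeded) {
            writeResultRow(context); // hypothetical helper
        }
    }

    private void writeResultRow(Context context) {
        // elided: upsert of the per-region result row
    }
}
{code}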



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6462) Index build mapper that failed should not be logging into the PHOENIX_INDEX_TOOL_RESULT table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6462:


Assignee: Gokcen Iskender

> Index build mapper that failed should not be logging into the 
> PHOENIX_INDEX_TOOL_RESULT table
> -
>
> Key: PHOENIX-6462
> URL: https://issues.apache.org/jira/browse/PHOENIX-6462
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
>
> Today, if a mapper fails, it still logs the region into the 
> PHOENIX_INDEX_TOOL_RESULT table. This mistakenly makes it look as though 
> the mapper succeeded, so incremental rebuilds will be misled.
>  
> [~swaroopa] [~tkhurana]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6485) Clean up classpath in .py scripts

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6485.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

> Clean up classpath in .py scripts
> -
>
> Key: PHOENIX-6485
> URL: https://issues.apache.org/jira/browse/PHOENIX-6485
> Project: Phoenix
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 5.2.0
>
>
> Clean up classpath in .py scripts and replace all phoenix-client JARs with 
> phoenix-client-embedded + log4j backend jar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6544) Adding metadata inconsistency metric

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6544:
-
Fix Version/s: 5.2.0
   (was: 5.1.0)

> Adding metadata inconsistency metric
> 
>
> Key: PHOENIX-6544
> URL: https://issues.apache.org/jira/browse/PHOENIX-6544
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Fix For: 4.16.1, 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5838) Add Histograms for Table level Metrics.

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5838:
-
Fix Version/s: 5.2.0

> Add Histograms for  Table level Metrics.
> 
>
> Key: PHOENIX-5838
> URL: https://issues.apache.org/jira/browse/PHOENIX-5838
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: vikas meka
>Assignee: vikas meka
>Priority: Major
>  Labels: metric-collector, metrics
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6572) Add Metrics for SystemCatalog Table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6572:
-
Fix Version/s: 5.2.0

> Add Metrics for SystemCatalog Table
> ---
>
> Key: PHOENIX-6572
> URL: https://issues.apache.org/jira/browse/PHOENIX-6572
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: vikas meka
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6561) Allow pherf to intake phoenix Connection properties as argument.

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6561.
--
Fix Version/s: (was: 4.17.0)
   (was: 4.16.2)
   Resolution: Fixed

Resolving because the broken 4.x patch will never be released as 4.x is EOL. 

> Allow pherf to intake phoenix Connection properties as argument.
> 
>
> Key: PHOENIX-6561
> URL: https://issues.apache.org/jira/browse/PHOENIX-6561
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lokesh Khurana
>Assignee: Lokesh Khurana
>Priority: Minor
> Fix For: 5.2.0, 5.1.3
>
>
> Currently Pherf doesn't allow connection properties to be passed as 
> arguments. Some cases are covered through scenario files, but that doesn't 
> work for dynamically selected properties, and for WriteWorkload no property 
> can be passed at all during connection creation.
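
A minimal sketch of what threading such properties through could look like; the "--conn-props k=v[;k=v...]" argument format is an assumption, not Pherf's actual option:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public final class ConnPropsSketch {
    // Sketch only: the argument format is illustrative, not a real Pherf flag.
    static Connection open(String jdbcUrl, String connPropsArg) throws Exception {
        Properties props = new Properties();
        if (connPropsArg != null && !connPropsArg.isEmpty()) {
            for (String pair : connPropsArg.split(";")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2) {
                    props.setProperty(kv[0].trim(), kv[1].trim());
                }
            }
        }
        // Standard JDBC: the properties ride along on every connection the
        // workload opens, so WriteWorkload would get them too.
        return DriverManager.getConnection(jdbcUrl, props);
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn =
                open("jdbc:phoenix:localhost", "phoenix.query.timeoutMs=60000")) {
            System.out.println("connected with overrides: " + conn);
        }
    }
}
{code}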



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6603) Create SYSTEM.TRANSFORM table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6603:
-
Fix Version/s: 5.2.0

> Create SYSTEM.TRANSFORM table
> -
>
> Key: PHOENIX-6603
> URL: https://issues.apache.org/jira/browse/PHOENIX-6603
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>
> SYSTEM.TRANSFORM is a table for bookkeeping the transform process



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6612) Add TransformTool

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6612:
-
Fix Version/s: 5.2.0

> Add TransformTool
> -
>
> Key: PHOENIX-6612
> URL: https://issues.apache.org/jira/browse/PHOENIX-6612
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6617) IndexRegionObserver should create mutations for the transforming table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6617:
-
Fix Version/s: 5.2.0

> IndexRegionObserver should create mutations for the transforming table
> --
>
> Key: PHOENIX-6617
> URL: https://issues.apache.org/jira/browse/PHOENIX-6617
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6620) TransformTool should fix the unverified rows and do validation

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6620:
-
Fix Version/s: 5.2.0

> TransformTool should fix the unverified rows and do validation
> --
>
> Key: PHOENIX-6620
> URL: https://issues.apache.org/jira/browse/PHOENIX-6620
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6579) ACL check doesn't honor the namespace mapping for mapped views.

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6579:
-
Fix Version/s: 5.2.0

> ACL check doesn't honor the namespace mapping for mapped views.
> ---
>
> Key: PHOENIX-6579
> URL: https://issues.apache.org/jira/browse/PHOENIX-6579
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.2
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> When namespace mapping and ACLs are enabled and a user tries to create 
> a view on top of an existing HBase table, the query fails if the user doesn't 
> have permissions on the default namespace. 
> {noformat}
> *Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=admin/ad...@example.com, scope=default:my_ns.my_table, 
> action=[READ])
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:606)
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:201)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:171)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:86)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:106)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1900)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8313)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2499)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2481)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) 
> (state=08000,code=101)
>  {noformat}
> That happens because in the MetaData endpoint implementation we are still 
> using _SchemaUtil.getTableNameAsBytes(schemaName, tableName)_ for the mapped 
> view, which knows nothing about namespace mapping, so the ACL check goes 
> against 'default:schema.table'. It can be fixed easily by replacing the call 
> with _SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName, 
> isNamespaceMapped).getBytes();_
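
The one-line change proposed above, shown side by side; the two SchemaUtil calls are quoted from the description, while the surrounding variable names are illustrative:
{code:java}
// Before: ignores namespace mapping, so the ACL check targets
// 'default:schema.table'.
byte[] aclTableNameBefore =
    SchemaUtil.getTableNameAsBytes(schemaName, tableName);

// After: resolves to the mapped physical name (e.g. 'my_ns:my_table')
// when namespace mapping is enabled.
byte[] aclTableNameAfter =
    SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName, isNamespaceMapped)
        .getBytes();
{code}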



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6622) TransformMonitor should orchestrate transform and do retries

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6622:
-
Fix Version/s: 5.2.0

> TransformMonitor should orchestrate transform and do retries
> 
>
> Key: PHOENIX-6622
> URL: https://issues.apache.org/jira/browse/PHOENIX-6622
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6639) Read repair of a table after cutover (transform is complete and table is switched)

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6639:
-
Fix Version/s: 5.2.0

> Read repair of a table after cutover (transform is complete and table is 
> switched)
> --
>
> Key: PHOENIX-6639
> URL: https://issues.apache.org/jira/browse/PHOENIX-6639
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6659) RVC with AND clauses return incorrect result

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6659:
-
Fix Version/s: 5.2.0
   5.1.3

> RVC with AND clauses return incorrect result
> 
>
> Key: PHOENIX-6659
> URL: https://issues.apache.org/jira/browse/PHOENIX-6659
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
> Attachments: Screen Shot 2022-03-01 at 1.26.44 PM.png
>
>
> CREATE TABLE DUMMY (PK1 VARCHAR NOT NULL, PK2 BIGINT NOT NULL, PK3 BIGINT NOT 
> NULL CONSTRAINT PK PRIMARY KEY (PK1,PK2,PK3));
> UPSERT INTO DUMMY VALUES ('a',0,1);
> UPSERT INTO DUMMY VALUES ('a',1,1);
> UPSERT INTO DUMMY VALUES ('a',2,1);
> UPSERT INTO DUMMY VALUES ('a',3,1);
> UPSERT INTO DUMMY VALUES ('a',3,2);
> UPSERT INTO DUMMY VALUES ('a',4,1);
>  
> {code:java}
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY WHERE (PK1 = 'a') AND (PK1,PK2,PK3) <= ('a',3,1);
> +-----+-----+-----+
> | PK1 | PK2 | PK3 |
> +-----+-----+-----+
> +-----+-----+-----+
> No rows selected (0.045 seconds)
> 0: jdbc:phoenix:localhost> explain SELECT * FROM DUMMY WHERE (PK1 = 'a') AND (PK1,PK2,PK3) <= ('a',3,1);
> +------------------------------------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                                                 | EST_BYTES_READ | EST_ROWS_READ |
> +------------------------------------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['a',*] - ['a',-9187343239835811840] | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                                                  | null           | null          |
> +------------------------------------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.012 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY WHERE (PK1 = 'a') AND (PK2,PK3) <= (3,1);
> +-----+-----+-----+
> | PK1 | PK2 | PK3 |
> +-----+-----+-----+
> | a   | 0   | 1   |
> | a   | 1   | 1   |
> | a   | 2   | 1   |
> | a   | 3   | 1   |
> +-----+-----+-----+
> 4 rows selected (0.014 seconds)
> 0: jdbc:phoenix:localhost> EXPLAIN SELECT * FROM DUMMY WHERE (PK1 = 'a') AND (PK2,PK3) <= (3,1);
> +------------------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                               | EST_BYTES_READ | EST_ROWS_READ |
> +------------------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['a',*] - ['a',3]  | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                                | null           | null          |
> +------------------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.004 seconds) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6661) Sqlline does not work on PowerPC linux

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6661:
-
Fix Version/s: 5.2.0
   queryserver-6.0.0

> Sqlline does not work on PowerPC linux
> --
>
> Key: PHOENIX-6661
> URL: https://issues.apache.org/jira/browse/PHOENIX-6661
> Project: Phoenix
>  Issue Type: Bug
>  Components: core, queryserver
> Environment: {noformat}
> # uname -a
> Linux  4.18.0-305.el8.ppc64le #1 SMP Thu Apr 29 08:53:15 
> EDT 2021 ppc64le ppc64le ppc64le GNU/Linux
> # cat /etc/redhat-release
> Red Hat Enterprise Linux release 8.4 (Ootpa)
> # java -version
> openjdk version "11.0.12" 2021-07-20 LTS
> OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7-LTS, mixed mode, 
> sharing){noformat}
>Reporter: Abhishek Jain
>Assignee: Istvan Toth
>Priority: Major
> Fix For: queryserver-6.0.0, 5.2.0
>
>
> When trying to run phoenix-sqlline.py or phoenix-sqlline-thin.py on Linux PPC,
> we get the following exception:
> {noformat}
> Exception in thread "main" com.sun.jna.LastErrorException: [25] Inappropriate 
> ioctl for device
>     at com.sun.jna.Native.invokeVoid(Native Method)
>     at com.sun.jna.Function.invoke(Function.java:415)
>     at com.sun.jna.Function.invoke(Function.java:361)
>     at com.sun.jna.Library$Handler.invoke(Library.java:265)
>     at com.sun.proxy.$Proxy0.ioctl(Unknown Source)
>     at 
> org.jline.terminal.impl.jna.linux.LinuxNativePty.getSize(LinuxNativePty.java:95)
>     at 
> org.jline.terminal.impl.AbstractPosixTerminal.getSize(AbstractPosixTerminal.java:60)
>     at org.jline.terminal.Terminal.getWidth(Terminal.java:196)
>     at sqlline.SqlLine.getConsoleReader(SqlLine.java:594)
>     at sqlline.SqlLine.begin(SqlLine.java:511)
>     at sqlline.SqlLine.start(SqlLine.java:267)
>     at sqlline.SqlLine.main(SqlLine.java:206){noformat}
> Upgrading to the latest sqlline 1.12 will result in the sqlline.py starting 
> normally, but it will not accept any keyboard input.
> Replacing the currently used sqlline-*-jar-with-dependencies.jar JAR with the 
> plain sqlline jar, and NOT adding the JNA and JANSI terminal variants and 
> their dependencies fixes the problem.
> Doing that, however, would break or at least seriously degrade sqlline 
> functionality on Windows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6661) Sqlline does not work on PowerPC linux

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6661:
-
Fix Version/s: queryserver-6.0.1
   (was: queryserver-6.0.0)

> Sqlline does not work on PowerPC linux
> --
>
> Key: PHOENIX-6661
> URL: https://issues.apache.org/jira/browse/PHOENIX-6661
> Project: Phoenix
>  Issue Type: Bug
>  Components: core, queryserver
> Environment: {noformat}
> # uname -a
> Linux  4.18.0-305.el8.ppc64le #1 SMP Thu Apr 29 08:53:15 
> EDT 2021 ppc64le ppc64le ppc64le GNU/Linux
> # cat /etc/redhat-release
> Red Hat Enterprise Linux release 8.4 (Ootpa)
> # java -version
> openjdk version "11.0.12" 2021-07-20 LTS
> OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7-LTS, mixed mode, 
> sharing){noformat}
>Reporter: Abhishek Jain
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0, queryserver-6.0.1
>
>
> When trying to run phoenix-sqlline.py or phoenix-sqlline-thin.py on Linux PPC,
> we get the following exception:
> {noformat}
> Exception in thread "main" com.sun.jna.LastErrorException: [25] Inappropriate 
> ioctl for device
>     at com.sun.jna.Native.invokeVoid(Native Method)
>     at com.sun.jna.Function.invoke(Function.java:415)
>     at com.sun.jna.Function.invoke(Function.java:361)
>     at com.sun.jna.Library$Handler.invoke(Library.java:265)
>     at com.sun.proxy.$Proxy0.ioctl(Unknown Source)
>     at 
> org.jline.terminal.impl.jna.linux.LinuxNativePty.getSize(LinuxNativePty.java:95)
>     at 
> org.jline.terminal.impl.AbstractPosixTerminal.getSize(AbstractPosixTerminal.java:60)
>     at org.jline.terminal.Terminal.getWidth(Terminal.java:196)
>     at sqlline.SqlLine.getConsoleReader(SqlLine.java:594)
>     at sqlline.SqlLine.begin(SqlLine.java:511)
>     at sqlline.SqlLine.start(SqlLine.java:267)
>     at sqlline.SqlLine.main(SqlLine.java:206){noformat}
> Upgrading to the latest sqlline 1.12 will result in the sqlline.py starting 
> normally, but it will not accept any keyboard input.
> Replacing the currently used sqlline-*-jar-with-dependencies.jar JAR with the 
> plain sqlline jar, and NOT adding the JNA and JANSI terminal variants and 
> their dependencies fixes the problem.
> Doing that, however, would break or at least seriously degrade sqlline 
> functionality on Windows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6662) Failed to delete rows when PK has one or more DESC column with IN clause

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6662:
-
Fix Version/s: 5.1.3

> Failed to delete rows when PK has one or more DESC column with IN clause
> 
>
> Key: PHOENIX-6662
> URL: https://issues.apache.org/jira/browse/PHOENIX-6662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> Global connection to create a base table and view.
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY.BASE (TETNANT_ID CHAR(15) NOT NULL, PREFIX 
> CHAR(3) NOT NULL, COL1 DATE, COL2 CHAR(15), COL3 DATE, COL4 CHAR(15), COL5 
> DATE CONSTRAINT PK PRIMARY KEY ( TETNANT_ID, PREFIX ) ) MULTI_TENANT=true;
> CREATE VIEW IF NOT EXISTS DUMMY.GLOBAL_VIEW  (PK1 DECIMAL(12, 3) NOT NULL, 
> PK2 BIGINT NOT NULL, COL6 CHAR(15) , COL7 DATE, COL8 BOOLEAN, COL9 CHAR(15), 
> COL10 VARCHAR, COL11 VARCHAR CONSTRAINT PKVIEW PRIMARY KEY (PK1 DESC, PK2)) 
> AS SELECT * FROM DUMMY.BASE WHERE PREFIX = '01A'; {code}
> Tenant connection to create a view and repro the issue
> {code:java}
> 0: jdbc:phoenix:localhost> CREATE VIEW DUMMY."0ph" AS SELECT * FROM 
> DUMMY.GLOBAL_VIEW;
> No rows affected (0.055 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (10.0,10);
> 1 row affected (0.038 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (20.0,20);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL  |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> | 01A    | null |      | null |      | null | 2E+1 | 20  |      | null |      |      |      |
> | 01A    | null |      | null |      | null | 1E+1 | 10  |      | null |      |      |      |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> 2 rows selected (0.035 seconds)
> 0: jdbc:phoenix:localhost> DELETE FROM DUMMY."0ph" WHERE (PK1,PK2) IN ((10.0,10),(20.0,20));
> No rows affected (0.024 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL  |
> +--------+------+------+--

[jira] [Updated] (PHOENIX-6662) Failed to delete rows when PK has one or more DESC column with IN clause

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6662:
-
Fix Version/s: 5.2.0

> Failed to delete rows when PK has one or more DESC column with IN clause
> 
>
> Key: PHOENIX-6662
> URL: https://issues.apache.org/jira/browse/PHOENIX-6662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Critical
> Fix For: 5.2.0
>
>
> Global connection to create a base table and view.
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY.BASE (TETNANT_ID CHAR(15) NOT NULL, PREFIX 
> CHAR(3) NOT NULL, COL1 DATE, COL2 CHAR(15), COL3 DATE, COL4 CHAR(15), COL5 
> DATE CONSTRAINT PK PRIMARY KEY ( TETNANT_ID, PREFIX ) ) MULTI_TENANT=true;
> CREATE VIEW IF NOT EXISTS DUMMY.GLOBAL_VIEW  (PK1 DECIMAL(12, 3) NOT NULL, 
> PK2 BIGINT NOT NULL, COL6 CHAR(15) , COL7 DATE, COL8 BOOLEAN, COL9 CHAR(15), 
> COL10 VARCHAR, COL11 VARCHAR CONSTRAINT PKVIEW PRIMARY KEY (PK1 DESC, PK2)) 
> AS SELECT * FROM DUMMY.BASE WHERE PREFIX = '01A'; {code}
> Tenant connection to create a view and repro the issue
> {code:java}
> 0: jdbc:phoenix:localhost> CREATE VIEW DUMMY."0ph" AS SELECT * FROM 
> DUMMY.GLOBAL_VIEW;
> No rows affected (0.055 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (10.0,10);
> 1 row affected (0.038 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (20.0,20);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL  |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> | 01A    | null |      | null |      | null | 2E+1 | 20  |      | null |      |      |      |
> | 01A    | null |      | null |      | null | 1E+1 | 10  |      | null |      |      |      |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> 2 rows selected (0.035 seconds)
> 0: jdbc:phoenix:localhost> DELETE FROM DUMMY."0ph" WHERE (PK1,PK2) IN ((10.0,10),(20.0,20));
> No rows affected (0.024 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+------+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL  |
> +--------+------+------+--

[jira] [Resolved] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6649.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

> TransformTool should transform the tenant view content as well
> --
>
> Key: PHOENIX-6649
> URL: https://issues.apache.org/jira/browse/PHOENIX-6649
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6649:


Assignee: Gokcen Iskender

> TransformTool should transform the tenant view content as well
> --
>
> Key: PHOENIX-6649
> URL: https://issues.apache.org/jira/browse/PHOENIX-6649
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6669) RVC returns a wrong result

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6669:


Assignee: Gokcen Iskender

> RVC returns a wrong result
> --
>
> Key: PHOENIX-6669
> URL: https://issues.apache.org/jira/browse/PHOENIX-6669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY (
>     PK1 VARCHAR NOT NULL,
>     PK2 BIGINT NOT NULL,
>     PK3 BIGINT NOT NULL,
>     PK4 VARCHAR NOT NULL,
>     COL1 BIGINT,
>     COL2 INTEGER,
>     COL3 VARCHAR,
>     COL4 VARCHAR,    CONSTRAINT PK PRIMARY KEY
>     (
>         PK1,
>         PK2,
>         PK3,
>         PK4
>     )
> );
> UPSERT INTO DUMMY (PK1, PK4, COL1, PK2, COL2, PK3, COL3, COL4)
>             VALUES ('xx', 'xid1', 0, 7, 7, 7, 'INSERT', null);
>  {code}
> The non-RVC query returns no row, but the RVC query returns a wrong result.
> {code:java}
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where PK1 ='xx'
> . . . . . . . . . . . . .> and (PK1 > 'xx' AND PK1 <= 'xx')
> . . . . . . . . . . . . .> and (PK2 > 5 AND PK2 <=5)
> . . . . . . . . . . . . .> and (PK3 > 2 AND PK3 <=2);
> +--+
> |                   PK2                    |
> +--+
> +--+
>  No rows selected (0.022 seconds)
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where (PK1 = 'xx')
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) > ('xx', 5, 2)
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +--+
> |                   PK2                    |
> +--+
> | 7                                        |
> +--+
> 1 row selected (0.033 seconds) {code}
> {code:java}
> 0: jdbc:phoenix:localhost> EXPLAIN select PK2 from DUMMY where (PK1 = 'xx') and (PK1, PK2, PK3) > ('xx', 5, 2) and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +--------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                     | EST_BYTES_READ | EST_ROWS_READ |
> +--------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['xx']  | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                      | null           | null          |
> +--------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.024 seconds)
> 0: jdbc:phoenix:localhost> explain select PK2 from DUMMY where PK1 ='xx' and (PK1 > 'xx' AND PK1 <= 'xx') and (PK2 > 5 AND PK2 <=5) and (PK3 > 2 AND PK3 <=2);
> +-----------------------------+----------------+---------------+
> | PLAN                        | EST_BYTES_READ | EST_ROWS_READ |
> +-----------------------------+----------------+---------------+
> | DEGENERATE SCAN OVER DUMMY  | null           | null          |
> +-----------------------------+----------------+---------------+
> 1 row selected (0.015 seconds){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6669) RVC returns a wrong result

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6669.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

This was merged a while back, but the JIRA was never resolved

> RVC returns a wrong result
> --
>
> Key: PHOENIX-6669
> URL: https://issues.apache.org/jira/browse/PHOENIX-6669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Priority: Major
> Fix For: 5.2.0
>
>
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY (
>     PK1 VARCHAR NOT NULL,
>     PK2 BIGINT NOT NULL,
>     PK3 BIGINT NOT NULL,
>     PK4 VARCHAR NOT NULL,
>     COL1 BIGINT,
>     COL2 INTEGER,
>     COL3 VARCHAR,
>     COL4 VARCHAR,    CONSTRAINT PK PRIMARY KEY
>     (
>         PK1,
>         PK2,
>         PK3,
>         PK4
>     )
> );
> UPSERT INTO DUMMY (PK1, PK4, COL1, PK2, COL2, PK3, COL3, COL4)
>             VALUES ('xx', 'xid1', 0, 7, 7, 7, 'INSERT', null);
>  {code}
> The non-RVC query returns no row, but the RVC query returns a wrong result.
> {code:java}
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where PK1 ='xx'
> . . . . . . . . . . . . .> and (PK1 > 'xx' AND PK1 <= 'xx')
> . . . . . . . . . . . . .> and (PK2 > 5 AND PK2 <=5)
> . . . . . . . . . . . . .> and (PK3 > 2 AND PK3 <=2);
> +--+
> |                   PK2                    |
> +--+
> +--+
>  No rows selected (0.022 seconds)
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where (PK1 = 'xx')
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) > ('xx', 5, 2)
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +--+
> |                   PK2                    |
> +--+
> | 7                                        |
> +--+
> 1 row selected (0.033 seconds) {code}
> {code:java}
> 0: jdbc:phoenix:localhost> EXPLAIN select PK2 from DUMMY where (PK1 = 'xx') and (PK1, PK2, PK3) > ('xx', 5, 2) and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +--------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                     | EST_BYTES_READ | EST_ROWS_READ |
> +--------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['xx']  | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                      | null           | null          |
> +--------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.024 seconds)
> 0: jdbc:phoenix:localhost> explain select PK2 from DUMMY where PK1 ='xx' and (PK1 > 'xx' AND PK1 <= 'xx') and (PK2 > 5 AND PK2 <=5) and (PK3 > 2 AND PK3 <=2);
> +-----------------------------+----------------+---------------+
> | PLAN                        | EST_BYTES_READ | EST_ROWS_READ |
> +-----------------------------+----------------+---------------+
> | DEGENERATE SCAN OVER DUMMY  | null           | null          |
> +-----------------------------+----------------+---------------+
> 1 row selected (0.015 seconds){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6682) Jenkins tests are failing for Java 11.0.14.1

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6682:
-
Fix Version/s: 5.2.0

> Jenkins tests are failing for Java 11.0.14.1
> 
>
> Key: PHOENIX-6682
> URL: https://issues.apache.org/jira/browse/PHOENIX-6682
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>  Labels: ci, test
> Fix For: 5.2.0
>
>
> Jenkins tests are failing because the Jetty versions used by some Hadoop 
> versions cannot handle the fourth version component.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6774) Enable code coverage reporting to SonarQube in Phoenix

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6774:
-
Fix Version/s: 5.2.0

> Enable code coverage reporting to SonarQube in Phoenix
> --
>
> Key: PHOENIX-6774
> URL: https://issues.apache.org/jira/browse/PHOENIX-6774
> Project: Phoenix
>  Issue Type: Task
>Reporter: Dóra Horváth
>Assignee: Dóra Horváth
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6779) Account for connection attempted & failure metrics in all paths

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6779:
-
Fix Version/s: 5.2.0

> Account for connection attempted & failure metrics in all paths
> ---
>
> Key: PHOENIX-6779
> URL: https://issues.apache.org/jira/browse/PHOENIX-6779
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.2
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> PHOENIX-6564 added some additional connection metrics.  These need to be 
> moved up higher in the stack closer to phoenix driver.create path as well as 
> the attempted metric.  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Richard Antal joins Phoenix PMC

2022-09-30 Thread Geoffrey Jacoby
Congratulations, Richard!

On Thu, Sep 29, 2022 at 10:12 PM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Richard
> Antal
> has accepted our invitation to join the PMC.
>
> We appreciate all of the great contributions Richard has made to the
> community thus far and we look forward to his continued involvement.
>
> Please join me in congratulating Richard Antal!
>
> Thanks,
> Rajeshbabu.
>


[jira] [Updated] (PHOENIX-5140) TableNotFoundException occurs when we create local asynchronous index

2022-09-30 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5140:
-
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> TableNotFoundException occurs when we create local asynchronous index
> -
>
> Key: PHOENIX-5140
> URL: https://issues.apache.org/jira/browse/PHOENIX-5140
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: > HDP : 3.0.0.0, HBase : 2.0.0,phoenix : 5.0.0 and 
> hadoop : 3.1.0
>Reporter: MariaCarrie
>Assignee: dan zheng
>Priority: Major
>  Labels: IndexTool, localIndex, tableUndefined
> Attachments: PHOENIX-5140-master-v1.patch, 
> PHOENIX-5140-master-v2.patch
>
>   Original Estimate: 48h
>  Time Spent: 20m
>  Remaining Estimate: 47h 40m
>
> First I create the table and insert the data:
> create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key, name varchar, age varchar);
> upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');
> The asynchronous index is then created:
> create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 (name) ASYNC;
> Because kerberos is enabled, I need to kinit the HBase principal first, then 
> execute the following command:
> HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar /usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2
> But I got the following error:
> Error: java.lang.RuntimeException: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
>     at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
>     at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)
>     at org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)
>     at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)
>     at org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)
>     at org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>     at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)
>     ... 9 more
> I can query this table and have access to it; it works well:
> select * from DMP.DMP_INDEX_TEST2;
> select * from DMP.TMP_INDEX_DMP_TEST2;
> drop table DMP.DMP_INDEX_TEST2;
> But why did my MR task make this mistake? Any suggestions from anyone?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-09-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6751.
--
Release Note: Adds a new config parameter, 
phoenix.max.inList.skipScan.size, which controls the maximum size of an IN 
clause before it is automatically converted from a skip scan to a range scan. 
  Resolution: Fixed

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> The queries were failing specifically during skip scan filter generation. 
> The skip scan filter is generated from a list of point key ranges: 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point keys. When there are a 
> large number of RVC elements the above
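
A sketch of opting a client into the new threshold from the release note, assuming the parameter is honored like other phoenix.* client-side connection properties; the value 5000 is arbitrary:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class InListThresholdSketch {
    public static void main(String[] args) throws Exception {
        // Property name from this issue's release note; 5000 is an
        // arbitrary example value.
        Properties props = new Properties();
        props.setProperty("phoenix.max.inList.skipScan.size", "5000");
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // IN clauses with more elements than the threshold compile to a
            // range scan instead of a skip scan, avoiding huge point-key lists.
        }
    }
}
{code}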



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6749) Replace deprecated HBase 1.x API calls

2022-09-21 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6749.
--
Resolution: Fixed

Merged into the master branch. Thanks for this contribution, [~ameszaros]!

> Replace deprecated HBase 1.x API calls
> --
>
> Key: PHOENIX-6749
> URL: https://issues.apache.org/jira/browse/PHOENIX-6749
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors, core, queryserver
>Reporter: Istvan Toth
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0
>
>
> Now that we no longer care about Hbase 1.x compatibility, we should replace 
> the deprecated Hbase 1.x API calls with HBase 2 API calls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6791) WHERE optimizer redesign

2022-09-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6791:
-
Fix Version/s: 5.3.0

> WHERE optimizer redesign
> 
>
> Key: PHOENIX-6791
> URL: https://issues.apache.org/jira/browse/PHOENIX-6791
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Priority: Major
> Fix For: 5.3.0
>
>
> The WHERE optimizer in Phoenix derives which row key ranges need to be 
> scanned from the primary key (PK) column expressions in a WHERE 
> clause. These key ranges are then used to determine the table regions to scan 
> and generate a SkipScanFilter for each of these scans if applicable. 
> The WHERE expression may include non-PK column (sub) expressions. After 
> identifying the key ranges, the WHERE optimizer removes the nodes for PK 
> columns from the expression tree if these nodes are fully used to determine 
> the key ranges.
> Since the values in the WHERE expression are expressed by byte arrays, the 
> key ranges are also expressed using byte arrays. KeyRange represents a range 
> for a row key or any sub part of a row key. A key range is composed of 
> two pairs, one for each end of the range, lower and upper. The pair is formed 
> from a byte array and a boolean value. The boolean value indicates if the end 
> of the range specified by the byte array is inclusive or not. If the byte 
> array is empty, it means that the corresponding end of the range is 
> unbounded. 
> KeySlot represents a key part and the list of key ranges for this key part 
> where a key part can be any sub part of a PK, including leading, trailing, or 
> middle part of the key. The number of columns in a key part is called span. 
> For the terminal nodes (i.e., constant values) in the expression tree, 
> KeySlot objects are created with a single key range. When KeySlot objects are 
> rolled up in the expression tree, they can have multiple ranges. For example, 
> a KeySlot object representing an IN expression will have a separate range for 
> each member of the IN expression. Similarly the KeySlot object for an OR 
> expression can have multiple ranges similarly. Please note an IN operator can 
> be replaced by an equivalent OR expression. 
> When the WHERE optimizer visits the nodes of the expression tree, it 
> generates a KeySlots object. KeySlots is essentially a list of KeySlot 
> objects (please note the difference between KeySlots vs KeySlot). There are 
> two types of KeySlots: SingleKeySlot and MultiKeySlot. SingleKeySlot 
> represents a single key slot whereas MultiKeySlot is a list of key slots the 
> results of AND expression on SingleKeySlot or MultiKeySlot objects. 
> The key slots are rolled into a MultiKeySlot object when processing an AND 
> expression. The AND operation on two key slots starting their spans with the 
> same PK columns is equivalent to taking intersection of their ranges. The OR 
> operation implementation is limited and rather simple compared to the AND 
> operation. The OR operation attempts to coalesce key slots if all of the key 
> slots have the same starting PK column. If not, it generates a null KeySlots. 
> When an expression node is used fully in generating a key slot, this 
> expression node is removed from the expression tree.
> A row key for a given table can be composed of several PK columns. Without 
> any restrictions imposed by predefined rules, intersection of key slots can 
> lead to a large number of key slots, i.e., key ranges.  For example, consider 
> a row key composed of three integer columns, PK1, PK2, and PK3, and the 
> expression (PK1,  PK2) > (100, 25) AND PK3 = 5. The result would be a very 
> large number of key slots and each key slot represents a point in the three 
> dimensional space, including (100, 26, 5), (100, 27, 5), …, (100, 2147483647, 
> 5), (101, 1, 5), (101, 2, 5), … .
> A simple expression (like the one given above) with a relatively small number 
> of PK columns and a simple data type, e.g., integer, is sufficient to show 
> that finding key ranges for an arbitrary expression is an intractable 
> problem. Attempting to optimize the queries by enumerating the key ranges can 
> lead to excessive memory allocation and long computation times and the 
> optimization can defeat its purpose. 
> The current implementation attempts to enumerate all possible key ranges in 
> general. Because of this, the WHERE optimizer has caused out of memory 
> issues, and query timeouts due to high CPU usage. The very recent bug fixes 
> attempt to catch these cases and pre
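
A toy model of the KeyRange intersection described above, not Phoenix's actual org.apache.phoenix.query.KeyRange API; signed-byte ordering is used for brevity where HBase orders keys as unsigned bytes:
{code:java}
import java.util.Arrays;

// Toy model only: null marks an unbounded end, and ANDing two slots over the
// same key part is the intersection of their ranges.
final class SimpleKeyRange {
    final byte[] lower, upper;          // null = unbounded end
    final boolean lowerInc, upperInc;

    SimpleKeyRange(byte[] lower, boolean lowerInc, byte[] upper, boolean upperInc) {
        this.lower = lower; this.lowerInc = lowerInc;
        this.upper = upper; this.upperInc = upperInc;
    }

    /** Returns the intersection, or null for an empty ("degenerate") range. */
    static SimpleKeyRange intersect(SimpleKeyRange a, SimpleKeyRange b) {
        // The higher of the two lower bounds wins.
        byte[] lo = a.lower; boolean loInc = a.lowerInc;
        if (b.lower != null) {
            if (lo == null || Arrays.compare(b.lower, lo) > 0) {
                lo = b.lower; loInc = b.lowerInc;
            } else if (Arrays.compare(b.lower, lo) == 0) {
                loInc = loInc && b.lowerInc;    // both must include the point
            }
        }
        // The lower of the two upper bounds wins.
        byte[] hi = a.upper; boolean hiInc = a.upperInc;
        if (b.upper != null) {
            if (hi == null || Arrays.compare(b.upper, hi) < 0) {
                hi = b.upper; hiInc = b.upperInc;
            } else if (Arrays.compare(b.upper, hi) == 0) {
                hiInc = hiInc && b.upperInc;
            }
        }
        // Crossed bounds mean no key can satisfy both slots.
        if (lo != null && hi != null) {
            int c = Arrays.compare(lo, hi);
            if (c > 0 || (c == 0 && !(loInc && hiInc))) {
                return null;
            }
        }
        return new SimpleKeyRange(lo, loInc, hi, hiInc);
    }
}
{code}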

[jira] [Assigned] (PHOENIX-6785) Sequence Performance Optimizations

2022-09-13 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6785:


Assignee: Andrew Kyle Purtell

> Sequence Performance Optimizations
> --
>
> Key: PHOENIX-6785
> URL: https://issues.apache.org/jira/browse/PHOENIX-6785
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Geoffrey Jacoby
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 5.3.0
>
> Attachments: Sequence Architecture and Perf Improvements.pdf
>
>
> We've encountered scaling issues with Phoenix sequences in our production 
> environment, particularly with heavy usage of the same sequence causing 
> hotspotting on the physical SYSTEM.SEQUENCE table. 
> After some informal discussions on this with [~kadir], [~jisaac] and 
> [~tkhurana], I wrote up some thoughts on improvements that could be made to 
> sequences in a future Phoenix release. I'll attach it to this JIRA.
> As there are several proposed improvements, this will be an umbrella JIRA to 
> hold several subtasks. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6715) Update Omid to 1.1.0

2022-09-13 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6715:
-
Fix Version/s: 5.2.0

> Update Omid to 1.1.0
> 
>
> Key: PHOENIX-6715
> URL: https://issues.apache.org/jira/browse/PHOENIX-6715
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> We should release Omid 1.1.0, and update Phoenix to it for 5.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6749) Replace deprecated HBase 1.x API calls

2022-09-13 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6749:
-
Fix Version/s: 5.2.0

> Replace deprecated HBase 1.x API calls
> --
>
> Key: PHOENIX-6749
> URL: https://issues.apache.org/jira/browse/PHOENIX-6749
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors, core, queryserver
>Reporter: Istvan Toth
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0
>
>
> Now that we no longer care about Hbase 1.x compatibility, we should replace 
> the deprecated Hbase 1.x API calls with HBase 2 API calls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6788) Client-side Sequence Update Consolidation

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6788:


 Summary: Client-side Sequence Update Consolidation
 Key: PHOENIX-6788
 URL: https://issues.apache.org/jira/browse/PHOENIX-6788
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Geoffrey Jacoby
 Fix For: 5.3.0


This is similar to the proposed PHOENIX-6787, but on the client side. If two 
requests for the same sequence are enqueued at a client, the client can 
consolidate them into one larger request, and then satisfy both with the 
combined range of values returned. 

Because this optimization can change the order in which operations are assigned 
sequence ids, it should be configurable with a feature flag.

As with PHOENIX-6787, if the consolidation of requests would result in a 
validation error (like an overflow or underflow) that wouldn't happen to some 
requests if issued separately, we should not consolidate. If an overflow or 
underflow validation error comes from the server-side, we should retry without 
consolidating. 
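
A rough sketch of the consolidation idea, with all names hypothetical (not the Phoenix client's real sequence code) and the validation step deliberately elided per the caveat above:
{code:java}
// Hypothetical sketch: SequenceClient and nextValues() are illustrative names.
interface SequenceClient {
    // Advances the sequence by n and returns the first of the n reserved values.
    long nextValues(String sequenceName, int n);
}

final class SequenceConsolidationSketch {
    // Assumes incrementBy == 1 and that overflow/underflow validation has
    // already decided this batch is safe to consolidate.
    static long[] consolidateAndAssign(String sequenceName, int queuedRequests,
                                       SequenceClient client) {
        // One round trip advancing the sequence by N, instead of N round
        // trips advancing it by 1.
        long firstValue = client.nextValues(sequenceName, queuedRequests);
        long[] assigned = new long[queuedRequests];
        for (int i = 0; i < queuedRequests; i++) {
            assigned[i] = firstValue + i;   // hand values out in queue order
        }
        return assigned;
    }
}
{code}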



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6787) Server-side Sequence Update Consolidation

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6787:


 Summary: Server-side Sequence Update Consolidation
 Key: PHOENIX-6787
 URL: https://issues.apache.org/jira/browse/PHOENIX-6787
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Geoffrey Jacoby
 Fix For: 5.3.0


For secondary indexes, we have optimizations so that if multiple mutations are 
waiting on the same row lock, all subsequent mutations can re-use the previous 
mutation's final state and avoid an extra Get. 

We can apply a similar idea to Phoenix sequences. If there's a "hot" sequence 
with multiple requests queueing for a Sequence row lock, we can consolidate 
them down to one set of Get / Put operations, then satisfy them all. This 
change is transparent to the clients. 

Note that if this consolidation would cause the sequence update to fail when 
some of the requests would have succeeded otherwise, we should not consolidate. 
(An example is if a sequence has cycling disabled, and the first request would 
not overflow, but the first and second combined would. In this case we should 
let the first request go through unconsolidated, and fail the second request.) 
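
A toy sketch of the validation rule described above, assuming a positive increment and cycling disabled; SequenceState and Consolidator are illustrative names, not Phoenix classes:

{code:java}
class SequenceState {
    long current;
    long increment; // assumed positive in this sketch
    long maxValue;
    boolean cycle;
}

class Consolidator {
    /** Returns how many of n queued requests can be satisfied by one combined update. */
    static long consolidatableRequests(SequenceState s, long n) {
        if (s.cycle) {
            return n; // cycling sequences wrap instead of overflowing
        }
        // Only consolidate the requests that still fit under maxValue;
        // the rest must fail individually, as the description requires.
        long headroom = (s.maxValue - s.current) / s.increment;
        return Math.min(n, Math.max(headroom, 0));
    }
}
{code}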



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6786) SequenceRegionObserver should use batch mutation coproc hooks

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6786:


 Summary: SequenceRegionObserver should use batch mutation coproc 
hooks
 Key: PHOENIX-6786
 URL: https://issues.apache.org/jira/browse/PHOENIX-6786
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Geoffrey Jacoby


SequenceRegionObserver uses preIncrement but could use the standard batch 
mutation coproc hooks, similarly to how atomic upserts work after PHOENIX-6387. 
This will simplify the code and also make it easier to re-use code from 
secondary index generation in performance optimizations. 
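
A bare-bones sketch of what moving to the batch hooks could look like, using the HBase 2.x RegionObserver hook signatures but otherwise hypothetical logic; this is not the actual SequenceRegionObserver change:

{code:java}
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;

public class BatchSequenceObserver implements RegionCoprocessor, RegionObserver {
    @Override
    public Optional<RegionObserver> getRegionObserver() {
        return Optional.of(this);
    }

    // Unlike preIncrement, which sees one Increment at a time, this hook sees
    // the whole mini-batch, so work common to several sequence updates
    // (locking, reads) can be shared once per batch.
    @Override
    public void preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> ctx,
                               MiniBatchOperationInProgress<Mutation> miniBatch)
            throws IOException {
        for (int i = 0; i < miniBatch.size(); i++) {
            Mutation m = miniBatch.getOperation(i);
            // ... inspect and rewrite sequence mutations here ...
        }
    }
}
{code}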



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6785) Sequence Performance Optimizations

2022-09-08 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6785:
-
Attachment: Sequence Architecture and Perf Improvements.pdf

> Sequence Performance Optimizations
> --
>
> Key: PHOENIX-6785
> URL: https://issues.apache.org/jira/browse/PHOENIX-6785
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.3.0
>
> Attachments: Sequence Architecture and Perf Improvements.pdf
>
>
> We've encountered scaling issues with Phoenix sequences in our production 
> environment, particularly with heavy usage of the same sequence causing 
> hotspotting on the physical SYSTEM.SEQUENCE table. 
> After some informal discussions on this with [~kadir], [~jisaac] and 
> [~tkhurana], I wrote up some thoughts on improvements that could be made to 
> sequences in a future Phoenix release. I'll attach it to this JIRA.
> As there are several proposed improvements, this will be an umbrella JIRA to 
> hold several subtasks. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6785) Sequence Performance Optimizations

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6785:


 Summary: Sequence Performance Optimizations
 Key: PHOENIX-6785
 URL: https://issues.apache.org/jira/browse/PHOENIX-6785
 Project: Phoenix
  Issue Type: Improvement
Reporter: Geoffrey Jacoby
 Fix For: 5.3.0


We've encountered scaling issues with Phoenix sequences in our production 
environment, particularly with heavy usage of the same sequence causing 
hotspotting on the physical SYSTEM.SEQUENCE table. 

After some informal discussions on this with [~kadir], [~jisaac] and 
[~tkhurana], I wrote up some thoughts on improvements that could be made to 
sequences in a future Phoenix release. I'll attach it to this JIRA.

As there are several proposed improvements, this will be an umbrella JIRA to 
hold several subtasks. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6740) Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6740.
--
Resolution: Duplicate

This is incorporated in PHOENIX-6692. 

> Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile
> -
>
> Key: PHOENIX-6740
> URL: https://issues.apache.org/jira/browse/PHOENIX-6740
> Project: Phoenix
>  Issue Type: Task
>    Reporter: Geoffrey Jacoby
>        Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0
>
>
> HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and 
> we have a similar request from dependabot. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6752:
-
Priority: Critical  (was: Major)

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0
>
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time during the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) were used.
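
For illustration only, the shape of statement that exposed the slowdown; the table and column names are invented:

{code:java}
public class LargeOrQueryShape {
    public static void main(String[] args) {
        StringBuilder sql = new StringBuilder("SELECT * FROM T WHERE ");
        for (int i = 0; i < 50_000; i++) {
            if (i > 0) sql.append(" OR ");
            sql.append("(PK1 = ").append(i).append(" AND PK2 = ").append(i).append(')');
        }
        // Compiling a statement of this shape spends most of its time in the
        // WHERE clause phase when duplicate expression nodes pile up in the
        // extract nodes.
        System.out.println(sql.length() + " characters of WHERE clause");
    }
}
{code}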



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6752:
-
Fix Version/s: 5.2.0

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time during the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) were used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5215) Remove and replace HTrace

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5215:
-
Fix Version/s: 5.3.0
   (was: 5.2.0)
   (was: 5.1.3)

> Remove and replace HTrace
> -
>
> Key: PHOENIX-5215
> URL: https://issues.apache.org/jira/browse/PHOENIX-5215
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Andrew Kyle Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 5.3.0
>
>
> HTrace is dead.
> Hadoop is discussing a replacement of HTrace with OpenTracing, see 
> HADOOP-15566 
> HBase is having the same discussion on HBASE-22120



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as default handler pool threads are exhausted.

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6687.
--
Fix Version/s: (was: 4.17.0)
   Resolution: Fixed

Resolving as this was merged to master a couple of months ago. 

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as default handler pool  threads are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 5.1.1, 4.16.1, 5.2.0, 5.1.2
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: stacktraces.txt
>
>
> When the SYSTEM.CATALOG region server is restarted while the server is 
> experiencing heavy metadata call volume, the stack traces indicate that all 
> the default handler pool threads are waiting for the CQSI.init thread to 
> finish initializing.
> The CQSI.init thread itself cannot proceed since it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  turned getTable(..) into needing an additional server-to-server RPC call 
> when initializing a PhoenixConnection (CQSI.init) for the first time on the 
> JVM. 
> It is well-known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.
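
A self-contained toy (not Phoenix code) showing why such nested calls deadlock: every worker in a fixed pool blocks on a task that can only run on the same, already-full pool.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HandlerStarvationDemo {
    public static void main(String[] args) {
        ExecutorService handlers = Executors.newFixedThreadPool(2); // "RPC handlers"
        for (int i = 0; i < 2; i++) {
            handlers.submit(() -> {
                // Each handler issues a nested "RPC" that needs a free handler...
                Future<?> nested = handlers.submit(() -> { /* e.g. compatibility check */ });
                try {
                    nested.get(); // ...and blocks forever: no handler is free.
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
                return null;
            });
        }
        // The pool is now deadlocked; in the scenario above this shows up as
        // all default handler threads waiting on CQSI.init.
    }
}
{code}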



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Releasing the next Omid version

2022-08-24 Thread Geoffrey Jacoby
Thanks for volunteering to be RM for the next Omid release.

I agree we should have a release soon (I think we'll want Phoenix 5.2 to
use the new Omid). In addition to OMID-226, I suggest we also upgrade
Omid's internal Hadoop version to align with whatever default Hadoop we
choose for Phoenix 5.2, to keep Phoenix's transitive dependencies (a
little) simpler. Right now Omid appears to depend on Hadoop 2.10 which,
since Phoenix 5 is Hadoop 3-based, seems incorrect.

Geoffrey

On Wed, Aug 24, 2022 at 12:46 AM Istvan Toth  wrote:

> Hi!
>
> Most of the planned OMID changes for Phoenix 5.2 have landed.
> The only outstanding ticket that I'm aware of is OMID-226 which I also
> expect to land soon.
>
> Unless someone has more changes targeted for the next release, I propose
> that we release the next Omid version soon after OMID-226.
>
> I also propose bumping the version to 1.1.0, though because of the HBase
> 1.x compatibility removal, and maven artifact changes we could also argue
> for 2.0.0
>
> If there are no other volunteers, I also volunteer to be the RM for the
> release.
>
> regards
> Istvan
>


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6751:
-
Fix Version/s: 5.2.0

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0
>
>
> SQL queries using the IN operator on PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above
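
A sketch of the workaround the summary describes, using Phoenix's RANGE_SCAN hint to bypass point-key (skip scan) generation; the table, columns, and JDBC URL are illustrative:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RangeScanHintExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // The hint forces a range scan, so the planner never builds the
            // huge point-key list that a skip scan over ~50k RVCs requires.
            String sql = "SELECT /*+ RANGE_SCAN */ * FROM T "
                       + "WHERE (PK1, PK2) IN ((1,'a'), (2,'b') /* ... many more ... */)";
            try (ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) { /* consume rows */ }
            }
        }
    }
}
{code}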



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6751:
-
Priority: Critical  (was: Major)

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Priority: Critical
>
> SQL queries using the IN operator on PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6733.
--
Fix Version/s: 5.1.3
   Resolution: Fixed

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>        Reporter: Geoffrey Jacoby
>    Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0, 5.1.3
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reopened PHOENIX-6733:
--

Reopening since this can't be cherry-picked back to 5.1 trivially (because the 
functionality moved from CompatUtil to BaseTest in 5.2 after we dropped support 
for HBase 2.1 and 2.2).

I'll create a new PR for 5.1 so it can get another test run. 

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>        Reporter: Geoffrey Jacoby
>    Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6733.
--
Resolution: Fixed

Tests are passing now; merged to master. 

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>        Reporter: Geoffrey Jacoby
>    Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6733:


Assignee: Geoffrey Jacoby

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>        Reporter: Geoffrey Jacoby
>    Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2022-07-05 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5404:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 5.3.0
>
>
> Remove the {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetaDataEndpointImpl.createTable.
> While creating a table or view we need to ensure that there are no child views 
> that haven't been cleaned up by the DropChildView task yet. Move this check to 
> the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
> row exists).
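
A rough sketch of the proposed client-side check, assuming a plain HBase scan with a one-row limit; the helper and the key-prefix handling are hypothetical, not the committed change:

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ChildLinkCheck {
    static boolean hasChildViews(Connection conn, byte[] keyPrefix) throws Exception {
        // One linking row under the table's key prefix is enough to answer
        // the question, so cap the scan at a single row.
        Scan scan = new Scan().setRowPrefixFilter(keyPrefix).setLimit(1);
        try (Table childLink = conn.getTable(TableName.valueOf("SYSTEM.CHILD_LINK"));
             ResultScanner scanner = childLink.getScanner(scan)) {
            Result first = scanner.next();
            return first != null && !first.isEmpty();
        }
    }
}
{code}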



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[ANNOUNCE] Gokcen Iskender joins Phoenix PMC

2022-07-05 Thread Geoffrey Jacoby
On behalf of the Apache Phoenix PMC, I'm pleased to announce that Gokcen
Iskender
has accepted our invitation to join the PMC.

Please join me in congratulating Gokcen!

Thanks,

Geoffrey Jacoby


[jira] [Assigned] (PHOENIX-5686) MetaDataUtil#isLocalIndex returns incorrect results

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-5686:


Assignee: Geoffrey Jacoby

> MetaDataUtil#isLocalIndex returns incorrect results
> ---
>
> Key: PHOENIX-5686
> URL: https://issues.apache.org/jira/browse/PHOENIX-5686
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Swaroopa Kadam
>    Assignee: Geoffrey Jacoby
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> The isLocalIndex function in MetaDataUtil uses 
> "_LOCAL_IDX_" to check if the index is a local index. It would be good to 
> modify the method to use the correct logic (getting rid of the old, unused 
> code) and to call it wherever needed. 
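
A sketch of the suggested direction, deciding locality from Phoenix's PTable metadata rather than the legacy name prefix; the wrapper class is illustrative:

{code:java}
import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.schema.PTable.IndexType;

public class LocalIndexCheck {
    // Checks the table's declared index type instead of matching the
    // "_LOCAL_IDX_" string in the physical name.
    static boolean isLocalIndex(PTable table) {
        return table.getIndexType() == IndexType.LOCAL;
    }
}
{code}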



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5066:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> The TimeZone is incorrectly used during writing or reading data
> ---
>
> Key: PHOENIX-5066
> URL: https://issues.apache.org/jira/browse/PHOENIX-5066
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai Zhang
>Assignee: Istvan Toth
>Priority: Critical
> Fix For: 5.3.0
>
> Attachments: DateTest.java, PHOENIX-5066.4x.v1.patch, 
> PHOENIX-5066.4x.v2.patch, PHOENIX-5066.4x.v3.patch, 
> PHOENIX-5066.master.v1.patch, PHOENIX-5066.master.v2.patch, 
> PHOENIX-5066.master.v3.patch, PHOENIX-5066.master.v4.patch, 
> PHOENIX-5066.master.v5.patch, PHOENIX-5066.master.v6.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have two ways to write data when using the JDBC API.
> #1. Use the _executeUpdate_ method to execute a string that is an UPSERT SQL statement.
> #2. Use the _prepareStatement_ method to set some objects and execute.
> The _string_ data needs to be converted to a new object based on the schema 
> information of the tables. We use date formatters to convert string data to 
> objects for Date/Time/Timestamp types when writing data, and the same 
> formatters are used when reading data as well.
>  
> *Uses default timezone test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47') 
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
> {code}
> Reading the table by the getString methods 
> {code:java}
> 1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 
> 07:45:07.660
> {code}
>  *Uses GMT+8 test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
> 2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
> 3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
> Reading the table by the getString methods
> {code:java}
>  1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 
> 23:40:47.000
> 2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 
> 15:40:47.000
> 3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 
> 15:40:47.106
> {code}
>  
> We have a historical problem: in #1 we parse the string to 
> Date/Time/Timestamp objects using the local time zone, which means the actual 
> data is changed when stored in the HBase table.
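
One mitigation sketch, assuming the phoenix.query.dateFormatTimeZone client property applies here as documented; the JDBC URL is illustrative:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeZoneExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Parse date/time string literals in GMT instead of the JVM default
        // zone, so path #1 (literals) and path #2 (setDate/setTime) agree.
        props.setProperty("phoenix.query.dateFormatTimeZone", "GMT");
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // String literals in UPSERT/TO_DATE are now interpreted in GMT.
        }
    }
}
{code}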



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5283) Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5283:
-
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD 
> 
>
> Key: PHOENIX-5283
> URL: https://issues.apache.org/jira/browse/PHOENIX-5283
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-5283.4.x-hbase-1.3.v1.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Include the following support in the grammar: 
> ALTER TABLE ADD CASCADE <(comma-separated list of indexes) | ALL> IF NOT 
> EXISTS  
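
A hedged usage sketch of the proposed grammar (the final committed syntax may differ in ordering from the sketch above); table, column, and index names are invented:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CascadeIndexExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Propagate the new column to every index on the table...
            stmt.execute("ALTER TABLE MY_TABLE ADD IF NOT EXISTS NEW_COL VARCHAR CASCADE INDEX ALL");
            // ...or only to a comma-separated subset of indexes.
            stmt.execute("ALTER TABLE MY_TABLE ADD NEW_COL2 VARCHAR CASCADE INDEX IDX1, IDX2");
        }
    }
}
{code}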



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-4846) WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort order of pk columns being filtered on changes

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-4846:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort 
> order of pk columns being filtered on changes
> ---
>
> Key: PHOENIX-4846
> URL: https://issues.apache.org/jira/browse/PHOENIX-4846
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Thomas D'Silva
>Priority: Critical
> Fix For: 5.2.1
>
> Attachments: PHOENIX-4846-wip.patch
>
>
> {{ExpressionComparabilityWrapper}} should set the sort order based on 
> {{childPart.getColumn()}} or else the attached test throws an 
> IllegalArgumentException
> {code}
> java.lang.IllegalArgumentException: 4 > 3
> at java.util.Arrays.copyOfRange(Arrays.java:3519)
> at 
> org.apache.hadoop.hbase.io.ImmutableBytesWritable.copyBytes(ImmutableBytesWritable.java:272)
> at 
> org.apache.phoenix.compile.WhereOptimizer.getTrailingRange(WhereOptimizer.java:329)
> at 
> org.apache.phoenix.compile.WhereOptimizer.clipRight(WhereOptimizer.java:350)
> at 
> org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:237)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:157)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:108)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:556)
> {code}
> Also in {{pushKeyExpressionsToScan()}} we cannot extract pk column nodes from 
> the where clause if the sort order of the columns changes. 
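
A hypothetical repro shape, not taken from the attached test: a composite PK whose trailing column is DESC, plus an RVC comparison, the combination the description says trips getTrailingRange():

{code:java}
public class SortOrderClipRepro {
    // Trailing PK column declared DESC, so its sort order differs from PK1's.
    static final String DDL =
        "CREATE TABLE T (PK1 VARCHAR NOT NULL, PK2 VARCHAR NOT NULL, V INTEGER "
      + "CONSTRAINT PK PRIMARY KEY (PK1, PK2 DESC))";
    // An RVC filter across the ASC/DESC boundary; without the wrapper
    // propagating childPart.getColumn()'s sort order, clipping the key range
    // can throw IllegalArgumentException: 4 > 3 as in the stack trace above.
    static final String QUERY =
        "SELECT * FROM T WHERE (PK1, PK2) > ('a', 'b')";
}
{code}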



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5258) Add support to parse header from the input CSV file as input columns for CsvBulkLoadTool

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5258:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Add support to parse header from the input CSV file as input columns for 
> CsvBulkLoadTool
> 
>
> Key: PHOENIX-5258
> URL: https://issues.apache.org/jira/browse/PHOENIX-5258
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Prashant Vithani
>Assignee: Prashant Vithani
>Priority: Minor
> Fix For: 5.3.0
>
> Attachments: PHOENIX-5258-4.x-HBase-1.4.001.patch, 
> PHOENIX-5258-4.x-HBase-1.4.patch, PHOENIX-5258-master.001.patch, 
> PHOENIX-5258-master.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, CsvBulkLoadTool does not support reading a header from the input 
> CSV and expects the content of the CSV to match the table schema. Support for 
> a header can be added to dynamically map the input columns to the schema.
> The proposed solution is to introduce another option for the tool, 
> `--parse-header`. If this option is passed, the input columns list is 
> constructed by reading the first line of the input CSV file (see the usage 
> sketch after this list).
>  * If there is only one file, read the header from the first line and 
> generate the `ColumnInfo` list.
>  * If there are multiple files, read the header from all the files, and throw 
> an error if the headers across files do not match.
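
A possible invocation of the proposed option, assuming CsvBulkLoadTool's existing --table/--input flags; --parse-header is the option proposed by this ticket, not yet a released flag:

{code:bash}
# Load a CSV whose first line names the target columns (proposed behavior).
hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table MY_TABLE --input /data/my_table.csv --parse-header
{code}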



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5648) Improve IndexScrutinyTool's performance by moving comparison logic to server side

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5648.
--
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)
   Resolution: Won't Fix

Looks like the consensus earlier was that this wouldn't help much. If anyone 
wants to take this back up please feel free to reopen for a future post-5.2 
release. (Note though that the IndexTool is usually much more efficient than 
IndexScrutinyTool) 

> Improve IndexScrutinyTool's performance by moving comparison logic to server 
> side
> -
>
> Key: PHOENIX-5648
> URL: https://issues.apache.org/jira/browse/PHOENIX-5648
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
>
> If IndexScrutinyTool runs on a table with a billion rows, it takes a long 
> time. 
> One of the ways to improve the tool is to move the comparison to the 
> server side. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5750) Upsert on immutable table fails with AccessDeniedException

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5750:
-
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Upsert on immutable table fails with AccessDeniedException
> --
>
> Key: PHOENIX-5750
> URL: https://issues.apache.org/jira/browse/PHOENIX-5750
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-5750.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5750.4.x-HBase-1.3.v2.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> // code placeholder
> In TableDDLPermissionsIT
> @Test
> public void testUpsertIntoImmutableTable() throws Throwable {
> startNewMiniCluster();
> final String schema = "TEST_INDEX_VIEW";
> final String tableName = "TABLE_DDL_PERMISSION_IT";
> final String phoenixTableName = schema + "." + tableName;
> grantSystemTableAccess();
> try {
> superUser1.runAs(new PrivilegedExceptionAction() {
> @Override
> public Void run() throws Exception {
> try {
> verifyAllowed(createSchema(schema), superUser1);
> verifyAllowed(onlyCreateTable(phoenixTableName), 
> superUser1);
> } catch (Throwable e) {
> if (e instanceof Exception) {
> throw (Exception)e;
> } else {
> throw new Exception(e);
> }
> }
> return null;
> }
> });
> if (isNamespaceMapped) {
> grantPermissions(unprivilegedUser.getShortName(), schema, 
> Action.WRITE, Action.READ,Action.EXEC);
> }
> // we should be able to read the data from another index as well to 
> which we have not given any access to
> // this user
> verifyAllowed(upsertRowsIntoTable(phoenixTableName), 
> unprivilegedUser);
> } finally {
> revokeAll();
> }
> }
> in BasePermissionsIT:
> AccessTestAction onlyCreateTable(final String tableName) throws SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection(); Statement stmt = 
> conn.createStatement()) {
> assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + tableName
> + "(pk INTEGER not null primary key, data VARCHAR, 
> val integer)"));
> }
> return null;
> }
> };
> }
> AccessTestAction upsertRowsIntoTable(final String tableName) throws 
> SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection()) {
> try (PreparedStatement pstmt = conn.prepareStatement(
> "UPSERT INTO " + tableName + " values(?, ?, ?)")) {
> for (int i = 0; i < NUM_RECORDS; i++) {
> pstmt.setInt(1, i);
> pstmt.setString(2, Integer.toString(i));
> pstmt.setInt(3, i);
> assertEquals(1, pstmt.executeUpdate());
> }
> }
> conn.commit();
> }
> return null;
> }
> };
> }{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6740) Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6740:
-
Summary: Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 
profile  (was: Upgrade minimum supported Hadoop 3 version to 3.2.3)

> Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile
> -
>
> Key: PHOENIX-6740
> URL: https://issues.apache.org/jira/browse/PHOENIX-6740
> Project: Phoenix
>  Issue Type: Task
>    Reporter: Geoffrey Jacoby
>        Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0
>
>
> HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and 
> we have a similar request from dependabot. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6725) ConcurrentMutationException when adding column to table/view

2022-06-22 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6725:
-
Fix Version/s: 5.1.3

> ConcurrentMutationException when adding column to table/view
> 
>
> Key: PHOENIX-6725
> URL: https://issues.apache.org/jira/browse/PHOENIX-6725
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 5.1.1, 4.16.0, 4.16.1, 5.1.2
>Reporter: Tanuj Khurana
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> I have a single-threaded workflow, but occasionally I hit a 
> ConcurrentMutationException error when adding a column to a table/view:
> Stack trace:
> {code:java}
>  2022-05-04 16:41:24,598 WARN  [main] 
> client.ConnectionManager$HConnectionImplementation: Checking master 
> connectioncom.google.protobuf.ServiceException: java.io.IOException: Call to 
> tkhurana-ltm.internal.salesforce.com:16000 failed on local exception: 
> java.io.IOException: Operation timed out
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:588)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1551)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2274)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1823)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:141)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4552)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:564)
> at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:585)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:531)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.separateAndValidateProperties(ConnectionQueryServicesImpl.java:2769)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:2298)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4146)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> Caused by: java.io.IOException: Call to x failed on local exception: 
> java.io.IOException: Operation timed out 
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:394)
>
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411)
> 
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)   
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)   
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.closeConn(BlockingRpcConnection.java:685)
> 
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:651)
>  
> at 
> org.apache.hadoop.hbase.ipc.BlockingRp

[jira] [Created] (PHOENIX-6740) Upgrade minimum supported Hadoop 3 version to 3.2.3

2022-06-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6740:


 Summary: Upgrade minimum supported Hadoop 3 version to 3.2.3
 Key: PHOENIX-6740
 URL: https://issues.apache.org/jira/browse/PHOENIX-6740
 Project: Phoenix
  Issue Type: Task
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby
 Fix For: 5.2.0


HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and we 
have a similar request from dependabot. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6530) Fix tenantId generation for Sequential and Uniform load generators

2022-06-22 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6530.
--
Fix Version/s: 5.2.0
   (was: 4.17.0)
   Resolution: Fixed

Thanks for the patch, [~thrylokya24]

> Fix tenantId generation for Sequential and Uniform load generators
> --
>
> Key: PHOENIX-6530
> URL: https://issues.apache.org/jira/browse/PHOENIX-6530
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.17.0, 5.1.2
>Reporter: Jacob Isaac
>Assignee: thrylokya
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> While running the perf workloads for 4.16, we found that tenantId generation 
> across the various generators does not match.
> As a result, the read queries fail when the data was written using a 
> different generator.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6725) ConcurrentMutationException when adding column to table/view

2022-06-21 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6725:
-
Fix Version/s: 5.2.0

> ConcurrentMutationException when adding column to table/view
> 
>
> Key: PHOENIX-6725
> URL: https://issues.apache.org/jira/browse/PHOENIX-6725
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 5.1.1, 4.16.0, 4.16.1, 5.1.2
>Reporter: Tanuj Khurana
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> I have a single-threaded workflow, but occasionally I hit a 
> ConcurrentMutationException error when adding a column to a table/view:
> Stack trace:
> {code:java}
>  2022-05-04 16:41:24,598 WARN  [main] 
> client.ConnectionManager$HConnectionImplementation: Checking master 
> connectioncom.google.protobuf.ServiceException: java.io.IOException: Call to 
> tkhurana-ltm.internal.salesforce.com:16000 failed on local exception: 
> java.io.IOException: Operation timed out
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:588)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1551)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2274)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1823)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:141)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4552)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:564)
> at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:585)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:531)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.separateAndValidateProperties(ConnectionQueryServicesImpl.java:2769)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:2298)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4146)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> Caused by: java.io.IOException: Call to x failed on local exception: 
> java.io.IOException: Operation timed out 
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:394)
>
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411)
> 
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)   
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)   
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.closeConn(BlockingRpcConnection.java:685)
> 
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:651)
>  
> at 
> org.apache.hadoop.hbase.ipc.BlockingRp

[jira] [Resolved] (PHOENIX-6725) ConcurrentMutationException when adding column to table/view

2022-06-21 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6725.
--
Resolution: Fixed

Merged to master. Thanks [~lokiore]

> ConcurrentMutationException when adding column to table/view
> 
>
> Key: PHOENIX-6725
> URL: https://issues.apache.org/jira/browse/PHOENIX-6725
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 5.1.1, 4.16.0, 4.16.1, 5.1.2
>Reporter: Tanuj Khurana
>Assignee: Lokesh Khurana
>Priority: Major
>
> I have a single-threaded workflow, but occasionally I hit a 
> ConcurrentMutationException error when adding a column to a table/view:
> Stack trace:
> {code:java}
>  2022-05-04 16:41:24,598 WARN  [main] 
> client.ConnectionManager$HConnectionImplementation: Checking master 
> connectioncom.google.protobuf.ServiceException: java.io.IOException: Call to 
> tkhurana-ltm.internal.salesforce.com:16000 failed on local exception: 
> java.io.IOException: Operation timed out
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:588)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1551)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2274)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1823)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:141)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4552)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:564)
> at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:585)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:531)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.separateAndValidateProperties(ConnectionQueryServicesImpl.java:2769)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:2298)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4146)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> Caused by: java.io.IOException: Call to x failed on local exception: 
> java.io.IOException: Operation timed out 
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:394)
>
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411)
> 
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)   
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)   
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.closeConn(BlockingRpcConnection.java:685)
> 
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:651)
>  
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConne
