Hi!

What should we do with phoenix-flume?
There has been so little (zero) activity on it that I had completely
forgotten about its existence.
However, Flume itself is still maintained, and it doesn't really seem to
cause any problems either.
On the other hand, I have no idea whether it works on a production system.
Should we keep it, or should we drop it?
I am leaning towards dropping it, as without an active maintainer (or at
least a known user)
we don't know if it even works properly.
Just as with Kafka, we could add it back if someone volunteers to maintain
it.

Istvan

On Wed, Apr 19, 2023 at 9:57 PM Geoffrey Jacoby <gjac...@apache.org> wrote:

> +1.
>
> At $dayjob we have a legacy feature that uses phoenix-pig, but I believe
> that usage is scheduled for deprecation soon and we can maintain it in our
> internal fork until then. Pig hasn't had a release in 6 years and, last I
> heard, doesn't support Hadoop 3; there's no reason to keep supporting it.
>
> Geoffrey
>
>
>
> On Tue, Apr 18, 2023 at 1:37 AM Istvan Toth <st...@apache.org> wrote:
>
> > Hi!
> >
> > We've never had a connectors release because of multiple unsolved
> > problems.
> > Some, like Java 11/17 support, are relatively straightforward and don't
> > really need discussion, but some are more impactful.
> >
> > I propose the following plan, which should give us a chance to have a
> > release in the foreseeable future:
> > Disclosure: at $dayjob, we only support the Spark and HBase connectors,
> > and those are the ones we can dedicate resources to.
> >
> > *- Drop the connectors for Phoenix 4.x*
> > 4.x is EOL, and it complicates the project structure, build time, etc.
> > We've never had a release for 4.x either.
> >
> > *- Drop the Kafka connector*
> > It has CVEs, and only works with an ancient Kafka version.
> > I have also seen zero developer or user interest in it.
> > If someone volunteers to update and maintain it, we can always add it
> > back later.
> >
> >
> > *- Drop the Pig connector*
> > This doesn't have critical problems, but I have seen zero interest in it.
> > The shaded artifact doesn't use maven-shade-plugin, and I suspect that it
> > would have classpath conflict issues.
> > Fixing up the shading to be on par with the rest of the connectors would
> > be a non-trivial amount of work.
> > If someone volunteers to update and maintain it, we can always add it
> > back later.
> >
> > *- Re-shade the Hive 3 connector for hbase-shaded*
> > HBase in Hive 3 is very broken; we already need to replace the shipped
> > HBase jars anyway.
> > To avoid conflict with the included HBase jars, we want to avoid
> > duplicating them.
> > The solution is to omit the HBase and Hadoop JARs from the shaded
> > connector, and change the relocations
> > to handle the binary incompatibilities between the shaded and non-shaded
> > HBase API.
> > We already do this for Spark, and this is also how the Hive 4 connector
> > will have to work.
> > (This already works well at $dayjob.)
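> >
> > A rough sketch of what such a shade setup could look like (the excludes
> > and the relocation below are illustrative assumptions, not the actual
> > phoenix-connectors build config):
> >
> >   <plugin>
> >     <groupId>org.apache.maven.plugins</groupId>
> >     <artifactId>maven-shade-plugin</artifactId>
> >     <configuration>
> >       <artifactSet>
> >         <excludes>
> >           <!-- keep HBase and Hadoop out of the shaded connector jar,
> >                so we don't duplicate the jars Hive already ships -->
> >           <exclude>org.apache.hbase:*</exclude>
> >           <exclude>org.apache.hadoop:*</exclude>
> >         </excludes>
> >       </artifactSet>
> >       <relocations>
> >         <!-- hypothetical relocation: point third-party packages at the
> >              locations used by the hbase-shaded artifacts on the runtime
> >              classpath -->
> >         <relocation>
> >           <pattern>com.google.protobuf</pattern>
> >           <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
> >         </relocation>
> >       </relocations>
> >     </configuration>
> >   </plugin>
> >
> > The exact excludes and relocations would have to be worked out against
> > the real Hive 3 / hbase-shaded classpath.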
> >
> > This would leave us with only three connectors, but those would at least
> > be released, and easier to support:
> > Spark 2
> > Spark 3
> > Hive 3
> >
> > Please share your thoughts!
> >
> > Istvan
> >
>
