Thank you, Nihal.

IMO HBase has already forked most of the web server code from Hadoop, so I
prefer option 1.

I was about to suggest adding the filters to hbase-http, but at the moment
REST and Thrift don't depend on that module.

Istvan



On Mon, Sep 1, 2025 at 4:53 PM Nihal Jain <nihalj...@apache.org> wrote:

> Apologies for the formatting glitch in the previous email. It seems some
> mail clients automatically collapsed the section for Option 2. Please be
> sure to click on it to expand the details.
>
> On 2025/09/01 14:37:47 Nihal Jain wrote:
> > Hi team,
> >
> > I'm initiating this discussion regarding the most effective approach for
> > HBASE-29542: migrating to Jetty 12 with the Jakarta EE10 namespace (Phase
> > 2). We've already completed Phase 1 of this migration (i.e. Jetty 9 to
> > Jetty 12 with EE8) under HBASE-29224.
> >
> > Our primary challenge for this next phase is our reliance on Hadoop's
> > AuthenticationFilter, which is built with the javax.servlet namespace. To
> > bridge this compatibility gap, I've identified three main options and
> > created corresponding JIRA tickets:
> >
> > Option 1: Decouple via Source Fork (HBASE-29557)
> > This involves creating a new hbase-auth-filters module and directly copying
> > the necessary Hadoop authentication source code (a rough sketch of the
> > module's dependencies follows the cons below).
> >
> > JIRA: https://issues.apache.org/jira/browse/HBASE-29557
> >
> > Pros:
> > * Complete decoupling from Hadoop's release timeline, allowing immediate
> > progress with full control.
> > * The underlying filter code is relatively stable and historically has not
> > required frequent changes, which may lower the maintenance risk.
> > * Eventually, we could refactor and move all existing filter-related code
> > within HBase to this new module, improving modularity.
> >
> > Cons: We would still assume the maintenance burden of a source fork,
> > including monitoring and backporting any security fixes from upstream
> > Hadoop.
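> >
> > For illustration only, here is a rough sketch of what the new module's
> > dependencies could look like; the module name comes from this proposal,
> > while the coordinates and scopes shown are placeholders, not a final
> > layout:
> >
> >   <!-- hbase-auth-filters/pom.xml (hypothetical) -->
> >   <dependencies>
> >     <!-- compile the copied filters against the Jakarta Servlet API
> >          instead of javax.servlet -->
> >     <dependency>
> >       <groupId>jakarta.servlet</groupId>
> >       <artifactId>jakarta.servlet-api</artifactId>
> >       <scope>provided</scope>
> >     </dependency>
> >     <!-- note: no dependency on org.apache.hadoop:hadoop-auth; its
> >          AuthenticationFilter sources are copied into this module and
> >          ported to the jakarta.servlet namespace -->
> >   </dependencies>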
> >
> > ---
> >
> > Option 2: Shade and Transform (HBASE-29563)
> > This approach will use the Maven Shade Plugin to create a private,
> > transformed copy of the required Hadoop classes during the build process:
> > the classes are isolated, relocated to an org.apache.hbase.shaded
> > namespace, and their bytecode references are rewritten from javax to
> > jakarta (a rough sketch follows the cons below).
> >
> > JIRA: https://issues.apache.org/jira/browse/HBASE-29563
> >
> > Pros:
> > * Avoids a source fork, simplifying maintenance.
> > * Upstream fixes can be easily incorporated by updating the Hadoop
> > dependency, and relocation prevents classpath conflicts.
> >
> > Cons:
> > * Introduces some complexity to our build configuration, though this is a
> > well-established pattern.
> > * Upstream API changes in the Hadoop library could cause build or runtime
> > failures.
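> >
> > For illustration only, a rough sketch of what such a shade configuration
> > could look like; the artifact selection and relocation patterns are
> > placeholders meant to show the shape of the approach, not a final config:
> >
> >   <plugin>
> >     <groupId>org.apache.maven.plugins</groupId>
> >     <artifactId>maven-shade-plugin</artifactId>
> >     <executions>
> >       <execution>
> >         <phase>package</phase>
> >         <goals>
> >           <goal>shade</goal>
> >         </goals>
> >         <configuration>
> >           <artifactSet>
> >             <!-- only the Hadoop artifact that ships AuthenticationFilter -->
> >             <includes>
> >               <include>org.apache.hadoop:hadoop-auth</include>
> >             </includes>
> >           </artifactSet>
> >           <relocations>
> >             <!-- rewrite javax.servlet references to the jakarta namespace -->
> >             <relocation>
> >               <pattern>javax.servlet</pattern>
> >               <shadedPattern>jakarta.servlet</shadedPattern>
> >             </relocation>
> >             <!-- hide the transformed Hadoop classes behind our shaded prefix -->
> >             <relocation>
> >               <pattern>org.apache.hadoop.security.authentication</pattern>
> >               <shadedPattern>org.apache.hbase.shaded.org.apache.hadoop.security.authentication</shadedPattern>
> >             </relocation>
> >           </relocations>
> >         </configuration>
> >       </execution>
> >     </executions>
> >   </plugin>
> >
> > Since relocation rewrites the package references in the shaded copy's
> > bytecode, this is what would provide the javax to jakarta transformation
> > without us touching the sources.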
> >
> > ---
> >
> > Option 3: Wait for Hadoop (HADOOP-19395)
> >
> > This option entails pausing our migration efforts until the Hadoop
> > community completes their migration to the jakarta namespace.
> >
> > JIRA: https://issues.apache.org/jira/browse/HADOOP-19395
> >
> > Pros: Requires no immediate work from our end.
> >
> > Cons:
> > * Leaves us indefinitely blocked by another project's schedule and
> > priorities.
> > * Once Hadoop migrates, we would likely face the added complexity of
> > supporting both pre- and post-Jakarta versions of Hadoop.
> >
> > ---
> >
> > I would like to gather the team's perspective on these options to determine
> > the best path forward. Please go over the JIRAs for more details and share
> > your thoughts.
> >
> > Thanks,
> > Nihal
> >
>


-- 
*István Tóth* | Sr. Staff Software Engineer
*Email*: st...@cloudera.com
cloudera.com <https://www.cloudera.com>