From what I see, this does not put those classes onto the cluster at all;
it looks like they are serialized and sent along with each scan, so that
issue does not arise.
What I'm thinking about is how to ensure that this cannot be used as a way
to bypass the security in the cluster.
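For reference, the mechanism behind that experiment can be sketched with plain JDK class loading: the client serializes the compiled filter's bytecode, and the receiving side defines it in a throwaway ClassLoader, so nothing is installed on the cluster and two versions of the same filter can coexist. This is only an illustration, not HBase's API; the class and method names below are invented, and in real HBase the filter would extend FilterBase and the bytes would travel inside the Scan RPC.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class DynamicLoadSketch {

    // Hypothetical stand-in for a custom Filter; in HBase this would
    // extend FilterBase instead.
    public static class MyFilter {
        public boolean accept(String row) {
            return row.startsWith("user-");
        }
    }

    // Defines a class directly from bytecode bytes, mimicking how the
    // experiment ships the Filter's code along with each Scan.
    static class BytesClassLoader extends ClassLoader {
        Class<?> define(String name, byte[] code) {
            return defineClass(name, code, 0, code.length);
        }
    }

    // "Client side": read the compiled .class bytes, as if serializing
    // them into the scan request.
    static byte[] readClassBytes(Class<?> c) throws IOException {
        String path = c.getName().replace('.', '/') + ".class";
        try (InputStream in = c.getClassLoader().getResourceAsStream(path);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    // "Server side": define the class in an isolated loader and invoke it
    // reflectively; each scan could get its own loader, so old and new
    // filter versions never clash.
    public static boolean loadAndRun(String row) throws Exception {
        byte[] code = readClassBytes(MyFilter.class);
        Class<?> loaded = new BytesClassLoader()
                .define(MyFilter.class.getName(), code);
        Object filter = loaded.getDeclaredConstructor().newInstance();
        return (boolean) loaded.getMethod("accept", String.class)
                .invoke(filter, row);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadAndRun("user-42"));   // prints "true"
        System.out.println(loadAndRun("other-1"));   // prints "false"
    }
}
```

The security question above is exactly the weak point of this approach: defineClass loads whatever bytes arrive, so a real implementation would have to verify or sandbox the uploaded code before running it on a region server.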

Niels
On Mar 7, 2014 1:41 AM, "Ted Yu" <[email protected]> wrote:

> Interesting blog.
>
> I wonder how subsequent work addresses the following:
>
> bq. Updating the filter.jar in the Hadoop FS while a table scan is
> happening can have undesired results if the updated filters are not
> backward compatible.
>
>
> On Thu, Mar 6, 2014 at 12:54 PM, Niels Basjes <[email protected]> wrote:
>
> > Hi,
> >
> > In the current HBase versions a Filter needs to be deployed by putting a
> > jar onto all region servers (and, depending on the HBase version,
> > restarting the region servers).
> >
> > I'm in a multi-tenant cluster environment where we may run into the need
> > to have both the old and the new version of a Filter available at the
> > same time. Also, trying out a new Filter implementation (to see whether
> > it performs better) would be a lot easier if it were possible to use a
> > custom Filter without having to put it onto all region servers.
> >
> > So after some Googling I found this interesting experiment for
> > dynamically uploading the Filter code with the Scan:
> >
> >
> > http://tech.flurry.com/2012/12/06/exploring-dynamic-loading-of-custom-filters-i/
> >
> >
> > My question: Is such a feature planned for the mainline HBase?
> >
> > --
> > Best regards
> >
> > Niels Basjes
> >
>
