The 0.23 (and Hive 0.12) code base in Spark works well from our perspective,
so I'm not sure what you are referring to. As I said, I'm happy to maintain my
own plugins, but as it stands there is no sane way to do so in Spark because
there is no clear separation or developer API for these.
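
To make that concrete, here is a rough sketch of the shape of interface I
mean. All of these trait and method names are hypothetical; nothing like this
exists in Spark today. The point is just that Spark core would code against
narrow interfaces, and the Hadoop 0.23 (or 2.x, or anything else) bindings
would live in a separate, independently maintained module:

    import java.io.{InputStream, OutputStream}

    // Hypothetical sketch only. Storage side: the minimal surface
    // Spark would need from a distributed filesystem.
    trait DfsPlugin {
      def open(path: String): InputStream
      def create(path: String): OutputStream
      def listStatus(path: String): Seq[FileEntry]
      def delete(path: String, recursive: Boolean): Boolean
    }

    case class FileEntry(path: String, length: Long, isDirectory: Boolean)

    // Hypothetical sketch only. Scheduling side: the minimal surface
    // Spark would need from a cluster manager.
    trait SchedulerPlugin {
      def requestExecutors(count: Int): Unit
      def killExecutor(executorId: String): Unit
      def defaultParallelism: Int
    }

With something along those lines, we could keep our 0.23 bindings in our own
module and Spark core wouldn't need to carry them at all.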

cheers,
Tom

On Fri, Jun 12, 2015 at 11:21 AM, Sean Owen <so...@cloudera.com> wrote:

> I don't imagine that can be guaranteed to be supported anyway... the
> 0.x branch has never necessarily worked with Spark, even if it might
> happen to work. Is this really something you would veto for everyone
> because of your deployment?
>
> On Fri, Jun 12, 2015 at 7:18 PM, Thomas Dudziak <tom...@gmail.com> wrote:
> > -1 to this, we use it with an old Hadoop version (well, a fork of an old
> > version, 0.23). That being said, if there were a nice developer API that
> > separated Spark from Hadoop (or rather, two APIs, one for scheduling and
> > one for HDFS), then we'd be happy to maintain our own plugins for those.
> >
> > cheers,
> > Tom
> >
>
