No, nothing in particular; I was just checking whether there was a way. Using the
Phoenix Spark plugin seems to be the standard approach.
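
For reference, a minimal sketch of what that looks like with the phoenix-spark
connector (assuming Spark 2.x and the phoenix-spark jar on the classpath; the
table name and ZooKeeper URL below are placeholders):

import org.apache.spark.sql.SparkSession

object PhoenixSparkSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("phoenix-spark-sketch")
      .getOrCreate()

    // Reading through the plugin goes via Phoenix/HBase, so edits that are
    // still only in the memstore/WAL are not missed.
    // "MY_TABLE" and the ZooKeeper quorum are placeholders.
    val df = spark.read
      .format("org.apache.phoenix.spark")
      .option("table", "MY_TABLE")
      .option("zkUrl", "zookeeper-host:2181")
      .load()

    // The resulting DataFrame can feed Spark MLlib or any other analytics job.
    df.printSchema()
    df.show(10)

    spark.stop()
  }
}

It would need to be submitted with the phoenix-client (or phoenix-spark) jar on
the driver and executor classpath.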

Thank you so much for your input.

On Tue, May 23, 2017 at 4:01 PM, Jonathan Leech <jonat...@gmail.com> wrote:

> There is a Phoenix / MapReduce integration. If you bypass HBase, you will
> need to take care not to miss edits that are only in memory and in the WAL.
>
> If you bypass both Phoenix and HBase, you will have to write code that can
> interpret both formats. Possible, yes, but not a good use of your time.
>
> Is there some machine learning algorithm you want to use that isn't
> included in Spark, or that you wouldn't be able to integrate with either
> Spark or a MapReduce job?
>
> On May 23, 2017, at 1:39 PM, Ash N <742...@gmail.com> wrote:
>
> Thanks, Jonathan.
>
> But I am looking to access the data directly from HDFS, not go through
> Phoenix/HBase for access.
>
> Is this possible?
>
>
> Best regards
>
> On May 23, 2017 3:35 PM, "Jonathan Leech" <jonat...@gmail.com> wrote:
>
> I think you would use Spark for that, via the Phoenix Spark plugin.
>
> > On May 23, 2017, at 12:24 PM, Ash N <742...@gmail.com> wrote:
> >
> > Hi All,
> >
> > This may be a silly question. We are storing data through Apache Phoenix.
> > Is there anything special we have to do so that machine learning and
> > other analytics workloads can access this data from the HDFS layer?
> >
> > Considering that HBase stores its data in HDFS.
> >
> >
> > thanks,
> > -ash
