Oracle is a perfectly reasonable endpoint for publishing data processed in
Spark. I have to assume he's using it that way and not as a stand-in for
HDFS?
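For what it's worth, here's a rough sketch of what I mean by publishing to
Oracle from Spark with the DataFrame JDBC writer. The URL, credentials, and
table names are placeholders, and this assumes the Oracle JDBC driver is on
the classpath. As far as I know the writer commits each partition on its own
connection, so landing in a staging table first is the safer pattern:

import java.util.Properties
import org.apache.spark.sql.{SaveMode, SparkSession}

object OracleWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("publish-to-oracle").getOrCreate()

    // Stand-in for the real pipeline output.
    val df = spark.range(0, 1000).toDF("id")

    val props = new Properties()
    props.setProperty("user", "scott")          // placeholder credentials
    props.setProperty("password", "tiger")
    props.setProperty("driver", "oracle.jdbc.OracleDriver")

    // Each partition is written and committed separately, so a failed job
    // can leave partial rows behind; hence the staging table rather than
    // the final one.
    df.write
      .mode(SaveMode.Overwrite)
      .jdbc("jdbc:oracle:thin:@//dbhost:1521/ORCL", "STAGING_TABLE", props)

    spark.stop()
  }
}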

On Friday, February 19, 2016, Jörn Franke <[email protected]> wrote:

> Generally, an Oracle DB should not be used as the storage layer for Spark,
> for performance reasons. You should consider HDFS. This will also help you
> with fault tolerance.
>
> > On 19 Feb 2016, at 03:35, Divya Gehlot <[email protected]> wrote:
> >
> > Hi,
> > I have a Spark job which connects to an RDBMS (in my case, Oracle).
> > How can I check that the complete data write was successful?
> > Can I use commit in case of success or rollback in case of failure?
> >
> >
> >
> > Thanks,
> > Divya
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>
>
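On the commit/rollback part of the question: the Spark JDBC writer doesn't
give you a single job-wide transaction, so one option is to land the data in
a staging table (as above) and then validate and publish it from the driver
in one plain JDBC transaction. A rough sketch, again with placeholder
connection details and table names:

import java.sql.DriverManager

object PublishStep {
  // Run on the driver once the Spark write has finished. Connection
  // details and table names are placeholders.
  def publish(expectedCount: Long): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")
    try {
      conn.setAutoCommit(false)

      // Check that the staging table holds the number of rows Spark wrote.
      val rs = conn.createStatement()
        .executeQuery("SELECT COUNT(*) FROM STAGING_TABLE")
      rs.next()
      val staged = rs.getLong(1)
      require(staged == expectedCount,
        s"expected $expectedCount rows, found $staged")

      // Publish into the final table; commit/rollback now covers the whole
      // step as one Oracle transaction.
      conn.createStatement()
        .executeUpdate("INSERT INTO FINAL_TABLE SELECT * FROM STAGING_TABLE")
      conn.commit()
    } catch {
      case e: Exception =>
        conn.rollback()
        throw e
    } finally {
      conn.close()
    }
  }
}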

-- 
Russell Jurney twitter.com/rjurney [email protected] relato.io
