From: Divya Gehlot [mailto:divya.htco...@gmail.com]
Sent: 21 February 2016 00:09
To: Mich Talebzadeh <m...@peridale.co.uk>
Cc: user @spark <user@spark.apache.org>; Russell Jurney
<russell.jur...@gmail.com>; Jörn Franke <jornfra...@gmail.com>
Subject: RE: Spark JDBC connection - data writing success or failure cases
From: Russell Jurney <russell.jur...@gmail.com>
To: Jörn Franke <jornfra...@gmail.com>
Cc: Divya Gehlot <divya.htco...@gmail.com>; user @spark <user@spark.apache.org>
Subject: Re: Spark JDBC connection - data writing success or failure cases
Oracle is a perfectly reasonable endpoint for publishing data processed in
Spark. I've got to assume he's using it that way and not as a stand-in for
HDFS?
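For illustration, a minimal sketch of publishing a processed DataFrame to Oracle over JDBC. The DataFrame name (resultDF), connection URL, credentials and table name below are placeholders, not details from this thread:

import java.util.Properties
import org.apache.spark.sql.SaveMode

// Placeholder Oracle connection details -- substitute your own host/service and credentials.
val url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"
val props = new Properties()
props.setProperty("user", "scott")
props.setProperty("password", "tiger")
props.setProperty("driver", "oracle.jdbc.OracleDriver")

// resultDF is the DataFrame produced by the Spark job (hypothetical name).
// SaveMode.Append adds rows to an existing table; use Overwrite to replace it.
resultDF.write
  .mode(SaveMode.Append)
  .jdbc(url, "SPARK_RESULTS", props)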
On Friday, February 19, 2016, Jörn Franke wrote:
Generally Oracle DB should not be used as a storage layer for Spark, for
performance reasons. You should consider HDFS. This will also help you with
fault tolerance.
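For comparison, landing the same hypothetical resultDF on HDFS as Parquet is a one-liner; the output path is made up:

// Write the processed results to HDFS as Parquet; HDFS replication provides
// the fault tolerance mentioned above.
resultDF.write
  .mode("overwrite")
  .parquet("hdfs:///user/hduser/spark_results")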
> On 19 Feb 2016, at 03:35, Divya Gehlot wrote:
From: Divya Gehlot [mailto:divya.htco...@gmail.com]
Sent: 19 February 2016 02:36
To: user @spark <user@spark.apache.org>
Subject: Spark JDBC connection - data writing success or failure cases
Hi,
I have a Spark job which connects to an RDBMS (in my case it's Oracle).
How can we check that the complete data write was successful?
Can I use commit in case of success or rollback in case of failure?
Thanks,
Divya
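For what it's worth, a rough sketch of one way to detect success or failure around a JDBC write, reusing the placeholder connection details and the hypothetical resultDF, staging table name and sqlContext. As far as I know, Spark's JDBC writer handles each partition in its own connection, so there is no single job-wide transaction to roll back; writing to a staging table and validating it before promoting is a common workaround:

import java.util.Properties
import org.apache.spark.sql.SaveMode
import scala.util.{Failure, Success, Try}

// Placeholder connection details (same assumptions as above).
val url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"
val props = new Properties()
props.setProperty("user", "scott")
props.setProperty("password", "tiger")

val expected = resultDF.count()  // number of rows we intend to write

// The write throws an exception on the driver if the job fails,
// so wrap it to detect failure.
Try(resultDF.write.mode(SaveMode.Append).jdbc(url, "RESULTS_STAGING", props)) match {
  case Success(_) =>
    // Read the staging table back and compare counts before promoting it
    // to the final table (e.g. with an ALTER TABLE ... RENAME outside Spark).
    val written = sqlContext.read.jdbc(url, "RESULTS_STAGING", props).count()
    if (written == expected) println("write verified")
    else println(s"row count mismatch: wrote $written, expected $expected")
  case Failure(e) =>
    // Some partitions may already have committed; drop or truncate the
    // staging table here before retrying.
    println("write failed: " + e.getMessage)
}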