From: Mich Talebzadeh <...@gmail.com>
Date: Monday, August 21, 2017 at 6:44 PM
To: Jake Russ <jr...@bloomintelligence.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Update MySQL table via Spark/SparkR?

Hi Jake,
This is an issue across all RDBMSs, including Oracle etc. When you are updating, you have to commit or roll back in the RDBMS itself, and I am not aware of Spark doing that.
How about append and a view simulating the update? Then you do not need two
processes...
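The append-plus-view idea could be sketched in MySQL roughly as follows; the table and column names (`results_log`, `results`, `id`, `metric`, `loaded_at`) are illustrative assumptions, not from this thread:

```sql
-- Illustrative sketch: Spark only ever APPENDs rows to results_log,
-- and a view exposes the latest row per key, simulating an UPDATE.
CREATE TABLE results_log (
    id        BIGINT    NOT NULL,
    metric    DOUBLE    NOT NULL,
    loaded_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id, loaded_at)
);

-- The "current" view: one row per id, taken from the most recent load.
CREATE VIEW results AS
SELECT r.id, r.metric, r.loaded_at
FROM results_log r
JOIN (
    SELECT id, MAX(loaded_at) AS loaded_at
    FROM results_log
    GROUP BY id
) latest ON r.id = latest.id AND r.loaded_at = latest.loaded_at;
```

Spark writes with mode "append" only; anything that reads the `results` view always sees the newest values, so no second update process is needed.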
The staging table is a safer method as it follows an ETL-type approach: you
create the new data in a staging table in the RDBMS and then apply the change
there.
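As a rough SparkR sketch of the staging-table approach (the connection URL, credentials, and the `results`/`results_staging` names below are assumptions for illustration):

```r
# Illustrative sketch: SparkR cannot issue UPDATEs over JDBC, so write the
# computed results into a staging table and let MySQL apply the change.
library(SparkR)
sparkR.session()

url <- "jdbc:mysql://dbhost:3306/mydb"  # hypothetical connection string

# `results` is the SparkDataFrame computed earlier; replace the staging table.
write.jdbc(results, url, tableName = "results_staging", mode = "overwrite",
           user = "spark", password = "secret")
```

A MySQL statement such as `UPDATE target t JOIN results_staging s ON t.id = s.id SET t.metric = s.metric;` (run in MySQL itself, outside Spark) then moves the staged rows into the real table, with commit/rollback handled by the database.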
Jake Russ <jr...@bloomintelligence.com> wrote:

> Hi everyone,
>
> I’m currently using SparkR to read data from a MySQL database, perform some
> calculations, and then write the results back to MySQL. Is it still true that
> Spark does not support UPDATE queries via JDBC? I’ve seen many posts on the
> internet saying that Spark’s DataFrameWriter does not support this.