Sorry, I somehow missed the "Scope" column in the docs, which
explicitly states it's for reads only. I don't suppose anyone knows of some
other way to submit SET statements for write sessions?
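One workaround that comes to mind (just a sketch, not something the docs offer; the Postgres driver, connection details, table, and column names below are made-up placeholders) is to bypass the built-in JDBC sink and open the connections yourself with foreachPartition, issuing the SET statements before the inserts:

def write_partition(rows):
    # Driver choice is an assumption; any DB-API client would do here.
    import psycopg2
    conn = psycopg2.connect(host="dbhost", dbname="mydb",
                            user="writer", password="secret")
    try:
        with conn.cursor() as cur:
            # Session-level SET statement, run before any writes.
            cur.execute("SET synchronous_commit TO OFF")
            for row in rows:
                cur.execute("INSERT INTO my_table (id, val) VALUES (%s, %s)",
                            (row.id, row.val))
        conn.commit()
    finally:
        conn.close()

df.rdd.foreachPartition(write_partition)

You lose the sink's built-in batching, but you get full control of each write session.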
On Fri, Nov 26, 2021 at 12:51 PM wrote:
> Hello,
>
> Regarding JDBC sinks, the docs state:
>
Hello,
Regarding JDBC sinks, the docs state:
https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
sessionInitStatement:
After each database session is opened to the remote DB and before starting
to read data, this option executes a custom SQL statement (or a PL/SQL
block). Use this to implement session initialization code.
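For reference, a minimal read-side example of that option in PySpark (the Oracle URL, table, and credentials are placeholders; the init statement is the one from the docs):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-session-init").getOrCreate()

df = (spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")  # placeholder
      .option("dbtable", "MYSCHEMA.MY_TABLE")                 # placeholder
      .option("user", "scott")
      .option("password", "tiger")
      # Runs once per database session, before any rows are read:
      .option("sessionInitStatement",
              """BEGIN execute immediate 'alter session set "_serial_direct_read"=true'; END;""")
      .load())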
Hi,
The stack trace suggests you're doing a join as well, and it's Python.
I wonder if you're seeing this?
https://issues.apache.org/jira/browse/SPARK-17100
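If I'm reading that JIRA right (a filter on a Python UDF column after a join), the triggering pattern is roughly this untested sketch; the DataFrames and column names are invented:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

spark = SparkSession.builder.getOrCreate()
left = spark.createDataFrame([(1, "a")], ["id", "x"])
right = spark.createDataFrame([(1, "b")], ["id", "y"])

keep = udf(lambda s: s == "a", BooleanType())
# Filtering on a Python UDF column after a join is the combination that
# SPARK-17100 reports as failing:
left.join(right, "id").filter(keep("x")).show()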
Are you using Spark 2.0.0?
Tim
On Tue, 16 Aug 2016 at 16:58 Sumit Khanna wrote:
> This is just the
Hello,
I am running Spark 1.5.1 on EMR using Python 3.
I have a PySpark job that does some simple joins and reduceByKey
operations. It works fine most of the time, but sometimes I get the
following error:
15/11/09 03:00:53 WARN TaskSetManager: Lost task 2.0 in stage 4.0 (TID 69,
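For context, the shape of the job is roughly this (the data and key names below are invented for illustration):

from pyspark import SparkContext

sc = SparkContext(appName="join-reduce-example")

orders = sc.parallelize([(1, 10.0), (2, 5.0), (1, 2.5)])  # (user_id, amount)
users = sc.parallelize([(1, "alice"), (2, "bob")])        # (user_id, name)

# Join on user_id, then sum amounts per user name.
totals = (orders.join(users)                           # (user_id, (amount, name))
                .map(lambda kv: (kv[1][1], kv[1][0]))  # (name, amount)
                .reduceByKey(lambda a, b: a + b))

print(totals.collect())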