I'm implementing the DataSource V2 API for a custom data source.
I'm trying to insert records into a temp view, in the following fashion:
insertDFWithSchema.createOrReplaceTempView(sqlView)
spark.sql(s"insert into $sqlView values (2, 'insert_record1', 200,
23000), (20001, 'insert_record2', 201, 23001)")
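For reference, a minimal sketch of the pattern above (the source format string and view name are placeholders, not from the thread; note that `INSERT INTO` a temp view only works when the view is backed by a table that supports writes):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("dsv2-insert-repro").getOrCreate()

// Load from the custom V2 source (format name is hypothetical).
val insertDFWithSchema = spark.read.format("com.example.CustomSource").load()

val sqlView = "my_view"
insertDFWithSchema.createOrReplaceTempView(sqlView)

// Straight quotes are required in the SQL string; curly "smart quotes"
// pasted from an email client will not parse.
spark.sql(
  s"""INSERT INTO $sqlView VALUES
     |(2, 'insert_record1', 200, 23000),
     |(20001, 'insert_record2', 201, 23001)""".stripMargin)
```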
Ok. I will work on creating a reproducible app. Thanks.
On Wed, Jan 13, 2021 at 3:57 PM Gabor Somogyi
wrote:
> Just reached this thread. +1 on to create a simple reproducer app and I
> suggest to create a jira attaching the full driver and executor logs.
> Ping me on the jira and I'll pick this up right away...
is shuffle file re-use based on identity or equality of the dataframe?
for example, if i run the exact same code twice to load data and do transforms
(joins, aggregations, etc.) but without re-using any actual dataframe objects,
will i still see skipped stages thanks to shuffle file re-use?
thanks!
koert
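The scenario in the question can be sketched as follows (a local-mode sketch; whether the second action shows skipped stages is exactly what the question asks, so no outcome is asserted here):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("shuffle-reuse-question")
  .getOrCreate()
import spark.implicits._

// Build an identical lineage twice, as two distinct DataFrame objects.
def buildAggregate() = {
  val df = spark.range(0, 1000000).toDF("id")
  df.groupBy(($"id" % 10).as("bucket")).count()
}

buildAggregate().collect() // first run: shuffle stages execute
buildAggregate().collect() // second run: equal plan, different object --
                           // do the shuffle stages appear as "skipped" in the UI?
```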
Hello here, I am new to spark and am trying to add some monitoring for spark
applications, specifically to handle the below situations - 1 - Forwarding
Spark Event Logs to identify critical events like job start, executor
failures, job failures etc. to ElasticSearch via log4j. However I could not
find
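One way to surface such events is a custom `SparkListener` that logs them through log4j, with a log4j appender then shipping the log lines to ElasticSearch. A hedged sketch (class name and logger name are illustrative, not from the thread):

```scala
import org.apache.log4j.Logger
import org.apache.spark.scheduler._

// Logs "critical" lifecycle events via log4j; route the "spark.monitoring"
// logger to an ElasticSearch-bound appender in log4j configuration.
class CriticalEventListener extends SparkListener {
  private val log = Logger.getLogger("spark.monitoring")

  override def onJobStart(jobStart: SparkListenerJobStart): Unit =
    log.info(s"job ${jobStart.jobId} started")

  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
    log.info(s"job ${jobEnd.jobId} ended: ${jobEnd.jobResult}")

  override def onExecutorRemoved(removed: SparkListenerExecutorRemoved): Unit =
    log.warn(s"executor ${removed.executorId} removed: ${removed.reason}")
}

// Register it via configuration, e.g.:
//   spark-submit --conf spark.extraListeners=CriticalEventListener ...
```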
Hi,
I am trying to connect to Presto via the Spark shell using the following
connection string, however I am ending up with an exception:
-bash-4.2$ spark-shell --driver-class-path
com.facebook.presto.jdbc.PrestoDriver --jars presto-jdbc-0.221.jar
scala> val presto_df = sqlContext.read.format("jdbc").op
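For comparison, a hedged sketch of the JDBC read (host, port, catalog, and table are placeholders). One thing worth checking: `--driver-class-path` expects a jar path, not a class name, while the driver *class* is normally passed via the `driver` option:

```scala
// Launch roughly as:
//   spark-shell --driver-class-path presto-jdbc-0.221.jar --jars presto-jdbc-0.221.jar
val presto_df = spark.read
  .format("jdbc")
  .option("driver", "com.facebook.presto.jdbc.PrestoDriver")
  .option("url", "jdbc:presto://presto-host:8080/hive") // placeholder URL
  .option("dbtable", "default.some_table")              // placeholder table
  .option("user", "spark")
  .load()
```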
Just reached this thread. +1 on to create a simple reproducer app and I
suggest to create a jira attaching the full driver and executor logs.
Ping me on the jira and I'll pick this up right away...
Thanks!
G
On Wed, Jan 13, 2021 at 8:54 AM Jungtaek Lim
wrote:
> Would you mind if I ask for a s