Thanks Sean. I guess I was being pedantic. In any case, if the source table
does not exist, the spark.read call is going to fall over one way or another!
On Fri, 2 Oct 2020 at 15:55, Sean Owen wrote:
It would be quite trivial. None of that affects any of the Spark execution.
It doesn't seem like it helps though - you are just swallowing the cause.
Just let it fly?
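A minimal sketch of the difference, with the jdbc options elided:

import scala.util.{Try, Success, Failure}

val HiveDF = Try(spark.read.format("jdbc").option("url", jdbcUrl).load()) match {
  case Success(df) => df
  case Failure(e) => throw e  // let it fly: original exception, cause and stack trace intact
  // versus: case Failure(e) => throw new Exception("could not read") -- swallows the cause
}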
On Fri, Oct 2, 2020 at 9:34 AM Mich Talebzadeh wrote:
As a side question, consider the following JDBC read:

val lowerBound = 1L
val upperBound = 100L
val numPartitions = 10
val partitionColumn = "id"
val HiveDF = Try(spark.read.format("jdbc").
  option("url", jdbcUrl).
  option("driver", HybridServerDriverName).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  option("partitionColumn", partitionColumn).
  option("lowerBound", lowerBound).
  option("upperBound", upperBound).
  option("numPartitions", numPartitions).
  load()) match {
    case Success(df) => df
    case Failure(e) => throw e
}
Many thanks Russell. That worked.

val HiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()) match {
    case Success(df) => df
    case Failure(e) => throw e
}
You can't use df both as the name of the value returned from the Try and as
the match variable in the Success case. You also probably want the variable
bound in the match to be the one returned from the match.
So

val df = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  // ... remaining options as above ...
  load()) match {
    case Success(validDf) => validDf
    case Failure(e) => throw e
}
Many thanks Sean.
Maybe I misunderstood your point?

var DF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()) match {
    case Success(df) => df
    case Failure(e) => throw e
}
You are reusing HiveDF for two vars and it ends up ambiguous. Just rename
one.
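A minimal sketch of the rename Sean is suggesting (jdbc options elided):

import scala.util.{Try, Success, Failure}

// Ambiguous: HiveDF names both the match result and the Success binding
// val HiveDF = Try(...) match { case Success(HiveDF) => HiveDF ... }

// Renamed: the Success binding gets its own name
val HiveDF = Try(spark.read.format("jdbc").option("url", jdbcUrl).load()) match {
  case Success(df) => df
  case Failure(e) => throw e
}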
On Thu, Oct 1, 2020, 5:02 PM Mich Talebzadeh wrote:
Hi,
Spark version 2.3.3 on Google Dataproc
I am trying to use "JDBC To Other Databases"
https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
to read from a Hive table on-prem using Spark in the cloud.
This works OK without a Try enclosure.
import spark.implicits._
import scala.util.{Try, Success, Failure}
Sure, just do case Failure(e) => throw e
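In context, a minimal sketch (the csv read is illustrative):

import scala.util.{Try, Success, Failure}

val df = Try(spark.read.csv("/tmp/input.csv")) match {
  case Success(d) => d
  case Failure(e) => throw e  // rethrows the original exception unchanged
}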
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 6:36 PM
To: Brandon Geise
Cc: Todd Nist, "user @spark"
Subject: Re: Exception handling in Spark
Hi Brandon.
In dealing with df case Failure(e) => throw new Exception
On 5 May 2020 at 23:13, Brandon Geise wrote:
Match needs to be lower case “match”
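A minimal illustration; match is a keyword and is case-sensitive, so Match
does not compile:

import scala.util.{Try, Success, Failure}

val df = Try(spark.read.csv("/tmp/input.csv")) match {  // lower case 'match'
  case Success(d) => d
  case Failure(e) => throw e
}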
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 6:13 PM
To: Brandon Geise
Cc: Todd Nist, "user @spark"
Subject: Re: Exception handling in Spark
scala> import scala.util.{Try, Success, Failure}
import scala.util.{Try, Success, Failure}
import scala.util.Try
import scala.util.Success
import scala.util.Failure
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 6:11 PM
To: Brandon Geise
Cc: Todd Nist , "user @spark"
Subject: Re: Exception handling in Spark
This is what I get
scala> val df = Try(spar
Could you give this approach a try?

val df = Try(spark.read.csv("")) match {
  case Success(df) => df
  case Failure(e) => throw new Exception("foo")
}
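A variant that avoids swallowing the underlying cause when wrapping:

val df = Try(spark.read.csv("")) match {
  case Success(df) => df
  case Failure(e) => throw new Exception("foo", e)  // keeps e as the cause, so the stack trace is not lost
}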
To: Todd Nist
Cc: Brandon Geise, "user @spark"
Subject: Re: Exception handling in Spark
I am trying this approach:
val broadcastValue = "123456789" // I assume this will be sent as a constant for the batch
// Create a DF on top of XML
try {
  val df = spark.read.
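A complete sketch of that shape; the spark-xml format, row tag, and HDFS
path below are assumptions:

try {
  val df = spark.read.
    format("com.databricks.spark.xml").  // spark-xml data source (assumed)
    option("rowTag", "record").          // illustrative row tag
    load("hdfs://namenode:9000/tmp/input.xml")  // illustrative path
  println(s"Read ${df.count()} rows for batch $broadcastValue")
} catch {
  case e: Exception =>
    println(s"Failed to read the XML file: ${e.getMessage}")
    sys.exit(1)
}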
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 12:45 PM
To: Brandon Geise
Cc: "user @spark"
Subject: Re: Exception handling in Spark
Thanks Brandon!
I should have remembered that.
Basically the code gets out with sys.exit(1) if it cannot find the file.
I guess there is no easy way
You could use the Hadoop API and check if the file exists.
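For instance, a minimal sketch (the path is illustrative):

import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val xmlPath = new Path("/tmp/input.xml")  // illustrative path
if (!fs.exists(xmlPath)) {
  println(s"File $xmlPath does not exist")
  sys.exit(1)
}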
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 11:25 AM
To: "user @spark"
Subject: Exception handling in Spark
Hi,
As I understand it, exception handling in Spark only makes sense if one
attempts an action, as opposed to lazy transformations?
Let us assume that I am reading an XML file from the HDFS directory and
create a dataframe DF on it:
val broadcastValue = "123456789" // I assume this will be sent as a constant for the batch
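To make the laziness point concrete, a small sketch (the path is illustrative):

import org.apache.spark.sql.functions.col
import scala.util.{Try, Success, Failure}

val result = Try {
  val df = spark.read.csv("hdfs:///nonexistent/path")  // may already fail here, during schema inference
  df.filter(col("_c0").isNotNull)  // transformation: lazy, nothing executes yet
    .count()                       // action: any remaining errors surface here
}
result match {
  case Success(n) => println(s"row count: $n")
  case Failure(e) => throw e
}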