Thanks Sean. I guess I was being pedantic. In any case, if the source table
does not exist, the spark.read call is going to fall over one way or
another!
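For example (placeholder table name below): since the JDBC source resolves
the schema eagerly at load(), a missing table should fail at definition
time with or without the Try wrapper, roughly like this:

import scala.util.{Try, Success, Failure}

// load() on the JDBC source connects and fetches the schema up front,
// so a missing table fails here, not at the first action.
Try(spark.read.
    format("jdbc").
    option("url", jdbcUrl).
    option("dbtable", "no_such_schema.no_such_table").
    load()) match {
  case Success(df) => df.show(5)
  case Failure(e)  => println(s"Read failed as expected: ${e.getMessage}")
}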

On Fri, 2 Oct 2020 at 15:55, Sean Owen <sro...@gmail.com> wrote:

> It would be quite trivial. None of that affects any of the Spark execution.
> It doesn't seem like it helps though - you are just swallowing the cause.
> Just let it fly?
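>
> Something like this, say (same placeholder names as in your code below):
> drop the Try so the real exception propagates, or at least chain it as
> the cause when rethrowing:
>
> val HiveDF = spark.read.
>     format("jdbc").
>     option("url", jdbcUrl).
>     option("dbtable", HiveSchema + "." + HiveTable).
>     load()  // a failure surfaces here with its full stack trace
>
> // or, if you must intercept it, keep the cause:
> // case Failure(e) => throw new Exception("Error reading Hive table", e)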
>
> On Fri, Oct 2, 2020 at 9:34 AM Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
>
>> As a side question, consider the following JDBC read:
>>
>> import scala.util.{Try, Success, Failure}
>>
>> val lowerBound = 1L
>> val upperBound = 1000000L
>> val numPartitions = 10
>> val partitionColumn = "id"
>>
>> val HiveDF = Try(spark.read.
>>     format("jdbc").
>>     option("url", jdbcUrl).
>>     option("driver", HybridServerDriverName).
>>     option("dbtable", HiveSchema+"."+HiveTable).
>>     option("user", HybridServerUserName).
>>     option("password", HybridServerPassword).
>>     option("partitionColumn", partitionColumn).
>>     option("lowerBound", lowerBound).
>>     option("upperBound", upperBound).
>>     option("numPartitions", numPartitions).
>>     load()) match {
>>       case Success(df) => df
>>       // note: the original cause e is discarded here
>>       case Failure(e) => throw new Exception("Error encountered reading Hive table")
>>     }
>>
>> Are there any performance implications of wrapping the DataFrame
>> creation in a Try/Success/Failure block like this?
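>>
>> In other words, is the Try(...) match above any more expensive than a
>> plain driver-side try/catch along these lines (a rough sketch of what
>> Try.apply does; it only catches non-fatal throwables)?
>>
>> import scala.util.control.NonFatal
>>
>> val HiveDF2 = try {
>>   spark.read.format("jdbc") /* same options as above */ .load()
>> } catch {
>>   case NonFatal(e) => throw new Exception("Error encountered reading Hive table", e)
>> }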
>>