It would be quite trivial. None of that affects any of the Spark execution.
It doesn't seem like it helps though - you are just swallowing the cause.
Just let it fly?
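If the Try wrapper is kept anyway, the original exception can at least be chained as the cause rather than discarded. A minimal sketch (the helper name `loadOrFail` is made up for illustration; it is not a Spark API):

```scala
import scala.util.{Try, Success, Failure}

// Try only wraps the driver-side call that builds the DataFrame plan;
// the actual JDBC reads happen later, when an action runs, outside this block.
// So the wrapper costs essentially nothing at execution time.
def loadOrFail[T](load: => T): T =
  Try(load) match {
    case Success(df) => df
    case Failure(e) =>
      // Pass the original exception as the cause instead of swallowing it,
      // so the real JDBC error still shows up in the stack trace.
      throw new Exception("Error encountered reading Hive table", e)
  }
```

Or simpler still: drop the Try entirely and let the original exception propagate.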

On Fri, Oct 2, 2020 at 9:34 AM Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> As a side question, consider the following JDBC read:
>
> val lowerBound = 1L
> val upperBound = 1000000L
> val numPartitions = 10
> val partitionColumn = "id"
>
> val HiveDF = Try(spark.read.
>     format("jdbc").
>     option("url", jdbcUrl).
>     option("driver", HybridServerDriverName).
>     option("dbtable", HiveSchema+"."+HiveTable).
>     option("user", HybridServerUserName).
>     option("password", HybridServerPassword).
>     option("partitionColumn", partitionColumn).
>     option("lowerBound", lowerBound).
>     option("upperBound", upperBound).
>     option("numPartitions", numPartitions).
>     load()) match {
>       case Success(df) => df
>       case Failure(e) => throw new Exception("Error encountered reading Hive table")
>     }
>
> Are there any performance implications of having the Try/Success/Failure enclosure around the DataFrame?
