Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Mich Talebzadeh
Many thanks Russell. That worked: val HiveDF = Try(spark.read.format("jdbc").option("url", jdbcUrl).option("dbtable", HiveSchema+"."+HiveTable).option("user", HybridServerUserName).option("password", HybridServerPassword).load()) match {
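For reference, a minimal sketch of how that Try/match block is typically completed; the excerpt above is cut off at the match, so the Success/Failure cases here are assumptions, and spark, jdbcUrl, HiveSchema, HiveTable, HybridServerUserName and HybridServerPassword are taken from the thread:

import scala.util.{Try, Success, Failure}

// Wrap the JDBC read in Try and bind the result to a fresh name inside the
// match, so the outer val is not referenced while it is still being defined.
val HiveDF = Try(
  spark.read
    .format("jdbc")
    .option("url", jdbcUrl)
    .option("dbtable", HiveSchema + "." + HiveTable)
    .option("user", HybridServerUserName)
    .option("password", HybridServerPassword)
    .load()
) match {
  case Success(df) => df                      // df is the loaded DataFrame
  case Failure(e)  => throw new Exception(e)  // surface the JDBC failure
}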

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Russell Spitzer
You can't use df as the name of the return from the try and the name of the match variable in success. You also probably want to match the name of the variable in the match with the return from the match. So val df = Try(spark.read.format("jdbc").option("url", jdbcUrl).
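To illustrate the naming rule Russell describes, a hedged sketch (the exact original code is not shown in the excerpt; the options are elided for brevity):

import scala.util.{Try, Success, Failure}

// Problematic: a capitalized identifier in a pattern is read as a reference
// to an existing value, so reusing the outer name inside Success points at
// the val still being defined and yields "recursive value ... needs type":
//   val HiveDF = Try(...) match { case Success(HiveDF) => HiveDF ... }

// Fixed: bind a fresh lowercase name inside the match and return that name.
val HiveDF = Try(
  spark.read.format("jdbc").option("url", jdbcUrl).load()  // other .option(...) calls elided
) match {
  case Success(df) => df
  case Failure(e)  => throw e
}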

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Mich Talebzadeh
Many thanks Sean. Maybe I misunderstood your point? var DF = Try(spark.read.format("jdbc").option("url", jdbcUrl).option("dbtable", HiveSchema+"."+HiveTable).option("user", HybridServerUserName).option("password", HybridServerPassword).load()) match {

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Sean Owen
You are reusing HiveDF for two vars and it ends up ambiguous. Just rename one. On Thu, Oct 1, 2020, 5:02 PM Mich Talebzadeh wrote: > Hi, > Spark version 2.3.3 on Google Dataproc > I am trying to use the Spark JDBC data source to read from other databases >

Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Mich Talebzadeh
Hi, Spark version 2.3.3 on Google Dataproc. I am trying to use the Spark JDBC data source (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html) to connect to other databases, in this case reading from an on-prem Hive table with Spark running in the cloud. This works OK without a Try enclosure. import spark.implicits._ import
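The plain read described here (the version that "works OK without a Try enclosure") would look roughly like the sketch further up, just without the Try wrapper; the connection variables are the ones named in the thread and their values are assumptions:

import spark.implicits._

// Plain JDBC read of the on-prem Hive table, no error handling yet.
val HiveDF = spark.read
  .format("jdbc")
  .option("url", jdbcUrl)                           // JDBC URL of the Hive/hybrid server
  .option("dbtable", HiveSchema + "." + HiveTable)  // schema-qualified table name
  .option("user", HybridServerUserName)
  .option("password", HybridServerPassword)
  .load()

HiveDF.printSchema()  // quick check that the table is reachable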

Create custom receiver for MQTT in spark streaming

2020-10-01 Thread Muhammed Favas
Hi, I have a requirement to do analysis with Spark on data coming from IoT devices via an MQTT broker. My Spark job connects to the MQTT broker, where I can subscribe to specific topics. I have used the MQTTUtils library in Spark to connect to the broker, but I have doubts about
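A minimal sketch of what a custom receiver for this could look like, assuming the Eclipse Paho MQTT client on the classpath; the broker URL and topic are placeholders, and this is a sketch rather than the MQTTUtils/Bahir implementation:

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver
import org.eclipse.paho.client.mqttv3.{IMqttDeliveryToken, MqttCallback, MqttClient, MqttMessage}
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

// Custom receiver that subscribes to one MQTT topic and hands each message
// payload to Spark Streaming as a String record.
class MQTTReceiver(brokerUrl: String, topic: String)
    extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  @volatile private var client: MqttClient = _

  override def onStart(): Unit = {
    client = new MqttClient(brokerUrl, MqttClient.generateClientId(), new MemoryPersistence())
    client.setCallback(new MqttCallback {
      override def messageArrived(t: String, msg: MqttMessage): Unit =
        store(new String(msg.getPayload))                 // push the record into Spark
      override def connectionLost(cause: Throwable): Unit =
        restart("Lost connection to MQTT broker", cause)  // let Spark restart the receiver
      override def deliveryComplete(token: IMqttDeliveryToken): Unit = ()
    })
    client.connect()
    client.subscribe(topic)
  }

  override def onStop(): Unit = {
    if (client != null && client.isConnected) client.disconnect()
  }
}

// Usage with a StreamingContext ssc:
//   val lines = ssc.receiverStream(new MQTTReceiver("tcp://broker-host:1883", "devices/#"))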

Re: [Spark SQL] does pyspark udf support spark.sql inside def

2020-10-01 Thread Lakshmi Nivedita
Sure, will do that. I am using Impala in PySpark to retrieve the data. Table A schema: date1 Bigint, date2 Bigint, ctry string. Sample data for table A: date1 = 22-12-2012, date2 = 06-01-2013, ctry = IN. Table B schema: holidate Bigint, Holiday string (0/1; 0 means holiday, 1 means

Re: Spark JDBC- OAUTH example

2020-10-01 Thread Gabor Somogyi
I know what you're writing; please check the code, which will answer your questions: https://github.com/apache/spark/pull/29024 The API provides quite a lot of freedom when a custom provider is implemented. G On Wed, Sep 30, 2020 at 8:29 PM Artemis User wrote: > I'm just curious in regard to what