As a simple hack, you can put the jar on the classpath and then set the JDBC parameters in the Hive interpreter settings. Then use %hive and just write SQL against Teradata.

Eran
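P.S. Roughly what I mean, assuming terajdbc4.jar and tdgssconfig.jar are already on Zeppelin's classpath. The property names below are only a sketch and differ between Zeppelin versions, so check them on the Interpreter settings page; the driver class is the standard Teradata one, and host, database, user and password are placeholders:

hive.driver = com.teradata.jdbc.TeraDriver
hive.url = jdbc:teradata://<host>/DATABASE=<db>
hive.user = <user>
hive.password = <password>

Then in a notebook paragraph (TOP is Teradata's row-limiting syntax):

%hive
select top 10 * from <db>.<some_table>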
On Sat, Aug 1, 2015 at 00:08, Dhaval Patel <dhaval1...@gmail.com> wrote:
> Hi,
>
> I am trying to connect to Teradata from Spark and am getting the error below
> about no suitable driver being found:
>
> java.sql.SQLException: No suitable driver found for jdbc:teradata://XXXXXX
>
> I have tried adding the jar files using %dep, as well as setting the
> SPARK_CLASSPATH variable in zeppelin-env.sh, but instead of the jars being
> added under the classpath, they get added under spark.driver.extraClassPath.
> SPARK_CLASSPATH=/...path/terajdbc4.jar:/..path/tdgssconfig.jar
>
> Below is the code I tried from Zeppelin:
>
> %pyspark
> df = sqlContext.load(source="jdbc", url="jdbc:teradata://XXXXX,
> user=XXXXXX, password=XXXXXXX", dbtable="XXXXXXX")
>
> I have tried adding the driver and connecting from a shell, and it worked
> like a charm.
>
> Thanks in advance!
>
> -Dhaval
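And if you want to stay in %pyspark rather than %hive: the %dep approach normally only works if it runs before the Spark interpreter has started for the session (restart the interpreter, then run the %dep paragraph first). A rough sketch, with placeholder paths and connection details; passing the Teradata driver class explicitly via the driver option is worth a try for the "No suitable driver" error, since Spark's JDBC data source then loads that class itself:

%dep
z.reset()
z.load("/path/to/terajdbc4.jar")
z.load("/path/to/tdgssconfig.jar")

%pyspark
# driver= names the JDBC driver class for Spark to load;
# the URL uses the usual Teradata parameter form, everything in <> is a placeholder
df = sqlContext.load(source="jdbc",
    driver="com.teradata.jdbc.TeraDriver",
    url="jdbc:teradata://<host>/DATABASE=<db>,USER=<user>,PASSWORD=<password>",
    dbtable="<table>")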