I meant ADD_JARS, since you said --jars is not working for you with spark-shell.
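
If ADD_JARS does not help either, a sketch of another route that often clears
"No suitable driver" (reusing the jar path from the quoted messages below):
--jars does not put the jar on the driver's system classpath, and
java.sql.DriverManager on the driver typically needs it there, so pass the jar
to both options:

spark-shell --jars /home/hduser/jars/ojdbc6.jar --driver-class-path /home/hduser/jars/ojdbc6.jar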

Thanks
Deepak

On Tue, Dec 27, 2016 at 4:51 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Ok just to be clear do you mean
>
> ADD_JARS="~/jars/ojdbc6.jar" spark-shell
>
> or
>
> spark-shell --jars $ADD_JARS
>
>
> Thanks
>
>
> Dr Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 27 December 2016 at 10:30, Deepak Sharma <deepakmc...@gmail.com> wrote:
>
>> It works for me with Spark 1.6 (--jars).
>> Please try this:
>> ADD_JARS="<<PATH_TO_JAR>>" spark-shell
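>>
>> For instance, with the Oracle driver path used elsewhere in this thread,
>> that would look like the line below (note that ADD_JARS dates from older
>> Spark releases and may no longer be honoured by Spark 2's shell):
>>
>> ADD_JARS="/home/hduser/jars/ojdbc6.jar" spark-shell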
>>
>> Thanks
>> Deepak
>>
>> On Tue, Dec 27, 2016 at 3:49 PM, Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> Thanks.
>>>
>>> The problem is that --jars does not work with spark-shell! This is Spark
>>> 2 accessing Oracle 12c:
>>>
>>> spark-shell --jars /home/hduser/jars/ojdbc6.jar
>>>
>>> It comes back with:
>>>
>>> java.sql.SQLException: No suitable driver
>>>
>>> unfortunately.
>>>
>>> And spark-shell uses spark-submit under the bonnet, as you can see in the
>>> shell script:
>>>
>>> "${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.repl.Main
>>> --name "Spark shell" "$@"
>>>
>>>
>>> hm
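>>>
>>> One quick sanity check, assuming the JDK's jar tool is on the PATH, is to
>>> confirm the driver class is actually inside the jar:
>>>
>>> jar tf /home/hduser/jars/ojdbc6.jar | grep -i OracleDriver
>>>
>>> If nothing prints, the jar itself is the problem rather than the classpath.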
>>>
>>>
>>>
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 27 December 2016 at 09:52, Deepak Sharma <deepakmc...@gmail.com>
>>> wrote:
>>>
>>>> Hi Mich
>>>> You can copy the jar to a shared location and use the --jars command-line
>>>> argument of spark-submit.
>>>> Whoever needs access to this jar can refer to the shared path and pass it
>>>> via the --jars argument, as in the sketch below.
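>>>>
>>>> A minimal sketch (the shared path, application class and jar names here
>>>> are hypothetical; only the --jars usage is the point):
>>>>
>>>> spark-submit --master yarn \
>>>>   --jars /shared/jars/ojdbc6.jar \
>>>>   --class com.example.MyApp \
>>>>   /shared/apps/myapp.jar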
>>>>
>>>> Thanks
>>>> Deepak
>>>>
>>>> On Tue, Dec 27, 2016 at 3:03 PM, Mich Talebzadeh <
>>>> mich.talebza...@gmail.com> wrote:
>>>>
>>>>> When one runs in local mode (one JVM) on an edge host (the host from
>>>>> which the user accesses the cluster), it is possible to put an additional
>>>>> jar file, say one for accessing Oracle RDBMS tables, in $SPARK_CLASSPATH.
>>>>> This works:
>>>>>
>>>>> export SPARK_CLASSPATH=~/user_jars/ojdbc6.jar
>>>>>
>>>>> Normally a group of users has read access to a shared directory like the
>>>>> above, and once they log in, their shell invokes an environment file that
>>>>> sets up this classpath plus additional parameters such as $JAVA_HOME for
>>>>> them.
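>>>>>
>>>>> A sketch of such an environment file (the file path and JAVA_HOME value
>>>>> are hypothetical; only the pattern matters):
>>>>>
>>>>> # e.g. ~/.spark_env, sourced from each user's login shell
>>>>> export JAVA_HOME=/usr/java/latest
>>>>> export SPARK_CLASSPATH=~/user_jars/ojdbc6.jar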
>>>>>
>>>>> However, if the user chooses to run Spark through spark-submit with
>>>>> YARN, then the only way I have found to make this work is to add the jar
>>>>> path on every node of the Spark cluster, in
>>>>> $SPARK_HOME/conf/spark-defaults.conf:
>>>>>
>>>>> spark.executor.extraClassPath   /user_jars/ojdbc6.jar
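>>>>>
>>>>> If the driver also opens JDBC connections, it presumably needs the same
>>>>> jar; a sketch assuming the jar sits at the same path on every node:
>>>>>
>>>>> spark.driver.extraClassPath     /user_jars/ojdbc6.jar
>>>>> spark.executor.extraClassPath   /user_jars/ojdbc6.jar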
>>>>>
>>>>> Note that setting both spark.executor.extraClassPath and SPARK_CLASSPATH
>>>>> will cause an initialisation error:
>>>>>
>>>>> ERROR SparkContext: Error initializing SparkContext.
>>>>> org.apache.spark.SparkException: Found both
>>>>> spark.executor.extraClassPath and SPARK_CLASSPATH. Use only the former.
>>>>>
>>>>> I was wondering if there are other ways of making this work in YARN
>>>>> mode, given that every node of the cluster will require this jar file.
>>>>>
>>>>> Thanks
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
>


-- 
Thanks
Deepak
www.bigdatabig.com
www.keosha.net
