I actually *just* figured it out.  Zeppelin has sqlContext "already created
and exposed" (
https://zeppelin.incubator.apache.org/docs/interpreter/spark.html).

So when I run "sqlContext = SQLContext(sc)", I overwrite the sqlContext that
Zeppelin already created, and Zeppelin can no longer see my new one.

So, for anyone else running into this problem: do NOT initialize your own
sqlContext; just use the one Zeppelin provides and everything works fine.
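
For anyone searching the archives later, here is a minimal sketch of what a
working %pyspark paragraph looks like under that assumption (the JSON path,
table name, and column are just placeholders for illustration):

%pyspark
# Do NOT do this -- it shadows the sqlContext Zeppelin already created:
#   from pyspark.sql import SQLContext
#   sqlContext = SQLContext(sc)
# Just use the sc and sqlContext that Zeppelin exposes.
df = sqlContext.read.json("/tmp/people.json")   # placeholder path
df.registerTempTable("people")
sqlContext.sql("SELECT name FROM people LIMIT 10").show()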

On Thu, Oct 29, 2015 at 6:10 PM Jeff Steinmetz <jeffrey.steinm...@gmail.com>
wrote:

> In zeppelin, what is the equivalent to adding jars in a pyspark call?
>
> Such as running pyspark with the elasticsearch-hadoop jar
>
> ./bin/pyspark --master local[2] --jars
> jars/elasticsearch-hadoop-2.1.0.Beta2.jar
>
> My assumption is that loading something like this inside a %dep is
> pointless, since those dependencies would only live in the %spark Scala
> world (the Spark JVM). In Zeppelin, pyspark spawns a separate process.
>
> Also, how is the interpreter's "spark.home" setting used? How is it
> different from the "SPARK_HOME" in zeppelin-env.sh?
> And finally, how are args used in the interpreter (and what uses them)?
>
> Thank you.
> Jeff
>
-- 
Best regards,

Matt Sochor
Data Scientist
Mobile Defense

Mobile +1 215 307 7768

