Hey Tomas, thanks for your reply. I am aware of the z.load option, but my question was how to load dependencies without using z.load :)
As I wrote, instead of having to run:

z.load("org.apache.spark:spark-streaming-kinesis-asl_2.10:1.5.0")

I would like to permanently add this dependency to the Zeppelin server, so it is automatically loaded on startup for all notebooks. I assume this is possible: a lot of libs are already provided automatically, so there must be some conf file/jar folder with all available libs :)

Thanks,
Josef

On 4 November 2015 at 14:38, Tomas Hudik <xhu...@gmail.com> wrote:
> Josef,
> use the %dep interpreter, e.g.:
>
> %dep
> //add Maven dependency
> z.load("com.databricks:spark-csv_2.10:1.2.0")
>
> Be careful: this snippet needs to be put before any spark/scala code!
> More: https://zeppelin.incubator.apache.org/docs/interpreter/spark.html
>
> The dependencies will be stored in the spark-dependencies/ directory
>
> hope this helps, Tomas
>
> On Wed, Nov 4, 2015 at 2:16 PM, Josef A. Habdank <jahabd...@gmail.com> wrote:
>
>> Hello all,
>>
>> could you please hint me on how to permanently add a dependency to
>> zeppelin, without using the ZeppelinContext?
>>
>> I use 'org.apache.spark:spark-streaming-kinesis-asl_2.10' in all of my
>> notebooks, so instead of having to run:
>> z.load("org.apache.spark:spark-streaming-kinesis-asl_2.10:1.5.0")
>>
>> I would like to permanently add this dependency to the Zeppelin server,
>> as I would normally do in SBT:
>> libraryDependencies ++= Seq("org.apache.spark" %% "spark-streaming-kinesis-asl" % "1.5.0")
>>
>> Is there any config/sbt file I can add it to, so it is automatically
>> loaded?
>>
>> I assume I would have to restart the zeppelin-daemon after adding it, but
>> that is ok, I do not mind :)
>>
>> Thank you,
>> Josef
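For anyone finding this thread in the archive: one approach worth trying is to have spark-submit resolve the package at interpreter startup via SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. This is a sketch, not something I have verified against every Zeppelin version, and it assumes Zeppelin launches Spark through spark-submit:

```shell
# conf/zeppelin-env.sh -- sketch; assumes Zeppelin starts Spark via spark-submit.
# --packages asks spark-submit to fetch the artifact (and its transitive deps)
# from Maven Central, so every notebook gets it without a z.load() call.
export SPARK_SUBMIT_OPTIONS="--packages org.apache.spark:spark-streaming-kinesis-asl_2.10:1.5.0"
```

After adding this you would restart the daemon (bin/zeppelin-daemon.sh restart) so the Spark interpreter picks up the new option.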