Ok, I've tried it.
Indeed it doesn't look for a spark pod. There are other issues, though; if I
can't overcome them, I'll open a new thread.
Thanks Jeff!

On Wed, Jun 23, 2021 at 11:40 AM Jeff Zhang <zjf...@gmail.com> wrote:

> set zeppelin.run.mode in zeppelin-site.xml to be local
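>
> A minimal zeppelin-site.xml fragment for this might look like the following
> (property name and value taken from the suggestion above; the assumption is
> that "local" makes interpreters run inside the Zeppelin process instead of
> in separate pods):
>
>   <property>
>     <name>zeppelin.run.mode</name>
>     <value>local</value>
>   </property>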
>
> Lior Chaga <lio...@taboola.com> wrote on Wednesday, June 23, 2021 at 4:35 PM:
>
>> I'm trying to deploy zeppelin 0.10 on k8s, using following manual build:
>>
>> mvn clean package -DskipTests -Pspark-scala-2.12 -Pinclude-hadoop 
>> -Pspark-3.0 -Phadoop2  -Pbuild-distr  -pl 
>> zeppelin-interpreter,zeppelin-zengine,spark/interpreter,spark/spark-dependencies,zeppelin-web,zeppelin-server,zeppelin-distribution,jdbc,zeppelin-plugins/notebookrepo/filesystem,zeppelin-plugins/launcher/k8s-standard
>>  -am
>>
>>
>> Spark itself is configured to use Mesos as its resource manager.
>> It seems that, when starting the Spark
>> interpreter, K8sRemoteInterpreterProcess tries to find a sidecar pod for the
>> Spark interpreter:
>>
>> Pod pod = client.pods().inNamespace(namespace).withName(podName).get();
>>
>> Is there any option not to run the Spark interpreter as a separate pod, and
>> instead just create the Spark context within the Zeppelin process? I'm
>> trying to understand whether I could make Zeppelin
>> use K8sStandardInterpreterLauncher instead (I assume it's an alternative to
>> the remote interpreter?)
>>
>> Thanks,
>> Lior
>>
>
>
> --
> Best Regards
>
> Jeff Zhang
>