Thanks, Moon! I got it working. The reason it didn't work before is that I
tried to use both Spark interpreters inside one notebook. I could create a
separate notebook for each interpreter, but it would be great if we could
use "%xxx", where xxx is a user-defined interpreter identifier, to select
a different interpreter for each paragraph.
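
For illustration, paragraph-level selection could look something like the
sketch below. The identifiers spark_cluster and spark_local are made up,
and the %xxx syntax is only what I am proposing, not something that works
today:

    %spark_cluster
    // paragraph bound to the cluster-backed interpreter
    val logs = sc.textFile("hdfs://namenode:8020/logs/2016-02-04")
    logs.count()

    %spark_local
    // paragraph bound to a local interpreter for quick checks
    val sample = sc.parallelize(1 to 1000)
    sample.sum()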

Also, because both interpreters currently use "spark" as the identifier,
they share the same log file. I am not sure whether there are other cases
where they interfere with each other.
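
For context, the setup I am after is roughly two interpreter settings that
differ only in where they run. The names and property values below are just
examples of what I would configure, not a working setup:

    spark_cluster properties:
      master                  spark://cluster-master:7077
      spark.executor.memory   4g

    spark_local properties:
      master                  local[*]
      spark.executor.memory   1g

If each setting also kept its own identifier, I would expect them to write
to separate log files as well.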

Thanks,
Zhong

On Thu, Feb 4, 2016 at 9:04 PM, moon soo Lee <m...@apache.org> wrote:

> Hi,
>
> Once you create another spark interpreter in the Interpreter menu of the
> GUI, each notebook should be able to select and use it (via the settings
> icon in the top right corner of each notebook).
>
> If it does not work, could you look for an error message in the log file?
>
> Thanks,
> moon
>
> On Fri, Feb 5, 2016 at 11:54 AM Zhong Wang <wangzhong....@gmail.com>
> wrote:
>
>> Hi zeppelin pilots,
>>
>> I am trying to run multiple Spark interpreters in the same Zeppelin
>> instance. This is very helpful when the data comes from multiple Spark
>> clusters.
>>
>> Another useful use case is to run one interpreter in cluster mode and
>> another in local mode, which can significantly speed up analysis of
>> small data sets.
>>
>> Is there any way to run multiple Spark interpreters? I tried to create
>> another spark interpreter with a different identifier, which the UI
>> allows, but it doesn't work (shall I file a ticket?).
>>
>> I am now trying to run multiple SparkContexts in the same Spark
>> interpreter.
>>
>> Zhong
>>
>>
>>
