Hi Moon,

Sorry, a few more questions.

My cluster is a MapR cluster.

If I want to install Zeppelin on one edge node and have multiple users access
that Zeppelin instance, how do I set up those users so that they run jobs and
access data in the MapR cluster using their own accounts?

If I instead want to install Zeppelin on every user's desktop and let them
access MapR from there, how do I install Zeppelin on their Windows desktops?

Is there any guide somewhere?

Thanks,

York

On 7 September 2016 at 10:06, York Huang <yorkhuang.d...@gmail.com> wrote:

> Hi Moon,
>
> More questions.
>
> If I set up the MapR cluster in secure mode, how do I set up Zeppelin?
>
> Thanks,
>
> York
>
> On 6 September 2016 at 17:16, York Huang <yorkhuang.d...@gmail.com> wrote:
>
>> Hi Moon,
>>
>> Thanks for your response.
>>
>> I have a MapR 4.1 cluster and would like to use Zeppelin on it. If I
>> install Zeppelin on an edge node, what security should I set up? The online
>> documentation is a bit confusing. Basically, I want every user to have
>> their own account (either an AD account or a newly created Zeppelin account).
>>
>> Is there any guide?
>>
>> Thanks,
>>
>> York
>>
>> On 5 September 2016 at 07:31, moon soo Lee <m...@apache.org> wrote:
>>
>>> Hi York,
>>>
>>> Thanks for the question.
>>>
>>> 1. How you install Zeppelin is up to you and your use case. You can
>>> either run a single instance of Zeppelin, configure authentication, and
>>> let many users log in, or let each user run their own Zeppelin instance.
>>> I see both approaches among users, and it really depends on your
>>> environment.
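>>>
>>> If you go with a single shared instance, authentication is configured
>>> through Apache Shiro in conf/shiro.ini. As a rough sketch only (the user
>>> names, passwords, and role below are placeholders, not real accounts),
>>> locally defined users look something like this:
>>>
>>> [users]
>>> # placeholder accounts for illustration only
>>> alice = alicePassword, analyst
>>> bob = bobPassword, analyst
>>>
>>> [urls]
>>> # require login for all pages
>>> /** = authc
>>>
>>> You would also disable anonymous access (the zeppelin.anonymous.allowed
>>> property in conf/zeppelin-site.xml, if I recall the name correctly) so
>>> that the login page is actually enforced.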
>>>
>>> 2. Starting from the 0.6.0 release, Zeppelin ships a Python interpreter.
>>> You can try %python.
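>>>
>>> A plain Python paragraph (no Spark involved) looks something like this:
>>>
>>> %python
>>> # ordinary Python code, no SparkContext required
>>> import math
>>> print(math.sqrt(2))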
>>>
>>> 3. You can run Zeppelin on Windows by running bin/zeppelin.cmd.
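>>>
>>> Roughly, assuming Zeppelin is unpacked somewhere like C:\zeppelin (just an
>>> example path) and JAVA_HOME is set, that is:
>>>
>>> cd C:\zeppelin
>>> bin\zeppelin.cmd
>>>
>>> and the web UI should then be reachable at http://localhost:8080 by
>>> default.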
>>>
>>> 4. Interpreters can share data through the resource pool. You can think
>>> of the resource pool as a distributed map shared across all interpreters.
>>> Although every interpreter can access the resource pool, only a few
>>> interpreters expose an API that lets users access it directly.
>>>
>>> SparkInterpreter, PysparkInterpreter, and SparkRInterpreter are the
>>> interpreters that expose the resource pool API to users. You can access
>>> the resource pool via the z.get() and z.put() API. Check [1].
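>>>
>>> As a small sketch (the key name "shared_value" is arbitrary), one
>>> paragraph can put a value into the pool and another paragraph, run by a
>>> different interpreter of the Spark group, can read it back:
>>>
>>> %spark
>>> // store a value in the resource pool under a chosen key
>>> z.put("shared_value", "hello from the Spark interpreter")
>>>
>>> %pyspark
>>> # read the same value back from the resource pool
>>> print(z.get("shared_value"))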
>>>
>>>
>>> Thanks,
>>> moon
>>>
>>> [1] http://zeppelin.apache.org/docs/latest/interpreter/spark.html#object-exchange
>>>
>>> On Sat, Sep 3, 2016 at 6:45 PM York Huang <yorkhuang.d...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am new to Zeppelin and have a few questions.
>>>> 1. Should I install Zeppelin on a Hadoop edge node and have every user
>>>> access it from a browser? Or does every user have to install their own
>>>> Zeppelin?
>>>>
>>>> 2. How do I run standard Python without using Spark?
>>>>
>>>> 3. Can I install Zeppelin on Windows server?
>>>>
>>>> 4. Is it possible to share data between interpreters?
>>>>
>>>> Thanks
>>>>
>>>> York
>>>>
>>>> Sent from my iPhone
>>>
>>>
>>
>
