1)
Zeppelin uses the spark-shell REPL API, so it behaves much like the Scala
shell.
In a shell you do not write applications in the technical sense; instead
you evaluate individual expressions, with the goal of interacting with a
dataset.
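To make that concrete, here is a minimal sketch of what a Zeppelin
paragraph might contain (the file path and names are invented; sc is the
SparkContext that Zeppelin provides for you):

  // Zeppelin binds a ready-made SparkContext to sc; there is no
  // object/main() wrapper, you evaluate expressions directly.
  import org.apache.spark.rdd.RDD   // imports are still your job

  val lines: RDD[String] = sc.textFile("/tmp/events.log")  // made-up path
  val errors = lines.filter(_.contains("ERROR"))
  errors.count()   // the last expression's value is echoed, as in the shell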
You can (manually) export code that you find useful in Zeppelin into an
application, for example to provide batch pre-processing.
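Such an exported job would then get exactly the wrapper from your question
and create its own context. A rough sketch, with invented job and path
names:

  import org.apache.spark.{SparkConf, SparkContext}

  object PreprocessEvents {  // hypothetical batch pre-processing job
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("PreprocessEvents")
      val sc = new SparkContext(conf)  // in Zeppelin, sc is handed to you
      val errors = sc.textFile(args(0)).filter(_.contains("ERROR"))
      errors.saveAsTextFile(args(1))   // persist for later interactive use
      sc.stop()
    }
  }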
I recommend you look at demos/descriptions of the interactive shell
functionality to get an idea of what Zeppelin offers over an application.
Also: You still have to manage most of your imports ;)

2)
There are two benefits:
- You can import and export/share notebooks, so it makes sense to split
content across them.
- You also reduce the load on the browser by splitting heavy
visualizations into multiple notebooks. Once you start rendering tens of
thousands of points, you reach the limits of what a browser can handle.
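To illustrate the shared context: in the default shared interpreter mode,
state created in one note is visible in the others, so a split might look
like this (paths and names invented):

  // Note A ("prepare-data"): load once and register a temp table
  import sqlContext.implicits._       // sqlContext is provided by Zeppelin
  case class Event(ts: String, msg: String)

  val events = sc.textFile("/tmp/events.log")   // made-up input path
    .map(_.split("\t"))
    .map(f => Event(f(0), f(1)))
    .toDF()
  events.registerTempTable("events")

  // Note B ("plot-errors"), a separate notebook on the same interpreter:
  // the shared SQLContext sees the table registered in Note A
  sqlContext.sql("SELECT ts, msg FROM events WHERE msg LIKE '%ERROR%'").show()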

Hopefully this helps you get started.

On Thu, Sep 24, 2015 at 1:04 PM, Hammad <ham...@flexilogix.com> wrote:

> Hi mates,
>
> I was struggling with anatomy of Zeppelin in context of Spark and could
> not find anywhere that could answer my questions in mind as below;
>
> 1. Usually a scala application structure is;
>
> import org.apache.<whatever>
>
> object MyApp {
>   def main(args: Array[String]) {
>     // something
>   }
> }
>
> whereas on Zeppelin we only write //something. Does it mean that one
> Zeppelin daemon is one application? What if I want to write multiple
> applications on one Zeppelin daemon instance?
>
> 2. Related to (1), if the same Spark context is shared across all
> notebooks, what's the benefit of having multiple notebooks?
>
> I would really appreciate it if someone could help me understand the
> above two.
>
> Thanks,
> Hammad
>
