Hi all,

It turns out there was a problem with the standalone YARN Docker image I was
using, and with linking the Zeppelin container to it. I've consolidated the
two into a single image that runs Zeppelin in yarn-client mode.

Feel free to pull and play around with the setup:
docker pull namehta/zeppelin-yarn-standalone

docker run -it --rm -p 8080:8080 -p 8081:8081 -p 8180:8180 -p 8181:8181 -h
sandbox namehta/zeppelin-yarn-standalone /opt/zeppelin/bin/zeppelin.sh

Once running, access Zeppelin at: http://<host-ip>:8180


On Fri, Mar 27, 2015 at 6:34 PM, RJ Nowling <[email protected]> wrote:

> Can take a while for the Spark context to start. I ran into that issue.
> Give it a few minutes the first time.
>
>
>
> On Mar 22, 2015, at 8:34 PM, Nirav Mehta <[email protected]> wrote:
>
> Hi,
>
> I'm trying to run Zeppelin over an existing Spark cluster.
>
> My zeppelin-env.sh has the entry:
> export MASTER=spark://spark:7077
>
> In the first paragraph, I executed bash commands:
> %sh
> hadoop fs -ls /user/root
>
> This returned:
> drwxr-xr-x - root supergroup 0 2015-01-15 09:05 /user/root/input
> -rw-r--r-- 3 root supergroup 29966462 2015-03-23 01:06
> /user/root/product.txt
>
> In the next paragraph, I executed the following:
> %spark
> val prodRaw = sc.textFile("hdfs://user/root/product.txt")
> prodRaw.count
>
> This doesn't return any result or any errors on the console. Instead, I
> see a new context created every time I execute something:
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> ------ Create new SparkContext spark://spark:7077 -------
> ------ Create new SparkContext spark://spark:7077 -------
> ------ Create new SparkContext spark://spark:7077 -------
>
> Is this expected behavior? It seems like Zeppelin should hold on to the
> context.
>
> I see the same issue when executing the sample notebook.
>
> Appreciate any help!
>
>
