Apache Zeppelin in Spain

2016-11-09 Thread Mina Lee
Hi,

I'd like to let you know that some of the PMC members and committers will be
in Spain for ApacheCon Big Data Europe 2016. We are planning to hold a small
Zeppelin hands-on session for beginners on the 15th or 16th. We also want
to help you get involved in Zeppelin development if you have a missing
feature in mind but don't know where or how to start. We will post the
schedule on the whiteboard or somewhere in the Melia Sevilla, so please
find us!

You are more than welcome to stop by, say hi and get stickers and t-shirts
:)

We will also have a small meetup in Madrid this Saturday (the 12th, 19-22h).
If you are interested, visit
https://www.meetup.com/Madrid-Apache-Zeppelin-Meetup/events/234853331/ and
RSVP.

Looking forward to seeing you guys!

Best,
Mina


Re: Zeppelin with Separate Spark Connection

2016-11-09 Thread moon soo Lee
Hi Keren,

Have you tried setting the 'master' property in the 'interpreter' GUI menu?

Basically, setting the SPARK_HOME env variable and the 'master' property
should be enough for a basic configuration.
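As a concrete sketch (the path and master URL below are placeholders, not from this thread), those two settings in conf/zeppelin-env.sh would look something like:

```shell
# conf/zeppelin-env.sh -- placeholder path and host, adjust to your installation
export SPARK_HOME=/opt/spark             # the external Spark Zeppelin should use
export MASTER=spark://spark-master:7077  # should match the 'master' interpreter property

echo "SPARK_HOME=$SPARK_HOME MASTER=$MASTER"
```

Restart Zeppelin afterwards so the Spark interpreter picks up the new environment.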

Please take a look at
http://zeppelin.apache.org/docs/0.6.2/interpreter/spark.html#2-set-master-in-interpreter-menu

You're trying Zeppelin 0.6.2, right?

Thanks,
moon

On Wed, Nov 9, 2016 at 1:35 PM Tseytlin, Keren <
keren.tseyt...@capitalone.com> wrote:

> Hi All,
>
>
>
> I’ve just set up Zeppelin, and I’ve also set up my own Spark with
> connection to Alluxio. I installed Zeppelin using the binary. When I use
> Zeppelin, it seems to be using some internal Spark, not the one that I set
> up. What configurations should I set in order to make the notebooks and
> Spark jobs execute on my own Spark?
>
>
>
> I edited zeppelin-env.sh and added SPARK_HOME, but that caused anything I
> tried to run in my notebook to just shoot back “ERROR” with no output.
>
>
>
> Any help would be much appreciated! Thanks!!
>
>
>
> Best,
>
> Keren
>
> --
>
> The information contained in this e-mail is confidential and/or
> proprietary to Capital One and/or its affiliates and may only be used
> solely in performance of work or services for Capital One. The information
> transmitted herewith is intended only for use by the individual or entity
> to which it is addressed. If the reader of this message is not the intended
> recipient, you are hereby notified that any review, retransmission,
> dissemination, distribution, copying or other use of, or taking of any
> action in reliance upon this information is strictly prohibited. If you
> have received this communication in error, please contact the sender and
> delete the material from your computer.
>


Re: Zeppelin with Separate Spark Connection

2016-11-09 Thread Luciano Resende
On Wed, Nov 9, 2016 at 1:34 PM, Tseytlin, Keren <
keren.tseyt...@capitalone.com> wrote:

> Hi All,
>
>
>
> I’ve just set up Zeppelin, and I’ve also set up my own Spark with
> connection to Alluxio. I installed Zeppelin using the binary. When I use
> Zeppelin, it seems to be using some internal Spark, not the one that I set
> up. What configurations should I set in order to make the notebooks and
> Spark jobs execute on my own Spark?
>
>
>
> I edited zeppelin-env.sh and added SPARK_HOME, but that caused anything I
> tried to run in my notebook to just shoot back “ERROR” with no output.
>
>
>
> Any help would be much appreciated! Thanks!!
>
>
>
> Best,
>
> Keren
>
>
Try updating the following in zeppelin-env.sh:
export MASTER="spark://spark-02.softlayer.com:7077"
export SPARK_HOME=/opt/spark-1.6.2-bin-hadoop2.6

Then, in the UI, update the Spark interpreter configuration so the master
URL is properly set.
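As a quick sanity check before restarting Zeppelin (this helper is my own sketch, not part of Zeppelin), you can verify that the SPARK_HOME you set actually contains a Spark distribution, so that a typo doesn't just surface as a bare "ERROR" in the notebook:

```shell
# Prints "ok" if the directory contains an executable bin/spark-submit,
# "missing" otherwise.
check_spark_home() {
  if [ -x "$1/bin/spark-submit" ]; then
    echo "ok: $1 looks like a Spark distribution"
  else
    echo "missing: $1/bin/spark-submit not found"
  fi
}

check_spark_home /opt/spark-1.6.2-bin-hadoop2.6
```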

-- 
Luciano Resende
http://twitter.com/lresende1975
http://lresende.blogspot.com/


Zeppelin with Separate Spark Connection

2016-11-09 Thread Tseytlin, Keren
Hi All,

I’ve just set up Zeppelin, and I’ve also set up my own Spark with connection to 
Alluxio. I installed Zeppelin using the binary. When I use Zeppelin, it seems 
to be using some internal Spark, not the one that I set up. What configurations 
should I set in order to make the notebooks and Spark jobs execute on my own 
Spark?

I edited zeppelin-env.sh and added SPARK_HOME, but that caused anything I tried
to run in my notebook to just shoot back “ERROR” with no output.

Any help would be much appreciated! Thanks!!

Best,
Keren




Problem with scheduler (stops after ten executions)

2016-11-09 Thread Florian Schulz
Hi,

I have a problem with the scheduler. I have a notebook, and I execute the
Spark code in it every minute with the cron scheduler. The scheduler runs
ten times and then stops doing anything at all, in any notebook. I use
Apache Zeppelin 0.6.2. Do you have any idea why this happens? I can
reproduce it reliably, but I can't find anything in the logs. Hope you can
help me.

Best regards

Florian
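One thing worth double-checking in a case like this (a general note on the scheduler field, not a diagnosis of the stall above): Zeppelin's scheduler takes Quartz cron expressions, which begin with a seconds field, so "every minute" is written with six fields rather than the five of classic Unix cron:

```
0 * * * * ?
```

If the expression in the notebook's scheduler dialog is in the five-field Unix form instead, Quartz may reject it or interpret it differently, so it is worth confirming the expression first.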