Re: [ANNOUNCE] Apache Zeppelin 0.6.0 released

2016-07-08 Thread Felix Cheung
Is this possibly caused by CDH requiring a build-from-source instead of the 
official binary releases?





On Thu, Jul 7, 2016 at 8:22 PM -0700, "Benjamin Kim" <bbuil...@gmail.com> wrote:

Moon,

My environment consists of an 18-node CentOS 6.7 cluster, each node with 24 cores, 
64 GB of memory, and 12 TB of storage:

  *   3 of those nodes are used as Zookeeper servers, HDFS name nodes, and a 
YARN resource manager
  *   15 are for data nodes
  *   jdk1.8_60 and CDH 5.7.1 installed

Another node is an app server with 24 cores, 128 GB of memory, and 1 TB of storage. 
It has Zeppelin 0.6.0 and Livy 0.2.0 running on it. In addition, the Hive Metastore, 
HiveServer2, Hue, and Oozie from CDH 5.7.1 run on it.

This is our QA cluster where we are testing before deploying to production.

If you need more information, please let me know.

Thanks,
Ben



On Jul 7, 2016, at 7:54 PM, moon soo Lee <m...@apache.org> wrote:

Randy,

Helium is not included in the 0.6.0 release. Could you check which version you are using?
I created a fix for the 500 errors from the Helium URL in the master branch:
https://github.com/apache/zeppelin/pull/1150

Ben,
I cannot reproduce the error. Could you share how to reproduce it, or describe your environment?

Thanks,
moon

On Thu, Jul 7, 2016 at 4:02 PM Randy Gelhausen <rgel...@gmail.com> wrote:
I don't. I hoped providing that information might help find and fix the problem.

On Thu, Jul 7, 2016 at 5:53 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
Hi Randy,

Do you know of any way to fix it or know of a workaround?

Thanks,
Ben

On Jul 7, 2016, at 2:08 PM, Randy Gelhausen <rgel...@gmail.com> wrote:

HTTP 500 errors from a Helium URL





Re: Zeppelin reports unable to connect to master but master webUI is up and shows connected workers

2016-07-08 Thread B00083603 Michael O Brien
If I swap the Spark home value to use the binaries I installed on the cluster, that 
error goes away, but I get a java.net.ConnectException: Connection refused error 
instead.


If I use the Spark console on the master I can run the Pi example. I'm not sure if 
it matters, but I don't have Hadoop set up; I do have shared storage available 
to all nodes via the same path.
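
One quick check (a sketch, assuming nc/netcat is available on the Zeppelin host; the 
hostname and port are the ones from the logs quoted below):

    # run from the machine where Zeppelin is installed
    nc -vz vm-20161023-002 7077
    # if this does not report the port as open, "Connection refused" points at a
    # network/listener problem rather than a Spark version mismatch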



From: Mohit Jaggi 
Sent: 07 July 2016 18:30
To: users@zeppelin.apache.org
Subject: Re: Zeppelin reports unable to connect to master but master webUI is 
up and shows connected workers

It seems you have conflicting binaries between the two machines. Check the Spark 
version on the server running Zeppelin and on the master; they should be the same.
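
For example (a minimal sketch, assuming SPARK_HOME points at the Spark installation 
each side actually uses):

    # run on the server running Zeppelin, then again on the master node
    $SPARK_HOME/bin/spark-submit --version
    # the Spark and Scala versions printed on both machines should match exactly;
    # a serialVersionUID mismatch like the one in the log below usually means they differ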

On Jul 7, 2016, at 7:26 AM, B00083603 Michael O Brien <b00083...@student.itb.ie> wrote:

Hi,
I created a new notebook and added sc as the only paragraph. When I run it, I get an 
org.apache.thrift.transport.TTransportException error.

Looking at the Spark interpreter logs, they show the master as unresponsive.

I can view the master web page at http://vm-20161023-002:8080/, which shows the 2 
worker nodes connected (they disappear when I shut them down). There are no 
applications listed on the Web UI and no historic list of applications, as this 
is my first cluster.

I can ping the master VM from the Zeppelin host.
I also removed the conf/zeppelin.sh setting where I had specified my own Spark 
install for Zeppelin.
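
(For reference, that kind of setting normally lives in conf/zeppelin-env.sh and looks 
roughly like the sketch below; the SPARK_HOME path is illustrative.)

    # conf/zeppelin-env.sh
    export MASTER=spark://vm-20161023-002:7077   # Spark master URL Zeppelin connects to
    export SPARK_HOME=/path/to/spark             # should be the same Spark version as the cluster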

Any pointers on what I'm doing wrong?

Michael



INFO [2016-07-07 16:10:39,031] ({appclient-register-master-threadpool-0} 
Logging.scala[logInfo]:58) - Connecting to master 
spark://vm-20161023-002:7077...
 INFO [2016-07-07 16:10:39,035] ({appclient-register-master-threadpool-0} 
Logging.scala[logInfo]:58) - Connecting to master spark://vm-20161023-002:7077...
 WARN [2016-07-07 16:10:39,037] ({pool-2-thread-3} 
Logging.scala[logWarning]:70) - Application ID is not initialized yet.
ERROR [2016-07-07 16:10:39,039] ({appclient-registration-retry-thread} 
Logging.scala[logError]:74) - Application has been killed. Reason: All masters 
are unresponsive! Giving up.
 WARN [2016-07-07 16:10:39,049] ({appclient-register-master-threadpool-0} 
Logging.scala[logWarning]:91) - Failed to connect to master vm-20161023-002:7077
java.lang.RuntimeException: java.io.InvalidClassException: 
org.apache.spark.rpc.netty.RequestMessage; local class incompatible: stream 
classdesc serialVersionUID = -2221986757032131007, local class serialVersionUID 
= -5447855329526097695






Re: [ANNOUNCE] Apache Zeppelin 0.6.0 released

2016-07-08 Thread Benjamin Kim
Felix,

I forgot to add that I built Zeppelin from the source tarball at 
http://mirrors.ibiblio.org/apache/zeppelin/zeppelin-0.6.0/zeppelin-0.6.0.tgz 
using this command: "mvn clean package -DskipTests -Pspark-1.6 -Phadoop-2.6 
-Dspark.version=1.6.0-cdh5.7.1 -Dhadoop.version=2.6.0-cdh5.7.1 -Ppyspark 
-Pvendor-repo -Pbuild-distr -Dhbase.hbase.version=1.2.0-cdh5.7.1 
-Dhbase.hadoop.version=2.6.0-cdh5.7.1".

I did this because we are using HBase 1.2 within CDH 5.7.1.

Hope this helps clarify.

Thanks,
Ben



> On Jul 8, 2016, at 2:01 AM, Felix Cheung <felixcheun...@hotmail.com> wrote:
> 
> Is this possibly caused by CDH requiring a build-from-source instead of the 
> official binary releases?
> 
> 
> 
> 
> 
> On Thu, Jul 7, 2016 at 8:22 PM -0700, "Benjamin Kim" <bbuil...@gmail.com> wrote:
> 
> Moon,
> 
> My environmental setup consists of an 18 node CentOS 6.7 cluster with 24 
> cores, 64GB, 12TB storage each:
> 3 of those nodes are used as Zookeeper servers, HDFS name nodes, and a YARN 
> resource manager
> 15 are for data nodes
> jdk1.8_60 and CDH 5.7.1 installed
> 
> Another node is an app server, 24 cores, 128GB memory, 1TB storage. It has 
> Zeppelin 0.6.0 and Livy 0.2.0 running on it. Plus, Hive Metastore and 
> HiveServer2, Hue, and Oozie are running on it from CDH 5.7.1.
> 
> This is our QA cluster where we are testing before deploying to production.
> 
> If you need more information, please let me know.
> 
> Thanks,
> Ben
> 
>  
> 
>> On Jul 7, 2016, at 7:54 PM, moon soo Lee <m...@apache.org> wrote:
>> 
>> Randy,
>> 
>> Helium is not included in the 0.6.0 release. Could you check which version you are using?
>> I created a fix for 500 errors from Helium URL in master branch.
>> https://github.com/apache/zeppelin/pull/1150
>> 
>> Ben,
>> I cannot reproduce the error. Could you share how to reproduce it, or share your environment?
>> 
>> Thanks,
>> moon
>> 
>> On Thu, Jul 7, 2016 at 4:02 PM Randy Gelhausen <rgel...@gmail.com> wrote:
>> I don't. I hoped providing that information might help find and fix the problem.
>> 
>> On Thu, Jul 7, 2016 at 5:53 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>> Hi Randy,
>> 
>> Do you know of any way to fix it or know of a workaround?
>> 
>> Thanks,
>> Ben
>> 
>>> On Jul 7, 2016, at 2:08 PM, Randy Gelhausen <rgel...@gmail.com> wrote:
>>> 
>>> HTTP 500 errors from a Helium URL
>> 
>> 
> 



Pass parameters to paragraphs via URL

2016-07-08 Thread on
Hi,

I am trying to pass parameters via URL to a published paragraph (and to
run it after that). For example, I would like to read the variable test from
/paragraph/20160708-144835_1515469620?asIframe&test=123 within my Python
context, do a bit of calculation, and then print the result in the paragraph
(so that it appears on the website).

Is this even possible?

Thanks and best regards,
ON


Re: Zeppelin reports unable to connect to master but master webUI is up and shows connected workers

2016-07-08 Thread Mohit Jaggi
stacktrace?

> On Jul 8, 2016, at 2:08 AM, B00083603 Michael O Brien <b00083...@student.itb.ie> wrote:
> 
> If I swap the spark home value to use the binaries I installed on the cluster 
> that error goes away but I get java.net.ConnectException: Connection refused 
> error instead.
> 
> If I use the spark console on Master I can run the pi example. I'm not sure 
> if it matters but I don't have hadoop setup but do have shared storage 
> available to all nodes via the same path
> 
> 
> From: Mohit Jaggi 
> Sent: 07 July 2016 18:30
> To: users@zeppelin.apache.org
> Subject: Re: Zeppelin reports unable to connect to master but master webUI is 
> up and shows connected workers
>  
> it seems you have conflicting binaries between the two machines. check the 
> version on server running Z and on the master. they should be the same.
> 
>> On Jul 7, 2016, at 7:26 AM, B00083603 Michael O Brien <b00083...@student.itb.ie> wrote:
>> 
>> Hi,
>> I created a new notebook and added sc as the only paragraph. When I run it I 
>> get errors displayed org.apache.thrift.transport.TTransportException
>> 
>> Looking at the spark interpreter logs it shows the master as unresponsive. 
>> 
>> I can view the master webpage on http://vm-20161023-002:8080/ which shows 
>> the 2 worker nodes connected 
>> which die when I shut them down. There are no applications listed on the Web 
>> UI and no historic list of applications as this is my first cluster
>> 
>> I can ping from zeppelin the master vm.
>> I also removed the conf/zeppelin.sh setting where i specified my own spark 
>> install on zeppelin 
>> 
>> Any pointers on what I'm doing wrong?
>> 
>> Michael
>> 
>> 
>> INFO [2016-07-07 16:10:39,031] ({appclient-register-master-threadpool-0} 
>> Logging.scala[logInfo]:58) - Connecting to master spark://vm-20161023-002:7077...
>>  INFO [2016-07-07 16:10:39,035] ({appclient-register-master-threadpool-0} 
>> Logging.scala[logInfo]:58) - Connecting to master spark://vm-20161023-002:7077...
>>  WARN [2016-07-07 16:10:39,037] ({pool-2-thread-3} 
>> Logging.scala[logWarning]:70) - Application ID is not initialized yet.
>> ERROR [2016-07-07 16:10:39,039] ({appclient-registration-retry-thread} 
>> Logging.scala[logError]:74) - Application has been killed. Reason: All 
>> masters are unresponsive! Giving up.
>>  WARN [2016-07-07 16:10:39,049] ({appclient-register-master-threadpool-0} 
>> Logging.scala[logWarning]:91) - Failed to connect to master 
>> vm-20161023-002:7077
>> java.lang.RuntimeException: java.io.InvalidClassException: 
>> org.apache.spark.rpc.netty.RequestMessage; local class incompatible: stream 
>> classdesc serialVersionUID = -2221986757032131007, local class 
>> serialVersionUID = -5447855329526097695
>> 
>> 
> 



Re: [ANNOUNCE] Apache Zeppelin 0.6.0 released

2016-07-08 Thread Felix Cheung
For #1, do you know if Spark can find the Hive metastore config (typically 
hive-site.xml)? Spark's log should indicate that.
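
A quick way to check (a sketch; the Hive config path and interpreter log name below 
are typical defaults, not confirmed from this thread):

    # make the metastore config visible to the Spark that Zeppelin/Livy use
    ls $SPARK_HOME/conf/hive-site.xml || cp /etc/hive/conf/hive-site.xml $SPARK_HOME/conf/
    # then look for metastore-related lines in the Spark interpreter log
    grep -iE "hive-site|metastore" $ZEPPELIN_HOME/logs/zeppelin-interpreter-spark-*.log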


_
From: Benjamin Kim <bbuil...@gmail.com>
Sent: Friday, July 8, 2016 6:53 AM
Subject: Re: [ANNOUNCE] Apache Zeppelin 0.6.0 released
To: users@zeppelin.apache.org
Cc: d...@zeppelin.apache.org


Felix,

I forgot to add that I built Zeppelin from the source tarball at 
http://mirrors.ibiblio.org/apache/zeppelin/zeppelin-0.6.0/zeppelin-0.6.0.tgz 
using this command "mvn clean package -DskipTests -Pspark-1.6 -Phadoop-2.6 
-Dspark.version=1.6.0-cdh5.7.1 -Dhadoop.version=2.6.0-cdh5.7.1 -Ppyspark 
-Pvendor-repo -Pbuild-distr -Dhbase.hbase.version=1.2.0-cdh5.7.1 
-Dhbase.hadoop.version=2.6.0-cdh5.7.1".

I did this because we are using HBase 1.2 within CDH 5.7.1.

Hope this helps clarify.

Thanks,
Ben



On Jul 8, 2016, at 2:01 AM, Felix Cheung <felixcheun...@hotmail.com> wrote:

Is this possibly caused by CDH requiring a build-from-source instead of the 
official binary releases?





On Thu, Jul 7, 2016 at 8:22 PM -0700, "Benjamin Kim" <bbuil...@gmail.com> wrote:

Moon,

My environmental setup consists of an 18 node CentOS 6.7 cluster with 24 cores, 
64GB, 12TB storage each:

  *   3 of those nodes are used as Zookeeper servers, HDFS name nodes, and a 
YARN resource manager
  *   15 are for data nodes
  *   jdk1.8_60 and CDH 5.7.1 installed

Another node is an app server, 24 cores, 128GB memory, 1TB storage. It has 
Zeppelin 0.6.0 and Livy 0.2.0 running on it. Plus, Hive Metastore and 
HiveServer2, Hue, and Oozie are running on it from CDH 5.7.1.

This is our QA cluster where we are testing before deploying to production.

If you need more information, please let me know.

Thanks,
Ben



On Jul 7, 2016, at 7:54 PM, moon soo Lee <m...@apache.org> wrote:

Randy,

Helium is not included in the 0.6.0 release. Could you check which version you are using?
I created a fix for the 500 errors from the Helium URL in the master branch:
https://github.com/apache/zeppelin/pull/1150

Ben,
I cannot reproduce the error. Could you share how to reproduce it, or describe your environment?

Thanks,
moon

On Thu, Jul 7, 2016 at 4:02 PM Randy Gelhausen <rgel...@gmail.com> wrote:
I don't. I hoped providing that information might help find and fix the problem.

On Thu, Jul 7, 2016 at 5:53 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
Hi Randy,

Do you know of any way to fix it or know of a workaround?

Thanks,
Ben

On Jul 7, 2016, at 2:08 PM, Randy Gelhausen <rgel...@gmail.com> wrote:

HTTP 500 errors from a Helium URL