(https://issues.apache.org/jira/browse/ZEPPELIN-2324)
so please try 0.8.0 RC4
https://dist.apache.org/repos/dist/dev/zeppelin/zeppelin-0.8.0-rc4/
There's still one issue with publishing paragraphs in 0.8.0 RC4; I will start a
new RC, but it should work fine if you don't need to publish paragraphs.
Michael
error and complete exception stack?
--
Ruslan Dautkhanov
On Mon, Jun 4, 2018 at 10:40 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Hmmm. Still not working.
Added it to the interpreter setting and restarted the interpreter.
The issue is that I need to use the MapR version of spark since I’m running
this on the cluster.
Should I restart Zeppelin itself?
On Jun 4, 2018, at 11:32 AM, Ruslan Dautkhanov
I’m assuming that I want to set this in ./conf/zeppelin-site.xml …
Didn’t have any impact. Still getting the same error.
On Jun 4, 2018, at 11:17 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Hmmm…. did not know that option existed.
Are there any downsides to doing this
Ruslan Dautkhanov
On Mon, Jun 4, 2018 at 9:05 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Hi,
I’m trying to use Zeppelin to connect to a MapR Cluster…
Yes, I know that MapR has their own supported release but I also want to use
the same set up to also run stand alone too…
My issue is that I’m running Zeppelin 0.7.2 and when I try to connect to spark,
I get the following error….
May 30, 2018 at 11:07 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Hi,
Ok… I wanted to include the Apache commons compress libraries for use in my
spark/scala note.
I know I can include it in the first note by telling the interpreter to load…
but I did some checking…
There’s a local repo.
./zeppelin/local-repo/ … that actually has two older jars for
l#3-dynamic-dependency-loading-via-sparkdep-interpreter
helps here.
--
Ruslan Dautkhanov
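[Editor's note: for reference, the dynamic dependency loading that the linked doc section describes is a %spark.dep paragraph run before the first %spark paragraph. A minimal sketch; the commons-compress version below is an example, not necessarily what sits in the local repo:

```
%spark.dep
z.reset()                                          // clear previously loaded artifacts
z.load("org.apache.commons:commons-compress:1.18") // group:artifact:version resolved from Maven Central
```

The %spark.dep paragraph has to run before the Spark context starts, so restart the interpreter first if the note has already been run.]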
On Fri, May 25, 2018 at 12:46 PM, Michael Segel
<msegel_had...@hotmail.com> wrote:
What’s the best way to set up a class path for a specific notebook?
I have some custom classes that I may want to include.
Is there a way to specify this in the specific note?
Would it be better to add the jars to an existing lib folder?
Thx
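[Editor's note: one option, though it applies interpreter-wide rather than per note, is to pass custom jars through SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh; the jar path below is a placeholder:

```shell
# conf/zeppelin-env.sh -- /path/to/custom.jar is a placeholder for your own jar
export SPARK_SUBMIT_OPTIONS="--jars /path/to/custom.jar"
```

Restart the Spark interpreter after changing this so the new submit options take effect.]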
wrote:
You can probably deploy Zeppelin on n machines and manage behind a LoadBalancer?
Thanks
Ankit
On Mon, Apr 30, 2018 at 6:42 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Ok..
The answer is no.
You have a web interface. It runs on a web serv
Sun, Apr 29, 2018 at 11:51 PM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Yes if you mean to run the spark jobs on a cluster.
On Apr 29, 2018, at 7:25 AM, Soheil Pourbafrani
> wrote:
I mean to configure Zeppelin in multi-node mode.
On Sun, Apr 29, 2018 at 4:49 PM, Soheil Pourbafrani
Wayne,
Notes or paragraphs within a note?
Within a note, you should be able to do that provided that the paragraph you
want to take the results from has run.
Across notes?
If Zeppelin shares a Spark context, it could be possible; however, many
configure Zeppelin to have a separate context per notebook.
On Tue, Jan 30, 2018 at 10:43 PM, Michael Segel
<msegel_had...@hotmail.com> wrote:
I don’t think you can…
If you look in the ../notebook directory, the notes are all identified by a
unique id.
My guess? That the references are
Austin
On 01/12/2018 07:56 PM, Jeff Zhang wrote:
There're 2 options for you:
1. Disable hiveContext in spark via setting zeppelin.spark.useHiveContext to
false in spark's interpreter setting
2. Connect to hive metastore service instead of single derby instance. You can
configure that in your
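[Editor's note: Jeff's second option can be sketched as a hive-site.xml fragment on Zeppelin's classpath; the thrift host and port below are placeholders, assuming a metastore service is already running:

```
<!-- conf/hive-site.xml -- metastore-host:9083 is a placeholder -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```

With a shared metastore service, multiple clients can connect concurrently, which avoids the single-user lock of an embedded Derby instance.]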
tion.
-Shyla
On Thu, Oct 26, 2017 at 8:53 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Do you really want the user to be anonymous?
What about using Shiro to set up a role?
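[Editor's note: a minimal conf/shiro.ini sketch for the role-based approach; the user, password, and role names are placeholders:

```
# conf/shiro.ini -- user1, password1, and role1 are placeholders
[users]
user1 = password1, role1

[roles]
role1 = *

[urls]
/api/version = anon
/** = authc
```

Any request other than the version endpoint then requires a login instead of falling through to anonymous.]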
> On Oct 26, 2017, at 10:48 AM, shyla deshpande
> <deshpandes
….
On Oct 25, 2017, at 6:08 PM, Jeff Zhang
<zjf...@gmail.com> wrote:
What do you mean by spark note? There's no concept of a spark note.
Could you give more details of your problem? Maybe a screenshot would be helpful.
Michael Segel
<msegel_had...@hotmail.com>
Can you access the hive in CLI to verify hive works properly?
Michael Segel
<msegel_had...@hotmail.com> wrote on Mon, Oct 23, 2017 at 7:49 PM:
In the zeppelin-interpreter log…
ERROR [2017-10-23 00:00:03,759] ({BoneCP-pool-watch-thread}
PoolWatchThread.java[fillC
read-only mode on a connection.
at
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at
org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
Source)
Thx
-Mike
On Oct 23, 2017, at 6:42 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
It is hard to tell without the full log. Could you paste the log? And what
dependency do you specify for your hive interpreter?
Michael Segel
<msegel_had...@hotmail.com> wrote on Mon, Oct 23, 2017 at 11:08 AM:
I’m trying to set up a hive interpret
I went back and rebuilt 0.7.2 and it seems that the issue has gone away.
Not sure what changed or which setting could have been causing the issue with
socket connections.
> On Oct 18, 2017, at 10:40 AM, Michael Segel <msegel_had...@hotmail.com> wrote:
>
> The error:
> ER
Hi,
Why are you using -Pmapr50 instead of -Pmapr51? (There is no 5.2.)
You’re building against 0.8.0 which isn’t the latest ‘stable’ release.
I’ve had issues building a MapR release against 7.3 but could build against 7.2
(Albeit I’ve had some issues along the way.)
Also why the added
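[Editor's note: the build command being discussed would look something like the line below; the profile names are assumptions based on the Zeppelin build docs of that era and should be checked against the project's POM:

```
mvn clean package -Pmapr51 -Pspark-2.1 -DskipTests
```
]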
on a connection.
at
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
> On Oct 18, 2017, at 10:36 AM, Michael Segel <msegel_had...@hotmail.com> wrote:
>
> Hi,
>
> I’m running 7.2 compiled against MapR 5.2
>
> I have nginx in front of
Hi,
I’m running 7.2 compiled against MapR 5.2
I have nginx in front of Zeppelin so I run zeppelin on the local host port and
use nginx as a proxy w ssh.
After running zeppelin for approximately 24 hours, if I login and try to run a
paragraph in any notebook, I get a java error, connection
and-examples
On Tue, Oct 17, 2017 at 1:40 AM Michael Segel
<msegel_had...@hotmail.com> wrote:
Hi,
I’m trying to build the 0.7.3 release of Zeppelin on a CentOS 7 box.
I have /opt/scala and /opt/spark setup and their bin directories in my path.
(scala 2.11.11 and spark 2.1.1)
I’m trying to build a basic release ‘mvn clean package -DskipTests’ to start…
I’m running in to the
I’ve been trying to build zeppelin from the source.
Ran in to the following error:
[INFO] Compiling 6 source files to
/opt/zeppelin-0.7.3/zeppelin-display/target/classes at 1507925417280
[ERROR]
Can you clarify what you mean by OLTP?
From your description, you’re not doing OLTP which is a database term.
> On Oct 11, 2017, at 8:19 AM, Plamen Paskov
> wrote:
>
> Hi ppl,
>
> Is zeppelin planned to handle a lot of traffic? I will try to explain more
>
How about exporting the note, then re-importing it?
I had the same problem, but the restart of Zeppelin worked.
Someone else reported the same / similar issue in a different thread.
On Oct 9, 2017, at 11:04 AM, Healy, Terence D
> wrote:
Any suggestions on
wrote:
Hi Michael-
I'm not sure what the values are supposed to be in order to compare. The
interpreter is running; a save and restart still gives the same result.
On 10/06/2017 10:41 AM, Michael Segel wrote:
What do you see when you check out the spark interpreter? Something with
%spark, or %spark.sql (Sorry, going from memory. ) I think it may also have
to do with not having the spark interpreter running, so if you manually restart
the interpreter then re-run the notebook… it should work…
HTH
y good UI treatment.
3. Provides DAG
I think 1) does not stop 2) and 3) in the future. 2) also does not stop 3) in
the future.
So, why don't we try 1) first and keep discussing and polishing the ideas about 2) and 3)?
Thanks,
moon
On Mon, Oct 2, 2017 at 10:22 AM Michael Segel
<msegel_had...@hotmail.co
Hi,
I know you can set zeppelin to either use a local copy of spark or set the ip
address of the spark master in the configuration files.
I was wondering if you could do this from within the notebook?
So I can use the same notebook to run a paragraph on the local machine, then
have the
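[Editor's note: one possibility, in 0.8 and later if I recall the ConfInterpreter correctly, is a %spark.conf paragraph run as the first paragraph of the note, which can set properties such as the master before the interpreter process launches. The value below is an example:

```
%spark.conf
master spark://spark-master-host:7077
```

Here spark-master-host is a placeholder; switch the value to local[*] for the local-machine case and restart the interpreter between runs.]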
<hfre...@twitter.com> wrote:
"nice to have" isn't a very strong requirement. I strongly uggest you really,
really think about this before you start pounding an overengineered solution to
a non-issue :-)
h
On Mon, Oct 2, 2017 at 9:12 AM, Mich
this in the wild (nor on this
thread), other than theoretical "what if" - which is totally fine, when it
doesn't introduce a lot of unnecessary complexity for little to no gain (which
seems to be the case here)
h
On Mon, Oct 2, 2017 at 8:48 AM, Michael Segel
<msegel_had...@hotmail.c
that dependency graph and level of
parallelism are not so cool.
I am not sure which option (1) or (2) is correct to implement at the moment. I
hope to hear from product visionaries which way to choose and to get approval
for the start of implementation.
Thank you!
Valeriy Polyakov
From: Michael Segel [ma
Sorry to jump in…
If you want to run paragraphs in parallel, you are going to want to have some
sort of dependency graph. Think of a common set up where you need to set up
common functions and imports. (setup of %spark.dep)
A good example is if your notebook is a bunch of unit tests and you
Ok… so I have Nginx set up as a proxy in front of Zeppelin.
With nginx I have to enter my username and password that I set up with
htpasswd.
Then I’m seeing my Zeppelin where I have shiro still implemented. (So I can
have multiple users rather than everyone running as anonymous.)
This may be a
So I’ve downloaded the prebuilt nginx by setting up the repo and pulling down
the pre-built version 1.13
Unfortunately the documentation with Zeppelin 7.3 doesn’t match and is way off.
Anyone have any pointers? Basically there is no ./sites-available directory.
So where should the zeppelin
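[Editor's note: the prebuilt nginx packages use /etc/nginx/conf.d/*.conf rather than the Debian-style sites-available layout, so the Zeppelin server block can go there. A minimal sketch; the host names and ports are placeholders, and the Upgrade/Connection headers are the part Zeppelin's websocket-based UI needs:

```
# /etc/nginx/conf.d/zeppelin.conf -- server_name and backend port are placeholders
server {
    listen 80;
    server_name zeppelin.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        # Zeppelin's UI runs over websockets; keep the upgrade handshake intact
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Run `nginx -t` after dropping the file in, then reload nginx.]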
On Sep 28, 2017, at 6:59 AM, Michael Segel
<msegel_had...@hotmail.com> wrote:
Thanks for the fast response.
A friend also recommended this… and it was my next step if I couldn’t get this
to work.
I always suspect human error first. (Human meaning me. :-) Where I did
something stupid to muck it up.
I’m going to try one last thing before I give up on the SSL and then
Hi,
While I have 7.3 set up on my mac and it works fine for local use
(http://127.0.0.1:8080) I want to install zeppelin on an edge node of my hadoop
cluster.
Because I’m going to be exposing the ip traffic across a network, I want to
make sure I’m using SSL.
Unfortunately, it's a big fail.
I