Hi Andrea,
On 'start a paragraph B': if 'start' means create paragraph B, you'll need to
call the REST API. If 'start' means run paragraph B, you can leverage the
Angular display system's z.run() [1][2].
Hope this helps.
Thanks,
moon
[1]
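For the 'create paragraph' case, here is a minimal sketch of calling the notebook REST API from Python. The host, port, and note ID are placeholders, and the endpoint path follows the Zeppelin notebook REST API docs of that era; adjust for your version.

```python
import json
from urllib import request

ZEPPELIN = "http://localhost:8080"  # placeholder host/port
NOTE_ID = "2A94M5J1Z"               # placeholder note id

def create_paragraph(note_id, text):
    """Build the POST request that creates a new paragraph in a note."""
    url = "%s/api/notebook/%s/paragraph" % (ZEPPELIN, note_id)
    payload = json.dumps({"title": "created via REST", "text": text}).encode()
    return request.Request(url, data=payload, method="POST")

req = create_paragraph(NOTE_ID, "%sql select * from t")
print(req.get_method(), req.full_url)
# Send it with request.urlopen(req) against a running Zeppelin server.
```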
he tooltip;
>
> On click on a d3 chart bar I get that data and use it to run a paragraph B.
>
> I'm trying to implement a drill down mechanism between paragraphs.
>
> I hope this helps to better understand my problem.
>
>
> Il giorno mer 8 giu 2016 alle ore 19:05 moon soo
Hi,
ZeppelinContext is available in the Spark interpreter only, but there is no
reason not to have it in the other interpreters. [1] might give some idea
about how ZeppelinContext can be implemented in each interpreter.
Thanks,
moon
[1]
Hi Ashish,
I think your approach is generally right.
It's a bit annoying that the type of paragraph.result varies.
I think the 'angularObject' field is another field that you also need to take
care of.
Objects inside 'angularObject' might also vary in type.
Another approach would be to serialize part of /
> Properties
>
> namevalue
> args-Xmax-classfile-name=128
>
> But unfortunately it's still reproducible. Any other ways how to do it?
> ____________
> From: moon soo Lee [m...@apac
ELIN-926
> Here is an issue. If you need any other details feel free to ask.
> ________
> From: moon soo Lee [m...@apache.org]
> Sent: Tuesday, 31 May 2016 3:07 PM
> To: users@zeppelin.apache.org
> Subject: Re: Getting 'File name too long' error when
Hi,
You can see the 'args' property in the Spark interpreter setting, in the
Interpreter menu. Could you try it there and see if it works?
Thanks,
moon
On Fri, May 27, 2016 at 12:18 AM Polina Marasanova <
polina.marasan...@quantium.com.au> wrote:
> Hello,
>
> Sometimes running scala code in Zeppelin I'm
> Congrats! Great job everyone
>>
>> Jeff Steinmetz
>> >> On Wed, May 25, 2016 at 6:56 AM, moon soo Lee <m...@apache.org> wrote:
>> >>
>> >> > Congratulations and thank yo
this
> websocket call for changing the paragraph config ?
>
> Sincerely,
>
> Le mer. 15 juin 2016 à 22:04, moon soo Lee <m...@apache.org> a écrit :
>
>> Hi,
>>
>> I think 0.5.6 rest api (and current, too) does not take 'config'.
>>
>> Here's how
The Elasticsearch interpreter does not provide such a feature out of the box.
If you don't mind customizing the code yourself, you can get the current user
name by calling
"interpreterContext.getAuthenticationInfo().getUser()" inside the
"interpret()" method in ElasticsearchInterpreter.
And it's not going to very
Hi,
Thanks for bringing this discussion.
It's a great idea to minimize the binary package size.
Can we set a policy to decide which interpreters go into 'zeppelin-bin-min'
and which do not?
One alternative is, instead of making 'zeppelin-bin-min', we can make
'zeppelin-bin-netinst'.
We can provide a shell
he data we send was not correct, and
> Zeppelin threw a java NullPointerException)
>
> We have just changed it and now it works.
>
> Le ven. 24 juin 2016 à 05:15, moon soo Lee <m...@apache.org> a écrit :
>
>> Apologies for late response.
>> I'm not sure how your paragrap
eppelinServer and of each
> Interpreter. Although I prefer the REST API, JMX is the de-facto standard for
> monitoring and there are a lot of existing monitoring tools that are
> designed for JMX
>
> When I have some bandwidth, I can push some PRs for those monitoring stuff
>
>
>
Hi,
Thanks for reporting the problem.
I tried it and got the first 2 lines of output as well.
Do you mind filing a JIRA issue for this problem?
That would help track this issue.
Best,
moon
On Thu, Jul 28, 2016 at 6:37 PM Abul Basar wrote:
> Hello All,
>
> I am trying to run
%python does not expose an API to access the resource pool yet. I think it
would be great to have.
PyZeppelinContext [1] can be the place where the z.put(), z.get() python API
can be exposed.
The API implementation will need to call the JVM PythonInterpreter instance to
access the ResourcePool. Example of
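A rough sketch of what such a python-side z.put()/z.get() could look like. The object standing in for the JVM-side resource pool is hypothetical; a real implementation would delegate through the gateway plumbing PyZeppelinContext already uses.

```python
class ResourcePoolStub:
    """Hypothetical stand-in for the JVM-side ResourcePool."""
    def __init__(self):
        self._store = {}

    def put(self, name, value):
        self._store[name] = value

    def get(self, name):
        return self._store.get(name)


class PyZeppelinContextSketch:
    """Sketch of z.put()/z.get() delegating to the resource pool."""
    def __init__(self, resource_pool):
        self._pool = resource_pool

    def put(self, name, value):
        self._pool.put(name, value)

    def get(self, name):
        return self._pool.get(name)


z = PyZeppelinContextSketch(ResourcePoolStub())
z.put("training_rows", 1234)
print(z.get("training_rows"))
```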
Hi,
Regarding java.lang.ClassNotFoundException: org.apache.hadoop.security.
UserGroupInformation$AuthenticationMethod error, I assume you're using
master branch. (0.6.x branch shouldn't have this error)
You can add "org.apache.hadoop:hadoop-common:2.7.2" in the Dependencies
section of your
Hi,
The easiest way is to install Spark on your laptop and configure it to use
your remote YARN cluster in the data center. Then verify it with
bin/spark-shell.
Then install Zeppelin and export the SPARK_HOME env variable pointing to your
Spark installation. You'll need to set the 'master' property to yarn-client in
Thanks for the question.
I think https://github.com/apache/zeppelin/pull/1148 is related.
The PR is not exactly about modular way of adding syntax highlights, but
you'll get an idea how syntax highlight support will be changed in the
future and how to add one.
Hope this helps.
Best,
moon
On
Thanks for sharing the great event.
On Wed, Jul 6, 2016 at 7:20 AM DuyHai Doan wrote:
> Hello Zeppelin fans
>
> If you're living in the London area, I'll be giving a talk about using
> Spark/Cassandra and Zeppelin to store, aggregate and visualize particle
> accelerator
Hi,
Setting 'zeppelin.spark.useHiveContext' to true is supposed to create a
HiveContext.
If Zeppelin fails to create a HiveContext for some reason, it prints logs and
falls back to SQLContext. [1]
If you take a look at the 'logs/zeppelin-interpreter-spark-*' logs and find
"Can't create HiveContext. Fallback to
Hi Chen,
1. Currently, Interpreter binding mode is more like whether each Note will
have a separate Interpreter session(scoped/isolated) or not(shared), rather
than per user.
https://issues.apache.org/jira/browse/ZEPPELIN-1210 will bring interpreter
session per user in the Zeppelin server side,
com> wrote:
> On a side note…
>
> Has anyone got the Livy interpreter to be added as an interpreter in the
> latest build of Zeppelin 0.6.0? By the way, I have Shiro authentication on.
> Could this interfere?
>
> Thanks,
> Ben
>
>
> On Jun 29, 2016, at 11:18 A
Thanks for sharing a really cool notebook!
On Thu, Jun 30, 2016 at 2:38 AM Alexander Bezzubov wrote:
> Thank you for sharing nice example of Machine Learning and Visualization
> notebook using Spark!
>
> --
> Alex
>
> On Thu, Jun 30, 2016 at 6:12 AM, tog
erpreter instance/process already created?
>
> Thanks
> Chen
>
>
>
> On Fri, Jun 24, 2016 at 11:51 AM moon soo Lee <m...@apache.org> wrote:
>
>> Hi,
>>
>> Thanks for asking question. It's not dumb question at all, Zeppelin docs
>> does not explai
>
>
>
> From: moon soo Lee <leemoon...@gmail.com>
> To: users@zeppelin.apache.org
> Date: 2016/06/30 11:36 AM
> Subject: Re: spark interpreter
> --
>
>
>
> Hi Ben,
>
> Livy interpreter is included
Thanks Jongyoul for taking care of ZEPPELIN-2012 and sharing the plan.
Could you share a little more detail about how export/import of notebooks
will work after ZEPPELIN-2012? Because we assume export/import works
between different Zeppelin installations, and one installation might
how to use the Credentials feature on
> securing usernames and passwords. I couldn’t find documentation on how.
>
> Thanks,
> Ben
>
> On Jul 1, 2016, at 9:04 AM, moon soo Lee <m...@apache.org> wrote:
>
> 0.6.0 is currently in vote in dev@ list.
>
> http://apache-
I created https://issues.apache.org/jira/browse/ZEPPELIN-1329 for this
> problem.
>
> Thanks,
> -Randy
>
> On Wed, Aug 10, 2016 at 12:07 PM, moon soo Lee <m...@apache.org> wrote:
>
> > Thanks for reporting the problem.
> >
> > The API was provided by zep
The scheduler runs on the Zeppelin daemon. Jobs will run without a browser
having the notebook open.
Thanks,
moon
On Mon, Aug 15, 2016 at 2:04 PM Jayant Raj wrote:
> Hi All,
> I noticed there is a scheduler button on Zeppelin notebooks to
> periodically run the notebook. Can we
Hi Anton!
It would be appreciated if you can file a new JIRA issue.
Once the issue is created, I'll link it to
https://issues.apache.org/jira/browse/ZEPPELIN-1915, which is an umbrella
issue for built-in visualization.
Thanks,
moon
On Thu, Feb 2, 2017 at 8:59 AM Anton Bubna-Litic <
Verified
- checksum and signature for release artifacts
- build from source
- LICENSE source/binary release
- source release does not have unexpected binary
- binary package functioning
+1 (binding)
On Fri, Feb 3, 2017 at 1:28 PM Hyung Sung Shim wrote:
> +1
> Thanks
If you have a Conda environment, you can try the %python.conda interpreter
included in 0.7.0.
http://zeppelin.apache.org/docs/0.7.0/interpreter/python.html#conda
The features are not fully documented on the website, but if you just run
"%python.conda help" it will print the full feature list.
%python.conda
This might help
http://stackoverflow.com/a/40547480/2952665
Thanks,
moon
On Tue, Feb 7, 2017 at 11:06 PM Joaquín Silva <
joaquin.silva.vigen...@gmail.com> wrote:
> Hello,
> I'm trying to import import matplotlib.pyplot but it throws this error:
>
> File "", line 1, in
> File
Although I don't know the best way to pass the Shiro configuration to the
front-end, hiding some menus based on permissions makes sense.
Thanks,
moon
On Tue, Feb 7, 2017 at 7:08 PM Windy Qin wrote:
> hi all,
>The Zeppelin 0.7.0 enabled user to secure interpreter
Hi Jan,
Thanks for questions.
https://issues.apache.org/jira/browse/ZEPPELIN-2060 is the issue tracking the
dynamic form execution behavior change.
We're planning to provide a more intuitive way in 0.7.1.
The dynamic form element disappearing when the output is hidden is expected
behavior, while hiding the editor
Hi,
For 2) and 3), you might want to try "Personalized mode" [1] with shiro
authentication on.
Zeppelin currently does not have a drill-down feature.
But you can implement any type of interaction, including drill-down,
using the angular display system [2] inside a Zeppelin notebook, if
you
Thanks for sharing your needs.
https://issues.apache.org/jira/browse/ZEPPELIN-1369 is tracking this issue.
Hope we have an implementation soon.
Best,
moon
On Wed, Feb 8, 2017 at 12:40 PM Ray wrote:
> Hello,
>
> I upgraded my Zeppelin to 0.7.0. In personal mode, every writer
Currently, cron scheduling feature is broken [1] and patch is available at
[2].
Shall we include this patch in the 0.7.0 release?
[1] http://issues.apache.org/jira/browse/ZEPPELIN-2009
[2] https://github.com/apache/zeppelin/pull/1941
On Tue, Jan 24, 2017 at 6:45 PM Prabhjyot Singh
Hi Mathieu,
Thanks for reporting the problem.
I see no issue tracking this problem in our JIRA [1].
Do you mind adding one?
We're preparing the 0.7.1 release and hope we can address this problem in the
upcoming release.
Thanks,
moon
[1] https://issues.apache.org/jira/browse/ZEPPELIN
On Tue, Feb 21,
Hi,
Will downloading 0.7.0 and running the R tutorial notebook repeatedly
reproduce the problem? Otherwise, can someone clarify the instructions to
reproduce it?
Thanks,
moon
On Sat, Feb 18, 2017 at 5:45 AM xyun...@simuwell.com
wrote:
> Within the Scala REPL everything is
Hi,
ZEPPELIN-2084 [1] addresses the problem.
Patch [2] is available and merged to master branch.
Thanks,
moon
[1] https://issues.apache.org/jira/browse/ZEPPELIN-2084
[2] https://github.com/apache/zeppelin/pull/2005
On Sat, Feb 18, 2017 at 10:51 PM Xiaohui Liu wrote:
> Hi,
The patch is available in latest branch-0.7 as well.
On Sat, Feb 18, 2017 at 11:51 PM moon soo Lee <m...@apache.org> wrote:
> Hi,
>
> ZEPPELIN-2084 [1] addresses the problem.
> Patch [2] is available and merged to master branch.
>
> Thanks,
> moon
>
> [1] http
If you're using '%python', not '%pyspark', you can try %python.conda to
change your environment.
Running
%python.conda help
in a notebook will display the available commands. This allows you to
dynamically configure the conda environment.
Hope this helps.
Thanks,
moon
On Thu, Feb 23, 2017 at 3:23 PM Beth Lee
Hi,
The behavior changed when Zeppelin added support for multiple results in a
paragraph [1].
If you're using the %spark interpreter, one possible workaround is to clear
the output before printing something. For example
// do something
// clear output
z.getInterpreterContext.out.clear
// display something
Hi,
Thanks for sharing the problem.
Do you see the same problem when you try 0.7.0 official binary release?
Thanks,
moon
On Wed, Feb 8, 2017 at 3:18 AM Benoit Hanotte wrote:
> Hello all,
>
> I am experiencing a ClassNotFoundException when trying to run RDD
>
Hi,
Currently GitNotebookRepo [1] does not have a feature that adds a remote
repository and pushes to it.
ZEPPELIN-1756 [2] is a related issue.
What do you think about Zeppelin allowing an optional configuration such as
ZEPPELIN_NOTEBOOK_GIT_REMOTE? When this value is set, Zeppelin would
commit to the remote
Hi,
+1 for releasing netinst package only.
Regarding making the binary package include only some interpreters, like
spark, markdown, and jdbc: we have discussed having a minimal package in [1].
And I still think it's very difficult to decide which interpreters need to
be included and which do not. For example, I prefer to
Thanks Jeff for starting the thread.
Here are my thoughts.
1. Do we need to do this
yes.
2. If the answer is yes, which interpreters should be moved out
If the Zeppelin community has no problem maintaining a certain interpreter,
then there is no reason to remove the contribution from the community.
However, if Zeppelin
Hi,
I think we need to have some policy to decide which interpreters go into the
zeppelin-bin-min package, and to make applying that policy part of the
release process.
Because I can not see any consistent rule except for "it seems" or "i
guess", and I have no idea how I can explain it if somebody asks
SparkSession is exposed as 'spark' [1].
Thanks,
moon
[1]
https://github.com/apache/zeppelin/blob/v0.6.1/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L786
On Sun, Aug 21, 2016 at 12:21 PM Ahmed Sobhi wrote:
> Is the new SparkSession exposed in
Hi Jin,
Zeppelin provides an interface through which each interpreter implements a
'completion()' method. For example, SparkInterpreter implements 'completion()'
[1] and is supposed to return the available list of methods when the user
enters Ctrl+. after a dot (e.g. sc. and Ctrl+.).
If it does not work for you, please feel
Try to find the small 'clock' icon on any particular note, next to the
'Remove' button. This scheduler feature will help you auto-refresh the note.
Thanks,
moon
On Wed, Aug 24, 2016 at 11:31 AM kant kodali wrote:
> at very least can we auto refreshing on the Zeppelin dashboard on a
>
solution but i believe we should also provide
> something like z.runParagraph(paragraph)
>
> I will try your suggestions for couple of requirements.
>
> for "Click visualization element invoke action" agree there should be a
> simpler way
>
> Thanks,
> Pankaj
&
Here's JIRA issue. https://issues.apache.org/jira/browse/ZEPPELIN-1230.
It has not been addressed yet.
Thanks,
moon
On Sat, Sep 3, 2016 at 11:28 PM Arvind Kandaswamy
wrote:
> When running the summary of lm, the zeppelin r version does not include
> full output. E.g. the
Hi York,
Thanks for the question.
1. How you install Zeppelin is up to you and your use case. You can either
run a single instance of Zeppelin, configure authentication, and let many
users log in, or let each user run their own Zeppelin instance.
I see both use cases from users, and it really
You will need to generate an extra column which can be used as an X-axis for
columns A and B.
On Wed, Sep 7, 2016 at 2:34 AM Abhisar Mohapatra <
abhisar.mohapa...@inmobi.com> wrote:
> I am not sure ,But can you try once by grouby function in zeppelin.If uou
> can group by columns then i guess that
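The extra-column suggestion above can be sketched in plain Python (column names and values are made up): derive a row-number column and chart A and B against it.

```python
# Hypothetical rows with two value columns, A and B, but no natural x-axis.
rows = [{"A": 3, "B": 2}, {"A": 1, "B": 7}, {"A": 4, "B": 1}]

# Generate the extra column: a simple row number usable as the X-axis.
for i, row in enumerate(rows):
    row["x"] = i

# Each point can now be charted as (x, A) and (x, B).
print([(r["x"], r["A"], r["B"]) for r in rows])
```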
Hi,
For now, code and result data are mixed in note.json, which is represented
by 'class Note' [1]. And every notebook storage layer needs to implement
'NotebookRepo.get()' [2] to read note.json from the underlying storage and
convert it into 'class Note'.
As you see from the related API and class
.ku...@resilinc.com>
> wrote:
>
> Hi moon,
>
> Yes that was the way that I was using. But is there any plan for future
> releases to removing the data from note and storing only configuration?
>
> Because storing the configuration with data when there is no max result
>
>
> Zeppelin:
> zeppelin.python /home/cloudera/anaconda2/bin
>
> In zeppelin, nothing is returned.
>
>
> On Wed, Sep 14, 2016 at 11:53 AM, moon soo Lee <m...@apache.org> wrote:
>
>> Did you export SPARK_HOME in conf/zeppelin-env.sh?
>> Could you verify the same
6 at 8:50 AM Felix Cheung <felixcheun...@hotmail.com>
>> wrote:
>>
>>> And
>>> matplotlib.use('Agg')
>>>
>>> Would only work before matplotlib is first used so you would need to
>>> restart the interpreter. From error stack below it looks like
I think there should be code changes to address this problem.
Maybe the line chart can have a checkbox option where the user can choose to
ignore empty values or treat them as zero.
Do you mind filing an issue for it?
Thanks,
moon
On Mon, Sep 12, 2016 at 8:11 AM Ayestaran Nerea
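The two treatments such a checkbox would toggle can be sketched in plain Python (the series values are made up; the real change would live in Zeppelin's charting code):

```python
# A series with empty (None) values, as a chart might receive it.
series = [1.0, None, 3.0, None, 5.0]

# Option 1: ignore empty values (the point is skipped entirely).
ignored = [v for v in series if v is not None]

# Option 2: treat empty values as zero.
zero_filled = [0.0 if v is None else v for v in series]

print(ignored)      # [1.0, 3.0, 5.0]
print(zero_filled)  # [1.0, 0.0, 3.0, 0.0, 5.0]
```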
Did you export SPARK_HOME in conf/zeppelin-env.sh?
Could you verify the same code works with ${SPARK_HOME}/bin/pyspark, on the
same machine that Zeppelin runs on?
Thanks,
moon
On Wed, Sep 14, 2016 at 8:07 AM Abhi Basu <9000r...@gmail.com> wrote:
> Oops sorry. the above code generated this error:
>
Hi,
Such an error can be raised when you have multiple versions of Netty in the
classpath. You can try excluding Netty in the dependency management GUI.
Thanks,
moon
On Mon, Sep 5, 2016 at 9:53 AM Michael Pedersen wrote:
> Hello,
>
> I'm trying to include an external JAR file
Zeppelin supports Spark 2.0 from the 0.6.1 release. Check the "Available
interpreters" section on the download page [1].
Please try the 0.6.1 release or build from the master branch, and let us know
if it works!
Thanks,
moon
[1] http://zeppelin.apache.org/download.html
On Wed, Sep 7, 2016 at 12:18 PM Jeremy
Hi,
Thanks for sharing the problem.
Could you share which version of Zeppelin you are using and how you tried
matplotlib inside Zeppelin? Are you trying matplotlib with z.show()?
Thanks,
moon
On Tue, Sep 13, 2016 at 1:56 AM Xi Shen wrote:
> Hi,
>
> I want to build
the way that I was using. But is there any plan for future
> releases to removing the data from note and storing only configuration?
>
> Because storing the configuration with data when there is no max result
> limit will create a big note.json file.
>
>
>
> Thanks & Regards,
Hi,
If you look at NotebookRepo [1], which is the notebook storage layer
abstraction in Zeppelin, you'll find there are actually no limitations on how
a Note is stored: the underlying storage system (file, db, etc.), how notes
are structured (directory, filename, rows in a db, etc.), the format of a
Note (json, etc.), all
Thanks for sharing the problem.
It may not help you directly, but I have created a patch for
ZEPPELIN-1480: https://github.com/apache/zeppelin/pull/1490
If you can use the scheduler, the patch will help.
Best,
moon
On Tue, Sep 27, 2016 at 10:54 PM Jonathan Gough
wrote:
Regarding two interpreter settings,
1. Phoenix (Accessible only to admin)
2. Phoenix-custom (Accessible to other user)
I think interpreter authorization [1] can help, which is available on the
master branch (0.7.0-SNAPSHOT).
Thanks,
moon
[1]
On Wed, Oct 5, 2016 at 10:33 PM Vikash Kumar <vikash.ku...@resilinc.com>
wrote:
> Thanks moon,
>
> Yes, this task solves my problem, but we have to wait for the 0.7
> release. So is there a nearby plan to release the 0.7 version?
>
>
>
> Thanks & Regards,
>
>
I'm not sure since when, but %html inside a cell doesn't work if it is the
first column. If you add any other column which is not using %html to its
left, the table will be rendered correctly.
Hope this helps
Thanks,
moon
On Fri, Sep 16, 2016 at 2:21 PM Kevin Niemann
wrote:
Hi Bala,
Thanks for sharing the problem.
Not sure, but could you try a newer version, zeppelin-0.6.2?
If it does not help, could you open the developer console in your web browser
and see if there are any errors?
Thanks,
moon
On Sun, Nov 6, 2016 at 9:16 PM Balachandar R.A.
who can see his note, right?
>
> But that does not protect the system from users getting root access via
> %sh (or who knows what else) if Zeppelin is running as root?
>
> Thank you,
>
> Igor
>
> On 11/06/2016 08:48 PM, moon soo Lee wrote:
>
> Zeppelin already ha
, Nov 6, 2016 at 5:45 PM Igor Yakushin <i...@uchicago.edu> wrote:
>
>
> On 11/06/2016 07:30 PM, moon soo Lee wrote:
> > Hi Igor,
> >
> > Zeppelin runs with user id that execute bin/zeppelin-daemon.sh or
> > bin/zeppeiln.sh. And all interpreter processe
Hi Keren,
Have you tried to set 'master' property in 'interpreter' GUI menu?
Basically, setting the SPARK_HOME env variable and the 'master' property
should be enough for basic configuration.
Please take a look
http://zeppelin.apache.org/docs/0.6.2/interpreter/spark.html#2-set-master-in-interpreter-menu
.
the administrator monitor the
> running interpreter and shut it down if running out of resources?
>
> On 8 November 2016 at 05:33, moon soo Lee <m...@apache.org> wrote:
>
> Thanks Igor for valuable feedbacks.
> For that reason, i've seen some companies instantiate Zeppelin ins
Hi Bala,
There're a lot of docker images related to zeppelin on docker hub.
https://hub.docker.com/search/?isAutomated=0=0=1=0=zeppelin=0
you may find one that you would like to use.
In case you're interested in an official Docker image release, the community
is working on
> Le mer. 16 nov. 2016 à 23:45, moon soo Lee <m...@apache.org> a écrit :
>
> Hi,
>
> Zeppelin actually does have an embedded mode that runs the Interpreter in the
> same JVM that Zeppelin runs in. This feature is not exposed to the user, but
> it can be controlled by InterpreterOption.
According to your log, your interpreter process seems to have failed to start.
Check the following lines in your log.
You can try running the interpreter process manually and see why it is
failing, i.e. run
D:\zeppelin-0.6.2\bin\interpreter.cmd -d
D:\zeppelin-0.6.2\interpreter\spark -p 55492
---
INFO
Hi Kevin,
This is an example that sets graph options programmatically.
https://www.zeppelinhub.com/viewer/notebooks/aHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL0xlZW1vb25zb28vemVwcGVsaW4tZXhhbXBsZXMvbWFzdGVyLzJCOFhRVUM1Qi9ub3RlLmpzb24
You can get some idea of how to configure the graph programmatically
Hi,
Zeppelin actually does have an embedded mode that runs the Interpreter in the
same JVM that Zeppelin runs in. This feature is not exposed to the user, but
it can be controlled by the InterpreterOption.remote field.
Hi,
That's strange.
Do you have SPARK_HOME or HADOOP_CONF_DIR defined in conf/zeppelin-env.sh?
You can stop Zeppelin, delete /home/asif/zeppelin-0.6.2-bin-all/metastore_db,
start Zeppelin and try again.
Thanks,
moon
On Tue, Nov 15, 2016 at 4:05 PM Muhammad Rezaul Karim
Hi,
Do you have the same problem on SPARK_HOME/bin/spark-shell?
Are you using standalone spark cluster? or Yarn?
Thanks,
moon
On Sun, Nov 13, 2016 at 8:19 PM York Huang wrote:
> I ran into the "No space left on device" error in zeppelin spark when I
> tried to run
I'm still not sure how to do
that. I have installed Spark on my machine and have set the SPARK_HOME.
However, do I also need to install and configure Hadoop? But the error
is all about Hive. Am I somehow wrong?
A sample zeppelin-env.sh file would be very helpful.
On Nov 16, 2016 10:47 PM, moon so
Thanks for sharing your investigation.
I tried scheduling some paragraphs returning no value, but couldn't
reproduce the problem. Could you share some code snippets to reproduce it?
Thanks,
moon
On Mon, Nov 14, 2016 at 11:36 AM Florian Schulz wrote:
> Hi,
>
Hi,
This feature is work in progress here
https://github.com/apache/zeppelin/pull/1539
Hope we can see this feature in master, soon.
Thanks,
moon
On Wed, Nov 2, 2016 at 1:07 PM Chen Song wrote:
> Hello
>
> Is there a way to configure a JDBC interpreter to use the user
I tried the sample code in both Zeppelin and spark-shell, and got the same
error.
Please try the following code as a workaround.
import org.apache.spark.sql.expressions.MutableAggregationBuffer
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction
class GeometricMean extends
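The Scala snippet is cut off in the archive; as an illustration of the aggregation itself (not of the UserDefinedAggregateFunction API), here is a geometric mean in plain Python:

```python
import math

def geometric_mean(values):
    """Geometric mean via the log-sum form, which avoids overflow on
    large products. Requires a non-empty list of positive numbers."""
    assert values and all(v > 0 for v in values)
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([2.0, 8.0]))  # ≈ 4.0
```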
Hi,
Have you tried
http://zeppelin.apache.org/docs/latest/manual/dynamicform.html#select-form ?
Thanks,
moon
On Tue, Oct 25, 2016 at 12:15 AM Manjunath, Kiran
wrote:
> Hello All,
>
>
>
> I have a question on creating dropdown list (dynamic forms) using psql
> interpreter.
Hi Nirav,
Thanks for sharing your thoughts.
I think the idea of reusing notebooks makes sense.
One possible idea is to extend the current z.run(PARAGRAPH_ID) [1], which
works only for paragraphs in the same note, to z.run(NOTE_ID) or
z.run(PARAGRAPH_ID), which would work for any note or paragraph
ation purposes. Is there such a
> possibility?
>
> BR,
> Jan
>
> On 2016-04-06 20:44 (+0100), moon soo Lee <m...@apache.org> wrote:
> > Hi,
> >
> > Removing sync is not supported at the moment.
> >
> > For iframe, i think we can introduce new
The "./bin/install-interpreter.sh --name shell" command is supposed to work.
It just reads information from conf/interpreter-list to install the
interpreter artifacts. Could you find a line starting with 'shell' in
conf/interpreter-list? Otherwise you can modify this file as needed. Here's
the original file included
The pluggable module list (Helium) on the website lists user (3rd-party)
packages in the npm registry. Each plugin can have its own license, which may
or may not be compatible with the Apache 2 license, while Zeppelin does not
include them in the release.
So, accepting the plugin license is up to the individual user when
in Zeppelin:
> [image: image.png]
>
> Thanks for your help.
>
> Shan
>
> On Sat, Mar 18, 2017 at 8:39 AM, moon soo Lee <m...@apache.org> wrote:
>
> If you don't have spark cluster, then you don't need to do 2).
> After 1) %spark.r interpreter should work.
>
>
If you don't have spark cluster, then you don't need to do 2).
After 1) %spark.r interpreter should work.
If you do have a spark cluster, exporting the SPARK_HOME env variable in
conf/zeppelin-env.sh should be enough to make it work.
Hope this helps.
Thanks,
moon
On Fri, Mar 17, 2017 at 2:41 PM
e also taken a snapshot of the
> Spark Interpreter configuration that I have access to/using in Zeppelin.
> This interpreter comes with SQL and Python integration and I'm figuring out
> how do I get to use R.
>
> On Sat, Mar 18, 2017 at 8:06 PM, moon soo Lee <m...@apache.org> wrote:
When a property key in the interpreter configuration screen matches a certain
condition [1], it'll be treated as an environment variable.
You can remove PYSPARK_PYTHON from conf/zeppelin-env.sh and place it in the
interpreter configuration.
Thanks,
moon
[1]
Great to see discussion for 0.8.0.
List of features for 0.8.0 looks really good.
*Interpreter factory refactoring*
The interpreter layer supports various behaviors depending on the combination
of PerNote/PerUser and Shared/Scoped/Isolated. We'll need strong test cases
for each combination as a first step.
Hi Hishfish,
If you take a look at the Clock example [1], you'll see how it creates angular
objects and updates them every second from the backend, so the front-end can
be updated accordingly.
After you add your object to the AngularObjectRegistry, you can get the
AngularObject and add a watcher [2]. Then any changes of
is
> nothing similar available for Helium Applications. Of course I find it
> listed on the /helium Page, where I can enable/disable it. If enabled, I
> can not see, how to actually use the application.
>
> Thanks.
> Andreas
>
>
> ------ Forwarded message