Re: nightly builds?

2018-05-11 Thread Jongyoul Lee
+1. We could use Jenkins and the SNAPSHOT repo.
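A rough sketch of what such a nightly Jenkins job could run (the flags and the target snapshot repository are assumptions, not an existing Zeppelin setup; a real job would need `distributionManagement` configured in the POM):

```shell
# Hypothetical Jenkins nightly build step: rebuild master and publish
# SNAPSHOT artifacts to the configured snapshot repository.
git fetch origin && git checkout origin/master
mvn clean deploy -DskipTests
```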

JL

On Fri, May 11, 2018 at 3:11 AM, Florent Pousserot <
florent.pousse...@gmail.com> wrote:

> +1
>
> Florent,
>
> Le 10 mai 2018 à 03:48, Jeff Zhang <zjf...@gmail.com> a écrit :
>
>
> That's a good idea. +1
>
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com>于2018年5月9日周三 下午11:29写道:
>
>> This probably should have gone to the dev group instead - would it be
>> possible to get automated nightly/weekly builds published too?
>> Something like "bleeding edge" builds from a master snapshot.
>>
>> It would help folks who cannot build Zeppelin themselves but need
>> features/fixes that aren't available in the latest official release.
>> It would also expose new features to more testing, so it should be a
>> win-win for users and developers.
>>
>> Some other open source projects employ nightly builds.
>>
>>
>> Thanks!
>>
>> Ruslan Dautkhanov
>>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Artifact dependency for geomesa causes NPE

2018-05-09 Thread Jongyoul Lee
Hi,

I'm not familiar with geomesa, but you don't have to build Zeppelin against a
specific Spark version, because Zeppelin supports external Spark
distributions without rebuilding. I suggest you download a Spark binary
distribution, extract it, and set `SPARK_HOME` in conf/zeppelin-env.sh. As
you mentioned, it does look like a dependency problem.
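For example (the Spark path below is illustrative, not prescriptive):

```shell
# In conf/zeppelin-env.sh (copy conf/zeppelin-env.sh.template if needed);
# point SPARK_HOME at your extracted Spark binary distribution.
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
```

Restart Zeppelin after changing this so the Spark interpreter picks up the new setting.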

Hope this helps,
JL

On Wed, May 9, 2018 at 7:46 AM, David Boyd <db...@incadencecorp.com> wrote:

> All:
>
> I am following the instructions here:
> http://www.geomesa.org/documentation/current/user/spark/zeppelin.html
> To use geomesa spark with zeppelin.
> Whenever I add the artifact dependency I get the following error on any
> code I try to run (this includes the basic features -spark
> tutorial).
>
> java.lang.NullPointerException
> at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
> at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
> at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398)
> at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387)
> at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
> at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843)
> at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
> at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
> at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
> at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
> I have tried specifying the jar as a maven artifact and as an absolute
> path. I have tried multiple versions (1.3.4, 1.3.5, and 2.0.0) of the
> artifact. As soon as I remove the dependency the code works again.
>
> Is there another external dependency I can try to see if that is the
> problem?
> I have tried this with both the 0.7.3 Zeppelin binary distribution, and
> also with a 0.7.4 distribution I built specifically.
> I am running spark 2.1 on my cluster.  Like I said without this dependency
> the example code works just fine.
> Here is the build command I used for the distribution I am running:
>
> mvn clean package -DskipTests -Pspark-2.1 -Phadoop-2.7 -Pyarn -Ppyspark
> -Psparkr -Pr -Pscala-2.11 -Pexamples -Pbuild-distr
>
> From looking at the code around the trace it appears either a class is not
> found or something is getting dorked with SparkContext.
>
> Any help would be appreciated.
>
>
> --
> = mailto:db...@incadencecorp.com <db...@incadencecorp.com> 
> 
> David W. Boyd
> VP,  Data Solutions
> 10432 Balls Ford, Suite 240
> Manassas, VA 20109
> office:   +1-703-552-2862
> cell: +1-703-402-7908
> == http://www.incadencecorp.com/ 
> ISO/IEC JTC1 WG9, editor ISO/IEC 20547 Big Data Reference Architecture
> Chair ANSI/INCITS TC Big Data
> Co-chair NIST Big Data Public Working Group Reference Architecture
> First Robotic Mentor - FRC, FTC - www.iliterobotics.org
> Board Member- USSTEM Foundation - www.usstem.org
>
> The information contained in this message may be privileged
> and/or confidential and protected from disclosure.
> If the reader of this message is not the intended recipient
> or an employee or agent responsible for delivering this message
> to the intended recipient, you are hereby notified that any
> dissemination, distribution or copying of this communication
> is strictly prohibited.  If you have received this communication
> in error, please notify the sender immediately by replying to
> this message and deleting the material from any computer.
>
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Working on a note that is being viewed by other users

2018-05-08 Thread Jongyoul Lee



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Replace bower to npm

2018-05-07 Thread Jongyoul Lee
Hi devs and users,

When building Zeppelin from source, the zeppelin-web build takes around five
minutes, and most of that time is spent downloading/copying bower
dependencies. If we could remove those dependencies and migrate them to npm,
we would reduce our build time significantly.

Can someone/some group help with this? I'm willing to test all of the
features related to these dependencies and would really appreciate your
effort.

Thanks in advance,
Jongyoul Lee
-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Starting to Prepare 0.8.0 RC1

2018-04-26 Thread Jongyoul Lee
Unfortunately, I may have found a critical error while running an invalid SQL
query with SparkSqlInterpreter. I'll file the issue and submit a PR soon.

On Fri, Apr 27, 2018 at 12:36 PM, Mina Lee <mina...@apache.org> wrote:

> Nice! Let me know if you need any help
>
> On Fri, 27 Apr 2018 at 12:17 PM Jun Kim <i2r@gmail.com> wrote:
>
> > Awesome!
> > 2018년 4월 27일 (금) 오전 10:15, Jongyoul Lee <jongy...@gmail.com>님이 작성:
> >
> >> Sounds great!!
> >>
> >> On Fri, Apr 27, 2018 at 10:13 AM, Xiaohui Liu <hero...@gmail.com>
> wrote:
> >>
> >>> Fantastic!
> >>>
> >>> On Fri, 27 Apr 2018 at 9:12 AM, Jeff Zhang <zjf...@gmail.com> wrote:
> >>>
> >>>> Hi Folks,
> >>>>
> >>>> All the blocker issues are fixed, and I am starting to prepare 0.8.0 RC1
> >>>>
> >>>>
> >>>>
> >>
> >>
> >> --
> >> 이종열, Jongyoul Lee, 李宗烈
> >> http://madeng.net
> >>
> > --
> > Taejun Kim
> >
> > Data Mining Lab.
> > School of Electrical and Computer Engineering
> > University of Seoul
> >
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Starting to Prepare 0.8.0 RC1

2018-04-26 Thread Jongyoul Lee
Sounds great!!

On Fri, Apr 27, 2018 at 10:13 AM, Xiaohui Liu <hero...@gmail.com> wrote:

> Fantastic!
>
> On Fri, 27 Apr 2018 at 9:12 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
>> Hi Folks,
>>
>> All the blocker issues are fixed, and I am starting to prepare 0.8.0 RC1
>>
>>
>>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Extra SparkSubmit process in running Cassandra queries

2018-04-23 Thread Jongyoul Lee
Hi,

AFAIK, something might be wrong there. Can you explain more about it?

JL

On Sun, Apr 22, 2018 at 6:36 PM, Soheil Pourbafrani <soheil.i...@gmail.com>
wrote:

> Hi, I use Zeppelin 7.3
>
> Customizing Cassandra interpreter, I configured it for my Cassandra
> cluster.
>
> When I try to get data from Cassandra using the command:
>
> %cassandra
>
> SELECT * FROM Key.Table ;
>
> I expected it to create only a RemoteInterpreterServer process to fetch
> data from Cassandra, but in addition to RemoteInterpreterServer, a
> SparkSubmit process is created!
>
> I didn't use any Spark code, just a Cassandra CQL query. Why is the
> SparkSubmit process created?
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Zeppelin next version

2018-04-22 Thread Jongyoul Lee
Sounds Great!!

On Sun, Apr 22, 2018 at 5:48 PM, Jeff Zhang <zjf...@gmail.com> wrote:

>
> It is pretty close to release; the only remaining blocker is a
> deadlock issue. https://issues.apache.org/jira/browse/ZEPPELIN-3401
>
>
>
> Jongyoul Lee <jongy...@gmail.com>于2018年4月22日周日 下午3:13写道:
>
>> Hi,
>>
>> Jeff is preparing the release as far as I know, but I'm not sure of the exact date yet.
>>
>> Regards,
>> JL
>>
>> On Tue, 17 Apr 2018 at 8:43 PM Yohana Khoury <
>> yohana.kho...@gigaspaces.com> wrote:
>>
>>> Hi,
>>>
>>> Is there any release date for the upcoming Zeppelin version that will
>>> support Spark 2.3 ?
>>> I saw there are commits on 0.8.x branch but not on 0.7.x. Are you
>>> planning to support Spark 2.3 on 0.7.x too?
>>>
>>>
>>> Thanks
>>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Zeppelin next version

2018-04-22 Thread Jongyoul Lee
Hi,

Jeff is preparing the release as far as I know, but I'm not sure of the exact date yet.

Regards,
JL

On Tue, 17 Apr 2018 at 8:43 PM Yohana Khoury <yohana.kho...@gigaspaces.com>
wrote:

> Hi,
>
> Is there any release date for the upcoming Zeppelin version that will
> support Spark 2.3 ?
> I saw there are commits on 0.8.x branch but not on 0.7.x. Are you planning
> to support Spark 2.3 on 0.7.x too?
>
>
> Thanks
>
-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Zeppelin Matrics

2018-04-22 Thread Jongyoul Lee
Hi,

There is no metric you can get from the current Zeppelin. I'm developing a
JMX support feature; I hope I can share it soon.

Regards,
JL

On Sun, 1 Apr 2018 at 1:05 AM Michael Bullock <michaelbullo...@gmail.com>
wrote:

> Greetings Zeppelin Team,
> I'm currently running Zeppelin in my AWS environment. I wanted to know
> what metrics can be gathered from Zeppelin?
>
> Respectfully,
>
>
> --
> *Michael L. Bullock II*
> *Email: michaelbullo...@gmail.com <michaelbullo...@gmail.com>*
> *Mobile: (240)504-7520*
>
-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [Julia] Does Spark.jl work in Zeppelin's existing Spark/livy.spark interpreters?

2018-04-22 Thread Jongyoul Lee
Hello,

AFAIK, there is no issue.

Regards
JL

On Wed, 18 Apr 2018 at 2:22 AM Josh Goldsborough <
joshgoldsboroughs...@gmail.com> wrote:

> Wondering if anyone has had success using the Spark.jl
> <https://github.com/dfdx/Spark.jl> library for Spark to support Julia
> using one of Zeppelin's spark interpreters.
>
> Thanks!
> -Josh
>
-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: No logout/signout item on menu drop down list

2018-04-22 Thread Jongyoul Lee
Hi,

The "anonymous" user name is specially treated for non-auth environments.

AFAIK, some logic compares against that name, and the code that shows/hides
the logout menu seems to include such a check.

Regards,
JL

On Wed, 18 Apr 2018 at 6:14 AM Joe W. Byers <ecjb...@aol.com> wrote:

> On 04/17/2018 04:53 PM, Joe W. Byers wrote:
>
> Hello,
>
> I just configured a zeppelin server using ldap authentication.  When I
> open the url and log in with a user, there is no logout/signout item on the
> drop down menu list.  The only way I can logout is to delete my cookies.
>
> Has anyone experienced this?  Does anyone have any idea what I need to do
> to correct this?
>
> Thanks
>
> Joe
> --
> *Joe W. Byers*
>
> I need to update this. I created a user, anonymous, in my ldap. This
> user does not have a logout menu option. All other users have the logout
> option. I must have a flag incorrectly set.
>
> Thanks
>
> Joe
>
>
> --
> *Joe W. Byers*
>
-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Code Convention: Eclipse formatter

2018-03-07 Thread Jongyoul Lee
Hi,

Those formatters are not fully compatible, so incompatibilities will always
occur. I think enforcing the format through checkstyle would be better.
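For example, contributors could validate the style locally before pushing (a sketch; it assumes the maven-checkstyle-plugin is configured in the Zeppelin build, as the checkstyle.xml link in this thread suggests):

```shell
# Run from the project root; fails the build on checkstyle violations.
mvn checkstyle:check
```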

Thanks,
JL

On Thu, Mar 1, 2018 at 2:07 PM, Gokulakannan Muralidharan <
go...@sentienz.com> wrote:

> Thanks Jeff, I tried importing it in Eclipse but it is not working.
>
> It looks like the Eclipse code formatter can be imported into IntelliJ
> using the *eclipse formatter plugin*, but not vice versa:
> https://intellij-support.jetbrains.com/hc/en-us/community/posts/206575559-How-to-Export-Code-Formatting-Configurations-to-Eclipse
>
> On Thu, Mar 1, 2018 at 8:09 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
>>
>> I use IntelliJ, which can specify the checkstyle file for a project. I
>> think Eclipse should be able to do that as well.
>>
>> https://github.com/apache/zeppelin/blob/master/_tools/checkstyle.xml
>>
>>
>>
>> Gokulakannan Muralidharan <go...@sentienz.com>于2018年2月28日周三 下午1:45写道:
>>
>>> Hi,
>>>
>>> The Google code style format
>>> <https://github.com/google/styleguide/blob/gh-pages/eclipse-java-google-style.xml>
>>> for Java is not working for the Zeppelin project as mentioned in the docs
>>> <https://zeppelin.apache.org/contribution/contributions.html#code-convention>;
>>> I am getting checkstyle errors when trying to build locally.
>>>
>>> Could someone please share the eclipse code formatter used for Zeppelin
>>> java code?
>>>
>>> Regards,
>>> Gokul
>>>
>>>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Roadmap 0.9 and future

2018-03-07 Thread Jongyoul Lee
 release
>>>   - Ramp up support of current Spark feature (e.g. Display job
>>>   progress correctly)
>>>   - Spark streaming support
>>>   - Handling Livy timeout
>>>   - Other interpreters
>>>- Better Hive support (e.g. configuration)
>>>   - Latest version PrestoDB support (pass property correctly)
>>>   - Run interpreter in containerized environment
>>>- Let individual user upload custom library from user's machine
>>>directly
>>>- Interpreter documentation is not detail enough
>>>
>>> And people in the meeting excited about ConInterpreter ZEPPELIN-3085 [2]
>>> in upcoming release, regarding dynamic/inline configuration of interpreter.
>>>
>>> And there were ideas on other areas, too. like
>>>
>>>- Separate Admin role and user role
>>>- Sidebar with plugin widget
>>>- Better integration with emerging framework like
>>>Tensorflow/MXNet/Ray
>>>- Sharing data
>>>- Schedule notebook from external scheduler
>>>
>>> Regarding scheduling notebook, Luciano shared his project
>>> NotebookTools[3] and it made people really excited.
>>>
>>> Also, there were inspiring discussions about the community/project.
>>> Current status and how can we make community/project more healthy. And
>>> here's some ideas around the topic
>>>
>>>- Need more frequent release
>>>- More attention to code review to speed up
>>>- Publishing roadmap beforehand to help contribution
>>>- 'Newbie', 'low hanging fruit' tag helps contribution
>>>- Enterprise friendly is another biggest strength of Zeppelin (in
>>>addition to Spark support) need to keep improve.
>>>
>>>
>>> I probably missed many idea shared yesterday. Please feel free to
>>> add/correct the summary. Hope more people in the mailinglist join and
>>> develop the idea together. And I think this discussion can leads community
>>> shape 0.9 and future version of Zeppelin, and update and publish future
>>> roadmap[4].
>>>
>>> Best,
>>> moon
>>>
>>> Special thanks to ZEPL <https://www.zepl.com> for the swag and dinner.
>>>
>>> [1] https://docs.google.com/document/d/18Wc3pEFx3qm9XoME_V_B9k_LlAd1PLyKQQEveR1on1o/edit?usp=sharing
>>> [2] https://issues.apache.org/jira/browse/ZEPPELIN-3085
>>> [3] https://github.com/SparkTC/notebook-exporter/tree/master/notebook-exporter
>>> [4] https://cwiki.apache.org/confluence/display/ZEPPELIN/Zeppelin+Roadmap
>>>
>>>
>>>
>>>
>>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Fwd: Travel Assistance applications open. Please inform your communities

2018-02-20 Thread Jongyoul Lee
-- Forwarded message --
From: Gavin McDonald <ga...@16degrees.com.au>
Date: Wed, Feb 14, 2018 at 6:34 PM
Subject: Travel Assistance applications open. Please inform your communities
To: travel-assista...@apache.org


Hello PMCs.

Please could you forward on the below email to your dev and user lists.

Thanks

Gav…

—
The Travel Assistance Committee (TAC) are pleased to announce that travel
assistance applications for ApacheCon NA 2018 are now open!

We will be supporting ApacheCon NA Montreal, Canada on 24th - 29th
September 2018

TAC exists to help those that would like to attend ApacheCon events, but
are unable to do so for financial reasons.
For more info on this years applications and qualifying criteria, please
visit the TAC website at < http://www.apache.org/travel/ >. Applications
are now open and will close 1st May.

*Important*: Applications close on May 1st, 2018. Applicants have until the
closing date above to submit their applications (which should contain as
much supporting material as required to efficiently and accurately process
their request), this will enable TAC to announce successful awards shortly
afterwards.

As usual, TAC expects to deal with a range of applications from a diverse
range of backgrounds. We therefore encourage (as always) anyone thinking
about sending in an application to do so ASAP.
We look forward to greeting many of you in Montreal

Kind Regards,
Gavin - (On behalf of the Travel Assistance Committee)
—





-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Change some default settings for avoiding unintended usages

2017-12-23 Thread Jongyoul Lee
I also worry about the well-known password problem. We need to find a way to
generate a random password at first startup to avoid the potential risk, but
that's not easy with our current shiro setup. Does anyone have a good idea
for solving this?
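Generating the credential itself is the easy part; the hard part is wiring it into shiro.ini at first startup. A minimal sketch of the generation step (the `[users]` line format is standard Shiro syntax; everything else here is an assumption, not existing Zeppelin behavior):

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters and digits using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A hypothetical first-startup script could write this line into the
# [users] section of conf/shiro.ini instead of shipping a well-known
# default password.
print("admin = %s, admin" % generate_password())
```

The generated password would then have to be shown to the operator once (e.g. in the startup log), which is the part that needs a design decision.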

On Sun, 24 Dec 2017 at 3:14 AM Felix Cheung <felixcheun...@hotmail.com>
wrote:

> Authentication by default is good but we should avoid having well known
> user / password by default - it’s security risk.
>
> 
> From: Belousov Maksim Eduardovich <m.belou...@tinkoff.ru>
> Sent: Thursday, December 21, 2017 12:30:57 AM
> To: users@zeppelin.apache.org
> Cc: d...@zeppelin.apache.org
> Subject: RE: [DISCUSS] Change some default settings for avoiding
> unintended usages
>
> Authentication by default isn't a big deal; it could be enabled.
> It would be nice to use another default account: guest/guest, for example.
>
>
> Thanks,
>
> Maksim Belousov
>
> From: Jongyoul Lee [mailto:jongy...@gmail.com]
> Sent: Monday, December 18, 2017 6:07 AM
> To: users <users@zeppelin.apache.org>
> Cc: d...@zeppelin.apache.org
> Subject: Re: [DISCUSS] Change some default settings for avoiding
> unintended usages
>
> Agreed. Supporting container services must be good and I like this idea,
> but I don't think it's the part of this issue directly. Let's talk about
> this issue with another email.
>
> I want to talk about enabling authentication by default. If it's enabled,
> we should login admin/password1 at the beginning. How do you think of it?
>
> On Sat, Dec 2, 2017 at 1:57 AM, Felix Cheung <felixcheun...@hotmail.com
> <mailto:felixcheun...@hotmail.com>> wrote:
> I’d +1 docker or container support (mesos, dc/os, k8s)
>
> But I think that they are separate things. If users are authenticated and
> interpreter is impersonating each user, the risk of system disruption
> should be low. This is typically how to secure things in a system, through
> user directory (eg LDAP) and access control (normal user can’t sudo and
> delete everything).
>
> Thought?
>
> _
> From: Jeff Zhang <zjf...@gmail.com<mailto:zjf...@gmail.com>>
> Sent: Thursday, November 30, 2017 11:51 PM
>
> Subject: Re: [DISCUSS] Change some default settings for avoiding
> unintended usages
> To: <d...@zeppelin.apache.org<mailto:d...@zeppelin.apache.org>>
> Cc: users <users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>>
>
>
> +1 for running interpreter process in docker container.
>
>
>
> Jongyoul Lee <jongy...@gmail.com<mailto:jongy...@gmail.com>>于2017年12月1日周五
> 下午3:36写道:
> Yes, exactly, this is not only the shell interpreter problem, all can run
> any script through python and Scala. Shell is just an example.
>
> Using docker looks good, but it cannot prevent unintended usage of
> resources like coin mining.
>
> On Fri, Dec 1, 2017 at 2:36 PM, Felix Cheung <felixcheun...@hotmail.com
> <mailto:felixcheun...@hotmail.com>>
> wrote:
>
> > I don’t think that’s limited to the shell interpreter.
> >
> > You can run any arbitrary program or script from python or Scala (or
> java)
> > as well.
> >
> > _
> > From: Jeff Zhang <zjf...@gmail.com<mailto:zjf...@gmail.com>>
> > Sent: Wednesday, November 29, 2017 4:00 PM
> > Subject: Re: [DISCUSS] Change some default settings for avoiding
> > unintended usages
> > To: <d...@zeppelin.apache.org<mailto:d...@zeppelin.apache.org>>
> > Cc: users <users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>>
> >
> >
> >
> > Shell interpreter is a black hole for security, usually we don't
> recommend
> > or allow user to use shell.
> >
> > We may need to refactor the shell interpreter, running under zeppelin
> user
> > is too dangerous.
> >
> >
> >
> >
> >
> > Jongyoul Lee <jongy...@gmail.com<mailto:jongy...@gmail.com>>于2017年11月29日周三
> 下午11:44写道:
> >
> > > Hi, users and dev,
> > >
> > > Recently, I've got an issue about the abnormal usage of some
> > interpreters.
> > > Zeppelin's users can access shell by shell and python interpreters. It
> > > means all users can run or execute what they want even if it harms the
> > > system. Thus I agree that we need to change some default settings to
> > > prevent this kind of abusing situation. Before we proceed to do it, I
> > want
> > > to listen to others' opinions.
> > >
> > > Feel free to reply this email
> > >
> > > Regards,
> > > Jongyoul
> > >
> > > --
> > > 이종열, Jongyoul Lee, 李宗烈
> > > http://madeng.net
> > >
> >
> >
> >
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


[DISCUSS] Review process

2017-12-17 Thread Jongyoul Lee
Hi committers,

I want to suggest one thing about our review process. We have a policy of
waiting one day before merging a PR. AFAIK, it exists to reduce mistakes and
to prevent committers from merging their own work without a thorough review.
I would like to change this policy and remove the delay before merging.
Recently we haven't had many reviewers and committers who can merge
continuously, and in my case I sometimes forget PRs that I have to merge. I
also believe all committers share a consensus on how to review and merge
contributions.

What do you think of it?

JL

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Change some default settings for avoiding unintended usages

2017-12-17 Thread Jongyoul Lee
Agreed. Supporting container services would be good and I like this idea,
but I don't think it's directly part of this issue. Let's discuss it in
another email thread.

I want to talk about enabling authentication by default. If it's enabled,
we would have to log in as admin/password1 from the beginning. What do you
think of it?

On Sat, Dec 2, 2017 at 1:57 AM, Felix Cheung <felixcheun...@hotmail.com>
wrote:

> I’d +1 docker or container support (mesos, dc/os, k8s)
>
> But I think that they are separate things. If users are authenticated and
> interpreter is impersonating each user, the risk of system disruption
> should be low. This is typically how to secure things in a system, through
> user directory (eg LDAP) and access control (normal user can’t sudo and
> delete everything).
>
> Thought?
>
> _
> From: Jeff Zhang <zjf...@gmail.com>
> Sent: Thursday, November 30, 2017 11:51 PM
>
> Subject: Re: [DISCUSS] Change some default settings for avoiding
> unintended usages
> To: <d...@zeppelin.apache.org>
> Cc: users <users@zeppelin.apache.org>
>
>
>
> +1 for running interpreter process in docker container.
>
>
>
> Jongyoul Lee <jongy...@gmail.com>于2017年12月1日周五 下午3:36写道:
>
>> Yes, exactly, this is not only the shell interpreter problem, all can run
>> any script through python and Scala. Shell is just an example.
>>
>> Using docker looks good, but it cannot prevent unintended usage of
>> resources like coin mining.
>>
>> On Fri, Dec 1, 2017 at 2:36 PM, Felix Cheung <felixcheun...@hotmail.com>
>> wrote:
>>
>> > I don’t think that’s limited to the shell interpreter.
>> >
>> > You can run any arbitrary program or script from python or Scala (or
>> java)
>> > as well.
>> >
>> > _
>> > From: Jeff Zhang <zjf...@gmail.com>
>> > Sent: Wednesday, November 29, 2017 4:00 PM
>> > Subject: Re: [DISCUSS] Change some default settings for avoiding
>> > unintended usages
>> > To: <d...@zeppelin.apache.org>
>> > Cc: users <users@zeppelin.apache.org>
>> >
>> >
>> >
>> > Shell interpreter is a black hole for security, usually we don't
>> recommend
>> > or allow user to use shell.
>> >
>> > We may need to refactor the shell interpreter, running under zeppelin
>> user
>> > is too dangerous.
>> >
>> >
>> >
>> >
>> >
>> > Jongyoul Lee <jongy...@gmail.com>于2017年11月29日周三 下午11:44写道:
>> >
>> > > Hi, users and dev,
>> > >
>> > > Recently, I've got an issue about the abnormal usage of some
>> > interpreters.
>> > > Zeppelin's users can access shell by shell and python interpreters. It
>> > > means all users can run or execute what they want even if it harms the
>> > > system. Thus I agree that we need to change some default settings to
>> > > prevent this kind of abusing situation. Before we proceed to do it, I
>> > want
>> > > to listen to others' opinions.
>> > >
>> > > Feel free to reply this email
>> > >
>> > > Regards,
>> > > Jongyoul
>> > >
>> > > --
>> > > 이종열, Jongyoul Lee, 李宗烈
>> > > http://madeng.net
>> > >
>> >
>> >
>> >
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Change some default settings for avoiding unintended usages

2017-11-30 Thread Jongyoul Lee
Yes, exactly: this is not only a shell interpreter problem, since anyone can
run any script through python and Scala. Shell is just an example.

Using docker looks good, but it cannot prevent unintended usage of resources
like coin mining.

On Fri, Dec 1, 2017 at 2:36 PM, Felix Cheung <felixcheun...@hotmail.com>
wrote:

> I don’t think that’s limited to the shell interpreter.
>
> You can run any arbitrary program or script from python or Scala (or java)
> as well.
>
> _
> From: Jeff Zhang <zjf...@gmail.com>
> Sent: Wednesday, November 29, 2017 4:00 PM
> Subject: Re: [DISCUSS] Change some default settings for avoiding
> unintended usages
> To: <d...@zeppelin.apache.org>
> Cc: users <users@zeppelin.apache.org>
>
>
>
> Shell interpreter is a black hole for security, usually we don't recommend
> or allow user to use shell.
>
> We may need to refactor the shell interpreter, running under zeppelin user
> is too dangerous.
>
>
>
>
>
> Jongyoul Lee <jongy...@gmail.com>于2017年11月29日周三 下午11:44写道:
>
> > Hi, users and dev,
> >
> > Recently, I've got an issue about the abnormal usage of some
> interpreters.
> > Zeppelin's users can access shell by shell and python interpreters. It
> > means all users can run or execute what they want even if it harms the
> > system. Thus I agree that we need to change some default settings to
> > prevent this kind of abusing situation. Before we proceed to do it, I
> want
> > to listen to others' opinions.
> >
> > Feel free to reply this email
> >
> > Regards,
> > Jongyoul
> >
> > --
> > 이종열, Jongyoul Lee, 李宗烈
> > http://madeng.net
> >
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


[DISCUSS] Change some default settings for avoiding unintended usages

2017-11-29 Thread Jongyoul Lee
Hi, users and dev,

Recently, I've received an issue report about abnormal usage of some
interpreters. Zeppelin's users can access a shell through the shell and
python interpreters. This means all users can run or execute whatever they
want, even if it harms the system. Thus I think we need to change some
default settings to prevent this kind of abuse. Before we proceed, I want to
listen to others' opinions.

Feel free to reply to this email.

Regards,
Jongyoul

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Change Port Zeppelin to 80

2017-10-29 Thread Jongyoul Lee
Hi,

You can change "zeppelin.server.port" in conf/zeppelin-site.xml.
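For example, a sketch of the relevant property (copy conf/zeppelin-site.xml.template to conf/zeppelin-site.xml first if the file does not exist):

```xml
<!-- In conf/zeppelin-site.xml -->
<property>
  <name>zeppelin.server.port</name>
  <value>80</value>
</property>
```

Note that binding to a port below 1024 usually requires root privileges, so putting Zeppelin behind a reverse proxy on port 80 is a common alternative.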

JL

On Wed, Sep 13, 2017 at 12:44 AM, Carlos Andres Zambrano Barrera <
cza...@gmail.com> wrote:

> Hi,
>
> I want to change the default port to 80. Could anyone tell me which file I
> should change?
>
> I ask because, exploring the master node, I can see different directories
> with conf files.
>
> --
> Ing. Carlos Andrés Zambrano Barrera
> Cel: 3123825834
>
>
>
>
>
>
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Downloading files from the notebook

2017-10-29 Thread Jongyoul Lee
You can just print your notebook as a PDF. It might work well.

JL

On Tue, Oct 17, 2017 at 3:21 AM, Jeff Chung <jch...@cognitivemedicine.com>
wrote:

> I have a notebook that can create a PDF dynamically. I would like to allow
> a user to download the PDF. I tried storing the PDF in the notebook
> directory and creating an href link to
> http://localhost:8080/#/notebook/{notebook id}/mypdf.pdf
> but Zeppelin just displays the Zeppelin home instead of taking me to the
> file. How can I download a file from my notebook?
>
> Jeff
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: OpenAM LDAP integration with Zeppelin

2017-10-29 Thread Jongyoul Lee
Hi,

I'm not an expert on LDAP and Shiro, but AFAIK Shiro currently doesn't
support group-related features on LDAP.

I know of a reference setup that uses LDAP in Zeppelin by configuring all
groups manually.

Hope this helps,
Jongyoul Lee

On Tue, Oct 17, 2017 at 5:13 PM, Suresh Ayyavoo <sur...@iappsasia.com>
wrote:

> Hi All,
>
> Has anyone integrated OpenAM LDAP with Zeppelin 0.7.2?
> The groups are not honoured in Zeppelin roles. Any idea why?
>
> LoginRestApi.java[postLogin]:115) - {"status":"OK","message":"","
> body":{"principal":"admin","ticket":"81e0db18-a10a-454f-
> a9b1-c9fa41abc877","roles":"[]"}}
>
> --
>
> Suresh Ayyavoo
>
> Solution Architect / R Lead
>
> iAPPS Pte. Ltd.
>
> 3 Fusionopolis Way,
> Symbiosis #13-25 S(138633)
>
> [O] 64631795   [F] 6778 5300 [M] 91540224
>
> Website: www.iappsasia.com
>
> Facebook: www.facebook.com/iappsasia
>
> Youtube: www.youtube.com/user/iAPPSasia
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Read Hbase table in pyspark gives java.lang.ClassNotFoundException: org.apache.phoenix.jdbc.PhoenixDriver

2017-10-26 Thread Jongyoul Lee
Hi,

you can locate them under ${ZEPPELIN_HOME}/interpreter/spark/

The other option is to set the dependencies in the interpreter tab of the Spark
interpreter. See here:
http://zeppelin.apache.org/docs/0.7.3/manual/dependencymanagement.html

I'm not sure if sc.something works or not.
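
As a quick sanity check, a sketch along these lines can verify that the Phoenix client jar actually landed where the Spark interpreter looks (the ZEPPELIN_HOME default and the jar name pattern are assumptions; adjust them to your install):

```python
import glob
import os

# Assumed location; point ZEPPELIN_HOME at your actual install.
zeppelin_home = os.environ.get("ZEPPELIN_HOME", "/opt/zeppelin")
spark_interp_dir = os.path.join(zeppelin_home, "interpreter", "spark")

# org.apache.phoenix.jdbc.PhoenixDriver can only be loaded if a client jar
# such as phoenix-4.7.0-HBase-1.1-client.jar is visible on this path (or is
# declared as a dependency in the Spark interpreter settings).
found = glob.glob(os.path.join(spark_interp_dir, "phoenix-*-client.jar"))
print("Phoenix client jar present:", bool(found))
```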

Regards,
JL

On Thu, Oct 26, 2017 at 3:43 PM, Indtiny S <indt...@gmail.com> wrote:

> Hi,
>
> I have those libraries, but where should I place them so that Zeppelin
> can pick them up?
>
> Or is there any way to set the library path using the SparkContext, i.e. using
> sc?
>
>
> Regards
> In
>
> On Wed, Oct 25, 2017 at 9:22 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
> > Hi,
> >
> > I'm not sure but you can try to locate them under interpreter/spark if
> you
> > can do it
> >
> > JL
> >
> > On Wed, Oct 25, 2017 at 3:05 PM, Indtiny S <indt...@gmail.com> wrote:
> >
> > > Hi,
> > > I am trying to read Hbase tables in pyspark data frame,
> > > I am using the below code
> > > but I am getting the ClassNotFoundException error
> > >
> > >  df=sqlContext.read.format('jdbc').options(driver="org.
> > > apache.phoenix.jdbc.PhoenixDriver",url='jdbc:
> > > phoenix:localhost:2181:/hbase-unsecure',dbtable='table_name').load()
> > >
> > >
> > > java.lang.ClassNotFoundException: org.apache.phoenix.jdbc.
> PhoenixDriver
> > >
> > >
> > > I have the libraries phoenix-spark-4.7.0-HBase-1.1.jar and
> > > phoenix-4.7.0-HBase-1.1-client.jar but dont know where to place them .
> > >
> > >
> > > I am using zeppelin 0.7.0
> > >
> > >
> > > Rgds
> > >
> > > In
> > >
> > >
> > >
> > >
> > >
> > >
> >
> >
> > --
> > 이종열, Jongyoul Lee, 李宗烈
> > http://madeng.net
> >
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Read Hbase table in pyspark gives java.lang.ClassNotFoundException: org.apache.phoenix.jdbc.PhoenixDriver

2017-10-25 Thread Jongyoul Lee
Hi,

I'm not sure, but you can try placing them under interpreter/spark if you
can.

JL

On Wed, Oct 25, 2017 at 3:05 PM, Indtiny S <indt...@gmail.com> wrote:

> Hi,
> I am trying to read Hbase tables in pyspark data frame,
> I am using the below code
> but I am getting the ClassNotFoundException error
>
>  df=sqlContext.read.format('jdbc').options(driver="org.
> apache.phoenix.jdbc.PhoenixDriver",url='jdbc:
> phoenix:localhost:2181:/hbase-unsecure',dbtable='table_name').load()
>
>
> java.lang.ClassNotFoundException: org.apache.phoenix.jdbc.PhoenixDriver
>
>
> I have the libraries phoenix-spark-4.7.0-HBase-1.1.jar and
> phoenix-4.7.0-HBase-1.1-client.jar but dont know where to place them .
>
>
> I am using zeppelin 0.7.0
>
>
> Rgds
>
> In
>
>
>
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Interpreter %sq not found Zeppelin swallows last "l" for some reason...?

2017-07-10 Thread Jongyoul Lee
Thanks for telling me. I'll also test it with Chrome. Are you using it on
Windows? I've never heard of this issue before, so I'm just asking to find a
clue.

On Mon, 10 Jul 2017 at 17:10 Serega Sheypak <serega.shey...@gmail.com>
wrote:

> It was Chrome, probably  Version 59.0.3071.115 (Official Build) (64-bit)
> Can't reproduce same issue in Safari. Safari works fine.
>
> 2017-07-10 5:58 GMT+02:00 Jongyoul Lee <jongy...@gmail.com>:
>
>> Which browser do you use it?
>>
>> On Mon, Jun 26, 2017 at 11:49 PM, Serega Sheypak <
>> serega.shey...@gmail.com> wrote:
>>
>>> Hi, I get super weird exception:
>>>
>>> ERROR [2017-06-26 07:44:17,523] ({qtp2016336095-99}
>>> NotebookServer.java[persistAndExecuteSingleParagraph]:1749) - Exception
>>> from run
>>>
>>> org.apache.zeppelin.interpreter.InterpreterException:
>>> paragraph_1498480084440_1578830546's Interpreter %sq not found
>>>
>>> I have three paragraphs in my notebook
>>>
>>>
>>>
>>> %spark.dep
>>>
>>> z.load("my.local.jar.jar")
>>>
>>>
>>> %spark
>>>
>>> import com.myorg.SuperClass
>>>
>>> // bla-bla
>>>
>>> features.toDF().registerTempTable("features")
>>>
>>>
>>> %sql
>>>
>>> select f1, f2, count(*) as cnt from features;
>>>
>>>
>>> The last one gets this weird exception. Where did "l" go?
>>>
>>
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
> --
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Problem with MariaDB and jdbc

2017-07-10 Thread Jongyoul Lee
We DID provide support for two statements in a paragraph for a while, but this
feature looks broken.
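
A rough sketch of the splitting behavior involved, assuming naive semicolon handling (it ignores semicolons inside string literals, which the real interpreter must also worry about):

```python
def split_statements(paragraph):
    """Split a JDBC paragraph into individual SQL statements on ';'."""
    return [stmt.strip() for stmt in paragraph.split(";") if stmt.strip()]

print(split_statements("use testdb;\nshow tables;"))
# -> ['use testdb', 'show tables']
```

Note that `use testdb` may not persist between paragraphs if each run gets a fresh connection; qualifying the statement instead (e.g. `show tables from testdb`) sidesteps both problems.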

On Tue, 11 Jul 2017 at 03:14 <dar...@ontrenet.com> wrote:

> Best as i recall you can't have two sql statements in one zeppelin note.
> Try separating them.
>
> Get Outlook for Android <https://aka.ms/ghei36>
>
>
>
>
> On Wed, Jul 5, 2017 at 7:13 AM -0400, "Iavor Jelev" <
> iavor.je...@babelmonkeys.com> wrote:
>
> Hi everyone,
>>
>> first off - I'm new to Zeppelin, but I already love it. Great work on
>> the software!
>>
>> I'm running Zeppelin 0.6.2 in docker and expiriencing a strange issue:
>> There is a MariaDB in the same network, which I want to connect to. I
>> set up the jdbc-Interpreter as shown here (for the lack of a mariaDB
>> example in the 0.6.2 documentation, assuming it is the same):
>> https://zeppelin.apache.org/docs/0.7.1/interpreter/jdbc.html#mariadb
>>
>> Now, when I open a notebook and type the following, it works fine:
>>
>> %jdbc
>> show databases
>>
>> If I run the following, it works also:
>>
>> %jdbc
>> use testdb
>>
>> Here comes the strange part. If I run the following, I get an Exception:
>>
>> %jdbc
>> use testdb;
>> show tables;
>>
>> You have an error in your SQL syntax; check the manual that corresponds
>> to your MariaDB server version for the right syntax to use near 'show
>> tables' at line 2
>> Query is : use testdb;
>> show tables;
>> class java.sql.SQLSyntaxErrorException
>> org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:127)
>> org.mariadb.jdbc.internal.util.ExceptionMapper.throwException(ExceptionMapper.java:71)
>> org.mariadb.jdbc.MariaDbStatement.executeQueryEpilog(MariaDbStatement.java:226)
>> org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:260)
>> org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:273)
>> org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:322)
>> org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:408)
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
>> org.apache.zeppelin.scheduler.Job.run(Job.java:176)
>> org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> java.lang.Thread.run(Thread.java:745)
>>
>> Also - if I run 'use testdb', again a query which works by itself, and I
>> run 'show tables' in a new panel directly after, then I get:
>>
>> No database selected
>> Query is : show tables;
>> class java.sql.SQLException
>> org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:138)
>> org.mariadb.jdbc.internal.util.ExceptionMapper.throwException(ExceptionMapper.java:71)
>> org.mariadb.jdbc.MariaDbStatement.executeQueryEpilog(MariaDbStatement.java:226)
>> org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:260)
>> org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:273)
>> org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:322)
>> org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:408)
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
>> org.apache.zeppelin.scheduler.Job.run(Job.java:176)
>> org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> java.lang.Thread.run(Thread.java:745)
>>
>> Both approaches work on my local installation (which is zeppelin 0.7.2
>> with a local MySQL though, so not a fair comparison). Has anyone had
>> similar issues? Can anyone offer some advice on what might be going
>> wrong here?
>>
>>
>> Best regards,
>> Iavor
>>
>> --
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Reducing default interpreters while building and releasing Zeppelin

2017-07-10 Thread Jongyoul Lee
I haven't thought about downgrading, but at the least, we need to provide
a way to install interpreters from the UI. And do you think we need to remove
interpreters as well? We can unbind some existing interpreters and also
delete some interpreter settings.

On Mon, 10 Jul 2017 at 17:16 Jeff Zhang <zjf...@gmail.com> wrote:

>
> Does helium UI support interpreter erase/upgrade/downgrade?
>
> Jongyoul Lee <jongy...@gmail.com>于2017年7月10日周一 上午11:36写道:
>
>> Hi,
>>
>> I think it's time to move to the next step. Zeppelin's web site is
>> already changed. Zeppelin's default package also provide same interpreters
>> and include markdown and shell. How do you think of it?
>>
>> Regards,
>> Jongyoul
>>
>> On Sun, Jun 18, 2017 at 11:08 PM, Jongyoul Lee <jongy...@gmail.com>
>> wrote:
>>
>>> @Rafffele,
>>>
>>> After we discuss it with community, then will make a JIRA issue for
>>> handling it. :-)
>>>
>>> On Fri, Jun 16, 2017 at 1:28 PM, Park Hoon <1am...@gmail.com> wrote:
>>>
>>>> @Jeff > , we need to integrate the install script in zeppelin UI.
>>>>
>>>> AFAIK, helium has UI to list all available interpreters which are
>>>> registered in the central maven repo although this feature doesn't work
>>>> currently.
>>>>
>>>> We can improve this page is used to install interpreters which will be
>>>> not included in the default distribution.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> [image: Inline image 1]
>>>>
>>>>
>>>> On Wed, Jun 14, 2017 at 8:48 PM, Raffaele S <r.sagg...@gmail.com>
>>>> wrote:
>>>>
>>>>> I believe it's a good idea to select which interpreters to keep in the
>>>>> "default release", should we track this through JIRA?
>>>>>
>>>>> 2017-06-11 17:05 GMT+02:00 Jongyoul Lee <jongy...@gmail.com>:
>>>>>
>>>>>> Thanks, Alex.
>>>>>>
>>>>>> I left comments we started to discuss on it recently.
>>>>>>
>>>>>> On Sun, Jun 11, 2017 at 6:50 PM, Alexander Bezzubov <b...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey guys,
>>>>>>>
>>>>>>> great effort! I think people in few other communities will be very
>>>>>>> happy
>>>>>>> with it i.e [1] and [2].
>>>>>>>
>>>>>>> Is there an issue that tracks current status or something like that?
>>>>>>> Does
>>>>>>> anyone have concrete plans to work on it in this/next release?
>>>>>>>
>>>>>>> Sorry if I have missed that out. And please, keep up a good work!
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Alex
>>>>>>>
>>>>>>>
>>>>>>> 1. https://issues.apache.org/jira/browse/BIGTOP-2269
>>>>>>> 2.
>>>>>>>
>>>>>>> https://github.com/kubernetes/kubernetes/tree/master/examples/spark#known-issues-with-zeppelin
>>>>>>>
>>>>>>> On Mon, Jun 5, 2017, 06:54 Jongyoul Lee <jongy...@gmail.com> wrote:
>>>>>>>
>>>>>>> > Felix,
>>>>>>> > Yes, I said a bit confused. I want to release Zeppelin with some -
>>>>>>> not-all
>>>>>>> > - interpreters, but deploy all interpreters into maven to install
>>>>>>> them if
>>>>>>> > users want to use them.
>>>>>>> >
>>>>>>> > Moon,
>>>>>>> > I think it's the best to fit the list as same as homepage by
>>>>>>> default as it
>>>>>>> > makes users confused less. But if we want to add more
>>>>>>> interpreters, I think
>>>>>>> > mailing questions and related issues are one of the proper
>>>>>>> criteria.
>>>>>>> >
>>>>>>> > Jeff,
>>>>>>> > Agreed. We already had a menu but it just shows how to use
>>>>>>> > install-interpreter.sh.
>>>>>>> >
>>>>>>> > On Mon, Jun 5, 2017 at 9:

Re: Zeppelin / Flink

2017-07-09 Thread Jongyoul Lee
I'm not sure, but you may need to upgrade the Flink version in Zeppelin and
build it again.

On Sun, Jul 2, 2017 at 6:25 PM, Günter Hipler <guenter.hip...@unibas.ch>
wrote:

> Hi,
>
> I tried to establish a connection from Zeppelin to the latest Flink
> version (1.3.1)
>
> Using the latest flink version I got
>
> text: org.apache.flink.api.scala.DataSet[String] =
> org.apache.flink.api.scala.DataSet@69b8836f
> counts: org.apache.flink.api.scala.AggregateDataSet[(String, Int)] =
> org.apache.flink.api.scala.AggregateDataSet@5a46c8ba
> org.apache.flink.client.program.ProgramInvocationException: The program
> execution failed: Communication with JobManager failed: Lost connection to
> the JobManager.
>   at org.apache.flink.client.program.ClusterClient.run(ClusterCli
> ent.java:409)
>
> The same test job against the Flink 1.1.3 cluster is possible. (1.1.3 is
> actually part of the current Zeppelin distribution 0.7.2 and upcoming 0.8
> as Interpreter)
>
> Is there anybody who has experiences in trying to update the Zeppelin
> interpreter?
>
> Thanks a lot!
>
> Günter
>
>
> --
> Universität Basel
> Universitätsbibliothek
> Günter Hipler
> Projekt SwissBib
> Schoenbeinstrasse 18-20
> 4056 Basel, Schweiz
> Tel.: + 41 (0)61 267 31 12 Fax: ++41 61 267 3103
> E-Mail guenter.hip...@unibas.ch
> URL: www.swissbib.org  / http://www.ub.unibas.ch/
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Interpreter %sq not found Zeppelin swallows last "l" for some reason...?

2017-07-09 Thread Jongyoul Lee
Which browser are you using?

On Mon, Jun 26, 2017 at 11:49 PM, Serega Sheypak <serega.shey...@gmail.com>
wrote:

> Hi, I get super weird exception:
>
> ERROR [2017-06-26 07:44:17,523] ({qtp2016336095-99} NotebookServer.java[
> persistAndExecuteSingleParagraph]:1749) - Exception from run
>
> org.apache.zeppelin.interpreter.InterpreterException:
> paragraph_1498480084440_1578830546's Interpreter %sq not found
>
> I have three paragraphs in my notebook
>
>
>
> %spark.dep
>
> z.load("my.local.jar.jar")
>
>
> %spark
>
> import com.myorg.SuperClass
>
> // bla-bla
>
> features.toDF().registerTempTable("features")
>
>
> %sql
>
> select f1, f2, count(*) as cnt from features;
>
>
> The last one gets this weird exception. Where did "l" go?
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Error with Tutorial

2017-07-09 Thread Jongyoul Lee
If you set SPARK_HOME, please check that path first. If not, please show
your interpreter settings.
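
Separately from the NPE (which is thrown while creating the SparkContext, before any user code runs, so the interpreter settings are the first thing to check): in a Scala string literal the Windows path needs escaped backslashes ("D:\\Softs\\...") or forward slashes. The per-line parsing itself boils down to the following, sketched here in Python for clarity (the sample row is illustrative):

```python
def parse_bank_line(line):
    """Split one bank-full.csv line on ';', strip quotes, coerce numbers."""
    f = line.split(";")
    return {
        "age": int(f[0]),
        "job": f[1].strip('"'),
        "marital": f[2].strip('"'),
        "education": f[3].strip('"'),
        "balance": int(f[5].strip('"')),
    }

lines = [
    '"age";"job";"marital";"education";"default";"balance"',  # header
    '58;"management";"married";"tertiary";"no";2143',          # sample row
]
# Filter the header the same way the Scala code does: s(0) != "\"age\""
records = [parse_bank_line(l) for l in lines if l.split(";")[0] != '"age"']
print(records)  # -> [{'age': 58, 'job': 'management', ...}]
```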

Thanks,

On Wed, Jul 5, 2017 at 4:55 PM, CHALLA <bizcha...@gmail.com> wrote:

> code being executed
> import sys.process._
>
> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
>
> val bankText = sc.textFile("D:\Softs\zeppelin-0.7.2-bin-all\Bank\
> bank-full.csv")
>
> case class Bank(age:Integer, job:String, marital : String, education :
> String, balance : Integer)
>
> // split each line, filter out header (starts with "age"), and map it into
> Bank case class
> val bank = bankText.map(s=>s.split(";")).filter(s=>s(0)!="\"age\"").map(
> s=>Bank(s(0).toInt,
> s(1).replaceAll("\"", ""),
> s(2).replaceAll("\"", ""),
> s(3).replaceAll("\"", ""),
> s(5).replaceAll("\"", "").toInt
> )
> )
>
> // convert to DataFrame and create temporal table
> bank.toDF().registerTempTable("bank")
> 
>
> error is as below
>
> java.lang.NullPointerException
> at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
> at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
> at 
> org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:391)
>
> at 
> org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:380)
>
> at 
> org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
>
> at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:828)
>
> at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
>
> at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$
> InterpretJob.jobRun(RemoteInterpreterServer.java:491)
> at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
> at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.access$201(Unknown Source)
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> ==
> Please help
>
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Roadmap for 0.8.0

2017-07-09 Thread Jongyoul Lee
AFAIK, the ETA is the end of July.


On Fri, Jun 23, 2017 at 8:31 PM, mbatista <mario.bati...@nokia.com> wrote:

> Hi,
>
> What's the planned date for 0.8 Release?
>
> Thanks in advance.
>
>
>
>
> --
> View this message in context: http://apache-zeppelin-users-
> incubating-mailing-list.75479.x6.nabble.com/Roadmap-for-0-8-
> 0-tp5243p5828.html
> Sent from the Apache Zeppelin Users (incubating) mailing list mailing list
> archive at Nabble.com.
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Add jar JDBC

2017-06-11 Thread Jongyoul Lee
Sounds good

On Sun, Jun 11, 2017 at 10:33 PM, Andrés Ivaldi <iaiva...@gmail.com> wrote:

> thanks!!
> I'd misunderstand the interpreter part and I'd added a mysql interpreter
> instead of add the jar in the spark interpreter.
> Now it's working,  thanks again!!
>
> On Sat, Jun 10, 2017 at 11:21 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> You can add that jars through the interpreter tabs
>>
>> On Sun, Jun 11, 2017 at 7:34 AM, Andrés Ivaldi <iaiva...@gmail.com>
>> wrote:
>>
>>> Hello, I want to use Zeppelin embedded Spark with  MySql jdbc, but
>>> I'cant add the jar to the Spark classpath.
>>> I've tried adding the classpath to SPARK_SUBMIT_OPTIONS but doesn't work.
>>>
>>> Got error No suitable driver
>>>
>>> --
>>> Ing. Ivaldi Andres
>>>
>>
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
>
>
> --
> Ing. Ivaldi Andres
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Reducing default interpreters while building and releasing Zeppelin

2017-06-11 Thread Jongyoul Lee
Thanks, Alex.

I left comments we started to discuss on it recently.

On Sun, Jun 11, 2017 at 6:50 PM, Alexander Bezzubov <b...@apache.org> wrote:

> Hey guys,
>
> great effort! I think people in few other communities will be very happy
> with it i.e [1] and [2].
>
> Is there an issue that tracks current status or something like that? Does
> anyone have concrete plans to work on it in this/next release?
>
> Sorry if I have missed that out. And please, keep up a good work!
>
>
> --
>
> Alex
>
>
> 1. https://issues.apache.org/jira/browse/BIGTOP-2269
> 2.
> https://github.com/kubernetes/kubernetes/tree/master/
> examples/spark#known-issues-with-zeppelin
>
> On Mon, Jun 5, 2017, 06:54 Jongyoul Lee <jongy...@gmail.com> wrote:
>
> > Felix,
> > Yes, I said a bit confused. I want to release Zeppelin with some -
> not-all
> > - interpreters, but deploy all interpreters into maven to install them if
> > users want to use them.
> >
> > Moon,
> > I think it's the best to fit the list as same as homepage by default as
> it
> > makes users confused less. But if we want to add more interpreters, I
> think
> > mailing questions and related issues are one of the proper criteria.
> >
> > Jeff,
> > Agreed. We already had a menu but it just shows how to use
> > install-interpreter.sh.
> >
> > On Mon, Jun 5, 2017 at 9:36 AM, Jeff Zhang <zjf...@gmail.com> wrote:
> >
> >>
> >> If possible, we need to integrate the install script in zeppelin UI. As
> I
> >> would expect many users would ask why some interpreter is missing and
> how
> >> to install them.
> >>
> >>
> >>
> >> moon soo Lee <m...@apache.org>于2017年6月5日周一 上午2:06写道:
> >>
> >>> Following is last discussion related to release package size.
> >>>
> >>>
> >>> https://lists.apache.org/thread.html/69f606409790d7ba11422e8c6df941
> a75c5dfae0aca63eccf2f840bf@%3Cusers.zeppelin.apache.org%3E
> >>>
> >>> at this time, we have discussed about having bin-all (every
> >>> interpreters), bin-min (selected interpreters), bin-netinst (no
> >>> interpreters) package but didn't conclude the criteria and how we make
> a
> >>> decision.
> >>>
> >>> Jongyoul, do you have any idea about criteria?
> >>>
> >>> Thanks,
> >>> moon
> >>>
> >>> On Sun, Jun 4, 2017 at 10:47 AM Felix Cheung <
> felixcheun...@hotmail.com>
> >>> wrote:
> >>>
> >>>> Sure - I think it will be important to discuss what criteria to use to
> >>>> decide what is included vs what will be released separately.
> >>>>
> >>>> _
> >>>> From: Jongyoul Lee <jongy...@gmail.com>
> >>>>
> >>> Sent: Sunday, June 4, 2017 9:47 AM
> >>>> Subject: Re: [DISCUSS] Reducing default interpreters while building
> and
> >>>> releasing Zeppelin
> >>>> To: dev <d...@zeppelin.apache.org>
> >>>>
> >>> Cc: <users@zeppelin.apache.org>
> >>>>
> >>>
> >>>>
> >>>>
> >>>> It means we release with some interpreters and deploy all interpreters
> >>>> into
> >>>> maven separately. We already had a install-interpreter script inside
> >>>> it. If
> >>>> someone wants to install specific interpreter not included in default
> >>>> release package, they can use that script to install specific one.
> >>>>
> >>>> On Sun, Jun 4, 2017 at 9:11 AM, Felix Cheung <
> felixcheun...@hotmail.com
> >>>> >
> >>>> wrote:
> >>>>
> >>>> > Are we proposing some interpreters to be built and released
> >>>> separately?
> >>>> >
> >>>> > Is this going to be separate packaging? Or separate release
> pipeline?
> >>>> >
> >>>> >
> >>>> > _
> >>>> > From: Jongyoul Lee <jongy...@gmail.com<mailto:jongy...@gmail.com
> >>>> <jongy...@gmail.com>>>
> >>>> > Sent: Friday, June 2, 2017 11:04 PM
> >>>> > Subject: [DISCUSS] Reducing default interpreters while building and
> >>>> > releasing Zeppelin
> >>>>
> >>> > To: dev <d...@zeppelin.apache.org<mailto:d.

Re: Add jar JDBC

2017-06-10 Thread Jongyoul Lee
You can add those jars through the interpreter tab.

On Sun, Jun 11, 2017 at 7:34 AM, Andrés Ivaldi <iaiva...@gmail.com> wrote:

> Hello, I want to use Zeppelin's embedded Spark with the MySQL JDBC driver, but
> I can't add the jar to the Spark classpath.
> I've tried adding the classpath to SPARK_SUBMIT_OPTIONS, but it doesn't work.
>
> Got error No suitable driver
>
> --
> Ing. Ivaldi Andres
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Plot over a million points

2017-06-10 Thread Jongyoul Lee
Hi,

It's a known problem for now. AFAIK, Zeppelin will propose a TableData API,
which should help solve this kind of problem. Please wait a bit.

Regards,
JL

On Fri, Jun 9, 2017 at 7:13 PM, Raffaele S <r.sagg...@gmail.com> wrote:

> Hello,
>
> I am currently working  on time series using a Spark infrastructure and I
> am trying to display multiple time series (millions of points) using
> Zeppelin.
>
> The first thing I tried is to increase *zeppelin.spark.maxResult* value
> but this leads to high memory usage, lag and crashes.
>
> Is there a way to display these points interactively using Zeppelin?
>
>
> Thanks.
>
> Raffaele
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Multi tenancy - isolated interpreters block each other?

2017-06-09 Thread Jongyoul Lee
Hi,

Thanks for reporting this bug. Can you make a jira issue and describe how
to reproduce it?

Thanks,
Jongyoul

On Tue, Jun 6, 2017 at 9:35 AM, Wade Jensen <wade.jen...@quantium.com.au>
wrote:

> Hello,
>
>
>
> I have been experimenting with multi user modes with 0.7.1 in EMR. I set
> the Spark interpreter scope to be isolated per user, and then ran two
> different users side by side in different browser tabs. All the privacy and
> sharing features worked, but whenever I tried to run a spark sql query in
> both notes at the same time, one query will hang at 0% until the other is
> completed. (Note the queries where on different data, so it is not that the
> data was locked by one query).
>
>
>
> Sorry if this is documented explicitly somewhere, but is this expected
> behaviour? Or should I be able to run multiple queries with separate
> interpreters in parallel?
>
>
>
> I looked in the YARN scheduler GUI and could see two separate Spark
> processes, so I would have thought this should work.
>
>
>
> Kind regards,
>
> Wade Jensen
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Release 0.7.2

2017-06-04 Thread Jongyoul Lee
I saw another mailing thread now. Ignore it.

On Mon, Jun 5, 2017 at 1:45 PM, Jongyoul Lee <jongy...@gmail.com> wrote:

> Hi all,
>
> We have umbrella issue for 0.7.2. See https://issues.apache.org/
> jira/browse/ZEPPELIN-2276
>
> I think it's almost done and there're some trivial issues which don't look
> like blockers. How about starting release process for 0.7.2? And I suggest
> moving remaining issues into 0.8.0.
>
> Regards,
> JL
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Reducing default interpreters while building and releasing Zeppelin

2017-06-04 Thread Jongyoul Lee
Felix,
Yes, what I said was a bit confusing. I want to release Zeppelin with some,
not all, interpreters, but deploy all interpreters to Maven so that users can
install them if they want.

Moon,
I think it's best to match the default list to the homepage, as that
confuses users less. But if we want to add more interpreters, I think
mailing-list questions and related issues are among the proper criteria.

Jeff,
Agreed. We already have a menu, but it just shows how to use
install-interpreter.sh.

On Mon, Jun 5, 2017 at 9:36 AM, Jeff Zhang <zjf...@gmail.com> wrote:

>
> If possible, we need to integrate the install script in zeppelin UI. As I
> would expect many users would ask why some interpreter is missing and how
> to install them.
>
>
>
> moon soo Lee <m...@apache.org>于2017年6月5日周一 上午2:06写道:
>
>> Following is last discussion related to release package size.
>>
>> https://lists.apache.org/thread.html/69f606409790d7ba11422e8c6df941
>> a75c5dfae0aca63eccf2f840bf@%3Cusers.zeppelin.apache.org%3E
>>
>> at this time, we have discussed about having bin-all (every
>> interpreters), bin-min (selected interpreters), bin-netinst (no
>> interpreters) package but didn't conclude the criteria and how we make a
>> decision.
>>
>> Jongyoul, do you have any idea about criteria?
>>
>> Thanks,
>> moon
>>
>> On Sun, Jun 4, 2017 at 10:47 AM Felix Cheung <felixcheun...@hotmail.com>
>> wrote:
>>
>>> Sure - I think it will be important to discuss what criteria to use to
>>> decide what is included vs what will be released separately.
>>>
>>> _
>>> From: Jongyoul Lee <jongy...@gmail.com>
>>>
>> Sent: Sunday, June 4, 2017 9:47 AM
>>> Subject: Re: [DISCUSS] Reducing default interpreters while building and
>>> releasing Zeppelin
>>> To: dev <d...@zeppelin.apache.org>
>>>
>> Cc: <users@zeppelin.apache.org>
>>>
>>
>>>
>>>
>>> It means we release with some interpreters and deploy all interpreters
>>> into
>>> maven separately. We already had a install-interpreter script inside it.
>>> If
>>> someone wants to install specific interpreter not included in default
>>> release package, they can use that script to install specific one.
>>>
>>> On Sun, Jun 4, 2017 at 9:11 AM, Felix Cheung <felixcheun...@hotmail.com>
>>> wrote:
>>>
>>> > Are we proposing some interpreters to be built and released separately?
>>> >
>>> > Is this going to be separate packaging? Or separate release pipeline?
>>> >
>>> >
>>> > _
>>> > From: Jongyoul Lee <jongy...@gmail.com<mailto:jongy...@gmail.com
>>> <jongy...@gmail.com>>>
>>> > Sent: Friday, June 2, 2017 11:04 PM
>>> > Subject: [DISCUSS] Reducing default interpreters while building and
>>> > releasing Zeppelin
>>>
>> > To: dev <d...@zeppelin.apache.org<mailto:d...@zeppelin.apache.org
>>> <d...@zeppelin.apache.org>>>, <
>>>
>>
>>> > users@zeppelin.apache.org<mailto:users@zeppelin.apache.org
>>> <users@zeppelin.apache.org>>>
>>> >
>>> >
>>> > Hi dev and users,
>>> >
>>>
>> > Recently, zeppelin.apache.org<http://zeppelin.apache.org> is being
>>>
>>
>>> > changed for increasing user experiences and convenience. I like this
>>> kind
>>> > of changes. I, however, saw some arguments that which interpreters we
>>> will
>>> > locate in the first page. I'd like to expand its argument to the
>>> package we
>>> > release.
>>> >
>>> > Current zeppelin packages exceed 700MB with default option because
>>> > Zeppelin tried to include all interpreters by default. It was good at
>>> the
>>> > early age but, nowadays, Zeppelin community suffer from the size
>>> because
>>> > ASF infra allows the package size under 500MB. So I'd like to reduce
>>> the
>>> > package size by reducing default packages.
>>> >
>>> > In case of rebuilding homepage, community proposed some criteria
>>> including
>>> > mailing list and # of question in stackoverflow. I think we can adapt
>>> same
>>> > criteria into release version of Zeppelin.
>>> >
>>> > To handle this kind of issue, I think consensus of community is the
>>> most
>>> > important factor. If someone wants to have an idea to deal with it,
>>> please
>>> > feel free to talk about it.
>>> >
>>> > Thanks,
>>> > Jongyoul Lee
>>> >
>>> > --
>>> > 이종열, Jongyoul Lee, 李宗烈
>>> > http://madeng.net
>>> >
>>> >
>>> >
>>>
>>>
>>> --
>>> 이종열, Jongyoul Lee, 李宗烈
>>> http://madeng.net
>>>
>>>
>>>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Release 0.7.2

2017-06-04 Thread Jongyoul Lee
Hi all,

We have umbrella issue for 0.7.2. See
https://issues.apache.org/jira/browse/ZEPPELIN-2276

I think it's almost done and there're some trivial issues which don't look
like blockers. How about starting release process for 0.7.2? And I suggest
moving remaining issues into 0.8.0.

Regards,
JL

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Reducing default interpreters while building and releasing Zeppelin

2017-06-04 Thread Jongyoul Lee
It means we release with some interpreters and deploy all interpreters to
Maven separately. We already have an install-interpreter script included. If
someone wants a specific interpreter that is not included in the default
release package, they can use that script to install it.

On Sun, Jun 4, 2017 at 9:11 AM, Felix Cheung <felixcheun...@hotmail.com>
wrote:

> Are we proposing some interpreters to be built and released separately?
>
> Is this going to be separate packaging? Or separate release pipeline?
>
>
> _________
> From: Jongyoul Lee <jongy...@gmail.com<mailto:jongy...@gmail.com>>
> Sent: Friday, June 2, 2017 11:04 PM
> Subject: [DISCUSS] Reducing default interpreters while building and
> releasing Zeppelin
> To: dev <d...@zeppelin.apache.org<mailto:d...@zeppelin.apache.org>>, <
> users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>>
>
>
> Hi dev and users,
>
> Recently, zeppelin.apache.org has been
> changing to improve user experience and convenience. I like these kinds of
> changes. However, I saw some arguments about which interpreters we should
> feature on the front page. I'd like to extend that argument to the package
> we release.
>
> The current Zeppelin package exceeds 700MB with the default options because
> Zeppelin tries to include all interpreters by default. That was fine in the
> early days, but nowadays the community suffers from the size because ASF
> infra only allows packages under 500MB. So I'd like to reduce the package
> size by reducing the default set of interpreters.
>
> When rebuilding the homepage, the community proposed some criteria including
> mailing-list activity and the number of questions on Stack Overflow. I think
> we can apply the same criteria to the release version of Zeppelin.
>
> To handle this kind of issue, I think community consensus is the most
> important factor. If anyone has an idea about how to deal with it, please
> feel free to talk about it.
>
> Thanks,
> Jongyoul Lee
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


[DISCUSS] Reducing default interpreters while building and releasing Zeppelin

2017-06-03 Thread Jongyoul Lee
Hi dev and users,

Recently, zeppelin.apache.org has been changing to improve user experience
and convenience. I like these kinds of changes. However, I saw some arguments
about which interpreters we should feature on the front page. I'd like to
extend that argument to the package we release.

The current Zeppelin package exceeds 700MB with the default options because
Zeppelin tries to include all interpreters by default. That was fine in the
early days, but nowadays the community suffers from the size because ASF
infra only allows packages under 500MB. So I'd like to reduce the package
size by reducing the default set of interpreters.

When rebuilding the homepage, the community proposed some criteria including
mailing-list activity and the number of questions on Stack Overflow. I think
we can apply the same criteria to the release version of Zeppelin.

To handle this kind of issue, I think community consensus is the most
important factor. If anyone has an idea about how to deal with it, please
feel free to talk about it.

Thanks,
Jongyoul Lee

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Task not serializable error when I try to cache the spark sql table

2017-06-02 Thread Jongyoul Lee
Hi,

Which version of spark do you use?

On Thu, Jun 1, 2017 at 10:44 AM, shyla deshpande <deshpandesh...@gmail.com>
wrote:

> Hello all,
>
> I am getting org.apache.spark.SparkException: Task not serializable error
> when I try to cache the spark sql table. I am using a UDF on a column of
> table and want to cache the resultant table . I can execute the paragraph
> successfully when there is no caching.
>
> Please help! Thanks
>
> UDF :
> def fn1(res: String): Int = {
>   100
> }
>  spark.udf.register("fn1", fn1(_: String): Int)
>
>
>spark
>   .read
>   .format("org.apache.spark.sql.cassandra")
>   .options(Map("keyspace" -> "k", "table" -> "t"))
>   .load
>   .createOrReplaceTempView("t1")
>
>
>  val df1 = spark.sql("SELECT  col1, col2, fn1(col3)   from t1" )
>
>  df1.createOrReplaceTempView("t2")
>
>spark.catalog.cacheTable("t2")
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Setting Note permission

2017-05-19 Thread Jongyoul Lee
Simply put, you'd better enable personalized mode at the top of the note.
Then one user's actions won't affect another user's.

Try it and leave comments.

Thanks,
Jongyoul Lee

On Tue, May 16, 2017 at 1:36 PM, shyla deshpande <deshpandesh...@gmail.com>
wrote:

> I want to know if this is possible. Works great for a single user but in a
> multi-user environment, we need more granular control on who can do what.
> The readers permission is not useful, because the user cannot execute or
> even change the display type
>
> Please share your experience how you are using in multi user environment.
>
> Thanks
>
> On Sun, May 14, 2017 at 9:53 PM, shyla deshpande <deshpandesh...@gmail.com
> > wrote:
>
>> How do I set permissions to a Note to do only the following :
>> 1. execute the paragraphs
>> 2. choose a different value from dynamic dropdown
>> 3. change display type from bar chart to tabular
>> 4. download the result data as a csv or tsv file
>>
>> I do no want the users to change the code or access Interpreter menu or
>> change configuration.
>>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Hive interpreter Error as soon as Hive query uses MapRed

2017-05-19 Thread Jongyoul Lee
Can you check whether your script works in a native Hive environment?

On Fri, May 19, 2017 at 10:20 AM, Meier, Alexander <
alexander.me...@t-systems-dmc.com> wrote:

> Hi list
>
> I’m trying to get a Hive interpreter correctly running on a CDH 5.7
> Cluster with Spark 1.6. Simple queries are running fine, but as soon as a
> query needs a MapRed tasks in order to complete, the query fails with:
>
> java.sql.SQLException: Error while processing statement: FAILED: Execution
> Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> at org.apache.hive.jdbc.HiveStatement.execute(
> HiveStatement.java:279)
> at org.apache.commons.dbcp2.DelegatingStatement.execute(
> DelegatingStatement.java:291)
> at org.apache.commons.dbcp2.DelegatingStatement.execute(
> DelegatingStatement.java:291)
> at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(
> JDBCInterpreter.java:580)
> at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(
> JDBCInterpreter.java:692)
> at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(
> LazyOpenInterpreter.java:95)
> at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$
> InterpretJob.jobRun(RemoteInterpreterServer.java:490)
> at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
> at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(
> FIFOScheduler.java:139)
> at java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> etc…
>
> I’ve got the interpreter set up as follows:
>
> Properties
> name                            value
> default.driver                  org.apache.hive.jdbc.HiveDriver
> default.url                     jdbc:hive2://[hostname]:1
> hive.driver                     org.apache.hive.jdbc.HiveDriver
> hive.url                        jdbc:hive2://[hostname]:1
> zeppelin.interpreter.localRepo  /opt/zeppelin/local-repo/2CJ4XM2Z4
>
> Dependencies
> artifact
> /opt/cloudera/parcels/CDH/lib/hive/lib/hive-jdbc.jar
> /opt/cloudera/parcels/CDH/lib/hive/lib/hive-service.jar
> /opt/cloudera/parcels/CDH/lib/hadoop/client/hadoop-common.jar
> /opt/cloudera/parcels/CDH/lib/hive/lib/hive-common.jar
> /opt/cloudera/parcels/CDH/lib/hive/lib/hive-metastore.jar
>
>
> Unfortunately I haven’t found any help googling around… anyone here with
> some helpful input?
>
> Best regards and many thanks in advance,
> Alex




-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: what causes InterpreterException: Host key verification failed

2017-05-11 Thread Jongyoul Lee
Great!

On Wed, May 10, 2017 at 5:29 AM, Yeshwanth Jagini <y...@yotabitesllc.com>
wrote:

> i am not able to recreate this problem.
>
> it is working now
>
> On Tue, May 9, 2017 at 1:22 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> It looks like an ssh problem. Can you tell me how you have set up the
>> Spark interpreter?
>>
>> On Fri, May 5, 2017 at 7:09 AM, Yeshwanth Jagini <y...@yotabitesllc.com>
>> wrote:
>>
>>> Any idea why i am running into this issue,
>>>
>>> INFO [2017-05-04 22:02:24,996] ({pool-2-thread-2}
>>> SchedulerFactory.java[jobFinished]:137) - Job paragraph_1423500779206_-
>>> 1502780787 finished by scheduler org.apache.zeppelin.interprete
>>> r.remote.RemoteInterpreter2A94M5J1Z256540498
>>>  INFO [2017-05-04 22:03:02,450] ({pool-2-thread-4}
>>> SchedulerFactory.java[jobStarted]:131) - Job paragraph_1423500779206_-
>>> 1502780787 started by scheduler org.apache.zeppelin.interprete
>>> r.remote.RemoteInterpreter2A94M5J1Z256540498
>>>  INFO [2017-05-04 22:03:02,451] ({pool-2-thread-4}
>>> Paragraph.java[jobRun]:362) - run paragraph 20150210-015259_1403135953
>>> using null org.apache.zeppelin.interpreter.LazyOpenInterpreter@5aa74c91
>>>  INFO [2017-05-04 22:03:02,451] ({pool-2-thread-4}
>>> RemoteInterpreterManagedProcess.java[start]:126) - Run interpreter
>>> process [/opt/zeppelin-0.7.1-bin-all/bin/interpreter.sh, -d,
>>> /opt/zeppelin-0.7.1-bin-all/interpreter/spark, -p, 37115, -u, bigdata,
>>> -l, /opt/zeppelin-0.7.1-bin-all/local-repo/2CFWU98CR]
>>>  INFO [2017-05-04 22:03:02,513] ({Exec Default Executor}
>>> RemoteInterpreterManagedProcess.java[onProcessComplete]:180) -
>>> Interpreter process exited 0
>>> ERROR [2017-05-04 22:03:02,980] ({pool-2-thread-4} Job.java[run]:188) -
>>> Job failed
>>> org.apache.zeppelin.interpreter.InterpreterException: Host key
>>> verification failed.
>>>
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreterMana
>>> gedProcess.start(RemoteInterpreterManagedProcess.java:143)
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProc
>>> ess.reference(RemoteInterpreterProcess.java:73)
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.ope
>>> n(RemoteInterpreter.java:258)
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.get
>>> FormType(RemoteInterpreter.java:423)
>>> at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormT
>>> ype(LazyOpenInterpreter.java:106)
>>> at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:387)
>>> at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
>>> at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(
>>> RemoteScheduler.java:329)
>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executor
>>> s.java:473)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFu
>>> tureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>>> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFu
>>> tureTask.run(ScheduledThreadPoolExecutor.java:292)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>>> Executor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>>> lExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:745)
>>> ERROR [2017-05-04 22:03:02,981] ({pool-2-thread-4}
>>> NotebookServer.java[afterStatusChange]:2050) - Error
>>> org.apache.zeppelin.interpreter.InterpreterException: Host key
>>> verification failed.
>>>
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreterMana
>>> gedProcess.start(RemoteInterpreterManagedProcess.java:143)
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProc
>>> ess.reference(RemoteInterpreterProcess.java:73)
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.ope
>>> n(RemoteInterpreter.java:258)
>>> at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.get
>>> FormType(RemoteInterpreter.java:423)
>>> at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormT
>>> ype(LazyOpenInterpreter.java:106)
>>> at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:387)
>>> at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
>>> at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(
>>> RemoteScheduler.java:32
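A note on the error above: "Host key verification failed" comes from the
non-interactive ssh session Zeppelin opens when launching the interpreter
process, and it usually means the target host's key is not yet in the
Zeppelin user's known_hosts. A possible fix, assuming Zeppelin connects over
ssh to a host named `interpreter-host` (a placeholder):

```shell
# Run as the user that starts Zeppelin; pre-accept the target host's key
# so the non-interactive ssh does not stop at the verification prompt.
ssh-keyscan -H interpreter-host >> ~/.ssh/known_hosts
```

Replace `interpreter-host` with whatever host Zeppelin actually connects to.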

Re: Spark-CSV - Zeppelin tries to read CSV locally in Standalon mode

2017-05-09 Thread Jongyoul Lee
Could you test if it works with spark-shell?

On Sun, May 7, 2017 at 5:22 PM, Sofiane Cherchalli <sofian...@gmail.com>
wrote:

> Hi,
>
> I have a standalone cluster, one master and one worker, running in
> separate nodes. Zeppelin is running is in a separate node too in client
> mode.
>
> When I run a notebook that reads a CSV file located in the worker
> node with the Spark-CSV package, Zeppelin tries to read the CSV locally and
> fails because the CSV is on the worker node and not on the Zeppelin node.
>
> Is this the expected behavior?
>
> Thanks.
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: what causes InterpreterException: Host key verification failed

2017-05-09 Thread Jongyoul Lee
eter.InterpreterException: Host key
> verification failed.
> , result: Host key verification failed.
>
>  INFO [2017-05-04 22:03:03,042] ({pool-2-thread-4} 
> SchedulerFactory.java[jobFinished]:137)
> - Job paragraph_1423500779206_-1502780787 finished by scheduler
> org.apache.zeppelin.interpreter.remote.RemoteInterpreter2A94M5J1Z256540498
>
>
> Thanks,
> Yeshwanth Jagini
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: spark.r interpreter becomes unresponsive after some time and R process quits silently

2017-05-09 Thread Jongyoul Lee
et as:
>>> spark.executor.memory   21g
>>> spark.driver.memory 21g
>>> spark.python.worker.memory   4g
>>> spark.sql.autoBroadcastJoinThreshold0
>>>
>>> I use Spark in stand-alone mode and it works perfectly. It also works
>>> correctly with Zeppelin but this is what happens:
>>> 1) Start zeppelin on the server using the command service zeppelin start
>>> 2) Connect to port 8080 using Mozilla Firefox from client
>>> 3) Insert username and password (I enabled Shiro authentication)
>>> 4) open a notebook
>>> 5) Execute the following code:
>>> %spark.r
>>> 2+2
>>> 6) The code runs correctly and I can see that R is currently running as
>>> a process.
>>> 7) Repeat steps 2-5 after some time (let’s say 2 or 3 hours) and
>>> Zeppelin remains forever on “Running” or, if the elapsed time is higher
>>> (for example 1 day) since the last run, it returns “Error”. The
>>> “time-to-be-unresponsive” seems to be random and unpredictable. Also, R is
>>> not present in the list of running processes. Spark session remains active
>>> because I can access Spark UI from port 4040 and the application name is
>>> “Zeppelin”, so it’s the Spark instance created by Zeppelin.
>>>
>>> I observed that sometimes I can simply restart the interpreter from
>>> Zeppelin UI, but many other times it doesn’t work and I have to restart
>>> Zeppelin ( service zeppelin restart ).
>>>
>>> This issue afflicts both 0.7.0 and 0.7.1 but I haven’t tried with
>>> previous versions. It also happens if Zeppelin isn’t installed as a service.
>>>
>>> I can’t provide more detail because I can’t see any error or warning in
>>> the logs... this is really strange.
>>>
>>> Thank you all.
>>> Kind regards
>>>  Pietro Pugni
>>>
>>
>>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Hive Reserve Keyword support

2017-05-09 Thread Jongyoul Lee
If it's possible for you to pass that property when you create the
connection, you can pass it by setting it in the interpreter settings.
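As a sketch of that approach: HiveServer2 JDBC URLs accept Hive configuration
overrides after a `?` separator, so the property could be appended to the
JDBC interpreter's URL on the interpreter setting page (hostname, port, and
database below are placeholders):

```
default.url    jdbc:hive2://[hostname]:10000/default?hive.support.sql11.reserved.keywords=false
```

Alternatively, quoting the column with backticks in the query itself
(`date`) avoids the reserved-keyword parse error without changing any
configuration.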

On Sat, Apr 29, 2017 at 4:25 PM, Dibyendu Bhattacharya <
dibyendu.bhattach...@gmail.com> wrote:

> Hi,
>
> I have a Hive Table which has a column named date. When I tried to query
> using Zeppelin %jdbc interpreter , I got bellow error.
>
>
> Error while compiling statement: FAILED: ParseException line 1:312 Failed
> to recognize predicate 'date'. Failed rule: 'identifier' in expression
> specification
> class org.apache.hive.service.cli.HiveSQLException
> org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:231)
> org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:217)
> org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:254)
> org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(
> JDBCInterpreter.java:322)
> org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(
> JDBCInterpreter.java:408)
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(
> LazyOpenInterpreter.java:94)
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$
> InterpretJob.jobRun(RemoteInterpreterServer.java:341)
> org.apache.zeppelin.scheduler.Job.run(Job.java:176)
> org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.
> run(ParallelScheduler.java:162)
>
>
> My query looks like this :
>
> select x,y,z from mytable where date = '2017-04-28'
>
> I believe it is failing because date is a reserved keyword. Is there any
> way I can set hive.support.sql11.reserved.keywords=false in Zeppelin?
>
> regards,
> Dibyendu
>
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: java.lang.NullPointerException on adding local jar as dependency to the spark interpreter

2017-05-09 Thread Jongyoul Lee
Can you attach your Spark interpreter's log file?

On Sat, May 6, 2017 at 12:53 AM, shyla deshpande <deshpandesh...@gmail.com>
wrote:

> Also, my local jar file that I want to add as dependency is a fat jar with
> dependencies.  Nothing works after I add my local fat jar, I get 
> *java.lang.NullPointerException
> for everything. Please help*
>
> On Thu, May 4, 2017 at 10:18 PM, shyla deshpande <deshpandesh...@gmail.com
> > wrote:
>
>> Adding the dependency by filling groupId:artifactId:version works good.
>> But when I add add a local jar file as the artifact , get
>> *ERROR java.lang.NullPointerException*. I see the local jar file being
>> added to local-repo, but I get the ERROR.
>>
>> Please help.
>>
>>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Unable to run Zeppelin Spark on YARN

2017-05-09 Thread Jongyoul Lee
Hi,

"--master yarn --deploy-mode client" will be overridden when create spark
context by SparkInterpreter In zeppelin. You have to set those values in
interpreter setting page
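As a sketch, the equivalent settings on the interpreter setting page would
look something like this (property names as used by the Spark interpreter in
0.7.x; the memory value is just an example):

```
master                 yarn-client
spark.executor.memory  4g
```

After editing the properties, restart the Spark interpreter so the new
master takes effect.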

Regards,
Jongyoul

On Fri, May 5, 2017 at 8:33 AM, Jianfeng (Jeff) Zhang <
jzh...@hortonworks.com> wrote:

>
> Could you try set yarn-client in interpreter setting page ?
>
>
> Best Regard,
> Jeff Zhang
>
>
> From: Yeshwanth Jagini <y...@yotabitesllc.com>
> Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
> Date: Friday, May 5, 2017 at 3:13 AM
> To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
> Subject: Unable to run Zeppelin Spark on YARN
>
> Hi we are running cloudera CDH 5.9.1 .
>
> while setting up zeppelin, i followed the documentation on website and
> specified following options
>
> export ZEPPELIN_JAVA_OPTS="-Dhadoop.version=2.6.0-cdh5.9.1"
>   # Additional jvm options. for example, export
> ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g -Dspark.cores.max=16"
>
> export SPARK_HOME="/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/lib/spark"
># (required) When it is defined, load it instead
> of Zeppelin embedded Spark libraries
> export SPARK_SUBMIT_OPTIONS="--master yarn --deploy-mode client"
> # (optional) extra options to pass to spark submit. eg)
> "--driver-memory 512M --executor-memory 1G".
> export SPARK_APP_NAME=Zeppelin # (optional) The
> name of spark application.
>
> export HADOOP_CONF_DIR=/etc/hadoop/conf #
> yarn-site.xml is located in configuration directory in HADOOP_CONF_DIR.
>
> export ZEPPELIN_IMPERSONATE_CMD='sudo -H -u ${ZEPPELIN_IMPERSONATE_USER}
> bash -c'   # Optional, when user want to run interpreter as end web
> user. eg) 'sudo -H -u ${ZEPPELIN_IMPERSONATE_USER} bash -c '
>
> when running spark notebook, spark-submit is running in local mode and i
> cannot see the application in yarn resource manager.
> is there any other configuration i am missing?
>
>
> Thanks,
> Yeshwanth Jagini
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: How to debug spark.dep job?

2017-04-29 Thread Jongyoul Lee
Hi,

All SparkInterpreter logs are stored in
logs/zeppelin-interpreter-spark-*.log.
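For example, to follow a running Spark interpreter's log from the Zeppelin
installation directory (assuming the default log location):

```shell
# Stream the Spark interpreter log as paragraphs run.
tail -f logs/zeppelin-interpreter-spark-*.log
```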

On Thu, Apr 27, 2017 at 11:14 PM, Serega Sheypak <serega.shey...@gmail.com>
wrote:

> Hi, it seems I was able to start Zeppelin. I have an in-house Artifactory,
> and I want Zeppelin to download my artifacts from it and use the
> classes in a Spark job afterwards.
>
> Notebook submission hangs on %spark.dep and never finishes. Zeppelin logs
> that the DepInterpreter job has been started. What is the right way to
> figure out what it tries to do?
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Livy in Zeppelin - architecture and deployment

2017-04-03 Thread Jongyoul Lee
Hi,

1) Yes, Zeppelin can communicate with a Spark cluster either through Livy or
directly.
2) No, Zeppelin doesn't install anything related to Livy.

0.7.1 looks better, and there's no migration issue when upgrading from an
older version. I strongly recommend updating.

Regards,
Jongyoul

On Mon, Apr 3, 2017 at 3:34 PM, Jean Georges Perrin <j...@jgp.net> wrote:

> Hi,
>
> Sorry if those are stupid questions, but could not find the answer.
>
> 1) Is Zeppelin exclusively using Livy to talk to Spark?
>
> 2) During the installation of Zeppelin, do you install/configure Livy?
>
> I use Zeppelin v0.5/v0.6 only to talk to RDBMS so far: I am trying to
> connect the dots with Spark.
>
> Thanks!
>
> jg




-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Few suggestions and issues with zeppelin 0.7.0

2017-03-23 Thread Jongyoul Lee
Thanks,

Can you file a jira issue for it?

On Thu, Mar 23, 2017 at 9:32 PM, Meethu Mathew <meethu.mat...@flytxt.com>
wrote:

> Hi,
>
>
>- It's not possible to undo edits in a paragraph once it's executed, but
>it was possible in 0.6.0. Was this removed on purpose?
>
>
>- Similarly, it would be great if there is a way to undo a delete
>paragraph.
>
>
>- When we edit a paragraph, an orange line appears on the left side of
>the paragraph, which is expected to disappear once the content is
>auto-saved, but it remains even after the contents are saved. Please fix it.
>
>
>- The documentation at https://zeppelin.apache.org/
>docs/0.7.0/rest-api/rest-notebook.html#run-a-paragraph-synchronously
>
> <https://zeppelin.apache.org/docs/0.7.0/rest-api/rest-notebook.html#run-a-paragraph-synchronously>
>says the sample json error as
>
> {
>   "status": "INTERNAL_SERVER_ERROR",
>   "body": {
>     "code": "ERROR",
>     "type": "TEXT",
>     "msg": "bash: -c: line 0: unexpected EOF while looking for matching ``'\nbash: -c: line 1: syntax error: unexpected end of file\nExitValue: 2"
>   }
> }
>
>
> But it is actually coming back like:
> {
>   "status": "OK",
>   "body": {
>     "code": "SUCCESS",
>     "msg": [
>       {
>         "type": "TEXT",
>         "data": "hello world"
>       }
>     ]
>   }
> }
>
> I think it's an issue in the documentation.
>
> Regards,
> Meethu Mathew
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Roadmap for 0.8.0

2017-03-21 Thread Jongyoul Lee
I agree that prolonging the RC period and actually using the RC will help.
We also need a code freeze for new features and time to stabilize the RC.

On Tue, Mar 21, 2017 at 1:25 PM, Felix Cheung <felixcheun...@hotmail.com>
wrote:

> +1 on quality and stabilization.
>
> I'm not sure if releasing as preview or calling it unstable fits with the
> ASF release process though.
>
> Other projects have code freeze, RC (and longer RC iteration time) etc. -
> do we think those will help improve quality when the release is finally cut?
>
>
> _
> From: Jianfeng (Jeff) Zhang <jzh...@hortonworks.com>
> Sent: Monday, March 20, 2017 6:13 PM
> Subject: Re: Roadmap for 0.8.0
> To: <users@zeppelin.apache.org>, dev <d...@zeppelin.apache.org>
>
>
>
> Strongly +1 for adding system tests for the different interpreter modes and
> focusing on bug fixing rather than new features. I have heard some users
> complain about the bugs in Zeppelin major releases. A stabilized release is
> very necessary for the community.
>
>
>
>
> Best Regard,
> Jeff Zhang
>
>
> From: moon soo Lee <m...@apache.org>
> Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
> Date: Tuesday, March 21, 2017 at 4:10 AM
> To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>, dev <
> d...@zeppelin.apache.org>
>
> Subject: Re: Roadmap for 0.8.0
>
> Great to see discussion for 0.8.0.
> List of features for 0.8.0 looks really good.
>
> Interpreter factory refactoring
> The interpreter layer supports various behaviors depending on the
> combination of PerNote,PerUser / Shared,Scoped,Isolated. We'll need strong
> test cases for each combination as a first step.
> Otherwise, any pull request can silently break one of the behaviors at any
> time, whether we refactor or not. And fixing and testing this behavior is
> hard.
> Once we have complete test cases, we not only guarantee the behavior but
> also make refactoring much easier.
>
>
> 0.8.0 release
> I'd like to suggest improvements on how we release a new version.
>
> In the past, 0.6.0 and 0.7.0 were released with some critical problems (it
> took 3 months to stabilize 0.6, and we have been stabilizing 0.7.0 for 2
> months).
>
> I think the same thing will happen again with 0.8.0, since we're going to
> make lots of changes and add many new features.
> After we release 0.8.0, while we are 'stabilizing' the new release, users
> who try it may get a wrong impression of its quality. That is very bad, and
> we already repeated this mistake in 0.6.0 and 0.7.0.
>
> So from 0.8.0 release, I'd suggest we improve way we release new version
> to give user proper expectation. I think there're several ways of doing it.
>
> 1. Release 0.8.0-preview officially and then release 0.8.0.
> 2. Release 0.8.0 with 'beta' or 'unstable' label. And keep 0.7.x as a
> 'stable' release in the download page. Once 0.8.x release becomes stable
> enough make 0.8.x release as a 'stable' and move 0.7.x to 'old' releases.
>
>
> After 0.8.0,
> Since the Zeppelin project started, it has gone through some major
> milestones, like
>
> - project gets first users and first contributor
> - project went into Apache Incubator
> - project became TLP.
>
> And I think it's time to think about hitting another major milestone.
>
> Considering the features we already have, the features we're planning for
> 0.8, and the wide adoption of Zeppelin in industry, I think it's time to
> focus on making the project more mature and making a 1.0 release, which I
> think is a big milestone for the project.
>
> After the 0.8.0 release, I suggest we focus more on bug fixes, stability
> improvements, and optimizing the user experience than on adding new
> features. And with the subsequent minor releases, 0.8.1, 0.8.2, ..., the
> moment we feel confident about the quality, release it as 1.0.0 instead of
> 0.8.x.
>
> Once we have 1.0.0 released, then I think we can make larger, experimental
> changes on 2.0.0 branch aggressively, while we keep maintaining 1.0.x
> branch.
>
>
> Thanks,
> moon
>
> On Mon, Mar 20, 2017 at 8:55 AM Felix Cheung <felixcheun...@hotmail.com<
> mailto:felixcheun...@hotmail.com <felixcheun...@hotmail.com>>> wrote:
> There are several pending visualization improvements/PRs that would be
>

Re: Zeppelin should support standard protocols for authN and AuthZ

2017-03-20 Thread Jongyoul Lee
Hi,

Can you explain your idea in more detail?



On Mon, Mar 20, 2017 at 7:02 PM, mbatista <mario.bati...@nokia.com> wrote:

> In order to make Zeppelin more easy to integrate in the modern cloud
> environments where authentication and authorization are done by having a
> centralized server for all the apps, Zeppelin shall support standard
> protocols for IAM purposes.
>
> Regarding authentication
>
> -OpenId connect protocol
>
> Authorization
>
> -UMA protocol (user access management), which is a OAuth2.0 profile.
>
> This allows Resources owners to write their access control policies on the
> Authorization server and make the policy enforcement point in Zeppelin
> itself, for instance.
>
> A common language for policy expression can be XACML or the emerging ALFA
> language.
>
>
>
>
>
> --
> View this message in context: http://apache-zeppelin-users-
> incubating-mailing-list.75479.x6.nabble.com/Zeppelin-should-
> support-standard-protocols-for-authN-and-AuthZ-tp5247.html
> Sent from the Apache Zeppelin Users (incubating) mailing list mailing list
> archive at Nabble.com.
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Roadmap for 0.8.0

2017-03-20 Thread Jongyoul Lee
Thanks for letting me know. I agree with almost all of the things we should
develop. Personally, concerning the refactoring, I'm doing a bit of it with
several PRs, but we need to restructure InterpreterFactory. First, let's list
all the issues, group them, and handle them one by one. What do you think?

On Mon, Mar 20, 2017 at 2:12 PM, Jeff Zhang <zjf...@gmail.com> wrote:

>
>
> Here's some candidates for 0.8 IMO
>
>- Restructuring InterpreterFactory https://issues.apache.org/
>jira/browse/ZEPPELIN-2056
>Although it is a refactoring ticket, I feel it is a pretty important
>thing to do. I see that many bugs come from the interpreter factory
>component, and I do feel it needs refactoring.
>- Admin Feature https://issues.apache.org/jira/browse/ZEPPELIN-2236
>- User Level Interpreter Setting https://issues.apache.org/
>jira/browse/ZEPPELIN-1338
>- Interpreter Lifecycle Control https://issues.apache.
>org/jira/browse/ZEPPELIN-2197
>
>
>
> Jongyoul Lee <jongy...@gmail.com>于2017年3月20日周一 下午12:03写道:
>
>> Hi dev & users,
>>
>> Recently, the community has submitted many new features for Apache
>> Zeppelin. I think these are very positive signals for improving Apache
>> Zeppelin and its community. But from another aspect, we should focus on
>> what the next release will include. I think we need to summarize and
>> prioritize them. Here is what I know:
>>
>> * Cluster management
>> * Admin feature
>> * Replace some contexts to separate users
>> * Helium online
>>
>> Feel free to speak up if you want to add more things. I think we also need
>> to choose which features will be included in 0.8.0.
>>
>> Regards,
>> Jongyoul Lee
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Roadmap for 0.8.0

2017-03-19 Thread Jongyoul Lee
Hi dev & users,

Recently, the community has submitted many new features for Apache Zeppelin.
I think these are very positive signals for improving Apache Zeppelin and its
community. But from another aspect, we should focus on what the next release
will include. I think we need to summarize and prioritize them. Here is what
I know:

* Cluster management
* Admin feature
* Replace some contexts to separate users
* Helium online

Feel free to speak up if you want to add more things. I think we also need to
choose which features will be included in 0.8.0.

Regards,
Jongyoul Lee

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Release on 0.7.1 and 0.7.2

2017-03-13 Thread Jongyoul Lee
Hi dev and users,

After we released 0.7.0, many users and devs reported critical bugs. For that
reason, the community, including me, started to prepare a new minor release
with an umbrella issue[1]. Thanks to contributors' efforts, we have resolved
some of the issues and reviewed almost all of the unresolved ones. I want to
talk about the new minor release at this point. Generally, we resolve all
issues reported as bugs before we release, but some issues are very critical
and cause serious problems when using Apache Zeppelin. So this time, I think
it's better to release 0.7.1 as soon as we can and prepare another minor
release with the rest of the unresolved issues.

I'd like to start the process this Friday; if some issues are not merged by
then, I hope they will be included in 0.7.2.

Feel free to talk to me if you have a better plan for improving the user
experience.

Regards,
Jongyoul Lee

[1] https://issues.apache.org/jira/browse/ZEPPELIN-2134

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Behaviour of Interpreter when it is restarted

2017-02-26 Thread Jongyoul Lee
Hi Jagat,

I'd love to hear how users use Apache Zeppelin in the real world. It always
helps to figure out what we have to focus on, to find out what the critical
issues are, and to set the future roadmap. Can you share with us - or me -
the detailed issues bothering you?

On Wed, Feb 22, 2017 at 5:28 PM, Jagat Singh <ja...@jagatsingh.com> wrote:

> We are implementing the exact same use case today, trying out a central
> shared Zeppelin 0.7 instance in our organisation.
>
> As members of the admin team, we have locked down the interpreter page and
> published a set of settings which we have configured in the backend.
>
> If Interpreter settings are per user basis then only we will give the
> ability to end user to change what he needs , and behaviour you mentioned
> about restart affecting only that user job seems a good idea.
>
> One of the issue we are facing is lifecycle of interpreters , zombie
> Zeppelin processes which are left on the node where Zeppelin is installed
> and on cluster.  How does restart of interpreter by end users and people
> from admin team affect those processes ?
>
> Another issue to take care is which of the change will need Zeppelin server
> to be restarted as a whole ?
>
> I can write detailed note on issues we are having with running Zeppelin in
> enterprise if you are ready to take some feedback in future backlog.
>
>
>
>
> On 22 February 2017 at 19:17, Prabhjyot Singh <prabhjyotsi...@apache.org>
> wrote:
>
> > This is WRT the PR that I've created https://github.com/
> > apache/zeppelin/pull/2034.
> >
> > The issue that I want to discuss over here is how should an Interpreter
> > behave when it is;
> >  - restarted from notebook
> >  - restarted from Interpreter setting page
> >  - edited from Interpreter setting page
> >  - deleted from Interpreter setting page
> >
> >
> > Assuming Zeppelin is being used in Enterprise world, where not all user
> > may have access to Zeppelin's Interpreter setting page, say only
> restricted
> > user say "admin-group" have access to this page. Now when a restart, edit
> > or delete action is performed from Interpreter setting page; any of this
> > operation should terminate all the processes of that particular
> > Interpreter. On the other hand if it is restarted from the notebook page
> by
> > any User, then only process of that logged-in User should get affected.
> >
> > How do you guys think of it?
> >
> > --
> >
> > Warm Regards,
> >
> > Prabhjyot Singh
> >
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


[DISCUSS] Admin feature

2017-02-22 Thread Jongyoul Lee
Hi folks,

Recently, I've heard about some proposed features that assume an admin
account or a similar role exists. But Apache Zeppelin doesn't have any admin
features, like hiding/showing menus and settings. I want to know how the
community thinks about such a feature.

My first concern is that we have to consider two modes: anonymous and
authenticated.

Feel free to start the discussion on pros and cons.

Regards,
Jongyoul

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Real-time Updated Plots with Kafka

2017-02-22 Thread Jongyoul Lee
It's interesting. :-)

On Thu, Feb 23, 2017 at 11:52 AM, Chaoran Yu <yuchaoran2...@gmail.com>
wrote:

> Hello guys,
>
>I'm working on visualization based on Zeppelin that displays data
> coming from Kafka. I'm wondering if it's possible to make my plots update
> in real time as data keep coming in from Kafka.
>
>For example, consider a simple program shown below:
>
>
> %spark
> import _root_.kafka.serializer.DefaultDecoder
> import _root_.kafka.serializer.StringDecoder
> import org.apache.spark.streaming.kafka.KafkaUtils
> import org.apache.spark.storage.StorageLevel
> import org.apache.spark.streaming._
>
> val ssc = new StreamingContext(sc, Seconds(2))
>
> val lines = KafkaUtils.createStream(ssc, zkQuorum, groupId,
> Map("test-topic" -> 1))
> val words = lines.map(x => x._1 + x._2)
> words.print()
>
> ssc.start()
>
>
> When I execute the Zeppelin cell containing the above code, it would only
> print out contents of words variable once and never update it again.
> I have to re-execute the cell to see an update. How do I make words update
> automatically so that I can use it later to generate plots that update
> automatically as well?
>
>
> Thank you,
>
> --
> Chaoran Yu
> University of California at Berkeley | May 2014
> B.S. Computer Science and Engineering
> Phone: (510) 542-7749
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Behaviour of Interpreter when it is restarted

2017-02-22 Thread Jongyoul Lee
Yes, we can set some roles via Shiro, but what I mean is that we haven't
implemented role-based functions like hiding or showing certain menus.

On Thu, Feb 23, 2017 at 12:17 PM, Prabhjyot Singh <prabhjyotsi...@apache.org
> wrote:

> Yes, agreed we need some components to manage lifecycle of interpreters.
>
>  > "I agree that we need to keep same behavior even though users restart
> in any place."
> I too agree we should have same behaviour.
>
>  > Thus we must not assume that "admin" exists.
> In Zeppelin we can do this https://github.com/apache/
> zeppelin/blob/master/conf/shiro.ini.template#L82 , and when this is
> enabled, that is the case which concerns me.
>
>
>
> On 22 February 2017 at 23:03, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> Basically, I agree that we need to keep same behavior even though users
>> restart in any place. I don't have any preference between restarting all
>> processes and starting user's process but currently Zeppelin doesn't have
>> any "admin" feature and no concept like admin by default. Thus we must not
>> assume that "admin" exists. And in my opinion, we need to treat
>> "edit/delete" action as special cases because these are disruptive thus all
>> interpreters should be shutdown.
>>
>> And as Jeff mentioned - I'm not sure if this issue is related or not -,
>> we need some components to manage lifecycle of interpreters.
>>
>> On Wed, Feb 22, 2017 at 6:18 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>>
>>>
>>> I think we can combine scenario 2 and 3 if user click yes button on the
>>> popup window of whether you want to restart interpreter in scenario 3.
>>>
>>> Regarding the restarting scenario of 1,2,3, IMHO I think we don't need
>>> to differentiate them. Otherwise it might confuse users that restarting in
>>> different places have different behavior. Zeppelin as a notebook should not
>>> do too much assumption and do too much extra work implicitly for users, let
>>> user to control what they want to do.
>>>
>>> IMHO I think the behavior of restarting should be just restarting the
>>> current user's interpreter and don't affect other users. If it's admin
>>> perform the restarting operation in interpreter setting page, I also think
>>> we should not restart all the users' interpreters by default. Because I
>>> think the admin's intention of updating interpreter setting is just to
>>> update the interpreter setting so that all the users can use the latest
>>> interpreter setting (e.g. update SPARK_HOME in spark interpreter setting.
>>> For now everyone share the same interpreter setting, but in the long term I
>>> think everyone should has his own setting that extend from admin's setting.
>>> But this is another story, not related to this thread ), admin doesn't want
>>> to interrupt and close the current other users' active interpreters. Of
>>> course, this is just my biased thinking, some customer may indeed want to
>>> close all the interpreters when admin perform restarting operation. Then we
>>> can provide one configuration in zeppelin-site to allow user to do that,
>>> but by default I think we should not allow admin to close all users' active
>>> interpreter.
>>>
>>> Delete is a very special scenario among them, for now I think we can
>>> terminate all the interpreter processes when interpreter is deleted.
>>> Because after interpreter is deleted, there's no way to shutdown the
>>> interpreter in zeppelin for now. If we don't close and shutdown them, then
>>> that means resource leakage.
>>>
>>> Besides these, another thing I want to mention is that there's no
>>> dedicated component or concept in zeppelin to control lifecycle of
>>> interpreter. E.g. for now if user don't restart interpreter, his
>>> interpreter will be alive forever. This is almost unacceptable for
>>> enterprise usage.  I think we should have some component to do that work to
>>> manage the lifecycle of interpreter.
>>>
>>>
>>>
>>>
>>> On Wed, Feb 22, 2017 at 4:17 PM, Prabhjyot Singh <prabhjyotsi...@apache.org> wrote:
>>>
>>>> This is WRT the PR that I've created
>>>> https://github.com/apache/zeppelin/pull/2034.
>>>>
>>>> The issue that I want to discuss over here is how should an Interpreter
>>>> behave when it is;
>>>>  - restarted from notebook
>>>>  - restarted from Interpreter setting page
>>>>  - edited from Interpre

Re: [DISCUSS] Behaviour of Interpreter when it is restarted

2017-02-22 Thread Jongyoul Lee
Basically, I agree that we need to keep same behavior even though users
restart in any place. I don't have any preference between restarting all
processes and starting user's process but currently Zeppelin doesn't have
any "admin" feature and no concept like admin by default. Thus we must not
assume that "admin" exists. And in my opinion, we need to treat
"edit/delete" action as special cases because these are disruptive thus all
interpreters should be shutdown.

And as Jeff mentioned - I'm not sure if this issue is related or not -, we
need some components to manage lifecycle of interpreters.

On Wed, Feb 22, 2017 at 6:18 PM, Jeff Zhang <zjf...@gmail.com> wrote:

>
> I think we can combine scenario 2 and 3 if user click yes button on the
> popup window of whether you want to restart interpreter in scenario 3.
>
> Regarding the restarting scenario of 1,2,3, IMHO I think we don't need to
> differentiate them. Otherwise it might confuse users that restarting in
> different places have different behavior. Zeppelin as a notebook should not
> do too much assumption and do too much extra work implicitly for users, let
> user to control what they want to do.
>
> IMHO I think the behavior of restarting should be just restarting the
> current user's interpreter and don't affect other users. If it's admin
> perform the restarting operation in interpreter setting page, I also think
> we should not restart all the users' interpreters by default. Because I
> think the admin's intention of updating interpreter setting is just to
> update the interpreter setting so that all the users can use the latest
> interpreter setting (e.g. update SPARK_HOME in spark interpreter setting.
> For now everyone share the same interpreter setting, but in the long term I
> think everyone should has his own setting that extend from admin's setting.
> But this is another story, not related to this thread ), admin doesn't want
> to interrupt and close the current other users' active interpreters. Of
> course, this is just my biased thinking, some customer may indeed want to
> close all the interpreters when admin perform restarting operation. Then we
> can provide one configuration in zeppelin-site to allow user to do that,
> but by default I think we should not allow admin to close all users' active
> interpreter.
>
> Delete is a very special scenario among them, for now I think we can
> terminate all the interpreter processes when interpreter is deleted.
> Because after interpreter is deleted, there's no way to shutdown the
> interpreter in zeppelin for now. If we don't close and shutdown them, then
> that means resource leakage.
>
> Besides these, another thing I want to mention is that there's no
> dedicated component or concept in zeppelin to control lifecycle of
> interpreter. E.g. for now if user don't restart interpreter, his
> interpreter will be alive forever. This is almost unacceptable for
> enterprise usage.  I think we should have some component to do that work to
> manage the lifecycle of interpreter.
>
>
>
>
> On Wed, Feb 22, 2017 at 4:17 PM, Prabhjyot Singh <prabhjyotsi...@apache.org> wrote:
>
>> This is WRT the PR that I've created
>> https://github.com/apache/zeppelin/pull/2034.
>>
>> The issue that I want to discuss over here is how should an Interpreter
>> behave when it is;
>>  - restarted from notebook
>>  - restarted from Interpreter setting page
>>  - edited from Interpreter setting page
>>  - deleted from Interpreter setting page
>>
>>
>> Assuming Zeppelin is being used in Enterprise world, where not all user
>> may
>> have access to Zeppelin's Interpreter setting page, say only restricted
>> user say "admin-group" have access to this page. Now when a restart, edit
>> or delete action is performed from Interpreter setting page; any of this
>> operation should terminate all the processes of that particular
>> Interpreter. On the other hand if it is restarted from the notebook page
>> by
>> any User, then only process of that logged-in User should get affected.
>>
>> How do you guys think of it?
>>
>> --
>>
>> Warm Regards,
>>
>> Prabhjyot Singh
>>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: NoSuchMethodException in SQL

2017-02-16 Thread Jongyoul Lee
Which version of Spark do you use? I've found that the current version
supports "lz4" by default, so it looks like you don't have to set anything.
If you want another compression type, you can set the configuration in the
interpreter tab instead of using `conf.set`. Actually, that call doesn't
affect anything because Apache Zeppelin's SparkInterpreter creates the
SparkContext before you set the configuration. Thus you should use the
interpreter tab to apply configuration to the SparkContext.
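
To illustrate (the key is Spark's standard `spark.io.compression.codec`
property; the value is just an example), the entry belongs in the spark
interpreter's properties table on the Interpreter page rather than in
notebook code:

```
spark.io.compression.codec   lz4
```

This sidesteps the problem that `conf.set` in a paragraph runs only after
the SparkContext already exists.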

Hope this helps,
Jongyoul

On Fri, Feb 17, 2017 at 12:52 AM, Muhammad Rezaul Karim <
reza_cse...@yahoo.com> wrote:

> Hi Lee,
>
> Thanks for the info that really helped. I set the compression codec in
> the Spark side -i.e. inside the SPARK_HOME and now the problem resolved.
> However, I was wondering if it's possible to set the same from the Zeppelin
> notebook.
>
> I tried in the following way:
>
> %spark conf.set("spark.io.compression.codec", "lz4")
>
> But getting an error. Please suggest.
>
>
>
> On Thursday, February 16, 2017 7:40 AM, Jongyoul Lee <jongy...@gmail.com>
> wrote:
>
>
> Hi, Can you check if the script passes in spark-shell or not? AFAIK, you
> have to add compression codec by yourself in Spark side.
>
> On Wed, Feb 15, 2017 at 1:10 AM, Muhammad Rezaul Karim <
> reza_cse...@yahoo.com> wrote:
>
> Hi All,
>
> I am receiving the following exception while executing SQL queries:
>  java.lang.NoSuchMethodException: org.apache.spark.io.LZ4CompressionCodec.<init>(org.apache.spark.SparkConf)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:71)
> at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
> at org.apache.spark.sql.execution.SparkPlan.org$apache$spark$sql$execution$SparkPlan$$decodeUnsafeRows(SparkPlan.scala:250)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeCollect$1.apply(SparkPlan.scala:276)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeCollect$1.apply(SparkPlan.scala:275)
> at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
> at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:78)
> at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:75)
> at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:94)
> at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:74)
> at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:74)
> at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
>
> *My SQL query is: *
> %sql  select * from land where Price >= 10000 AND CLUSTER = 2
>
> I am experiencing the above exception in the 1st run always but when I
> re-execute the same query for the 2nd or 3rd time, I don't get this error.
>
> Am I doing something wrong? Someone, please help me out.
>
>
>
>
>
> Kinds regards,
> Reza
>
>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Zeppelin Books

2017-02-15 Thread Jongyoul Lee
There's no book for Apache Zeppelin yet.

On Tue, Feb 14, 2017 at 11:44 PM, Muhammad Rezaul Karim <
reza_cse...@yahoo.com> wrote:

>
> Hi All,
>
> Could anyone suggest me some recent books on Apache Zeppelin?
>
>
> Kind regards,
> Reza
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [VOTE] Release Apache Zeppelin 0.7.0 (RC4)

2017-02-04 Thread Jongyoul Lee
+1 (binding)

On Sat, Feb 4, 2017 at 12:41 PM, moon soo Lee <m...@apache.org> wrote:

> Verified
>  - checksum and signature for release artifacts
>  - build from source
>  - LICENSE source/binary release
>  - source release does not have unexpected binary
>  - binary package functioning
>
> +1 (binding)
>
> On Fri, Feb 3, 2017 at 1:28 PM Hyung Sung Shim <hss...@nflabs.com> wrote:
>
> > +1
> > Thanks mina for your effort.
> >
> > 2017-02-03 11:58 GMT+09:00 Renjith Kamath <renjith.kam...@gmail.com>:
> >
> > +1
> >
> > On Fri, Feb 3, 2017 at 8:01 AM, Prabhjyot Singh <
> prabhjyotsi...@apache.org
> > >
> > wrote:
> >
> > > +1
> > >
> > > On Feb 2, 2017 8:25 PM, "Alexander Bezzubov" <b...@apache.org> wrote:
> > >
> > >> +1,
> > >>
> > >> and thank you for an awesome work Mina!
> > >> Your persistence in making RCs and incorporating feedback is
> admirable.
> > >>
> > >> Verified:
> > >>  - checksums, signatures + keys for sources and bin-all
> > >>  - bin-all can run all Spark Zeppelin Tutorial in local mode
> > >>  - sources do compile, but only without tests.
> > >>Build \w tests fails on zeppelin-zengine for me
> > >>
> > >> 1) Failed tests:
> > >>   NotebookTest.testSchedulePoolUsage:397 expected: but
> > >> was:
> > >>
> > >> 2) frontend build on Linux also failed mysteriously executing yarn
> > >> command, most probably due to local env configuration.
> > >>
> > >>
> > >> --
> > >> Alex
> > >>
> > >> On Thu, Feb 2, 2017 at 5:28 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> > >>
> > >>> +1
> > >>>
> > >>> On Thu, Feb 2, 2017 at 9:36 PM, Ahyoung Ryu <ahyoung...@apache.org> wrote:
> > >>>
> > >>> +1
> > >>>
> > >>> On Thu, Feb 2, 2017 at 10:07 PM, Jun Kim <i2r@gmail.com> wrote:
> > >>>
> > >>> +1
> > >>> On Thu, Feb 2, 2017 at 9:49 PM, Sora Lee <sora0...@zepl.com> wrote:
> > >>>
> > >>> +1
> > >>>
> > >>> On Thu, Feb 2, 2017 at 9:40 PM Khalid Huseynov <khalid...@zepl.com>
> > >>> wrote:
> > >>>
> > >>> > +1
> > >>> >
> > >>> > On Thu, Feb 2, 2017 at 9:21 PM, DuyHai Doan <doanduy...@gmail.com>
> > >>> wrote:
> > >>> >
> > >>> > +1
> > >>> >
> > >>> > On Thu, Feb 2, 2017 at 9:56 AM, Mina Lee <mina...@apache.org>
> wrote:
> > >>> >
> > >>> > > I propose the following RC to be released for the Apache Zeppelin
> > >>> 0.7.0
> > >>> > > release.
> > >>> > >
> > >>> > > The commit id is df007f2284a09caa7c8b35f8b59d5f1993fe8b64 which
> is
> > >>> > > corresponds to the tag v0.7.0-rc4:
> > >>> > > *
> >
> > >>> > https://git-wip-us.apache.org/repos/asf?p=zeppelin.git;a=sho
> > >>> rtlog;h=refs/tags/v0.7.0-rc4
> > >>> > > <
> > >>> > https://git-wip-us.apache.org/repos/asf?p=zeppelin.git;a=sho
> >
> > >>> rtlog;h=refs/tags/v0.7.0-rc4
> > >>>
> > >>> > >*
> > >>> > >
> > >>> > > The release archives (tgz), signature, and checksums are here
> > >>> > >
> > https://dist.apache.org/repos/dist/dev/zeppelin/zeppelin-0.7.0-rc4/
> > >>> > >
> > >>> > > The release candidate consists of the following source
> distribution
> > >>> > > archive
> > >>> > > zeppelin-0.7.0.tgz
> > >>> > >
> > >>> > > In addition, the following supplementary binary distributions are
> > >>> > provided
> > >>> > > for user convenience at the same location
> > >>> > > zeppelin-0.7.0-bin-all.tgz
> > >>> > > zeppelin-0.7.0-bin-netinst.tgz
> > >>> > >
> > >>> > > The maven artifacts are here
> > >>> > >
> > >>> > https://repository.apache.org/content/repositories/orgapache
> > >>> zeppelin-1027
> > >>> > >
> > >>> > > You can find the KEYS file here:
> > >>> > > https://dist.apache.org/repos/dist/release/zeppelin/KEYS
> > >>> > >
> > >>> > > Release notes available at
> > >>> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > >>> > > version=12336544&projectId=12316221
> > >>> > >
> > >>> > > Vote will be open for next 72 hours (close at 01:00 5/Feb PST).
> > >>> > >
> > >>> > > [ ] +1 approve
> > >>> > > [ ] 0 no opinion
> > >>> > > [ ] -1 disapprove (and reason why)
> > >>> > >
> > >>> >
> > >>> >
> > >>> >
> > >>>
> > >>> --
> > >>> Taejun Kim
> > >>>
> > >>> Data Mining Lab.
> > >>> School of Electrical and Computer Engineering
> > >>> University of Seoul
> > >>>
> > >>>
> > >>>
> > >>
> >
> >
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Monitoring Zeppelin Health Via Rest API

2017-01-25 Thread Jongyoul Lee
I think it's a good idea, and we need to add more beans to be monitored, too.

On Thu, Jan 26, 2017 at 9:45 AM, Vinay Shukla <vinayshu...@gmail.com> wrote:

> How about launching Zeppelin JVM with JConsole related config and trying
> to monitor JVM level stats?
>
> On Wed, Jan 25, 2017 at 9:12 AM, Rob Anderson <rockclimbings...@gmail.com>
> wrote:
>
>> Ok, thanks for the reply Jongyoul.
>>
>> On Wed, Jan 25, 2017 at 12:13 AM, Jongyoul Lee <jongy...@gmail.com>
>> wrote:
>>
>>> AFAIK, Zeppelin doesn't have it for now. We have to develop that
>>> function.
>>>
>>> Regards,
>>> Jongyoul
>>>
>>> On Wed, Jan 25, 2017 at 3:18 AM, Rob Anderson <
>>> rockclimbings...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> We're running Zeppelin 0.6.2  and authenticating against Active
>>>> Directory via Shiro.  Everything is working pretty well, however, we do
>>>> occasionally have issues, which is leading to a bad user experience, as
>>>> operationally we're unaware of a problem.
>>>>
>>>> We'd like to monitor the health of Zeppelin via the rest api, however,
>>>> I don't see a way to programmatically authenticate, so we can make the
>>>> calls.  Does anyone have any recommendations?
>>>>
>>>> Thanks,
>>>>
>>>> Rob
>>>>
>>>
>>>
>>>
>>> --
>>> 이종열, Jongyoul Lee, 李宗烈
>>> http://madeng.net
>>>
>>
>>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Monitoring Zeppelin Health Via Rest API

2017-01-24 Thread Jongyoul Lee
AFAIK, Zeppelin doesn't have it for now. We have to develop that function.

Regards,
Jongyoul
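
For scripting against a Shiro-protected instance, one common approach (a
sketch only — the /api/login endpoint and the userName/password form fields
are assumed from Zeppelin's REST API docs; host and credentials below are
placeholders) is to log in once and reuse the session cookie:

```shell
# Authenticate against Zeppelin's Shiro login endpoint and store the session
# cookie. Host, port, and credentials are placeholders for your deployment.
curl -s -c /tmp/zeppelin-cookies -X POST \
  -d 'userName=admin' -d 'password=secret' \
  http://localhost:8080/api/login || echo 'login request failed (is Zeppelin running?)'

# Reuse the cookie for subsequent checks, e.g. listing notes as a liveness probe.
curl -s -b /tmp/zeppelin-cookies http://localhost:8080/api/notebook \
  || echo 'notebook request failed (is Zeppelin running?)'
```

A non-2xx response or a timeout from either call can then be treated as an
unhealthy signal by the monitoring system.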

On Wed, Jan 25, 2017 at 3:18 AM, Rob Anderson <rockclimbings...@gmail.com>
wrote:

> Hello,
>
> We're running Zeppelin 0.6.2  and authenticating against Active Directory
> via Shiro.  Everything is working pretty well, however, we do occasionally
> have issues, which is leading to a bad user experience, as operationally
> we're unaware of a problem.
>
> We'd like to monitor the health of Zeppelin via the rest api, however, I
> don't see a way to programmatically authenticate, so we can make the
> calls.  Does anyone have any recommendations?
>
> Thanks,
>
> Rob
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [VOTE] Release Apache Zeppelin 0.7.0 (RC3)

2017-01-23 Thread Jongyoul Lee
+1 (binding)

On Tue, Jan 24, 2017 at 2:43 PM, Mina Lee <mina...@apache.org> wrote:

> I propose the following RC to be released for the Apache Zeppelin 0.7.0
> release.
>
> The commit id is 48ad70e8c62975bdb00779bed5919eaca98c5b5d which is
> corresponds to the tag v0.7.0-rc3:
> *https://git-wip-us.apache.org/repos/asf?p=zeppelin.git;a=commit;h=48ad70e8c62975bdb00779bed5919eaca98c5b5d
> <https://git-wip-us.apache.org/repos/asf?p=zeppelin.git;a=commit;h=48ad70e8c62975bdb00779bed5919eaca98c5b5d>*
>
> The release archives (tgz), signature, and checksums are here
> https://dist.apache.org/repos/dist/dev/zeppelin/zeppelin-0.7.0-rc3/
>
> The release candidate consists of the following source distribution
> archive
> zeppelin-0.7.0.tgz
>
> In addition, the following supplementary binary distributions are provided
> for user convenience at the same location
> zeppelin-0.7.0-bin-all.tgz
> zeppelin-0.7.0-bin-netinst.tgz
>
> The maven artifacts are here
> https://repository.apache.org/content/repositories/orgapachezeppelin-1024
>
> You can find the KEYS file here:
> https://dist.apache.org/repos/dist/release/zeppelin/KEYS
>
> Release notes available at
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> version=12336544&projectId=12316221
>
> Vote will be open for next 72 hours (close at 22:00 26/Jan PST).
>
> [ ] +1 approve
> [ ] 0 no opinion
> [ ] -1 disapprove (and reason why)
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Remove PostgresqlInterpreter

2017-01-23 Thread Jongyoul Lee
I created https://issues.apache.org/jira/browse/ZEPPELIN-2003

On Mon, Jan 23, 2017 at 10:56 PM, Jongyoul Lee <jongy...@gmail.com> wrote:

> Thanks for replying it. I'll make a PR for doing it.
>
> On Mon, Jan 23, 2017 at 4:09 PM, Prabhjyot Singh <
> prabhjyotsi...@apache.org> wrote:
>
>> +1.
>>
>> Yes, agreed I too think its overhead, and all the features can be
>> achieved via JDBCInterpreter.
>>
>> On 23 January 2017 at 10:45, Jongyoul Lee <jongy...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> We currently have two kinds of interpreters for connecting to databases:
>>> PostgresqlInterpreter and JDBCInterpreter. Historically, JDBCInterpreter
>>> is based on PostgresqlInterpreter and has the same functionality. I
>>> suggest removing PostgresqlInterpreter from Zeppelin's code base because
>>> the two have the same functions and JDBCInterpreter is now the de facto
>>> standard. All new JDBC-based contributions are provided and added onto
>>> JDBCInterpreter. I suggested the same thing in the past, but it wasn't
>>> accepted because PostgresqlInterpreter was better than JDBCInterpreter
>>> at that time. Now, however, JDBCInterpreter includes all functions of
>>> PostgresqlInterpreter and provides better ones.
>>>
>>> How do you guys think of it? If it's accepted, 0.8.0 won't have
>>> PostgresqlInterpreter anymore.
>>>
>>> Regards,
>>> Jongyoul Lee
>>>
>>> --
>>> 이종열, Jongyoul Lee, 李宗烈
>>> http://madeng.net
>>>
>>
>>
>>
>> --
>>
>> Warm Regards,
>>
>> Prabhjyot Singh
>>
>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Remove PostgresqlInterpreter

2017-01-23 Thread Jongyoul Lee
Thanks for replying it. I'll make a PR for doing it.

On Mon, Jan 23, 2017 at 4:09 PM, Prabhjyot Singh <prabhjyotsi...@apache.org>
wrote:

> +1.
>
> Yes, agreed I too think its overhead, and all the features can be achieved
> via JDBCInterpreter.
>
> On 23 January 2017 at 10:45, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> Hi all,
>>
>> We currently have two kinds of interpreters for connecting to databases:
>> PostgresqlInterpreter and JDBCInterpreter. Historically, JDBCInterpreter
>> is based on PostgresqlInterpreter and has the same functionality. I
>> suggest removing PostgresqlInterpreter from Zeppelin's code base because
>> the two have the same functions and JDBCInterpreter is now the de facto
>> standard. All new JDBC-based contributions are provided and added onto
>> JDBCInterpreter. I suggested the same thing in the past, but it wasn't
>> accepted because PostgresqlInterpreter was better than JDBCInterpreter
>> at that time. Now, however, JDBCInterpreter includes all functions of
>> PostgresqlInterpreter and provides better ones.
>>
>> How do you guys think of it? If it's accepted, 0.8.0 won't have
>> PostgresqlInterpreter anymore.
>>
>> Regards,
>> Jongyoul Lee
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
>
>
> --
>
> Warm Regards,
>
> Prabhjyot Singh
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


[DISCUSS] Remove PostgresqlInterpreter

2017-01-22 Thread Jongyoul Lee
Hi all,

We currently have two kinds of interpreters for connecting to databases:
PostgresqlInterpreter and JDBCInterpreter. Historically, JDBCInterpreter is
based on PostgresqlInterpreter and has the same functionality. I suggest
removing PostgresqlInterpreter from Zeppelin's code base because the two
have the same functions and JDBCInterpreter is now the de facto standard.
All new JDBC-based contributions are provided and added onto
JDBCInterpreter. I suggested the same thing in the past, but it wasn't
accepted because PostgresqlInterpreter was better than JDBCInterpreter at
that time. Now, however, JDBCInterpreter includes all functions of
PostgresqlInterpreter and provides better ones.

How do you guys think of it? If it's accepted, 0.8.0 won't have
PostgresqlInterpreter anymore.

Regards,
Jongyoul Lee

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [Discuss] Move some interpreters out of zeppelin project

2017-01-20 Thread Jongyoul Lee
Hi Jeff,

Thanks for starting this issue.

It increases the flexibility of improving interpreters themselves, but it
can also decrease the stability of interpreters. I'm worried about this
side effect. As you mentioned, it's hard for me to review a new interpreter
that I haven't used, but that couldn't be a reason to split some code out
of Zeppelin. We have to make more people committers to review the various
interpreters. Thus I don't want some interpreters moved out of Zeppelin.

But I totally agree with #3 and #4. If we deploy a minimum package of
Zeppelin, we have to provide a GUI for install/uninstall. If that's done,
the bin-all package is meaningless and the bin-min package is enough.

On Fri, Jan 20, 2017 at 7:14 PM, Jeff Zhang <zjf...@gmail.com> wrote:

> As we talk in another thread [1] about moving some interpreters out of
> zeppelin project. I open this thread to discuss it in more details. I'd
> like to raise 4 questions for this.
>
> 1. Do we need to do this
> 2. If the answer is yes, which interpreters should be moved out
> 3. How do we integrate these interpreters into zeppelin
> 4. How does zeppelin work with these third party interpreters
>
> I will first give my inputs on this.
>
> *1. Do we need to do this ?*
> Personally, I strongly +1 on this. Several reasons:
>
>- Keep the zeppelin project much smaller
>    - Each interpreter's improvements won't be blocked by Zeppelin's
>    release cycle. Interpreters can have their own release cycles as long
>    as zeppelin-interpreter doesn't break compatibility.
>    - Zeppelin developers don't have knowledge of all the interpreters.
>    Sometimes it is very difficult for Zeppelin committers to review a new
>    interpreter that they don't know.
>
>
> 2. Which interpreters should be moved out ?
> We can discuss it  in another thread about the min package.
>
> 3. How do we integrate these interpreters into zeppelin
> Currently, user can install third party interpreter by running script (
> http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/manual/
> interpreterinstallation.html#3rd-party-interpreters), but this is not
> convenient, and it is hard for every user to be aware of this feature.
> So I think we should do that in zeppelin UI. We should allow user to
> install/uninstall/upgrade/downgrade third party interpreters in the
> interpreter page.
>
> 4. How does zeppelin work with these third party interpreters
> Besides the interface zeppelin expose to the third party interpreter to be
> install/uninstall/upgrade/downgrade, it is third party interpreter's own
> responsibility to develop and make new release.
>
> Please help comment on these 4 questions and feel free to add any things
> that I miss.
>
>
> [1] https://lists.apache.org/thread.html/69f606409790d7ba11422e8c6df941
> a75c5dfae0aca63eccf2f840bf@%3Cusers.zeppelin.apache.org%3E
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Release package size

2017-01-18 Thread Jongyoul Lee
> which only includes spark interpreter from 0.7.0 release.
>> One concern is that users need one more step to install the interpreters
>> they use,
>> but I believe it can be done easily with single line of command [1].
>>
>> FYI, attaching the link of similar discussion [2] we had last June in
>> mailing list.
>>
>> Regards,
>> Mina
>>
>> [1] http://zeppelin.apache.org/docs/0.6.2/manual/
>> interpreterinstallation.html#install-specific-interpreters
>> <http://zeppelin.apache.org/docs/0.6.2/manual/interpreterinstallation.html>
>> [2] https://lists.apache.org/thread.html/4b54c034cf8d691655156e0cb64724
>> 3180c57a6829d97aa3c085b63c@%3Cusers.zeppelin.apache.org%3E
>>
>> --
>> Taejun Kim
>>
>> Data Mining Lab.
>> School of Electrical and Computer Engineering
>> University of Seoul
>>
>>
>> --
>> Taejun Kim
>>
>> Data Mining Lab.
>> School of Electrical and Computer Engineering
>> University of Seoul
>>
>
>
> ___
> *Eric Pugh **| *Founder & CEO | OpenSource Connections, LLC | 434.466.1467
> | http://www.opensourceconnections.com | My Free/Busy
> <http://tinyurl.com/eric-cal>
> Co-Author: Apache Solr Enterprise Search Server, 3rd Ed
> <https://www.packtpub.com/big-data-and-business-intelligence/apache-solr-enterprise-search-server-third-edition-raw>
> This e-mail and all contents, including attachments, is considered to be
> Company Confidential unless explicitly stated otherwise, regardless
> of whether attachments are marked as such.
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Vizualize bash result

2017-01-14 Thread Jongyoul Lee
Basically, you can draw anything as long as your output starts with '%table'
and the data format is TSV. For example, you can echo '%table ...' to draw a
table in sh.
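As a sketch (the sample rows below stand in for real `hdfs dfs -du -s '/*'`
output; the paths are hypothetical):

```shell
# Inside a %sh paragraph: print the %table magic word plus a tab-separated
# header, then tab-separated rows; Zeppelin renders the output as a table
# (and from there a chart).
printf '%%table path\tsize\n'
printf '/data\t1024\n'
printf '/logs\t2048\n'
```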

Hope this helps,
Jongyoul

On Sat, Jan 14, 2017 at 9:02 PM, Markovich <amriv...@gmail.com> wrote:

> Hi,
> Is it possible to visualize bash result in Zeppelin?
>
> For example I'd like to see a chart from this command: hdfs dfs -du -s -h
> '/*'
>
> Also I'm interested in passing bash output to variable.
>
> Regards,
> Andrey
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Using CDH dynamic resource pools with Zeppelin

2017-01-14 Thread Jongyoul Lee
The PMC raised the release issue for 0.7.0 and the community is discussing it.
AFAIK, one of the committers will make RC1 within the next week.

On Fri, Jan 13, 2017 at 4:33 PM, Yaar Reuveni <ya...@liveperson.com> wrote:

> Is it known when v0.7.0 is expected to be released?
>
> On Wed, Jan 11, 2017 at 4:09 PM, Paul Brenner <pbren...@placeiq.com>
> wrote:
>
>> My understanding is that this kind of user specific control isn’t coming
>> until v0.70. Currently when we run zeppelin all tasks are submitted by the
>> user that started the zeppelin process (so we start zeppelin from the yarn
>> account and everything is submitted as yarn). At least for spark there is a
>> user queue parameter that can be set in the interpreter which ensures that
>> users are only getting the resources they are allowed. We just create a
>> different interpreter for each user and set that parameter. It isn’t
>> perfect, and might not even be available for your JDBC, but I thought the
>> detail might help.
>>
>> Paul Brenner
>> DATA SCIENTIST
>> *(217) 390-3033 *
>>
>> On Wed, Jan 11, 2017 at 8:04 AM Yaar Reuveni > <yaar+reuveni+%3cya...@liveperson.com%3E>> wrote:
>>
>>> Hey,
>>> Since no answer yet, I'll try a simpler question.
>>> I have Zeppelin defined with a *JDBC* interpreter configured with
>>> *Impala* that works against a CDH5.5 Hadoop cluster.
>>> When I run queries from Zeppelin, these queries run without a user in
>>> Hadoop, also no user seen in the Cloudera manager.
>>> How can I configure it so there is a user defined on the connection and
>>> on the running queries?
>>>
>>> Thanks,
>>> Yaar
>>>
>>> On Tue, Dec 20, 2016 at 10:25 AM, Yaar Reuveni <ya...@liveperson.com>
>>> wrote:
>>>
>>>> Hey,
>>>> We're using a cloudera distribution hadoop.
>>>> We want to know how can we configure Zeppelin user authentication and
>>>> link between users and resource pools
>>>> <https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cm_mc_resource_pools.html>
>>>> in our YARN Hadoop cluster
>>>>
>>>> Thanks,
>>>> Yaar
>>>>
>>>> --
>>>>
>>>>
>>>>
>> This message may contain confidential and/or privileged information.
>> If you are not the addressee or authorized to receive this on behalf of
>> the addressee you must not use, copy, disclose or take action based on this
>> message or any information herein.
>> If you have received this message in error, please advise the sender
>> immediately by reply email and delete this message. Thank you.
>>
>>
>
>
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Passing variables from %pyspark to %sh

2017-01-12 Thread Jongyoul Lee
Yes, many users have suggested that feature to share results between paragraphs
and between different interpreters. I think this will be one of the major
features in the next release.
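Until then, one workaround (a sketch, not a built-in feature; the file path is
hypothetical and assumes both interpreters run on the same host) is to pass the
value through a temp file:

```shell
# %sh paragraph sketch: read a value that a previous %pyspark paragraph wrote,
# e.g. with open('/tmp/zeppelin_localfile', 'w').write('/user/data/in.csv').
printf '/user/data/in.csv' > /tmp/zeppelin_localfile  # stand-in for the %pyspark side
localfile="$(cat /tmp/zeppelin_localfile)"
echo "hadoop fs -put $localfile /user/data/"          # the command you would then run
```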

On Thu, Jan 12, 2017 at 10:30 PM, t p <tauis2...@gmail.com> wrote:

> Is it possible to have similar support to exchange  checkbox/dropdown
> variables and can variables be exchanged with other interpreters like PSQL
> (e.g. variable set by spark/pyspark and accessible in another para which is
> running PSQL interpreter).
>
> I’m interested in doing this and I’d like to know if there is a way to
> accomplish this:
> https://lists.apache.org/thread.html/a1b3530e5a20f983acd70f8fca029f
> 90b6bfe8d0d999597342447e6f@%3Cusers.zeppelin.apache.org%3E
>
>
> On Jan 12, 2017, at 2:16 AM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
> There's no way to communicate between spark and sh intepreter. It need to
> implement it but it doesn't yet. But I agree that it would be helpful for
> some cases. Can you create issue?
>
> On Thu, Jan 12, 2017 at 3:32 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
> wrote:
>
>> It's possible to exchange variables between Scala and Spark
>> through z.put and z.get.
>>
>> How to pass a variable to %sh?
>>
>> In Jupyter it's possible to do for example as
>>
>>>   ! hadoop fs -put {localfile} {hdfsfile}
>>
>>
>> where localfile and and hdfsfile are Python variables.
>>
>> Can't find any references for something similar in Shell Interpreter
>> https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/interpreter/shell.html
>>
>> In many notebooks we have to pass small variabels
>> from Zeppelin notes to external scripts as parameters.
>>
>> It would be awesome to have something like
>>
>> %sh
>>> /path/to/script --param8={var1} --param9={var2}
>>
>>
>> where var1 and var2 would be implied to be fetched as z.get('var1')
>> and z.get('var2') respectively.
>>
>> Other thoughts?
>>
>>
>> Thank you,
>> Ruslan Dautkhanov
>>
>>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Passing variables from %pyspark to %sh

2017-01-11 Thread Jongyoul Lee
There's no way to communicate between the Spark and sh interpreters. It needs
to be implemented, but it isn't yet. But I agree that it would be helpful for
some cases. Can you create an issue?

On Thu, Jan 12, 2017 at 3:32 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
wrote:

> It's possible to exchange variables between Scala and Spark
> through z.put and z.get.
>
> How to pass a variable to %sh?
>
> In Jupyter it's possible to do for example as
>
>>   ! hadoop fs -put {localfile} {hdfsfile}
>
>
> where localfile and and hdfsfile are Python variables.
>
> Can't find any references for something similar in Shell Interpreter
> https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/interpreter/shell.html
>
> In many notebooks we have to pass small variabels
> from Zeppelin notes to external scripts as parameters.
>
> It would be awesome to have something like
>
> %sh
>> /path/to/script --param8={var1} --param9={var2}
>
>
> where var1 and var2 would be implied to be fetched as z.get('var1')
> and z.get('var2') respectively.
>
> Other thoughts?
>
>
> Thank you,
> Ruslan Dautkhanov
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Mailing List

2017-01-11 Thread Jongyoul Lee
You can create an issue at https://issues.apache.org/jira/browse/ZEPPELIN

On Thu, Jan 12, 2017 at 2:54 AM, Meeraj Kunnumpurath <
mee...@servicesymphony.com> wrote:

> Ok, thank you. Is there a URL for the JIRA?
>
> Sent from my iPhone
>
> On Jan 11, 2017, at 8:17 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
> Hi Meeraj,
>
> 1. Zeppelin doesn't support both of authenticated and anonymous
> simultaneously.2
> 2. AFAIK, that's a bug. Can you create a new ticket for it and describe
> how to reproduce it? It will be appreciated.
>
> Regards,
> Jongyoul
>
> On Wed, Jan 11, 2017 at 5:09 PM, Meeraj Kunnumpurath <
> mee...@servicesymphony.com> wrote:
>
>> Thanks Felix, much appreciated. My question is on authentication and
>> access control.
>>
>> 1. Authenticated users can create read-only notebooks
>> 2. Anonymous users can view the notebooks and nothing else
>>
>> I have tried a few things tinkering with shiro.ini and the sites XML.
>>
>> 1. If I enable anonymous access, I can't seem to get the UI link to login
>> 2. Also the anonymous users seem to be able to manage interpreters,
>> credentials etc, despite restricting role-based access to the corresponding
>> REST URLs.
>>
>> I can share the shiro.ini contents, if that helps.
>>
>> Many thanks
>> Meeraj
>>
>> On Wed, Jan 11, 2017 at 11:56 AM, Felix Cheung <felixcheun...@hotmail.com
>> > wrote:
>>
>>> Right, there isn't a lot of traffic on the user list.
>>> Perhaps you can resend your question?
>>>
>>>
>>> --
>>> *From:* Meeraj Kunnumpurath <mee...@servicesymphony.com>
>>> *Sent:* Tuesday, January 10, 2017 12:05:46 PM
>>> *To:* users@zeppelin.apache.org
>>> *Subject:* Re: Mailing List
>>>
>>> Thanks Felix. I saw a couple on the interpreter REST API, that was it.
>>>
>>> Sent from my iPhone
>>>
>>> On Jan 10, 2017, at 10:06 PM, Felix Cheung <felixcheun...@hotmail.com>
>>> wrote:
>>>
>>> There was a few email yesterday - do you not get them?
>>>
>>>
>>> --
>>> *From:* Meeraj Kunnumpurath <mee...@servicesymphony.com>
>>> *Sent:* Tuesday, January 10, 2017 9:09:52 AM
>>> *To:* users@zeppelin.apache.org
>>> *Subject:* Mailing List
>>>
>>> Hello
>>>
>>> Is this mailing list active? I see hardly any traffic here, just seen
>>> couple of mails over the past three days. I had asked a few questions for
>>> which there has been no response.
>>>
>>> Regards
>>>
>>> --
>>> *Meeraj Kunnumpurath*
>>>
>>>
>>> *Director and Executive Principal Service Symphony Ltd 00 44 7702 693597*
>>>
>>> *00 971 50 409 0169 mee...@servicesymphony.com
>>> <mee...@servicesymphony.com>*
>>>
>>>
>>
>>
>> --
>> *Meeraj Kunnumpurath*
>>
>>
>> *Director and Executive PrincipalService Symphony Ltd00 44 7702 693597*
>>
>> *00 971 50 409 0169mee...@servicesymphony.com
>> <mee...@servicesymphony.com>*
>>
>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Mailing List

2017-01-11 Thread Jongyoul Lee
Hi Meeraj,

1. Zeppelin doesn't support both authenticated and anonymous access
simultaneously.
2. AFAIK, that's a bug. Can you create a new ticket for it and describe how
to reproduce it? It would be appreciated.

Regards,
Jongyoul

On Wed, Jan 11, 2017 at 5:09 PM, Meeraj Kunnumpurath <
mee...@servicesymphony.com> wrote:

> Thanks Felix, much appreciated. My question is on authentication and
> access control.
>
> 1. Authenticated users can create read-only notebooks
> 2. Anonymous users can view the notebooks and nothing else
>
> I have tried a few things tinkering with shiro.ini and the sites XML.
>
> 1. If I enable anonymous access, I can't seem to get the UI link to login
> 2. Also the anonymous users seem to be able to manage interpreters,
> credentials etc, despite restricting role-based access to the corresponding
> REST URLs.
>
> I can share the shiro.ini contents, if that helps.
>
> Many thanks
> Meeraj
>
> On Wed, Jan 11, 2017 at 11:56 AM, Felix Cheung <felixcheun...@hotmail.com>
> wrote:
>
>> Right, there isn't a lot of traffic on the user list.
>> Perhaps you can resend your question?
>>
>>
>> --
>> *From:* Meeraj Kunnumpurath <mee...@servicesymphony.com>
>> *Sent:* Tuesday, January 10, 2017 12:05:46 PM
>> *To:* users@zeppelin.apache.org
>> *Subject:* Re: Mailing List
>>
>> Thanks Felix. I saw a couple on the interpreter REST API, that was it.
>>
>> Sent from my iPhone
>>
>> On Jan 10, 2017, at 10:06 PM, Felix Cheung <felixcheun...@hotmail.com>
>> wrote:
>>
>> There was a few email yesterday - do you not get them?
>>
>>
>> --
>> *From:* Meeraj Kunnumpurath <mee...@servicesymphony.com>
>> *Sent:* Tuesday, January 10, 2017 9:09:52 AM
>> *To:* users@zeppelin.apache.org
>> *Subject:* Mailing List
>>
>> Hello
>>
>> Is this mailing list active? I see hardly any traffic here, just seen
>> couple of mails over the past three days. I had asked a few questions for
>> which there has been no response.
>>
>> Regards
>>
>> --
>> *Meeraj Kunnumpurath*
>>
>>
>> *Director and Executive Principal Service Symphony Ltd 00 44 7702 693597*
>>
>> *00 971 50 409 0169 mee...@servicesymphony.com
>> <mee...@servicesymphony.com>*
>>
>>
>
>
> --
> *Meeraj Kunnumpurath*
>
>
> *Director and Executive PrincipalService Symphony Ltd00 44 7702 693597*
>
> *00 971 50 409 0169mee...@servicesymphony.com <mee...@servicesymphony.com>*
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Read only mode

2017-01-04 Thread Jongyoul Lee
Hi Nabajyoti,

0.6.0 supports a permission feature for notes. If you click the lock icon in
the top-right corner, you can see the menu.

Regards,
Jongyoul

On Wed, Dec 28, 2016 at 9:23 PM, Nabajyoti Dash <nabajyoti.d...@rakuten.com>
wrote:

> Hi,
> I want to make my zeppeline notebook read-only so that the end user can't
> modify the queries.
> I am using zeppelin 0.6.0. Is this feature available in that version?If yes
> what to do?
>
> Thanks,
> Nabajyoti
>
>
>
> --
> View this message in context: http://apache-zeppelin-users-
> incubating-mailing-list.75479.x6.nabble.com/Read-only-mode-tp4794.html
> Sent from the Apache Zeppelin Users (incubating) mailing list mailing list
> archive at Nabble.com.
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: spark streaming with Kafka

2016-11-03 Thread Jongyoul Lee
Can you share your script? In my understanding, Zeppelin already creates a
SparkContext when it starts, so you don't need to (and must not) make a new
sc yourself.

Can you please check it?
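If the issue is the Kafka dependency not reaching the executors, one sketch
(the artifact/version matches the 1.6.1 assembly jar mentioned in this thread;
adjust for your versions) is to pass --packages through conf/zeppelin-env.sh:

```shell
# conf/zeppelin-env.sh — a sketch, not verified for your setup.
# SPARK_SUBMIT_OPTIONS is forwarded to spark-submit, so --packages resolves
# the Kafka streaming jar and ships it to the executors.
export SPARK_SUBMIT_OPTIONS="--packages org.apache.spark:spark-streaming-kafka-assembly_2.10:1.6.1"
```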

On Wed, Nov 2, 2016 at 5:32 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> This is a good question.
>
> Normally I create a streaming app (in Scala) using mvn or sbt with a Uber
> jar file and run that with dependencies. Tried to run the source code in
> Zeppelin after adding /home/hduser/jars/spark-stream
> ing-kafka-assembly_2.10-1.6.1.jar to dependencies but it did not work.
>
> The problem is that you already a spark session running as seen below in
> zeppelin's spark log
>
>  WARN [2016-11-02 08:23:26,559] ({pool-2-thread-10}
> Logging.scala[logWarning]:66) - Another SparkContext is being constructed
> (or threw an exception in its constructor).  This may indicate an error,
> since only one SparkContext may be running in this JVM (see SPARK-2243).
> The other SparkContext was created at:
>
> So I am not Zeppelin what can do this through Zeppelin
>
> HTH
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 2 November 2016 at 05:33, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> Zeppelin currently propagates a jar including `SparkInterprer`. Thus if
>> you add some dependencies into interpreter tab of Spark, that jar doesn't
>> pass to executors so that error may occurs. How about using --packages
>> option?
>>
>> On Mon, Oct 24, 2016 at 10:32 PM, herman...@teeupdata.com <
>> herman...@teeupdata.com> wrote:
>>
>>> yes, yarn-client model.
>>>
>>> In situation of spark streaming, does zeppelin support graphs that are
>>> automatically update themselves, like a self-updated dashboard on top of
>>> streaming based spark tables?
>>>
>>> Thanks
>>> Herman.
>>>
>>>
>>> On Oct 21, 2016, at 23:42, Jongyoul Lee <jongy...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> Do you use yarn-client mode of Spark?
>>>
>>> On Friday, 21 October 2016, herman...@teeupdata.com <
>>> herman...@teeupdata.com> wrote:
>>>
>>>> Hi Everyone,
>>>>
>>>> Does zeppelin support spark streaming with kafka? I am using zeppelin
>>>> 0.6.1 with spark 2.0 and kafka 0.10.0.0.
>>>>
>>>> I got error when import org.apache.spark.streaming.kafka.KafkaUtils
>>>> :36: error: object kafka is not a member of package
>>>> org.apache.spark.streaming
>>>> import org.apache.spark.streaming.kafka.KafkaUtils
>>>>
>>>> I already added the spark streaming kafka jar to the dependencies of
>>>> spark interpreter.
>>>>
>>>> If it is supported, is there a tutorial/sample notebook?
>>>>
>>>> Thanks
>>>> Herman.
>>>>
>>>>
>>>>
>>>>
>>>
>>> --
>>> 이종열, Jongyoul Lee, 李宗烈
>>> http://madeng.net
>>>
>>>
>>>
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: spark streaming with Kafka

2016-10-21 Thread Jongyoul Lee
Hi,

Do you use yarn-client mode of Spark?

On Friday, 21 October 2016, herman...@teeupdata.com <herman...@teeupdata.com>
wrote:

> Hi Everyone,
>
> Does zeppelin support spark streaming with kafka? I am using zeppelin
> 0.6.1 with spark 2.0 and kafka 0.10.0.0.
>
> I got error when import org.apache.spark.streaming.kafka.KafkaUtils
> :36: error: object kafka is not a member of package
> org.apache.spark.streaming
> import org.apache.spark.streaming.kafka.KafkaUtils
>
> I already added the spark streaming kafka jar to the dependencies of spark
> interpreter.
>
> If it is supported, is there a tutorial/sample notebook?
>
> Thanks
> Herman.
>
>
>
>

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: ClassNotFoundException when using %jdbc(hive)

2016-10-14 Thread Jongyoul Lee
Hi,

Did you solve it by installing the JDK?

On Thu, Oct 13, 2016 at 8:44 PM, Xi Shen <davidshe...@gmail.com> wrote:

> Turns out I need JDK, but I only install JRE...
>
> On Wed, Oct 12, 2016 at 3:25 PM Xi Shen <davidshe...@gmail.com> wrote:
>
>> Here's the log http://pastie.org/private/nem2pur2tl3adgbrv4hl2g
>>
>> I did a test. I copied the jar file to ./interpreter/jdbc, the execute
>> the same command. This I got another error related to hadoop-common. I
>> think the configuration in "dependency" section did not take effect at all.
>> But there's no error either.
>>
>> Please help~
>>
>> On Wed, Oct 12, 2016 at 1:02 PM Xi Shen <davidshe...@gmail.com> wrote:
>>
>> Hi,
>>
>> I followed https://zeppelin.apache.org/docs/0.6.1/interpreter/jdbc.html,
>> and added two artifacts:
>>
>>- org.apache.hive:hive-jdbc:2.1.0
>>- org.apache.hadoop:hadoop-common:2.7.3
>>
>> Then I tried this:
>>
>> %jdbc(hive)
>> show tables
>>
>> But I got this error:
>>
>> org.apache.hive.jdbc.HiveDriver
>> class java.lang.ClassNotFoundException
>> java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>> java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>
>>
>> I think the dependency configuration will pull in the hive-jdbc jar file,
>> so I did not put the jar file into the ./interpreter/jdbc directory. Did I
>> do anything wrong? Or the manual needs update?
>> --
>>
>>
>> Thanks,
>> David S.
>>
>> --
>>
>>
>> Thanks,
>> David S.
>>
> --
>
>
> Thanks,
> David S.
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: ClassNotFoundException when using %jdbc(hive)

2016-10-12 Thread Jongyoul Lee
Hi,

Can you attach the JDBC interpreter's log file?

On Wed, Oct 12, 2016 at 2:02 PM, Xi Shen <davidshe...@gmail.com> wrote:

> Hi,
>
> I followed https://zeppelin.apache.org/docs/0.6.1/interpreter/jdbc.html,
> and added two artifacts:
>
>- org.apache.hive:hive-jdbc:2.1.0
>- org.apache.hadoop:hadoop-common:2.7.3
>
> Then I tried this:
>
> %jdbc(hive)
> show tables
>
> But I got this error:
>
> org.apache.hive.jdbc.HiveDriver
> class java.lang.ClassNotFoundException
> java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>
>
> I think the dependency configuration will pull in the hive-jdbc jar file,
> so I did not put the jar file into the ./interpreter/jdbc directory. Did I
> do anything wrong? Or the manual needs update?
> --
>
>
> Thanks,
> David S.
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Change Zeppelin directory interpreters location

2016-10-09 Thread Jongyoul Lee
Let me check it.

On Mon, Oct 10, 2016 at 8:42 AM, Maximilien Belinga <
maximilien.beli...@wouri.co> wrote:

> Hi,
>
> I'm actually working to change the location of Zeppelin interpreters
> directory. To achieve my goal, I changed the following environment
> variables in the {zeppelin_folder}/conf/zeppelin-env.sh file:
>
> export ZEPPELIN_INTERPRETER_DIR="/my/new_location"
> export ZEPPELIN_INTERPRETER_CONF="/my/new_location/conf/interpreter.json"
>
> When I go through the UI and create a new interpreter, the
> interpreter.json at the new location does not change. The old file is the
> modified one.
>
> I'm on Ubuntu 16.04 with Zeppelin 0.6.1.
>
> How can I figure this out?
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: [DISCUSS] Zeppelin 0.7 Roadmap

2016-10-09 Thread Jongyoul Lee
Thanks for sharing this roadmap.

I'd like to add impersonation for the Spark/JDBC interpreters.

What do you think of it?

Regards,
Jongyoul

On Sat, Oct 8, 2016 at 6:46 AM, moon soo Lee <m...@apache.org> wrote:

> Hi, Apache Zeppelin users and developers,
>
> We're about to start a release process for 0.6.2 and i think it's good
> time to discuss about future release, 0.7.0. There were great discussion
> about roadmap [1] and we updated wiki page [2], but 0.7.0 section on the
> roadmap wiki is empty at the moment. Having a 0.7.0 section on a wiki page,
> i think, doesn't mean neither rejecting other subjects nor 100% guarantee
> of them. However it's more for helping Zeppelin users and developers have
> reasonable expectations for the next release and helps community focus on
> main theme.
>
> Multi-tenancy related feature (interpreter authorization, impersonation,
> per user interpreter, and so on) are the most active subject in the
> community at the moment i think. And we have a new menu, "Job" in the
> master branch is another big change. I think "Enterprise ready" section on
> [2] can be main topic for 0.7 release.
>
> And there're many improvements around generic JDBC interpreter and Python
> support (matplotlib integration and so on). They can be another important
> subject with new Interpreters.
>
> Besides that, I've seen people struggle with front-end performance and we
> can address that on 0.7, that would be great.
> Also i'd like to keep working on pluggability for visualization, which was
> subject from 0.6 release.
> Therefore, i would suggest following draft as a roadmap for 0.7.0
>
> * Enterprise support
>   - Multi user support (ZEPPELIN-1337)
>   - Job management
> * Interpreter
>   - Improve JDBC / Python interpreter
>   - New interpreters
> * Front end performance improvement
> * Pluggable visualization
>
> Regarding timeline, although we're keep making series of 0.6.x release,
> it's already been 3 months since 0.6 release. And i think many items are
> already been addressed on master branch or patches are available. So i
> think we can target near future, like November.
>
> What do you think? And feedback would be appreciated.
>
> Thanks,
> moon
>
> [1] http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/DISCUSS-Update-Roadmap-tp2452.html
> [2] https://cwiki.apache.org/confluence/display/ZEPPELIN/Zeppelin+Roadmap
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: User specific interpreter

2016-10-05 Thread Jongyoul Lee
Hi,

Can you share your idea in more detail? If you want a new interpreter setting
based on an existing interpreter, it's very simple: you can go to the
interpreter tab and create a new one with a different name. Unfortunately,
others can see that new setting and use it. About the multi-user
implementation, there are a lot of requests and we are tracking them in
https://issues.apache.org/jira/browse/ZEPPELIN-1337

Hope this helps,
Jongyoul

On Wed, Oct 5, 2016 at 2:20 PM, Vikash Kumar <vikash.ku...@resilinc.com>
wrote:

> Hi all,
>
> Can we create user specific interpreters? Like I want to
> create phoenix jdbc interpreter only for admin user. I am using branch
> 0.6.2.
>
> And question regarding
>
> 1.   release date for branch 7 so that we can demo for Helium
>
> 2.Multiuser implementation roadmap?
>
>
>
>
>
>
>
> Thanks & Regards,
>
> *Vikash Kumar*
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: ActiveDirectoryGroupRealm.java allows user outside of searchBase to login

2016-09-22 Thread Jongyoul Lee
 config.
> My understanding was shiro should only allow user within that searchBase to
> login, but seems like not the case.  When I trace the code of
> ActiveDirectoryGroupRealm.java, the only place searchBase was used is in
> method getRoleNamesForUser<https://github.com/apache/zeppelin/
> blob/v0.6.0/zeppelin-server/src/main/java/org/apache/zeppelin/server/
> ActiveDirectoryGroupRealm.java#L162> , if the user is not inside
> searchBase, a empty roleNames will be return and without any exception,
> thus the user will be login I guess?
>
> I'm not sure if this is expected behaviour or not. I also tried the v0.6.1
> and seems also have same behaviour. In general I just want to restrict user
> only in certain groups of ActiveDirectory to be able to login. Is that
> possible without rewriting our own Realm?
>
> Thanks,
> Weipu
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Zeppelin Kerberos error

2016-08-31 Thread Jongyoul Lee
I think it's related to https://issues.apache.org/jira/browse/ZEPPELIN-1175,
which removes some classpath entries when Zeppelin launches an interpreter.
Could you please check whether your hive-site.xml is included in your
interpreter process? It looks like a configuration issue because you can see
only the default database. If it isn't there, you should copy the XML into
interpreter/spark/dep/
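A quick dry-run check, as a sketch (ZEPPELIN_HOME and HIVE_CONF_DIR below are
assumptions for a typical install, not your actual paths):

```shell
# Report whether hive-site.xml is where the Spark interpreter picks it up,
# and print the copy command if it is missing. Paths are assumptions.
ZEPPELIN_HOME="${ZEPPELIN_HOME:-/opt/zeppelin}"
HIVE_CONF_DIR="${HIVE_CONF_DIR:-/etc/hive/conf}"
dep_dir="$ZEPPELIN_HOME/interpreter/spark/dep"
if [ -f "$dep_dir/hive-site.xml" ]; then
  echo "hive-site.xml found in $dep_dir"
else
  echo "missing; run: cp $HIVE_CONF_DIR/hive-site.xml $dep_dir/"
fi
```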

Regards,
JL

On Wed, Aug 31, 2016 at 9:52 PM, Pradeep Reddy <pradeepreddy.a...@gmail.com>
wrote:

> Hi Jongyoul- I followed the exact same steps for compiling and setting up
> the new build from source as 0.5.6 (only difference is, I acquired the
> source for latest build using "git clone")
>
> hive-site.xml was copied to conf directory. But, the spark interpreter is
> not talking to the hive metastore. Both the 0.5.6 & the latest builds are
> running in the same machine. In 0.5.6 when i run the below command, I see
> 116 databases listed, as per my expectations and I'm able to run my
> notebooks built on those databases.
>
> [image: Inline image 1]
>
> Thanks,
> Pradeep
>
>
> On Wed, Aug 31, 2016 at 2:52 AM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> Hello,
>>
>> Do you copy your hive-site.xml in a proper position?
>>
>> On Wed, Aug 31, 2016 at 3:52 PM, Pradeep Reddy <
>> pradeepreddy.a...@gmail.com> wrote:
>>
>>> nothing obvious. I will stick to 0.5.6 build, until the latest builds
>>> stabilize.
>>>
>>> On Wed, Aug 31, 2016 at 1:39 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>>>
>>>> Then I guess maybe you are connecting to different database. Why not
>>>> using  'z.show(sql("databases"))' to display the databases ? Then you
>>>> will get a hint what's going on.
>>>>
>>>> On Wed, Aug 31, 2016 at 2:36 PM, Pradeep Reddy <
>>>> pradeepreddy.a...@gmail.com> wrote:
>>>>
>>>>> Yes...I didn't wish to show the names of the databases that we have in
>>>>> our data lake on that screen shot. so thats why I chose to display the
>>>>> count. The latest zeppelin build just shows 1 count which is "default"
>>>>> database.
>>>>>
>>>>> Thanks,
>>>>> Pradeep
>>>>>
>>>>> On Wed, Aug 31, 2016 at 1:33 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>>>>>
>>>>>> 116 is the databases count number. Do you expect a list of database ?
>>>>>> then you need to use 'z.show(sql("databases"))'
>>>>>>
>>>>>> On Wed, Aug 31, 2016 at 2:26 PM, Pradeep Reddy <
>>>>>> pradeepreddy.a...@gmail.com> wrote:
>>>>>>
>>>>>>> Here it is Jeff
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Wed, Aug 31, 2016 at 1:24 AM, Jeff Zhang <zjf...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Pradeep,
>>>>>>>>
>>>>>>>> I don't see the databases on your screenshot (second one for
>>>>>>>> 0.5.6). I think the output is correct.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Aug 31, 2016 at 12:55 PM, Pradeep Reddy <
>>>>>>>> pradeepreddy.a...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Jeff- I was able to make Kerberos work in 0.5.6 zeppelin build.
>>>>>>>>> It seems like Kerberos not working & spark not able to talk to the 
>>>>>>>>> shared
>>>>>>>>> hive meta store are defects in the current build.
>>>>>>>>>
>>>>>>>>> On Tue, Aug 30, 2016 at 11:09 PM, Pradeep Reddy <
>>>>>>>>> pradeepreddy.a...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Jeff-
>>>>>>>>>>
>>>>>>>>>> I switched to local mode now and I'm able to summon the implicit
>>>>>>>>>> objects like sc, sqlContext etc., but it doesn't show my databases &
>>>>>>>>>> tables; it just shows 1 database, "default".
>>>>>>>>>>
>>>>>>>>>> Zeppelin Latest Build
>>>>>>>>>>
>>>>>>>>>>

Re: Zeppelin Kerberos error

2016-08-31 Thread Jongyoul Lee
"true",
>>>>>>>>>>> "zeppelin.spark.sql.stacktrace": "true",
>>>>>>>>>>> "zeppelin.spark.useHiveContext": "true",
>>>>>>>>>>> "zeppelin.interpreter.localRepo":
>>>>>>>>>>> "/home/pradeep.x.alla/zeppelin/local-repo/2BUTFVN89",
>>>>>>>>>>> "zeppelin.spark.concurrentSQL": "false",
>>>>>>>>>>> "args": "",
>>>>>>>>>>> "zeppelin.pyspark.python": "python",
>>>>>>>>>>> "spark.yarn.keytab": "/home/pradeep.x.alla/pradeep.
>>>>>>>>>>> x.alla.keytab",
>>>>>>>>>>> "spark.yarn.principal": "pradeep.x.alla",
>>>>>>>>>>> "zeppelin.dep.additionalRemoteRepository":
>>>>>>>>>>> "spark-packages,http://dl.bintray.com/spark-packages/maven,false
>>>>>>>>>>> ;"
>>>>>>>>>>>   },
>>>>>>>>>>>   "status": "READY",
>>>>>>>>>>>   "interpreterGroup": [
>>>>>>>>>>> {
>>>>>>>>>>>   "name": "spark",
>>>>>>>>>>>   "class": "org.apache.zeppelin.spark.SparkInterpreter",
>>>>>>>>>>>   "defaultInterpreter": true
>>>>>>>>>>> },
>>>>>>>>>>> {
>>>>>>>>>>>   "name": "sql",
>>>>>>>>>>>   "class": "org.apache.zeppelin.spark.Spa
>>>>>>>>>>> rkSqlInterpreter",
>>>>>>>>>>>   "defaultInterpreter": false
>>>>>>>>>>> },
>>>>>>>>>>> {
>>>>>>>>>>>   "name": "dep",
>>>>>>>>>>>   "class": "org.apache.zeppelin.spark.DepInterpreter",
>>>>>>>>>>>   "defaultInterpreter": false
>>>>>>>>>>> },
>>>>>>>>>>> {
>>>>>>>>>>>   "name": "pyspark",
>>>>>>>>>>>   "class": "org.apache.zeppelin.spark.PyS
>>>>>>>>>>> parkInterpreter",
>>>>>>>>>>>   "defaultInterpreter": false
>>>>>>>>>>> }
>>>>>>>>>>>   ],
>>>>>>>>>>>   "dependencies": [],
>>>>>>>>>>>   "option": {
>>>>>>>>>>> "remote": true,
>>>>>>>>>>> "port": -1,
>>>>>>>>>>> "perNoteSession": false,
>>>>>>>>>>> "perNoteProcess": false,
>>>>>>>>>>> "isExistingProcess": false,
>>>>>>>>>>> "setPermission": false,
>>>>>>>>>>> "users": []
>>>>>>>>>>>   }
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Aug 30, 2016 at 6:52 PM, Jeff Zhang <zjf...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> It looks like a Kerberos configuration issue. Do you mind
>>>>>>>>>>>> sharing your configuration? Or you can first try to run spark-shell
>>>>>>>>>>>> using
>>>>>>>>>>>> spark.yarn.keytab & spark.yarn.principal to verify them.
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Aug 31, 2016 at 6:12 AM, Pradeep Reddy <
>>>>>>>>>>>> pradeepreddy.a...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi- I recently built zeppelin from source and configured
>>>>>>>>>>>>> kerberos authentication. For Kerberos I added "spark.yarn.keytab" 
>>>>>>>>>>>>> &
>>>>>>>>>>>>> "spark.yarn.principal" and also set master to "yarn-client".  But 
>>>>>>>>>>>>> I keep
>>>>>>>>>>>>> getting this error whenever I use spark interpreter in the 
>>>>>>>>>>>>> notebook
>>>>>>>>>>>>>
>>>>>>>>>>>>> 3536728 started by scheduler org.apache.zeppelin.spark.Spar
>>>>>>>>>>>>> kInterpreter335845091
>>>>>>>>>>>>> ERROR [2016-08-30 17:45:37,237] ({pool-2-thread-2}
>>>>>>>>>>>>> Job.java[run]:189) - Job failed
>>>>>>>>>>>>> java.lang.IllegalArgumentException: Invalid rule: L
>>>>>>>>>>>>> RULE:[2:$1@$0](.*@\Q.COM\E$)s/@\Q\E$//L
>>>>>>>>>>>>> RULE:[1:$1@$0](.*@\Q\E$)s/@\Q\E$//L
>>>>>>>>>>>>> RULE:[2:$1@$0](.*@\Q\E$)s/@\Q\E$//L
>>>>>>>>>>>>> DEFAULT
>>>>>>>>>>>>> at org.apache.hadoop.security.aut
>>>>>>>>>>>>> hentication.util.KerberosName.parseRules(KerberosName.java:3
>>>>>>>>>>>>> 21)
>>>>>>>>>>>>> at org.apache.hadoop.security.aut
>>>>>>>>>>>>> hentication.util.KerberosName.setRules(KerberosName.java:386)
>>>>>>>>>>>>> at org.apache.hadoop.security.Had
>>>>>>>>>>>>> oopKerberosName.setConfiguration(HadoopKerberosName.java:75)
>>>>>>>>>>>>> at org.apache.hadoop.security.Use
>>>>>>>>>>>>> rGroupInformation.initialize(UserGroupInformation.java:227)
>>>>>>>>>>>>> at org.apache.hadoop.security.Use
>>>>>>>>>>>>> rGroupInformation.ensureInitialized(UserGroupInformation.jav
>>>>>>>>>>>>> a:214)
>>>>>>>>>>>>> at org.apache.hadoop.security.Use
>>>>>>>>>>>>> rGroupInformation.isAuthenticationMethodEnabled(UserGroupInf
>>>>>>>>>>>>> ormation.java:275)
>>>>>>>>>>>>> at org.apache.hadoop.security.Use
>>>>>>>>>>>>> rGroupInformation.isSecurityEnabled(UserGroupInformation.jav
>>>>>>>>>>>>> a:269)
>>>>>>>>>>>>> at org.apache.hadoop.security.Use
>>>>>>>>>>>>> rGroupInformation.loginUserFromKeytab(UserGroupInformation.j
>>>>>>>>>>>>> ava:820)
>>>>>>>>>>>>> at org.apache.zeppelin.spark.Spar
>>>>>>>>>>>>> kInterpreter.open(SparkInterpreter.java:539)
>>>>>>>>>>>>> at org.apache.zeppelin.interprete
>>>>>>>>>>>>> r.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
>>>>>>>>>>>>> at org.apache.zeppelin.interprete
>>>>>>>>>>>>> r.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>>>>>>>>>>>>> at org.apache.zeppelin.interprete
>>>>>>>>>>>>> r.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteI
>>>>>>>>>>>>> nterpreterServer.java:383)
>>>>>>>>>>>>> at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
>>>>>>>>>>>>> at org.apache.zeppelin.scheduler.
>>>>>>>>>>>>> FIFOScheduler$1.run(FIFOScheduler.java:139)
>>>>>>>>>>>>> at java.util.concurrent.Executors
>>>>>>>>>>>>> $RunnableAdapter.call(Executors.java:511)
>>>>>>>>>>>>> at java.util.concurrent.FutureTas
>>>>>>>>>>>>> k.run(FutureTask.java:266)
>>>>>>>>>>>>> at java.util.concurrent.Scheduled
>>>>>>>>>>>>> ThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledT
>>>>>>>>>>>>> hreadPoolExecutor.java:180)
>>>>>>>>>>>>> at java.util.concurrent.Scheduled
>>>>>>>>>>>>> ThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPo
>>>>>>>>>>>>> olExecutor.java:293)
>>>>>>>>>>>>> at java.util.concurrent.ThreadPoo
>>>>>>>>>>>>> lExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>>>>>>>>>>> at java.util.concurrent.ThreadPoo
>>>>>>>>>>>>> lExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>>>>>>>>>> at java.lang.Thread.run(Thread.java:745)
>>>>>>>>>>>>>  INFO [2016-08-30 17:45:37,247] ({pool-2-thread-2}
>>>>>>>>>>>>> SchedulerFactory.java[jobFinished]:137) - Job
>>>>>>>>>>>>> remoteInterpretJob_1472593536728 finished by scheduler
>>>>>>>>>>>>> org.apache.zeppelin.spark.SparkInterpreter335845091
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Pradeep
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Best Regards
>>>>>>>>>>>>
>>>>>>>>>>>> Jeff Zhang
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Best Regards
>>>>>>>>>
>>>>>>>>> Jeff Zhang
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards
>>>>>>
>>>>>> Jeff Zhang
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards
>>>>
>>>> Jeff Zhang
>>>>
>>>
>>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
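
For reference: the "Invalid rule: L RULE:..." failure above is thrown by Hadoop's KerberosName.parseRules, which expects each mapping rule in hadoop.security.auth_to_local to sit on its own line; the stray "L" characters in the logged value suggest the newlines were mangled when the setting was copied. A minimal sketch of a well-formed value, assuming a placeholder realm EXAMPLE.COM:

```xml
<!-- Sketch of the core-site.xml property; EXAMPLE.COM is a placeholder
     realm, not taken from the thread. Each RULE must be on its own
     line - a literal "L" between rules means the line breaks were lost
     when the value was pasted. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
RULE:[2:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//
RULE:[1:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//
DEFAULT
  </value>
</property>
```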


Re: Is there any way to change the default interpreter in a interpreter group ?

2016-08-24 Thread Jongyoul Lee
Hi Jeff,

First, Zeppelin tries to read that location. If it cannot read it, Zeppelin
falls back to the copy inside the jar, so you don't have to worry about a
conflict.

On Thu, Aug 25, 2016 at 10:22 AM, Jeff Zhang <zjf...@gmail.com> wrote:

> Thanks Jongyoul. But I notice that for now interpreter-setting.json is in the
> interpreter jar, which prevents me from editing it directly. If I put another
> interpreter-setting.json in {ZEPPELIN_HOME}/interpreter/{interpreter-group},
> I'm afraid they will conflict as both of them are in the classpath.
>
> On Thu, Aug 25, 2016 at 9:06 AM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> Hi Jeff,
>>
>> For now, there's no UI for changing the default interpreter in an interpreter
>> group. The only way to do it is editing interpreter-setting.json and
>> placing it at {ZEPPELIN_HOME}/interpreter/{interpreter-group}/interpreter-
>> setting.json.
>>
>> Thanks,
>> Jongyoul
>>
>> On Thu, Aug 25, 2016 at 9:19 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>>
>>> I'd like to change the default interpreter to pyspark in the spark
>>> interpreter group, but it seems the default interpreter is defined in
>>> interpreter-setting.json, which is packaged in the spark interpreter jar, so
>>> I cannot modify it. Is there any other way that I can change the default
>>> interpreter ?
>>>
>>> --
>>> Best Regards
>>>
>>> Jeff Zhang
>>>
>>
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
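
For reference, an edited interpreter-setting.json entry might look like the sketch below. This is an abbreviation, not the actual file: a real interpreter-setting.json carries more fields per entry (properties, editor settings, etc.), and the field values here are assumptions based on the thread. Setting `defaultInterpreter` to true on the pyspark entry makes it the group's default.

```json
[
  {
    "group": "spark",
    "name": "pyspark",
    "className": "org.apache.zeppelin.spark.PySparkInterpreter",
    "defaultInterpreter": true,
    "properties": {}
  },
  {
    "group": "spark",
    "name": "spark",
    "className": "org.apache.zeppelin.spark.SparkInterpreter",
    "defaultInterpreter": false,
    "properties": {}
  }
]
```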


Re: Use embedded Spark for some notebook but use system provided Spark for others

2016-08-08 Thread Jongyoul Lee
Hi Patrick,

You can set SPARK_HOME in your interpreter tab actually. I, however, cannot
test this feature well and I'll submit a PR for setting envs as well as
properties from the interpreter tab. I think you want to use different versions
of Spark with multiple Spark settings. Can you please file a new JIRA issue
for it?

Regards,
Jongyoul

On Mon, Aug 8, 2016 at 5:09 PM, DuyHai Doan <doanduy...@gmail.com> wrote:

> Easier solution: create different instances of the Spark interpreter for each
> use case:
>
> 1) For embedded Spark, just let the master property to local[*]
> 2) For system provided Spark, edit the Spark interpreter settings and
> change the master to some spark://:7077
>
> On Mon, Aug 8, 2016 at 9:52 AM, Patrick Duflot <
> patrick.duf...@iba-group.com> wrote:
>
>> Hello Zeppelin users,
>>
>>
>>
>> I was looking to configure Zeppelin so that it uses embedded Spark for
>> some notebooks but uses system provided Spark for others.
>>
>> However it seems that the SPARK_HOME is a global parameter in
>> zeppelin-env.sh.
>>
>> Is it possible to override this setting at the notebook level?
>>
>>
>>
>> Thanks!
>>
>>
>>
>> Patrick
>>
>>
>>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
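
For the global case discussed above, the SPARK_HOME switch lives in conf/zeppelin-env.sh. A minimal sketch, where both the installation path and the master URL are assumptions:

```shell
# conf/zeppelin-env.sh - sketch; the path and host below are assumptions.
# Leave SPARK_HOME unset to use Zeppelin's embedded Spark; set it to point
# Zeppelin at a system-provided installation instead.
export SPARK_HOME=/opt/spark-1.6.1
export MASTER=spark://spark-master.example.com:7077
```

Because this file is sourced once at startup, the setting is global, which is what motivates the per-interpreter-instance workaround DuyHai describes.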


Re: Zeppelin Installation Help

2016-08-04 Thread Jongyoul Lee
netty.util.concurrent.SingleThreadEventExecutor$2.
> run(SingleThreadEventExecutor.java:111)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedChannelException
>
>
> The error log from my spark master is:
>
> 16/08/04 00:25:15 ERROR actor.OneForOneStrategy: Error while decoding
> incoming Akka PDU of length: 1305
> akka.remote.transport.AkkaProtocolException: Error while decoding
> incoming Akka PDU of length: 1305
> Caused by: akka.remote.transport.PduCodecException: Decoding PDU failed.
> at akka.remote.transport.AkkaPduProtobufCodec$.
> decodePdu(AkkaPduCodec.scala:167)
> at akka.remote.transport.ProtocolStateActor.akka$remote$transport$
> ProtocolStateActor$$decodePdu(AkkaProtocolTransport.scala:513)
> at akka.remote.transport.ProtocolStateActor$$anonfun$4.
> applyOrElse(AkkaProtocolTransport.scala:320)
> at akka.remote.transport.ProtocolStateActor$$anonfun$4.
> applyOrElse(AkkaProtocolTransport.scala:292)
> at scala.runtime.AbstractPartialFunction.apply(
> AbstractPartialFunction.scala:33)
> at akka.actor.FSM$class.processEvent(FSM.scala:595)
> at akka.remote.transport.ProtocolStateActor.processEvent(
> AkkaProtocolTransport.scala:220)
> at akka.actor.FSM$class.akka$actor$FSM$$processMsg(FSM.scala:589)
> at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:583)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> at akka.dispatch.ForkJoinExecutorConfigurator$
> AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(
> ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.
> runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(
> ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(
> ForkJoinWorkerThread.java:107)
> Caused by: com.google.protobuf_spark.InvalidProtocolBufferException:
> Protocol message contained an invalid tag (zero).
> at com.google.protobuf_spark.InvalidProtocolBufferException
> .invalidTag(InvalidProtocolBufferException.java:68)
>     at com.google.protobuf_spark.CodedInputStream.readTag(
> CodedInputStream.java:108)
> at akka.remote.WireFormats$AkkaProtocolMessage$Builder.
> mergeFrom(WireFormats.java:5410)
> at akka.remote.WireFormats$AkkaProtocolMessage$Builder.
> mergeFrom(WireFormats.java:5275)
> at com.google.protobuf_spark.AbstractMessage$Builder.
> mergeFrom(AbstractMessage.java:300)
> at com.google.protobuf_spark.AbstractMessage$Builder.
> mergeFrom(AbstractMessage.java:238)
> at com.google.protobuf_spark.AbstractMessageLite$Builder.
> mergeFrom(AbstractMessageLite.java:162)
> at com.google.protobuf_spark.AbstractMessage$Builder.
> mergeFrom(AbstractMessage.java:716)
> at com.google.protobuf_spark.AbstractMessage$Builder.
> mergeFrom(AbstractMessage.java:238)
> at com.google.protobuf_spark.AbstractMessageLite$Builder.
> mergeFrom(AbstractMessageLite.java:153)
> at com.google.protobuf_spark.AbstractMessage$Builder.
> mergeFrom(AbstractMessage.java:709)
> at akka.remote.WireFormats$AkkaProtocolMessage.parseFrom(
> WireFormats.java:5209)
> at akka.remote.transport.AkkaPduProtobufCodec$.
> decodePdu(AkkaPduCodec.scala:168)
> ... 17 more
>
> Regards,
>
> Brian
>
>
>
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Pass Credentials through JDBC

2016-07-28 Thread Jongyoul Lee
You can find more information on
https://issues.apache.org/jira/browse/ZEPPELIN-1146

Hope this helps,
Jongyoul

On Fri, Jul 29, 2016 at 12:08 AM, Benjamin Kim <bbuil...@gmail.com> wrote:

> Hi Jonyoul,
>
> How would I enter credentials with the current version of Zeppelin? Do you
> know of a way to make it work now?
>
> Thanks,
> Ben
>
> On Jul 28, 2016, at 8:06 AM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
> Hi,
>
> In my plan, this is a next step after
> https://issues.apache.org/jira/browse/ZEPPELIN-1210. But for now, there's
> no way to pass your credentials while hiding them. I hope that will be
> included in 0.7.0.
>
> Regards,
> Jongyoul
>
> On Thu, Jul 28, 2016 at 11:22 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>
>> How do I pass my own username and password to JDBC connections such as
>> Phoenix and Hive? Can my credentials be passed from Shiro after
>> logging in? Or do I have to set them at the Interpreter level without
>> sharing them? I wish there was more information on this.
>>
>> Thanks,
>> Ben
>
>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Setting default interpreter at notebook level

2016-07-28 Thread Jongyoul Lee
Hi Abul,

Concerning "defaultInterpreter", it is a feature of the current master and
doesn't work in 0.6.0. Sorry for the wrong information. For now, we don't
have any specific plan for supporting a different default interpreter with
the same interpreter setting. Thus, in your case, the %r tag is the proper
way for now. I also don't think it's the best. I hope Zeppelin will support
this feature, too.

Regards,
Jongyoul

On Thu, Jul 28, 2016 at 10:54 PM, Abul Basar <aba...@einext.com> wrote:

> Hello Jongyoul,
>
> I could not find the file interpreter-setting.json, but I found a file
> conf/interpreter.json. I added a property "default":"true"
> for interpreter "org.apache.zeppelin.spark.SparkRInterpreter". I restarted
> the Zeppelin daemon service. But R did not work as the default interpreter.
>
> Then I changed conf/zeppelin-site.xml to alter the sequence of
> the interpreter, putting org.apache.zeppelin.spark.SparkRInterpreter as
> first interpreter. This worked, but this solution is not practical. If I want
> to work on an R notebook and a Scala notebook in parallel, this mechanism
> requires me to touch the conf/zeppelin-site.xml file or keep using %r tags
> on each cell.
>
> Thanks!
>
> -AB
>
> On Mon, Jul 25, 2016 at 3:30 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
>
>> Hello Abul,
>>
>> Changing orders within a group dynamically is not supported yet. You can
>> change it by creating interpreter-setting.json in a resources directory. In
>> interpreter-setting.json, you can find a property named `default`. If it's
>> true, that will be the default interpreter in the group. If you don't want to
>> compile Zeppelin again, copy interpreter-setting.json into
>> interpreter/spark/, open it, and change it. That will have the same
>> effect.
>>
>> Hope this helps,
>> Jongyoul
>>
>> On Mon, Jul 25, 2016 at 4:39 PM, Abul Basar <aba...@einext.com> wrote:
>>
>>> Hi Krishnaprasad,
>>>
>>> Yes, I have played around with that feature. What I found is "spark,
>>> pyspark, r, sql" are grouped together. I use Zeppelin for Spark projects.
>>> So I need to set one of these sub-categories as default. Most often I use
>>> scala for Spark. But I should be able to create a notebook using r (which
>>> essentially is SparkR) as a default. Please let me know if I am missing
>>> something.
>>>
>>> Thanks!
>>> - AB
>>>
>>> On Mon, Jul 25, 2016 at 12:45 PM, Krishnaprasad A S <
>>> krishna.pra...@flytxt.com> wrote:
>>>
>>>> Hi Abul,
>>>>  You can change the default interpreter for each notebook through the
>>>> Zeppelin web UI.
>>>> Go to the notebook and then settings (upper right corner); there you can
>>>> find the Interpreter binding option. You can reorder the interpreters by
>>>> drag and drop. The first one will be the default.
>>>>
>>>> Hope this helps.
>>>>
>>>> Regards,
>>>> Krishnaprasad
>>>>
>>>> On Mon, Jul 25, 2016 at 12:01 PM, Abul Basar <aba...@einext.com> wrote:
>>>>
>>>>> I know there is a way to set up a default interpreter in Zeppelin using
>>>>> the zeppelin.interpreters
>>>>> property in conf/zeppelin-site.xml. The setting is global in nature.
>>>>>
>>>>> But is it possible to create a notebook-level setting for the
>>>>> interpreter? For example, in one notebook I want to set the default
>>>>> interpreter to R so that I do not have to start every code block with
>>>>> "%spark.r", while in another notebook I want to set the default
>>>>> interpreter to Scala.
>>>>>
>>>>> I am using v0.6
>>>>>
>>>>> AB
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Krishnaprasad A S
>>>> Lead Engineer
>>>> Flytxt
>>>> Skype: krishnaprasadas
>>>> M: +91 8907209454 | O: +91 471.3082753 | F: +91 471.2700202
>>>> www.flytxt.com | Visit our blog <http://blog.flytxt.com/> | Follow us
>>>> <http://www.twitter.com/flytxt> | Connect on LinkedIn
>>>> <http://www.linkedin.com/company/22166?goback=%2Efcs_GLHD_flytxt_false_*2_*2_*2_*2_*2_*2_*2_*2_*2_*2_*2_*2=ncsrch_hits>
>>>>
>>>
>>>
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
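
The zeppelin-site.xml change AB describes can be sketched as below. The class list is abbreviated (a real file enumerates every interpreter class shipped with Zeppelin); the first entry becomes the global default, which is exactly why reordering it per notebook is impractical:

```xml
<!-- conf/zeppelin-site.xml - sketch; the value is abbreviated. The
     first class listed becomes the global default interpreter, so this
     setting is all-or-nothing across notebooks. -->
<property>
  <name>zeppelin.interpreters</name>
  <value>org.apache.zeppelin.spark.SparkRInterpreter,org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter</value>
</property>
```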


Re: Pass Credentials through JDBC

2016-07-28 Thread Jongyoul Lee
Hi,

In my plan, this is a next step after
https://issues.apache.org/jira/browse/ZEPPELIN-1210. But for now, there's
no way to pass your credentials while hiding them. I hope that will be
included in 0.7.0.

Regards,
Jongyoul

On Thu, Jul 28, 2016 at 11:22 PM, Benjamin Kim <bbuil...@gmail.com> wrote:

> How do I pass username and password to JDBC connections such as Phoenix
> and Hive that are my own? Can my credentials be passed from Shiro after
> logging in? Or do I have to set them at the Interpreter level without
> sharing them? I wish there was more information on this.
>
> Thanks,
> Ben




-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Issue with Spark + Zeppelin on Mesos - Failed to create local dir

2016-07-28 Thread Jongyoul Lee
It looks like an error on Spark's side. Does it work normally in spark-shell?

On Thu, Jul 28, 2016 at 7:23 AM, Michael Sells <mjse...@gmail.com> wrote:

> Trying to get Zeppelin running on Mesos and I'm consistently hitting the
> following error when I try to create a dataframe/rdd from a file.
>
> java.io.IOException: Failed to create local dir in
> /tmp/blockmgr-82f31798-dd17-4907-a039-d1c90bf12a80/0e.
> at
> org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:73)
> at org.apache.spark.storage.DiskStore.contains(DiskStore.scala:161)
> at org.apache.spark.storage.BlockManager.org
> $apache$spark$storage$BlockManager$$getCurrentBlockStatus(BlockManager.scala:391)
> at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:817)
> at
> org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:645)
> at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1003)
>
> Running Mesos 0.28, Zeppelin 0.6.0, Spark 1.6.1. This seems to happen
> whenever I try to read data from any source. Errors out just trying to
> create a dataframe or rdd like:
>
> sc.textFile("s3://filepath")
>
> Any pointers on what might be off here? I've tried changing the temp dir
> around and opening permissions. Everything I see indicates it should be
> able to write there. Any help would be appreciated.
>
> Thanks,
> Mike
>



-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
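
Before digging into Mesos itself, it can help to rule out a plain permissions or full-disk problem on the agent. The sketch below (the SPARK_LOCAL_DIRS fallback and the probe path are assumptions, not from the thread) checks whether the scratch directory Spark's block manager will use is writable by the current user:

```shell
# Probe whether Spark's scratch directory (spark.local.dir, default /tmp)
# is writable; "Failed to create local dir" is often just a permissions
# or full-disk problem on the Mesos agent running the executor.
dir="${SPARK_LOCAL_DIRS:-/tmp}"
probe="$dir/zeppelin-write-probe.$$"
if mkdir -p "$probe" && touch "$probe/ok"; then
  echo "writable: $dir"
else
  echo "NOT writable: $dir" >&2
fi
rm -rf "$probe"   # clean up the probe directory
```

Run this as the same user the Mesos agent launches executors as; if it prints "NOT writable", the Zeppelin/Spark side is not at fault.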

