Re: [ANNOUNCE] Apache Zeppelin 0.8.0 released

2018-06-28 Thread Jianfeng (Jeff) Zhang
Hi Patrick,

Which link is broken? I can access all the links.

Best Regard,
Jeff Zhang


From: Patrick Maroney <pmaro...@wapacklabs.com>
Reply-To: <us...@zeppelin.apache.org>
Date: Friday, June 29, 2018 at 4:59 AM
To: <us...@zeppelin.apache.org>
Cc: dev <dev@zeppelin.apache.org>
Subject: Re: [ANNOUNCE] Apache Zeppelin 0.8.0 released

Great work Team/Community!

Links on the main download page are broken:

http://zeppelin.apache.org/download.html

...at least the ones I need ;-)

Patrick Maroney
Principal Engineer - Data Science & Analytics
Wapack Labs LLC


Public Key: http://pgp.mit.edu/pks/lookup?op=get=0x7C810C9769BD29AF

On Jun 27, 2018, at 11:21 PM, Prabhjyot Singh
<prabhjyotsi...@gmail.com> wrote:

Awesome! congratulations team.



On Thu 28 Jun, 2018, 8:39 AM Taejun Kim,
<i2r@gmail.com> wrote:
Awesome! Thanks for your great work :)

On Jun 28, 2018 at 12:07, Jeff Zhang
<zjf...@apache.org> wrote:
The Apache Zeppelin community is pleased to announce the availability of
the 0.8.0 release.

Zeppelin is a collaborative data analytics and visualization tool for
distributed, general-purpose data processing systems such as Apache Spark,
Apache Flink, etc.

This is another major release after the previous minor release, 0.7.3.
The community has put significant effort into improving Apache Zeppelin since
the last release: 122 contributors resolved a total of 602 issues. Many new
features are introduced, such as inline configuration, the IPython
interpreter, yarn-cluster mode support, and an interpreter lifecycle manager.

We encourage you to download the latest release
from http://zeppelin.apache.org/download.html

The release note is available
at http://zeppelin.apache.org/releases/zeppelin-release-0.8.0.html

We welcome your help and feedback. For more information on the project and
how to get involved, visit our website at http://zeppelin.apache.org/

Thank you to all users and contributors who have helped to improve Apache
Zeppelin.

Regards,
The Apache Zeppelin community
--
Taejun Kim

Data Mining Lab.
School of Electrical and Computer Engineering
University of Seoul



Re: PRs for documentation?

2018-06-19 Thread Jianfeng (Jeff) Zhang


It is not necessary to create a ticket for a documentation change if it is a
trivial change; otherwise you still need to create a ticket.

Regarding the review, I am sorry that there's not much review bandwidth
among committers. If there's no response in 3 days, please ping someone or
send a mail to the dev mailing list.



Best Regard,
Jeff Zhang





On 6/19/18, 9:50 PM, "Alex Ott"  wrote:

>Hi all
>
>I have a question - should I file JIRA for documentation
>fixes/improvements?
>For example, I opened PR some time ago (
>https://github.com/apache/zeppelin/pull/2997) that fixes multiple problems
>and improves the documentation. But it is still not merged... (conflicts
>occurred afterwards - I'll rebase soon)
>
>-- 
>With best wishes,
>Alex Ott
>http://alexott.net/
>Twitter: alexott_en (English), alexott (Russian)



Re: All PySpark jobs are canceled when one user cancels his PySpark paragraph (job)

2018-06-12 Thread Jianfeng (Jeff) Zhang

Which version do you use ?


Best Regard,
Jeff Zhang


From: Jhon Anderson Cardenas Diaz <jhonderson2...@gmail.com>
Reply-To: "us...@zeppelin.apache.org" <us...@zeppelin.apache.org>
Date: Friday, June 8, 2018 at 11:08 PM
To: "us...@zeppelin.apache.org" <us...@zeppelin.apache.org>,
"dev@zeppelin.apache.org" <dev@zeppelin.apache.org>
Subject: All PySpark jobs are canceled when one user cancels his PySpark
paragraph (job)

Dear community,

Currently we are having problems with multiple users running paragraphs
associated with PySpark jobs.

The problem is that if a user aborts/cancels his PySpark paragraph (job), the
active PySpark jobs of the other users are canceled too.

Going into detail, I've seen that when you cancel a user's job, this method is
invoked (which is fine):

sc.cancelJobGroup("zeppelin-[notebook-id]-[paragraph-id]")

But somehow unknown to me, this method is also invoked:

sc.cancelAllJobs()

The above is apparent from the log trace that appears in the other users'
jobs:

Py4JJavaError: An error occurred while calling o885.count.
: org.apache.spark.SparkException: Job 461 cancelled as part of cancellation of 
all jobs
at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at 
org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:1375)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply$mcVI$sp(DAGScheduler.scala:721)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:721)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:721)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
at 
org.apache.spark.scheduler.DAGScheduler.doCancelAllJobs(DAGScheduler.scala:721)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1628)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
at 
org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2386)
at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2788)
at 
org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2385)
at 
org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2392)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2420)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2419)
at org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2801)
at org.apache.spark.sql.Dataset.count(Dataset.scala:2419)
at sun.reflect.GeneratedMethodAccessor120.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)

(, Py4JJavaError('An error occurred while 
calling o885.count.\n', JavaObject id=o886), )

Any idea why this could be happening?

(I have 0.8.0 version from September 2017)

Thank you!
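The difference between the two calls described above can be sketched with a
toy job registry (pure Python, not Spark itself; the job ids and group names
below are illustrative):

```python
import threading

class JobRegistry:
    """Toy model of Spark's job-group bookkeeping: each running job is
    tagged with a group id, and cancellation can target one group or all."""

    def __init__(self):
        self._lock = threading.Lock()
        self._jobs = {}  # job_id -> group_id

    def submit(self, job_id, group_id):
        with self._lock:
            self._jobs[job_id] = group_id

    def cancel_job_group(self, group_id):
        # Only jobs tagged with this group are cancelled ...
        with self._lock:
            cancelled = [j for j, g in self._jobs.items() if g == group_id]
            for j in cancelled:
                del self._jobs[j]
        return cancelled

    def cancel_all_jobs(self):
        # ... whereas this wipes every group, which matches the symptom
        # reported above: one user's cancel kills everyone's jobs.
        with self._lock:
            cancelled = list(self._jobs)
            self._jobs.clear()
        return cancelled

registry = JobRegistry()
registry.submit(460, "zeppelin-note1-para1")   # user A's paragraph
registry.submit(461, "zeppelin-note2-para7")   # user B's paragraph

# Expected behaviour: cancelling A's group leaves B's job running.
registry.cancel_job_group("zeppelin-note1-para1")
print(sorted(registry._jobs))  # -> [461]
```

If `cancelAllJobs()` is somehow reached instead, job 461 would be removed as
well, which is exactly the failure mode in the stack trace above.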


Re: Error while building Zeppelin master

2017-10-30 Thread Jianfeng (Jeff) Zhang

Sure, go ahead.


Best Regard,
Jeff Zhang





On 10/31/17, 6:46 AM, "Andrea Santurbano"  wrote:

>I think the problem is related to [ZEPPELIN-2685], since the pom.xml
>of the zeppelin-interpreter module includes this configuration:
>
><outputDirectory>${basedir}/interpreter/${project.name}</outputDirectory>
>
>If I change the value in this way:
>
><outputDirectory>${basedir}/interpreter/${project.artifactId}</outputDirectory>
>
>it works correctly.
>
>Can I make a PR for this?
>Thanks
>Andrea
>
>
>Il giorno mer 25 ott 2017 alle ore 17:12 Jongyoul Lee 
>ha scritto:
>
>> This looks like a bug when building in a Windows env. Can you try that on
>> Linux or Mac?
>>
>> On Tue, Oct 24, 2017 at 11:56 PM, Andrea Santurbano 
>> wrote:
>>
>> > Hi guys,
>> > when I try to build Zeppelin from the Apache master repo on Windows 10
>> > I get this error:
>> >
>> > [ERROR] Failed to execute goal
>> > org.apache.maven.plugins:maven-dependency-plugin:2.8:copy
>>(copy-artifact)
>> > on project zeppelin-interpreter: Error copying artifact from
>> > C:\Users\Andrea\workspace_zeppelin\zeppelin-master\
>> > zeppelin-interpreter\target\zeppelin-interpreter-0.8.0-SNAPSHOT.jar
>> > to
>> > 
>>C:\Users\Andrea\workspace_zeppelin\zeppelin-master\zeppelin-interpreter\
>> > interpreter\Zeppelin:
>> > Interpreter\zeppelin-interpreter-0.8.0-SNAPSHOT.jar
>> >
>> > Can someone help me to understand why?
>> > Thanks
>> > Andrea
>> >
>>
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
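The failure above is consistent with a Windows filename restriction: the
module's Maven `${project.name}` is the display name "Zeppelin: Interpreter",
and the colon is not allowed in Windows path segments, while
`${project.artifactId}` ("zeppelin-interpreter") is safe. A minimal check
(pure Python, illustrative only):

```python
# Characters that are illegal in a Windows file or directory name.
WINDOWS_FORBIDDEN = set('<>:"/\\|?*')

def valid_windows_segment(name):
    """Return True if the name contains no Windows-forbidden characters."""
    return not (set(name) & WINDOWS_FORBIDDEN)

# ${project.name} of the module is its display name, with a colon:
print(valid_windows_segment("Zeppelin: Interpreter"))  # -> False
# ${project.artifactId} is the machine-friendly id:
print(valid_windows_segment("zeppelin-interpreter"))   # -> True
```

This explains why the build works on Linux/Mac (where a colon is legal in a
filename) but fails when the copy goal creates the directory on Windows.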



Re: Roadmap for 0.8.0

2017-03-20 Thread Jianfeng (Jeff) Zhang

Strongly +1 for adding system tests for the different interpreter modes and
focusing on bug fixing rather than new features. I have heard some users
complain about bugs in Zeppelin major releases. A stabilized release is very
necessary for the community.




Best Regard,
Jeff Zhang


From: moon soo Lee
Reply-To: "us...@zeppelin.apache.org"
Date: Tuesday, March 21, 2017 at 4:10 AM
To: "us...@zeppelin.apache.org", dev
Subject: Re: Roadmap for 0.8.0

Great to see discussion for 0.8.0.
List of features for 0.8.0 looks really good.

Interpreter factory refactoring
The interpreter layer supports various behaviors depending on the combination
of PerNote/PerUser and Shared/Scoped/Isolated. We'll need strong test cases
for each combination as a first step.
Otherwise, any pull request can silently break one of the behaviors at any
time, whether we refactor or not. And fixing and testing this behavior is
hard. Once we have complete test cases, they not only guarantee the behavior
but also make refactoring much easier.
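The combination space described above is small and easy to enumerate; a quick
sketch (mode names as given in the mail, Python used only for illustration):

```python
from itertools import product

# Binding scopes and process modes as listed above.
scopes = ["PerNote", "PerUser"]
modes = ["Shared", "Scoped", "Isolated"]

# Each scope/mode pair is a distinct interpreter behavior that would
# need its own system test before refactoring can be done safely.
combinations = [f"{scope}/{mode}" for scope, mode in product(scopes, modes)]
for combo in combinations:
    print(combo)
print(f"{len(combinations)} combinations to cover")  # 6 combinations to cover
```

Six combinations is a tractable test matrix, which supports the argument that
writing them all before refactoring is realistic.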


0.8.0 release
I'd like to suggest improvements on how we release a new version.

In the past, 0.6.0 and 0.7.0 were released with some critical problems. (It
took 3 months to stabilize 0.6, and we have been stabilizing 0.7.0 for 2
months.)

I think the same thing will happen again with 0.8.0, since we're going to make
lots of changes and add many new features.
After we release 0.8.0, while 'stabilizing' the new release, users who try
the new release may get a wrong impression of its quality. That is very bad,
and we already repeated the mistake in 0.6.0 and 0.7.0.

So from the 0.8.0 release on, I'd suggest we improve the way we release new
versions to give users proper expectations. I think there are several ways of
doing it:

1. Release 0.8.0-preview officially, and then release 0.8.0.
2. Release 0.8.0 with a 'beta' or 'unstable' label, and keep 0.7.x as the
'stable' release on the download page. Once a 0.8.x release becomes stable
enough, mark it as 'stable' and move 0.7.x to the 'old' releases.


After 0.8.0,
Since the Zeppelin project started, it has gone through some major
milestones:

- the project got its first users and first contributor
- the project went into the Apache Incubator
- the project became a TLP.

And I think it's time to think about hitting another major milestone.

Considering the features we already have, the features we're planning for
0.8, and the wide adoption of Zeppelin in the industry, I think it's time to
focus on making the project more mature and making a 1.0 release, which I
think is a big milestone for the project.

After the 0.8.0 release, I suggest we focus more on bug fixes, stability
improvements, and optimizing the user experience than on adding new features.
And with the subsequent minor releases, 0.8.1, 0.8.2, ..., the moment we feel
confident about the quality, we release it as 1.0.0 instead of 0.8.x.

Once we have 1.0.0 released, I think we can make larger, experimental changes
on the 2.0.0 branch aggressively, while we keep maintaining the 1.0.x branch.


Thanks,
moon

On Mon, Mar 20, 2017 at 8:55 AM Felix Cheung wrote:
There are several pending visualization improvements/PRs that it would be
very good to get in as well.



From: Jongyoul Lee
Sent: Sunday, March 19, 2017 9:03:24 PM
To: dev; us...@zeppelin.apache.org
Subject: Roadmap for 0.8.0

Hi dev & users,

Recently, the community has been submitting many new features for Apache
Zeppelin. I think these are very positive signals for improving Apache
Zeppelin and its community. But from another angle, we should focus on what
the next release includes. I think we need to summarize and prioritize them.
Here is what I know:

* Cluster management
* Admin feature
* Replace some context to separate users
* Helium online

Feel free to speak up if you want to add more things. I think we need to
choose which features will be included in 0.8.0, too.

Regards,
Jongyoul Lee

--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Unstable travis CI recently

2017-01-18 Thread Jianfeng (Jeff) Zhang
+1 

Best Regard,
Jeff Zhang





On 1/19/17, 1:03 AM, "Jongyoul Lee"  wrote:

>I also agree that people don't care about the CI result anymore, even when
>it's a real failure. One possible solution is making an umbrella ticket,
>grabbing the flaky tests, disabling them at first and enabling them once
>fixed. But it assumes we need to do our best to fix the flaky tests.
>Otherwise, we will lose some tests...
>
>How do you guys think of it?
>
>On Thu, Jan 19, 2017 at 1:56 AM, Felix Cheung 
>wrote:
>
>> I'd agree. Is there a course of action you can propose? Disabling all
>> these tests is not a long-term solution, right?
>>
>>
>> _
>> From: Jeff Zhang
>> Sent: Tuesday, January 17, 2017 6:01 PM
>> Subject: Re: Unstable travis CI recently
>> To:
>>
>>
>>
>> Should we disable these flaky tests now? CI seems to have become more
>> unstable recently. It is almost useless for me; I never see a successful
>> CI run recently.
>> Here's one screenshot of recently closed PRs. Most of them have CI
>> failures. IMO, this is pretty bad, especially for new contributors.
>>
>>
>>
>> [pasted1]
>>
>>
>>
>>
>>
>> On Tue, Dec 13, 2016 at 11:27 AM, Jun Kim wrote:
>> @Hoon Thanks for your information :-) I should use that next time!
>>
>> On Tue, Dec 13, 2016 at 8:23 PM, Park Hoon <1am...@gmail.com> wrote:
>>
>> > I totally agree with your opinions. I will first work on ZEPPELIN-1739
>> > and ZEPPELIN-1749, which I reported before.
>> >
>> > @Jun Kim. So true. We have to wait a long time. FYI, we can use our own
>> > Travis CI containers to test (I recently learned this too!) by
>> > configuring your-github-id/zeppelin-repo in Travis CI.
>> >
>> > Thanks!
>> >
>> > On Tue, Dec 13, 2016 at 8:19 PM, Jun Kim <j...@gmail.com> wrote:
>> >
>> > > I definitely agree with you!
>> > >
>> > > I reopened my PR twice recently to pass CI, and it wasn't because
>> > > of me.
>> > >
>> > > CI takes about ~40 min per run, so I had to wait 1 h 20 min to write
>> > > a comment after passing CI T_T
>> > >
>> > > And the worst of it is that I believe CI's results less and less.
>> > >
>> > > On Tue, Dec 13, 2016 at 8:10 PM, Jeff Zhang wrote:
>> > >
>> > > > Hi Folks,
>> > > >
>> > > > As you may notice, our Travis CI has not been stable recently.
>> > > > There are many flaky tests, and it wastes every developer's time
>> > > > to figure out whether a failure is due to your PR or a flaky test.
>> > > > So I think it is time for us to make the CI stable. Here are the
>> > > > tickets for all the flaky tests:
>> > > >
>> > > > https://issues.apache.org/jira/issues/?jql=project%20%
>> > > 3D%20ZEPPELIN%20AND%20text%20~%20flaky%20and%20status%20!%
>> > > 3D%20RESOLVED%20ORDER%20%20BY%20status%20ASC%20
>> > > >
>> > > > Fixing the flaky tests may take some time and may not be easy for
>> > > > some tests, but I think it is worth doing. And it is better for
>> > > > the people familiar with a particular test case to fix it. What do
>> > > > you guys think?
>> > > > Thanks
>> > > >
>> > > --
>> > > Taejun Kim
>> > >
>> > > Data Mining Lab.
>> > > School of Electrical and Computer Engineering
>> > > University of Seoul
>> > >
>> >
>> --
>> Taejun Kim
>>
>> Data Mining Lab.
>> School of Electrical and Computer Engineering
>> University of Seoul
>>
>>
>>
>
>
>-- 
>이종열, Jongyoul Lee, 李宗烈
>http://madeng.net



Re: Exporting Spark paragraphs as Spark Applications

2017-01-04 Thread Jianfeng (Jeff) Zhang

I don't understand why users want to export a Zeppelin note as a Spark
application.

If they want to trigger the running of a Spark app, why not use Zeppelin's
REST API for that? Even if users export it as a Spark application, most of
the time in reality they need to submit it through a Spark job server, so
why not use Zeppelin as a Spark job server?
And if the Spark app fails, it is pretty hard to debug, because the
exporting tool has changed/restructured the source code.

If this is a pretty large and complicated Spark application, I don't think
Zeppelin is a proper tool for that; they'd better use an IDE for that
project.

BTW, after https://github.com/apache/zeppelin/pull/1799, users can define
dependencies between paragraphs, and they can run one whole note which
contains different interpreters.
 


Best Regard,
Jeff Zhang





On 1/5/17, 2:25 AM, "Luciano Resende"  wrote:

>I have made some progress with a tool to handle the points discussed in
>this thread. It's currently a command-line tool: given a Zeppelin
>notebook (note.json), it generates a Spark Scala application, compiles it
>using the compiler embedded in the Scala SDK, and then packages all these
>resources into a jar that works with the spark-submit command.
>
>I would like to start prototyping the integration into the Zeppelin UI, and
>I was wondering if it would be ok to use the above jar as a dependency
>(e.g. from a Maven release) and integrate it into Zeppelin...
>
>Thoughts ?
>
>
>On Mon, Sep 19, 2016 at 7:47 AM, Sourav Mazumder <
>sourav.mazumde...@gmail.com> wrote:
>
>> To Moon's point, this is what my vision is around this feature -
>>
>> 1. Users should be able to package one, more than one, or all of the
>> paragraphs in a notebook to create a jar file which can be used with
>> spark-submit.
>>
>> 2. The tool should automatically remove all the interactive statements
>> like print, show, etc.
>>
>> 3. The tool should automatically create a Main class in addition to the
>> jar file(s), which will internally call the respective jar. Users can
>> then change this Main class if needed for parameterization through args.
>>
>> Regards,
>> Sourav
>>
>> On Mon, Sep 19, 2016 at 7:33 AM, Sourav Mazumder <
>> sourav.mazumde...@gmail.com> wrote:
>>
>> > I am also pretty much for this.
>> >
>> > I have got a similar request from every person/group to whom I
>> > showcased Zeppelin.
>> > Regards,
>> > Sourav
>> >
>> > On Fri, Sep 16, 2016 at 8:06 PM, moon soo Lee  wrote:
>> >
>> >> Hi Luciano,
>> >>
>> >> I've also got a lot of questions about "productizing the notebook"
>> >> every time I meet users who use Zeppelin in their work.
>> >>
>> >> I think it's actually about two different problems that Zeppelin
>> >> needs to address.
>> >>
>> >> *1) Provide a way for an interactive notebook to become part of a
>> >> production data pipeline.*
>> >>
>> >> Although Zeppelin has a quite convenient cron-like scheduler for each
>> >> Note, the built-in cron scheduler is not ready for serious use in
>> >> production, because it lacks some features like actions after
>> >> success/failure, fault-tolerance, history, and so on. I think the
>> >> community is working on improving it, and it's going to take some time.
>> >> Meanwhile, any external enterprise-level job scheduler can run a Note
>> >> or Paragraph via the REST API. But we don't have any guide and examples
>> >> for it: what REST APIs users can use for this purpose, and how to use
>> >> them in various cases (e.g. with authentication on, dynamic form
>> >> parameters, etc.). I think a lot of things need to be improved to make
>> >> Zeppelin easier to fit into a production pipeline.
>> >>
>> >> *2) Provide a stable way to run Spark paragraphs.*
>> >>
>> >> Another barrier to using the notebook in a production pipeline is the
>> >> Scala REPL in SparkInterpreter. SparkInterpreter uses the Scala REPL
>> >> to provide an interactive Scala session, and the Scala REPL will
>> >> eventually hit an OOME as it compiles and runs statements. The current
>> >> workaround in Zeppelin is that the cron scheduler inside a notebook
>> >> has a checkbox that can restart the Note after the scheduler runs it.
>> >> Of course that option does not apply when an external scheduler runs
>> >> the job through the REST API.
>> >>
>> >> I think what Luciano is suggesting, "Export Spark Paragraph as Spark
>> >> application", is interesting. If Spark paragraphs can be easily
>> >> packaged into a jar (Spark application), that can be one way to
>> >> address 1) and 2), in case the user already has a stable way to
>> >> schedule Spark application jars.
>> >>
>> >> Actually, the Flink interactive shell works in a similar way
>> >> internally, as far as I know, i.e. it packages compiled classes into
>> >> a jar and submits it.
>> >>
>> >> One idea for prototyping is:
>> >> How about making an interpreter inside the Spark interpreter group,
>> >> say %spark.build or some better name.
>> >>
>> >> And if the user runs some command like
>> >>
>> >> 
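Moon's point 1) above, driving a Note or Paragraph from an external scheduler
over the REST API, could look roughly like the sketch below. The endpoint
pattern follows Zeppelin's notebook REST API as I understand it, and the
host, port, and ids are assumptions, so verify against the REST API docs for
your Zeppelin version before relying on this:

```python
import json
import urllib.request

ZEPPELIN = "http://localhost:8080"  # assumption: default Zeppelin address

def run_paragraph_url(note_id, paragraph_id):
    # Asynchronous "run paragraph" endpoint (pattern assumed from the
    # Zeppelin notebook REST API; check your version's documentation).
    return f"{ZEPPELIN}/api/notebook/job/{note_id}/{paragraph_id}"

def run_paragraph(note_id, paragraph_id):
    """POST to the run-paragraph endpoint and return the parsed response."""
    req = urllib.request.Request(
        run_paragraph_url(note_id, paragraph_id), data=b"", method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # typically a status object

# Hypothetical note and paragraph ids, for illustration only:
print(run_paragraph_url("2C57UKYWR", "paragraph_1"))
```

An external scheduler (cron, Airflow, etc.) would call `run_paragraph` and
then poll the job status endpoint to decide success/failure actions.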

Re: interpreter-setting.json properties

2016-12-13 Thread Jianfeng (Jeff) Zhang
Hi JL,

I think Igor means the property name appears twice in
interpreter-setting.json.

E.g. in the following setting, zeppelin.livy.url appears twice:


"zeppelin.livy.url": {
  "envName": "ZEPPELIN_LIVY_HOST_URL",
  "propertyName": "zeppelin.livy.url",
  "defaultValue": "http://localhost:8998",
  "description": "The URL for Livy Server."
},



Best Regard,
Jeff Zhang





On 12/14/16, 10:00 AM, "Jongyoul Lee"  wrote:

>Hi,
>
>Zeppelin supports two ways to pass configuration to an interpreter, and
>propertyName is one of them. I don't know what you are talking about
>exactly, but the interpreter's properties map is initialized by the set of
>propertyNames in interpreter-setting. If you remove propertyName, those
>keys are not initialized when the interpreter starts.
>
>JL
>
>On Tue, Dec 13, 2016 at 10:48 PM, Igor Drozdov 
>wrote:
>
>> Hello,
>>
>> I'm doing task https://issues.apache.org/jira/browse/ZEPPELIN-922
>> (introducing new interpreter registration mechanism for Scalding) and I
>> have a question about interpreter-setting.json.
>> Why do we need "propertyName"? It always has the same value as a key in
>> properties map (or null). I can't find any usages of it in code.
>>
>> Should I add this property to new json or just omit it?
>>
>> Thank you
>> Igor Drozdov
>>
>>
>
>
>-- 
>이종열, Jongyoul Lee, 李宗烈
>http://madeng.net
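Jongyoul's explanation, that the interpreter's properties map is seeded from
the propertyName entries in interpreter-setting.json, can be sketched as
follows. This is an illustrative toy model, not Zeppelin's actual
initialization code:

```python
# Toy model of seeding an interpreter's properties map from
# interpreter-setting.json entries (illustrative only).
SETTING = {
    "zeppelin.livy.url": {
        "envName": "ZEPPELIN_LIVY_HOST_URL",
        "propertyName": "zeppelin.livy.url",
        "defaultValue": "http://localhost:8998",
    },
}

def init_properties(setting, env):
    """Build the properties map: env var wins, else the default value."""
    props = {}
    for key, spec in setting.items():
        # propertyName duplicates the map key in practice; fall back to it.
        name = spec.get("propertyName") or key
        props[name] = env.get(spec.get("envName", ""), spec["defaultValue"])
    return props

print(init_properties(SETTING, {}))
# -> {'zeppelin.livy.url': 'http://localhost:8998'}
```

This also shows why dropping propertyName is plausible: since it always
matches the map key, the fallback covers it, which is consistent with Igor's
observation that it has no other usages.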



Re: [VOTE] Apache Zeppelin release 0.6.1 (rc2)

2016-08-15 Thread Jianfeng (Jeff) Zhang
>sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
>62)
>at
>sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorIm
>pl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:498)
>at
>org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:214)
>at
>org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterprete
>r.java:129)
>at
>org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInte
>rpreter.java:94)
>at
>org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJo
>b.jobRun(RemoteInterpreterServer.java:341)
>at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
>at 
>org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at
>java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.acces
>s$201(ScheduledThreadPoolExecutor.java:180)
>at
>java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(S
>cheduledThreadPoolExecutor.java:293)
>at
>java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:
>1142)
>at
>java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java
>:617)
>at java.lang.Thread.run(Thread.java:745)
>
>
>Thank you,
>Vinay
>
>On Mon, Aug 15, 2016 at 7:55 AM, rohit choudhary <rconl...@gmail.com>
>wrote:
>
>> +1.
>>
>> Ran sample notebooks. Cent.
>>
>> Thanks,
>> Rohit.
>>
>> On Mon, Aug 15, 2016 at 4:23 PM, madhuka udantha
>><madhukaudan...@gmail.com
>> >
>> wrote:
>>
>> > +1
>> >
>> > Built in windows 8
>> >
>> > On Mon, Aug 15, 2016 at 4:03 PM, mina lee <mina...@apache.org> wrote:
>> >
>> > > +1 (binding)
>> > >
>> > > On Sun, Aug 14, 2016 at 9:58 PM, Felix Cheung <
>> felixcheun...@hotmail.com
>> > >
>> > > wrote:
>> > >
>> > > > +1
>> > > >
>> > > > Tested out binaries and netinstall, with spark and a few other
>> > > > interpreters.
>> > > >
>> > > > Thanks Mina!
>> > > >
>> > > >
>> > > > _
>> > > > From: Alexander Bezzubov <b...@apache.org>
>> > > > Sent: Sunday, August 14, 2016 12:05 AM
>> > > > Subject: Re: [VOTE] Apache Zeppelin release 0.6.1 (rc2)
>> > > > To: <dev@zeppelin.apache.org>
>> > > >
>> > > >
>> > > > +1 (binding), for this rapid release put together so well by Mina
>> > > > Lee again!
>> > > >
>> > > > Verified:
>> > > > - build from sources, SparkInterpreter over apache spark 2.0 and
>> > > > PythonInterpreter
>> > > > - https://dist.apache.org is super slow for me :\ so I cannot help
>> > > > with the 517 MB binaries
>> > > >
>> > > > --
>> > > > Alex
>> > > >
>> > > > On Sun, Aug 14, 2016 at 1:54 AM, Victor Manuel Garcia <
>> > > > victor.gar...@beeva.com> wrote:
>> > > >
>> > > > > +1
>> > > > >
>> > > > > 2016-08-13 18:45 GMT+02:00 Prabhjyot Singh
>> > > > > <prabhjyotsi...@apache.org>:
>> > > > >
>> > > > > > +1
>> > > > > >
>> > > > > > On 13 Aug 2016 9:25 p.m., "Sourav Mazumder" <
>> > > > sourav.mazumde...@gmail.com>
>> > > > > >
>> > > > > > wrote:
>> > > > > >
>> > > > > > > + 1
>> > > > > > >
>> > > > > > > Regards,
>> > > > > > > Sourav
>> > > > > > >
>> > > > > > > > On Aug 13, 2016, at 2:17 AM, Hyung Sung Shim <
>> > > > > > hss...@nflabs.com>
>> > > > > > > wrote:
>> > > > > > > >
>> > > > > > > > +1
>> > > > > > > >
>&