Re: Can't delete an empty paragraph?

2017-04-26 Thread moon soo Lee
I checked the last three versions (0.6.2, 0.7.0, 0.7.1), and all of them can
remove an empty paragraph.

Thanks,
moon

On Mon, Apr 24, 2017 at 1:44 AM Partridge, Lucas (GE Aviation) <
lucas.partri...@ge.com> wrote:

> Thanks moon.  Unfortunately I’m not an admin for the system I’m using and
> don’t control when it gets updated.
>
>
>
> Do you happen to know which version of Zeppelin this issue was fixed in?
> Is it only 0.7.2?
>
>
>
> Thanks, Lucas.
>
>
>
> *From:* moon soo Lee [mailto:m...@apache.org]
> *Sent:* 22 April 2017 06:32
> *To:* users@zeppelin.apache.org
> *Subject:* EXT: Re: Can't delete an empty paragraph?
>
>
>
> Thanks for reporting the problem.
>
> If the 'About Zeppelin' dialog is missing the version number, I guess it's 0.5.x.
>
>
>
> We'll release 0.7.2 [1] in the next few weeks.
>
> Please consider using a recent version if possible.
>
>
>
> But of course, please feel free to share any issues with your version.
>
>
>
> Thanks,
>
> moon
>
>
>
> [1] https://issues.apache.org/jira/browse/ZEPPELIN-2276
>
>
>
> On Fri, Apr 21, 2017 at 6:22 AM Partridge, Lucas (GE Aviation) <
> lucas.partri...@ge.com> wrote:
>
> I can’t delete an empty paragraph in a notebook.  I’m talking about an
> empty paragraph that’s not the last paragraph. I can click on Remove under
> the paragraph’s settings icon and be prompted to delete it, but when I
> click OK the paragraph doesn’t get deleted!
>
>
>
> To delete it I have to put something in the paragraph (random text will
> do) and then execute it. Only then will the paragraph be deleted.
>
>
>
> This might have been fixed in a later version of Zeppelin. I know I’m not
> using the latest but unfortunately the ‘About Zeppelin’ dialog is missing
> the version number!
>
>
>
> Thanks, Lucas.
>
>
>
>


Re: How do I configure R interpreter in Zeppelin?

2017-04-26 Thread Ruslan Dautkhanov
Thanks for the feedback.

%spark.r
print("Hello World!")
throws an exception [2].

Understood - I'll try to remove -Pr and rebuild Zeppelin. Yes, I used a
fresh master snapshot.
(I haven't seen anything in the Maven build logs that could indicate a
problem around the R interpreter.)
I will update this email thread with the result after rebuilding Zeppelin
without -Pr.


[2]

spark.r interpreter not found
org.apache.zeppelin.interpreter.InterpreterException: spark.r interpreter
not found at
org.apache.zeppelin.interpreter.InterpreterFactory.getInterpreter(InterpreterFactory.java:417)
at org.apache.zeppelin.notebook.Note.run(Note.java:620) at
org.apache.zeppelin.socket.NotebookServer.persistAndExecuteSingleParagraph(NotebookServer.java:1781)
at
org.apache.zeppelin.socket.NotebookServer.runParagraph(NotebookServer.java:1741)
at
org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:288)
at
org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:59)
at
org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
at
org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
at
org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
at
org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)




-- 
Ruslan Dautkhanov

On Wed, Apr 26, 2017 at 2:13 PM, moon soo Lee  wrote:

> Zeppelin includes two R interpreter implementations.
>
> One used to be activated by -Psparkr, the other by -Pr.
> Since https://github.com/apache/zeppelin/pull/2215, -Psparkr is activated
> by default. And if you're trying to use SparkR, -Psparkr (activated by
> default in the master branch) is the implementation you might be more
> interested in.
>
> So you can just try using the %spark.r prefix.
> Let me know if it works for you.
>
> Thanks,
> moon
>
> On Wed, Apr 26, 2017 at 12:11 AM Ruslan Dautkhanov 
> wrote:
>
>> Hi moon soo Lee,
>>
>> Cloudera's Spark doesn't have $SPARK_HOME/bin/sparkR
>> Would Zeppelin still enable its sparkR interpreter then?
>>
>> Built Zeppelin using
>>
>> $ mvn clean package -DskipTests -Pspark-2.1 -Ppyspark
>>> -Dhadoop.version=2.6.0-cdh5.10.1 -Phadoop-2.6 -Pyarn *-Pr*
>>> -Pvendor-repo -Pscala-2.10 -pl '!...,!...' -e
>>
>>
>> . . .
>>> [INFO] Zeppelin: *R Interpreter*  SUCCESS
>>> [01:01 min]
>>> [INFO] 
>>> 
>>> [INFO] BUILD SUCCESS
>>> [INFO] 
>>> 
>>> [INFO] Total time: 11:28 min
>>
>>
>> Nevertheless, none of the R-related interpreters show up.
>>
>> This includes the latest Zeppelin snapshot and was the same on previous
>> releases of Zeppelin.
>> So something is missing on our side.
>>
>> R and the R packages mentioned in
>> http://zeppelin.apache.org/docs/0.8.0-SNAPSHOT/interpreter/r.html
>> are installed on the servers that run Zeppelin (and the Spark driver, as
>> it is yarn-client).
>>
>> I guess either the above build options are wrong or there is another
>> dependency I missed.
>> conf/zeppelin-site.xml has the R-related interpreters mentioned [1], but
>> none of them show up once Zeppelin starts up.
>>
>> Any ideas?
>>
>>
>> Thank you,
>> Ruslan
>>
>>
>> [1]
>>
>> 
>>> <property>
>>>   <name>zeppelin.interpreters</name>
>>>   <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.rinterpreter.RRepl,org.apache.zeppelin.rinterpreter.KnitR,org.apache.zeppelin.spark.SparkRInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.file.HDFSFileInterpreter,org.apache.zeppelin.flink.FlinkInterpreter,,org.apache.zeppelin.python.PythonInterpreter,org.apache.zeppelin.lens.LensInterpreter,org.apache.zeppelin.ignite.IgniteInterpreter,org.apache.zeppelin.ignite.IgniteSqlInterpreter,org.apache.zeppelin.cassandra.CassandraInterpreter,org.apache.zeppelin.geode.GeodeOqlInterpreter,org.apache.zeppelin.postgresql.PostgreSqlInterpreter,org.apache.zeppelin.jdbc.JDBCInterpreter,org.apache.zeppelin.kylin.KylinInterpreter,org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter,org.apache.zeppelin.scalding.ScaldingInterpreter,org.apache.zeppelin.alluxio.AlluxioInterpreter,org.apache.zeppelin.hbase.HbaseInterpreter,org.apache.zeppelin.livy.LivySparkInterpreter,org.apache.zeppelin.livy.LivyPySparkInterpreter,org.apache.zeppelin.livy.LivySparkRInterpreter,org.apache.zeppelin.livy.LivySparkSQLInterpreter,org.apache.zeppelin.bigquery.BigQueryInterpreter</value>
>>>   <description>Comma separated interpreter configurations. First interpreter become a default</description>
>>> </property>

Customizing sparkconfig before starting spark app

2017-04-26 Thread Serega Sheypak
Hi, I have a few questions about Spark application customization:
1. Is it possible to set the Spark app name from the notebook, not from the
Zeppelin conf?
2. Is it possible to register custom Kryo serializers?
3. Is it possible to configure the user name? Right now I'm running Zeppelin
as root and all jobs are submitted as root. I want to use the logged-in user
name instead.
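
As context for question 2, here is a minimal, generic Spark-side sketch of what registering custom Kryo serializers usually involves (the class names and values below are illustrative assumptions, not a confirmed Zeppelin-specific answer; in Zeppelin, such settings generally have to reach the SparkConf, e.g. via interpreter properties, before the SparkContext is created):

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical domain class used only for this example.
case class ClickEvent(userId: String, ts: Long)

// A registrator that tells Kryo which classes it should serialize.
class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[ClickEvent])
  }
}

// In plain Spark these settings go on the SparkConf before the context is created.
val conf = new SparkConf()
  .setAppName("my-notebook-job") // question 1: application name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[MyKryoRegistrator].getName) // question 2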


Re: paragraph log is shown always

2017-04-26 Thread Jan Rasehorn
Hello Moon,

Thank you for your suggestion.
I already called "clear output" through the web UI action for one of my
paragraphs, but it does not help, since clearing the output also removes
the output that actually should be displayed.

To make it clearer what I mean:

I'm using the Spark and SparkSQL interpreters.
When a paragraph is executed, I can see the console output for each
program statement above the rendered tables or diagrams.
As far as I know, this console output should not be visible if I use the
simple or report view. But it is visible and does not disappear even if I
switch between simple and report view.

Best regards,
Jan


2017-04-26 21:18 GMT+02:00 moon soo Lee :

> You can clear the output in %spark with
>
> z.getInterpreterContext.out.clear
>
> and in %pyspark with
>
> z.getInterpreterContext().out().clear()
>
> It may help hide unwanted output.
>
> Thanks,
> moon
>
>
> On Tue, Apr 25, 2017 at 11:09 AM Jan Rasehorn 
> wrote:
>
>> Hello,
>>
>> I'm currently running version 0.7.0 and 0.7.1 .
>>
>> When I execute a paragraph it will display the console log no matter if I
>> selected the simple or report mode for the notebook.
>>
>> I wonder if there is a trick to hide the paragraph console output but
>> still display the actual output like selection fields or tables/charts.
>>
>> Thanks for any suggestions and BR,
>> Jan
>>
>


Re: paragraph log is shown always

2017-04-26 Thread moon soo Lee
You can clear the output in %spark with

z.getInterpreterContext.out.clear

and in %pyspark with

z.getInterpreterContext().out().clear()

It may help hide unwanted output.
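
For example, a minimal sketch of how this might be combined with Zeppelin's %table display format in a %spark paragraph (the sample data below is purely illustrative):

%spark
// Statements whose console output we do not want to keep.
val rows = Seq(("alpha", 1), ("beta", 2))
println("some intermediate logging")
// Discard everything the paragraph has written to its output so far.
z.getInterpreterContext.out.clear
// Print only what should actually be rendered, using the %table display format.
println("%table name\tcount\n" + rows.map { case (n, c) => s"$n\t$c" }.mkString("\n"))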

Thanks,
moon

On Tue, Apr 25, 2017 at 11:09 AM Jan Rasehorn 
wrote:

> Hello,
>
> I'm currently running version 0.7.0 and 0.7.1 .
>
> When I execute a paragraph it will display the console log no matter if I
> selected the simple or report mode for the notebook.
>
> I wonder if there is a trick to hide the paragraph console output but
> still display the actual output like selection fields or tables/charts.
>
> Thanks for any suggestions and BR,
> Jan
>


Zeppelin Load test

2017-04-26 Thread Yeshwanth Jagini
Hi, we are trying to set up Zeppelin for our developer team.

Before doing that I want to understand how Zeppelin resource management is
done.
For example, let's say I have 15 users who are running notebooks from
Zeppelin simultaneously.

What is the ideal configuration for the machine that hosts the Zeppelin
server?

It would be great if someone could explain how the Spark connections are
handled from Zeppelin.





Thanks,
Yeshwanth Jagini


Last chance: ApacheCon is just three weeks away

2017-04-26 Thread Rich Bowen
ApacheCon is just three weeks away, in Miami, Florida, May 15th - 18th.
http://apachecon.com/

There's still time to register and attend. ApacheCon is the best place
to find out about tomorrow's software, today.

ApacheCon is the official convention of The Apache Software Foundation,
and includes the co-located events:
  * Apache: Big Data
  * Apache: IoT
  * TomcatCon
  * FlexJS Summit
  * Cloudstack Collaboration Conference
  * BarCampApache
  * ApacheCon Lightning Talks

And there are dozens of opportunities to meet your fellow Apache
enthusiasts, both from your project, and from the other 200+ projects at
the Apache Software Foundation.

Register here:
http://events.linuxfoundation.org/events/apachecon-north-america/attend/register-

More information here: http://apachecon.com/

Follow us and learn more about ApacheCon:
  * Twitter: @ApacheCon
  * Discussion mailing list:
https://lists.apache.org/list.html?apachecon-disc...@apache.org
  * Podcasts and speaker interviews: http://feathercast.apache.org/
  * IRC: #apachecon on https://freenode.net/

We look forward to seeing you in Miami!

-- 
Rich Bowen - VP Conferences, The Apache Software Foundation
http://apachecon.com/
@apachecon





Zeppelin framework is not getting unregistered from Mesos

2017-04-26 Thread Meethu Mathew
Hi,

We have connected our Zeppelin to Mesos, but the issue we are facing is
that the Zeppelin framework is not getting unregistered from Mesos even when
the notebook is closed.

Another problem is that if the user logs out from Zeppelin, the SparkContext
gets stopped. When the same user logs in again, it creates another
SparkContext, and the previous SparkContext becomes a dead process that
remains.

Is this a bug in Zeppelin, or is there a proper way to unbind the Zeppelin
framework?

Zeppelin version is 0.7.0

Regards,
Meethu Mathew


Re: Zeppelin build from source

2017-04-26 Thread Raffaele S
This might be a bug in the latest master on git. Could you try building the
stable Zeppelin release (0.7.1)?

2017-04-21 17:59 GMT+02:00 Swapnil Shinde :

> Hello Everyone
>  I am new to the Zeppelin world and trying to build the Zeppelin source code
> for the first time. I am building it on the *Windows* platform. Below are the
> steps I followed and the difficulties/workarounds I faced:
>
> *1. Update pom.xml to use scala 2.11. (Successful)*
>
> ./dev/change_scala_version.sh 2.11
>
>
> *2. Build (Error)*
>
> mvn clean package -DskipTests -Pmapr51 -Pspark-2.0 -Dspark.version=2.0.1
> -Phadoop-2.7 -Pscala-2.11 -Pr -Pexamples
>
> The above command ran until the zeppelin-web build. zeppelin-web was
> stuck at the "*npm clean run*" log during "*yarn run build*" for a very long
> time. I tried to build it using Cygwin just in case, but no luck.
>
> *Workaround* - Then I followed the instructions in readme.md to build it
> step by step:
>
>
> # install required dependencies and bower packages (only once)
> $ npm install -g yarn
> $ yarn install
>
> # build zeppelin-web for production
> $ yarn run build
>
>
> The above commands run fine, but "mvn clean package" under zeppelin-web is
> still hanging.
>
> Therefore, I built zeppelin-web with the step-by-step commands and then
> removed zeppelin-web as a module from the parent pom. If I run "mvn clean
> package -DskipTests" on the parent pom (the entire Zeppelin project), it
> throws an error while building zeppelin-server.
>
>
> *Error message - *
>
>
> INFO] Reactor Summary:
> [INFO]
> [INFO] Zeppelin .. SUCCESS [
>  3.193 s]
> [INFO] Zeppelin: Interpreter . SUCCESS [
>  4.445 s]
> [INFO] Zeppelin: Zengine . SUCCESS [
>  3.456 s]
> [INFO] Zeppelin: Display system apis . SUCCESS [
>  0.772 s]
> [INFO] Zeppelin: Spark dependencies .. SUCCESS [
> 47.962 s]
> [INFO] Zeppelin: Groovy interpreter .. SUCCESS [
>  0.349 s]
> [INFO] Zeppelin: Spark ... SUCCESS [
>  4.724 s]
> [INFO] Zeppelin: Markdown interpreter  SUCCESS [
>  0.503 s]
> [INFO] Zeppelin: Angular interpreter . SUCCESS [
>  0.341 s]
> [INFO] Zeppelin: Shell interpreter ... SUCCESS [
>  0.345 s]
> [INFO] Zeppelin: Livy interpreter  SUCCESS [
>  4.095 s]
> [INFO] Zeppelin: HBase interpreter ... SUCCESS [
>  1.838 s]
> [INFO] Zeppelin: Apache Pig Interpreter .. SUCCESS [
>  1.681 s]
> [INFO] Zeppelin: JDBC interpreter  SUCCESS [
>  0.678 s]
> [INFO] Zeppelin: File System Interpreters  SUCCESS [
>  0.602 s]
> [INFO] Zeppelin: Flink ... SUCCESS [
>  1.255 s]
> [INFO] Zeppelin: Apache Ignite interpreter ... SUCCESS [
>  0.427 s]
> [INFO] Zeppelin: Kylin interpreter ... SUCCESS [
>  0.312 s]
> [INFO] Zeppelin: Python interpreter .. SUCCESS [
>  1.918 s]
> [INFO] Zeppelin: Lens interpreter  SUCCESS [
>  1.325 s]
> [INFO] Zeppelin: Apache Cassandra interpreter  SUCCESS [
>  5.904 s]
> [INFO] Zeppelin: Elasticsearch interpreter ... SUCCESS [
>  1.269 s]
> [INFO] Zeppelin: BigQuery interpreter  SUCCESS [
>  0.457 s]
> [INFO] Zeppelin: Alluxio interpreter . SUCCESS [
>  1.118 s]
> [INFO] Zeppelin: Scio  SUCCESS [
> 17.768 s]
> [INFO] Zeppelin: Server .. FAILURE [
>  1.681 s]
> [INFO] Zeppelin: Packaging distribution .. SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 01:48 min
> [INFO] Finished at: 2017-04-21T10:43:26-05:00
> [INFO] Final Memory: 106M/1708M
> [INFO] 
> 
> *[ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (enforce) on
> project zeppelin-server:
> org.apache.maven.plugins.enforcer.DependencyConvergence failed with
> message:*
> *[ERROR] Failed while enforcing releasability the error(s) are [*
> *[ERROR] Dependency convergence error for
> org.scala-lang.modules:scala-xml_2.11:1.0.4 paths to dependency are:*
> *[ERROR] +-org.apache.zeppelin:zeppelin-server:0.8.0-SNAPSHOT*
> *[ERROR] +-org.scala-lang:scala-compiler:2.11.7*
> *[ERROR] +-org.scala-lang.modules:scala-xml_2.11:1.0.4*
> *[ERROR] and*
> *[ERROR] +-org.apache.zeppelin:zeppelin-server:0.8.0-SNAPSHOT*
> *[ERROR] +-org.scalatest:scalatest_2.11:2.2.4*
> *[ERROR] +-org.scala-lang.modules:scala-xml_2.11:1.0.2*
> *[ERROR] ]*
> [ERROR] -> [Help 1]
>
>
> I am not sure how to fix this enforcer plugin 

Re: How do I configure R interpreter in Zeppelin?

2017-04-26 Thread Ruslan Dautkhanov
Hi moon soo Lee,

Cloudera's Spark doesn't have $SPARK_HOME/bin/sparkR
Would Zeppelin still enable its sparkR interpreter then?

Built Zeppelin using

$ mvn clean package -DskipTests -Pspark-2.1 -Ppyspark
> -Dhadoop.version=2.6.0-cdh5.10.1 -Phadoop-2.6 -Pyarn *-Pr* -Pvendor-repo
> -Pscala-2.10 -pl '!...,!...' -e


. . .
> [INFO] Zeppelin: *R Interpreter*  SUCCESS
> [01:01 min]
> [INFO]
> 
> [INFO] BUILD SUCCESS
> [INFO]
> 
> [INFO] Total time: 11:28 min


Nevertheless, none of the R-related interpreters show up.

This includes the latest Zeppelin snapshot and was the same on previous
releases of Zeppelin.
So something is missing on our side.

R and the R packages mentioned in
http://zeppelin.apache.org/docs/0.8.0-SNAPSHOT/interpreter/r.html
are installed on the servers that run Zeppelin (and the Spark driver, as it
is yarn-client).

I guess either the above build options are wrong or there is another
dependency I missed.
conf/zeppelin-site.xml has the R-related interpreters mentioned [1], but
none of them show up once Zeppelin starts up.

Any ideas?


Thank you,
Ruslan


[1]


> <property>
>   <name>zeppelin.interpreters</name>
>   <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.rinterpreter.RRepl,org.apache.zeppelin.rinterpreter.KnitR,org.apache.zeppelin.spark.SparkRInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.file.HDFSFileInterpreter,org.apache.zeppelin.flink.FlinkInterpreter,,org.apache.zeppelin.python.PythonInterpreter,org.apache.zeppelin.lens.LensInterpreter,org.apache.zeppelin.ignite.IgniteInterpreter,org.apache.zeppelin.ignite.IgniteSqlInterpreter,org.apache.zeppelin.cassandra.CassandraInterpreter,org.apache.zeppelin.geode.GeodeOqlInterpreter,org.apache.zeppelin.postgresql.PostgreSqlInterpreter,org.apache.zeppelin.jdbc.JDBCInterpreter,org.apache.zeppelin.kylin.KylinInterpreter,org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter,org.apache.zeppelin.scalding.ScaldingInterpreter,org.apache.zeppelin.alluxio.AlluxioInterpreter,org.apache.zeppelin.hbase.HbaseInterpreter,org.apache.zeppelin.livy.LivySparkInterpreter,org.apache.zeppelin.livy.LivyPySparkInterpreter,org.apache.zeppelin.livy.LivySparkRInterpreter,org.apache.zeppelin.livy.LivySparkSQLInterpreter,org.apache.zeppelin.bigquery.BigQueryInterpreter</value>
>   <description>Comma separated interpreter configurations. First interpreter become a default</description>
> </property>





-- 
Ruslan Dautkhanov

On Sun, Mar 19, 2017 at 1:07 PM, moon soo Lee  wrote:

> The easiest way to figure out what your environment needs is:
>
> 1. Run SPARK_HOME/bin/sparkR in your shell and make sure it works on the
> same host where Zeppelin is going to run.
> 2. Try %spark.r in Zeppelin with SPARK_HOME configured. Normally it
> should work when 1) works without problems; otherwise, take a look at the
> error message and error log to get more information.
>
> Thanks,
> moon
>
>
> On Sat, Mar 18, 2017 at 8:47 PM Shanmukha Sreenivas Potti <
> shanmu...@utexas.edu> wrote:
>
>> I'm not 100% sure, as I haven't set it up, but it looks like I'm using
>> Zeppelin preconfigured with Spark, and I've also taken a snapshot of the
>> Spark Interpreter configuration that I have access to/am using in Zeppelin.
>> This interpreter comes with SQL and Python integration, and I'm figuring
>> out how I get to use R.
>>
>> On Sat, Mar 18, 2017 at 8:06 PM, moon soo Lee  wrote:
>>
>> AFAIK, Amazon EMR service has an option that launches Zeppelin
>> (preconfigured) with Spark. Do you use Zeppelin provided by EMR or are you
>> setting up Zeppelin separately?
>>
>> Thanks,
>> moon
>>
>> On Sat, Mar 18, 2017 at 4:13 PM Shanmukha Sreenivas Potti <
>> shanmu...@utexas.edu> wrote:
>>
>> Hi Moon,
>>
>> Thanks for responding. Exporting SPARK_HOME is exactly where I have a
>> problem. I'm using a Zeppelin notebook with Spark on EMR clusters from an
>> AWS account in the cloud. I'm not the master account holder for that AWS
>> account, but I'm guessing I have a client account with limited access.
>> Can I still do it?
>>
>> If yes, can you explain where and how I should do that shell scripting to
>> export the variable? Can I do this in the notebook itself by starting the
>> paragraph with %sh, or do I need to do something else?
>> If you can share any video, that would be great. I would like to let you
>> know that I'm a novice user just getting to explore Big Data.
>>
>> Sharing more info for better context.
>>
>> Here's my AWS account detail type:
>> assumed-role/ConduitAccessClientRole-DO-NOT-DELETE/shan
>>
>> Spark Interpreter config in Zeppelin:
>> [image: image.png]
>>
>> Thanks for your help.
>>
>> Shan
>>

Re: Could not resolve dependencies for project org.apache.zeppelin:zeppelin-spark_2.10:jar:0.8.0-SNAPSHOT

2017-04-26 Thread Raffaele S
If the script is not sufficient, please downgrade your scala version to
2.11.
Scala 2.12 is not officially supported.

Raffaele

2017-04-21 3:54 GMT+02:00 Kang Minwoo :

> Thanks, I will try that.
>
> Best regards,
> Minwoo Kang
>
> 
> From: Ahyoung Ryu
> Sent: Friday, April 21, 2017, 12:14:26 AM
> To: users@zeppelin.apache.org
> Subject: Re: Could not resolve dependencies for project
> org.apache.zeppelin:zeppelin-spark_2.10:jar:0.8.0-SNAPSHOT
>
> Hi Minwoo,
>
> Have you tried running "ZEPPELIN_HOME/dev/change_scala_version.sh 2.11"
> before running your build command?
> The script will update all pom.xml files to Scala 2.11.
>
> Please refer to
> https://zeppelin.apache.org/docs/latest/install/build.html#2-build-source
>
> Ahyoung
>
> On Thu, Apr 20, 2017 at 9:49 PM, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> - OS version
> macOS Sierra 10.12.4
>
> - JDK, scala version
> java version "1.8.0_121"
> Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
>
> scala -version
> Scala code runner version 2.12.2 -- Copyright 2002-2017, LAMP/EPFL and
> Lightbend, Inc.
>
> - build command
> mvn clean package -DskipTests -Phadoop-2.7 -Pscala-2.11 -X
>
> Best regards,
> Minwoo Kang
>
> 
> From: Park Hoon <1am...@gmail.com>
> Sent: Thursday, April 20, 2017, 9:43:25 PM
> To: users@zeppelin.apache.org
> Subject: Re: Could not resolve dependencies for project
> org.apache.zeppelin:zeppelin-spark_2.10:jar:0.8.0-SNAPSHOT
>
> Hi, minwoo.
>
> Could you describe your env including build command?
>
> - OS version
> - JDK, scala version
> - build command
>
> thanks!
>
>
>
>
>
> On Thu, 20 Apr 2017 at 21:39 Kang Minwoo wrote:
> Hello,
>
> I failed to build Zeppelin on my local computer.
> The revision is 652911abe457d48a540be4a3de2dad824691dfb1.
>
> Here is the Maven log.
>
> [ERROR] Failed to execute goal on project zeppelin-spark_2.10: Could not
> resolve dependencies for project 
> org.apache.zeppelin:zeppelin-spark_2.10:jar:0.8.0-SNAPSHOT:
> Failure to find org.apache.zeppelin:zeppelin-display_2.11:jar:0.8.0-SNAPSHOT
> in http://repository.apache.org/snapshots was cached in the local
> repository, resolution will not be reattempted until the update interval of
> apache.snapshots has elapsed or updates are forced -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
> goal on project zeppelin-spark_2.10: Could not resolve dependencies for
> project org.apache.zeppelin:zeppelin-spark_2.10:jar:0.8.0-SNAPSHOT:
> Failure to find org.apache.zeppelin:zeppelin-display_2.11:jar:0.8.0-SNAPSHOT
> in http://repository.apache.org/snapshots was cached in the local
> repository, resolution will not be reattempted until the update interval of
> apache.snapshots has elapsed or updates are forced
> at org.apache.maven.lifecycle.internal.
> LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.
> java:221)
> at org.apache.maven.lifecycle.internal.
> LifecycleDependencyResolver.resolveProjectDependencies(
> LifecycleDependencyResolver.java:127)
> at org.apache.maven.lifecycle.internal.MojoExecutor.
> ensureDependenciesAreResolved(MojoExecutor.java:245)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:199)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:153)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:145)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> buildProject(LifecycleModuleBuilder.java:116)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> buildProject(LifecycleModuleBuilder.java:80)
> at org.apache.maven.lifecycle.internal.builder.singlethreaded.
> SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.
> execute(LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>