le HADOOP_CONF_DIR under one jvm classpath. Only one
> default configuration will be used.
>
>
> Best Regard,
> Jeff Zhang
>
>
> From: Serega Sheypak <serega.shey...@gmail.com>
> Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
>
I know that, thanks, but it's not a reliable solution.
2017-03-26 5:23 GMT+02:00 Jianfeng (Jeff) Zhang <jzh...@hortonworks.com>:
>
> You can try to specify the namenode address for hdfs file. e.g
>
> spark.read.csv("hdfs://localhost:9009/file")
>
> Best Regard,
> Je
Hi, I'm trying to run Zeppelin 0.8.0-SNAPSHOT in Docker. Startup takes
forever. It starts in seconds when launched on the host, but not in a Docker
container.
I suspect the Docker container has a poorly configured network and some part
of zeppelin tries to reach a remote resource.
SLF4J: See
Hi, I have three hadoop clusters. Each cluster has its own NN HA
configured and YARN.
I want to allow users to read from any cluster and write to any cluster.
A user should also be able to choose where to run his spark job.
What is the right way to configure it in Zeppelin?
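One common approach is to use fully qualified hdfs:// URIs, so a single spark job can read from one cluster and write to another, while the YARN cluster the job runs on is chosen by the interpreter configuration. A sketch; `cluster-a` and `cluster-b` are hypothetical HA nameservice IDs standing in for your own, and this assumes the client configuration knows both nameservices:

```scala
%spark
// Hypothetical HA nameservice IDs -- substitute your own clusters'.
// Fully qualified URIs let one job read from one cluster and write to
// another; which YARN cluster runs the job is decided by the spark
// interpreter configuration, not by these paths.
val df = spark.read.csv("hdfs://cluster-a/user/me/input.csv")
df.write.parquet("hdfs://cluster-b/user/me/output")
```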
Hi, I need to pre-configure spark interpreter with my own artifacts and
internal repositories. How can I do it?
eter.json of the Zeppelin
> installation will be changed.
>
> On Sat, Apr 22, 2017, 11:35 Serega Sheypak <serega.shey...@gmail.com>
> wrote:
>
>> Hi, I need to pre-configure spark interpreter with my own artifacts and
>> internal repositories. How can I do it?
>>
>
Hi, I have a few concerns I can't resolve right now. I could certainly go
through the source code and find the solution, but I would like to
understand the idea behind it.
I'm building Zeppelin from sources using 0.8.0-SNAPSHOT. I build it with a
custom cloudera CDH spark 2.0-something.
I can't
Nevermind, I forgot that it's in the interpreter settings
https://cloud.githubusercontent.com/assets/5082742/20110797/c6852202-a60b-11e6-8264-93437a58f752.gif
2017-07-10 10:46 GMT+02:00 Serega Sheypak <serega.shey...@gmail.com>:
> Super stupid question, sorry.
> I can't find button / l
all my problems.
2017-07-10 20:37 GMT+02:00 Jongyoul Lee <jongy...@gmail.com>:
> Thanks for telling me that. I'll also test it with chrome. Do you use
> it on Windows? I've never heard of this issue, so I'm just asking to find
> a clue.
>
> On Mon, 10 Jul 2017 at
by end-users like data analysts.
2017-06-29 21:11 GMT+02:00 Иван Шаповалов <shapovalov.iva...@gmail.com>:
> if you use helium, it will install npm at start time. See
> HeliumVisualizationFactory.java
>
> 2017-06-29 17:09 GMT+03:00 Serega Sheypak <serega.shey...@gma
solved. I misunderstood how update works.
2017-06-29 21:14 GMT+02:00 Иван Шаповалов <shapovalov.iva...@gmail.com>:
> looks like you create an interpreter setting via rest api and it is
> configured well enough
>
> 2017-06-29 18:32 GMT+03:00 Serega Sheypak <serega.shey...@gmail.co
Hi, resolved. root cause:
I've recompiled zeppelin for scala 2.11 and used spark 2.0 compiled for scala
2.11, but the external artifacts were compiled for scala 2.10
I did provide correct external artifacts and Zeppelin started to work.
2017-06-26 22:49 GMT+02:00 Serega Sheypak <serega.shey...@gmail.
instance.
What do I miss?
Thanks!
2017-06-30 16:43 GMT+02:00 Jeff Zhang <zjf...@gmail.com>:
>
> Right, create three spark interpreters for your 3 yarn cluster.
>
>
>
> Serega Sheypak <serega.shey...@gmail.com> wrote on Fri, Jun 30, 2017 at 10:33 PM:
>
>> Hi, thanks for your reply
16:21 GMT+02:00 Jeff Zhang <zjf...@gmail.com>:
>
> Try setting HADOOP_CONF_DIR for each yarn conf in the interpreter setting.
>
> Serega Sheypak <serega.shey...@gmail.com> wrote on Fri, Jun 30, 2017 at 10:11 PM:
>
>> Hi I have several different hadoop clusters, each of them has it
Ah, it's there, thanks!
2017-06-28 12:44 GMT+02:00 Иван Шаповалов <shapovalov.iva...@gmail.com>:
> for 3.2 https://zeppelin.apache.org/docs/0.7.2/rest-api/rest-interpreter.html
> should work
>
> 2017-06-28 12:14 GMT+03:00 Serega Sheypak <serega.shey...@gmail.com>:
>
Hi, I'm reading
https://zeppelin.apache.org/docs/0.7.2/rest-api/rest-notebook.html
It has a great REST API for notebooks and paragraphs.
I'm looking for interpreter configuration. I want to automate Zeppelin
deployment, and I need to:
1. put zeppelin war on node (done)
2. start war and connect to
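For what it's worth, interpreter settings can be created over the REST API as well (the rest-interpreter endpoint in the 0.7.2 docs). A minimal sketch, assuming Zeppelin runs on localhost:8080; the property values and artifact coordinates below are hypothetical placeholders:

```shell
# Hypothetical host, properties, and artifact -- adjust for your deployment.
cat > /tmp/spark_my.json <<'EOF'
{
  "name": "spark_my",
  "group": "spark",
  "properties": {
    "master": "yarn-client",
    "spark.app.name": "spark_my"
  },
  "dependencies": [
    {"groupArtifactVersion": "com.mycompany:mylib:1.0"}
  ]
}
EOF

# POST the setting; this only succeeds against a running Zeppelin instance.
curl -s -X POST -H 'Content-Type: application/json' \
     -d @/tmp/spark_my.json \
     http://localhost:8080/api/interpreter/setting || true
```

The same endpoint with PUT/DELETE updates or removes a setting, which makes the whole interpreter configuration scriptable during deployment.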
Hi Jeff!
Am I right that I don't have to recompile Zeppelin for Scala 2.11 to make
it work with Spark 2.0 compiled for Scala 2.11?
Does Zeppelin really not care about the Spark Scala version and the Spark
version overall (1.6 ... 2.0)?
Thanks!
2017-06-27 18:08 GMT+02:00 Serega Sheypak <serega.s
Hi, I'm starting zeppelin w/o internet access. It looks like it tries to
access some external resources. Is that true?
Can I stop it somehow? It takes 2 minutes to start. I failed to find the
cause in the source code.
Thanks!
Hi, I have a few questions about spark application customization
1. Is it possible to set spark app name from notebook, not from zeppelin
conf?
2. Is it possible to register custom kryo serializers?
3. Is it possible to configure user name? Right now I'm running zeppelin as
root and all jobs are
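On (2), a sketch of what a custom Kryo registrator could look like. The `KryoRegistrator` trait and the property keys are standard Spark; `MyKryoRegistrator` and `MyDomainClass` are hypothetical placeholders. The jar containing the registrator has to be on the interpreter's classpath, and the properties are set in the spark interpreter settings rather than from the notebook:

```scala
// Standard Spark configuration keys, set in the Zeppelin spark
// interpreter settings (not from the notebook):
//   spark.app.name         -> my-zeppelin-app
//   spark.serializer       -> org.apache.spark.serializer.KryoSerializer
//   spark.kryo.registrator -> com.example.MyKryoRegistrator
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical class names -- substitute your own types.
class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[com.example.MyDomainClass])
  }
}
```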
Hi, it seems I was able to start Zeppelin. I have an in-house Artifactory
and I want zeppelin to download my artifacts from it and use the classes
in spark jobs afterwards.
Notebook submission hangs on %spark.dep and never finishes. Zeppelin outputs
to log that DepInterpreter job has been
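For reference, the dep interpreter can point at an internal repository before loading an artifact. A sketch; the repository name, URL, and artifact coordinates are hypothetical:

```scala
%spark.dep
// Hypothetical repository URL and artifact coordinates -- substitute
// your Artifactory. This paragraph must run before the spark
// interpreter starts, so restart the interpreter first.
z.reset()
z.addRepo("internal").url("http://artifactory.mycompany.com/libs-release")
z.load("com.mycompany:mylib:1.0")
```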
>
> Hope this helps.
>
> Best,
> moon
>
>
> On Sat, Apr 22, 2017 at 1:04 PM Serega Sheypak <serega.shey...@gmail.com>
> wrote:
>
>> Hi, I have few concerns I can't resolve right now. I definitely can go
>> though the source code and find the solution,
Hi, I get super weird exception:
ERROR [2017-06-26 07:44:17,523] ({qtp2016336095-99}
NotebookServer.java[persistAndExecuteSingleParagraph]:1749) - Exception
from run
org.apache.zeppelin.interpreter.InterpreterException:
paragraph_1498480084440_1578830546's Interpreter %sq not found
I have three
Hi, I'm getting strange NPE w/o any obvious reason.
My notebook contains two paragraphs:
res0: org.apache.zeppelin.dep.Dependency =
org.apache.zeppelin.dep.Dependency@6ce5acd
%spark.dep z.load("some-local-jar.jar")
and
import com.SuperClass
// bla-bla
val features =
Hi, I have more or less the same symptom
if (Utils.isScala2_10()) {
  binder = (Map) getValue("_binder");
} else {
  binder = (Map) getLastObject();
}
binder.put("sc", sc); // EXCEPTION HERE
java.lang.NullPointerException at
Serega Sheypak <serega.shey...@gmail.com>:
> Ok, seems like something is wrong when you try to use deps. I was able to
> run a simple spark job w/o third-party dependencies.
> Zeppelin always throws an NPE when you try to use local files via %spark.dep
> or spark interpreter conf (there i
:31 GMT+02:00 Serega Sheypak <serega.shey...@gmail.com>:
> Hi, I'm getting strange NPE w/o any obvious reason.
>
> My notebook contains two paragraphs:
>
>
> res0: org.apache.zeppelin.dep.Dependency = org.apache.zeppelin.dep.
> Dependency@6ce5acd
>
> %spa
spark is installed.
>
>
>
> Serega Sheypak <serega.shey...@gmail.com> wrote on Tue, Jun 27, 2017 at 6:14 PM:
>
>> Hi, can zeppelin spark interpreter support spark 1.6 / 2.0 / 2.1
>> I didn't find which spark versions are supported...
>>
>
It was my fault, I'm so sorry. I had recompiled zeppelin for scala 2.11 to
make it run with cloudera spark 2.0, but used scala 2.10 third-party libs. I
replaced them with 2.11 versions and it started to work
Tue, 27 Jun 2017 at 9:52, Serega Sheypak <serega.shey...@gmail.com>:
> Hi,
how to prevent it?
>
> I am trying to make a strong case for my company to switch from another
> notebook application to Zeppelin. Zeppelin looks good, and only this issue
> concerns me.
>
> I'm looking forward to any insights, thanks.
> On Monday, June 26, 2017, 11:56:45 AM
Here is my sample notebook:
%spark
val linesText = sc.textFile("hdfs://cluster/user/me/lines.txt")
case class Line(id:Long, firstField:String, secondField:String)
val lines = linesText.map { line =>
  val splitted = line.split(" ")
  println("splitted => " + splitted)
  Line(splitted(0).toLong, splitted(1), splitted(2))
}
lines.collect().foreach(println)
prints the file contents to the UI. I have some trouble with sql...
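The usual bridge to %sql is registering the data as a temp table. A sketch, assuming the `lines` RDD of `Line` from the paragraph above; `registerTempTable` is the Spark 1.6-era call (Spark 2.0 also accepts `createOrReplaceTempView`):

```scala
%spark
// Make the RDD visible to the %sql interpreter as a temp table.
import sqlContext.implicits._
val linesDF = lines.toDF()
linesDF.registerTempTable("lines")
```

After that, a %sql paragraph such as `select * from lines` can query it.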
2017-05-02 13:57 GMT+02:00 Serega Sheypak <serega.shey...@gmail.com>:
> Here is my sample notebook:
> %spark
> val linesT
Hi, can zeppelin spark interpreter support spark 1.6 / 2.0 / 2.1
I didn't find which spark versions are supported...
Hi, I'm trying to use spark and sql paragraphs with 3rd party jars added to
spark interpreter configuration.
My spark code works fine.
My sql paragraph fails with class not found exception
%sql
create external table MY_TABLE row format serde 'com.my.MyAvroSerde'
with serdeproperties
Hi, I've created a spark interpreter with the name spark_my
I was able to restart it and zeppelin shows a "green" marker near it.
spark_my is a copy-paste of the default spark interpreter with a few changes.
I try to use spark_my in notebook:
%spark_my
import java.nio.ByteBuffer
more code
Zeppelin shows
Hi zeppelin users!
I have a question about the dependencies users load while running
notebooks on the spark interpreter.
Imagine I have a configured spark interpreter.
Two users write their spark notebooks.
The first user does
z.load("com:best-it-company:0.1")
the second one user adds to his
il.com> wrote on Mon, Oct 23, 2017 at 7:08 PM:
>
>>
>> Please bind this interpreter to your note first.
>>
>> Serega Sheypak <serega.shey...@gmail.com> wrote on Mon, Oct 23, 2017 at 6:14 PM:
>>
>>> Hi, I've create spark interpreter with name spark_my
>>> I was able to restart