in Chrome
or anything like that.
--
Ruslan Dautkhanov
On Sun, Jan 29, 2017 at 2:43 PM, moon soo Lee <m...@apache.org> wrote:
> Hi,
>
> I'm not sure which action could possibly make output blink and disappear.
> But
>
> ERROR [2017-01-28 11:13:53,338] ({pool-4-thread-1}
> Ap
From the screenshot: "JSON file size cannot exceed MB".
Notice there is no number between "exceed" and "MB".
Not sure if we're missing a setting or an environment variable to define
the limit?
It now prevents us from importing any notebooks.
--
Ruslan Dautkhanov
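For reference, newer Zeppelin releases expose the websocket message-size limit (which also caps note import) in zeppelin-site.xml; the property name below is to the best of my knowledge correct, but verify it against your release:

```xml
<!-- zeppelin-site.xml: raise the websocket text-message cap (value in bytes;
     the ~10 MB figure here is only an example) -->
<property>
  <name>zeppelin.websocket.max.text.message.size</name>
  <value>10240000</value>
</property>
```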
Thank you Herman!
That was it.
--
Ruslan Dautkhanov
On Mon, Nov 14, 2016 at 7:23 AM, herman...@teeupdata.com <
herman...@teeupdata.com> wrote:
> You may check if you have hive-site.xml under zeppelin/spark config
> folder…
>
> Thanks
> Herman.
>
>
>
>
sPath
and SPARK_CLASSPATH. Use only the former."
Sidenote: it's interesting that spark-shell with the same SPARK_CLASSPATH
gives the opposite message -
"SPARK_CLASSPATH was detected (set to '...').
This is deprecated in Spark 1.0+."
Thank you,
Ruslan Dautkhanov
[0]
export JAVA_
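As an aside, the deprecation message refers to replacing SPARK_CLASSPATH with the extraClassPath properties; a sketch of the spark-defaults.conf equivalent (paths are placeholders, not taken from this thread):

```properties
# spark-defaults.conf: supported replacements for the deprecated SPARK_CLASSPATH
spark.driver.extraClassPath    /etc/hive/conf:/path/to/extra.jar
spark.executor.extraClassPath  /etc/hive/conf:/path/to/extra.jar
```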
nch changes multiple times a day, so no guarantees.
>
> Thanks,
> moon
>
> On Tue, Nov 22, 2016 at 5:05 PM Ruslan Dautkhanov <dautkha...@gmail.com>
> wrote:
>
>> We'd like to try Zeppelin 0.7.0 from the current snapshot.
>> Jira shows there are just two in-progres
> Could you add *-Pvendor-repo *option to build Zeppelin with CDH?
> You can refer to http://zeppelin.apache.org/doc
> s/0.7.0-SNAPSHOT/install/build.html#build-command-examples .
> Thanks.
>
> 2016-11-23 16:05 GMT+09:00 Ruslan Dautkhanov <dautkha...@gmail.com>:
>
> Fol
Getting
SparkInterpreter.java[getSQLContext_1]:256 - Can't create HiveContext.
Fallback to SQLContext
See full stack [1] below.
Java 7
Zeppelin 0.6.2
CDH 5.8.3
Spark 1.6
How to fix this?
Thank you.
[1]
WARN [2016-11-26 09:38:39,028] ({pool-2-thread-2}
Yes I can work with hiveContext from spark-shell.
Back to the original question.
Getting
You must *build Spark with Hive*. Export 'SPARK_HIVE=true'
See full stack [2] above.
Any ideas?
--
Ruslan Dautkhanov
On Thu, Nov 24, 2016 at 4:48 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>
lin_home/conf for
- hive-site.xml
- hdfs-site.xml
- core-site.xml
- yarn-site.xml
Thank you,
Ruslan Dautkhanov
We'd like to try Zeppelin 0.7.0 from the current snapshot.
Jira shows there are just two in-progress tickets left so it's close to
release?
How stable is it to use against the Spark 1.6, Spark 2, and PySpark interpreters?
Thanks,
Ruslan
Problem 1, with sqlContext)
Spark 1.6
CDH 5.8.3
Zeppelin 0.6.2
Running
> sqlCtx = SQLContext(sc)
> sqlCtx.sql('select * from marketview.spend_dim')
shows exception "Table not found" .
The same runs fine when using hiveContext.
See full stack in [1]
The same stack in the log file [2].
I
Also, to get rid of this problem (once HiveContext(sc) was assigned at
least twice to a variable),
the only fix is to restart Zeppelin :-(
--
Ruslan Dautkhanov
On Sun, Nov 27, 2016 at 9:00 AM, Ruslan Dautkhanov <dautkha...@gmail.com>
wrote:
> I found a pattern when this happens.
&
to a variable
more than once,
and use that variable between assignments.
--
Ruslan Dautkhanov
On Mon, Nov 21, 2016 at 2:52 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
wrote:
> Getting
> You must *build Spark with Hive*. Export 'SPARK_HIVE=true'
> See full stack [2] below.
>
&g
lin-zrinterpreter_2.10
--
Ruslan Dautkhanov
Zeppelin 0.7.0 built from yesterday's snapshot.
Getting below error stack when trying to start Zeppelin 0.7.0.
The same shiro config works fine in 0.6.2.
We're using LDAP authentication configured in shiro.ini as
ldapRealm = org.apache.zeppelin.server.LdapGroupRealm
r.java:151)
at org.apache.shiro.config.ReflectionBuilder.buildObjects(
ReflectionBuilder.java:119)
at org.apache.shiro.config.IniSecurityManagerFactory.buildInstances(
IniSecurityManagerFactory.java:161)
--
Ruslan Dautkhanov
On Mon, Nov 28, 2016 at 8:23 AM, Ruslan Dautkhanov <dautkha...@gma
)
Thanks,
Ruslan
On Mon, Nov 28, 2016 at 9:13 AM, Ruslan Dautkhanov <dautkha...@gmail.com>
wrote:
> Looking at 0.7 docs, Shiro LDAP authentication shiro.ini configuration
> looks the same.
> http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/security/shir
> oauthentication.html
Yep, CDH doesn't have Spark compiled with Thrift server.
My understanding is that Zeppelin uses the spark-shell REPL and not the Spark thrift server.
Thank you.
--
Ruslan Dautkhanov
On Thu, Nov 24, 2016 at 1:57 AM, Jeff Zhang <zjf...@gmail.com> wrote:
> AFAIK, spark of CDH don’t support spark thri
Getting [1] error stack when trying to build Zeppelin from 0.7-snapshot.
We will not use most of the built-in Zeppelin interpreters, including Scio
which is failing.
How can we switch off (blacklist) certain interpreters from the Zeppelin build
altogether?
ssionState$$anon$1.(HiveSessionState.scala:63)
at
org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
at
org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
--
Ruslan Dautkhanov
On Mon, Nov 14, 2016 at 7:23
fic
paths.
> I didn't see any issue about impersonating the spark interpreter using
--proxy-user. Do you mind creating one?
Complete: https://issues.apache.org/jira/browse/ZEPPELIN-1730
Thank you.
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 3:30 PM, moon soo Lee <m...@apache.org> wrote:
&
Thank you Khalid.
That was it. I was able to start 0.7.0 with ldap shiro config now.
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 12:15 AM, Khalid Huseynov <khalid...@nflabs.com>
wrote:
> I think during refactoring LdapGroupRealm has moved into different package,
> so could you
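For anyone hitting the same stack trace: the realm classes were repackaged in 0.7, so shiro.ini needs the new class name. The package path below is my recollection and worth verifying against your build:

```ini
# shiro.ini (Zeppelin 0.7+): LdapGroupRealm moved out of the server package
ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm
```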
, it's only fixable by restarting
Zeppelin server which is super inconvenient.
Thanks.
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 12:54 PM, Felix Cheung <felixcheun...@hotmail.com>
wrote:
> Can you reuse the HiveContext instead of making new ones with
> Hi
I got a lucky jira number :-)
https://issues.apache.org/jira/browse/ZEPPELIN-1777
Thank you Jeff.
--
Ruslan Dautkhanov
On Thu, Dec 8, 2016 at 10:50 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> hmm, I think so, please file a ticket for it.
>
>
>
> Ruslan Dautkhanov <da
image 1]
--
Ruslan Dautkhanov
On Wed, Nov 30, 2016 at 7:34 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> Hi Ruslan,
>
> I miss another thing, You also need to delete file conf/interpreter.json
> which store the original setting. Otherwise the original setting is always
>
We'd like to have paragraph's code generated by a preceding paragraph.
For example, one of the use cases we have
is when %pyspark generates Hive DDLs.
(can't run those in Spark in some cases)
Any chance an output of a paragraph can be redirected to a following
paragraph?
I was thinking something
xport as a
PDF"
Please vote up if you would find that useful too.
Thank you.
--
Ruslan Dautkhanov
On Wed, Dec 7, 2016 at 10:32 PM, Hyunsung Jo <hyunsung...@gmail.com> wrote:
> Hi Ruslan,
>
> Not aware of Zeppelin's roadmap, but perhaps the tag line of the
> ZeppelinHub
={var1} --param9={var2}
where var1 and var2 would implicitly be fetched as z.get('var1')
and z.get('var2') respectively.
Other thoughts?
Thank you,
Ruslan Dautkhanov
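The substitution described above could work roughly like this; FakeZeppelinContext and substitute_params are hypothetical names for illustration, not Zeppelin APIs:

```python
import re

class FakeZeppelinContext:
    """Dict-backed stand-in for Zeppelin's z object (illustration only)."""
    def __init__(self, store):
        self._store = store

    def get(self, name):
        return self._store[name]

def substitute_params(command, z):
    # Replace every {name} placeholder with the value of z.get('name')
    return re.sub(r"\{(\w+)\}", lambda m: str(z.get(m.group(1))), command)

z = FakeZeppelinContext({"var1": "2017-01-01", "var2": "US"})
print(substitute_params("run.sh --param8={var1} --param9={var2}", z))
# -> run.sh --param8=2017-01-01 --param9=US
```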
Created https://issues.apache.org/jira/browse/ZEPPELIN-1967
(JIRA had some issues.. https://twitter.com/infrabot - had to wait a
couple of days.)
Great ideas. Thank you everyone.
--
Ruslan Dautkhanov
On Thu, Jan 12, 2017 at 8:55 AM, t p <tauis2...@gmail.com> wrote:
> Is somet
"className": "org.apache.zeppelin.spark.SparkInterpreter",
to section
"className": "org.apache.zeppelin.spark.PySparkInterpreter",
pySpark is still not default.
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 10:36 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> No, you don't need
Until we have a good multitenancy support in Zeppelin, we'd have to run
individual Zeppelin instances for each user.
We were trying to use following shiro.ini configurations:
> [urls]
> /api/version = anon
> /** = user["rdautkhanov@CORP.DOMAIN"]
Also tried
> /** = authc,
Thank you Jeff.
Do I have to create interpreter/spark directory in $ZEPPELIN_HOME/conf
or in $ZEPPELIN_HOME directory?
So zeppelin.interpreters in zeppelin-site.xml is deprecated in 0.7?
Thanks!
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 6:54 PM, Jeff Zhang <zjf...@gmail.com>
Any easy way to get Spark Driver's URL (i.e. from sparkContext )?
I always have to go to CM -> YARN applications -> choose my Spark job ->
click Application Master etc. to get Spark's Driver UI.
Any way we could derive driver's URL programmatically from SparkContext
variable?
ps. Long haul - it
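On the driver-URL question: Spark 2.x exposes the driver UI address directly on the SparkContext as uiWebUrl (in PySpark too, if I recall correctly). A defensive sketch, with a fake context so it runs standalone:

```python
def driver_ui_url(sc):
    """Best-effort driver UI address from a SparkContext-like object.

    Spark 2.x exposes sc.uiWebUrl; older releases may not, hence the getattr.
    """
    return getattr(sc, "uiWebUrl", None)

class FakeSparkContext:
    # Stand-in so the example runs without a Spark cluster
    uiWebUrl = "http://driver-host.example.com:4040"

print(driver_ui_url(FakeSparkContext()))
# -> http://driver-host.example.com:4040
```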
Thank you everyone for confirming this issue.
Created https://issues.apache.org/jira/browse/ZEPPELIN-1832
Thanks again.
--
Ruslan Dautkhanov
On Fri, Dec 16, 2016 at 2:48 AM, blaubaer <rene.pfitz...@nzz.ch> wrote:
> We are seeing this problem as well, regularly actually. E
> from pyspark.conf import SparkConf
> ImportError: No module named *pyspark.conf*
William, you probably meant
from pyspark import SparkConf
?
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 2:12 PM, William Markito Oliveira <
william.mark...@gmail.com> wrote:
> Ah! Thanks Ru
You're right - it will not be dynamic.
You may want to check
https://issues.apache.org/jira/browse/ZEPPELIN-2195
https://github.com/apache/zeppelin/pull/2079
it seems it is fixed in the current snapshot of Zeppelin (committed 3 weeks
ago).
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 1:21 PM
confusion.
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 12:59 PM, William Markito Oliveira <
mark...@apache.org> wrote:
> I'm trying to use zeppelin.pyspark.python as the variable to set the
> python that Spark worker nodes should use for my job, but it doesn't seem
> to be wo
of %sh interpreter.
Is this a known issue?
--
Ruslan Dautkhanov
Filed https://issues.apache.org/jira/browse/ZEPPELIN-2368
We had users asking the same.. it forced them to run paragraphs one by one
manually.
--
Ruslan Dautkhanov
On Wed, Apr 5, 2017 at 4:57 PM, moon soo Lee <m...@apache.org> wrote:
> Hi,
>
> That's expected behavio
out errors).
It will be a compromise between completely sequential run and having a way
to define a DAG.
--
Ruslan Dautkhanov
On Thu, Apr 6, 2017 at 1:32 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> That's correct, it needs define dependency between paragraphs, e.g.
> %spark
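If paragraph dependencies were ever expressible, the scheduler side would be a plain topological sort; a minimal sketch (Kahn's algorithm; paragraph names are made up):

```python
from collections import deque

def run_order(deps):
    """deps maps paragraph -> list of paragraphs that must run before it."""
    indegree = {p: len(ups) for p, ups in deps.items()}
    downstream = {p: [] for p in deps}
    for p, ups in deps.items():
        for u in ups:
            downstream[u].append(p)

    ready = deque(sorted(p for p, d in indegree.items() if d == 0))
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for d in downstream[p]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(deps):
        raise ValueError("cycle in paragraph dependencies")
    return order

print(run_order({"load": [], "clean": ["load"], "report": ["clean", "load"]}))
# -> ['load', 'clean', 'report']
```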
https://issues.apache.org/jira/browse/ZEPPELIN-2197
This was created just yesterday :-)
On Wed, Mar 1, 2017 at 12:54 PM Alexander Filipchik
wrote:
> Hi,
>
> Is there any way to close an isolated interpreter after some timeout?
> Let's say set an inactivity timeout of 30
sues.apache.org/jira/browse/ZEPPELIN-1660 "Home directory
references (i.e. ~/zeppelin/) in zeppelin-env.sh don't work as expected"
Less critical than the above two, but it could complement the
multi-tenancy feature very well.
Best regards,
Ruslan Dautkhanov
On Wed, Mar 22,
rkflow.
Thank you,
Ruslan Dautkhanov
On Wed, Apr 5, 2017 at 12:01 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> Hi Ruslan,
>
> Regarding 'make zeppelinContext available in shell interpreter', you may
> want to check https://issues.apache.org/jira/browse/ZEPPELIN-1595
>
preter will be instantiated
Globally in shared
process."
--
Ruslan Dautkhanov
On Thu, Apr 6, 2017 at 6:34 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> What mode do you use ?
>
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com>于2017年4月7日周五 上午12:49写道:
>
>>
It was built. I think binaries are only available for official releases?
--
Ruslan Dautkhanov
On Wed, Aug 2, 2017 at 4:41 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Did you build Zeppelin or download the binary?
>
> On Wed, Aug 2, 2017 at 3:40 PM Ruslan Dautkhanov <da
Might need to recompile Zeppelin with Scala 2.11?
Also Spark 2.2 now requires JDK8 I believe.
--
Ruslan Dautkhanov
On Tue, Aug 1, 2017 at 6:26 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Here is more.
>
> org.apache.zeppelin.interpreter.InterpreterException: WARNING:
S=/etc/hive/conf:/var/lib/sqoop/ojdbc7.jar
--
Ruslan Dautkhanov
On Mon, Jul 10, 2017 at 12:10 PM, <dar...@ontrenet.com> wrote:
> Hi
>
> We want to use a jdbc driver with pyspark through Zeppelin. Not the custom
> interpreter but from sqlContext where we can read into datafram
Your example works fine for me too.
We're on Zeppelin snapshot ~2 months old.
--
Ruslan Dautkhanov
On Tue, Jul 11, 2017 at 3:11 PM, Ben Vogan <b...@shopkick.com> wrote:
> Here is the specific example that is failing:
>
> import pandas
> z.show(pandas.DataFrame([u'Jalape\x
I think if you have a shared storage for notebooks (for example, NFS
mounted from a third server),
and a load-balancer that supports sticky sessions (like F5) on top, it
should be possible to have HA without
any code change in Zeppelin. Am I missing something?
--
Ruslan Dautkhanov
On Fri, Jun
QueryInterpreter
> Comma separated interpreter configurations. First
> interpreter become a default
>
--
Ruslan Dautkhanov
On Sun, Mar 19, 2017 at 1:07 PM, moon soo Lee <m...@apache.org> wrote:
> Easiest way to figure out what your environment needs is,
>
> 1. run SPARK
ssage.java:69)
at
org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
at
org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
--
Ruslan Dautkhanov
On Wed, Apr 26, 2017 at 2:13 PM
gt; mvn clean package -DskipTests -Pspark-2.1 -Dhadoop.version=2.6.0-cdh5.10.1
> -Phadoop-2.6 -Pvendor-repo -Pscala-2.10 -Psparkr -pl
> '!alluxio,!flink,!ignite,!lens,!cassandra,!bigquery,!scio' -e
You may need additional steps depending on which interpreters you use (like R
etc).
--
Rusla
Hope to see this as implemented one day
https://issues.apache.org/jira/browse/ZEPPELIN-1774
On Wed, May 3, 2017 at 5:05 AM Petr Knez wrote:
> I know about the feature (link to paragraph) but it does not work if
> Zeppelin has Shiro authorization enabled.
> It works only for me (if
Maven generates some of the web resource names, for example, css files.
- What are those hex ids in file names?
- Why do those ids repeat in file names up to 5 times? (see example below
in *bold*)
$ find . -name "main*css"
> ./spark-dependencies/target/spark-2.1.0/docs/css/main.css
>
>
as it gets
stuck):
[image: Inline image 2]
I think if Zeppelin could understand that there is an interactive prompt,
this would be helpful not only with password prompts but in other cases too
(including the shell interpreter).
--
Ruslan Dautkhanov
On Tue, May 9, 2017 at 4:59 PM, Ben Vogan <b...@shopki
Has anyone experienced the exception below?
It started happening intermittently after upgrading to last week's master
snapshot of Zeppelin.
Multiple users have reported the same issue.
java.lang.NullPointerException at
org.apache.zeppelin.spark.Utils.buildJobGroupId(Utils.java:112) at
ll. The
> python part of Airflow is really just describing what gets run and it isn't
> hard to run something that isn't written in python.
>
> On Fri, May 19, 2017 at 2:52 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
> wrote:
>
>> We also use both Zeppelin a
Thanks for sharing this Jeff!
Once Zeppelin supports yarn-cluster, what would be the main benefits of using
the Livy Spark interpreters instead of the plain Spark interpreters?
--
Ruslan Dautkhanov
On Thu, May 4, 2017 at 10:51 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> For anyone that
).
They run Zeppelin on edge nodes that have NFS mounts to a drop zone.
ps. Hue has a limit too, by default 100k rows
https://github.com/cloudera/hue/blob/release-3.12.0/desktop/conf.dist/hue.ini#L905
Not sure how much it scales up.
--
Ruslan Dautkhanov
On Tue, May 2, 2017 at 10:41 AM, Paul
That's awesome. Congrats everyone!
Hope to see 0.8.0 release soon too - it has nice new features we would love
to see.
--
Ruslan Dautkhanov
On Fri, Sep 22, 2017 at 1:36 AM, Mina Lee <mina...@apache.org> wrote:
> The Apache Zeppelin community is pleased to announce the ava
://issues.apache.org/jira/browse/ZEPPELIN-2040
completed ..
Thanks
--
Ruslan Dautkhanov
On Sun, Sep 10, 2017 at 9:13 PM, Yeshwanth Jagini <y...@yotabitesllc.com>
wrote:
> Cloudera Data Science workbench is totally a different product. Cloudera
> acquired it from https://sense.io/
>
&g
Building Zeppelin from a current snapshot fails with an
org.apache.maven.plugins.enforcer.DependencyConvergence error;
see details below.
Build command
/opt/maven/maven-latest/bin/mvn clean package -DskipTests -Pspark-2.2
-Dhadoop.version=2.6.0-cdh5.12.0 -Phadoop-2.6 -Pvendor-repo
.run(ScheduledThreadPoolExecutor.java:293)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
--
Ruslan Dautkhanov
On Sun, Aug 27,
notebooks not show up?
Thanks,
Ruslan Dautkhanov
by: java.text.ParseException: Unparseable date: "2017-08-27
19:56:22.229"
at java.text.DateFormat.parse(DateFormat.java:357)
at
com.google.gson.internal.bind.DateTypeAdapter.deserializeToDate(DateTypeAdapter.java:79)
... 50 more
--
Ruslan Dautkhanov
On Mon, Aug 28, 2017 at 11:32
patibility.
Can somebody please point me to the PR / JIRA for this change?
Any workarounds that would make an upgrade easier?
Also, this change makes reverting zeppelin upgrades impossible.
--
Ruslan Dautkhanov
On Mon, Aug 28, 2017 at 11:35 AM, Ruslan Dautkhanov <dautkha...@gmail.com>
wrote
the newer version of Zeppelin
- they'll not show up in list of available notebooks.
--
Ruslan Dautkhanov
On Mon, Aug 28, 2017 at 6:07 PM, Jianfeng (Jeff) Zhang <
jzh...@hortonworks.com> wrote:
>
> Do you use the latest zeppelin master branch ? I see this issue before,
> but
Sorry for bringing up an older topic .. I agree "latest" / "stable" makes a
lot of sense.
Also what was *not* discussed in this thread is release cadence target.
IMHO, 2-3 releases a year would give quicker turnaround for releasing the
latest fixes and improvements, and quicker feedback from users.
Would be nice if each user's interpreter is started in its own docker
container a-la cloudera data science workbench.
Then each user's shell interpreter is pretty isolated.
Actually, from a CDSW session you could pop up a terminal session to your
container which I found pretty neat.
--
Ruslan
will contribute back to the project when we find solution.
Thanks for the suggestion Felix. Is it known whether Zeppelin can work fine
with jackson 2.2.3?
(certain dependencies currently list jackson 2.5.3)
--
Ruslan Dautkhanov
On Sat, Dec 16, 2017 at 3:03 AM, Felix Cheung <felixch
Chrome can print to pdf. In Destination "printer" change to "Save as pdf".
--
Ruslan Dautkhanov
On Thu, Nov 9, 2017 at 10:31 AM, shyla deshpande <deshpandesh...@gmail.com>
wrote:
> Hello all,
>
> I want the users to be able to download the data in report
Getting "IPython is available, use IPython for PySparkInterpreter" warning
after starting pyspark interpreter.
How do I default %pyspark to ipython?
Tried to change to
"class": "org.apache.zeppelin.spark.PySparkInterpreter",
to
"class": "org.apache.zeppelin.spark.IPySparkInterpreter",
in
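If memory serves, 0.8 also gained a boolean interpreter property to toggle this, which may be easier than editing class names; verify the property name in your release:

```properties
# %spark interpreter settings (Zeppelin 0.8; name worth double-checking):
# when true and IPython is installed, %pyspark runs through IPython
zeppelin.pyspark.useIPython = true
```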
t; fallback when ipython interpreter become much more mature.
>
>
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com>于2017年12月11日周一 下午1:20写道:
>
>> Getting "IPython is available, use IPython for PySparkInterpreter"
>> warning after starting pyspark interpret
org.codehaus.jackson
> + jackson-mapper-asl
> +
> +
> + org.codehaus.jackson
> + jackson-core-asl
> +
> +
> + org.apache.zookeeper
> + zookeeper
> +
>
>
>
On Sun, Aug 27, 2017 at 2:25
hat aren't available on latest official release.
Also it gives new features exposure to more testing, so it should be a
win-win for users and developers.
Some other open source projects employ nightly builds.
Thanks!
Ruslan Dautkhanov
I didn't know 0.8 rc1/rc2 were out. Was it advertised on the dev list?
Thanks for sharing this.
--
Ruslan Dautkhanov
On Sun, May 13, 2018 at 1:23 AM, Rotem Herzberg <
rotem.herzb...@gigaspaces.com> wrote:
> Hello all,
>
> I've downloaded and built the zeppelin v0.8
Thank you Jeff.
--
Ruslan Dautkhanov
On Wed, May 16, 2018 at 6:19 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> Yes, the voting thread is on dev mail list.
>
> https://lists.apache.org/thread.html/c6435f3fcfab4c516e2ef90f436575
> 3268546293afa1ae2c50cc54f9@%3Cdev.zepp
You may want to check if %spark.dep
https://zeppelin.apache.org/docs/latest/interpreter/spark.html#3-dynamic-dependency-loading-via-sparkdep-interpreter
helps here.
--
Ruslan Dautkhanov
On Fri, May 25, 2018 at 12:46 PM, Michael Segel <msegel_had...@hotmail.com>
wrote:
> What’s the
Was anybody able to import notes on 0.8 RC or a recent master snapshot?
Notes import seems to be broken
Filed https://issues.apache.org/jira/browse/ZEPPELIN-3485
This looks serious to me.
--
Ruslan Dautkhanov
G: A HTTP GET method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.CredentialRestApi.getCredentials(java.lang.String)
> throws java.io.IOException,java.lang.IllegalArgumentException, should not
> consume any entity.
--
Ruslan Dautkhanov
If you set a pretty verbose level in log4j, you can see the output in the log
files. I've seen it there.
Then you can use regexps to strip the paragraph outputs out of the rest of the
debugging messages.
May work as a one-off effort. Might be a good idea to file an enhancement
request - this can be also useful
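A sketch of the regexp approach, assuming a log4j layout like the snippets quoted elsewhere in this thread (the exact pattern will depend on your log4j.properties; the class name and log lines below are made up):

```python
import re

# Toy log excerpt; the layout mimics the interpreter logs quoted in this
# thread but is not taken verbatim from any Zeppelin release.
log = """\
DEBUG [2018-06-20 10:00:01] ({pool-2-thread-2}) Scheduler.java[run]:42 - internal message
INFO [2018-06-20 10:00:02] ({pool-2-thread-2}) InterpreterOutput.java[write]:88 - hello from paragraph
DEBUG [2018-06-20 10:00:03] ({pool-2-thread-2}) Scheduler.java[run]:43 - another internal message
"""

# Keep only the lines that carry paragraph output and strip the log prefix
pattern = re.compile(r"InterpreterOutput\.java\[\w+\]:\d+ - (.*)$")
outputs = []
for line in log.splitlines():
    m = pattern.search(line)
    if m:
        outputs.append(m.group(1))

print(outputs)
# -> ['hello from paragraph']
```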
Nope, add that as a Spark interpreter setting.
0.7.2 should work fine with Spark 2.2 afaik.
You may want to go with Zeppelin 0.8 when you upgrade to Spark 2.3.
--
Ruslan Dautkhanov
On Mon, Jun 4, 2018 at 10:29 AM, Michael Segel
wrote:
> I’m assuming that I want to set this in ./conf/zeppe
Can you send a screenshot with the error and complete exception stack?
--
Ruslan Dautkhanov
On Mon, Jun 4, 2018 at 10:40 AM, Michael Segel
wrote:
> Hmmm. Still not working.
> Added it to the interpreter setting and restarted the interpreter.
>
> The issue is that I need to
pache.zeppelin.interpreter.remote.RemoteInterpreterServer=DEBUG
> log4j.logger.org.glassfish.jersey.internal.inject.Providers=SEVERE
--
Ruslan Dautkhanov
On Wed, Jun 20, 2018 at 3:01 AM Alessandro Liparoti <
alessandro.l...@gmail.com> wrote:
> Hi,
> yes spark UI is a tool I already use for it but as Ruslan mentioned would
> be
Not sure if Spark-Cassandra connector would be helpful?
https://github.com/datastax/spark-cassandra-connector
--
Ruslan Dautkhanov
On Mon, Apr 30, 2018 at 7:38 AM, Soheil Pourbafrani <soheil.i...@gmail.com>
wrote:
> Is it possible to save a Cassandra query result in a variab
Thank you Jeff
--
Ruslan Dautkhanov
On Thu, Jan 11, 2018 at 1:57 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> ZEPPELIN-3119 will fix this. Will update this thread once it is done
>
>
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com>于2017年12月29日周五 上午6:04写道:
&g
f attempts to timeout the interpreter in the logs even at
DEBUG level.
Thanks,
Ruslan Dautkhanov
ved data on
> closed stream
> INFO [2018-02-14 10:39:10,924] ({grpc-default-worker-ELG-1-2}
> AbstractClientStream2.java[inboundDataReceived]:249)
> - Received data on closed stream
> INFO [2018-02-14 10:39:10,925] ({grpc-default-worker-ELG-1-2}
> AbstractClientStream2.java[inboundDataReceived]:249) - Received data on
> closed stream
--
Ruslan Dautkhanov
quot;"" ---> 82 limit =
> len(df) > self.max_result 83 header_buf = StringIO("")
> 84 if show_index: TypeError: object of type 'DataFrame' has no len()
>
>
--
Ruslan Dautkhanov
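The TypeError above is reproducible without Spark: Spark DataFrames define no __len__, so len(df) fails and rows must be counted via count(). A plain-Python mimic (SparkLikeDataFrame is made up for illustration):

```python
class SparkLikeDataFrame:
    """Mimics a Spark DataFrame: no __len__, rows are counted via count()."""
    def __init__(self, rows):
        self._rows = rows

    def count(self):
        return len(self._rows)

df = SparkLikeDataFrame([1, 2, 3])
try:
    len(df)  # what the z.show() code path effectively attempted
except TypeError:
    print("len() fails on a Spark-style DataFrame; df.count() =", df.count())
```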
ed message --
From: kpayson64 <notificati...@github.com>
Date: Mon, Feb 19, 2018 at 2:47 PM
Subject: Re: [grpc/grpc] Unicode support in Python 2? (#14446)
To: grpc/grpc <g...@noreply.github.com>
Cc: Ruslan Dautkhanov <dautkha...@gmail.com>, Author <
aut...@noreply.github.
Thanks Jeff! Should there be a Zeppelin 0.8.1 release sometime soon with
all the fixes for issues that the users have faced in 0.8.0?
--
Ruslan Dautkhanov
On Mon, Jul 23, 2018 at 12:24 AM Jeff Zhang wrote:
>
> Thanks Ruslan, I will fix it.
>
> Ruslan Dautkhanov 于2018年7月23
/2812/files
--
Ruslan Dautkhanov
On Wed, Aug 8, 2018 at 10:01 AM Paul Brenner wrote:
> ok, I went ahead and opened
> https://issues.apache.org/jira/browse/ZEPPELIN-3692
> <https://share.polymail.io/v1/z/b/NWI2
Thanks for bringing this up for discussion. My 2 cents below.
I am with Maksim and Felix on concerns with special characters now allowed
in notebook names, and also concerns with different charsets. The Russian
language, for example, most commonly uses the iso-8859-5, koi-8r/u, and
windows-1251 charsets.
Thank you luxun,
I left a couple of comments in that google document.
--
Ruslan Dautkhanov
On Tue, Jul 17, 2018 at 11:30 PM liuxun wrote:
> hi,Ruslan Dautkhanov
>
> Thank you very much for your question. according to your advice, I added 3
> schematics to illustrate.
>
users to a survived
instance?
Thanks,
Ruslan Dautkhanov
On Tue, Jul 17, 2018 at 2:46 AM liuxun wrote:
> hi:
>
> Our company installed and deployed a lot of zeppelin for data analysis.
> The single server version of zeppelin could not meet our application
> scenarios, so we trans
https://zeppelin.apache.org/ home page still reads
"WHAT'S NEW IN
Apache Zeppelin 0.7"
--
Ruslan Dautkhanov
On Fri, Jun 29, 2018 at 4:56 AM Spico Florin wrote:
> Hi!
> I tried to get the docker image for this version 0.8.0, but it seems
> that is not in the official do
I've seen this a couple of times..
--
Ruslan Dautkhanov
On Tue, Jul 10, 2018 at 2:34 PM Paul Brenner wrote:
> We are using 0.8 release and noticed that the editor section of paragraphs
> will randomly collapse when you leave a notebook open for a while. Clicking
> "hide ed
.
--
Ruslan Dautkhanov
On Wed, Jul 11, 2018 at 8:34 AM Paul Brenner wrote:
> I created https://issues.apache.org/jira/browse/ZEPPELIN-3616
> <https://share.polymail.io/v1/z/b/NWI0NjE0ZjY5ZWQ5/vfDe3e-5eTzwzo9OQ-55bPTEMeVmaOuuZP-yX5lXQ11joN96teso2sc3Evt0OMTYRlgcbbiNTR
I assume some users are connecting to Spark in Zeppelin through Livy.
It seems Livy doesn't support `hadoop.security.auth_to_local` - filed
https://issues.apache.org/jira/browse/LIVY-481
Has anyone run into this issue?
It seems Livy's `livy.server.auth.kerberos.name-rules` config was trying to
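For context, auth_to_local-style rules boil down to regex rewrites of Kerberos principals into short names; a toy illustration (the rule format below is made up for the example, not Hadoop's actual RULE:[n:string](regexp)s/.../.../ syntax):

```python
import re

# Hypothetical name rules: (principal pattern, replacement).
RULES = [
    (re.compile(r"^(\w+)@CORP\.DOMAIN$"), r"\1"),  # rdautkhanov@CORP.DOMAIN -> rdautkhanov
]

def map_principal(principal):
    for pattern, repl in RULES:
        if pattern.match(principal):
            return pattern.sub(repl, principal)
    return None  # no rule matched; the principal would be rejected

print(map_principal("rdautkhanov@CORP.DOMAIN"))
# -> rdautkhanov
```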
+1 to remove it
Setting a default interpreter is not very useful anyway (for example, we
can't make %pyspark the default without manually editing xml files in the Zeppelin
distro). https://issues.apache.org/jira/browse/ZEPPELIN-3282
--
Ruslan Dautkhanov
On Fri, Jul 6, 2018 at 7:27 AM Paul Brenner