Re: Share variable across notebooks

2016-11-14 Thread Sora Lee
Hi Bala, thanks for sharing the problem. For more information: if you want to change the sharing option of the variable, you can use the option of each interpreter in the Interpreter menu. [image: pasted1] Thanks, Sora On Mon, Nov 14, 2016 at 2:31 PM Jun Kim wrote: > Hi

Embedded Mode for interpreter

2016-11-14 Thread kevin giroux
Hello, I would like some information about the binding of each interpreter. I read the documentation, http://zeppelin.apache.org/docs/latest/manual/interpreters.html#interpreter-binding-mode

Re: Share variable across notebooks

2016-11-14 Thread Jun Kim
Hi Balachandar, You can share a Spark variable by default. Try creating a variable and using it from another note. However, it's not a good idea to share variables, in my experience. There is a possibility of variable conflicts, and you won't figure out where a variable comes from in
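A minimal sketch of the behavior described above, assuming the default globally shared %spark interpreter (note names and data are illustrative):
  // Note A, %spark paragraph: define a value in the shared interpreter session
  val sharedRdd = sc.parallelize(1 to 100)
  // Note B, %spark paragraph: the same session sees it, so this works
  sharedRdd.count()   // 100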

Share variable across notebooks

2016-11-14 Thread Balachandar R.A.
Hello, Is there a way to spread my paragraphs into two different notebooks? In other words, how to share a variable (spark RDD / spark dataframe / javascript variable) across multiple notebooks? Regards Bala

Re: Google OAuth with Zeppelin

2016-11-14 Thread Tamas Szuromi
Hey, You can extend the Shiro auth with the pac4j library, which supports OAuth2. More details: https://github.com/bujiio/buji-pac4j cheers, Tamas On 14 November 2016 at 10:35, Юрий Рочняк wrote: > Hello, > > I’m wondering if it’s possible to configure Zeppelin and OAuth in

Google OAuth with Zeppelin

2016-11-14 Thread Юрий Рочняк
Hello, I’m wondering if it’s possible to configure Zeppelin and OAuth in the way that users, authenticated with Google, could have access to Zeppelin? It’s easy to configure simple proxy to authenticate users with Google while accessing Zeppelin. However, in such case all users are redirected to

Re: Zeppelin Spark2 / Hive issue

2016-11-14 Thread Ruslan Dautkhanov
Thank you Herman! That was it. -- Ruslan Dautkhanov On Mon, Nov 14, 2016 at 7:23 AM, herman...@teeupdata.com < herman...@teeupdata.com> wrote: > You may check if you have hive-site.xml under zeppelin/spark config > folder… > > Thanks > Herman. > > > > On Nov 14, 2016, at 03:07, Ruslan

Zeppelin Spark2 / Hive issue

2016-11-14 Thread Ruslan Dautkhanov
Dear Apache Zeppelin User group, Got Zeppelin running, but can't get %spark.sql interpreter running correctly, getting [1] in console output. Running latest Zeppelin (0.6.2), Spark 2.0, Hive 1.1, Hadoop 2.6, Java 7. My understanding is that Spark wants to initialize Hive Context, and Hive isn't

Re: 0.7.0 is good to try?

2016-11-22 Thread Ruslan Dautkhanov
Got it. Thank you Moon. Ruslan On Tue, Nov 22, 2016 at 9:07 PM, moon soo Lee wrote: > While snapshot can be broken at any moment, it's not very encouraged to > use snapshot in the production. > > There will be half baked features, some new features with bugs, some >

Re: 0.6.2 build fails

2016-11-22 Thread Hyung Sung Shim
Hello. Thank you for sharing your problem. Could you add the *-Pvendor-repo* option to build Zeppelin with CDH? You can refer to http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/install/build.html#build-command-examples . Thanks. 2016-11-23 16:05 GMT+09:00 Ruslan Dautkhanov : >
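For reference, a CDH-oriented build command along those lines might look roughly like the following; the profile and version flags are illustrative and should be matched to your cluster, per the build docs linked above:
  mvn clean package -DskipTests -Pvendor-repo \
    -Pspark-1.6 -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.8.3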

Re: 0.6.2 build fails

2016-11-23 Thread Ruslan Dautkhanov
Thank you Hyung. That was it. That is resolved. Although I still can't get Zeppelin to work .. will send another email on this new issue. Thanks again. On Wed, Nov 23, 2016 at 12:19 AM Hyung Sung Shim wrote: > Hello. > Thank you for sharing your problem. > > Could you add

Configuring table format/type detection

2016-11-23 Thread Everett Anderson
Hi, I've been using Zeppelin with Spark SQL recently. One thing I've noticed that can be confusing is that Zeppelin attempts to detect the type of column and format it. For example, for columns that appear to have mostly numbers, it will put in commas. Is there a way to configure it globally or

SnappyData announces new cloud service,iSight with Apache Zeppelin

2016-11-25 Thread Sachin Janani
Hi, Today we are launching a service called the iSight cloud (iSight is short for instant insights), which combines elements of the platform to allow users to create notebooks where you can run queries on your data, and visualize the results in near real time. iSight uses 3 core elements to pull

SparkInterpreter.java[getSQLContext_1]:256 - Can't create HiveContext. Fallback to SQLContext

2016-11-26 Thread Ruslan Dautkhanov
Getting SparkInterpreter.java[getSQLContext_1]:256 - Can't create HiveContext. Fallback to SQLContext See full stack [1] below. Java 7 Zeppelin 0.6.2 CDH 5.8.3 Spark 1.6 How to fix this? Thank you. [1] WARN [2016-11-26 09:38:39,028] ({pool-2-thread-2}

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-26 Thread Ruslan Dautkhanov
Yes I can work with hiveContext from spark-shell. Back to the original question. Getting You must *build Spark with Hive*. Export 'SPARK_HIVE=true' See full stack [2] above. Any ideas? -- Ruslan Dautkhanov On Thu, Nov 24, 2016 at 4:48 PM, Jeff Zhang wrote: > My point is

Re: Zeppelin 0.6.2 stable version startup error - Static initialization is deprecated

2016-11-21 Thread Nirav Patel
Looks like it was an issue with Linux Firewall. Got around it. Thanks On Mon, Nov 21, 2016 at 11:24 AM, Nirav Patel wrote: > I built 0.6.2 stable version using spark2 . It starts but I get following > error and UI doesn't load. > > INFO [2016-11-21 11:19:22,235]

Re: Zeppelin problem in HA HDFS

2016-11-23 Thread Felix Cheung
Quite possibly since Spark is talking to HDFS. Does it work in your environment when HA switch over with a long running spark shell session? From: Ruslan Dautkhanov Sent: Sunday, November 20, 2016 5:27:54 PM To: users@zeppelin.apache.org

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-23 Thread Felix Cheung
Hmm, if SPARK_HOME is set it should pick up the right Spark. Does this work with the Scala Spark interpreter instead of pyspark? If it doesn't, is there more info in the log? From: Ruslan Dautkhanov Sent: Monday, November 21, 2016 1:52:36 PM

Re: Pass parameters to paragraphs via URL

2016-11-21 Thread duncan.fol...@gmail.com
On 2016-07-13 16:34 (-), TEJA SRIVASTAV wrote: > PS typo > > > > On Wed, Jul 13, 2016 at 9:03 PM TEJA SRIVASTAV > wrote: > > > We do have work around for that but Validate. > > You need to use angularBinding to achieve it > > %angular

"You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-21 Thread Ruslan Dautkhanov
Getting You must *build Spark with Hive*. Export 'SPARK_HIVE=true' See full stack [2] below. I'm using Spark 1.6 that comes with CDH 5.8.3. So it's definitely compiled with Hive. We use Jupyter notebooks without problems in the same environment. Using Zeppelin 0.6.2, downloaded as

0.7.0 is good to try?

2016-11-22 Thread Ruslan Dautkhanov
We'd like to try Zeppelin 0.7.0 from the current snapshot. Jira shows there are just two in-progress tickets left so it's close to release? How stable is to use against Spark 1.6, Spark 2, PySpark interpreters? Thanks, Ruslan

Re: Zeppelin process died [FAILED]

2016-11-22 Thread Manjunath, Kiran
What do the logs tell you? Regards, Kiran From: Muhammad Rezaul Karim Reply-To: "users@zeppelin.apache.org" , Muhammad Rezaul Karim Date: Saturday, November 19, 2016 at 1:46 AM To: Users

Key/value display settings lost if no results

2016-11-22 Thread Kevin Niemann
I'm using a custom interpreter and it's possible for the end user (in report view) to run a query that returns no results (using the text input form). This is fine, except that it breaks any key/value I had assigned to the pie chart for example. Is it possible to not lose that setting? The next

Re: Embedded Mode for interpreter

2016-11-22 Thread moon soo Lee
It was a selectable option before, but the Zeppelin community discussed it and decided to hide that option since it requires the user to understand how Zeppelin works internally. We wanted to keep the options really simple. But since then, many other options have been added, and now the interpreter option is not

Re: Configuring table format/type detection

2016-11-24 Thread Alexander Bezzubov
Hi Everett, this is a very good question actually. Right now there is not, but it sounds like a great feature, so it may be worth filing a JIRA issue. There was a discussion when this feature was contributed [1], [2], and there is also some work on having the ability to manually override text/number

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-24 Thread Jeff Zhang
AFAIK, the Spark shipped with CDH doesn't support the Spark thrift server, so it is possible it is not compiled with Hive. Can you run spark-shell to verify that? If it is built with Hive, a HiveContext will be created in spark-shell. Ruslan Dautkhanov wrote on Thu, Nov 24, 2016 at 3:30 PM: > I can't
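A quick sketch of that check from SPARK_HOME/bin/spark-shell on Spark 1.x, relying on the sqlContext the shell pre-creates:
  // In spark-shell: the type of the pre-created sqlContext reveals how Spark was built
  println(sqlContext.getClass.getName)
  // org.apache.spark.sql.hive.HiveContext -> Spark was built with Hive
  // org.apache.spark.sql.SQLContext       -> Spark was built without Hive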

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-24 Thread Jeff Zhang
My point is that I suspect CDH also didn't compile Spark with Hive; you can run spark-shell to verify that. Ruslan Dautkhanov wrote on Fri, Nov 25, 2016 at 1:48 AM: > Yep, CDH doesn't have Spark compiled with Thrift server. > My understanding Zeppelin uses spark-shell REPL and not Spark

Re: sqlContext can't find tables, while HiveContext can

2016-11-24 Thread Jeff Zhang
First you need to figure out where the table is. Is this table registered in Spark SQL code, or is it a Hive table? If it is a Hive table, then check whether you put hive-site.xml on the classpath and configured the metastore URI correctly in hive-site.xml. Look at the interpreter log to see which metastore it is
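As a rough illustration of that hive-site.xml check, the metastore URI is set via the hive.metastore.uris property; the host and port below are placeholders:
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://your-metastore-host:9083</value>
  </property>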

sqlContext can't find tables, while HiveContext can

2016-11-24 Thread Ruslan Dautkhanov
Problem 1, with sqlContext) Spark 1.6 CDH 5.8.3 Zeppelin 0.6.2 Running > sqlCtx = SQLContext(sc) > sqlCtx.sql('select * from marketview.spend_dim') shows exception "Table not found". The same runs fine when using hiveContext. See full stack in [1] The same stack in the log file [2]. I

Websockets failing

2016-11-25 Thread Montalban Pontesta, Iraitz
Morning, I am having some issues using Zeppelin (0.6) with HDP 2.4. Everything is working installation-wise, but when I access the website it is blank and connects and disconnects from the websocket URL. Apparently, when I debug in my browser I can see it is giving me errors regarding said websocket

Zeppelin or Jupyter

2016-11-28 Thread Mich Talebzadeh
Hi, I use Zeppelin in different forms and shapes and it is very promising. Some colleagues are mentioning that Jupyter can do all that Zeppelin handles. I have not used Jupyter myself. I have used Tableau, but that is pretty limited to SQL. Has anyone used Jupyter and can share their experience of

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-27 Thread Ruslan Dautkhanov
Also, to get rid of this problem (once HiveContext(sc) was assigned at least twice to a variable), the only fix is to restart Zeppelin :-( -- Ruslan Dautkhanov On Sun, Nov 27, 2016 at 9:00 AM, Ruslan Dautkhanov wrote: > I found a pattern when this happens. > > When I

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-27 Thread Ruslan Dautkhanov
I found a pattern when this happens. When I run sqlCtx = HiveContext(sc) it works as expected. Second and any time after that - gives that exception stack I reported in this email chain. > sqlCtx = HiveContext(sc) > sqlCtx.sql('select * from marketview.spend_dim') You must build Spark with

Re: how to build Zeppelin with a defined list of interpreters

2016-11-28 Thread Hyung Sung Shim
Sorry for confusing you Ruslan. 2016-11-28 23:20 GMT+09:00 Ruslan Dautkhanov : > Thank you moon! > > > On Mon, Nov 28, 2016 at 7:04 AM moon soo Lee wrote: > >> Hi, >> >> You can use '-pl' for your maven build command to exclude submodules. For >> example,
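A sketch of the '-pl' exclusion mentioned in the quoted reply; the module name is illustrative, and the '!' exclusion syntax requires Maven 3.2.1 or later:
  mvn clean package -DskipTests -pl '!scio'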

Re: R interpreter build fails

2016-11-28 Thread Ahyoung Ryu
Hi Ruslan, Can you share your build command in here? Thanks, Ahyoung On Mon, Nov 28, 2016 at 11:51 PM, Ruslan Dautkhanov wrote: > The same problem in 0.6.2 and 0.7.0-snapshot. > R interpreter build fails with below error stack. > > R is installed locally through yum. >

Re: R interpreter build fails

2016-11-28 Thread Hyung Sung Shim
Hello. In order to install R Interpreter, you need to install some packages. [1] http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/interpreter/r.html 2016-11-28 23:51 GMT+09:00 Ruslan Dautkhanov : > The same problem in 0.6.2 and 0.7.0-snapshot. > R interpreter build fails

R interpreter build fails

2016-11-28 Thread Ruslan Dautkhanov
The same problem in 0.6.2 and 0.7.0-snapshot. R interpreter build fails with below error stack. R is installed locally through yum. Is there any special requirements for R interpreter build? [INFO] Zeppelin: Packaging distribution ... SUCCESS [ 6.250 s] [INFO] Zeppelin: R

0.7 Shiro LDAP authentication changes? Unable to instantiate class [org.apache.zeppelin.server.LdapGroupRealm]

2016-11-28 Thread Ruslan Dautkhanov
Zeppelin 0.7.0 built from yesterday's snapshot. Getting below error stack when trying to start Zeppelin 0.7.0. The same shiro config works fine in 0.6.2. We're using LDAP authentication configured in shiro.ini as ldapRealm = org.apache.zeppelin.server.LdapGroupRealm

Re: 0.7 Shiro LDAP authentication changes? Unable to instantiate class [org.apache.zeppelin.server.LdapGroupRealm]

2016-11-28 Thread Ruslan Dautkhanov
Looking at 0.7 docs, Shiro LDAP authentication shiro.ini configuration looks the same. http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/security/shiroauthentication.html Any ideas why this might be broken in the current snapshot? Exception in thread "main"

RDD to Dataframe Error

2016-11-28 Thread Mark Mikolajczak
Hi All, Hoping you can help: I have created an RDD from a NOSQL database and I want to convert the RDD to a data frame. I have tried many options but all result in errors. val df = sc.couchbaseQuery(test).map(_.value).collect().foreach(println)

Re: Unable to connect with Spark Interpreter

2016-11-28 Thread moon soo Lee
According to your log, your interpreter process seems failed to start. Check following lines in your log. You can try run interpreter process manually and see why it is failing. i.e. run D:\zeppelin-0.6.2\bin\interpreter.cmd -d D:\zeppelin-0.6.2\interpreter\spark -p 55492 --- INFO

Re: 0.7 Shiro LDAP authentication changes? Unable to instantiate class [org.apache.zeppelin.server.LdapGroupRealm]

2016-11-28 Thread Ruslan Dautkhanov
+ dev list Could somebody please let me know if shiro-LDAP is known to be broken in master? So I will stop my attempts to work with 0.7. [org.apache.zeppelin.server.LdapGroupRealm] for object named 'ldapRealm'. Please ensure you've specified the fully qualified class name correctly. at

Re: RDD to Dataframe Error

2016-11-28 Thread Jun Kim
Hi, Mark! What kind of error message do you get? The simplest way to convert an RDD to a DF is just importing the implicits and using toDF: import spark.implicits._ val df = rdd.toDF :-) On Tue, Nov 29, 2016 at 1:26 AM, Mark Mikolajczak wrote: > > > Hi All, > > Hoping you can help:
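A slightly fuller sketch of that approach in a %spark paragraph on Spark 2.x; the data and column names are illustrative:
  import spark.implicits._
  val rdd = sc.parallelize(Seq((1, "a"), (2, "b")))
  val df  = rdd.toDF("id", "value")   // RDD -> DataFrame via the imported implicits
  df.show()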

Re: Zeppelin or Jupyter

2016-11-28 Thread DuyHai Doan
"Granted, these two features are currently only fully supported by the spark interpreter group but work is currently underway to make the API extensible to other interpreters" --> Incorrect, the display system has also an API for front-end:

Re: Zeppelin or Jupyter

2016-11-28 Thread Goodman, Alexander (398K)
Hi Mich, You might want to take a look at this: https://www.linkedin.com/pulse/comprehensive-comparison-jupyter-vs-zeppelin-hoc-q-phan-mba- I use both Zeppelin and Jupyter myself, and I would say by and large the conclusions of that article are still mostly correct. Jupyter is definitely

Re: RDD to Dataframe Error

2016-11-28 Thread Mohit Jaggi
looks like you have RDD of JSON. Try this: http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets Mohit Jaggi Founder, Data Orchard LLC www.dataorchardllc.com > On Nov 28, 2016, at 9:49 AM,
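A minimal sketch of that suggestion — turning an RDD of JSON strings into a DataFrame; the sample data is illustrative:
  // assuming the source query yields an RDD[String] of JSON documents
  val jsonRdd = sc.parallelize(Seq("""{"id":1,"name":"a"}""", """{"id":2,"name":"b"}"""))
  val df = spark.read.json(jsonRdd)
  df.printSchema()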

Re: 0.7 Shiro LDAP authentication changes? Unable to instantiate class [org.apache.zeppelin.server.LdapGroupRealm]

2016-11-28 Thread Khalid Huseynov
I think during refactoring LdapGroupRealm has moved into different package, so could you try in your shiro config with: ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm On Tue, Nov 29, 2016 at 2:33 AM, Ruslan Dautkhanov wrote: > + dev list > > Could somebody please
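For reference, the corresponding shiro.ini fragment would look roughly like this; only the realm class name changes relative to 0.6.x, and the URL, search base, and DN template below are placeholders:
  [main]
  ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm
  ldapRealm.contextFactory.environment[ldap.searchBase] = dc=example,dc=com
  ldapRealm.contextFactory.url = ldap://ldap-host:389
  ldapRealm.userDnTemplate = uid={0},ou=users,dc=example,dc=com
  ldapRealm.contextFactory.authenticationMechanism = SIMPLE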

RE: Unable to connect with Spark Interpreter

2016-11-29 Thread Jan Botorek
Hello, Thanks for the advice, but it doesn’t seem that anything is wrong when I start the interpreter manually. I attach logs from interpreter and from zeppelin. This is the cmd output from interpreter launched manually: D:\zeppelin-0.6.2\bin> interpreter.cmd -d

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-24 Thread Ruslan Dautkhanov
Yep, CDH doesn't have Spark compiled with the Thrift server. My understanding is that Zeppelin uses the spark-shell REPL and not the Spark thrift server. Thank you. -- Ruslan Dautkhanov On Thu, Nov 24, 2016 at 1:57 AM, Jeff Zhang wrote: > AFAIK, spark of CDH don’t support spark thrift

Re: Key/value display settings lost if no results

2016-11-22 Thread moon soo Lee
Hi Kevin, This is an example that programmatically set graph options. https://www.zeppelinhub.com/viewer/notebooks/aHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL0xlZW1vb25zb28vemVwcGVsaW4tZXhhbXBsZXMvbWFzdGVyLzJCOFhRVUM1Qi9ub3RlLmpzb24 You can get some idea how to configure graph programmatically

how to build Zeppelin with a defined list of interpreters

2016-11-27 Thread Ruslan Dautkhanov
Getting [1] error stack when trying to build Zeppelin from 0.7-snapshot. We will not use most of the built-in Zeppelin interpreters, including Scio, which is failing. How can we switch off (blacklist) certain interpreters from the Zeppelin build?

Zeppelin Integration with Azure Blob Storage

2016-11-28 Thread Panayotis Trapatsas
I use Zeppelin 0.6.2 with Spark 2.0.1. I want to read files from Azure Blob storage, so I try to include these packages: "*com.microsoft.azure:azure-storage:4.2.0*" and " *org.apache.hadoop:hadoop-azure:2.7.3*". For some reason, after I include the packages I get this error: Caused by:
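One common way to load such artifacts is the dependency loader — a %dep paragraph run before the Spark interpreter starts — or the interpreter's dependency settings; a sketch below, using the same coordinates as above:
  %dep
  z.reset()
  z.load("com.microsoft.azure:azure-storage:4.2.0")
  z.load("org.apache.hadoop:hadoop-azure:2.7.3")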

RE: Unable to connect with Spark Interpreter

2016-11-28 Thread Jan Botorek
Hello again, I am sorry, but has really nobody else tackled the same issue? I have currently tried the new version (0.6.2) – both the binary and the „to compile“ versions. But the issue remains the same. I have tried it on several laptops and servers, always with the same result.

JDBC Interpreter does not commit updates workaround?

2016-11-26 Thread Matt L
Hello All! From Zeppelin notes, is there a way to make the JDBCInterpreter commit after I run an upsert command? Or somehow call the connection.commit method? I know there’s an open JIRA for this (https://issues.apache.org/jira/browse/ZEPPELIN-1645

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-16 Thread Md. Rezaul Karim
Hi Moon, No, I did not set those two environment variables. I'm still figuring out how to do that. I have installed Spark on my machine and have set SPARK_HOME. However, do I also need to install and configure Hadoop? But the error is all about Hive. Am I somehow wrong? A sample

Re: Embedded Mode for interpreter

2016-11-16 Thread moon soo Lee
Hi, Zeppelin actually does have an embedded mode that runs the interpreter in the same JVM that Zeppelin runs in. This feature is not exposed to the user, but it can be controlled by the InterpreterOption.remote field.

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-16 Thread Jianfeng (Jeff) Zhang
>>> at /home/asif/zeppelin-0.6.2-bin-all/metastore_db has an incompatible >>> format with the current version of the software. The database was created >>> by or upgraded by version 10.11. Try to delete this folder and rerun it again. Best Regard, Jeff Zhang From: moon soo Lee

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-16 Thread moon soo Lee
Hi, It's strange, Do you have SPARK_HOME or HADOOP_CONF_DIR defined in conf/zeppelin-env.sh? You can stop Zeppelin, delete /home/asif/zeppelin-0.6.2-bin-all/metastore_db, start Zeppelin and try again. Thanks, moon On Tue, Nov 15, 2016 at 4:05 PM Muhammad Rezaul Karim

Zeppelin is much slower than pure spark-shell

2016-11-16 Thread York Huang
Hi, I am using a Spark SQL JOIN. It runs well in spark-shell --master yarn-client, but is very slow and times out in Zeppelin. Where should I look to examine the issue? Thanks

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-16 Thread Muhammad Rezaul Karim
Hi Moon, I have set those variables as follows (a partial view of the zeppelin-env.sh file). Is this okay? #!/bin/bash export JAVA_HOME=/usr/lib/jvm/java-8-oracle export PATH=$PATH:$JAVA_HOME/bin export SPARK_HOME=/home/asif/spark-2.0.0-bin-hadoop2.7 export PATH=$PATH:$SPARK_HOME/bin   

Re: Zeppelin is much slower than pure spark-shell

2016-11-16 Thread Jeff Zhang
There could be many reasons. You need to check the Spark UI to see whether they use the same resources and the same number of tasks. York Huang wrote on Thu, Nov 17, 2016 at 1:05 PM: > Hi, > I am using a spark sql JOIN. It runs well in SPARK-SHELL --master > yarn-client, but is very slow and

ArrayIndexOutOfBoundsException on Zeppelin notebook example

2016-11-16 Thread Muhammad Rezaul Karim
Hi All, I have the following Scala code (taken from https://zeppelin.apache.org/docs/0.6.2/quickstart/tutorial.html#data-retrieval) that deals with the sample Bank-details data:

Re: ArrayIndexOutOfBoundsException on Zeppelin notebook example

2016-11-17 Thread Hyung Sung Shim
Hello Muhammad. Please check your bank-full.csv file first and you can filter item length in your scala code for example *val bank = bankText.map(s => s.split(";")).filter(s => (s.size)>5).filter(s => s(0) != "\"age\"")* Hope this helps. 2016-11-17 21:26 GMT+09:00 Dayong :
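Put together with the tutorial's Bank case class, the filtered parsing would look roughly like this — a sketch assuming bankText was loaded from bank-full.csv as in the tutorial:
  case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer)
  val bank = bankText.map(s => s.split(";"))
    .filter(s => s.size > 5)             // skip short / malformed rows
    .filter(s => s(0) != "\"age\"")      // skip the header row
    .map(s => Bank(s(0).toInt,
                   s(1).replaceAll("\"", ""),
                   s(2).replaceAll("\"", ""),
                   s(3).replaceAll("\"", ""),
                   s(5).replaceAll("\"", "").toInt))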

Aw: Re: Two different errors while executing Spark SQL queries against cached temp tables

2016-11-16 Thread Florian Schulz
Hi Alex, thanks for responding, I hope I can give you all the needed information. I downloaded the 0.6.2 binary package and used the standard configuration. I only start the zeppelin-daemon and Zeppelin spins up the embedded Spark environment. I only added the postgresql package

Re: FW: Issue with Zeppelin setup on Datastax-Spark

2016-11-16 Thread DuyHai Doan
OK, I understand why you have the issue. You are using Spark 2.0.2, and the latest Datastax 5.0.3 is still using Spark version 1.6.x. On Wed, Nov 16, 2016 at 10:23 AM, Abul Basar wrote: > I am facing a similar issue while using Spark R. > > My environment: > >- Spark 2.0.2 >-

Re: FW: Issue with Zeppelin setup on Datastax-Spark

2016-11-16 Thread DuyHai Doan
I recommend downloading my pre-built Zeppelin for Datastax. Shared folder link: https://drive.google.com/folderview?id=0B6wR2aj4Cb6wQ01aR3ItR0xUNms On Wed, Nov 16, 2016 at 11:13 AM, DuyHai Doan wrote: > Ok I understand why you have issue. > > You are using Spark 2.0.2 and

Re: FW: Issue with Zeppelin setup on Datastax-Spark

2016-11-16 Thread Abul Basar
Hello DuyHai, Original problem reported by Arpan Saha is related to Datastax. I am using Spark + Zeppelin. Below is the configuration. - Spark 2.0.2 - Zeppelin: 0.6.2 - Java 1.8.0_111 - R: 3.3.1 Thanks, Abul On Wed, Nov 16, 2016 at 3:44 PM, DuyHai Doan

Aw: Two different errors while executing Spark SQL queries against cached temp tables

2016-11-16 Thread Florian Schulz
Hi, can anyone help me with this? It is very annoying, because I get this error very often (on my local machine and also on a second VM). I use Zeppelin 0.6.2 with Spark 2.0 and Scala 2.11. Best regards, Florian   Sent: Monday, 14 November 2016 at 20:45 From: "Florian Schulz"

Unable to connect with Spark Interpreter

2016-11-16 Thread Jan Botorek
Hello, I am not able to run any Spark code in the Zeppelin. I tried compiled versions of Zeppelin as well as to compile the source code on my own based on the https://github.com/apache/zeppelin steps. My configuration is Scala in 2.11 version and spark 2.0.1. Also, I tried different versions of

Re: Two different errors while executing Spark SQL queries against cached temp tables

2016-11-16 Thread Alexander Bezzubov
Hi Florian, sorry for slow response, I guess the main reason for not much feedback here is that its hard to reproduce the error you describe, as it does not happen reliably even on your local environment. java.lang.NoSuchMethodException: org.apache.spark.io.LZ4CompressionCodec This can be a

Re: Unable to connect with Spark Interpreter

2016-11-16 Thread Alexander Bezzubov
Hi Jan, this is a rather generic error saying that ZeppelinServer somehow could not connect to the interpreter process on your machine. Could you please share more from logs/*, in particular the .out and .log of the Zeppelin server AND zeppelin-interpreter-spark*.log - usually this is enough to

RE: Unable to connect with Spark Interpreter

2016-11-16 Thread Jan Botorek
Hello Alexander, Thank you for the quick response. Please see the server log attached. Unfortunately, I don’t have any zeppelin-interpreter-spark*.log in the logs folder. Questions: - It happens every time – even if I try to run several paragraphs - Yes, it keeps happening even

Jetty hangs, and Zeppelin hangs thereafter

2016-11-18 Thread Zhe Sun
Dear Zeppeliners, *1. For Zeppelin 0.6.0, I found it's very likely to hang at the synchronized (noteSocketMap), for example:* "Thread-133" #189 prio=5 os_prio=0 tid=0x7efc14001000 nid=0x7d3f waiting for monitor entry [0x7efc56cef000] * java.lang.Thread.State: BLOCKED (on object

Re: Zeppelin Spark2 / Hive issue

2016-11-18 Thread Ruslan Dautkhanov
I am now not getting that error when using %sql, but the Zeppelin/Hive jar still creates a local metastore instead of using our prod HMS; each time I use Zeppelin, it creates Derby files in the bin/ directory from where I started Zeppelin. $ ll total 12 -rw-r--r-- 1 rdautkha gopher 746 Nov 18 16:06

Re: Zeppelin process died [FAILED]

2016-11-18 Thread Muhammad Rezaul Karim
Hi Ahyoung, Thanks for your prompt reply too. I have changed the default port number from 8080 to 8015 in the ~zeppelin-0.6.2-bin-all/conf/zeppelin-site.xml file and now it's working perfectly. Thanks a million. Regards, Rezaul On Friday, November 18, 2016 11:06 PM, Ahyoung Ryu

Re: Is it possible to run Java code on Zeppelin Notebook?

2016-11-17 Thread Abhisar Mohapatra
Yes it will. I guess there are some implementations too On Thu, Nov 17, 2016 at 10:41 PM, Muhammad Rezaul Karim < reza_cse...@yahoo.com> wrote: > Hi All, > > I am a new user of Zeppelin and got to know that Apache Zeppelin is using > Spark as the backend interpreter. > > Till date, I have run

Zeppelin Pulling Files from S3 That Are KMS Encrypted Denied

2016-11-17 Thread Tseytlin, Keren
Hi All, I have a bucket that I’m working with and I want to pull orc files from there and use it in my Spark/Scala magic. The only thing is that these files are KMS encrypted. When I try to get a KMS file however, it shows me an AWS Access Denied error, although there is no possible way that

Re: SparkException: Task not serializable for Closure variable

2016-11-17 Thread Nirav Patel
I have to try but I think it probably will happen with spark-shell as well. I have found alternative to pass a Array to UDF as a parameter. Thanks On Thu, Nov 17, 2016 at 9:24 AM, moon soo Lee wrote: > Are you able to run the same code in SPARK_HOME/bin/spark-shell ? > >

Re: No Space left on device

2016-11-17 Thread moon soo Lee
Hi, Do you have the same problem on SPARK_HOME/bin/spark-shell? Are you using standalone spark cluster? or Yarn? Thanks, moon On Sun, Nov 13, 2016 at 8:19 PM York Huang wrote: > I ran into the "No space left on device" error in zeppelin spark when I > tried to run

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-17 Thread Muhammad Rezaul Karim
 Hi, I can run the same code in SPARK_HOME/bin/spark-shell. However, it does not allow me to execute the SQL command. On Thursday, November 17, 2016 6:01 PM, moon soo Lee wrote: Are you able to run the same code in SPARK_HOME/bin/spark-shell? Thanks,moon On Thu,

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-17 Thread moon soo Lee
Although "export PATH=$PATH..." is not really necessary in zeppelin-env.sh, i think your configuration looks okay. Have you tried remove /home/asif/zeppelin-0.6.2-bin-all/metastore_db ? Thanks, moon On Wed, Nov 16, 2016 at 6:48 PM Muhammad Rezaul Karim wrote: Hi Moon,

Is it possible to run Java code on Zeppelin Notebook?

2016-11-17 Thread Muhammad Rezaul Karim
Hi All, I am a new user of Zeppelin and got to know that Apache Zeppelin is using Spark as the backend interpreter. Till date, I have run some codes written in Scala on the Zeppelin notebook. However, I am pretty familiar with writing Spark application using Java. Now my question: is it

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-17 Thread moon soo Lee
Are you able to run the same code in SPARK_HOME/bin/spark-shell? Thanks, moon On Thu, Nov 17, 2016 at 9:47 AM Muhammad Rezaul Karim wrote: > Hi, > > Thanks a lot. Yes, I have removed the metastore_db directory too. > > > > > On Thursday, November 17, 2016 5:38 PM, moon

Re: Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-17 Thread Muhammad Rezaul Karim
Hi, Thanks a lot. Yes, I have removed the metastore_db directory too.   On Thursday, November 17, 2016 5:38 PM, moon soo Lee wrote: Although "export PATH=$PATH..."  is not really necessary in zeppelin-env.sh, i think your configuration looks okay. Have you tried

Re: ArrayIndexOutOfBoundsException on Zeppelin notebook example

2016-11-17 Thread Hyung Sung Shim
Good to hear it helps. On Fri, Nov 18, 2016 at 1:52 AM, Muhammad Rezaul Karim wrote: > Hi Shim, > > Now it works perfectly. Thank you so much. Actually, I am from Java > background and learning the Scala. > > > Thanks and Regards, > - > *Md. Rezaul

zeppelin spark sql - ClassNotFoundException: $line70280873551.$read$

2016-11-17 Thread Nirav Patel
Recently I started getting the following error upon execution of Spark SQL. validInputDocs.createOrReplaceTempView("valInput") %sql select count(*) from valInput // Fails with ClassNotFoundException But validInputDocs.show works just fine. Any interpreter settings that may have

Re: Users with independent filters

2016-11-11 Thread Matheus de Oliveira
Sorry about the long delay. On Fri, Oct 28, 2016 at 2:10 AM, moon soo Lee wrote: > https://issues.apache.org/jira/browse/ZEPPELIN-1236 is a related issue. This is exactly what I wanted. Until that happens, it seems that Zeppelin is not yet the tool I'm looking for in this

No Space left on device

2016-11-13 Thread York Huang
I ran into the "No space left on device" error in zeppelin spark when I tried to run the following. cache table temp_tbl as select * from ( select *, rank() over (partition by id order by year desc) as rank from table1 ) v where v.rank =1 The table1 is very big. I set up spark.local.dir in

unsubscribe

2016-11-13 Thread Mathieu Delsaut
Mathieu Delsaut *Research Engineer at LE²P* +262 (0)262 93 86 08

Re: Is it possible to run Java code on Zeppelin Notebook?

2016-11-20 Thread Alexander Bezzubov
Good question :) Actually, there is a not-yet-very-well-known "hack" (I talked about it a bit at ApacheCon this year) to run a pure Java paragraph in Apache Zeppelin - you can just use the `%beam` interpreter! The Beam interpreter uses the Beam Java API, so you can leverage it, e.g. to run WEKA machine

Re: Is it possible to run Java code on Zeppelin Notebook?

2016-11-20 Thread Felix Cheung
I think you will need to convert Java code into Scala syntax? But Scala can call into Java libraries and so on. I don't think we have an interpreter for Java since it does not come with a REPL until Java 9? From: Abhisar Mohapatra

Re: Problem with scheduler (stops after ten executions)

2016-11-15 Thread moon soo Lee
Thanks for sharing your investigation. I tried to schedule some paragraphs returning no value, but couldn't reproduce the problem. Could you share some code snippets to reproduce the problem? Thanks, moon On Mon, Nov 14, 2016 at 11:36 AM Florian Schulz wrote: > Hi, >

Having issues with Hive RuntimeException in running Zeppelin notebook application

2016-11-15 Thread Muhammad Rezaul Karim
Hi, I am a new user of Apache Zeppelin and I am running a simple notebook app on Zeppelin (version 0.6.2-bin-all) using Scala based on Spark. My source code is as follows: val bankText = sc.textFile("/home/rezkar/zeppelin-0.6.2-bin-all/bin/bank-full.csv") case class Bank(age:String,

Problem with scheduler (stops after ten executions)

2016-11-14 Thread Florian Schulz
Hi, sorry for my late response! I experimented a lot with this in the last few days and I think I have fixed it now, but I'm not sure what exactly the problem was. I think it has something to do with functions which return no value (type == Unit). I changed all of them to return something (e.g.

Two different errors while executing Spark SQL queries against cached temp tables

2016-11-14 Thread Florian Schulz
Hi everyone,   I have some trouble while executing some Spark SQL queries against some cached temp tables. I query different temp tables and while doing aggregates etc., I often get these errors back:   java.lang.NoSuchMethodException:

Re: Is it possible to run Java code on Zeppelin Notebook?

2016-11-20 Thread Muhammad Rezaul Karim
Hi Alexander, Thanks for your reply. I am particularly interested in running Spark code written in Java on the Zeppelin notebook. On Sunday, November 20, 2016 11:06 AM, Alexander Bezzubov wrote: Good question :) Actually, there is a not very well known yet "hack" (I

Adjust Height of Data Visualization

2016-11-20 Thread s r
Hi, sorry if I'm missing something obvious. I've started using Zeppelin for data exploration and would like to have a bigger (taller) visualization widget to display more rows. Any ideas?

Re: Adjust Height of Data Visualization

2016-11-20 Thread Alexander Bezzubov
Hi, it makes sense - did you try dragging the paragraph's lower-right corner to adjust the output height? -- Alex On Sun, Nov 20, 2016, 16:44 s r wrote: > Hi, Sorry if I'm missing something obvious > I've started using Zeppelin for data exploration and would like to >

Re: Embedded Mode for interpreter

2016-11-21 Thread kevin giroux
What about giving *freedom* to users, and exposing such an embedded mode to users again? Well, not all users have the same needs, or want to run Zeppelin the same way. On my side, being able to use such an 'embedded' mode would be useful. Thanks. On Wed, 16 Nov 2016 at 23:45, moon soo Lee

How to make Zeppelin log everything to console

2016-10-21 Thread Xi Shen
Hi, Currently, the default log4j logs almost everything to the log files. But I do not want to run Zeppelin as a service, and I want to print all the logs, including the interpreter logs to the console. Can I do that with log4j configuration? Which class should I use? -- Thanks, David S.
