Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-23 Thread Ruslan Dautkhanov
I can't reproduce this in %spark nor %sql; it seems to be %pyspark-specific. It also seems to run fine the first time I start Zeppelin, then it shows this error: You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly. sqlc = HiveContext(sc) sqlc.sql("select count(*) from hi
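For context, the quoted error message comes from Spark 1.x when its assembly was built without Hive support; the message itself prescribes the fix. A hedged sketch of what it asks for (paths and build layout are illustrative; in practice pointing SPARK_HOME at a Hive-enabled Spark distribution is usually enough for Zeppelin):

```shell
# What the error message literally asks for (Spark 1.x sbt build).
# Run from the Spark source root; illustrative, not Zeppelin-specific.
export SPARK_HIVE=true
build/sbt assembly
```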

Configuring table format/type detection

2016-11-23 Thread Everett Anderson
Hi, I've been using Zeppelin with Spark SQL recently. One thing I've noticed that can be confusing is that Zeppelin attempts to detect the type of a column and format it. For example, for columns that appear to contain mostly numbers, it will insert commas. Is there a way to configure it globally or
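The comma formatting Everett describes is ordinary thousands-separator formatting. A minimal Python illustration of the effect (this is not Zeppelin's actual display code, just the same transformation):

```python
def with_commas(n):
    """Format a number with commas as thousands separators,
    the same visual effect Zeppelin applies to columns it
    detects as numeric."""
    return f"{n:,}"

print(with_commas(1234567))  # → 1,234,567
```

The confusion in the thread arises because this formatting is applied by the display layer based on detected column type, not by the underlying Spark SQL result.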

Re: 0.6.2 build fails

2016-11-23 Thread Hyung Sung Shim
Thank you for sharing the result! Feel free to email whenever you have problems. On Thu, Nov 24, 2016 at 7:30 AM, Ruslan Dautkhanov wrote: > Thank you Hyung. That was it. That is resolved. > > Although I still can't get Zeppelin to work .. will send another email on > this new issue. > > Thanks again

Re: 0.6.2 build fails

2016-11-23 Thread Ruslan Dautkhanov
Thank you Hyung. That was it. That is resolved. Although I still can't get Zeppelin to work .. will send another email on this new issue. Thanks again. On Wed, Nov 23, 2016 at 12:19 AM Hyung Sung Shim wrote: > Hello. > Thank you for sharing your problem. > > Could you add *-Pvendor-repo *opti
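The fix Hyung suggests is the -Pvendor-repo Maven profile, which adds vendor (e.g. Cloudera/Hortonworks) artifact repositories to the build. A hedged sketch of a Zeppelin 0.6.2 build command using it; the other profiles shown are illustrative and depend on your Spark/Hadoop versions:

```shell
# Illustrative Zeppelin 0.6.2 build; -Pvendor-repo is the profile from
# the thread, the remaining profiles are examples for a typical stack.
mvn clean package -DskipTests -Pvendor-repo -Pspark-1.6 -Phadoop-2.6
```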

Re: Zeppelin problem in HA HDFS

2016-11-23 Thread Ruslan Dautkhanov
Well, that wasn't a long-running session. The HDFS NameNode states hadn't changed when that Zeppelin notebook started. The problem is always reproducible. It might be a Spark 2.0 problem; I am falling back to Spark 1.6. Thanks Felix. -- Ruslan Dautkhanov On Wed, Nov 23, 2016 at 6:58 AM, Felix Cheu

Re: Zeppelin problem in HA HDFS

2016-11-23 Thread Felix Cheung
Quite possibly, since Spark is talking to HDFS. Does a long-running spark-shell session survive an HA switchover in your environment? From: Ruslan Dautkhanov Sent: Sunday, November 20, 2016 5:27:54 PM To: users@zeppelin.apache.org Subject: Re: Zepelin pr
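Felix's question matters because a client survives NameNode failover only when it addresses the logical HA nameservice rather than a single NameNode host. A hedged hdfs-site.xml sketch of standard Hadoop HA client properties (the nameservice and host names are illustrative):

```xml
<!-- Illustrative HA nameservice client config; "mycluster", "nn1"/"nn2",
     and the hostnames are example values, not from the thread. -->
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>namenode1.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>namenode2.example.com:8020</value></property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If Spark or Zeppelin reference a specific NameNode host instead of hdfs://mycluster, jobs will fail when that node goes standby, which would match the symptom Ruslan reports.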

Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-23 Thread Felix Cheung
Hmm, if SPARK_HOME is set it should pick up the right Spark. Does this work with the Scala Spark interpreter instead of pyspark? If it doesn't, is there more info in the log? From: Ruslan Dautkhanov Sent: Monday, November 21, 2016 1:52:36 PM To: users@zeppelin.apa
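For readers following along: SPARK_HOME is the standard way to make Zeppelin use an external Spark installation instead of its embedded one. A hedged sketch of the setting (the path is illustrative):

```shell
# In conf/zeppelin-env.sh; the path below is an example, use the root
# of your Hive-enabled Spark distribution.
export SPARK_HOME=/opt/spark
```

Testing the same query in a %spark (Scala) paragraph, as Felix suggests, narrows the problem to either the Spark build itself or the pyspark interpreter path.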