It works, thanks Jeff!
On Wednesday, August 2, 2017, 6:24:34 PM PDT, Jeff Zhang wrote:
I suppose %sql here means %spark.sql; in that case you need to modify the
hive-site.xml under SPARK_CONF_DIR
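(For reference, a minimal hive-site.xml sketch of the kind Jeff is describing; the metastore host and port below are placeholders, not values from this thread:

<?xml version="1.0"?>
<configuration>
  <!-- Point Spark's Hive support at an external metastore;
       replace host/port with your actual metastore endpoint -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://your-metastore-host:9083</value>
  </property>
</configuration>

Placing this file in the directory SPARK_CONF_DIR points at lets the %spark.sql interpreter pick up the Hive metastore settings.)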
Richard Xin wrote on Thursday, August 3, 2017 at 9:21 AM:
on AWS EMR I am
ter org.apache.zeppelin.postgresql.PostgreSqlInterpreter 340437485 created
INFO [2017-08-03 20:22:53,610] ({pool-2-thread-2} SchedulerFactory.java[jobStarted]:131) - Job paragraph_1501783535283_1713771734 started by scheduler org.apache.zeppelin.interpreter.remote.RemoteInterpretershared_session392982422
INFO [2017-08-03 20:22:53,611] ({pool-2-thread-2} Paragraph.java[jobRun]:362) - run paragraph 20170803-180535_552293631 using psql org.apache.zeppelin.interpreter.LazyOpenInterpreter@144aa9ed