Do you set SPARK_HOME in zeppelin-env.sh? And what version of Spark do you
use? Do you use CDH's Spark or Apache Spark?
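If SPARK_HOME is not set yet, zeppelin-env.sh usually just needs something
like the lines below (a sketch only; the paths are an example for a CDH
parcel install, adjust them to your layout):

export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
export HADOOP_CONF_DIR=/etc/hadoop/conf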




On Tue, Feb 6, 2018 at 8:40 PM, LINZ, Arnaud <al...@bouyguestelecom.fr> wrote:

> Hi,
>
>
>
> The lack of an interpreter log was due to user impersonation: the
> impersonated user could not write to the log dir. I've granted the proper
> rights on the log dir.
>
> Zeppelin tries to connect to the correct metastore:
>
>
>
> INFO [2018-02-06 13:24:40,201] ({pool-2-thread-4}
> DFSClient.java[getDelegationToken]:1086) - Created token for alinz:
> HDFS_DELEGATION_TOKEN owner=alinz/*hidden_due_to_enterprise_policy*,
> renewer=yarn, realUser=, issueDate=1517919880167, maxDate=1518524680167,
> sequenceNumber=9040, masterKeyId=156 on ha-hdfs:
> *hidden_due_to_enterprise_policy*
>
> INFO [2018-02-06 13:24:40,757] ({pool-2-thread-4}
> HiveMetaStoreClient.java[open]:391) - Trying to connect to metastore with
> URI thrift://*hidden_due_to_enterprise_policy*:9083
>
> INFO [2018-02-06 13:24:40,793] ({pool-2-thread-4}
> HiveMetaStoreClient.java[open]:465) - Opened a connection to metastore,
> current connections: 1
>
> INFO [2018-02-06 13:24:40,793] ({pool-2-thread-4}
> HiveMetaStoreClient.java[open]:518) - Connected to metastore.
>
>
>
> Then I get a strange exception:
>
>
>
> WARN [2018-02-06 13:24:47,162] ({pool-2-thread-4}
> SparkInterpreter.java[getSQLContext_1]:272) - Can't create HiveContext.
> Fallback to SQLContext
>
> java.lang.reflect.InvocationTargetException
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>
>         at
> org.apache.zeppelin.spark.SparkInterpreter.getSQLContext_1(SparkInterpreter.java:267)
>
>         at
> org.apache.zeppelin.spark.SparkInterpreter.getSQLContext(SparkInterpreter.java:244)
>
>         at
> org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:854)
>
>         at
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
>
>         at
> org.apache.zeppelin.spark.PySparkInterpreter.getSparkInterpreter(PySparkInterpreter.java:565)
>
>         at
> org.apache.zeppelin.spark.PySparkInterpreter.createGatewayServerAndStartScript(PySparkInterpreter.java:209)
>
>         at
> org.apache.zeppelin.spark.PySparkInterpreter.open(PySparkInterpreter.java:162)
>
>         at
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
>
>         at
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
>
>         at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
>
>         at
> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
>         at java.lang.Thread.run(Thread.java:745)
>
> Caused by: java.lang.NoSuchMethodError:
> com.facebook.fb303.FacebookService$Client.sendBaseOneway(Ljava/lang/String;Lorg/apache/thrift/TBase;)V
>
>         at
> com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:436)
>
>         at
> com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:430)
>
>         at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:538)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:498)
>
>         at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:105)
>
>         at com.sun.proxy.$Proxy23.close(Unknown Source)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:498)
>
>         at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2067)
>
>         at com.sun.proxy.$Proxy23.close(Unknown Source)
>
>         at org.apache.hadoop.hive.ql.metadata.Hive.close(Hive.java:357)
>
>         at
> org.apache.hadoop.hive.ql.metadata.Hive.access$000(Hive.java:153)
>
>         at org.apache.hadoop.hive.ql.metadata.Hive$1.remove(Hive.java:173)
>
>         at
> org.apache.hadoop.hive.ql.metadata.Hive.closeCurrent(Hive.java:326)
>
>         at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:296)
>
>         at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:273)
>
>         at
> org.apache.spark.sql.hive.client.ClientWrapper.client(ClientWrapper.scala:272)
>
>         at
> org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:288)
>
>         at
> org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:239)
>
>         at
> org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:238)
>
>         at
> org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:281)
>
>         at
> org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:488)
>
>         at
> org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:478)
>
>         at
> org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:442)
>
>         at
> org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272)
>
>         at
> org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271)
>
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>
>         at
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>
>         at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271)
>
>         at
> org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
>
>         at
> org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
>
>         ... 22 more
>
> INFO [2018-02-06 13:24:49,094] ({pool-2-thread-4}
> SparkInterpreter.java[populateSparkWebUrl]:1013) - Sending metainfos to
> Zeppelin server: {url=http://10.136.168.12:8200}
>
> INFO [2018-02-06 13:24:50,656] ({pool-2-thread-4}
> SchedulerFactory.java[jobFinished]:137) - Job
> remoteInterpretJob_1517919875120 finished by scheduler
> interpreter_2085378249
>
>
>
> (I don’t use anything related to Facebook…)
>
> It seems somebody has already had this error:
>
>
> http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/SparkInterpreter-java-getSQLContext-1-256-Can-t-create-HiveContext-Fallback-to-SQLContext-td4612.html
>
> but no answer was provided.
>
>
>
>
>
> *From:* LINZ, Arnaud
> *Sent:* Tuesday, February 6, 2018 12:22
> *To:* 'users@zeppelin.apache.org' <users@zeppelin.apache.org>
> *Subject:* RE: How to have a native graphical representation (%sql) of a
> HiveContext?
>
>
>
> Oops, sorry. No zeppelin-interpreter log, but I did look at the wrong
> log file.
>
>
>
> No error on startup, and here is the stack trace when I query a Hive
> table:
>
>
>
> INFO [2018-02-06 12:12:18,405] ({qtp1632392469-74}
> AbstractValidatingSessionManager.java[enableSessionValidation]:230) -
> Enabling session validation scheduler...
> WARN [2018-02-06 12:12:18,484] ({qtp1632392469-74}
> LoginRestApi.java[postLogin]:119) -
> {"status":"OK","message":"","body":{"principal":"alinz","ticket":"f4ebf4ec-d889-4d01-808a-ee8b9800c80e","roles":"[]"}}
> INFO [2018-02-06 12:12:18,710] ({qtp1632392469-74}
> NotebookServer.java[sendNote]:711) - New operation from 172.24.193.234 :
> 61997 : alinz : GET_NOTE : 2D1SETNNQ
> WARN [2018-02-06 12:12:18,764] ({qtp1632392469-74}
> GitNotebookRepo.java[revisionHistory]:157) - No Head found for 2D1SETNNQ,
> No HEAD exists and no explicit starting revision was specified
> INFO [2018-02-06 12:12:19,102] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:188) - Create
> interpreter instance spark for note 2D1SETNNQ
> INFO [2018-02-06 12:12:19,107] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:221) - Interpreter
> org.apache.zeppelin.spark.SparkInterpreter 1563969204 created
> INFO [2018-02-06 12:12:19,109] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:221) - Interpreter
> org.apache.zeppelin.spark.SparkSqlInterpreter 119386711 created
> INFO [2018-02-06 12:12:19,110] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:221) - Interpreter
> org.apache.zeppelin.spark.DepInterpreter 1706768324 created
> INFO [2018-02-06 12:12:19,111] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:221) - Interpreter
> org.apache.zeppelin.spark.PySparkInterpreter 261340605 created
> INFO [2018-02-06 12:12:19,112] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:221) - Interpreter
> org.apache.zeppelin.spark.SparkRInterpreter 276911901 created
> INFO [2018-02-06 12:12:19,111] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:221) - Interpreter
> org.apache.zeppelin.spark.PySparkInterpreter 261340605 created
> INFO [2018-02-06 12:12:19,112] ({qtp1632392469-71}
> InterpreterFactory.java[createInterpretersForNote]:221) - Interpreter
> org.apache.zeppelin.spark.SparkRInterpreter 276911901 created
> INFO [2018-02-06 12:13:38,419] ({pool-2-thread-2}
> SchedulerFactory.java[jobStarted]:131) - Job
> paragraph_1513703910594_823567073 started by scheduler
> org.apache.zeppelin.interpreter.remote.RemoteInterpretershared_session1977922636
> INFO [2018-02-06 12:13:38,420] ({pool-2-thread-2}
> Paragraph.java[jobRun]:362) - run paragraph 20171219-181830_1097537317
> using pyspark org.apache.zeppelin.interpreter.LazyOpenInterpreter@f93bdbd
> INFO [2018-02-06 12:13:38,428] ({pool-2-thread-2}
> RemoteInterpreterManagedProcess.java[start]:126) - Run interpreter process
> [/opt/zeppelin/bin/interpreter.sh, -d, /opt/zeppelin/interpreter/spark, -p,
> 43118, -u, alinz, -l, /opt/zeppelin/local-repo/2CYVF45A9]
> INFO [2018-02-06 12:13:39,474] ({pool-2-thread-2}
> RemoteInterpreter.java[init]:221) - Create remote interpreter
> org.apache.zeppelin.spark.SparkInterpreter
> INFO [2018-02-06 12:13:39,570] ({pool-2-thread-2}
> RemoteInterpreter.java[pushAngularObjectRegistryToRemote]:551) - Push local
> angular object registry from ZeppelinServer to remote interpreter group
> 2CYVF45A9:alinz:
> INFO [2018-02-06 12:13:39,585] ({pool-2-thread-2}
> RemoteInterpreter.java[init]:221) - Create remote interpreter
> org.apache.zeppelin.spark.SparkSqlInterpreter
> INFO [2018-02-06 12:13:39,587] ({pool-2-thread-2}
> RemoteInterpreter.java[init]:221) - Create remote interpreter
> org.apache.zeppelin.spark.DepInterpreter
> INFO [2018-02-06 12:13:39,592] ({pool-2-thread-2}
> RemoteInterpreter.java[init]:221) - Create remote interpreter
> org.apache.zeppelin.spark.PySparkInterpreter
> INFO [2018-02-06 12:13:39,603] ({pool-2-thread-2}
> RemoteInterpreter.java[init]:221) - Create remote interpreter
> org.apache.zeppelin.spark.SparkRInterpreter
> WARN [2018-02-06 12:13:53,749] ({pool-2-thread-2}
> NotebookServer.java[afterStatusChange]:2064) - Job
> 20171219-181830_1097537317 is finished, status: ERROR, exception: null,
> result: %text WARNING: Running python applications through 'pyspark' is
> deprecated as of Spark 1.0.
> Use ./bin/spark-submit <python file>
> /tmp/zeppelin_pyspark-5465626759760627770.py:151: UserWarning: Unable to
> load inline matplotlib backend, falling back to Agg
>   warnings.warn("Unable to load inline matplotlib backend, "
>
> %text Traceback (most recent call last):
>   File
> "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py",
> line 45, in deco
>     return f(*a, **kw)
>   File
> "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py",
> line 308, in get_return_value
>     format(target_id, ".", name), value)
> py4j.protocol.Py4JJavaError: An error occurred while calling o73.sql.
> : org.apache.spark.sql.AnalysisException: Table not found:
> `hiveDatabase`.`hiveTable`;
>         at
> org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
>         at
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:54)
>         at
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:121)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:120)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:120)
>         at
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:50)
>         at
> org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44)
>         at
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:35)
>         at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
>         at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
>         at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:829)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>         at
> py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
>         at py4j.Gateway.invoke(Gateway.java:259)
>         at
> py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>         at py4j.commands.CallCommand.execute(CallCommand.java:79)
>         at py4j.GatewayConnection.run(GatewayConnection.java:209)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>
> (…)
>
>
>
> During handling of the above exception, another exception occurred:
>
>
>
> Traceback (most recent call last):
>
>   File "/tmp/zeppelin_pyspark-5465626759760627770.py", line 355, in
> <module>
>
>     exec(code, _zcUserQueryNameSpace)
>
>   File "<stdin>", line 1, in <module>
>
>   File
> "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/lib/pyspark.zip/pyspark/sql/context.py",
> line 580, in sql
>
>     return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
>
>   File
> "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py",
> line 813, in __call__
>
>     answer, self.gateway_client, self.target_id, self.name)
>
>   File
> "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py",
> line 51, in deco
>
>     raise AnalysisException(s.split(': ', 1)[1], stackTrace)
>
> pyspark.sql.utils.AnalysisException: 'Table not found:
> `hiveDatabase`.`hiveTable`;'
>
>
>
>
>
> INFO [2018-02-06 12:13:53,795] ({pool-2-thread-2}
> SchedulerFactory.java[jobFinished]:137) - Job
> paragraph_1513703910594_823567073 finished by scheduler
> org.apache.zeppelin.interpreter.remote.RemoteInterpretershared_session1977922636
>
> *From:* Jeff Zhang [mailto:zjf...@gmail.com <zjf...@gmail.com>]
> *Sent:* Tuesday, February 6, 2018 12:04
> *To:* users@zeppelin.apache.org
> *Subject:* Re: How to have a native graphical representation (%sql) of a
> HiveContext?
>
>
>
>
>
> It is weird; there should be an interpreter log file with the name
> 'zeppelin-interpreter-spark*.log'.
>
>
>
> Are you sure you are looking at the correct log folder? The log you
> pasted is from 2017-12-08.
>
>
> On Tue, Feb 6, 2018 at 6:45 PM, LINZ, Arnaud <al...@bouyguestelecom.fr> wrote:
>
> I’ve been searching for such a log, but I don’t see anything related to
> Spark in …/zeppelin/logs.
>
> The only logs I have are
>
> zeppelin-$USER-$HOST.log
>
> and
>
> zeppelin-$USER-$HOST.out
>
> but they really don’t contain anything useful.
>
>
>
> Log =
>
>
>
> INFO [2017-12-08 12:21:36,847] ({main}
> ZeppelinConfiguration.java[create]:101) - Load configuration from
> file:/etc/zeppelin/zeppelin-site.xml
> INFO [2017-12-08 12:21:36,896] ({main}
> ZeppelinConfiguration.java[create]:109) - Server Host: 0.0.0.0
> INFO [2017-12-08 12:21:36,896] ({main}
> ZeppelinConfiguration.java[create]:113) - Server SSL Port: 8080
> INFO [2017-12-08 12:21:36,896] ({main}
> ZeppelinConfiguration.java[create]:115) - Context Path: /
> INFO [2017-12-08 12:21:36,900] ({main}
> ZeppelinConfiguration.java[create]:116) - Zeppelin Version: 0.7.3
> INFO [2017-12-08 12:21:36,917] ({main} Log.java[initialized]:186) -
> Logging initialized @275ms
> INFO [2017-12-08 12:21:36,979] ({main}
> ZeppelinServer.java[setupWebAppContext]:346) - ZeppelinServer Webapp path:
> /opt/zeppelin/webapps
> INFO [2017-12-08 12:21:37,196] ({main}
> IniRealm.java[processDefinitions]:188) - IniRealm defined, but there is no
> [users] section defined.  This realm will not be populated with any users
> and it is assumed that they will be populated programatically.  Users must
> be defined for this Realm instance to be useful.
> INFO [2017-12-08 12:21:37,197] ({main}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:37,227] ({main} ZeppelinServer.java[main]:187) -
> Starting zeppelin server
> INFO [2017-12-08 12:21:37,229] ({main} Server.java[doStart]:327) -
> jetty-9.2.15.v20160210
> INFO [2017-12-08 12:21:38,913] ({main}
> StandardDescriptorProcessor.java[visitServlet]:297) - NO JSP Support for /,
> did not find org.eclipse.jetty.jsp.JettyJspServlet
> INFO [2017-12-08 12:21:38,922] ({main} ContextHandler.java[log]:2052) -
> Initializing Shiro environment
> INFO [2017-12-08 12:21:38,922] ({main}
> EnvironmentLoader.java[initEnvironment]:128) - Starting Shiro environment
> initialization.
> INFO [2017-12-08 12:21:38,972] ({main}
> IniRealm.java[processDefinitions]:188) - IniRealm defined, but there is no
> [users] section defined.  This realm will not be populated with any users
> and it is assumed that they will be populated programatically.  Users must
> be defined for this Realm instance to be useful.
> INFO [2017-12-08 12:21:38,972] ({main}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:38,977] ({main}
> EnvironmentLoader.java[initEnvironment]:141) - Shiro environment
> initialized in 55 ms.
> WARN [2017-12-08 12:21:39,115] ({main} Helium.java[loadConf]:101) -
> /etc/zeppelin/helium.json does not exists
> WARN [2017-12-08 12:21:39,402] ({main} Interpreter.java[register]:406) -
> Static initialization is deprecated for interpreter sql, You should change
> it to use interpreter-setting.json in your jar or
> interpreter/{interpreter}/interpreter-setting.json
> INFO [2017-12-08 12:21:39,403] ({main}
> InterpreterSettingManager.java[init]:305) - Interpreter psql.sql found.
> class=org.apache.zeppelin.postgresql.PostgreSqlInterpreter
> INFO [2017-12-08 12:21:39,497] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name
> ignite
> INFO [2017-12-08 12:21:39,497] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name
> python
> INFO [2017-12-08 12:21:39,497] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name jdbc
> INFO [2017-12-08 12:21:39,497] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name psql
> INFO [2017-12-08 12:21:39,497] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name lens
> INFO [2017-12-08 12:21:39,497] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name pig
>
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name flink
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name
> angular
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name livy
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name file
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name
> elasticsearch
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name
> cassandra
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name sh
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name spark
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name md
> INFO [2017-12-08 12:21:39,498] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name
> alluxio
> INFO [2017-12-08 12:21:39,499] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name
> bigquery
> INFO [2017-12-08 12:21:39,499] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name hbase
> INFO [2017-12-08 12:21:39,499] ({main}
> InterpreterSettingManager.java[init]:337) - InterpreterSettingRef name kylin
> INFO [2017-12-08 12:21:39,532] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group md :
> id=2D18QE9AX, name=md
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group flink
> : id=2CYSDG6AU, name=flink
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group
> angular : id=2CZY6QWUG, name=angular
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group sh :
> id=2CYZFQZXG, name=sh
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group file :
> id=2D1RRAN3P, name=file
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group python
> : id=2D1A42TEJ, name=python
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group livy :
> id=2D26DBPPT, name=livy
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group psql :
> id=2CYQH5RKE, name=psql
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group kylin
> : id=2D1VSHNAX, name=kylin
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group lens :
> id=2D3AQXBDD, name=lens
> INFO [2017-12-08 12:21:39,533] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group jdbc :
> id=2CZ1RS873, name=jdbc
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group
> cassandra : id=2D338M3RA, name=cassandra
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group
> elasticsearch : id=2D372MTWM, name=elasticsearch
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group ignite
> : id=2D3A12WV4, name=ignite
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group
> alluxio : id=2D1D8DBB6, name=alluxio
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group hbase
> : id=2CZ72SGDR, name=hbase
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group pig :
> id=2CZJA495Z, name=pig
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group spark
> : id=2CYVF45A9, name=spark
> INFO [2017-12-08 12:21:39,534] ({main}
> InterpreterSettingManager.java[init]:366) - InterpreterSetting group
> bigquery : id=2D1KTF7YE, name=bigquery
> INFO [2017-12-08 12:21:39,547] ({main}
> InterpreterFactory.java[<init>]:130) - shiroEnabled: false
> INFO [2017-12-08 12:21:39,587] ({main} VfsLog.java[info]:138) - Using
> "/tmp/vfs_cache" as temporary files store.
> INFO [2017-12-08 12:21:39,621] ({main} GitNotebookRepo.java[<init>]:63) -
> Opening a git repo at '/opt/zeppelin/notebook'
> INFO [2017-12-08 12:21:39,840] ({main}
> NotebookAuthorization.java[loadFromFile]:96) -
> /etc/zeppelin/notebook-authorization.json
> INFO [2017-12-08 12:21:39,843] ({main} Credentials.java[loadFromFile]:102)
> - /etc/zeppelin/credentials.json
> INFO [2017-12-08 12:21:39,867] ({main}
> StdSchedulerFactory.java[instantiate]:1184) - Using default implementation
> for ThreadExecutor
> INFO [2017-12-08 12:21:39,870] ({main}
> SimpleThreadPool.java[initialize]:268) - Job execution threads will use
> class loader of thread: main
> INFO [2017-12-08 12:21:39,880] ({main}
> SchedulerSignalerImpl.java[<init>]:61) - Initialized Scheduler Signaller of
> type: class org.quartz.core.SchedulerSignalerImpl
> INFO [2017-12-08 12:21:39,881] ({main} QuartzScheduler.java[<init>]:240) -
> Quartz Scheduler v.2.2.1 created.
> INFO [2017-12-08 12:21:39,881] ({main} RAMJobStore.java[initialize]:155) -
> RAMJobStore initialized.
> INFO [2017-12-08 12:21:39,882] ({main}
> QuartzScheduler.java[initialize]:305) - Scheduler meta-data: Quartz
> Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
>   Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
>   NOT STARTED.
>   Currently in standby mode.
>   Number of jobs executed: 0
>   Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
>   Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support
> persistence. and is not clustered.
> INFO [2017-12-08 12:21:39,882] ({main}
> StdSchedulerFactory.java[instantiate]:1339) - Quartz scheduler
> 'DefaultQuartzScheduler' initialized from default resource file in Quartz
> package: 'quartz.properties'
> INFO [2017-12-08 12:21:39,882] ({main}
> StdSchedulerFactory.java[instantiate]:1343) - Quartz scheduler version:
> 2.2.1
> INFO [2017-12-08 12:21:39,882] ({main} QuartzScheduler.java[start]:575) -
> Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
> INFO [2017-12-08 12:21:39,982] ({main} FolderView.java[createFolder]:107)
> - Create folder Zeppelin Tutorial
> INFO [2017-12-08 12:21:39,982] ({main} FolderView.java[createFolder]:107)
> - Create folder /
> INFO [2017-12-08 12:21:39,983] ({main} Folder.java[setParent]:168) - Set
> parent of / to /
> INFO [2017-12-08 12:21:39,983] ({main} Folder.java[setParent]:168) - Set
> parent of Zeppelin Tutorial to /
> INFO [2017-12-08 12:21:39,983] ({main} Folder.java[addNote]:184) - Add
> note 2C2AUG798 to folder Zeppelin Tutorial
> INFO [2017-12-08 12:21:39,994] ({main} Folder.java[addNote]:184) - Add
> note 2BWJFTXKJ to folder Zeppelin Tutorial
> INFO [2017-12-08 12:21:40,000] ({main} Folder.java[addNote]:184) - Add
> note 2C35YU814 to folder Zeppelin Tutorial
> INFO [2017-12-08 12:21:40,002] ({main} Folder.java[addNote]:184) - Add
> note 2A94M5J1Z to folder Zeppelin Tutorial
> INFO [2017-12-08 12:21:40,009] ({main} Folder.java[addNote]:184) - Add
> note 2BYEZ5EVK to folder Zeppelin Tutorial
> INFO [2017-12-08 12:21:40,011] ({main} Folder.java[addNote]:184) - Add
> note 2C57UKYWR to folder Zeppelin Tutorial
> INFO [2017-12-08 12:21:40,011] ({main} Notebook.java[<init>]:127) -
> Notebook indexing started...
> INFO [2017-12-08 12:21:40,142] ({main}
> LuceneSearch.java[addIndexDocs]:305) - Indexing 6 notebooks took 130ms
> INFO [2017-12-08 12:21:40,142] ({main} Notebook.java[<init>]:129) -
> Notebook indexing finished: 6 indexed in 0s
> INFO [2017-12-08 12:21:40,245] ({main}
> ServerImpl.java[initDestination]:94) - Setting the server's publish address
> to be /
> INFO [2017-12-08 12:21:40,688] ({main} ContextHandler.java[doStart]:744) -
> Started
> o.e.j.w.WebAppContext@418e7838{/,file:/opt/zeppelin-0.7.3/webapps/webapp/,AVAILABLE}{/opt/zeppelin/zeppelin-web-0.7.3.war}
> INFO [2017-12-08 12:21:40,743] ({main}
> AbstractConnector.java[doStart]:266) - Started ServerConnector@2630dbc4
> {SSL-HTTP/1.1}{0.0.0.0:8080}
> INFO [2017-12-08 12:21:40,743] ({main} Server.java[doStart]:379) - Started
> @4103ms
> INFO [2017-12-08 12:21:40,743] ({main} ZeppelinServer.java[main]:197) -
> Done, zeppelin server started
> INFO [2017-12-08 12:21:42,392] ({qtp1632392469-71}
> NotebookServer.java[onOpen]:157) - New connection from 10.136.169.200 :
> 43470
> INFO [2017-12-08 12:21:46,800] ({qtp1632392469-71}
> AbstractValidatingSessionManager.java[enableSessionValidation]:230) -
> Enabling session validation scheduler...
> INFO [2017-12-08 12:21:46,864] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:46,896] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:46,931] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:46,958] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:46,992] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:47,019] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:47,047] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:21:47,076] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> WARN [2017-12-08 12:21:47,109] ({qtp1632392469-71}
> LoginRestApi.java[postLogin]:119) -
> {"status":"OK","message":"","body":{"principal":"zepptest","ticket":"e0c65401-e791-4fd9-bcc0-ec68b47f2b27","roles":"[ipausers]"}}
> INFO [2017-12-08 12:22:03,273] ({qtp1632392469-71}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:22:03,290] ({qtp1632392469-77}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
> INFO [2017-12-08 12:22:03,305] ({qtp1632392469-79}
> AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or
> cacheManager properties have been set.  Authorization cache cannot be
> obtained.
>
> OUT
>
> Dec 08, 2017 12:20:03 PM com.sun.jersey.api.core.ScanningResourceConfig
> logClasses
> INFO: Root resource classes found:
>   class org.apache.zeppelin.rest.ConfigurationsRestApi
>   class org.apache.zeppelin.rest.InterpreterRestApi
>   class org.apache.zeppelin.rest.CredentialRestApi
>   class org.apache.zeppelin.rest.LoginRestApi
>   class org.apache.zeppelin.rest.NotebookRestApi
>   class org.apache.zeppelin.rest.NotebookRepoRestApi
>   class org.apache.zeppelin.rest.SecurityRestApi
>   class org.apache.zeppelin.rest.ZeppelinRestApi
>   class org.apache.zeppelin.rest.HeliumRestApi
> Dec 08, 2017 12:20:03 PM com.sun.jersey.api.core.ScanningResourceConfig
> init
> INFO: No provider classes found.
> Dec 08, 2017 12:20:03 PM
> com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
> INFO: Initiating Jersey application, version 'Jersey: 1.13 06/29/2012
> 05:14 PM'
> Dec 08, 2017 12:20:03 PM com.sun.jersey.spi.inject.Errors
> processErrorMessages
> WARNING: The following warnings have been detected with resource and/or
> provider classes:
>   WARNING: A HTTP GET method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.CredentialRestApi.getCredentials(java.lang.String)
> throws java.io.IOException,java.lang.IllegalArgumentException, should not
> consume any entity.
>   WARNING: A HTTP GET method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.InterpreterRestApi.listInterpreter(java.lang.String),
> should not consume any entity.
>   WARNING: A sub-resource method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.NotebookRestApi.createNote(java.lang.String)
> throws java.io.IOException, with URI template, "/", is treated as a
> resource method
>   WARNING: A sub-resource method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.NotebookRestApi.getNoteList() throws
> java.io.IOException, with URI template, "/", is treated as a resource method
> ZEPPELIN_CLASSPATH:
> ::/opt/zeppelin/lib/interpreter/*:/opt/zeppelin/lib/*:/opt/zeppelin/*::/etc/zeppelin
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=512m; support was removed in 8.0
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/opt/zeppelin-0.7.3/lib/interpreter/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/opt/zeppelin-0.7.3/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Dec 08, 2017 12:20:37 PM com.sun.jersey.api.core.PackagesResourceConfig
> init
> INFO: Scanning for root resource and provider classes in the packages:
>   org.apache.zeppelin.rest
>
> *From:* Jeff Zhang [mailto:zjf...@gmail.com]
> *Sent:* Tuesday, February 6, 2018 11:17
> *To:* users@zeppelin.apache.org
> *Subject:* Re: How to have a native graphical representation (%sql) of a
> HiveContext?
>
>
>
>
>
> Could you attach the Spark interpreter log?
>
>
>
> On Tue, Feb 6, 2018 at 4:49 PM, LINZ, Arnaud <al...@bouyguestelecom.fr> wrote:
>
> I really tried, but it just does not work…
>
>    - zeppelin.spark.useHiveContext is set to true
>    - I directly use sqlContext without creating it
>    - I have a link to hive-site.xml in /etc/spark/conf
>
> But sqlContext does not see any Hive tables.
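>
> For instance, a minimal check like this in a %spark paragraph shows none
> of the Hive tables (only temp tables, if any):
>
> %spark
> // a plain (non-Hive) SQLContext lists nothing from the Hive metastore here
> sqlContext.sql("show tables").show()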
>
>
>
> I don’t see any errors, or anything helpful at all, in the Zeppelin logs.
> What can be wrong?
>
>
>
>
>
> *From:* Jeff Zhang [mailto:zjf...@gmail.com]
> *Sent:* Monday, February 5, 2018 14:31
> *To:* users@zeppelin.apache.org
> *Subject:* Re: How to have a native graphical representation (%sql) of a
> HiveContext?
>
>
>
> Sorry, I should be more accurate.
>
> Just don't create a SQLContext/HiveContext in Zeppelin, as Zeppelin will
> create one for you. If you set zeppelin.spark.useHiveContext to true, the
> variable sqlContext will be a HiveContext; otherwise it is a plain
> SQLContext.
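>
> A quick sanity check (just a sketch, nothing more) is to print the class
> of the provided context in a %spark paragraph:
>
> %spark
> // prints org.apache.spark.sql.hive.HiveContext if the property took
> // effect, org.apache.spark.sql.SQLContext otherwise
> println(sqlContext.getClass.getName)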
>
>
>
> On Mon, Feb 5, 2018 at 9:15 PM, LINZ, Arnaud <al...@bouyguestelecom.fr> wrote:
>
> Thanks for your answer, but it does not address my problem.
>
>    - I don’t create sqlContext; I use the one provided by Zeppelin. But
>    sqlContext is not a Hive context and cannot access the Hive metastore.
>    - Zeppelin can see my Hive conf files, and selecting tables through a
>    hand-created HiveContext works. But I cannot visualize them in the %sql
>    graphical interpreter.
>
>
>
> *From:* Jeff Zhang [mailto:zjf...@gmail.com]
> *Sent:* Monday, February 5, 2018 14:01
> *To:* users@zeppelin.apache.org
> *Subject:* Re: How to have a native graphical representation (%sql) of a
> HiveContext?
>
>
>
>
>
> 1. Don't create a sqlContext in Zeppelin, as Zeppelin will create it for
> you, and %sql uses the sqlContext created by Zeppelin itself (see the
> sketch below).
>
> 2. Make sure you have hive-site.xml under SPARK_CONF_DIR if you want to
> use a HiveContext. Otherwise Spark will use a single-user Derby instance,
> which is not for production and will cause conflicts when you create
> multiple Spark interpreters in one Zeppelin instance.
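>
> For example, assuming hive-site.xml is in place and
> zeppelin.spark.useHiveContext is true (hivedb.hivetable stands in for
> your own table), something like this should work, and the temp table
> becomes visible to %sql:
>
> %spark
> // sqlContext is the HiveContext that Zeppelin itself created
> val result = sqlContext.sql("select * from hivedb.hivetable")
> result.registerTempTable("myTest")
>
> %sql
> select * from myTest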
>
>
>
>
>
> On Mon, Feb 5, 2018 at 8:33 PM, LINZ, Arnaud <al...@bouyguestelecom.fr> wrote:
>
> Hello,
>
>
>
> I’m trying to install Zeppelin (0.7.2) on my CDH cluster, and I am unable
> to connect the SQL queries and graphical representations of the %sql
> interpreter with my Hive data. More surprisingly, I really can’t find any
> good source on the internet (Apache Zeppelin documentation or Stack
> Overflow) that gives a practical answer on how to do this.
>
> Most of the time, the data comes from compressed Hive tables and not plain
> HDFS text files, so using a Hive context is far more convenient than a
> plain Spark SQL context.
>
>
>
> The following:
>
> %spark
>
> val hc = new org.apache.spark.sql.hive.HiveContext(sc)
>
> val result = hc.sql("select * from hivedb.hivetable")
>
> result.registerTempTable("myTest")
>
>
>
> works, but no myTest table is available in the following %sql interpreter:
>
> %sql
>
> select * from myTest
>
> org.apache.spark.sql.AnalysisException: Table not found: myTest;
>
>
>
>
>
> However, the following:
>
> %pyspark
>
> result = sqlContext.read.text("hdfs://cluster/test.txt")
>
> result.registerTempTable("mySqlTest")
>
>
>
> works, as the %sql interpreter is “plugged” into the sqlContext,
>
>
>
> but
>
> result = sqlContext.sql("select * from hivedb.hivetable") does not work,
> as the sqlContext is not a Hive context.
>
>
>
> I have set zeppelin.spark.useHiveContext to true, but it seems to have no
> effect (BTW, it was more of a wild guess, since the documentation does not
> give much detail on parameters and context configuration).
>
>
>
> Can you direct me towards how to configure the context used by the %sql
> interpreter?
>
>
>
> Best regards,
>
> Arnaud
>
>
>
> PS: %spark and %sql interpreter conf:
>
>
>
> master  yarn-client
> spark.app.name  Zeppelin
> spark.cores.max
> spark.executor.memory   5g
> zeppelin.R.cmd  R
> zeppelin.R.image.width  100%
> zeppelin.R.knitr    true
> zeppelin.R.render.options   out.format = 'html', comment = NA, echo =
> FALSE, results = 'asis', message = F, warning = F
> zeppelin.dep.additionalRemoteRepository spark-packages,
> http://dl.bintray.com/spark-packages/maven,false;
> zeppelin.dep.localrepo  local-repo
> zeppelin.interpreter.localRepo  /opt/zeppelin/local-repo/2CYVF45A9
> zeppelin.interpreter.output.limit   102400
> zeppelin.pyspark.python /usr/bin/pyspark
> zeppelin.spark.concurrentSQL    true
> zeppelin.spark.importImplicit   true
> zeppelin.spark.maxResult    1000
> zeppelin.spark.printREPLOutput  true
> zeppelin.spark.sql.stacktrace   true
> zeppelin.spark.useHiveContext   true
>
>
> ------------------------------
>
>
> The integrity of this message cannot be guaranteed on the Internet. The
> company that sent this message cannot therefore be held liable for its
> content nor attachments. Any unauthorized use or dissemination is
> prohibited. If you are not the intended recipient of this message, then
> please delete it and notify the sender.
>
>
