[jira] [Created] (ZEPPELIN-5323) Interpreter Recovery Does Not Preserve Running Spark Jobs
Paul Brenner created ZEPPELIN-5323: -- Summary: Interpreter Recovery Does Not Preserve Running Spark Jobs Key: ZEPPELIN-5323 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5323 Project: Zeppelin Issue Type: Bug Reporter: Paul Brenner We are using Zeppelin 0.10 built from master on March 26th; it looks like the most recent commit was 85ed8e2e51e1ea10df38d4710216343efe218d60. We tried to enable interpreter recovery by adding the following to zeppelin-site.xml:

{code:xml}
<property>
  <name>zeppelin.recovery.storage.class</name>
  <value>org.apache.zeppelin.interpreter.recovery.FileSystemRecoveryStorage</value>
  <description>RecoveryStorage implementation based on hadoop FileSystem</description>
</property>
<property>
  <name>zeppelin.recovery.dir</name>
  <value>/user/zeppelin/recovery</value>
  <description>Location where recovery metadata is stored</description>
</property>
{code}

When we start up Zeppelin we get no errors. I can start a job running, and I see that {{/user/zeppelin/recovery/spark_paul.recovery}} lists {{spark_paul-anonymous-2G3KV92PG 10.16.41.212:34374}}, so that looks promising. When we stop Zeppelin the interpreter process keeps running, but I see the following happen to the Spark job:

21/04/08 13:42:09 INFO yarn.YarnAllocator: Canceling requests for 262 executor container(s) to have a new desired total 0 executors.
21/04/08 13:42:09 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. zeppelin-212.sec.placeiq.net:36733
21/04/08 13:42:09 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. zeppelin-212.sec.placeiq.net:36733
21/04/08 13:42:09 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
21/04/08 13:42:09 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
21/04/08 13:42:09 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
21/04/08 13:42:09 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://nameservice1/user/pbrenner/.sparkStaging/application_1617808481394_4478
21/04/08 13:42:09 INFO util.ShutdownHookManager: Shutdown hook called

Then when we start Zeppelin back up, I see the following on the paragraph that was running:

java.lang.RuntimeException: Interpreter instance org.apache.zeppelin.spark.SparkInterpreter not created
  at org.apache.zeppelin.interpreter.remote.PooledRemoteClient.callRemoteFunction(PooledRemoteClient.java:114)
  at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:99)
  at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:281)
  at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:442)
  at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:71)
  at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
  at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132)
  at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:182)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

It looks VERY close to working, but somehow Spark jobs are still getting shut down when we shut down Zeppelin. Any ideas? -- This message was sent by Atlassian Jira (v8.3.4#803005)
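For anyone inspecting the recovery metadata: judging only from the single entry shown above, each line of a .recovery file appears to pair an interpreter process name with its host:port. A small parsing sketch under that assumption (the format is inferred from this report, not from Zeppelin's source):

```python
def parse_recovery_entry(line):
    """Split one recovery-file line, e.g.
    'spark_paul-anonymous-2G3KV92PG 10.16.41.212:34374',
    into (process_name, host, port)."""
    name, addr = line.strip().rsplit(" ", 1)   # process name may itself contain '-'
    host, port = addr.rsplit(":", 1)           # split off the port after the last ':'
    return name, host, int(port)
```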
[jira] [Created] (ZEPPELIN-5322) Add feature 'delete paragraph' for Zeppelin Client
Zhubowen created ZEPPELIN-5322: -- Summary: Add feature 'delete paragraph' for Zeppelin Client Key: ZEPPELIN-5322 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5322 Project: Zeppelin Issue Type: Improvement Components: zeppelin-client Reporter: Zhubowen Add feature 'delete paragraph' for zeppelin client -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5321) Missing links of some interpreters in index page and link menu.
Myoungdo Park created ZEPPELIN-5321: --- Summary: Missing links of some interpreters in index page and link menu. Key: ZEPPELIN-5321 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5321 Project: Zeppelin Issue Type: Bug Components: documentation Reporter: Myoungdo Park Assignee: Myoungdo Park h4. Procedure - Check interpreter links at index page - Check interpreter links at top interpreter menu h4. Problem Links of some interpreters are missing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5320) Support yarn application mode for flink interpreter
Jeff Zhang created ZEPPELIN-5320: Summary: Support yarn application mode for flink interpreter Key: ZEPPELIN-5320 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5320 Project: Zeppelin Issue Type: New Feature Components: flink Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5319) Incorrect markdown in the documentation of sap interpreter
Myoungdo Park created ZEPPELIN-5319: --- Summary: Incorrect markdown in the documentation of sap interpreter Key: ZEPPELIN-5319 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5319 Project: Zeppelin Issue Type: Bug Components: documentation Reporter: Myoungdo Park Assignee: Myoungdo Park Attachments: image-2021-04-09-23-20-17-295.png Example code is not displayed correctly at "/interpreter/sap.html" !image-2021-04-09-23-20-17-295.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5318) it can't work with Kerberos
ighack created ZEPPELIN-5318: Summary: it can't work with Kerberos Key: ZEPPELIN-5318 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5318 Project: Zeppelin Issue Type: Bug Components: spark Affects Versions: 0.9.0 Environment: # CDH 6.3.2 # zeppelin 0.9.0 # centos Reporter: ighack Zeppelin 0.9.0 does not work with Kerberos. I have added "zeppelin.server.kerberos.keytab" and "zeppelin.server.kerberos.principal" in zeppelin-site.xml, but I still get the error "Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "bigdser5/10.3.87.27"; destination host is: "bigdser1":8020;". Adding "spark.yarn.keytab" and "spark.yarn.principal" in the spark interpreter does not work either. My spark-shell can work with Kerberos. -- This message was sent by Atlassian Jira (v8.3.4#803005)
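For reference, the two server-side settings named above would look like this in zeppelin-site.xml. The keytab path and principal below are placeholders for illustration, not values taken from this report:

{code:xml}
<property>
  <name>zeppelin.server.kerberos.keytab</name>
  <value>/etc/security/keytabs/zeppelin.service.keytab</value>
</property>
<property>
  <name>zeppelin.server.kerberos.principal</name>
  <value>zeppelin/_HOST@EXAMPLE.COM</value>
</property>
{code}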
[jira] [Created] (ZEPPELIN-5317) Automate document and website deploy
Lee Moon Soo created ZEPPELIN-5317: -- Summary: Automate document and website deploy Key: ZEPPELIN-5317 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5317 Project: Zeppelin Issue Type: Improvement Reporter: Lee Moon Soo Currently, the Apache Zeppelin website and documentation are deployed through apache SVN "manually", page by page. After a pull request is merged, committers may forget or make a mistake when performing the manual SVN deployment process. If we can generate the entire website and documentation as a single artifact (like a docker image) and automate deployment, there will be fewer broken links and less outdated documentation. Or we can learn from other ASF projects about best practices for releasing the website and documentation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5316) Incorrect markdown at '/quickstart/docker.html'
Myoungdo Park created ZEPPELIN-5316: --- Summary: Incorrect markdown at '/quickstart/docker.html' Key: ZEPPELIN-5316 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5316 Project: Zeppelin Issue Type: Bug Components: documentation Reporter: Myoungdo Park Attachments: image-2021-04-08-23-15-29-553.png h4. Procedure 1. Access the document at '/quickstart/docker.html' h4. Problem Some markdown is displayed incorrectly, as below. !image-2021-04-08-23-15-29-553.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5315) Support multiple users with different keytabs
Shengnan YU created ZEPPELIN-5315: - Summary: Support multiple users with different keytabs Key: ZEPPELIN-5315 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5315 Project: Zeppelin Issue Type: Improvement Components: flink Affects Versions: 0.9.0 Reporter: Shengnan YU Currently Zeppelin cannot handle different keytabs for different users. We can impersonate the yarn user to submit the job, but the task manager uses the shipped keytab to get a delegation token from hadoop, which cannot be impersonated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5314) Upgrade thrift to 0.14.1
Prarthi created ZEPPELIN-5314: - Summary: Upgrade thrift to 0.14.1 Key: ZEPPELIN-5314 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5314 Project: Zeppelin Issue Type: Improvement Components: Interpreters Reporter: Prarthi Zeppelin is pulling in Thrift 0.13.0 which needs to be upgraded to 0.14.1 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5313) Broken link in zeppelin home page
Myoungdo Park created ZEPPELIN-5313: --- Summary: Broken link in zeppelin home page Key: ZEPPELIN-5313 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5313 Project: Zeppelin Issue Type: Bug Components: Homepage Reporter: Myoungdo Park Attachments: image-2021-04-08-00-05-12-476.png h4. Procedure 1. Access the [Zeppelin home page|https://zeppelin.apache.org/] 2. Select the menu 'Docs -> 0.9.0-SNAPSHOT' h4. Problem A 'Not found' page is displayed -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5312) Zeppelin 0.9.0 with spark 3.1.1 interpreter can't load delta io classes
Dmitry Kravchuk created ZEPPELIN-5312: - Summary: Zeppelin 0.9.0 with spark 3.1.1 interpreter can't load delta io classes Key: ZEPPELIN-5312 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5312 Project: Zeppelin Issue Type: Bug Components: Core, Interpreters, pySpark, spark, zeppelin-interpreter Affects Versions: 0.9.0 Environment: * Zeppelin 0.9.0 * Spark 3.1.1 * Python 3.7.9 * Delta io 0.8.0 Reporter: Dmitry Kravchuk Fix For: 0.9.1 I'm using Hadoop on-prem with the environment described above. It works fine with parquet in Zeppelin, but it can't load any delta io class and fails with a batch of errors, even though I have configured the spark 3 interpreter with the needed paths. It works okay with the delta io lib through spark-submit, except for delta table update operations. What information do you need to help me with this issue? Is Zeppelin 0.9.0 able to work with delta io 0.8.0 or not? Thank you. -- This message was sent by Atlassian Jira (v8.3.4#803005)
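As a point of comparison, not taken from this report: wiring Delta Lake 0.8.0 into a Spark 3.x session normally requires settings like the following, which in Zeppelin would go into the spark interpreter properties. Treat the exact values as an assumption about this environment rather than a confirmed fix:

{code}
spark.jars.packages              io.delta:delta-core_2.12:0.8.0
spark.sql.extensions             io.delta.sql.DeltaSparkSessionExtension
spark.sql.catalog.spark_catalog  org.apache.spark.sql.delta.catalog.DeltaCatalog
{code}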
[jira] [Created] (ZEPPELIN-5311) Unable to run set list add hive statement
Jeff Zhang created ZEPPELIN-5311: Summary: Unable to run set list add hive statement Key: ZEPPELIN-5311 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5311 Project: Zeppelin Issue Type: Bug Components: JdbcInterpreter Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5310) Cluster mode is broken on latest build from source
Paul Brenner created ZEPPELIN-5310: -- Summary: Cluster mode is broken on latest build from source Key: ZEPPELIN-5310 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5310 Project: Zeppelin Issue Type: Bug Affects Versions: 0.10.0 Environment: Interpreter settings are as follows: "spark_paul": { "id": "spark_paul", "name": "spark_paul", "group": "spark", "properties": { "SPARK_HOME": { "name": "SPARK_HOME", "value": "", "type": "string", "description": "Location of spark distribution" }, "spark.master": { "name": "spark.master", "value": "yarn", "type": "string", "description": "Spark master uri. local | yarn-client | yarn-cluster | spark master address of standalone mode, ex) spark://master_host:7077" }, "spark.submit.deployMode": { "name": "spark.submit.deployMode", "value": "client", "type": "string", "description": "The deploy mode of Spark driver program, either \"client\" or \"cluster\", Which means to launch driver program locally (\"client\") or remotely (\"cluster\") on one of the nodes inside the cluster." }, "spark.app.name": { "name": "spark.app.name", "value": "zeppelin_dev_paul", "type": "string", "description": "The name of spark application." }, "spark.driver.cores": { "name": "spark.driver.cores", "value": "1", "type": "number", "description": "Number of cores to use for the driver process, only in cluster mode." }, "spark.driver.memory": { "name": "spark.driver.memory", "value": "5g", "type": "string", "description": "Amount of memory to use for the driver process, i.e. where SparkContext is initialized, in the same format as JVM memory strings with a size unit suffix (\"k\", \"m\", \"g\" or \"t\") (e.g. 512m, 2g)." }, "spark.executor.cores": { "name": "spark.executor.cores", "value": "1", "type": "number", "description": "The number of cores to use on each executor" }, "spark.executor.memory": { "name": "spark.executor.memory", "value": "3g", "type": "string", "description": "Executor memory per worker instance. 
ex) 512m, 32g" }, "spark.executor.instances": { "name": "spark.executor.instances", "value": "2", "type": "number", "description": "The number of executors for static allocation." }, "spark.files": { "name": "spark.files", "value": "", "type": "string", "description": "Comma-separated list of files to be placed in the working directory of each executor. Globs are allowed." }, "spark.jars": { "name": "spark.jars", "value": "http://nexus.placeiq.net:8081/nexus/content/repositories/releases/com/placeiq/lap/4.1.25/lap-4.1.25.jar,hdfs://gandalf-nn.placeiq.net/lib/dap/0.1.0/dap-jar-assembled.jar;;, "type": "string", "description": "Comma-separated list of jars to include on the driver and executor classpaths. Globs are allowed." }, "spark.jars.packages": { "name": "spark.jars.packages", "value": "ds-commons:ds-commons_2.11:0.1-SNAPSHOT", "type": "string", "description": "Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. The coordinates should be groupId:artifactId:version. If spark.jars.ivySettings is given artifacts will be resolved according to the configuration in the file, otherwise artifacts will be searched for in the local maven repo, then maven central and finally any additional remote repositories given by the command-line option --repositories." }, "zeppelin.spark.useHiveContext": { "name": "zeppelin.spark.useHiveContext", "value": true, "type": "checkbox", "description": "Use HiveContext instead of SQLContext if it is true.
[jira] [Created] (ZEPPELIN-5309) Notebooks lose their interpreter binding after upgrading to 0.9.0
Daniel Gies created ZEPPELIN-5309: - Summary: Notebooks lose their interpreter binding after upgrading to 0.9.0 Key: ZEPPELIN-5309 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5309 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Reporter: Daniel Gies When upgrading a Zeppelin 0.8.1 instance to 0.9.0, the interpreter bindings of notebooks are lost. As a result, notebooks cannot run their paragraphs until the interpreter is manually rebound. Manually rebinding the interpreter on each notebook is completely unworkable for large multi-tenant instances. In an 0.8.1 instance with an interpreter named "gcp_presto", I created a note named "InterperterBinding" with a default interpreter "gcp_presto" and one paragraph containing "show schemas". Then I upgraded the instance to 0.9.0 and ran bin/upgrade-note.sh. When I restarted Zeppelin and viewed the note, the interpreter binding had reverted to the value of zeppelin.interpreter.group.default (spark), and the paragraph will not run unless prefixed with %gcp_presto. Before the upgrade, interpreter.json contained this snippet: "2G1YJDE62":["gcp_presto","spark","md","python2"] After the upgrade, the interpreterBinding section was removed from interpreter.json, and the note.zpln file has no defaultInterpreterGroup section. The expected behavior is that the notebook's interpreter binding would be migrated from the interpreterBinding section of /zeppelin/conf/interpreter.json into the defaultInterpreterGroup section of note.zpln. For notebooks bound to multiple interpreters, we probably want to migrate only the first value to defaultInterpreterGroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
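The expected migration could be sketched roughly as follows. This is an illustrative script, not Zeppelin's actual upgrade tool; the "interpreterBindings" key name and the flat note-dict shape are assumptions based on the snippet above:

```python
def migrate_binding(interpreter_json, note):
    """Copy a note's first bound interpreter from interpreter.json's
    binding section into the note's defaultInterpreterGroup field."""
    bindings = interpreter_json.get("interpreterBindings", {})
    bound = bindings.get(note["id"], [])
    if bound:
        # For notes bound to multiple interpreters, migrate only the first value.
        note["defaultInterpreterGroup"] = bound[0]
    return note
```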
[jira] [Created] (ZEPPELIN-5308) username in paragraph "last updated by" status is reset
Vladimir Prus created ZEPPELIN-5308: --- Summary: username in paragraph "last updated by" status is reset Key: ZEPPELIN-5308 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5308 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Reporter: Vladimir Prus Create a note and execute a paragraph. Observe that the paragraph says something like {code:java} Took 1 min 9 sec. Last updated by vladimir at April 01 2021, 1:46:32 PM.{code} Restart Zeppelin. Observed effect: the paragraph footer says: {code:java} Took 1 min 9 sec. Last updated by anonymous at April 01 2021, 1:46:32 PM.{code} Expected: the username in the paragraph footer is not cleared, since it's very useful to know who ran a paragraph, especially for old paragraphs where it's otherwise not obvious. It appears to be cleared in Note.java {code:java} public void postProcessParagraphs() { for (Paragraph p : paragraphs) { p.parseText(); p.setNote(this); p.setAuthenticationInfo(AuthenticationInfo.ANONYMOUS); {code} and there's no comment explaining why this is done. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5307) Ignore the single quote and double quote in sql comment
Jeff Zhang created ZEPPELIN-5307: Summary: Ignore the single quote and double quote in sql comment Key: ZEPPELIN-5307 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5307 Project: Zeppelin Issue Type: Bug Components: JdbcInterpreter Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5306) Run into error "Cannot run program "/bin/spark-submit"" in yarn cluster mode
Xiangyu Li created ZEPPELIN-5306: Summary: Run into error "Cannot run program "/bin/spark-submit"" in yarn cluster mode Key: ZEPPELIN-5306 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5306 Project: Zeppelin Issue Type: Bug Components: zeppelin-interpreter Affects Versions: 0.9.0 Environment: Linux RHEL 7 Reporter: Xiangyu Li SPARK_HOME is set in zeppelin-env.sh, but when it is not set in the interpreter, yarn client mode will work, but yarn cluster mode fails with the following error message: org.apache.zeppelin.interpreter.InterpreterException: java.io.IOException: Fail to set additional jars for spark interpreter at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:129) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:271) at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:444) at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:72) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132) at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:182) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.IOException: Fail to set additional jars for spark interpreter at org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncher.buildEnvFromProperties(SparkInterpreterLauncher.java:163) at 
org.apache.zeppelin.interpreter.launcher.StandardInterpreterLauncher.launchDirectly(StandardInterpreterLauncher.java:77) at org.apache.zeppelin.interpreter.launcher.InterpreterLauncher.launch(InterpreterLauncher.java:110) at org.apache.zeppelin.interpreter.InterpreterSetting.createInterpreterProcess(InterpreterSetting.java:847) at org.apache.zeppelin.interpreter.ManagedInterpreterGroup.getOrCreateInterpreterProcess(ManagedInterpreterGroup.java:66) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getOrCreateInterpreterProcess(RemoteInterpreter.java:104) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:154) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:126) ... 13 more Caused by: java.io.IOException: Cannot run program "/bin/spark-submit": error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048) at org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncher.detectSparkScalaVersion(SparkInterpreterLauncher.java:233) at org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncher.buildEnvFromProperties(SparkInterpreterLauncher.java:127) ... 20 more Caused by: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) at java.lang.UNIXProcess.(UNIXProcess.java:247) at java.lang.ProcessImpl.start(ProcessImpl.java:134) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029) ... 
22 more
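The path in the final cause hints at the likely mechanism: when SPARK_HOME resolves to an empty string, concatenating it with "/bin/spark-submit" yields a path at the filesystem root, which does not exist, hence error=2 (ENOENT). A minimal sketch of that failure mode (illustrative only, not Zeppelin's actual code):

```python
def spark_submit_path(spark_home):
    # With an empty SPARK_HOME this returns "/bin/spark-submit",
    # matching the path in the stack trace above.
    return spark_home + "/bin/spark-submit"
```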
[jira] [Created] (ZEPPELIN-5305) Ci - Test Zeppelin Plugins
Philipp Dallig created ZEPPELIN-5305: Summary: Ci - Test Zeppelin Plugins Key: ZEPPELIN-5305 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5305 Project: Zeppelin Issue Type: Task Components: CI-infra Affects Versions: 0.9.1, 0.10.0 Reporter: Philipp Dallig Assignee: Philipp Dallig I noticed that with the transfer from Travis to github-ci, the Zeppelin plugin tests were removed. We should re-enable the plugin tests. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5304) 0.9.0 tag missing from github
Mikko Kortelainen created ZEPPELIN-5304: --- Summary: 0.9.0 tag missing from github Key: ZEPPELIN-5304 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5304 Project: Zeppelin Issue Type: Bug Components: build Affects Versions: 0.9.0 Environment: github Reporter: Mikko Kortelainen Fix For: 0.9.0 Please tag commit 9b839b5ae34ce42a350a78ec40e762ddf904a480 as 0.9.0 on github as it is the release tag used for 0.9.0 in maven central. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5303) Having colon in notebook name fails zeppelin to start
Khalid Huseynov created ZEPPELIN-5303: - Summary: Having colon in notebook name fails zeppelin to start Key: ZEPPELIN-5303 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5303 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Environment: Centos 7, Java 8 Reporter: Khalid Huseynov If a Zeppelin notebook name contains a ':', the notebook fails to work, and restart fails as well. Below is a log: MultiException stack 1 of 6 org.apache.commons.vfs2.FileSystemException: Invalid descendent file name "performance: nru_analysis_2G2YTYCAD.zpln". at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveName(DefaultFileSystemManager.java:796) at org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:1045) at org.apache.zeppelin.notebook.repo.VFSNotebookRepo.listFolder(VFSNotebookRepo.java:110) at org.apache.zeppelin.notebook.repo.VFSNotebookRepo.listFolder(VFSNotebookRepo.java:111) at org.apache.zeppelin.notebook.repo.VFSNotebookRepo.listFolder(VFSNotebookRepo.java:111) at org.apache.zeppelin.notebook.repo.VFSNotebookRepo.list(VFSNotebookRepo.java:100) at org.apache.zeppelin.notebook.repo.NotebookRepoSync.list(NotebookRepoSync.java:188) at org.apache.zeppelin.notebook.NoteManager.init(NoteManager.java:74) at org.apache.zeppelin.notebook.NoteManager.&lt;init&gt;(NoteManager.java:69) at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.glassfish.hk2.utilities.reflection.ReflectionHelper.makeMe(ReflectionHelper.java:1356) at org.jvnet.hk2.internal.ClazzCreator.createMe(ClazzCreator.java:248) at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:342) at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:463) at org.jvnet.hk2.internal.SingletonContext$1.compute(SingletonContext.java:59) at
org.jvnet.hk2.internal.SingletonContext$1.compute(SingletonContext.java:47) at org.glassfish.hk2.utilities.cache.Cache$OriginThreadAwareFuture$1.call(Cache.java:74) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.glassfish.hk2.utilities.cache.Cache$OriginThreadAwareFuture.run(Cache.java:131) at org.glassfish.hk2.utilities.cache.Cache.compute(Cache.java:176) at org.jvnet.hk2.internal.SingletonContext.findOrCreate(SingletonContext.java:98) at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2102) at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:93) at org.jvnet.hk2.internal.ServiceLocatorImpl.getService(ServiceLocatorImpl.java:679) at org.jvnet.hk2.internal.ThreeThirtyResolver.resolve(ThreeThirtyResolver.java:54) at org.jvnet.hk2.internal.ClazzCreator.resolve(ClazzCreator.java:188) at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:205) at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:334) at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:463) at org.jvnet.hk2.internal.SingletonContext$1.compute(SingletonContext.java:59) at org.jvnet.hk2.internal.SingletonContext$1.compute(SingletonContext.java:47) at org.glassfish.hk2.utilities.cache.Cache$OriginThreadAwareFuture$1.call(Cache.java:74) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.glassfish.hk2.utilities.cache.Cache$OriginThreadAwareFuture.run(Cache.java:131) at org.glassfish.hk2.utilities.cache.Cache.compute(Cache.java:176) at org.jvnet.hk2.internal.SingletonContext.findOrCreate(SingletonContext.java:98) at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2102) at org.jvnet.hk2.internal.ServiceLocatorImpl.getService(ServiceLocatorImpl.java:666) at org.jvnet.hk2.internal.ThreeThirtyResolver.resolve(ThreeThirtyResolver.java:54) at org.jvnet.hk2.internal.ClazzCreator.resolve(ClazzCreator.java:188) at 
org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:205) at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:334) at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:463) at org.jvnet.hk2.internal.SingletonContext$1.compute(SingletonContext.java:59) at org.jvnet.hk2.internal.SingletonContext$1.compute(SingletonContext.java:47) at org.glassfish.hk2.utilities.cache.Cache$OriginThreadAwareFuture$1.call(Cac
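Until the bug is fixed, one possible workaround, sketched here only as an illustration and not an existing Zeppelin feature, is to sanitize note names before they reach the notebook repo, since commons-vfs rejects ':' in descendant file names:

```python
import re

def sanitize_note_name(name):
    # Replace the ':' that commons-vfs rejects in descendant file names.
    return re.sub(r":", "-", name)
```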
[jira] [Created] (ZEPPELIN-5302) Error "Executor cores must be a positive number"
Dung created ZEPPELIN-5302: -- Summary: Error "Executor cores must be a positive number" Key: ZEPPELIN-5302 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5302 Project: Zeppelin Issue Type: Bug Components: Core, front-end Reporter: Dung Attachments: image-2021-03-28-12-07-47-611.png, image-2021-03-28-12-14-22-460.png, image-2021-03-28-12-15-12-177.png, image-2021-03-28-12-16-24-010.png, image-2021-03-28-12-19-55-627.png Hi! I hit an error: I changed the config on the UI to an integer, but when I save and run again it shows the error "Executor cores must be a positive number", as in the images below :( I checked it in the data log and it is a valid number!!! !image-2021-03-28-12-14-22-460.png! !image-2021-03-28-12-15-12-177.png! !image-2021-03-28-12-16-24-010.png! !image-2021-03-28-12-19-55-627.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5301) zeppelin on k8s provide configmaps to set resource request and limit
housezhang created ZEPPELIN-5301: Summary: zeppelin on k8s provide configmaps to set resource request and limit Key: ZEPPELIN-5301 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5301 Project: Zeppelin Issue Type: Improvement Components: Kubernetes Affects Versions: 0.9.0, 0.8.2 Reporter: housezhang I want to set a resource request to control the interpreter pod resources. I find that 100-interpreter-spec.yaml provides zeppelin.k8s.interpreter.memory and zeppelin.k8s.interpreter.cores to set the resource request and limit, but the K8sRemoteInterpreterProcess class does not set these properties except for the spark interpreter. So I solved this as follows: in the k8s zeppelin-server-conf-map configmap, add:

ZEPPELIN_K8S_INTERPRETER_CORES: 1
ZEPPELIN_K8S_INTERPRETER_MEMORY: 2Gi

In K8sRemoteInterpreterProcess, get the property from the system env:

{code:java}
if (StringUtils.isNotEmpty(getCoreValue())) {
  k8sProperties.put("zeppelin.k8s.interpreter.cores", getCoreValue());
}
if (StringUtils.isNotEmpty(getMemoryValue())) {
  k8sProperties.put("zeppelin.k8s.interpreter.memory", getMemoryValue());
}

private CharSequence getCoreValue() {
  return System.getenv("ZEPPELIN_K8S_INTERPRETER_CORES");
}
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5300) Paragraph pending until timeout when process launcher has already failed
Jeff Zhang created ZEPPELIN-5300: Summary: Paragraph pending until timeout when process launcher has already failed Key: ZEPPELIN-5300 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5300 Project: Zeppelin Issue Type: Bug Components: spark Affects Versions: 0.9.0 Reporter: Jeff Zhang How to reproduce: specify spark.jars.packages to be an unknown package and launch it in yarn cluster mode. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5299) Comment at end of query causes query to be ignored
Jon Courtney created ZEPPELIN-5299: -- Summary: Comment at end of query causes query to be ignored Key: ZEPPELIN-5299 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5299 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Environment: Bug found in Zeppelin 0.9.0-preview1, which is included in the AWS EMR 6.2.0 release. Reporter: Jon Courtney A trailing comment at the end of a query causes the query to be ignored. Running a cell with such a query will cause Zeppelin to return immediately, without an error. Example: {code:java} %sql select 'one' , 'two' -- comment{code} Workaround: Adding a final ';' at the end of the query fixes the problem, like so: {code:java} %sql select 'one' , 'two' -- comment ;{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5298) livy interpreter for sparkr variable display not working
Vikgeek Ritsuko created ZEPPELIN-5298: - Summary: livy interpreter for sparkr variable display not working Key: ZEPPELIN-5298 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5298 Project: Zeppelin Issue Type: Bug Components: livy-interpreter Affects Versions: 0.9.0, 0.8.0 Environment: * zeppelin 0.8.0 * R 3.5.2 Reporter: Vikgeek Ritsuko Attachments: livy_spark_error_01.PNG, livy_spark_error_02.PNG, livy_spark_error_03.PNG, livy_spark_error_04.PNG, livy_spark_error_05.PNG Hi, I encountered a problem in the livy interpreter for SparkR: output objects are not displayed unless we make a direct call to a plotting function. For example:
{code}
%livy.sparkr
library(ggplot2)
pres_rating <- data.frame(
  rating = as.numeric(presidents),
  year = as.numeric(floor(time(presidents))),
  quarter = as.numeric(cycle(presidents))
)
p <- ggplot(pres_rating, aes(x=year, y=quarter, fill=rating))
p <- p + geom_raster()
p
{code}
does not render the plot (image livy_spark_error_01.PNG), but the same paragraph ending in {{plot(p)}} instead of {{p}} works fine (image livy_spark_error_02.PNG). It is not that important in itself, but because of this it is impossible to render functions like pairs or any plotly object; see livy_spark_error_03.PNG and livy_spark_error_05.PNG. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5297) livy interpreter for sparkr variable display not working
Vikgeek Ritsuko created ZEPPELIN-5297: - Summary: livy interpreter for sparkr variable display not working Key: ZEPPELIN-5297 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5297 Project: Zeppelin Issue Type: Bug Components: livy-interpreter Affects Versions: 0.9.0, 0.8.0 Environment: * Zeppelin 0.8.0 * R version 3.5.2 (2018-12-20) Reporter: Vikgeek Ritsuko Attachments: livy_spark_error_01.PNG, livy_spark_error_02.PNG, livy_spark_error_03.PNG, livy_spark_error_04.PNG, livy_spark_error_05.PNG Hi, I encountered a problem in the livy interpreter for SparkR: output objects are not displayed unless we make a direct call to a plotting function. For example:
{code}
%livy.sparkr
library(ggplot2)
pres_rating <- data.frame(
  rating = as.numeric(presidents),
  year = as.numeric(floor(time(presidents))),
  quarter = as.numeric(cycle(presidents))
)
p <- ggplot(pres_rating, aes(x=year, y=quarter, fill=rating))
p <- p + geom_raster()
p
{code}
does not render the plot (image livy_spark_error_01.PNG), but the same paragraph ending in {{plot(p)}} instead of {{p}} works fine (image livy_spark_error_02.PNG). It is not that important in itself, but because of this it is impossible to render functions like pairs or any plotly object; see livy_spark_error_03.PNG and livy_spark_error_05.PNG. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5296) NPE when calling completion for %spark.sql
Jeff Zhang created ZEPPELIN-5296: Summary: NPE when calling completion for %spark.sql Key: ZEPPELIN-5296 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5296 Project: Zeppelin Issue Type: Bug Components: spark Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5295) "Read time out" error while using zeppelin BigQuery interpreter
Animesh Nandanwar created ZEPPELIN-5295: --- Summary: "Read time out" error while using zeppelin BigQuery interpreter Key: ZEPPELIN-5295 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5295 Project: Zeppelin Issue Type: Bug Components: zeppelin-client Affects Versions: 0.9.0 Reporter: Animesh Nandanwar Setting the property {{zeppelin.bigquery.wait_time}} to 5 (in ms) makes the BigQuery interpreter time out with a "read timeout" error. The issue can easily be reproduced by creating a Zeppelin notebook and running a query against a BigQuery public dataset from the console: {{SELECT count(*) FROM bigquery-samples.wikipedia_benchmark.Wiki100B WHERE title like "%g"}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5294) Only clean user password when user pass credential is set.
Jeff Zhang created ZEPPELIN-5294: Summary: Only clean user password when user pass credential is set. Key: ZEPPELIN-5294 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5294 Project: Zeppelin Issue Type: Bug Components: JdbcInterpreter Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5293) Search index rebuild blocks everything
Vladimir Prus created ZEPPELIN-5293: --- Summary: Search index rebuild blocks everything Key: ZEPPELIN-5293 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5293 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Reporter: Vladimir Prus Suppose that you have 10K notebooks stored on S3. Set zeppelin.search.index.rebuild to true in options and restart Zeppelin. Expected effect: notes are indexed in the background. Observed effect: Zeppelin is not usable for 30 minutes. The UI only shows the navigation bar. All the threads are blocked waiting for the Notebook instance to appear. It appears that Notebook's constructor calls LuceneSearch.startRebuildIndex, which creates a thread and then *joins* it. So, until indexing finishes, nothing works. I am unsure that this thread.join is warranted - maybe it can just be removed? -- This message was sent by Atlassian Jira (v8.3.4#803005)
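The reporter's suggestion amounts to starting the rebuild thread without joining it, so construction returns immediately and callers can check readiness instead. A small Python sketch of that pattern (illustrative only; class and method names are made up):

```python
import threading
import time

class SearchIndex:
    """Non-blocking index rebuild: start the worker thread but do NOT join
    it, so the constructor returns immediately (the fix proposed for
    LuceneSearch.startRebuildIndex)."""

    def __init__(self, notes):
        self.ready = threading.Event()
        self._worker = threading.Thread(
            target=self._rebuild, args=(notes,), daemon=True)
        self._worker.start()       # no join(): server startup continues

    def _rebuild(self, notes):
        for _ in notes:
            time.sleep(0.001)      # stand-in for indexing one note
        self.ready.set()

# Construction returns at once; search callers can poll or wait on `ready`.
idx = SearchIndex(range(50))
```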
[jira] [Created] (ZEPPELIN-5292) Deadlock in ConnectionManager
Vladimir Prus created ZEPPELIN-5292: --- Summary: Deadlock in ConnectionManager Key: ZEPPELIN-5292 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5292 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Reporter: Vladimir Prus Attachments: stacktrace-2021-03-18.txt Our 0.9.0 install fairly regularly becomes unresponsive. Specifically, if I open the home page, I see the navigation bar, but nothing else shows up. The problem does not resolve itself, and there's no CPU usage whatsoever. I attach a stacktrace from one such incident, where about all threads are waiting inside ConnectionManager, like so: {code:java} "qtp733672688-15179" #15179 prio=5 os_prio=0 tid=0x7fc1f0002000 nid=0x14103 waiting for monitor entry [0x7fc1d48c7000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.zeppelin.socket.ConnectionManager.removeConnectionFromAllNote(ConnectionManager.java:175) - waiting to lock <0x7fc5dbb0c5d8> (a java.util.HashMap) {code} and {code:java} "qtp733672688-15068" #15068 prio=5 os_prio=0 tid=0x7fc358001000 nid=0x14069 waiting for monitor entry [0x7fc15aae9000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.zeppelin.socket.ConnectionManager.addNoteConnection(ConnectionManager.java:108) - waiting to lock <0x7fc5dbb0c5d8> (a java.util.HashMap) {code} The lock is held here: {code:java} "qtp733672688-10896" #10896 prio=5 os_prio=0 tid=0x7fc2f4007800 nid=0x12661 waiting for monitor entry [0x7fc395267000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.zeppelin.socket.NotebookSocket.send(NotebookSocket.java:70) - waiting to lock <0x7fc5dbe1b050> (a org.apache.zeppelin.socket.NotebookSocket) at org.apache.zeppelin.socket.ConnectionManager.broadcast(ConnectionManager.java:247) at org.apache.zeppelin.socket.ConnectionManager.checkCollaborativeStatus(ConnectionManager.java:214) at org.apache.zeppelin.socket.ConnectionManager.removeConnectionFromNote(ConnectionManager.java:190) - locked <0x7fc5dbb0c5d8> (a 
java.util.HashMap) at org.apache.zeppelin.socket.ConnectionManager.removeConnectionFromAllNote(ConnectionManager.java:178) - locked <0x7fc5dbb0c5d8> (a java.util.HashMap) {code} Probably NotebookSocket.send takes a long time while holding a lock, and that lock is blocking basically all connections? -- This message was sent by Atlassian Jira (v8.3.4#803005)
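The usual remedy for this pattern is to hold the map lock only long enough to snapshot the recipients, and perform the slow socket sends outside it. A hedged Python sketch (not Zeppelin's actual ConnectionManager; names and the `sent` sink are illustrative):

```python
import threading

class ConnectionManager:
    """Snapshot-under-lock broadcast: slow I/O never happens while the
    connection map's lock is held, so add/remove are never blocked by it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._note_conns = {}            # note_id -> set of "sockets"

    def add(self, note_id, conn):
        with self._lock:
            self._note_conns.setdefault(note_id, set()).add(conn)

    def broadcast(self, note_id, message, sent):
        with self._lock:                 # hold the lock only to copy
            conns = list(self._note_conns.get(note_id, ()))
        for conn in conns:               # slow "send" happens lock-free
            sent.append((conn, message))
```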
[jira] [Created] (ZEPPELIN-5291) Some visualizations (ultimate-column-chart, ultimate-dual-column-chart) do not work with some aggregations
Nenad Vujasinovic created ZEPPELIN-5291: --- Summary: Some visualizations (ultimate-column-chart, ultimate-dual-column-chart) do not work with some aggregations Key: ZEPPELIN-5291 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5291 Project: Zeppelin Issue Type: Bug Components: helium Affects Versions: 0.9.0 Reporter: Nenad Vujasinovic The ultimate packages do not display data for certain aggregations. The problem is that strings are passed instead of numbers: when a key has only one value, that value is never converted to a float, because the aggregation function that converts strings to floats is not called. The same issue (ZEPPELIN-3224) was closed, but its PR was declined. -- This message was sent by Atlassian Jira (v8.3.4#803005)
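The described fix amounts to coercing values to numbers unconditionally, even when a key has a single value and no aggregation function runs. A tiny Python sketch of that behavior (illustrative only; the helium packages are actually JavaScript):

```python
def aggregate(rows, how="sum"):
    """Group (key, value) pairs and aggregate, coercing every value to
    float up front so single-value keys are numbers too, not strings."""
    groups = {}
    for key, value in rows:
        groups.setdefault(key, []).append(float(value))   # always coerce
    if how == "sum":
        return {k: sum(v) for k, v in groups.items()}
    raise ValueError("unsupported aggregation: %s" % how)
```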
[jira] [Created] (ZEPPELIN-5290) NPE on empty input
Vladimir Prus created ZEPPELIN-5290: --- Summary: NPE on empty input Key: ZEPPELIN-5290 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5290 Project: Zeppelin Issue Type: Bug Reporter: Vladimir Prus Try to run the following paragraphs: {code:java} %spark.sql select * from ( select "foo" as a ) where (a = '${search}' or '${search}' = '') {code} Observed effect: NPE as follow {code:java} java.lang.NullPointerException at org.apache.zeppelin.display.Input.getSimpleQuery(Input.java:376) at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:467) at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:72) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132) at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:182) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) {code} Expected effect: Zeppelin does not produce an NPE that the analysts can't understand. -- This message was sent by Atlassian Jira (v8.3.4#803005)
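A null-safe treatment of the {{${search}}} form would substitute an empty string instead of failing. A hypothetical Python sketch of that behavior (Zeppelin's real parsing lives in org.apache.zeppelin.display.Input; this toy regex deliberately ignores {{${name=default}}} forms):

```python
import re

def render_form(template: str, params: dict) -> str:
    """Substitute ${name} placeholders; a missing or None parameter becomes
    '' rather than propagating and later surfacing as an opaque NPE."""
    def substitute(match):
        value = params.get(match.group(1))
        return "" if value is None else str(value)
    return re.sub(r"\$\{(\w+)\}", substitute, template)
```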
[jira] [Created] (ZEPPELIN-5289) Pyspark interpreter fails according to spark .jar accessing
Dmitry Kravchuk created ZEPPELIN-5289: - Summary: Pyspark interpreter fails according to spark .jar accessing Key: ZEPPELIN-5289 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5289 Project: Zeppelin Issue Type: Improvement Components: Interpreters, pySpark, zeppelin-interpreter Affects Versions: 0.9.0 Environment: * zeppelin 0.9.0 * hadoop 3.2 * spark 3.1.1 Reporter: Dmitry Kravchuk Fix For: 0.9.1 Attachments: interpreter 1.png, interpreter 2.png, zeppelin 0.9.0 spark 3.1 interpreter error.png Cannot run pyspark interpreter with spark 3.1.1 using zeppeling 0.9.0 !zeppelin 0.9.0 spark 3.1 interpreter error.png! Output: {code:java} org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:836) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:744) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132) at org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76) at org.apache.zeppelin.interpreter.Interpreter.getInterpreterInTheSameSessionByClassName(Interpreter.java:355) at 
org.apache.zeppelin.interpreter.Interpreter.getInterpreterInTheSameSessionByClassName(Interpreter.java:366) at org.apache.zeppelin.spark.PySparkInterpreter.open(PySparkInterpreter.java:90) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) ... 8 more Caused by: org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:122) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) ... 12 more Caused by: scala.reflect.internal.FatalError: Error accessing /opt/spark31/spark-3.1.1-bin-hadoop3.2/jars/accessors-smart-1.2.jar at scala.tools.nsc.classpath.AggregateClassPath.$anonfun$list$3(AggregateClassPath.scala:99) at scala.collection.Iterator.foreach(Iterator.scala:941) at scala.collection.Iterator.foreach$(Iterator.scala:941) at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at scala.collection.IterableLike.foreach(IterableLike.scala:74) at scala.collection.IterableLike.foreach$(IterableLike.scala:73) at scala.collection.AbstractIterable.foreach(Iterable.scala:56) at scala.tools.nsc.classpath.AggregateClassPath.list(AggregateClassPath.scala:87) at scala.tools.nsc.util.ClassPath.list(ClassPath.scala:36) at scala.tools.nsc.util.ClassPath.list$(ClassPath.scala:36) at scala.tools.nsc.classpath.AggregateClassPath.list(AggregateClassPath.scala:30) at scala.tools.nsc.symtab.SymbolLoaders$PackageLoader.doComplete(SymbolLoaders.scala:284) at scala.tools.nsc.symtab.SymbolLoaders$SymbolLoader.complete(SymbolLoaders.scala:230) at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1542) at scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:257) at scala.tools.nsc.Global.rootMirror$lzycompute(Global.scala:74) at scala.tools.nsc.Global.rootMirror(Global.scala:72) at scala.tools.nsc.Global.rootMirror(Global.scala:44) at 
scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:295) at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:295) at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1480) at scala.tools.nsc.Global$Run.(Global.scala:1199) at scala.tools.nsc.interpreter.IMain._initialize(IMain.scala:132) at scala.tools.nsc.interpreter.IMain.initializeSynchronous(IMain.scala:154) at org.apache.zeppelin.spark.SparkScala212Interpreter.open(SparkScala212Interpreter.scala:86
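When scalac reports "Error accessing ...jar", one quick sanity check is whether every jar on the classpath is actually a readable zip archive (though in this case the root cause may instead be a Spark/Scala version mismatch between Zeppelin 0.9.0 and Spark 3.1.1). A small Python diagnostic, offered only as a troubleshooting aid:

```python
import os
import zipfile

def unreadable_jars(jars_dir):
    """Return jars under jars_dir that are unreadable or not valid zip
    archives - candidates for the scalac 'Error accessing ...' failure."""
    bad = []
    for name in sorted(os.listdir(jars_dir)):
        if not name.endswith(".jar"):
            continue
        path = os.path.join(jars_dir, name)
        if not (os.access(path, os.R_OK) and zipfile.is_zipfile(path)):
            bad.append(path)
    return bad
```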
[jira] [Created] (ZEPPELIN-5288) Add rest api to reload note
Jeff Zhang created ZEPPELIN-5288: Summary: Add rest api to reload note Key: ZEPPELIN-5288 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5288 Project: Zeppelin Issue Type: Improvement Components: zeppelin-server Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5287) Note search does not consider path
Vladimir Prus created ZEPPELIN-5287: --- Summary: Note search does not consider path Key: ZEPPELIN-5287 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5287 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Reporter: Vladimir Prus Starting with a fresh install, create a note called "user/orders". Then, in the filter on the home page, enter "user". Observed effect: the just-created note is filtered out. Expected effect: it is visible, since "user" is part of the full name of the note. For background, we have 100+ users, most notes are organized into a folder-like hierarchy, and many people type their name into the filter bar to quickly find their notes. This worked in 0.8, but no longer works. -- This message was sent by Atlassian Jira (v8.3.4#803005)
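The expected behavior amounts to matching the query against each '/'-separated segment of the note's full name. A Python sketch of that matching rule (illustrating the expectation, not Zeppelin's Lucene-based implementation):

```python
def matches(note_path: str, query: str) -> bool:
    """True if the query matches any path segment of the note's full name,
    so filtering by "user" finds "user/orders"."""
    q = query.lower()
    return any(q in segment.lower() for segment in note_path.split("/"))
```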
[jira] [Created] (ZEPPELIN-5286) Unable to run some tutorial notes in zeppelin docker container
Jeff Zhang created ZEPPELIN-5286: Summary: Unable to run some tutorial notes in zeppelin docker container Key: ZEPPELIN-5286 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5286 Project: Zeppelin Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5285) interpreter log in docker container is missing
Jeff Zhang created ZEPPELIN-5285: Summary: interpreter log in docker container is missing Key: ZEPPELIN-5285 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5285 Project: Zeppelin Issue Type: Bug Components: docker Affects Versions: 0.9.0 Reporter: Jeff Zhang Logs in the docker container are redirected to stdout; we can see the Zeppelin server log there, but not the interpreter log. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5284) savepoint & checkpoint don't work in flink 1.12
Jeff Zhang created ZEPPELIN-5284: Summary: savepoint & checkpoint don't work in flink 1.12 Key: ZEPPELIN-5284 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5284 Project: Zeppelin Issue Type: Bug Components: flink Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5283) Cannot connect to external spark cluster in Kubernetes
Stepan created ZEPPELIN-5283: Summary: Cannot connect to external spark cluster in Kubernetes Key: ZEPPELIN-5283 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5283 Project: Zeppelin Issue Type: Bug Components: Kubernetes, spark Reporter: Stepan Even when I set it to connect to an existing process, it still creates a new pod in Kubernetes and cannot connect to the existing cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5282) Launching zeppelin interpreter on kubernetes is time out, kill it now
Stepan created ZEPPELIN-5282: Summary: Launching zeppelin interpreter on kubernetes is time out, kill it now Key: ZEPPELIN-5282 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5282 Project: Zeppelin Issue Type: Bug Components: Kubernetes, spark Affects Versions: 0.9.0 Reporter: Stepan Hi, I'm using zeppelin-server.yaml, and when installed on Kubernetes it works fine for Python, but for Spark I get "Launching zeppelin interpreter on kubernetes is time out, kill it now", and the Spark pod shows only pod/spark-ofanlh 0/1 Init:Error, with no logs and nothing in pod describe. Is there a helm chart or a working yaml? I have tried a lot of these and nothing works... Thank you -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5281) Expose ZeppelinContext interpret(string) method
Carlos Diogo created ZEPPELIN-5281: -- Summary: Expose ZeppelinContext interpret(string) method Key: ZEPPELIN-5281 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5281 Project: Zeppelin Issue Type: Improvement Reporter: Carlos Diogo Making the ZeppelinContext interpret() method available would allow easy code injection from a string in a notebook. For Python there is the exec function, with which we can load some Python code from a file and execute it as pre-code for a note, like an include. For Java and Scala, such functionality does not exist. This limits our ability to re-use code within Zeppelin: compiling to jars and then adding them to Zeppelin is not very interactive. With interpret() exposed, one could inject code from a file (or another source) in any language - Scala, Python, SQL (jdbc) - and execute it. The current workaround we have in place is to use the REST API to inject the code to execute into a paragraph of the note. -- This message was sent by Atlassian Jira (v8.3.4#803005)
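On the Python side, the exec-based include the reporter mentions can look like this (a sketch; the shared-code file name is hypothetical, and a z.interpret equivalent for Scala/Java remains the feature being requested):

```python
def include(path: str, scope: dict) -> dict:
    """Load Python code from a file and exec it into the given scope,
    acting like an include/pre-code step for a note."""
    with open(path) as f:
        exec(compile(f.read(), path, "exec"), scope)
    return scope

# Typical use inside a %python paragraph (file name is illustrative):
# include("common_helpers.py", globals())
```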
[jira] [Created] (ZEPPELIN-5280) Use update as the default type of %flink.ssql
Jeff Zhang created ZEPPELIN-5280: Summary: Use update as the default type of %flink.ssql Key: ZEPPELIN-5280 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5280 Project: Zeppelin Issue Type: Improvement Components: flink Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5279) Paragraphs terminate after 1 hour
Noam created ZEPPELIN-5279: -- Summary: Paragraphs terminate after 1 hour Key: ZEPPELIN-5279 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5279 Project: Zeppelin Issue Type: Bug Components: Interpreters, pySpark, spark Affects Versions: 0.8.2 Environment: cat /etc/*release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=16.04 DISTRIB_CODENAME=xenial DISTRIB_DESCRIPTION="Ubuntu 16.04.7 LTS" NAME="Ubuntu" VERSION="16.04.7 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.7 LTS" VERSION_ID="16.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial Reporter: Noam I am running a paragraph using pyspark and am noticing that the runs all stop after exactly 1 hour, after which I get a message at the bottom of the paragraph similar to: "Took 1 hrs 0 min 0 sec. Last updated by anonymous at February 09 2021, 10:24:25 PM." followed by the error message below. In the background the jobs do seem to still be running on Spark, but it seems the pyspark interpreter loses its connection to the processes after the 1 hour. I tried editing the `zeppelin.interpreter.lifecyclemanager.timeout.threshold` variable in zeppelin-site.xml, but this has no effect on this issue, or on anything in general as far as I can tell.
Paragraph output after timing out: " org.apache.thrift.transport.TTransportException at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:274) at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:258) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:233) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:229) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:228) at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:449) at org.apache.zeppelin.scheduler.Job.run(Job.java:188) at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:315) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748) org.apache.thrift.transport.TTransportException at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:274) at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:258) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:233) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:229) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135) at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:228) at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:449) at org.apache.zeppelin.scheduler.Job.run(Job.java:188) at org.
[jira] [Created] (ZEPPELIN-5278) Support for Spark 3.1.1
Hélder Hugo Ferreira created ZEPPELIN-5278: -- Summary: Support for Spark 3.1.1 Key: ZEPPELIN-5278 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5278 Project: Zeppelin Issue Type: Bug Components: pySpark Affects Versions: 0.9.0 Environment: Docker environment * Portainer v2.0.1 * Spark 3.1.1 * Zeppelin 0.9.0 (with internal libs of Spark 3.0.1, but also tried with 3.0.2 and 3.1.1) Reporter: Hélder Hugo Ferreira Fix For: 0.9.1 Attachments: image-2021-03-04-18-13-04-860.png We have updated our Spark to v3.1.1 and are now unable to keep using our Zeppelin notebooks. We always get errors like this: !image-2021-03-04-18-13-04-860.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5277) Use RandomStringUtils from apache-commons-lang3
Philipp Dallig created ZEPPELIN-5277: Summary: Use RandomStringUtils from apache-commons-lang3 Key: ZEPPELIN-5277 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5277 Project: Zeppelin Issue Type: Improvement Components: Kubernetes Affects Versions: 0.9.0, 0.9.1, 0.10.0 Reporter: Philipp Dallig Assignee: Philipp Dallig I noticed the following method in apache-commons-lang3. {code:java} RandomStringUtils.random(...){code} We should replace our method in K8sUtils with the method from the library. -- This message was sent by Atlassian Jira (v8.3.4#803005)
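The point of the proposal is replacing a hand-rolled generator with a library call. For comparison, the Python standard library offers the same convenience; a sketch (illustrative only) of generating the kind of short lowercase suffix that Zeppelin pod names carry (e.g. spark-ofanlh earlier in this digest):

```python
import random
import string

def random_suffix(length=6, rng=random):
    """Generate a lowercase alphanumeric suffix, analogous to using
    commons-lang3 RandomStringUtils.random instead of custom code."""
    alphabet = string.ascii_lowercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))
```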
[jira] [Created] (ZEPPELIN-5276) Pyspark interpreter doesn't add jars to PYTHONPATH for yarn cluster mode
Adam Binford created ZEPPELIN-5276: -- Summary: Pyspark interpreter doesn't add jars to PYTHONPATH for yarn cluster mode Key: ZEPPELIN-5276 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5276 Project: Zeppelin Issue Type: Bug Reporter: Adam Binford When using native spark-submit to run a Python script directly, Spark adds all the resolved jars from --jars and --packages to the PYTHONPATH. This lets some packages (like delta.io) automagically add their Python packages to your session. Because the Pyspark interpreter is launched from a jar during the spark submit, you don't automatically get that behavior. The PysparkInterpreter should add the jars to the Python path for you when bootstrapping the Python session. I don't know if this only affects yarn cluster mode or other modes as well, as it's the only one we use. Currently, you can manually work around this by setting your Python path directly when creating your session; you just need to know the naming format Spark saves jars in: PYTHONPATH=./io.delta_delta-core_2.12-0.8.0.jar -- This message was sent by Atlassian Jira (v8.3.4#803005)
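The manual workaround can be generalized by appending every resolved jar in the working directory to the Python path at session startup (Python can import packages shipped inside jars via zipimport). A hedged sketch, assuming yarn-cluster mode places the resolved jars in the container's working directory as the reporter describes:

```python
import glob
import os
import sys

def add_jars_to_pythonpath(jar_dir="."):
    """Append every *.jar in jar_dir to sys.path, mimicking what
    spark-submit does for --jars/--packages when running a .py directly."""
    added = []
    for jar in sorted(glob.glob(os.path.join(jar_dir, "*.jar"))):
        if jar not in sys.path:
            sys.path.append(jar)
            added.append(jar)
    return added
```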
[jira] [Created] (ZEPPELIN-5275) Pyspark interpreter missing java imports
Adam Binford created ZEPPELIN-5275: -- Summary: Pyspark interpreter missing java imports Key: ZEPPELIN-5275 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5275 Project: Zeppelin Issue Type: Bug Components: pySpark Affects Versions: 0.9.0 Reporter: Adam Binford The pyspark bootstrap scripts are missing some of the `java_imports` that are in the native Spark Java gateway: [https://github.com/apache/spark/blob/master/python/pyspark/java_gateway.py#L152] The most obvious thing that doesn't work is using df.explain() from pyspark. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5274) Support to load jars at runtime in flink interpreter
Jeff Zhang created ZEPPELIN-5274: Summary: Support to load jars at runtime in flink interpreter Key: ZEPPELIN-5274 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5274 Project: Zeppelin Issue Type: Improvement Components: flink Affects Versions: 0.9.0 Reporter: Jeff Zhang Currently we have to specify flink.execution.jars before starting the flink interpreter; it would be nice to support loading jars dynamically after the interpreter has started. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5273) Bump spark version to 3.1.1
Jeff Zhang created ZEPPELIN-5273: Summary: Bump spark version to 3.1.1 Key: ZEPPELIN-5273 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5273 Project: Zeppelin Issue Type: Improvement Components: spark Affects Versions: 0.9.0 Reporter: Jeff Zhang Spark 3.1.1 has been released; we should upgrade it in Zeppelin. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5272) Row return type FunctionHint doesn't work in flink interpreter
Jeff Zhang created ZEPPELIN-5272: Summary: Row return type FunctionHint doesn't work in flink interpreter Key: ZEPPELIN-5272 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5272 Project: Zeppelin Issue Type: Bug Components: flink Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5271) Running Pig Query in Apache Zeppelin
sujsin77 created ZEPPELIN-5271: -- Summary: Running Pig Query in Apache Zeppelin Key: ZEPPELIN-5271 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5271 Project: Zeppelin Issue Type: Bug Components: Interpreters Affects Versions: 0.9.0 Reporter: sujsin77 I am running following Pig query in Apache Zeppelin {{%pig.query A = load '/Pig_data' using PigStorage(',') as(ExamName,ExamId,BITSID, StudentName,Issue_Type,Time); B = group A by Issue_Type; C = FOREACH B GENERATE group as Issue_Type, COUNT($1);}} But is gives me following error {{org.apache.zeppelin.interpreter.InterpreterException: java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/conf/YarnConfiguration at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:836) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:744) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132) at org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/conf/YarnConfiguration at org.apache.pig.PigServer.(PigServer.java:249) at org.apache.pig.PigServer.(PigServer.java:220) at org.apache.pig.PigServer.(PigServer.java:193) at org.apache.pig.PigServer.(PigServer.java:185) at org.apache.zeppelin.pig.PigInterpreter.open(PigInterpreter.java:64) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) ... 
8 more Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.conf.YarnConfiguration at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 14 more}} I have checked that the hadoop and yarn classpaths are already set {{[hadoop@localhost ~]$ hadoop classpath /home/hadoop/hadoop/etc/hadoop:/home/hadoop/hadoop/share/hadoop/common/lib/*:/home/hadoop/hadoop/share/hadoop/common/*:/home/hadoop/hadoop/share/hadoop/hdfs:/home/hadoop/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/hadoop/share/hadoop/hdfs/*:/home/hadoop/hadoop/share/hadoop/mapreduce/lib/*:/home/hadoop/hadoop/share/hadoop/mapreduce/*:/home/hadoop/hadoop/share/hadoop/yarn:/home/hadoop/hadoop/share/hadoop/yarn/lib/*:/home/hadoop/hadoop/share/hadoop/yarn/* [hadoop@localhost ~]$ yarn classpath /home/hadoop/hadoop/etc/hadoop:/home/hadoop/hadoop/share/hadoop/common/lib/*:/home/hadoop/hadoop/share/hadoop/common/*:/home/hadoop/hadoop/share/hadoop/hdfs:/home/hadoop/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/hadoop/share/hadoop/hdfs/*:/home/hadoop/hadoop/share/hadoop/mapreduce/lib/*:/home/hadoop/hadoop/share/hadoop/mapreduce/*:/home/hadoop/hadoop/share/hadoop/yarn:/home/hadoop/hadoop/share/hadoop/yarn/lib/*:/home/hadoop/hadoop/share/hadoop/yarn/*}} I also set this in *zeppelin-env.sh*: {{export USE_HADOOP=True export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop}} Please help me find where the problem is. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5270) Implementing a soft shutdown for K8s interpreters
Philipp Dallig created ZEPPELIN-5270: Summary: Implementing a soft shutdown for K8s interpreters Key: ZEPPELIN-5270 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5270 Project: Zeppelin Issue Type: Improvement Components: Interpreters, Kubernetes Affects Versions: 0.9.1, 0.10.0 Reporter: Philipp Dallig Assignee: Philipp Dallig We should implement a soft shutdown for Zeppelin interpreters that have been started in K8s. -- This message was sent by Atlassian Jira (v8.3.4#803005)
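A soft shutdown typically hinges on handling the SIGTERM that K8s sends to a pod before the hard SIGKILL at the end of the grace period. A minimal, Zeppelin-agnostic sketch of the idea (the flag name and handler are illustrative, not Zeppelin code):

```python
import os
import signal
import time

shutting_down = False

def on_sigterm(signum, frame):
    """K8s sends SIGTERM first (SIGKILL only after the grace period);
    flip a flag so in-flight work can finish instead of dying hard."""
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, on_sigterm)

# Simulate K8s terminating the pod: deliver SIGTERM to ourselves.
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)  # give the interpreter loop a chance to run the handler
print(shutting_down)  # True
```

A real interpreter would check the flag in its main loop and drain running paragraphs before exiting.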
[jira] [Created] (ZEPPELIN-5269) Sometimes user name is filled in interpreter's dependencies section
Jeff Zhang created ZEPPELIN-5269: Summary: Sometimes user name is filled in interpreter's dependencies section Key: ZEPPELIN-5269 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5269 Project: Zeppelin Issue Type: Improvement Components: front-end, zeppelin-web Affects Versions: 0.9.0 Reporter: Jeff Zhang Attachments: image-2021-03-01-11-25-28-270.png !image-2021-03-01-11-25-28-270.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5268) Markdown content is not displayed by default
Jeff Zhang created ZEPPELIN-5268: Summary: Markdown content is not displayed by default Key: ZEPPELIN-5268 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5268 Project: Zeppelin Issue Type: Improvement Components: markdown Affects Versions: 0.9.0 Reporter: Jeff Zhang How to reproduce it: 1. Create a new note, write one markdown paragraph and execute it to display the content 2. Restart zeppelin, reopen the note, the markdown is not displayed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5267) Use environment variable FLINK_HOME if it is not specified in flink interpreter
Jeff Zhang created ZEPPELIN-5267: Summary: Use environment variable FLINK_HOME if it is not specified in flink interpreter Key: ZEPPELIN-5267 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5267 Project: Zeppelin Issue Type: Improvement Components: flink Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5266) Enable extension of flexmark in markdown interpreter
Jeff Zhang created ZEPPELIN-5266: Summary: Enable extension of flexmark in markdown interpreter Key: ZEPPELIN-5266 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5266 Project: Zeppelin Issue Type: Improvement Components: markdown Affects Versions: 0.9.0 Reporter: Jeff Zhang Currently, extensions are not enabled; e.g. emoji doesn't work in the md interpreter for now. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5265) Support to drop temporary flink table
Jeff Zhang created ZEPPELIN-5265: Summary: Support to drop temporary flink table Key: ZEPPELIN-5265 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5265 Project: Zeppelin Issue Type: Improvement Components: flink Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5264) Put note title in individual line
Jeff Zhang created ZEPPELIN-5264: Summary: Put note title in individual line Key: ZEPPELIN-5264 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5264 Project: Zeppelin Issue Type: Improvement Components: zeppelin-web Affects Versions: 0.9.0 Reporter: Jeff Zhang Attachments: image-2021-02-26-10-00-15-632.png Otherwise you cannot see the whole title when the title is very long. !image-2021-02-26-10-00-15-632.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5263) Current version of bootstrap is known to have vulnerabilities
jason ogaard created ZEPPELIN-5263: -- Summary: Current version of bootstrap is known to have vulnerabilities Key: ZEPPELIN-5263 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5263 Project: Zeppelin Issue Type: Bug Components: GUI Affects Versions: 0.9.0 Reporter: jason ogaard The version of bootstrap used by the zeppelin UI ([3.2.0|https://github.com/apache/zeppelin/blob/master/zeppelin-web/bower.json]) is known to have vulnerabilities: [https://github.com/advisories/GHSA-9v3m-8fp8-mj99] - the recommendation is to upgrade to 3.4.1 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5262) Add some features for Zeppelin Client
Zhubowen created ZEPPELIN-5262: -- Summary: Add some features for Zeppelin Client Key: ZEPPELIN-5262 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5262 Project: Zeppelin Issue Type: Improvement Components: zeppelin-client Affects Versions: 0.9.0 Reporter: Zhubowen Add some features for Zeppelin Client, such as renaming and copying notes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5261) Sandbox HTML result rendering
Lee Moon Soo created ZEPPELIN-5261: -- Summary: Sandbox HTML result rendering Key: ZEPPELIN-5261 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5261 Project: Zeppelin Issue Type: Improvement Components: GUI Affects Versions: 0.9.0 Reporter: Lee Moon Soo The Zeppelin display system allows users to render arbitrary HTML results inside a Note, including Javascript inlined in the HTML data to be rendered. This can be used for a potential XSS attack when a user opens a shared notebook from another user that includes exploit code inside an HTML result in the Note. There are a couple of different approaches to prevent this: a. Don't render HTML results unless the user explicitly 'trusts' the Note. In this way, when a Note includes HTML results, the Zeppelin UI can ask the user whether to trust and render the HTML result or not. b. Sandbox HTML result rendering using an iframe. In this way, the HTML result is rendered inside an iframe served from a different domain. Because of the browser's cross-origin protection, potential exploits rendered in the iframe are prevented from accessing any data in the parent window (Zeppelin). This approach is implemented in Google Colab. IMO, (b) is more favorable, while (a) makes security depend on the 'trust' decision of a user. However, there is some expected complexity in implementation and configuration, such as: * Passing result data to render from the parent window to the iframe served from a different domain * Automatically resizing the iframe based on its content * The client web browser must be able to access the iframe domain, or it should be possible to configure an alternative domain to load the iframe source. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5260) Notebook functionalities (like run paragraphs, add a new paragraph, reset interpreters) become unusable when code completion is used
jahira ibrahim created ZEPPELIN-5260: Summary: Notebook functionalities (like run paragraphs, add a new paragraph, reset interpreters) become unusable when code completion is used Key: ZEPPELIN-5260 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5260 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Reporter: jahira ibrahim Attachments: image-2021-02-22-15-28-43-747.png Steps to recreate: # Get the latest zeppelin (0.9) from docker # Create a simple notebook with a simple python command like the following: import sys from keras.model import Sequential # Try to use tab completion after typing "keras." +*Actual Result:*+ Try 2-3 times. Tab completion stops showing the options. Then try to add a paragraph: no new paragraph is added when '+ Add paragraph' is clicked. Try to execute a paragraph: nothing happens. Try to reset the interpreter: the following error shows up: !image-2021-02-22-15-28-43-747.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5259) Use RemoteInterpreterManagedProcess for K8sRemoteInterpreterProcess
Philipp Dallig created ZEPPELIN-5259: Summary: Use RemoteInterpreterManagedProcess for K8sRemoteInterpreterProcess Key: ZEPPELIN-5259 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5259 Project: Zeppelin Issue Type: Improvement Components: Kubernetes Affects Versions: 0.9.0, 0.9.1, 0.10.0 Reporter: Philipp Dallig Assignee: Philipp Dallig We should use the RemoteInterpreterManagedProcess for the K8sRemoteInterpreterProcess to remove duplicate code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5258) Add Knox realm support to ShiroAuthenticationService.getAssociatedRoles
Adam Binford created ZEPPELIN-5258: -- Summary: Add Knox realm support to ShiroAuthenticationService.getAssociatedRoles Key: ZEPPELIN-5258 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5258 Project: Zeppelin Issue Type: Improvement Components: zeppelin-server Affects Versions: 0.9.0 Reporter: Adam Binford The KnoxJwtRealm was added as another form of shiro authentication, and works for the native shiro URL authorization. We need to add special handling to ShiroAuthenticationService.getAssociatedRoles so that we can obtain groups for Knox realm users and use them for notebook authorization. A better long-term approach would probably be to eliminate the need for getAssociatedRoles altogether and make better native use of the shiro API, but for now this would at least add better Knox support. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5257) Refactoring of ExecutionContext
Jeff Zhang created ZEPPELIN-5257: Summary: Refactoring of ExecutionContext Key: ZEPPELIN-5257 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5257 Project: Zeppelin Issue Type: Improvement Components: zeppelin-zengine Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5256) Bulk Import (create) notebooks
Steven Wickers created ZEPPELIN-5256: Summary: Bulk Import (create) notebooks Key: ZEPPELIN-5256 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5256 Project: Zeppelin Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Steven Wickers We use Zeppelin Notebook in our development and would be nice to have a bulk import to create multiple notebooks at once. We allow users to import or share containers which contain multiple notebooks, some time hundreds of notebooks. It would be nice to use one endpoint to create multiple notebooks at once. Thanks, Steve. -- This message was sent by Atlassian Jira (v8.3.4#803005)
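Until a bulk endpoint exists, the workaround is one request per note against the single-note import endpoint. A minimal sketch, assuming the documented `/api/notebook/import` REST path and exported notes already parsed as dicts (function names here are hypothetical):

```python
import json
from urllib import request

def build_import_requests(base_url, notes):
    """Pair each exported note (a dict) with the single-note import endpoint.

    Assumes the /api/notebook/import path from the Zeppelin REST API docs;
    adjust for your deployment.
    """
    url = base_url.rstrip("/") + "/api/notebook/import"
    return [(url, json.dumps(note)) for note in notes]

def bulk_import(base_url, notes):
    """Emulate bulk import client-side: one HTTP round trip per note."""
    for url, body in build_import_requests(base_url, notes):
        req = request.Request(url, data=body.encode("utf-8"),
                              headers={"Content-Type": "application/json"})
        request.urlopen(req)

reqs = build_import_requests("http://localhost:8080",
                             [{"name": "note-a"}, {"name": "note-b"}])
print(len(reqs))  # 2
```

A true server-side bulk endpoint, as requested here, would collapse those N round trips into one.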
[jira] [Created] (ZEPPELIN-5255) Exported note name should include the note id
Jeff Zhang created ZEPPELIN-5255: Summary: Exported note name should include the note id Key: ZEPPELIN-5255 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5255 Project: Zeppelin Issue Type: Bug Components: front-end, zeppelin-web Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang Currently, the exported note name doesn't include the note id. But it should be ${note_name}_${note_id}.zpln -- This message was sent by Atlassian Jira (v8.3.4#803005)
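The proposed naming is simple string concatenation; a minimal sketch of the expected file name (the `export_filename` helper is hypothetical, not Zeppelin API):

```python
def export_filename(note_name: str, note_id: str) -> str:
    """Build the proposed export name: ${note_name}_${note_id}.zpln"""
    return f"{note_name}_{note_id}.zpln"

print(export_filename("Spark Tutorial", "2G3KV92PG"))
# Spark Tutorial_2G3KV92PG.zpln
```

Embedding the id makes re-imported notes traceable even after the user renames them.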
[jira] [Created] (ZEPPELIN-5254) Memory leak of InterpreterGroup
Jeff Zhang created ZEPPELIN-5254: Summary: Memory leak of InterpreterGroup Key: ZEPPELIN-5254 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5254 Project: Zeppelin Issue Type: Bug Components: zeppelin-interpreter Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5253) Memory leak of RemoteManagerProcess
Jeff Zhang created ZEPPELIN-5253: Summary: Memory leak of RemoteManagerProcess Key: ZEPPELIN-5253 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5253 Project: Zeppelin Issue Type: Bug Components: zeppelin-interpreter Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5252) Readers / Runners unable to run individual paragraphs in 0.9.0
Yun Ki Lee created ZEPPELIN-5252: Summary: Readers / Runners unable to run individual paragraphs in 0.9.0 Key: ZEPPELIN-5252 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5252 Project: Zeppelin Issue Type: Bug Components: GUI, security Affects Versions: 0.9.0 Environment: RedHat 7, Java 1.8.0_181, Chrome browser Reporter: Yun Ki Lee In version 0.9.0, users who are Readers and Runners of a Notebook can only run the whole notebook and cannot select the paragraph they wish to run. Readers can only enter a Notebook in the Report view but cannot go into the Default view without Writer permissions. In version 0.8.1, readers were able to select each paragraph rather than having to run the whole Notebook, i.e. they were able to view each Notebook using Default mode but they just couldn't edit the Notebooks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5251) Couldn't get the paragraph result ("zeppelin.paragraph.result.table") from Resource Pool.
Yongqiang Li created ZEPPELIN-5251: -- Summary: Couldn't get the paragraph result ("zeppelin.paragraph.result.table") from Resource Pool. Key: ZEPPELIN-5251 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5251 Project: Zeppelin Issue Type: Bug Components: Interpreters Affects Versions: 0.9.0 Reporter: Yongqiang Li Fix For: 0.9.1 Hi, In 0.8.2, I can use the following code to retrieve the paragraph results. {code:java} %python ic = z.getInterpreterContext() rp = ic.getResourcePool() rp.get(notebookID, paragraphID, "zeppelin.paragraph.result.table") {code} It's very easy for me to transfer the data among the notebooks/paragraphs. But in 0.9.0, the call always returns nothing. Has this function been removed? Thanks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5250) Unable to run tpcds query 1 via presto on jdbc
Jeff Zhang created ZEPPELIN-5250: Summary: Unable to run tpcds query 1 via presto on jdbc Key: ZEPPELIN-5250 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5250 Project: Zeppelin Issue Type: Bug Components: JdbcInterpreter Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang {code:java} java.sql.SQLException: Error executing query at com.facebook.presto.jdbc.PrestoStatement.internalExecute(PrestoStatement.java:279) at com.facebook.presto.jdbc.PrestoStatement.execute(PrestoStatement.java:228) at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291) at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291) at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:770) at org.apache.zeppelin.jdbc.JDBCInterpreter.internalInterpret(JDBCInterpreter.java:901) at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:852) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:744) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132) at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalArgumentException: ParameterKind is [TYPE] but expected [LONG] at com.facebook.presto.jdbc.internal.spi.type.TypeSignatureParameter.getValue(TypeSignatureParameter.java:87) at 
com.facebook.presto.jdbc.internal.spi.type.TypeSignatureParameter.getLongLiteral(TypeSignatureParameter.java:99) at com.facebook.presto.jdbc.ColumnInfo.setTypeInfo(ColumnInfo.java:194) at com.facebook.presto.jdbc.PrestoResultSet.getColumnInfo(PrestoResultSet.java:1868) at com.facebook.presto.jdbc.PrestoResultSet.(PrestoResultSet.java:121) at com.facebook.presto.jdbc.PrestoStatement.internalExecute(PrestoStatement.java:250) ... 15 more {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5249) Update to thrift 0.14.0
Philipp Dallig created ZEPPELIN-5249: Summary: Update to thrift 0.14.0 Key: ZEPPELIN-5249 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5249 Project: Zeppelin Issue Type: Improvement Components: Core Affects Versions: 0.9.0, 0.9.1, 0.10.0 Reporter: Philipp Dallig We should update thrift to 0.14.0 as we have a problem that should be fixed with the new version. Zeppelin code: [https://github.com/apache/zeppelin/blob/f3bdd4a1fa0cf19bc1015955d8ade4bc79a8e16f/zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterServer.java#L318-L322] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5248) Add docker support for local jekyll builds for documentation
Omri keefe created ZEPPELIN-5248: Summary: Add docker support for local jekyll builds for documentation Key: ZEPPELIN-5248 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5248 Project: Zeppelin Issue Type: Improvement Components: documentation Reporter: Omri keefe Assignee: Omri keefe To make it easier for new contributors to build and test documentation changes, we will add support to build and serve the docs locally using docker. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5247) Move versions from code to configuration/properties file
Omri keefe created ZEPPELIN-5247: Summary: Move versions from code to configuration/properties file Key: ZEPPELIN-5247 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5247 Project: Zeppelin Issue Type: Task Reporter: Omri keefe In various unit tests we find hard-coded version values for components that are tested/used. We would like to extract these to an external `.properties` file at the root dir, or consolidate them in the pom.xml [example for this in code |https://github.com/apache/zeppelin/blob/master/zeppelin-interpreter-integration/src/test/java/org/apache/zeppelin/integration/FlinkIntegrationTest110.java#L32] -- This message was sent by Atlassian Jira (v8.3.4#803005)
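The idea can be sketched as a root-level properties file that tests read instead of hard-coding constants. A minimal, language-agnostic illustration (the file name `versions.properties` and the keys are hypothetical):

```python
def parse_properties(text):
    """Parse simple key=value lines; '#' comments and blank lines ignored."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# Hypothetical root-level versions.properties replacing hard-coded test constants:
sample = """
# component versions used by integration tests
flink.version=1.10.1
spark.version=2.4.7
"""
versions = parse_properties(sample)
print(versions["flink.version"])  # 1.10.1
```

In the Java tests themselves, `java.util.Properties` (or Maven property filtering) would play the role of `parse_properties`, so bumping a component version touches one file instead of many test classes.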
[jira] [Created] (ZEPPELIN-5246) Zeppelin in cluster mode doesn't create spark submit
Ruslan Fialkovsky created ZEPPELIN-5246: --- Summary: Zeppelin in cluster mode doesn't create spark submit Key: ZEPPELIN-5246 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5246 Project: Zeppelin Issue Type: Bug Components: interpreter-setting, Interpreters, spark Affects Versions: 0.9.0 Reporter: Ruslan Fialkovsky Attachments: Screenshot 2021-02-10 at 18.30.38.png Hello. I'm trying to configure zeppelin cluster mode and run spark on yarn. This is my interpreter conf in the attached picture, and it works in the zeppelin single-node case. So, it seems zeppelin starts SparkContext instead of spark-submit in zeppelin cluster mode:
{code:java}
INFO [2021-02-10 18:34:16,838] ({Thread-1034} ClusterInterpreterCheckThread.java[run]:51) - ClusterInterpreterCheckThread run() >>>
INFO [2021-02-10 18:34:16,848] ({SchedulerFactory2} ProcessLauncher.java[transition]:109) - Process state is transitioned to LAUNCHED
INFO [2021-02-10 18:34:16,848] ({SchedulerFactory2} ProcessLauncher.java[launch]:96) - Process is launched: [/usr/lib/zeppelin/bin/interpreter.sh, -d, /usr/lib/zeppelin/interpreter/spark, -c, 10.15.145.26, -p, 17317, -r, :, -i, spark-fialkovskiy, -u, fialkovskiy, -l, /usr/lib/zeppelin/local-repo/spark, -g, spark]
INFO [2021-02-10 18:34:16,955] ({Exec Stream Pumper} ProcessLauncher.java[processLine]:188) - Interpreter launch command: /usr/lib/spark/3.0.1/bin/spark-submit --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer --driver-class-path ":/usr/lib/zeppelin/interpreter/spark/*::/usr/lib/zeppelin/interpreter/zeppelin-interpreter-shaded-0.9.0-preview2.jar:/usr/lib/zeppelin/interpreter/spark/spark-interpreter-0.9.0-preview2.jar:/etc/hadoop/" --driver-java-options " -Dfile.encoding=UTF-8 -Dlog4j.configuration='file:///etc/zeppelin/log4j.properties' -Dlog4j.configurationFile='file:///etc/zeppelin/log4j2.properties' -Dzeppelin.log.file='/usr/lib/zeppelin/logs/zeppelin-interpreter-spark-fialkovskiy-fialkovskiy--hadoop836713.log'" /usr/lib/zeppelin/interpreter/spark/spark-interpreter-0.9.0-preview2.jar 10.15.145.26 17317 "spark-fialkovskiy" :+ pid=8070
INFO [2021-02-10 18:34:24,844] ({Thread-1034} ClusterManager.java[getIntpProcessStatus]:455) - interpreter thrift 10.15.145.26:17305 service is online!
INFO [2021-02-10 18:34:24,845] ({Thread-1034} ClusterManager.java[getIntpProcessStatus]:461) - interpreter thrift 10.15.145.26:17305 accessible!
INFO [2021-02-10 18:34:24,845] ({Thread-1034} ClusterInterpreterCheckThread.java[online]:62) - Found cluster interpreter 10.15.145.26:17305
INFO [2021-02-10 18:34:24,851] ({Thread-1034} ProcessLauncher.java[transition]:109) - Process state is transitioned to RUNNING
INFO [2021-02-10 18:34:24,852] ({Thread-1034} ClusterInterpreterCheckThread.java[run]:81) - ClusterInterpreterCheckThread run() <<<
INFO [2021-02-10 18:34:24,854] ({SchedulerFactory2} ClusterManager.java[getIntpProcessStatus]:455) - interpreter thrift 10.15.145.26:17305 service is online!
[jira] [Created] (ZEPPELIN-5245) Zeppelin doesn't propagate interpreter setting in cluster mode
Ruslan Fialkovsky created ZEPPELIN-5245: --- Summary: Zeppelin doesn't propagate interpreter setting in cluster mode Key: ZEPPELIN-5245 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5245 Project: Zeppelin Issue Type: Bug Components: Core, interpreter-setting, zeppelin-server Affects Versions: 0.9.0 Reporter: Ruslan Fialkovsky Hello. I'm trying to configure zeppelin in cluster mode. When I change an interpreter setting, it is applied only on the local node, and I get an error on the remote node:
({netty-messaging-event-epoll-client-3} NettyMessagingService.java[lambda$null$20]:531) - An error occurred in a message handler: {}
java.lang.ClassCastException: com.google.gson.internal.LinkedTreeMap cannot be cast to java.util.HashMap
at org.apache.zeppelin.interpreter.InterpreterSettingManager.onClusterEvent(InterpreterSettingManager.java:1215)
at org.apache.zeppelin.cluster.ClusterManagerServer.lambda$new$5(ClusterManagerServer.java:370)
at io.atomix.cluster.messaging.impl.NettyMessagingService.lambda$null$20(NettyMessagingService.java:529)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at io.atomix.cluster.messaging.impl.NettyMessagingService.lambda$registerHandler$21(NettyMessagingService.java:525)
at io.atomix.cluster.messaging.impl.NettyMessagingService$RemoteServerConnection.dispatch(NettyMessagingService.java:1122)
at io.atomix.cluster.messaging.impl.NettyMessagingService$RemoteServerConnection.access$800(NettyMessagingService.java:1100)
at io.atomix.cluster.messaging.impl.NettyMessagingService$InboundMessageDispatcher.channelRead0(NettyMessagingService.java:754)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:417)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:317)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
Also, interestingly, I tried 0.9.0-preview2 and everything worked fine. The problem appeared in the release version. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5244) Docker build in dockerhub fails
Jeff Zhang created ZEPPELIN-5244: Summary: Docker build in dockerhub fails Key: ZEPPELIN-5244 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5244 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang {code:java} Collecting package metadata (current_repodata.json):...working...doneSolving environment: ...working...done## Package Plan ##environment location: /opt/condaadded / updated specs:- _libgcc_mutex==0.1=main- brotlipy==0.7.0=py37h27cfd23_1003- ca-certificates==2020.10.14=0- certifi==2020.6.20=pyhd3eb1b0_3- cffi==1.14.3=py37h261ae71_2- chardet==3.0.4=py37h06a4308_1003- conda-package-handling==1.7.2=py37h03888b9_0- conda==4.9.2=py37h06a4308_0- cryptography==3.2.1=py37h3c74f83_1- idna==2.10=py_0- ld_impl_linux-64==2.33.1=h53a641e_7- libedit==3.1.20191231=h14c3975_1- libffi==3.3=he6710b0_2- libgcc-ng==9.1.0=hdf63c60_0- libstdcxx-ng==9.1.0=hdf63c60_0- ncurses==6.2=he6710b0_1- openssl==1.1.1h=h7b6447c_0- pip==20.2.4=py37h06a4308_0- pycosat==0.6.3=py37h27cfd23_0- pycparser==2.20=py_2- pyopenssl==19.1.0=pyhd3eb1b0_1- pysocks==1.7.1=py37_1- python==3.7.9=h7579374_0- readline==8.0=h7b6447c_0- requests==2.24.0=py_0- ruamel_yaml==0.15.87=py37h7b6447c_1- setuptools==50.3.1=py37h06a4308_1- six==1.15.0=py37h06a4308_0- sqlite==3.33.0=h62c20be_0- tk==8.6.10=hbc83047_0- tqdm==4.51.0=pyhd3eb1b0_0- urllib3==1.25.11=py_0- wheel==0.35.1=pyhd3eb1b0_0- xz==5.2.5=h7b6447c_0- yaml==0.2.5=h7b6447c_0- zlib==1.2.11=h7b6447c_3The following NEW packages will be INSTALLED:_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-mainbrotlipy pkgs/main/linux-64::brotlipy-0.7.0-py37h27cfd23_1003ca-certificates pkgs/main/linux-64::ca-certificates-2020.10.14-0certifi pkgs/main/noarch::certifi-2020.6.20-pyhd3eb1b0_3cffi pkgs/main/linux-64::cffi-1.14.3-py37h261ae71_2chardet pkgs/main/linux-64::chardet-3.0.4-py37h06a4308_1003conda pkgs/main/linux-64::conda-4.9.2-py37h06a4308_0conda-package-han~ 
pkgs/main/linux-64::conda-package-handling-1.7.2-py37h03888b9_0cryptography pkgs/main/linux-64::cryptography-3.2.1-py37h3c74f83_1idna pkgs/main/noarch::idna-2.10-py_0ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7libedit pkgs/main/linux-64::libedit-3.1.20191231-h14c3975_1libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_1openssl pkgs/main/linux-64::openssl-1.1.1h-h7b6447c_0pip pkgs/main/linux-64::pip-20.2.4-py37h06a4308_0pycosat pkgs/main/linux-64::pycosat-0.6.3-py37h27cfd23_0pycparser pkgs/main/noarch::pycparser-2.20-py_2pyopenssl pkgs/main/noarch::pyopenssl-19.1.0-pyhd3eb1b0_1pysocks pkgs/main/linux-64::pysocks-1.7.1-py37_1python pkgs/main/linux-64::python-3.7.9-h7579374_0readline pkgs/main/linux-64::readline-8.0-h7b6447c_0requests pkgs/main/noarch::requests-2.24.0-py_0ruamel_yaml pkgs/main/linux-64::ruamel_yaml-0.15.87-py37h7b6447c_1setuptools pkgs/main/linux-64::setuptools-50.3.1-py37h06a4308_1six pkgs/main/linux-64::six-1.15.0-py37h06a4308_0sqlite pkgs/main/linux-64::sqlite-3.33.0-h62c20be_0tk pkgs/main/linux-64::tk-8.6.10-hbc83047_0tqdm pkgs/main/noarch::tqdm-4.51.0-pyhd3eb1b0_0urllib3 pkgs/main/noarch::urllib3-1.25.11-py_0wheel pkgs/main/noarch::wheel-0.35.1-pyhd3eb1b0_0xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0yaml pkgs/main/linux-64::yaml-0.2.5-h7b6447c_0zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3Preparing transaction: ...working...doneExecuting transaction: ...working...doneinstallation finished.[91m+ [0m[91mexport PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin+ conda config --set always_yes yes --set changeps1 no[0m[91m+ conda[0m[91m info -a[0mactive environment : Noneuser config file : /root/.condarcpopulated config files : /root/.condarcconda version : 4.9.2conda-build version : not installedpython version : 3.7.9.final.0virtual 
packages : __glibc=2.31=0__unix=0=0__archspec=1=x86_64base environment : /opt/conda (writable)channel URLs : https://repo.anaconda.com/pkgs/main/linux-64https://repo.anaconda.com/pkgs/main/noarchhttps://repo.anaconda.com/pkgs/r/linux-64https://repo.anaconda.com/pkgs/r/noarchpackage cache : /opt/conda/pkgs/root/.conda/pkgsenvs directories : /opt/conda/envs/root/.conda/envsplatform : linux-64user-agent : conda/4.9.2 requests/2.24.0 CPython/3.7.9 Linux/4.4.0-1060-aws ubuntu/20.04.1 glibc/2.31UID:GID : 0:0netrc file : Noneoffline mode : False# conda environments:#base * /opt/condasys.version: 3.7.9 (default, Aug 31 2020, 12:42:55)...sys.prefix: /opt/condasys.executable: /opt/conda/bin/pythonconda location: /opt/conda/lib/python3.7/site-packages/condaconda-build: Noneconda-env: /opt/conda/bin/conda-envuser site dirs:CIO_TEST: CONDA_ROOT: /opt
[jira] [Created] (ZEPPELIN-5243) Get HIVE_CONF_DIR from environment in flink interpreter
Jeff Zhang created ZEPPELIN-5243: Summary: Get HIVE_CONF_DIR from environment in flink interpreter Key: ZEPPELIN-5243 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5243 Project: Zeppelin Issue Type: Improvement Components: flink Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
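The improvement can be sketched as follows. This is illustrative only, not Zeppelin's actual flink-interpreter code: the helper name `resolveHiveConfDir` and the fallback value are assumptions, but the lookup order (environment variable first, interpreter property second) is the behavior the issue asks for.

```java
import java.util.Map;

public class HiveConfResolver {
    // Hypothetical helper: prefer the HIVE_CONF_DIR environment variable,
    // fall back to an interpreter-property value when it is unset or empty.
    static String resolveHiveConfDir(Map<String, String> env, String propertyValue) {
        String fromEnv = env.get("HIVE_CONF_DIR");
        return (fromEnv != null && !fromEnv.isEmpty()) ? fromEnv : propertyValue;
    }

    public static void main(String[] args) {
        // In the flink interpreter this would be System.getenv().
        System.out.println(resolveHiveConfDir(System.getenv(), "/etc/hive/conf"));
    }
}
```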
[jira] [Created] (ZEPPELIN-5242) Rat exclude correction "**/interpreter/**"
Philipp Dallig created ZEPPELIN-5242: Summary: Rat exclude correction "**/interpreter/**" Key: ZEPPELIN-5242 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5242 Project: Zeppelin Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Philipp Dallig Assignee: Philipp Dallig I noticed several files without licence headers. The exclude "**/interpreter/**" is too broad. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5241) Typos in spark tutorial
Omri keefe created ZEPPELIN-5241: Summary: Typos in spark tutorial Key: ZEPPELIN-5241 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5241 Project: Zeppelin Issue Type: Task Components: documentation Affects Versions: 0.9.0 Reporter: Omri keefe Found a few typos while testing the tutorials in local Docker. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5240) Neo4jCypherInterpreterTest fails
Jeff Zhang created ZEPPELIN-5240: Summary: Neo4jCypherInterpreterTest fails Key: ZEPPELIN-5240 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5240 Project: Zeppelin Issue Type: Improvement Reporter: Jeff Zhang {code:java}
org.apache.zeppelin.graph.neo4j.Neo4jCypherInterpreterTest  Time elapsed: 1.061 sec  <<< ERROR!
org.testcontainers.containers.ContainerLaunchException: Container startup failed
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:330)
	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
	at org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1022)
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageName=neo4j:4.1.1, imagePullPolicy=DefaultPullPolicy())
	at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1279)
	at org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:613)
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:320)
	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
	at org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1022)
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: com.github.dockerjava.api.exception.NotFoundException: {"message":"No such image: testcontainersofficial/ryuk:0.3.0"}
	at com.github.dockerjava.okhttp.OkHttpInvocationBuilder.execute(OkHttpInvocationBuilder.java:287)
	at com.github.dockerjava.okhttp.OkHttpInvocationBuilder.execute(OkHttpInvocationBuilder.java:271)
	at com.github.dockerjava.okhttp.OkHttpInvocationBuilder.post(OkHttpInvocationBuilder.java:129)
	at com.github.dockerjava.core.exec.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:33)
	at com.github.dockerjava.core.exec.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:13)
	at com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
	at com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
	at com.github.dockerjava.core.command.CreateContainerCmdImpl.exec(CreateContainerCmdImpl.java:595)
	at org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:94)
	at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:168)
	at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
	at org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
	at org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
	at org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)
	at org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
	at org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:59)
	at org.testcontainers.images.RemoteDockerImage.resolve(Rem
[jira] [Created] (ZEPPELIN-5239) Support to specify multiple jdbc urls for HA
Jeff Zhang created ZEPPELIN-5239: Summary: Support to specify multiple jdbc urls for HA Key: ZEPPELIN-5239 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5239 Project: Zeppelin Issue Type: Improvement Components: JdbcInterpreter Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
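One way such HA support could work is to try the configured URLs in order and use the first one that connects. The sketch below is illustrative, not the actual JdbcInterpreter code: the `Connector` interface is a hypothetical stand-in for `DriverManager.getConnection`, introduced here only so the failover logic is visible on its own.

```java
import java.util.List;

public class HaJdbcConnector {
    // Hypothetical stand-in for DriverManager.getConnection(url).
    interface Connector<T> {
        T connect(String url) throws Exception;
    }

    // Try each configured JDBC URL in order; return the first successful
    // connection, or fail with the last error once every URL has been tried.
    static <T> T connectFirstAvailable(List<String> urls, Connector<T> connector) throws Exception {
        Exception last = null;
        for (String url : urls) {
            try {
                return connector.connect(url);
            } catch (Exception e) {
                last = e; // remember the failure and fall through to the next URL
            }
        }
        throw new Exception("All configured JDBC urls failed", last);
    }
}
```

In the interpreter setting, the URL list would come from a comma-separated interpreter property; the loop itself is the only new behavior.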
[jira] [Created] (ZEPPELIN-5238) Add slack channel in doc
Jeff Zhang created ZEPPELIN-5238: Summary: Add slack channel in doc Key: ZEPPELIN-5238 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5238 Project: Zeppelin Issue Type: Improvement Components: documentation Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5237) ConcurrentModificationException in Notebook.getNotesInfo
Jeff Zhang created ZEPPELIN-5237: Summary: ConcurrentModificationException in Notebook.getNotesInfo Key: ZEPPELIN-5237 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5237 Project: Zeppelin Issue Type: Improvement Components: zeppelin-server Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang {code:java}
java.util.ConcurrentModificationException
	at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1704)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
	at org.apache.zeppelin.notebook.Notebook.getNotesInfo(Notebook.java:654)
	at org.apache.zeppelin.socket.NotebookServer$4.handleUser(NotebookServer.java:666)
	at org.apache.zeppelin.socket.ConnectionManager.forAllUsers(ConnectionManager.java:371)
	at org.apache.zeppelin.socket.NotebookServer.broadcastNoteListUpdate(NotebookServer.java:663)
	at org.apache.zeppelin.socket.NotebookServer.inlineBroadcastNoteList(NotebookServer.java:657)
	at org.apache.zeppelin.socket.NotebookServer.broadcastNoteList(NotebookServer.java:676)
	at org.apache.zeppelin.rest.NotebookRestApi$3.onSuccess(NotebookRestApi.java:465)
	at org.apache.zeppelin.rest.NotebookRestApi$3.onSuccess(NotebookRestApi.java:461)
	at org.apache.zeppelin.service.NotebookService.cloneNote(NotebookService.java:270)
	at org.apache.zeppelin.rest.NotebookRestApi.cloneNote(NotebookRestApi.java:460)
	at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
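The failure mode in the trace — a `HashMap` of notes being structurally modified while a stream iterates it — can be reproduced in isolation. The demo below is illustrative only; the note names are invented, and the snapshot-copy fix at the end is one possible remedy (another is storing the notes in a `ConcurrentHashMap`), not necessarily the one adopted for this issue.

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CmeDemo {
    // Returns true when streaming the map while mutating it throws
    // ConcurrentModificationException, mirroring the getNotesInfo trace:
    // HashMap's spliterator is fail-fast and detects the modCount change.
    static boolean modifyWhileStreaming(Map<String, String> notes) {
        try {
            notes.entrySet().stream()
                 .map(e -> { notes.put("copy-" + e.getKey(), e.getValue()); return e.getKey(); })
                 .forEach(k -> { });
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Map<String, String> notes = new HashMap<>();
        notes.put("note1", "a");
        notes.put("note2", "b");
        System.out.println("CME thrown: " + modifyWhileStreaming(notes));

        // One possible fix: iterate over a snapshot so concurrent writers
        // cannot invalidate the stream mid-traversal.
        List<String> ids = new ArrayList<>(notes.keySet());
        System.out.println("snapshot ids: " + ids.size());
    }
}
```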
[jira] [Created] (ZEPPELIN-5236) ConcurrentModificationException in NotebookAuthorizationInfoSaving
Jeff Zhang created ZEPPELIN-5236: Summary: ConcurrentModificationException in NotebookAuthorizationInfoSaving Key: ZEPPELIN-5236 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5236 Project: Zeppelin Issue Type: Improvement Components: zeppelin-zengine Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5235) Add button to stop running note
Jeff Zhang created ZEPPELIN-5235: Summary: Add button to stop running note Key: ZEPPELIN-5235 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5235 Project: Zeppelin Issue Type: Improvement Components: zeppelin-zengine Affects Versions: 0.9.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5234) Increase default value of ZEPPELIN_INTERPRETER_CONNECTION_POOL_SIZE
Jeff Zhang created ZEPPELIN-5234: Summary: Increase default value of ZEPPELIN_INTERPRETER_CONNECTION_POOL_SIZE Key: ZEPPELIN-5234 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5234 Project: Zeppelin Issue Type: Improvement Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5233) Clear output before running paragraph
Jeff Zhang created ZEPPELIN-5233: Summary: Clear output before running paragraph Key: ZEPPELIN-5233 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5233 Project: Zeppelin Issue Type: Improvement Components: zeppelin-zengine Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5232) Default value of zeppelin server memory & interpreter memory should be 1024m
Jeff Zhang created ZEPPELIN-5232: Summary: Default value of zeppelin server memory & interpreter memory should be 1024m Key: ZEPPELIN-5232 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5232 Project: Zeppelin Issue Type: Improvement Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5231) Livy Interpreter doesn't support Japanese Character - Encoding Issue
Sai Charan G created ZEPPELIN-5231: -- Summary: Livy Interpreter doesn't support Japanese Character - Encoding Issue Key: ZEPPELIN-5231 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5231 Project: Zeppelin Issue Type: Bug Components: zeppelin-interpreter Affects Versions: 0.8.0 Reporter: Sai Charan G Attachments: Screenshot 2021-02-02 at 11.24.35 AM.png Livy interpreter is not encoding Japanese characters !Screenshot 2021-02-02 at 11.24.35 AM.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5230) Apache Zeppelin 0.8 and 0.9 configured for OIDC redirects to http://localhost:8081/null
Alfredo Revilla created ZEPPELIN-5230: - Summary: Apache Zeppelin 0.8 and 0.9 configured for OIDC redirects to http://localhost:8081/null Key: ZEPPELIN-5230 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5230 Project: Zeppelin Issue Type: Bug Environment: * Windows 10 * JDK1.8 Reporter: Alfredo Revilla I've tried with both Apache Zeppelin 0.8 and 0.9 + pac4j and the problem is the same. When visiting the app root at http://localhost:8081/ I get redirected to http://localhost:8081/null. log4j does not output anything that may help. This is my shiro.ini file:
{{[main]}}
{{sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager}}
{{securityManager.sessionManager = $sessionManager}}
{{securityManager.sessionManager.globalSessionTimeout = 8640}}
{{oidcConfig = org.pac4j.oidc.config.OidcConfiguration}}
{{oidcConfig.discoveryURI = http://localhost:8080/auth/realms/Test/.well-known/openid-configuration}}
{{oidcConfig.clientId = Zeppelin}}
{{oidcConfig.secret = e15b220e-9b3c-4997-9a76-81086e3e1ca3}}
{{oidcConfig.clientAuthenticationMethodAsString = client_secret_basic}}
{{oidcClient = org.pac4j.oidc.client.OidcClient}}
{{oidcClient.configuration = $oidcConfig}}
{{clients = org.pac4j.core.client.Clients}}
{{clients.callbackUrl = http://localhost:8081/api/callback}}
{{clients.clients = $oidcClient}}
{{requireRoleAdmin = org.pac4j.core.authorization.authorizer.RequireAnyRoleAuthorizer}}
{{config = org.pac4j.core.config.Config}}
{{config.clients = $clients}}
{{pac4jRealm = io.buji.pac4j.realm.Pac4jRealm}}
{{pac4jSubjectFactory = io.buji.pac4j.subject.Pac4jSubjectFactory}}
{{securityManager.subjectFactory = $pac4jSubjectFactory}}
{{oidcSecurityFilter = io.buji.pac4j.filter.SecurityFilter}}
{{oidcSecurityFilter.config = $config}}
{{oidcSecurityFilter.clients = oidcClient}}
{{callbackFilter = io.buji.pac4j.filter.CallbackFilter}}
{{callbackFilter.defaultUrl = http://localhost:8081}}
{{callbackFilter.config = $config}}
{{[urls]}}
{{/api/version = anon}}
{{/api/callback = callbackFilter}}
{{/** = oidcSecurityFilter}}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5229) Update flink 1.10 to 1.10.3
Jeff Zhang created ZEPPELIN-5229: Summary: Update flink 1.10 to 1.10.3 Key: ZEPPELIN-5229 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5229 Project: Zeppelin Issue Type: Improvement Components: flink Affects Versions: 0.9.0, 0.10.0 Reporter: Jeff Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5228) IPySpark unsupported environment would cause other spark interpreter fail
Jeff Zhang created ZEPPELIN-5228: Summary: IPySpark unsupported environment would cause other spark interpreter fail Key: ZEPPELIN-5228 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5228 Project: Zeppelin Issue Type: Bug Components: spark Affects Versions: 0.9.0 Reporter: Jeff Zhang How to reproduce it: # Run %spark.ipyspark in an environment where ipyspark is unsupported, e.g. one missing jupyter-client. # Then run any other spark interpreter; it will fail to run any Scala or SQL code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5227) Zeppelin 0.9.0 cannot create paragraph after call create note http api
Zhubowen created ZEPPELIN-5227: -- Summary: Zeppelin 0.9.0 cannot create paragraph after call create note http api Key: ZEPPELIN-5227 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5227 Project: Zeppelin Issue Type: Bug Components: zeppelin-server Affects Versions: 0.9.0 Reporter: Zhubowen Hi With zeppelin 0.9.0, when I call the create note http api, like this: POST [http://[zeppelin-server]:[zeppelin-port]/api/notebook] JSON: \{"name": "test"} The response is OK, but no paragraph is initialized, and I cannot create any paragraph either. !image-2021-01-28-18-47-05-181.png|width=514,height=337! -- This message was sent by Atlassian Jira (v8.3.4#803005)
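As a possible workaround: the Zeppelin notebook REST API accepts a {{paragraphs}} array in the create-note request body, so the note can be created with an initial paragraph rather than an empty one. The payload builder below is an illustrative sketch (the field names come from the REST API docs; the helper itself is invented for this example, and it does no JSON escaping).

```java
public class CreateNotePayload {
    // Build the JSON body for POST /api/notebook. Supplying a "paragraphs"
    // array creates the note with an initial paragraph. NOTE: naive string
    // concatenation, illustration only -- real code should use a JSON library.
    static String withInitialParagraph(String noteName, String paragraphText) {
        return "{\"name\": \"" + noteName + "\", "
             + "\"paragraphs\": [{\"text\": \"" + paragraphText + "\"}]}";
    }

    public static void main(String[] args) {
        System.out.println(withInitialParagraph("test", "%md hello"));
    }
}
```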
[jira] [Created] (ZEPPELIN-5226) Zeppelin Client update paragraph issues
jinpeng.chen created ZEPPELIN-5226: -- Summary: Zeppelin Client update paragraph issues Key: ZEPPELIN-5226 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5226 Project: Zeppelin Issue Type: Bug Components: zeppelin-client Affects Versions: 0.9.0 Reporter: jinpeng.chen I wanted to update the first line of the paragraph with the following configuration: %flink.ssql(runAsOne=true,savepointDir=hdfs:///tmp/flink/savepoint,execution.savepoint.path= xx,resumeFromSavepoint=false) using the zeppelin client api method: !image-2021-01-28-17-47-03-535.png! But after the update, it looks like this: !https://static.dingtalk.com/media/lALPDiCpt1F1FZLNAcTNDIY_3206_452.png?auth_bizType=IM_bizEntity=%7B%22cid%22%3A%225344911%3A433569138%22%2C%22msgId%22%3A%226122544543334%22%7D=im_id=5344911! !https://static.dingtalk.com/media/lALPDgfLQNAITYvNAUzNCTw_2364_332.png?auth_bizType=IM_bizEntity=%7B%22cid%22%3A%225344911%3A433569138%22%2C%22msgId%22%3A%226122528699240%22%7D=im_id=5344911! !https://static.dingtalk.com/media/lALPDgfLQNAKh2HNAtzNDdg_3544_732.png?auth_bizType=IM_bizEntity=%7B%22cid%22%3A%225344911%3A433569138%22%2C%22msgId%22%3A%226122496955417%22%7D=im_id=5344911! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ZEPPELIN-5225) RemoteInterpreterManagedProcess soft shutdown and abstraction
Philipp Dallig created ZEPPELIN-5225: Summary: RemoteInterpreterManagedProcess soft shutdown and abstraction Key: ZEPPELIN-5225 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5225 Project: Zeppelin Issue Type: Improvement Components: Core Affects Versions: 0.9.1, 0.10.0 Reporter: Philipp Dallig Assignee: Philipp Dallig During development I noticed many shutdown errors from remote interpreters. {code}
WARN [2021-01-25 10:43:33,274] ({Exec Default Executor} ProcessLauncher.java[onProcessFailed]:134) - Process with cmd [/home/runner/work/zeppelin/zeppelin/zeppelin-zengine/../bin/interpreter.sh, -d, /home/runner/work/zeppelin/zeppelin/zeppelin-zengine/../interpreter_NotebookTest/test, -c, 10.1.0.4, -p, 40207, -r, :, -i, test-isolated-2FYUBYUH2-2021-01-25_10-43-31, -l, /home/runner/work/zeppelin/zeppelin/zeppelin-zengine/../local-repo/test, -g, test] is failed due to
org.apache.commons.exec.ExecuteException: Process exited with an error: 143 (Exit value: 143)
	at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404)
	at org.apache.commons.exec.DefaultExecutor.access$200(DefaultExecutor.java:48)
	at org.apache.commons.exec.DefaultExecutor$1.run(DefaultExecutor.java:200)
	at java.lang.Thread.run(Thread.java:748)
{code} The Zeppelin server does not wait for a clean shutdown of the remote interpreter; it stops the process hard. The relevant code is located in [RemoteInterpreterManagedProcess|https://github.com/apache/zeppelin/blob/d63289a47a9ed26098ad93cb62ae1660bb937182/zeppelin-zengine/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterManagedProcess.java#L138-L157].
We should also abstract the RemoteInterpreterManagedProcess class and move the exec code to a new class, because RemoteInterpreterManagedProcess contains a lot of code that is only needed when the Zeppelin server controls a remote interpreter via exec. Meanwhile, many remote interpreter processes are started by API calls to a cluster manager (e.g. K8s, YARN, Docker) and cannot reuse the code in RemoteInterpreterManagedProcess. -- This message was sent by Atlassian Jira (v8.3.4#803005)
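A soft shutdown of a locally exec'ed interpreter could look like the sketch below. This is illustrative and Unix-centric, not the actual Zeppelin code: exit value 143 in the log above is 128+15, i.e. the process was killed by SIGTERM without being given time to exit cleanly, which a grace period avoids.

```java
import java.util.concurrent.TimeUnit;

public class SoftShutdown {
    // Illustrative soft-shutdown sequence: ask the interpreter process to
    // stop, grant a grace period, and only force-kill when it expires.
    static boolean shutdown(Process p, long graceMillis) throws InterruptedException {
        p.destroy(); // polite termination request (SIGTERM on Unix)
        if (p.waitFor(graceMillis, TimeUnit.MILLISECONDS)) {
            return true; // process exited within the grace period
        }
        p.destroyForcibly(); // SIGKILL only as a last resort
        p.waitFor();
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for an interpreter process (assumes a Unix `sleep` binary).
        Process p = new ProcessBuilder("sleep", "30").start();
        System.out.println("clean exit: " + shutdown(p, 2000));
    }
}
```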
[jira] [Created] (ZEPPELIN-5224) Suppress ConfigurationException
Philipp Dallig created ZEPPELIN-5224: Summary: Suppress ConfigurationException Key: ZEPPELIN-5224 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5224 Project: Zeppelin Issue Type: Improvement Affects Versions: 0.9.1, 0.10.0 Reporter: Philipp Dallig Assignee: Philipp Dallig During the tests I often see this stack trace. We should suppress the stack trace by default. {code:java}
org.apache.commons.configuration2.ex.ConfigurationException: Could not locate: org.apache.commons.configuration2.io.FileLocator@2049a9c1[fileName=zeppelin-site.xml,basePath=/conf/,sourceURL=,encoding=,fileSystem=,locationStrategy=org.apache.commons.configuration2.io.CombinedLocationStrategy@6a47b187]
	at org.apache.commons.configuration2.io.FileLocatorUtils.locateOrThrow(FileLocatorUtils.java:345)
	at org.apache.commons.configuration2.io.FileHandler.load(FileHandler.java:971)
	at org.apache.commons.configuration2.io.FileHandler.load(FileHandler.java:701)
	at org.apache.commons.configuration2.builder.FileBasedConfigurationBuilder.initFileHandler(FileBasedConfigurationBuilder.java:311)
	at org.apache.commons.configuration2.builder.FileBasedConfigurationBuilder.initResultInstance(FileBasedConfigurationBuilder.java:290)
	at org.apache.commons.configuration2.builder.FileBasedConfigurationBuilder.initResultInstance(FileBasedConfigurationBuilder.java:59)
	at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResult(BasicConfigurationBuilder.java:420)
	at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.getConfiguration(BasicConfigurationBuilder.java:284)
	at org.apache.zeppelin.conf.ZeppelinConfiguration.loadXMLConfig(ZeppelinConfiguration.java:104)
	at org.apache.zeppelin.conf.ZeppelinConfiguration.<init>(ZeppelinConfiguration.java:83)
	at org.apache.zeppelin.conf.ZeppelinConfiguration.create(ZeppelinConfiguration.java:131)
	at org.apache.zeppelin.conf.ZeppelinConfiguration.create(ZeppelinConfiguration.java:121)
	at org.apache.zeppelin.dep.AbstractDependencyResolver.<init>(AbstractDependencyResolver.java:53)
	at org.apache.zeppelin.dep.DependencyResolver.<init>(DependencyResolver.java:59)
	at org.apache.zeppelin.dep.DependencyResolverTest.setUp(DependencyResolverTest.java:49)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)