[GitHub] [zeppelin] Reamer commented on a change in pull request #4118: [ZEPPELIN-5370] Polish pom.xml files
Reamer commented on a change in pull request #4118: URL: https://github.com/apache/zeppelin/pull/4118#discussion_r634925921

## File path: zeppelin-plugins/pom.xml

@@ -25,10 +25,8 @@ zeppelin org.apache.zeppelin 0.10.0-SNAPSHOT -..

Review comment: You are right, I changed my mind because of the good xsd. Defaults should not be declared, so you should remove this.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [zeppelin] zjffdu commented on a change in pull request #4120: [ZEPPELIN-5374] Don't update paragraph config when latest checkpoint of flink is unchanged
zjffdu commented on a change in pull request #4120: URL: https://github.com/apache/zeppelin/pull/4120#discussion_r634877594

## File path: flink/interpreter/src/main/java/org/apache/zeppelin/flink/JobManager.java

@@ -254,11 +255,12 @@ public void run() {
     if (completedObject.has("external_path")) {
       String checkpointPath = completedObject.getString("external_path");
       LOGGER.debug("Latest checkpoint path: {}", checkpointPath);
-      if (!StringUtils.isBlank(checkpointPath)) {
+      if (!StringUtils.isBlank(checkpointPath) && !checkpointPath.equals(latestCheckpointPath)) {

Review comment: Actually, the two `!` operators are more readable here. The condition reads as: the checkpoint path is not blank, and it is not the same as latestCheckpointPath. If I change it to a single `!`, it is a little harder to understand what the condition means.
[GitHub] [zeppelin] zjffdu commented on a change in pull request #4121: [ZEPPELIN-5376] Polish server and service package codes of zeppelin-server
zjffdu commented on a change in pull request #4121: URL: https://github.com/apache/zeppelin/pull/4121#discussion_r634876522

## File path: zeppelin-server/src/main/java/org/apache/zeppelin/service/ShiroAuthenticationService.java

@@ -407,8 +407,8 @@ public boolean isAuthenticated() {
     Connection con = null;
     PreparedStatement ps = null;
     ResultSet rs = null;
-    DataSource dataSource = null;
-    String authQuery = "";
+    DataSource dataSource;
+    String authQuery;

Review comment: Why change only these 2 variables?
[GitHub] [zeppelin] zjffdu commented on a change in pull request #4121: [ZEPPELIN-5376] Polish server and service package codes of zeppelin-server
zjffdu commented on a change in pull request #4121: URL: https://github.com/apache/zeppelin/pull/4121#discussion_r634876091

## File path: zeppelin-server/src/main/java/org/apache/zeppelin/service/NoAuthenticationService.java

@@ -30,7 +30,7 @@ import org.slf4j.LoggerFactory;

 public class NoAuthenticationService implements AuthenticationService {
-  private static Logger logger = LoggerFactory.getLogger(NoAuthenticationService.class);
+  private static final Logger logger = LoggerFactory.getLogger(NoAuthenticationService.class);

Review comment: Rename it to `LOGGER`; this is our naming convention for static final variables.
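The convention asked for in the review can be sketched minimally; `java.util.logging` stands in for slf4j here so the snippet is self-contained, and the class name is illustrative:

```java
import java.util.logging.Logger;

// Sketch of the naming convention from the review: static final fields are
// written in UPPER_CASE, so the logger field is LOGGER rather than logger.
public class NoAuthDemo {
    static final Logger LOGGER = Logger.getLogger(NoAuthDemo.class.getName());

    public boolean isAuthenticated() {
        // A no-auth service treats every caller as authenticated.
        LOGGER.fine("no authentication configured, caller is authenticated");
        return true;
    }
}
```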
[GitHub] [zeppelin] cuspymd opened a new pull request #4121: [ZEPPELIN-5376] Polish server and service package codes of zeppelin-server
cuspymd opened a new pull request #4121: URL: https://github.com/apache/zeppelin/pull/4121

### What is this PR for?
Polish server and service package codes of zeppelin-server:
- Delete unused imports
- Delete and optimize redundant code
- Add the `final` keyword to some variables whose references never change

### What type of PR is it?
[Refactoring]

### What is the Jira issue?
* [ZEPPELIN-5376] Polish server and service package codes of zeppelin-server

### How should this be tested?
* CI

### Questions:
* Do the license files need updating? No
* Are there breaking changes for older versions? No
* Does this need documentation? No
[GitHub] [zeppelin] zjffdu commented on pull request #4097: [ZEPPELIN-5330]. Support conda env for python interpreter in yarn mode
zjffdu commented on pull request #4097: URL: https://github.com/apache/zeppelin/pull/4097#issuecomment-843283503

> I was aware of this, but it seems that downloading dependencies several times is the way of `spark.archives`. It is clear that this is not optimal.

If I read the Spark code correctly, Spark only downloads `spark.archives` on the driver side and distributes them to executors via its internal driver-executor mechanism.

> By local, do you mean the local file system of the Zeppelin server? In my environment, the Zeppelin user does not have access to the local file system of the Zeppelin server. Therefore, I would prefer a remote endpoint that is under the control of the Zeppelin user.

> I understand your development approach and it sounds great, but I think this is not suitable for a production environment.

Let me clarify: it is not only the local file system, it could be any Hadoop-compatible file system, such as HDFS or S3.

> Maybe we can support the download in `JupyterKernelInterpreter.java` with an additional property. Then it should not matter whether the files were provided by YARN or the download.

Actually, for spark yarn mode it would still use YARN's cache mechanism to distribute archives [1]. Of course, for k8s mode we can use another property to download the archive in `JupyterKernelInterpreter.java` for the pure python interpreter, but for pyspark it is not necessary, because it is already done by SparkSubmit [2].

* [1] https://github.com/apache/spark/blob/master/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L1663
* [2] https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala#L391
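The `spark.archives` mechanism being discussed can be sketched as configuration; the archive path and the `environment` alias below are illustrative, not taken from the PR:

```properties
# Ship a packed conda env to the driver and executors; Spark unpacks it
# under the alias given after '#'. On YARN, distribution goes through the
# YARN resource cache mentioned above.
spark.archives=hdfs:///tmp/pyspark_conda_env.tar.gz#environment
# Point the Python workers at the interpreter inside the unpacked archive.
spark.pyspark.python=./environment/bin/python
```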
[jira] [Created] (ZEPPELIN-5376) Polish server and service package codes of zeppelin-server
Myoungdo Park created ZEPPELIN-5376:
Summary: Polish server and service package codes of zeppelin-server
Key: ZEPPELIN-5376
URL: https://issues.apache.org/jira/browse/ZEPPELIN-5376
Project: Zeppelin
Issue Type: Sub-task
Components: zeppelin-server
Reporter: Myoungdo Park
Assignee: Myoungdo Park

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [zeppelin] cuspymd commented on a change in pull request #4118: [ZEPPELIN-5370] Polish pom.xml files
cuspymd commented on a change in pull request #4118: URL: https://github.com/apache/zeppelin/pull/4118#discussion_r634501953

## File path: zeppelin-plugins/pom.xml

@@ -25,10 +25,8 @@ zeppelin org.apache.zeppelin 0.10.0-SNAPSHOT -.. -org.apache.zeppelin

Review comment: Upon checking, only the version value of "r/pom.xml" is still "0.9.0-SNAPSHOT". Am I right that it was missed in the version update changes?
[GitHub] [zeppelin] cuspymd commented on a change in pull request #4118: [ZEPPELIN-5370] Polish pom.xml files
cuspymd commented on a change in pull request #4118: URL: https://github.com/apache/zeppelin/pull/4118#discussion_r634479834

## File path: zeppelin-plugins/pom.xml

@@ -25,10 +25,8 @@ zeppelin org.apache.zeppelin 0.10.0-SNAPSHOT -..

Review comment: The default value of the `relativePath` tag is "../pom.xml". Nevertheless, do you think it should be declared explicitly?

> The relative path of the parent pom.xml file within the check out. If not specified, it defaults to ../pom.xml.
> https://maven.apache.org/xsd/maven-4.0.0.xsd
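Concretely, the cleaned-up parent block would rely on that default. This fragment is a hedged reconstruction (the flattened diff above no longer shows the exact tags):

```xml
<parent>
  <artifactId>zeppelin</artifactId>
  <groupId>org.apache.zeppelin</groupId>
  <version>0.10.0-SNAPSHOT</version>
  <!-- <relativePath> is omitted: per maven-4.0.0.xsd it defaults to
       ../pom.xml, which matches where the parent POM actually lives -->
</parent>
```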
[GitHub] [zeppelin] Reamer edited a comment on pull request #4097: [ZEPPELIN-5330]. Support conda env for python interpreter in yarn mode
Reamer edited a comment on pull request #4097: URL: https://github.com/apache/zeppelin/pull/4097#issuecomment-843240046

Hi @zjffdu,

> We can leverage yarn's resource cache mechanism. That means the same conda env downloaded by yarn_app_1 can be reused by yarn_app_2. If we download it in JupyterKernelInterpreter.java, it may cause network congestion if many python interpreters run at the same time.

I was aware of this, but it seems that downloading dependencies several times is the way of `spark.archives`. It is clear that this is not optimal.

> No need to upload conda archives to hdfs; just use the local file system to store the conda env. This would make development much smoother: users verify the local conda env in a local environment and then move it to the yarn environment in production.

By local, do you mean the local file system of the Zeppelin server? In my environment, the Zeppelin user does not have access to the local file system of the Zeppelin server. Therefore, I would prefer a remote endpoint that is under the control of the Zeppelin user.

I understand your development approach and it sounds great, but I think it is not suitable for a production environment.

Maybe we can support the download in `JupyterKernelInterpreter.java` with an additional property. Then it should not matter whether the files were provided by YARN or the download.
[jira] [Created] (ZEPPELIN-5375) Stack traces exposed in REST API
Joris Gillis created ZEPPELIN-5375: -- Summary: Stack traces exposed in REST API Key: ZEPPELIN-5375 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5375 Project: Zeppelin Issue Type: Bug Affects Versions: 0.9.0 Reporter: Joris Gillis When an error occurs in Zeppelin after a REST call is made, a stack trace is returned. While this can be very useful during debugging, it would be nice to be able to switch it off in production. (See also: [https://security.stackexchange.com/questions/19130/is-a-stack-trace-of-a-server-application-a-vulnerability).] For example, if you create a new notebook, a POST call is sent to the Zeppelin backend (/api/notebook). If that request is replayed after a first successful run, the result is: {code:java} {"exception":"IOException","message":"java.io.IOException: Note \u0027/A new notebook\u0027 existed","stacktrace":"java.io.IOException: java.io.IOException: Note \u0027/A new notebook\u0027 existed\n\tat org.apache.zeppelin.rest.AbstractRestApi$RestServiceCallback.onFailure(AbstractRestApi.java:54)\n\tat org.apache.zeppelin.service.NotebookService.createNote(NotebookService.java:169)\n\tat org.apache.zeppelin.rest.NotebookRestApi.createNote(NotebookRestApi.java:392)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)\n\tat 
org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)\n\tat org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)\n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)\n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:292)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:274)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:244)\n\tat org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)\n\tat org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)\n\tat org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)\n\tat org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)\n\tat org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)\n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)\n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)\n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)\n\tat org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791)\n\tat org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)\n\tat org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)\n\tat 
org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)\n\tat org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)\n\tat org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)\n\tat org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)\n\tat org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)\n\tat org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)\n\tat org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)\n\tat org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)\n\tat org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:450)\n\tat
[GitHub] [zeppelin] cuspymd commented on a change in pull request #4116: [ZEPPELIN-5352] Support flink in k8s mode
cuspymd commented on a change in pull request #4116: URL: https://github.com/apache/zeppelin/pull/4116#discussion_r634453738

## File path: flink/interpreter/src/main/scala/org/apache/zeppelin/flink/FlinkScalaInterpreter.scala

@@ -182,16 +182,27 @@ class FlinkScalaInterpreter(val properties: Properties) {
       properties.getProperty("flink.execution.mode", "LOCAL")
         .replace("-", "_")
         .toUpperCase)
-    if (mode == ExecutionMode.YARN_APPLICATION) {
+
+    if (ExecutionMode.isYarnAppicationMode(mode)) {
       if (flinkVersion.isFlink110) {
-        throw new Exception("yarn-application mode is only supported after Flink 1.11")
+        throw new Exception("application mode is only supported after Flink 1.11")
       }
       // use current yarn container working directory as FLINK_HOME, FLINK_CONF_DIR and HIVE_CONF_DIR
       val workingDirectory = new File(".").getAbsolutePath
       flinkHome = workingDirectory
       flinkConfDir = workingDirectory
       hiveConfDir = workingDirectory
+    } else if (ExecutionMode.isK8sApplicationMode(mode)) {
+      if (flinkVersion.isFlink110) {
+        throw new Exception("application mode is only supported after Flink 1.11")
+      }
+      // use current k8s working directory as FLINK_HOME
+      val workingDirectory = new File(".").getAbsolutePath
+      flinkHome = workingDirectory

Review comment: The repeated code might be moved outside the if statement (line 186).
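A minimal Java sketch of the suggested hoisting (the original is Scala; the boolean flags, the class name, and which fields each branch sets are simplified stand-ins, not the PR's exact code):

```java
import java.io.File;

// Hoists the working-directory assignments shared by the YARN and K8s
// application-mode branches out of the if/else, as the review suggests.
public class FlinkHomeSetup {
    String flinkHome;
    String flinkConfDir;
    String hiveConfDir;

    void configure(boolean yarnMode, boolean k8sMode, boolean isFlink110) {
        if (yarnMode || k8sMode) {
            if (isFlink110) {
                throw new IllegalStateException(
                        "application mode is only supported after Flink 1.11");
            }
            // Shared by both branches: use the container working directory.
            String workingDirectory = new File(".").getAbsolutePath();
            flinkHome = workingDirectory;
            flinkConfDir = workingDirectory;
            if (yarnMode) {
                // Only the YARN branch also overrides HIVE_CONF_DIR.
                hiveConfDir = workingDirectory;
            }
        }
    }
}
```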
[GitHub] [zeppelin] cuspymd commented on a change in pull request #4120: [ZEPPELIN-5374] Don't update paragraph config when latest checkpoint of flink is unchanged
cuspymd commented on a change in pull request #4120: URL: https://github.com/apache/zeppelin/pull/4120#discussion_r634442983

## File path: flink/interpreter/src/main/java/org/apache/zeppelin/flink/JobManager.java

@@ -254,11 +255,12 @@ public void run() {
     if (completedObject.has("external_path")) {
       String checkpointPath = completedObject.getString("external_path");
       LOGGER.debug("Latest checkpoint path: {}", checkpointPath);
-      if (!StringUtils.isBlank(checkpointPath)) {
+      if (!StringUtils.isBlank(checkpointPath) && !checkpointPath.equals(latestCheckpointPath)) {

Review comment: It seems that using the "not" operator (`!`) only once is better for readability.

```suggestion
if (!(StringUtils.isBlank(checkpointPath) || checkpointPath.equals(latestCheckpointPath))) {
```
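The two forms under discussion are equivalent by De Morgan's law; a small sketch (with a minimal stand-in for Apache Commons `StringUtils.isBlank`) makes that concrete:

```java
// Compares the PR's two-negation guard with the suggested single-negation form.
public class CheckpointGuard {
    // Minimal stand-in for org.apache.commons.lang3.StringUtils.isBlank.
    static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    // Form in the PR: "path is non-blank AND differs from the latest one".
    static boolean shouldUpdate(String path, String latest) {
        return !isBlank(path) && !path.equals(latest);
    }

    // Suggested form: one negation over the disjunction (De Morgan's law).
    static boolean shouldUpdateSuggested(String path, String latest) {
        return !(isBlank(path) || path.equals(latest));
    }
}
```

Both predicates return the same result for every input, so the choice is purely about which phrasing reads more naturally.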
[GitHub] [zeppelin] zjffdu commented on a change in pull request #4116: [ZEPPELIN-5352] Support flink in k8s mode
zjffdu commented on a change in pull request #4116: URL: https://github.com/apache/zeppelin/pull/4116#discussion_r634151978

## File path: zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterServer.java

@@ -142,6 +141,7 @@
   private ScheduledExecutorService resultCleanService = Executors.newSingleThreadScheduledExecutor();
   private boolean isTest;
+  private boolean isFlinkK8sApplicationMode = false;

Review comment: No problem, let's wait for #4089.
[GitHub] [zeppelin] Reamer commented on a change in pull request #4116: [ZEPPELIN-5352] Support flink in k8s mode
Reamer commented on a change in pull request #4116: URL: https://github.com/apache/zeppelin/pull/4116#discussion_r634085242

## File path: flink/interpreter/pom.xml

@@ -95,6 +96,13 @@ ${project.version} + + io.fabric8 + kubernetes-client + ${kubernetes.client.version} + compile

Review comment: I also have the requirement to add additional ingress k8s resources during the runtime of the zeppelin interpreter. See [ZEPPELIN-5145](https://issues.apache.org/jira/browse/ZEPPELIN-5145). In my opinion, the interpreter pod should **not** have the right to create ingress resources; the Zeppelin server service user should have that right instead. So my plan is to add an additional interface to the Zeppelin interpreter project, something like `ExternalURLSupported`. This interface would be implemented by the `InterpreterProcess` on the Zeppelin server side, e.g. `K8sRemoteInterpreterProcess.java`, and any specific interpreter could query the capability via `instanceof`.
[GitHub] [zeppelin] Reamer commented on a change in pull request #4116: [ZEPPELIN-5352] Support flink in k8s mode
Reamer commented on a change in pull request #4116: URL: https://github.com/apache/zeppelin/pull/4116#discussion_r634075607

## File path: zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterServer.java

@@ -142,6 +141,7 @@
   private ScheduledExecutorService resultCleanService = Executors.newSingleThreadScheduledExecutor();
   private boolean isTest;
+  private boolean isFlinkK8sApplicationMode = false;

Review comment: I agree with @cuspymd; changing `RemoteInterpreterServer` for a specific interpreter is not good. In general, calling `System.exit()` is not good style, so we should wait for #4089 and help @PrarthiJain if necessary. That PR removes a `System.exit(0)`. Maybe we can also remove the second [`System.exit(1)`](https://github.com/apache/zeppelin/blob/97ce203a7fb72fd0300ff84ea5fa8e52f6125ebc/zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterServer.java#L705). If we cannot remove the `System.exit()`, then we should work with an interpreter-independent variable that is set via the start command or evaluated at start-up.