[jira] [Created] (ZEPPELIN-4066) Introduce ProcessLauncher to encapsulate process launch
Jeff Zhang created ZEPPELIN-4066: Summary: Introduce ProcessLauncher to encapsulate process launch Key: ZEPPELIN-4066 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4066 Project: Zeppelin Issue Type: Improvement Reporter: Jeff Zhang There are several places in Zeppelin that launch processes, such as the interpreter process and the Python process. It would be helpful to provide a utility class, ProcessLauncher, to encapsulate this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
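As a rough illustration of what such a utility might look like, here is a minimal sketch; the class shape and method names are assumptions for illustration, not the actual ZEPPELIN-4066 API:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

// Hypothetical sketch of a ProcessLauncher utility: one place that owns
// building, starting, waiting for, and reading a child process, so that
// interpreter launch and python launch code need not repeat this logic.
public class ProcessLauncher {
  private final List<String> command;
  private final Map<String, String> env;
  private Process process;

  public ProcessLauncher(List<String> command, Map<String, String> env) {
    this.command = command;
    this.env = env;
  }

  // Start the process with stderr merged into stdout.
  public void launch() throws IOException {
    ProcessBuilder builder = new ProcessBuilder(command);
    builder.redirectErrorStream(true);
    builder.environment().putAll(env);
    process = builder.start();
  }

  // Collect everything the process wrote to its merged stdout/stderr.
  public String readOutput() throws IOException {
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(process.getInputStream()))) {
      return reader.lines().collect(Collectors.joining("\n"));
    }
  }

  // Wait for exit; kill the process and report -1 on timeout.
  public int waitFor(long timeoutSeconds) throws InterruptedException {
    if (!process.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
      process.destroyForcibly();
      return -1;
    }
    return process.exitValue();
  }
}
```

A caller would construct it with the interpreter or Python command line plus environment, then launch, read, and wait.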
[GitHub] [zeppelin] asfgit closed pull request #3330: [ZEPPELIN-4054] Fix LdapGroupRealm path in shiro doc
asfgit closed pull request #3330: [ZEPPELIN-4054] Fix LdapGroupRealm path in shiro doc URL: https://github.com/apache/zeppelin/pull/3330 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [zeppelin] zjffdu commented on issue #3313: [ZEPPELIN-4004] add a systemd unit file to launch the Zeppelin daemon via systemd commands
zjffdu commented on issue #3313: [ZEPPELIN-4004] add a systemd unit file to launch the Zeppelin daemon via systemd commands URL: https://github.com/apache/zeppelin/pull/3313#issuecomment-472309416 @liuxunorg Could you help review it? Thanks
[GitHub] [zeppelin] zjffdu commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
zjffdu commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file URL: https://github.com/apache/zeppelin/pull/3320#discussion_r264999491 ## File path: docs/setup/operation/upgrading.md ## @@ -35,6 +35,12 @@ So, copying `notebook` and `conf` directory should be enough. ## Migration Guide +### Upgrading from Zeppelin 0.8 to 0.9 + + - From 0.9, we changed the note file name structure ([ZEPPELIN-2619](https://issues.apache.org/jira/browse/ZEPPELIN-2619)). So when you upgrade Zeppelin to 0.9, you need to upgrade the note files. Here are the steps you need to follow: Review comment: ping @felixcheung
[jira] [Created] (ZEPPELIN-4068) implement MongoNotebookRepo
yx91490 created ZEPPELIN-4068: Summary: implement MongoNotebookRepo Key: ZEPPELIN-4068 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4068 Project: Zeppelin Issue Type: New Feature Affects Versions: 0.9.0 Reporter: yx91490
[GitHub] [zeppelin] liuxunorg commented on issue #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
liuxunorg commented on issue #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file URL: https://github.com/apache/zeppelin/pull/3320#issuecomment-472312177 @zjffdu How about adding a Zeppelin version number field to the note, for example `version=0.8.1`? Then Zeppelin only needs to keep the parsing code for each version, and notes from different versions can be parsed correctly.
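For illustration, a note file carrying such a field might look like this (a hypothetical sketch; only the `version` field reflects the suggestion, and the other keys are abbreviated):

```json
{
  "id": "2A94M5J1Z",
  "name": "my note",
  "version": "0.8.1",
  "paragraphs": []
}
```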
[GitHub] [zeppelin] zjffdu commented on issue #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
zjffdu commented on issue #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file URL: https://github.com/apache/zeppelin/pull/3320#issuecomment-472457940 @liuxunorg Thanks for the suggestion, I have added the version.
Season of Docs 2019
https://developers.google.com/season-of-docs/ Sounds like a good idea? Google has announced the inaugural year of Season of Docs, a Google program that fosters collaboration between open source projects and technical writers. Season of Docs is similar to Summer of Code, but with a focus on open source documentation and technical writers. Details are on the website: g.co/seasonofdocs.
[GitHub] [zeppelin] asfgit closed pull request #3312: [ZEPPELIN-4004] Fix RemoteResource.invokeMethod()
asfgit closed pull request #3312: [ZEPPELIN-4004] Fix RemoteResource.invokeMethod() URL: https://github.com/apache/zeppelin/pull/3312
[GitHub] [zeppelin] Leemoonsoo edited a comment on issue #3312: [ZEPPELIN-4004] Fix RemoteResource.invokeMethod()
Leemoonsoo edited a comment on issue #3312: [ZEPPELIN-4004] Fix RemoteResource.invokeMethod() URL: https://github.com/apache/zeppelin/pull/3312#issuecomment-471790630 Merge to master if no further comment.
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265397993 ## File path: scripts/docker/submarine/submarine-interpreter-cpu-1.0.0/Dockerfile ## @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +FROM zeppelin-tensorflow_1.10-hadoop_3.1.2-cpu:1.0.0 Review comment: 1. I have carefully considered the issue of the Docker image name; otherwise it will be difficult to maintain in the future. 2. The name needs to include the name and version number of each main component in the image. 3. **Naming rules** + `_` is the separator between different systems, for example `tensorflow` and `hadoop` + `-` is the connector between a system and its version number, for example `hadoop-3.1.2`
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265398900 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineInterpreter.java ## @@ -0,0 +1,275 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.lang.StringUtils; +import org.apache.zeppelin.display.ui.OptionInput.ParamOption; +import org.apache.zeppelin.interpreter.Interpreter; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.interpreter.thrift.InterpreterCompletion; +import org.apache.zeppelin.scheduler.Scheduler; +import org.apache.zeppelin.scheduler.SchedulerFactory; +import org.apache.zeppelin.submarine.commons.SubmarineCommand; +import org.apache.zeppelin.submarine.commons.SubmarineConstants; +import org.apache.zeppelin.submarine.job.SubmarineJob; +import org.apache.zeppelin.submarine.commons.SubmarineUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.util.List; +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineCommand.CLEAN_RUNTIME_CACHE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.CHECKPOINT_PATH; +import static 
org.apache.zeppelin.submarine.commons.SubmarineConstants.CLEAN_CHECKPOINT; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_CLEAN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_RUN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_SHOW; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_USAGE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.INPUT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.MACHINELEARING_DISTRIBUTED_ENABLE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.OPERATION_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.PS_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.WORKER_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineUtils.unifyKey; + +/** + * SubmarineInterpreter of Hadoop Submarine implementation. + * Support for Hadoop Submarine cli. All the commands documented here + * https://github.com/apache/hadoop/tree/trunk/hadoop-submarine/hadoop-submarine-core + * /src/site/markdown/QuickStart.md is supported. 
+ */ +public class SubmarineInterpreter extends Interpreter { + private Logger LOGGER = LoggerFactory.getLogger(SubmarineInterpreter.class); + + // Number of submarines executed in parallel for each interpreter instance + protected int concurrentExecutedMax = 1; + + private boolean needUpdateConfig = true; + private String currentReplName = ""; + + SubmarineContext submarineContext = null; + + public SubmarineInterpreter(Properties properties) { +super(properties); + +String concurrentMax = getProperty(SubmarineConstants.SUBMARINE_CONCURRENT_MAX, "1"); +concurrentExecutedMax = Integer.parseInt(concurrentMax); + +submarineContext = SubmarineContext.getInstance(); + } + + @Override + public void open() { +LOGGER.info("SubmarineInterpreter open()"); + } + + @Override + public void close() { +submarineContext.stopAllSubmarineJob(); + } + + private void setParagraphConfig(InterpreterContext context) { +String replName = context.getReplName(); +if (StringUtils.equals(currentReplName, replName)) { + currentReplName = context.getReplName(); + needUpdateConfig = true; +} +if (needUpdateConfig) { + needUpdateConfig = false; + if (currentReplName.equals("submarine") || currentReplName.isEmpty()) { Review comment: I am at https://github.com/apache/zeppelin/pull/3271 , Has all interpreters
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265401655 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineInterpreter.java (quoted diff omitted; identical to the context quoted in the previous comment)
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265401783 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineShellInterpreter.java ## @@ -0,0 +1,107 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.exec.CommandLine; +import org.apache.commons.exec.DefaultExecutor; +import org.apache.commons.lang3.StringUtils; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterException; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.shell.ShellInterpreter; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_HADOOP_KEYTAB; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_HADOOP_PRINCIPAL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.ZEPPELIN_SUBMARINE_AUTH_TYPE; + +/** + * Submarine Shell interpreter for Zeppelin. + */ +public class SubmarineShellInterpreter extends ShellInterpreter { + private static final Logger LOGGER = LoggerFactory.getLogger(SubmarineShellInterpreter.class); + + private final boolean isWindows = System.getProperty("os.name").startsWith("Windows"); + private final String shell = isWindows ? 
"cmd /c" : "bash -c"; + + public SubmarineShellInterpreter(Properties property) { +super(property); + } + + @Override + public InterpreterResult internalInterpret(String cmd, InterpreterContext intpContext) { +// algorithm path & checkpoint path support replaces ${username} with real user name +String algorithmPath = properties.getProperty(SUBMARINE_ALGORITHM_HDFS_PATH, ""); +if (algorithmPath.contains(USERNAME_SYMBOL)) { + algorithmPath = algorithmPath.replace(USERNAME_SYMBOL, userName); + properties.setProperty(SUBMARINE_ALGORITHM_HDFS_PATH, algorithmPath); +} +String checkpointPath = properties.getProperty( +TF_CHECKPOINT_PATH, ""); +if (checkpointPath.contains(USERNAME_SYMBOL)) { + checkpointPath = checkpointPath.replace(USERNAME_SYMBOL, userName); + properties.setProperty(TF_CHECKPOINT_PATH, checkpointPath); +} + +return super.internalInterpret(cmd, intpContext); + } + + @Override + protected boolean runKerberosLogin() { +try { + createSecureConfiguration(); + return true; +} catch (Exception e) { + LOGGER.error("Unable to run kinit for zeppelin", e); +} +return false; + } + + public void createSecureConfiguration() throws InterpreterException { +Properties properties = getProperties(); +CommandLine cmdLine = CommandLine.parse(shell); +cmdLine.addArgument("-c", false); +String kinitCommand = String.format("kinit -k -t %s %s", +properties.getProperty(SUBMARINE_HADOOP_KEYTAB), +properties.getProperty(SUBMARINE_HADOOP_PRINCIPAL)); +cmdLine.addArgument(kinitCommand, false); +DefaultExecutor executor = new DefaultExecutor(); +try { + executor.execute(cmdLine); +} catch (Exception e) { + LOGGER.error("Unable to run kinit for zeppelin user " + kinitCommand, e); + throw new InterpreterException(e); +} + } + + @Override + protected boolean isKerboseEnabled() { +if (!StringUtils.isAnyEmpty(getProperty(ZEPPELIN_SUBMARINE_AUTH_TYPE)) +&& getProperty(ZEPPELIN_SUBMARINE_AUTH_TYPE) +.equalsIgnoreCase("kerberos")) { Review comment: OK. 
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265402272 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/commons/SubmarineConstants.java ## @@ -0,0 +1,119 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine.commons; + +/* + * NOTE: use lowercase + "_" for the option name + */ +public class SubmarineConstants { + // Docker container Environmental variable at `submarine-job-run-tf.jinja` + // and `/bin/interpreter.sh` + public static final String DOCKER_HADOOP_HDFS_HOME = "DOCKER_HADOOP_HDFS_HOME"; + public static final String DOCKER_JAVA_HOME= "DOCKER_JAVA_HOME"; + public static final String DOCKER_CONTAINER_TIME_ZONE = "DOCKER_CONTAINER_TIME_ZONE"; + public static final String INTERPRETER_LAUNCH_MODE = "INTERPRETER_LAUNCH_MODE"; + + // interpreter.sh Environmental variable + public static final String SUBMARINE_HADOOP_HOME = "SUBMARINE_HADOOP_HOME"; + public static final String HADOOP_YARN_SUBMARINE_JAR = "HADOOP_YARN_SUBMARINE_JAR"; + public static final String SUBMARINE_INTERPRETER_DOCKER_IMAGE + = "SUBMARINE_INTERPRETER_DOCKER_IMAGE"; + + public static final String ZEPPELIN_SUBMARINE_AUTH_TYPE = "zeppelin.submarine.auth.type"; + public static final String SUBMARINE_HADOOP_CONF_DIR = "SUBMARINE_HADOOP_CONF_DIR"; + public static final String 
SUBMARINE_HADOOP_KEYTAB= "SUBMARINE_HADOOP_KEYTAB"; + public static final String SUBMARINE_HADOOP_PRINCIPAL = "SUBMARINE_HADOOP_PRINCIPAL"; + public static final String SUBMARINE_HADOOP_KRB5_CONF = "submarine.hadoop.krb5.conf"; + + public static final String JOB_NAME = "JOB_NAME"; + public static final String CLEAN_CHECKPOINT = "CLEAN_CHECKPOINT"; + public static final String INPUT_PATH = "INPUT_PATH"; + public static final String CHECKPOINT_PATH = "CHECKPOINT_PATH"; + public static final String PS_LAUNCH_CMD = "PS_LAUNCH_CMD"; + public static final String WORKER_LAUNCH_CMD = "WORKER_LAUNCH_CMD"; + public static final String MACHINELEARING_DISTRIBUTED_ENABLE + = "machinelearing.distributed.enable"; + + public static final String ZEPPELIN_INTERPRETER_RPC_PORTRANGE + = "zeppelin.interpreter.rpc.portRange"; Review comment: This definition is used in the second PR (`YarnInterpreterLauncher`), which will be submitted shortly after this PR is merged. 1. The `submarine interpreter` runs in a remote Docker container via `YARN`. 2. It works together with the local Zeppelin server through this RPC port.
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3313: [ZEPPELIN-4004] add a systemd unit file to launch the Zeppelin daemon via systemd commands
liuxunorg commented on a change in pull request #3313: [ZEPPELIN-4004] add a systemd unit file to launch the Zeppelin daemon via systemd commands URL: https://github.com/apache/zeppelin/pull/3313#discussion_r265008164 ## File path: bin/zeppelin-systemd-service.sh ## @@ -0,0 +1,101 @@ +#!/usr/bin/env bash +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# description: Enable/disable the Zeppelin systemd service. +# + +# Directory in which the systemd unit files sit. +SYSTEMD_DIR=/etc/systemd/system + +function enable_systemd_service() +{ +# Where are we in the fs? +OLD_PWD=$(pwd) + +# Work out where the script is run from and cd into said directory. +cd "$(dirname "${BASH_SOURCE[0]}")" + +# Work out the current directory. +MY_PWD=$(readlink -f .) + +# Work out the Zeppelin source directory (go up a directory actually). +ZEPPELIN_DIR=$(dirname "${MY_PWD}") + +# Copy the unit file. +cp "${ZEPPELIN_DIR}"/scripts/systemd/zeppelin.systemd "${SYSTEMD_DIR}" + +# Swap the template variable with the right directory path. +sed -i -e "s#%ZEPPELIN_DIR%#${ZEPPELIN_DIR}#g;" \ +"${SYSTEMD_DIR}"/zeppelin.systemd + +# Set up the unit file. +systemctl daemon-reload +systemctl enable zeppelin.service + +# Display a help message. 
+echo "To start Zeppelin using systemd, simply type: +# systemctl start zeppelin + +To check the service health: +# systemctl status zeppelin" + +# Go back where we came from. +cd "${OLD_PWD}" +} + +function disable_systemd_service() +{ +# Let's mop up. +systemctl stop zeppelin.service +systemctl disable zeppelin.service +rm "${SYSTEMD_DIR}"/zeppelin.systemd +systemctl daemon-reload +systemctl reset-failed + +# We're done. Explain what's just happened. +echo "Zeppelin systemd service has been disabled and removed from your system." +} + +function check_user() +{ +# Are we root? +if [[ $(id -u) -ne 0 ]]; then +echo "Please run this script as root!" +exit -1 +fi +} + +USAGE="usage: zeppelin-systemd-service.sh {enable|disable} + + enable:enable Zeppelin systemd service. + disable: disable Zeppelin systemd service. +" + +# Main method starts from here downwards. +check_user + Review comment: check_operationSystem
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3313: [ZEPPELIN-4004] add a systemd unit file to launch the Zeppelin daemon via systemd commands
liuxunorg commented on a change in pull request #3313: [ZEPPELIN-4004] add a systemd unit file to launch the Zeppelin daemon via systemd commands URL: https://github.com/apache/zeppelin/pull/3313#discussion_r265007925 ## File path: bin/zeppelin-systemd-service.sh ## @@ -0,0 +1,101 @@ +#!/usr/bin/env bash +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# description: Enable/disable the Zeppelin systemd service. +# + +# Directory in which the systemd unit files sit. +SYSTEMD_DIR=/etc/systemd/system + +function enable_systemd_service() +{ +# Where are we in the fs? +OLD_PWD=$(pwd) + +# Work out where the script is run from and cd into said directory. +cd "$(dirname "${BASH_SOURCE[0]}")" + +# Work out the current directory. +MY_PWD=$(readlink -f .) + +# Work out the Zeppelin source directory (go up a directory actually). +ZEPPELIN_DIR=$(dirname "${MY_PWD}") + +# Copy the unit file. +cp "${ZEPPELIN_DIR}"/scripts/systemd/zeppelin.systemd "${SYSTEMD_DIR}" + +# Swap the template variable with the right directory path. +sed -i -e "s#%ZEPPELIN_DIR%#${ZEPPELIN_DIR}#g;" \ +"${SYSTEMD_DIR}"/zeppelin.systemd + +# Set up the unit file. +systemctl daemon-reload +systemctl enable zeppelin.service + +# Display a help message. 
+echo "To start Zeppelin using systemd, simply type: +# systemctl start zeppelin + +To check the service health: +# systemctl status zeppelin" + +# Go back where we came from. +cd "${OLD_PWD}" +} + +function disable_systemd_service() +{ +# Let's mop up. +systemctl stop zeppelin.service +systemctl disable zeppelin.service +rm "${SYSTEMD_DIR}"/zeppelin.systemd +systemctl daemon-reload +systemctl reset-failed + +# We're done. Explain what's just happened. +echo "Zeppelin systemd service has been disabled and removed from your system." +} + +function check_user() +{ +# Are we root? +if [[ $(id -u) -ne 0 ]]; then +echo "Please run this script as root!" +exit -1 +fi +} + Review comment: Because this script relies on systemctl and currently only supports CentOS, you need to check the operating system version. ``` function check_operationSystem() { case ${OPERATING_SYSTEM} in centos) echo "The ./bin/zeppelin-systemd-service.sh only supports the [centos] operating system." ;; *) echo "ERROR: ./bin/zeppelin-systemd-service.sh does not support the [${OPERATING_SYSTEM}] operating system!" exit -1 ;; esac } ```
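As a hedged aside to the review comment above: the suggested `check_operationSystem()` snippet reads `${OPERATING_SYSTEM}` but never sets it. One way to populate it (the variable name is taken from the review comment; the helper name and the parameterised file path are assumptions for illustration) is to read the `ID` field from `/etc/os-release`:

```shell
#!/usr/bin/env bash
# Sketch only: populate OPERATING_SYSTEM from an os-release-style file.
# The path is parameterised so the helper can be exercised against a fixture.
detect_operating_system() {
  local os_release="${1:-/etc/os-release}"
  if [[ -r "${os_release}" ]]; then
    # Source the file in a subshell so its variables don't leak,
    # then capture the distribution ID (e.g. "centos", "ubuntu").
    OPERATING_SYSTEM="$(. "${os_release}"; echo "${ID}")"
  else
    OPERATING_SYSTEM="unknown"
  fi
}
```

With this in place, `detect_operating_system` could run before the suggested `check_operationSystem()` case statement.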
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265409051 ## File path: scripts/docker/submarine/tensorflow_1.10-hadoop_3.1.2-gpu-1.0.0/Dockerfile ## @@ -0,0 +1,71 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 + +# Pick up some TF dependencies +RUN apt-get update && apt-get install -y --allow-downgrades --no-install-recommends \ + --allow-change-held-packages --allow-unauthenticated \ + build-essential libfreetype6-dev libpng12-dev \ + libzmq3-dev pkg-config python python-dev \ + rsync software-properties-common curl unzip wget grep sed vim \ +iputils-ping net-tools gdb python2.7-dbg tzdata \ +cuda-cublas-9-0 cuda-cufft-9-0 cuda-curand-9-0 cuda-cusolver-9-0 \ + cuda-cusparse-9-0 libcudnn7=7.0.5.15-1+cuda9.0 && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* + +RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -yq krb5-user libpam-krb5 && apt-get clean + +RUN curl -O https://bootstrap.pypa.io/get-pip.py && \ + python get-pip.py && \ + rm get-pip.py + +RUN echo "Install python related packages" && \ + apt-get -y update && \ + apt-get install -y gfortran && \ + # numerical/algebra packages + apt-get install -y libblas-dev libatlas-dev liblapack-dev && \ + # font, image for matplotlib + apt-get install -y libpng-dev libxft-dev && \ + # for tkinter + apt-get install -y python-tk libxml2-dev libxslt-dev zlib1g-dev + +RUN pip --no-cache-dir install Pillow h5py ipykernel jupyter matplotlib numpy pandas scipy sklearn && \ + python -m ipykernel.kernelspec + +# Set the locale +# disable bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8) +RUN apt-get clean && apt-get update && apt-get install -y locales +RUN locale-gen en_US.UTF-8 + +# Install TensorFlow GPU version. 
+ENV TENSORFLOW_VERSION="1.10.0" +RUN pip --no-cache-dir install \ + http://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl +RUN apt-get update && apt-get install git -y + +# Install hadoop +ENV HADOOP_VERSION="3.1.2" +RUN wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz Review comment: I don't mean the Hadoop version - I mean the "http://mirrors..." site. a) this mirror is known to be unreliable, and b) as a policy, all Apache downloads should go through the download script: https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.1.2/hadoop-3.1.2-src.tar.gz https://hadoop.apache.org/releases.html
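The mirror policy described in the review comment above could be sketched roughly as follows. This is a hedged sketch, not the project's actual fix: the helper names, the `closer.lua/...?action=download` resolver form, and the `archive.apache.org/dist` fallback are assumptions based on common ASF download conventions.

```shell
#!/usr/bin/env bash
# Build candidate download URLs for an Apache release artifact:
# first the ASF mirror resolver (picks a nearby mirror), then the
# permanent archive as a fallback for releases rotated off the mirrors.
asf_download_urls() {
  local path="$1"   # e.g. hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz
  echo "https://www.apache.org/dyn/closer.lua/${path}?action=download"
  echo "https://archive.apache.org/dist/${path}"
}

# Try each candidate URL in order until one download succeeds.
fetch_apache_artifact() {
  local url
  for url in $(asf_download_urls "$1"); do
    wget -q "${url}" && return 0
  done
  return 1
}
```

In the Dockerfile under review, the hard-coded `mirrors.hust.edu.cn` host could then be replaced by a call like `fetch_apache_artifact "hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz"`.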
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265409266 ## File path: scripts/docker/submarine/submarine-interpreter-cpu-1.0.0/Dockerfile ## @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +FROM zeppelin-tensorflow_1.10-hadoop_3.1.2-cpu:1.0.0 Review comment: if we are not releasing / publishing this docker image as a part of the Apache Zeppelin release, I think this is ok, otherwise a different naming will need to be used
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265409886 ## File path: scripts/docker/submarine/tensorflow_1.10-hadoop_3.1.2-gpu-1.0.0/Dockerfile ## @@ -0,0 +1,71 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 + +# Pick up some TF dependencies +RUN apt-get update && apt-get install -y --allow-downgrades --no-install-recommends \ + --allow-change-held-packages --allow-unauthenticated \ + build-essential libfreetype6-dev libpng12-dev \ + libzmq3-dev pkg-config python python-dev \ + rsync software-properties-common curl unzip wget grep sed vim \ +iputils-ping net-tools gdb python2.7-dbg tzdata \ +cuda-cublas-9-0 cuda-cufft-9-0 cuda-curand-9-0 cuda-cusolver-9-0 \ + cuda-cusparse-9-0 libcudnn7=7.0.5.15-1+cuda9.0 && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* + +RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -yq krb5-user libpam-krb5 && apt-get clean + +RUN curl -O https://bootstrap.pypa.io/get-pip.py && \ + python get-pip.py && \ + rm get-pip.py + +RUN echo "Install python related packages" && \ + apt-get -y update && \ + apt-get install -y gfortran && \ + # numerical/algebra packages + apt-get install -y libblas-dev libatlas-dev liblapack-dev && \ + # font, image for matplotlib + apt-get install -y libpng-dev libxft-dev && \ + # for tkinter + apt-get install -y python-tk libxml2-dev libxslt-dev zlib1g-dev + +RUN pip --no-cache-dir install Pillow h5py ipykernel jupyter matplotlib numpy pandas scipy sklearn && \ + python -m ipykernel.kernelspec + +# Set the locale +# disable bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8) +RUN apt-get clean && apt-get update && apt-get install -y locales +RUN locale-gen en_US.UTF-8 + +# Install TensorFlow GPU version. 
+ENV TENSORFLOW_VERSION="1.10.0" +RUN pip --no-cache-dir install \ + http://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl +RUN apt-get update && apt-get install git -y + +# Install hadoop +ENV HADOOP_VERSION="3.1.2" +RUN wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz Review comment: OK, Thanks for your correction. :-)
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265409462 ## File path: scripts/docker/submarine/submarine-interpreter-cpu-1.0.0/Dockerfile ## @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +FROM zeppelin-tensorflow_1.10-hadoop_3.1.2-cpu:1.0.0 Review comment: why is cpu at the end? maybe it should be `zeppelin-cpu-tensorflow_1.10-hadoop_3.1.2`? also any reason for tensorflow 1.10 vs the latest 1.13? https://github.com/tensorflow/tensorflow/releases/tag/v1.13.1 it looks like we need 3 point version number for tensorflow too, 1.10.0?
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265410250 ## File path: scripts/docker/submarine/submarine-interpreter-cpu-1.0.0/Dockerfile ## @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +FROM zeppelin-tensorflow_1.10-hadoop_3.1.2-cpu:1.0.0 Review comment: About the Docker image name: can you give me some suggestions? Thanks!
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265410534 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineInterpreter.java ## @@ -0,0 +1,275 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.lang.StringUtils; +import org.apache.zeppelin.display.ui.OptionInput.ParamOption; +import org.apache.zeppelin.interpreter.Interpreter; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.interpreter.thrift.InterpreterCompletion; +import org.apache.zeppelin.scheduler.Scheduler; +import org.apache.zeppelin.scheduler.SchedulerFactory; +import org.apache.zeppelin.submarine.commons.SubmarineCommand; +import org.apache.zeppelin.submarine.commons.SubmarineConstants; +import org.apache.zeppelin.submarine.job.SubmarineJob; +import org.apache.zeppelin.submarine.commons.SubmarineUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.util.List; +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineCommand.CLEAN_RUNTIME_CACHE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.CHECKPOINT_PATH; +import static 
org.apache.zeppelin.submarine.commons.SubmarineConstants.CLEAN_CHECKPOINT; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_CLEAN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_RUN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_SHOW; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_USAGE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.INPUT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.MACHINELEARING_DISTRIBUTED_ENABLE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.OPERATION_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.PS_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.WORKER_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineUtils.unifyKey; + +/** + * SubmarineInterpreter of Hadoop Submarine implementation. + * Support for Hadoop Submarine cli. All the commands documented here + * https://github.com/apache/hadoop/tree/trunk/hadoop-submarine/hadoop-submarine-core + * /src/site/markdown/QuickStart.md is supported. 
+ */ +public class SubmarineInterpreter extends Interpreter { + private Logger LOGGER = LoggerFactory.getLogger(SubmarineInterpreter.class); + + // Number of submarines executed in parallel for each interpreter instance + protected int concurrentExecutedMax = 1; + + private boolean needUpdateConfig = true; + private String currentReplName = ""; + + SubmarineContext submarineContext = null; + + public SubmarineInterpreter(Properties properties) { +super(properties); + +String concurrentMax = getProperty(SubmarineConstants.SUBMARINE_CONCURRENT_MAX, "1"); +concurrentExecutedMax = Integer.parseInt(concurrentMax); + +submarineContext = SubmarineContext.getInstance(); + } + + @Override + public void open() { +LOGGER.info("SubmarineInterpreter open()"); + } + + @Override + public void close() { +submarineContext.stopAllSubmarineJob(); + } + + private void setParagraphConfig(InterpreterContext context) { +String replName = context.getReplName(); +if (StringUtils.equals(currentReplName, replName)) { + currentReplName = context.getReplName(); + needUpdateConfig = true; +} +if (needUpdateConfig) { + needUpdateConfig = false; + if (currentReplName.equals("submarine") || currentReplName.isEmpty()) { +context.getConfig().put("editorHide", true); +context.getConfig().put("title",
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265410447 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineInterpreter.java ## @@ -0,0 +1,275 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.lang.StringUtils; +import org.apache.zeppelin.display.ui.OptionInput.ParamOption; +import org.apache.zeppelin.interpreter.Interpreter; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.interpreter.thrift.InterpreterCompletion; +import org.apache.zeppelin.scheduler.Scheduler; +import org.apache.zeppelin.scheduler.SchedulerFactory; +import org.apache.zeppelin.submarine.commons.SubmarineCommand; +import org.apache.zeppelin.submarine.commons.SubmarineConstants; +import org.apache.zeppelin.submarine.job.SubmarineJob; +import org.apache.zeppelin.submarine.commons.SubmarineUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.util.List; +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineCommand.CLEAN_RUNTIME_CACHE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.CHECKPOINT_PATH; +import static 
org.apache.zeppelin.submarine.commons.SubmarineConstants.CLEAN_CHECKPOINT; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_CLEAN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_RUN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_SHOW; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_USAGE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.INPUT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.MACHINELEARING_DISTRIBUTED_ENABLE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.OPERATION_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.PS_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.WORKER_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineUtils.unifyKey; + +/** + * SubmarineInterpreter of Hadoop Submarine implementation. + * Support for Hadoop Submarine cli. All the commands documented here + * https://github.com/apache/hadoop/tree/trunk/hadoop-submarine/hadoop-submarine-core + * /src/site/markdown/QuickStart.md is supported. 
+ */ +public class SubmarineInterpreter extends Interpreter { + private Logger LOGGER = LoggerFactory.getLogger(SubmarineInterpreter.class); + + // Number of submarines executed in parallel for each interpreter instance + protected int concurrentExecutedMax = 1; + + private boolean needUpdateConfig = true; + private String currentReplName = ""; + + SubmarineContext submarineContext = null; + + public SubmarineInterpreter(Properties properties) { +super(properties); + +String concurrentMax = getProperty(SubmarineConstants.SUBMARINE_CONCURRENT_MAX, "1"); +concurrentExecutedMax = Integer.parseInt(concurrentMax); + +submarineContext = SubmarineContext.getInstance(); + } + + @Override + public void open() { +LOGGER.info("SubmarineInterpreter open()"); + } + + @Override + public void close() { +submarineContext.stopAllSubmarineJob(); + } + + private void setParagraphConfig(InterpreterContext context) { +String replName = context.getReplName(); +if (StringUtils.equals(currentReplName, replName)) { + currentReplName = context.getReplName(); + needUpdateConfig = true; +} +if (needUpdateConfig) { + needUpdateConfig = false; + if (currentReplName.equals("submarine") || currentReplName.isEmpty()) { Review comment: I mean like install any interpreter with a different name `--name` in
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265410906 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/commons/SubmarineConstants.java ## @@ -0,0 +1,119 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine.commons; + +/* + * NOTE: use lowercase + "_" for the option name + */ +public class SubmarineConstants { + // Docker container Environmental variable at `submarine-job-run-tf.jinja` + // and `/bin/interpreter.sh` + public static final String DOCKER_HADOOP_HDFS_HOME = "DOCKER_HADOOP_HDFS_HOME"; + public static final String DOCKER_JAVA_HOME= "DOCKER_JAVA_HOME"; + public static final String DOCKER_CONTAINER_TIME_ZONE = "DOCKER_CONTAINER_TIME_ZONE"; + public static final String INTERPRETER_LAUNCH_MODE = "INTERPRETER_LAUNCH_MODE"; + + // interpreter.sh Environmental variable + public static final String SUBMARINE_HADOOP_HOME = "SUBMARINE_HADOOP_HOME"; + public static final String HADOOP_YARN_SUBMARINE_JAR = "HADOOP_YARN_SUBMARINE_JAR"; + public static final String SUBMARINE_INTERPRETER_DOCKER_IMAGE + = "SUBMARINE_INTERPRETER_DOCKER_IMAGE"; + + public static final String ZEPPELIN_SUBMARINE_AUTH_TYPE = "zeppelin.submarine.auth.type"; + public static final String SUBMARINE_HADOOP_CONF_DIR = "SUBMARINE_HADOOP_CONF_DIR"; + public static final String 
SUBMARINE_HADOOP_KEYTAB= "SUBMARINE_HADOOP_KEYTAB"; + public static final String SUBMARINE_HADOOP_PRINCIPAL = "SUBMARINE_HADOOP_PRINCIPAL"; + public static final String SUBMARINE_HADOOP_KRB5_CONF = "submarine.hadoop.krb5.conf"; + + public static final String JOB_NAME = "JOB_NAME"; + public static final String CLEAN_CHECKPOINT = "CLEAN_CHECKPOINT"; + public static final String INPUT_PATH = "INPUT_PATH"; + public static final String CHECKPOINT_PATH = "CHECKPOINT_PATH"; + public static final String PS_LAUNCH_CMD = "PS_LAUNCH_CMD"; + public static final String WORKER_LAUNCH_CMD = "WORKER_LAUNCH_CMD"; + public static final String MACHINELEARING_DISTRIBUTED_ENABLE + = "machinelearing.distributed.enable"; Review comment: `machinelearing` -> `machinelearning`
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265410868 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/commons/SubmarineConstants.java ## @@ -0,0 +1,119 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine.commons; + +/* + * NOTE: use lowercase + "_" for the option name + */ +public class SubmarineConstants { + // Docker container Environmental variable at `submarine-job-run-tf.jinja` + // and `/bin/interpreter.sh` + public static final String DOCKER_HADOOP_HDFS_HOME = "DOCKER_HADOOP_HDFS_HOME"; + public static final String DOCKER_JAVA_HOME= "DOCKER_JAVA_HOME"; + public static final String DOCKER_CONTAINER_TIME_ZONE = "DOCKER_CONTAINER_TIME_ZONE"; + public static final String INTERPRETER_LAUNCH_MODE = "INTERPRETER_LAUNCH_MODE"; + + // interpreter.sh Environmental variable + public static final String SUBMARINE_HADOOP_HOME = "SUBMARINE_HADOOP_HOME"; + public static final String HADOOP_YARN_SUBMARINE_JAR = "HADOOP_YARN_SUBMARINE_JAR"; + public static final String SUBMARINE_INTERPRETER_DOCKER_IMAGE + = "SUBMARINE_INTERPRETER_DOCKER_IMAGE"; + + public static final String ZEPPELIN_SUBMARINE_AUTH_TYPE = "zeppelin.submarine.auth.type"; + public static final String SUBMARINE_HADOOP_CONF_DIR = "SUBMARINE_HADOOP_CONF_DIR"; + public static final String 
SUBMARINE_HADOOP_KEYTAB= "SUBMARINE_HADOOP_KEYTAB"; + public static final String SUBMARINE_HADOOP_PRINCIPAL = "SUBMARINE_HADOOP_PRINCIPAL"; + public static final String SUBMARINE_HADOOP_KRB5_CONF = "submarine.hadoop.krb5.conf"; + + public static final String JOB_NAME = "JOB_NAME"; + public static final String CLEAN_CHECKPOINT = "CLEAN_CHECKPOINT"; + public static final String INPUT_PATH = "INPUT_PATH"; + public static final String CHECKPOINT_PATH = "CHECKPOINT_PATH"; + public static final String PS_LAUNCH_CMD = "PS_LAUNCH_CMD"; + public static final String WORKER_LAUNCH_CMD = "WORKER_LAUNCH_CMD"; + public static final String MACHINELEARING_DISTRIBUTED_ENABLE + = "machinelearing.distributed.enable"; + + public static final String ZEPPELIN_INTERPRETER_RPC_PORTRANGE + = "zeppelin.interpreter.rpc.portRange"; Review comment: if this only affects submarine, suggest calling it `submarine.rpc.portRange` or `submarine.interpreter.rpc.portRange`
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265410629 ## File path: scripts/docker/submarine/submarine-interpreter-cpu-1.0.0/Dockerfile ## @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +FROM zeppelin-tensorflow_1.10-hadoop_3.1.2-cpu:1.0.0 Review comment: Yes, tensorflow uses version 1.13.0. I made a mistake and I will correct this error.
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265411201 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineInterpreter.java ## @@ -0,0 +1,275 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.lang.StringUtils; +import org.apache.zeppelin.display.ui.OptionInput.ParamOption; +import org.apache.zeppelin.interpreter.Interpreter; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.interpreter.thrift.InterpreterCompletion; +import org.apache.zeppelin.scheduler.Scheduler; +import org.apache.zeppelin.scheduler.SchedulerFactory; +import org.apache.zeppelin.submarine.commons.SubmarineCommand; +import org.apache.zeppelin.submarine.commons.SubmarineConstants; +import org.apache.zeppelin.submarine.job.SubmarineJob; +import org.apache.zeppelin.submarine.commons.SubmarineUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.util.List; +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineCommand.CLEAN_RUNTIME_CACHE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.CHECKPOINT_PATH; +import static 
org.apache.zeppelin.submarine.commons.SubmarineConstants.CLEAN_CHECKPOINT; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_CLEAN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_RUN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_SHOW; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_USAGE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.INPUT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.MACHINELEARING_DISTRIBUTED_ENABLE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.OPERATION_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.PS_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.WORKER_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineUtils.unifyKey; + +/** + * SubmarineInterpreter of Hadoop Submarine implementation. + * Support for Hadoop Submarine cli. All the commands documented here + * https://github.com/apache/hadoop/tree/trunk/hadoop-submarine/hadoop-submarine-core + * /src/site/markdown/QuickStart.md is supported. 
+ */ +public class SubmarineInterpreter extends Interpreter { + private Logger LOGGER = LoggerFactory.getLogger(SubmarineInterpreter.class); + + // Number of submarines executed in parallel for each interpreter instance + protected int concurrentExecutedMax = 1; + + private boolean needUpdateConfig = true; + private String currentReplName = ""; + + SubmarineContext submarineContext = null; + + public SubmarineInterpreter(Properties properties) { +super(properties); + +String concurrentMax = getProperty(SubmarineConstants.SUBMARINE_CONCURRENT_MAX, "1"); +concurrentExecutedMax = Integer.parseInt(concurrentMax); + +submarineContext = SubmarineContext.getInstance(); + } + + @Override + public void open() { +LOGGER.info("SubmarineInterpreter open()"); + } + + @Override + public void close() { +submarineContext.stopAllSubmarineJob(); + } + + private void setParagraphConfig(InterpreterContext context) { +String replName = context.getReplName(); +if (StringUtils.equals(currentReplName, replName)) { + currentReplName = context.getReplName(); + needUpdateConfig = true; +} +if (needUpdateConfig) { + needUpdateConfig = false; + if (currentReplName.equals("submarine") || currentReplName.isEmpty()) { Review comment: Ok, I am going to change the way to determine if it is a submarine
[GitHub] [zeppelin] liuxunorg commented on issue #3333: [ZEPPELIN-4053]. Implement impersonation via c native api
liuxunorg commented on issue #: [ZEPPELIN-4053]. Implement impersonation via c native api URL: https://github.com/apache/zeppelin/pull/#issuecomment-472296791 For security, YARN uses the yarn user group to execute `/etc/yarn/sbin/Linux-amd64-64/container-executor`, which requires special settings on the `container-executor` file. 1. Running Zeppelin as root makes it possible to switch users, but this is not safe enough. 2. So we need to create a Zeppelin user group in the operating system (for example: zeppelin-group); the user running the Zeppelin service belongs to this group. After performing the following operations on the file with the root account, the user running the Zeppelin service can also switch users. ``` chown root:${zeppelin-group} zeppelin/bin/execute-as-user chmod 6050 zeppelin/bin/execute-as-user ``` So we need to add some usage instructions.
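A quick way to see what the `chmod 6050` step above grants (a hedged sketch, not part of the PR; it applies only the mode bits to a scratch file, since `chown root:…` itself requires root):

```python
import os
import stat
import tempfile

# Mode 6050 decoded:
#   6 -> setuid (4) + setgid (2): the wrapper executes with its *owner's*
#        (root's) identity rather than the caller's;
#   0 -> the owner has no plain rwx bits of its own;
#   5 -> group members may read and execute (r-x);
#   0 -> everyone else has no access.
# Net effect: only members of the owning group (e.g. zeppelin-group) can
# run the binary, and when they do, it runs as root.
fd, path = tempfile.mkstemp()  # scratch stand-in for zeppelin/bin/execute-as-user
os.close(fd)
os.chmod(path, 0o6050)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # expected 0o6050 on a typical Linux filesystem
os.remove(path)
```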
[jira] [Created] (ZEPPELIN-4067) Add a dual axis (Yaxis)
efrat caspi created ZEPPELIN-4067: - Summary: Add a dual axis (Yaxis) Key: ZEPPELIN-4067 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4067 Project: Zeppelin Issue Type: Improvement Reporter: efrat caspi When plotting two columns with different scales, it is useful to set a dual axis. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
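For illustration (an assumption, not Zeppelin code): until the built-in chart supports a second Y axis, a Python paragraph can already produce one with matplotlib's `twinx()`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax_left = plt.subplots()
ax_right = ax_left.twinx()  # second Y axis sharing the same X axis

# Two columns on very different scales, one per axis
ax_left.plot([1, 2, 3], [10, 20, 30], color="tab:blue")
ax_right.plot([1, 2, 3], [1e6, 2e6, 3e6], color="tab:red")
ax_left.set_ylabel("small-scale column")
ax_right.set_ylabel("large-scale column")
fig.savefig("dual_axis.png")
```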
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265334390 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineInterpreter.java ## @@ -0,0 +1,275 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.lang.StringUtils; +import org.apache.zeppelin.display.ui.OptionInput.ParamOption; +import org.apache.zeppelin.interpreter.Interpreter; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.interpreter.thrift.InterpreterCompletion; +import org.apache.zeppelin.scheduler.Scheduler; +import org.apache.zeppelin.scheduler.SchedulerFactory; +import org.apache.zeppelin.submarine.commons.SubmarineCommand; +import org.apache.zeppelin.submarine.commons.SubmarineConstants; +import org.apache.zeppelin.submarine.job.SubmarineJob; +import org.apache.zeppelin.submarine.commons.SubmarineUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.util.List; +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineCommand.CLEAN_RUNTIME_CACHE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.CHECKPOINT_PATH; +import static 
org.apache.zeppelin.submarine.commons.SubmarineConstants.CLEAN_CHECKPOINT; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_CLEAN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_RUN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_SHOW; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_USAGE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.INPUT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.MACHINELEARING_DISTRIBUTED_ENABLE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.OPERATION_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.PS_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.WORKER_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineUtils.unifyKey; + +/** + * SubmarineInterpreter of Hadoop Submarine implementation. + * Support for Hadoop Submarine cli. All the commands documented here + * https://github.com/apache/hadoop/tree/trunk/hadoop-submarine/hadoop-submarine-core + * /src/site/markdown/QuickStart.md is supported. 
+ */ +public class SubmarineInterpreter extends Interpreter { + private Logger LOGGER = LoggerFactory.getLogger(SubmarineInterpreter.class); + + // Number of submarines executed in parallel for each interpreter instance + protected int concurrentExecutedMax = 1; + + private boolean needUpdateConfig = true; + private String currentReplName = ""; + + SubmarineContext submarineContext = null; + + public SubmarineInterpreter(Properties properties) { +super(properties); + +String concurrentMax = getProperty(SubmarineConstants.SUBMARINE_CONCURRENT_MAX, "1"); +concurrentExecutedMax = Integer.parseInt(concurrentMax); + +submarineContext = SubmarineContext.getInstance(); + } + + @Override + public void open() { +LOGGER.info("SubmarineInterpreter open()"); + } + + @Override + public void close() { +submarineContext.stopAllSubmarineJob(); + } + + private void setParagraphConfig(InterpreterContext context) { +String replName = context.getReplName(); +if (StringUtils.equals(currentReplName, replName)) { + currentReplName = context.getReplName(); + needUpdateConfig = true; +} +if (needUpdateConfig) { + needUpdateConfig = false; + if (currentReplName.equals("submarine") || currentReplName.isEmpty()) { +context.getConfig().put("editorHide", true); +context.getConfig().put("title",
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265333667 ## File path: scripts/docker/submarine/tensorflow_1.10-hadoop_3.1.2-gpu-1.0.0/Dockerfile ## @@ -0,0 +1,71 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 + +# Pick up some TF dependencies +RUN apt-get update && apt-get install -y --allow-downgrades --no-install-recommends \ + --allow-change-held-packages --allow-unauthenticated \ + build-essential libfreetype6-dev libpng12-dev \ + libzmq3-dev pkg-config python python-dev \ + rsync software-properties-common curl unzip wget grep sed vim \ +iputils-ping net-tools gdb python2.7-dbg tzdata \ +cuda-cublas-9-0 cuda-cufft-9-0 cuda-curand-9-0 cuda-cusolver-9-0 \ + cuda-cusparse-9-0 libcudnn7=7.0.5.15-1+cuda9.0 && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* + +RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -yq krb5-user libpam-krb5 && apt-get clean + +RUN curl -O https://bootstrap.pypa.io/get-pip.py && \ + python get-pip.py && \ + rm get-pip.py + +RUN echo "Install python related packages" && \ + apt-get -y update && \ + apt-get install -y gfortran && \ + # numerical/algebra packages + apt-get install -y libblas-dev libatlas-dev liblapack-dev && \ + # font, image for matplotlib + apt-get install -y libpng-dev libxft-dev && \ + # for tkinter + apt-get install -y python-tk libxml2-dev libxslt-dev zlib1g-dev + +RUN pip --no-cache-dir install Pillow h5py ipykernel jupyter matplotlib numpy pandas scipy sklearn && \ + python -m ipykernel.kernelspec + +# Set the locale +# disable bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8) +RUN apt-get clean && apt-get update && apt-get install -y locales +RUN locale-gen en_US.UTF-8 + +# Install TensorFlow GPU version. 
+ENV TENSORFLOW_VERSION="1.10.0" +RUN pip --no-cache-dir install \ + http://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl +RUN apt-get update && apt-get install git -y + +# Install hadoop +ENV HADOOP_VERSION="3.1.2" +RUN wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz Review comment: we can't hardcode the mirror
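One possible fix for the hardcoded mirror (a sketch only, not the PR's code; `archive.apache.org` is used here merely as an illustrative default) is to make the mirror a build argument:

```dockerfile
# Hypothetical: let the builder choose the mirror at build time
ARG APACHE_MIRROR=https://archive.apache.org/dist
ENV HADOOP_VERSION="3.1.2"
RUN wget ${APACHE_MIRROR}/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz
```

The image can then be built against any mirror, e.g. `docker build --build-arg APACHE_MIRROR=http://mirrors.hust.edu.cn/apache .`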
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265334232 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineInterpreter.java ## @@ -0,0 +1,275 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.lang.StringUtils; +import org.apache.zeppelin.display.ui.OptionInput.ParamOption; +import org.apache.zeppelin.interpreter.Interpreter; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.interpreter.thrift.InterpreterCompletion; +import org.apache.zeppelin.scheduler.Scheduler; +import org.apache.zeppelin.scheduler.SchedulerFactory; +import org.apache.zeppelin.submarine.commons.SubmarineCommand; +import org.apache.zeppelin.submarine.commons.SubmarineConstants; +import org.apache.zeppelin.submarine.job.SubmarineJob; +import org.apache.zeppelin.submarine.commons.SubmarineUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.util.List; +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineCommand.CLEAN_RUNTIME_CACHE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.CHECKPOINT_PATH; +import static 
org.apache.zeppelin.submarine.commons.SubmarineConstants.CLEAN_CHECKPOINT; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_CLEAN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_RUN; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_JOB_SHOW; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.COMMAND_USAGE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.INPUT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.MACHINELEARING_DISTRIBUTED_ENABLE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.OPERATION_TYPE; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.PS_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.WORKER_LAUNCH_CMD; +import static org.apache.zeppelin.submarine.commons.SubmarineUtils.unifyKey; + +/** + * SubmarineInterpreter of Hadoop Submarine implementation. + * Support for Hadoop Submarine cli. All the commands documented here + * https://github.com/apache/hadoop/tree/trunk/hadoop-submarine/hadoop-submarine-core + * /src/site/markdown/QuickStart.md is supported. 
+ */ +public class SubmarineInterpreter extends Interpreter { + private Logger LOGGER = LoggerFactory.getLogger(SubmarineInterpreter.class); + + // Number of submarines executed in parallel for each interpreter instance + protected int concurrentExecutedMax = 1; + + private boolean needUpdateConfig = true; + private String currentReplName = ""; + + SubmarineContext submarineContext = null; + + public SubmarineInterpreter(Properties properties) { +super(properties); + +String concurrentMax = getProperty(SubmarineConstants.SUBMARINE_CONCURRENT_MAX, "1"); +concurrentExecutedMax = Integer.parseInt(concurrentMax); + +submarineContext = SubmarineContext.getInstance(); + } + + @Override + public void open() { +LOGGER.info("SubmarineInterpreter open()"); + } + + @Override + public void close() { +submarineContext.stopAllSubmarineJob(); + } + + private void setParagraphConfig(InterpreterContext context) { +String replName = context.getReplName(); +if (StringUtils.equals(currentReplName, replName)) { + currentReplName = context.getReplName(); + needUpdateConfig = true; +} +if (needUpdateConfig) { + needUpdateConfig = false; + if (currentReplName.equals("submarine") || currentReplName.isEmpty()) { Review comment: IIRC, the interpreter name is configurable? eg. spark is `spark2` in
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265333214 ## File path: scripts/docker/submarine/submarine-interpreter-cpu-1.0.0/Dockerfile ## @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +FROM zeppelin-tensorflow_1.10-hadoop_3.1.2-cpu:1.0.0 Review comment: how is zeppelin-tensorflow_1.10-hadoop_3.1.2-cpu created? it looks like it is built by a script in the PR, how is this name chosen?
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265334590 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/SubmarineShellInterpreter.java ## @@ -0,0 +1,107 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.zeppelin.submarine; + +import org.apache.commons.exec.CommandLine; +import org.apache.commons.exec.DefaultExecutor; +import org.apache.commons.lang3.StringUtils; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterException; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.shell.ShellInterpreter; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Properties; + +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_ALGORITHM_HDFS_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_HADOOP_KEYTAB; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.SUBMARINE_HADOOP_PRINCIPAL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.TF_CHECKPOINT_PATH; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.USERNAME_SYMBOL; +import static org.apache.zeppelin.submarine.commons.SubmarineConstants.ZEPPELIN_SUBMARINE_AUTH_TYPE; + +/** + * Submarine Shell interpreter for Zeppelin. + */ +public class SubmarineShellInterpreter extends ShellInterpreter { + private static final Logger LOGGER = LoggerFactory.getLogger(SubmarineShellInterpreter.class); + + private final boolean isWindows = System.getProperty("os.name").startsWith("Windows"); + private final String shell = isWindows ? 
"cmd /c" : "bash -c"; + + public SubmarineShellInterpreter(Properties property) { +super(property); + } + + @Override + public InterpreterResult internalInterpret(String cmd, InterpreterContext intpContext) { +// algorithm path & checkpoint path support replaces ${username} with real user name +String algorithmPath = properties.getProperty(SUBMARINE_ALGORITHM_HDFS_PATH, ""); +if (algorithmPath.contains(USERNAME_SYMBOL)) { + algorithmPath = algorithmPath.replace(USERNAME_SYMBOL, userName); + properties.setProperty(SUBMARINE_ALGORITHM_HDFS_PATH, algorithmPath); +} +String checkpointPath = properties.getProperty( +TF_CHECKPOINT_PATH, ""); +if (checkpointPath.contains(USERNAME_SYMBOL)) { + checkpointPath = checkpointPath.replace(USERNAME_SYMBOL, userName); + properties.setProperty(TF_CHECKPOINT_PATH, checkpointPath); +} + +return super.internalInterpret(cmd, intpContext); + } + + @Override + protected boolean runKerberosLogin() { +try { + createSecureConfiguration(); + return true; +} catch (Exception e) { + LOGGER.error("Unable to run kinit for zeppelin", e); +} +return false; + } + + public void createSecureConfiguration() throws InterpreterException { +Properties properties = getProperties(); +CommandLine cmdLine = CommandLine.parse(shell); +cmdLine.addArgument("-c", false); +String kinitCommand = String.format("kinit -k -t %s %s", +properties.getProperty(SUBMARINE_HADOOP_KEYTAB), +properties.getProperty(SUBMARINE_HADOOP_PRINCIPAL)); +cmdLine.addArgument(kinitCommand, false); +DefaultExecutor executor = new DefaultExecutor(); +try { + executor.execute(cmdLine); +} catch (Exception e) { + LOGGER.error("Unable to run kinit for zeppelin user " + kinitCommand, e); + throw new InterpreterException(e); +} + } + + @Override + protected boolean isKerboseEnabled() { +if (!StringUtils.isAnyEmpty(getProperty(ZEPPELIN_SUBMARINE_AUTH_TYPE)) +&& getProperty(ZEPPELIN_SUBMARINE_AUTH_TYPE) +.equalsIgnoreCase("kerberos")) { Review comment: indentation? 
can you merge this into the last line
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
felixcheung commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265334757 ## File path: submarine/src/main/java/org/apache/zeppelin/submarine/commons/SubmarineConstants.java ## @@ -0,0 +1,119 @@ +/* + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.submarine.commons; + +/* + * NOTE: use lowercase + "_" for the option name + */ +public class SubmarineConstants { + // Docker container Environmental variable at `submarine-job-run-tf.jinja` + // and `/bin/interpreter.sh` + public static final String DOCKER_HADOOP_HDFS_HOME = "DOCKER_HADOOP_HDFS_HOME"; + public static final String DOCKER_JAVA_HOME= "DOCKER_JAVA_HOME"; + public static final String DOCKER_CONTAINER_TIME_ZONE = "DOCKER_CONTAINER_TIME_ZONE"; + public static final String INTERPRETER_LAUNCH_MODE = "INTERPRETER_LAUNCH_MODE"; + + // interpreter.sh Environmental variable + public static final String SUBMARINE_HADOOP_HOME = "SUBMARINE_HADOOP_HOME"; + public static final String HADOOP_YARN_SUBMARINE_JAR = "HADOOP_YARN_SUBMARINE_JAR"; + public static final String SUBMARINE_INTERPRETER_DOCKER_IMAGE + = "SUBMARINE_INTERPRETER_DOCKER_IMAGE"; + + public static final String ZEPPELIN_SUBMARINE_AUTH_TYPE = "zeppelin.submarine.auth.type"; + public static final String SUBMARINE_HADOOP_CONF_DIR = "SUBMARINE_HADOOP_CONF_DIR"; + public static final String 
SUBMARINE_HADOOP_KEYTAB= "SUBMARINE_HADOOP_KEYTAB"; + public static final String SUBMARINE_HADOOP_PRINCIPAL = "SUBMARINE_HADOOP_PRINCIPAL"; + public static final String SUBMARINE_HADOOP_KRB5_CONF = "submarine.hadoop.krb5.conf"; + + public static final String JOB_NAME = "JOB_NAME"; + public static final String CLEAN_CHECKPOINT = "CLEAN_CHECKPOINT"; + public static final String INPUT_PATH = "INPUT_PATH"; + public static final String CHECKPOINT_PATH = "CHECKPOINT_PATH"; + public static final String PS_LAUNCH_CMD = "PS_LAUNCH_CMD"; + public static final String WORKER_LAUNCH_CMD = "WORKER_LAUNCH_CMD"; + public static final String MACHINELEARING_DISTRIBUTED_ENABLE + = "machinelearing.distributed.enable"; + + public static final String ZEPPELIN_INTERPRETER_RPC_PORTRANGE + = "zeppelin.interpreter.rpc.portRange"; Review comment: is this defined elsewhere?
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
felixcheung commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
URL: https://github.com/apache/zeppelin/pull/3320#discussion_r265339525

## File path: zeppelin-zengine/src/main/java/org/apache/zeppelin/notebook/repo/NotebookRepoSync.java

```diff
@@ -62,59 +63,84 @@
   public void init(ZeppelinConfiguration conf) throws IOException {
     oneWaySync = conf.getBoolean(ConfVars.ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC);
     String allStorageClassNames = conf.getNotebookStorageClass().trim();
     if (allStorageClassNames.isEmpty()) {
-      allStorageClassNames = defaultStorage;
+      allStorageClassNames = DEFAULT_STORAGE;
       LOGGER.warn("Empty ZEPPELIN_NOTEBOOK_STORAGE conf parameter, using default {}",
-          defaultStorage);
+          DEFAULT_STORAGE);
     }
     String[] storageClassNames = allStorageClassNames.split(",");
     if (storageClassNames.length > getMaxRepoNum()) {
       LOGGER.warn("Unsupported number {} of storage classes in ZEPPELIN_NOTEBOOK_STORAGE : {}\n"
           + "first {} will be used", storageClassNames.length, allStorageClassNames, getMaxRepoNum());
     }
+    // init the underlying NotebookRepo
     for (int i = 0; i < Math.min(storageClassNames.length, getMaxRepoNum()); i++) {
       NotebookRepo notebookRepo = PluginManager.get().loadNotebookRepo(storageClassNames[i].trim());
       if (notebookRepo != null) {
         notebookRepo.init(conf);
         repos.add(notebookRepo);
       }
     }
     // couldn't initialize any storage, use default
     if (getRepoCount() == 0) {
-      LOGGER.info("No storage could be initialized, using default {} storage", defaultStorage);
-      NotebookRepo defaultNotebookRepo = PluginManager.get().loadNotebookRepo(defaultStorage);
+      LOGGER.info("No storage could be initialized, using default {} storage", DEFAULT_STORAGE);
+      NotebookRepo defaultNotebookRepo = PluginManager.get().loadNotebookRepo(DEFAULT_STORAGE);
       defaultNotebookRepo.init(conf);
       repos.add(defaultNotebookRepo);
     }
+    // sync for anonymous mode on start
+    if (getRepoCount() > 1 && conf.getBoolean(ConfVars.ZEPPELIN_ANONYMOUS_ALLOWED)) {
+      try {
+        sync(AuthenticationInfo.ANONYMOUS);
+      } catch (IOException e) {
+        LOGGER.error("Couldn't sync anonymous mode on start ", e);
+      }
+    }
   }

+  // Zeppelin changed its note file name structure in 0.9.0; this is called when upgrading
+  // from a pre-0.9.0 release to 0.9.0 or later
+  public void convertNoteFiles(ZeppelinConfiguration conf, boolean deleteOld) throws IOException {
     // convert old note file (noteId/note.json) to new note file (note_name_note_id.zpln)
-    boolean convertToNew = conf.getBoolean(ConfVars.ZEPPELIN_NOTEBOOK_NEW_FORMAT_CONVERT);
-    boolean deleteOld = conf.getBoolean(ConfVars.ZEPPELIN_NOTEBOOK_NEW_FORMAT_DELETE_OLD);
-    if (convertToNew) {
-      NotebookRepo newNotebookRepo = repos.get(0);
+    for (int i = 0; i < repos.size(); ++i) {
+      NotebookRepo newNotebookRepo = repos.get(i);
       OldNotebookRepo oldNotebookRepo =
           PluginManager.get().loadOldNotebookRepo(newNotebookRepo.getClass().getCanonicalName());
       oldNotebookRepo.init(conf);
       List oldNotesInfo = oldNotebookRepo.list(AuthenticationInfo.ANONYMOUS);
       LOGGER.info("Convert old note file to new style, note count: " + oldNotesInfo.size());
+      LOGGER.info("Delete old note: " + deleteOld);
```

Review comment: I think with a version we should check here whether the note is in the "old" format before converting it.
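The conversion above renames pre-0.9 notes from `noteId/note.json` to the new `note_name_note_id.zpln` layout. A minimal sketch of that naming rule (the class and helper names here are mine for illustration, not Zeppelin's actual API):

```java
// Hypothetical sketch of the 0.9 note file naming described in the diff above.
// Assumption: the new file name is "<noteName>_<noteId>.zpln".
public class NoteFileNameSketch {

  // old layout: <noteId>/note.json  ->  new layout: <noteName>_<noteId>.zpln
  public static String newNoteFileName(String noteName, String noteId) {
    return noteName + "_" + noteId + ".zpln";
  }

  public static void main(String[] args) {
    // prints "my_note_2A94M5J1Z.zpln"
    System.out.println(newNoteFileName("my_note", "2A94M5J1Z"));
  }
}
```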
Re: Season of Docs 2019
Thanks Felix, I think it is valuable to promote Zeppelin for more adoption.

Felix Cheung wrote on Thu, Mar 14, 2019 at 1:05 AM:

> https://developers.google.com/season-of-docs/
>
> Sounds like a good idea?
>
> Google announces the inaugural year of Season of Docs, a Google program
> that fosters collaboration between open source projects and technical
> writers. Season of Docs is similar to Summer of Code, but with a focus on
> open source documentation and technical writers. Details are on the website:
> g.co/seasonofdocs.

--
Best Regards

Jeff Zhang
Re: Season of Docs 2019
I agree. Thank you, Felix. Looks like the application opens on April 2nd. I can sign up for Apache Zeppelin.

On Wed, Mar 13, 2019 at 9:43 PM Jeff Zhang wrote:

> Thanks Felix, I think it is valuable to promote zeppelin for more
> adoption.
>
> Felix Cheung wrote on Thu, Mar 14, 2019 at 1:05 AM:
>
> > https://developers.google.com/season-of-docs/
> >
> > Sounds like a good idea?
> >
> > Google announces the inaugural year of Season of Docs, a Google program
> > that fosters collaboration between open source projects and technical
> > writers. Season of Docs is similar to Summer of Code, but with a focus on
> > open source documentation and technical writers. Details are on the website:
> > g.co/seasonofdocs.

--
Best Regards

Jeff Zhang
[GitHub] [zeppelin] asfgit closed pull request #3332: ZEPPELIN-4062. Don't wait ipython kernel if python process failed
asfgit closed pull request #3332: ZEPPELIN-4062. Don't wait ipython kernel if python process failed
URL: https://github.com/apache/zeppelin/pull/3332
[GitHub] [zeppelin] liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
liuxunorg commented on a change in pull request #3331: [Zeppelin-4049] Hadoop Submarine (Machine Learning) interpreter
URL: https://github.com/apache/zeppelin/pull/3331#discussion_r265397174

## File path: scripts/docker/submarine/tensorflow_1.10-hadoop_3.12-cpu-1.0.0/Dockerfile

```dockerfile
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM ubuntu:16.04

# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --allow-downgrades --no-install-recommends \
    --allow-change-held-packages --allow-unauthenticated \
    build-essential libfreetype6-dev libpng12-dev \
    libzmq3-dev pkg-config python python-dev \
    rsync software-properties-common curl unzip wget grep sed vim iputils-ping net-tools gdb python2.7-dbg tzdata && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -yq krb5-user libpam-krb5 && apt-get clean

RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
    python get-pip.py && \
    rm get-pip.py

RUN echo "Install python related packages" && \
    apt-get -y update && \
    apt-get install -y gfortran && \
    # numerical/algebra packages
    apt-get install -y libblas-dev libatlas-dev liblapack-dev && \
    # font, image for matplotlib
    apt-get install -y libpng-dev libxft-dev && \
    # for tkinter
    apt-get install -y python-tk libxml2-dev libxslt-dev zlib1g-dev

RUN pip --no-cache-dir install Pillow h5py ipykernel jupyter matplotlib numpy pandas scipy sklearn && \
    python -m ipykernel.kernelspec

# Set the locale
# disable bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
RUN apt-get clean && apt-get update && apt-get install -y locales
RUN locale-gen en_US.UTF-8

# Install TensorFlow CPU version.
ENV TENSORFLOW_VERSION="1.10.0"
RUN pip --no-cache-dir install \
    http://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl
RUN apt-get update && apt-get install git -y

# Install hadoop
ENV HADOOP_VERSION="3.1.2"
RUN wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz
```

Review comment: The version number cannot be modified. ditto.
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
felixcheung commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
URL: https://github.com/apache/zeppelin/pull/3320#discussion_r265339213

## File path: zeppelin-zengine/src/main/java/org/apache/zeppelin/notebook/repo/UpgradeNoteFileTool.java

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.zeppelin.notebook.repo;

import org.apache.commons.cli.*;
import org.apache.zeppelin.conf.ZeppelinConfiguration;

import java.io.IOException;

public class UpgradeNoteFileTool {

  public static void main(String[] args) throws IOException {
    Options options = new Options();
    Option input = new Option("d", "deleteOld", false, "Whether delete old note file");
    options.addOption(input);
    CommandLineParser parser = new DefaultParser();
    CommandLine cmd = null;
    try {
      cmd = parser.parse(options, args);
    } catch (ParseException e) {
      System.out.println(e.getMessage());
```

Review comment: might help to print the whole stack
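The review suggestion above is about printing the full stack trace on a parse failure rather than only the message, so the failure site stays visible. A self-contained sketch of the difference (`ParseFailure` stands in for commons-cli's `ParseException`; it and `parse` are illustrative names, not part of the PR):

```java
// Hypothetical sketch: message-only output vs. the full stack trace.
public class ParseErrorDemo {

  // Stand-in for org.apache.commons.cli.ParseException in this illustration.
  static class ParseFailure extends RuntimeException {
    ParseFailure(String msg) { super(msg); }
  }

  // Toy parser: options must start with '-'.
  static void parse(String arg) {
    if (!arg.startsWith("-")) {
      throw new ParseFailure("Unrecognized option: " + arg);
    }
  }

  public static void main(String[] args) {
    try {
      parse("deleteOld");  // missing the leading '-'
    } catch (ParseFailure e) {
      System.out.println(e.getMessage());  // message only: no location information
      e.printStackTrace();                 // full stack: shows exactly where parsing failed
    }
  }
}
```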
[GitHub] [zeppelin] felixcheung commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
felixcheung commented on a change in pull request #3320: [ZEPPELIN-3977]. Create shell script for converting old note file to new file
URL: https://github.com/apache/zeppelin/pull/3320#discussion_r265338872

## File path: docs/setup/operation/upgrading.md

```diff
@@ -35,6 +35,12 @@ So, copying `notebook` and `conf` directory should be enough.

 ## Migration Guide

+### Upgrading from Zeppelin 0.8 to 0.9
+
+ - From 0.9, we change the notes file name structure ([ZEPPELIN-2619](https://issues.apache.org/jira/browse/ZEPPELIN-2619)). So when you upgrading zeppelin to 0.9, you need to upgrade note file. Here's steps you need to follow:
```

Review comment: hm, I'm still confused - git source control is one, but it sounds like there is no way, other than trying to open it, to know what format a note is saved in. I kind of agree we may need a version number, a changed file extension, or something similar. For instance, if my team started using Zeppelin a year ago and has long-existing notes, and someone upgrades to Zeppelin 0.9 or gets a note from the internet, would I know what's going on if I open a new note on Zeppelin 0.8, or an old note on Zeppelin 0.9?
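As the discussion above notes, short of parsing a note, the path shape is currently the only cheap signal of its format. A sketch of that heuristic, under the naming described in the PR (helper and class names are mine, not Zeppelin's API):

```java
// Hypothetical illustration of the format ambiguity discussed above.
// Assumption: pre-0.9 notes live at <noteId>/note.json, 0.9+ notes are <name>_<id>.zpln.
public class NoteFormatSketch {

  public static boolean looksLikeNewFormat(String path) {
    return path.endsWith(".zpln");      // 0.9+ layout
  }

  public static boolean looksLikeOldFormat(String path) {
    return path.endsWith("/note.json"); // pre-0.9 layout
  }
}
```

An embedded version number (or this distinct extension) would let tooling answer the question without opening the file, which is essentially what the review is asking for.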