This is an automated email from the ASF dual-hosted git repository.

liuxun pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/submarine.git


The following commit(s) were added to refs/heads/master by this push:
     new 9154c45  SUBMARINE-469. Update Submarine Architecture Doc
9154c45 is described below

commit 9154c45d43166ea921fec39b1100654132eadf25
Author: Wangda Tan <[email protected]>
AuthorDate: Sun Apr 19 17:04:06 2020 -0700

    SUBMARINE-469. Update Submarine Architecture Doc
    
    ### What is this PR for?
    We have had many discussions recently and made some proposals for the Submarine roadmap and the corresponding designs. We need to update our docs.
    
    ### What type of PR is it?
    Documentation
    
    ### Todos
    * [ ] - Task
    
    ### What is the Jira issue?
    * https://issues.apache.org/jira/projects/SUBMARINE/issues/SUBMARINE-469
    
    ### How should this be tested?
    * First time? Setup Travis CI as described on 
https://submarine.apache.org/contribution/contributions.html#continuous-integration
    * Strongly recommended: add automated unit tests for any new or changed 
behavior
    * Outline any manual steps to test the PR here.
    
    ### Screenshots (if appropriate)
    
    ### Questions:
    * Do the license files need to be updated? Yes/No
    * Are there breaking changes for older versions? Yes/No
    * Does this need documentation? Yes/No
    
    Author: Wangda Tan <[email protected]>
    
    Closes #261 from wangdatan/SUBMARINE-469 and squashes the following commits:
    
    9406ff7 [Wangda Tan] changes..
    6046126 [Wangda Tan] changes ..
    33c36af [Wangda Tan] changes ..
    59193ea [Wangda Tan] added changes to storage and experiment
    9cca4fc [Wangda Tan] changes ..
    adf9381 [Wangda Tan] changes ..
    1c9c357 [Wangda Tan] Changes, added more implementations
    94264f3 [Wangda Tan] Changes ..
    cf62a53 [Wangda Tan] SUBMARINE-469
---
 docs/assets/design/experiments.png                 | Bin 0 -> 75806 bytes
 docs/design/architecture-and-requirements.md       | 288 ++++++------
 docs/design/environments-implementation.md         | 194 ++++++++
 docs/design/experiment-implementation.md           | 500 +++++++++++++++++++++
 docs/design/implementation-notes.md                |  30 ++
 docs/design/storage-implementation.md              | 157 +++++++
 docs/design/submarine-server/architecture.md       | 132 +++++-
 .../{ => wip-designs}/SubmarineClusterServer.md    |   8 +-
 .../design/{ => wip-designs}/submarine-launcher.md |   0
 9 files changed, 1129 insertions(+), 180 deletions(-)

diff --git a/docs/assets/design/experiments.png 
b/docs/assets/design/experiments.png
new file mode 100644
index 0000000..f3deb32
Binary files /dev/null and b/docs/assets/design/experiments.png differ
diff --git a/docs/design/architecture-and-requirements.md 
b/docs/design/architecture-and-requirements.md
index e2df5de..d41e879 100644
--- a/docs/design/architecture-and-requirements.md
+++ b/docs/design/architecture-and-requirements.md
@@ -20,7 +20,6 @@
 | User | A single data-scientist/data-engineer. User has resource quota, 
credentials |
 | Team | User belongs to one or more teams, teams have ACLs for artifacts 
sharing such as notebook content, model, etc. |
 | Admin | Also called SRE, who manages user's quotas, credentials, team, and 
other components. |
-| Project | A project may include one or multiple notebooks, zero or multiple 
running jobs. And could be collaborated by multiple users who have ACLs on it |
 
 
 # Background 
@@ -46,7 +45,7 @@ In the last decade, the software industry has built many open 
source tools for m
    
 6. It was hard to build a data pipeline that flows/transform data from a raw 
data source to whatever required by ML applications. 
    **Answer to that:** Open source big data industry plays an important role 
in providing, simplify, unify processes and building blocks for data flows, 
transformations, etc.
-   
+
 The machine learning industry is moving on the right track to solve major 
roadblocks. So what are the pain points now for companies which have machine 
learning needs? What can we help here? To answer this question, let's look at 
machine learning workflow first. 
 
 ## Machine Learning Workflows & Pain points
@@ -129,11 +128,7 @@ It is also designed to be resource management independent, 
no matter if you have
 
 ## Requirements and non-requirements
 
-### Requirements
-
-Following items are charters of Submarine project:
-
-#### Notebook
+### Notebook
 
 1) Users should be able to create, edit, delete a notebook. (P0)
 2) Notebooks can be persisted to storage and can be recovered if failure 
happens. (P0)
@@ -142,210 +137,187 @@ Following items are charters of Submarine project:
 5) Users can define a list of parameters of a notebook (looks like parameters 
of the notebook's main function) to allow executing a notebook like a job. (P1)
 6) Different users can collaborate on the same notebook at the same time. (P2)
 
-#### Job
+A running notebook instance is called a notebook session (or session for short).
 
-Job of Submarine is an executable code section. It could be a shell command, a 
Python command, a Spark job, a SQL query, a training job (such as Tensorflow), 
etc. 
+### Experiment
 
-1) Job can be submitted from UI/CLI.
-2) Job can be monitored/managed from UI/CLI.
-3) Job should not bind to one resource management platform (YARN/K8s).
+Experiments of Submarine is an offline task. It could be a shell command, a 
Python command, a Spark job, a SQL query, or even a workflow. 
 
-#### Training Job
+The primary purpose of an experiment in Submarine's context is to run training tasks, offline scoring, etc. However, experiments can be generalized to other tasks as well.
 
-Training job is a special kind of job, which includes Tensorflow, PyTorch, and 
other different frameworks: 
+Major requirements of experiments:
 
-1) Allow model engineer, data scientist to run *unmodified* Tensorflow 
programs on YARN/K8s/Container-cloud. 
-2) Allow jobs easy access data/models in HDFS and other storage. 
-3) Support run distributed Tensorflow jobs with simple configs.
-4) Support run user-specified Docker images.
-5) Support specify GPU and other resources.
-6) Support launch tensorboard (and other equivalents for non-TF frameworks) 
for training jobs if user specified.
+1) Experiments can be submitted from UI/CLI/SDK.
+2) Experiments can be monitored/managed from UI/CLI/SDK.
+3) Experiments should not bind to one resource management platform (K8s/YARN).
 
-[TODO] (Need help)
+#### Type of experiments
 
-#### Model Management 
+![](../assets/design/experiments.png)
 
-After training, there will be model artifacts created. Users should be able to:
+There are two types of experiments:
+`Adhoc experiments`: these include a Python/R script or notebook, or even an ad-hoc Tensorflow/PyTorch task, etc.
 
-1) View model metrics.
-2) Save, versioning, tagging model.
-3) Run model verification tasks. 
-4) Run A/B testing, push to production, etc.
+`Predefined experiment library`: these are specialized experiments built from developed libraries such as CTR, BERT, etc. Users only need to specify a few parameters such as input, output, hyper-parameters, etc., instead of worrying about where the training script and its dependencies are located.
 
-#### Metrics for training job and model
+#### Adhoc experiment
 
-Submarine-SDK provides tracking/metrics APIs, which allows developers to add 
tracking/metrics and view tracking/metrics from Submarine Workbench UI.
+Requirements:
 
-#### Workflow 
+- Allow running ad-hoc scripts.
+- Allow model engineers and data scientists to run Tensorflow/PyTorch programs on YARN/K8s/container-cloud.
+- Allow jobs to easily access data/models in HDFS/S3, etc.
+- Support running distributed Tensorflow/PyTorch jobs with simple configs.
+- Support running user-specified Docker images.
+- Support specifying GPU and other resources.
 
-Data-Scientists/Data-Engineers can create workflows from UI. Workflow is DAG 
of jobs.
+#### Predefined experiment library
 
-### Non-requirements
+Here's an example of a predefined experiment library to train a DeepFM model:
 
-TODO: Add non-requirements which we want to avoid.
+```
+{
+  "input": {
+    "train_data": ["hdfs:///user/submarine/data/tr.libsvm"],
+    "valid_data": ["hdfs:///user/submarine/data/va.libsvm"],
+    "test_data": ["hdfs:///user/submarine/data/te.libsvm"],
+    "type": "libsvm"
+  },
+  "output": {
+    "save_model_dir": "hdfs:///user/submarine/deepfm",
+    "metric": "auc"
+  },
+  "training": {
+    "batch_size" : 512,
+    "field_size": 39,
+    "num_epochs": 3,
+    "feature_size": 117581,
+    ...
+  }
+}
+```
 
-[TODO] (Need help)
+Predefined experiment libraries can be shared across users on the same platform; users can also add new or modified predefined experiment libraries via UI/REST API.
 
-## Architecture Overview
+We will also model AutoML and automatic hyper-parameter tuning as predefined experiment libraries.
 
-### Architecture Diagram
+#### Pipeline 
 
-```
-      +-----------------<---+Submarine Workbench+---->------------------+
-      | +---------+ +---------+ +-----------+ +----------+ +----------+ |
-      | |Data Mart| |Notebooks| |Projects   | |Metrics   | |Models    | |
-      | +---------+ +---------+ +-----------+ +----------+ +----------+ |
-      +-----------------------------------------------------------------+
-
-
-      +----------------------Submarine Service--------------------------+
-      |                                                                 |
-      | +-----------------+ +-----------------+ +--------------------+  |
-      | |Compute Engine   | |Job Orchestrator | |     SDK            |  |
-      | |    Connector    | |                 | +--------------------+  |
-      | +-----------------+ +-----------------+                         |
-      |   Spark, Flink         YARN/K8s/Docker    Java/Python/REST      |
-      |   TF, PyTorch                               Mini-Submarine      |
-      |                                                                 |
-      |                                                                 |
-      +-----------------------------------------------------------------+
-      
-      (You can use http://stable.ascii-flow.appspot.com/#Draw 
-      to draw such diagrams)
-```
+A pipeline is a special kind of experiment:
+
+- A pipeline is a DAG of experiments.
+- It can also be treated as a special kind of experiment.
+- Users can submit/terminate a pipeline.
+- Pipelines can be created/submitted via UI/API.
+
+### Environment Profiles
+
+An environment profile (or environment for short) defines a set of libraries and, when Docker is being used, a Docker image, which are used to run an experiment or a notebook.
 
-#### Submarine Workbench 
+A Docker or VM image (such as an AMI: Amazon Machine Image) defines the base layer of the environment.
 
-Submarine Workbench is a UI designed for data scientists. Data scientists can 
interact with Submarine Workbench UI to access notebooks, submit/manage jobs, 
manage models, create model training workflows, access datasets, etc.
+On top of that, users can define a set of libraries (such as Python/R) to 
install.
 
-### Components for Data Scientists
+Users can save different environment configs, which can also be shared across the platform. Environment profiles can be used to run a notebook (e.g. by choosing a different kernel from Jupyter) or an experiment. A predefined experiment library includes which environment to use, so users don't have to choose one.
 
-1) `Notebook Service` helps to do works from data insight to model creation 
and allows notebook sharing between teams. 
-2) `Workflow Service` helps to construct workflows across notebooks or include 
other executable code entry points. This module can construct a DAG and execute 
a user-specified workflow from end to end. 
+Environments can be added/listed/deleted/selected through CLI/SDK.
 
-`NoteBook Service` and `Workflow Service` deployed inside Submarine Workbench 
Server, and provides Web, CLI, REST APIs for 3rd-party integration.
+### Model
 
-4) `Data Mart` helps to create, save, and share dataset which can be used by 
other modules or training.
-5) `Model Training Service`
-   - `Metrics Service` Helps to save metrics during training and analysis 
training results if needed.
-   - `Job Orchestrator` Helps to submit a job (such as 
Tensorflow/PyTorch/Spark) to a resource manager, such as YARN or K8s. It also 
supports submitting a distributed training job. Also, get status/logs, etc. of 
a job regardless of Resource Manager implementation. 
-   - `Compute Engine Connector` Work with Job Orchestrator to submit different 
kinds of jobs. One connector connects to one specific kind of compute 
framework, such as Tensorflow. 
-6) `Model Service` helps to manage, save, version, analysis a model. It also 
helps to push model to production.
-7) `Submarine SDK` provides Java/Python/REST API to allow DS or other 
engineers to integrate into Submarine services. It also includes a 
`mini-submarine` component that launches Submarine components from a single 
Docker container (or a VM image).
-8) `Project Manager` helps to manage projects. Each project can have multiple 
notebooks, workflows, etc.
+#### Model management 
 
-### Components for SREs 
+- Model artifacts are generated by experiments or notebooks.
+- A model consists of artifacts from one or multiple files.
+- Users can choose to save, tag, and version a produced model.
+- Once a model is saved, users can do online model serving or offline scoring of the model.
 
-The following components are designed for SREs (or system admins) of the 
Machine Learning Platform.
+#### Model serving
 
-1) `User Management System` helps admin to onboard new users, upload user 
credentials, assign resource quotas, etc. 
-2) `Resource Quota Management System` helps admin to manage resources quotas 
of teams, organizations. Resources can be machine resources like 
CPU/Memory/Disk, etc. It can also include non-machine resources like $$-based 
budgets.
+After a model is saved, users can specify a serving script and a model, and create a web service to serve the model.
 
-[TODO] (Need help)
+We call this web service an "endpoint". Users can manage (add/stop) model serving endpoints via CLI/API/UI.
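+
+As a purely illustrative sketch (the SDK names below are hypothetical assumptions, not the actual Submarine API), creating and stopping such an endpoint might look like:
+
+```java
+// Illustrative only: create a serving endpoint for a saved model.
+Endpoint endpoint = servingClient.createEndpoint(
+    "deepfm-ctr-serving",   // endpoint name
+    "models/deepfm/3",      // saved model and version to serve
+    "serve.py");            // user-specified serving script
+
+// Endpoints can be stopped later via the same client.
+servingClient.stopEndpoint(endpoint.getId());
+```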
 
-## User Flows
+### Metrics for training job and model
 
-### User flows for Data-Scientists/Data-engineers
+Submarine-SDK provides tracking/metrics APIs, which allow developers to add tracking/metrics and view them from the Submarine Workbench UI.
 
-DS/DE will interact with Submarine to do the following things: 
+### Deployment
 
-New onboard to Submarine Service:
-- Need Admin/SRE help to create a user account, set up user credentials (to 
access storage, metadata, resource manager, etc.), etc.
-- Submarine can integrate with LDAP or similar systems, and users can login 
using OpenID, etc.
+Submarine services (see architecture overview below) should be easy to deploy on-prem / on-cloud. Since there are more and more public cloud offerings for compute/storage management, we need to support deploying Submarine compute-related workloads (such as notebook sessions, experiments, etc.) to cloud-managed clusters.
 
-Access Data: 
-- DS/DE can access data via DataMart. DataMart is an abstraction layer to view 
available datasets that can be accessed. DS/DE needs proper underlying 
permission to read this data. 
-- DataMart provides UIs/APIs to allow DS/DE to preview, upload, import 
data-sets from various locations.
-- DS/DE can also bring their data from different sources, on-prem, or 
on-cloud. They can also add a data quick link (like a URL) to DataMart.
-- Data access could be limited by the physical location of compute clusters. 
For example, it is not viable to access data stored in another data center that 
doesn't have a network connectivity setup.
+This also means Submarine may need to take input parameters from customers and create/manage clusters if needed. It is also a common requirement to use a hybrid of on-prem/on-cloud clusters.
 
-Projects: 
-- Every user starts with a default project. Users can choose to create new 
projects.
-- A project belongs to one user, and user can share a project with different 
users/teams.
-- Users can clone a project belongs to other users who have access.
-- A project can have notebooks, dependencies, or any required custom files. 
(Such as configuration files). We don't suggest to upload any full-sized 
data-set files to a project.
-- Projects can include folders (It's like a regular FS), and folders can be 
mounted to a running notebook/jobs.
+### Security / Access Control / User Management / Quota Management
 
-Notebook: 
-- A notebook belongs to a project.
-- Users can create, clone, import (from file), export (to file) notebook. 
-- Users can ask to attach notebook by notebook service on one of the compute 
clusters. 
-- In contrast, users can request to detach a running notebook instance.
-- A notebook can be shared with other users, teams.
-- A notebook will be versioned, persisted (using Git, Mysql, etc.), and users 
can traverse back to older versions.
+There are 4 kinds of objects that need access control:
 
-Dependencies:
-- A dependency belongs to a project.
-- Users can add dependencies (a.k.a libraries) to a project. 
-- Dependency can be jar/python(PyPI, Egg, Wheel), etc.
-- Users can choose to have BYOI (bring your own container image) for a 
project. 
+- Assets belonging to the Submarine system, which include notebooks, experiments and their results, models, predefined experiment libraries, and environment profiles.
+- Data security. (Who owns what data, and what data can be accessed by each user.)
+- User credentials. (Such as LDAP.)
+- Other security, such as Git repo access, etc.
 
-Job: 
-- A job belongs to a project. 
-- Users can run (or terminate) job with type and parameters on one of the 
running cluster.
-- Users can get the status of running jobs, retrieve job logs, metrics, etc.
-- Job submission and basic operation should be available on both API (CLI) and 
UI. 
-- For different types of jobs, Submarine's `Compute Engine Connector` allows 
taking different parameters to submit a job. For example, submitting a 
`Tensorflow` job allows a specifying number of parameter servers, workers, etc. 
Which is different from `Spark` job.
-- A Notebook can be treated as a special kind of job. (Runnable notebook).
+Data security / user credentials / other security will be delegated to 3rd-party systems such as Apache Ranger, IAM roles, etc.
 
-Workflow: 
-- A workflow belongs to a project. 
-- A workflow is a DAG of jobs. 
-- Users can submit/terminate a workflow.
-- Users can get status from a workflow, and also get a list of 
running/finished/failed/pending jobs from the workflow.
-- Workflow can be created/submitted via UI/API.
+Assets belonging to the Submarine system will be handled by Submarine itself.
 
-Model:
-- The Model is generated by training jobs.
-- A model consists of artifacts from one or multiple files. 
-- Users can choose to save, tag, version a produced model.
-- Once The Model is saved, Users can do the online serving or offline scoring 
of the model.
+Here are the operations a Submarine admin can perform for users / teams, which can then be used to access Submarine's assets.
 
-### User flows for Admins/SRE
+**Operations for admins** 
 
-Operations for users/teams: 
+- Admins use the "User Management System" to onboard new users, upload user credentials, assign resource quotas, etc.
 - Admins can create new users, new teams, update user/team mappings. Or remove 
users/teams. 
 - Admin can set resource quotas (if different from system default), 
permissions, upload/update necessary credentials (like Kerberos keytab) of a 
user.
 - A DE/DS can also be an admin if the DE/DS has admin access. (Like a 
privileged user). This will be useful when a cluster is exclusively shared by a 
user or only shared by a small team.
+- `Resource Quota Management System` helps admins manage resource quotas of teams and organizations. Resources can be machine resources like CPU/memory/disk, etc. It can also include non-machine resources like $$-based budgets.
 
-## Deployment
+### Dataset 
 
-```
+There's also a need to tag datasets which will be used for training and shared across the platform by different users.
 
+As mentioned above, access to the actual data will be handled by 3rd-party systems like Apache Ranger / Hive Metastore, which is out of Submarine's scope.
 
-    +---------------Submarine Service---+
-    |                                   |
-    | +------------+ +------------+     |
-    | |Web Svc/Prxy| |Backend Svc |     |    +--Submarine Data+-+
-    | +------------+ +------------+     |    |Project/Notebook  |
-    |   ^                               |    |Model/Metrics     |
-    +---|-------------------------------+    |Libraries/DataMart|
-        |                                    +------------------+
-        |
-        |      +----Compute Cluster 1---+    +--Image Registry--+
-        +      |User Notebook Instance  |    |   User's Images  |
-      User /   |Jobs (Spark, TF)        |    |                  |
-      Admin    |                        |    +------------------+
-               |                        |
-               +------------------------+    +-Data Storage-----+
-                                             | S3/HDFS, etc.    |
-               +----Compute Cluster 2---+    |                  |
-                                             +------------------+
-                        ...
-```
+## Architecture Overview
 
-Here's a diagram to illustrate the Submarine's deployment.
+### Architecture Diagram
 
-- Submarine service consists of web service/proxy, and backend services. 
They're like "control planes" of Submarine, and users will interact with these 
services.
-- Submarine service could be a microservice architecture and can be deployed 
to one of the compute clusters. (see below). 
-- There're multiple compute clusters that could be used by Submarine service. 
For user's running notebook instance, jobs, etc. they will be placed to one of 
the compute clusters by user's preference or defined policies.
-- Submarine's data includes project/notebook(content)/models/metrics, etc. 
will be stored separately from dataset (DataMart)
-- Datasets can be stored in various locations such as S3/HDFS. 
-- Users can push container (such as Docker) images to a preconfigured registry 
in Submarine, so Submarine service can know how to pull required container 
images.
+```
+     +-----------------------------------------------------------------+
+     |            Submarine UI / CLI / REST API / SDK                  |
+     |                 Mini-Submarine                                  |
+     +-----------------------------------------------------------------+
+
+     +--------------------Submarine Server-----------------------------+
+     | +---------+ +---------+ +----------+ +----------+ +------------+|
+     | |Data set | |Notebooks| |Experiment| |Models    | |Servings    ||
+     | +---------+ +---------+ +----------+ +----------+ +------------+|
+     |-----------------------------------------------------------------|
+     |                                                                 |
+     | +-----------------+ +-----------------+ +---------------------+ |
+     | |Experiment       | |Compute Resource | |Other Management     | |
+     | |Manager          | |   Manager       | |Services             | |
+     | +-----------------+ +-----------------+ +---------------------+ |
+     |   Spark, template      YARN/K8s/Docker                          |
+     |   TF, PyTorch, pipeline                                         |
+     |                                                                 |
+     + +-----------------+                                             +
+     | |Submarine Meta   |                                             |
+     | |    Store        |                                             |
+     | +-----------------+                                             |
+     |                                                                 |
+     +-----------------------------------------------------------------+
+
+      (You can use http://stable.ascii-flow.appspot.com/#Draw
+      to draw such diagrams)
+```
 
+`Compute Resource Manager` helps to manage compute resources on-prem/on-cloud; this module can also handle cluster creation / management, etc.
 
-## Security Models
+`Experiment Manager` works with the "Compute Resource Manager" to submit different kinds of workloads such as (distributed) Tensorflow / PyTorch, etc.
 
-[TODO] (Need help)
+`Submarine SDK` provides Java/Python/REST API to allow DS or other engineers 
to integrate into Submarine services. It also includes a `mini-submarine` 
component that launches Submarine components from a single Docker container (or 
a VM image).
+
+Details of Submarine Server design can be found at 
[submarine-server-design](./submarine-server/architecture.md).
 
 # References
+
+
diff --git a/docs/design/environments-implementation.md 
b/docs/design/environments-implementation.md
new file mode 100644
index 0000000..14e51c8
--- /dev/null
+++ b/docs/design/environments-implementation.md
@@ -0,0 +1,194 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+   http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# Overview
+
+An environment profile (or environment for short) defines a set of libraries and, when Docker is being used, a Docker image, which are used to run an experiment or a notebook.
+
+A Docker and/or VM image (such as VirtualBox/VMware images, Amazon Machine Images (AMI), or a custom Azure VM image) defines the base layer of the environment. Please note that a VM image is different from a VM instance type.
+
+On top of that, users can define a set of libraries (such as Python/R) to install; we call this the kernel.
+
+**Example of Environment**
+
+```
+
+     +-------------------+
+     |+-----------------+|
+     || Python=3.7      ||
+     || Tensorflow=2.0  ||
+     |+---Exp Dependency+|
+     |+-----------------+|
+     ||OS=Ubuntu16.04   ||
+     ||CUDA=10.2        ||
+     ||GPU_Driver=375.. ||
+     |+---Base Library--+|
+     +-------------------+
+```
+
+As you can see, there are base libraries, such as the OS, CUDA version, GPU driver, etc. They can be provided by specifying a VM image / Docker image.
+
+On top of that, users can bring their own dependencies, such as different versions of Python, Tensorflow, Pandas, etc.
+
+**How do users use environments?**
+
+Users can save different environment configs, which can also be shared across the platform. Environment profiles can be used to run a notebook (e.g. by choosing a different kernel from Jupyter) or an experiment. A predefined experiment library includes which environment to use, so users don't have to choose one.
+
+```
+
+        +-------------------+
+        |+-----------------+|       +------------+
+        || Python=3.7      ||       |User1       |
+        || Tensorflow=2.0  ||       +------------+
+        |+---Kernel -------+|       +------------+
+        |+-----------------+|<----+ |User2       |
+        ||OS=Ubuntu16.04   ||     + +------------+
+        ||CUDA=10.2        ||     | +------------+
+        ||GPU_Driver=375.. ||     | |User3       |
+        |+---Base Library--+|     | +------------+
+        +-----Default-Env---+     |
+                                  |
+                                  |
+        +-------------------+     |
+        |+-----------------+|     |
+        || Python=3.3      ||     |
+        || Tensorflow=2.0  ||     |
+        |+---kernel--------+|     |
+        |+-----------------+|     |
+        ||OS=Ubuntu16.04   ||     |
+        ||CUDA=10.3        ||<----+
+        ||GPU_Driver=375.. ||
+        |+---Base Library--+|
+        +-----My-Customized-+
+```
+
+There are two environments in the above graph, "Default-Env" and "My-Customized", which can have different combinations of libraries for different experiments/notebooks. Users can choose different environments for different experiments as they wish.
+
+Environments can be added/listed/deleted/selected through CLI/SDK/UI.
+
+# Implementation
+
+## Environment API definition
+
+Let's look at what the object definition of an environment looks like. The environment API looks like:
+
+```
+    name: "my_submarine_env",
+    vm-image: "...",
+    docker-image: "...", 
+    kernel: 
+       <object of kernel>
+    description: "this is the most common env used by team ABC"
+```
+
+- `vm-image` is optional if we don't need to launch a new VM (like running a training job on a cloud-remote machine).
+- `docker-image` is required.
+- `kernel` could be optional if the kernel is already included in the vm-image or docker-image.
+- `name` of the environment should be unique in the system, so users can reference it when creating a new experiment/notebook.
+
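+As an illustration (a hypothetical SDK call, not the final Submarine API; the `Environment` builder and `environmentClient` are assumed names), registering such an environment might look like:
+
+```java
+// Illustrative sketch: register an environment through a hypothetical Java SDK.
+Environment env = Environment.builder()
+    .name("my_submarine_env")          // must be unique in the system
+    .dockerImage("...")                // required
+    .kernel(condaKernelYaml)           // optional if the image already has one
+    .description("this is the most common env used by team ABC")
+    .build();
+environmentClient.create(env);         // duplicate names rejected server-side
+```
+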
+## VM-image and Docker-image
+
+Docker images and VM images should be prepared by system admins / SREs; it is hard for data scientists to write an error-proof Dockerfile and push/manage Docker images. This is one of the reasons we hide the Docker image inside the "environment": we encourage users to customize their kernels if needed, without having to touch Dockerfiles or build/push/manage new Docker images.
+
+As a project, we will document best practices and example Dockerfiles.
+
+The Dockerfile should include a proper `ENTRYPOINT` definition pointing to our default script, so that no matter whether it is a notebook or an experiment, we set up the kernel (see below) and other environment variables properly.
+
+## Kernel Implementation
+
+After investigating different alternatives (such as pipenv, venv, etc.), we decided to use Conda environments, which nicely replace Python virtualenv and pip, and can also support other languages. More details can be found at: https://medium.com/@krishnaregmi/pipenv-vs-virtualenv-vs-conda-environment-3dde3f6869ed
+
+Once Conda is used, users can easily add and remove dependencies of a Conda environment. Users can also easily export the environment to a YAML file.
+
+The YAML file of a Conda environment produced by `conda env export` looks like:
+
+```
+name: base
+channels:
+  - defaults
+dependencies:
+  - _ipyw_jlab_nb_ext_conf=0.1.0=py37_0
+  - alabaster=0.7.12=py37_0
+  - anaconda=2020.02=py37_0
+  - anaconda-client=1.7.2=py37_0
+  - anaconda-navigator=1.9.12=py37_0
+  - anaconda-project=0.8.4=py_0
+  - applaunchservices=0.2.1=py_0
+```
+
+Including the Conda kernel, the environment object may look like:
+
+```
+    name: "my_submarine_env",
+    vm-image: "...",
+    docker-image: "...", 
+    kernel: 
+      name: team_default_python_3.7
+      channels:
+        - defaults
+      dependencies:
+        - _ipyw_jlab_nb_ext_conf=0.1.0=py37_0
+        - alabaster=0.7.12=py37_0
+        - anaconda=2020.02=py37_0
+        - anaconda-client=1.7.2=py37_0
+        - anaconda-navigator=1.9.12=py37_0
+```
+
+When launching a new experiment / notebook session using `my_submarine_env`, the Submarine server will use the defined Docker image and Conda kernel to launch the container.
+
+## Storage of Environment 
+
+A Submarine environment is just a simple text file, so it will be persisted in the Submarine metastore, which is ideally a database.
+
+Docker images are stored inside a regular Docker registry, which is handled outside of the system.
+
+Conda dependencies are stored in a Conda channel (where referenced packages are stored), which will be handled/set up separately. (Popular Conda channels are `defaults` and `conda-forge`.)
+
+For a more detailed discussion about storage-related implementations, please refer to [storage-implementation](./storage-implementation.md).
+
+## How do we make it easy for users to use Submarine environments?
+
+We like simplicity, and we don't want to leak implementation complexity to users. To make that happen, we have to do some work to hide the complexity.
+
+There are two primary uses of environments: experiments and notebooks. For both of them, users should not have to do things like explicitly calling `conda activate $env_name` to activate environments. To make that happen, we can include the following parts in the Dockerfile:
+
+```
+FROM ubuntu:18.04
+
+<Include whatever base-libraries like CUDA, etc.>
+
+<Make sure conda (with our preferred version) is installed>
+<Make sure Jupyter (with our preferred version) is installed>
+
+# This is just a sample Dockerfile; users can do more customizations if needed
+ENTRYPOINT ["/submarine-bootstrap.sh"]
+```
+
+When the Submarine Server (this is an implementation detail of the Submarine Server; users will not see it at all) launches an experiment or notebook, it will invoke the following `docker run` command (or any other equivalent, like using a K8s spec):
+
+```
+docker run <submarine_docker_image> --kernel <kernel_name> -- .... python 
train.py --batch_size 5 (and other parameters)
+```
+
+Similarly, to launch a notebook:
+
+```
+docker run <submarine_docker_image> --kernel <kernel_name> -- .... jupyter
+```
+
+The `submarine-bootstrap.sh` script is part of the Submarine repo, and will handle the `--kernel` argument, which invokes `conda activate $kernel_name` before anything else (like running the training job).
+
+
+
diff --git a/docs/design/experiment-implementation.md 
b/docs/design/experiment-implementation.md
new file mode 100644
index 0000000..815122f
--- /dev/null
+++ b/docs/design/experiment-implementation.md
@@ -0,0 +1,500 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+   http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# Experiment Implementations
+
+## Overview
+
+This document talks about the implementation of experiments, the flows, and design considerations.
+
+An experiment consists of the following components and also interacts with other Submarine or 3rd-party components, as shown below:
+
+```
+
+
+              +---------------------------------------+
+ +----------+ |      Experiment Tasks                 |
+ |Run       | |                                       |
+ |Configs   | | +----------------------------------+  |
+ +----------+ | |   Experiment Runnable Code       |  | +-----------------+
+ +----------+ | |                                  |  | |Output Artifacts |
+ |Input Data| | |     (Like train-job.py)          |  | |(Models, etc.)   |
+ |          | | +----------------------------------+  | +-----------------+
+ |          | | +----------------------------------+  |
+ +----------+ | |   Experiment Deps (Like Python)  |  | +-------------+
+              | +----------------------------------+  | |Logs/Metrics |
+              | +----------------------------------+  | |             |
+              | |  OS, Base Libraries (Like CUDA)  |  | +-------------+
+              | +----------------------------------+  |
+              +---------------------------------------+
+                                 ^
+                                 | (Launch Task with resources)
+                                 +
+                 +---------------------------------+
+                 |Resource Manager (K8s/YARN/Cloud)|
+                 +---------------------------------+
+```
+
+As shown in the above diagram, a Submarine experiment consists of the following items:
+
+- On the left side, there are input data and run configs.
+- In the middle box are the experiment tasks; there could be multiple tasks when we run distributed training, pipelines, etc.
+  - There is the main runnable code, such as `train.py`, the training main entry point.
+  - The two boxes below, experiment dependencies and OS/base libraries, are what we call the `Submarine Environment Profile`, or `Environment` for short, which defines the basic libraries needed to run the main experiment code.
+  - Experiment tasks are launched by a resource manager, such as K8s/YARN/Cloud, or just launched locally. There are resource constraints for each experiment task (e.g. how much memory, cores, GPU, disk, etc. can be used by tasks).
+- On the right side are the artifacts generated by experiments:
+  - Output artifacts: the main output of the experiment; it could be model(s), or output data when we do batch prediction.
+  - Logs/metrics for further troubleshooting or understanding of the experiment's quality.
+
+For the rest of the design doc, we will talk about how we handle the environment and code, and manage outputs/logs, etc.
+
+## API of Experiment
+
+This is not a full definition of an experiment; for more details, please refer to the experiment API.
+
+Here's just an example of an experiment object, which helps developers understand what is included in an experiment.
+
+```yaml
+experiment:
+  name: "abc"
+  type: "script"
+  environment: "team-default-ml-env"
+  code:
+    sync_mode: s3
+    url: "s3://bucket/training-job.tar.gz"
+  parameter: >
+    python training.py --iteration 10
+    --input=s3://bucket/input output=s3://bucket/output
+  resource_constraint:
+    res: "mem=20gb, vcore=3, gpu=2"
+  timeout: "30 mins"
+```
+
+This defines a "script" experiment named "abc"; the name can be used to track the experiment. The environment "team-default-ml-env" is defined to make sure the dependencies of the job can be downloaded properly before executing the job.
+
+`code` defines where the experiment code will be downloaded from; we will support a couple of sync_mode options like s3 (or abfs/hdfs), git, etc.
+
+Different types of experiments will have different specs; for example, a distributed Tensorflow spec may look like:
+
+```yaml
+experiment:
+  name: "abc-distributed-tf"
+  type: "distributed-tf"
+  ps:
+    environment: "team-default-ml-cpu"
+    resource_constraint:
+      res: "mem=20gb, vcore=3, gpu=0"
+  worker:
+    environment: "team-default-ml-gpu"
+    resource_constraint:
+      res: "mem=20gb, vcore=3, gpu=2"
+  code:
+    sync_mode: git
+    url: "https://foo.com/training-job.git"
+  parameter: >
+    python /code/training-job/training.py --iteration 10
+    --input=s3://bucket/input output=s3://bucket/output
+  tensorboard: enabled
+  timeout: "30 mins"
+```
+
+Since we have different Docker images, one using GPU and one not using GPU, we can specify different environments and resource constraints.
+
+## Manage environments for experiment
+
+Please refer to [environments-implementation.md](./environments-implementation.md) for more details.
+
+## Manage storages for experiment
+
+There are different types of storage, such as logs, metrics, and dependencies (environments). Please refer to [storage-implementation](./storage-implementation.md) for more details, including how to manage experiment code.
+
+## Manage Pre-defined experiment libraries
+
+## Flow: Submit an experiment
+
+### Submit via SDK Flows
+
+To better understand the experiment implementation, it helps to understand the steps of experiment submission.
+
+*Please note that the code below is just pseudo code, not official APIs.*
+
+### Specify what environment to use
+
+Before submitting the experiment, you have to choose which environment to use. An environment defines the dependencies, etc. of an experiment or a notebook, and might look like below:
+
+```
+conda_environment = 
+"""
+  name: conda-env
+  channels:
+    - defaults
+  dependencies:
+    - asn1crypto=1.3.0=py37_0
+    - blas=1.0=mkl
+    - ca-certificates=2020.1.1=0
+    - certifi=2020.4.5.1=py37_0
+    - cffi=1.14.0=py37hb5b8e2f_0
+    - chardet=3.0.4=py37_1003
+  prefix: /opt/anaconda3/envs/conda-env
+"""
+
+# This environment can be different from notebook's own environment
+environment = create_environment {
+    DockerImage = "ubuntu:16",
+    CondaEnvironment = conda_environment
+}
+```
+
+To better understand how environments work, please refer to [environments-implementation](./environments-implementation.md).
+
+### Create an experiment, specifying where the training code is located and its parameters
+
+For an ad-hoc experiment (code located on S3), assume the training code is part of `training-job.tar.gz` and the main script is `train.py`. When the job is launched, whatever is specified in `localize_artifacts` will be downloaded.
+
+```
+experiment = create_experiment {
+    Environment = environment, 
+    ExperimentConfig = {
+       type = "adhoc",
+       localize_artifacts = [
+            "s3://bucket/training-job.tar.gz"
+       ],
+       name = "abc",
+       parameter = "python training.py --iteration 10 
--input="s3://bucket/input output="s3://bucket/output",
+    }
+}
+experiment.run()
+experiment.wait_for_finish(print_output=True)
+```
+
+##### Run a notebook file in offline mode
+
+It is possible that we want to run a notebook file in offline mode; to do that, here's the code to run a notebook:
+
+```
+experiment = create_experiment {
+    Environment = environment, 
+    ExperimentConfig = {
+       type = "adhoc",
+       localize_artifacts = [
+            "s3://bucket/folder/notebook-123.ipynb"
+       ],
+       name = "abc",
+       parameter = "runipy training.ipynb --iteration 10 
--input="s3://bucket/input output="s3://bucket/output",
+    }
+}
+experiment.run()
+experiment.wait_for_finish(print_output=True)
+```
+
+##### Run pre-defined experiment library
+
+```
+experiment = create_experiment {
+    # Here you can use default environment of library
+    Environment = environment, 
+    ExperimentConfig = {
+       type = "template",
+       name = "abc",
+       # A unique name of template 
+       template = "deepfm_ctr", 
+       # A yaml file defines which parameters need to be specified.
+       parameter = {
+           Input: "S3://.../input",
+           Output: "S3://.../output"
+           Training: {
+              "batch_size": 512,
+              "l2_reg": 0.01,
+              ...
+           }
+       }
+    }
+}
+experiment.run()
+experiment.wait_for_finish(print_output=True)
+```
+
+## Summary: Experiment vs. Notebook Session
+
+There's a common misunderstanding about the differences between running an experiment vs. running a task from a notebook session. We will talk about the differences and commonalities:
+
+**Differences**
+
+|                                   | Experiment                                                   | Notebook Session                                             |
+| --------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| Run mode                          | Offline                                                      | Interactive                                                  |
+| Output Artifacts (a.k.a. model)   | Persisted in a shared storage (like S3/NFS)                  | Local in the notebook session container, could be ephemeral  |
+| Run history (meta, logs, metrics) | Meta/logs/metrics can be traced from experiment UI (or corresponding API) | No run history can be traced from Submarine UI/API. Can view the current running paragraph's log/metrics, etc. |
+| What to run?                      | Code from Docker image or shared storage (like tarball on S3, GitHub, etc.) | Local in the notebook's paragraph                            |
+
+**Commonalities** 
+
+|             | Experiment & Notebook Session                     |
+| ----------- | ------------------------------------------------- |
+| Environment | They can share the same Environment configuration |
+
+## Experiment-related modules inside Submarine-server
+
+(Please refer to [architecture of submarine 
server](./submarine-server/architecture.md) for more details)
+
+### Experiment Manager
+
+The experiment manager receives experiment requests, persists the experiment metadata in a database (e.g. MySQL), and invokes the subsequent modules to submit and monitor the experiment's execution.
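+
+A minimal sketch of how such a manager could be wired together is shown below; all class and method names here (`ExperimentStore`, `ClusterCredentials`, etc.) are illustrative assumptions, not the actual submarine-server API:
+
+```java
+// Illustrative sketch only: accept a spec, persist meta, then hand off.
+public class ExperimentManager {
+  private ExperimentStore store;            // persists meta to MySQL or similar
+  private ComputeClusterManager clusters;   // resolves the target cluster
+  private ExperimentSubmitter submitter;    // plug-in based submitter
+  private ExperimentMonitor monitor;        // tracks life-cycle events
+
+  public Experiment submit(ExperimentSpec spec) {
+    submitter.validate(spec);               // reject bad specs early
+    Experiment exp = store.create(spec);    // state machine starts at ACCEPTED
+    ClusterCredentials creds = clusters.getOrCreateCluster(spec);
+    submitter.submit(creds, exp);           // hand off to K8s/YARN plug-in
+    monitor.watch(exp);                     // async status/metric updates
+    return exp;
+  }
+}
+```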
+
+### Compute Cluster Manager
+
+After an experiment is accepted by the experiment manager, and based on which cluster the experiment is intended to run on (as mentioned in the previous sections, Submarine supports managing multiple compute clusters), the compute cluster manager will return credentials to access that compute cluster. It will also be responsible for creating a new compute cluster if needed.
+
+For most of the on-prem use cases, there's only one cluster involved; in such cases, the ComputeClusterManager returns credentials to access the local cluster if needed.
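+
+A minimal interface sketch for this module (the type names are illustrative assumptions, not the final API):
+
+```java
+// Illustrative: resolve or create the compute cluster an experiment runs on,
+// and return the credentials needed to reach it.
+public interface ComputeClusterManager {
+  ClusterCredentials getOrCreateCluster(ExperimentSpec spec);
+}
+```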
+
+### Experiment Submitter
+
+The Experiment Submitter handles the different kinds of experiments to run (e.g. ad-hoc script, distributed TF, MPI, pre-defined templates, pipeline, AutoML, etc.). Such experiments can be managed by different resource management systems (e.g. K8s, YARN, container cloud, etc.).
+
+To meet the requirement of supporting various kinds of experiments and resource managers, we choose to use plug-in modules to support different submitters (which requires adding jars to submarine-server's classpath).
+
+Jars and dependencies of plug-ins could break the submarine-server, the plug-in manager, or both. To avoid this issue, we can instantiate submitter plug-ins using a classloader that is different from the system classloader.
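+
+A minimal sketch of that isolation, assuming a simple `Submitter` SPI interface (the class names and jar path below are illustrative, not the actual plug-in loading code):
+
+```java
+import java.io.File;
+import java.net.URL;
+import java.net.URLClassLoader;
+import java.util.ServiceLoader;
+
+public final class SubmitterPluginLoader {
+  // Load a submitter plug-in with its own classloader so its dependencies
+  // cannot clash with submarine-server's classpath.
+  public static ServiceLoader<Submitter> load(File pluginJar) throws Exception {
+    URLClassLoader pluginLoader = new URLClassLoader(
+        new URL[] { pluginJar.toURI().toURL() },
+        Submitter.class.getClassLoader());   // parent exposes the SPI interface
+    return ServiceLoader.load(Submitter.class, pluginLoader);
+  }
+}
+```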
+
+#### Submitter Plug-ins
+
+Each plug-in uses a separate module under the server-submitter module. As the default implementations, we provide plug-ins for YARN and K8s. For YARN clusters, we provide the submitter-yarn and submitter-yarnservice plug-ins. The submitter-yarn plug-in uses [TonY](https://github.com/linkedin/TonY) as the runtime to run the training job, and the submitter-yarnservice plug-in directly uses the [YARN Service](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html) w [...]
+
+The submitter-k8s plug-in is used to submit the job to a Kubernetes cluster and uses an [operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) as the runtime. The submitter-k8s plug-in implements the operations on the CRD objects and provides a Java interface. In the beginning, we use the [tf-operator](https://github.com/kubeflow/tf-operator) for TensorFlow.
+
+If Submarine wants to support other resource management systems in the future, such as submarine-docker-cluster (Submarine uses the Raft algorithm to create a Docker cluster from the Docker runtime environments on multiple servers, providing a lightweight resource scheduling system for small-scale users), we should create a new plug-in module named submitter-docker under the server-submitter module.
+
+### Experiment Monitor
+
+The monitor tracks the experiment life cycle and records the main events and key info at runtime. As the experiment run progresses, metrics are needed to evaluate the ongoing success or failure of the execution. To adapt to the different cluster resource management systems, we need a generic metric info structure, and each submitter plug-in should inherit and complete it by itself.
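+
+A sketch of what such a generic structure could look like (the field names are assumptions, not a final schema); each submitter plug-in would fill it from its resource manager's native status API:
+
+```java
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+// Illustrative generic run-time info shared by all submitter plug-ins.
+public class ExperimentRunInfo {
+  public enum State { ACCEPTED, RUNNING, SUCCEEDED, FAILED, KILLED }
+
+  private String experimentId;
+  private State state;
+  private long startTime;
+  private long finishTime;
+  private Map<String, Float> metrics = new HashMap<>(); // e.g. "auc" -> 0.82f
+  private List<String> events = new ArrayList<>();      // key life-cycle events
+  // getters/setters omitted for brevity
+}
+```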
+
+### Invoke flows of experiment-related components
+
+```
+ +-----------------+  +----------------+ +----------------+ +-----------------+
+ |Experiments      |  |Compute Cluster | |Experiment      | | Experiment      |
+ |Mgr              |  |Mgr             | |Submitter       | | Monitor         |
+ +-----------------+  +----------------+ +----------------+ +-----------------+
+          +                    +                  +                  +
+ User     |                    |                  |                  |
+ Submit   |+------------------------------------->+                  +
+ Xperiment|          Use submitter.validate(spec) |                  |
+          |          to validate spec and create  |                  |
+          |          experiment object (state-    |                  |
+          |          machine).                    |                  |
+          |                                       |                  |
+          |          The experiment manager will  |                  |
+          |          persist meta-data to Database|                  |
+          |                    |                  |                  |
+          |                    |                  +                  +
+          |+-----------------> +                  |                  |
+          |  Submit Experiments|                  |                  |
+          |   To ComputeCluster|                  |                  |
+          |   Mgr, get existing|+---------------->|                  |
+          |   cluster, or      |  Use Submitter   |                  |
+          |   create a new one.|  to submit       |+---------------> |
+          |                    |  Different kinds |  Once job is     |
+          |                    |  of experiments  |  submitted, use  |+----+
+          |                    |  to k8s/yarn, etc|  monitor to get  |     |
+          |                    |                  |  status updates  |     |
+          |                    |                  |                  |     | 
Monitor
+          |                    |                  |                  |     | 
Xperiment
+          |                    |                  |                  |     | 
status
+          |                    |                  |                  |     |
+          |<--------------------------------------------------------+|     |
+          |                    |                  |                  |     |
+          |                  Update Status back to Experiment        |     |
+          |                    |      Manager     |                  |<----+
+          |                    |                  |                  |
+          |                    |                  |                  |
+          |                    |                  |                  |
+          v                    v                  v                  v
+```
+
+TODO: add more details about template, environment, etc.
+
+## Common modules of experiment/notebook-session/model-serving
+
+Experiment/notebook-session/model-serving share a lot of commonalities; all of them:
+
+- Are workloads running on YARN/K8s.
+- Need to persist metadata to the DB.
+- Need to monitor task/service running status from the resource management system.
+
+We need to make their implementations loosely coupled, but at the same time share building blocks as much as possible (e.g. submitting PodSpecs to K8s, monitoring status, getting logs, etc.) to reduce duplication.
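+
+One way to share those building blocks is a thin common layer that the experiment/notebook-session/model-serving managers all delegate to; the sketch below is purely illustrative and all names are assumptions:
+
+```java
+// Illustrative shared abstraction for anything that is submitted to K8s/YARN,
+// persisted to the DB, and monitored the same way.
+public interface WorkloadService<S> {
+  String submit(S spec);                   // create pods / YARN app from a spec
+  WorkloadStatus status(String id);        // poll the resource manager for status
+  java.util.List<String> logs(String id);  // fetch logs for troubleshooting
+}
+```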
+
+## Support Predefined-experiment-templates
+
+A predefined experiment template is just a way to save data scientists from repeatedly entering parameters, which is error-prone and a bad user experience.
+
+### Predefined-experiment-template API to run experiment
+
+A predefined experiment template consists of a list of parameters; each parameter has 4 properties:
+
+| Key             | Required   | Default Value                                                           | Description                  |
+| --------------- | ---------- | ----------------------------------------------------------------------- | ---------------------------- |
+| Name of the key | true/false | When required = false, a default value can be provided by the template | Description of the parameter |
+
+For the example of the DeepFM CTR training experiment mentioned in [architecture-and-requirements.md](./architecture-and-requirements.md):
+
+```
+{
+  "input": {
+    "train_data": ["hdfs:///user/submarine/data/tr.libsvm"],
+    "valid_data": ["hdfs:///user/submarine/data/va.libsvm"],
+    "test_data": ["hdfs:///user/submarine/data/te.libsvm"],
+    "type": "libsvm"
+  },
+  "output": {
+    "save_model_dir": "hdfs:///user/submarine/deepfm",
+    "metric": "auc"
+  },
+  "training": {
+    "batch_size" : 512,
+    "field_size": 39,
+    "num_epochs": 3,
+    "feature_size": 117581,
+    ...
+  }
+}
+```
+
+The template will be (in yaml format):
+
+```yaml
+# deepfm.ctr template
+name: deepfm.ctr
+author: 
+description: >
+  This is a template to run CTR training using deepfm algorithm, by default it 
runs
+  single node TF job, you can also overwrite training parameters to use 
distributed
+  training. 
+  
+parameters: 
+  - name: input.train_data
+    required: true 
+    description: >
+      train data is expected in SVM format, and can be stored in HDFS/S3 
+    ...
+  - name: training.batch_size
+    required: false
+    default: 32 
+    description: This is batch size of training
+```
+
+The above template format can be used in UI/API.
+
+### Handle Predefined-experiment-template from server side
+
+Please note that the conversion of a predefined experiment template will always be handled by the server. The invocation flow looks like:
+
+```
+
+                         +------------Submarine Server -----------------------+
+   +--------------+      |  +-----------------+                               |
+   |Client        |+------->|Experiment Mgr   |                               |
+   |              |      |  |                 |                               |
+   +--------------+      |  +-----------------+                               |
+                         |          +                                         |
+          Submit         |  +-------v---------+       Get Experiment Template |
+          Template       |  |Experiment       |<-----+From pre-registered     |
+          Parameters     |  |Template Registry|       Templates               |
+          to Submarine   |  +-------+---------+                               |
+          Server         |          |                                         |
+                         |  +-------v---------+       +-----------------+     |
+                         |  |Deepfm CTR Templ-|       |Experiment-      |     |
+                         |  |ate Handler      +------>|Tensorflow       |     |
+                         |  +-----------------+       +--------+--------+     |
+                         |                                     |              |
+                         |                                     |              |
+                         |                            +--------v--------+     |
+                         |                            |Experiment       |     |
+                         |                            |Submitter        |     |
+                         |                            +--------+--------+     |
+                         |                                     |              |
+                         |                                     |              |
+                         |                            +--------v--------+     |
+                         |                            |                 |     |
+                         |                            | ......          |     |
+                         |                            +-----------------+     |
+                         |                                                    |
+                         +----------------------------------------------------+
+```
+
+Basically, the client submits template parameters to the Submarine Server. Inside the server, the corresponding template handler is found based on the template name, and that handler converts the input parameters into an actual experiment, such as a distributed TF experiment. After that, it follows the same route (experiment spec validation, compute cluster manager, etc.) to get the experiment submitted and monitored.
+
+A predefined experiment template is able to create any kind of experiment; it could even be a pipeline:
+
+```
+
+   +-----------------+                  +------------------+
+   |Template XYZ     |                  | XYZ Template     |
+   |                 |+---------------> | Handler          |
+   +-----------------+                  +------------------+
+                                                   +
+                                                   |
+                                                   |
+                                                   |
+                                                   |
+                                                   v
+             +--------------------+      +------------------+
+             | +-----------------+|      | Predefined       |
+             | |  Split Train/   ||<----+| Pipeline         |
+             | |  Test data      ||      +------------------+
+             | +-------+---------+|
+             |         |          |
+             | +-------v---------+|
+             | |  Spark Job ETL  ||
+             | |                 ||
+             | +-------+---------+|
+             |         |          |
+             | +-------v---------+|
+             | | Train using     ||
+             | | XGBoost         ||
+             | +-------+---------+|
+             |         |          |
+             | +-------v---------+|
+             | | Validate Train  ||
+             | | Results         ||
+             | +-----------------+|
+             |                    |
+             +--------------------+
+```
+
+Templates can also be chained to reuse other template handlers:
+
+```
+
+   +-----------------+                  +------------------+
+   |Template XYZ     |                  | XYZ Template     |
+   |                 |+---------------> | Handler          |
+   +-----------------+                  +------------------+
+                                                   +
+                                                   |
+                                                   v
+               +------------------+      +------------------+
+               |Distributed       |      | ABC Template     |
+               |TF Experiment     |<----+| Handler          |
+               +------------------+      +------------------+
+```
+
+A template handler is a callable class inside the Submarine Server, with a standard interface defined like:
+
+```java
+interface ExperimentTemplateHandler {
+   ExperimentSpec createExperiment(TemplatedExperimentParameters param);
+}
+```
+
+We should avoid requiring users to write code when they want to add a new template; instead, we should provide several standard template handlers that cover most of the template handling needs.
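+
+To illustrate, below is a minimal sketch of such a standard handler based on simple parameter substitution. The interface above is restated with simplified stand-in types so the sketch compiles on its own; all other names are illustrative and not a finalized Submarine API.
+
+```java
+// A minimal, self-contained sketch of a generic substitution-based template
+// handler; names other than the interface above are illustrative only.
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+class ExperimentSpec {                      // simplified stand-in for the real spec
+  Map<String, String> fields = new HashMap<>();
+}
+
+class TemplatedExperimentParameters {       // what the client submits
+  String templateName;
+  Map<String, String> params = new HashMap<>();
+}
+
+interface ExperimentTemplateHandler {
+  ExperimentSpec createExperiment(TemplatedExperimentParameters param);
+}
+
+class TemplateParameter {                   // one "parameters:" entry of the yaml template
+  String name;
+  boolean required;
+  String defaultValue;
+
+  TemplateParameter(String name, boolean required, String defaultValue) {
+    this.name = name;
+    this.required = required;
+    this.defaultValue = defaultValue;
+  }
+}
+
+class StandardSubstitutionTemplateHandler implements ExperimentTemplateHandler {
+  private final List<TemplateParameter> declaredParams;
+
+  StandardSubstitutionTemplateHandler(List<TemplateParameter> declaredParams) {
+    this.declaredParams = declaredParams;
+  }
+
+  @Override
+  public ExperimentSpec createExperiment(TemplatedExperimentParameters param) {
+    ExperimentSpec spec = new ExperimentSpec();
+    for (TemplateParameter p : declaredParams) {
+      String value = param.params.get(p.name);
+      if (value == null) {
+        if (p.required) {
+          throw new IllegalArgumentException("Missing required parameter: " + p.name);
+        }
+        value = p.defaultValue;              // fall back to the template default
+      }
+      spec.fields.put(p.name, value);        // substitute into the generated experiment spec
+    }
+    return spec;
+  }
+}
+```
+
+A handler like this would let most templates be registered as pure configuration, without any user-provided code.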
+
+Experiment templates can be registered/updated/deleted via Submarine Server's REST API, which needs to be discussed separately in this doc. (TODO)
\ No newline at end of file
diff --git a/docs/design/implementation-notes.md 
b/docs/design/implementation-notes.md
new file mode 100644
index 0000000..2bbad3e
--- /dev/null
+++ b/docs/design/implementation-notes.md
@@ -0,0 +1,30 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+   http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# Submarine Implementation Notes 
+
+Before digging into implementation details, you should read [architecture-and-requirements](./architecture-and-requirements.md) first to understand the overall requirements and architecture.
+
+Here are the sub-topics of the Submarine implementation:
+
+- [Submarine Storage](./storage-implementation.md): How to store metadata, logs, metrics, etc. of Submarine.
+- [Submarine Environment](./environments-implementation.md): How environments are created, managed, and stored in Submarine.
+- [Submarine Experiment](./experiment-implementation.md): How experiments are managed and stored, and how the predefined experiment templates work.
+- [Submarine Server](./submarine-server/architecture.md): How the Submarine server is designed, its architecture, implementation notes, etc.
+
+Below are work-in-progress designs; we will move them to the section above once design & review is finished:
+
+- [Submarine HA Design](./wip-designs/SubmarineClusterServer.md): How Submarine HA can be achieved, using RAFT, etc.
+- [Submarine services deployment module](./wip-designs/submarine-launcher.md): How to deploy Submarine services to K8s, YARN, or cloud.
\ No newline at end of file
diff --git a/docs/design/storage-implementation.md 
b/docs/design/storage-implementation.md
new file mode 100644
index 0000000..36d41a6
--- /dev/null
+++ b/docs/design/storage-implementation.md
@@ -0,0 +1,157 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+   http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# Submarine Storage Implementation 
+
+## ML-related objects and their storages
+
+First, let's look at what users interact with most of the time:
+
+- Notebook 
+- Experiment
+- Model Servings
+
+```
+
+
+                              +---------+    +------------+
+                              |Logs     |<--+|Notebook    |
+      +----------+            +---------+    +------------+     +----------------+
+      |Trackings |                        <-+|Experiment  |<--+>|Model Artifacts |
+      +----------+     +-----------------+   +------------+     +----------------+
+      +----------+<---+|ML-related Metric|<--+Servings    |
+      |tf.events |     +-----------------+   +------------+
+      +----------+                                 ^              +-----------------+
+                                                   +              | Environments    |
+                                        +----------------------+  |                 |
+            +-----------------+         | Submarine Metastore  |  |  Dependencies   |
+            |Code             |         +----------------------+  |                 |
+            +-----------------+         |Experiment Meta       |  |   Docker Images |
+                                        +----------------------+  +-----------------+
+                                        |Model Store Meta      |
+                                        +----------------------+
+                                        |Model Serving Meta    |
+                                        +----------------------+
+                                        |Notebook meta         |
+                                        +----------------------+
+                                        |Experiment Templates  |
+                                        +----------------------+
+                                        |Environments Meta     |
+                                        +----------------------+
+```
+
+First of all, all the notebook sessions / experiments / model-serving instances interact more or less with the following storage objects:
+
+- Logs of these tasks for troubleshooting.
+- ML-related metrics such as loss, epoch, etc. (in contrast to system metrics such as CPU/memory usage, etc.)
+  - There are different types of ML-related metrics. For Tensorflow/PyTorch, they can use tf.events and get visualizations on Tensorboard.
+  - Or they can use tracking APIs (such as Submarine tracking, mlflow tracking, etc.) to output customized tracking results for non-TF/PyTorch workloads. (See the sketch after this list.)
+- Training jobs of an experiment typically generate model artifacts (files) which need to be persisted, and both notebooks and model servings need to load model artifacts from persistent storage.
+- There is various meta information, such as experiment meta, model registry, model serving, notebook, environment, etc. We need to be able to read this meta information back.
+- We also have code for experiments (like training/batch-prediction), notebooks (ipynb), and model servings.
+- And notebooks/experiments/model-servings depend on environments (dependencies such as pip packages and Docker images).
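+
+As a minimal sketch of the "time series data with k/v, appendable to file" idea mentioned in the tracking bullet above (this is NOT the Submarine tracking API; the file name and record layout are assumptions for illustration):
+
+```java
+// Illustrative only: appends one k/v metric record per line to a local file;
+// HDFS or cloud blob storage could be used instead, as described below.
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.StandardOpenOption;
+
+public class MetricAppendExample {
+  public static void main(String[] args) throws IOException {
+    Path metricsFile = Paths.get("metrics-experiment-abc.csv"); // assumed location
+    String record = String.join(",",
+        Long.toString(System.currentTimeMillis()), // timestamp
+        "loss",                                    // metric key
+        "0.123",                                   // metric value
+        "42");                                     // step / epoch
+    Files.write(metricsFile,
+        (record + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
+        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
+  }
+}
+```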
+
+### Implementation considerations for ML-related objects
+
+| Object Type                              | Characteristics                                              | Where to store                                               |
+| ---------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| Metrics: tf.events                       | Time series data with k/v, appendable to file                | Local/EBS, HDFS, Cloud Blob Storage                          |
+| Metrics: other tracking metrics          | Time series data with k/v, appendable to file                | Local, HDFS, Cloud Blob Storage, Database                    |
+| Logs                                     | Large volumes, #files are potentially huge.                  | Local (temporary), HDFS (needs aggregation), Cloud Blob Storage |
+| Submarine Metastore                      | CRUD operations for small meta data.                         | Database                                                     |
+| Model Artifacts                          | Size varies per model (from KBs to GBs). #files are potentially huge. | HDFS, Cloud Blob Storage                            |
+| Code                                     | Needs version control. (Please find detailed discussions below for code storage and localization) | Tarball on HDFS/Cloud Blob Storage, or Git |
+| Environment (Dependencies, Docker Image) |                                                              | Public/private environment repo (like Conda channel), Docker registry. |
+
+### Detailed discussions
+
+#### Store code for experiment/notebook/model-serving
+
+There are the following ways to get experiment code:
+
+**1) Code is part of Git repo:** (***<u>Recommended</u>***)
+
+This is our recommended approach: once code is part of a Git repo, it is under version control, every change is tracked, and it is much easier for users to trace back which change triggered a new bug, etc.
+
+**2) Code is part of Docker image:** 
+
+***This is an anti-pattern and we do NOT recommend it.*** A Docker image can include ANYTHING: dependencies, the code you will execute, or even data. But that doesn't mean you should do it. We recommend using Docker images ONLY for libraries/dependencies.
+
+Making code part of the Docker image makes it hard to edit (if you want to update a value in your Python file, you have to rebuild the Docker image, push it, and rerun it).
+
+**3) Code is part of S3/HDFS/ABFS:** 
+
+Users may want to store their training code as a tarball on shared storage. Submarine needs to download the code from remote storage into the launched container before running it.
+
+#### Localization of experiment/notebook/model-serving code
+
+To keep the user experience the same across different environments, we will localize code to the same folder after the container is launched, preferably `/code`.
+
+For example, there's a Git repo that needs to be synced for an experiment/notebook/model-serving (example above):
+
+```
+experiment: # Or notebook, model-serving
+  name: "abc"
+  environment: "team-default-ml-env"
+  ... (other fields)
+  code:
+    sync_mode: git
+    url: "https://foo.com/training-job.git"
+```
+
+After localization, `training-job/` will be placed under `/code`.
+
+When running in a K8s environment, we can use K8s initContainers and emptyDir to do this for us. The K8s Pod spec below is generated by the Submarine server instead of the user (users should NEVER edit K8s specs directly; that's too unfriendly to data scientists):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: experiment-abc
+spec:
+  containers:
+  - name: experiment-task
+    image: training-job
+    volumeMounts:
+    - name: code-dir
+      mountPath: /code
+  initContainers:
+  - name: git-localize
+    image: git-sync
+    command: ["sh", "-c", "git clone .. /code/"]
+    volumeMounts:
+    - name: code-dir
+      mountPath: /code
+  volumes:
+  - name: code-dir
+    emptyDir: {}
+```
+
+The above K8s spec creates a `code-dir` volume and mounts it to `/code` in the launched containers. The initContainer `git-localize` uses `https://github.com/kubernetes/git-sync` to do the sync. (If other storages such as S3 are used, we can use a similar initContainer approach to download contents.)
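+
+For reference, below is a minimal sketch of how the server side could emit such an initContainer fragment from the `code` section of the spec. It is plain string templating for illustration only; the git-sync image tag and environment variables are assumptions, and the real Submarine server may build the pod spec via a Kubernetes client library instead.
+
+```java
+// Illustrative only: turns the experiment's "code" section into a git-sync
+// initContainer YAML fragment similar to the example above.
+public class CodeLocalizerSketch {
+
+  static String gitInitContainerYaml(String repoUrl) {
+    return String.join("\n",
+        "  initContainers:",
+        "  - name: git-localize",
+        "    image: k8s.gcr.io/git-sync/git-sync:v3.1.6",  // assumed image tag
+        "    env:",
+        "    - name: GIT_SYNC_REPO",
+        "      value: \"" + repoUrl + "\"",
+        "    - name: GIT_SYNC_ROOT",
+        "      value: \"/code\"",
+        "    - name: GIT_SYNC_ONE_TIME",
+        "      value: \"true\"",
+        "    volumeMounts:",
+        "    - name: code-dir",
+        "      mountPath: /code");
+  }
+
+  public static void main(String[] args) {
+    // sync_mode: git, url taken from the experiment spec example above
+    System.out.println(gitInitContainerYaml("https://foo.com/training-job.git"));
+  }
+}
+```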
+
+## System-related metrics/logs and their storages
+
+Other than ML-related objects, we have system-related objects, including: 
+
+- Daemon logs (like logs of Submarine server). 
+- Logs for other dependency components (like Kubernetes logs when running on 
K8s). 
+- System metrics (Physical resource usages by daemons, launched training 
containers, etc.). 
+
+All this information should be handled by 3rd-party systems such as Grafana, Prometheus, etc. System admins are responsible for setting up these infrastructures and dashboards. Users of Submarine should NOT interact with system-related metrics/logs; that is the system admin's responsibility.
+
+## In-scope / Out-of-scope 
+
+Describe what the Submarine project should own and what it should NOT own.
+
diff --git a/docs/design/submarine-server/architecture.md 
b/docs/design/submarine-server/architecture.md
index 447bd8f..530dc00 100644
--- a/docs/design/submarine-server/architecture.md
+++ b/docs/design/submarine-server/architecture.md
@@ -16,10 +16,67 @@
   specific language governing permissions and limitations
   under the License.
   -->
-# Submarine Server
+# Submarine Server Architecture And Implementation
 
-## Motivation
-Nowadays we use the client to submit the machine learning job and the cluster 
config is stored at the client. It’s insecure and difficult to use. So we 
should hide the underlying cluster system such as YARN/K8s.
+## Architecture Overview
+
+```
+    +---------------Submarine Server ---+
+    |                                   |
+    | +------------+ +------------+     |
+    | |Web Svc/Prxy| |Backend Svc |     |    +--Submarine Asset +
+    | +------------+ +------------+     |    |Project/Notebook  |
+    |   ^         ^                     |    |Model/Metrics     |
+    +---|---------|---------------------+    |Libraries/Dataset |
+        |         |                          +------------------+
+        |         |
+        |      +--|-Compute Cluster 1---+    +--Image Registry--+
+        +      |  |                     |    |   User's Images  |
+      User /   |  +                     |    |                  |
+      Admin    | User Notebook Instance |    +------------------+
+               | Experiment Runs        |
+               +------------------------+    +-Data Storage-----+
+                                             | S3/HDFS, etc.    |
+               +----Compute Cluster 2---+    |                  |
+                                             +------------------+
+                        ...
+```
+
+The diagram above illustrates Submarine's deployment:
+
+- Submarine Server consists of a web service/proxy and backend services. They are the "control plane" of Submarine, and users interact with these services.
+- The Submarine server could follow a microservice architecture and can be deployed to one of the compute clusters (see below; this is useful when we only have one cluster).
+- There can be multiple compute clusters used by the Submarine service. Users' notebook instances, experiment runs, etc. are placed on one of the compute clusters according to user preference or defined policies.
+- Submarine's assets, including project/notebook (content)/models/metrics/dataset-meta, etc., can be stored inside Submarine's own database.
+- Datasets can be stored in various locations such as S3/HDFS.
+- Users can push container (such as Docker) images to a registry preconfigured in Submarine, so the Submarine service knows how to pull the required container images.
+- Image registry, data storage, etc. are outside of the Submarine server's scope and should be managed by 3rd-party applications.
+
+## Submarine Server and its APIs
+
+Submarine server is designed to allow data scientists to access notebooks, 
submit/manage jobs, manage models, create model training workflows, access 
datasets, etc.
+
+Submarine Server exposes a UI and a REST API. Users can also use the CLI / SDK to manage assets inside Submarine Server.
+
+```
+           +----------+
+           | CLI      |+---+
+           +----------+    v              +----------------+
+                         +--------------+ | Submarine      |
+           +----------+  | REST API     | |                |
+           | SDK      |+>|              |+>  Server        |
+           +----------+  +--------------+ |                |
+                           ^              +----------------+
+           +----------+    |
+           | UI       |+---+
+           +----------+
+```
+
+The REST API is used by the other 3 approaches (CLI/SDK/UI).
+
+The REST API Service handles HTTP requests and is responsible for 
authentication. It acts as the caller for the JobManager component.
+
+The REST component defines the generic job spec which describes the detailed info about a job. For more details, refer to [here](https://docs.google.com/document/d/1kd-5UzsHft6gV7EuZiPXeWIKJtPqVwkNlqMvy0P_pAw/edit#). (Please note that we are converting the REST endpoint description from the Java-based REST API to a Swagger definition; once that is done, we should replace the link with the Swagger definition spec.)
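+
+As a rough illustration of this REST layer, here is a sketch assuming JAX-RS; the path, class, and method names are illustrative and not the finalized endpoint definition:
+
+```java
+// A sketch of an experiment REST resource, assuming JAX-RS; paths and names
+// are illustrative, not the finalized Submarine REST API.
+import javax.ws.rs.*;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+
+@Path("/api/v1/experiment")
+@Produces(MediaType.APPLICATION_JSON)
+@Consumes(MediaType.APPLICATION_JSON)
+public class ExperimentRestApi {
+
+  // The REST layer authenticates the request and delegates to the manager
+  // ("JobManager" in this doc), which persists metadata and submits the job.
+  private final ExperimentManager experimentManager = new ExperimentManager();
+
+  @POST
+  public Response createExperiment(ExperimentSpec spec) {
+    String experimentId = experimentManager.submit(spec);
+    return Response.ok(experimentId).build();
+  }
+
+  @GET
+  @Path("/{id}")
+  public Response getExperiment(@PathParam("id") String id) {
+    return Response.ok(experimentManager.get(id)).build();
+  }
+
+  // Simplified stand-ins so the sketch is self-contained.
+  static class ExperimentSpec { public String name; }
+  static class ExperimentManager {
+    String submit(ExperimentSpec spec) { return "experiment-" + spec.name; }
+    Object get(String id) { return id; }
+  }
+}
+```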
 
 ## Proposal
  ```
@@ -45,31 +102,64 @@ Nowadays we use the client to submit the machine learning 
job and the cluster co
  ```
 We propose to split the original core module in the old layout into two 
modules, CLI and server as shown in FIG. The submarine-client calls the REST 
APIs to submit and retrieve the job info. The submarine-server provides the 
REST service, job management, submitting the job to cluster, and running job in 
different clusters through the corresponding runtime.
 
-## submarine-server
+## Submarine Server Components
 
-### REST
-The REST API Service handles HTTP requests and is responsible for 
authentication. It acts as the caller for the JobManager component.
+```
+
+   +----------------------Submarine Server--------------------------------+
+   | +-----------------+ +------------------+ +--------------------+      |
+   | |  Experiment     | |Notebook Session  | |Environment Mgr     |      |
+   | |  Mgr            | |Mgr               | |                    |      |
+   | +-----------------+ +------------------+ +--------------------+      |
+   |                                                                      |
+   | +-----------------+ +------------------+ +--------------------+      |
+   | |  Model Registry | |Model Serving Mgr | |Compute Cluster Mgr |      |
+   | |                 | |                  | |                    |      |
+   | +-----------------+ +------------------+ +--------------------+      |
+   |                                                                      |
+   | +-----------------+ +------------------+ +--------------------+      |
+   | | DataSet Mgr     | |User/Team         | |Metadata Mgr        |      |
+   | |                 | |Permission Mgr    | |                    |      |
+   | +-----------------+ +------------------+ +--------------------+      |
+   +----------------------------------------------------------------------+
+```
+
+### Experiment Manager 
+
+TODO
+
+### Notebook Sessions Manager 
+
+TODO
+
+### Environment Manager
+
+TODO
+
+### Model Registry
+
+TODO
+
+### Model Serving Manager 
+
+TODO
+
+### Compute Cluster Manager
 
-The REST component defines the generic job spec which describes the detailed 
info about job. For more details, refer to 
[here](https://docs.google.com/document/d/1kd-5UzsHft6gV7EuZiPXeWIKJtPqVwkNlqMvy0P_pAw/edit#).
+TODO
 
-### JobManager
-The JobManager receives the job requests, persisting the job metadata in a 
database(MySQL in production), submitting and monitoring the job. Using the 
plug-in design pattern for submitter to extends more features. Submitting the 
job to cluster resource management system through the specified submitter 
plug-in. 
+### Dataset Manager 
 
-The JobManager has two main components: plug-ins manager and job monitor.
+TODO
 
-#### PlugMgr
-The plug-ins manager is responsible for launching the submitter plug-ins, 
users have to add their jars to submarine-server’s classpath directly, thus put 
them on the system classloader. But if there are any conflicts between the 
dependencies introduced by the submitter plug-ins and the submarine-server 
itself, they can break the submarine-server, the plug-ins manager, or both. To 
solve this issue, we can instantiate submitter plug-ins using a classloader 
that is different from the system [...]
+### User/Team Permissions Manager
 
-#### Monitor
-The monitor tracks the training life cycle and records the main events and key 
info in runtime. As the training job progresses, the metrics are needed for 
evaluation of the ongoing success or failure of the training progress. Due to 
adapt the different cluster resource management system, so we need a generic 
metric info structure and each submitter plug-in should inherit and complete it 
by itself.
+TODO
 
-### Submitter Plug-ins
-Each plug-in uses a separate module under the server-submitter module. As the 
default implements, we provide for YARN and K8s. For YARN cluster, we provide 
the submitter-yarn and submitter-yarnservice plug-ins. The submitter-yarn 
plug-in used the [TonY](https://github.com/linkedin/TonY) as the runtime to run 
the training job, and the submitter-yarnservice plug-in direct use the [YARN 
Service](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html)
 w [...]
+### Metadata Manager
 
-If Submarine want to support the other resource management system in the 
future, such as submarine-docker-cluster (submarine uses the Raft algorithm to 
create a docker cluster on the docker runtime environment on multiple servers, 
providing the most lightweight resource scheduling system for small-scale 
users). We should create a new plug-in module named submitter-docker under the 
server-submitter module.
+TODO
 
-### Failure Recovery
-Use the database(MySQL in production) to do the submarine-server failure 
recovery. 
+## Components/services outside of Submarine Server's scope
 
-## submarine-client
-The CLI client is the default implements for submarine-server RESTful API, it 
provides all the features which declared in the RESTful API.
+TODO: Describe the out-of-scope components, which should be handled and managed outside of the Submarine server. Candidates are: identity management, data storage, metastore storage, etc.
\ No newline at end of file
diff --git a/docs/design/SubmarineClusterServer.md 
b/docs/design/wip-designs/SubmarineClusterServer.md
similarity index 97%
rename from docs/design/SubmarineClusterServer.md
rename to docs/design/wip-designs/SubmarineClusterServer.md
index 58bd9cb..b83d797 100644
--- a/docs/design/SubmarineClusterServer.md
+++ b/docs/design/wip-designs/SubmarineClusterServer.md
@@ -1,4 +1,10 @@
-# Submarine Cluster Server Design
+# Submarine Cluster Server Design - High-Availability (working-in-progress)
+
+## Please note that this design doc is work-in-progress and needs more work to complete.
+
+
+
+## Below is the existing proposal:
 
 ## Introduction
 The Submarine system contains a total of two daemon services, Submarine Server 
and Workbench Server.
diff --git a/docs/design/submarine-launcher.md 
b/docs/design/wip-designs/submarine-launcher.md
similarity index 100%
rename from docs/design/submarine-launcher.md
rename to docs/design/wip-designs/submarine-launcher.md

