Francesco Romani has uploaded a new change for review.

Change subject: WIP: docs: add tutorial
......................................................................

WIP: docs: add tutorial

Probably not worth merging

Change-Id: I3a9e947c30f2978fd9670fd64d99bf380181aa9c
Signed-off-by: Francesco Romani <from...@redhat.com>
---
A doc/containers-tutorial.md
1 file changed, 204 insertions(+), 0 deletions(-)


  git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/00/65400/1

diff --git a/doc/containers-tutorial.md b/doc/containers-tutorial.md
new file mode 100644
index 0000000..37f1ee4
--- /dev/null
+++ b/doc/containers-tutorial.md
@@ -0,0 +1,204 @@
+# How to try the experimental container support in Vdsm
+
+## What works, aka what to expect
+
+The basic features are expected to work:
+1. Run any docker image from the public docker registry
+2. Make the container accessible from the outside (aka not just from localhost)
+3. Use file-based storage for persistent volumes
+
+## What does not yet work, aka what NOT to expect
+
+A few things are planned and currently under active development:
+1. Monitoring. Engine will not get any update from the container besides the
+   "VM" status (Up, Down...).
+   One important drawback is that you will not be told the IP of the container
+   by Engine; you will need to connect to the Vdsm host and discover it using
+   standard docker tools (see the example right after this list).
+2. Proper network integration. Some steps still need manual intervention.
+3. Stability and recovery - it's pre-alpha software after all! :)
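+
+Regarding the drawback in item 1: once a container is running, you can look up
+its IP on the Vdsm host with plain docker commands. A minimal sketch (the
+container name `mycontainer` is a placeholder):
+
+  # docker ps
+  # docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer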
+
+## 1. Introduction and prerequisites
+
+Trying out container support affects only the host and the Vdsm.
+Besides adding a few custom variables (totally safe and supported since the
+early 3.z releases), there are zero changes required to the DB and to Engine.
+Nevertheless, we recommend dedicating one oVirt 4.y environment,
+or at least one 4.y host, to trying out the container feature.
+
+To get started, the first thing you need is to set up a vanilla oVirt 4.y
+installation. We will need to make changes to the Vdsm and to the
+Vdsm host, so hosted engine and/or oVirt node may add extra complexity.
+
+The remainder of this tutorial assumes you are using two hosts,
+one for Vdsm (which will be changed) and one for Engine (which requires zero
+changes); furthermore, we assume the Vdsm host is running CentOS 7.y.
+
+We require:
+- one test host for Vdsm. This host needs to have one NIC dedicated to containers.
+  We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
+  so this NIC *must not be* part of any bridge.
+- docker >= 1.12
+- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
+- CentOS >= 7.2
+
+Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/).
+
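+For reference, the upstream installation on CentOS 7 boils down to something
+like the following sketch (the repo file content is an assumption based on the
+upstream docs; double-check the link above before using it):
+
+  # cat > /etc/yum.repos.d/docker.repo <<'EOF'
+  [dockerrepo]
+  name=Docker Repository
+  baseurl=https://yum.dockerproject.org/repo/main/centos/7/
+  enabled=1
+  gpgcheck=1
+  gpgkey=https://yum.dockerproject.org/gpg
+  EOF
+  # yum install docker-engine
+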
+Caveats:
+1. docker from the official rpms conflicts with docker from CentOS, and has a
+   different package name: docker-engine vs docker.
+   Please note that the kubernetes package from CentOS, for example, requires
+   'docker', not 'docker-engine'.
+2. you may want to replace the default service file
+   [with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/systemd/docker/docker.service)
+   and to use this
+   [sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/systemd/docker/docker-engine).
+   Here I'm just adding the storage options docker requires, much like the
+   CentOS docker is configured.
+   Configuring docker like this can save you some troubleshooting, especially
+   if you had docker from CentOS installed on the testing box.
+
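+To give an idea of what those files do, the sysconfig file passes the storage
+options to the daemon, along these lines (illustrative sketch only; the exact
+contents live in the linked files, and the variable name here is an assumption):
+
+  # cat /etc/sysconfig/docker-engine
+  DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper"
+
+with the service file appending $DOCKER_STORAGE_OPTIONS to the daemon command
+line.
+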
+## 2. Patch Vdsm to support containers
+
+You need to patch and rebuild Vdsm.
+Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.15/0001-container-support-for-Vdsm.patch.gz)
+and apply it against Vdsm 4.18.15.
+
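+A minimal sketch of the fetch-and-apply step (assuming the usual v-prefixed
+Vdsm release tag, and that the patch is in git format-patch layout, as the
+0001- prefix suggests):
+
+  $ git clone https://gerrit.ovirt.org/vdsm && cd vdsm
+  $ git checkout v4.18.15
+  $ curl -LO https://github.com/mojaves/convirt/raw/master/patches/vdsm/4.18.15/0001-container-support-for-Vdsm.patch.gz
+  $ gunzip 0001-container-support-for-Vdsm.patch.gz
+  $ git am 0001-container-support-for-Vdsm.patch
+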
+Rebuild Vdsm and reinstall it on your box. Make sure you install the Vdsm
+command line client (vdsm-cli).
+
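+A minimal rebuild sketch (assuming the standard Vdsm autotools flow, with
+build dependencies already installed and the rpms landing in the default
+~/rpmbuild tree):
+
+  $ ./autogen.sh --system
+  $ make
+  $ make rpm
+  # yum reinstall ~/rpmbuild/RPMS/noarch/vdsm-*.rpm
+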
+Restart *both* Vdsm and Supervdsm, and make sure Engine still works flawlessly
+with the patched Vdsm.
+This ensures that no regression is introduced, and that your environment can
+run VMs just as before.
+Now we can proceed to add the container support.
+
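+A minimal restart sequence (assuming the stock systemd unit names):
+
+  # systemctl restart supervdsmd vdsmd
+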
+Start docker:
+
+  # systemctl start docker-engine
+
+and optionally enable it at boot:
+
+  # systemctl enable docker-engine
+
+Restart Vdsm again:
+
+  # systemctl restart vdsmd
+
+Now we can check whether Vdsm detects docker, so you can use it.
+Still on the same Vdsm host, run:
+
+  $ vdsClient -s 0 getVdsCaps | grep containers
+       containers = ['docker', 'fake']
+
+This means this Vdsm can run containers using the 'docker' and 'fake' runtimes.
+Ignore the 'fake' runtime; as the name suggests, it is a test driver, kind of
+like /dev/null.
+
+Now we need to make sure the host network configuration is fine.
+
+### 2.1. Configure the docker network for Vdsm
+
+  PLEASE NOTE
+  that the suggested network configuration assumes that
+  * you have one network, `ovirtmgmt` (the default one), which you use for everything
+  * you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt`
+    network, and one spare
+
+_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm
+will take care of this automatically in the future.
+
+You can use
+[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-setup-net),
+which reuses the Vdsm libraries. Make sure
+you have patched Vdsm to support containers before using it.
+
+Let's review what the script needs:
+
+  # ./cont-setup-net -h
+  usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
+                        [--interface [INTERFACE]] [--gateway [GATEWAY]]
+                        [--subnet [SUBNET]] [--mask [MASK]]
+  
+  optional arguments:
+    -h, --help            show this help message and exit
+    --name [NAME]         network name to use
+    --bridge [BRIDGE]     bridge to use
+    --interface [INTERFACE]
+                          interface to use
+    --gateway [GATEWAY]   address of the gateway
+    --subnet [SUBNET]     subnet to use
+    --mask [MASK]         netmask to use
+  
+So we need to feed --name, --interface, --gateway, --subnet and optionally
+--mask (the default, /24, is often fine).
+
+In my case the default mask was indeed fine, so I used the script like this:
+
+  # ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
+
+This is the output I got:
+
+  DEBUG:virt.containers.runtime:configuring runtime 'docker'
+  DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
+  Error: No such network: ovirtmgmt
+  DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
+  DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
+  DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
+  DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
+  DEBUG:virt.containers.runtime:configuring runtime 'fake'
+
+You can clearly see what the script did, and why it needed root
+privileges. Let's double-check using the docker tools:
+
+  # docker network ls
+  NETWORK ID          NAME                DRIVER              SCOPE
+  91535f3425a8        bridge              bridge              local
+  d42f7e5561b5        host                host                local
+  621ab6dd49b1        none                null                local
+  f4b88e4a67eb        ovirtmgmt           macvlan             local
+
+  # docker network inspect ovirtmgmt
+  [
+      {
+          "Name": "ovirtmgmt",
+          "Id": 
"f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
+          "Scope": "local",
+          "Driver": "macvlan",
+          "EnableIPv6": false,
+          "IPAM": {
+              "Driver": "default",
+              "Options": {},
+              "Config": [
+                  {
+                      "Subnet": "192.168.1.0/24",
+                      "IPRange": "192.168.1.0/24",
+                      "Gateway": "192.168.1.1"
+                  }
+              ]
+          },
+          "Internal": false,
+          "Containers": {},
+          "Options": {
+              "parent": "enp3s0"
+          },
+          "Labels": {}
+      }
+  ]
+
+Looks good! The host configuration is complete. Let's move to the Engine side.
+
+## 3. Configure Engine
+
+As mentioned above, we now need to configure Engine. This boils down to
+adding a few custom variables for VMs.
+
+In case you were already using custom variables, you need to amend the command
+line so that you do not overwrite your existing ones.
+
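+To see what is currently configured (so you can merge instead of overwriting),
+you can query the existing value first with the standard get option of
+engine-config:
+
+  # engine-config -g UserDefinedVMProperties
+
+Then set the properties:
+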
+  # engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
+
+It is worth stressing that while the variables are container-specific,
+VM custom variables are an unintrusive and long-established concept in oVirt,
+so this step is totally safe.
+
+Now restart Engine to let it use the new variables:
+
+  # systemctl restart ovirt-engine
+
+The next step is to actually configure one "container VM" and run it.
+
+## 4. Create the container "VM"
+
+### 4.1. A little bit of extra work: preload the images on the Vdsm host
+
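+As the title suggests, the idea is to fetch the container image on the Vdsm
+host ahead of time, with plain docker commands. A minimal sketch (the `redis`
+image is just an example):
+
+  # docker pull redis
+  # docker images
+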
+## 5. Run the container "VM"
+
+At last! You can now run your "VM" using oVirt Engine.
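+
+For illustration, a set of custom property values matching the regular
+expressions configured in section 3 could look like this (the image name and
+the volume mapping are placeholders, not tested values):
+
+  containerType=docker
+  containerImage=redis
+  volumeMap=data:data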
+


-- 
To view, visit https://gerrit.ovirt.org/65400
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I3a9e947c30f2978fd9670fd64d99bf380181aa9c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Francesco Romani <from...@redhat.com>
_______________________________________________
vdsm-patches mailing list -- vdsm-patches@lists.fedorahosted.org
To unsubscribe send an email to vdsm-patches-le...@lists.fedorahosted.org
