On Friday, 12.02.2016 at 07:27, David Halls wrote:
> > My intent with unikernel-runner is to use it as a platform to experiment
> > with:
> >
> > 1. Improving Docker/unikernel integration (e.g. native support for L2
> >    connectivity so that CAP_NET_ADMIN is not required).
> > 2. Use it as a "higher level stack" client for the rumprun configuration
> >    specification, so that I can validate and test that work.
> > 3. Supporting other unikernel projects, either by adding specific support
> >    to unikernel-runner or getting other projects on board with using the
> >    configuration spec.
> > 4. Distributing "ready-to-run" binary unikernels using Docker Hub. This
> >    ties in with David Halls' work[4], and completes the stack with
> >    Docker+KVM providing the "run" part.
>
> What are the advantages of this approach over using something like Alpine
> Linux in a container (or for that matter a VM) to run your app? It's a
> question that people are going to ask.
Perhaps my description of what this does is misleading? It's not about "running unikernels _as_ Docker _containers_", but rather about "using Docker to build, distribute and orchestrate unikernels". The fact that the unikernel + hypervisor run _as_ a _container_ is an implementation detail (and a good one, since it isolates you from QEMU bugs).

The result is an end-to-end stack where you can do "docker run foo/unikernel" and have it integrate with your existing networking setup, with any other containers or unikernels on your Docker network, with service discovery, and so on. You get to keep using all your Docker-aware tools, but gain the benefits of running your app as a unikernel, fully isolated in a VM.

Does this answer your question?
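
To make the above concrete, here's a rough sketch of what the packaging side might look like today; the image name, paths and QEMU invocation are purely illustrative (the guest network wiring is omitted), and the exact launcher arguments depend on the unikernel target:

    # Dockerfile: package a pre-built rumprun unikernel together with QEMU
    FROM alpine:3.3
    RUN apk add --no-cache qemu-system-x86_64
    COPY app.bin /unikernel/app.bin
    # Boot the image under KVM; -nographic sends the serial console to the
    # container's stdout
    ENTRYPOINT ["qemu-system-x86_64", "-enable-kvm", "-nographic", "-kernel", "/unikernel/app.bin"]

and then:

    docker build -t foo/unikernel .
    # /dev/kvm for hardware virtualisation; /dev/net/tun and CAP_NET_ADMIN
    # for wiring the guest into the Docker bridge (point 1 above is about
    # making this unnecessary)
    docker run -ti --device=/dev/kvm --device=/dev/net/tun \
        --cap-add=NET_ADMIN foo/unikernel

"docker push foo/unikernel" then covers the distribution part via Docker Hub.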
