Hey Jan,

* Jan Damborsky (Jan.Damborsky at Sun.COM) wrote:
> Hi Glenn,
>
> this is pretty interesting stuff - I have a couple of generic
> thoughts/comments which are perhaps more related to the long
> term solution.
> I might have misunderstood some points, please feel free to
> correct me if my thoughts went astray too much :-)
>
> Thank you,
> Jan
>
>
> Glenn Lagasse wrote:
>> Preliminary design outline for generating pre-constructed virtual
>> machine disk images via Distro Constructor.
>>
>> Purpose:
>>
>> To be able to construct a virtual machine disk image via the Distro
>> Constructor in a 'hands-off' manner, given an arbitrary set of
>> packages, for the purpose of distributing the disk image as a
>> pre-installed OpenSolaris environment.
>>
>> While we're primarily targeting VirtualBox for the constructed disk
>> image, we should be able to support other hypervisors that support
>> the Open Virtual Machine Format[1], which is what we'll use for the
>> constructed images.

> I have just taken a quick look at the OVF spec and it seems to be a
> pretty generic and sophisticated container for virtual appliances,
> serving the "packaging and distribution of (collections of) virtual
> machines."
>
> If I understood correctly, the standard itself doesn't introduce a
> new open format of virtual disk image. However, it allows any existing
> or future virtual disk format (vmdk, vdi, qcow2, ...) to be
> encapsulated within OVF. Is that correct? If so, are we going to
> support only those formats recognized by VirtualBox, or will more
> formats be taken into account, given that the goal is to produce OVF
> which could be deployed on other hypervisors?
Until (and there's no guarantee it will ever happen) OpenSolaris
supports manipulating virtual disk images directly, or some other
'consumer-type' hypervisor (I don't consider xVM consumer-type) is
ported to OpenSolaris, we'll be constrained to supporting only what
VirtualBox supports, because we need VirtualBox to create the containers
and manage them from an installation point of view.

> If the goal is to deploy OVF on other hypervisors (xen, vmware, ...),
> it seems that there might be a need to teach DC to understand and
> manipulate OVF (e.g. by consuming some standard API) in order to

Even if OpenSolaris could manipulate OVF directly, that doesn't buy us
very much. I say that because you still have to 'install' under the
'virtualized' hardware in order to avoid device issues. For instance, an
early approach I tried was constructing a ZFS pool on a spare disk on
'real' hardware, installing to that, and then moving it to VirtualBox.
That didn't work because the kernel couldn't mount the ZFS pool inside
VirtualBox: the device ID associated with the pool was different under
VirtualBox than it had been under the 'real' hardware. There are
possibly other issues like this that we would run into over time. A good
rule of thumb, imo, is: install under the same hardware you're going to
run under. This applies in both the 'real' and 'virtual' sense.

> be able to define and specify in OVF what hypervisors are to be
> supported, along with the set of supported hardware and the OVF
> portability level:
>
> 1 - Only runs on a particular virtualization product and/or CPU
>     architecture and/or virtual hardware selection.
>
> 2 - Runs on a specific family of virtual hardware. This would
>     typically be due to lack of driver support by the installed
>     guest software.
>
> 3 - Runs on multiple families of virtual hardware. For example,
>     the appliance could be runnable on Xen, KVM, Microsoft, and
>     VMware hypervisors.
>
> What is the final goal as far as OVF portability level is concerned?
> It seems it might be interesting (but not easy to accomplish) to
> support creating OVF level 3 images.

The primary goal is to produce something that VirtualBox can consume,
since that is our 'target'. Ideally, other hypervisors that support OVF
should also be able to consume our images, but other than possibly xVM I
don't know that we're going to do anything special to make that happen.
We shouldn't have to, given the promise of OVF. The spec itself (at
least the latest I've been able to find) doesn't guarantee portability
of OVF across hypervisors, merely portability from the 'origin'
hypervisor.

VirtualBox 2.2 supports OVF, and apparently also an 'export appliance'
option which exports the virtual disk along with some metadata that a
user can 'import' directly into VirtualBox. This removes the need for an
end user to create a virtual machine that identically matches the
configuration used to create the OVF file. That's great news for us
(though I'll need to try it and see how well it works in practice).

>>
>> Premise:
>>
>> We can use Distro Constructor to perform an automated, hands-off
>> installation of OpenSolaris inside VirtualBox, controlled from a host
>> system running Distro Constructor.
>>
>> To support this premise, we will need to create a new output image
>> type for Distro Constructor which we'll call the Portable Virtual
>> Machine Construction Appliance. This new image type will be a
>> slimmed-down version of the OpenSolaris LiveCD which contains just
>> enough operating system to boot, run Distro Constructor with a
>> supplied XML manifest, and successfully generate an installation
>> inside a virtual machine on a virtual hard disk in a 'hands-off'
>> manner.
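To make the premise concrete, the host-side orchestration could be
sketched roughly as below. This is only an illustration: the VM name,
file paths, and disk size are hypothetical, and the exact VBoxManage
sub-commands and flags vary between VirtualBox versions, so treat the
invocations as a sketch rather than the actual DC implementation.

```python
#!/usr/bin/python
# Sketch: drive a hands-off VirtualBox install from the host by
# creating a VM, giving it a blank virtual disk, and booting it from
# the construction-appliance ISO (which runs DC against its manifest).
# All names/paths/sizes are hypothetical; VBoxManage flags vary by
# VirtualBox version.
import subprocess


def build_vbox_commands(vm_name, appliance_iso, disk_path, disk_mb=8192):
    """Return the VBoxManage invocations for the hands-off flow."""
    return [
        ["VBoxManage", "createvm", "--name", vm_name, "--register"],
        ["VBoxManage", "createhd", "--filename", disk_path,
         "--size", str(disk_mb)],
        # Attach the blank disk and the appliance ISO, then boot.
        ["VBoxManage", "modifyvm", vm_name,
         "--hda", disk_path, "--dvd", appliance_iso],
        ["VBoxManage", "startvm", vm_name, "--type", "headless"],
    ]


def run_all(commands):
    """Execute each invocation, failing fast on any non-zero exit."""
    for cmd in commands:
        subprocess.check_call(cmd)

# Usage (on a host with VirtualBox installed):
# run_all(build_vbox_commands("osol-build", "/tmp/pvmca.iso",
#                             "/tmp/osol.vdi"))
```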
>
> I would agree this is a good approach for a proof of concept, since
> there is a set of specific problems which are mitigated by running DC
> in the native environment, but I am wondering whether for the
> long-term solution this intermediate step might be omitted.
>
> Have we identified what issues would arise if we would like to let DC
> generate the virtual appliance package directly on the host system,
> without running VirtualBox? I remember there were a couple of them
> mentioned during the meeting yesterday.
>
> [1] Running DC in the native environment addresses the problem of
> differences between host and guest HW.
>
> With respect to this point, what particular problems are we going to
> encounter if the image is built directly on the host system?

See my response earlier up. Essentially you run into device enumeration
issues for starters. Then you run into the issue of how to convert that
'installation' into something that VirtualBox can consume. If we had an
interface accessible under OpenSolaris to directly manipulate OVF
containers (something lofi-like), then we might be able to 'transfer'
the install into the OVF container. But we don't have such a thing. And
even if we did, it would have to understand at least ZFS to the degree
that the transfer works in such a way that the root pool is bootable
under VirtualBox. There are likely other issues here, but this is what
I've been able to find out during my prototyping before becoming
'stuck'.

> I am asking since it seems we have already crossed that bridge when we
> use DC for building LiveCD and AI images - for instance, a SPARC image
> built on an Ultra 45 is supposed to work on all sun4v and sun4u
> machines with wanboot support. The same is expected of x86 AI or
> LiveCD images. It seems that Solaris is pretty flexible on this point.
>
> There is a known issue with ZFS portability - a bug has already been
> filed for this and I think we might assume it will be addressed, since
> there are other parts affected by it.

It may, it may not.
Last I heard, the ZFS team didn't consider it a bug; it was 'working as
designed'.

> Do we happen to know if there have been other problems identified
> in this area so far?

Nothing other than the general points I mentioned. 'Real' hardware is
not going to be the same as 'virtual' hardware.

> Also, when DC is run within VB, how would it guarantee that the
> image can be deployed on other hypervisors?

It can't guarantee it, and that's not a primary goal (other than for
xVM, and even then we don't have any direct plans to make that work
beyond producing an OVF container that xVM can attempt to consume).

> [2] Solaris can't mount virtual disks
>
> Solaris can't mount virtual disks directly at this point, but
> it seems that there might be ways to go. For instance, the xVM
> vdiskadm(1M) tool can convert a raw disk image into different
> types of virtual disks:
>
> http://docs.sun.com/app/docs/doc/819-2240/vdiskadm-1m?a=view
> http://opensolaris.org/jive/thread.jspa?threadID=98762&tstart=0

Interesting. I've evaluated vdiskadm in the past for this project and it
couldn't meet our needs. The convert functionality that Susan apparently
is integrating in build 113 *might* be something we could leverage. I'll
have to investigate (thanks for pointing out the new convert stuff).

> Based on this, DC could install into a lofi-mounted file or ZVOL
> and convert the final raw image into vmdk or vdi using vdiskadm(1M).
> I am not sure if there is a limitation that vdiskadm(1M) has
> to be run within dom0, but if this is the case, then I think
> it might be enhanced in such a way that when called for
> creating/converting the disk, dom0 might not be necessary.

I attempted to do an installation to a ZVOL; at the time I couldn't get
installgrub to 'install' to it (though ZVOLs are supposed to look just
like disks, they apparently don't look enough like disks).
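For reference, the host-side path Jan is suggesting could be sketched as
the command sequence below. The paths and size are hypothetical, the
vdiskadm invocation is illustrative only (the convert support being
integrated in build 113 may look different; see vdiskadm(1M)), and this
glosses over exactly the open problems discussed above (device
enumeration, installgrub, making the root pool bootable).

```python
#!/usr/bin/python
# Sketch: install into a raw disk image on the host, then convert the
# raw image with vdiskadm(1M) into something a hypervisor can attach.
# Paths/sizes are hypothetical and the vdiskadm syntax is illustrative.

def build_convert_commands(raw_image, vdisk_dir, size="8g"):
    """Return the host-side commands: create a raw backing file, expose
    it as a block device via lofiadm, and (after DC installs into it)
    convert the raw image into a vmdk-backed virtual disk."""
    return [
        ["mkfile", size, raw_image],    # raw backing store
        ["lofiadm", "-a", raw_image],   # expose as /dev/lofi/N
        # ... DC would install OpenSolaris into the lofi device here ...
        ["vdiskadm", "convert", "-t", "vmdk:fixed",
         raw_image, vdisk_dir],
    ]
```

Even as a sketch, this shows why the construction-appliance approach is
attractive: every step after `lofiadm` is exactly where the prototyping
got 'stuck'.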
> If it would turn out that this approach is not suitable as a final
> solution, another possibility might be to teach the lofi(7D) driver
> to understand virtual disk image formats directly.

We've talked about that in the past, and I believe we decided we weren't
going to commit resources to that sort of work. I believe the estimate
was 6 months' worth of work.

Thanks for the feedback and questions Jan. Keep 'em coming :-)

-- 
Glenn