Brian Weeden wrote:
I wanted to pick everyone's brains a bit about building a virtualization machine (VM host).
<snip>
Questions I need answered before I can pull this off:
- If you install new software or have some other reason to reboot one of the VM instances, can you just restart that VM and avoid rebooting the whole machine?
Yes, the VMs are totally independent of each other. They can be brought up, shut down, created, and destroyed independently of one another.
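For instance, with VMware Server or Workstation you can manage each guest independently from the command line using the vmrun tool (the .vmx paths below are hypothetical):

```
# Start one guest without touching the others
vmrun start /vms/work/work.vmx

# Reboot just that guest after installing software; the host
# and the other VMs keep running
vmrun reset /vms/work/work.vmx soft

# Shut it down cleanly when you're done
vmrun stop /vms/work/work.vmx soft
```

Each command acts on a single guest only; nothing here touches the host OS or the other VMs.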
- When you boot up, is there a primary OS that loads and then you run the different VMs inside of it or do you boot straight to a VM?
Unless you're running VMware ESX ($3,000-$4,000), which runs directly on the hardware, you'd boot into a host OS, load your hypervisor, and then boot your VMs.
- Can you divvy up the resources for running multiple VMs at once so like each gets a GB of RAM and 2 cores?
You divvy up memory, but the VMs all share the CPU, so heavy load on one VM will have a negative impact on the other VMs and on the physical host. I believe you can set CPU limits in ESX.
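In VMware, the per-guest memory and virtual-CPU settings live in each guest's .vmx configuration file; a sketch of the relevant lines (the values and name are just examples):

```
# Fragment of a hypothetical guest's .vmx file
memsize = "1024"        # 1 GB of RAM reserved for this guest
numvcpus = "2"          # two virtual CPUs, still time-sliced across the physical cores
displayName = "work-vm"
```

Note that memsize is a real reservation per guest, while numvcpus only controls how many virtual CPUs the guest sees; the physical cores remain shared.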
- Would I need 2 Video cards, one associated with the HTPC VM and one associated with the Work/gaming VM?
There's no concept of assigning physical hardware beyond a NIC (such as a video card) to a VM, at least in the x86 world; you can in Solaris Logical Domains. Each VM gets a virtual console, which you connect to with an app or, in the case of the VMware Server 2.0 beta, a web applet.
You would not want to run an HTPC in a VM. You could probably get by making the system that hosts the hypervisor the HTPC itself.
- If I do need 2 cards, how would that work hardware-wise? I've never done it before in the same box. Do I just get a board with 2 PCI-Express slots and slap a card in each? We're not talking about SLI here - two different cards working independently.
It absolutely will not work the way you describe; as noted above, a VM can't be assigned a physical video card.
