I'll preface this by saying that I have no experience using virtualization in a production environment yet, although I use it a lot for testing and other non-production purposes.
> At the very least there's licensing issues - there may also
> be "hardware" issues - remember that you must find drivers/
> packages for the emulated hardware, not the physical hardware.

I've yet to encounter this as a problem. Typically, the virtualization layer takes care of this for you, providing drivers for the virtual hardware for the OS to use. In a production server environment, what hardware would you need that isn't already supported? In addition, because the same virtual "hardware" is used no matter what the underlying physical hardware is, this should simplify things a bit.

> The folks from CFX Hosting have already indicated that they
> had to do some "massaging" - for example determining a way
> to let the end user "reboot" the system.

I don't really understand what this would entail. From within VMware, this doesn't require any "massaging" at all. You just connect as you would to any server, and shut down however your OS allows. In any case, this is the kind of problem you'd only have to solve once; once solved, it wouldn't be an issue for future deployments, I imagine.

> But do those automation tools support VMs yet? (I don't know.)

Yes. The whole point of a VM is that, in all respects that matter, it's indistinguishable from a "real" machine. You can use SMS, scripts, and third-party tools with Windows VMs, and tools like apt-get with Linux VMs.

> Even with that supporting 100 machines is ALWAYS harder
> than 10 if only for the fact that no matter how much you
> try to standardize something is always going to come up
> - and with 100 machines it's that much more likely.
>
> (Our facilities team support 40-100 dedicated servers -
> all "standardized". More machines means more work even if
> you take advantage of short cuts and tool sets.)
My Unix admin friends would disagree, although I suspect the truth lies somewhere in between. Keep in mind, too, that these "machines" would be simple compared to machines used to host a bunch of virtual servers.

> Also I'm unclear as to the mechanics here... if a VPS breaks
> down and has to be rebuilt how does that affect other VPSs
> running on the same physical machine?

It shouldn't affect them at all. In most cases, it should be as simple as copying a file, then restarting the virtual OS. I imagine in practice it might be a bit more complex.

> I also agree that using cheaper versions of software is
> a HUGE plus - but this does seem to be a benefit only
> for CF. Needing to license separate OSes and tools may
> very well override that "savings" (at least on windows).

In general, I agree, and I think we'll see more use of virtualization with Linux - that's where it's been most successful so far, anyway. But with the introduction of the Web Edition of Windows Server 2003, which costs a little over $300, the OS cost is less of a factor even with Windows.

> I'm really not arguing the point with you - I think that
> VPS are great and will revolutionize the hosting industry.
> But I also think there are cost and other benefits to
> application isolation that may be attractive to some users.

I think so, too, but I think that, in the long run, those users won't be in the shared hosting environment but rather in the enterprise, where failover and redundancy are big issues.

Dave Watts, CTO, Fig Leaf Software
http://www.figleaf.com/
voice: (202) 797-5496
fax: (202) 797-5444
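P.S. To make the "copy a file, then restart the virtual OS" rebuild concrete, here's a minimal sketch. The directory layout, file names, and the commented-out vmware-cmd calls are illustrative assumptions on my part, not a documented procedure from any particular VMware version; throwaway paths stand in for real VM storage so the script runs anywhere.

```shell
#!/bin/sh
# Sketch of rebuilding one broken VPS without touching its neighbors:
# replace the guest's virtual disk with a known-good copy, then restart
# only that guest. Other guests on the same physical host keep running.

WORK=$(mktemp -d)                             # stand-in for the host's VM storage
mkdir -p "$WORK/templates" "$WORK/customer42"
echo "golden-image contents" > "$WORK/templates/base.vmdk"   # stand-in "disk" file

# 1. Stop only the broken guest (hypothetical VMware command, commented out):
# vmware-cmd "$WORK/customer42/customer42.vmx" stop

# 2. Restore its disk by copying the template - a plain file copy:
cp "$WORK/templates/base.vmdk" "$WORK/customer42/customer42.vmdk"

# 3. Start the guest again (hypothetical, commented out):
# vmware-cmd "$WORK/customer42/customer42.vmx" start

RESTORED=$(cat "$WORK/customer42/customer42.vmdk")
echo "$RESTORED"
rm -rf "$WORK"
```

The point of the sketch is simply that the rebuild is a file operation on the host, so the blast radius is one guest, not the whole box.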

