Regarding the virtual loopback: in the standard builds of VServer-enabled 2.6.32 kernels available from the Debian repositories this problem does not seem to exist. I am not entirely sure, but I don't remember experiencing it. Besides, it is possible to change the localhost address in /etc/hosts.
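A minimal sketch of that workaround, assuming the guest's external address is 192.168.1.10 (the address and hostname here are made up for illustration):

```
# /etc/hosts inside the guest: map "localhost" to the guest's
# own external address instead of 127.0.0.1
192.168.1.10   localhost
192.168.1.10   myguest.example.com myguest
```

This keeps packages that blindly connect to "localhost" working even without a virtualized loopback interface.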
The absence of network virtualization in VServer is deliberate, and for a good reason.

My bad - for some reason I mistakenly thought that the OpenVZ license was not GPL - thanks for highlighting this error.

I don't see how kernel support is relevant to the RHEL upstream - a year ago there was no OpenVZ support for 2.6.32 whatsoever. And frankly, this was one of the reasons I chose VServer for a machine hosting around 20 VMs. Obviously 2.6.32 has a number of important features, notably KSM, which makes a lot of sense for a virtualization host, and also ext4. At that time (a year ago) I installed the VServer-patched 2.6.32 kernel from the native Debian repository.

"*more performant": I agree with you that the difference in network performance between VServer and OpenVZ is not dramatic. Perhaps it could be demonstrated with some sort of artificial benchmark. However, here I was quoting Herbert Poetzl (the VServer developer). While the performance difference is not big, there is another thing which I believe is equally important - simplicity. If the same result can be achieved more simply, without even a little virtualization overhead, it is certainly better: more maintainable, probably less buggy, and so on. Simplicity matters.

"Easier": Well, this is really quite a subjective matter. Available tools are a different argument. I got my first experience with OpenVZ about 18 months ago when I created several VMs, but there were some problems which motivated my migration to VServer - a decision I have never regretted. Somehow I found memory management easier in VServer. It could be just my perception, but to me VServer is easier to configure and use, and Debian makes VServer installation trivial. One of my problems with OpenVZ was understanding how its memory limits work. This is indeed a problem related to lack of experience, but a number of times services inside an OpenVZ VM failed to allocate the required amount of RAM, so I tweaked some parameter until it happened again, then I had to tweak another setting, and so on.
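For what it's worth, those allocation failures show up in OpenVZ as non-zero values in the failcnt column of /proc/user_beancounters. A minimal sketch of how to spot them - the sample data below is made up (and the column spacing simplified) so the snippet is self-contained; on a real OpenVZ host you would run the awk filter directly on /proc/user_beancounters:

```shell
#!/bin/sh
# Sketch: list OpenVZ beancounters whose fail counter is non-zero.
# The sample mimics /proc/user_beancounters: the first line of each
# container starts with "uid:", and failcnt is the last field.
sample='            uid  resource      held  maxheld  barrier    limit  failcnt
            101:  kmemsize    123456   234567  2752512  2936012        0
                  privvmpages  49152    65536    49152    53575       37
                  numproc         24       30      240      240        0'

# Print resource name and failcnt for every counter that has failed
# at least once (the header is skipped because "failcnt" is not numeric).
failed=$(printf '%s\n' "$sample" |
  awk '$NF ~ /^[0-9]+$/ && $NF+0 > 0 { r = ($1 ~ /:$/) ? $2 : $1; print r, $NF }')
printf '%s\n' "$failed"
```

Here privvmpages has failed 37 times, which is exactly the "services can't allocate RAM, tweak a parameter, repeat" loop I describe above.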
After a week of struggling I had no confidence in the settings I used, and I had to read a lot to get a detailed understanding of all those parameters. Obviously the defaults were not good. Then I had a chat with a guy from another company who had tried to adopt OpenVZ for a large Java-based application spanning a dozen VMs. They gave up after running into problems with Java memory management in OpenVZ, so they ended up using KVM for the project. (Personally, I believe they just didn't have enough patience to chase down all the problems.)

When I decided to try VServer I already had 5 or 6 OpenVZ VMs. Using VServer was surprisingly easy (perhaps only the documentation lacks some up-to-date examples), so soon enough I found myself migrating physical servers to VServer and creating more VMs, mostly Debian- or CentOS-based. The migration to VServer was trivial for me - in a year I have had no problems with memory allocation in any of more than 20 VServer VMs, many of which run Java application servers. The defaults in VServer are not restrictive, so it is easier to set up a VM first and restrict it later, once its configuration is finalized and its memory usage is known.

I like VServer more, particularly the way we do things in VServer. To me the administration effort is lower with VServer, but you may argue that this is a matter of experience.

Regards,
Onlyjob.

On 12 January 2011 14:56, Daniel Pittman <[email protected]> wrote:
> On Tue, Jan 11, 2011 at 16:24, onlyjob <[email protected]> wrote:
>
>> No, no, please not OpenVZ. It is certainly not for beginners.
>> Better use VServer instead.
>> I used both, first OpenVZ (but was never really happy with it) and then
>> VServer.
>
> Have VServer added network virtualization yet? Last time I used it
> they hadn't, so your containers didn't have, for example, the loopback
> interface, or a 127.0.0.1 address they could use.
>
> That made for a constant, ongoing pain in the neck compared to OpenVZ
> which *did* look like that.
> Every single distribution package that
> assumed, for example, that it could talk to 'localhost' would do the
> wrong thing.
>
> Ah. I see the experimental releases do add support for a virtualized
> loopback adapter, along with IPv6, which is nice, and probably
> addresses my biggest operational issue with VServer.
>
>> There are number of benefits of VServer over OpenVZ:
>>
>> * GPL License
>
> http://openvz.org/documentation/licenses
> The OpenVZ software — the kernel and the user-level tools — is
> licensed under GNU GPL version 2.
>
> It is also notable that a bunch of the upstream, in-kernel code *is*
> from OpenVZ, including a bunch of the namespace support that underpins
> the LXC implementations and, these days, OpenVZ itself.
>
> Can you tell me where you got the impression that OpenVZ was not GPL?
>
>> * Better kernel support:
>> OpenVZ kernel 2.6.32 become available only recently.
>> VServer supported 2.6.32 for a while - much much longer. OpenVZ's
>> adoption of new kernels is quite slow - perhaps just too slow...
>
> FWIW, because their upstream kernel is based on the RHEL kernel
> releases, we often found that they had sufficiently recent drivers
> despite the older core version. This is a genuine drawback, however,
> and makes it hard to have upstream support if you are not using RHEL
> as your base system (eg: Debian, Ubuntu.)
>
> Er, also, am I looking at the right place? I went to check out the
> "feature equivalent" stuff because I am quite interested in keeping
> up, and the linux-vserver site tells me that the latest stable release
> is vs2.2.0.7 for 2.6.22.19 – they have an *experimental* patch for
> 2.6.32, but I presume there must be some other stable release for the
> .32 series or something?
>
> [...]
>
>> * more performant:
>> Linux-VServer has no measureable overhead for
>> network isolation and allows the full performance
>> (OpenVZ report 1-3% overhead, not verified)
>
> Our measurements show pretty much identical performance cost for
> either tool, FWIW, and we generally found that either of them was able
> to exhaust the IOPS or memory capacity of a modern server well before
> they could make a couple of percent of CPU overhead matter. (KVM-like
> tools were far worse for this, of course, because of their increased
> IO and memory overheads.)
>
> [...]
>
>> * Easier.
>
> I don't agree here: other than the more modern set of front ends (like
> Proxmox) for OpenVZ, I never found there to be a detectable difference
> in tools overheads between VServer and OpenVZ, and OpenVZ was actually
> a bit easier to fiddle with outside the rules and all.
>
> Regards,
> Daniel
> --
> ✉ Daniel Pittman <[email protected]>
> ⌨ [email protected] (XMPP)
> ☎ +1 503 893 2285
> ♻ made with 100 percent post-consumer electrons
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
