Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-26 Thread Scott Dowdle
Greetings,

- Original Message -
> Can you share your experience with LXC and/or systemd-nspawn
> for RHEL 8 / CentOS 8 operating system on the hardware node?

> I can't use the host network for [system] containers.
> Each container must have its own private network.

In that case, perhaps you'd like docker/podman's private networking?

> Backing up persistent containers and restoring from backup is an issue.
> I don't want to deal with a mash of different images and layers.

I haven't given backups much thought.  I assume there are a number of backup 
solutions for docker/podman containers, but I'm completely ignorant of them.

> Each of my systemd-nspawn containers is located in a separate filesystem:
> 
> # zfs list
> NAME  USED  AVAIL REFER  MOUNTPOINT
> tank  531G  1.13T   96K  /tank
> tank/containers   528G  1.13T  168K  /tank/containers
> tank/containers/119.1G  1.13T 8.00G  /tank/containers/1

Ok, so you are turning off SELinux and using ZFS too?  And you still want to 
stay with EL?  Why?

> Upstream also doesn't support ZFS, but this is an extraordinary file system
> with an excellent feature set.

Ubuntu and LXD do support ZFS and Canonical's lawyers seem happy to allow ZFS 
to be bundled with Ubuntu by default.  You should get along nicely.
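For what it's worth, a minimal LXD-with-ZFS setup on Ubuntu can be sketched like this (the pool name and loop-file size are made-up example values, and the exact `lxd init` flags vary between LXD releases, so treat it as a sketch rather than a recipe):

```shell
# Install LXD (the snap is the default delivery channel on recent Ubuntu LTS)
sudo snap install lxd

# Non-interactive init with a ZFS-backed storage pool.
# "lxdpool" and the 20 GiB loop file are example values.
sudo lxd init --auto --storage-backend zfs \
    --storage-create-loop 20 --storage-pool lxdpool

# Launch a container on that pool and list it
lxc launch ubuntu:20.04 web1
lxc list
```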

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-26 Thread Scott Dowdle
Greetings,

- Original Message -
> > LXD is a management layer on top of it which provides for easy
> > clustering and even managing VMs.  I think it is the closest thing
> > to vzctl/prlctl from OpenVZ.
> 
> "Yes, you could use LXC without LXD. But you probably would not want to.
> On its own, LXC will give you only a basic subset of features.
> For a production environment, you’ll want to use LXD".

Have you tried LXD?  Again, I'd only recommend it on Ubuntu LTS and I believe 
your target is CentOS so that is probably why you are excluding it, eh?

> podman is a replacement for Docker;
> it is not a replacement for OpenVZ 6 containers.

Docker definitely targets "Application Containers"... with one service per 
container.  podman says they can also do "System Containers" by running systemd 
as the entry point.  Of course the vast majority of pre-made container images 
you'll find in container image repositories aren't built for that, but you can 
use distro provided images and build a system container image out of them.  I 
have a simple recipe for Fedora, CentOS, and Ubuntu.  I don't know how many 
people are using podman in this capacity yet, or whether it is mature enough 
for production... but the limited testing I've done with it has worked out 
fairly well... using Fedora or CentOS Stream 8 as the host OS... and yes, even 
running the container as a regular user after doing:

setsebool -P container_manage_cgroup on

Yes, podman does still use its own private network addressing, but I guess 
that can be overcome by telling it to use the host network.  I haven't tried 
that.  Not exactly like OpenVZ's container networking for sure.
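To make the "system container" idea concrete, here is roughly what such a recipe can look like for CentOS (a sketch only: the base image tag, container name, and Containerfile are my own examples, not an official procedure):

```shell
# Build a minimal image that boots systemd instead of a single app.
# quay.io/centos/centos:stream8 is one plausible base image.
cat > Containerfile <<'EOF'
FROM quay.io/centos/centos:stream8
RUN dnf -y install systemd && dnf clean all
CMD ["/sbin/init"]
EOF
podman build -t centos-system .

# Let systemd inside the container manage its cgroup (see setsebool above)
sudo setsebool -P container_manage_cgroup on

# Run with systemd as PID 1; --systemd=always makes podman mount /run,
# /tmp, and the cgroup hierarchy the way systemd expects.
podman run -d --name ct1 --systemd=always centos-system
```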

> I have containers with 1.6 TiB of valuable data - podman is
> not designed to work in this mode and under such conditions.

Persistent data really isn't an issue.  You just have to understand how it 
works.  Plenty of people run long-term / persistent-data Docker and podman 
containers... although granted, most folks say if you are using persistent data 
containers, you are doing it wrong.  I guess I prefer to do it wrong. :)

> So I have only two alternatives for OS-level virtualization:
> LXC or systemd-nspawn.

If CentOS is your target host, I'd guess that neither of those really is a 
good solution... simply because they aren't supported, and upstream doesn't 
care about anything other than podman for containers.

LXC varies from one distro to the next... with different kernels, and different 
versions of libraries and management scripts.  Again, LXD on an Ubuntu LTS host 
is probably the most stable... with Proxmox VE as a close second.  Both of 
those upstreams care about system containers and put in a lot of effort to make 
it work.

Good luck.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-25 Thread Scott Dowdle
Greetings,

- Original Message -
> OpenVZ 7 has no updates, and therefore is not suitable for
> production.

The free updates lag behind the paid Virtuozzo 7 version and plenty of people 
are using it in production.  I'm not one of those.

> LXC/LXD is the same technology, as I understand from
> linuxcontainers.org

linuxcontainers.org is owned by Canonical and yes it documents LXC... but LXD 
is a management layer on top of it which provides for easy clustering and even 
managing VMs.  I think it is the closest thing to vzctl/prlctl from OpenVZ.

> podman can't be a replacement for OpenVZ 6 / systemd-nspawn because
> it destroys the root filesystem on the container stop, and all
> changes made in container configs and other container files will be lost.
> This is a nightmare for the website hosting server with containers.

No, it does NOT destroy the delta disk (that's what I call the place where 
changes are stored) upon container stop, and I'm not sure why you think it 
does.  You can even export a systemd unit file to manage the container as a 
systemd service or user service.  Volumes are a nice way to handle persistence 
of data if you want to nuke the existing container and make a new one from 
scratch without losing your data.  While it is true you have to approach the 
container a little differently, podman systemd containers are fairly 
reasonable "system containers".
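The volume-plus-systemd-unit approach can be sketched like this (the container, image, and volume names are invented for illustration):

```shell
# A named volume survives container removal and recreation
podman volume create webdata
podman run -d --name web -v webdata:/var/www/html registry.example/centos-system

# Export a unit file so systemd (re)starts the container like any service.
# --new means the unit creates a fresh container from the image on each
# start, while the data lives on in the "webdata" volume.
podman generate systemd --new --name web \
  > ~/.config/systemd/user/container-web.service
systemctl --user daemon-reload
systemctl --user enable --now container-web.service
```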
 
TYL,
--
Scott Dowdle


Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-25 Thread Scott Dowdle
Greetings,

- Original Message -
> I found only two possible free/open source alternatives for OpenVZ 6:
> 
> - LXC
> - systemd-nspawn

Some options you seem to have overlooked:

1) OpenVZ 7
2) LXD from Canonical that is part of Ubuntu
3) podman containers with systemd installed (set /sbin/init as the entry point)

I use LXC on Proxmox VE (which I guess should be #4 above) some, although I 
primarily use it for VMs.

Oh, LXD is supposedly packaged for other distros but given that they aren't 
much into SELinux and they are into snaps, I'd not really recommend it outside 
of Ubuntu.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Apparent discontinuity between advertised centos7 release 1803_01 and content of centos-release file

2018-04-19 Thread Scott Dowdle
Greetings,

- Original Message -
> Hello,
> 
> I searched centos7 in the AWS marketplace for the
> at-time-of-writing-latest centos7 image:
> https://aws.amazon.com/marketplace/pp/B00O7WM7QW?qid=1524138193326=0-1_=srh_res_product_title
> 
> I built a standard free tier t2.micro from this putative 1803_01 AMI.
> I see from the docs, this is thus a March 2018 compilation.
> When I get CLI, I get this:
> 
> [centos@ip-172-31-27-32 etc]$ cat centos-release
> CentOS Linux release 7.4.1708 (Core)
> 
> which suggests that the AMI I just launched was built on a version of
> centos compiled from upstream sources in August 2017.
> Does the release of centos7 on the marketplace sport a release
> version which does not pertain to the machine's release? Unlike with
> releases published outside AWS, the Centos cloud page does not
> specify version info as listed on AWS.
> 
> Thanks v much IA,

RHEL releases about every 6 months... and then CentOS lags behind a little bit 
as a rebuilder.

The most current release of CentOS is based on RHEL 7.4... and is dated August 
2017 (1708).  RHEL 7.5 was released a week or so ago, and CentOS is frantically 
working on a release based on it.  When that comes out, the latest release will 
be 1804 (or 1805, etc.).

That doesn't mean there aren't in-between release updates because there are... 
but they don't re-number the whole release because of regular updates.

The CentOS Project produces a lot of products, and some of them may be rebuilt 
and distributed more frequently (like CentOS Atomic Host or their Vagrant 
image, etc.)... but not the oldest, main product.

Did that answer your question?

TYL,
--
Scott Dowdle


Re: [CentOS-virt] LXC on CentOS 7 HowTo: PAM Configuration

2016-02-09 Thread Scott Dowdle
Greetings,

- Original Message -
> I am trying to implement something like an "LXC on CentOS 7 HowTo"
> for internal use. (Might as well get public afterwards.) I am following
> the HowTo for CentOS 6
> (https://wiki.centos.org/HowTos/LXC-on-CentOS6). So, here's what I
> did so far (Steps 1-6 can easily be omitted, but I am trying to be
> complete.)

Do you want to use the libvirt tools or the lxc-{whatever} tools?

I haven't worked with LXC on EL6 nor EL7 much at all... but I have been playing 
with it some on Fedora 23.

Anyway, to create a CentOS container, the lxc tools can do a lot of the work 
for you... and I don't know that all of the steps from that wiki are needed... 
at least if you use the lxc tools rather than libvirt... although you'll still 
use libvirt for its networking stuff.

To create a CentOS 7 container:

lxc-create -t download -n {desired-name}

That should give you a list of available Templates... and you would type in:

Distribution: centos
Release: 7
Architecture: amd64

It should download the template and put it under /var/cache/lxc/, and create 
the container under /var/lib/lxc/.

The template should just work and not require any fiddling... I'm hoping.
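Assuming the download template behaves, the rest of the container lifecycle with the lxc tools looks something like this (the container name is an example; the flags after `--` are the non-interactive answers to the download template's prompts):

```shell
# Non-interactive form of the same download
lxc-create -t download -n ct1 -- -d centos -r 7 -a amd64

# Start it, get a shell inside, then stop and destroy it
lxc-start -n ct1
lxc-attach -n ct1 -- /bin/bash
lxc-stop -n ct1
lxc-destroy -n ct1
```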

LXC is still rather lacking in isolation features as it does not give the 
container a subset of /proc... so within the container you can see all of the 
RAM and disk... and your root user can do bad things if you don't trust them.  
That is with a "privileged" container.  Supposedly there is a way to run a 
container as a user and then grant capabilities as needed to reduce the 
security footprint but I don't know much about that.

Docker is a subset of that design for Applications (rather than the full distro 
with an init system of its own) that provides a really nice image library and 
image builder... but unless you are trying to do fleet computing (aka 
microservices) then Docker really isn't the container I've been looking for.

If you want privileged containers you don't have to worry about, you'll most 
likely want to create an OpenVZ host (warning: third-party repo / kernel / 
tools needed).  The current stable version of OpenVZ is "OpenVZ Legacy" which 
is EL6-based.  They have been working hard on "Virtuozzo 7", which is a merger of 
OpenVZ and the upstream Virtuozzo product-line still offering a FLOSS 
version... that is based on EL7 and also provides KVM VM management along-side 
of containers.  They are trying to integrate Virtuozzo support into libvirt and 
the libvirt-based tools like virsh and virt-manager... and get as much of that 
work upstreamed as possible... and switch from the kernel-patch based 
checkpoint code they have in OpenVZ Legacy to the mostly upstreamed CRIU C/R.  
Hopefully in the next 3-6 months Virtuozzo 7 will go GA.  They basically have 
created a complete distro for it which is based on CentOS.

I'd be interested to hear whether the lxc tools work for you or not.  The 
little bit I tried them on EL7, I seemed to get journald CPU max-outs on the 
host node.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Hostname inside lxc container

2016-01-21 Thread Scott Dowdle
Greetings,

- Original Message -
> I have installed a CentOS6 lxc guest under a Debian 8.x LXC host.
> It is all working OK, but I can't change the hostname for the centos6 lxc
> container (it is using the same hostname as the Debian host). I have
> modified HOSTNAME under /etc/sysconfig/network and the /etc/hosts file,
> but it doesn't work.
> 
> Do I need to change anything else??

If I remember correctly, it is generally set in /etc/sysconfig/network on a 
regular EL6 host.  If that isn't working when run within an LXC container on a 
Debian host... ask whoever packaged up the LXC container template you used.  
There is a lot of variance in LXC from one distro to another.  Some recipes say 
to install from a .tar.gz file (which I prefer to call an OS Template) others 
say to do a chroot install using a distro's native package manager.  My point 
is, that LXC tools and methods really vary... and I don't think the CentOS 
project can help much in this situation.

For containers, I've been using OpenVZ (now called OpenVZ Legacy while they 
work on the newer Virtuozzo 7) for 10 years now... and they have a standard set 
of OS Templates for a range of distros and everything (generally) works as it 
is supposed to... and even with that... the CentOS Project isn't interested in 
helping OpenVZ users running CentOS containers... because it uses a non-CentOS 
provided kernel and management tools... although the recommended OpenVZ Legacy 
hostnode distro is CentOS 6.x.  Virtuozzo 7 is its own distro rebased from 
CentOS 7.

One container technology they are interested in supporting is Docker (app 
containers) especially when using the official CentOS Docker images 
built/provided by the CentOS Project... running on a CentOS host.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Can KVM and VirtualBox co-exist on same host?

2014-07-24 Thread Scott Dowdle
Greetings,

- Original Message -
> A supplemental question:  Is there any way to convert a VB guest image into a
> KVM guest image?

I believe qemu-img can convert between a few different virtualization disk 
image formats.  Of course that just changes the disk image itself... and not 
the drivers inside... so if you do convert it (I'd recommend working on a 
copy)... then you'll probably have to pull the VirtualBox guest addons out... 
and install the KVM guest stuff... but it shouldn't be that difficult.
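Assuming qemu-img is what you reach for, the conversion can be sketched like this (file names are examples, and it works on a copy as suggested):

```shell
# Convert a VirtualBox VDI disk image to qcow2 for use with KVM
cp freebsd.vdi freebsd-copy.vdi                      # work on a copy
qemu-img convert -f vdi -O qcow2 freebsd-copy.vdi freebsd.qcow2
qemu-img info freebsd.qcow2                          # sanity check the result
```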

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Fwd: About Centos 7 + Virt-manager

2014-07-16 Thread Scott Dowdle
Greetings,

- Original Message -
> while much of what you say is true
> you somehow could lead unaware users
> to the conclusion that docker and lxc
> are two very different container technologies.
> 
> In fact, docker uses lxc for containers.
> So it's more a management abstraction layer
> with an API.
> 
> Nevertheless for true and secure containerization
> you'll need openvz atm, sadly it's not in the kernel yet.

Docker dropped LXC with version 0.6 or was it 1.0?  They have their own library 
that they use now.  Since I can't remember what version they switched, nor what 
version Red Hat has in RHEL7 (I'll leave it as an exercise for the reader)... 
perhaps the Docker currently shipping with RHEL7 still uses LXC?!?

The bulk of OpenVZ will never make it into the kernel although the Parallels 
kernel devs do try to integrate existing kernel features into OpenVZ (so the 
patch becomes smaller over time) as well as getting bits and pieces into the 
kernel or into userland (criu for example).

I wonder how much change OpenVZ will undergo in the port to the RHEL7 kernel... 
where considerable container building blocks are already part of the kernel.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Fwd: About Centos 7 + Virt-manager

2014-07-16 Thread Scott Dowdle
Greetings,

- Original Message -
> Thanks a lot, for your replies, my boss is a big fan of lxc, but I
> have read many forums, and what I perceive is rhel7 - docker,
> centos7 --- openvz
> 
> With great difficulty, I managed a container with virt-manager, I
> even noticed a bug when trying to create a bridge.
> 
> Conclusion, as we want to use a container operating system is better
> to use openvz, now is there interfaces that allow a user no expert
> reserve resources such as memory, cpu, etc without going to browse
> cgroups?

Just to clarify, the OpenVZ kernel runs fine on RHEL 5/6 too.  In fact I have 
a couple of RHEL hosts running OpenVZ.

So far as resource management goes, in the EL6-based OpenVZ kernel, vSwap-based 
configuration/management is preferred whereas in the EL5-based kernel (which 
OpenVZ is EOL'ing in Oct. I think), user_beancounters are what you have.  
vzctl's resource management facilities do what the vast majority of container 
users need.

vzctl-core is available for non-OpenVZ kernels but it is missing quite a few 
features compared to when run on an OpenVZ-based kernel.  See: 
https://wiki.openvz.org/Vzctl_for_upstream_kernel  I don't think it is well 
tested on upstream kernels.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Fwd: About Centos 7 + Virt-manager

2014-07-16 Thread Scott Dowdle
Greetings,

- Original Message -
> On 16.07.2014 15:16, Scott Dowdle wrote:
> > Docker dropped LXC with version 0.6 or was it 1.0?  They have their
> > own library that they use now.
> 
> This is not correct, or the docker docs are out of date:
> 
> "Docker combines these components into a wrapper we call a container
> format. The default container format is called libcontainer. Docker also
> supports traditional Linux containers using LXC." Source: [1]
> 
> [1] https://docs.docker.com/introduction/understanding-docker/#the-underlying-technology

Yes, if you read that, it says "the default container format is called 
libcontainer."  It also says Docker supports LXC.  Originally it was LXC only.  
Then they switched to libcontainer... but they say they support several formats 
(including OpenVZ, although I haven't seen any elaboration as to how) and plan 
to add more in the future.

Docs that were written before the move to libcontainer (and in all honesty, I'm 
not really sure what libcontainer is vs. LXC) won't mention it.  Everything 
after the move should mention it.

More information about libcontainer (it was the 0.9 release of Docker that 
announced it, not 0.6 nor 1.0 as I had previously guessed) can be found here:

http://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Fwd: About Centos 7 + Virt-manager

2014-07-16 Thread Scott Dowdle
Greetings again,

- Original Message -
> On 16.07.2014 15:16, Scott Dowdle wrote:
> > Docker dropped LXC with version 0.6 or was it 1.0?  They have their
> > own library that they use now.
> 
> This is not correct, or the docker docs are out of date:
> 
> "Docker combines these components into a wrapper we call a container
> format. The default container format is called libcontainer. Docker also
> supports traditional Linux containers using LXC." Source: [1]
> 
> [1] https://docs.docker.com/introduction/understanding-docker/#the-underlying-technology

For anyone not willing to take the time to visit the Docker 0.9 release blog 
post, here's a snippet from it:

Thanks to libcontainer, Docker out of the box can now manipulate namespaces, 
control groups, capabilities, apparmor profiles, network interfaces and 
firewalling rules – all in a consistent and predictable way, and without 
depending on LXC or any other userland package. This drastically reduces the 
number of moving parts, and insulates Docker from the side-effects introduced 
across versions and distributions of LXC.

Is that more clear?

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Fwd: About Centos 7 + Virt-manager

2014-07-15 Thread Scott Dowdle
Greetings,

- Original Message -
> Thanks Nux, but now I'm fighting with the network settings...
> There is some tutorial for this?,
> How do you create centos template, can you give me some steps for
> this...
> Thanks in Advance
> Pablo
> 
> On Tue, Jul 15, 2014 at 2:32 PM, Nux! n...@li.nux.ro wrote:
> > Hi,
> > I've got an old image here: http://li.nux.ro/download/LXC/
> > Use at your own risk etc :-)
> > --
> > - Original Message -
> > > From: Pablo Silva psil...@gmail.com
> > > To: centos-virt@centos.org
> > > Sent: Tuesday, 15 July, 2014 3:53:45 PM
> > > Subject: [CentOS-virt] Fwd: About Centos 7 + Virt-manager
> > > 
> > > Dear Colleagues:
> > > 
> > > I'm trying to use virt-manager to create a container with lxc, as the
> > > picture I attached, I indicated that I must point to a directory where the
> > > template is to manage it with virt-manager.
> > > 
> > > I tried to use a template of
> > > http://wiki.openvz.org/Download/template/precreated , but not
> > > working, being on the console option displays a black screen with nothing.
> > > 
> > > I wonder, where can I get a template to work with virt-manager, or what
> > > are the steps to follow so you can create a template with Centos Minimal,
> > > do not know if the template must be the same version of Centos 7 or can be
> > > a template with Centos 6.5
> > > 
> > > In advance, thank you very much.
> > > -Pablo
> > > 
> > > http://picpaste.com/Captura_de_pantalla_2014-07-15_a_las_10.47.26-DHi28gtt.png
> > > http://picpaste.com/Captura_de_pantalla_2014-07-15_a_las_10.49.02-y4TkGxLY.png

While virt-manager has had an LXC option for some time, and yes, I've actually 
seen a few tutorials by adventurous Fedorians on Fedora Planet over the last 
couple of years... to the best of my knowledge, almost no one is using 
virt-manager with containers.  Red Hat only sees containers as being for 
applications, and they are strongly favoring Docker.

Among the many guides released with RHEL7 was one entitled "Resource 
Management and Linux Containers" 
(https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Resource_Management_and_Linux_Containers_Guide/index.html).  
It does have a section on LXC containers, but they really don't go into it too 
much.

Canonical is the main lead on LXC containers... so maybe they have some 
documentation?

If you want to containerize applications, then Docker might be the way to go.  
If you want a full-blown container (complete distro with independent accounts, 
networking, services, etc.), then OpenVZ is the way to go.  OpenVZ is a 
third-party patch to the kernel that will never get into the mainline kernel, 
so you have to use an OpenVZ-provided kernel (their stable branches are based 
on RHEL kernels) and utils.  They have EL5- and EL6-based kernels... and are 
working on an EL7-based one, but there's no date on when that will be released.

I'm a big OpenVZ user (since 2005) so if you have questions, feel free to email 
me directly if desired... or find me in #openvz on freenode during MST business 
hours.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] Fwd: About Centos 7 + Virt-manager

2014-07-15 Thread Scott Dowdle
Greetings,

- Original Message -
> Canonical is the main lead on LXC containers... so maybe they have
> some documentation?

I haven't processed it yet... but they appear to have some documentation.  See 
the LXC section of their Server manual starting on page 323:

https://help.ubuntu.com/14.04/serverguide/serverguide.pdf

That doesn't help much on Fedora or CentOS... because LXC varies greatly from 
kernel to kernel and distro to distro.

TYL,
--
Scott Dowdle


[CentOS-virt] How to create an OpenVZ OS Template for CentOS 7 Public QA

2014-06-17 Thread Scott Dowdle
Greetings,

First, start off by working on a physical system, virtual machine, or container 
that matches the OS Template you want to build.  I used my CentOS 7 Public QA 
OpenVZ container to build it.

You must of course have a working yum.  Once we are beyond Public QA and there 
is stuff in /etc/yum.repos.d/, this won't be a problem.  One thing to note is 
that the --enablerepo= must refer to a repo your build host has, viewable via 
yum repolist.  That repo should point to the desired CentOS 7 build tree 
directory.

A note about the package list: yes, listing out every individual package is 
tedious.  Perhaps some package groups could be used, but they typically drag in 
a lot of unwanted additional packages.  Suggestions welcome.

Here is a simple script.  Please don't nag at me; I'm a scripting novice.  I 
hope email client word wrapping and screen sizes don't butcher it too badly:

- - - - -

# To get a package list without version numbers from a target system:
# rpm -qa --qf "%{n}\n" > packages.txt
# Put the contents of packages.txt after the "-y install \" line below

mkdir /ostemplate

yum \
--installroot /ostemplate \
--nogpgcheck \
--releasever=7 \
--enablerepo=centos7pubqa \
-y install \
centos-release filesystem ncurses-base mailcap tzdata glibc-common xz-libs \
ncurses-libs pcre libselinux info libdb popt sed libcom_err libuuid expat \
libacl libgpg-error dbus-libs gawk lua libxml2 glib2 shared-mime-info apr cpio \
gmp p11-kit tcp_wrappers-libs perl-parent perl-podlators perl-Text-ParseWords \
perl-Pod-Escapes perl-libs perl-threads perl-constant perl-Filter \
perl-Time-Local perl-threads-shared perl-File-Path perl-Scalar-List-Utils \
perl-Getopt-Long libcap-ng nss-softokn libassuan libunistring diffutils gpm-libs \
libnfnetlink keyutils-libs gettext-libs p11-kit-trust nettle \
gobject-introspection vim-minimal pinentry make libselinux-utils ncurses \
libverto libsemanage krb5-libs openldap cracklib libmount systemd-libs libuser \
pam libblkid util-linux python-libs dhcp-libs libcurl python-urlgrabber rpm-libs \
dhcp-common libselinux-python python-iniparse python-chardet yum-metadata-parser \
python-backports-ssl_match_hostname newt-python pyxattr binutils logrotate \
procps-ng mariadb-libs fipscheck-lib openssh libmnl iptables json-c \
device-mapper cryptsetup-libs dbus iputils cronie-anacron crontabs libestr \
gnupg2 rpm-python pygpgme libnl3 yum-utils man-db dhclient audit openssh-server \
libgudev1 net-tools elinks python-pyudev policycoreutils python-configobj \
pygobject3-base sudo wget file tar which psmisc libpcap libsysfs libdaemon lzo \
libgcc setup basesystem kbd-misc bind-license nss-softokn-freebl glibc libstdc++ \
bash libsepol zlib audit-libs nspr chkconfig bzip2-libs nss-util grep libattr \
libcap elfutils-libelf libgcrypt readline libidn libffi pkgconfig sqlite \
groff-base file-libs libtasn1 slang gdbm perl-HTTP-Tiny perl-Pod-Perldoc \
perl-Encode perl-Pod-Usage perl-macros perl-Storable perl-Carp perl-Exporter \
perl-Socket perl-File-Temp perl-PathTools perl-Pod-Simple perl apr-util libcroco \
cyrus-sasl-lib libgomp kmod-libs libedit hostname js newt ca-certificates less \
dbus-glib acl libdb-utils findutils xz sysvinit-tools ustr nss-tools \
openssl-libs gzip cracklib-dicts nss libpwquality coreutils shadow-utils \
libutempter nss-sysinit python libssh2 python-pycurl curl rpm python-decorator \
python-slip dbus-python python-kitchen python-backports python-setuptools \
pyliblzma centos-logos kmod openssl nss_compat_ossl bind-libs-lite fipscheck \
httpd-tools libnetfilter_conntrack iproute qrencode-libs device-mapper-libs \
systemd systemd-sysv initscripts cronie libpipeline pth rpm-build-libs gpgme yum \
libnl3-cli rsyslog mlocate kbd postfix httpd ebtables openssh-clients authconfig \
python-slip-dbus mc gettext screen passwd gnutls elfutils-libs libss nano snappy \
libndp ethtool hardlink rootfiles

ln -sf /proc/mounts /ostemplate/etc/mtab

# I want Mountain time to be the default
ln -sf /usr/share/zoneinfo/America/Denver /ostemplate/etc/localtime

# Now compress that sucker
cd /ostemplate ; tar -cvJf /root/centos-7-x86_64-viayum.tar.xz . ; cd
ls -lh /root/centos-7-x86_64-viayum.tar.xz
echo "Done building OS Template.  Now test it."

- - - - - 
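Once the tarball tests clean, using it as an OpenVZ OS Template goes roughly like this (the CTID, IP address, and hostname are examples; /vz/template/cache is the default template cache location on an OpenVZ host):

```shell
# Drop the template where vzctl looks for OS templates
cp /root/centos-7-x86_64-viayum.tar.xz /vz/template/cache/

# Create and start a test container from it
vzctl create 101 --ostemplate centos-7-x86_64-viayum
vzctl set 101 --ipadd 192.168.0.101 --hostname test7 --save
vzctl start 101
vzctl exec 101 cat /etc/centos-release
```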

TYL,
--
Scott Dowdle


[CentOS-virt] OpenVZ variant

2014-04-03 Thread Scott Dowdle
Greetings,

I was reading the LWN article from today (free to non-subscribers next 
Thursday). Here's a subscriber link for those who might want to see it now:

CentOS and Red Hat - http://lwn.net/SubscriberLink/592723/485ea802859f6c36/

I saw that Xen was mentioned as an area where CentOS went beyond RHEL with 
CentOS 6... and being that I'm deeply in the OpenVZ community, I thought it 
might be natural to have an OpenVZ CentOS Variant.  I just noticed that the 
CentOS Virt-SIG page already mentions OpenVZ.  Is this only for the upcoming 
CentOS 7 or would it be possible to produce a spin/remix that is CentOS 6-based 
that includes the OpenVZ kernel and OpenVZ utils?

Looking at the stats provided by the OpenVZ Project (http://stats.openvz.org/) 
it is obvious that CentOS is the most popular platform for both OpenVZ hosts 
and OpenVZ containers:

Top  host   distros
---
CentOS   56,725
Scientific2,471
RHEL869
Debian  576
Fedora  111
Ubuntu   82
Gentoo   54
openSUSE 18
ALT Linux10
Sabayon   6

and

Top 10  CT  distros
---
centos  245,468
debian  106,350
ubuntu   83,197
OR8,354
gentoo7,017
pagoda4,024
scientific3,604
fedora3,173
seedunlimited 1,965

This morning I sent out some feelers to the OpenVZ community (via the OpenVZ 
Users mailing list, blog.openvz.org, and the #openvz IRC channel) to see if any 
OpenVZ users were already working with the CentOS project (I'm not).

So does anyone that is part of this SIG care to tell me how much OpenVZ 
interest there currently is and how I might become a part of the effort?  I 
know the virt-sig is probably quite broad beyond OpenVZ.

TYL,
--
Scott Dowdle


Re: [CentOS-virt] kvm

2013-01-16 Thread Scott Dowdle
Mattias,

- Original Message -
> i have installed kvm on centos 6
> but how to run it
> i do not use libvirt

There is a fine virtualization guide written by Red Hat, so check that out if 
you haven't.

Basically you DO want to use libvirt, insofar as you use a client that uses 
it.  Which client?  virt-manager and/or virsh.  Those should be the clues 
that you need.
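A minimal sketch of that workflow on CentOS 6 (package names are the usual ones; the VM name, disk path, and ISO path here are hypothetical):

```shell
# Install KVM plus the libvirt-based tooling (CentOS 6 package names)
yum install qemu-kvm libvirt python-virtinst virt-viewer

# Create a guest from install media; virt-install registers it with libvirt
virt-install --name testvm --ram 1024 --vcpus 1 \
    --disk path=/var/lib/libvirt/images/testvm.img,size=8 \
    --cdrom /path/to/install.iso

# From then on, manage it with virsh (or virt-manager)
virsh list --all
virsh start testvm
```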

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] create a guest

2013-01-16 Thread Scott Dowdle
Greetings,

- Original Message -
> but i allredy have the freebsd disc image file on the server

DUH... I thought you were saying you had a disc or disc image of the FreeBSD 
install media.  You already have a FreeBSD VM disk image?  In that case, what 
is it from?  Is it a KVM VM or did it come from some other virt platform like 
VMware, VirtualBox, Parallels, Xen or what?

You may run into an issue with drivers if it came from another virt platform 
that uses product-specific tools (VMware Tools, for example)... where, to 
make it work correctly, you have to remove their tools.  This is probably less 
of an issue with a FreeBSD VM though.

There is also virt-v2v, which supposedly can convert a disk image of a VM from 
one product's format to another.  I haven't used it.  There should be good 
documentation for virt-v2v if you do a search.
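If the only problem is the container format, qemu-img alone can often convert the disk (filenames here are hypothetical; unlike virt-v2v, it does not touch drivers inside the guest):

```shell
# Convert a VMware VMDK disk image to qcow2 for use with KVM
qemu-img convert -f vmdk -O qcow2 freebsd.vmdk freebsd.qcow2

# Check the result's format and virtual size
qemu-img info freebsd.qcow2
```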

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] create a guest

2013-01-16 Thread Scott Dowdle
Greetings,

- Original Message -
> stacklet.com
> kvm image

In that case, what I would do is create a new VM with virt-manager but use the 
disk image file provided.  That will basically create 
/etc/libvirt/qemu/whatever.xml, where whatever is the name you gave the VM in 
virt-manager.  Then you can use virt-manager to start, stop, console connect, 
etc... or you can use virsh from the command line.

BTW, if the VM is to have a public IP address then you want to set up a bridge 
device, if you don't already have one, and associate the VM with it when you 
create the VM.  If it is going to use a private IP address, then you can just 
use the default NAT.
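From the command line, the equivalent of that virt-manager step is roughly this (the VM name, image path, and bridge name br0 are assumptions; --import skips the install phase and boots the existing image):

```shell
# Register an existing disk image as a new libvirt guest and boot it
virt-install --name freebsd --ram 1024 \
    --disk path=/var/lib/libvirt/images/freebsd.img \
    --import --network bridge=br0
```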

KVM is a little complicated to get going with but the effort is definitely 
worth it.

And again, there is good documentation if you do a few searches.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] (no subject)

2012-12-07 Thread Scott Dowdle
Shawn,

- Original Message -
> On an otherwise completely idle system I've noticed the load to be
> 1.0 to 1.5 range.  Running top shows the culprit to be: qemu-kvm.
>
> Is this normal behavior?  I would have expected the load to be pretty
> light.

You have to remember, or at least as I understand it, that a load of 1 is full 
for a single CPU/core.  Since you have 24, a full load would be 24.

Linux gets even weirder with more cores.  I have one system that has 64 
cores... and it has a lot of threads/processes running just to support those.

Another thing that uses quite a bit of CPU is KSM (kernel samepage merging).  
If you don't have a number of similar VMs then I don't think it is very 
helpful... and it seems to eat up quite a bit of CPU trying to be helpful.
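If you want to test that theory, KSM is easy to inspect and switch off via sysfs (the paths below are the standard kernel ones; RHEL-family distros also ship ksm/ksmtuned init scripts):

```shell
# 1 means the KSM scanner is running; pages_shared is what it has merged so far
cat /sys/kernel/mm/ksm/run
cat /sys/kernel/mm/ksm/pages_shared

# Stop scanning (as root); alternatively: service ksmtuned stop; service ksm stop
echo 0 > /sys/kernel/mm/ksm/run
```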

Ok, now the uber-CentOS geeks can tell me how stupid I am. Mmmm... go.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] turning off udev for eth0

2012-01-03 Thread Scott Dowdle
Greetings,

- Original Message -
> I have set up a kvm host and configured a standard clone
> prototype for generating new guests. One persistent (pun
> intended) annoyance when cloning is the behaviour of udev
> with respect to the virtual network interface.
>
> The prototype is configured with just eth0 having a
> dedicated IP addr.  When the prototype is cloned udev
> creates rules for both eth0 and eth1 in the clone.
> Because eth1 does not exist in the cloned guest one has to
> manually edit /etc/udev/rules.d/70-persistent-net.rules to
> get rid of the bogus entries and then restart the clone
> instance to have the changes take effect. All this does is
> return the new guest to the prototype eth0 configuration.
>
> Is there no way to alter udev's behaviour?  Is udev even
> needed on a server system using virtual hardware?
> Altering the rules file not a big deal in itself but it
> adds needless busywork when setting up a new guest.

That's how it is on physical machines (I image lab computers and have to clean 
up the udev rules file among other things), and I would expect the same 
behavior from virtual machines.

The limitations of virt-clone are known and are being addressed in 
virt-sysprep... which I don't think has made it to RHEL yet... but you can 
find out about it here:

http://rwmj.wordpress.com/2011/10/08/new-tool-virt-sysprep/
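In the meantime, the manual cleanup in the clone is small enough to script (the ifcfg path is the RHEL/CentOS location; the interface name is an assumption, so adjust to taste):

```shell
# Remove the stale rules; udev regenerates the file on the next boot
rm -f /etc/udev/rules.d/70-persistent-net.rules

# Drop the cached MAC address so eth0 matches the clone's new virtual NIC
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
```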

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] Now on to creation of disk images

2011-06-28 Thread Scott Dowdle
Greetings,

- Original Message -
> Mr. Heron was so kind to make a suggestion that I should use disk images
> to install VMs. Upon further thought, I kinda like the idea. So I
> re-read the manual and google a little, and discover I still don't know
> what should be in these disk images.
>
> Should I copy the contents of the CDs to a file or what?

I haven't been following this conversation but I think I can answer that last 
question.  Using a disk image file rather than a physical disk or partition(s) 
on a physical disk means that you use the file as if it were a disk.  You don't 
need to put anything on it... you boot install media and then select the disk 
image file as the disk you want to install your OS to.
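Creating the empty image itself is one command (the path is hypothetical); either a qcow2 file or a raw sparse file works:

```shell
# qcow2 grows on demand and supports snapshots
qemu-img create -f qcow2 /var/lib/libvirt/images/guest.qcow2 10G

# A raw sparse file is the no-extra-tools alternative; it uses no disk
# space until the guest actually writes to it
truncate -s 10G /var/lib/libvirt/images/guest.img
```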

Either that, or you are talking about using .iso files on disk as install media 
rather than physical optical media.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] Reading the new 6.0 manual - now questions

2011-06-17 Thread Scott Dowdle
Greetings,

- Original Message -
> RH makes this an add-on to their license. Does anyone know if the upcoming
> Centos 6 will provide the virtualization packages (right away or in the
> future)?

Just to clarify... Red Hat's virtualization entitlement is for 
management/support from RHN.  The way they sell RHEL... you can have 1 VM, 4 
VMs or unlimited VMs.  When I say VMs there I mean supported RHN subscribed 
RHEL installs where you register them with RHN and they get updates like any 
RHEL box would.  So you are effectively getting 2, 5 or unlimited RHEL update 
entitlements.  This is done by installing an additional package or two in the 
RHEL VM, and registering it with RHN so it knows it is a VM and RHN knows which 
physical host it is associated with.

If you want to run any number of virtual non-RHEL OSes, go for it.  They are 
not accounted for.  The only things accounted for are RHN subscriptions by 
physical or virtual machines. It isn't like virt-manager phones home... it does 
not.

None of that entitlement stuff applies to the free RHEL clones so it isn't an 
issue.

> Secondly, I'm not sure I understand the CPU allocation stuff. If I
> have 6 cores, it appears I can only create VMs that use 6 cores total.

It is my understanding that you can allocate basically all of the vcpus you 
want... the only rule though is that you can't assign more vcpus to a single VM 
than you have physical cpus as the OS sees them.  So if you have two quad-core 
CPUs and they can do multiple threads per core... just look at /proc/cpuinfo to 
see how many cpus are listed there.  It is probably the number of threads per 
core times the number of cores per CPU times the number of CPUs.  You can't go 
over that number of vcpus in a single VM.

So if /proc/cpuinfo on the physical host shows 16 CPUs, you could make any 
number of VMs with 16 or fewer vcpus each.  It doesn't matter what the total 
number is across VMs.
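Checking that ceiling is a one-liner; the count below is the number a single VM's vcpu setting must not exceed:

```shell
# Logical CPUs the host kernel sees (threads x cores x sockets)
grep -c ^processor /proc/cpuinfo
```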

From a performance standpoint some might want to allocate only as many vcpus 
as they have physical cores or cpus and then pin them so they get a 1-to-1 
allocation... but for most folks, as long as their hardware isn't bogged down 
too much, it is a free-for-all. :)

That's my understanding anyway.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] SPICE Benchmark

2010-11-18 Thread Scott Dowdle
Greetings,

- Original Message -
> Google's NX implementation is called 'neatx':
> http://code.google.com/p/neatx/

Thanks.  I was looking for that.

> NX the protocol is open already.. :)

It is for all versions before 4.0.  4.0 will be completely closed.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] SPICE Benchmark

2010-11-17 Thread Scott Dowdle
Greetings,

- Original Message -
> So as I understand it correctly, this whole SPICE thing is just
> something like VNC on steroids? Why can't we have this SPICE thing
> work on physical hosts as well?

SPICE was specifically designed to be a display protocol for a KVM virtual 
machine.  Most remote display protocols have two pieces... a client and a 
server.  SPICE has three... a client, a server, and a qxl device driver 
provided inside of a KVM virtual machine.  It is probably possible to take the 
SPICE protocol and adapt it to work without the qxl device driver... but no one 
has done that yet.

The best experience I've seen for a remote Linux box was provided by No 
Machine's NX protocol.  FreeNX comes from NX but it appears FreeNX has stalled. 
 Google created some project, I forget the name, forked from FreeNX I believe.  
No Machine is working on version 4.0 of NX and supposedly will be releasing a 
beta in the not too distant future.  Some time ago they posted a lot of 
information about NX 4.0 on their website and it seems to rival SPICE to a 
certain degree... but that remains to be seen.

What I'd like to see happen would be for SPICE to be adapted into a general 
purpose remote display protocol... or perhaps Red Hat could buy No Machine and 
open source that protocol too. :)

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] SPICE Benchmark

2010-11-16 Thread Scott Dowdle
Greetings,

- Original Message -
> I don’t suppose you have any websites you would recommend that shows
> how to install and use spice?

I second that question.  While I've found instructions here and there, and have 
even given Fedora 14 a try as well... the process is very manual and I have 
yet to find a very detailed set of instructions for getting SPICE going.

I understand that the benchmark paper used RHEV for Desktops... which I assume 
does all of the work for you... but what about those of us who are using RHEL 
5.5, RHEL 6, and/or Fedora 14?  All of them come with KVM and SPICE packages, 
but we need some detailed instructions on setting it up and making it work.

Thanks in advance for any consideration,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] SPICE Benchmark

2010-11-16 Thread Scott Dowdle
Greetings,

- Original Message -
> If you want just to see SPICE in action it is not hard. You need qemu
> with SPICE support on server and SPICE client on client.
>
> You need to start qemu on server with additional options:
> -spice port=port,disable-ticketing - use this one if you do not need
> password protection
> OR
> -spice port=port,password=secret - if you need to protect
> connection
>
> After it you can connect from client using
> spicec -h host -p port

That isn't quite all there is to it.

What about installing the xorg-x11-drv-qxl package in the guest VM and 
configuring it in xorg.conf?  How exactly is that done?  Also I believe 
spice-server needs to be running on the VM host.  I'm kinda going by Fedora 
14's SPICE feature page (http://fedoraproject.org/wiki/Features/Spice) as well 
as a CentOS related one (http://www.geekgoth.de/tag/centos/).

Are all the packages one needs to get SPICE going on the VM host included in 
CentOS 5.5?  What packages are those?  Are there any steps required for inside 
of the VM?

Simply having the right qemu-kvm and starting up the VM with the -spice flag 
isn't all there is to it.  What many of us need are step by step instructions.
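For what it's worth, here is my best reconstruction of the minimal manual path, pieced together from the quoted qemu flags and the Fedora 14 feature page; every flag, port, path, and package name here is an assumption to verify rather than a tested recipe:

```shell
# On the host: start the guest with a qxl display and a SPICE listener
qemu-kvm -m 1024 -hda /var/lib/libvirt/images/guest.img \
    -vga qxl -spice port=5930,disable-ticketing

# Inside the guest: install the qxl X driver and reference it from xorg.conf
yum install xorg-x11-drv-qxl

# On the client machine:
spicec -h virthost -p 5930
```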

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] Can KVM be run headless?

2010-11-10 Thread Scott Dowdle
Greetings,

- Original Message -
> On 11/10/2010 06:46 PM, compdoc wrote:
> If I am running a windows client, I would run something like
> Ultra VNC server. And on my host, communicate with something
> like krdc. Am I correct?

There are a few ways to do it.

If I'm on a Linux box (which I am 99.9% of the time), I ssh -X into the host 
running the KVM virtual machines, and run virt-viewer {vm-name} and a 
graphical display of the machine pops up.  If I just want text access, I ssh 
into the VM directly.  If I want to run just one graphical app, I might ssh -X 
into the VM.

If you are on a Windows client then you can use an ssh client (PuTTY for 
example) or if you want the GUI stuff, you can install an Xserver app on 
Windows and tunnel the X traffic over ssh.  A nice free Xserver for Windows is 
Xming.

If you would prefer to run a remote display protocol server inside of the VM 
and connect with that, you can install vnc-server or No Machine's NX server.  
NX is a ton faster, but the free version (not open source but free of charge) 
limits you to two user connections, which is plenty for most people on a 
server.  Then you install the vnc-client or NX client on your Windows box and 
use that to connect to the remote display server in the VM.

> How do you start and stop your headless machines?

You can use virt-manager if you have GUI access to the host or you can use the 
command line tool virsh.  For example:

virsh list --all (Shows all VMs on the host and shows their status)
virsh start {vm-name} (Starts a VM)
virsh shutdown {vm-name} (Shuts down your VM)

You really should check out Red Hat's Virtualization Guide.  They have a 
book-quality guide that explains KVM, virt-manager and virsh.

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/index.html

RHEL6 just came out today and they have greatly enhanced virt-manager and 
KVM... and you can expect a CentOS 6 release in 1 - 2 months... or so goes the 
pattern.  If you want to see what KVM is like in RHEL6 you can check out their 
updated Virtualization Guide:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization/index.html

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] Djbdns Working in VPS ?

2008-08-13 Thread Scott Dowdle
Ludwig,

- Lodewijk christoffel [EMAIL PROTECTED] wrote:
> It is said at djbdnsrocks.org that it shouln't work on VPS or
> jails...i'm try it out on my VPS and nothing seems work, at
> least to see this thing working by using ps -aux.

I see on this page (http://www.djbdnsrocks.org/single/getting_started.htm) they 
say, "Virtual private servers (jails) will usually NOT work."  That implies 
that a VPS is a jail.  OpenVZ is much, much more than a jail and I see no 
reason it shouldn't work.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


[CentOS-virt] Check out Proxmox VE... can CentOS improve on this?

2008-08-13 Thread Scott Dowdle
Greetings,

I'm a big OpenVZ fanboy.  I've sent a few emails on this list that prove 
that... and I'm sure I've annoyed some people... but be that as it may... I 
would like to draw the attention of everyone on this list to Proxmox VE.  What 
is Proxmox VE?

Here's a review from the end of June:

ProxMox: The high-performance virtualization server for the rest of us
http://blogs.zdnet.com/BTL/?p=9181

It is a bare-metal Linux distribution based on Debian that has been stripped 
down to a minimum that includes a kernel that provides support for both KVM and 
OpenVZ so it does fully virtualized machines (requiring hardware support in the 
CPU) and containers.  It has a nice web-based management system that is fairly 
feature complete and allows for the easy creation and management of KVM virtual 
machines and OpenVZ containers.  It also has clustering features.  That's a 
long enough description.

Ok, how does this differ from everything else out there?

1) It is bare-metal... just pop in the CD (which is a 250MB download)... boot 
up the machine, answer about two questions... a few minutes' worth of install 
time... and a reboot later... you have what looks very similar to a VMware ESX 
host... with a console (text) login screen that says, "I am a Proxmox VE 
machine... connect to me at the following URL."  Start browsing.

2) It supports both fully virtualized machines AND containers

3) It has a really nice, maturing (two releases so far) web-interface for 
managing everything

4) It is cluster aware - add additional Proxmox VE machines, use the really 
simple command line program to make each machine aware of all of the others... 
and bang, the web interface on any machine sees all of the virtual machines on 
all nodes

5) The web-interface includes VNC-java-applet based access so you can 
graphically attach to any virtual machine including the console of an OpenVZ 
container.  For KVM machines, you get to see the BIOS / boot-from-ISO stage 
before the virtual machine is even on the network

6) Proxmox VE shows the potential of combining a FOSS OS and FOSS 
virtualization products into a completely FOSS product... one that is freaking 
easy to install, set up and use... and that works well

Problems - Some of the features aren't done yet.  Proxmox VE is a little ahead 
of its time.  Live migration doesn't quite work for OpenVZ yet (although it 
works fine in stock OpenVZ with their stable kernel branches).  Proxmox VE is 
still in beta.

Basically, if and when the Proxmox VE concept is fully realized and matured 
it'll be a killer app that makes business folks go... Linux DUH... for those 
that care about virtualization... like yoous guyyys on this hea list.

Ok... so any chance CentOS or some of the members of the CentOS development 
community would like to borrow the Proxmox VE idea (perhaps even their 
web-interface code) and make something like Proxmox VE... but based on 
CentOS... that supports Xen and OpenVZ?  It would have the benefit of being 
able to run on both systems with and without hardware support for 
Virtualization - if you have VT you can run fully virtualized Xen AND 
para-virtualized machines... if not... para-virtualized VMs only... and in both 
cases OpenVZ containers.

The form-factor would have to retain all of the properties I mentioned above... 
I think... for it to be a huge success.

Where to start?  Get two machines to test on... desktops are fine as long as 
they have VT in the CPU.  Download the 250MB Proxmox VE iso.  Burn disc.  Boot 
disc.  Answer two questions.  Wait 5 minutes.  Reboot.  Play with it.  See what 
you think... and use your imagination.  If impressed, plan the take over of the 
world with a similar setup based on CentOS.  Then when RHEL6 comes out and KVM 
is here... switch to KVM.

BTW, the OpenVZ Project had a kernel package built on the RHEL kernel that 
included both Xen and OpenVZ but I can't seem to find it now.

Notice I'm not providing any links to Proxmox VE.  You have to care enough to 
google for it. :)

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] Djbdns Working in VPS ?

2008-08-12 Thread Scott Dowdle
Ludwig,

There isn't anything about djbdns that should cause it not to run inside of an 
OpenVZ container.  Of course, I don't think there are any easy-to-install 
packages for it.

- Lodewijk christoffel [EMAIL PROTECTED] wrote:
> Hi,
>
> I try dig google for this question and found little notes in there,
> I already try it on my VPS machine and end up with nothing working...
>
> I'm using CentOS 4.6 in VPS (openvz), at first i'm trying BIND and
> it's just nice, and now i want to try djbdns...
>
> Is it djbdns working in VPS ? if not, is there anything that i can
> work on it ? i can only afford VPS for now.
>
> Thank you...
>
> Regards,
> Ludwig

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] Virtuozzo GFS

2008-05-31 Thread Scott Dowdle
James,

- James Thompson [EMAIL PROTECTED] wrote:
> I have just finished deploying two Dell PowerEdge 1950s with CentOS
> 5.1 and Virtuozzo 4. GFS is up and running and Virtuozzo is configured
> for shared-storage clustering. Everything works adequately but I am
> wondering if anyone else has experienced load issues like I am seeing.
> I have three VEs/VMs running, two on one node and one on the other
> node. One of the VEs on each node are doing very little (one is just
> idling with apache and mysql and the other is running rsync every six
> hours). The other is running Zimbra. Every so often load will spike on
> the node running the Zimbra VE to as high as 2 or 3 then settle down a
> short while later to around 0.8 or 0.9. During the spikes the node not
> running Zimbra will other see an increase from its idle load of 0.4 or
> so up to as high as 1.7 as I have seen. I notice when running top that
> dlm_send and dlm_recv will jump to the top fairly frequently when
> these load spikes occur.
>
> What I am wondering is whether anyone else has experienced these kind
> of load scenarios with GFS and what they have done to deal with them?
> We are hoping to deploy a bit more densely on this setup so I'd like
> to make any performance adjustments I can at this stage.
>
> Thanks,
> James Thompson

A load of 2-3 isn't much at all... so I don't think I'd call that much of a 
spike.  I have run OpenVZ at work and on a hobby server.  In both cases I have 
about 7 containers... one of them being Zimbra.  The other 6 containers are 
fairly busy so the two machines see a decent amount of load.  I am NOT using 
GFS though.  What are dlm_send and dlm_recv part of?  GFS?

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] i386 VM on x86_64 host in Xen

2007-12-11 Thread Scott Dowdle
Christopher,

- Christopher G. Stach II [EMAIL PROTECTED] wrote:
> It would really suck to have 3795 virtual machines die all at the
> same time from a single kernel panic.

Yes, absolutely it would.  I use the OpenVZ kernels that are based on the RHEL4 
and RHEL5 kernels and I haven't had any problems with them... just like I 
haven't had any problems with the stock RHEL4 and RHEL5 kernels... nor CentOS 
kernels.

I usually end up rebooting host node machines because of kernel upgrades... so 
my machines don't get a chance to have longish uptimes... but on one remote 
colocation machine I have for hobby stuff... it currently has an uptime of 106 
days.  It has 7 VPSes on it and they are fairly fat as they all run a full set 
of services.  I know I've been running that machine for close to 2 years now... 
and if I remember correctly it started out with CentOS 4.0.  I've upgraded to 
each release (on the host node and the VPSes) and am currently at CentOS 4.5.  
I look forward to 4.6.

Here's what they look like (ip addresses and hostnames obscured):

[EMAIL PROTECTED] ~]# vzlist
  VEID  NPROC  STATUS   IP_ADDR      HOSTNAME
   101     53  running  xx.xx.xx.xx  vps101.localdomain
   102     44  running  xx.xx.xx.xx  vps102.localdomain
   103     44  running  xx.xx.xx.xx  vps103.localdomain
   104     32  running  xx.xx.xx.xx  vps104.localdomain
   105    322  running  xx.xx.xx.xx  vps105.localdomain
   106     32  running  xx.xx.xx.xx  vps106.localdomain
   107     29  running  xx.xx.xx.xx  vps107.localdomain

Looking at the number of processes, can you tell which VPS is running Zimbra? :)

6 of the 7 VPSes are CentOS and the remaining 1 is Debian.

Speaking of uptimes, I have a legacy machine at work running Linux-VServer on 
a 2.4.x series kernel.  It had the longest uptime of any machine I've had... 
and was well over 400 days... when a power outage that outlasted its UPS took 
it down.  That particular machine runs three VPSes that are mail 
relay/frontends and they get pounded... so that uptime is notable.

So, my experience has been that physical failures and power failures (although 
pretty rare) are more common than kernel panics that take down all of my 
virtual machines.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]