Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-26 Thread Gena Makhomed

On 26.01.2021 20:24, Scott Dowdle wrote:


Ok, so you are turning off SELinux and using ZFS too?  And you still want to 
stay with EL? Why?


RHEL is more stable than Ubuntu, it has 10 years of support, and rpm installs
packages silently, without the extra questions and dialogues you get in the deb world.
dnf / yum is a very useful toolkit for managing operating system packages.
And there are many other reasons that would be off-topic to discuss here.


Ubuntu and LXD do support ZFS and Canonical's lawyers seem happy to allow ZFS 
to be bundled with Ubuntu by default.  You should get along nicely.


I feel comfortable using RHEL, because I still remember CVE-2008-0166.

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-26 Thread Gena Makhomed

On 26.01.2021 18:41, Scott Dowdle wrote:


Have you tried LXD?


Not yet. My first post on this mailing list
asked if anyone was using LXC in production:

Does anyone use LXC and/or systemd-nspawn
containers on RHEL 8 / CentOS 8 for production?

What are the advantages and disadvantages of each of these technologies?

Can you share your experience with LXC and/or systemd-nspawn
for the RHEL 8 / CentOS 8 operating system on the hardware node?




podman is a replacement for Docker;
it is not a replacement for OpenVZ 6 containers.



Docker definitely targets "Application Containers"... with one service per container.  
podman says they can also do "System Containers" by running systemd as the entry point.  
Of course the vast majority of pre-made container images you'll find in container image 
repositories aren't built for that, but you can use distro provided images and build a system 
container image out of them.  I have a simple recipe for Fedora, CentOS, and Ubuntu.  I don't know 
how many people are using podman in this capacity yet, and I don't know if it is mature or not for 
production... but the limited testing I've done with it, has worked out fairly well... using Fedora 
or CentOS Stream 8 as the host OS...


No problem: systemd-nspawn has also worked out fairly well, without
the extra complexity introduced by podman "System Containers" images.
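
For comparison, a minimal sketch of the systemd-nspawn workflow assumed here,
with a hypothetical new container "200" on the ZFS layout shown further below
(the bootstrap package set is only an example):

zfs create tank/containers/200
dnf -y --releasever=8 --installroot=/tank/containers/200 install systemd dnf passwd
systemd-nspawn -D /tank/containers/200 --boot      # first interactive boot
# to run it as a service, systemd-nspawn@.service expects the tree
# (or a symlink to it) under /var/lib/machines:
ln -s /tank/containers/200 /var/lib/machines/200
systemctl enable --now systemd-nspawn@200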


Yes, podman does still use it's own private network addressing, but I guess 
that can be overcome by telling it to use the host network.  I haven't tried 
that.  Not exactly like OpenVZ's container networking for sure.


I can't use the host network for [system] containers.
Each container must have its own private network.


I have containers with 1.6 TiB of valuable data; podman
is not designed to work in this mode and under such conditions.



Persistent data really isn't an issue.  You just have to understand how it 
works.  Plenty of people run long-term / persistent-data Docker and podman 
containers...


Backing up persistent containers and restoring them from backup is an issue.
I don't want to deal with a mash of different images and layers.

Each of my systemd-nspawn containers is located in a separate filesystem:

# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
tank                  531G  1.13T    96K  /tank
tank/containers       528G  1.13T   168K  /tank/containers
tank/containers/1    19.1G  1.13T  8.00G  /tank/containers/1
tank/containers/100  7.59G  1.13T  6.59G  /tank/containers/100
tank/containers/111   169G  1.13T  27.6G  /tank/containers/111
tank/containers/120  3.05G  1.13T  1.31G  /tank/containers/120
tank/containers/121  10.2G  1.13T  9.20G  /tank/containers/121
tank/containers/122  8.80G  1.13T  7.23G  /tank/containers/122
tank/containers/124  3.20G  1.13T  2.21G  /tank/containers/124
tank/containers/125  3.08G  1.13T  2.12G  /tank/containers/125
tank/containers/126  87.1G  1.13T  64.1G  /tank/containers/126
tank/containers/127   145G  1.13T   125G  /tank/containers/127
tank/containers/128  7.46G  1.13T  5.62G  /tank/containers/128
tank/containers/129  6.04G  1.13T  3.92G  /tank/containers/129
tank/containers/130  5.03G  1.13T  3.01G  /tank/containers/130
tank/containers/131  6.41G  1.13T  2.94G  /tank/containers/131
tank/containers/132  4.55G  1.13T  2.98G  /tank/containers/132
tank/containers/133  22.7G  1.13T  20.6G  /tank/containers/133
tank/containers/134  3.36G  1.13T  1.61G  /tank/containers/134
tank/containers/135  3.82G  1.13T  1.73G  /tank/containers/135
tank/containers/25   1.74G  1.13T   960M  /tank/containers/25
tank/containers/30   2.15G  1.13T  1.35G  /tank/containers/30
tank/containers/97   5.90G  1.13T  2.06G  /tank/containers/97
tank/containers/99   3.15G  1.13T  2.20G  /tank/containers/99

Each filesystem has many snapshots (24 hourly and 30 daily),
which are replicated to a backup server, without the need to stop
any systemd-nspawn container to create a snapshot/backup of it.
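
A minimal sketch of that snapshot/replication flow, with example snapshot
and backup-host names:

zfs snapshot tank/containers/100@hourly-2021-01-26-21
zfs send -i tank/containers/100@hourly-2021-01-26-20 \
            tank/containers/100@hourly-2021-01-26-21 \
    | ssh backup.example.com zfs receive backup/containers/100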


So I have only two alternatives for OS-level virtualization:
LXC or systemd-nspawn.



If CentOS is your target host, I'd guess that neither of those really are a 
good solutions... simply because they aren't supported and upstream doesn't 
care about anything other than podman for containers.


Upstream also doesn't support ZFS, but it is an extraordinary file system
with an excellent feature set.


LXC varies from one distro to the next... with different kernels, and different 
versions of libraries and management scripts.  Again, LXD on an Ubuntu LTS host 
is probably the most stable... with Proxmox VE as a close second.  Both of 
those upstreams care about system containers and put in a lot of effort to make 
it work.


LXC/LXD for CentOS 8 and other Linux distros is distributed
in the form of a snap package. Inside the snap is ordinary Ubuntu.
Google "Install LXC CentOS 8" for more details about this.


Good luck.


Thank you.

Luck is 

Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-26 Thread Gena Makhomed

On 26.01.2021 0:05, Scott Dowdle wrote:


OpenVZ 7 has no updates, and therefore is not suitable for production.



The free updates lag behind the paid Virtuozzo 7 version and plenty of people 
are using it in production. I'm not one of those.


See all released OpenVZ 7 updates:

http://ftp.netinch.com/pub/openvz/virtuozzo/releases/

The lag between two consecutive updates can be up to 4-5 months.

OpenVZ 7 has many other disadvantages, so I can't use it for production.


LXC/LXD is the same technology, as I understand from linuxcontainers.org



LXD is a management layer on top of it which provides for easy clustering and 
even managing VMs.  I think it is the closest thing to vzctl/prlctl from OpenVZ.


"Yes, you could use LXC without LXD. But you probably would not want to.
On its own, LXC will give you only a basic subset of features.
For a production environment, you’ll want to use LXD".
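
For context, a rough sketch of how LXD's CLI maps onto the vzctl-style
operations mentioned above (container name and image alias are only examples):

lxc launch images:centos/8 web01     # ~ vzctl create + vzctl start
lxc exec web01 -- bash               # ~ vzctl enter web01
lxc snapshot web01 before-upgrade    # ~ vzctl snapshot
lxc stop web01                       # ~ vzctl stop web01
lxc list                             # ~ vzlist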


podman can't be a replacement for OpenVZ 6 / systemd-nspawn because
it destroys the root filesystem on container stop, and all
changes made to container configs and other container files will be lost.
This is a nightmare for a website hosting server with containers.



No, it does NOT destroy the delta disk (that's what I call where changes are stored) upon 
container stop and I'm not sure why you think it does.  You can even export a systemd 
unit file to manage the container as a systemd service or user service.  volumes are a 
nice way to handle persistence of data if you want to nuke the existing container and 
make a new one from scratch without losing your data.  While it is true you have to 
approach the container a little differently, podman systemd containers are fairly 
reasonable "system containers".


podman is a replacement for Docker;
it is not a replacement for OpenVZ 6 containers.

I have containers with 1.6 TiB of valuable data; podman
is not designed to work in this mode and under such conditions.

So I have only two alternatives for OS-level virtualization:
LXC or systemd-nspawn.

--
Best regards,
 Gena

___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-25 Thread Gena Makhomed

On 25.01.2021 22:24, Scott Dowdle wrote:


I found only two possible free/open source alternatives for OpenVZ 6:

- LXC
- systemd-nspawn



Some you seem to have overlooked?!?

1) OpenVZ 7
2) LXD from Canonical that is part of Ubuntu
3) podman containers with systemd installed (set /sbin/init as the entry point)


OpenVZ 7 has no updates, and therefore is not suitable for production.

LXC/LXD is the same technology, as I understand from linuxcontainers.org

podman can't be a replacement for OpenVZ 6 / systemd-nspawn because
it destroys the root filesystem on container stop, and all
changes made to container configs and other container files will be lost.
This is a nightmare for a website hosting server with containers.

systemd-nspawn is probably the best fit for my tasks.
But systemd-nspawn also has some major disadvantages
in the current RHEL stable and RHEL beta versions:

https://bugzilla.redhat.com/show_bug.cgi?id=1913734

https://bugzilla.redhat.com/show_bug.cgi?id=1913806

Answering your previous question:

> in the reproduction steps, disabling SELinux is a step?

SELinux must be disabled because, when SELinux is enabled,
it prevents systemd-nspawn containers from starting.

SELinux permissive mode is useless here, because it consumes
more resources than completely disabled SELinux.
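
For completeness, "disabled" here means the standard EL8 procedure
(it takes effect only after a reboot):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
reboot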

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] OS-level virtualization using LXC and systemd-nspawn containers

2021-01-25 Thread Gena Makhomed

Hello All,

In the past, OpenVZ 6 was a very popular technology
for creating OS-level virtualization containers.

But OpenVZ 6 is EOL now (because RHEL 6 / CentOS 6 is EOL),
and all OpenVZ 6 users have to migrate to an alternative.

I found only two possible free/open source alternatives for OpenVZ 6:

- LXC
- systemd-nspawn

Does anyone use LXC and/or systemd-nspawn
containers on RHEL 8 / CentOS 8 for production?

What are the advantages and disadvantages of each of these technologies?

Can you share your experience with LXC and/or systemd-nspawn
for the RHEL 8 / CentOS 8 operating system on the hardware node?



As I understand it, LXC is not supported by Red Hat and should be used
on RHEL at one's own risk?


But, as I understand from the articles

- https://access.redhat.com/solutions/1533893
- https://access.redhat.com/articles/2726611

systemd-nspawn is also not supported by Red Hat and should likewise be used
at one's own risk?


So, is there no difference between LXC and systemd-nspawn, despite the fact
that systemd-nspawn is part of the RHEL 8 operating system
and can be installed on RHEL 8 from the BaseOS repo?

Is there any chance that the support situation for systemd-nspawn
will change in the future, and that this OS-level virtualization technology
will become fully supported in RHEL 8.x or RHEL 9.x?

--
Best regards,
 Gena

___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] centos-qemu-ev repository not work for x86_64 arch on CentOS 7.5

2018-05-15 Thread Gena Makhomed

On 15.05.2018 13:52, Sandro Bonazzola wrote:


failure: repodata/repomd.xml from centos-qemu-ev: [Errno 256] No more mirrors to try.
http://mirror.centos.org/altarch/7/virt/x86_64/kvm-common/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found



Something is wrong with the $contentdir variable:
it points to altarch for the x86_64 $basearch.


can't reproduce on a fresh x86_64 installation. Adding Brian in case he has
a clue for this.


I used a freshly installed x86_64 CentOS 7.5, installed via VNC
with a partially filled anaconda-ks.cfg config.

And I can see in /etc/yum/vars/contentdir:

# cat /etc/yum/vars/contentdir
altarch

As I understand it, on a fresh x86_64 installation
the file /etc/yum/vars/contentdir should contain the value centos.
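
A simpler workaround than editing every repo file is to fix the variable
itself, assuming nothing else on the host relies on the altarch value:

echo centos > /etc/yum/vars/contentdir
yum clean all
yum makecache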

P.S.

I will report this issue to my hoster (hetzner.com);
maybe this is a hoster issue and not a CentOS issue?

But I am not sure that this is a hoster bug;
it may be a CentOS 7.5 anaconda installer bug.

I deleted all anaconda configs and anaconda log files after
installation, so now I can't debug this issue more deeply, sorry.

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] centos-qemu-ev repository not work for x86_64 arch on CentOS 7.5

2018-05-15 Thread Gena Makhomed

Hello, Sandro!

On 15.05.2018 13:24, Gena Makhomed wrote:

failure: repodata/repomd.xml from centos-qemu-ev: [Errno 256] No more 
mirrors to try.
http://mirror.centos.org/altarch/7/virt/x86_64/kvm-common/repodata/repomd.xml: 
[Errno 14] HTTP Error 404 - Not Found


I found a workaround:

# diff -u CentOS-QEMU-EV.repo.old CentOS-QEMU-EV.repo
--- CentOS-QEMU-EV.repo.old 2018-05-15 13:30:27.500156416 +0300
+++ CentOS-QEMU-EV.repo 2018-05-15 13:30:38.980208715 +0300
@@ -5,7 +5,7 @@

 [centos-qemu-ev]
 name=CentOS-$releasever - QEMU EV
-baseurl=http://mirror.centos.org/$contentdir/$releasever/virt/$basearch/kvm-common/
+baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/
 gpgcheck=1
 enabled=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization

=

Now everything works fine.

But as I understand it, this is a bug and it should be fixed.

Something is wrong with the $contentdir variable:
it points to altarch for the x86_64 $basearch.

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Red Hat VirtIO SCSI pass-through controller

2017-03-15 Thread Gena Makhomed

On 14.03.2017 20:05, Gena Makhomed wrote:


When I use "VirtIO SCSI controller" in virt-manager,
and when I install Windows Server 2012 R2 -
I can use only viostor driver and can't use vioscsi driver,
Windows Server 2012 R2 can't load vioscsi driver (?)
from virtio-win.iso and did not see virtual hard disks
with vioscsi driver. Only viostor driver work normally.

How can I use the "Red Hat VirtIO SCSI pass-through controller"
for a Windows Server 2012 R2 virtual machine? What do I need
to configure in the XML config?

current /etc/libvirt/qemu/windows.xml config:

[disk and controller XML stripped by the list archive]


I found the solution.

bus='virtio' is the paravirtualized block device (virtio-blk).
For using the "Red Hat VirtIO SCSI pass-through controller" vioscsi driver,
I need to use only SCSI virtual disks (bus='scsi'), not VirtIO ones.
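
A hedged sketch of what that change looks like when applied with virsh edit
(element and attribute names follow libvirt's domain XML format; the exact
disk definitions depend on the VM):

virsh edit windows
#   add a virtio-scsi controller:
#       <controller type='scsi' index='0' model='virtio-scsi'/>
#   switch each disk from bus='virtio' to the SCSI bus:
#       <target dev='sda' bus='scsi'/>
#   enable UNMAP pass-through on each disk driver:
#       <driver name='qemu' ... discard='unmap'/>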

Correct config fragment:

[virtio-scsi controller and SCSI disk XML stripped by the list archive;
only fragments such as discard='unmap' and function='0x0' survive]

Now everything works fine: I installed Windows Server 2012 R2 with the vioscsi driver.

And now Windows Server 2012 R2 sees the disks as "thin provisioned drives".
(The SCSI UNMAP feature works only with the vioscsi driver, not with viostor.)

There is no more need to use sdelete.exe -z to reclaim unused VM disk space. :-)
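
One host-side way to watch the space actually come back, assuming the guest
disk is backed by a ZFS zvol (the dataset name is hypothetical):

zfs get -H -o value referenced tank/vm/windows-disk0   # note the value
# run "Optimize Drives" (retrim) inside the guest, then check again:
zfs get -H -o value referenced tank/vm/windows-disk0   # drops if UNMAP reached the host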

More details:

https://technet.microsoft.com/en-us/library/hh831391(v=ws.11).aspx
Thin Provisioning and Trim Storage Overview

https://kallesplayground.wordpress.com/2014/04/26/storage-reclamation-part-2-windows/comment-page-1/
Storage reclamation – part 2 – Windows

--
Best regards,
 Gena

___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Red Hat VirtIO SCSI pass-through controller

2017-03-15 Thread Gena Makhomed

On 15.03.2017 17:42, Jean-Marc LIGER wrote:


When I use "VirtIO SCSI controller" in virt-manager,
and when I install Windows Server 2012 R2 -
I can use only viostor driver and can't use vioscsi driver,
Windows Server 2012 R2 can't load vioscsi driver (?)
from virtio-win.iso and did not see virtual hard disks
with vioscsi driver. Only viostor driver work normally.


[...]


Additional info:


virtio-win.iso version: virtio-win-0.1.126.iso
from https://fedoraproject.org/wiki/Windows_Virtio_Drivers


Maybe you can try the latest virtio-win-0.1.133.iso:
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso


Ok, I tested with virtio-win.iso versions

0.1.96-1
0.1.102-1
0.1.110-1
0.1.126-2
0.1.133-2

- nothing helps; I can't load the vioscsi driver in Windows Server 2012 R2.

I also tested Windows Server 2008 R2 - it also can't load the vioscsi driver.

I also tested with qemu-kvm-ev-2.6.0-28.el7_3.3.1.x86_64 - it also can't
load the vioscsi driver.


I also tested two different CentOS 7.3 servers, with different CPUs:
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Intel(R) Atom(TM) CPU  C2750  @ 2.40GHz
- on both, the vioscsi driver can't be loaded in the Windows Server 2012 R2 setup.

Why can't I load the vioscsi driver in a Windows virtual machine?

Am I doing something wrong?


Or try to reinstall the latest spice guest tools 0.132 :
https://www.spice-space.org/download/binaries/spice-guest-tools/spice-guest-tools-latest.exe


I can't install Windows Server 2012 R2 with the vioscsi driver;
how can the spice guest tools help with that?


OS: CentOS Linux release 7.3.1611 (Core)

QEMU: qemu-kvm-1.5.3-126.el7_3.5.x86_64


Why don't you upgrade qemu-kvm to qemu-kvm-ev ?



I already upgraded; it does not help.

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Red Hat VirtIO SCSI pass-through controller

2017-03-14 Thread Gena Makhomed

Hello, All!

virtio-win.iso contains two different Windows drivers.
These Windows Server 2012 R2 drivers have different hardware IDs:

\vioscsi\2k12R2\amd64\vioscsi.inf
"Red Hat VirtIO SCSI pass-through controller"
PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
PCI\VEN_1AF4&DEV_1048&SUBSYS_11001AF4&REV_01

\viostor\2k12R2\amd64\viostor.inf
"Red Hat VirtIO SCSI controller"
PCI\VEN_1AF4&DEV_1001&SUBSYS_00021AF4&REV_00
PCI\VEN_1AF4&DEV_1042&SUBSYS_11001AF4&REV_01

When I use "VirtIO SCSI controller" in virt-manager,
and when I install Windows Server 2012 R2 -
I can use only viostor driver and can't use vioscsi driver,
Windows Server 2012 R2 can't load vioscsi driver (?)
from virtio-win.iso and did not see virtual hard disks
with vioscsi driver. Only viostor driver work normally.

How can I use the "Red Hat VirtIO SCSI pass-through controller"
for a Windows Server 2012 R2 virtual machine? What do I need
to configure in the XML config?

current /etc/libvirt/qemu/windows.xml config:

[disk and controller XML stripped by the list archive;
only fragments such as discard='unmap' and function='0x0' survive]



Additional info:


virtio-win.iso version: virtio-win-0.1.126.iso
from https://fedoraproject.org/wiki/Windows_Virtio_Drivers

OS: CentOS Linux release 7.3.1611 (Core)

QEMU: qemu-kvm-1.5.3-126.el7_3.5.x86_64

Windows Server 2012 R2 MSDN installation ISO image:
en_windows_server_2012_r2_with_update_x64_dvd_6052708.iso
sha1sum: 865494e969704be1c4496d8614314361d025775e
size: 5397889024 bytes

devcon.exe output inside Windows Server 2012 R2 Virtual Machine:

PCI\VEN_1AF4_1001_00021AF4_00\3&13C0B0C5&0&40
Name: Red Hat VirtIO SCSI controller
Hardware IDs:
PCI\VEN_1AF4_1001_00021AF4_00
PCI\VEN_1AF4_1001_00021AF4
PCI\VEN_1AF4_1001_01
PCI\VEN_1AF4_1001_0100
Compatible IDs:
PCI\VEN_1AF4_1001_00
PCI\VEN_1AF4_1001
PCI\VEN_1AF4_01
PCI\VEN_1AF4_0100
PCI\VEN_1AF4
PCI\CC_01
PCI\CC_0100
PCI\VEN_1AF4_1001_00021AF4_00\3&13C0B0C5&0&50
Name: Red Hat VirtIO SCSI controller
Hardware IDs:
PCI\VEN_1AF4_1001_00021AF4_00
PCI\VEN_1AF4_1001_00021AF4
PCI\VEN_1AF4_1001_01
PCI\VEN_1AF4_1001_0100
Compatible IDs:
PCI\VEN_1AF4_1001_00
PCI\VEN_1AF4_1001
PCI\VEN_1AF4_01
PCI\VEN_1AF4_0100
PCI\VEN_1AF4
PCI\CC_01
PCI\CC_0100
PCI\VEN_1AF4_1004_00081AF4_00\3&13C0B0C5&0&38
Name: Red Hat VirtIO SCSI controller
Hardware IDs:
PCI\VEN_1AF4_1004_00081AF4_00
PCI\VEN_1AF4_1004_00081AF4
PCI\VEN_1AF4_1004_01
PCI\VEN_1AF4_1004_0100
Compatible IDs:
PCI\VEN_1AF4_1004_00
PCI\VEN_1AF4_1004
PCI\VEN_1AF4_01
PCI\VEN_1AF4_0100
PCI\VEN_1AF4
PCI\CC_01
PCI\CC_0100

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] kvm-qemu-ev in testing

2015-12-19 Thread Gena Makhomed

On 30.11.2015 19:02, Jean-Marc LIGER wrote:


Is it possible to add patch
https://bugzilla.redhat.com/show_bug.cgi?id=1248758
into qemu-kvm-ev from Red Hat, oVirt and cbs/centos ?



Could you rediff this patch for qemu-kvm-ev 2.3.0 series ?


I have already spent too much time on this patch...

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] kvm-qemu-ev in testing

2015-10-30 Thread Gena Makhomed

On 29.10.2015 0:00, Sandro Bonazzola wrote:


it will be in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
enabled by
http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-qemu-ev-1.0-1.el7.noarch.rpm


Is it possible to add patch
https://bugzilla.redhat.com/show_bug.cgi?id=1248758
into qemu-kvm-ev from Red Hat, oVirt and cbs/centos ?

Rebuilding qemu-kvm-ev from sources for each release is just a waste of time.

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Adding customized options to qemu command line

2015-08-18 Thread Gena Makhomed

On 18.08.2015 14:44, C. L. Martinez wrote:


How can I add some options to qemu command line when a kvm guest
starts up from libvirtd??


# virsh edit vm-name

1. change the first line from <domain type='kvm'> to
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

2. add

  <qemu:commandline>
    <qemu:arg value='-acpitable'/>
    <qemu:arg value='file=/path/to/SLIC.BIN'/>
  </qemu:commandline>

before the </domain> tag

3. If you need these qemu options for adding a SLIC table, you also need
to patch QEMU to add a workaround for the Windows SLIC processing bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1248758

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] qemu-kvm-ev with CentOS 7.1

2015-07-31 Thread Gena Makhomed

On 30.07.2015 15:25, Sandro Bonazzola wrote:


Note that qemu-kvm-ev
is built within Virt SIG too in kvm-common-testing CBS repo


Packages from kvm-common-testing
are probably not a good choice for use in a production environment.

I found this package on the internet:
http://cbs.centos.org/repos/virt7-kvm-common-release/source/SRPMS/qemu-kvm-ev-2.1.2-23.el7_1.3.1.src.rpm

http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7/SRPMS/qemu-kvm-ev-2.1.2-23.el7_1.3.1.src.rpm

==

As I understand from the wiki https://wiki.centos.org/HowTos/oVirt,
oVirt is the upstream for Red Hat's Enterprise Virtualization product.

At first glance, http://cbs.centos.org/repos/ is an unofficial source:
even https://bugzilla.redhat.com/enter_bug.cgi?classification=__all
knows only about Red Hat Enterprise Virtualization Manager
and oVirt, and it knows nothing about the kvm-common-testing CBS repo.

Searching Google for "qemu-kvm-ev site:https://bugs.centos.org/",
I found only one ticket, https://bugs.centos.org/view.php?id=8407,
in the CentOS bug tracker.

I am not sure whether I can use packages from http://cbs.centos.org/repos/
and report bugs to bugzilla.redhat.com for the oVirt Project.

At first glance, if I want to report bugs to the oVirt Project bugzilla,
I must also use the qemu-kvm-ev packages from the oVirt Project repository.

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] qemu-kvm SLIC acpitable workaround of Windows bug

2015-07-30 Thread Gena Makhomed

On 30.07.2015 10:49, Nux! wrote:


Then you should definitely submit a bug with redhat about this, seems like a 
serious one.


Ok, done:

https://bugzilla.redhat.com/show_bug.cgi?id=1248758

P.S.

As I can see, bugzilla.redhat.com for the oVirt product
does not contain a qemu-kvm-ev component at all - it looks
like this is yet another bug, in the bugzilla settings.

But I can't find how to report this bugzilla misconfiguration,
so I just filed this oVirt bug report as a bug report for the package
qemu-kvm-rhev from the Red Hat Enterprise Virtualization Manager product.

I hope this helps.


- Original Message -



On 29.07.2015 21:34, Nux! wrote:


Yes, you can.
In fact you can use the binaries from the ovirt repo itself, no need to rebuild.


Thank you!

In fact, I can't use the raw binaries from the ovirt repo itself,
because these qemu-kvm binaries contain one bug
which is already fixed in Debian:

If you want to migrate Windows from a hardware node
to a VM using CentOS 7.1 on the hardware node and a libvirt xml config:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
.
   <qemu:commandline>
     <qemu:arg value='-acpitable'/>
     <qemu:arg value='file=/sys/firmware/acpi/tables/SLIC'/>
   </qemu:commandline>
</domain>

Windows does not work correctly in this case, because Windows requires
that the oem_id and oem_table_id from the SLIC table also be placed
into the oem_id and oem_table_id of the RSDT.

The Debian version of qemu-kvm contains a workaround for this Windows bug,
and on Debian the Windows VM works fine. But the CentOS packages
do not contain such a workaround, so qemu-kvm-ev currently must be patched
manually with each new release.

The patch was already created by Michael Tokarev in 2014:
it is the file mjt-set-oem-in-rsdt-like-slic.diff
from https://packages.debian.org/jessie/qemu-kvm

This patch also applies cleanly to qemu-kvm-ev-2.1.2-23.el7_1.3.1

See mjt-set-oem-in-rsdt-like-slic.diff
and qemu-kvm.spec.patch in the attachment for details.

After executing rpmbuild -ba qemu-kvm.spec
you can place the new qemu-kvm binaries into
/srv/download/centos/7/privat/x86_64/Packages,
create a local repo, and use it for upgrading the rpm packages;
for example, see privat.repo and privat-createrepo-7-x86_64
in the attachment.

==

It would be better if this workaround for the Windows bug were included
in the RHEL/CentOS ovirt repo binaries; this would allow
anybody to easily migrate Windows from hardware nodes
to VMs and to easily run CentOS/RHEL on the hardware nodes.


P.S.
  After patching qemu-kvm, the -acpitable
  option works without any bugs:

   # man qemu-kvm

 -acpitable [sig=str][...]
 If a SLIC table is supplied to qemu,
 then the oem_id from the SLIC table
 will be copied into the RSDT table
 (this is a Debian addition).


- Original Message -



Is it possible to use binary packages built from
http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7/SRPMS/qemu-kvm-ev-2.1.2-23.el7_1.3.1.src.rpm
with plain CentOS 7.1 and to use all other packages from CentOS
(libvirt, virt-manager, etc.)?

Does that make sense if I don't use live migrations and qcow2 snapshots?
(I instead use zfs, zvols, and zfs snapshots for online backups of VM disks.)

Does using qemu-kvm-ev with CentOS 7.1 have any disadvantages?


--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] CentOS 7.1 + qemu-kvm-ev + SLIC acpitable windows bug workaround

2015-07-29 Thread Gena Makhomed

On 29.07.2015 21:34, Nux! wrote:


Yes, you can.
In fact you can use the binaries from the ovirt repo itself, no need to rebuild.


Thank you!

In fact, I can't use the raw binaries from the ovirt repo itself,
because these qemu-kvm binaries contain one bug
which is already fixed in Debian:

If you want to migrate Windows from a hardware node
to a VM using CentOS 7.1 on the hardware node and a libvirt xml config:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
.
  <qemu:commandline>
    <qemu:arg value='-acpitable'/>
    <qemu:arg value='file=/sys/firmware/acpi/tables/SLIC'/>
  </qemu:commandline>
</domain>

Windows does not work correctly in this case, because Windows requires
that the oem_id and oem_table_id from the SLIC table also be placed
into the oem_id and oem_table_id of the RSDT.

The Debian version of qemu-kvm contains a workaround for this Windows bug,
and on Debian the Windows VM works fine. But the CentOS packages
do not contain such a workaround, so qemu-kvm-ev currently must be patched
manually with each new release.

The patch was already created by Michael Tokarev in 2014:
it is the file mjt-set-oem-in-rsdt-like-slic.diff
from https://packages.debian.org/jessie/qemu-kvm

This patch also applies cleanly to qemu-kvm-ev-2.1.2-23.el7_1.3.1

See mjt-set-oem-in-rsdt-like-slic.diff
and qemu-kvm.spec.patch in the attachment for details.

After executing rpmbuild -ba qemu-kvm.spec
you can place the new qemu-kvm binaries into
/srv/download/centos/7/privat/x86_64/Packages,
create a local repo, and use it for upgrading the rpm packages;
for example, see privat.repo and privat-createrepo-7-x86_64
in the attachment.
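
A sketch of that local-repo step (the actual privat.repo and createrepo
script are in the attachments, so the file contents below are an assumption):

yum -y install createrepo
createrepo /srv/download/centos/7/privat/x86_64
cat > /etc/yum.repos.d/privat.repo <<'EOF'
[privat]
name=privat packages
baseurl=file:///srv/download/centos/7/privat/x86_64/
enabled=1
gpgcheck=0
EOF
yum clean all && yum -y upgrade 'qemu-*'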

==

It would be better if this workaround for the Windows bug were included
in the RHEL/CentOS ovirt repo binaries; this would allow
anybody to easily migrate Windows from hardware nodes
to VMs and to easily run CentOS/RHEL on the hardware nodes.


P.S.
 After patching qemu-kvm, the -acpitable
 option works without any bugs:

  # man qemu-kvm

-acpitable [sig=str][...]
If a SLIC table is supplied to qemu,
then the oem_id from the SLIC table
will be copied into the RSDT table
(this is a Debian addition).


- Original Message -



Is it possible to use binary packages built from
http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7/SRPMS/qemu-kvm-ev-2.1.2-23.el7_1.3.1.src.rpm
with plain CentOS 7.1 and to use all other packages from CentOS
(libvirt, virt-manager, etc.)?

Does that make sense if I don't use live migrations and qcow2 snapshots?
(I instead use zfs, zvols, and zfs snapshots for online backups of VM disks.)

Does using qemu-kvm-ev with CentOS 7.1 have any disadvantages?


--
Best regards,
 Gena
--- qemu-kvm.spec.orig  2015-05-15 13:27:19.0 +0300
+++ qemu-kvm.spec   2015-07-29 23:25:04.421842297 +0300
@@ -93,7 +93,7 @@
 Version: 2.1.2
 Release: 23%{?dist}_1.3.1
 # Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
-Epoch: 10
+Epoch: 77
 License: GPLv2+ and LGPLv2+ and BSD
 Group: Development/Tools
 URL: http://www.qemu.org/
@@ -847,6 +847,8 @@
 Patch341: kvm-block-Fix-max-nb_sectors-in-bdrv_make_zero.patch
 # For bz#1219271 - 
 Patch342: kvm-fdc-force-the-fifo-access-to-be-in-bounds-of-the-all.patch
+# copy OEM ACPI parameters from SLIC table to RSDT
+Patch999: mjt-set-oem-in-rsdt-like-slic.diff 
 
 BuildRequires: zlib-devel
 BuildRequires: SDL-devel
@@ -1400,6 +1402,7 @@
 %patch340 -p1
 %patch341 -p1
 %patch342 -p1
+%patch999 -p1
 
 %build
 buildarch=%{kvm_target}-softmmu
commit 4933716beef26b353a1c374d6d8e6dd2e09333af
Author: Michael Tokarev m...@tls.msk.ru
Date:   Sat Apr 5 19:17:54 2014 +0400
Subject: copy OEM ACPI parameters from SLIC table to RSDT

When building RSDT table, pick OEM ID fields from user-supplied SLIC
table instead of using hard-coded QEMU defaults.  This way, say,
OEM version of Windows7 can be run inside qemu using the same OEM
activation as on bare metal, by pointing at system firmware:

  -acpitable file=/sys/firmware/acpi/tables/SLIC

Windows7 requires that OEM ID in RSDT matches those in SLIC to
consider SLIC to be valid.

This is somewhat hackish approach, but it works fairy well in
practice.

Signed-off-by: Michael Tokarev m...@tls.msk.ru

diff --git a/hw/acpi/core.c b/hw/acpi/core.c
index 79414b4..a8a3f26 100644
--- a/hw/acpi/core.c
+++ b/hw/acpi/core.c
@@ -53,6 +53,7 @@ static const char unsigned dfl_hdr[ACPI_TABLE_HDR_SIZE - ACPI_TABLE_PFX_SIZE] =
 
 char unsigned *acpi_tables;
 size_t acpi_tables_len;
+size_t slic_table_offset;
 
 static QemuOptsList qemu_acpi_opts = {
     .name = "acpi",
@@ -226,6 +227,10 @@ static void acpi_table_install(const char unsigned *blob, size_t bloblen,
     /* recalculate checksum */
     ext_hdr->checksum = acpi_checksum((const char unsigned *)ext_hdr +
                                       ACPI_TABLE_PFX_SIZE, acpi_payload_size);
+
+    if (memcmp(ext_hdr->sig, "SLIC", 4) == 0) {
+        slic_table_offset = acpi_tables_len - acpi_payload_size;
+    }
 }
 
 void 

[CentOS-virt] qemu-kvm-ev with CentOS 7.1

2015-07-29 Thread Gena Makhomed

Hello, All!

Is it possible to use binary packages built from
http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7/SRPMS/qemu-kvm-ev-2.1.2-23.el7_1.3.1.src.rpm
with plain CentOS 7.1 and to use all other packages from CentOS
(libvirt, virt-manager, etc.)?

Does that make sense if I don't use live migrations and qcow2 snapshots?
(I instead use zfs, zvols, and zfs snapshots for online backups of VM disks.)

Does using qemu-kvm-ev with CentOS 7.1 have any disadvantages?

--
Best regards,
 Gena
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt