Hi,
is anyone else currently receiving old mails from 2016 via the mailing list?
We are being spammed with messages like this via teknikservice.nu:
...
Received: from mail.ovirt.org (localhost [IPv6:::1]) by mail.ovirt.org
(Postfix) with ESMTP id A33EA46AD3; Tue, 14 May 2019 14:48:48 -0400 (EDT)
Received:
ve any idea?
Markus
____
From: Ala Hino [ah...@redhat.com]
Sent: Thursday, 6 October 2016 12:29
To: Markus Stockhausen
Subject: Re: [ovirt-users] Cleanup illegal snapshot
Indeed, retry live merge. There is no harm in retrying live merge. As
mentioned, if the image deleted
Bug with logs attached:
https://bugzilla.redhat.com/show_bug.cgi?id=1383084
Best regards.
Markus
From: Nir Soffer [nsof...@redhat.com]
Sent: Sunday, 9 October 2016 20:37
To: Markus Stockhausen
Cc: Ala Hino; Ovirt Users
Subject: Re: [ovirt-users] Cleanup
Hi Ala,
> From: Adam Litke [ali...@redhat.com]
> Sent: Friday, 30 September 2016 15:54
> To: Markus Stockhausen
> Cc: Ovirt Users; Ala Hino; Nir Soffer
> Subject: Re: [ovirt-users] Cleanup illegal snapshot
>
> On 30/09/16 05:47 +0000, Markus Stockhausen wrote:
>
t it right?
Markus
---
From: Ala Hino [ah...@redhat.com]
Sent: Thursday, 6 October 2016 11:21
To: Markus Stockhausen
Cc: Ovirt Users; Nir Soffer; Adam Litke
Subject: Re: [ovirt-users] Cleanup illegal snapshot
Hi Markus,
What's the version that you
> From: Michal Skrivanek [michal.skriva...@redhat.com]
> Sent: Friday, 15 February 2019 18:53
> To: Erick Perez
> Cc: users@ovirt.org
> Subject: [ovirt-users] Re: Centos 7.6 and kernel upgrading
>
> > On 14 Feb 2019, at 21:41, Erick Perez wrote:
> >
> > Good day,
> > What is the Ovirt
Kind regards,
Markus Stockhausen
Head of Software Technology
Ubierring 11 · 50678 Köln
Phone: +49 221 33 608 611
Mobile: +49 151 12040606
Mail: markus.stockhau...@collogia.de
Web: www.collogia-it-services.de
SELinux might block access here.
Markus
On 04.10.2018 01:57, ryan.terps...@gmail.com wrote:
I have a ceph filesystem that I can manually mount on my ovirt host.
[root@ovirt121 ~]# mount -t ceph ceph01:/ /mnt -o name=admin,secret=
[root@ovirt121 ~]# touch /mnt/test
works great! Then I umount
Hi Sandro,
I'm wondering if BZ1513362 (AIO stuck, fixed in qemu-kvm-rhev-2.9.0-16.el7_4.12)
makes it worth giving the newer version a try.
Best regards.
Markus
-
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of Sandro
Bonazzola [sbona...@redhat.com]
Hi,
given that a current yum update will bring CentOS 7.4 and qemu 2.9
to the nodes, I wonder if a Gluster update through the oVirt repos is already
close to release? Not only is 3.8 EOL, but I also like to minimize the update
steps to a new stable package level.
Best regards.
Markus
Jun 25, 2017 at 10:31 PM, Markus Stockhausen
<stockhau...@collogia.de<mailto:stockhau...@collogia.de>> wrote:
Hi,
we are currently evaluating NFS 4.2 based storage for OVirt 4.1.2. Normal
operation
and discard support work like a charm.
For some strange reason we cannot use VM live migration any more. As soon as
one
NFS 4.2 based VM disk is doing disk I/O during the operation, the VM stalls and is
Maybe NFS mounts with version 4.2 and no SELinux nfs_t rule
defined on the server side?
Sent from mobile...
On 19.06.2017 11:01 AM, Moritz Baumann wrote:
> Is there a way to "reinitialize" the lockspace so one node can become
> SPM again and we can run VMs.
errors
> From: Yaniv Kaul [yk...@redhat.com]
> Sent: Sunday, 18 June 2017 09:58
> To: Markus Stockhausen
> Cc: Ovirt Users
> Subject: Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS
> contraproductive
On Sat, Jun 17, 2017 at 1:25 AM, Markus Stockhausen
<st
Thanks for all your feedback.
I'm trying to collect all the info in BZ1462504.
From: Fabrice Bacchella [fabrice.bacche...@orange.fr]
Sent: Sunday, 18 June 2017 10:13
To: Idan Shaby
Cc: Markus Stockhausen; Ovirt Users
Subject: Re: [ovirt-users] OVirt
Hi,
we just set up a new 4.1.2 oVirt cluster. It is a quite normal
HDD/XFS/NFS stack that worked quite well with 4.0 in the past.
Inside the VMs we use XFS too.
To our surprise we observe abysmally high IO during mkfs.xfs
and fstrim inside the VM. A simple example:
Step 1: Create 100G Thin disk
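The in-VM part of such a test can be sketched like this (a hedged sketch; the device name /dev/vdb and the mount point are assumptions, run only inside a throwaway VM):

```shell
# Inside the test VM: format the thin-provisioned disk with XFS,
# then issue a discard pass while watching I/O on the host.
mkfs.xfs -f /dev/vdb     # format the 100G thin disk
mount /dev/vdb /mnt
fstrim -v /mnt           # discard unused blocks
# On the host, in parallel:
#   iostat -xm 2         # watch write amplification on the NFS backend
```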
Hi Fernando,
we personally like XFS very much. But XFS + qcow2 (even for snapshots in oVirt)
comes close to a no-go these days. We are experiencing excessive fragmentation.
For more info see unresolved Redhat Info:
https://access.redhat.com/solutions/532663
Even with tuning the XFS allocation
if there might
be the possibility to relocate the VM between them online (VMware 6
and higher).
From a technical perspective oVirt virtualization has the same limits. So
set up small dedicated Windows (or Oracle) clusters to keep costs down.
Markus
Kind regards
Markus Stockhau
Hi,
works now as expected.
Markus
Kind regards
Markus Stockhausen
Team Lead, Software Technology
___
Ubierring 11 · 50678 Köln
Phone: +49 221 336 08-0
Mobile: +49 151 12040 606
E-mail: stockhau...@collogia.de
Thanks a lot,
I will test and give feedback.
Markus
On 29.03.2017 5:13 PM, Filip Krepinsky <fkrep...@redhat.com> wrote:
On Wed, Mar 29, 2017 at 2:29 PM, Filip Krepinsky
<fkrep...@redhat.com<mailto:fkrep...@redhat.com>> wrote:
On Mon, Mar 27, 2017 at 1:15 PM,
Hi there,
my smartphone updated moVirt to 1.7 these days. Since then I always
get errors when trying to access the disks dialogue of a VM in moVirt.
It boils down to the URL
https:///ovirt-engine/api/vms//disks
Result is always 404.
A simple cross-check in the web browser returns the same
Hi,
just want to know if I can upgrade a 4.0.5 oVirt environment
to 4.1 with cluster/DC compatibility currently still set to 3.6. Or
do I need to upgrade the compatibility level first?
IIRC the upgrade to 4.0 required at least compatibility 3.6.
Best regards.
Markus
Hi Yaniv,
for better tracking I opened BZ 1413847.
Best regards.
Markus
This e-mail contains confidential and/or legally protected
information. If you are not the intended recipient or received this e-mail
Hi there,
maybe I missed the discussion on the mailing list. Today we installed
a new CentOS host. Of course it has 7.3 and qemu 2.6 after a yum update.
It can be attached to our cluster without problems. We are running oVirt
4.0.6 but the cluster compatibility level is still 3.6.
We can
Hi,
we are running InfiniBand on the NFS storage network only. Did I get
it right that this works, or do you already have issues there?
Best regards.
Markus
Web: www.collogia.de
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
unt point to 1M
(xfs_io -c 'extsize 1m' /var/nas/OVirt). If this does not help I will send you
some
update.
Best regards.
Markus
>>>
From: Maor Lipchuk [mlipc...@redhat.com]
Sent: Sunday, 6 November 2016 16:33
To: Markus Stockhausen
Cc: Ovirt Users
Subject: Re: [ovirt-users] Live
-578f1f3f06ee
du -m c8acdbc7-af24-4c5c-94c5-ae7262d98f5c
183222 c8acdbc7-af24-4c5c-94c5-ae7262d98f5c
Usually at some point the process gains speed and we see >100 MByte/sec.
Can anyone explain what might be going on?
Best regards.
Markus Stockhau
Hi,
if an oVirt snapshot is illegal we might have 2 situations.
1) qemu is still using it - lsof shows qemu access to the base raw and the
delta qcow2 file. -> E.g. a previous live merge failed. In the past we
successfully solved that situation by setting the status of the delta image
in the
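For context, "setting the status of the delta image" usually means flipping imagestatus in the engine database. A hedged sketch only: the table and column names and the status values below are from my memory of the 3.x engine schema, so verify them against your engine version and back up the database first:

```sql
-- On the engine host, against the 'engine' database (e.g. via psql).
-- imagestatus: 1 = OK, 2 = LOCKED, 4 = ILLEGAL (values from memory, verify!)
SELECT image_guid, imagestatus FROM images WHERE imagestatus = 4;
UPDATE images SET imagestatus = 1 WHERE image_guid = '<delta-image-uuid>';
```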
Hi there.
several Red Hat BZs are currently targeting a live migration error with current
qemu 2.3/2.6 versions. From my understanding BZ1359731 tries to fix a queue
overflow issue. With the patch in place qemu might abort randomly during
live migration. BZ1372763 provides info about additional
Thanks for the tips.
None of them helped. I opened BZ1376156.
Best regards.
Markus
From: Nir Soffer [nsof...@redhat.com]
Sent: Wednesday, 14 September 2016 19:56
To: Markus Stockhausen
Cc: Ovirt Users
Subject: Re: [ovirt-users] Cannot relocate SPM
Hi there,
trying to relocate the SPM (OVirt 3.6.7) we get the following error:
Error while executing action: Cannot force select SPM. The Storage
Pool has running tasks.
Any idea what is wrong? The oVirt WebGUI shows no running tasks.
Best regards.
Markus
image files
(on our NFS), merging images on the command line or manipulating the DB.
First shot would be: Stop VM & backup images + snapshots for the recovery case.
Afterwards try to
start the VM and see what happens.
Best regards.
Markus Stockhausen
From: O
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> Ollie Armstrong [ol...@fubra.com]
> Sent: Friday, 5 August 2016 11:39
> To: users@ovirt.org
> Subject: [ovirt-users] VM storage issue after snapshot deletion
>
> Hi everyone.
>
> I'm having an issue with a VM after
I know of at least one live disk migration issue with multi-disk VMs.
https://bugzilla.redhat.com/show_bug.cgi?id=1319400
It might be totally different, but I must admit that this feature has had
several ups and downs over the last years.
Markus
On 26.05.2016 3:50 AM, Christopher Cox wrote:
Hi,
we've seen a lot of trouble when deleting snapshots of multi-disk VMs.
Endlessly running snapshot deletions in particular are one of our culprits.
More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400
Markus
From: users-boun...@ovirt.org
> From: Yaniv Kaul [yk...@redhat.com]
> Sent: Tuesday, 19 April 2016 10:41
> To: Markus Stockhausen
> Cc: Sandro Bonazzola [sbona...@redhat.com]; users@ovirt.org
> Subject: Re: [ovirt-users] qemu patch in OVirt repos
>
>
> On Tue, Apr 19, 2016 at 11:33 AM, Markus
Hi Sandro,
I don't exactly know the process by which qemu 2.3 patches get into the
oVirt repos. You are probably someone who knows better.
Will it be possible to get a version with a fix for BZ1319400 into the
tree? See https://bugzilla.redhat.com/show_bug.cgi?id=1319400
It's a quite nasty bug
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> Clint Boggio [cl...@theboggios.com]
> Sent: Monday, 18 April 2016 14:16
> To: users@ovirt.org
> Subject: [ovirt-users] Disks Illegal State
>
> OVirt 3.6, 4 node cluster with dedicated engine. Main storage domain is
>
works. So we currently have the following situation:
Live merge (for all disks) stalls the VM.
Live merge (for single disks) seems to work, but the logs give other info.
Markus
From: Gianluca Cecchi [gianluca.cec...@gmail.com]
Sent: Wednesday, 13 April 2016 20:59
To: Markus Stockhausen
Cc: Pavel
From: Pavel Gashev [p...@acronis.com]
Sent: Wednesday, 13 April 2016 15:12
To: Markus Stockhausen; users
Subject: Re: RE: [ovirt-users] stalls during live Merge Centos 7 / qemu 2.3
Markus,
So all CPU threads are blocked by the main loop. The main loop is busy draining
IO requests from all drives
Ok will give it a try...
From: Pavel Gashev [p...@acronis.com]
Sent: Wednesday, 13 April 2016 15:12
To: Markus Stockhausen; users
Subject: Re: RE: [ovirt-users] stalls during live Merge Centos 7 / qemu 2.3
Markus,
So all CPU threads are blocked
Hi there,
I'm slowly going mad over our new CentOS 7 cluster. Whenever
I start a live merge the machine completely freezes. It seems to be
independent of the guest OS (tried SLES 11 SP3, SLES 11 SP4 and
SLES 12). I already opened BZ1319400 because I'm clueless.
Doing the same in our Fedora 20
> From: Nicolas Ecarnot [nico...@ecarnot.net]
> Sent: Sunday, 3 April 2016 21:32
> To: Markus Stockhausen; users@ovirt.org
> Subject: Re: RE: [ovirt-users] heavy webadmin
>
> On 03/04/2016 21:25, Markus Stockhausen wrote:
> > switch refresh interval to 60s.
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> Nicolas Ecarnot [nico...@ecarnot.net]
> Sent: Sunday, 3 April 2016 21:20
> To: users@ovirt.org
> Subject: Re: [ovirt-users] heavy webadmin
>
> On 03/04/2016 17:13, Greg Sheremeta wrote:
> > We have patches in
> From: Francesco Romani [from...@redhat.com]
> Sent: Monday, 22 February 2016 09:06
> To: Markus Stockhausen
> Cc: users
> Subject: Re: [ovirt-users] Going crazy with memory hotplug on 3.6
>
> ----- Original Message -----
> > From: "Markus Stockhausen" <
> From: Nir Soffer [nsof...@redhat.com]
> Sent: Sunday, 21 February 2016 14:10
> To: Markus Stockhausen; Francesco Romani
> Cc: users
> Subject: Re: [ovirt-users] Going crazy with memory hotplug on 3.6
>
> Adding Francesco.
>
> On Sun, Feb 21, 2016 at 2:19 PM, Mar
Hi there,
we upgraded oVirt to 3.6, added the first CentOS 7 host and created a new
cluster with compatibility level 3.6 around it. Until now we have been running
with Fedora nodes.
The first Linux VMs are already running in the new cluster. With the first
Windows
VM migrated over we once again
> From: Vinzenz Feenstra [vfeen...@redhat.com]
> Sent: Tuesday, 12 January 2016 09:00
> To: Markus Stockhausen
> Cc: users@ovirt.org; Mike Hildebrandt
> Subject: Re: [ovirt-users] NFS IO timeout configuration
> > Hi there,
> >
> > we got a nasty situa
>> From: Yaniv Kaul [yk...@redhat.com]
>> Sent: Tuesday, 12 January 2016 13:15
>> To: Markus Stockhausen
>> Cc: users@ovirt.org; Mike Hildebrandt
>> Subject: Re: [ovirt-users] NFS IO timeout configuration
>>
>> On Tue, Jan 12, 2016 at 9:32 AM,
Hi there,
we got into a nasty situation yesterday in our oVirt 3.5.6 environment.
We ran an LSM that failed during the cleanup operation, to be precise
when the process deleted an image on the source NFS storage.
The engine log gives:
2016-01-11 20:49:45,120 INFO
Hi there,
with the advent of oVirt 3.6 and our aging FC20 nodes I'm searching
for a replacement. Until today I always matched oVirt on CentOS 7.1
hypervisors with qemu 2.1.2. That would make no difference to our
already running Fedora virt-preview 2.1.3 version.
Looking at
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of Budur
> Nagaraju [nbud...@gmail.com]
> Sent: Monday, 9 November 2015 05:53
> To: users
> Subject: [ovirt-users] Multiple console access
>
> Hi,
>
> I am using the SPICE console to access the VM console; how do I enable
Nice to hear. Congratulations and thumbs up.
Markus
P.S. The usual delay of two months seems to have become common courtesy.
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Sandro Bonazzola [sbona...@redhat.com]
Sent: Wednesday,
Hi Jasper,
from time to time we see similar behaviour. All of a sudden a VM pauses due to
some IO error. But it can take 5 months to occur. Our
/var/log/libvirt/qemu/.log gives
qemu-system-x86_64: block.c:2806: bdrv_error_action: Assertion `error >= 0'
failed.
Currently we are waiting to
> From: Greg Padgett [gpadg...@redhat.com]
> Sent: Saturday, 19 September 2015 02:19
> To: Markus Stockhausen
> Cc: Users@ovirt.org
> Subject: Re: [ovirt-users] Live Storage Migration
>
> On 09/14/2015 05:20 AM, Markus Stockhausen wrote:
> > Hi,
> >
> > s
Do you have a chance to install qemu-debug? If yes, I would try a backtrace.
gdb -p
# bt
Markus
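Assuming gdb and the qemu debuginfo packages are installed, a non-interactive backtrace of a stuck qemu process might look like this (a sketch; the process name qemu-kvm is an assumption, adjust to qemu-system-x86_64 if needed):

```shell
# Attach briefly, dump backtraces of all threads, then detach.
gdb -p "$(pidof qemu-kvm)" -batch -ex "thread apply all bt"
```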
On 15.09.2015 4:15 PM, Daniel Helgenberger wrote:
Hello,
I do not want to hijack the thread but maybe my issue is related?
It might have started with ovirt
.
Markus
**
From: Christian Hailer [christ...@hailer.eu]
Sent: Tuesday, 15 September 2015 21:24
To: Markus Stockhausen; 'Daniel Helgenberger'
Cc: yd...@redhat.com; users@ovirt.org
Subject: RE: [ovirt-users] Some VMs in status "not responding" in oVirt
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> Lionel Caignec [caig...@cines.fr]
> Sent: Monday, 14 September 2015 15:47
> To: users@ovirt.org
> Subject: [ovirt-users] [HA] Restart guest on other node on network SAN
> problem
>
> Hi,
>
> I have oVirt nodes
Hi,
somehow I got lost about the possibility to do a live storage migration.
We are using oVirt 3.5.4 + FC20 nodes (virt-preview - qemu 2.1.3).
From the WebUI I have the following possibilities:
1) disk without snapshot: VMs tab -> Disks -> Move: the button is active
but it does not allow to do a
"yum update" on the hosts only.
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Jason Keltz [j...@cse.yorku.ca]
Sent: Wednesday, 9 September 2015 21:08
To: users
Subject: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4
Hi there,
we noticed that a newly created NFS data domain is mounted with the
UDP protocol. Does anyone know if that is the desired behaviour
of current oVirt versions?
ovirtnode# mount -a
...
10.10.30.254:/var/nas4/OVirtIB on
/rhev/data-center/mnt/10.10.30.254:_var_nas4_OVirtIB
type nfs
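As a cross-check, forcing TCP on a manual mount should show whether the negotiated transport is the culprit (a sketch; the export path and mount point below are taken from the output above, but treat them as placeholders):

```shell
# Manual NFS mount forcing TCP, then verify the negotiated options.
mount -t nfs -o vers=4,proto=tcp 10.10.30.254:/var/nas4/OVirtIB /mnt/test
grep OVirtIB /proc/mounts    # look for proto=tcp in the option list
```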
From my experience I strongly advise avoiding a drop_caches
with option 3. We still see commands hanging for hours
if they are issued during high CPU/memory load, even with
mature Ubuntu 3.16 kernels.
Staying with option 1 should be enough and much safer.
Markus
From: Markus Stockhausen
Sent: Friday, 10 April 2015 20:51
To: users@ovirt.org
Subject: Live migration qemu 2.1.2 - 2.1.3: Unknown savevm section
Hi,
don't know what will be the best place for the following question.
So starting with the OVirt mailing list.
We are using OVirt
Did you try increasing the refresh interval to 60? It seemed to help for me,
especially over WAN connections.
Markus
On 06.05.2015 3:48 PM, lofyer lof...@lofyer.org wrote:
I've installed ovirt-engine-3.5.1 and created about 120 VMs.
Every time I do a multi-selection it takes a not-so-short time
Hi,
although we have already upgraded several hypervisor nodes to oVirt 3.5.1,
the newest upgrade has left the host in a very strange state. We did:
- Host was removed from cluster
- oVirt 3.5 repo was activated on host
- Host was reinstalled from engine
And we got:
- A host that is active and looks
From: Paul Heinlein [heinl...@madboa.com]
Sent: Wednesday, 18 March 2015 18:43
To: Markus Stockhausen
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Live migration fails - domain not found -
On Wed, 18 Mar 2015, Markus Stockhausen wrote:
although we already upgraded several
Hi,
back in December there was a discussion about oVirt on Fedora 21. From
my point of view that was about ovirt-engine. So I'm somehow lost as to
whether Fedora 21 is at least supported as a hypervisor host. Anyone with
deeper knowledge?
The reason I'm asking:
We are currently on FC20 + virt-preview and
Hi,
just installed moVirt on my mobile. Upon connection it breaks with the attached
error in the 3.5.1 engine server logs.
Am I missing something?
Markus
2015-02-19 12:21:19,757 ERROR
[org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/ovirt-engine/api]]
From: Juan Hernández [jhern...@redhat.com]
Sent: Thursday, 19 February 2015 12:53
To: Markus Stockhausen; users@ovirt.org
Subject: Re: [ovirt-users] movirt - ovirt 3.5.1 - server error 500
On 02/19/2015 12:22 PM, Markus Stockhausen wrote:
Hi,
just installed movirt on my mobile
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Darrell Budic [bu...@onholyground.com]
Sent: Friday, 13 February 2015 19:03
To: Nicolas Ecarnot
Cc: users
Subject: Re: [ovirt-users] How long do your migrations last?
I’m under the impression it depends
Memory usage 80%: KSM kicks in. There it will run at full speed until usage
is below 80%. There is an open BZ from me. The bad behaviour is controlled by MoM.
Markus
On 06.12.2014 15:58, mad Engineer themadengin...@gmail.com wrote:
Hello All,
I am using centos6.5 x64 on a server with
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Brian Proffitt [bprof...@redhat.com]
Sent: Wednesday, 26 November 2014 17:01
To: users
Cc: board
Subject: [ovirt-users] oVirt Weekly Sync: November 26, 2014
...
* 3.6 status Still gathering 3.6 features
[users-boun...@ovirt.org] on behalf of Markus
Stockhausen [stockhau...@collogia.de]
Sent: Friday, 21 November 2014 08:38
To: Gianluca Cecchi
Cc: s k; users@ovirt.org
Subject: Re: [ovirt-users] Simple way to activate live merge in FC20 cluster
Wow. Very quick test. Thanks for sharing
IIRC you simply need libvirt 1.2.9.
On 20.11.2014 16:20, Bob Doolittle b...@doolittle.us.com wrote:
Are there any bugs related to the changes in question that we can track
so we know when the changes are reflected in our distros of interest?
Thanks,
Bob
On 11/20/2014 03:51 AM, s k wrote:
Doolittle [b...@doolittle.us.com]
Sent: Thursday, 20 November 2014 16:49
To: Markus Stockhausen
Cc: s k; users@ovirt.org; Daniel Helgenberger; Coffee Chou
Subject: Re: [ovirt-users] Live Merge Functionality disabled on CentOS 6.6 Node
and oVirt 3.5.0
On 11/20/2014 10:32 AM, Markus Stockhausen
everything from virt-preview
- Wait for FC21
- Wait for CentOS 21
Still a long way to go to get all the beloved features out of the box.
Markus
From: Bob Doolittle [b...@doolittle.us.com]
Sent: Thursday, 20 November 2014 19:38
To: Markus Stockhausen
Wow. Very quick test. Thanks for sharing the results. I will have a look at what
qemu 1.6.2 might need.
Regarding stability of qemu 2.1.2: you should scan the qemu-stable mailing list
to see if severe fixes have been posted after the release. If you feel
comfortable, take qemu from the preview
Hi Ernest,
we have similar issues with IPoIB. To fix them we use VDSM hooks:
# cat /usr/libexec/vdsm/hooks/before_vdsm_start/network.sh
...
ethtool -K ib0 tso off 2>/dev/null
ethtool -K ib1 tso off 2>/dev/null
...
Nevertheless this is similar to running self-defined init scripts.
But at least I
Hi,
maybe a stupid one. Just want to make sure that nothing goes wrong. We
plan to do a rolling upgrade of our landscape to 3.5.1 in December.
So we always need some hypervisor nodes up and running during the
process.
Looking at older posts I assume that upgrading the engine should be
Hi,
sorry, I forgot that: NFS; the engine is running in a qemu VM outside the cluster.
Markus
From: Gabi C [gab...@gmail.com]
Sent: Friday, 31 October 2014 12:12
To: Markus Stockhausen
Cc: ovirt-users
Subject: Re: [ovirt-users] Upgrade order 3.4.2 - 3.5.x
Hello
Do you see swapping on the SPM? If yes, a regular echo 3 to drop_caches could
help.
Markus
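The drop_caches knob mentioned above is the usual sysctl interface (a sketch; needs root, and note it only drops clean caches, not dirty pages):

```shell
sync                                   # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches      # 3 = pagecache + dentries/inodes
```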
On 27.10.2014 10:57, Stefan Wendler stefan.wend...@tngtech.com wrote:
Hi,
we have some really large snapshots left over from a migration. Since our
store is almost full, we have to delete them now.
Some
From: Stefan Wendler [stefan.wend...@tngtech.com]
Sent: Monday, 27 October 2014 11:39
To: Markus Stockhausen
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Deleting large snapshots blocks the whole cluster
Hi,
do you mean during snapshot deletion or in general?
In general we do
Hi,
we have been running the hyperv flags for months in our Win7 VMs without any
issue. As we are still on oVirt 3.4 FC20 infrastructure we set them with hooks
(and really depend on them).
Maybe RHEL related?
Markus
On 24.10.2014 18:22, Charles Gruener cgrue...@gruener.us wrote:
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of Trey
Dockendorf [treyd...@gmail.com]
Sent: Sunday, 19 October 2014 20:43
To: Arman Khalatyan
Cc: users
Subject: Re: [ovirt-users] How to add custom lines in to host interface?
I'd be interested in this too. I
If you are talking about ib0 and so on, this will be fixed with 3.5. The
interfaces will then be advertised as 10 Gbit.
On 18.10.2014 13:14, Arman Khalatyan arm2...@gmail.com wrote:
Hi,
I am using ovirt 3.4.4-1.
On the hosts I have 1 Gbit (eth), 10 Gbit (eth) and 40 Gbit (QDR InfiniBand) interfaces.
ib
Forgot the CC.
-- Forwarded message --
From: Markus Stockhausen stockhau...@collogia.de
Date: 09.10.2014 21:42
Subject: Re: [ovirt-users] Cluster settings: KSM Control not working?
To: Frank Wall f...@moov.de
Cc:
Have a look at Red Hat Bugzilla #1114226. It should give
Are you running FC20 on the hypervisor host, and if yes, what kernel?
On 27.09.2014 02:39, Grant Pasley gr...@xtranet.com.au wrote:
Good morning guys,
I have an issue with my 2008 VM going into pause within 5 secs of starting it up.
New install of oVirt 3.4.4 on an HP DL160; installed the Windows VM
Hi,
will be fixed in 3.5. Until then you should set the hv_relaxed options via a hook.
See: https://bugzilla.redhat.com/show_bug.cgi?id=1110305
Best regards.
Markus
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of Carlos
Castillo
users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Federico Alberto Sayd [fs...@uncu.edu.ar]
Sent: Thursday, 24 July 2014 18:16
To: users@ovirt.org
Subject: [ovirt-users] Disk migration eats all CPU, vms running in SPM
become unresponsive
Hello:
I
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of André
Freitas [afrei...@ubiwhere.com]
Sent: Wednesday, 16 July 2014 15:22
To: users@ovirt.org
Subject: [ovirt-users] Removal of snapshot taking too long
Hi,
I don't know if it's normal, but I'm having situations
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Maurice James [mja...@media-node.com]
Sent: Friday, 27 June 2014 01:42
To: users
Subject: [ovirt-users] Spam Latency
I noticed that the following operations take way, way too long:
Any type of import/export (VM,
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Maurice James [mja...@media-node.com]
Sent: Monday, 30 June 2014 16:33
To: Brian Proffitt
Cc: users
Subject: [ovirt-users] Spam Re: [Video]: New Live Migration Progress Bar
for oVirt
is that progress
Hello,
right now we modify /etc/vdsm/vdsm.conf on each host to raise the migration
bandwidth on our IPoIB network. Is there any place to set it permanently?
If not I'll file an RFE.
Markus
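For reference, the setting we touch is the migration bandwidth cap in vdsm.conf (a sketch from memory of the 3.x vdsm options; the key name, unit, and default should be verified against your vdsm version):

```ini
# /etc/vdsm/vdsm.conf
[vars]
# Maximum migration bandwidth per VM, in MiB/s (default was 32, IIRC)
migration_max_bandwidth = 500
```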
.
Markus
From: Douglas Schilling Landgraf [dougsl...@redhat.com]
Sent: Tuesday, 24 June 2014 23:23
To: Dan Kenigsberg; Markus Stockhausen
Cc: ovirt-users
Subject: Re: [ovirt-users] FC20 vdsmd broken after latest yum update
On 06/24/2014 04:35 AM, Dan
Hi,
after a maintenance of one of our hosts, vdsmd does not start anymore.
Error could be narrowed down to the following command:
[root ~]# /usr/bin/vdsm-tool is-configured
Traceback (most recent call last):
File /usr/bin/vdsm-tool, line 145, in module
sys.exit(main())
File
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Markus Stockhausen [stockhau...@collogia.de]
Sent: Tuesday, 24 June 2014 09:53
To: ovirt-users
Subject: [ovirt-users] FC20 vdsmd broken after latest yum update
Hi,
after a maintenance of one of our hosts, vdsmd