Hi,
does anyone else currently receive old mails from 2016 via the mailing list?
We are being spammed with messages like this from teknikservice.nu:
...
Received: from mail.ovirt.org (localhost [IPv6:::1]) by mail.ovirt.org
(Postfix) with ESMTP id A33EA46AD3; Tue, 14 May 2019 14:48:48 -0400 (EDT)
Received: by
Do you have any idea?
Markus
From: Ala Hino [ah...@redhat.com]
Sent: Thursday, October 6, 2016 12:29
To: Markus Stockhausen
Subject: Re: [ovirt-users] Cleanup illegal snapshot
Indeed, retry live merge. There is no harm in retrying live merge. As
mentioned, if the ima
Bug with logs attached:
https://bugzilla.redhat.com/show_bug.cgi?id=1383084
Best regards.
Markus
From: Nir Soffer [nsof...@redhat.com]
Sent: Sunday, October 9, 2016 20:37
To: Markus Stockhausen
Cc: Ala Hino; Ovirt Users
Subject: Re: [ovirt-users] Cleanup
Hi Ala,
> From: Adam Litke [ali...@redhat.com]
> Sent: Friday, September 30, 2016 15:54
> To: Markus Stockhausen
> Cc: Ovirt Users; Ala Hino; Nir Soffer
> Subject: Re: [ovirt-users] Cleanup illegal snapshot
>
> On 30/09/16 05:47 +0000, Markus Stockhausen wrote:
>
t it right?
Markus
---
From: Ala Hino [ah...@redhat.com]
Sent: Thursday, October 6, 2016 11:21
To: Markus Stockhausen
Cc: Ovirt Users; Nir Soffer; Adam Litke
Subject: Re: [ovirt-users] Cleanup illegal snapshot
Hi Markus,
What's the version tha
> From: Michal Skrivanek [michal.skriva...@redhat.com]
> Sent: Friday, February 15, 2019 18:53
> To: Erick Perez
> Cc: users@ovirt.org
> Subject: [ovirt-users] Re: Centos 7.6 and kernel upgrading
>
> > On 14 Feb 2019, at 21:41, Erick Perez wrote:
> >
> > Good day,
> > What is the Ovirt positio
Best regards,
Markus Stockhausen
Head of Software Technology
Ubierring 11 · 50678 Köln
Phone: +49 221 33 608 611
Mobile: +49 151 12040606
Mail: markus.stockhau...@collogia.de
Web: www.collogia-it-services.de
SELinux might block access here.
Markus
On 04.10.2018 01:57, ryan.terps...@gmail.com wrote:
I have a ceph filesystem that I can manually mount on my ovirt host.
[root@ovirt121 ~]# mount -t ceph ceph01:/ /mnt -o name=admin,secret=
[root@ovirt121 ~]# touch /mnt/test
works great! Then I umount t
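If SELinux is the suspect, as the reply above suggests, a quick check is to look for recent AVC denials and compare behavior in permissive mode. A minimal sketch; the ceph grep filter and the permissive-mode test are generic assumptions, not an oVirt-specific recipe:

```shell
# Print the current SELinux mode; falls back to "Disabled" if tools are absent.
mode=$(getenforce 2>/dev/null || echo Disabled)
echo "SELinux mode: $mode"

# List recent AVC denials mentioning ceph, if the audit tools are installed.
command -v ausearch >/dev/null && ausearch -m AVC -ts recent 2>/dev/null | grep -i ceph || true

# To test the theory, temporarily switch to permissive mode and retry the
# mount (revert with "setenforce 1" afterwards):
# setenforce 0
```

If the mount works in permissive mode, the audit log should point at the exact denial to fix.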
Hi Sandro,
I'm wondering whether BZ1513362 (AIO stuck, fixed in qemu-kvm-rhev-2.9.0-16.el7_4.12)
makes it worth giving the newer version a try.
Best regards.
Markus
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of "Sandro
Bonazzola [sbona...@redhat.com]
Sent
Hi,
given the fact that a current yum update will bring CentOS 7.4 and qemu 2.9
to the nodes, I wonder if a gluster update through the oVirt repos is already close
to release? Not only is 3.8 EOL, but I'd also like to minimize the update steps to
a new stable package level.
Best regards.
Markus
Hi,
we are currently evaluating NFS 4.2 based storage for OVirt 4.1.2. Normal
operation and discard support work like a charm.
For some strange reason we cannot use VM live migration any more. As soon as
one NFS 4.2 based VM disk is doing disk I/O during the operation, the VM stalls and is
paused
Maybe NFS mounts with version 4.2 and no SELinux nfs_t rule defined on the
server side?
Sent from mobile...
On 19.06.2017 11:01 AM, Moritz Baumann wrote:
> Is there a way to "reinitialize" the lockspace so one node can become
> SPM again and we can run VMS.
errors in /var/log/sanlock.log look
> From: Yaniv Kaul [yk...@redhat.com]
> Sent: Sunday, June 18, 2017 09:58
> To: Markus Stockhausen
> Cc: Ovirt Users
> Subject: Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS
> counterproductive
On Sat, Jun 17, 2017 at 1:25 AM, Markus Stockhausen
mailto:stock
Thanks for all your feedback.
I'm trying to collect all the info in BZ1462504.
From: Fabrice Bacchella [fabrice.bacche...@orange.fr]
Sent: Sunday, June 18, 2017 10:13
To: Idan Shaby
Cc: Markus Stockhausen; Ovirt Users
Subject: Re: [ovirt-users] OVirt
Hi,
we just set up a new 4.1.2 oVirt cluster. It is a quite normal
HDD/XFS/NFS stack that worked quite well with 4.0 in the past.
Inside the VMs we use XFS too.
To our surprise we observe abysmally high I/O during mkfs.xfs
and fstrim inside the VM. A simple example:
Step 1: Create 100G Thin disk
Res
Hi Fernando,
we personally like XFS very much. But XFS + qcow2 (even for snapshots in OVirt)
comes close to a no-go these days. We are experiencing excessive fragmentation.
For more info see this unresolved Red Hat article:
https://access.redhat.com/solutions/532663
Even with tuning the XFS allocation poli
rs if there might
be the possibility to relocate the VM between them online (VMWare 6
and higher).
From a technical perspective OVirt virtualization has the same limits. So
set up small dedicated Windows (or Oracle) clusters to keep costs down.
Markus
Best regards,
Markus Sto
Hi,
works now as expected.
Markus
Best regards,
Markus Stockhausen
Team Leader Software Technology
Ubierring 11 · 50678 Köln
Phone: +49 221 336 08-0
Mobile: +49 151 12040 606
E-Mail: stockhau...@collogia.de
Thanks a lot,
I will test and give feedback.
Markus
On 29.03.2017 5:13 PM, Filip Krepinsky wrote:
On Wed, Mar 29, 2017 at 2:29 PM, Filip Krepinsky
mailto:fkrep...@redhat.com>> wrote:
On Mon, Mar 27, 2017 at 1:15 PM, Markus Stockhausen
mailto:stockhau...@collogia.de>&g
Hi there,
my smartphone updated mOVirt to 1.7 these days. Since then I always
get errors when trying to access the disks dialog of a VM in mOVirt.
It boils down to the URL
https:///ovirt-engine/api/vms//disks
Result is always 404.
A simple cross check in the web browser returns the same result
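For cross-checking, the same resource can be queried with curl. The engine host, credentials, and VM id below are placeholders; note that oVirt's API v4 exposes VM disks via diskattachments rather than the v3-era /disks collection, which may explain the 404:

```shell
# Placeholders -- adjust engine URL, credentials and VM id for your setup.
ENGINE="https://engine.example.com"
VMID="123e4567-e89b-12d3-a456-426614174000"
V3_URL="$ENGINE/ovirt-engine/api/vms/$VMID/disks"            # v3-style path
V4_URL="$ENGINE/ovirt-engine/api/vms/$VMID/diskattachments"  # v4-style path

# Compare both endpoints (-k only if the engine uses a self-signed cert).
curl -ks --max-time 5 -u admin@internal:password "$V3_URL" || true
curl -ks --max-time 5 -u admin@internal:password "$V4_URL" || true
```

If the v4 path answers and the v3 path does not, the client is simply using the old URL scheme.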
Hi,
just want to know if I can upgrade a 4.0.5 OVirt environment
to 4.1 with cluster/dc compatibility currently still set to 3.6? Or
do I need to upgrade compatibility level first?
IIRC upgrade to 4.0 required at least compatibility 3.6.
Best regards.
Markus
Hi Yaniv,
for better tracking I opened BZ 1413847.
Best regards.
Markus
Hi there,
maybe I missed the discussion on the mailing list. Today we installed
a new CentOS host. Of course it has 7.3 and qemu 2.6 after a yum update.
It can be attached to our cluster without problems. We are running Ovirt
4.0.6 but the cluster compatibility level is still 3.6.
We can migrate
Hi,
we are running Infiniband on the NFS storage network only. Did I get
it right that this works, or do you already have issues there?
Best regards.
Markus
Web: www.collogia.de
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
"cl.
unt point to 1M
(xfs_io -c 'extsize 1m' /var/nas/OVirt). If this does not help I will send you
some
update.
Best regards.
Markus
From: Maor Lipchuk [mlipc...@redhat.com]
Sent: Sunday, November 6, 2016 16:33
To: Markus Stockhausen
Cc: Ovirt Users
Subject: Re: [ovirt-u
-578f1f3f06ee
du -m c8acdbc7-af24-4c5c-94c5-ae7262d98f5c
183222 c8acdbc7-af24-4c5c-94c5-ae7262d98f5c
Usually at some point the process gains speed and we see >100 MByte/sec.
Can anyone explain what might be going on?
Best regards.
Markus Stockhau
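A rough way to watch the copy throughput is to sample du periodically and compute the delta. The directory name is taken from the du output above; the sampling loop itself is just a sketch:

```shell
# Sample the image directory size twice and print the apparent copy rate.
DIR="${DIR:-c8acdbc7-af24-4c5c-94c5-ae7262d98f5c}"  # image dir from the du output above
INTERVAL="${INTERVAL:-10}"                          # sampling interval in seconds

before=$(du -ms "$DIR" 2>/dev/null | awk '{print $1}'); before=${before:-0}
sleep "$INTERVAL"
after=$(du -ms "$DIR" 2>/dev/null | awk '{print $1}'); after=${after:-0}

# Rate = growth in MByte divided by the sampling interval.
awk -v a="$before" -v b="$after" -v t="$INTERVAL" \
    'BEGIN { printf "%.1f MByte/sec\n", (b - a) / t }'
```

Run it a few times during the copy to see whether the speed-up correlates with anything else on the host (cache pressure, other I/O).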
Hi,
if an OVirt snapshot is illegal we might have 2 situations.
1) qemu is still using it - lsof shows qemu access to the base raw and the
delta qcow2 file. -> E.g. a previous live merge failed. In the past we
successfully solved that situation by setting the status of the delta image
in the da
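The database fix mentioned above usually means flipping the image status in the engine DB. The table/column names and status codes (4 = ILLEGAL, 1 = OK) are assumptions from common practice, not verified against a specific engine version; always back up the database first:

```shell
# SQL used against the engine database. Table/column names and status codes
# (4 = ILLEGAL, 1 = OK) are assumptions -- verify against your engine version
# and back up the DB before changing anything.
CHECK_SQL="SELECT image_guid, imagestatus FROM images WHERE imagestatus = 4;"
FIX_SQL="UPDATE images SET imagestatus = 1 WHERE image_guid = '<uuid>';"

# On the engine host (uncomment to actually execute):
# su - postgres -c "psql engine -c \"$CHECK_SQL\""
# su - postgres -c "psql engine -c \"$FIX_SQL\""
echo "$CHECK_SQL"
```

Run the SELECT first to confirm which image is marked illegal before updating a single, specific image_guid.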
Hi there.
several Red Hat BZs are currently targeting a live migration error with current
qemu 2.3/2.6 versions. From my understanding BZ1359731 tries to fix a queue
overflow issue. With the patch in place qemu might abort randomly during
live migration. BZ1372763 provides info about additional pat
Thanks for the tips.
None of them helped. I opened BZ1376156.
Best regards.
Markus
From: Nir Soffer [nsof...@redhat.com]
Sent: Wednesday, September 14, 2016 19:56
To: Markus Stockhausen
Cc: Ovirt Users
Subject: Re: [ovirt-users] Cannot relocate SPM
Hi there,
trying to relocate the SPM (OVirt 3.6.7) we get the following error:
Error while executing action: Cannot force select SPM. The Storage
Pool has running tasks.
Any idea what is wrong? The oVirt WebGui shows no running tasks.
Best regards.
Markus
image files
(on our NFS), merging images on the command line or manipulating the DB.
First shot would be: stop the VM & back up images + snapshots for the recovery case.
Afterwards try to
start the VM and see what happens.
Best regards.
Markus Stockhausen
From: O
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Ollie Armstrong >[ol...@fubra.com]
> Sent: Friday, August 5, 2016 11:39
> To: users@ovirt.org
> Subject: [ovirt-users] VM storage issue after snapshot deletion
>
> Hi everyone.
>
> I'm having an issue with a VM afte
I know of at least one live Disk Migration issue with Multi Disk VMs.
https://bugzilla.redhat.com/show_bug.cgi?id=1319400
Might be totally different, but I must admit that this feature has had several ups
and downs over the last few years.
Markus
On 26.05.2016 3:50 AM, Christopher Cox wrote:
In our old
Hi,
we've seen a lot of trouble when deleting snapshots of multi-disk VMs.
Endlessly running snapshot deletions in particular are one of our pain points.
More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400
Markus
From: users-boun...@ovirt.org [users-boun..
> From: Yaniv Kaul [yk...@redhat.com]
> Sent: Tuesday, April 19, 2016 10:41
> To: Markus Stockhausen
> Cc: Sandro Bonazzola [sbona...@redhat.com]; users@ovirt.org
> Subject: Re: [ovirt-users] qemu patch in OVirt repos
>
>
> On Tue, Apr 19, 2016 at 11:3
Hi Sandro,
I don't exactly know the process by which qemu 2.3 patches get into the
oVirt repos, but you are probably someone who knows better.
Will it be possible to get a version with a fix for BZ1319400 into the
tree? See https://bugzilla.redhat.com/show_bug.cgi?id=1319400
It's a quite nasty bug an
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Clint Boggio [cl...@theboggios.com]
> Sent: Monday, April 18, 2016 14:16
> To: users@ovirt.org
> Subject: [ovirt-users] Disks Illegal State
>
> OVirt 3.6, 4 node cluster with dedicated engine. Main storage domain is
eally works. So we currently have the following situation:
Live merge (for all disks) stalls the VM
Live merge (for single disks) seems to work but logs give other info.
Markus
From: Gianluca Cecchi [gianluca.cec...@gmail.com]
Sent: Wednesday, April 13, 2016 20:59
To: Markus Stockhausen
Cc:
: Pavel Gashev [p...@acronis.com]
Sent: Wednesday, April 13, 2016 15:12
To: Markus Stockhausen; users
Subject: Re: AW: [ovirt-users] stalls during live Merge Centos 7 / qemu 2.3
Markus,
So all CPU threads are blocked by the main loop. The main loop is busy draining
IO requests from all drives
OK, will give it a try...
From: Pavel Gashev [p...@acronis.com]
Sent: Wednesday, April 13, 2016 15:12
To: Markus Stockhausen; users
Subject: Re: AW: [ovirt-users] stalls during live Merge Centos 7 / qemu 2.3
Markus,
So all CPU threads are blocked by the
> From: Pavel Gashev [p...@acronis.com]
> Sent: Tuesday, April 12, 2016 16:15
> To: Markus Stockhausen; users
> Subject: Re: [ovirt-users] stalls during live Merge Centos 7 / qemu 2.3
>
> Markus,
>
> I saw similar issues. Looks like it's related to multidisk VM
Hi there,
I'm slowly going mad over our new CentOS 7 cluster. Whenever
I start a live merge the machine completely freezes. It seems to be
independent of the guest OS (tried SLES 11 SP3, SLES 11 SP4 and
SLES12). I already opened BZ1319400 because I'm clueless.
Doing the same in our Fedora 20 c
> From: Nicolas Ecarnot [nico...@ecarnot.net]
> Sent: Sunday, April 3, 2016 21:32
> To: Markus Stockhausen; users@ovirt.org
> Subject: Re: AW: [ovirt-users] heavy webadmin
>
> On 03/04/2016 21:25, Markus Stockhausen wrote:
> > switch refresh interval to 60s.
> &
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Nicolas Ecarnot [nico...@ecarnot.net]
> Sent: Sunday, April 3, 2016 21:20
> To: users@ovirt.org
> Subject: Re: [ovirt-users] heavy webadmin
>
> On 03/04/2016 17:13, Greg Sheremeta wrote:
> > We have patches in rev
> From: Francesco Romani [from...@redhat.com]
> Sent: Monday, February 22, 2016 09:06
> To: Markus Stockhausen
> Cc: users
> Subject: Re: [ovirt-users] Going crazy with memory hotplug on 3.6
>
> ----- Original Message -----
> > From: "Markus Stockhausen"
> From: Nir Soffer [nsof...@redhat.com]
> Sent: Sunday, February 21, 2016 14:10
> To: Markus Stockhausen; Francesco Romani
> Cc: users
> Subject: Re: [ovirt-users] Going crazy with memory hotplug on 3.6
>
> Adding Francesco.
>
> On Sun, Feb 21, 2016 at 2:19 PM,
Hi there,
we upgraded oVirt to 3.6, added the first CentOS 7 host and created a new
cluster
with compatibility level 3.6 around it. Until now we have been running Fedora
nodes.
The first Linux VMs are already running in the new cluster. With the first
Windows
VM migrated over we once again face
>> From: Yaniv Kaul [yk...@redhat.com]
>> Sent: Tuesday, January 12, 2016 13:15
>> To: Markus Stockhausen
>> Cc: users@ovirt.org; Mike Hildebrandt
>> Subject: Re: [ovirt-users] NFS IO timeout configuration
>>
>> On Tue, Jan 12, 2016 at 9:32 AM,
> From: Vinzenz Feenstra [vfeen...@redhat.com]
> Sent: Tuesday, January 12, 2016 09:00
> To: Markus Stockhausen
> Cc: users@ovirt.org; Mike Hildebrandt
> Subject: Re: [ovirt-users] NFS IO timeout configuration
> > Hi there,
> >
> > we got a nasty situa
Hi there,
we got into a nasty situation yesterday in our OVirt 3.5.6 environment.
We ran an LSM that failed during the cleanup operation. To be precise,
when the process deleted an image on the source NFS storage.
Engine log gives:
2016-01-11 20:49:45,120 INFO
[org.ovirt.engine.core.vdsbroker.irsb
Hi there,
with the advent of oVirt 3.6 and our aging FC20 nodes I'm searching
for a replacement. Until today I always matched oVirt on CentOS 7.1
hypervisors to qemu 2.1.2. That would make no difference to our
already running Fedora virt-preview 2.1.3 version.
Looking at http://resources.ovir
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of "Budur
> Nagaraju [nbud...@gmail.com]
> Sent: Monday, November 9, 2015 05:53
> To: users
> Subject: [ovirt-users] Multiple console access
>
> Hi,
>
> I am using the SPICE console to access the VM console; how to enable multi
Nice to hear. Congratulations and thumbs up.
Markus
P.S. The usual delay of two months seems to have become common practice.
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
"Sandro Bonazzola [sbona...@redhat.com]
Sent: Wednesday
Hi Jasper,
from time to time we see a similar behaviour. All of a sudden a VM pauses due
to
some IO error. But it takes 5 months to occur. Our
/var/log/libvirt/qemu/.log gives
qemu-system-x86_64: block.c:2806: bdrv_error_action: Assertion `error >= 0'
failed.
Currently we are waiting to capt
> From: Greg Padgett [gpadg...@redhat.com]
> Sent: Saturday, September 19, 2015 02:19
> To: Markus Stockhausen
> Cc: Users@ovirt.org
> Subject: Re: [ovirt-users] Live Storage Migration
>
> On 09/14/2015 05:20 AM, Markus Stockhausen wrote:
> > Hi,
> >
> > s
| awk '{ print $2 }'`
echo 0 > /proc/$libvirtpid/coredump_filter
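The truncated snippet above apparently determines the libvirtd PID first and then clears its coredump_filter. A complete sketch; the pgrep-based lookup is an assumption standing in for the original ps/awk pipeline:

```shell
# Find the libvirtd PID and disable core-dump memory sections for it.
# Guarded so it is a no-op when libvirtd is absent or /proc is not writable.
libvirtpid=$(pgrep -o libvirtd || true)
if [ -n "$libvirtpid" ] && [ -w "/proc/$libvirtpid/coredump_filter" ]; then
    echo 0 > "/proc/$libvirtpid/coredump_filter"
    echo "coredump_filter cleared for PID $libvirtpid"
else
    echo "libvirtd not running or /proc not writable; nothing changed"
fi
```

Note the setting is per-process and does not survive a libvirtd restart, so it belongs in a hook or unit drop-in rather than a one-off shell.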
Best regards.
Markus
From: Christian Hailer [christ...@hailer.eu]
Sent: Thursday, September 17, 2015 07:39
To: 'Daniel Helgenberger'; Markus Stockhausen
Cc:
.
Markus
From: Christian Hailer [christ...@hailer.eu]
Sent: Tuesday, September 15, 2015 21:24
To: Markus Stockhausen; 'Daniel Helgenberger'
Cc: yd...@redhat.com; users@ovirt.org
Subject: AW: [ovirt-users] Some VMs in status "not responding"
Do you have a chance to install qemu-debug? If yes, I would try a backtrace:
gdb -p
# bt
Markus
On 15.09.2015 4:15 PM, Daniel Helgenberger wrote:
Hello,
I do not want to hijack the thread but maybe my issue is related?
It might have started with ovirt 3.5.3; but I cannot tell for sure.
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Lionel Caignec [caig...@cines.fr]
> Sent: Monday, September 14, 2015 15:47
> To: users@ovirt.org
> Subject: [ovirt-users] [HA] Restart guest on other node on network SAN
> problem
>
> Hi,
>
> i've ovirt nodes conne
Hi,
somehow I lost track of whether a live storage migration is possible.
We are using OVirt 3.5.4 + FC20 nodes (virt-preview - qemu 2.1.3).
From the WebUI I have the following possibilities:
1) disk without snapshot: VMs tab -> Disks -> Move: Button is active
but it does not allow to do a mig
"yum update" on the hosts only.
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
"Jason Keltz [j...@cse.yorku.ca]
Sent: Wednesday, September 9, 2015 21:08
To: users
Subject: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.
Hi there,
we noticed that a newly created NFS data domain is mounted with the
UDP protocol. Does anyone know if that is the desired behaviour
of current OVirt versions?
ovirtnode# mount -a
...
10.10.30.254:/var/nas4/OVirtIB on
/rhev/data-center/mnt/10.10.30.254:_var_nas4_OVirtIB
type nfs (rw,relat
OMG.
Got this message one day after we upgraded to 3.5.2. We hit the bug and I opened
BZ1227693. Before that we were on 3.5.1 and everything worked fine. Just give me
feedback on what I can test for you.
Markus
From: users-boun...@ovirt.org [users-boun...@ovirt.org]"
From my experience I strongly advise avoiding a drop_caches
with option 3. We still see commands hanging for hours
if they are issued during high CPU/memory load, even with
mature Ubuntu 3.16 kernels.
Staying with option 1 should be enough and much safer.
Markus
> From: Markus Stockhausen
> Sent: Friday, April 10, 2015 20:51
> To: users@ovirt.org
> Subject: Live migration qemu 2.1.2 -> 2.1.3: Unknown savevm section
>
> Hi,
>
> don't know what will be the best place for the following question.
> So starting with t
Did you try to increase the refresh interval to 60? It seemed to help for me,
especially over a WAN connection.
Markus
On 06.05.2015 3:48 PM, lofyer wrote:
I've installed ovirt-engine-3.5.1 and created about 120 VMs.
Every time I do a multi-selection it takes a not-so-short time with
not-so-friendl
> From: Paul Heinlein [heinl...@madboa.com]
> Sent: Wednesday, March 18, 2015 18:43
> To: Markus Stockhausen
> Cc: Users@ovirt.org
> Subject: Re: [ovirt-users] Live migration fails - domain not found -
>
> On Wed, 18 Mar 2015, Markus Stockhausen wrote:
>
> > althou
Hi,
although we have already upgraded several hypervisor nodes to oVirt 3.5.1,
the newest upgrade has left the host in a very strange state. We did:
- Host was removed from cluster
- Ovirt 3.5 repo was activated on host
- Host was "reinstalled" from engine
And we got:
- A host that is active and look
Hi,
back in December there was a discussion about oVirt on Fedora 21. From
my point of view that was about the ovirt engine. So I'm somewhat lost as to
whether Fedora 21 is at least supported as a hypervisor host. Anyone with deeper
knowledge?
The reason I'm asking:
We are currently on FC20 + virt-preview and enj
> From: Juan Hernández [jhern...@redhat.com]
> Sent: Thursday, February 19, 2015 12:53
> To: Markus Stockhausen; users@ovirt.org
> Subject: Re: [ovirt-users] movirt -> ovirt 3.5.1 -> server error 500
>
> On 02/19/2015 12:22 PM, Markus Stockhausen wrote:
> > Hi,
Hi,
just installed moVirt on my mobile. Upon connection it breaks with the attached
error in the 3.5.1 engine server logs.
Am I missing something?
Markus
2015-02-19 12:21:19,757 ERROR
[org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/ovirt-engine/api]]
(ajp--127.0.0.1-8702
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Darrell Budic [bu...@onholyground.com]
> Sent: Friday, February 13, 2015 19:03
> To: Nicolas Ecarnot
> Cc: users
> Subject: Re: [ovirt-users] How long do your migrations last?
>
> I’m under the impression it depends m
Hello,
we just built a new cluster with FC20 + virt-preview repos enabled.
The idea behind that is to enable the snapshot live merge feature. This
seems to work quite well.
The only problem is Windows activation. For some reason the
VM hardware of the old qemu 1.6/seabios 1.7.3 hypervisors is
differ
Memory usage > 80%: KSM kicks in. It will then run at full speed until usage
is below 80%. There is an open BZ from me. The bad behaviour is controlled by mom.
Markus
On 06.12.2014 15:58, mad Engineer wrote:
Hello All,
I am using CentOS 6.5 x64 on a server with 48 G RAM and 8
Cores. Ma
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Brian Proffitt [bprof...@redhat.com]
> Sent: Wednesday, November 26, 2014 17:01
> To: users
> Cc: board
> Subject: [ovirt-users] oVirt Weekly Sync: November 26, 2014
> ...
> * 3.6 status Still gathering 3.6 features a
[users-boun...@ovirt.org]" on behalf of "Markus
Stockhausen [stockhau...@collogia.de]
Sent: Friday, November 21, 2014 08:38
To: Gianluca Cecchi
Cc: s k; users@ovirt.org
Subject: Re: [ovirt-users] Simple way to activate live merge in FC20 cluster
Wow. Very quick test. Thanks for s
Wow. Very quick test. Thanks for sharing the results. I will have a look at what
qemu 1.6.2 might need.
Regarding stability of qemu 2.1.2: you should scan the qemu-stable mailing list
to see whether severe fixes have been posted after the release. If you feel
comfortable take qemu from the preview repo
everything from virt-preview
- Wait for FC21
- Wait for Centos 21
Still a long way to go to get all the beloved features out of the box.
Markus
From: Bob Doolittle [b...@doolittle.us.com]
Sent: Thursday, November 20, 2014 19:38
To: Markus Stockhausen
From: Bob Doolittle [b...@doolittle.us.com]
Sent: Thursday, November 20, 2014 16:49
To: Markus Stockhausen
Cc: s k; users@ovirt.org; Daniel Helgenberger; Coffee Chou
Betreff: Re: [ovirt-users] Live Merge Functionality disabled on CentOS 6.6 Node
and oVirt 3.5.0
On 11/20/201
IIRC you simply need libvirt 1.2.9
On 20.11.2014 16:20, Bob Doolittle wrote:
Are there any bugs related to the changes in question that we can track
so we know when the changes are reflected in our distros of interest?
Thanks,
Bob
On 11/20/2014 03:51 AM, s k wrote:
> Hi,
>
>
> Live snapsho
Hi Ernest,
we have similar issues with IPoIB. To fix it we use VDSM hooks:
# cat /usr/libexec/vdsm/hooks/before_vdsm_start/network.sh
...
ethtool -K ib0 tso off 2>/dev/null
ethtool -K ib1 tso off 2>/dev/null
...
Nevertheless this is similar to running self-defined init scripts.
But at least I ha
Hi,
sorry, I forgot that: NFS; the engine is running in a qemu VM "outside" the cluster.
Markus
From: Gabi C [gab...@gmail.com]
Sent: Friday, October 31, 2014 12:12
To: Markus Stockhausen
Cc: ovirt-users
Subject: Re: [ovirt-users] Upgrade order 3.4.
Hi,
maybe a stupid one. Just want to make sure that nothing goes wrong. We
plan to make a rolling upgrade of our landscape going to 3.5.1 in december.
So we always need some hypervisor nodes up and running during the
process.
Looking at older posts I assume that upgrading the engine should be
suf
> From: Stefan Wendler [stefan.wend...@tngtech.com]
> Sent: Monday, October 27, 2014 11:39
> To: Markus Stockhausen
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Deleting large snapshots blocks the whole cluster
>
> Hi,
>
> do you mean during snapshot dele
Do you see swapping on the SPM? If yes, a regular echo 3 > drop_caches could
help.
Markus
On 27.10.2014 10:57, Stefan Wendler wrote:
Hi,
we have some really large snapshots left over from a migration. Since our
store is almost full, we have to delete them now.
Some snapshots are around 1TB alre
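The drop_caches suggestion above, spelled out with the full /proc path. As noted elsewhere in the thread, echo 3 (page cache plus dentries/inodes) is the aggressive variant and echo 1 is usually safer; a sync first flushes dirty pages:

```shell
# Flush dirty pages, then drop caches (needs root; guarded so it is a no-op
# otherwise). Use "echo 1" for the safer page-cache-only variant.
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
    echo "caches dropped"
else
    echo "no permission to write /proc/sys/vm/drop_caches; skipped"
fi
```

Dropping caches is non-destructive (only clean, reclaimable objects are freed), but expect a temporary performance dip while the caches refill.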
Hi,
we have been running the hyperv flags for months in our Win7 VMs without any issue.
As we are still on an OVirt 3.4 FC20 infrastructure we set them with hooks (and
really depend on them).
Maybe RHEL related?
Markus
On 24.10.2014 18:22, Charles Gruener wrote:
https://bugzilla.redhat.com/show_bu
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of "Trey
> Dockendorf [treyd...@gmail.com]
> Sent: Sunday, October 19, 2014 20:43
> To: Arman Khalatyan
> Cc: users
> Subject: Re: [ovirt-users] How to add custom lines in to host interface?
>
> I'd be interested in this t
If you are speaking about ib0 and so on, this will be fixed with 3.5. The
interfaces will then be advertised as 10Gbit.
On 18.10.2014 13:14, Arman Khalatyan wrote:
Hi,
I am using ovirt 3.4.4-1.
On the hosts I have 1Gbit(eth),10Gbit(eth) and 40Gbit(QDRInfiniband) interfaces.
ib interface is used
Forgot the CC.
-- Forwarded message --
From: Markus Stockhausen
Date: 09.10.2014 21:42
Subject: Re: [ovirt-users] Cluster settings: "KSM Control" not working?
To: Frank Wall
Cc:
Have a look at Red Hat Bugzilla #1114226. It should give an idea of what is
happen
Are you running FC20 on the hypervisor host, and if yes, what kernel?
On 27.09.2014 02:39, Grant Pasley wrote:
Good morning guys,
I have an issue with my 2008 VM going to pause within 5 secs of starting it up.
New install of oVirt 3.4.4 on an HP DL160; installed the Windows VM and Windows
drivers e
Hi,
will be fixed in 3.5. Until then, you should set the hv_relaxed options via a hook.
See: https://bugzilla.redhat.com/show_bug.cgi?id=1110305
Best regards.
Markus
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of "Carlos
Castillo [carlos.casti..
> users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Federico Alberto Sayd [fs...@uncu.edu.ar]
> Sent: Thursday, July 24, 2014 18:16
> To: users@ovirt.org
> Subject: [ovirt-users] Disk migration eats all CPU, vms running in SPM
> become unresponsive
>
> Hello:
>
> I
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of "André
> Freitas [afrei...@ubiwhere.com]
> Sent: Wednesday, July 16, 2014 15:22
> To: users@ovirt.org
> Subject: [ovirt-users] Removal of snapshot taking too long
>
> Hi,
>
> I don't know if it's normal, but I'm having sit
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Maurice James [mja...@media-node.com]
> Sent: Friday, June 27, 2014 01:42
> To: users
> Subject: [ovirt-users] Spam Latency
>
> I noticed that the following operations take way, way too long:
> Any type of import/exp
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> "Maurice James [mja...@media-node.com]
> Sent: Monday, June 30, 2014 16:33
> To: Brian Proffitt
> Cc: users
> Subject: [ovirt-users] Spam Re: [Video]: New Live Migration Progress Bar
> for oVirt
>
> is that progress