You need the EQL HIT Kit to make it work at least somewhat better, but
the HIT Kit requires multipathd to be disabled, which is a dependency of
oVirt. So far, no real workaround seems to be known.
On 06.10.2016 at 09:19, Gary Lloyd wrote:
> I asked on the Dell Storage Forum and they recommend the
> >
> > Suppose I have one 500 GB thin-provisioned disk.
> > Why can I indirectly see that the actual size is only 300 GB in the
> > Snapshots tab --> Disks of its VM?
>
> If you are using live storage migration, oVirt creates a qcow/LVM
> snapshot of the VM block device. But
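The size difference in question can be checked from a host. A minimal
sketch, assuming an iSCSI (block/LVM) storage domain where the VG is
named after the storage domain UUID and the LV after the image UUID;
both identifiers below are placeholders:
-- snip --
# Actual allocated size = LV size on block storage domains
# (VG/LV naming is an assumption for illustration)
lvs -o lv_name,lv_size <storage_domain_uuid>

# Virtual size and format of the image (LV must be active)
qemu-img info /dev/<storage_domain_uuid>/<image_uuid>
-- snip --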
On 27.06.2017 at 11:27, Gianluca Cecchi wrote:
> Hello,
> I have a storage domain that I have to empty, moving its disks to
> another storage domain,
>
> Both source and target domains are iSCSI
> What is the behavior in the case of preallocated and thin-provisioned disks?
> Are they preserved with
> 2. Should I migrate from XenServer to oVirt? This is biased, I know, but
> I would like to hear opinions. The folks with @redhat.com email
> addresses will know how to advocate in favor of oVirt.
>
In terms of reliability, better stay with XenServer.
On 20.12.2016 at 11:08, InterNetX - Juergen Gotteswinter wrote:
> On 15.12.2016 at 17:26, Paolo Bonzini wrote:
>>
>>
>> On 15/12/2016 16:46, Sandro Bonazzola wrote:
>>>
>>>
>>> On 15/Dec/2016 16:17, "InterNetX - Juergen Gotteswinter"
On 15.12.2016 at 17:26, Paolo Bonzini wrote:
>
>
> On 15/12/2016 16:46, Sandro Bonazzola wrote:
>>
>>
>> On 15/Dec/2016 16:17, "InterNetX - Juergen Gotteswinter"
>> <j...@internetx.com <mailto:j...@internetx.com>> wrote:
On 15.12.2016 at 16:46, Sandro Bonazzola wrote:
>
>
> On 15/Dec/2016 16:17, "InterNetX - Juergen Gotteswinter"
> <j...@internetx.com <mailto:j...@internetx.com>> wrote:
>
> On 15.12.2016 at 15:51, Sandro Bonazzola wrote:
> >
On 15.12.2016 at 15:51, Sandro Bonazzola wrote:
>
>
> On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter
> <j...@internetx.com <mailto:j...@internetx.com>> wrote:
>
> I can confirm that it will break ...
>
> Dec 15 14:58:43 vm1 journa
I can confirm that it will break ...
Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly closed
the monitor: Unexpected error in object_property_find() at qom/object.c:1003:
2016-12-15T13:58:43.140073Z qemu-kvm: can't apply
global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic'
On 29.08.2016 at 12:25, Nir Soffer wrote:
> On Thu, Aug 25, 2016 at 2:37 PM, InterNetX - Juergen Gotteswinter
> <j...@internetx.com> wrote:
>> Currently, iSCSI multipathed with a Solaris-based filer as backend. But
>> this is already in the process of being migrated to a di
Gotteswinter:
>
>
> On 25.08.2016 at 15:53, Yaniv Kaul wrote:
>>
>>
>> On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
>> <juergen.gotteswin...@internetx.com
>> <mailto:juergen.gotteswin...@internetx.com>> wrote:
>>
On 25.08.2016 at 08:42, Uwe Laverenz wrote:
> Hi Jürgen,
>
> On 24.08.2016 at 17:15, InterNetX - Juergen Gotteswinter wrote:
>> iSCSI & oVirt is an awful combination, no matter if multipathed or
>> bonded. It's always a gamble how long it will work, and when it
iSCSI & oVirt is an awful combination, no matter if multipathed or
bonded. It's always a gamble how long it will work and, when it fails,
why it failed.
It's supersensitive to latency, and superfast at setting a host to
inactive because the engine thinks something is wrong with it. In most
On 6/3/2016 at 6:37 PM, Nir Soffer wrote:
> On Fri, Jun 3, 2016 at 11:27 AM, InterNetX - Juergen Gotteswinter
> <juergen.gotteswin...@internetx.com> wrote:
>> What if we move all VMs off the LUN that causes this error, drop the LUN
>> and recreate it? Will we "m
What if we move all VMs off the LUN that causes this error, drop the LUN
and recreate it? Will we "migrate" the error with the VMs to a different
LUN, or could this be a fix?
On 6/3/2016 at 10:08 AM, InterNetX - Juergen Gotteswinter wrote:
> Hello David,
>
> thanks for your e
Hello David,
Thanks for your explanation of those messages. Is there any possibility
to get rid of this? I already figured out that it might be a corruption
of the ids file, but I didn't find anything about re-creating it or
other solutions to fix this.
IMHO this occurred after an outage where
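For what it's worth, sanlock ships a direct mode that can re-initialize
a lockspace, which is sometimes used to rebuild a corrupted ids file.
This is a minimal sketch only, not a vetted recovery procedure; the
lockspace name and path are placeholders, and every host must be
disconnected from the storage domain first:
-- snip --
# DANGER: only with the domain completely out of use on ALL hosts.
# Re-initialize the lockspace on the ids volume (paths are placeholders).
sanlock direct init -s <sd_uuid>:0:/dev/<sd_uuid>/ids:0
-- snip --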
On 5/30/2016 at 3:59 PM, Nicolas Ecarnot wrote:
> On 30/05/2016 15:30, InterNetX - Juergen Gotteswinter wrote:
>> Hi,
>>
>> You are aware of the fact that EQL sync replication is just about
>> replication, with no high availability whatsoever? I am not even sur
Hi,
You are aware of the fact that EQL sync replication is just about
replication, with no high availability whatsoever? I am not even sure if
it does IP failover itself. So better think in terms of minutes of
interruption rather than seconds.
Anyway, don't count on oVirt's pause/unpause. There's a real chance
Hi,
For some time we have been getting error messages from sanlock, and so
far I was not able to figure out what exactly they are trying to tell us
and, more importantly, whether it's something that can be ignored or
needs to be fixed (and how).
Here are the versions we are currently using:
Engine
We see exactly the same, and it does not seem to be vendor-dependent.
- Equallogic controller failover -> VMs get paused and maybe unpaused,
but most don't
- Nexenta ZFS iSCSI with RSF1 HA -> same
- FreeBSD ctld iscsi-target + Heartbeat -> same
- CentOS + iscsi-target + Heartbeat -> same
Multipath
Generally, avoid mode 0 (round-robin) or you will face problems sooner
or later: round-robin generates out-of-order TCP segments.
Go for mode 4 / LACP wherever possible; a sketch of such a bond config
follows below.
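A minimal sketch of what a mode 4 bond could look like with EL
initscripts; interface names are placeholders and the hash policy is an
assumption, so adjust to your switch configuration:
-- snip --
# /etc/sysconfig/network-scripts/ifcfg-bond0 (names hypothetical)
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat per slave)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
-- snip --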
On 5/22/2016 at 9:33 PM, Alan Murrell wrote:
> Hello,
>
> I am wondering what the recommended bonding modes are for the different
> types
Nope, that's direct-attached SAS. You want/need iSCSI or NFS.
On 21.03.2016 at 10:50, Maton, Brett wrote:
> Hello list,
>
> I was wondering if anyone could tell me if a Dell MD1400 Direct
> Attached Storage unit is suitable for shared storage between ovirt hosts?
>
> I really don't want to
Got exactly the same issue, with all the nice side effects like
performance degradation. Until now I was not able to fix this, or to
fool the engine somehow into showing the image as OK again and giving me
a second chance to drop the snapshot.
In some cases this procedure helped (needs a 2nd storage
Jan Siml js...@plusline.net wrote on 28 August 2015 at 15:15:
Hello Juergen,
Got exactly the same issue, with all the nice side effects like
performance degradation. Until now I was not able to fix this, or to
fool the engine somehow into showing the image as OK again and
Jan Siml js...@plusline.net wrote on 28 August 2015 at 16:47:
Hello,
Got exactly the same issue, with all the nice side effects like
performance degradation. Until now I was not able to fix this, or to
fool the engine somehow into showing the image as OK again and
Jan Siml js...@plusline.net wrote on 28 August 2015 at 19:52:
Hello,
Got exactly the same issue, with all the nice side effects like
performance degradation. Until now I was not able to fix this, or to
fool the engine somehow into showing the image as OK
If the VM is really down, you would find this helpful:
[root@portal dbscripts]# ./unlock_entity.sh -h
Usage: ./unlock_entity.sh [options] [ENTITIES]
  -h - This help text.
  -v - Turn on verbosity (WARNING: lots of output)
  -l LOGFILE - The
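For context, a typical invocation might look like the sketch below; the
-t and -q options are an assumption based on the script's documented
flags in later versions, so verify against ./unlock_entity.sh -h first:
-- snip --
# Query which entities are locked (flags assumed, verify with -h)
./unlock_entity.sh -t all -q

# Unlock a specific VM (placeholder ID)
./unlock_entity.sh -t vm <vm_id>
-- snip --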
Hi,
It seems like the setting EnableMACAntiSpoofingFilterRules only applies
to the main IP of a VM; additional IP addresses on alias interfaces
(eth0:x) are not included in the generated ebtables ruleset.
Is there any workaround / setting / whatever to allow more than one IP
without completely
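To illustrate the kind of ruleset involved: a filter that accepts
several source IPs per tap device could look roughly like the sketch
below. This is hand-written, not what the engine generates; the
interface name and addresses are placeholders:
-- snip --
# Accept two source IPs on the VM's tap device, drop the rest
# (vnet0 and the addresses are placeholders)
ebtables -A FORWARD -i vnet0 -p IPv4 --ip-src 192.0.2.10 -j ACCEPT
ebtables -A FORWARD -i vnet0 -p IPv4 --ip-src 192.0.2.11 -j ACCEPT
ebtables -A FORWARD -i vnet0 -p IPv4 -j DROP
-- snip --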
Hi,
When trying to export a VM to the NFS export domain, the process fails
almost immediately. VDSM logs this:
-- snip --
::Request was made in '/usr/share/vdsm/storage/sp.py' line '1549' at
'moveImage'
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
We see this, too. It appeared (if I didn't miss something) either with
EL7 + oVirt 3.5, or with oVirt 3.5 no matter whether it's EL6 or 7. It
doesn't seem to have any further impact so far.
On 22.04.2015 at 17:58, Kapetanakis Giannis wrote:
Hi,
Any idea what this means?
vdsm.log:
bonding. The interfaces on the nodes don't show
any drops or errors. I checked 2 of the VMs that got paused the last
time it happened; they have dropped packets on their interfaces.
We don't have a subscription with nexenta (anymore).
On 04/21/2015 04:41 PM, InterNetX - Juergen Gotteswinter
running production so not very
easy to change the pool.
On 04/22/2015 11:48 AM, InterNetX - Juergen Gotteswinter wrote:
I expect that you are aware of the fact that you only get the write
performance of a single disk in that configuration? I would drop that
pool configuration, drop the spare
Frames?
Kind regards,
Maikel
On 04/21/2015 02:51 PM, InterNetX - Juergen Gotteswinter wrote:
Hi,
How about load, latency, or strange dmesg messages on the Nexenta? Are
you using bonded Gbit networking? If yes, which mode?
Cheers,
Juergen
On 20.04.2015 at 14:25, Maikel vd
On 21.04.2015 at 16:09, Maikel vd Mosselaar wrote:
Hi Fred,
This is one of the nodes from yesterday around 01:00 (20-04-15). The
issue started around 01:00.
https://bpaste.net/raw/67542540a106
The VDSM logs are very big, so I am unable to paste a bigger part of the
logfile. I don't
Hi,
How about load, latency, or strange dmesg messages on the Nexenta? Are
you using bonded Gbit networking? If yes, which mode?
Cheers,
Juergen
On 20.04.2015 at 14:25, Maikel vd Mosselaar wrote:
Hi,
We are running oVirt 3.5.1 with 3 nodes and a separate engine.
All on CentOS 6.6:
3 x
,
Maikel
On 04/21/2015 02:51 PM, InterNetX - Juergen Gotteswinter wrote:
Hi,
How about load, latency, or strange dmesg messages on the Nexenta? Are
you using bonded Gbit networking? If yes, which mode?
Cheers,
Juergen
On 20.04.2015 at 14:25, Maikel vd Mosselaar wrote:
Hi,
We
Hi,
Accidentally, I deleted the wrong live snapshot of a virtual machine.
Which wouldn't be that big a deal, since filer snapshots are created
every hour. But... after pulling the qcow files of the VM out, I am
confronted with several files.
- the biggest one, which is most likely the main part from
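When sorting such a pile of qcow files, the backing chain usually shows
which file is the base image and which are snapshot overlays. A minimal
sketch with a placeholder filename:
-- snip --
# Format, virtual size and backing file of a single image
qemu-img info <image>.qcow2

# Walk the entire chain in one go
qemu-img info --backing-chain <image>.qcow2
-- snip --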
Hi,
Have you already captured some iSCSI traffic in both situations and
compared them? MTU maybe? (A quick MTU check sketch follows below.)
Juergen
On 18.02.2015 at 02:08, John Florian wrote:
Hi all,
I've been trying to resolve a storage performance issue but have had no
luck in identifying the exact cause. I have my storage domain on
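A quick end-to-end MTU check is to send maximum-size packets with the
don't-fragment bit set. A minimal sketch, assuming a 9000-byte MTU path;
the target address and interface name are placeholders:
-- snip --
# 9000-byte MTU minus 28 bytes IP+ICMP header = 8972-byte payload
ping -M do -s 8972 <iscsi-target-ip>

# Confirm the configured MTU on the local interface
ip link show dev <iface>
-- snip --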
RHEV or oVirt? If RHEV, I would suggest contacting RH support.
On 19.02.2015 at 11:46, Jakub Bittner wrote:
Hello,
After a restart we had a problem with the manager connecting to the
nodes. All nodes are down and not reachable. When I try to remove and
re-add them to the engine, it fails too.
10 days ago
So, I did it the dirty way and it was successful :)
Thanks to everyone who helped me out, great community!
Cheers,
Juergen
On 29.12.2014 at 10:28, InterNetX - Juergen Gotteswinter wrote:
Hello both of you,
Thanks for your detailed explanations and support; still thinking which
way I will go
Hello both of you,
Thanks for your detailed explanations and support; still thinking which
way I will go. Tending to try the dirty way in a lab setup first to see
what happens.
Will post updates when I have more :)
Cheers,
Juergen
It seems that somebody manually deleted the constraint
Hi,
I am trying to get the noipspoof.py hook up and running, which works
fine so far if I only feed it a single IP. When trying to add 2+, as
described in the source (comma-separated), the GUI tells me that this
isn't expected / nice and won't let me do this.
I already tried modding the
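The usual culprit is the validation regex of the custom property on the
engine side. A sketch of how one might widen it to accept a
comma-separated list; the regex is an assumption, existing property
definitions must be kept in the value, and some versions require a
--cver flag:
-- snip --
# Allow comma-separated IPv4 addresses for noipspoof (regex assumed)
engine-config -s 'UserDefinedVMProperties=noipspoof=^([0-9]{1,3}\.){3}[0-9]{1,3}(,([0-9]{1,3}\.){3}[0-9]{1,3})*$'

# Restart the engine so the new definition takes effect
service ovirt-engine restart
-- snip --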
On 23.12.2014 at 02:15, Eli Mesika wrote:
- Original Message -
From: InterNetX - Juergen Gotteswinter j...@internetx.com
To: users@ovirt.org
Sent: Monday, December 22, 2014 2:07:55 PM
Subject: Re: [ovirt-users] Problem Upgrading 3.4.4 - 3.5
Hello again,
It seems that somebody
It seems that somebody manually deleted the constraint
fk_event_subscriber_event_notification_methods from your database.
Therefore, the first line that attempts to drop this constraint in
03_05_0050_event_notification_methods.sql: ALTER TABLE event_subscriber
DROP CONSTRAINT
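Since the script fails only because the constraint is already gone, one
"dirty" workaround is to make that drop conditional before re-running
engine-setup. A sketch under stated assumptions: the dbscripts path
varies by version, the database is assumed to be named engine, and a
backup comes first:
-- snip --
# Back up the engine DB first (database name assumed)
su - postgres -c "pg_dump engine > /tmp/engine-backup.sql"

# Make the failing DROP tolerant of the missing constraint
# (PostgreSQL >= 9.0 supports DROP CONSTRAINT IF EXISTS)
sed -i 's/DROP CONSTRAINT fk_event_subscriber_event_notification_methods/DROP CONSTRAINT IF EXISTS fk_event_subscriber_event_notification_methods/' \
  /usr/share/ovirt-engine/dbscripts/upgrade/03_05_0050_event_notification_methods.sql
-- snip --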
Hi,
I am currently trying to upgrade an existing 3.4.4 setup (which got
upgraded several times before, starting at 3.3), but this time I run
into an error while upgrading the DB:
-- snip --
* QUERY **
ALTER TABLE event_subscriber DROP CONSTRAINT
On 22.12.2014 at 10:41, Eli Mesika wrote:
- Original Message -
From: InterNetX - Juergen Gotteswinter j...@internetx.com
To: users@ovirt.org
Sent: Monday, December 22, 2014 11:08:05 AM
Subject: [ovirt-users] Problem Upgrading 3.4.4 - 3.5
Hi,
I am currently trying to upgrade
Hello again,
It seems that somebody manually deleted the constraint
fk_event_subscriber_event_notification_methods from your database.
Therefore, the first line that attempts to drop this constraint in
03_05_0050_event_notification_methods.sql: ALTER TABLE event_subscriber DROP
I have seen those messages in setups with 10G networking: when the
network throughput went to ~1 Gbit, those messages appeared. According
to the RHN knowledge base this is a false alert; it seemed like some
versions of oVirt/RHEV got this hardcoded.
On 18.12.2014 at 09:47, Lior Vernia wrote:
Hi Mario,
On 24.11.2014 at 13:16, s k wrote:
Hello,
I'm building an oVirt Cluster on a remote site and to avoid installing a
separate oVirt Engine there I'm thinking of using the central oVirt
Engine. The problem is that the latency between the two sites is up to
100ms.
oVirt is very picky
: Re: [ovirt-users] Problem Refreshing/Using ovirt-image-repository /
3.5 RC2
- Original Message -
From: InterNetX - Juergen Gotteswinter j...@internetx.com
To: users@ovirt.org
Sent: Tuesday, September 23, 2014 10:41:41 AM
Subject: Re: [ovirt-users] Problem Refreshing/Using ovirt
Hi,
When trying to refresh the oVirt Glance repository, I get a 500 error message:
Operation Canceled
Error while executing action: A Request to the Server failed with the
following Status Code: 500
engine.log says:
2014-09-23 09:23:08,960 INFO
On 09.09.2014 at 16:38, Dafna Ron wrote:
On 09/09/2014 03:21 PM, Frank Wall wrote:
On Tue, Sep 09, 2014 at 03:09:02PM +0100, Dafna Ron wrote:
qemu would pause a VM when extending the VM disk, and this would
result in INFO messages about the VM's pause.
Looks like this is what you are
On 08.09.2014 at 15:05, Jimmy Dorff wrote:
On 09/08/2014 05:54 AM, Finstrle, Ludek wrote:
I am thinking about two possibilities:
1) One central engine
- how to manage guests when the connection drops between engine and node
- latency is up to 1 second; is that OK even with a working connection?
I had
On 08.09.2014 at 15:18, InterNetX - Juergen Gotteswinter wrote:
On 08.09.2014 at 15:05, Jimmy Dorff wrote:
On 09/08/2014 05:54 AM, Finstrle, Ludek wrote:
I am thinking about two possibilities:
1) One central engine
- how to manage guests when the connection drops between engine and node