Re: [ovirt-users] ISSUES WITH EDITING A CLUSTER SO IT CAN BE PART OF A NEW DATACENTER

2017-04-18 Thread Yaniv Kaul
On Tue, Apr 18, 2017 at 8:05 AM, martin chamambo 
wrote:

> @gianlucca yes, you are right, that would definitely have worked... because a
> prerequisite for removing a master storage domain is having another one, say NFS or
> iSCSI, become the master, then decommissioning the old one. I opted for the
> longer route of deleting the data center, since I am also still testing the
> platform.
>
> I don't think there is a workflow we are missing... I'm still researching
> options and will update, but it seems there is no easy way.
>

You could edit the storage connection via the API (perhaps the 'force'
attribute is needed).
See https://bugzilla.redhat.com/show_bug.cgi?id=1379771
Y.
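For the archive, a minimal sketch of how such an API edit could be driven. The endpoint path, the `force` query parameter, and the ids/addresses below are assumptions taken from the bug discussion - verify against your engine's REST API reference before use:

```python
# Sketch only: build the REST request that would edit a storage connection.
# The '?force=true' query parameter and the endpoint path are assumptions
# based on the bug report above; check your engine's API documentation.

def build_update_request(engine_url, conn_id, new_address):
    """Return (url, xml_body) for a PUT updating a storage connection."""
    url = "%s/api/storageconnections/%s?force=true" % (
        engine_url.rstrip("/"), conn_id)
    body = ("<storage_connection>"
            "<address>%s</address>"
            "</storage_connection>" % new_address)
    return url, body

url, body = build_update_request(
    "https://engine.example.com/ovirt-engine", "1234", "nfs.example.com")
# The actual call would then be something like:
#   requests.put(url, data=body, auth=(user, password),
#                headers={"Content-Type": "application/xml"})
```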


> On Apr 18, 2017 12:29 AM, "Gianluca Cecchi" 
> wrote:
> >
> > On Mon, Apr 17, 2017 at 4:03 PM, martin chamambo 
> wrote:
> >>
> >> I had issues with my master data storage domain and the only way was to
> set the hosts in maintenance mode, delete the datacenter, and recreate
> it
> >>
> >> I managed to delete the datacenter and the storage domains, and
> created a new datacenter, but now the existing clusters are not part of a
> datacenter, and trying to add them to the datacenter gives the error
> below
> >>
> >> Error while executing action: Cannot edit Cluster. Changing management
> network in a non-empty cluster is not allowed.
> >>
> >> and by non-empty I guess it means there is a host inside that cluster?
> Fair enough.
> >>
> >> trying to change a node's cluster brings up the Host Cluster dropdown
> with this:
> >>
> >> Datacenter:Undefined
> >>
> >> How can I fix this catch-22 scenario without deleting clusters or hosts?
> >>
> >>
> >> Surely there should be a smarter way?
> >>
> >
> > I don't know how to manage your current situation.
> > But I had a similar problem to your initial one a few weeks ago,
> for other reasons.
> > My case was that I defined an iSCSI DC and then used a LUN to create an
> iSCSI storage domain.
> > I initially formatted and tested the infrastructure to sort out all the
> relevant configuration (iSCSI config, multipath config, etc.) before going
> to production.
> > Then I had to decommission/re-create this storage domain at its target
> stage: my iSCSI LUN had to be a raw copy of a SAN FC LUN I migrated from an
> older DC.
> > But in the meantime I had created a cluster of two hosts, and I
> discovered that I had the same problem you described.
> > My solution was to create a smaller NFS share and add it as a new
> storage domain: in old releases of oVirt it was not possible to mix
> different types of storage domains in the same DC, but since 3.4 (oVirt
> version and DC version) it is:
> > http://www.ovirt.org/develop/release-management/features/storage/mixed-types-data-center/
> >
> > With this workaround I was then able to put my iSCSI storage domain into
> maintenance and have the small NFS one become the master,
> > and finally to remove the iSCSI storage domain, import the copied
> one, and decommission the temporary NFS share.
> >
> > Not the best way, but it could have helped you too, possibly using a
> directory on the host itself as a share for a temporary operation of this
> kind.
> > In the past, removing the SPM, and by consequence the concept of a
> master storage domain, was discussed... but it probably didn't get any
> update.
> > See this whole thread for example:
> > http://lists.ovirt.org/pipermail/users/2016-May/039782.html
> >
> > For sure it would be nice to have a clean way (if we are not missing
> some other correct workflow) to manage the case when you have only one SD
> and for some reason you need to scratch it but preserve your DC/cluster
> config.
> > HIH,
> > Gianluca
> >
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem adding Labels to Bonds on big clusters

2017-04-19 Thread Yaniv Kaul
On Wed, Apr 19, 2017 at 9:02 AM, Marcel Hanke  wrote:

> Hi,
> I currently have some trouble adding a label to a bond on a host.
> The problem seems to come from the number of VLANs (108 in that label) and
> the
> number of hosts (160 in 4 clusters). As far as I can tell the process gets
> finished on the node (all the network configurations are there in the log),
> but oVirt seems to fail some time after the node has nothing left to do.
> The
> logs give me a timeout error for the configuration command.
>
> Does anyone know how to increase the timeout or fix the problem another
> way?
>

For the time being, I suggest using the API. We have an open bug on this
(couldn't find it right now).
Y.
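As a sketch of the API route suggested above: network labels can be managed per host NIC over REST. The endpoint path and element name below are assumptions from memory - verify them against the API model for your engine version:

```xml
<!-- Sketch (assumed endpoint): attach a label to a specific host NIC.
     POST /ovirt-engine/api/hosts/{host_id}/nics/{nic_id}/networklabels -->
<network_label id="my-label"/>
```

Labeling NICs host by host through the API avoids the single long-running GUI operation that appears to hit the timeout.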


>
> The exact same setup with only 80 VLANs and 120 hosts in 3 clusters is
> running
> fine.
>
> Cheers Marcel
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Yaniv Kaul
On Tue, Apr 18, 2017 at 7:48 PM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> hello,
>
> When putting a host into "maintenance mode", all VMs start migrating to
> other hosts.
>
> We have some hosts that have 60 VMs, so this will result in 60 VMs
> migrating simultaneously.
> Some VMs are under such heavy load that migration often fails (our
> guess is that massive simultaneous migrations do not help migration
> convergence) - even with the "suspend workload if needed" migration policy.
>
> - Does oVirt really launch 60 simultaneous migrations, or is there a
> queuing system?
> - If there is a queuing system, is there a way to configure a maximum
> number of simultaneous migrations?
>
> I did see a "migration bandwidth limit", but this is not quite what we are
> looking for.
>

What migration policy are you using?
Are you using a dedicated migration network, or the ovirtmgmt network?


>
> my setup:
> ovirt-engine +hosted engine 4.1.1
> hosts : centos 7.3 fully updated.
>
> for full context to understand this question: 2 times in the past, when
> trying to put a host into maintenance, the host stopped responding during massive
> migrations and was fenced by the engine. It's still unclear why the host stopped
> responding, but we think that migrating 60+ VMs simultaneously puts a heavy
> strain on storage? So we would like to better control the migration process in
> order to better understand what's happening. This scenario is "production
> only" since our labs do not contain nearly as many VMs with such heavy
> loads. So rather than trying to reproduce, we are trying to avoid ;)
>

If you could open a bug with relevant logs on the host not responding,
that'd be great.
Live migration doesn't touch the storage (disks are not moving anywhere),
but it does stress the network. I doubt it, but perhaps you over-saturate
the ovirtmgmt network.
Y.
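On the queuing question: VDSM does cap concurrent outgoing migrations per host. A hedged configuration sketch - the key name and its default here are from memory, so verify against your vdsm.conf documentation before relying on it:

```ini
# /etc/vdsm/vdsm.conf on each host; restart vdsmd after editing.
[vars]
# Maximum number of concurrent outgoing live migrations from this host
# (assumed default is small, around 2-3). Raising it increases parallelism;
# lowering it throttles the storm when a 60-VM host enters maintenance.
max_outgoing_migrations = 2
```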


>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Yaniv Kaul
On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel  wrote:

> Was reading over this post to the group about storage options.  I am more
> of a Windows guy as opposed to a Linux guy, but am learning quickly and had
> a question.  You said that LACP will not provide extra bandwidth
> (especially with NFS).  Does the same hold true for GlusterFS?  We are
> currently using GlusterFS for the file replication piece.  Does GlusterFS
> take advantage of any multipathing?
>
> Thanks
>
>

I'd expect Gluster to take advantage of LACP, as it has replication to
multiple peers (as opposed to NFS). See[1].
Y.

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/


>
>
> -Original Message-
> From: Yaniv Kaul 
> To: Charles Tassell 
> Cc: users 
> Date: Sun, 26 Mar 2017 10:40:00 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
>
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell 
> wrote:
>>
>> Hi Everyone,
>>
>> I'm about to set up an oVirt cluster with two hosts hitting a Linux
>> storage server.  Since the Linux box can provide the storage in pretty much
>> any form, I'm wondering which option is "best." Our primary focus is on
>> reliability, with performance being a close second.  Since we will only be
>> using a single storage server I was thinking NFS would probably beat out
>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>> assumed that iSCSI would be better performance-wise, but from what I'm
>> seeing online that might not be the case.
>
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support,
> which is nice.
> Gluster probably requires 3 servers.
In most cases, I don't think people see the difference in performance
between NFS and iSCSI. The theory is that block storage is faster, but in
practice, most don't reach the limits where it really matters.
>
>
>>
>>   Our servers will be using a 1G network backbone for regular traffic and
>> a dedicated 10G backbone with LACP for redundancy and extra bandwidth for
>> storage traffic if that makes a difference.
>
>
LACP often (especially with NFS) does not provide extra bandwidth, as
the (single) NFS connection tends to be sticky to a single physical link.
It's one of the reasons I personally prefer iSCSI with multipathing.
>
>
>>
>>   I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for.  Has anyone had
>> any particularly bad experiences with a particular storage option?  We have
>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>> with the multipath setup, but that won't be a problem with the new SAN
>> since it's only got a single controller interface.
>
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least you are aware of all of them (does the storage server have redundant
> power supply? to two power sources? Of course in some scenarios it's an
> overkill and perhaps not practical, but you should be aware of your weak
> spots).
>
> I'd stick with what you are most comfortable managing - creating, backing
> up, extending, verifying health, etc.
> Y.
>
>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Yaniv Kaul
On Wed, Apr 19, 2017 at 5:07 PM, Bryan Sockel  wrote:

> Thank you for the information. I did check my servers this morning; in
> total I have 4 servers configured as part of my oVirt deployment: two
> virtualization servers and 2 Gluster servers, with one of the
> virtualization servers being the arbiter for my Gluster replicated storage.
>
> From what I can see on my 2 dedicated Gluster boxes, I see traffic going
> out over multiple links.  On both of my virtualization hosts I am seeing
> all traffic go out via em1, and no traffic going out over the other
> interfaces.  All four interfaces are configured in a single bond as 802.3ad
> on both hosts, with my logical networks attached to the bond.
>

The balancing is based on a hash of either L2+L3 or L3+L4 fields. It may well be
that both flows end up with the same hash and therefore go through the same link.
Y.
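To illustrate why all traffic can end up on one link: a simplified model of the layer3+4 transmit hash described in the Linux bonding documentation (the exact kernel arithmetic may differ by version; this is a sketch, and the addresses/ports are made up):

```python
# Simplified model of the Linux bonding driver's xmit_hash_policy=layer3+4,
# per the kernel bonding documentation:
#   hash  = (src_port XOR dst_port) XOR ((src_ip XOR dst_ip) AND 0xffff)
#   slave = hash MOD slave_count
import ipaddress

def l3l4_slave(src_ip, src_port, dst_ip, dst_port, n_slaves):
    """Pick the bond slave a flow would hash to (simplified model)."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    h = (src_port ^ dst_port) ^ ((s ^ d) & 0xFFFF)
    return h % n_slaves

# A single TCP connection (one 5-tuple) always hashes to the same slave,
# so one flow can never use more than one link of the bond.
a = l3l4_slave("10.0.0.10", 49152, "10.0.0.20", 24007, 4)
b = l3l4_slave("10.0.0.10", 49152, "10.0.0.20", 24007, 4)
assert a == b
# Different flows *can* still collide on the same slave, which is
# consistent with all traffic leaving via em1 on the hosts above.
```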


>
>
>
> -Original Message-
> From: Yaniv Kaul 
> To: Bryan Sockel 
> Cc: users 
> Date: Wed, 19 Apr 2017 10:41:40 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
>
>
> On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel 
> wrote:
>>
>> Was reading over this post to the group about storage options.  I am more
>> of a Windows guy as opposed to a Linux guy, but am learning quickly and had
>> a question.  You said that LACP will not provide extra bandwidth
>> (especially with NFS).  Does the same hold true for GlusterFS?  We are
>> currently using GlusterFS for the file replication piece.  Does GlusterFS
>> take advantage of any multipathing?
>>
>> Thanks
>>
>>
>
> I'd expect Gluster to take advantage of LACP, as it has replication to
> multiple peers (as opposed to NFS). See[1].
> Y.
>
> [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
>
>>
>>
>> -Original Message-
>> From: Yaniv Kaul 
>> To: Charles Tassell 
>> Cc: users 
>> Date: Sun, 26 Mar 2017 10:40:00 +0300
>> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>>
>>
>>
>> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell 
>> wrote:
>>>
>>> Hi Everyone,
>>>
>>> I'm about to set up an oVirt cluster with two hosts hitting a Linux
>>> storage server.  Since the Linux box can provide the storage in pretty much
>>> any form, I'm wondering which option is "best." Our primary focus is on
>>> reliability, with performance being a close second.  Since we will only be
>>> using a single storage server I was thinking NFS would probably beat out
>>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>>> assumed that iSCSI would be better performance-wise, but from what I'm
>>> seeing online that might not be the case.
>>
>>
>> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
>> support, which is nice.
>> Gluster probably requires 3 servers.
>> In most cases, I don't think people see the difference in performance
>> between NFS and iSCSI. The theory is that block storage is faster, but in
>> practice, most don't reach the limits where it really matters.
>>
>>
>>>
>>>   Our servers will be using a 1G network backbone for regular traffic
>>> and a dedicated 10G backbone with LACP for redundancy and extra bandwidth
>>> for storage traffic if that makes a difference.
>>
>>
>> LACP often (especially with NFS) does not provide extra bandwidth, as
>> the (single) NFS connection tends to be sticky to a single physical link.
>> It's one of the reasons I personally prefer iSCSI with multipathing.
>>
>>
>>>
>>>   I'll probably try to do some performance benchmarks with 2-3 options,
>>> but the reliability issue is a little harder to test for.  Has anyone had
>>> any particularly bad experiences with a particular storage option?  We have
>>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>>> with the multipath setup, but that won't be a problem with the new SAN
>>> since it's only got a single controller interface.
>>
>>
>> A single controller is not very reliable. If reliability is your primary
>> concern, I suggest ensuring there is no single point of failure - or at
>> least you are aware of all of them (does the storage server have redundant
>> power supply? to two power sources? Of course in some scenarios it's an
>> overkill and perhaps not practical, but you should be aware of your weak
>> spots).
>>
>> I'd stick with what you are most comfortable managing - creating, backing
>> up, extending, verifying health, etc.
>> Y.
>>
>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>


Re: [ovirt-users] The web portal gives: Bad Request: 400

2017-04-20 Thread Yaniv Kaul
On Thu, Apr 20, 2017 at 1:06 PM, Arman Khalatyan  wrote:

> After the recent upgrade from oVirt version 4.1.1.6-1.el7.centos to
> version 4.1.1.8-1.el7.centos,
>
> the web portal gives the following error:
> Bad Request
>
> Your browser sent a request that this server could not understand.
>
> Additionally, a 400 Bad Request error was encountered while trying to use
> an ErrorDocument to handle the request.
>
>
> Are there any hints how to fix it?
>

It'd be great if you could share some logs. The httpd logs, server.log and
engine.log, all might be useful.
Y.


> BTW the REST API works as expected; engine-setup went without errors.
>
> Thanks,
>
> Arman.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] Ovirt 4.0 to 4.1 CentOS7.3 and libvirtd 2.0.0 segfault issue

2017-04-21 Thread Yaniv Kaul
On Fri, Apr 21, 2017 at 4:38 PM, Rafał Wojciechowski <
i...@rafalwojciechowski.pl> wrote:

> hi,
>
> my issue was related to a bug in libvirtd.
> It was found in a core dump by the libvirt team:
>
> "
>
> I'll send a patch to upstream libvirt to fix this crash.  However it can take
> a while to get it back to CentOS/RHEL.  The source of this crash is that you
> have a "tun0" network interface without IP address and that interface is
> checked before "ovirtmgmt" and it causes the crash.  You can workaround it
> by removing the "tun0" interface if it doesn't have any IP address.
>
> Pavel
>
> "
>
> workaround is working fine for me.
>

Thanks for following this!
Any idea how you ended up with the tun0 there in the first place?
Y.

> Regards,
> Rafal Wojciechowski
>
> W dniu 18.04.2017 o 16:55, Rafał Wojciechowski pisze:
>
> hi,
>
> I was unable to fix it by just removing glibc and installing it again - I
> reinstalled it and rebooted the machine, but it did not fix anything.
> Thanks anyway.
>
> Regards,
> Rafal Wojciechowski
> W dniu 18.04.2017 o 10:53, Yanir Quinn pisze:
>
> Hi Rafal
> not sure it relates to your issue, but I experienced a similar issue with
> a segfault (running on Fedora 25);
> to resolve it I had to remove the glibc packages and then install them again
> (maybe the same workaround for libvirt will do the job here)
>
> Regards
> Yanir Quinn
>
> On Tue, Apr 18, 2017 at 9:32 AM, Francesco Romani 
> wrote:
>
>>
>>
>> On 04/18/2017 08:09 AM, Rafał Wojciechowski wrote:
>> >
>> > hello,
>> >
>> > I made a comparison (+diff) between the XML passed through VDSM which is
>> > working and another one which causes a libvirtd segfault
>> >
>> > https://paste.fedoraproject.org/paste/eqpe8Byu2l-3SRdXc6LTLl
>> 5M1UNdIGYhyRLivL9gydE=
>> >
>> >
>> > I am not sure if the settings below are fine, but I don't know how to change
>> them
>> >
>> > 
>> > (I don't have that much RAM and vgamem)
>> >
>>
those are kibibytes though
(https://libvirt.org/formatdomain.html#elementsVideo), and are pretty
conservative settings
>> >
>> > > > passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1"
>> type="spice">
>> > (ports with "-"? maybe it is fine because of autoport settings...)
>> >
>>
>> Yes, "-1" means "autoallocation from libvirt".
I don't see obvious issues in this XML, and, most importantly, an
invalid XML should never cause libvirtd to segfault.
>>
>> I'd file a libvirt bug.
>>
>>
>> --
>> Francesco Romani
>> Senior SW Eng., Virtualization R&D
>> Red Hat
>> IRC: fromani github: @fromanirh
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] upgrade to 4.1

2017-04-23 Thread Yaniv Kaul
On Thu, Apr 20, 2017 at 4:50 PM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

> I tried to upgrade oVirt from version 4.0 to 4.1 and got:
>
>   Found the following problems in PostgreSQL configuration for the
> Engine database:
>autovacuum_vacuum_scale_factor required to be at most 0.01
>autovacuum_analyze_scale_factor required to be at most 0.075
>autovacuum_max_workers required to be at least 6
>Postgresql client version is '9.4.8', whereas the version on
> XXX is '9.4.11'. Please use a Postgresql server of version '9.4.8'.
>

I'm not sure we support 9.4.8; we use whatever comes with EL7, which AFAIR
is 9.2.x.
Y.


>   Please set:
>autovacuum_vacuum_scale_factor = 0.01
>autovacuum_analyze_scale_factor = 0.075
>autovacuum_max_workers = 6
>server_version = 9.4.8
>   in postgresql.conf on ''. Its location is usually
> /var/lib/pgsql/data , or somewhere under /etc/postgresql* .
>
> I'm a little afraid about that. Does oVirt want pg to lie about its
> version? It's a shared instance, so what about other tools that access it?
> Is there some explanation about the meaning of those values?
>
> And it was not in the release notes; it's not fun to get this warning
> after starting the upgrade.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] The web portal gives: Bad Request: 400

2017-04-23 Thread Yaniv Kaul
On Sun, Apr 23, 2017 at 2:32 PM, Arman Khalatyan  wrote:

> Hi Yaniv,
> Unfortunately there is nothing in the logs; it looks like oVirt
> does not listen on the right interface.
> We have multiple interfaces for internal and external access.
> Some time ago we renamed the engine to work with the external IP
> address. Now it is listening only on the local address.
> To fix it i just added:
>
> /etc/ovirt-engine/engine.conf.d/99-setup-http-proxy.conf
>
>
And this file did not survive the upgrade?
Y.


>
> One needs to dig more to fix it
>
>
> Am 20.04.2017 2:55 nachm. schrieb "Yaniv Kaul" :
>
>>
>>
>> On Thu, Apr 20, 2017 at 1:06 PM, Arman Khalatyan 
>> wrote:
>>
>>> After the recent upgrade from oVirt version 4.1.1.6-1.el7.centos to
>>> version 4.1.1.8-1.el7.centos,
>>>
>>> the web portal gives the following error:
>>> Bad Request
>>>
>>> Your browser sent a request that this server could not understand.
>>>
>>> Additionally, a 400 Bad Request error was encountered while trying to
>>> use an ErrorDocument to handle the request.
>>>
>>>
>>> Are there any hints how to fix it?
>>>
>>
>> It'd be great if you could share some logs. The httpd logs, server.log
>> and engine.log, all might be useful.
>> Y.
>>
>>
>>> BTW the REST API works as expected; engine-setup went without errors.
>>>
>>> Thanks,
>>>
>>> Arman.
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>


Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host confirmation screen

2017-04-24 Thread Yaniv Kaul
On Mon, Apr 24, 2017 at 1:29 PM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> Hi kasturi,
>
> Thanks for your answer,
>
> Indeed, I tried again, and after 1 minute and 17 seconds (!!) the
> confirmation screen disappeared. Is it really necessary to wait this long
> for the screen to disappear? (I can see in the background that the "upgrade" starts a
> few seconds after clicking OK.)
>
> When putting a host into maintenance mode, a circular "waiting" animation is
> used to warn the user that "something" is happening. A similar animation
> would be useful on the "upgrade" screen after clicking OK, no?
>

We should certainly improve this.
Can you please open a bug (and attach engine.log ) ?
Y.


>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> --
> *From: *"knarra" 
> *To: *"Nelson Lameiras" , "ovirt users"
> 
> *Sent: *Monday, April 24, 2017 7:34:17 AM
> *Subject: *Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host
> confirmation screen
>
> On 04/21/2017 10:20 PM, Nelson Lameiras wrote:
>
> Hello,
>
> Since the "upgrade" functionality became available for hosts in the oVirt GUI I
> have had this strange bug:
>
> - Click on "Installation >> Upgrade"
> - Click "ok" on the confirmation screen
> - -> (bug) the confirmation screen does not disappear as expected
> - Click "ok" again on the confirmation screen -> error: "system is already
> upgrading"
> - Click "cancel" to be able to return to oVirt
>
> This happens using on :
> ovirt engine : oVirt Engine Version: 4.1.1.6-1.el7.centos
> client : windows 10
> client : chrome Version 57.0.2987.133 (64-bit)
>
> This bug was already present on oVirt 4.0 before updating to 4.1.
>
> Has anybody else have this problem?
>
> (will try to reproduce with firefox, IE)
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
> Hi Nelson,
>
> Once you click on 'OK' you will need to wait a few seconds (before
> the confirmation disappears); then you can see that the upgrade starts.  In
> previous versions, once the user clicked 'OK', the confirmation screen
> usually disappeared immediately.
>
> Thanks
>
> kasturi
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] oVirt Setup

2017-04-25 Thread Yaniv Kaul
On Mon, Apr 24, 2017 at 5:38 PM, Mohd Zainal Abidin 
wrote:

> Hi,
>
> Hardware:
>
> 1x DL380 G9 32GB Memory (Ovirt Engine)
> 3x DL560 G9 512GB Memory (Hypervisor)
> 4x DL180se 6x3TB 32GB Memory (Storage - GlusterFS)
>
> What is the best setup for my requirements? Which steps do I need to follow? How
> many NICs do I need to use for each server?
>

Depends on your required performance, scalability and resiliency.
Theoretically, you'd like to have a bond for every critical network, but
that's not always feasible.
10G is very much recommended for high performance (for example, fast
storage and live migration performance).
Y.


>
>
> --
> Thank you
> __
>
> Mohd Zainal Abidin
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] [Ovirt 4.0] HA VM fail to start on another Host.

2017-04-26 Thread Yaniv Kaul
On Wed, Apr 26, 2017 at 8:44 AM, knarra  wrote:

> On 04/26/2017 10:39 AM, TranceWorldLogic . wrote:
>
> Hi,
>
> The VM has the guest agent installed, but power management is disabled.
>
> Just want to confirm my understanding "HA will not work without power
> management", please say yes or no.
>
> yes.
>
>
> We have a Dell server accessed through iDRAC, and I am not sure how to
> set up power management for iDRAC.
> I tried with drac5 and drac7, and in both cases while testing I got an error
> saying "Test failed: [Connection timed out, , ]".
> But ping is working from the ovirt-engine.
>
> Would you please point me to some link on how to set up power management for
> Dell iDRAC?
>
>
> Thanks,
> ~Rohit
>
> Hi,
>
> Without configuring power management, HA will not work. I do not have
> the link handy, but below are the steps to set up power management.
>

4.1 can perform VM HA without power fencing via storage leases, btw.
Y.
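For reference, a sketch of how a storage lease might be set on a VM in 4.1 via the REST API - the element names below are assumptions based on the 4.1 API model, so check the documentation for your exact version:

```xml
<!-- Sketch (assumed endpoint): PUT /ovirt-engine/api/vms/{vm_id} -->
<vm>
  <lease>
    <storage_domain id="{storage_domain_id}"/>
  </lease>
</vm>
```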


>
> 1) Move the host to maintenance
> 2) click on edit button -> select Power Management -> select check box
> 'Enable power management'
> 3) click on "+" button near Add fence agent
> 4) In the address field provide the IPMI address, which is the IP used for
> accessing the management console of the server.
> 5) Provide the username and password to access the console of that machine.
> 6) Select the correct provider type.
> 7) click "ok".
>
> You should be able to set it up. AFAIK, drac5 / drac7 needs to be used for
> Dell.
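The GUI steps above can also be scripted; a hedged sketch of adding a fence agent over the REST API (the endpoint and element names are assumptions - verify against your engine's API reference, and replace the address and credentials with your iDRAC's):

```xml
<!-- Sketch (assumed endpoint): POST /ovirt-engine/api/hosts/{host_id}/fenceagents -->
<agent>
  <type>drac5</type>
  <address>idrac.example.com</address>
  <username>root</username>
  <password>secret</password>
  <order>1</order>
</agent>
```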
>
> Thanks
> kasturi.
>
>
>
>
>
> On Tue, Apr 25, 2017 at 10:29 PM, knarra  wrote:
>
>> On 04/25/2017 08:45 PM, TranceWorldLogic . wrote:
>>
>> Hi,
>>
>> This is regarding an HA VM failing to restart on another host.
>>
>> I have a setup with 2 hosts in a cluster, let's say host1 and host2,
>> and one HA VM (with high priority), say vm1.
>> Note also that the storage domain is configured on host3 and is available
>> all the time.
>>
>> 1> Initially vm1 was running on host2.
>> 2> Then I powered off host2 to see whether oVirt starts vm1 on host1.
>>
>> I found two results in this case, as below:
>> 1> Sometimes vm1 retries to start, but retries on host2 itself.
>> 2> Sometimes vm1 moves to the down state without retrying.
>>
>> Can anyone explain this behaviour? Or is this an issue?
>>
>> Note : I am using Ovirt 4.0.
>>
>> Thanks,
>> ~Rohit
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> Hi,
>>
>>   If a host is powered off and power management is enabled, the engine
>> will fence the host and restart it. During this process the VM residing on
>> the host will be shut down and restarted on another node. All the
>> events can be seen in the engine UI.
>>
>> Hope you have not missed enabling power management on the hosts;
>> without enabling power management, even if the VM is marked as highly
>> available, it will not be.
>>
>> The second thing to check is whether the VM has the guest agent installed.
>> If the VM does not have the guest agent installed then it won't be restarted
>> on a different host. More info on this can be found at [1].
>>
>>  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1341106#c35
>>
>> Thanks
>>
>> kasturi
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] Hosted engine already imported

2017-04-26 Thread Yaniv Kaul
On Tue, Apr 25, 2017 at 8:53 PM, Jamie Lawrence 
wrote:

> Hi all,
>
> In my hopefully-near-complete quest to automate oVirt configuration in our
> environment, I'm very close. One outstanding issue remains: even
> though the hosted engine storage domain actually was imported and
> shows in the GUI, some part of oVirt appears to think that hasn't happened
> yet.
>

We'll be happy to hear, once complete, how you've achieved this automation.
Note that there are several implementations already doing this
successfully. See [1] for one.
Y.

[1] https://github.com/fusor/ansible-ovirt


> In the GUI, a periodic error is logged: “The Hosted Engine Storage Domain
> doesn’t exist. It will be imported automatically…”
>
> In engine.log, all I’m seeing that appears relevant is:
>
> 2017-04-25 10:28:57,988-07 INFO  [org.ovirt.engine.core.bll.
> storage.domain.ImportHostedEngineStorageDomainCommand]
> (org.ovirt.thread.pool-6-thread-9) [1e44dde0] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[]', sharedLocks='null'}'
> 2017-04-25 10:28:57,992-07 WARN  [org.ovirt.engine.core.bll.
> storage.domain.ImportHostedEngineStorageDomainCommand]
> (org.ovirt.thread.pool-6-thread-9) [1e44dde0] Validation of action '
> ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons:
> VAR__ACTION__ADD,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_
> FAILED_STORAGE_DOMAIN_ALREADY_EXIST
> 2017-04-25 10:28:57,992-07 INFO  [org.ovirt.engine.core.bll.
> storage.domain.ImportHostedEngineStorageDomainCommand]
> (org.ovirt.thread.pool-6-thread-9) [1e44dde0] Lock freed to object
> 'EngineLock:{exclusiveLocks='[]', sharedLocks='null’}’
>
> Otherwise the log is pretty clean.
>
> I saw nothing of interest in the Gluster logs or the hosted-engine-ha logs
> on the host it is running on.
>
> It appears harmless, but then we aren’t actually using these systems yet,
> and in any case we don’t want the error spamming the log forever. This is
> 4.1.1.8-1.el7.centos, hosted engine on Centos 7.6.1311, Gluster for both
> the hosted engine and general data domains.
>
> Has anyone seen this before?
>
> Thanks in advance,
>
> -j


[ovirt-users] oVirt at Red Hat Summit 2017

2017-04-27 Thread Yaniv Kaul
If you plan to attend Red Hat Summit 2017 next week in Boston, you are
welcome to come and meet oVirt developers face to face.
We'll have a booth in the community part - come and say hi.

Yaniv.


Re: [ovirt-users] how does ovirt deal with multiple networks with multiple gateways

2017-04-29 Thread Yaniv Kaul
Can you share some more details?
- What version are you using?
- What is the network topology? Are any static routes defined? The fact
oVirt sets a specific network for storage does not imply it'll use it for
storage automatically - unless routing is properly defined for it.

TIA,
Y.

On Sun, Apr 30, 2017 at 8:11 AM, martin chamambo 
wrote:

> Hello
>
> I am testing oVirt and I have configured it with 5 networks as shown below:
>
> Display, Migration, VMnetwork, Storage and the default ovirtmngmnt
> network.
>
> These networks are represented by individual physical interfaces on the
> oVirt nodes, and for some reason the default gateway is not being set
> correctly.
>
> It always seems to prefer the ovirtmngmnt interface as the default.
>
> The only network that's supposed to have internet is the VMnetwork role.
>
>


Re: [ovirt-users] [oVirt4.1] Does oVirt 4.1 support iSCSI offload?

2017-04-29 Thread Yaniv Kaul
On Sat, Apr 29, 2017 at 7:27 PM, wodel youchi 
wrote:

> Hi,
>
> We have two hypervisors, each one has a 10Gbe nic card with two ports, the
> cards support iSCSI offload.
>
> Does oVirt 4.1 support this?
>

Yes.


> if yes how can someone use it on hosted-engine deployment? the VM engine
> will be in the SAN targeted by these cards.
>

Since iSCSI offloading is configured before the OS even loads, to the OS
(and therefore oVirt hypervisors) it seems that the storage is 'just
connected' - same as FC for all we care.

Y.


>
> Regards.
>
>


Re: [ovirt-users] how does ovirt deal with multiple networks with multiple gateways

2017-04-30 Thread Yaniv Kaul
On Sun, Apr 30, 2017 at 2:03 PM, martin chamambo 
wrote:

> I am using oVirt 4.1.1.x on both nodes and the engine.
> There are no static routes, and the network topology is such that only
> the VMnetwork logical network is on a network with internet access and
> has a default gateway.
>
> So I have:
> 192.168.1.X with a gateway of, say, 192.168.1.50 (this has internet) - the VM network
> 192.168.2.X, no gateway - the ovirtmngmnt network
> 192.168.3.X, no gateway - the display network
> 192.168.4.X, no gateway - the migration network
> 192.168.5.X, no gateway - the storage network
>

Where's the storage? Is it on 192.168.5.x? Otherwise, you'll need a static
route added to ensure traffic is indeed going via this network and interface(s).
Y.
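A minimal sketch of such a static route on an EL7 host follows; the interface name and addresses are placeholders for the actual storage NIC and subnet, and oVirt/VDSM may already generate per-network route and rule files, so check what exists before adding your own:

```shell
# Send storage-subnet traffic out the dedicated storage interface.
# 'eth4' and the addresses are placeholders.
ip route add 192.168.5.0/24 dev eth4 src 192.168.5.21
# Persist it across reboots (EL7 network-scripts style):
echo '192.168.5.0/24 dev eth4 src 192.168.5.21' \
  > /etc/sysconfig/network-scripts/route-eth4
```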


> when I set up these networks using the oVirt engine GUI, oVirt seems to
> create route and rule files for each specific network, but for some reason
> I can't ping anything besides the networks defined
>
> On Sun, Apr 30, 2017 at 8:45 AM, Yaniv Kaul  wrote:
>
>> Can you share some more details?
>> - What version are you using?
>> - What is the network topology? Are any static routes defined? The fact
>> oVirt sets a specific network for storage does not imply it'll use it for
>> storage automatically - unless routing is properly defined for it.
>>
>> TIA,
>> Y.
>>
>> On Sun, Apr 30, 2017 at 8:11 AM, martin chamambo 
>> wrote:
>>
>>> Hello
>>>
>>> I am testing oVirt and I have configured it with 5 networks as shown
>>> below:
>>>
>>> Display, Migration, VMnetwork, Storage and the default ovirtmngmnt
>>> network.
>>>
>>> These networks are represented by individual physical interfaces on the
>>> oVirt nodes, and for some reason the default gateway is not being set
>>> correctly.
>>>
>>> It always seems to prefer the ovirtmngmnt interface as the default.
>>>
>>> The only network that's supposed to have internet is the VMnetwork role.
>>>
>>>


Re: [ovirt-users] iSCSI LUN recommended sector size

2017-05-04 Thread Yaniv Kaul
On May 5, 2017 1:11 AM, "William Cooley"  wrote:

I’m setting up a new iSCSI LUN / volume and am wondering what the
recommended sector size is.



The setup wizard recommends 8KB for VMware.



I’ve read that QCOW2 uses 64KB blocks? Does this mean I use 64KB on the
iscsi LUN? Sorry if this is a stupid question.


Yes, qcow2 uses 64KB clusters by default, but I'm unsure whether that is
optimal in any way for the underlying disks.
I doubt it makes a noticeable difference.
Y.
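For what it's worth, the qcow2 cluster size is a create-time option (64 KB by default). A sketch, assuming qemu-img is installed and using a made-up file name:

```shell
# Create a qcow2 image, stating the (default) 64 KB cluster size explicitly.
qemu-img create -f qcow2 -o cluster_size=64k disk.qcow2 10G
# 'qemu-img info disk.qcow2' then reports the cluster size in bytes (65536).
```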



Regards,

William



Re: [ovirt-users] Move direct LUN?

2017-05-04 Thread Yaniv Kaul
On May 3, 2017 9:22 AM, "gflwqs gflwqs"  wrote:

Hi list!
I need to move a direct LUN from iSCSI storage to FC storage.
I have one iSCSI SAN and one FC SAN connected to my oVirt environment.
I need to migrate all my data from the iSCSI SAN to the FC SAN.
It is no problem to move the images from iSCSI to FC; however, I also have
direct LUNs attached from the iSCSI SAN that I need to move to FC. When
I select a direct LUN the "move" button is greyed out. How do I move
the direct LUN?


I'd remove the iSCSI one, copy it (via dd or the storage array) and recreate
it as FC.
oVirt does not move direct LUNs.
Y.
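A sketch of that copy-and-verify step, demonstrated here on scratch files so it is safe to run; on real hardware the if=/of= arguments would be the multipath devices of the old iSCSI LUN and the new FC LUN (placeholder paths such as /dev/mapper/&lt;wwid&gt;):

```shell
# Stand-in "source LUN": a scratch file filled with random data.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null
# The actual block copy (on real LUNs: if=/dev/mapper/<old-wwid> etc.).
dd if="$src" of="$dst" bs=1M conv=fsync 2>/dev/null
# Always verify the copy is bit-identical before detaching the old LUN.
sha_src=$(sha256sum "$src" | awk '{print $1}')
sha_dst=$(sha256sum "$dst" | awk '{print $1}')
echo "$sha_src"
echo "$sha_dst"
rm -f "$src" "$dst"
```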


Regards
Christian



Re: [ovirt-users] Info on live snapshot and agent interaction

2017-05-04 Thread Yaniv Kaul
On May 4, 2017 10:09 PM, "Michal Skrivanek" 
wrote:


> On 4 May 2017, at 17:51, Gianluca Cecchi 
wrote:
>
> Hello,
> supposing I have a Linux VM with ovirt-guest-agent installed, during a
live snapshot operation the filesystems should be frozen.
> Where can I find confirmation of a correct/successful interaction?

If it’s not successful there should be an event log message about that. And
prior to taking the snapshot, a warning in red at the bottom of the dialog
(that check happens when you open the dialog, so it may not be 100%
reliable).

> /var/log/messages or agent log or other kind of files?

If you want to double-check, this is noticeable in vdsm.log. First we try
to take the snapshot with fsfreeze, and only when that fails do we take it
again without it.

> Are there any limitations on filesystems that support freeze? Is
fsfreeze the command executed at the VM OS level, or some other low-level command?

It’s a matter of Linux and Windows implementation, they both have an API
supporting that at kernel level. I’m not aware of filesystem limitations.
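For reference, the freeze can also be exercised by hand, which is roughly the mechanism VDSM drives through the guest agent. A sketch; the VM name and mount point below are placeholders:

```shell
# From the host: freeze/thaw all guest filesystems via the guest agent
# (requires qemu-guest-agent running inside the VM; 'myvm' is a placeholder).
virsh domfsfreeze myvm
virsh domfsthaw myvm
# Inside a Linux guest, the same kernel-level freeze is exposed by:
fsfreeze -f /data    # freeze the filesystem mounted at /data
fsfreeze -u /data    # unfreeze it
```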


If you use Windows and the qemu-ga, it'd call VSS, which, if supported by
applications (ranging from Notepad to SQL Server), will provide an
application-consistent snapshot.
Y.


Thanks,
michal

>
> Thanks,
> Gianluca


Re: [ovirt-users] command not found

2017-05-06 Thread Yaniv Kaul
On Sun, May 7, 2017 at 1:57 AM, Brahim Rifahi 
wrote:

> I am having issues installing oVirt on a CentOS 7 server. After logging in
> with sudo, I run the following series of commands:
>
> # yum -y update
> # yum install http://plain.resources.ovirt.org/pub/yum-repo/ovirt-
> release41.rpm
> 
> # yum -y install ovirt-engine
>
> I then attempted to use the following command and received a “command not
> found” error.
>

Which command exactly failed?
Y.


>


Re: [ovirt-users] High latency on storage domains and sanlock renewal error

2017-05-06 Thread Yaniv Kaul
On Tue, May 2, 2017 at 11:09 PM, Stefano Bovina  wrote:

> Hi, the engine logs show high latency on storage domains: "Storage domain
>  experienced a high latency of 19.2814 seconds from .. This may
> cause performance and functional issues."
>
> Looking at host logs, I found also these locking errors:
>
> 2017-05-02 20:52:13+0200 33883 [10098]: s1 renewal error -202 delta_length
> 10 last_success 33853
> 2017-05-02 20:52:19+0200 33889 [10098]: 6a386652 aio collect 0
> 0x7f1fb80008c0:0x7f1fb80008d0:0x7f1fbe9fb000 result 1048576:0 other free
> 2017-05-02 21:08:51+0200 34880 [10098]: 6a386652 aio timeout 0
> 0x7f1fb80008c0:0x7f1fb80008d0:0x7f1fbe4f2000 ioto 10 to_count 24
> 2017-05-02 21:08:51+0200 34880 [10098]: s1 delta_renew read rv -202 offset
> 0 /dev/6a386652-629d-4045-835b-21d2f5c104aa/ids
> 2017-05-02 21:08:51+0200 34880 [10098]: s1 renewal error -202 delta_length
> 10 last_success 34850
> 2017-05-02 21:08:53+0200 34883 [10098]: 6a386652 aio collect 0
> 0x7f1fb80008c0:0x7f1fb80008d0:0x7f1fbe4f2000 result 1048576:0 other free
> 2017-05-02 21:30:40+0200 36189 [10098]: 6a386652 aio timeout 0
> 0x7f1fb80008c0:0x7f1fb80008d0:0x7f1fbe9fb000 ioto 10 to_count 25
> 2017-05-02 21:30:40+0200 36189 [10098]: s1 delta_renew read rv -202 offset
> 0 /dev/6a386652-629d-4045-835b-21d2f5c104aa/ids
> 2017-05-02 21:30:40+0200 36189 [10098]: s1 renewal error -202 delta_length
> 10 last_success 36159
> 2017-05-02 21:30:45+0200 36195 [10098]: 6a386652 aio collect 0
> 0x7f1fb80008c0:0x7f1fb80008d0:0x7f1fbe9fb000 result 1048576:0 other free
>
> and this vdsm errors too:
>
> Thread-22::ERROR::2017-05-02 21:53:48,147::sdc::137::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain f8f21d6c-2425-45c4-aded-4cb9b53ebd96
> Thread-22::ERROR::2017-05-02 21:53:48,148::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain f8f21d6c-2425-45c4-aded-4cb9b53ebd96
>
> Engine instead is showing this errors:
>
> 2017-05-02 21:40:38,089 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
> (DefaultQuartzScheduler_Worker-96)
> Command SpmStatusVDSCommand(HostName = , HostId =
> dcc0275a-b011-4e33-bb95-366ffb0697b3, storagePoolId =
> 715d1ba2-eabe-48db-9aea-c28c30359808) execution failed. Exception:
> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
> SpmStatusVDS, error = (-202, 'Sanlock resource read failure', 'Sanlock
> exception'), code = 100
> 2017-05-02 21:41:08,431 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
> (DefaultQuartzScheduler_Worker-53)
> [6e0d5ebf] Failed in SpmStatusVDS method
> 2017-05-02 21:41:08,443 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
> (DefaultQuartzScheduler_Worker-53)
> [6e0d5ebf] Command SpmStatusVDSCommand(HostName = ,
> HostId = 7991933e-5f30-48cd-88bf-b0b525613384, storagePoolId =
> 4bd73239-22d0-4c44-ab8c-17adcd580309) execution failed. Exception:
> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
> SpmStatusVDS, error = (-202, 'Sanlock resource read failure', 'Sanlock
> exception'), code = 100
> 2017-05-02 21:41:31,975 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
> (DefaultQuartzScheduler_Worker-61)
> [2a54a1b2] Failed in SpmStatusVDS method
> 2017-05-02 21:41:31,987 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
> (DefaultQuartzScheduler_Worker-61)
> [2a54a1b2] Command SpmStatusVDSCommand(HostName = ,
> HostId = dcc0275a-b011-4e33-bb95-366ffb0697b3, storagePoolId =
> 715d1ba2-eabe-48db-9aea-c28c30359808) execution failed. Exception:
> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
> SpmStatusVDS, error = (-202, 'Sanlock resource read failure', 'Sanlock
> exception'), code = 100
>
>
> I'm using Fibre Channel or FCoE connectivity; storage array technical
> support has analyzed it (also switch and OS configurations), but nothing
> has been found.
>

Is this on a specific host, or multiple hosts?
Is that FC or FCoE? Anything in the host's /var/log/messages?


>
> Any advice?
>
> Thanks
>
>
> Installation info:
>
> ovirt-release35-006-1.noarch
>

This is a very old release; regardless of this issue, I suggest upgrading.
Y.


> libgovirt-0.3.3-1.el7_2.1.x86_64
> vdsm-4.16.30-0.el7.centos.x86_64
> vdsm-xmlrpc-4.16.30-0.el7.centos.noarch
> vdsm-yajsonrpc-4.16.30-0.el7.centos.noarch
> vdsm-jsonrpc-4.16.30-0.el7.centos.noarch
> vdsm-python-zombiereaper-4.16.30-0.el7.centos.noarch
> vdsm-python-4.16.30-0.el7.centos.noarch
> vdsm-cli-4.16.30-0.el7.centos.noarch
> qemu-kvm-ev-2.3.0-29.1.el7.x86_64
> qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64
> qemu-kvm-tools-ev-2.3.0-29.1.el7.x86_64
> libvirt-client-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-storage-1.2.17-13.el7_2.3.x86_64
> libvirt-python-1.2.17-2.el7.x86_64
> libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.3.x86_64
> libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
> libvi

Re: [ovirt-users] New oVirt Node install on oVirt Cluster 4.0.5 - How can I install oVirt Node with same 4.0.5 version ???

2017-05-06 Thread Yaniv Kaul
On Fri, May 5, 2017 at 9:03 PM, Rogério Ceni Coelho <
rogeriocenicoe...@gmail.com> wrote:

> Hi Yuval,
>
> Yes, it is, but every time I need to install a new oVirt node, will I need
> to upgrade the oVirt engine at least?
>

Of course not. You can upgrade the Engine independently from the hosts.


> And will I need to upgrade the oVirt nodes that already exist? I have 20
> oVirt nodes, so this means a lot of work.
>

No, you can upgrade each host independently.


>
> My environment is stable with 4.0.5 and I am happy for now. oVirt is an
> excellent product. Thanks for that.
>
> For example, this morning I tried to put an export/import storage domain
> into maintenance and an error occurred only with my new oVirt node running
> 4.0.6.1, and yesterday I lost 2 days debugging another problem with network
> startup with many VLANs on 4.0.6.1 ... :-(
>

It'd be great if you can start a different thread about it, or file a bug,
with all the details (VDSM and Engine logs attached).
Y.


>
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'f9e051a9-6660-4e49-a3f1-354583501610',)
> Thread-12::DEBUG::2017-05-05 10:39:07,473::check::296::
> storage.check::(_start_process) START check 
> '/dev/c58ce4b0-7145-4cd0-900e-eeb99177a7de/metadata'
> cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd',
> 'if=/dev/c58ce4b0-7145-4cd0-900e-eeb99177a7de/metadata', 'of=/dev/null',
> 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
> Thread-12::DEBUG::2017-05-05 
> 10:39:07,524::asyncevent::564::storage.asyncevent::(reap)
> Process  terminated (count=1)
> Thread-12::DEBUG::2017-05-05 10:39:07,525::check::327::
> storage.check::(_check_completed) FINISH check
> '/dev/c58ce4b0-7145-4cd0-900e-eeb99177a7de/metadata' rc=0
> err=bytearray(b'1+0 records in\n1+0 records out\n4096 bytes (4.1 kB)
> copied, 0.000292117 s, 14.0 MB/s\n') elapsed=0.06
> Thread-12::DEBUG::2017-05-05 10:39:07,886::check::296::
> storage.check::(_start_process) START check u'/rhev/data-center/mnt/corot.
> rbs.com.br:_u00_oVirt_PRD_ISO__DOMAIN/7b8c9293-f103-401a-
> 93ac-550981837224/dom_md/metadata' cmd=['/usr/bin/taskset', '--cpu-list',
> '0-31', '/usr/bin/dd', u'if=/rhev/data-center/mnt/corot.rbs.com.br:
> _u00_oVirt_PRD_ISO__DOMAIN/7b8c9293-f103-401a-93ac-550981837224/dom_md/metadata',
> 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
> Thread-12::DEBUG::2017-05-05 10:39:07,898::check::296::
> storage.check::(_start_process) START check u'/rhev/data-center/mnt/
> dd6701.bkp.srvr.rbs.net:_data_col1_ovirt__prd/db89d5df-00ac-
> 4e58-a7e5-e31272f9ea92/dom_md/metadata' cmd=['/usr/bin/taskset',
> '--cpu-list', '0-31', '/usr/bin/dd', u'if=/rhev/data-center/mnt/
> dd6701.bkp.srvr.rbs.net:_data_col1_ovirt__prd/db89d5df-00ac-
> 4e58-a7e5-e31272f9ea92/dom_md/metadata', 'of=/dev/null', 'bs=4096',
> 'count=1', 'iflag=direct'] delay=0.00
> Thread-12::DEBUG::2017-05-05 10:39:07,906::check::327::
> storage.check::(_check_completed) FINISH check
> u'/rhev/data-center/mnt/corot.rbs.com.br:_u00_oVirt_PRD_ISO_
> _DOMAIN/7b8c9293-f103-401a-93ac-550981837224/dom_md/metadata' rc=0
> err=bytearray(b'0+1 records in\n0+1 records out\n386 bytes (386 B) copied,
> 0.00038325 s, 1.0 MB/s\n') elapsed=0.02
> Thread-12::DEBUG::2017-05-05 10:39:07,916::check::296::
> storage.check::(_start_process) START check u'/rhev/data-center/mnt/vnx01.
> srv.srvr.rbs.net:_fs__ovirt_prd__data__domain/fdcf130d-
> 53b8-4978-8f97-82f364639b4a/dom_md/metadata' cmd=['/usr/bin/taskset',
> '--cpu-list', '0-31', '/usr/bin/dd', u'if=/rhev/data-center/mnt/
> vnx01.srv.srvr.rbs.net:_fs__ovirt_prd__data__domain/
> fdcf130d-53b8-4978-8f97-82f364639b4a/dom_md/metadata', 'of=/dev/null',
> 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
> Thread-12::DEBUG::2017-05-05 10:39:07,930::check::327::
> storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/
> dd6701.bkp.srvr.rbs.net:_data_col1_ovirt__prd/db89d5df-00ac-
> 4e58-a7e5-e31272f9ea92/dom_md/metadata' rc=0 err=bytearray(b'0+1 records
> in\n0+1 records out\n360 bytes (360 B) copied, 0.000363751 s, 990 kB/s\n')
> elapsed=0.03
> Thread-12::DEBUG::2017-05-05 
> 10:39:07,964::asyncevent::564::storage.asyncevent::(reap)
> Process  terminated (count=1)
> Thread-12::DEBUG::2017-05-05 10:39:07,964::check::327::
> storage.check::(_check_completed) FINISH check
> u'/rhev/data-center/mnt/vnx01.srv.srvr.rbs.net:_fs__ovirt_
> prd__data__domain/fdcf130d-53b8-4978-8f97-82f364639b4a/dom_md/metadata'
> rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n369 bytes (369 B)
> copied, 0.000319659 s, 1.2 MB/s\n') elapsed=0.05
> Thread-12::DEBUG::2017-05-05 10:39:09,035::check::296::
> storage.check::(_start_process) START check 
> '/dev/1804d02e-7865-4acd-a04f-8200ac2d2b84/metadata'
> cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd',
> 'if=/dev/1804d02e-7865-4acd-a04f-8200ac2d2b84/metadata', 'of=/dev/null',
> 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
> Thread-12::DEBUG::2017-05-05 
> 10:39:09,084::asyncevent::564::storage.asynceve

Re: [ovirt-users] oVirt Hosted Engine Setup fails

2017-05-06 Thread Yaniv Kaul
On Thu, May 4, 2017 at 8:45 PM, Manuel Luis Aznar <
manuel.luis.az...@gmail.com> wrote:

> Hello there,
>
> Sorry for the delay in answering the mail, but I have been busy doing
> things...
>
> The permission on /dev/random are the following:
>
> [root@host1 manuel]# ls -la /dev/random
> crw-rw-rw-. 1 root root 1, 8 may  4 18:06 /dev/random
>
> I suppose that these permissions should look something like:
>
> [root@host1 manuel]# ls -la /dev/random
> crw-rw-rw-. 1 vdsm kvm 1, 8 may  4 18:06 /dev/random
>
> Finally, I do not know what you meant regarding permissions and the SELinux
> audit logs. Sorry for my lack of understanding; please explain more
> precisely and I will look for it.
>

You can get the SELinux settings of a file with ls -Z. For example:
[ykaul@ykaul ovirt-system-tests]$ ls -Z /dev/kvm
system_u:object_r:kvm_device_t:s0 /dev/kvm

Also, you can search for SELinux issues either in /var/log/audit or using
ausearch. For example:
sudo ausearch -m AVC -i

Y.
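If the audit log does show denials for /dev/kvm or /dev/random, one possible remediation sketch follows; whether it applies depends on what ausearch actually reports, and it must be run as root:

```shell
# Check current SELinux labels and ownership on the devices qemu needs:
ls -lZ /dev/kvm /dev/random
# Restore the default SELinux contexts if they were changed:
restorecon -v /dev/kvm /dev/random
# qemu runs in the kvm group on oVirt hosts; udev normally sets this up:
chown root:kvm /dev/kvm && chmod 0660 /dev/kvm
```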

>
>
> Thanks for all in advance
> I will be waiting for you
> Manuel Luis Aznar
>
> 2017-05-03 15:09 GMT+01:00 Simone Tiraboschi :
>
>>
>>
>> On Wed, May 3, 2017 at 11:30 AM, Manuel Luis Aznar <
>> manuel.luis.az...@gmail.com> wrote:
>>
>>> Hello Simone and all others,
>>>
>>> I have attached to the mail the requested files. If you have any other
>>> inquiry just say it, The failed installation drive would be keep safe until
>>> solving this problem.
>>>
>>> Thanks for all in advance
>>> Manuel
>>>
>>
>> The issue is here:
>> May  1 11:47:45 host1 journal: libvirt version: 2.0.0, package:
>> 10.el7_3.5 (CentOS BuildSystem ,
>> 2017-03-03-02:09:45, c1bm.rdu2.centos.org)
>> May  1 11:47:45 host1 journal: hostname: host1.bajada.es
>> May  1 11:47:45 host1 journal: Falló al conectar con el socket de
>> monitor: No existe el proceso [Failed to connect to the monitor socket:
>> no such process]
>> May  1 11:47:45 host1 journal: internal error: process exited while
>> connecting to monitor: /dev/random -device 
>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7
>> -msg timestamp=on#012Could not access KVM kernel module: Permission
>> denied#012failed to initialize KVM: Permission denied
>> May  1 11:47:45 host1 journal: libvirt version: 2.0.0, package:
>> 10.el7_3.5 (CentOS BuildSystem ,
>> 2017-03-03-02:09:45, c1bm.rdu2.centos.org)
>> May  1 11:47:45 host1 journal: hostname: host1.bajada.es
>> May  1 11:47:45 host1 journal: Fin de archivo al leer datos: Error de
>> entrada/salida [End of file while reading data: input/output error]
>> May  1 11:47:45 host1 journal: Fin de archivo al leer datos: Error de
>> entrada/salida [End of file while reading data: input/output error]
>>
>> could you please also check the permission on /dev/random and SELinux
>> audit logs?
>>
>>
>>
>>>
>>> 2017-05-02 10:55 GMT+01:00 Simone Tiraboschi :
>>>
 Sure, but first we need to understand what it's happening: in our CI
 process everything is fine so I think it's something specific to your env.
 Could you please share your:
 /var/log/libvirt/qemu/HostedEngine.log
 /var/log/messages

 thanks,
 Simone


 On Tue, May 2, 2017 at 11:38 AM, Manuel Luis Aznar <
 manuel.luis.az...@gmail.com> wrote:

> Ok thankyou.
>
> I suppose that this problem will probably be solved in a future release.
>
> Thanks,
> Manuel
>
> 2017-05-02 10:35 GMT+01:00 Simone Tiraboschi :
>
>>
>>
>> On Tue, May 2, 2017 at 11:30 AM, Manuel Luis Aznar <
>> manuel.luis.az...@gmail.com> wrote:
>>
>>> Hello there again,
>>>
>>> Yes, as I said, I have done several clean installations and the VM
>>> engine sometimes starts without any problem. So, Simone, any
>>> recommendation to make the engine VM start properly?
>>>
>>> While it is installing, the HA agent and HA broker are down; would I get
>>> a good result by starting the services myself?
>>>
>>> Any help from Simone or somebody would be appreciated
>>> Thanks for all in advance
>>> Manuel Luis Aznar
>>>
>>
>> I suggest to check libvirt logs.
>>
>>
>>>
>>> 2017-05-02 7:54 GMT+01:00 Simone Tiraboschi :
>>>


 On Mon, May 1, 2017 at 3:14 PM, Manuel Luis Aznar <
 manuel.luis.az...@gmail.com> wrote:

> Hello there,
>
> I have been searching the internet with Google for why my
> installation of ovirt-hosted-engine is failing.
>
> I have found this link:
>
>  https://www.mail-archive.com/users@ovirt.org/msg40864.html
> (Hosted engine install failed; vdsm upset about broker)
>
> It seems to be the same error...
>
> So to knarra and Jamie Lawrence my question is:
>
> Did you manage to discover the problem?? In my installation I
> am using NFS and not Gluster...
>
> I have read the error and it is the same error
> "BrokerConnectionError: ...". The ovirt-ha-agent and ovirt-ha-broker 
> did
> 

Re: [ovirt-users] [oVirt4.1] Does oVirt 4.1 support iSCSI offload?

2017-05-06 Thread Yaniv Kaul
On Wed, May 3, 2017 at 7:57 PM, wodel youchi  wrote:

> Hi again, and sorry for the delay, I was sick, actually I am still
> sick :-(
>
> Just to have more details: from what I have read about iSCSI offload,
> there are two modes, depending on the hardware (the adapter) you have.
> Using VMware terminology, no offense :-), there are:
>
> - Dependent mode: in this mode the adapter still needs the software iSCSI
> initiator to use the network configuration of the NIC port, so the discovery
> is done over TCP and the rest on an offload driver like bnx2i.
>

This one is similar to TCP offload - the iSCSI processing is done on the
HW, the rest is done in software and is supported - it looks like regular
iSCSI to oVirt.

>
> - Independent mode: in this mode the adapter handles all the work by
> itself, including the network part; it will have its own IP address and so
> on, and it does not need anything from the OS or the software iSCSI
> initiator. The configuration is made in the BIOS of the adapter, and the
> connection is made before the OS is up.
>

This one indeed is transparent to the OS, and it looks like FC to oVirt.


>
> Does oVirt support both modes? And are there any differences
> when using them, I mean during setup?
>

Both are supported. The former looks like regular iSCSI, the latter like FC
to oVirt.
Y.


>
> Regards.
>
>
> 2017-04-30 7:47 GMT+01:00 Yaniv Kaul :
>
>>
>>
>> On Sat, Apr 29, 2017 at 7:27 PM, wodel youchi 
>> wrote:
>>
>>> Hi,
>>>
>>> We have two hypervisors, each one has a 10Gbe nic card with two ports,
>>> the cards support iSCSI offload.
>>>
>>> Does oVirt 4.1 support this?
>>>
>>
>> Yes.
>>
>>
>>> if yes how can someone use it on hosted-engine deployment? the VM engine
>>> will be in the SAN targeted by these cards.
>>>
>>
>> Since iSCSI offloading is configured before the OS even loads, to the OS
>> (and therefore oVirt hypervisors) it seems that the storage is 'just
>> connected' - same as FC for all we care.
>>
>> Y.
>>
>>
>>>
>>> Regards.
>>>
>>>


Re: [ovirt-users] VM Template Error

2017-05-07 Thread Yaniv Kaul
On Thu, May 4, 2017 at 4:57 PM, Bryan Sockel  wrote:

> I have been moving around my storage setup, and this storage domain does
> not exist in my environment anymore.  Not sure if this VM was
> created on this storage domain and then moved.
>
> I am attempting to create a template on a storage domain that does
> currently exist in my environment.  How would I remove the reference to
> this storage domain so it is not used?
>

You can probably right-click on it and select 'Destroy'.
Y.


>
>
>
> -Original Message-
> From: Shahar Havivi 
> To: Michal Skrivanek 
> Cc: Bryan Sockel , users 
> Date: Thu, 4 May 2017 08:19:34 +0300
> Subject: Re: [ovirt-users] VM Template Error
>
> According to the log, the storage domain (where the disk is stored) does
> not exist:
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'2de6ad97-1f33-41e5-b021-bacec14ce6e4',)
>
> Are all the storage domains active in your data center?
> Did you move the disk, or remove and add a different storage domain?
>
> On Wed, May 3, 2017 at 8:30 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>>
>>
>>
>> On 3 May 2017, at 16:09, Bryan Sockel  wrote:
>>
>> Hi. The VM does exist and is bootable.  The VM only has one disk and it
>> is enabled.
>>
>>
>> Yes, but the template’s copy of that disk somehow can’t be created.
>> Perhaps share more of the vdsm logs to see why the image copy failed.
>>
>> Thanks,
>> michal
>>
>>
>>
>>
>> Thanks
>>
>>
>>
>> -Original Message-
>> From: Shahar Havivi 
>> To: Bryan Sockel 
>> Cc: "users@ovirt.org" 
>> Date: Wed, 3 May 2017 09:05:34 +0300
>> Subject: Re: [ovirt-users] VM Template Error
>>
>> It looks like vdsm complains that the image does not exist.
>> Can you run the VM with no errors? (If the VM has more than one disk,
>> make sure each one exists and is accessible.)
>>
>> If all is good, please attach the full vdsm and engine logs.
>>
>>  Shahar.
>>
>> On Tue, May 2, 2017 at 6:29 PM, Bryan Sockel 
>> wrote:
>>>
>>> Hi,
>>>
>>> Having an issue creating a template from a VM.
>>>
>>> I am getting the following errors:
>>> engine.log
>>> 2017-05-02 09:40:10,059-05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (default task-24) [2ef2fce]
>>> EVENT_ID: USER_ADD_VM_TEMPLATE(48), Correlation ID:
>>> 176fdfa7-0467-48f5-9ecc-901e86768c28, Job ID:
>>> fed9a848-1971-495a-a391-be1fa4a67908, Call Stack: null, Custom Event
>>> ID: -1, Message: Creation of Template test from VM Windows-10-Template was
>>> initiated by admin@internal-authz.
>>>
>>> 2017-05-02 09:48:43,059-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (org.ovirt.thread.pool-7-thread-20)
>>> [] EVENT_ID: USER_ADD_VM_TEMPLATE_FINISHED_FAILURE(52), Correlation ID:
>>> 176fdfa7-0467-48f5-9ecc-901e86768c28, Job ID:
>>> fed9a848-1971-495a-a391-be1fa4a67908, Call Stack: null, Custom Event
>>> ID: -1, Message: Failed to complete creation of Template test from VM
>>> Windows-10-Template.
>>>
>>> Events Log
>>> ID 10803 - VDSM command DeleteImageGroupVDS failed: Image does not exist
>>> in domain: u'image=6aa525ad-e7f1-432b-959a-2223f7e77083,
>>> domain=e371d380-7194-4950-b901-5f2aed5dfb35'
>>> ID 10802 - VDSM vm-host-colo-2 command HSMGetAllTasksStatusesVDS failed:
>>> low level Image copy failed
>>>


Re: [ovirt-users] High latency on storage domains and sanlock renewal error

2017-05-08 Thread Yaniv Kaul
On Sun, May 7, 2017 at 1:27 PM, Stefano Bovina  wrote:

> Sense data are 0x0/0x0/0x0


Interesting - first time I'm seeing 0/0/0. The 1st is usually 0x2 (see
[1]), and then the rest [2], [3] make sense.

A Google search found another user with a CLARiiON hitting the exact same
error [4], so I'm leaning toward a misconfiguration of multipathing/CLARiiON
here.

Is your multipathing configuration working well for you?
Are you sure it's a EL7 configuration? For example, I believe you should
have rr_min_io_rq and not rr_min_io .
Y.

[1] http://www.t10.org/lists/2status.htm
[2] http://www.t10.org/lists/2sensekey.htm
[3] http://www.t10.org/lists/asc-num.htm
[4] http://www.linuxquestions.org/questions/centos-111/
multipath-problems-4175544908/
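For comparison, an EL7-era ALUA device stanza would look roughly like the sketch below. This is illustrative only: the exact vendor/product strings and values must come from your array vendor's EL7 guide, and note that VDSM rewrites /etc/multipath.conf on oVirt hosts unless the file is marked private (historically with a "# VDSM PRIVATE" or "# RHEV PRIVATE" first line):

```
devices {
    device {
        vendor                 "DGC"        # EMC CLARiiON/VNX family
        product                ".*"
        path_grouping_policy   group_by_prio
        prio                   alua
        path_checker           emc_clariion
        failback               immediate
        rr_min_io_rq           1            # EL7 name; rr_min_io is the older knob
        no_path_retry          60
    }
}
```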


Re: [ovirt-users] High latency on storage domains and sanlock renewal error

2017-05-08 Thread Yaniv Kaul
On Mon, May 8, 2017 at 11:50 AM, Stefano Bovina  wrote:

> Yes,
> this configuration is the one suggested by EMC for EL7.
>

https://access.redhat.com/solutions/139193 suggests that for ALUA, the path
checker needs to be different.

Anyway, it is very likely that you have storage issues - they need to be
resolved first and I believe they have little to do with oVirt at the
moment.
Y.
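For comparison, an EL7-era multipath.conf stanza for an EMC CLARiiON/VNX array (vendor string "DGC") might look like the sketch below. The values are illustrative only; the array vendor's EL7 host-connectivity guide is the authority here:

```
# illustrative EL7 stanza for an EMC CLARiiON/VNX ("DGC") array
devices {
    device {
        vendor               "DGC"
        product              ".*"
        path_grouping_policy group_by_prio
        prio                 alua
        path_checker         emc_clariion
        failback             immediate
        rr_min_io_rq         1     # EL7 knob; rr_min_io is the older name
    }
}
```

After editing, reload with `systemctl reload multipathd` and compare `multipath -ll` output against what the array expects.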


>
> By the way,
> "The parameters rr_min_io vs. rr_min_io_rq mean the same thing but are
> used for device-mapper-multipath on differing kernel versions." and
> rr_min_io_rq default value is 1, rr_min_io default value is 1000, so it
> should be fine.
>
>
> 2017-05-08 9:39 GMT+02:00 Yaniv Kaul :
>
>>
>> On Sun, May 7, 2017 at 1:27 PM, Stefano Bovina  wrote:
>>
>>> Sense data are 0x0/0x0/0x0
>>
>>
>> Interesting - first time I'm seeing 0/0/0. The 1st is usually 0x2 (see
>> [1]), and then the rest [2], [3] make sense.
>>
>> A Google search found another user with an EMC CLARiiON hitting the exact
>> same error [4], so I'm leaning toward a misconfiguration of
>> multipathing/CLARiiON here.
>>
>> Is your multipathing configuration working well for you?
>> Are you sure it's an EL7 configuration? For example, I believe you should
>> have rr_min_io_rq rather than rr_min_io.
>> Y.
>>
>> [1] http://www.t10.org/lists/2status.htm
>> [2] http://www.t10.org/lists/2sensekey.htm
>> [3] http://www.t10.org/lists/asc-num.htm
>> [4] http://www.linuxquestions.org/questions/centos-111/multipath-problems-4175544908/
>>
>
>


Re: [ovirt-users] do we need some documentation mainteiners?

2017-05-09 Thread Yaniv Kaul
On Tue, May 9, 2017 at 5:59 PM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

> The documentation is always a good laugh at oVirt. Look for RHEL instead.
>

After you finish laughing, you could improve it.
I have no doubt that you have both the skills as well as the experience and
knowledge using oVirt to do so and provide significant value to the
community.
Y.


>
> Le 9 mai 2017 à 16:13, Juan Pablo  a écrit :
>
> Team, is it just me, or are the documentation pages not being updated? Many
> are outdated... how can we collaborate?
>
> whats up with http://www.ovirt.org/documentation/admin-guide/ ?
>
> regards,
> JP


Re: [ovirt-users] slow kerberos authentication

2017-05-12 Thread Yaniv Kaul
On May 11, 2017 8:25 PM, "Fabrice Bacchella" 
wrote:

I'm using kerberos authentication in ovirt for the URL
/sso/oauth/token-http-auth, but kerberos is done in Apache using
auth_gssapi_module and it's quite slow, about 6s for a request.

I'm trying to understand whether it's Apache or ovirt-engine that is slow.
Is there a way to get response times measured for HTTP requests inside
oVirt, rather than as seen from Apache?


In 4.1, look under /var/log/httpd; there should be an oVirt-specific log
file for exactly this: end-to-end latency of requests.
Y.
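If that log isn't available, Apache's mod_log_config can record end-to-end latency itself via %D (microseconds). A hedged sketch of summarizing such a log follows; the line layout is assumed (a combined-style format ending in %D), so the regex must be adjusted to your actual LogFormat:

```python
import re

# Assumed format: combined log with %D (request duration in
# microseconds) as the last field. Adjust LINE to your LogFormat.
LINE = re.compile(r'"(?P<verb>\S+) (?P<path>\S+)[^"]*" \d{3} \S+ (?P<usec>\d+)$')

def latencies(lines):
    """Group per-request latencies (ms) by request path."""
    out = {}
    for line in lines:
        m = LINE.search(line)
        if m:
            out.setdefault(m.group("path"), []).append(int(m.group("usec")) / 1000.0)
    return out

sample = [
    '1.2.3.4 - - [12/May/2017:10:00:00 +0200] "POST /ovirt-engine/sso/oauth/token HTTP/1.1" 200 512 6100000',
    '1.2.3.4 - - [12/May/2017:10:00:07 +0200] "GET /ovirt-engine/api HTTP/1.1" 200 2048 90000',
]
for path, ms in latencies(sample).items():
    print(path, max(ms), "ms")
```

A 6-second token request would stand out immediately in such a summary, and would tell you whether the time is spent before or after Apache hands off to the engine.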




Re: [ovirt-users] User story

2017-05-13 Thread Yaniv Kaul
Hi,

First of all, thanks for sharing. It's always good to get feedback,
especially when it's balanced and with specific examples and comparisons.

Secondly, I do believe you have touched on what I believe is a conceptual
difference oVirt has, which translates to a gap in the experience you have
described: when managing 2-3 hosts, it is more intuitive and easier to just
configure each separately (and there's very little to configure anyway, and
the number of hosts is low), than to configure on a higher level (in oVirt's
case, the data center and cluster level) and apply - who needs either when
you have 2-3 hosts, right?

In a sense, the hyper-converged (gdeploy - see
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
) provides a good 'day 1' experience I believe - but is indeed limited to
the hyper-converged deployment type. It'd be a good idea to expand it to
the general case of 2-3 hosts, I reckon.

Perhaps we need to go further and somehow hide both data center and cluster
(for X hosts, where X is lower than... 5?) assuming you'd have only a
single DC and a single cluster - and present their options as 'global'?
Once you go above 5 hosts we'll expand the options and display the bigger
hierarchy?

We've had the idea of 'ovirt-lite' years ago, and it never really
materialized - perhaps we should revisit it. I think it's easy
technologically, a bit more challenging to get right the improved user
experience. I can certainly see the use cases of both small labs, remote
offices and proof-of-concept setups.


As for the installation, I would really like to see:
1. Install an OS -or- install oVirt node
2. Go to http://
3. Installation wizard.

This is exactly (again) what gdeploy provides, as well as hosted-engine -
but we probably need to streamline further more and add regular engine
setup to it.

Thanks again,
Y.



On Sat, May 13, 2017 at 9:04 PM, Johannes Spanier  wrote:

> Hi oVirt community.
>
> I did a short series for tweets @jospanier judging my first time user
> experience with several virtualization platforms and was asked by Sandro
> Bonazzola to elaborate a bit further than what fits into 140 chars.
>
> I had a specific use case: The small-ish learning lab with only 2-3 nodes
> and it needs to be free. I also wanted live migration to stay flexible with
> my hosts.
>
> I currently use my lab for to run ~10 virtual CSR1000V routers on free
> ESXi in addition to some real router hardware. I want to expand the lab to
> be able to explore some other technologies as well like network automation,
> SDN, infrastructure as code and the likes.
>
> The lineup for the PoC was oVirt, ESXi, Openstack and Proxmox VE.
>
> In my tweets I was referring to a) the install procedure and b) the
> operational experience.
>
> Here is what I found. These findings are highly subjective and debatable.
> I am aware of that.
>
> Both ESXi and Proxmox VE are trivial to install. You grab the ISO image,
> use a tool like Rufus to make a bootable USB stick or use iLO virtual CD
> functionality and off you go. Both installers do not ask many questions and
> just do their job. After installation ESXi is all ready to run. Just open
> the WebGui and start deploying your first node. With Proxmox VE you get a
> TUI wizard guiding you through the last steps. After that the WebGui is
> ready and you can deploy your first VM immediately.
>
> I found oVirt a bit more involved to install. You have to install the
> Engine on one node and then register the other hosts with it. While that
> process is easy to handle it is a bit more work. A big thing for me was
> that at first glance there did not seem to be a "single node" install. My
> first impression was that I needed a minimum of two servers. Of course later
> I learned about the Hosted Engine and the All-In-One install.
>
> Do not get me wrong. First time oVirt installation is still easy to handle
> on a quiet afternoon.
>
> Openstack installation compared to that is a PITA nightmare. I tried both
> RDO (TripleO) and Fuel for setup but gave up after two days for both,
> confused about what I actually need to do for a start. Got some nodes
> running with Fuel but was not satisfied. I then followed the Openstack
> manual Install Guide. I have a day job, so it took me about 5 days to get
> through the whole procedure, but at least I understood what was going on and
> what I needed to do.
>
> So that was my "first day" experience with those.
> Now for the "second day" i.e. operation.
>
> ESXi and Proxmox VE are both very simple to understand. You usually do not
> need a manual to find your way around. Deploying a VM is a breeze. oVirt is
> pretty simple to understand too. But you have to wrap your head around the
> Data Center principle underpinning everything. It's just a bit more
> complicated. On one or two occasions while playing around it was unclear at
> first why my datacenter was offline and I had to consult the manual for
> that. One can immediately feel that m

Re: [ovirt-users] Regenerating SSL keys

2017-05-14 Thread Yaniv Kaul
On Sat, May 13, 2017 at 2:35 AM, Jamie Lawrence 
wrote:

> The key generated by the engine install ended up with a bad CN; it has a
> five-digit number appended to the host name, and no SAN.
>

The 5 random digits are supposed to be OK, and are actually a feature: they
ensure uniqueness if you re-generate (most likely after reinstalling your
Engine), as otherwise some browsers fail miserably if the CA cert mismatches
what they know.

SAN is being worked on - we are aware of Chrome 58 now requiring it.
I sincerely hope to see it in 4.1.2 (see
https://bugzilla.redhat.com/1449084).
Y.
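To see what your Engine currently presents, something along these lines works from any client (the hostname is a placeholder):

```
echo | openssl s_client -connect engine.example.com:443 2>/dev/null \
  | openssl x509 -noout -subject -text | grep -A1 'Subject Alternative Name'
```

If the grep prints nothing, the certificate has no SAN, which is exactly what Chrome 58 complains about.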



> I've lived with this through setup, but now I'm getting close to prod use,
> and need to clean up so that it is usable for general consumption. And the
> SPICE HTML client is completely busted due to this; that's a problem
> because we're mostly MacOS on the client side, and the Mac Spice client is
> unusable for normal humans.
>
>  I'm wary of attempting to regenerate these manually, as I don't have a
> handle on how the keys are used by the various components.
>
> What is the approved method of regenerating these keys?
>
> -j


Re: [ovirt-users] Engine deploy tab is missing when restoring from db-backup?

2017-05-14 Thread Yaniv Kaul
On Thu, May 11, 2017 at 12:14 PM, gflwqs gflwqs  wrote:

> Hi list,
> I have restored my 4.1 engine.
> After I restored the engine, the engine deploy tab is missing.
>

Can you explain what tab exactly you are referring to?
Y.


> And since it is no longer possible to deploy through the command line, I am
> not able to deploy new hosts.
> - Does anybody know a workaround to deploy new hosts?
> - Is this a bug, or am I doing something wrong when restoring?
>
> This is how i restored:
> [root@ovirt-engine ~]# engine-backup --mode=restore --file=backup.tar
> --log=restore.log --provision-db --provision-dwh-db --restore-permissions
>
>
> Thanks!
> Christian
>
>


Re: [ovirt-users] Feature: enhanced OVA support

2017-05-15 Thread Yaniv Kaul
On Mon, May 15, 2017 at 10:43 AM, Eduardo Mayoral  wrote:

> As a user I must say this is a very welcome feature. I appreciate a lot
> that oVirt already makes it very easy to import VMs from VMWare .
>
> Allowing to export oVirt machines in OVA format, which as of today is the
> "lingua franca" of VM migrations offers a nice and clear path out of oVirt.
> Having an easy way to migrate out of oVirt makes me MORE likely to keep on
> using oVirt, not less.
>

There is some confusion around the OVA format that I'd like to clarify. OVA
is a very flexible format, in the sense that it's essentially an XML file
describing the VM, but the content within may be (and in practice is)
whatever one feels like. A concrete example is a NIC: it can be (in
VMware's case) a vmxnet3 device, and in our case a virtio-net device. So the
fact that there's a single XML that describes a VM offers no assurance of
interchangeability.
oVirt, via virt-v2v integration, took the additional step to actually
perform a conversion from VMware format and drivers and specs to KVM and
oVirt.

The other way is left as an exercise for the reader, I'm afraid.

The goal of this feature, among others, is to more easily share VMs and
templates between setups for example.
Y.
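Concretely, an OVA is just a tar archive whose descriptor is an .ovf XML file, with the disk images alongside it in whatever format the producer chose. A minimal sketch of pulling the descriptor out (the toy archive here is built in memory purely for illustration):

```python
import io
import tarfile

def ovf_descriptor(fileobj):
    """Return (name, xml_text) of the first .ovf member of an OVA archive."""
    with tarfile.open(fileobj=fileobj) as tar:
        for member in tar.getmembers():
            if member.name.lower().endswith(".ovf"):
                return member.name, tar.extractfile(member).read().decode("utf-8")
    raise ValueError("no OVF descriptor found")

# Build a toy OVA in memory to demonstrate the structure.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"<Envelope/>"
    info = tarfile.TarInfo("vm.ovf")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)
print(ovf_descriptor(buf))  # ('vm.ovf', '<Envelope/>')
```

The XML is easy to get at; interpreting the device list it describes is where the real conversion work lives.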


Eduardo Mayoral Jimeno (emayo...@arsys.es)
> Systems administrator, Platforms department, Arsys Internet. +34 941 620 145 ext. 5153
>
> On 14/05/17 15:56, Arik Hadas wrote:
>
> Hi everyone,
>
> We would like to share our plan for extending the currently provided
> support for OVA files with:
> 1. Support for uploading OVA.
> 2. Support for exporting a VM/template as OVA.
> 3. Support for importing OVA that was generated by oVirt (today, we only
> support those that are VMware-compatible).
> 4. Support for downloading OVA.
>
> This can be found on the feature page
> 
> .
>
> Your feedback and cooperation will be highly appreciated.
>
> Thanks,
> Arik
>
>
>


Re: [ovirt-users] [ovirt-devel] Feature: enhanced OVA support

2017-05-16 Thread Yaniv Kaul
On Mon, May 15, 2017 at 10:52 PM, Arik Hadas  wrote:

>
>
> On Mon, May 15, 2017 at 8:05 PM, Richard W.M. Jones 
> wrote:
>
>> On Sun, May 14, 2017 at 04:56:56PM +0300, Arik Hadas wrote:
>> > Hi everyone,
>> >
>> > We would like to share our plan for extending the currently provided
>> > support for OVA files with:
>> > 1. Support for uploading OVA.
>> > 2. Support for exporting a VM/template as OVA.
>> > 3. Support for importing OVA that was generated by oVirt (today, we only
>> > support those that are VMware-compatible).
>> > 4. Support for downloading OVA.
>> >
>> > This can be found on the feature page
>> > > rt/enhance-import-export-with-ova/>
>> > .
>> >
>> > Your feedback and cooperation will be highly appreciated.
>>
>> The plan as stated seems fine, but I have some reservations which I
>> don't think are answered by the page:
>>
>> (1) How will oVirt know the difference between an OVA generated
>> by oVirt and one generated by VMware (or indeed other sources)?
>> A VMware OVF has an XML comment:
>>
>> 
>>
>> but not any official metadata that I could see.
>
>
> So that's something that we have not decided on yet.
> Indeed, we need some indication of the system that generated the OVA and
> it makes sense to have it inside the OVF. I thought about a field that is
> part of the VM configuration, like the "origin" field of VMs in
> ovirt-engine. Having a comment like you mentioned is also an option.
>
>
>>
>> (By the way, I don't think importing via virt-v2v vs directly will be
>> any quicker.  The v2v conversion / device installation takes only a
>> fraction of the time.  Most of the time is consumed doing the format
>> conversion from VMDK to qcow2.  However you are correct that when you
>> know that the source is oVirt/KVM, you should not run virt-v2v.)
>>
>
> Note that the disks within the OVA will be of type qcow2. So not only that
> no v2v conversion / device installation is needed, but also no format
> conversion will be needed on the import and upload flows.
>
>
>>
>> (2) I think you're going to have a lot of fun generating OVAs which
>> work on VMware.  As Yaniv says, the devices aren't the same so you'd
>> be having to do some virt-v2v -like driver installation / registry
>> modification.  Plus the OVF file is essentially a VMware data dump
>> encoded as XML.  OVF isn't a real standard.  I bet there are a million
>> strange corner cases.  Even writing VMDK files is full of pitfalls.
>>
>> VMware has a reasonable V2V import tool (actually their P2V tooling is
>> very decent).  Of course it's proprietary, but then so is their
>> hypervisor.  Maybe oVirt can drive their tools?
>>
>
> No no, that fun is not part of the plan :)
> The OVAs we'll generate are supposed to contain:
> 1. OVF - it should be similar to the one virt-v2v generates for oVirt
> (that is similar to the one we use internally in oVirt for snapshots and
> for backup within storage domains, i.e., OVF-stores). We will definitely
> need some extensions, like an indication that the OVA was generated by
> oVirt. We may make some tweaks here and there, like removing network
> interfaces from the list of resources. But we already are generally aligned
> with what the specification says about OVFs.
>
> 2. qcow2 disks - thus, no conversion (device installation) and no format
> conversion will be needed (we may consider to convert them to raw later on,
> since they are expected to be the base volumes of the disks, not sure it
> worth the effort though).
>

I wonder if it's worth looking into several options when exporting:
1. Run virt-sysprep with 'harmless' operations such as
'abrt-data,backup-files,crash-data,tmp-files'

> 2. Run virt-sparsify on the disks.
3. Convert to compressed qcow2. In most cases it certainly can have a
significant saving (1:2 should be reasonable, but some can get 1:6 even) of
disk space (and later, file transfer times, etc).
If we are already converting, we can also convert from qcow2 to qcow2v3
(compat 0.10 to compat 1.1).

Y.
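Assuming the standard libguestfs/QEMU tooling is available, steps 2 and 3 could look like the illustrative commands below (file names are placeholders):

```
# sparsify in place, then produce a compressed compat-1.1 qcow2 for the OVA
virt-sparsify --in-place disk.qcow2
qemu-img convert -O qcow2 -c -o compat=1.1 disk.qcow2 disk-for-ova.qcow2
```

Compression trades export-time CPU for smaller archives and faster transfers, which is usually the right trade for an export path.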


> I will add this to the feature page.
>
> We are aware of the fact that with this design, the OVAs could not be
> directly consumed by others, like VMware. But this could make it easier for
> them to make the needed conversion - they won't need to query the VM
> configuration from oVirt and won't need to lookup the disks inside oVirt's
> storage domains. Anyway, we assume that this conversion is done by other
> tools.
>
>
>> Rich.
>>
>> --
>> Richard Jones, Virtualization Group, Red Hat
>> http://people.redhat.com/~rjones
>> Read my programming and virtualization blog: http://rwmj.wordpress.com
>> Fedora Windows cross-compiler. Compile Windows programs, test, and
>> build Windows installers. Over 100 libraries supported.
>> http://fedoraproject.org/wiki/MinGW
>>
>
>

Re: [ovirt-users] Disk image upload

2017-05-24 Thread Yaniv Kaul
On Wed, May 24, 2017 at 6:21 AM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:

> Hi,
>
> I am trying to upload a Disk image using Web Admin UI. It is pausing the
> process and throwing up this error.
>
> "Unable to upload image to disk 3e71c278-e03b-40e5-afaa-a13bed115229 due
> to a network error. Make sure ovirt-imageio-proxy service is installed and
> configured, and ovirt-engine's certificate is registered as a valid CA in
> the browser. The certificate can be fetched from https://
> /ovirt-engine/services/pki-resource?resource=ca-certificate&
> format=X509-PEM-CA
>

Did you follow the instructions in the error message?
- Is ovirt-imageio-proxy installed and configured?
- Is ovirt-engine's certificate registered as a valid CA in the browser?
Y.
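A quick way to check both points (ENGINE_FQDN is a placeholder for your Engine's hostname):

```
# on the Engine machine
systemctl is-active ovirt-imageio-proxy

# fetch the CA certificate, then import ca.pem into the browser's trusted CAs
curl -k -o ca.pem 'https://ENGINE_FQDN/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
```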




Re: [ovirt-users] Is 3.6.7 direct to 4.1.2 a supported upgrade path?

2017-05-24 Thread Yaniv Kaul
On Wed, May 24, 2017 at 5:36 AM, Richard Chan 
wrote:

> Time to take the plunge to 4.x — is it supported to upgrade from 3.6.7 to
> 4.1.2 directly?
>

I'm not sure what 'supported' means. I don't think many have tried - it
probably works though. I suggest testing on a non-production system first.
Y.


>
>
>
>
> --
> Richard Chan
>
>


Re: [ovirt-users] perf tool ?

2017-05-25 Thread Yaniv Kaul
On Wed, May 24, 2017 at 5:12 PM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

> I'm playing with perf in VMs and getting inconsistent results. But I wonder
> if it's a KVM, oVirt or hardware problem.
>
> On a ovirt's vm:
> $ sudo perf list | grep Hardware | wc -l
> 1
> $ lscpu
> ...
> Model name:Intel Core Processor (Haswell, no TSX)
>
> On another ovirt's vm:
> $ sudo perf list | grep Hardware | wc -l
> 27
> $ lscpu
> ...
> Model name:AMD Opteron 23xx (Gen 3 Class Opteron)
>
> On a libvirtm vm:
> sudo perf list | grep Hardware | wc -l
> 1
> lscpu
> ...
> Model name:Westmere E56xx/L56xx/X56xx (Nehalem-C)
> ...
>
> Looks like Intel CPUs don't expose hardware events. Is there an option on
> KVM or oVirt to help with that?
>

Perhaps you can try with a VDSM hook adding the relevant events[1] to the
libvirt XML?
Y.

[1] https://libvirt.org/formatdomain.html#elementsPerf
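A minimal sketch of the XML edit such a hook would make is below. The event names are examples from the libvirt list in [1], and whether your kernel/QEMU actually exposes them to the guest is a separate question; only the XML manipulation is shown here:

```python
import xml.dom.minidom

def enable_perf_events(domxml, events=("cpu_cycles", "instructions")):
    """Append a libvirt <perf> section enabling the given events in-place."""
    domain = domxml.getElementsByTagName("domain")[0]
    perf = domxml.createElement("perf")
    for name in events:
        event = domxml.createElement("event")
        event.setAttribute("name", name)
        event.setAttribute("enabled", "yes")
        perf.appendChild(event)
    domain.appendChild(perf)

# In an actual VDSM before_vm_start hook, the surrounding code would be
# (hooking is VDSM's helper module, available only on the host):
#     import hooking
#     domxml = hooking.read_domxml()
#     enable_perf_events(domxml)
#     hooking.write_domxml(domxml)
demo = xml.dom.minidom.parseString("<domain type='kvm'><name>vm1</name></domain>")
enable_perf_events(demo)
print(demo.documentElement.toxml())
```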



Re: [ovirt-users] Hi everyone, new guy in the mail list

2017-05-25 Thread Yaniv Kaul
On Wed, May 24, 2017 at 9:28 PM, Odilon Junior 
wrote:

> Hi everyone, I've just subscribed to the mailing list. I was going to ask
> one thing about oVirt in the IRC channel but no one was there.
>
> I've been using oVirt since 3.1. Today we have one environment with 1 oVirt
> Engine and 3 bare-metal hosts that we use for virtualization, running oVirt
> 3.6.3. We have 3 NFS shares from our colocation provider that are used as
> storage.
>
> Our colocation provider will be performing an emergency maintenance to
> upgrade the firmware on the routers that provide our private network. The
> NFS service runs on this private network; the downtime should be no more
> than 3 minutes.
>
> I'm concerned: what is the best way to approach this? Should we shut down
> all the VMs and the hypervisors, or can we just let oVirt handle this?
> The communication between the Ovirts and the Engine are from another
> network that will not be affected.
>

You should put everything into maintenance.
Unlike other storage types, NFS will hang and you'll eventually have stale
mounts, which cannot be recovered from.
Y.




Re: [ovirt-users] Can I upload an ISO image using the web portal

2017-05-25 Thread Yaniv Kaul
On Thu, May 25, 2017 at 7:03 PM, Dan Sullivan  wrote:

> I'm trying to upload an ISO image that I have and keep getting a connection
> refused error using ovirt-iso-uploader.  I was wondering if there was a way
> to use the web admin portal to do the upload?
>

Not yet.
But we'd be happy to take a look at the error and see what the issue is.
Y.


>
> Dan


Re: [ovirt-users] [Cloud-init] fail on CentOS

2017-05-25 Thread Yaniv Kaul
On Thu, May 25, 2017 at 2:40 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> OS: CentOS 6.8
>
> For me , cloud-init is failing but not getting idea where.
> Would you please help me to enable logs of cloud-init ?
>
> cloud-init.log file is empty and I have also verified
> /etc/cloud/cloud.cfg.d/05_logging.cfg.
> It is the default and looks correct to me.
>

Anything in the guest's /var/log/messages ?
Y.


>
> Thanks,
> ~Rohit
>


Re: [ovirt-users] Can I upload an ISO image using the web portal

2017-05-26 Thread Yaniv Kaul
On May 26, 2017 4:48 PM, "Dan Sullivan"  wrote:

Here is my configuration and the steps I have tried.

oVirt engine version: 3.6.6.2-1.el7.centos
Hosted engine on a Dell 610 server
Default data center
Default cluster
hosts: hosted_engine_1
network: ovirtmgmt (default)
3 storage domains - all hosted locally and served via NFS (400+GB free on
partition)
hosted_storage Type=data
ISO_storage type=ISO
local_vm_storage type=data(master)
Two VMs (not including engine host)
HostedEngine (hosted by hosted_engine_1) FQDN=localhost.localdomain.localdomain


That's not an ideal host name. Is that really the name?

vm_one (hosted by hosted_engine_1)
vm_two (hosted by hosted_engine_1)
No pools
One blank template
Two users
admin (admin@internal-authz, internal-authz, *)
everyone (, , *)

I first try and list the domains

ovirt-iso-uploader list
Please provide the REST API username for oVirt Engine (CTRL+D to abort):
admin@internal-authz
Please provide the REST API password for the admin@internal-authz oVirt
Engine user (CTRL+D to abort):
ERROR: Problem connecting to the REST API at https://localhost:443/api
[ERROR]::oVirt API connection failure, (7, 'Failed connect to
localhost:443; Connection refused')


How do you connect to the UI for example?
The tool fails to even connect to the engine. As I assume it is up and
running, I reckon it's a name-resolution problem.
Y.



I've tried various permutations of this command with similar results.  I've
added -u with admin, admin@internal, admin@internal-authz,
admin@internal-authz.local... I've added -r with various permutations of
the host info. I've added -i ISO_storage, etc.

Any help or suggestions would be appreciated.

Dan

On 05/25/2017 04:24 PM, Yaniv Kaul wrote:



On Thu, May 25, 2017 at 7:03 PM, Dan Sullivan  wrote:

> I'm trying to upload an ISO image that I have and keep getting a connection
> refused error using ovirt-iso-uploader.  I was wondering if there was a way
> to use the web admin portal to do the upload?
>

Not yet.
But we'd be happy to take a look at the error and see what the issue is.
Y.


>
> Dan


Re: [ovirt-users] Can I upload an ISO image using the web portal

2017-05-27 Thread Yaniv Kaul
On May 26, 2017 7:19 PM, "Dan Sullivan"  wrote:



On 05/26/2017 12:06 PM, Yaniv Kaul wrote:

Two VMs (not including engine host)

HostedEngine (hosted by hosted_engine_1) FQDN=localhost.localdomain.localdomain


That's not an ideal host name. Is that really the name?

What is not ideal?  HostedEngine? hosted_engine_1? localhost...?


localhost.localdomain.localdomain


vm_one (hosted by hosted_engine_1)
vm_two (hosted by hosted_engine_1)


I first try and list the domains

ovirt-iso-uploader list
Please provide the REST API username for oVirt Engine (CTRL+D to abort):
admin@internal-authz
Please provide the REST API password for the admin@internal-authz oVirt
Engine user (CTRL+D to abort):
ERROR: Problem connecting to the REST API at https://localhost:443/api
[ERROR]::oVirt API connection failure, (7, 'Failed connect to
localhost:443; Connection refused')


How do you connect to the UI for example?

I point firefox to guest147...com


Can you try and edit the conf file to point to it as well?
Y.
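For reference, ovirt-iso-uploader reads its defaults from /etc/ovirt-engine/isouploader.conf; a minimal sketch is below. The section and key names are from memory of the 3.6-era tool and worth double-checking against the comments shipped in the file itself:

```
# /etc/ovirt-engine/isouploader.conf (key names worth verifying against
# the comments in the file itself)
[ISOUploader]
user=admin@internal
engine=guest147...com:443
```

With the engine set to the real FQDN rather than localhost, the tool should stop trying to connect to the wrong host.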


The tool fails to even connect to the engine. As I assume it is up and
running, I reckon it's a name-resolution problem.
Y.

HostedEngine is up as are both vm_one and vm_two.

Dan


Re: [ovirt-users] no spm node in cluster and unable to start any vm or stopped storage domain

2017-05-29 Thread Yaniv Kaul
On Mon, May 29, 2017 at 3:08 PM, Moritz Baumann 
wrote:

> Hi,
> after an upgrade I get the following errors in the web gui:
>
> VDSM ovirt-node01 command SpmStatusVDS failed: (13, 'Sanlock resource read
> failure', 'Permission denied')
>

What do you see on sanlock.log?
What kind of storage do you have, that you experience a permission denied
error? Is it a file-system based one, with actual permission issue?
Y.


> VDSM ovirt-node03 command HSMGetAllTasksStatusesVDS failed: Not SPM
>
> These messages happen from all nodes.
>
> I can stop vms and migrate them but I cannot start any vm again
>
> How do I get back to a sane state where one node is the SPM?
>
> Best,
> Moritz


Re: [ovirt-users] Unable to add storage domains to Node Hosted Engine

2017-05-29 Thread Yaniv Kaul
On Mon, May 29, 2017 at 2:25 PM, Andy Gibbs  wrote:

> On 29 May 2017 08:22, Sandro Bonazzola wrote:
> > Hi, so if I understood correctly, you're trying to work on a single host
> > deployment right?
> > Or are you just trying to replace the bare metal all-in-one 3.6 in a
> > context with more hosts?
> > If this is the case, can you share your use case? I'm asking because for
> > single host installations there are other solutions that may fit better
> > than oVirt, like virt-manager or kimchi
> > (https://github.com/kimchi-project/kimchi)
>
> Sandro, thank you for your reply.
>
> I hadn't heard about kimchi before.  Virt-manager had been discounted as
> the user interface is not really friendly enough for non-technical people,
> which is important for us.  The simple web interface with oVirt, however,
> is excellent in this regard.
>
> I would say that the primary use-case is this: We want a server which
> individual employees can log into (using their active directory logins),
> access company-wide "public" VMs or create their own private VMs for their
> own use (if permitted).  Users should be able to start and stop the
> "public" VMs but not be able to edit or delete them.  They should only have
> full control over the VMs that they create for themselves.  And very
> importantly, it should be possible to say which users have the ability to
> create their own VMs.  Nice to have would be the ability for users to be
> able to share their VMs with other users.  Really nice to have would be a
> way of detecting whether VMs are in use by someone else before opening a
> console and stealing it away from the current user!  (Actually, case in
> point, the user web interface for oVirt 3.6 always starts a console for a
> VM when the user logs in, if it is the only one running on the server and
> which the user has access to.  I don't know if this is fixed in 4.1, but
> our work-around is to have a dummy VM that
> always runs and displays a graphic with helpful text for any that see it!
> Bit of a nuisance, but not too bad.  We never found a way to disable this
> behaviour.)
>

This sounds like a bug to me, if guest agent is installed and running on
the guest.
I'd appreciate if you could open a bug with all relevant details.


> We started off some years ago with a server running oVirt 3.4, now running
> 3.6, with the all-in-one plugin and had good success with this.  The hosted
> engine for oVirt 4.1 seemed to be the recommended "upgrade path" --
> although we did also start with entirely new server hardware.
>
> Ultimately once this first server is set up we will want to convert the
> old server hardware to a second node so that we can balance the load (we
> have a number of very resource-hungry VMs).  This would be our secondary
> use-case.  More nodes may follow in future.  However, we don't see the
> particular need to have VMs that migrate from node to node, and each node
> will most likely have its own storage domains for the VMs that run on it.
> But to have one central web interface for managing the whole lot is a huge
> advantage.
>
> Coming then to the storage issue that comes up in my original post, we are
> trying to install this first server platform, keeping the node, the hosted
> engine, and the storage all on one physical machine.  We don't (currently)
> want to set up a separate storage server, and don't really see the benefit
> of doing so.  Since my first email, I've actually succeeded in getting the
> engine to recognise the node's storage paths.  However, I'm not sure it
> really is the right way.  The solution I found was to create a third path,
> /srv/ovirt/engine, in addition to the data and iso paths.  The engine gets
> installed to /srv/ovirt/engine and then once the engine is started up, I
> create a new data domain at node:/srv/ovirt/data.  This then adds the new
> path as the master data domain, and then after thinking a bit to itself,
> suddenly the hosted_storage data domain appears, and after a bit more
> thinking, everything seems to get properly registered and works.  I can
> then also create the ISO storage domain.
>
> Does this seem like a viable solution, or have I achieved something
> "illegal"?
>

Sounds a bit of a hack, but I don't see a good reason why it wouldn't work
- perhaps firewalling issues. Certainly not a common or tested scenario.
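For reference, the single-box layout described above needs those paths exported over NFS to the host itself. A sketch of what /etc/exports might look like - the paths and options here are illustrative assumptions (vdsm requires the exported directories to be owned by uid/gid 36:36, i.e. vdsm:kvm); on a real host this content goes in /etc/exports, followed by `exportfs -r`:

```text
# Illustrative /etc/exports content for a single-host oVirt NFS setup.
# Paths are assumptions based on the layout described in the thread.
/srv/ovirt/engine  *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
/srv/ovirt/data    *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
/srv/ovirt/iso     *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
```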


>
> I am still not having much luck with my other problem(s) to do with
> restarting the server: it still hangs on shutdown and it still takes a very
> long time (about ten minutes) after the node starts for the engine to
> start.  Any help on this would be much appreciated.
>

Logs would be appreciated - engine.log, server.log, perhaps journal
entries. Perhaps there's a race between the NFS and Engine services?
Y.


>
> Thanks
> Andy
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

Re: [ovirt-users] oVirt v4.1.2 error: Failed to start service 'vdsmd'

2017-05-29 Thread Yaniv Kaul
Can you ensure virtualization is enabled on your host?
"Cannot detect if hardware supports virtualization" seems to hint at an
issue.
Y.
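One way to see what that error is complaining about is to look for the CPU virtualization flags directly - a generic Linux check, not oVirt-specific:

```python
import re

def virt_flags(path="/proc/cpuinfo"):
    """Count vmx (Intel VT-x) / svm (AMD-V) flags in /proc/cpuinfo.
    Returns 0 when the flags are absent or the file does not exist."""
    try:
        with open(path) as f:
            text = f.read()
    except OSError:
        return 0
    return len(re.findall(r"\b(vmx|svm)\b", text))

n = virt_flags()
if n:
    print("virtualization extensions present on %d logical CPUs" % n)
else:
    print("no vmx/svm flags - enable VT-x/AMD-V in the BIOS/UEFI")
```

libvirt's virt-host-validate tool performs a more thorough version of the same check.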

On Tue, May 30, 2017 at 7:15 AM,  wrote:

>  Failed to execute stage 'Environment setup': Failed to start service
> 'vdsmd'
>  Hosted Engine deployment failed.
>
> This is my 4th try installing oVirt...each time a different error
>
> logs attached
>
> Thanks
>


Re: [ovirt-users] live migration between datacenters with shared storage

2017-06-01 Thread Yaniv Kaul
On Thu, Jun 1, 2017 at 4:55 PM, Adam Litke  wrote:

> You cannot migrate VMs between Datacenters.  I think an export domain will
> be your easiest option, but there may be a way to upgrade in place (i.e.
> upgrade the engine while VMs are running, then upgrade the cluster) - I am
> not an expert in this area, though.
>

Why is an export domain better than detaching and attaching a storage domain?
Y.


>
> On Wed, May 31, 2017 at 4:08 PM, Charles Kozler 
> wrote:
>
>> I couldn't find a definitive answer on this, so I would like to inquire here
>>
>> I have gluster on my storage backend exporting the volume from a single
>> node via NFS
>>
>> I have a DC of 4.0 and I would like to upgrade to 4.1. I would ideally
>> like to take one node out of the cluster and build a 4.1 datacenter. Then
>> live migrate VMs from the 4.0 DC over to the 4.1 DC with zero downtime to
>> the VMs
>>
>> Is this possible? Or would I be safer to export/import VMs?
>>
>> Thanks!
>>
>>
>>
>
>
> --
> Adam Litke
>


Re: [ovirt-users] Seamless SAN HA failovers with oVirt?

2017-06-08 Thread Yaniv Kaul
On Tue, Jun 6, 2017 at 1:45 PM, Matthew Trent <
matthew.tr...@lewiscountywa.gov> wrote:

> Thanks for the replies, all!
>
> Yep, Chris is right. TrueNAS HA is active/passive and there isn't a way
> around that when failing between heads.
>

General comment - 30 seconds is A LOT. Many application-level IOs might
time out. Most storage systems strive to stay well below that.


>
> Sven: In my experience with iX support, they have directed me to reboot
> the active node to initiate failover. There are "hactl takeover" and "hactl
> giveback" commands, but reboot seems to be their preferred method.
>
> VMs going into a paused state and resuming when storage is back online
> sounds great. As long as oVirt's pause/resume isn't significantly slower
> than the 30-or-so seconds the TrueNAS takes to complete its failover,
> that's a pretty tolerable interruption for my needs. So my next questions
> are:
>
> 1) Assuming the SAN failover DOES work correctly, can anyone comment on
> their experience with oVirt pausing/thawing VMs in an NFS-based
> active/passive SAN failover scenario? Does it work reliably without
> intervention? Is it reasonably fast?
>

oVirt does not pause the VMs itself. qemu-kvm pauses the specific VM whose
issued IO is stuck. The reason is that the VM cannot reliably continue
without a risk of data loss (the data is in flight somewhere, right? Host
kernel, NIC buffers, etc.)


>
> 2) Is there anything else in the oVirt stack that might cause it to "freak
> out" rather than gracefully pause/unpause VMs?
>

We do monitor storage domain health regularly. We are working on ignoring
short hiccups (see https://bugzilla.redhat.com/show_bug.cgi?id=1459370 for
example).
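That hiccup-filtering behaviour can be sketched roughly as follows - hypothetical logic for illustration only, not the engine's actual implementation:

```python
class StorageMonitor:
    """Report 'down' only after failures persist beyond a grace period,
    so short storage hiccups don't trigger drastic recovery actions."""

    def __init__(self, grace_seconds=30):
        self.grace = grace_seconds
        self.failing_since = None

    def report(self, check_ok, now):
        if check_ok:
            self.failing_since = None
            return "up"
        if self.failing_since is None:
            self.failing_since = now
        # A failure shorter than the grace period is only "degraded".
        return "down" if now - self.failing_since >= self.grace else "degraded"

m = StorageMonitor(grace_seconds=30)
print(m.report(True, 0))    # up
print(m.report(False, 10))  # degraded (hiccup so far)
print(m.report(False, 45))  # down (outage persisted past the grace period)
```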


>
> 2a) Particularly: I'm running hosted engine on the same TrueNAS storage.
> Does that change anything WRT to timeouts and oVirt's HA and fencing and
> sanlock and such?
>
> 2b) Is there a limit to how long oVirt will wait for storage before doing
> something more drastic than just pausing VMs?
>

As explained above, generally, no. We can't do much, to be honest, and we'd
like to ensure there is no data loss.
That being said, in extreme cases hosts may become unresponsive - if you
have fencing, they may even be fenced (there's an option to fence a host
which cannot renew its storage lease). We have not seen that happen for
quite some time, and I don't anticipate short storage hiccups causing it.
Depending on your application, fencing may even be the right thing to do, btw.
Y.
Y.


>
> --
> Matthew Trent
> Network Engineer
> Lewis County IT Services
> 360.740.1247 - Helpdesk
> 360.740.3343 - Direct line
>
> 
> From: users-boun...@ovirt.org  on behalf of
> Chris Adams 
> Sent: Tuesday, June 6, 2017 7:21 AM
> To: users@ovirt.org
> Subject: Re: [ovirt-users] Seamless SAN HA failovers with oVirt?
>
> Once upon a time, Juan Pablo  said:
> > Chris, if you have active-active with multipath: you upgrade one system,
> > reboot it, check it came active again, then upgrade the other.
>
> Yes, but that's still not how a TrueNAS (and most other low- to
> mid-range SANs) works, so is not relevant.  The TrueNAS only has a
> single active node talking to the hard drives at a time, because having
> two nodes talking to the same storage at the same time is a hard problem
> to solve (typically requires custom hardware with active cache coherency
> and such).
>
> You can (and should) use multipath between servers and a TrueNAS, and
> that protects against NIC, cable, and switch failures, but does not help
> with a controller failure/reboot/upgrade.  Multipath is also used to
> provide better bandwidth sharing between links than ethernet LAGs.
>
> --
> Chris Adams 


Re: [ovirt-users] ovirt client developpement

2017-06-11 Thread Yaniv Kaul
On Fri, Jun 9, 2017 at 10:30 AM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

>
> > Le 9 juin 2017 à 16:25, Luca 'remix_tj' Lorenzetto <
> lorenzetto.l...@gmail.com> a écrit :
> >
> > On Fri, Jun 9, 2017 at 4:19 PM, Fabrice Bacchella
> >  wrote:
> >> For my ovirt cli, I would like to have unit tests. But there is nothing
> to test in standalone mode, I need a running ovirt with a database in a
> known state.
> >>
> >> Is there some where a docker images with a toy setup, or a mock ovirt
> engine that can be downloaded and used for that ?
> >
> > Maybe you can run lago
> > http://lago.readthedocs.io/en/stable/README.html and setup an ovirt
> > env on the fly?
>
> That's not an answer to my question. I can always build one manually. I
> know how to build a VM/container from that, but I would still need to fill
> it with fake data, and to update it for every release of oVirt.
>
> With a prebuilt system provided by the oVirt people, I could also run it on
> release candidates and help them find bugs.
>

ovirt-system-tests, on top of Lago, is what we use all the time for system
tests. It takes a few minutes to set up a system; you can save and restore a
running system, update it easily, etc. It's quite fully featured and easily
extensible.
Y.





Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread Yaniv Kaul
On Sat, Jun 10, 2017 at 1:43 PM,  wrote:

> Martin,
>
> Looking to test oVirt on real hardware (aka no nesting)
>
> Scenario # 1:
> 1x Supermicro 2027TR-HTRF 2U 4 node server
>

Is that a hyper-converged setup of both oVirt and Gluster?
We usually do it in batches of 3 nodes.

> I will install the o/s for each node on a SATADOM.
> Since each node will have 6x SSD for gluster storage.
> Should this be software RAID, hardware RAID or no RAID?
>

I'd reckon you should prefer hardware RAID over software RAID, and some
RAID over no RAID at all, but it really depends on your budget, performance
needs, and availability requirements.


>
> Scenario # 2:
> 3x SuperMicro SC216E16-R1200LPB 2U server
> Each server has 24x 2.5" bays (front) + 2x 2.5" bays (rear)
> I will install the o/s on the drives using the rear bays (maybe RAID 1?)
>

Makes sense (I could not see the rear bays - might have missed them).
Will you be able to put some SSDs there for caching?


> For Gluster, we will use the 24 front bays.
> Should this be software RAID, hardware RAID or no RAID?
>

Same answer as above.
Y.


> Thanks
> Femi


Re: [ovirt-users] trouble when creating VM snapshots including memory

2017-06-11 Thread Yaniv Kaul
On Fri, Jun 9, 2017 at 3:39 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> hi,
>
> i'm having trouble creating VM snapshots that include memory in my oVirt
> 4.1 test environment. when i do this the VM gets paused and shortly
> (20-30s) afterwards i'm seeing messages in engine.log about both iSCSI
> storage domains (master storage domain and data storage where VM resides)
> experiencing high latency. this quickly worsens from the engines view: VM
> is unresponsive, Host is unresponsive, engine wants to fence the host
> (impossible because it's the only host in the test cluster). in the end
> there is an EngineException
>
> EngineException: org.ovirt.engine.core.vdsbroke
> r.vdsbroker.VDSNetworkException: VDSGenericException:
> VDSNetworkException: Message timeout which can be caused by communication
> issues (Failed with error VDS_NETWORK_ERROR and code 5022)
>
> the snapshot fails and is left in an inconsistent state. the situation has
> to be resolved manually with unlock_entity.sh and maybe lvm commands. this
> happened twice in exactly the same manner. VM snapshots without memory for
> this VM are not a problem.
>
> VM guest OS is CentOS7 installed from one of the ovirt-image-repository
> images. it has the oVirt guest agent running.
>
> what could be wrong?
>
> this is a test environment where lots of parameters aren't optimal but i
> never had problems like this before, nothing concerning network latency.
> iSCSI is on a FreeNAS box. CPU, RAM, ethernet (10GBit for storage) on all
> hosts involved (engine hosted externally, oVirt Node, storage) should be OK
> by far.
>

Are you sure the iSCSI traffic is going over the 10Gb interfaces?
If it isn't, it might choke the management interface.
Regardless, how is the performance of the storage? I don't expect snapshots
to require too much, but saving the memory might require some storage
performance. Perhaps there's a bottleneck there?
Y.
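A quick sanity check is whether the iSCSI portal address falls inside the storage subnet at all (a stdlib sketch; the addresses below are made up for illustration):

```python
import ipaddress

def on_network(addr, network):
    """Return True if addr belongs to the given CIDR network."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(network)

# Hypothetical example: the storage VLAN is 10.10.10.0/24 on the 10GbE NICs.
print(on_network("10.10.10.5", "10.10.10.0/24"))   # True: portal on storage net
print(on_network("192.168.1.5", "10.10.10.0/24"))  # False: would use mgmt net
```

On the host itself, `ip route get <portal-ip>` shows the interface the kernel would actually use for that traffic.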


>
> it looks like some obvious configuration botch or performance bottleneck
> to me. can it be linked to the network roles (management and migration
> network are on a 1 GBit link)?
>
> i'm still new to this, not a lot of KVM experience, too. maybe someone
> recognizes the culprit...
>
> thx
> matthias


Re: [ovirt-users] oVirt storage best practise

2017-06-14 Thread Yaniv Kaul
On Wed, Jun 14, 2017 at 9:23 AM, Idan Shaby  wrote:

> Direct LUNs are disks that are not managed by oVirt. oVirt communicates
> directly with the LUN itself, without any other layer in between (like LVM
> for image disks).
> The advantage of a direct LUN is that it should have better performance,
> since there's no overhead of another layer in the middle.
> The disadvantage is that you can't take a snapshot of it (when attached to
> a VM, of course), can't make it part of a template, can't export it - in
> general, you don't manage it.
>

You can, of course, create a snapshot from the storage-side.
Y.


>
> Regards,
> Idan
>
> On Mon, Jun 12, 2017 at 10:10 PM, Stefano Bovina  wrote:
>
>> Thank you very much.
>> What about "direct lun" usage and database example?
>>
>>
>> 2017-06-08 16:40 GMT+02:00 Elad Ben Aharon :
>>
>>> Hi,
>>> Answer inline
>>>
>>> On Thu, Jun 8, 2017 at 1:07 PM, Stefano Bovina  wrote:
>>>
 Hi,
 does a storage best practise document for oVirt exist?


 Some examples:

 oVirt allows to extend an existing storage domain: Is it better to keep
 a 1:1 relation between LUN and oVirt storage domain?

>>> What do you mean by 1:1 relation? Between storage domain and the number
>>> of LUNs the domain reside on?
>>>
 If not, is it better to avoid adding LUNs to an already existing
 storage domain?

>>> No problems with storage domain extension.
>>>

 Following the previous questions:

 Is it better to have 1 Big oVirt storage domain or many small oVirt
 storage domains?

>>> Depends on your needs; be aware of the following:
>>> - Each domain has its own metadata, which allocates ~5GB of the domain
>>> size.
>>> - Each domain is constantly monitored by the system, so a large number
>>> of domains can decrease system performance.
>>> There are also downsides to having big domains, like less flexibility
>>>
>>>
 There is a max num VM/disks for storage domain?


 In which case is it better to use "direct attached lun" with respect to
 an image on an oVirt storage domain?

>>>

>>>
 Example:

 Simple web server:   > image
 Large database (simple example):
- root,swap etc: 30GB  > image?
- data disk: 500GB-> (direct or image?)

 Regards,

 Stefano



Re: [ovirt-users] oVirt and Cloud-Init

2017-06-15 Thread Yaniv Kaul
On Wed, Jun 14, 2017 at 4:11 PM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Wed, Jun 14, 2017 at 3:08 PM, Adam Mills  wrote:
> > Hello oVirt Users!
> >
> > Recent the team that I work on began investigating oVirt as a
> virtualization
> > platform. It is extremely promising, however we have some questions about
> > how oVirt does provisioning using Cloud-Init.
>
> Very good!
>

+1. Welcome aboard.


>
> >
> > Specifically the question is: On the wiki the it states "We are most
> > interested in using config-drive version 2 [2], which is also in
> supported
> > by OpenStack". Is that currently how Cloud-Init is providing the
> datasource
> > to the machine is via a config-drive being mounted?
>
> Yes. If you start the vm through run-once and specify to use cloud
> init (with any configuration specified) a second cdrom drive is
> attached automatically containing cloud-init infos. Cloud-init will
> read that drive and apply the configuration accordingly.
>
>
> >
> > And the last question: How frequent is too frequent to ask questions to
> this
> > mailing group :D
>
> You can ask as much as you want. In case of excessive mailing, I'll
> filter out your requests :-P (joking)
>

My request would be a different thread per question topic, with a sensible
subject line.
That would make it easier for everyone to read and respond.
Y.
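For reference, the config-drive v2 tree Luca describes can be sketched with the standard library - the directory names follow the OpenStack convention, while the metadata content here is purely illustrative:

```python
import json
import os
import tempfile

def build_config_drive(root, hostname, user_data):
    """Create the minimal config-drive v2 tree cloud-init expects:
    openstack/latest/{meta_data.json,user_data}."""
    latest = os.path.join(root, "openstack", "latest")
    os.makedirs(latest, exist_ok=True)
    with open(os.path.join(latest, "meta_data.json"), "w") as f:
        json.dump({"hostname": hostname}, f)
    with open(os.path.join(latest, "user_data"), "w") as f:
        f.write(user_data)
    return latest

root = tempfile.mkdtemp()
latest = build_config_drive(root, "vm01.example.com",
                            "#cloud-config\npackages:\n  - vim\n")
print(sorted(os.listdir(latest)))  # ['meta_data.json', 'user_data']
```

In practice this tree is packed into an ISO9660 image with the volume label config-2 and attached as a CD-ROM; that label is how cloud-init locates the datasource.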


>
>
> Luca
>
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>


Re: [ovirt-users] oVirt storage best practise

2017-06-15 Thread Yaniv Kaul
On Wed, Jun 14, 2017 at 4:18 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> I normally assume that any performance gain from directly attaching a LUN
> to a virtual machine, rather than using it in the traditional way, is too
> small to compensate for the extra hassle of doing so. I would avoid it as
> much as I can, unless there is some very special reason where you cannot do
> it any other way. The only real usage for it so far was Microsoft SQL
> Server clustering requirements.
>

I tend to agree (from a performance perspective), though I don't have
numbers to back it up. It probably doesn't matter that much.
There are, however, other reasons to use a direct LUN - use of storage-side
features, such as replication, QoS, encryption, compression, etc., that you
may wish to apply (or disable) per storage.
Also, there are some strange SCSI commands that some strange applications
need, which require a direct LUN and SCSI pass-through. Clustering (via SCSI
reservations) is certainly the first and foremost, but not the only one.
Y.


> Fernando
>
> On 14/06/2017 03:23, Idan Shaby wrote:
>
> Direct LUNs are disks that are not managed by oVirt. oVirt communicates
> directly with the LUN itself, without any other layer in between (like LVM
> for image disks).
> The advantage of a direct LUN is that it should have better performance,
> since there's no overhead of another layer in the middle.
> The disadvantage is that you can't take a snapshot of it (when attached to
> a VM, of course), can't make it part of a template, can't export it - in
> general, you don't manage it.
>
>
> Regards,
> Idan
>
> On Mon, Jun 12, 2017 at 10:10 PM, Stefano Bovina  wrote:
>
>> Thank you very much.
>> What about "direct lun" usage and database example?
>>
>>
>> 2017-06-08 16:40 GMT+02:00 Elad Ben Aharon :
>>
>>> Hi,
>>> Answer inline
>>>
>>> On Thu, Jun 8, 2017 at 1:07 PM, Stefano Bovina  wrote:
>>>
 Hi,
 does a storage best practise document for oVirt exist?


 Some examples:

 oVirt allows to extend an existing storage domain: Is it better to keep
 a 1:1 relation between LUN and oVirt storage domain?

>>> What do you mean by 1:1 relation? Between storage domain and the number
>>> of LUNs the domain reside on?
>>>
 If not, is it better to avoid adding LUNs to an already existing
 storage domain?

>>> No problems with storage domain extension.
>>>

 Following the previous questions:

 Is it better to have 1 Big oVirt storage domain or many small oVirt
 storage domains?

>>> Depends on your needs; be aware of the following:
>>> - Each domain has its own metadata, which allocates ~5GB of the domain
>>> size.
>>> - Each domain is constantly monitored by the system, so a large number
>>> of domains can decrease system performance.
>>> There are also downsides to having big domains, like less flexibility
>>>
>>>
 There is a max num VM/disks for storage domain?


 In which case is it better to use "direct attached lun" with respect to
 an image on an oVirt storage domain?

>>>

>>>
 Example:

 Simple web server:   > image
 Large database (simple example):
- root,swap etc: 30GB  > image?
- data disk: 500GB-> (direct or image?)

 Regards,

 Stefano



Re: [ovirt-users] Version of engine vs version of host

2017-06-16 Thread Yaniv Kaul
On Fri, Jun 16, 2017 at 6:27 PM, Gianluca Cecchi 
wrote:

> Hello,
> between problems solved in upcoming 4.1.3 release I see this:
>
> Lost Connection After Host Deploy when 4.1.3 Host Added to 4.1.2 Engine
> tracked by
> https://bugzilla.redhat.com/show_bug.cgi?id=1459484
>

I *think* the specific bug was discovered (and fixed) while developing
4.1.3.


>
>
> As a matter of principle I would prefer to enforce that an engine version
> must be greater than or equal to that of all the hosts it manages.
> I don't find it safe to allow this, and it is probably unnecessary
> maintenance work... what do you think?
>
> For example if you go here:
> http://www.vmware.com/resources/compatibility/sim/
> interop_matrix.php#interop&1=&2=
>
> you can see that:
> - a vCenter Server 5.0U3 cannot manage an ESXi 5.1 host
> - a vCenter Server 5.1U3 cannot manage an ESXi 6.0 host
> - a vCenter Server 6.0U3 cannot manage an ESXi 6.5 host
>

We are more flexible ;-)

While I think it's a matter of taste, there are merits to upgrading the
hosts first. For example, assuming you have many hosts, to me it makes sense
to upgrade just one and see that things work well. Then upgrade another,
perform live migration, etc., and see that it's smooth, before upgrading the
manager, which is sometimes a bigger task (rollback is more challenging, for
example, and it has downtime requirements, whereas single host maintenance
does not require the same level of downtime).
In addition, there are host-based features (VDSM hooks) which do not
mandate a manager upgrade.


> In my opinion an administrator of the virtual infrastructure doesn't
> expect to be able to manage newer versions' hosts with older engines... and
> probably he/she doesn't feel this feature as a value added.
>

I'm on your side on this, as I believe the manager should always be the
most up-to-date, but I know others have different opinions and we'd like to
keep it that way.
Y.


> Just my thoughts.
> Cheers,
> Gianluca
>


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-06-16 Thread Yaniv Kaul
On Fri, Jun 16, 2017 at 5:20 PM, Gianluca Cecchi 
wrote:

> On Thu, Apr 27, 2017 at 11:25 AM, Evgenia Tokar  wrote:
>
>> Hi,
>>
>> It looks like the graphical console fields are not editable for hosted
>> engine vm.
>> We are trying to figure out how to solve this issue, it is not
>> recommended to change db values manually.
>>
>> Thanks,
>> Jenny
>>
>>
>> On Thu, Apr 27, 2017 at 10:49 AM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Thu, Apr 27, 2017 at 9:46 AM, Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>


 BTW: if I try to set the video type to Cirrus from web admin gui (and
 automatically the Graphics Protocol becomes "VNC"), I get this when I press
 the OK button:

 Error while executing action:

 HostedEngine:

- There was an attempt to change Hosted Engine VM values that are
locked.

 The same if I choose "VGA"
 Gianluca

>>>
>>>
>>> I verified that I already have in place this parameter:
>>>
>>> [root@ractorshe ~]# engine-config -g AllowEditingHostedEngine
>>> AllowEditingHostedEngine: true version: general
>>> [root@ractorshe ~]#
>>>
>>>
>>
> Hello is there a solution for this problem?
> I'm now in 4.1.2 but still not able to access the engine console
>

I thought https://bugzilla.redhat.com/show_bug.cgi?id=1441570 was supposed
to handle it...
Can you share more information in the bug?
Y.


>
> [root@ractor ~]# hosted-engine --add-console-password --password=pippo
> no graphics devices configured
> [root@ractor ~]#
>
> In web admin
>
> Graphics protocol: None  (while in edit vm screen it appears as "SPICE"
> and still I can't modify it)
> Video Type: QXL
>
> Any chance for upcoming 4.1.3? Can I test it it there is new changes
> related to this problem.
>
> the qemu-kvm command line for hosted engine is now this one:
>
> qemu  8761 1  0 May30 ?01:33:29 /usr/libexec/qemu-kvm
> -name guest=c71,debug-threads=on -S -object secret,id=masterKey0,format=
> raw,file=/var/lib/libvirt/qemu/domain-3-c71/master-key.aes -machine
> pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m
> size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
> node,nodeid=0,cpus=0,mem=1024 -uuid 202e6f2e-f8a1-4e81-a079-c775e86a58d5
> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.
> centos,serial=4C4C4544-0054-5910-8056-C4C04F30354A,uuid=
> 202e6f2e-f8a1-4e81-a079-c775e86a58d5 -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-c71/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2017-05-30T13:18:37,driftfix=slew -global 
> kvm-pit.lost_tick_policy=discard
> -no-hpet -no-shutdown -boot strict=on -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/0001-0001-0001-0001-00ec/556abaa8-0fcc-
> 4042-963b-f27db5e03837/images/7d5dd44f-f5d1-4984-9e76-
> 2b2f5e42a915/6d873dbd-c59d-4d6c-958f-a4a389b94be5,format=
> raw,if=none,id=drive-virtio-disk0,serial=7d5dd44f-f5d1-
> 4984-9e76-2b2f5e42a915,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1 -netdev
> tap,fd=33,id=hostnet0,vhost=on,vhostfd=35 -device virtio-net-pci,netdev=
> hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3 -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 202e6f2e-f8a1-4e81-a079-c775e86a58d5.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 202e6f2e-f8a1-4e81-a079-c775e86a58d5.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev
> spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-
> serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5901,addr=10.4.168.81,x509-dir=/etc/pki/vdsm/
> libvirt-spice,tls-channel=default,tls-channel=main,tls-
> channel=display,tls-channel=inputs,tls-channel=cursor,tls-
> channel=playback,tls-channel=record,tls-channel=smartcard,
> tls-channel=usbredir,seamless-migration=on -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,
> vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -msg timestamp=on
>
>
> Thanks in advance,
> Gianluca
>

Re: [ovirt-users] hosted-engine VM and services not working

2017-06-16 Thread Yaniv Kaul
On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent  wrote:

> Hi
>
> Well I've got myself into a fine mess.
>
> host01 was setup with hosted-engine v4.1. This was successful.
> Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still
> running with more VMs on it)
> Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded
> but I couldn't add any storage domains to it. Cannot remember why.
> In Ovirt engine UI I removed host02.
> I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me it
> was already there (but it wasn't listed in the UI).
> Renamed the reinstalled host02 to host03, changed the ipaddress, reconfig
> the DNS server and added host03 into the Ovirt Engine UI.
> All good, and I was able to import more VMs to it.
> I was also able to shutdown a VM on host01 assign it to host03 and start
> the VM. Cool, everything working.
> The above was all last couple of weeks.
>
> This week I performed some yum updates on the Engine VM. No reboot.
> Today I noticed that the oVirt services in the Engine VM were in an endless
> restart loop. They would be up for 5 minutes and then die.
> Looking into /var/log/ovirt-engine/engine.log and I could only see errors
> relating to host02. Ovirt was trying to find it and failing. Then falling
> over.
> I ran "hosted-engine --clean-metadata" thinking it would cleanup and
> remove bad references to hosts, but now realise that was a really bad idea
> as it didn't do what I'd hoped.
> At this point the sequence below worked, I could login to Ovirt UI but
> after 5 minutes the services would be off
> service ovirt-engine restart
> service ovirt-websocket-proxy restart
> service httpd restart
>
> I saw some reference to having to remove hosts from the database by hand
> in situations where under the hood of Ovirt a decommission host was still
> listed, but wasn't showing in the GUI.
> So I removed reference to host02 (vds_id and host_id) in the following
> tables in this order.
> vds_dynamic
> vds_statistics
> vds_static
> host_device
>
> Now when I try to start ovirt-websocket it will not start
> service ovirt-websocket start
> Redirecting to /bin/systemctl start  ovirt-websocket.service
> Failed to start ovirt-websocket.service: Unit not found.
>
> I'm now thinking that I need to do the following in the engine VM
>
> # engine-cleanup
> # yum remove ovirt-engine
> # yum install ovirt-engine
> # engine-setup
>
> But to run engine-cleanup I need to put the engine-vm into maintenance
> mode and because of the --clean-metadata that I ran earlier on host01 I
> cannot do that.
>
> What is the best course of action from here?
>

To be honest, with all the steps taken above, I'd install everything
(including the OS) from scratch...
There's a bit too much of a mess to try to clean up properly here.
Y.


>
> Cheers
>
>
> Andrew
>


Re: [ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-18 Thread Yaniv Kaul
On Sat, Jun 17, 2017 at 1:25 AM, Markus Stockhausen  wrote:

> Hi,
>
> we just set up a new 4.1.2 oVirt cluster. It is a quite normal
> HDD/XFS/NFS stack that worked quite well with 4.0 in the past.
> Inside the VMs we use XFS too.
>
> To our surprise, we observe abysmally high IO during mkfs.xfs
> and fstrim inside the VM. A simple example:
>
> Step 1: Create 100G Thin disk
> Result 1: Disk occupies ~10M on storage
>
> Step 2: Format disk inside VM with mkfs.xfs
> Result 2: Disk occupies 100G on storage
>
> Changing the discard flag on the disk does not have any effect.
>

Are you sure it's discarding, at all?
1. NFS: only NFSv4.2 supports discard. Is that the case in your setup?
2. What's the value of /sys/block//queue/discard_granularity ?
3. Can you share the mkfs.xfs command line?
4. Are you sure it's not a raw-sparse image?
Y.
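For anyone hitting the same symptom, the checks above can be scripted; a rough sketch (the device names `vda`/`vda1` are assumptions for a typical virtio guest, not taken from this thread):

```shell
# Rough diagnostic sketch -- device names (vda, vda1) are examples only.

# Inside the VM: does the virtual disk advertise discard at all?
cat /sys/block/vda/queue/discard_granularity   # 0 means no discard support

# On the host: is the storage domain mounted as NFSv4.2? Look for vers=4.2.
nfsstat -m

# mkfs.xfs discards the whole device by default; -K skips that step.
mkfs.xfs -K /dev/vda1
```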


> Am I missing something?
>
> Best regards.
>
> Markus
>
>
>


Re: [ovirt-users] hosted-engine VM and services not working

2017-06-18 Thread Yaniv Kaul
On Sat, Jun 17, 2017 at 12:50 AM,  wrote:

> If I reinstall and the rerun the hosted-engine setup how do I get the VMs
> in their current running state back into and being recognised by the new
> hosted engine?
>

Current running state is again quite challenging. You'll need to fix the
hosted-engine.

Can you import the storage domain? (not for running VMs)
Y.


> Kind regards
>
> Andrew
>
> On 17 Jun 2017, at 6:54 AM, Yaniv Kaul  wrote:
>
>
>
> On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent 
> wrote:
>
>> Hi
>>
>> Well I've got myself into a fine mess.
>>
>> host01 was setup with hosted-engine v4.1. This was successful.
>> Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still
>> running with more VMs on it)
>> Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded
>> but I couldn't add any storage domains to it. Cannot remember why.
>> In Ovirt engine UI I removed host02.
>> I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me
>> it was already there (but it wasn't listed in the UI).
>> Renamed the reinstalled host02 to host03, changed the IP address, reconfigured
>> the DNS server and added host03 into the Ovirt Engine UI.
>> All good, and I was able to import more VMs to it.
>> I was also able to shutdown a VM on host01 assign it to host03 and start
>> the VM. Cool, everything working.
>> The above was all last couple of weeks.
>>
>> This week I performed some yum updates on the Engine VM. No reboot.
>> Today noticed that the Ovirt services in the Engine VM were in an endless
>> restart loop. They would be up for 5 minutes and then die.
>> Looking into /var/log/ovirt-engine/engine.log and I could only see
>> errors relating to host02. Ovirt was trying to find it and failing. Then
>> falling over.
>> I ran "hosted-engine --clean-metadata" thinking it would cleanup and
>> remove bad references to hosts, but now realise that was a really bad idea
>> as it didn't do what I'd hoped.
>> At this point the sequence below worked, I could login to Ovirt UI but
>> after 5 minutes the services would be off
>> service ovirt-engine restart
>> service ovirt-websocket-proxy restart
>> service httpd restart
>>
>> I saw some reference to having to remove hosts from the database by hand
>> in situations where under the hood of Ovirt a decommissioned host was still
>> listed, but wasn't showing in the GUI.
>> So I removed reference to host02 (vds_id and host_id) in the following
>> tables in this order.
>> vds_dynamic
>> vds_statistics
>> vds_static
>> host_device
>>
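For the record, the manual row removal described above corresponds to something like the sketch below (table and column names are those given in the message; the UUID is a placeholder, and the engine database should be backed up with engine-backup first -- this is exactly the kind of surgery the rest of the thread warns against):

```shell
# Hypothetical reconstruction of the manual cleanup -- NOT a supported procedure.
# Back up first: engine-backup --mode=backup --file=engine.bak --log=backup.log
sudo -u postgres psql engine <<'SQL'
BEGIN;
DELETE FROM vds_dynamic    WHERE vds_id  = '00000000-0000-0000-0000-0000000000b2';
DELETE FROM vds_statistics WHERE vds_id  = '00000000-0000-0000-0000-0000000000b2';
DELETE FROM vds_static     WHERE vds_id  = '00000000-0000-0000-0000-0000000000b2';
DELETE FROM host_device    WHERE host_id = '00000000-0000-0000-0000-0000000000b2';
COMMIT;
SQL
```

The order mirrors the message; if foreign-key constraints complain, dependent tables such as host_device may need to be cleared before vds_static.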
>> Now when I try to start ovirt-websocket it will not start
>> service ovirt-websocket start
>> Redirecting to /bin/systemctl start  ovirt-websocket.service
>> Failed to start ovirt-websocket.service: Unit not found.
>>
>> I'm now thinking that I need to do the following in the engine VM
>>
>> # engine-cleanup
>> # yum remove ovirt-engine
>> # yum install ovirt-engine
>> # engine-setup
>>
>> But to run engine-cleanup I need to put the engine-vm into maintenance
>> mode and because of the --clean-metadata that I ran earlier on host01 I
>> cannot do that.
>>
>> What is the best course of action from here?
>>
>
> To be honest, with all the steps taken above, I'd install everything
> (including OS) from scratch...
> There's a bit too much mess to try to clean up properly here.
> Y.
>
>
>>
>> Cheers
>>
>>
>> Andrew
>>
>>
>>
>


Re: [ovirt-users] Very poor GlusterFS performance

2017-06-20 Thread Yaniv Kaul
On Mon, Jun 19, 2017 at 7:32 PM, Ralf Schenk  wrote:

> Hello,
>
> Gluster-Performance is bad. Thats why I asked for native qemu-libgfapi
> access for Ovirt-VM's to gluster volumes which I thought to be possible
> since 3.6.x. Documentation is misleading and still in 4.1.2 Ovirt is using
> fuse to mount gluster-based VM-Disks.
>

Can you please open a bug to fix documentation? We are working on libgfapi,
but it's indeed not in yet.
Y.


> Bye
>
> Am 19.06.2017 um 17:23 schrieb Darrell Budic:
>
> Chris-
>
> You probably need to head over to gluster-us...@gluster.org for help with
> performance issues.
>
> That said, what kind of performance are you getting, via some form or
> testing like bonnie++ or even dd runs? Raw bricks vs gluster performance is
> useful to determine what kind of performance you’re actually getting.
>
> Beyond that, I’d recommend dropping the arbiter bricks and re-adding them
> as full replicas, they can’t serve distributed data in this configuration
> and may be slowing things down on you. If you’ve got a storage network
> setup, make sure it’s using the largest MTU it can, and consider
> adding/testing these settings that I use on my main storage volume:
>
> performance.io-thread-count: 32
> client.event-threads: 8
> server.event-threads: 3
> performance.stat-prefetch: on
>
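Options like these are applied per volume; a minimal sketch, assuming the volume name `vmssd` that appears in this thread:

```shell
# Sketch only -- "vmssd" is the volume name from this thread; substitute your own.
VOL=vmssd
gluster volume set "$VOL" performance.io-thread-count 32
gluster volume set "$VOL" client.event-threads 8
gluster volume set "$VOL" server.event-threads 3
gluster volume set "$VOL" performance.stat-prefetch on
gluster volume info "$VOL"   # "Options Reconfigured" should list the new values
```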
> Good luck,
>
>   -Darrell
>
>
> On Jun 19, 2017, at 9:46 AM, Chris Boot  wrote:
>
> Hi folks,
>
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
> SSDs. Connectivity is 10G Ethernet.
>
> Performance within the VMs is pretty terrible. I experience very low
> throughput and random IO is really bad: it feels like a latency issue.
> On my oVirt nodes the SSDs are not generally very busy. The 10G network
> seems to run without errors (iperf3 gives bandwidth measurements of >=
> 9.20 Gbits/sec between the three servers).
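A measurement like that can be reproduced with iperf3 itself (hostnames are placeholders):

```shell
# On one storage node, start the server:
iperf3 -s
# From another node, run a 10-second test with 4 parallel streams:
iperf3 -c ovirt1 -t 10 -P 4
```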
>
> To put this into perspective: I was getting better behaviour from NFS4
> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
> feel right at all.
>
> My volume configuration looks like this:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 128MB
> performance.strict-o-direct: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
>
> I would really appreciate some guidance on this to try to improve things
> because at this rate I will need to reconsider using GlusterFS altogether.
>
> Cheers,
> Chris
>
> --
> Chris Boot
> bo...@bootc.net
>
>
>
>
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70 <+49%202405%20408370>
> fax +49 (0) 24 05 / 40 83 759 <+49%202405%204083759>
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
>
>
>


Re: [ovirt-users] hosted-engine VM and services not working

2017-06-21 Thread Yaniv Kaul
On Wed, Jun 21, 2017 at 5:20 AM, Andrew Dent  wrote:

> Hi Yaniv
>
> I found a solution.
> Our Ovirt 3.6 AIO box was still running and had those VMs still configured
> in their pre-export, switched-off state.
> I removed any snapshots I found from those pre-export VMs, then copied
> the disk image files and other bits from host01 (oVirt v4.1) back into
> the Ovirt 3.6 AIO box, and where needed fixed the relevant IDs to be what
> the Engine in the Ovirt 3.6 box expected.
> The VMs then started up properly again without hassle and with the latest
> files on the Ovirt 3.6 AIO box.
>

Well done and kudos for the resourcefulness!
Y.


>
> So now in the process of rebuilding host01 with hosted-engine v4.1
>
> Kind regards
>
>
> Andrew
> -- Original Message --
> From: "Yaniv Kaul" 
> To: "Andrew Dent" 
> Cc: "users" 
> Sent: 18/06/2017 6:00:09 PM
> Subject: Re: [ovirt-users] hosted-engine VM and services not working
>
>
>
> On Sat, Jun 17, 2017 at 12:50 AM,  wrote:
>
>> If I reinstall and the rerun the hosted-engine setup how do I get the VMs
>> in their current running state back into and being recognised by the new
>> hosted engine?
>>
>
> Current running state is again quite challenging. You'll need to fix the
> hosted-engine.
>
>> Can you import the storage domain? (not for running VMs)
> Y.
>
>
>> Kind regards
>>
>> Andrew
>>
>> On 17 Jun 2017, at 6:54 AM, Yaniv Kaul  wrote:
>>
>>
>>
>> On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent 
>> wrote:
>>
>>> Hi
>>>
>>> Well I've got myself into a fine mess.
>>>
>>> host01 was setup with hosted-engine v4.1. This was successful.
>>> Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still
>>> running with more VMs on it)
>>> Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded
>>> but I couldn't add any storage domains to it. Cannot remember why.
>>> In Ovirt engine UI I removed host02.
>>> I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me
>>> it was already there (but it wasn't listed in the UI).
>>> Renamed the reinstalled host02 to host03, changed the ipaddress,
>>> reconfig the DNS server and added host03 into the Ovirt Engine UI.
>>> All good, and I was able to import more VMs to it.
>>> I was also able to shutdown a VM on host01 assign it to host03 and start
>>> the VM. Cool, everything working.
>>> The above was all last couple of weeks.
>>>
>>> This week I performed some yum updates on the Engine VM. No reboot.
>>> Today noticed that the Ovirt services in the Engine VM were in an endless
>>> restart loop. They would be up for 5 minutes and then die.
>>> Looking into /var/log/ovirt-engine/engine.log and I could only see
>>> errors relating to host02. Ovirt was trying to find it and failing. Then
>>> falling over.
>>> I ran "hosted-engine --clean-metadata" thinking it would cleanup and
>>> remove bad references to hosts, but now realise that was a really bad idea
>>> as it didn't do what I'd hoped.
>>> At this point the sequence below worked, I could login to Ovirt UI but
>>> after 5 minutes the services would be off
>>> service ovirt-engine restart
>>> service ovirt-websocket-proxy restart
>>> service httpd restart
>>>
>>> I saw some reference to having to remove hosts from the database by hand
>>> in situations where under the hood of Ovirt a decommissioned host was still
>>> listed, but wasn't showing in the GUI.
>>> So I removed reference to host02 (vds_id and host_id) in the following
>>> tables in this order.
>>> vds_dynamic
>>> vds_statistics
>>> vds_static
>>> host_device
>>>
>>> Now when I try to start ovirt-websocket it will not start
>>> service ovirt-websocket start
>>> Redirecting to /bin/systemctl start  ovirt-websocket.service
>>> Failed to start ovirt-websocket.service: Unit not found.
>>>
>>> I'm now thinking that I need to do the following in the engine VM
>>>
>>> # engine-cleanup
>>> # yum remove ovirt-engine
>>> # yum install ovirt-engine
>>> # engine-setup
>>>
>>> But to run engine-cleanup I need to put the engine-vm into maintenance
>>> mode and because of the --clean-metadata that I ran earlier on host01 I
>>> cannot do that.
>>>
>>> What is the best course of action from here?
>>>
>>
>> To be honest, with all the steps taken above, I'd install everything
>> (including OS) from scratch...
>> There's a bit too much mess to try to clean up properly here.
>> Y.
>>
>>
>>>
>>> Cheers
>>>
>>>
>>> Andrew
>>>
>>>
>>>
>>
>


Re: [ovirt-users] vm freezes when using yum update

2017-06-21 Thread Yaniv Kaul
On Thu, Jun 22, 2017 at 5:07 AM, M Mahboubian 
wrote:

> Dear all,
> I appreciate if anybody could possibly help with the issue I am facing.
>
> In our environment we have 2 hosts 1 NFS server and 1 ovirt engine server.
> The NFS server provides storage to the VMs in the hosts.
>
> I can create new VMs and install an OS, but once I do something like yum
> update the VM freezes. I can reproduce this every single time I do yum
> update.
>

Is it paused, or completely frozen?


>
> What information/log files should I provide you to troubleshoot this?
>

Versions of all the components involved - guest OS, host OS (qemu-kvm
version), how do you run the VM (vdsm log would be helpful here), exact
storage specification (1Gb or 10Gb link? What is the NFS version? What is
it hosted on? etc.)
 Y.
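A single pass that collects most of what is asked for above might look like this (run on the host; package and log names are the usual defaults and may differ per install):

```shell
# Diagnostic sketch -- package names (qemu-kvm vs qemu-kvm-ev) vary by repo.
rpm -qa | grep -E 'qemu-kvm|vdsm|libvirt'            # component versions
cat /etc/redhat-release; uname -r                    # host OS and kernel
nfsstat -m                                           # NFS version and mount options in use
tail -n 200 /var/log/vdsm/vdsm.log | grep -i pause   # was the VM paused on an I/O error?
```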


>  Regards
>
>
>


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Yaniv Kaul
On Sat, Jun 24, 2017 at 12:23 AM, Vinícius Ferrão  wrote:

> Hello Adam and Karli,
>
> I will remap uid and gid of NFS to 36 and try again with NFS sharing.
>
> But this does not make much sense, because on iSCSI this should not
> happen. There are no permissions involved and when oVirt runs the
> hosted-engine setup it creates the ext3 filesystem on the iSCSI share
> without any issue. Here’s a photo of the network bandwidth during the OVF
> deployment: http://www.if.ufrj.br/~ferrao/ovirt/bandwidth-iscsi-ovf.jpg
>
> So it appears to be working. Something happens after the deployment that
> breaks the connections and kills vdsm.
>

Indeed - may be two different issues. Let us know how the NFS works first,
then let's try with iSCSI.
Y.


>
> Thanks,
> V.
>
> On 23 Jun 2017, at 17:47, Adam Litke  wrote:
>
>
>
> On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg 
> wrote:
>
>>
>>
>> Den 23 juni 2017 21:08 skrev Vinícius Ferrão :
>>
>> Hello oVirt folks.
>>
>> I’m a traitor of the Xen movement and was looking for some good
>> alternatives for XenServer hypervisors. I was aware of KVM for a long time
>> but I was missing a more professional and appliance feeling of the product,
>> and oVirt appears to deliver exactly what I’m looking for.
>>
>> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
>> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>>
>> Firstly I’ve tried to install the oVirt Node on a VM in VMware Fusion on
>> my notebook; it was a no-go. For reasons I don’t know,
>> vdsmd.service and libvirtd failed to start. I made sure that I was running
>> with EPT support enabled to achieve nested virtualization, but as I said:
>> it was a no-go.
>>
>> So I’ve decommissioned a XenServer machine that was in production just to
>> try oVirt. The hardware is not new, but it’s very capable: Dual Xeon
>> E5506 with 48GB of system RAM, but I can’t get the hosted engine to work,
>> it always insults my hardware: --- Hosted Engine deployment failed: this
>> system is not reliable, please check the issue,fix and redeploy.
>>
>> It’s definitely a problem on the storage subsystem, the error is just
>> random, at this moment I’ve got:
>>
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>>
>> But on other tries it came up with something like this:
>>
>> No response for JSON-RPC Volume.getSize request.
>>
>> I was thinking that the problem was on the NFSv3 server on our FreeNAS
>> box, so I’ve changed to an iSCSI backend, but the problem continues.
>>
>>
>> Can't say anything about your issues without the logs but there's nothing
>> wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's
>> NFS since oVirt 3.2 so...
>>
>
> Could you share your relevant exports configuration to make sure he's
> using something that you know works?
>
>
>>
>> "You're holding it wrong":) Sorry, I know you're frustrated but that's
>> what I can add to the conversation.
>>
>> /K
>>
>> This happens at the very end of the ovirt-hosted-engine-setup command,
>> which leads me to believe that’s an oVirt issue. The OVA was already copied
>> and deployed to the storage:
>>
>> [ INFO ] Starting vdsmd
>> [ INFO ] Creating Volume Group
>> [ INFO ] Creating Storage Domain
>> [ INFO ] Creating Storage Pool
>> [ INFO ] Connecting Storage Pool
>> [ INFO ] Verifying sanlock lockspace initialization
>> [ INFO ] Creating Image for 'hosted-engine.lockspace' ...
>> [ INFO ] Image for 'hosted-engine.lockspace' created successfully
>> [ INFO ] Creating Image for 'hosted-engine.metadata' ...
>> [ INFO ] Image for 'hosted-engine.metadata' created successfully
>> [ INFO ] Creating VM Image
>> [ INFO ] Extracting disk image from OVF archive (could take a few minutes
>> depending on archive size)
>> [ INFO ] Validating pre-allocated volume size
>> [ INFO ] Uploading volume to data domain (could take a few minutes
>> depending on archive size)
>> [ INFO ] Image successfully imported from OVF
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>> [ INFO ] Yum Performing yum transaction rollback
>> [ INFO ] Stage: Clean up
>> [ INFO ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
>> please check the issue,fix and redeploy
>> Log file is located at
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
>>
>> At this point I really don’t know what I should try. And the log file is
>> too verborragic (hoping this word exists) to look for errors.
>>
>> Any guidance?
>>
>> Thanks,
>> V.
>>
>

Re: [ovirt-users] Very big device. Trying to use READ CAPACITY

2017-06-24 Thread Yaniv Kaul
On Sat, Jun 24, 2017 at 10:58 AM, Iman Darabi  wrote:

> hi.
> i'm using ovirt version 4.0 .
>
> I've created two LUNs. The first LUN is 2047GB and working properly as a data
> domain, but when I try to add the second LUN as a second data domain, I get this
> error on the server:
>
> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:16384: [sdf] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 21 13:22:41 compute52 kernel: sd 5:0:0:0: [sdb] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:16384: [sdg] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 21 13:22:41 compute52 kernel: sd 5:0:1:0: [sdc] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:0: [sdd] Read Capacity(10)
> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:0: [sdd] Sense Key : Illegal
> Request [current]
> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:0: [sdd] Add. Sense: Logical
> unit not supported
>

Looks like an issue between your platforms (what version is it? You have
not specified it) and your storage (what is it? You have not specified it),
unrelated directly to oVirt.


> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:0: [sde] Read Capacity(10)
> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:0: [sde] Sense Key : Illegal
> Request [current]
> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:0: [sde] Add. Sense: Logical
> unit not supported
> Jun 21 13:22:41 compute52 kernel: device-mapper: table: 253:16: multipath:
> error getting device
> Jun 21 13:22:41 compute52 kernel: device-mapper: ioctl: error adding
> target to table
> Jun 21 13:22:41 compute52 multipathd: dm-16: remove map (uevent)
> Jun 21 13:22:41 compute52 multipathd: dm-16: remove map (uevent)
>
> The error says the second LUN is a very big device, but I've created the
> second LUN as 10GB for a test.
>
> BTW: is there any good, detailed documentation for oVirt and FC? All the
> administration docs use the GUI.
>

What are you missing exactly?
Y.


>
> --
> R&D expert at Ferdowsi University of Mashhad
> https://ir.linkedin.com/in/imandarabi
>
>
>


Re: [ovirt-users] Very big device. Trying to use READ CAPACITY

2017-06-25 Thread Yaniv Kaul
_id = "0x6d0200"
> port_name   = "0x5001438033136e4c"
> port_state  = "Online"
> port_type   = "NPort (fabric via point-to-point)"
> speed   = "8 Gbit"
>     supported_classes   = "Class 3"
> supported_speeds= "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
> symbolic_name   = "HPAJ764A FW:v7.03.00 DVR:v8.07.00.18.07.2-k"
> system_hostname = ""
> tgtid_bind_type = "wwpn (World Wide Port Name)"
> uevent  =
> vport_create= 
> vport_delete= 
>
> Device = "host5"
> Device path = "/sys/devices/pci:80/:
> 80:01.0/:81:00.0/host5"
>   fw_dump =
>   nvram   = "ISP "
>   optrom_ctl  = 
>   optrom  =
>   reset   = 
>   sfp = " "
>   uevent  = "DEVTYPE=scsi_host"
>   vpd = "�$"
>
>
>
>
> On Sun, Jun 25, 2017 at 11:27 AM, Yaniv Kaul  wrote:
>
>>
>>
>> On Sat, Jun 24, 2017 at 10:58 AM, Iman Darabi 
>> wrote:
>>
>>> hi.
>>> i'm using ovirt version 4.0 .
>>>
>>> I've created two LUNs. The first LUN is 2047GB and working properly as a data
>>> domain, but when I try to add the second LUN as a second data domain, I get this
>>> error on the server:
>>>
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:16384: [sdf] Very big
>>> device. Trying to use READ CAPACITY(16).
>>> Jun 21 13:22:41 compute52 kernel: sd 5:0:0:0: [sdb] Very big device.
>>> Trying to use READ CAPACITY(16).
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:16384: [sdg] Very big
>>> device. Trying to use READ CAPACITY(16).
>>> Jun 21 13:22:41 compute52 kernel: sd 5:0:1:0: [sdc] Very big device.
>>> Trying to use READ CAPACITY(16).
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:0: [sdd] Read Capacity(10)
>>> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:0: [sdd] Sense Key : Illegal
>>> Request [current]
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:0:0: [sdd] Add. Sense: Logical
>>> unit not supported
>>>
>>
>> Looks like an issue between your platforms (what version is it? You have
>> not specified it) and your storage (what is it? You have not specified it),
>> unrelated directly to oVirt.
>>
>>
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:0: [sde] Read Capacity(10)
>>> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:0: [sde] Sense Key : Illegal
>>> Request [current]
>>> Jun 21 13:22:41 compute52 kernel: sd 12:0:1:0: [sde] Add. Sense: Logical
>>> unit not supported
>>> Jun 21 13:22:41 compute52 kernel: device-mapper: table: 253:16:
>>> multipath: error getting device
>>> Jun 21 13:22:41 compute52 kernel: device-mapper: ioctl: error adding
>>> target to table
>>> Jun 21 13:22:41 compute52 multipathd: dm-16: remove map (uevent)
>>> Jun 21 13:22:41 compute52 multipathd: dm-16: remove map (uevent)
>>>
>>> The error says the second LUN is a very big device, but I've created the
>>> second LUN as 10GB for a test.
>>>
>>> BTW: is there any good documentation for ovirt and FC in detail. all
>>> administration docs are using GUI ... .
>>>
>>
>> What are you missing exactly?
>> Y.
>>
>>
>>>
>>> --
>>> R&D expert at Ferdowsi University of Mashhad
>>> https://ir.linkedin.com/in/imandarabi
>>> <https://www.linkedin.com/profile/public-profile-settings?trk=prof-edit-edit-public_profile>
>>>
>>>
>>>
>>
>
>
> --
> R&D expert at Ferdowsi University of Mashhad
> https://ir.linkedin.com/in/imandarabi
>


Re: [ovirt-users] hosted engine local disk estimate for OVA file

2017-06-25 Thread Yaniv Kaul
On Sun, Jun 25, 2017 at 12:38 PM, Ben De Luca  wrote:

> Hi,
>  I am in the middle of a disaster recovery situation trying to install
> ovirt 4.1 after a failure of some of our NFS systems. So I was redeploying
> 4.0 but there is a bug with the image uploader, that means I cant upload
> images. Our actually virtual machine hosts have very small HD's and the
> current 4.1 release installer thinks that the OVA extracted is 50GiB, I
> have exactly 50GB free! yay. So I did manage to hack the installer, to
> ignore my local disk space as the OVA is really only a few gigs in side.
>
>  But its been pretty painful. Any chance of some one fixing the
> estimate there, I have read through the code, there was an attempt to find
> out the real size but they gave up and just guessed.
>
>
We changed it to 50G in [1].
Did we over-estimate?

Y.

[1] https://gerrit.ovirt.org/#/c/72962/
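For what it's worth, the unpacked size of an OVA (a plain tar archive) can be read from the member headers without extracting it, rather than guessed; a self-contained illustration with an invented file name:

```shell
# Demo only: build a fake "OVA" and read its unpacked size from the tar headers.
work=$(mktemp -d)
dd if=/dev/zero of="$work/disk.img" bs=1M count=8 2>/dev/null   # fake 8 MiB image
tar -C "$work" -cf "$work/appliance.ova" disk.img

# Sum the per-member sizes recorded in the archive (bytes):
extracted=$(tar -tvf "$work/appliance.ova" | awk '{s+=$3} END {print s}')
echo "$extracted"   # 8388608 for this demo archive
```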


> -bd
> *tired sysadmin*
>
>
>
>


Re: [ovirt-users] Very big device. Trying to use READ CAPACITY

2017-06-25 Thread Yaniv Kaul
On Sun, Jun 25, 2017 at 11:25 AM, Iman Darabi  wrote:

> I cannot add the second LUN; this is my problem. Whenever I try to add the LUN I
> get that message.
>
> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:16384: [sdf] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 25 12:51:55 compute52 kernel: sd 5:0:0:0: [sdb] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:16384: [sdg] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 25 12:51:55 compute52 kernel: sd 5:0:1:0: [sdc] Very big device.
> Trying to use READ CAPACITY(16).
> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:0: [sdd] Read Capacity(10)
> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:0: [sdd] Sense Key : Illegal
> Request [current]
> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:0: [sdd] Add. Sense: Logical
> unit not supported
> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:0: [sde] Read Capacity(10)
> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:0: [sde] Sense Key : Illegal
> Request [current]
> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:0: [sde] Add. Sense: Logical
> unit not supported
> Jun 25 12:51:56 compute52 kernel: device-mapper: table: 253:14: multipath:
> error getting device
> Jun 25 12:51:56 compute52 kernel: device-mapper: ioctl: error adding
> target to table
> Jun 25 12:51:56 compute52 multipathd: dm-14: remove map (uevent)
> Jun 25 12:51:56 compute52 multipathd: dm-14: remove map (uevent)
>
> Maybe I have a problem with multipath?
>

I think so. Is ALUA correctly configured?
I'd try to connect with oVirt and see that all works well first.
Y.
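Quick checks for the ALUA/multipath side (the device name `sdd` is an example taken from the logs above; sg_inq requires the sg3_utils package):

```shell
# Sketch -- adjust device names to your host.
multipath -ll                 # per-LUN path groups, prio and status
multipathd show config        # effective multipath settings for the array
sg_inq /dev/sdd               # basic INQUIRY data as the host sees the LUN
```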


>
>
> On Sun, Jun 25, 2017 at 12:38 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Sun, Jun 25, 2017 at 10:51 AM, Iman Darabi 
>> wrote:
>>
>>> my os version is:
>>>   Operating System: CentOS Linux 7 (Core)
>>>   CPE OS Name: cpe:/o:centos:centos:7
>>>   Kernel: Linux 3.10.0-327.28.3.el7.x86_64
>>>   Architecture: x86-64
>>> and san storage is EMC VNX 5400 with double cisco mds 9148s.
>>>
>>>
>> I completely forgot to ask, is there a real problem besides those
>> messages?
>> See [1] - appears to be harmless.
>> Y.
>>
>> [1] https://bugzilla.kernel.org/show_bug.cgi?id=115351
>>
>>
>>> Actually I had a problem with one of my switches and sent it in under
>>> guarantee. Now I have set it up again, but I get these errors from dmesg:
>>> [20258845.245029] sd 12:0:0:16384: [sdf] Very big device. Trying to use
>>> READ CAPACITY(16).
>>> [20258845.245454] sd 5:0:0:0: [sdb] Very big device. Trying to use READ
>>> CAPACITY(16).
>>> [20258845.246554] sd 12:0:1:16384: [sdg] Very big device. Trying to use
>>> READ CAPACITY(16).
>>> [20258845.246650] sd 5:0:1:0: [sdc] Very big device. Trying to use READ
>>> CAPACITY(16).
>>> [20258845.248915] sd 12:0:0:0: [sdd] Read Capacity(10) failed: Result:
>>> hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>> [20258845.248919] sd 12:0:0:0: [sdd] Sense Key : Illegal Request
>>> [current]
>>> [20258845.248922] sd 12:0:0:0: [sdd] Add. Sense: Logical unit not
>>> supported
>>> [20258845.249561] sd 12:0:1:0: [sde] Read Capacity(10) failed: Result:
>>> hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>> [20258845.249564] sd 12:0:1:0: [sde] Sense Key : Illegal Request
>>> [current]
>>> [20258845.249565] sd 12:0:1:0: [sde] Add. Sense: Logical unit not
>>> supported
>>> [20258845.285583] device-mapper: table: 253:14: multipath: error getting
>>> device
>>> [20258845.286019] device-mapper: ioctl: error adding target to table
>>>
>>> here is output of HBA configurations:
>>>
>>> [root@compute52 host12]# multipath -ll
>>> 3600601604d003a00036e268de611 dm-2 DGC ,VRAID
>>> size=2.0T features='1 retain_attached_hw_handler' hwhandler='1 alua'
>>> wp=rw
>>> |-+- policy='service-time 0' prio=50 status=active
>>> | `- 5:0:1:0  sdc 8:32  active ready  running
>>> `-+- policy='service-time 0' prio=10 status=enabled
>>>   `- 5:0:0:0  sdb 8:16  active ready  running
>>> ==
>>> [root@compute52 host12]# systool -c fc_host
>>> Class = "fc_host"
>>>
>>>   Class Device = "host12"
>>> Device = "host12"
>>>
>>>   Class Device = "host5"
>

Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-26 Thread Yaniv Kaul
On Mon, Jun 26, 2017 at 1:16 AM, Vinícius Ferrão  wrote:

> Hello again,
>
> Setting the folder permissions to vdsm:kvm (36:36) did the trick to make
> NFS work. I wasn't expecting this to work since it does not make sense to
> me, making a parallel with the iSCSI problem.
>
> I'm starting to believe that something is just bad. Perhaps the storage
> system is running with something broken or the host machine is just
> unstable.
>
> I will consider this as solved, since further inside the oVirt Engine
> panel I was able to mount and use the same iSCSI sharing.
>
> But let me ask for sincere answers here:
>
> 1. Is oVirt feature complete compared to RHEV?
>

Yes. In fact it may have additional features unavailable in RHV.


>
> 2. Should I migrate from XenServer to oVirt? This is biased, I know, but I
> would like to hear opinions. The folks with @redhat.com email addresses
> will know how to advocate in favor of oVirt.
>

I don't know, and I don't think (having a @redhat.com email myself) I
should advocate.
If someone made a comparison (even if it's just for their own specific use
case), it could be great if it could be shared. If you do - please share
your thoughts.


>
> 3. Some "journalists" say that oVirt is like Fedora in comparison to
> RHEL; is this really true? Or is it more aligned with a CentOS-like release?
> Because Fedora isn't really an Enterprise OS, and I was looking for an
> Enterprise Hypervisor. I'm aware that oVirt is the upstream from RHEV.
>

I'd cautiously say 'in between'. We strive to ensure oVirt is stable, and I
believe we make good progress in every release.
We also make an effort to quickly release fixes (minor releases). That
being said, RHV has a longer life cycle, and for example, when oVirt
stopped releasing oVirt 3.6.x, Red Hat continued to release minor versions
of it.

We have hundreds of oVirt users running it in production, many in large
scale, with mission critical workloads.

Lastly, oVirt enables features upstream before they are delivered in
RHV.

(That being said, I've had a very good experience with Fedora 24 which was
rock solid for me, then I've had some misfortune with Fedora 25, and now
I'm assessing if I should upgrade to F26 beta...)


> 4. There is any good SPICE client for macOS? Or should I just use the
> HTML5 version instead?
>

I'm afraid not.
Y.


>
> Thanks,
> V.
>
> Sent from my iPhone
>
> On 23 Jun 2017, at 18:50, Yaniv Kaul  wrote:
>
>
>
> On Sat, Jun 24, 2017 at 12:23 AM, Vinícius Ferrão 
> wrote:
>
>> Hello Adam and Karli,
>>
>> I will remap uid and gid of NFS to 36 and try again with NFS sharing.
>>
>> But this does not make much sense, because on iSCSI this should not
>> happen. There are no permissions involved and when oVirt runs the
>> hosted-engine setup it creates the ext3 filesystem on the iSCSI share
>> without any issue. Here’s a photo of the network bandwidth during the OVF
>> deployment: http://www.if.ufrj.br/~ferrao/ovirt/bandwidth-iscsi-ovf.jpg
>>
>> So it appears to be working. Something happens after the deployment
>> that breaks the connections and kills vdsm.
>>
>
> Indeed - may be two different issues. Let us know how the NFS works first,
> then let's try with iSCSI.
> Y.
>
>
>>
>> Thanks,
>> V.
>>
>> On 23 Jun 2017, at 17:47, Adam Litke  wrote:
>>
>>
>>
>> On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg 
>> wrote:
>>
>>>
>>>
>>> Den 23 juni 2017 21:08 skrev Vinícius Ferrão :
>>>
>>> Hello oVirt folks.
>>>
>>> I’m a traitor to the Xen movement and was looking for some good
>>> alternatives for XenServer hypervisors. I was aware of KVM for a long time
>>> but I was missing a more professional and appliance feeling of the product,
>>> and oVirt appears to deliver exactly what I’m looking for.
>>>
>>> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
>>> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>>>
>>> Firstly I’ve tried to install the oVirt Node on a VM in VMware Fusion on
>>> my notebook; it was a no go. For reasons I don’t know, vdsmd.service
>>> and libvirtd failed to start. I made sure that I was running
>>> with EPT support enabled to achieve nested virtualization, but as I said:
>>> it was a no go.
>>>
>>> So I’ve decommissioned a XenServer machine that was in production just
>>> to try oVirt. The hardware is not new, but it’s very capable: Dual Xeon
>>> E5506

Re: [ovirt-users] Very big device. Trying to use READ CAPACITY

2017-06-26 Thread Yaniv Kaul
On Sun, Jun 25, 2017 at 8:17 PM, Iman Darabi  wrote:

> I didn't configure anything and let oVirt configure storage
> automatically. Should I configure multipath manually?
>

Usually not, but for ALUA you might want to.
Y.
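If the failing LUN-0 devices in the dmesg output below turn out to be the array's dummy "LUNZ" device (EMC VNX arrays present one when no real LUN is mapped at LUN 0), a common workaround is to blacklist it in multipath. This is only a sketch; the vendor/product strings are assumptions and should be verified against the actual `multipath -ll` output before applying:

```shell
# /etc/multipath.conf fragment (hypothetical) - keep multipath from trying
# to assemble the VNX "LUNZ" placeholder device. Vendor "DGC" matches the
# EMC/Clariion family shown in the multipath -ll output in this thread.
blacklist {
    device {
        vendor  "DGC"
        product "LUNZ"
    }
}
```

After editing, reload with `systemctl reload multipathd` (or `multipathd reconfigure`) and re-check `multipath -ll`.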


>
> On Sun, Jun 25, 2017 at 3:06 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Sun, Jun 25, 2017 at 11:25 AM, Iman Darabi 
>> wrote:
>>
>>> I cannot add a second LUN; this is my problem. Whenever I try to add a LUN
>>> I get that message ...
>>>
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:16384: [sdf] Very big
>>> device. Trying to use READ CAPACITY(16).
>>> Jun 25 12:51:55 compute52 kernel: sd 5:0:0:0: [sdb] Very big device.
>>> Trying to use READ CAPACITY(16).
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:16384: [sdg] Very big
>>> device. Trying to use READ CAPACITY(16).
>>> Jun 25 12:51:55 compute52 kernel: sd 5:0:1:0: [sdc] Very big device.
>>> Trying to use READ CAPACITY(16).
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:0: [sdd] Read Capacity(10)
>>> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:0: [sdd] Sense Key : Illegal
>>> Request [current]
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:0:0: [sdd] Add. Sense: Logical
>>> unit not supported
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:0: [sde] Read Capacity(10)
>>> failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:0: [sde] Sense Key : Illegal
>>> Request [current]
>>> Jun 25 12:51:55 compute52 kernel: sd 12:0:1:0: [sde] Add. Sense: Logical
>>> unit not supported
>>> Jun 25 12:51:56 compute52 kernel: device-mapper: table: 253:14:
>>> multipath: error getting device
>>> Jun 25 12:51:56 compute52 kernel: device-mapper: ioctl: error adding
>>> target to table
>>> Jun 25 12:51:56 compute52 multipathd: dm-14: remove map (uevent)
>>> Jun 25 12:51:56 compute52 multipathd: dm-14: remove map (uevent)
>>>
>>> Maybe I have a problem with multipath ...?
>>>
>>
>> I think so. Is ALUA correctly configured?
>> I'd try to connect with oVirt and see that all works well first.
>> Y.
>>
>>
>>>
>>>
>>> On Sun, Jun 25, 2017 at 12:38 PM, Yaniv Kaul  wrote:
>>>
>>>>
>>>>
>>>> On Sun, Jun 25, 2017 at 10:51 AM, Iman Darabi 
>>>> wrote:
>>>>
>>>>> my os version is:
>>>>>   Operating System: CentOS Linux 7 (Core)
>>>>>   CPE OS Name: cpe:/o:centos:centos:7
>>>>>   Kernel: Linux 3.10.0-327.28.3.el7.x86_64
>>>>>   Architecture: x86-64
>>>>> and san storage is EMC VNX 5400 with double cisco mds 9148s.
>>>>>
>>>>>
>>>> I completely forgot to ask, is there a real problem besides those
>>>> messages?
>>>> See [1] - appears to be harmless.
>>>> Y.
>>>>
>>>> [1] https://bugzilla.kernel.org/show_bug.cgi?id=115351
>>>>
>>>>
>>>>> Actually I had a problem with one of my switches and sent it in for
>>>>> warranty repair. Now I am setting it up again, but I get these errors from dmesg:
>>>>> [20258845.245029] sd 12:0:0:16384: [sdf] Very big device. Trying to
>>>>> use READ CAPACITY(16).
>>>>> [20258845.245454] sd 5:0:0:0: [sdb] Very big device. Trying to use
>>>>> READ CAPACITY(16).
>>>>> [20258845.246554] sd 12:0:1:16384: [sdg] Very big device. Trying to
>>>>> use READ CAPACITY(16).
>>>>> [20258845.246650] sd 5:0:1:0: [sdc] Very big device. Trying to use
>>>>> READ CAPACITY(16).
>>>>> [20258845.248915] sd 12:0:0:0: [sdd] Read Capacity(10) failed: Result:
>>>>> hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>>>> [20258845.248919] sd 12:0:0:0: [sdd] Sense Key : Illegal Request
>>>>> [current]
>>>>> [20258845.248922] sd 12:0:0:0: [sdd] Add. Sense: Logical unit not
>>>>> supported
>>>>> [20258845.249561] sd 12:0:1:0: [sde] Read Capacity(10) failed: Result:
>>>>> hostbyte=DID_OK driverbyte=DRIVER_SENSE
>>>>> [20258845.249564] sd 12:0:1:0: [sde] Sense Key : Illegal Request
>>>>> [current]
>>>>> [20258845.249565] sd 12:0:1:0: [sde] Add. Sense: Logical unit not
>>>>> supported
>>>>> [20258845.285583] device-mapper: table: 253:14: multipath: error

Re: [ovirt-users] VM live migration and NFS 4.2

2017-06-26 Thread Yaniv Kaul
On Sun, Jun 25, 2017 at 10:31 PM, Markus Stockhausen <
stockhau...@collogia.de> wrote:

> Hi,
>
> we are currently evaluating NFS 4.2 based storage for OVirt 4.1.2. Normal
> operation
> and discard support work like a charm.
>
> For some strange reason we cannot use VM live migration any more. As soon
> as one NFS 4.2 based VM disk is doing disk I/O during the operation, the VM
> stalls and is paused. It seems as if qemu on the target node cannot take
> over the disk images. See BZ1464787.
>
>

Can you make sure you are not suffering from any SELinux issues?
What NFS server are you using? If it's Linux based, it needs some SELinux
configuration.
For example (from ovirt-system-tests):
semanage fcontext -a -t nfs_t '/exports/nfs(/.*)?'
restorecon -Rv /exports/nfs
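The semanage/restorecon commands above set the SELinux label; for oVirt, they are typically combined with vdsm:kvm (36:36) ownership on the export, as came up in the hosted-engine thread earlier. A fuller sketch of preparing a Linux NFS export for oVirt (the export path and options are hypothetical; adjust to your environment):

```shell
# Prepare an NFS export for use as an oVirt storage domain (sketch).
mkdir -p /exports/nfs
chown 36:36 /exports/nfs            # vdsm:kvm - vdsm requires this ownership
chmod 0755 /exports/nfs
semanage fcontext -a -t nfs_t '/exports/nfs(/.*)?'
restorecon -Rv /exports/nfs
# anonuid/anongid squash remote root to vdsm:kvm; example export options.
echo '/exports/nfs *(rw,sync,no_subtree_check,anonuid=36,anongid=36)' >> /etc/exports
exportfs -ra
```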


>
> Has anybody else seen similar issues?
>

Raz?
Y.


>
> Best regards.
>
> Markus
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] migrating VMS from old hosted engine to new hosted engine

2017-06-26 Thread Yaniv Kaul
On Fri, Jun 23, 2017 at 1:13 AM, Paul Groeneweg | Pazion 
wrote:

> We are moving our VMs from an old 3.6 oVirt platform to a new oVirt 4.1
> platform.
>
> I am looking for an easy way to migrate the VMs. (I know about the export
> domain, but that takes two copy actions.) I tested just detaching
> the data domain and attaching it, but that didn't work.
>
>
It should work, and is the easiest way by far. Can you share more details
on what did not work for you?
Y.


> Then I found the VM import option:
>  - Adding an ssh key and setting a password for virsh
> - load a list of machines in the webinterface from VM import
> - when I start an import the system complains:
> The following VMs retrieved from external server
> qemu+ssh://r...@xxx.xxx.xxx.xxx/system are not in down status: VM
> [VM-NAME].
> - I tried to suspend the VM, but this didn't work either (same error)
> - Problem is, when the VM is shut down, it disappears from this system list.
>
> Is there a way to directly import a VM from the old oVirt platform to the
> new platform?
>
> Looking forward to find a solution to quickly migrate our VMs.
>
> Best Regards,
>
> Paul Groeneweg
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-06-27 Thread Yaniv Kaul
On Mon, Jun 26, 2017 at 11:03 PM, Florian Schmid  wrote:

> Hi,
>
> I wanted to monitor disk IO and R/W on all of our oVirt centos 7.3
> hypervisor hosts, but it looks like that all those files are empty.
>

We have a very nice integration with Elastic-based monitoring and logging -
why not use it?
On the host, we use collectd for monitoring.
See
http://www.ovirt.org/develop/release-management/features/engine/metrics-store/

Y.
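When the cgroup blkio counters are populated, each stat file holds simple "MAJ:MIN Op value" lines plus a trailing "Total" line. A minimal parsing sketch (the sample content is hypothetical):

```python
def parse_blkio(text):
    """Parse cgroup-v1 blkio stat file content ("MAJ:MIN Op value" lines)."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:          # skips the trailing "Total <n>" line
            continue
        dev, op, value = parts
        stats.setdefault(dev, {})[op] = int(value)
    return stats

# Hypothetical contents of blkio.throttle.io_service_bytes for one VM scope.
sample = "8:0 Read 1048576\n8:0 Write 524288\n8:0 Sync 0\nTotal 1572864"
print(parse_blkio(sample))
```

Reading the real file is then just `parse_blkio(open(path).read())` against the machine.slice scope path shown below.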


> For example:
> ls -al /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\
> x2dHostedEngine.scope/
> insgesamt 0
> drwxr-xr-x.  2 root root 0 30. Mai 10:09 .
> drwxr-xr-x. 16 root root 0 26. Jun 09:25 ..
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_merged
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_merged_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_queued
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_queued_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_bytes
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_bytes_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_serviced
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_serviced_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_time
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_time_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_wait_time
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_wait_time_recursive
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.leaf_weight
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.leaf_weight_device
> --w---.  1 root root 0 30. Mai 10:09 blkio.reset_stats
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.sectors
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.sectors_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.io_service_bytes
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.io_serviced
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.read_bps_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.read_iops_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.write_bps_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.write_iops_device
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.time
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.time_recursive
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.weight
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.weight_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 cgroup.clone_children
> --w--w--w-.  1 root root 0 30. Mai 10:09 cgroup.event_control
> -rw-r--r--.  1 root root 0 30. Mai 10:09 cgroup.procs
> -rw-r--r--.  1 root root 0 30. Mai 10:09 notify_on_release
> -rw-r--r--.  1 root root 0 30. Mai 10:09 tasks
>
>
> I thought I could get the values I need from there, but all the files are empty.
>
> Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/
> 079011.html
> this should work.
>
> Is this normal on centos 7.3 with oVirt installed? How can I get those
> values, without monitoring all VMs directly?
>
> oVirt Version we use:
> 4.1.1.8-1.el7.centos
>
> BR Florian
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ansible/ansible] ovirt_storage_domains can not find iscsi disk during creation (state: present) (#25417)

2017-06-28 Thread Yaniv Kaul
Can you share vdsm log?

On Wed, Jun 28, 2017 at 4:16 PM, victor 600833 
wrote:

> Hi Ondra,
>
> As you pointed. in your previous mail.
>
> The iscsiadm discovery went through 10.11.8.190. It should have probed
> 10.10.1.1.
> This may be the root cause.
>
>
>
> Victor,
>
>
>
>
>
> If you look at our playbook:
>
>   - name: create iscsi_dsk02
> ovirt_storage_domains:
>  auth:
>   url: https://rhevm2.res1/ovirt-engine/api
>   token: "{{ ovirt_auth.token }}"
>   insecure: True
>  name: iscsi_dsk02
>  domain_function: data
>  host: pdx1cr207
>  data_center: default
>  iscsi:
>   target: iqn.2017-06.stet.iscsi.server:target
>   address: 10.10.1.1
>   lun_id: 3600140550738b53dd774303bfedac122
>  state: absent
>  destroy: true
> register: res_iscsi_dsk02
>   - debug:
>  var: res_iscsi_dsk02
>
> On Wed, Jun 28, 2017 at 7:58 AM, Ondra Machacek 
> wrote:
>
>> It's using the host, which you will pass as host parameter to the task
>> for iscsi discover.
>>
>> As Yaniv said, can you please share your issue on our mailing list:
>>
>> users@ovirt.org
>>
>> There are people with knowledge of iSCSI and they will help you.
>>
>> —
>> You are receiving this because you were mentioned.
>> Reply to this email directly, view it on GitHub
>> ,
>> or mute the thread
>> 
>> .
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Ovirt 4.0.6] Suggestion required for Network Throughput options

2017-06-29 Thread Yaniv Kaul
On Thu, Jun 29, 2017 at 10:07 AM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> To increase network throughput we have changed the txqueuelen of the network
> device and bridge manually, and observed improved throughput.
>

Interesting, as I've read the default (1000) should be good enough for 10g,
for example.

Are you actually seeing errors on the interface (overruns) and such?


>
> But in oVirt I do not see any option to increase txqueuelen.
>

Perhaps use ifup-local script to set it when the interface goes up?
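On RHEL/CentOS, initscripts run /sbin/ifup-local (if present and executable) after an interface comes up, passing the interface name as the first argument. A minimal sketch; the interface names and queue length are hypothetical examples:

```shell
#!/bin/bash
# /sbin/ifup-local - invoked by initscripts after an interface comes up;
# $1 is the interface name. Must be executable (chmod +x).
case "$1" in
    ovirtmgmt|eno1)
        # Bump the transmit queue length; 10000 is an example value.
        /sbin/ip link set dev "$1" txqueuelen 10000
        ;;
esac
```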


> Can someone suggest me what will be the right way to increase throughput ?
>
> Note: I am trying to increase throughput for  ipsec packets.
>

For ipsec, probably best to ensure virtio-rng is enabled.
Y.


>
> Thanks,
> ~Rohit
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Ovirt 4.0.6] Suggestion required for Network Throughput options

2017-06-29 Thread Yaniv Kaul
On Thu, Jun 29, 2017 at 4:02 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Got it, I just need to do a modprobe to add the virtio-rng driver.
> I will try this option.
>

Make sure it is checked on the cluster.
Y.

>
> Thanks for your help,
> ~Rohit
>
> On Thu, Jun 29, 2017 at 6:20 PM, TranceWorldLogic . <
> tranceworldlo...@gmail.com> wrote:
>
>> Hi,
>>
>> I am using CentOS 7.3 as the host and also CentOS 7.3 as the guest;
>> it has kernel version 3.10.
>>
>> But I do not see virtio-rng in the guest VM. Does this module come with the
>> kernel, or do I have to install it separately?
>>
>> Thanks,
>> ~Rohit
>>
>> On Thu, Jun 29, 2017 at 5:28 PM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Thu, Jun 29, 2017 at 10:07 AM, TranceWorldLogic . <
>>> tranceworldlo...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> To increase network throughput we have changed txqueuelen of network
>>>> device and bridge manually. And observed  improved throughput.
>>>>
>>>
>>> Interesting, as I've read the default (1000) should be good enough for
>>> 10g, for example.
>>>
>>> Are you actually seeing errors on the interface (overruns) and such?
>>>
>>>
>>>>
>>>> But in ovirt I not see any option to increase txqueuelen.
>>>>
>>>
>>> Perhaps use ifup-local script to set it when the interface goes up?
>>>
>>>
>>>> Can someone suggest me what will be the right way to increase
>>>> throughput ?
>>>>
>>>> Note: I am trying to increase throughput for  ipsec packets.
>>>>
>>>
>>> For ipsec, probably best to ensure virtio-rng is enabled.
>>> Y.
>>>
>>>
>>>>
>>>> Thanks,
>>>> ~Rohit
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-30 Thread Yaniv Kaul
On Fri, Jun 30, 2017 at 2:30 AM, Ben De Luca  wrote:

> Are we running the same code?
>
> I applaud the amount of effort, but I can't imagine there is much depth of
> testing. Oh that's right, we are the testers for RHEL
>


Hi Ben,

It's always great to get constructive feedback.
I've tried to look for bugs you have reported, for RHEL, oVirt or Fedora,
but did not find any[1].
Perhaps I was searching with the wrong username.

Respectfully[2],
Y.

[1]
https://bugzilla.redhat.com/buglist.cgi?chfield=%5BBug%20creation%5D&chfieldto=Now&email1=bdeluca%40gmail.com&emailreporter1=1&emailtype1=substring&f3=OP&j3=OR&list_id=7540196&query_format=advanced

[2] http://www.ovirt.org/community/about/community-guidelines/#be-respectful


> On 29 June 2017 at 21:27, Frank Wall  wrote:
>
>> On Mon, Jun 26, 2017 at 10:51:52AM +0200, InterNetX - Juergen
>> Gotteswinter wrote:
>> > > 2. Should I migrate from XenServer to oVirt? This is biased, I know,
>> but
>> > > I would like to hear opinions. The folks with @redhat.com email
>> > > addresses will know how to advocate in favor of oVirt.
>> >
>> > in terms of reliability, better stay with XenServer
>>
>> Seriously, you should have provided some more insights to support your
>> statement. What reliability issues did you encounter in oVirt that are
>> not present in Xenserver?
>>
>> I have deployed *several* oVirt setups since 2012 and haven't found a
>> single reliability issue since then. Of course, there have been some bugs,
>> but the oVirt project made *tremendous* progress since 2012.
>>
>>
>> Regards
>> - Frank
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Ovirt 4.0.6] Suggestion required for Network Throughput options

2017-06-30 Thread Yaniv Kaul
On Fri, Jun 30, 2017 at 4:14 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi Yaniv,
>
> I have enabled the random generator in the cluster and also in the VM,
> but still do not see any improvement in throughput.
>
> lsmod | grep -i virtio
> virtio_rng 13019  0
>

Are you sure it's being used? What is the qemu command line (do you see the
device in the guest?)
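Two quick checks for both questions, a sketch; the sysfs path is standard, but the exact strings reported depend on the setup:

```shell
# On the host: confirm the VM's qemu process was started with a virtio-rng
# device (look for "virtio-rng-pci" among the -device arguments).
ps -ef | grep qemu-kvm | grep -o 'virtio-rng-pci[^ ]*'

# In the guest: report the active hardware RNG backend - with virtio-rng
# wired up this is typically something like "virtio_rng.0".
cat /sys/class/misc/hw_random/rng_current
ls -l /dev/hwrng
```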


> virtio_balloon 13834  0
> virtio_console 28115  2
> virtio_blk 18156  4
> virtio_scsi18361  0
> virtio_net 28024  0
> virtio_pci 22913  0
> virtio_ring21524  7 virtio_blk,virtio_net,virtio_
> pci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi
> virtio 15008  7 virtio_blk,virtio_net,virtio_
> pci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi
>
> Would you please check whether I am missing some virtio module?
>
> One more finding: if I set the queues property in the vNIC profile then I
> get good throughput.
>

Interesting - I had assumed the bottleneck would be the
encryption/decryption process, not the network. What do you set exactly?
Does it matter in non-encrypted traffic as well? Are the packets (and the
whole communication) large or small (i.e, would jumbo frames help) ?
 Y.


> Thanks,
> ~Rohit
>
>
> On Fri, Jun 30, 2017 at 12:11 AM, Yaniv Kaul  wrote:
>
>>
>>
>> On Thu, Jun 29, 2017 at 4:02 PM, TranceWorldLogic . <
>> tranceworldlo...@gmail.com> wrote:
>>
>>> Got it, just I need to do modprobe to add virtio-rng driver.
>>> I will try with this option.
>>>
>>
>> Make sure it is checked on the cluster.
>> Y.
>>
>>>
>>> Thanks for your help,
>>> ~Rohit
>>>
>>> On Thu, Jun 29, 2017 at 6:20 PM, TranceWorldLogic . <
>>> tranceworldlo...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am using host as Centos 7.3 and guest also centos 7.3
>>>> it have 3.10 kernel version.
>>>>
>>>> But I not see virtio-rng in guest VM. Is this module come with kernel
>>>> or separately I have to install ?
>>>>
>>>> Thanks,
>>>> ~Rohit
>>>>
>>>> On Thu, Jun 29, 2017 at 5:28 PM, Yaniv Kaul  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Jun 29, 2017 at 10:07 AM, TranceWorldLogic . <
>>>>> tranceworldlo...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> To increase network throughput we have changed txqueuelen of network
>>>>>> device and bridge manually. And observed  improved throughput.
>>>>>>
>>>>>
>>>>> Interesting, as I've read the default (1000) should be good enough for
>>>>> 10g, for example.
>>>>>
>>>>> Are you actually seeing errors on the interface (overruns) and such?
>>>>>
>>>>>
>>>>>>
>>>>>> But in ovirt I not see any option to increase txqueuelen.
>>>>>>
>>>>>
>>>>> Perhaps use ifup-local script to set it when the interface goes up?
>>>>>
>>>>>
>>>>>> Can someone suggest me what will be the right way to increase
>>>>>> throughput ?
>>>>>>
>>>>>> Note: I am trying to increase throughput for  ipsec packets.
>>>>>>
>>>>>
>>>>> For ipsec, probably best to ensure virtio-rng is enabled.
>>>>> Y.
>>>>>
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> ~Rohit
>>>>>>
>>>>>> ___
>>>>>> Users mailing list
>>>>>> Users@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm freezes when using yum update

2017-07-03 Thread Yaniv Kaul
27;, 
> spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', 
> imgUUID=u'3c26476e-1dae-44d7-9208-531b91ae5ae1', 
> volUUID=u'a7e789fb-6646-4d0a-9b51-f5ab8242c8d5', options=None) (logUtils:51)
> 2017-07-03 09:50:46,738+0800 INFO  (periodic/0) [dispatcher] Run and protect: 
> getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', 
> spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', 
> imgUUID=u'2158fdae-54e1-413d-a844-73da5d1bb4ca', 
> volUUID=u'6ee0b0eb-0bba-4e18-9c00-c1539b632e8a', options=None) (logUtils:51)
> 2017-07-03 09:50:46,740+0800 INFO  (periodic/2) [dispatcher] Run and protect: 
> getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', 
> spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', 
> imgUUID=u'a967016d-a56b-41e8-b7a2-57903cbd2825', 
> volUUID=u'784514cb-2b33-431c-b193-045f23c596d8', options=None) (logUtils:51)
> 2017-07-03 09:50:46,741+0800 INFO  (periodic/1) [dispatcher] Run and protect: 
> getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', 
> spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', 
> imgUUID=u'bb35c163-f068-4f08-a1c2-28c4cb1b76d9', 
> volUUID=u'fce7e0a0-7411-4d8c-b72c-2f46c4b4db1e', options=None) (logUtils:51)
> 2017-07-03 09:50:46,743+0800 INFO  (periodic/0) [dispatcher] Run and protect: 
> getVolumeSize, Return response: {'truesize': '6361276416', 'apparentsize': 
> '107374182400'} (logUtils:54)
>
>
> ..
>
> ..
>
>
> *2017-07-03 09:52:16,941+0800 INFO  (libvirt/events) [virt.vm] 
> (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device 
> scsi0-0-0-0 error eio (vm:4112)
> 2017-07-03 09:52:16,941+0800 INFO  (libvirt/events) [virt.vm] 
> (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
> 2017-07-03 09:52:16,942+0800 INFO  (libvirt/events) [virt.vm] 
> (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onSuspend (vm:4997)
> 2017-07-03 09:52:16,942+0800 INFO  (libvirt/events) [virt.vm] 
> (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device 
> scsi0-0-0-0 error eio (vm:4112)
> 2017-07-03 09:52:16,943+0800 INFO  (libvirt/events) [virt.vm] 
> (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
> 2017-07-03 09:52:16,943+0800 INFO  (libvirt/events) [virt.vm] 
> (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device 
> scsi0-0-0-0 error eio (vm:4112)
> 2017-07-03 09:52:16,944+0800 INFO  (libvirt/events) [virt.vm] 
> (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError*
>
>
>
>
>
>
> *On Thursday, June 22, 2*017, 2:48 PM, Yaniv Kaul 
> wrote:
>
>
>
> On Thu, Jun 22, 2017 at 5:07 AM, M Mahboubian 
> wrote:
>
> Dear all,
> I would appreciate it if anybody could help with the issue I am facing.
>
> In our environment we have 2 hosts 1 NFS server and 1 ovirt engine server.
> The NFS server provides storage to the VMs in the hosts.
>
> I can create new VMs and install the OS, but once I do something like yum
> update the VM freezes. I can reproduce this every single time I do yum
> update.
>
>
> Is it paused, or completely frozen?
>
>
>
> What information/log files should I provide you to troubleshoot this?
>
>
> Versions of all the components involved - guest OS, host OS (qemu-kvm
> version), how do you run the VM (vdsm log would be helpful here), exact
> storage specification (1Gb or 10Gb link? What is the NFS version? What is
> it hosted on? etc.)
>  Y.
>
>
>  Regards
>
> __ _
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/ mailman/listinfo/users
> <http://lists.ovirt.org/mailman/listinfo/users>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm freezes when using yum update

2017-07-03 Thread Yaniv Kaul
On Mon, Jul 3, 2017 at 11:00 AM, M Mahboubian 
wrote:

> Hi Yaniv,
>
> Thank you for your reply.
>
> | Interesting - what interface are they using?
> | Is that raw or raw sparse? How did you perform the conversion? (or no
> conversion - just copied the disks over?)
>
> The VM disks are in the SAN storage in order to use oVirt we just pointed
> them to the oVirt VMs. This is how we did it precisely:
> First we created the VMs in oVirt with disks which are the same size as
> the existing disks. then we deleted these disks which was generated by
> oVirt and renamed our existing disks to match the deleted ones naming-wise.
> Finally we started the oVirt VMs and they were able to run and these VMs
> are always ok without any issue.
>
> The new VMs which have problem are from scratch (no template). One thing
> though, all these new VMs are created based on an CentOS 7 ISO. We have not
> tried any other flavor of Linux.
>
> The kernel 4.1 is actually from Oracle Linux repository since we needed to
> have OCFS2 support. So after installing oVirt we updated the kernel to
> Oracle Linux kernel 4.1 since that kernel supports OCFS2.
>

Would it be possible to test with the regular CentOS kernel? Just to ensure
it's not the kernel causing this?
Y.
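While narrowing this down, the vdsm log lines quoted in this thread (the "abnormal vm stop device ... error eio" events) can be pulled out of a large vdsm.log mechanically. A sketch; the regex is inferred from the quoted log lines, not from any vdsm-provided format guarantee:

```python
import re

# Extract (timestamp, vmId, device, error) for "abnormal vm stop" events
# from vdsm.log text.
PAUSE_RE = re.compile(
    r"^(?P<ts>\S+ \S+).*\(vmId='(?P<vm>[^']+)'\) "
    r"abnormal vm stop device (?P<dev>\S+) error (?P<err>\w+)"
)

def find_pauses(log_text):
    return [m.groupdict() for line in log_text.splitlines()
            if (m := PAUSE_RE.match(line)) is not None]

# One of the log lines quoted in this thread.
sample = ("2017-07-03 09:52:16,941+0800 INFO  (libvirt/events) [virt.vm] "
          "(vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device "
          "scsi0-0-0-0 error eio (vm:4112)")
print(find_pauses(sample))
```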


>
> | We might need to get libvirt debug logs (and perhaps journal output of
> the host).
>
> I'll get this information and post here.
>
> Regards
>
>
>
>
>
>
> On Monday, July 3, 2017 3:01 PM, Yaniv Kaul  wrote:
>
>
>
>
> On Mon, Jul 3, 2017 at 6:49 AM, M Mahboubian 
> wrote:
>
> Hi Yaniv,
>
> Thanks for your reply. Apologies for my late reply; we had a long holiday
> here.
>
> To answer you:
>
> Yes, the guest VM becomes completely frozen and non-responsive as soon as
> its disk has any activity for example when we shutdown or do a yum update.
>
>
> Versions of all the components involved - guest OS, host OS (qemu-kvm
> version), how do you run the VM (vdsm log would be helpful here), exact
> storage specification (1Gb or 10Gb link? What is the NFS version? What is
> it hosted on? etc.)
>  Y.
>
> Some facts about our environment:
>
> 1) Previously, this environment was using XEN using raw disk and we change
> it to Ovirt (Ovirt were able to read the VMs's disks without any
> conversion.)
>
>
> Interesting - what interface are they using?
> Is that raw or raw sparse? How did you perform the conversion? (or no
> conversion - just copied the disks over?)
>
>
> 2) The issue we are facing is not happening for any of the existing VMs.
> *3) This issue only happens for new VMs.*
>
>
> New VMs from blank, or from a template (as a snapshot over the previous
> VMs) ?
>
>
> 4) Guest (kernel v3.10) and host(kernel v4.1) OSes are both CentOS 7
> minimal installation.
>
>
> Kernel 4.1? From where?
>
>
> *5) NFS version 4* and Using Ovirt 4.1
> 6) *The network speed is 1 GB.*
>
>
> That might be very slow (but should not cause such an issue, unless
> severely overloaded).
>
> 7) The output for rpm -qa | grep qemu-kvm shows:
> * qemu-kvm-common-ev-2.6.0-28.el7_3.6.1.x86_64*
> * qemu-kvm-tools-ev-2.6.0-28.el7_3.6.1.x86_64*
> * qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64*
>
>
> That's good - that's almost the latest-greatest.
>
>
> *8) *The storage is from a SAN device which is connected to the NFS
> server using fiber channel.
>
> So, for example, during shutdown it also froze and shows something like this
> in the events section:
>
> *VM ILMU_WEB has been paused due to storage I/O problem.*
>
>
> We might need to get libvirt debug logs (and perhaps journal output of the
> host).
> Y.
>
>
>
>
> More information:
>
> VDSM log at the time of this issue (The issue happened at Jul 3, 2017
> 9:50:43 AM):
>
> 2017-07-03 09:50:37,113+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC 
> call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
> 2017-07-03 09:50:37,897+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC 
> call Host.getAllVmIoTunePolicies succeeded in 0.02 seconds (__init__:515)
> 2017-07-03 09:50:42,510+0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC 
> call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
> *2017-07-03 09:50:43,548+0800 INFO  (jsonrpc/3) [dispatcher] Run and protect: 
> repoStats(options=None) (logUtils:51)
> 2017-07-03 09:50:43,548+0800 INFO  (jsonrpc/3) [dispatcher] Run and protect: 
> repoStats, Return response: {u'e01186c1-7e44-4808-b551- 4722f0f8e84b': 
> {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': 
> '0.000144822

Re: [ovirt-users] vdsm changing disk scheduler when starting, configurable?

2017-07-03 Thread Yaniv Kaul
On Mon, Jul 3, 2017 at 12:27 AM, Darrell Budic 
wrote:

> It seems vdsmd under 4.1.x (or something under its control) changes the
> disk schedulers when it starts or a host node is activated, and I’d like to
> avoid this. Is it preventable? Or configurable anywhere? This was probably
> happening under earlier version, but I just noticed it while upgrading some
> converged boxes today.
>
> It likes to set deadline, which I understand is the RHEL default for
> CentOS 7 on non-SATA disks. But I’d rather have NOOP on my SSDs because
> SSDs, and NOOP on my SATA spinning platters because ZFS does its own
> scheduling, and running anything other than NOOP can cause increased CPU
> utilization for no gain. It’s also fighting ZFS, which tries to set NOOP on
> whole disks it controls, and my kernel command line setting.
>

We've stopped doing it in 4.1.1 (
https://bugzilla.redhat.com/show_bug.cgi?id=1381219 )
Y.


>
> Thanks,
>
>   -Darrell
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] Other portals work, user portal experiences console errors

2017-07-03 Thread Yaniv Kaul
On Mon, Jul 3, 2017 at 12:08 PM, Sophia Valentine  wrote:

> Hi all!
>
> I installed the nightly from the repo on CentOS 7.0.
>

I suggest a newer CentOS - 7.3 was released a few months ago already.


>
> I currently have an issue where visiting the user portal triggers the
> following errors in the browser console. I believe that I will need to
> compile the engine from source due to the current source being
> minified, thus obscuring the stack trace.
>
> If that is needed, can I generate an RPM with the debug build I create
> (might need some help with this, if that's okay)? I'm hoping it's a
> "known error" and that there's a simple fix. Would love to get to the
> bottom of this!
>

Indeed - build the debuginfo RPM and restart the engine; it should provide
a deciphered stack trace.
However, it should already be available from the same repo you installed
the webadmin from, I believe.
I'm not sure why you would need to build it from source?
Y.


>
> Thanks.
>
> Stacktrace below:
>
> Mon Jul 03 09:13:51 GMT+100 2017 com.gwtplatform.mvp.client.pre
> senter.slots.LegacySlotConvertor
>
> SEVERE: Warning: You're using an untyped slot!
> Untyped slots are dangerous! Please upgrade your slots using
> the Arcbee's easy upgrade tool at
> https://arcbees.github.io/gwtp-slot-upgrader
> uGc @ userportal-0.js:12779
> userportal-0.js:12779 Mon Jul 03 09:13:51 GMT+100 2017 SEVERE: Uncaught
> exception
> com.google.web.bindery.event.shared.UmbrellaException: 2 exceptions
> caught: Adding non GroupedTabData; Adding non GroupedTabData
> at Unknown.lp(userportal-0.js)
> at Unknown.xp(userportal-0.js)
> at Unknown.dQ(userportal-0.js)
> at Unknown.EP(userportal-0.js)
> at Unknown.HP(userportal-0.js)
> at Unknown.YP(userportal-0.js)
> at Unknown.Omd(userportal-0.js)
> at Unknown.Und(userportal-28.js)
> at Unknown.bmd(userportal-0.js)
> at Unknown.mnd(userportal-28.js)
> at Unknown.amd(userportal-0.js)
> at Unknown.Rqo(userportal-2.js)
> at Unknown.dro(userportal-2.js)
> at Unknown.yq(userportal-0.js)
> at Unknown.Dq(userportal-0.js)
> at Unknown.Zq(userportal-0.js)
> at Unknown.ar(userportal-0.js)
> at Unknown.eval(userportal-0.js)
> at Unknown.anonymous(userportal-2.js)
> at  Unknown.userportal.__installRunAsyncCode(
> https://vmengine.x.com/ovirt-engine/userportal/userportal.nocache.js)
> at Unknown.__gwtInstallCode(userportal-0.js)
> at Unknown.mr(userportal-0.js)
> at Unknown.Kr(userportal-0.js)
> at Unknown.eval(userportal-0.js)
> at Unknown.Zq(userportal-0.js)
> at Unknown.ar(userportal-0.js)
> at Unknown.eval(userportal-0.js)
> at  Unknown.anonymous(https://vmen
> gine.x.com/ovirt-engine/userportal/deferredjs/5D86F4BF62D99E
> 7C2EA706F122B819FC/2.cache.js)
> Suppressed: java.lang.RuntimeException: Adding non
> GroupedTabData
> at Unknown.kp(userportal-0.js)
> at Unknown.up(userportal-0.js)
> at Unknown.wp(userportal-0.js)
> at Unknown.XLk(userportal-2.js)
> at Unknown.w2j(userportal-28.js)
> at Unknown.Rnd(userportal-28.js)
> at Unknown.znd(userportal-28.js)
> at Unknown.Bnd(userportal-28.js)
> at Unknown.EP(userportal-0.js)
> at Unknown.HP(userportal-0.js)
> at Unknown.YP(userportal-0.js)
> at Unknown.Omd(userportal-0.js)
> at Unknown.Und(userportal-28.js)
> at Unknown.bmd(userportal-0.js)
> at Unknown.mnd(userportal-28.js)
> at Unknown.amd(userportal-0.js)
> at Unknown.Rqo(userportal-2.js)
> at Unknown.dro(userportal-2.js)
> at Unknown.yq(userportal-0.js)
> at Unknown.Dq(userportal-0.js)
> at Unknown.Zq(userportal-0.js)
> at Unknown.ar(userportal-0.js)
> at Unknown.eval(userportal-0.js)
> at Unknown.anonymous(userportal-2.js)
> at  Unknown.userportal.__installRunAsyncCode(
> https://vmengine.x.com/ovirt-engine/userportal/userportal.nocache.js)
> at Unknown.__gwtInstallCode(userportal-0.js)
> at Unknown.mr(userportal-0.js)
> at Unknown.Kr(userportal-0.js)
> at Unknown.eval(userportal-0.js)
> at Unknown.Zq(userportal-0.js)
> at Unknown.ar(userportal-0.js)
> at Unknown.eval(userportal-0.js)
> at  Unknown.anonymous(https://vmen
> gine.x.com/ovirt-engine/userportal/deferredjs/5D86F4BF62D99E
> 7C2EA706F122B819FC/2.cache.js)
> Caused by: java.lang.RuntimeException: Adding non GroupedTabData
> at Unknown.kp(userportal-0.js)
> at Unknown

Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Yaniv Kaul
On Jul 4, 2017 7:14 AM, "Konstantin Shalygin"  wrote:

Yes, I do deployment in four steps:

1. Install CentOS via iDRAC.
2. Attach the VLAN to the 10G physdev via iproute. This is the one manual
step. It may be replaced by DHCP management, but for now I only have 2x10G
fiber, without any DHCP.
3. Run ovirt_deploy Ansible role.
4. Attach oVirt networks after host activate.

About iSCSI, NFS. I don't know anything about it. I use Ceph.


How are you using Ceph for hosted engine?
Y.


On 07/04/2017 10:50 AM, Vinícius Ferrão wrote:

Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic eth
interfaces and later after the deployment of Hosted Engine I can convert
the "ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine? I've
looked through the net and only found this issue: https://bugzilla.
redhat.com/show_bug.cgi?id=1193961

Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?


-- 
Best regards,
Konstantin Shalygin




Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-05 Thread Yaniv Kaul
On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão  wrote:

> Adding another question to what Matthias has said.
>
> I also noted that oVirt (and RHV) documentation does not mention the
> supported block size on iSCSI domains.
>
> RHV: https://access.redhat.com/documentation/en-us/red_
> hat_virtualization/4.0/html/administration_guide/chap-storage
> oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/
>
> I’m interested in 4K blocks over iSCSI, but this isn’t really widely
> supported. The question is: does oVirt support this? Or should we stay
> with the default 512-byte block size?
>

It does not.
Y.


>
> Thanks,
> V.
>
> On 4 Jul 2017, at 09:10, Matthias Leopold  ac.at> wrote:
>
>
>
> Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:
>
> On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão  wrote:
>Thanks, Konstantin.
>Just to be clear enough: the first deployment would be made on
>classic eth interfaces and later after the deployment of Hosted
>Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
>Another question: what about iSCSI Multipath on Self Hosted Engine?
>I've looked through the net and only found this issue:
>https://bugzilla.redhat.com/show_bug.cgi?id=1193961
>
>Appears to be unsupported as today, but there's an workaround on the
>comments. It's safe to deploy this way? Should I use NFS instead?
> It's probably not the most tested path, but once you have an engine you
> should be able to create an iSCSI bond on your hosts from the engine.
> Network configuration is persisted across host reboots, and so is the
> iSCSI bond configuration.
> A different story is instead having ovirt-ha-agent connecting multiple
> IQNs or multiple targets over your SAN. This is currently not supported for
> the hosted-engine storage domain.
> See:
> https://bugzilla.redhat.com/show_bug.cgi?id=1149579
>
>
> Hi Simone,
>
> i think my post to this list titled "iSCSI multipathing setup troubles"
> just recently is about the exact same problem, except i'm not talking about
> the hosted-engine storage domain. i would like to configure _any_ iSCSI
> storage domain the way you describe it in https://bugzilla.redhat.com/
> show_bug.cgi?id=1149579#c1. i would like to do so using the oVirt "iSCSI
> Multipathing" GUI after everything else is setup. i can't find a way to do
> this. is this now possible? i think the iSCSI Multipathing documentation
> could be improved by describing an example IP setup for this.
>
> thanks a lot
> matthias
>
>
>


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-05 Thread Yaniv Kaul
On Wed, Jul 5, 2017 at 11:54 AM, Vinícius Ferrão  wrote:

>
> On 5 Jul 2017, at 05:35, Yaniv Kaul  wrote:
>
>
>
> On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão  wrote:
>
>> Adding another question to what Matthias has said.
>>
>> I also noted that oVirt (and RHV) documentation does not mention the
>> supported block size on iSCSI domains.
>>
>> RHV: https://access.redhat.com/documentation/en-us/red_hat_
>> virtualization/4.0/html/administration_guide/chap-storage
>> oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/
>>
>> I’m interested on 4K blocks over iSCSI, but this isn’t really widely
>> supported. The question is: oVirt supports this? Or should we stay with the
>> default 512 bytes of block size?
>>
>
> It does not.
> Y.
>
>
> Discovered this the hard way: the system is able to detect it as a 4K
> LUN, but ovirt-hosted-engine-setup gets confused:
>
> [2] 36589cfc0071cbf2f2ef314a6212c   1600GiB
> FreeNAS iSCSI Disk
> status: free, paths: 4 active
>
> [3] 36589cfc0043589992bce09176478
>   200GiB  FreeNAS iSCSI Disk
> status: free, paths: 4 active
>
> [4] 36589cfc00992f7abf38c11295bb6
>   400GiB  FreeNAS iSCSI Disk
> status: free, paths: 4 active
>
> [2] is 4k
> [3] is 512bytes
> [4] is 1k (just to prove the point)
>
> On the system it’s appears to be OK:
>
> Disk /dev/mapper/36589cfc0071cbf2f2ef314a6212c: 214.7 GB,
> 214748364800 bytes, 52428800 sectors
> Units = sectors of 1 * 4096 = 4096 bytes
> Sector size (logical/physical): 4096 bytes / 16384 bytes
> I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
>
>
> Disk /dev/mapper/36589cfc0043589992bce09176478: 214.7 GB,
> 214748364800 bytes, 419430400 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 16384 bytes
> I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
>
> But whatever, just reporting back to the list. It’s a good idea to have a
> note about it in the documentation.
>

Indeed.
Can you file a bug or send a patch to upstream docs?
Y.


>
> V.
>
>
>
>>
>> Thanks,
>> V.
>>
>> On 4 Jul 2017, at 09:10, Matthias Leopold > c.at> wrote:
>>
>>
>>
>> Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:
>>
>> On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão  wrote:
>>Thanks, Konstantin.
>>Just to be clear enough: the first deployment would be made on
>>classic eth interfaces and later after the deployment of Hosted
>>Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
>>Another question: what about iSCSI Multipath on Self Hosted Engine?
>>I've looked through the net and only found this issue:
>>https://bugzilla.redhat.com/show_bug.cgi?id=1193961
>>Appears to be unsupported as today, but there's an workaround on the
>>comments. It's safe to deploy this way? Should I use NFS instead?
>> It's probably not the most tested path but once you have an engine you
>> should be able to create an iSCSI bond on your hosts from the engine.
>> Network configuration is persisted across host reboots and so the iSCSI
>> bond configuration.
>> A different story is instead having ovirt-ha-agent connecting multiple
>> IQNs or multiple targets over your SAN. This is currently not supported for
>> the hosted-engine storage domain.
>> See:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1149579
>>
>>
>> Hi Simone,
>>
>> i think my post to this list titled "iSCSI multipathing setup troubles"
>> just recently is about the exact same problem, except i'm not talking about
>> the hosted-engine storage domain. i would like to configure _any_ iSCSI
>> storage domain the way you describe it in https://bugzilla.redhat.com/sh
>> ow_bug.cgi?id=1149579#c1. i would like to do so using the oVirt "iSCSI
>> Multipathing" GUI after everything else is setup. i can't find a way to do
>> this. is this now possible? i think the iSCSI Multipathing documentation
>> could be improved by describing an example IP setup for this.
>>
>> thanks a lot
>> matthias
>>
>>
>>


Re: [ovirt-users] Removing iSCSI domain: host side part

2017-07-13 Thread Yaniv Kaul
On Thu, Jul 13, 2017 at 5:59 PM, Gianluca Cecchi 
wrote:

> On Thu, Jul 13, 2017 at 6:23 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> I have cleanly removed an iSCSI domain from oVirt. There is another one
>> (connecting to another storage array) that is the master domain.
>> But I see that oVirt hosts still maintain the iscsi session to the LUN.
>> So I want to clean from os point of view before removing the LUN itself
>> from storage.
>>
>> At the moment I still see the multipath lun on both hosts
>>
>> [root@ov301 network-scripts]# multipath -l
>> . . .
>> 364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00
>> size=4.0T features='0' hwhandler='0' wp=rw
>> `-+- policy='round-robin 0' prio=0 status=active
>>   |- 9:0:0:0  sde 8:64  active undef  running
>>   `- 10:0:0:0 sdf 8:80  active undef  running
>>
>> and
>> [root@ov301 network-scripts]# iscsiadm -m session
>> tcp: [1] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-7
>> 71816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
>> tcp: [2] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-7
>> 71816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
>> . . .
>>
>> Do I have to clean the multipath paths and multipath device and then
>> iSCSI logout, or is it sufficient to iSCSI logout and the multipath device
>> and its path will be cleanly removed from OS point of view?
>>
>> I would like not to have multipath device in stale condition.
>>
>> Thanks
>> Gianluca
>>
>
>
> I have not understood why, if I destroy a storage domain, still oVirt
> maintains its LVM structures
>

Destroy is an Engine command - it does not touch the storage at all (the
assumption is that you've somehow lost/deleted your storage domain and now
you want to get rid of it from the Engine side).


>
> Anyway, these were the step done at host side before removal of the LUN at
> storage array level
>

I assume removing the LUN + reboot would have been quicker.
Y.


>
> Pick up the VG of which the lun is still a PV for..
>
> vgchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b
> --> actually all lvs were already inactive
>
> vgremove 5ed04196-87f1-480e-9fee-9dd450a3b53b
> Do you really want to remove volume group 
> "5ed04196-87f1-480e-9fee-9dd450a3b53b"
> containing 22 logical volumes? [y/n]: y
>   Logical volume "metadata" successfully removed
>   Logical volume "outbox" successfully removed
>   Logical volume "xleases" successfully removed
>   Logical volume "leases" successfully removed
>   Logical volume "ids" successfully removed
>   Logical volume "inbox" successfully removed
>   Logical volume "master" successfully removed
>   Logical volume "bc141d0d-b648-409b-a862-9b6d950517a5" successfully
> removed
>   Logical volume "31255d83-ca67-4f47-a001-c734c498d176" successfully
> removed
>   Logical volume "607dbf59-7d4d-4fc3-ae5f-e8824bf82648" successfully
> removed
>   Logical volume "dfbf5787-36a4-4685-bf3a-43a55e9cd4a6" successfully
> removed
>   Logical volume "400ea884-3876-4a21-9ec6-b0b8ac706cee" successfully
> removed
>   Logical volume "1919f6e6-86cd-4a13-9a21-ce52b9f62e35" successfully
> removed
>   Logical volume "a3ea679b-95c0-475d-80c5-8dc4d86bd87f" successfully
> removed
>   Logical volume "32f433c8-a991-4cfc-9a0b-7f44422815b7" successfully
> removed
>   Logical volume "7f867f59-c977-47cf-b280-a2a0fef8b95b" successfully
> removed
>   Logical volume "6e2005f2-3ff5-42fa-867e-e7812c6726e4" successfully
> removed
>   Logical volume "42344cf4-8f9c-464d-ab0f-d62beb15d359" successfully
> removed
>   Logical volume "293e169e-53ed-4d60-b22a-65835f5b0d29" successfully
> removed
>   Logical volume "e86752c4-de73-4733-b561-2afb31bcc2d3" successfully
> removed
>   Logical volume "79350ec5-eea5-458b-a3ee-ba394d2cda27" successfully
> removed
>   Logical volume "77824fce-4f95-49e3-b732-f791151dd15c" successfully
> removed
>   Volume group "5ed04196-87f1-480e-9fee-9dd450a3b53b" successfully removed
>
> pvremove /dev/mapper/364817197b5dfd0e5538d959702249b1c
>
> multipath -f 364817197b5dfd0e5538d959702249b1c
>
> iscsiadm -m session -r 1 -u
> Logging out of session [sid: 1, target: iqn.2001-05.com.equallogic:4-
> 771816-e5d0dfb59-1c9b240297958d53-ovsd3910, portal: 10.10.100.9,3260]
> Logout of [sid: 1, target: 
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
> portal: 10.10.100.9,3260] successful.
>
> iscsiadm -m session -r 2 -u
> Logging out of session [sid: 2, target: iqn.2001-05.com.equallogic:4-
> 771816-e5d0dfb59-1c9b240297958d53-ovsd3910, portal: 10.10.100.9,3260]
> Logout of [sid: 2, target: 
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
> portal: 10.10.100.9,3260] successful.
>
> done.
>
> NOTE: on one node I missed the LVM cleanup before logging out of the iSCSI
> session.
> This made it impossible to reach a clean state, because the
> multipath device showed no paths but was still in use (by LVM),
> and the command
> multipath -f
> failed.
> Also, the vgs and lvs commands threw out many errors and 

Re: [ovirt-users] Fwd: ISCSI storage with multiple nics on same subnet disabled on host activation

2017-07-17 Thread Yaniv Kaul
On Mon, Jul 17, 2017 at 10:56 AM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> Hello, Can any one please help us with the problem described below?
>
> Nir, I'm including you since a quick search on the internet led me to
> think that you have worked on this part of the project. Please forgive me
> if I'm off topic.
>
> (I incorrectly used below the expression "patch" when I meant "configure".
> it's corrected now)
>

VDSM may indeed change the rp_filter setting. From the function that sets it[1]:

def setRpFilterIfNeeded(netIfaceName, hostname, loose_mode):
    """
    Set rp_filter to loose or strict mode if there's no session using the
    netIfaceName device and it's not the device used by the OS to reach the
    'hostname'.
    loose mode is needed to allow multiple iSCSI connections in a multiple
    NIC per subnet configuration. strict mode is needed to avoid the security
    breach where an untrusted VM can DoS the host by sending it packets with
    spoofed random sources.

    Arguments:
        netIfaceName: the device used by the iSCSI session
        target: iSCSI target object containing the portal hostname
        loose_mode: boolean
    """
I think it sets it to strict mode when disconnecting or removing an iSCSI
session.
Perhaps something in the check we are doing is incorrect? Do you have other
sessions open?
Y.

[1]
https://github.com/oVirt/vdsm/blob/321233bea649fb6d1e72baa1b1164c8c1bc852af/lib/vdsm/storage/iscsi.py#L556
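
The policy that docstring describes can be summarized in a few lines of plain
Python. This is a hypothetical simplification for illustration, NOT the actual
VDSM implementation (the function and parameter names are made up):

```python
# Kernel rp_filter values: 1 = strict, 2 = loose.
RP_FILTER_STRICT = 1
RP_FILTER_LOOSE = 2

def choose_rp_filter(other_session_on_iface, iface_reaches_portal, want_loose):
    """Decide the rp_filter mode for an interface used by iSCSI.

    The mode is only changed when no other iSCSI session uses the interface
    and it is not the device the OS uses to reach the target portal;
    otherwise the current kernel setting is left untouched (None).
    """
    if other_session_on_iface or iface_reaches_portal:
        return None  # leave the current kernel setting alone
    return RP_FILTER_LOOSE if want_loose else RP_FILTER_STRICT
```

Under this reading, loose mode is applied while connecting through a shared
subnet, and strict mode is restored once the last session on the interface
goes away.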


> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> --
> *De: *"Nelson Lameiras" 
> *À: *"ovirt users" 
> *Envoyé: *Mercredi 7 Juin 2017 14:59:48
> *Objet: *[ovirt-users] ISCSI storage with multiple nics on same subnet
> disabled on host activation
>
> Hello,
>
> In our oVirt hosts, we are using DELL equallogic SAN with each server
> connecting to SAN via 2 physical interfaces. Since both interfaces share
> the same network (Equalogic limitation) we must configure sysctl to to
> allow iSCSI multipath with multiple NICs in the same subnet :
>
> 
> 
>
> net.ipv4.conf.p2p1.arp_ignore=1
> net.ipv4.conf.p2p1.arp_announce=2
> net.ipv4.conf.p2p1.rp_filter=2
>
> net.ipv4.conf.p2p2.arp_ignore=1
> net.ipv4.conf.p2p2.arp_announce=2
> net.ipv4.conf.p2p2.rp_filter=2
>
> 
> 
>
> This works great in most setups, but for a strange reason, on some of our
> setups, the sysctl configuration is updated by VDSM when activating a host
> and the second interface stops working immediately:
> 
> 
> vdsm.log
>
> 2017-06-07 11:51:51,063+0200 INFO  (jsonrpc/5) [storage.ISCSI] Setting strict 
> mode rp_filter for device 'p2p2'. (iscsi:602)
> 2017-06-07 11:51:51,064+0200 ERROR (jsonrpc/5) [storage.HSM] Could not 
> connect to storageServer (hsm:2392)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2389, in connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 433, in connect
> iscsi.addIscsiNode(self._iface, self._target, self._cred)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 232, in 
> addIscsiNode
> iscsiadm.node_login(iface.name, target.address, target.iqn)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 337, 
> in node_login
> raise IscsiNodeError(rc, out, err)
>
>
>
> 
> 
>
> "strict mode" is enforced for second interface, and it no longuer works...
> Which means - at least - that there is no redundancy in case of hardware
> faillure and this is not acceptable for our production needs.
>
> What is really strange is that we have another "twin" site in another
> geographic region with similar hardware configuration and the same oVirt
> installation, and this problem does not happen.
> Can this be really random?
> What can be the root cause of this behaviour? How can I correct it?
>
> our setup:
> oVirt hosted engine: CentOS 7.3, oVirt 4.1.2
> 3 physical oVirt nodes: CentOS 7.3, oVirt 4.1.2
> SAN DELL Equalogic
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer

Re: [ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-21 Thread Yaniv Kaul
On Fri, Jul 21, 2017 at 6:07 AM, Arman Khalatyan  wrote:

> Yes, thanks for mentioning puppet, we have foreman for the bare metal
> systems.
> I was looking something like preboot hook script, to mount the /dev/sda
> and copy some stuff there.
> Is it possible to do that with cloud-init/sysprep?
>

It is.

However, I'd like to remind you that we also have some scale-up features
you might want to consider - you can hot-add CPU and memory to VMs, which
in some workloads (but not all) can be helpful and easier.
(Hot-removing though is a bigger challenge.)
Y.
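
Copying config files into the VM before first boot is exactly what
cloud-init's write_files directive does. A minimal sketch in plain Python
(no oVirt dependency) of building such a user-data document; the target file
path and content are made-up examples, and the resulting string is what you
would later pass to the VM as its cloud-init custom script:

```python
# Build a minimal cloud-init user-data ("#cloud-config") document that
# writes one config file into the guest on first boot.

def build_user_data(path, content):
    """Return a cloud-config document with a single write_files entry."""
    # Indent the file body so it sits inside the YAML block scalar.
    indented = "\n".join("      " + line for line in content.splitlines())
    return (
        "#cloud-config\n"
        "write_files:\n"
        "  - path: " + path + "\n"
        "    permissions: '0644'\n"
        "    content: |\n"
        + indented + "\n"
    )

# Illustrative path/content only:
user_data = build_user_data("/etc/httpd/conf.d/worker.conf", "MaxClients 64")
print(user_data)
```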

>
> On Thu, Jul 20, 2017 at 1:32 PM, Karli Sjöberg 
> wrote:
>
>>
>>
>> Den 20 juli 2017 13:29 skrev Arman Khalatyan :
>>
>> Hi,
>> Can someone share their experience with dynamically creating and removing
>> VMs based on the load?
>> Currently I am just creating a clone of the apache worker with the Python
>> SDK; is there a way to copy some config files to the VM before starting
>> it?
>>
>>
>> E.g. Puppet could easily swing that sort of job. If you also deploy
>> Foreman, it could automate the entire procedure. Just a suggestion.
>>
>> /K
>>
>>
>> Thanks,
>> Arman.
>>


Re: [ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-21 Thread Yaniv Kaul
On Fri, Jul 21, 2017 at 12:06 PM, Arman Khalatyan  wrote:

> Thanks, the downscaling is important for me.
>

It really depends on the guest OS cooperation. While off-lining a CPU is
relatively easy, hot-unplugging memory is a bigger challenge for the OS.

> I was testing something like:
> 1) clone from an actual VM (super slow, even if it is a 20 GB OS; needs
> more investigation, NFS is the bottleneck),
> 2) start it with DHCP,
> 3) somehow find the IP,
> 4) sync parameters between the running VM and the new VM.
>
> It looks like everything might be possible with the Python SDK...
>
> Are there some examples or tutorials with cloud-init scripts?
>

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/start_vm_with_cloud_init.py

But you could also use Ansible, might be even easier:
http://docs.ansible.com/ansible/latest/ovirt_vms_module.html#examples

Y.
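
For reference, a hedged sketch of what the SDK call in the linked example
boils down to. The engine URL, credentials, VM name, and function name are
placeholders; the snippet requires the ovirt-engine-sdk-python package and
a reachable engine, so the imports are deferred into the function:

```python
def start_vm_with_custom_script(url, user, password, vm_name, script):
    """Start an existing VM once with a cloud-init custom script.

    Placeholder parameters; needs ovirt-engine-sdk-python
    ("pip install ovirt-engine-sdk-python") and a live engine to run.
    """
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(url=url, username=user, password=password,
                                insecure=True)  # insecure only for testing
    try:
        vms_service = connection.system_service().vms_service()
        vm = vms_service.list(search='name=%s' % vm_name)[0]
        # use_cloud_init=True applies the initialization on this boot only.
        vms_service.vm_service(vm.id).start(
            use_cloud_init=True,
            vm=types.Vm(
                initialization=types.Initialization(custom_script=script),
            ),
        )
    finally:
        connection.close()
```

The `script` argument would be a "#cloud-config" user-data document, e.g.
one carrying a write_files section with the config files to inject.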


>
> Am 21.07.2017 3:58 nachm. schrieb "Yaniv Kaul" :
>
>>
>>
>> On Fri, Jul 21, 2017 at 6:07 AM, Arman Khalatyan 
>> wrote:
>>
>>> Yes, thanks for mentioning puppet, we have foreman for the bare metal
>>> systems.
>>> I was looking something like preboot hook script, to mount the /dev/sda
>>> and copy some stuff there.
>>> Is it possible to do that with cloud-init/sysprep?
>>>
>>
>> It is.
>>
>> However, I'd like to remind you that we also have some scale-up features
>> you might want to consider - you can hot-add CPU and memory to VMs, which
>> in some workloads (but not all) can be helpful and easier.
>> (Hot-removing though is a bigger challenge.)
>> Y.
>>
>>>
>>> On Thu, Jul 20, 2017 at 1:32 PM, Karli Sjöberg 
>>> wrote:
>>>
>>>>
>>>>
>>>> Den 20 juli 2017 13:29 skrev Arman Khalatyan :
>>>>
>>>> Hi,
>>>> Can some one share an experience with dynamic creating and removing VMs
>>>> based on the load?
>>>> Currently I am just creating with the python SDK a clone of the apache
>>>> worker, are there way to copy some config files to the VM before starting
>>>> it ?
>>>>
>>>>
>>>> E.g. Puppet could easily swing that sort of job. If you deploy also
>>>> Foreman, it could automate the entire procedure. Just a suggestion
>>>>
>>>> /K
>>>>
>>>>
>>>> Thanks,
>>>> Arman.
>>>>


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-21 Thread Yaniv Kaul
On Wed, Jul 19, 2017 at 9:13 PM, Vinícius Ferrão  wrote:

> Hello,
>
> I skipped this message entirely yesterday. So this is by design? Because
> the best practices for iSCSI MPIO, as far as I know, recommend two
> completely separate paths. If this can’t be achieved with oVirt, what’s
> the point of running MPIO?
>

With regular storage it is quite easy to achieve using 'iSCSI bonding'.
I think the Dell storage is a bit different and requires some more
investigation - or experience with it.
 Y.


> May we ask for a bug fix or a feature redesign on this?
>
> MPIO is part of my datacenter, and it was originally build for running
> XenServer, but I’m considering the move to oVirt. MPIO isn’t working right
> and this can be a great no-go for me...
>
> I’m willing to wait and hold my DC project if this can be fixed.
>
> Any answer from the redhat folks?
>
> Thanks,
> V.
>
> > On 18 Jul 2017, at 11:09, Uwe Laverenz  wrote:
> >
> > Hi,
> >
> >
> > Am 17.07.2017 um 14:11 schrieb Devin Acosta:
> >
> >> I am still troubleshooting the issue, I haven’t found any resolution to
> my issue at this point yet. I need to figure out by this Friday otherwise I
> need to look at Xen or another solution. iSCSI and oVIRT seems problematic.
> >
> > The configuration of iSCSI-Multipathing via OVirt didn't work for me
> either. IIRC the underlying problem in my case was that I use totally
> isolated networks for each path.
> >
> > Workaround: to make round robin work you have to enable it by editing
> "/etc/multipath.conf". Just add the 3 lines for the round robin setting
> (see comment in the file) and additionally add the "# VDSM PRIVATE" comment
> to keep vdsmd from overwriting your settings.
> >
> > My multipath.conf:
> >
> >
> >> # VDSM REVISION 1.3
> >> # VDSM PRIVATE
> >> defaults {
> >>polling_interval5
> >>no_path_retry   fail
> >>user_friendly_names no
> >>flush_on_last_del   yes
> >>fast_io_fail_tmo5
> >>dev_loss_tmo30
> >>max_fds 4096
> >># 3 lines added manually for multipathing:
> >>path_selector   "round-robin 0"
> >>path_grouping_policymultibus
> >>failbackimmediate
> >> }
> >> # Remove devices entries when overrides section is available.
> >> devices {
> >>device {
> >># These settings overrides built-in devices settings. It does
> not apply
> >># to devices without built-in settings (these use the settings
> in the
> >># "defaults" section), or to devices defined in the "devices"
> section.
> >># Note: This is not available yet on Fedora 21. For more info see
> >># https://bugzilla.redhat.com/1253799
> >>all_devsyes
> >>no_path_retry   fail
> >>}
> >> }
> >
> >
> >
> > To enable the settings:
> >
> >  systemctl restart multipathd
> >
> > See if it works:
> >
> >  multipath -ll
> >
> >
> > HTH,
> > Uwe


Re: [ovirt-users] Ovirt 4.1 additional hosted-engine deploy setup on another host not working

2017-07-24 Thread Yaniv Kaul
On Mon, Jul 24, 2017 at 1:49 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Thanks Kasturi,
>
> Would you please update note or prerequisite in below link ?
> http://www.ovirt.org/documentation/self-hosted/chap-Installing_Additional_
> Hosts_to_a_Self-Hosted_Environment/
>

We'd be happy if you could send a patch to fix it.
Y.


>
>
>
> On Mon, Jul 24, 2017 at 4:09 PM, Kasturi Narra  wrote:
>
>> Hi,
>>
>> This option appears in the host tab only when HostedEngine vm and
>> hosted_storage is present in the UI. Before adding another host make sure
>> that you add your first data domain to the UI which will automatically
>> import HostedEngine vm and hosted_storage. Once these two are imported you
>> will be able to see 'hosted-engine' sub tab in the 'Add host' / edit host
>> dialog box.
>>
>> Thanks
>> kasturi
>>
>> On Mon, Jul 24, 2017 at 4:05 PM, TranceWorldLogic . <
>> tranceworldlo...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I want to add another host to hosted-engine.
>>> Hence I tried to follow steps as shown in below link:
>>>
>>> http://www.ovirt.org/documentation/self-hosted/chap-Installi
>>> ng_Additional_Hosts_to_a_Self-Hosted_Environment/
>>> Topic :
>>>
>>> *Adding an Additional Self-Hosted Engine Host*
>>> But I not found any additional sub-tab call hosted-engine.
>>> Even adding host I tired to edit host but still not observe.
>>>
>>> Do I need to run some command hosted-engine --deploy to add another host
>>> ?
>>> Or is it handle by GUI automatically ?
>>>
>>> Thanks,
>>> ~Rohit
>>>


Re: [ovirt-users] Ovirt node ready for production env?

2017-07-25 Thread Yaniv Kaul
On Thu, Jul 20, 2017 at 7:42 PM, Vinícius Ferrão  wrote:

> Hello Lionel,
>
> Production ready? It's definitely yes. Red Hat even sells RHV-H, which is
> the same thing as oVirt Node. But keep in mind one thing: it's an
> appliance, so modifications to the appliance aren't really supported. As far
> as I know oVirt Node is based on imgbase, and updates/security fixes are done
> through yum. But when an update is made everything is rewritten, so you
> will lose your modifications if you install additional packages on oVirt
> Node.
>
> The host is stateless, so you don't really need to back it up; the core is
> running on the hosted engine.
>
> About the other questions, I can't add anything since I'm new to oVirt
> too. Perhaps someone could complete my answer.
>

The answers above are inaccurate wrt recent oVirt node, which:
1. does allow you to install additional packages (via 'yum')
2. and does save them between upgrades.

Y.


>
> V.
>
> Sent from my iPhone
>
> > On 20 Jul 2017, at 03:59, Lionel Caignec  wrote:
> >
> > Hi,
> >
> > I did not test it myself, so I prefer asking before using it (
> https://www.ovirt.org/node/).
> > Can oVirt Node be used in a production environment?
> > Is it possible to add some software on the host (e.g. backup tools,
> ossec, ...)?
> > How do security updates work? Are they managed by oVirt, or can I plug
> oVirt Node into Spacewalk/Katello?
> >
> >
> > Sorry for my "noobs question"
> >
> > Regards
> > --
> > Lionel


Re: [ovirt-users] oVirt VM backups

2017-07-27 Thread Yaniv Kaul
On Thu, Jul 27, 2017 at 1:14 PM, Abi Askushi 
wrote:

> Hi All,
>
> For VM backups I am using a python script to automate the snapshot ->
> clone -> export -> delete steps (although with some issues when trying to
> back up a Windows 10 VM).
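A dry-run skeleton of that snapshot -> clone -> export -> delete loop is sketched below. The `do_step` callable stands in for the real oVirt REST/SDK calls (which are not shown), so only the ordering and the cleanup-on-failure logic are illustrated; all names are hypothetical, not the poster's actual script:

```python
# Dry-run sketch of the backup loop; do_step stands in for the real
# oVirt API calls so the ordering and cleanup logic are visible.
import datetime

def backup_vm(vm_name, export_domain, do_step=print):
    snap = f"{vm_name}-backup-{datetime.date.today():%Y%m%d}"
    clone = f"{snap}-clone"
    done = []
    try:
        do_step(f"create snapshot {snap} of {vm_name}")
        done.append("snapshot")
        do_step(f"clone snapshot {snap} to VM {clone}")
        done.append("clone")
        do_step(f"export {clone} to storage domain {export_domain}")
        done.append("export")
    finally:
        # Clean up the temporary objects even if one of the steps failed.
        if "clone" in done:
            do_step(f"delete temporary VM {clone}")
        if "snapshot" in done:
            do_step(f"delete snapshot {snap}")
    return done

log = []
steps = backup_vm("win10", "export1", do_step=log.append)
```

The `finally` block is the part worth keeping in any real implementation: leftover snapshots and clones are the usual failure mode of hand-rolled backup scripts.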
>
> I was wondering if there is any plan to integrate VM backups into the
> GUI, or what other recommended ways exist out there.
>

See https://github.com/wefixit-AT/oVirtBackup as an option.

Now that I think of this, it's probably just as easy to write as an Ansible
role and add it to https://github.com/machacekondra/ovirt-ansible-roles

If you are using Gluster, I believe
http://www.ovirt.org/develop/release-management/features/gluster/gluster-dr/
is also relevant.
Y.





>
> Thanx,
> Abi
>


Re: [ovirt-users] oVirt VM backups

2017-07-27 Thread Yaniv Kaul
On Thu, Jul 27, 2017 at 4:28 PM, Gianluca Cecchi 
wrote:

> On Thu, Jul 27, 2017 at 3:09 PM, Yaniv Kaul  wrote:
>
>>
>>
>> Now that I think of this, it's probably just as easy to write as an
>> Ansible role and add it to https://github.com/machacek
>> ondra/ovirt-ansible-roles
>>
>
>
> Hi Yaniv,
> as the user asked for a GUI too, I think that using Ansible for oVirt
> backup could be improved by an available GUI.
>

Agreed.


> I have just seen that there is a sort of "survey" to see what is the
> feeling/interest in having an opensource alternative for commercial Red Hat
> Tower here:
>
> https://www.ansible.com/open-tower
>
> I just joined and who is interested could do the same.
> Afaik the only option at the moment is the limited-feature Semaphore
> (never tested by me, while I have some experience in using the commercial
> Tower):
> https://github.com/ansible-semaphore/semaphore
>
> Share your comments,
>

Another option is to use ManageIQ, which has support for Ansible automation
(and oVirt of course).
See the release announcement[1].

Y.

[1] http://manageiq.org/blog/2017/05/manageiq-fine-ga-announcement/


> Gianluca
>
>


Re: [ovirt-users] supervdsmd IOError to /dev/stdout

2017-07-30 Thread Yaniv Kaul
On Fri, Jul 28, 2017 at 8:37 PM, Richard Chan 
wrote:

> After an upgrade to 4.0 I have a single host that cannot start supervdsmd
> because of IOError on /dev/stdout. All other hosts upgraded correctly.
>

Upgrade from which version? 3.6? Did you stay on 4.0 or continued to 4.1?


>
> In the systemd unit I have to hack StandardOutput=null.
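If that workaround is kept, a systemd drop-in is the usual way to keep it out of the packaged unit file so it survives vdsm updates; this is the standard systemd mechanism, not an oVirt-specific fix, and the drop-in file name is arbitrary:

```ini
# /etc/systemd/system/supervdsmd.service.d/stdout.conf
# Apply with: systemctl daemon-reload && systemctl restart supervdsmd
[Service]
StandardOutput=null
```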
>
> Anything I have overlooked? The hosts are all identical and it is just
> this one
> that has this weird behaviour.
>

Any logs that you can share?
 Y.


>
> --
> Richard Chan
>
>


Re: [ovirt-users] Pass Discard guest OS support?

2017-08-01 Thread Yaniv Kaul
On Tue, Aug 1, 2017 at 6:05 PM, Doug Ingham  wrote:

> Hi All,
>  Just today I noticed that guests can now pass discards to the underlying
> shared filesystem.
>

Or block storage. Most likely it works better on shared block storage.


>
> http://www.ovirt.org/develop/release-management/features/
> storage/pass-discard-from-guest-to-underlying-storage/
>
> Is this supported by all of the main Linux guest OS's running the virt
> agent?
>

It has nothing to do with the virt agent.
It is supported by all mainstream, modern Linux guest OSes. It depends mainly
on your mount options or your use of the 'fstrim' or 'blkdiscard' utilities.
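Inside the guest this is easy to verify without any agent: a disk that accepts discard requests exposes a non-zero `queue/discard_max_bytes` in sysfs. A small sketch (standard Linux kernel interface, not oVirt-specific; the sysfs root is a parameter only so the check is testable):

```python
# A guest disk supports discard/TRIM (fstrim, 'mount -o discard') when
# /sys/block/<dev>/queue/discard_max_bytes is greater than zero.
import os

def supports_discard(device, sysfs="/sys/block"):
    path = os.path.join(sysfs, device, "queue", "discard_max_bytes")
    try:
        with open(path) as f:
            return int(f.read().strip()) > 0
    except (OSError, ValueError):
        return False

# e.g. supports_discard("vda") inside a guest with a discard-enabled disk
```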


> And what about Windows guests? (specifically, 2012 & 2016)
>

Yes to both.
Y.


>
> Cheers,
> --
> Doug
>


Re: [ovirt-users] oVirt 3.6.x and Centos 6.9

2017-08-02 Thread Yaniv Kaul
On Wed, Aug 2, 2017 at 12:09 PM, Neil  wrote:

> Hi guys,
>
> I upgraded to the latest ovirt-engine-3.6.7.5-1.el6.noarch available in
> the ovirt 3.6 repo, however I seem to be encountering a known bug (
> https://bugzilla.redhat.com/show_bug.cgi?id=1387949)
>

This specific bug seems to be fixed in 4.1 (and was backported to 4.0) -
are you sure it's fixed in 3.6.x?


> which looks to be fixed in ovirt 3.6.9.2 but I can't seem to find out how
> to install this.
>
> I was hoping it was via http://resources.ovirt.org/pub/ovirt-3.6-snapshot
> but this link is dead.
>
> Is anyone using ovirt 3.6.9 and how does one obtain it?
>

If you are on 3.6.7, go ahead and upgrade.
Y.


> The issue I'm facing is that after trying to update my cluster version to
> 3.6, my hosts weren't compatible, as it says they are only compatible with
> versions 3.3, 3.4 and 3.5. I then upgraded 1 host to CentOS 6.9 and
> installed and updated the latest vdsm from the 3.6 repo, but this still
> didn't allow me to change my cluster version. I then rolled back the
> cluster version to 3.5.
>
> At the moment because I've upgraded 1 host, I can't select this host as
> SPM and I'm wondering if I can upgrade my remaining hosts, or will this
> prevent any hosts from being my SPM? I'm seeing the following error
> "WARNING Unrecognized protocol: 'SUBSCRI'" on my upgraded host.
>
> I'm wanting to upgrade to the latest 3.6 as well as upgrade all my hosts,
> so that I can start the ovirt 4 upgrade next.
>
> Please could I have some guidance on this?
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>


Re: [ovirt-users] Ovirt 4.1 gluster with libgfapi

2017-08-02 Thread Yaniv Kaul
On Wed, Aug 2, 2017 at 12:52 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> I was going through the Gluster documentation; it talks about libgfapi,
> which gives better performance.
> And I also went through many bug descriptions, comments and mail threads
> in the ovirt group.
>
> But I still have not understood how to enable libgfapi.
> Is it supported in 4.1? I am confused.
>

Not yet...


>
> Please help me to understand whether it supported or not.
> If supported, then how can I enable it ? or use it ?
>

It will reach 4.1.x at some point, it's still in the works. It'll be
enabled via an engine-config parameter.
See https://gerrit.ovirt.org/#/c/80061/ for specific patch (still in the
works).
Y.
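For reference, once that patch lands the toggle is expected to be a single engine-config key. The key name `LibgfApiSupported` comes from the linked patch, and both the name and the version scoping are assumptions until the feature is actually released:

```python
# Sketch of the expected enablement command, run on the engine machine.
# engine-config itself is real; the LibgfApiSupported key is taken from the
# in-progress patch and may change. dry_run=True returns the argv instead
# of executing anything.
import subprocess

def enable_libgfapi(cluster_version="4.1", dry_run=True):
    cmd = ["engine-config", "-s", "LibgfApiSupported=true",
           "--cver=" + cluster_version]
    if not dry_run:
        subprocess.check_call(cmd)  # then restart ovirt-engine to apply
    return cmd

print(enable_libgfapi())
```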


>
> Thanks,
> ~Rohit
>


Re: [ovirt-users] oVirt 3.6.x and Centos 6.9

2017-08-02 Thread Yaniv Kaul
On Wed, Aug 2, 2017 at 1:06 PM, Neil  wrote:

> Hi Yaniv,
>
> Thanks for the assistance.
>
> On Wed, Aug 2, 2017 at 12:01 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Wed, Aug 2, 2017 at 12:09 PM, Neil  wrote:
>>
>>> Hi guys,
>>>
>>> I upgraded to the latest ovirt-engine-3.6.7.5-1.el6.noarch available in
>>> the ovirt 3.6 repo, however I seem to be encountering a known bug (
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1387949)
>>>
>>
>> This specific bug seems to be fixed in 4.1 (and was backported to 4.0) -
>> are you sure it's fixed in 3.6.x?
>>
>>
>
> I only "suspect" it is fixed because the Red Hat bug report mentions it
> was fixed.
> Is this what is causing the failure to negotiate SPM? That is what I'm
> trying to resolve.
>
>
>
>> which looks to be fixed in ovirt 3.6.9.2 but I can't seem to find out how
>>> to install this.
>>>
>>> I was hoping it was via http://resources.ovirt.org
>>> /pub/ovirt-3.6-snapshot but this link is dead.
>>>
>>> Is anyone using ovirt 3.6.9 and how does one obtain it?
>>>
>>
>> If you are on 3.6.7, go ahead and upgrade.
>>
>
> Any ideas how? I don't see a repo available for it and can't find packages
> even when looking through http://resources.ovirt.org manually?
>

See the upgrade guide @
http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/
(don't forget to enable the channels - you can do it first by installing
http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm )
Y.



>
> Y.
>>
>>
>>> The issues I'm facing is, after trying to update my cluster version to
>>> 3.6, my hosts weren't compatible, as it says they only compatible with
>>> version 3.3, 3.4 and 3.5 etc. I then upgraded 1 hosts to Centos 6.9 and
>>> installed and updated the latest vdsm from the 3.6 repo, but this still
>>> didn't allow me to change my cluster version. I then rolled back the
>>> cluster version to 3.5.
>>>
>>> At the moment because I've upgraded 1 host, I can't select this host as
>>> SPM and I'm wondering if I can upgrade my remaining hosts, or will this
>>> prevent any hosts from being my SPM? I'm seeing the following error
>>> "WARNING Unrecognized protocol: 'SUBSCRI'" on my upgraded host.
>>>
>>> I'm wanting to upgrade to the latest 3.6 as well as upgrade all my
>>> hosts, so that I can start the ovirt 4 upgrade next.
>>>
>>> Please could I have some guidance on this?
>>>
>>> Thank you.
>>>
>>> Regards.
>>>
>>> Neil Wilson.
>>>


Re: [ovirt-users] oVirt 3.6.x and Centos 6.9

2017-08-02 Thread Yaniv Kaul
On Wed, Aug 2, 2017 at 2:39 PM, Neil  wrote:

> Thanks Yaniv,
>
>
> On Wed, Aug 2, 2017 at 12:12 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Wed, Aug 2, 2017 at 1:06 PM, Neil  wrote:
>>
>>> Hi Yaniv,
>>>
>>> Thanks for the assistance.
>>>
>>> On Wed, Aug 2, 2017 at 12:01 PM, Yaniv Kaul  wrote:
>>>
>>>>
>>>>
>>>> On Wed, Aug 2, 2017 at 12:09 PM, Neil  wrote:
>>>>
>>>>> Hi guys,
>>>>>
>>>>> I upgraded to the latest ovirt-engine-3.6.7.5-1.el6.noarch available
>>>>> in the ovirt 3.6 repo, however I seem to be encountering a known bug (
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1387949)
>>>>>
>>>>
>>>> This specific bug seems to be fixed in 4.1 (and was backported to 4.0)
>>>> - are you sure it's fixed in 3.6.x?
>>>>
>>>>
>>>
>>> I only "suspect" it fixed because the Redhat bug report mentions it was
>>> fixed.
>>> Is this causing the failure to negotiate SPM, as this is what I'm trying
>>> to resolve?
>>>
>>>
>>>
>>>> which looks to be fixed in ovirt 3.6.9.2 but I can't seem to find out
>>>>> how to install this.
>>>>>
>>>>> I was hoping it was via http://resources.ovirt.org
>>>>> /pub/ovirt-3.6-snapshot but this link is dead.
>>>>>
>>>>> Is anyone using ovirt 3.6.9 and how does one obtain it?
>>>>>
>>>>
>>>> If you are on 3.6.7, go ahead and upgrade.
>>>>
>>>
>>> Any ideas how? I don't see a repo available for it and can't find
>>> packages even when looking through http://resources.ovirt.org manually?
>>>
>>
>> See the upgrade guide @ http://www.ovirt.org/documen
>> tation/upgrade-guide/upgrade-guide/
>>
>
> The issue is that there isn't a repo that contains 3.6.9 by the looks of
> things, I'm running 3.6.7 and running engine-upgrade-check says there are
> no new packages. 3.6.9 doesn't seem to exist in the 3.6 repo?
>

Indeed, which is why I've suggested you go ahead and upgrade to 4.

>
>
> (don't forget to enable the channels - you can do it first by installing
>> http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm )
>> Y.
>>
>
> I'd like to upgrade to 4 asap, but this involves installing a new Centos 7
> machine to migrate my engine currently running on Centos 6.9, which I can't
> do just yet.
>

Correct, you'll need CentOS 7. I've heard 7.4 is in the oven and will be
ready in the coming weeks (?).
Y.


>
>
>
>
>
>
>>
>>
>>>
>>> Y.
>>>>
>>>>
>>>>> The issues I'm facing is, after trying to update my cluster version to
>>>>> 3.6, my hosts weren't compatible, as it says they only compatible with
>>>>> version 3.3, 3.4 and 3.5 etc. I then upgraded 1 hosts to Centos 6.9 and
>>>>> installed and updated the latest vdsm from the 3.6 repo, but this still
>>>>> didn't allow me to change my cluster version. I then rolled back the
>>>>> cluster version to 3.5.
>>>>>
>>>>> At the moment because I've upgraded 1 host, I can't select this host
>>>>> as SPM and I'm wondering if I can upgrade my remaining hosts, or will this
>>>>> prevent any hosts from being my SPM? I'm seeing the following error
>>>>> "WARNING Unrecognized protocol: 'SUBSCRI'" on my upgraded host.
>>>>>
>>>>> I'm wanting to upgrade to the latest 3.6 as well as upgrade all my
>>>>> hosts, so that I can start the ovirt 4 upgrade next.
>>>>>
>>>>> Please could I have some guidance on this?
>>>>>
>>>>> Thank you.
>>>>>
>>>>> Regards.
>>>>>
>>>>> Neil Wilson.
>>>>>


Re: [ovirt-users] Ovirt 4.1 gluster with libgfapi

2017-08-02 Thread Yaniv Kaul
On Aug 3, 2017 9:04 AM, "TranceWorldLogic ." 
wrote:

Thanks Denis,

Can anyone help me to know when 4.1.5 will be release ? (release date).


We strive to release minor versions on a monthly cadence.
Y.




