Re: [ovirt-users] Re: Re: ovirt engine HA

2018-04-10 Thread FERNANDO FREDIANI
Hi

Alex K, thanks. The previous answers were not clear to me, which is why I
clarified the question of whether GUI meant Cockpit or the Web Interface;
that was then well answered by Martin. Therefore, thanks for that.

Best regards
Fernando

2018-04-09 13:35 GMT-03:00 Martin Sivak <msi...@redhat.com>:

> > You mentioned that it can be done via the web GUI. Do you mean by cockpit
> > or by oVirt Engine Web Interface itself ?
>
> Only via the oVirt web admin interface. You first add a standard storage
> domain, wait for a VM that represents the engine to appear (name:
> HostedEngine) and then you can add an additional host with hosted engine
> bits directly from the webadmin UI (HostedEngine side tab of Add new host
> dialog, select Deploy).
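>
> Once the additional host is deployed, a quick sanity check from a shell on
> any hosted-engine host (assuming the ovirt-hosted-engine-ha services are
> running) is:
>
>   # hosted-engine --vm-status
>
> All hosted-engine hosts should then be listed with a score, and one of
> them should report the engine VM as up.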
>
> Best regards
>
> Martin Sivak
>
> On Mon, Apr 9, 2018 at 6:21 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Hello Simone
>>
>> The doubt is: once the hosted engine is deployed to one of the hosts, is
>> the process to deploy it on the second one exactly the same, or does it
>> have any particularity ? You mentioned that it can be done via the web
>> GUI. Do you mean Cockpit, or the oVirt Engine Web Interface itself ?
>>
>> Thanks
>> Fernando
>>
>> 2018-04-09 7:32 GMT-03:00 Simone Tiraboschi <stira...@redhat.com>:
>>
>>>
>>>
>>> On Sun, Apr 8, 2018 at 3:53 PM, <dhy...@sina.com> wrote:
>>>
>>>> Sorry, I do not know how to describe my question.
>>>>  I want to add a hosted-engine host; I used the same NFS path as the
>>>> first hosted-engine, but got an error.
>>>>   --== STORAGE CONFIGURATION ==--
>>>>
>>>>   Please specify the storage you would like to use (glusterfs,
>>>> iscsi, fc, nfs3, nfs4)[nfs3]:
>>>>   Please specify the full shared storage connection path to use
>>>> (example: host:/path): 192.168.122.218:/exports/hosted-engine-test1
>>>> [ ERROR ] The selected device already contains a storage domain.
>>>> [ ERROR ] Setup of additional hosts using this software is not allowed
>>>> anymore. Please use the engine web interface to deploy any additional 
>>>> hosts.
>>>>
>>>
>>> Since 4.0 you should add additional hosted-engine hosts directly from
>>> the webadmin UI.
>>>
>>>
>>>> [ ERROR ] Failed to execute stage 'Environment customization': Setup of
>>>> additional hosts using this software is not allowed anymore. Please use the
>>>> engine web interface to deploy any additional hosts.
>>>> [ INFO  ] Stage: Clean up
>>>> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180408214554.conf'
>>>> [ INFO  ] Stage: Pre-termination
>>>> [ INFO  ] Stage: Termination
>>>> [ ERROR ] Hosted Engine deployment failed
>>>>   Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180408214515-4vofq6.log
>>>>
>>>>
>>>>
>>>> - Original Message -
>>>> From: Alex K <rightkickt...@gmail.com>
>>>> To: dhy...@sina.com
>>>> Cc: FERNANDO FREDIANI <fernando.fredi...@upx.com>, users <users@ovirt.org>
>>>> Subject: Re: [ovirt-users] Re: ovirt engine HA
>>>> Date: 2018-04-08 19:45
>>>>
>>>> Are you a troll?
>>>>
>>>>
>>>> On Sun, Apr 8, 2018, 12:15 <dhy...@sina.com> wrote:
>>>>
>>>>
>>>> Hi, I have two nodes, and deployed hosted-engine on each with
>>>> #hosted-engine --deploy.
>>>>  I find the two hosted-engines I deployed are independent. How do I make
>>>> my hosted-engine HA?
>>>>
>>>> - Original Message -
>>>> From: Alex K <rightkickt...@gmail.com>
>>>> To: FERNANDO FREDIANI <fernando.fredi...@upx.com>
>>>> Cc: users <users@ovirt.org>
>>>> Subject: Re: [ovirt-users] ovirt engine HA
>>>> Date: 2018-04-04 01:40
>>>>
>>>> In case you need HA for the engine you need to deploy it to other hosts
>>>> also through the GUI.
>>>>
>>>>
>>>> On Tue, Apr 3, 2018 at 4:47 PM, FERNANDO FREDIANI <
>>>> fernando.fredi...@upx.com> wrote:
>>>>
>>>> Is it enough to deploy the Self-Hosted engine in just one Host of the
>>>> cluster or is it necessary to repeat the process in each of the nodes
>>>> that must be able to run it ?

Re: [ovirt-users] Re: Re: ovirt engine HA

2018-04-09 Thread FERNANDO FREDIANI
Hello Simone

The doubt is: once the hosted engine is deployed to one of the hosts, is the
process to deploy it on the second one exactly the same, or does it have any
particularity ? You mentioned that it can be done via the web GUI. Do you
mean Cockpit, or the oVirt Engine Web Interface itself ?

Thanks
Fernando

2018-04-09 7:32 GMT-03:00 Simone Tiraboschi <stira...@redhat.com>:

>
>
> On Sun, Apr 8, 2018 at 3:53 PM, <dhy...@sina.com> wrote:
>
>> Sorry, I do not know how to describe my question.
>>  I want to add a hosted-engine host; I used the same NFS path as the
>> first hosted-engine, but got an error.
>>   --== STORAGE CONFIGURATION ==--
>>
>>   Please specify the storage you would like to use (glusterfs,
>> iscsi, fc, nfs3, nfs4)[nfs3]:
>>   Please specify the full shared storage connection path to use
>> (example: host:/path): 192.168.122.218:/exports/hosted-engine-test1
>> [ ERROR ] The selected device already contains a storage domain.
>> [ ERROR ] Setup of additional hosts using this software is not allowed
>> anymore. Please use the engine web interface to deploy any additional hosts.
>>
>
> Since 4.0 you should add additional hosted-engine hosts directly from the
> webadmin UI.
>
>
>> [ ERROR ] Failed to execute stage 'Environment customization': Setup of
>> additional hosts using this software is not allowed anymore. Please use the
>> engine web interface to deploy any additional hosts.
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180408214554.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed
>>   Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180408214515-4vofq6.log
>>
>>
>>
>> - Original Message -
>> From: Alex K <rightkickt...@gmail.com>
>> To: dhy...@sina.com
>> Cc: FERNANDO FREDIANI <fernando.fredi...@upx.com>, users <users@ovirt.org>
>> Subject: Re: [ovirt-users] Re: ovirt engine HA
>> Date: 2018-04-08 19:45
>>
>> Are you a troll?
>>
>>
>> On Sun, Apr 8, 2018, 12:15 <dhy...@sina.com> wrote:
>>
>>
>> Hi, I have two nodes, and deployed hosted-engine on each with #hosted-engine --deploy.
>>  I find the two hosted-engines I deployed are independent. How do I make
>> my hosted-engine HA?
>>
>> - Original Message -
>> From: Alex K <rightkickt...@gmail.com>
>> To: FERNANDO FREDIANI <fernando.fredi...@upx.com>
>> Cc: users <users@ovirt.org>
>> Subject: Re: [ovirt-users] ovirt engine HA
>> Date: 2018-04-04 01:40
>>
>> In case you need HA for the engine you need to deploy it to other hosts
>> also through the GUI.
>>
>>
>> On Tue, Apr 3, 2018 at 4:47 PM, FERNANDO FREDIANI <
>> fernando.fredi...@upx.com> wrote:
>>
>> Is it enough to deploy the Self-Hosted engine in just one Host of the
>> cluster or is it necessary to repeat the process in each of the nodes that
>> must be able to run it ?
>>
>> Thanks
>> Fernando
>>
>> 2018-04-03 2:01 GMT-03:00 Vincent Royer <vinc...@epicenergy.ca>:
>>
>> Same thing, the engine in this case is "self-hosted", as in, it runs in a
>> VM hosted on the cluster that it is managing.  I am a beginner here, but
>> from my understanding, each node is always checking on the health of the
>> engine VM.  If the engine is missing (i.e., the host running it has gone
>> down), then another available, healthy host will spin up the engine and
>> you will regain access.
>>
>> In my experience this has worked very reliably.  I have 2 hosts, both are
>> "able" to run the engine VM.  If I take one host down, I am not able to
>> load the engine GUI.  But if I wait a few minutes, then I regain access,
>> and see that the engine is now running on the remaining healthy host.
>>
>> *Vincent Royer*
>> *778-825-1057 <(778)%20825-1057>*
>>
>>
>> <http://www.epicenergy.ca/>
>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>
>>
>>
>>
>> On Mon, Apr 2, 2018 at 6:07 PM, <dhy...@sina.com> wrote:
>>
>> What is the difference between a self-hosted engine and a hosted engine ?
>> I found a project, ovirt-hosted-engine-ha:
>> https://github.com/oVirt/ovirt-hosted-engine-ha
>> - Original Message -
>> From: Vincent Royer <vinc...@epicenergy.ca>
>

Re: [ovirt-users] Hardware critique

2018-04-06 Thread FERNANDO FREDIANI
It is quite possible you will get more performance from an NFS server
compared to Gluster, especially if on your NFS server you have something
like ZFS + SSD for L2ARC, or ext4 + bcache. But you get no redundancy: if
your NFS server dies everything stops working, which is not the case with
Distributed Storage.
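
For the ZFS case, adding an SSD as L2ARC is a one-liner (a sketch, assuming
an existing pool named tank and an SSD at /dev/nvme0n1 - both names are
placeholders):

  # zpool add tank cache /dev/nvme0n1

The SSD then serves as a second-level read cache in front of the magnetic
disks.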

Fernando

2018-04-06 10:45 GMT-03:00 Jayme :

> Yaniv,
>
> I appreciate your input, thanks!
>
> I understand that everyone's use case is different, but I was hoping to
> hear from some users that are using oVirt hyper-converged setup and get
> some input on the performance.  When I research GlusterFS I hear a lot
> about how it can be slow especially when dealing with small files.  I'm
> starting to wonder if a straight up NFS server with a few SSDs would be
> less hassle and perhaps offer better VM performance than glusterFS can
> currently.
>
> I want to get the best oVirt performance I can get (on somewhat of a
> budget) with a fairly small amount of required disk space (under 2TB).  I'm
> not sure if hyper-converged setup w/GlusterFS is the answer or not.  I'd
> like to avoid spending 15k only to find out that it's too slow.
>
> On Fri, Apr 6, 2018 at 6:05 AM, Yaniv Kaul  wrote:
>
>>
>>
>> On Thu, Apr 5, 2018, 11:39 PM Vincent Royer 
>> wrote:
>>
>>> Jayme,
>>>
>>> I'm doing a very similar build, the only difference really is I am using
>>> SSDs instead of HDDs.   I have similar questions as you regarding expected
>>> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
>>> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
>>> massive hit.  Am I correct in saying you will only get 4TB total usable
>>> capacity out of 24TB worth of disks?  The cost per TB in that sort of
>>> scenario is immense.
>>>
>>> My plan is two 2TB SSDs per server in JBOD with a caching raid card,
>>> with replica 3.  I would end up with the same 4TB total capacity using 12TB
>>> of SSDs.
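>>>
>>> (If I did the math right: 3 hosts x 4 x 2TB = 24TB raw; RAID 10 halves
>>> each host to 4TB usable, and replica 3 keeps only one usable copy across
>>> the cluster, so 4TB total. The SSD plan is 3 hosts x 2 x 2TB = 12TB raw
>>> in JBOD, again 4TB per host and 4TB total after replica 3.)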
>>>
>>
>> I'm not sure I see the value in RAID card if you don't use RAID and I'm
>> not sure you really need caching on the card.
>> Y.
>>
>>
>>> I think Replica 3 is safe enough that you could forgo the RAID 10. But
>>> I'm talking from zero experience...  Would love others to chime in with
>>> their opinions on both these setups.
>>>
>>> *Vincent Royer*
>>> *778-825-1057*
>>>
>>>
>>> 
>>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>>
>>>
>>>
>>>
>>> On Thu, Apr 5, 2018 at 12:22 PM, Jayme  wrote:
>>>
 Thanks for your feedback.  Any other opinions on this proposed setup?
 I'm very torn over using GlusterFS and what the expected performance may
 be, there seems to be little information out there.  Would love to hear any
 feedback specifically from ovirt users on hyperconverged configurations.

 On Thu, Apr 5, 2018 at 2:56 AM, Alex K  wrote:

> Hi,
>
> You should be ok with the setup.
> I am running around 20 vms (linux and windows, small and medium size)
> with the half of your specs. With 10G network replica 3 is ok.
>
> Alex
>
> On Wed, Apr 4, 2018, 16:13 Jayme  wrote:
>
>> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
>> budget).  I plan to do 20-30 Linux VMs most of them very light weight + a
>> couple of heavier hitting web and DB servers with frequent rsync backups.
>> Some have a lot of small files from large github repos etc.
>>
>> 3X of the following:
>>
>> Dell PowerEdge R720
>> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
>> 256GB RAM
>> PERC H710
>> 2x10GB Nic
>>
>> Boot/OS will likely be two cheaper small sata/ssd in raid 1.
>>
>> Gluster bricks comprised of 4x2TB WD Gold 7200RPM SATA HDDs in RAID
>> 10 per server.  Using a replica 3 setup (and I'm thinking right now with 
>> no
>> arbiter for extra redundancy, although I'm not sure what the performance
>> hit may be as a result).  Will this allow for two host failure or just 
>> one?
>>
>> I've been really struggling with storage choices, it seems very
>> difficult to predict the performance of glusterFS due to the variance in
>> hardware (everyone is using something different).  I'm not sure if the
>> performance will be adequate enough for my needs.
>>
>> I will be using an already existing Netgear XS716T 10GB switch for
>> Gluster storage network.
>>
>> In addition I plan to build another simple glusterFS storage server
>> that I can use to georeplicate the gluster volume to for DR purposes and
>> use existing hardware to build an independent standby oVirt host that is
>> able to start up a few high priority VMs from the georeplicated glusterFS
>> volume if for some reason the primary oVirt cluster/glusterFS volume ever
>> failed.
>>
>> I would love to hear any advice or critiques on this plan.

Re: [ovirt-users] Hardware critique

2018-04-05 Thread FERNANDO FREDIANI
I always found replica 3 complete overkill. I don't know who decided it was
necessary; it looks good and costs a lot, with little benefit.

Normally when using magnetic disks two copies are fine for most scenarios,
but if using SSDs, depending on the configuration of each node's disks, it
is possible to have something RAID 5/6-ish.
Fernando
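
A middle ground worth mentioning is an arbiter volume: two full copies of
the data plus a metadata-only third brick that avoids split-brain without
the cost of a third full copy. A sketch (volume name, hosts and brick paths
are placeholders):

  # gluster volume create vmstore replica 3 arbiter 1 \
        host1:/gluster/brick host2:/gluster/brick host3:/gluster/arbiter

Capacity-wise that costs roughly two copies instead of three.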

2018-04-05 17:38 GMT-03:00 Vincent Royer :

> Jayme,
>
> I'm doing a very similar build, the only difference really is I am using
> SSDs instead of HDDs.   I have similar questions as you regarding expected
> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
> massive hit.  Am I correct in saying you will only get 4TB total usable
> capacity out of 24TB worth of disks?  The cost per TB in that sort of
> scenario is immense.
>
> My plan is two 2TB SSDs per server in JBOD with a caching raid card, with
> replica 3.  I would end up with the same 4TB total capacity using 12TB of
> SSDs.
>
> I think Replica 3 is safe enough that you could forgo the RAID 10. But I'm
> talking from zero experience...  Would love others to chime in with their
> opinions on both these setups.
>
> *Vincent Royer*
> *778-825-1057*
>
>
> 
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
>
>
>
> On Thu, Apr 5, 2018 at 12:22 PM, Jayme  wrote:
>
>> Thanks for your feedback.  Any other opinions on this proposed setup?
>> I'm very torn over using GlusterFS and what the expected performance may
>> be, there seems to be little information out there.  Would love to hear any
>> feedback specifically from ovirt users on hyperconverged configurations.
>>
>> On Thu, Apr 5, 2018 at 2:56 AM, Alex K  wrote:
>>
>>> Hi,
>>>
>>> You should be ok with the setup.
>>> I am running around 20 vms (linux and windows, small and medium size)
>>> with the half of your specs. With 10G network replica 3 is ok.
>>>
>>> Alex
>>>
>>> On Wed, Apr 4, 2018, 16:13 Jayme  wrote:
>>>
 I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
 budget).  I plan to do 20-30 Linux VMs most of them very light weight + a
 couple of heavier hitting web and DB servers with frequent rsync backups.
 Some have a lot of small files from large github repos etc.

 3X of the following:

 Dell PowerEdge R720
 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
 256GB RAM
 PERC H710
 2x10GB Nic

 Boot/OS will likely be two cheaper small sata/ssd in raid 1.

 Gluster bricks comprised of 4x2TB WD Gold 7200RPM SATA HDDs in RAID 10
 per server.  Using a replica 3 setup (and I'm thinking right now with no
 arbiter for extra redundancy, although I'm not sure what the performance
 hit may be as a result).  Will this allow for two host failure or just one?

 I've been really struggling with storage choices, it seems very
 difficult to predict the performance of glusterFS due to the variance in
 hardware (everyone is using something different).  I'm not sure if the
 performance will be adequate enough for my needs.

 I will be using an already existing Netgear XS716T 10GB switch for
 Gluster storage network.

 In addition I plan to build another simple glusterFS storage server
 that I can use to georeplicate the gluster volume to for DR purposes and
 use existing hardware to build an independent standby oVirt host that is
 able to start up a few high priority VMs from the georeplicated glusterFS
 volume if for some reason the primary oVirt cluster/glusterFS volume ever
 failed.

 I would love to hear any advice or critiques on this plan.

 Thanks!
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt engine HA

2018-04-03 Thread FERNANDO FREDIANI
Is it enough to deploy the Self-Hosted engine in just one Host of the
cluster or is it necessary to repeat the process in each of the nodes that
must be able to run it ?

Thanks
Fernando

2018-04-03 2:01 GMT-03:00 Vincent Royer :

> Same thing, the engine in this case is "self-hosted", as in, it runs in a
> VM hosted on the cluster that it is managing.  I am a beginner here, but
> from my understanding, each node is always checking on the health of the
> engine VM.  If the engine is missing (i.e., the host running it has gone
> down), then another available, healthy host will spin up the engine and
> you will regain access.
>
> In my experience this has worked very reliably.  I have 2 hosts, both are
> "able" to run the engine VM.  If I take one host down, I am not able to
> load the engine GUI.  But if I wait a few minutes, then I regain access,
> and see that the engine is now running on the remaining healthy host.
>
> *Vincent Royer*
> *778-825-1057 <(778)%20825-1057>*
>
>
> 
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
>
>
>
> On Mon, Apr 2, 2018 at 6:07 PM,  wrote:
>
>> What is the difference between a self-hosted engine and a hosted engine ?
>> I found a project, ovirt-hosted-engine-ha:
>> https://github.com/oVirt/ovirt-hosted-engine-ha
>> - Original Message -
>> From: Vincent Royer 
>> To: dhy...@sina.com
>> Cc: users 
>> Subject: Re: [ovirt-users] ovirt engine HA
>> Date: 2018-04-03 08:57
>>
>> If your node running self-hosted engine crashes, the hosted engine will
>> be started up on another node. It just takes a few minutes for this all to
>> happen, but it works reliably in my experience.
>>
>>
>>
>> On Mon, Apr 2, 2018 at 5:42 PM,  wrote:
>>
>> How do I make the ovirt engine HA? I have a three-node cluster; one of the
>> nodes runs both the engine and a node role, the others are plain nodes. If
>> the node that runs the engine crashes, how do I ensure my service stays up?
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Deploy Self-Hosted Engine in a Active Host

2018-03-28 Thread FERNANDO FREDIANI
Hello

As I mentioned in another thread I am migrating a 'Bare-metal' oVirt-Engine
to a Self-Hosted Engine.
For that I am following this documentation:
https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/

However, one thing caught my attention and I wanted to clarify: must the Host
that will deploy the Self-Hosted Engine be in Maintenance mode, and
therefore have no other VMs running ?

I have a Node which is currently part of a Cluster and wish to deploy the
Self-Hosted Engine to it. Do I have to put it into Maintenance mode first,
or can I just run 'hosted-engine --deploy' ?

Note: this Self-Hosted Engine will manage the existing cluster where this
Node exists. I guess that is not an issue at all, and is part of what the
Self-Hosted Engine is intended for.
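
For reference, the core of that procedure is a full engine backup taken on
the bare-metal engine and restored inside the new engine VM; roughly (file
names are placeholders):

  # engine-backup --mode=backup --scope=all \
        --file=engine-backup.tar.bz2 --log=engine-backup.log

and later, inside the self-hosted engine VM, the matching restore (with
--provision-db, assuming a fresh database should be created):

  # engine-backup --mode=restore --scope=all \
        --file=engine-backup.tar.bz2 --log=engine-restore.log --provision-db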

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot of the Self-Hosted Engine

2018-03-28 Thread FERNANDO FREDIANI
Hello Sven and all.

Yes, the storage does have a snapshot function and it could possibly be
used, but I was wondering about an even easier way, through the oVirt Node
CLI or something similar, that can use a qcow2 image snapshot with the
Self-Hosted Engine in Global Maintenance.

I used to run the oVirt Engine in a Libvirt KVM Virtual Machine on a
separate Host and it has always been extremely handy to have this feature.
There have been times when an upgrade was not successful, and just
turning off the VM and starting it from a snapshot saved my day.
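
What I have in mind is roughly this (a sketch, assuming the engine disk is
a qcow2 file on a file-based storage domain; the disk path is a
placeholder):

  # hosted-engine --set-maintenance --mode=global
  # hosted-engine --vm-shutdown
  # qemu-img snapshot -c pre-upgrade /rhev/data-center/.../engine-disk.qcow2
  # hosted-engine --vm-start

with 'qemu-img snapshot -a pre-upgrade' to roll back if the upgrade goes
wrong, and --mode=none to leave Global Maintenance afterwards.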

Regards
Fernando

2018-03-27 14:14 GMT-03:00 Sven Achtelik <sven.achte...@eps.aero>:

> Hi Fernando,
>
>
>
> depending on where you’re having your storage you could set everything to
> global maintenance, stop the vm and copy the disk image. Or if your storage
> system is able to do snapshots you could use that function once the engine
> is stopped. It’s the easiest way I can think of right now. What kind of
> storage are you using ?
>
>
>
> Sven
>
>
>
> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
> Behalf Of *FERNANDO FREDIANI
> *Sent:* Tuesday, 27 March 2018 15:24
> *To:* users
> *Subject:* [ovirt-users] Snapshot of the Self-Hosted Engine
>
>
>
> Hello
>
> Is it possible to snapshot the Self-Hosted Engine before an Upgrade ? If
> so how ?
>
> Thanks
>
> Fernando
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Snapshot of the Self-Hosted Engine

2018-03-27 Thread FERNANDO FREDIANI
Hello

Is it possible to snapshot the Self-Hosted Engine before an Upgrade ? If so
how ?

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV

2018-03-26 Thread FERNANDO FREDIANI
Indeed, there is this problem with the virtio driver which creates a
sometimes huge bottleneck for machines that do a fair amount of traffic.
Other than using DPDK OVS I would love to hear of an alternative or a fix
for it. I am currently being hit by this issue with no solution.

As you mention, for a lab it is fine, but it would be lovely to have a
redundant scenario like this in production.
Fernando

2018-03-23 21:04 GMT-03:00 Charles Kozler <ckozler...@gmail.com>:

> Truth be told I dont really know. What I am going to be doing with it is
> pretty much mostly some lab stuff and get working with VRF's a bit
>
> There is a known limitation: the virtio backend driver uses interrupt mode
> to receive packets while vSRX uses DPDK - see
> https://dpdk.readthedocs.io/en/stable/nics/virtio.html - which in turn
> creates a bottleneck into the guest VM. It is more ideal to use something
> like SR-IOV instead and remove as many buffer layers as possible with PCI
> passthrough.
>
> One easier way too is to use DPDK OVS. I know ovirt supports OVS more
> natively in later versions, so I just didn't go after it, and I don't know
> if there is any difference between regular OVS and DPDK OVS. I don't have a
> huge requirement of insane throughput; I just need to get packets from
> Amazon back to my lab and support overlapping subnets.
>
> This exercise was somewhat of a POC for me to see if it can be done. A lot
> of Juniper's documentation does not take into account such things as ovirt
> or proxmox or any linux overlay to hypervisors like it does for vmware /
> vcenter, which is no fault of their own. They assume a flat KVM host (or 2
> if clustered), whereas stuff like ovirt can introduce variables (eg: no MAC
> spoofing).
>
> On Fri, Mar 23, 2018 at 3:27 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Out of curiosity, how much traffic can it handle running in these Virtual
>> Machines on top of reasonable hardware ?
>>
>> Fernando
>>
>> 2018-03-23 4:58 GMT-03:00 Joop <jvdw...@xs4all.nl>:
>>
>>> On 22-3-2018 10:17, Yaniv Kaul wrote:
>>>
>>>
>>>
>>> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler <
>>> <ckozler...@gmail.com>ckozler...@gmail.com> wrote:
>>>
>>>> Hi All -
>>>>
>>>> Recently did this and thought it would be worth documenting. I couldn't
>>>> find any solid information on vsrx with kvm outside of flat KVM. This
>>>> outlines some of the things I hit along the way and how to fix. This is my
>>>> one small way of giving back to such an incredible open source tool
>>>>
>>>> https://ckozler.net/vsrx-cluster-on-ovirtrhev/
>>>>
>>>
>>> Thanks for sharing!
>>> Why didn't you just upload the qcow2 disk via the UI/API though?
>>> There's quite a bit of manual work that I hope is not needed?
>>>
>>> @Work we're using Juniper too and out of curiosity I downloaded the
>>> qcow2 image and used the UI to upload it and add it to a VM. It just works
>>> :-) oVirt++
>>>
>>> Joop
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV

2018-03-23 Thread FERNANDO FREDIANI
Out of curiosity, how much traffic can it handle running in these Virtual
Machines on top of reasonable hardware ?

Fernando

2018-03-23 4:58 GMT-03:00 Joop :

> On 22-3-2018 10:17, Yaniv Kaul wrote:
>
>
>
> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler < 
> ckozler...@gmail.com> wrote:
>
>> Hi All -
>>
>> Recently did this and thought it would be worth documenting. I couldn't
>> find any solid information on vsrx with kvm outside of flat KVM. This
>> outlines some of the things I hit along the way and how to fix. This is my
>> one small way of giving back to such an incredible open source tool
>>
>> https://ckozler.net/vsrx-cluster-on-ovirtrhev/
>>
>
> Thanks for sharing!
> Why didn't you just upload the qcow2 disk via the UI/API though?
> There's quite a bit of manual work that I hope is not needed?
>
> @Work we're using Juniper too and out of curiosity I downloaded the qcow2
> image and used the UI to upload it and add it to a VM. It just works :-)
> oVirt++
>
> Joop
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration oVirt Engine from Dedicated Host to Self-Hosted Engine

2018-03-19 Thread FERNANDO FREDIANI
Just to add: for the second question I am following this URL:

https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/

So the question is more about whether there is anything else worth paying
attention to beyond what is already there.

Thanks
Fernando

2018-03-19 13:38 GMT-03:00 FERNANDO FREDIANI <fernando.fredi...@upx.com>:

> Hello folks
>
> I currently have an oVirt Engine which runs in a Dedicated Virtual Machine
> in another and separate environment. It is very nice to have it like that,
> because every time I do an oVirt Version Upgrade I take a snapshot before,
> and if it fails (and it did fail several times in the past) I just revert
> to the snapshot and everything comes back to normal.
>
> Two quick questions:
>
> - Moving to a Self-Hosted Engine, will snapshots or other recovery
> mechanisms still be possible ?
>
> - To migrate the Engine from the current environment to the self-hosted
> engine, is it just a matter of backing up the database and restoring it
> into the self-hosted engine, keeping the same IP address ? Are there any
> special points to take into consideration when doing this migration ?
>
> Thanks
> Fernando
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very Slow Console Performance - Windows 10

2018-03-08 Thread FERNANDO FREDIANI

Hello Gianluca.

As I mentioned previously, I am not sure it has anything to do with SPICE 
at all, but rather with the amount of memory the VM has assigned to it. 
Proof of that is that when you access it via any Remote Desktop protocol it 
remains slow, as if the amount of video memory wasn't enough, and I have 
seen it crash several times as well.


Fernando


On 07/03/2018 16:59, Gianluca Cecchi wrote:
On Wed, Mar 7, 2018 at 7:43 PM, Michal Skrivanek 
<michal.skriva...@redhat.com <mailto:michal.skriva...@redhat.com>> wrote:





On 7 Mar 2018, at 14:03, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:

Hello Gianluca

Resurrecting this topic. I made the changes as per your
instructions below on the Engine configuration but it had no
effect on the VM graphics memory. Is it necessary to restart the
Engine after adding the 20-overload.properties file ? Also I
don't think it is necessary to do any changes on the hosts, right ?


correct on both



Hello Fernando and Michal,
at that time I was doing some tests both with plain virt-manager and 
oVirt for some Windows 10 VMs.

More recently I haven't done anything in that regard again, unfortunately.
After you have done what you did suggest yourself and Michal 
confirmed, then you can test powering off and then on again the VM (so 
that the new qemu-kvm process starts with the new parameters) and let 
us know if you enjoy better experience, so that we can ask for 
adoption as a default (eg for VMs configured as desktops) or as a 
custom property to give



In the recent updates, has anything changed in terms of how to
change the video memory assigned to any given VM ? I guess it is
something that has been forgotten over time, especially if you are
running a VDI-like environment, which depends very much on the
video memory.


there were no changes recently, these are the most recent
guidelines we got from SPICE people. They might be out of date.
Would be good to raise that specifically (the performance
difference for default sizes) to them, can you narrow it down and
post to spice-de...@lists.freedesktop.org
<mailto:spice-de...@lists.freedesktop.org>?



This could be very useful too

Cheers,
Gianluca



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very Slow Console Performance - Windows 10

2018-03-07 Thread FERNANDO FREDIANI

Hi

I don't think these issues have much to do with SPICE, but with the 
amount of memory oVirt sets for VMs by default, which in some cases for 
desktop usage seems too little. A field where that could be adjusted 
without having to edit files in the Engine would probably resolve this 
issue, or am I missing anything ?


Fernando


On 07/03/2018 15:43, Michal Skrivanek wrote:



On 7 Mar 2018, at 14:03, FERNANDO FREDIANI <fernando.fredi...@upx.com 
<mailto:fernando.fredi...@upx.com>> wrote:


Hello Gianluca

Resurrecting this topic. I made the changes as per your instructions 
below on the Engine configuration but it had no effect on the VM 
graphics memory. Is it necessary to restart the Engine after adding 
the 20-overload.properties file ? Also I don't think it is necessary to 
do any changes on the hosts, right ?



correct on both


In the recent updates, has anything changed in terms of how to 
change the video memory assigned to any given VM ? I guess it is 
something that has been forgotten over time, especially if you are 
running a VDI-like environment, which depends very much on the video 
memory.


there were no changes recently, these are the most recent guidelines 
we got from SPICE people. They might be out of date. Would be good to 
raise that specifically (the performance difference for default sizes) 
to them, can you narrow it down and post to 
spice-de...@lists.freedesktop.org 
<mailto:spice-de...@lists.freedesktop.org>?


Thanks,
michal


Let me know.
Thanks

Fernando Frediani


On 24/11/2017 20:45, Gianluca Cecchi wrote:
On Fri, Nov 24, 2017 at 5:50 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


I have made an Export of the same VM created in oVirt to a server
running pure qemu/KVM, which creates new VM profiles with
vram 65536, and it turned on the Windows 10 VM, which runs
perfectly with that configuration.

I was reading some documentation suggesting it may be possible to change
the file /usr/share/ovirt-engine/conf/osinfo-defaults.properties
in order to change it for the profile you want, but I am not sure
how these changes should be made: directly in that file, or in
another one with just the custom configs ? And how can they be
applied immediately to any new or existing VM ? I am pretty confident
that once vram is increased it should resolve the issue, with not
only Windows 10 VMs but others as well.

Anyone can give a hint about the correct procedure to apply this
change ?

Thanks in advance.
Fernando




Hi Fernando,
based on this:
https://www.ovirt.org/develop/release-management/features/virt/os-info/


you should create a file of kind
/etc/ovirt-engine/osinfo.conf.d/20-overload.properties
but I think you can only overwrite the multiplier and not directly 
the vgamem (or vgamem_mb in rhel 7) values


so that you could put something like this inside it:

os.windows_10.devices.display.vramMultiplier.value = 2
os.windows_10x64.devices.display.vramMultiplier.value = 2

I think there are no values for vgamem_mb

I found these two threads in 2016
http://lists.ovirt.org/pipermail/users/2016-June/073692.html
that confirms you cannot set vgamem
and
http://lists.ovirt.org/pipermail/users/2016-June/073786.html
that suggests to create a hook

Just a hack that came into mind:
in a CentOS vm of mine in a 4.1.5 environment I see that by default 
I get this qemu command line


-device 
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2


Based on this:
https://www.ovirt.org/documentation/draft/video-ram/


you have
vgamem = 16 MB * number_of_heads

I verified that if I edit the vm in the gui and set Monitors=4 in 
console section (but with the aim of using only the first head) and 
then I power off and power on the VM, I get now


-device 
qxl-vga,id=video0,ram_size=268435456,vram_size=134217728,vram64_size_mb=0,vgamem_mb=64,bus=pci.0,addr=0x2


I have not a client to connect and verify any improvement: I don't 
know if you will be able to use all the new ram in the only first 
head with a better experience or if it is partitioned in some way...

Could you try eventually?

Gianluca


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very Slow Console Performance - Windows 10

2018-03-07 Thread FERNANDO FREDIANI

Hello Gianluca

Resurrecting this topic. I made the changes as per your instructions 
below on the Engine configuration but it had no effect on the VM 
graphics memory. Is it necessary to restart the Engine after adding the 
20-overload.properties file ? Also I don't think it is necessary to do any 
changes on the hosts, right ?
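
My assumption is that the file is only read at engine startup, so after 
creating it the engine service needs a restart:

  # systemctl restart ovirt-engine

and the VM has to be powered off and on again so that the new qemu-kvm 
command line takes effect.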


In the recent updates, has anything changed in terms of how to change 
the video memory assigned to any given VM ? I guess it is something that 
has been forgotten over time, especially if you are running a VDI-like 
environment, which depends very much on the video memory.


Let me know.
Thanks

Fernando Frediani


On 24/11/2017 20:45, Gianluca Cecchi wrote:
On Fri, Nov 24, 2017 at 5:50 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


I have made an Export of the same VM created in oVirt to a server
running pure qemu/KVM, which creates new VM profiles with vram
65536, and it turned on the Windows 10 VM, which runs perfectly with
that configuration.

I was reading some documentation suggesting it may be possible to change
the file /usr/share/ovirt-engine/conf/osinfo-defaults.properties
in order to change it for the profile you want, but I am not sure
how these changes should be made: directly in that file, or in
another one with just the custom configs ? And how can they be
applied immediately to any new or existing VM ? I am pretty confident
that once vram is increased it should resolve the issue, with not only
Windows 10 VMs but others as well.

Anyone can give a hint about the correct procedure to apply this
change ?

Thanks in advance.
Fernando




Hi Fernando,
based on this:
https://www.ovirt.org/develop/release-management/features/virt/os-info/


you should create a file of kind
/etc/ovirt-engine/osinfo.conf.d/20-overload.properties
but I think you can only overwrite the multiplier and not directly the 
vgamem (or vgamem_mb in rhel 7) values


so that you could put something like this inside it:

os.windows_10.devices.display.vramMultiplier.value = 2
os.windows_10x64.devices.display.vramMultiplier.value = 2

I think there are no values for vgamem_mb

I found these two threads in 2016
http://lists.ovirt.org/pipermail/users/2016-June/073692.html
that confirms you cannot set vgamem
and
http://lists.ovirt.org/pipermail/users/2016-June/073786.html
that suggests to create a hook

Just a hack that came into mind:
in a CentOS vm of mine in a 4.1.5 environment I see that by default I 
get this qemu command line


-device 
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2


Based on this:
https://www.ovirt.org/documentation/draft/video-ram/


you have
vgamem = 16 MB * number_of_heads

I verified that if I edit the vm in the gui and set Monitors=4 in 
console section (but with the aim of using only the first head) and 
then I power off and power on the VM, I get now


-device 
qxl-vga,id=video0,ram_size=268435456,vram_size=134217728,vram64_size_mb=0,vgamem_mb=64,bus=pci.0,addr=0x2


I have not a client to connect and verify any improvement: I don't 
know if you will be able to use all the new ram in the only first head 
with a better experience or if it is partitioned in some way...

Could you try eventually?

Gianluca


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 2018 Survey

2018-01-17 Thread FERNANDO FREDIANI
Yeah, I noticed the same thing and almost didn't select any of these, 
which are already pretty old.



On 17/01/2018 05:03, Barak Korren wrote:

On 17 January 2018 at 01:02, ~Stack~  wrote:

Greetings,
FYI, your Ubuntu options are antiquated.

12.10, 13.04, 13.10 are all unsupported.

12.04 is only in extended security maintenance.

I believe the options should be 12.04, 14.04, 16.04, and 17.10 (latest
non-LTS).


I guess you're referring to the images in glance.ovirt.org? We're
looking for help maintaining that...

Here is a tracker ticket in the meantime:
https://ovirt-jira.atlassian.net/browse/OVIRT-1848



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Persistence on oVirt-Nodes

2018-01-11 Thread FERNANDO FREDIANI

Quick questions about Nodes (in order to not hijack the other thread).

As they don't have much notion of persistence how can I:

- Keep a simple configuration backup so that, if the Operating 
System disk fails, I can reinstall the Node, restore the config, and 
everything comes back up fine ? Would backing up the /var, /etc and /root 
folders be enough, or is anything else needed ?
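
Something as simple as the following would be a start (a sketch; the path 
list is only my guess at what matters):

  # tar czf /tmp/node-config-backup.tar.gz /etc /root /var/lib/vdsm

assuming those paths are in fact enough.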


- If I install something custom (e.g. a Zabbix Agent) it works until the 
host receives a new system/image Upgrade. Is there any way that it can be 
made persistent ?


- As oVirt-Nodes are images, if I install them on an SD Card or USB Stick, 
will there still be much writing to disk ? Other than possibly the logs 
and configuration changes, what else could be written to disk ?


If so, a Feature Request to solve this would be to mount the logs 
partition in RAM as the Node boots and logrotate it. If someone needs 
to keep the logs longer they can always use rsyslog.


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage Hardware for Hyperconverged oVirt: Gluster best practice

2018-01-10 Thread FERNANDO FREDIANI
I really don't see much need for hardware RAID if you have an 
SSD-only environment. You will get little benefit from the hardware cache 
memory, and to guarantee the writes you may end up having the filesystem 
always doing sync operations, similar to what ZFS does. You just need an 
HBA to control all the disks, or even software RAID will do the job.


Fernando


On 10/01/2018 17:34, ov...@fateknollogee.com wrote:

oVirt + Gluster (hyperconverged) RAID question:
I have 3 nodes of SuperMicro hardware, each node has 1x SATADOM (boot 
drive for o/s install) and 6x 1TB SSD (to be used for Gluster).

For the SSDs, is hardware or software RAID preferred or do I use an HBA?
The RedHat docs seem to suggest hardware RAID, others on the forum say 
HBA  or software RAID.


What are other folks using?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration without Shared Storage

2017-12-28 Thread FERNANDO FREDIANI
Are you talking about all kinds of Storage (iSCSI, FC, NFS and 
Localstorage/POSIX) ?


Because I believe you may be able to specify the destination path on the 
destination Host and when working with Localstorage/POSIX that may be 
simpler.


Fernando


On 28/12/2017 17:32, Michal Skrivanek wrote:



On 28 Dec 2017, at 19:56, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Has anyone tried the command below under the hood between two oVirt 
Nodes (in the same Datacenter or between two different (local) ones) ? 
Does it work ?


no, it does not with ovirt. ovirt manages storage differently than 
plain libvirt



virsh migrate --live --persistent --undefinesource --copy-storage-all \
     --verbose --desturi qemu+ssh://DESTINATION/system VM_NAME
This is such a fantastic feature for certain scenarios; it may help 
a lot with maintenance, or even migration between hosts with Local Storage, 
minimizing downtime and, mainly, all the hassle of having to power off 
a VM, export it to an Export domain, unmount that, mount it on the other 
Host/Datacenter, import it and power it on.


Thanks
Regards

Fernando

[1] Ref: 
https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Live migration without Shared Storage

2017-12-28 Thread FERNANDO FREDIANI
Has anyone tried the command below under the hood between two oVirt Nodes 
(in the same Datacenter or between two different (local) ones) ? Does it 
work ?


virsh migrate --live --persistent --undefinesource --copy-storage-all \
    --verbose --desturi qemu+ssh://DESTINATION/system VM_NAME

This is such a fantastic feature for certain scenarios; it may help a 
lot with maintenance, or even migration between hosts with Local Storage, 
minimizing downtime and, mainly, all the hassle of having to power off a VM, 
export it to an Export domain, unmount that, mount it on the other 
Host/Datacenter, import it and power it on.


Thanks
Regards

Fernando

[1] Ref: 
https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/
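
One practical detail from that reference: with --copy-storage-all the 
destination image has to exist before the migration starts, so the flow is 
roughly (host names, paths and sizes are placeholders):

  # on the destination host, pre-create an empty disk of the same size
  qemu-img create -f qcow2 /var/lib/libvirt/images/vm1.qcow2 50G

  # then, from the source host
  virsh migrate --live --persistent --undefinesource --copy-storage-all \
      --verbose --desturi qemu+ssh://destination-host/system vm1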
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update to 4.2.0 failing in db check

2017-12-21 Thread FERNANDO FREDIANI
Should not this type of content come from the dev team before the 
product is in GA ?



On 21/12/2017 15:53, Yaniv Kaul wrote:



On Dec 21, 2017 6:17 PM, "FERNANDO FREDIANI" 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Sure Sandro, but I was talking about documentation, which is lacking
for certain procedures that are very common.


Our site is open and contributions are welcome. If you can start such 
a document, that'll be great.

Y.

Regards
Fernando


On 21/12/2017 12:32, Sandro Bonazzola wrote:



2017-12-21 12:55 GMT+01:00 FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>>:

Updates are often problematic.

Whenever someone manages to do a 4.1 to 4.2 upgrade, they could
possibly post it to the Wiki. That would help a lot of people.

Well, we did :-) Our infra is running on 4.2.0:
https://engine-phx.ovirt.org/ovirt-engine/
<https://engine-phx.ovirt.org/ovirt-engine/>
We have been keeping it updated since 3.4, every time we release.


Fernando


On 21/12/2017 08:24, Sandro Bonazzola wrote:



2017-12-21 11:03 GMT+01:00 Giorgio Biacchi
<gior...@di.unimi.it <mailto:gior...@di.unimi.it>>:

Hi,
I have additional info on the problem. I run
/usr/share/ovirt-engine/setup/dbutils/fkvalidator.sh and
the problem is on 4 templates subversions. In detail we
have two templates and each one has two subversions.

Other templates with no subversions have no problem.

Thanks again, I hope this helps in debugging.


Thanks Giorgio, do you mind open a bug on
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
<https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine>
to track this?



On 12/20/2017 04:04 PM, Martin Perina wrote:

Hi,

could you please share the full setup log?


​/var/log/ovirt-engine/setup/ovirt-engine-setup-20171220110337-cy5ri9.log

Thanks

Martin


On Wed, Dec 20, 2017 at 2:22 PM, Sandro Bonazzola
<sbona...@redhat.com <mailto:sbona...@redhat.com>
<mailto:sbona...@redhat.com
<mailto:sbona...@redhat.com>>> wrote:



    2017-12-20 11:58 GMT+01:00 Giorgio Biacchi
<gior...@di.unimi.it <mailto:gior...@di.unimi.it>
    <mailto:gior...@di.unimi.it
<mailto:gior...@di.unimi.it>>>:

        Hello list,
        I was about to upgrade from
4.1.8.2-1.el7.centos to 4.2.0 but
        engine-setup fails. Here's the relevant output:

        [ ERROR ] Failed to execute stage 'Setup
validation': Failed checking
        Engine database: an exception occurred while
validating the Engine
        database, please check the logs for getting
more info:
 Constraint violation found in  vm_interface
(vmt_guid) |1

        [ INFO  ] Stage: Clean up
                   Log file is located at
        /var/log/ovirt-engine/setup/ovirt-engine-setup-20171220110337-cy5ri9.log
        [ INFO  ] Generating answer file
        '/var/lib/ovirt-engine/setup/answers/20171220110551-setup.conf'
        [ INFO  ] Stage: Pre-termination
        [ INFO  ] Stage: Termination
        [ ERROR ] Execution of setup failed

        any ideas??


    Adding some people, I think one of your vms has
an invalid configuration
    saved in the db.


        --         gb

        PGP Key: http://pgp.mit.edu/
        Primary key fingerprint: C510 0765 943E EBED
A4F2 69D3 16CC DC90 B9CB 0F34
___
        Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
<mailto:Users@ovirt.org <mailto:Users@ovirt.org>>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>
       
<http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>>




   

Re: [ovirt-users] Update to 4.2.0 failing in db check

2017-12-21 Thread FERNANDO FREDIANI
Sure Sandro, but I was talking about documentation, which is lacking 
for certain procedures that are very common.


Regards
Fernando


On 21/12/2017 12:32, Sandro Bonazzola wrote:



2017-12-21 12:55 GMT+01:00 FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>>:


Updates are often problematic.

Whenever someone manages to do a 4.1 to 4.2 upgrade, they could
possibly post it to the Wiki. That would help a lot of people.

Well, we did :-) Our infra is running on 4.2.0: 
https://engine-phx.ovirt.org/ovirt-engine/

We have been keeping it updated since 3.4, every time we release.


Fernando


On 21/12/2017 08:24, Sandro Bonazzola wrote:



2017-12-21 11:03 GMT+01:00 Giorgio Biacchi <gior...@di.unimi.it
<mailto:gior...@di.unimi.it>>:

Hi,
I have additional info on the problem. I run
/usr/share/ovirt-engine/setup/dbutils/fkvalidator.sh and the
problem is on 4 templates subversions. In detail we have two
templates and each one has two subversions.

Other templates with no subversions have no problem.

Thanks again, I hope this helps in debugging.


Thanks Giorgio, do you mind open a bug on
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
<https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine>
to track this?



On 12/20/2017 04:04 PM, Martin Perina wrote:

Hi,

could you please share the full setup log?


​/var/log/ovirt-engine/setup/ovirt-engine-setup-20171220110337-cy5ri9.log

Thanks

Martin


On Wed, Dec 20, 2017 at 2:22 PM, Sandro Bonazzola
<sbona...@redhat.com <mailto:sbona...@redhat.com>
<mailto:sbona...@redhat.com
<mailto:sbona...@redhat.com>>> wrote:



    2017-12-20 11:58 GMT+01:00 Giorgio Biacchi
<gior...@di.unimi.it <mailto:gior...@di.unimi.it>
    <mailto:gior...@di.unimi.it
<mailto:gior...@di.unimi.it>>>:

        Hello list,
        I was about to upgrade from 4.1.8.2-1.el7.centos
to 4.2.0 but
        engine-setup fails. Here's the relevant output:

        [ ERROR ] Failed to execute stage 'Setup
validation': Failed checking
        Engine database: an exception occurred while
validating the Engine
        database, please check the logs for getting more
info:
                  Constraint violation found in 
vm_interface (vmt_guid) |1

        [ INFO  ] Stage: Clean up
                   Log file is located at
        /var/log/ovirt-engine/setup/ovirt-engine-setup-20171220110337-cy5ri9.log
        [ INFO  ] Generating answer file
        '/var/lib/ovirt-engine/setup/answers/20171220110551-setup.conf'
        [ INFO  ] Stage: Pre-termination
        [ INFO  ] Stage: Termination
        [ ERROR ] Execution of setup failed

        any ideas??


    Adding some people, I think one of your vms has an
invalid configuration
    saved in the db.


        --         gb

        PGP Key: http://pgp.mit.edu/
        Primary key fingerprint: C510 0765 943E EBED A4F2
69D3 16CC DC90 B9CB 0F34
        ___
        Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
<mailto:Users@ovirt.org <mailto:Users@ovirt.org>>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>
        <http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>>




    --
    SANDRO BONAZZOLA

    ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG
VIRTUALIZATION R&D

    Red Hat EMEA <https://www.redhat.com/>

    <https://red.ht/sig>
    TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>





-- 
Martin Perina

Associate Manager, Software Engineering
Red Hat Czech s.r.o.


-- 
gb


PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC
DC90 B9CB 0F34




-- 


SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Re: [ovirt-users] Live migration without shared storage

2017-12-21 Thread FERNANDO FREDIANI
That is certainly going to be a very welcome feature and, if not there yet, 
it should be at the top of the roadmap. For planned maintenance it solves 
most downtime problems.


Fernando


On 21/12/2017 12:19, Pujan Shah wrote:
We have a bit of an odd setup where some of our clients have dedicated hosts 
and we also have some shared hosts. We can migrate client VMs from 
their dedicated host to a shared host if we need to do some maintenance. 
We don't have shared storage, and currently we are using XenServer, 
which supports live migration without shared storage. We recently 
started looking into KVM as an alternative and decided to try ovirt. 
To our surprise, KVM supports live migration without shared storage but 
ovirt does not. 
(https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/) 



​I wanted to know if anyone has dealt with such situation and is this 
something others are also looking for?​



​Regards,
Pujan Shah
Systemadministration

--
tel.: +49 (0) 221 / 95 168 - 74
mail:
​ ​
p...@dom.de 
DOM Digital Online Media GmbH,
Bismarck Str. 60
50672 Köln

http://www.dom.de/

Geschäftsführer: Markus Schulte
Handelsregister-Nr.: Amtsgericht Köln HRB 55347
UST.-Ident.Nr. DE 814 416 951


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-announce] [Call for feedback] share also your successful upgrades experience

2017-12-21 Thread FERNANDO FREDIANI
Seems an upgrade guide is more than needed for a 4.1 to 4.2 upgrade. 
Perhaps with all the feedback coming in that can be done.



On 21/12/2017 11:55, Misak Khachatryan wrote:

Did the upgrade to 4.2 yesterday.

Everything went smoothly except for a few glitches.

I have a 4-host install - 3 gluster and one node with local storage.
One of the gluster servers failed to start its brick, with a Peer Rejected 
status; that was solved very quickly by googling.
On the node I hit an old bug: it can't be upgraded since version 4.1.5, and 
finally it seems I just need to reinstall it from scratch.




Best regards,
Misak Khachatryan

On Thu, Dec 21, 2017 at 5:35 PM, Sandro Bonazzola > wrote:


Hi,
now that oVirt 4.2.0 has been released, we're starting to see some
reports about issues that for now are related to not so common
deployments.
We'd also like to get some feedback from those who upgraded to
this amazing release without any issue and add these positive
feedback under our developers (digital) Christmas tree as a gift
for the effort put in this release.
Looking forward to your positive reports!

Not having positive feedback? Let us know too!
We are putting an effort in the next weeks to promptly assist
whoever hit troubles during or after the upgrade. Let us know in
this users@ovirt.org  mailing list
(preferred) or on IRC using irc.oftc.net 
server and #ovirt channel.

We are also closely monitoring bugzilla.redhat.com
 for new bugs on oVirt project, so you
can report issues there as well.

Thanks,
-- 


SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

  
TRIED. TESTED. TRUSTED. 



___
Announce mailing list
annou...@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/announce





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update to 4.2.0 failing in db check

2017-12-21 Thread FERNANDO FREDIANI

Updates are often problematic.

Whenever someone manages to do a 4.1 to 4.2 upgrade, they could possibly 
post it to the Wiki. That would help a lot of people.


Fernando


On 21/12/2017 08:24, Sandro Bonazzola wrote:



2017-12-21 11:03 GMT+01:00 Giorgio Biacchi >:


Hi,
I have additional info on the problem. I run
/usr/share/ovirt-engine/setup/dbutils/fkvalidator.sh and the
problem is on 4 templates subversions. In detail we have two
templates and each one has two subversions.

Other templates with no subversions have no problem.

Thanks again, I hope this helps in debugging.


Thanks Giorgio, do you mind open a bug on 
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine to 
track this?




On 12/20/2017 04:04 PM, Martin Perina wrote:

Hi,

could you please share the full setup log?


​/var/log/ovirt-engine/setup/ovirt-engine-setup-20171220110337-cy5ri9.log

Thanks

Martin


On Wed, Dec 20, 2017 at 2:22 PM, Sandro Bonazzola

>> wrote:



    2017-12-20 11:58 GMT+01:00 Giorgio Biacchi

    >>:

        Hello list,
        I was about to upgrade from 4.1.8.2-1.el7.centos to
4.2.0 but
        engine-setup fails. Here's the relevant output:

        [ ERROR ] Failed to execute stage 'Setup validation':
Failed checking
        Engine database: an exception occurred while
validating the Engine
        database, please check the logs for getting more info:
                  Constraint violation found in vm_interface
(vmt_guid) |1

        [ INFO  ] Stage: Clean up
                   Log file is located at
        /var/log/ovirt-engine/setup/ovirt-engine-setup-20171220110337-cy5ri9.log
        [ INFO  ] Generating answer file
       
'/var/lib/ovirt-engine/setup/answers/20171220110551-setup.co

        >nf'
        [ INFO  ] Stage: Pre-termination
        [ INFO  ] Stage: Termination
        [ ERROR ] Execution of setup failed

        any ideas??


    Adding some people, I think one of your vms has an invalid
configuration
    saved in the db.


        --         gb

        PGP Key: http://pgp.mit.edu/
        Primary key fingerprint: C510 0765 943E EBED A4F2 69D3
16CC DC90 B9CB 0F34
        ___
        Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




    --
    SANDRO BONAZZOLA

    ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG
VIRTUALIZATION R&D

    Red Hat EMEA 

    
    TRIED. TESTED. TRUSTED. 





-- 
Martin Perina

Associate Manager, Software Engineering
Red Hat Czech s.r.o.


-- 
gb


PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90
B9CB 0F34




--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

  
TRIED. TESTED. TRUSTED. 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] bonding mode-alb

2017-12-11 Thread FERNANDO FREDIANI

Hello

If you have 10Gb ports you hardly need this kind of aggregation in order 
to get more bandwidth. 10Gb is enough for A LOT of things. Just use 
bonding mode=1 (active/backup) if your switches don't support stacking.


Doing mode-tlb and mode-alb is not always as straightforward as mode 1 or mode 4; a minimal mode=1 sketch follows below.
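
For reference, a minimal sketch of a mode=1 bond with initscripts-style
config files on the hypervisor (interface names are illustrative; in
oVirt you would normally build the bond from the engine's network setup
dialog rather than editing files by hand):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=1 miimon=100"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for the second port)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes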

Fernando


On 11/12/2017 09:53, Demeter Tibor wrote:

Hi,

Could help anyone for me in this question?
Thanks.

R.

Tibor



- 2017. dec. 6., 14:07, Demeter Tibor wrote:

Dear members,

I would like to use two switches to make a high-availability network
connection for my nfs storage.
Unfortunately, these switches do not support 802.3ad LACP
(really, I can't stack them), but I've read about the mode-alb and
mode-tlb bonding modes.
I know these modes are available in ovirt, but how does that work?
Also, how safe is it? Are these modes for HA or for load balancing?

I've read some forums where these modes are not recommended for use
in ovirt. What is the truth?
I would like to use them only for storage traffic; it will be separated
from other network traffic. I have two 10GbE switches and two
10GbE ports in my nodes.

Thanks in advance,

R

Tibor



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practice for iSCSI storage domains

2017-12-07 Thread FERNANDO FREDIANI
That's one of the reasons I prefer file storage (like NFS) over iSCSI or 
Fibre Channel. A lot more flexible and manageable.


In the past, for VMFS5, I used to work with 4TB LUNs. Nowadays something 
between 4TB and 8TB may be ok given the bigger size of VMs, depending on 
your environment of course. However, one thing to pay attention to in this 
scenario is how the metadata update process works and how it can impact 
performance on bigger LUNs.


Fernando


On 07/12/2017 08:51, Maor Lipchuk wrote:
On Thu, Dec 7, 2017 at 9:10 AM, Richard Chan wrote:


What is the best practice for iSCSI storage domains:

Many small targets vs a few large targets?

Specific example: if you wanted a 8TB storage domain would you
prepare a single 8TB LUN or (for example) 8 x 1 TB LUNs.


There could be many reasons to use each type.
Off the top of my head, I think that configuration-wise it will be 
better to configure more than one LUN.

That can be helpful if you plan to use external LUN disks for example.

Multiple targets might also come in useful if you plan to configure iSCSI 
multipath in the future; that way you can choose only part of the 
targets to apply the MPIO on. A sketch of the target handling follows below.
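
As an illustration of the multiple-target case, a hedged sketch with a
made-up portal address and IQN (the engine drives this for you when you
attach the storage domain, but the underlying session handling looks
like this):

iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260    # list targets on the portal
iscsiadm -m node -T iqn.2017-12.example.com:lun1 -p 192.168.1.100:3260 --login
iscsiadm -m session                                           # verify logged-in sessions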






-- 
Richard Chan



___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.1.8 Fourth Release Candidate is now available

2017-12-01 Thread FERNANDO FREDIANI

Thanks for the reply Lev.

Do you believe BZ 1464043 will be in 4.1.x at some point ?

Thanks
Fernando


On 01/12/2017 10:18, Lev Veyde wrote:

Hi Fernando,

>> BZ 1448831 - Issues with automating the configuration of VMs 
(cloud-init) - https://bugzilla.redhat.com/show_bug.cgi?id=1448831 
<https://bugzilla.redhat.com/show_bug.cgi?id=1448831>

That was already fixed in 4.1.3.5.

>> BZ 1464043 - Cloud-init network configuration doesn't work - 
https://bugzilla.redhat.com/show_bug.cgi?id=1464043 
<https://bugzilla.redhat.com/show_bug.cgi?id=1464043>

AFAIK it's fixed only in 4.2.

Thanks in advance,

On Fri, Dec 1, 2017 at 1:58 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Thanks Simone.

Let me ask about two bugfixes I am eagerly waiting for in 4.1.x;
they are already in the 4.2.0 release notes.

BZ 1448831 - Issues with automating the configuration of VMs
(cloud-init) - https://bugzilla.redhat.com/show_bug.cgi?id=1448831
<https://bugzilla.redhat.com/show_bug.cgi?id=1448831>
BZ 1464043 - Cloud-init network configuration doesn't work -
https://bugzilla.redhat.com/show_bug.cgi?id=1464043
<https://bugzilla.redhat.com/show_bug.cgi?id=1464043>

Have they been included in 4.1.8 or any other previous version ?
I have searched through the release notes and couldn't find
anything related. And if not, will they be at some point ?

Thanks
Fernando



On 01/12/2017 07:18, Simone Tiraboschi wrote:

The oVirt Project is pleased to announce the availability of the
Fourth Release Candidate of oVirt 4.1.8, as of November 30th, 2017

This update is the eighth in a series of stabilization updates to
the 4.1
series.

Starting from 4.1.5 oVirt supports libgfapi [5]. Using libgfapi
provides a
real performance boost for ovirt when using GlusterFS .
Due  to a known issue [6], using this will break live storage
migration.
This is expected to be fixed soon. If you do not use live storage
migration you can give it a try. Use [7] for more details on how
to  enable
it.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.1

See the release notes draft [3] for installation / upgrade
instructions and
a list of new features and bugs fixed.

Notes:
* oVirt Appliance is already available
* oVirt Live is already available[4]
* oVirt Node is already available[4]

Additional Resources:
* Read more about the oVirt 4.1.8 release
highlights:http://www.ovirt.org/release/4.1.8/
<http://www.ovirt.org/release/4.1.8/>
* Get more oVirt Project updates on Twitter:
https://twitter.com/ovirt
* Check out the latest project news on the oVirt
blog:http://www.ovirt.org/blog/ <http://www.ovirt.org/blog/>

[1] https://www.ovirt.org/community/
<https://www.ovirt.org/community/>
[2]
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
<https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt>
[3] http://www.ovirt.org/release/4.1.8/
<http://www.ovirt.org/release/4.1.8/>
[4] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
<http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/>
[5]

http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/

<http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/>
[6] https://bugzilla.redhat.com/show_bug.cgi?id=1306562
<https://bugzilla.redhat.com/show_bug.cgi?id=1306562>
[7]

http://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/

<http://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/>


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel

<https://www.redhat.com>

l...@redhat.com <mailto:l...@redhat.com> | lve...@redhat.com 
<mailto:lve...@redhat.com>


<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.1.8 Fourth Release Candidate is now available

2017-12-01 Thread FERNANDO FREDIANI

Thanks Simone.

Let me ask about two bugfixes I am eagerly waiting for in 4.1.x; they 
are already in the 4.2.0 release notes.


BZ 1448831 - Issues with automating the configuration of VMs 
(cloud-init) - https://bugzilla.redhat.com/show_bug.cgi?id=1448831
BZ 1464043 - Cloud-init network configuration doesn't work - 
https://bugzilla.redhat.com/show_bug.cgi?id=1464043


Have they been included in 4.1.8 or any other previous version ? I have 
searched through the release notes and couldn't find anything related. 
And if not, will they be at some point ?


Thanks
Fernando



On 01/12/2017 07:18, Simone Tiraboschi wrote:
The oVirt Project is pleased to announce the availability of the 
Fourth Release Candidate of oVirt 4.1.8, as of November 30th, 2017


This update is the eighth in a series of stabilization updates to the 4.1
series.

Starting from 4.1.5 oVirt supports libgfapi [5]. Using libgfapi provides a
real performance boost for ovirt when using GlusterFS .
Due  to a known issue [6], using this will break live storage migration.
This is expected to be fixed soon. If you do not use live storage
migration you can give it a try. Use [7] for more details on how to 
 enable

it.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.1

See the release notes draft [3] for installation / upgrade 
instructions and

a list of new features and bugs fixed.

Notes:
* oVirt Appliance is already available
* oVirt Live is already available[4]
* oVirt Node is already available[4]

Additional Resources:
* Read more about the oVirt 4.1.8 release 
highlights:http://www.ovirt.org/release/4.1.8/

* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt 
blog:http://www.ovirt.org/blog/


[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.8/
[4] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
[5] 
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/

[6] https://bugzilla.redhat.com/show_bug.cgi?id=1306562
[7] 
http://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosts been evacuated unnecessarily

2017-11-28 Thread FERNANDO FREDIANI

Hello folks.

Our oVirt (4.1.7.3-1.el7.centos), which runs in one Datacenter and 
controls Nodes locally and also remotely, lost communication with the 
remote Nodes in another Datacenter.
Up to this point nothing is wrong, as the Nodes can continue working as 
expected and running their Virtual Machines, each without depending on 
the oVirt Engine.


What happened at some point is that when the communication between 
Engine and Hosts came back, Hosts in the remote Datacenter got confused 
and initiated a Live Migration of ALL VMs from one of the hosts to 
another. I also had to restart the vdsmd agent on all Hosts in order to 
bring my environment back to sanity.


What adds even more strangeness to this scenario is that one of the 
Hosts affected by the need to restart VDSM doesn't even belong to the same 
Cluster as the others, and it still had to have vdsmd restarted.


I understand the Hosts can survive without the Engine online, with 
reduced possibilities, and can communicate between themselves, but without 
affecting the VMs or needing to do what happened in this scenario.


Am I wrong on any of the assumptions ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVS DPDK Performance

2017-11-28 Thread FERNANDO FREDIANI
Yes Irit, but you mean on the Host side, which has the OVS that is 
connected to the VMs? Or are there further steps to be done also on 
the VMs in certain cases? In short, it would be interesting if you didn't 
have to do much inside the VM, only on the host, to get some performance 
improvement gains. Do you think that is feasible ?


Fernando


On 28/11/2017 06:12, Irit Goihman wrote:

Hi Fernando,

On Mon, Nov 27, 2017 at 9:25 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Hello

As some may have seen recently OVS DPDK has been introduced to
oVirt (https://ovirt.org/blog/2017/09/ovs-dpdk/
<https://ovirt.org/blog/2017/09/ovs-dpdk/>). This is a very
interesting feature which can make a huge difference
in terms of network performance.

Just wanted to ask if anyone has tested it in any environment and
made any comparison, especially for packet forwarding (e.g. running
Virtual Routers or Virtual Firewalls with virtio) or packet
dropping as well.


You can see Intel OVS DPDK performance results compared to OVS native 
in 
https://download.01.org/packet-processing/ONPS2.1/Intel_ONP_Release_2.1_Performance_Test_Report_Rev1.0.pdf


Performance results of OVS DPDK in oVirt setup will be published soon.


One doubt I have, if someone could clarify: if I enable
DPDK on the Host, will any traffic forwarded to the VMs
automatically benefit from this performance gain of DPDK, or are there
additional steps that need to be put in place inside the VM when
sharing a physical Network Interface ?


Enabling DPDK is not enough. Since it's tightly coupled to the system 
hardware, a few steps are needed in order to achieve good performance 
results. For example: disabling interrupts, enabling hugepages, 
isolating CPU cores, allocating them to PMD threads and pinning vcpus.


More information can be found here:
http://docs.openvswitch.org/en/latest/intro/install/dpdk/
http://dpdk.org/doc/guides-16.04/linux_gsg/nic_perf_intel_platform.html
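
To make that concrete, a hedged sketch of the host-side knobs (the core
mask and hugepage counts are illustrative; adjust to your NUMA layout):

# kernel cmdline: reserve hugepages and isolate the PMD cores, e.g.
#   default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2-5
# then enable DPDK in Open vSwitch and pin the PMD threads:
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c   # cores 2-5
systemctl restart openvswitch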



Thanks
Fernando

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--

IRIT GOIHMAN

SOFTWARE ENGINEER

EMEA VIRTUALIZATION R&D

Red Hat EMEA <https://www.redhat.com/>

<https://red.ht/sig>  
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

@redhatnews <https://twitter.com/redhatnews> Red Hat 
<https://www.linkedin.com/company/red-hat> Red Hat 
<https://www.facebook.com/RedHatInc>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] OVS DPDK Performance

2017-11-27 Thread FERNANDO FREDIANI

Hello

As some may have seen, recently OVS DPDK has been introduced to oVirt 
(https://ovirt.org/blog/2017/09/ovs-dpdk/). This is a very interesting 
feature which can make a huge difference in terms of network 
performance.


Just wanted to ask if anyone has tested it in any environment and made 
any comparison, especially for packet forwarding (e.g. running Virtual 
Routers or Virtual Firewalls with virtio) or packet dropping as well.


One doubt I have, if someone could clarify: if I enable DPDK 
on the Host, will any traffic forwarded to the VMs automatically benefit 
from this performance gain of DPDK, or are there additional steps that need 
to be put in place inside the VM when sharing a physical Network Interface ?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very Slow Console Performance - Windows 10

2017-11-24 Thread FERNANDO FREDIANI
I made an Export of the same VM created in oVirt to a server running 
pure qemu/KVM, which creates new VM profiles with vram 65536, and it 
turned on the Windows 10 guest, which runs perfectly with that configuration.


I was reading some documentation suggesting it may be possible to change the 
file /usr/share/ovirt-engine/conf/osinfo-defaults.properties in order to 
change it for the profile you want, but I am not sure how these changes 
should be made: directly in that file, or in another one with just the 
custom configs? And how do I apply them immediately to any new or existing 
VM ? I am pretty confident that once vram is increased it should resolve 
the issue not only with Windows 10 VMs, but with others as well.


Can anyone give a hint about the correct procedure to apply this change ?
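
(For the record, one approach that has been suggested is a drop-in
override instead of editing osinfo-defaults.properties directly. This is
only a sketch: the key name and the windows_10x64 identifier are
assumptions to verify against your engine version.)

cat > /etc/ovirt-engine/osinfo.conf.d/90-vram.properties <<'EOF'
os.windows_10x64.devices.display.vramMultiplier.value = 2
EOF
systemctl restart ovirt-engine    # restart so the override is read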

Thanks in advance.
Fernando


On 23/11/2017 10:46, FERNANDO FREDIANI wrote:

Hello

Has anyone installed a Windows 10 Virtual Machine ?

I am having serious Console Performance issues even after installing 
the Red Hat QXL controller from the virtio-win ISO.
Someone reported in a forum having similar issues and resolved them by 
increasing the graphics card memory to 65536 by editing the XML 
(example below), but how is that possible in oVirt permanently ?



<video>
  <model type='qxl' ram='65536' vram='65536' heads='1'/>
  <address type='pci' ... function='0x0'/>
</video>



Thanks
Fernando


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Shared vs Local Storage for Datacenters

2017-11-23 Thread FERNANDO FREDIANI

I hope the same as well.

Actually I hope this concept can be retired so that we are able to add any 
type of storage in any type of DC, whether it has 1 host or multiple.


Regards
Fernando


On 23/11/2017 17:22, Matt . wrote:

Hi Guys,

I'm wondering at the moment what the actual difference is between
Shared and Local Storage Datacenters.

As far as I can see a Local DC supports NFS as well, which makes it a
more flexible DC storage-wise if you ask me, so why do both still exist?

Is it possible, in a decent way, to change a running Shared Storage DC
to a Local one and keep the NFS shares, adding some local mounts for
specific hosts?

I hope someone can clarify!

Cheers,

Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Very Slow Console Performance - Windows 10

2017-11-23 Thread FERNANDO FREDIANI

Hello

Has anyone installed a Windows 10 Virtual Machine ?

I am having serious Console Performance issues even after installing the 
Red Hat QXL controller from the virtio-win ISO.
Someone reported in a forum having similar issues and resolved them by 
increasing the graphics card memory to 65536 by editing the XML (example 
below), but how is that possible in oVirt permanently ?



<video>
  <model type='qxl' ram='65536' vram='65536' heads='1'/>
  <address type='pci' ... function='0x0'/>
</video>



Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] recommendations for best performance and reliability

2017-11-13 Thread FERNANDO FREDIANI

Hello Rudi

If you have a 4th server it may work, but I am not knowledgeable about 
Gluster's GEOreplication. Perhaps someone else can advise.
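
(For reference, the basic GlusterFS geo-replication flow looks roughly
like the following -- a sketch with made-up volume and host names; please
verify against the Gluster documentation:)

gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status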


With regards to RAID 5 for 4 disks, this is intended for capacity and, as 
mentioned, 4 disks is the maximum I would use for RAID 5. As you have SSDs 
and intend to use a caching technique like bcache or dm-cache, this 
should cover the write performance hit of RAID 5.
If you consider using RAID 6 on only 4 disks you are much better off 
using RAID 10, as you will have double the write performance and the same 
capacity. I normally use RAID 10 or, in ZFS configurations (not this 
case), multiple vdevs of RAID6.


For RAID in Linux I have been using mdraid; a minimal sketch follows below. 
I know LVM does some RAID but I have personally never done it myself, so can't advise.
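
A minimal mdadm sketch for the 4-disk case (device names are
illustrative; use --level=5 instead if you go the RAID 5 route):

mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat    # watch the initial resync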


Regards.
Fernando


On 13/11/2017 10:19, Rudi Ahlers wrote:

Hi Fernando,

Thanx.

I meant to say, the 4th server will be in another office. It's about 
3Km away and I was thinking of using Gluster's GEOreplication for this 
purpose.


I am not a fond user of RAID5 at all. But this raises the question: 
does RAID add any unnecessary overhead? I would rather run RAID 6 or 
RAID 10.
And then, if RAID is the preferred way (over LVM?), as I don't have 
dedicated hardware RAID cards, would mdraid add any benefit?


On Mon, Nov 13, 2017 at 1:47 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Helli Rudi

Nice specs.

I wouldn't use GlusterFS for this setup with the third server in a
different location. Just have this server as a standalone one and
replicate the VMs there. You won't have real-time replication, but
much less hassle; otherwise you are likely to have constant failures,
especially knowing you have a wireless link.

For the SSDs I have been using bcache with success. Relatively
simple to set up, with pretty good performance.

For your specs, as you have 4 mechanical disks, I would recommend
a RAID 5 across them (4 disks is my limit for RAID 5)
and a RAID 0 made of SSDs for the bcache device. If the RAID 0
fails for any reason the setup will fall back directly to the mechanical
disks, and you can do maintenance on the Node, doing live migration
in order to replace the failed disks.

However, as you have 2 remaining servers to create your
cluster, you may need to consider GlusterFS on top of this RAID
to have replication and high availability.

Hope it helps.

Fernando


On 13/11/2017 08:03, Rudi Ahlers wrote:

Hi,

Can someone please give me some pointers, what would be the best
setup for performance and reliability?

We have the following hardware setup:

3x Supermicro server with following features per server:
128GB RAM
4x 8TB SATA HDD
2x SSD drives (intel_ssdsc2ba400g4 - 400GB DC S3710)
2x 12 core CPU (Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz)
Quad port 10GbE Intel NIC
2x 10GB Cisco switches (to isolate storage network from LAN)

One of the servers will be in another office, with a 600Mb
wireless link for Disaster Recovery.

What is recommended for the best setup in terms of redundancy and
speed?

I am guessing GlusterFS with a Distributed Striped Replicated
Volume across 3 of the servers.

For added performance I want to use the SSD drives, perhaps with
dm-cache?

Should I combine the 4x HDD's using LVM on each host node?
What about RAID 6?



Virtual Machines will then reside on the oVirt Cluster and any
one of the 3 host nodes can fail, or any single HDD can fail and
all should still work, right?




-- 
Kind Regards

Rudi Ahlers
Website: http://www.rudiahlers.co.za


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] recommendations for best performance and reliability

2017-11-13 Thread FERNANDO FREDIANI

Helli Rudi

Nice specs.

I wouldn't use GlusterFS for this setup with the third server in a 
different location. Just have this server as a standalone one and replicate 
the VMs there. You won't have real-time replication, but much less 
hassle; otherwise you are likely to have constant failures, especially 
knowing you have a wireless link.


For the SSDs I have been using bcache with success. Relatively simple to 
set up, with pretty good performance; a minimal sketch follows below.
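
A minimal sketch of the bcache setup, assuming the HDD array is /dev/md0
and the SSD RAID 0 is /dev/md1 (names are illustrative):

make-bcache -B /dev/md0    # register the backing (HDD) device
make-bcache -C /dev/md1    # register the SSD array as the cache set
# attach the cache set to the backing device (UUID from bcache-super-show):
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode    # optional, faster writes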


For your specs, as you have 4 mechanical disks, I would recommend a 
RAID 5 across them (4 disks is my limit for RAID 5) and a RAID 0 
made of SSDs for the bcache device. If the RAID 0 fails for any reason 
the setup will fall back directly to the mechanical disks, and you can do 
maintenance on the Node, doing live migration in order to replace the 
failed disks.


However, as you have 2 remaining servers to create your cluster, 
you may need to consider GlusterFS on top of this RAID to have 
replication and high availability.


Hope it helps.

Fernando


On 13/11/2017 08:03, Rudi Ahlers wrote:

Hi,

Can someone please give me some pointers, what would be the best setup 
for performance and reliability?


We have the following hardware setup:

3x Supermicro server with following features per server:
128GB RAM
4x 8TB SATA HDD
2x SSD drives (intel_ssdsc2ba400g4 - 400GB DC S3710)
2x 12 core CPU (Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz)
Quad port 10GbE Intel NIC
2x 10GB Cisco switches (to isolate storage network from LAN)

One of the servers will be in another office, with a 600Mb wireless 
link for Disaster Recovery.


What is recommended for the best setup in terms of redundancy and speed?

I am guessing GlusterFS with a Distributed Striped Replicated Volume 
across 3 of the servers.


For added performance I want to use the SSD drives, perhaps with dm-cache?

Should I combine the 4x HDD's using LVM on each host node?
What about RAID 6?



Virtual Machines will then reside on the oVirt Cluster and any one of 
the 3 host nodes can fail, or any single HDD can fail and all should 
still work, right?





--
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fixed ovirt-node-ng packages

2017-11-09 Thread FERNANDO FREDIANI

Thanks Lev.

In fact the bug wasn't network related, so it probably has nothing to do 
with what I could observe.


I didn't open a bug because I wasn't sure if it was an oVirt issue or not 
and was trying to confirm. The only relation was that after the reboot 
of one of the nodes everything in the whole platform (including other 
VMs running on other hosts) went back to normal. I will double check and 
if I can gather enough details I will send them in a new bug report.


Thanks so far.
Fernando


On 09/11/2017 14:29, Lev Veyde wrote:

Hi Fernando,

The issue fixed is : https://bugzilla.redhat.com/show_bug.cgi?id=1510858

So it doesn't look related, but regardless if the issue returns can 
you please open a bug with all the details?


Thanks in advance,

On Thu, Nov 9, 2017 at 6:14 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Thanks Lev. Could you please mention in a short sentence what this
critical issue is related to ?

Coincidentally or not, today I had a major issue in our node related
to the network which was only resolved after I live migrated all VMs out
from a node and rebooted it. It seems this network issue was
happening on all hosts, not only the one rebooted.

Fernando


On 09/11/2017 12:54, Lev Veyde wrote:

Unfortunately shortly after the oVirt 4.1.7 release we discovered
a critical issue with ovirt-node-ng.

The issue was promptly handled, and fixed packages were pushed
into the repo.
Also the ovirt-node-ng-installer ISO was rebuilt.

Please make sure to update your repo and use the fixed
ovirt-node-ng-installer ISO:


http://plain.resources.ovirt.org/pub/ovirt-4.1/iso/ovirt-node-ng-installer-ovirt/4.1-2017110820/ovirt-node-ng-installer-ovirt-4.1-2017110820.iso

<http://plain.resources.ovirt.org/pub/ovirt-4.1/iso/ovirt-node-ng-installer-ovirt/4.1-2017110820/ovirt-node-ng-installer-ovirt-4.1-2017110820.iso>

-- 


Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel

<https://www.redhat.com>

l...@redhat.com <mailto:l...@redhat.com> | lve...@redhat.com
<mailto:lve...@redhat.com>

<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel

<https://www.redhat.com>

l...@redhat.com <mailto:l...@redhat.com> | lve...@redhat.com 
<mailto:lve...@redhat.com>


<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fixed ovirt-node-ng packages

2017-11-09 Thread FERNANDO FREDIANI
Thanks Lev. Could you please mention in a short sentence what this 
critical issue is related to ?


Coincidentally or not, today I had a major issue in our node related to 
the network which was only resolved after I live migrated all VMs out from a 
node and rebooted it. It seems this network issue was happening on all 
hosts, not only the one rebooted.


Fernando


On 09/11/2017 12:54, Lev Veyde wrote:
Unfortunately shortly after the oVirt 4.1.7 release we discovered a 
critical issue with ovirt-node-ng.


The issue was promptly handled, and fixed packages were pushed into 
the repo.

Also the ovirt-node-ng-installer ISO was rebuilt.

Please make sure to update your repo and use the fixed 
ovirt-node-ng-installer ISO:


http://plain.resources.ovirt.org/pub/ovirt-4.1/iso/ovirt-node-ng-installer-ovirt/4.1-2017110820/ovirt-node-ng-installer-ovirt-4.1-2017110820.iso

--

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com  | lve...@redhat.com 




TRIED. TESTED. TRUSTED. 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-11-07 Thread FERNANDO FREDIANI

Fantastic Greg and thanks for the feedback about this as well.

Keep up the good work.

Fernando


On 07/11/2017 16:45, Greg Sheremeta wrote:


On Wed, Nov 1, 2017 at 3:19 PM, Yaniv Kaul <yk...@redhat.com 
<mailto:yk...@redhat.com>> wrote:




On Wed, Nov 1, 2017 at 7:36 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:

Agreed. Otherwise it would apply 'one size fits all' as
mentioned and that is not the case.

Applying guidelines is something very good to do, by removing
stuff that may only be 'recent trend' or 'buzz stuff'
considering the audience that will use it is even better
practice. I don't think oVirt Admins can be considered masses
in that sense.

Has anyone seen a NOC or Operating Center using Tablets or
Mobile Phones to manage their infrastructure ? No, they use
Desktops/Laptops and a Mouse ;-)


In all the cool television show they have a tablet ;-)

Seriously though, today more and more laptops have a
touch-sensitive screen. And a long press is equal to right click
and I do find it comfortable to have.
I didn't know GMail had a right click-menu, but since I've learned
about it, I've been using it and finding it handy (even though the
buttons are just on top and not far away from the email list).
Y.


Everyone: thanks for the feedback so far!

After some team discussion and consultation with the PatternFly team, 
we're going to re-add a right-click menu for the grids. A patch should 
be merged in the next week.


Inline image 1


Best wishes,
Greg

--

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA

<https://www.redhat.com/>

gsher...@redhat.com <mailto:gsher...@redhat.com> IRC: gshereme

<https://red.ht/sig>



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Messed up upgrade to 4.2. Solution badly needed.

2017-11-03 Thread FERNANDO FREDIANI
oVirt upgrades between versions (major or minor) have always been a bit 
difficult for some people, including me. It would be nice if this could 
be more extensively tested before release. As people are mostly eager to 
upgrade, this can be beneficial for the dev team to gather even more feedback 
from early adopters.


Regards
Fernando Frediani


On 03/11/2017 10:21, Marcin Jessa wrote:

Hi guys.

I have a VM with the hosted engine and a two node setup. When the beta came out I was 
“OH! New shiny tech! Let’s try it!”. You all know the feeling.
Unfortunately the upgrade process did not go as expected.
I created a backup of my 4.1 installation and started to upgrade the VM. All 
went well. I then migrated all the running VMs to my second node, put the 
cluster in global maintenance mode and updated one of the nodes.
Then I ran hosted-engine --upgrade-appliance on the node but it fails:

[ ERROR ] Failed to execute stage 'Environment customization': exceptions must 
be old-style classes or derived from BaseException, not str

The log file says:
2017-11-03 13:12:48,259+0100 DEBUG otopi.context context._executeMethod:143 
method exception
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
_executeMethod
 method['method']()
   File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-upgradeappliance/engine/misc.py",
 line 180, in _check_spm
 'Unable to find this host in the engine, '
TypeError: exceptions must be old-style classes or derived from BaseException, 
not str
2017-11-03 13:12:48,259+0100 ERROR otopi.context context._executeMethod:152 
Failed to execute stage 'Environment customization': exceptions must be 
old-style classes or derived from BaseException, not str
2017-11-03 13:12:48,260+0100 DEBUG otopi.context context.dumpEnvironment:821 
ENVIRONMENT DUMP - BEGIN
2017-11-03 13:12:48,260+0100 DEBUG otopi.context context.dumpEnvironment:831 
ENV BASE/error=bool:'True'
2017-11-03 13:12:48,260+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV 
BASE/exceptionInfo=list:'[(<type 'exceptions.TypeError'>, TypeError('exceptions must 
be old-style classes or derived from BaseException, not str',), <traceback object>)]'
2017-11-03 13:12:48,261+0100 DEBUG otopi.context context.dumpEnvironment:835 
ENVIRONMENT DUMP - END

I can login to the hosted engine but the nodes are down, the VMs are down and 
the storage is not connected.

Can you please advice what to do now?
Is there a way to upgrade my setup?
Should I disconnect my second node, which is also upgraded to 4.2 now and try 
to install hosted engine from scratch?
Anything else I can do to save my VMs?


Cheers
Marcin.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-11-03 Thread FERNANDO FREDIANI

+1 for complete diskless hosts
-1 for mobile usability over desktop :-)

Regards
Fernando

On 02/11/2017 20:46, Arman Khalatyan wrote:

I just tested the new 4.2, looks new shiny UI, thanks.

I would like to join Jiri's statement: ovirt should become more
stable, clean and useful.
The right or left clicks, or UI designs, mobile friendly or not; those
features are secondary tasks for me.
For those who would like to manage the vms from mobile devices,
they can use the mOvirt app.
I wish that the development team would concentrate on the main
advertised features to make them stable.
as a user, I wish for the following points to make it stronger:
- please make ovirt a FULL HA solution
- one of the weak points of ovirt is the spm; this should go away as
the first point, not the right click one :).
- hosts management like foreman, but without foreman.
- a strong solution with multirail HA storage
- move hosts completely to a disk-less infrastructure, easy to scale up
- a scheduled backup solution integrated in the gui/api
- reasonable reports (similar to dwd in 3.6)
Most of the points are almost done, but we always have, here or there,
half-solved problems.


Greetings from Potsdam,
Arman.

PS
an Ovirt user since 3.x
8hosts >50Vms
4hosts > 6VMS
10G,IB,RDMA.
looking to deploy ovirt on a cluster environment on user demand.

On Thu, Nov 2, 2017 at 9:34 PM, Jiří Sléžka <jiri.sle...@slu.cz> wrote:

On 10/31/2017 06:57 PM, Oved Ourfali wrote:

As mentioned earlier, this is one motivation but not the only one. You
see right click less and less in web applications, as it isn't
considered a good user experience. This is also the patternfly guideline
(patternfly is a framework we heavily use throughout the application).

We will however consider bringing this back if there will be high demand.

I'm using right click time to time, but for me is much more important
clean, simple and compatible UI. Especially if this means it will be
possible to simple select and copy any text or log messages from UI.
This is my biggest pain when interacting with manager.

Cheers, Jiri


Thanks for the feedback!
Oved

On Oct 31, 2017 7:50 PM, "Darrell Budic" <bu...@onholyground.com
<mailto:bu...@onholyground.com>> wrote:

 Agreed. I use the right click functionality all the time and will
 miss it. With 70+ VMs, I may check status in a mobile interface, but
 I’m never going to use it for primary work. Please prioritize ease
 of use on Desktop over Mobile!



 ----
 *From:* FERNANDO FREDIANI <fernando.fredi...@upx.com
 <mailto:fernando.fredi...@upx.com>>
 *Subject:* Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release
 is now available for testing
 *Date:* October 31, 2017 at 11:59:20 AM CDT
 *To:* users@ovirt.org <mailto:users@ovirt.org>


 On 31/10/2017 13:43, Alexander Wels wrote:

 Will the right click dialog be available in the final release?
 Because,
 currently in 4.2 we need to go at the up right corner to
 interact with
 object (migrate, maintenance...)


 Short answer: No, we removed it on purpose.

 Long answer: No, here are the reasons why:
 - We are attempting to get the UI more mobile friendly, and while
 its not 100%
 there yet, it is actually quite useable on a mobile device now.
 Mobile devices
 don't have a right click, so hiding functionality in there would
 make no
 sense.

 Please don't put mobile usage over Desktop usage. While mobile
 usage is nice to have in "certain" situations. In real day by day
 operation nobody uses mobile devices to do their deployments and
 manage their large environments. If having both options where you
 can switch between then is nice, but if something should prevail
 should always be Desktop. We are not talking about a Stock Trading
 interface or something you need that level or flexibility and
 mobility to do static things anytime anywhere.

 So I beg you to consider well before remove things which are
 pretty useful for a day by day and real management usage because
 of a new trend or buzz stuff.
 Right click is always on popular on Desktop enviroments and will
 be for quite a while.

 - You can now right click and get the browsers menu instead of
 ours and you
 can do things like copy from the menu.
 - We replicated all the functionality from the menu in the
 buttons/kebab menu
 available on the right. Our goal was to have all the commonly
 used actions as
 a button, and less often used actions in the kebab to declutter
 the interface.
 We traded an extra click for some mouse travel
 - Lots of people didn't realize there even was a right click menu
 because its
 a web interface, and they couldn't find some functionality that
 was only
available in the right click menu.

Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-11-01 Thread FERNANDO FREDIANI
Agreed. Otherwise it would apply 'one size fits all' as mentioned and 
that is not the case.


Applying guidelines is something very good to do; removing stuff that 
may only be a 'recent trend' or 'buzz stuff', considering the audience that 
will use it, is even better practice. I don't think oVirt Admins can be 
considered the masses in that sense.


Has anyone seen a NOC or Operating Center using Tablets or Mobile Phones 
to manage their infrastructure ? No, they use Desktops/Laptops and a 
Mouse ;-)


Fernando


On 01/11/2017 13:25, Robert Story wrote:

On Tue 2017-10-31 19:57:32+0200 Oved wrote:

As mentioned earlier, this is one motivation but not the only one.
You see right click less and less in web applications, as it isn't
considered a good user experience. This is also the patternfly
guideline (patternfly is a framework we heavily use throughout the
application).

Their user guideline is probably based on UI for the masses. I'd argue
that oVirt, particularly the admin portal, is for a much more
technical audience. I think right-click should stay for admin portal.

Users are more likely to be less technical. I'd care much less if
everything in the user portal had its own button or was in a menu list.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-10-31 Thread FERNANDO FREDIANI
The question is: who is the user ? There are different types of them for 
different purposes.


Fernando


On 31/10/2017 15:57, Oved Ourfali wrote:
As mentioned earlier, this is one motivation but not the only one. You 
see right click less and less in web applications, as it isn't 
considered a good user experience. This is also the patternfly 
guideline (patternfly is a framework we heavily use throughout the 
application).


We will however consider bringing this back if there will be high demand.

Thanks for the feedback!
Oved

On Oct 31, 2017 7:50 PM, "Darrell Budic" <bu...@onholyground.com 
<mailto:bu...@onholyground.com>> wrote:


Agreed. I use the right click functionality all the time and will
miss it. With 70+ VMs, I may check status in a mobile interface,
but I’m never going to use it for primary work. Please prioritize
ease of use on Desktop over Mobile!



----
*From:* FERNANDO FREDIANI <fernando.fredi...@upx.com
<mailto:fernando.fredi...@upx.com>>
*Subject:* Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release
is now available for testing
*Date:* October 31, 2017 at 11:59:20 AM CDT
*To:* users@ovirt.org <mailto:users@ovirt.org>


On 31/10/2017 13:43, Alexander Wels wrote:


Will the right click dialog be available in the final release?
Because,
currently in 4.2 we need to go at the up right corner to
interact with
object (migrate, maintenance...)


Short answer: No, we removed it on purpose.

Long answer: No, here are the reasons why:
- We are attempting to get the UI more mobile friendly, and
while its not 100%
there yet, it is actually quite useable on a mobile device now.
Mobile devices
don't have a right click, so hiding functionality in there would
make no
sense.

Please don't put mobile usage over Desktop usage. While mobile
usage is nice to have in "certain" situations. In real day by day
operation nobody uses mobile devices to do their deployments and
manage their large environments. If having both options where you
can switch between then is nice, but if something should prevail
should always be Desktop. We are not talking about a Stock
Trading interface or something you need that level or flexibility
and mobility to do static things anytime anywhere.

So I beg you to consider well before remove things which are
pretty useful for a day by day and real management usage because
of a new trend or buzz stuff.
Right click is always on popular on Desktop enviroments and will
be for quite a while.

- You can now right click and get the browsers menu instead of
ours and you
can do things like copy from the menu.
- We replicated all the functionality from the menu in the
buttons/kebab menu
available on the right. Our goal was to have all the commonly
used actions as
a button, and less often used actions in the kebab to declutter
the interface.
We traded an extra click for some mouse travel
- Lots of people didn't realize there even was a right click
menu because its
a web interface, and they couldn't find some functionality that
was only
available in the right click menu.

Now that being said, we are still debating if it was a good move
or not. For
now we want to see how it plays out, if a lot of people want it
back, it is
certainly possible we will put it back.


that is something you are interested in. Its also much faster
and better
than before.


On 31/10/2017 10:13, Sandro Bonazzola wrote:

The oVirt Project is pleased to announce the availability of
the First
Beta Release of oVirt 4.2.0, as of October 31st, 2017


This is pre-release software. This pre-release should not
be used
in production.

Please take a look at our community page[1] to learn how to ask
questions and interact with developers and users. All issues
or bugs
should be reported via oVirt Bugzilla[2].

This update is the first beta release of the 4.2.0 version. This
release brings more than 230 enhancements and more than one
thousand
bug fixes, including more than 380 high or urgent severity
fixes, on
top of oVirt 4.1 series.


What's new in oVirt 4.2.0?

  *
The Administration Portal has been completely redesigned using
 Patternfly, a widely adopted standard in web application design.
 It now features a cleaner, more intuitive design, for an improved
 user experience.
 *
There is an all-new VM Portal for non-admin users.
 *
A new High Performance virtual machine type has been added to the
 New VM dialog box in the Administration Portal.
 *
Open Virtual Network (OVN) adds support for Open vSwitch software
defined networking (SDN).

Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-10-31 Thread FERNANDO FREDIANI


On 31/10/2017 13:43, Alexander Wels wrote:


Will the right click dialog be available in the final release? Because,
currently in 4.2 we need to go at the up right corner to interact with
object (migrate, maintenance...)


Short answer: No, we removed it on purpose.

Long answer: No, here are the reasons why:
- We are attempting to get the UI more mobile friendly, and while its not 100%
there yet, it is actually quite useable on a mobile device now. Mobile devices
don't have a right click, so hiding functionality in there would make no
sense.
Please don't put mobile usage over Desktop usage. While mobile usage is 
nice to have in "certain" situations, in real day-by-day operation 
nobody uses mobile devices to do their deployments and manage their 
large environments. Having both options where you can switch between 
them is nice, but if something should prevail it should always be Desktop. 
We are not talking about a Stock Trading interface or something where you need 
that level of flexibility and mobility to do static things anytime anywhere.


So I beg you to consider well before removing things which are pretty 
useful for day-by-day, real management usage because of a new trend 
or buzz stuff.
Right click has always been popular on Desktop environments and will be for 
quite a while.

- You can now right click and get the browsers menu instead of ours and you
can do things like copy from the menu.
- We replicated all the functionality from the menu in the buttons/kebab menu
available on the right. Our goal was to have all the commonly used actions as
a button, and less often used actions in the kebab to declutter the interface.
We traded an extra click for some mouse travel
- Lots of people didn't realize there even was a right click menu because its
a web interface, and they couldn't find some functionality that was only
available in the right click menu.

Now that being said, we are still debating if it was a good move or not. For
now we want to see how it plays out, if a lot of people want it back, it is
certainly possible we will put it back.


that is something you are interested in. Its also much faster and better
than before.


On 31/10/2017 10:13, Sandro Bonazzola wrote:

The oVirt Project is pleased to announce the availability of the First
Beta Release of oVirt 4.2.0, as of October 31st, 2017


This is pre-release software. This pre-release should not be used
in production.

Please take a look at our community page[1] to learn how to ask
questions and interact with developers and users. All issues or bugs
should be reported via oVirt Bugzilla[2].

This update is the first beta release of the 4.2.0 version. This
release brings more than 230 enhancements and more than one thousand
bug fixes, including more than 380 high or urgent severity fixes, on
top of oVirt 4.1 series.


What's new in oVirt 4.2.0?

   *
   
  The Administration Portal has been completely redesigned using

  Patternfly, a widely adopted standard in web application design.
  It now features a cleaner, more intuitive design, for an improved
  user experience.
   
   *
   
  There is an all-new VM Portal for non-admin users.
   
   *
   
  A new High Performance virtual machine type has been added to the

  New VM dialog box in the Administration Portal.
   
   *
   
  Open Virtual Network (OVN) adds support for Open vSwitch software

  defined networking (SDN).
   
   *
   
  oVirt now supports Nvidia vGPU.
   
   *
   
  The ovirt-ansible-roles package helps users with common

  administration tasks.
   
   *
   
  Virt-v2v now supports Debian/Ubuntu based VMs.


For more information about these and other features, check out the
oVirt 4.2.0 blog post
.


This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later


This release supports Hypervisor Hosts on x86_64 and ppc64le
architectures for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later

* oVirt Node 4.2 (available for x86_64 only)


See the release notes draft [3] for installation / upgrade
instructions and a list of new features and bugs fixed.


Notes:

- oVirt Appliance is already available.

- An async release of oVirt Node will follow soon.


Additional Resources:

* Read more about the oVirt 4.2.0 release highlights:
http://www.ovirt.org/release/4.2.0/


* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] https://www.ovirt.org/community/ 

[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt


[3] http://www.ovirt.org/release/4.2.0/


[4] 

Re: [ovirt-users] How to start oVirt in GUI

2017-10-31 Thread FERNANDO FREDIANI

Just a note about this topic.

I miss a TUI. I know it existed before, and it's something pretty handy 
when adding new hosts and for some troubleshooting.


Fernando


On 31/10/2017 13:12, Nathanaël Blanchet wrote:




Le 31/10/2017 à 15:52, Stephen Liu a écrit :

Ryan,

Thanks for your advice.

My problem here is that the oVirt node is unable to connect to the Internet. 
I have neither a text editor here to edit /etc/sysconfig/network-scripts/ifcfg-eth0


nor ifconfig available here.

ifconfig is deprecated, you may use iproute:
ip ad ad 10.0.0.x/24 dev eth0                  # short for: ip addr add
ip ro ad default via 10.0.0.x                  # short for: ip route add
echo "nameserver 8.8.8.8" > /etc/resolv.conf   # set a DNS resolver


# ip addr show eth0

doesn't show an ip address.

I have tried for several hours without a breakthrough. Please help.

Do I need to install CentOS 7 on a VM as the host and then install 
oVirt on CentOS 7?


Thanks

Regards
SL


On Tuesday, October 31, 2017, 10:10:55 PM GMT+8, Ryan Barry 
 wrote:



oVirt Node does not ship with an X server.

The recommended setup path is to install oVirt, then open a web 
browser and browse to:


http://ip.address.of.node:9090

And click the "Virtualization" tab after you log in. Use this to 
configure oVirt Hosted Engine, and the web console for the Engine 
will manage everything which cockpit cannot


On Tue, Oct 31, 2017 at 9:51 AM, Stephen Liu wrote:


Hi all,

I have ovirt-node-ng-installer-ovirt-4.1-2017103006.iso
installed on KVM as a VM.  It is now running, but only with a console
without a GUI.  I can log in to run commands. Copy and Paste between
Host and VM is NOT working (I suppose because it is not running in
graphic mode).

Please advise how to start GUI oVirt?

Thanks

Regards
SL

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





--

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA 

rba...@redhat.com  M: +1-651-815-9306 IM: 
rbarry






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Network Supervision
IT Infrastructure Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr  



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-10-31 Thread FERNANDO FREDIANI
Great. A much better Admin Portal than the usual one. Congratulations. 
Hope it keeps getting improvements, as it's very much welcome and needed.


Fernando

On 31/10/2017 10:13, Sandro Bonazzola wrote:


The oVirt Project is pleased to announce the availability of the First 
Beta Release of oVirt 4.2.0, as of October 31st, 2017



This is pre-release software. This pre-release should not be used 
in production.


Please take a look at our community page[1] to learn how to ask 
questions and interact with developers and users. All issues or bugs 
should be reported via oVirt Bugzilla[2].


This update is the first beta release of the 4.2.0 version. This 
release brings more than 230 enhancements and more than one thousand 
bug fixes, including more than 380 high or urgent severity fixes, on 
top of oVirt 4.1 series.



What's new in oVirt 4.2.0?

 *

The Administration Portal has been completely redesigned using
Patternfly, a widely adopted standard in web application design.
It now features a cleaner, more intuitive design, for an improved
user experience.

 *

There is an all-new VM Portal for non-admin users.

 *

A new High Performance virtual machine type has been added to the
New VM dialog box in the Administration Portal.

 *

Open Virtual Network (OVN) adds support for Open vSwitch software
defined networking (SDN).

 *

oVirt now supports Nvidia vGPU.

 *

The ovirt-ansible-roles package helps users with common
administration tasks.

 *

Virt-v2v now supports Debian/Ubuntu based VMs.


For more information about these and other features, check out the 
oVirt 4.2.0 blog post 
.



This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later


This release supports Hypervisor Hosts on x86_64 and ppc64le 
architectures for:


* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later

* oVirt Node 4.2 (available for x86_64 only)


See the release notes draft [3] for installation / upgrade 
instructions and a list of new features and bugs fixed.



Notes:

- oVirt Appliance is already available.

- An async release of oVirt Node will follow soon.


Additional Resources:

* Read more about the oVirt 4.2.0 release highlights: 
http://www.ovirt.org/release/4.2.0/ 


* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog: 
http://www.ovirt.org/blog/



[1] https://www.ovirt.org/community/ 

[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt 



[3] http://www.ovirt.org/release/4.2.0/ 



[4] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/ 



--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

  
TRIED. TESTED. TRUSTED. 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New post on oVirt blog: Introducing High Performance Virtual Machines

2017-10-31 Thread FERNANDO FREDIANI


On 31/10/2017 11:11, Yaniv Kaul wrote:




DPDK for sure is a fantastic feature for networking environments.


A bit over-rated, for most workloads, if you ask me...
It currently requires a bit too much configuration (in my opinion), but 
certainly there are workloads that critically need it.

Y.

Agreed




Fernando


On 31/10/2017 05:56, Yaniv Kaul wrote:



On Mon, Oct 30, 2017 at 9:33 PM, Vinícius Ferrão
> wrote:

Hello John,

This is very interesting news for HPC guys. According to
the blog post there's a new “CPU passthrough” function, which
is interesting.

Which market are you guys targeting? I’m looking forward
to virtual nodes in an HPC environment.


Any intensive workload, CPU and memory bound especially, would
benefit from the configuration.
In memory DBs (SAP Hana, Redis and friends) for example,
MapReduce (Hadoop), etc.

For some workloads, low latency networking is also important
(especially for nodes inter-communication) and we are looking at
DPDK for it. See[1].

Y.

[1] https://www.ovirt.org/blog/2017/09/ovs-dpdk/



Thanks,
V.


On 30 Oct 2017, at 07:38, John Marks wrote:

Hello!
Just a quick heads up that there is a new post on the oVirt
blog:

Introducing High Performance Virtual Machines


In a nutshell:

oVirt 4.2.0 Alpha, released on September 28, features a new
high performance virtual machine type. It brings VM
performance closer to bare metal performance. Read the blog
post.


See you on the oVirt blog!

Best,

John
-- 
John Marks

Technical Writer, oVirt
Red Hat Israel
Cell: +972 52 8644 491

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users









___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New post on oVirt blog: Introducing High Performance Virtual Machines

2017-10-31 Thread FERNANDO FREDIANI

Hi.

Does the virtualization layer cause any significant impact on VM 
performance, even for a high-CPU VM that would justify the use of this feature?


DPDK for sure is a fantastic feature for networking environments.

Fernando

On 31/10/2017 05:56, Yaniv Kaul wrote:



On Mon, Oct 30, 2017 at 9:33 PM, Vinícius Ferrão wrote:


Hello John,

This is very interesting news for HPC guys. According to the
blog post there's a new “CPU passthrough” function, which is
interesting.

Which market are you guys targeting? I'm looking forward to
virtual nodes in an HPC environment.


Any intensive workload, CPU and memory bound especially, would benefit 
from the configuration.
In memory DBs (SAP Hana, Redis and friends) for example, MapReduce 
(Hadoop), etc.


For some workloads, low latency networking is also important 
(especially for nodes inter-communication) and we are looking at DPDK 
for it. See[1].


Y.

[1] https://www.ovirt.org/blog/2017/09/ovs-dpdk/


Thanks,
V.


On 30 Oct 2017, at 07:38, John Marks wrote:

Hello!
Just a quick heads up that there is a new post on the oVirt blog:

Introducing High Performance Virtual Machines


In a nutshell:

oVirt 4.2.0 Alpha, released on September 28, features a new high
performance virtual machine type. It brings VM performance closer
to bare metal performance. Read the blog post.


See you on the oVirt blog!

Best,

John
-- 
John Marks

Technical Writer, oVirt
Red Hat Israel
Cell: +972 52 8644 491

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How is everyone performing backups?

2017-10-27 Thread FERNANDO FREDIANI

Thanks for that.

Does anyone know of a way to back up VMs in OVF format, or even output 
them to a .zip, .gz, etc.? Is there any way for a server which is not 
necessarily on the same LAN (an offsite backup storage) to receive these 
VMs compressed in a single file?


In other words, is there any way to perform these backups where you 
don't necessarily need to use Export Domains?


Fernando


On 27/10/2017 15:14, Niyazi Elvan wrote:

Hi,

You may take a look at https://github.com/openbacchus/bacchus

Cheers.



On 27 October 2017 at 18:27, Wesley Stewart wrote:


Originally I used a script I found on GitHub, but since updating
I can't seem to get it to work again.

I was just curious if there were any other, more elegant
solutions? I am currently running a single host and local
storage, but I would love to back up VMs automatically once a week
or so to an NFS share.

Just curious if anyone had tackled this issue.

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--
Niyazi Elvan




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage Performance

2017-10-26 Thread FERNANDO FREDIANI
That was my impression too, but unfortunately someone said on this mailing 
list recently that Gluster isn't clever enough to work without RAID 
controllers, and that when disks fail it imposes some difficulties for 
replacement. Perhaps someone with more knowledge could clarify this 
point, which would certainly benefit people.


Fernando


On 26/10/2017 10:59, Juan Pablo wrote:
Hi, can you check IOPS and state the number of VMs? Run: iostat -x 1 for a 
while =)
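For instance (a hedged sketch; the columns are standard sysstat iostat
output, nothing oVirt specific):

    iostat -x 1
    # r/s, w/s -> read/write IOPS per device
    # await   -> average I/O latency in ms
    # %util   -> how saturated the device is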


Isn't RAID discouraged? AFAIK gluster likes JBOD, am I wrong?


regards,
JP

2017-10-25 12:05 GMT-03:00 Bryan Sockel:


Have a question in regards to storage performance.  I have a
gluster replica 3 volume that we are testing for performance.  In
my current configuration, one server has 16 x 1.2TB (10K, 2.5-inch)
drives configured in RAID 10 with a 256k stripe. My 2nd
server is configured with 4 x 6TB (3.5-inch) drives in RAID
10 with a 256k stripe.  Each server has an 802.3 bond
(4 x 1GB) of network links, and each is configured with write-back
on the RAID controller.
I am seeing a lot of network usage (a solid 3 Gbps) when I perform
file copies on the VM attached to that gluster volume, but I see
spikes in the disk I/O when watching the dashboard through the
cockpit interface.  The spikes are up to 1.5 Gbps, but I would say
the average throughput is maybe 256 Mbps.
Is this to be expected, or should there be solid activity in the
disk I/O graphs?  Is it better to use a 256K stripe or a 512K
stripe in the hardware RAID configuration?
Eventually i plan on having the hardware match up for better
performance.
Thanks
Bryan

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Node Upgrade failing

2017-10-24 Thread FERNANDO FREDIANI
How are upgrades done and tested for oVirt Node NG? Every time I have 
tried one from the Engine interface it failed somehow.


The last image I installed was 
ovirt-node-ng-installer-ovirt-4.1-2017091913, and after installing I 
basically do two things before adding it to the Engine: 1) change the 
SSH port and 2) install the Zabbix agent. Then I add the host to the 
Engine, run Check for Upgrade, and it returns the message: 'found 
updates for packages ovirt-node-ng-image-update-4.1.6-1.el7.centos'.


Next I run an 'Upgrade' and it stays there for quite a while; 
afterwards, downloading several packages fails.

Watching the /var/log/ovirt-engine/engine.log I see:

2017-10-24 10:04:10,196-02 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (pool-5-thread-4) 
[0558504e-0595-4241-acbb-6b6a517132a1] Error during host 
hostname.fqdm.hidden install
2017-10-24 10:04:10,197-02 ERROR 
[org.ovirt.engine.core.bll.host.HostUpgradeManager] (pool-5-thread-4) 
[0558504e-0595-4241-acbb-6b6a517132a1] Failed to update host 
'hostname.fqdm.hidden' packages 'ovirt-node-ng-image-update': Command 
returned failure code 1 during SSH session 'r...@hostname.fqdn.hidden:55000'
2017-10-24 10:04:10,200-02 INFO 
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(pool-5-thread-4) [0558504e-0595-4241-acbb-6b6a517132a1] START, 
SetVdsStatusVDSCommand(HostName = hostname.fqdn.hidden, 
SetVdsStatusVDSCommandParameters:{runAsync='true', 
hostId='6e99e7bd-3bd5-4de4-9794-5549f83b31a6', status='InstallFailed', 
nonOperationalReason='NONE', stopSpmFailureLogged='false', 
maintenanceReason='null'}), log id: 65815347
2017-10-24 10:04:10,224-02 INFO 
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(pool-5-thread-4) [0558504e-0595-4241-acbb-6b6a517132a1] FINISH, 
SetVdsStatusVDSCommand, log id: 65815347
2017-10-24 10:04:10,258-02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-5-thread-4) [0558504e-0595-4241-acbb-6b6a517132a1] EVENT_ID: 
HOST_UPGRADE_FAILED(841), Correlation ID: 
0558504e-0595-4241-acbb-6b6a517132a1, Call Stack: null, Custom ID: null, 
Custom Event ID: -1, Message: Failed to upgrade Host 
hostname.fqdn.hidden (User: admin@internal-authz).


Where else could I look for the root of the problem?
Could it be related to the different SSH port used, or anything else?
Is there an alternative way to upgrade the host via the console?

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Install oVirt in CentOS 7 Node

2017-10-18 Thread FERNANDO FREDIANI

Hi.

I have a host on which I installed a minimal CentOS 7 and turned it 
into an oVirt Node; therefore it didn't come with Cockpit installed and 
configured as it does on oVirt-Node-NG.


Comparing both types of hosts I have the following packages in 
each scenario.

The only package missing between the two is "cockpit-ovirt-dashboard".

However, I have already tried to install it and it was unable to show 
the virtual machines correctly and control them. Is any specific or 
custom configuration needed in the Cockpit config files to make it work 
properly? (See the sketch after the package lists.)


- oVirt-Node-NG host:
    cockpit-ws-130-1.el7.centos.x86_64
    cockpit-docker-130-1.el7.centos.x86_64
    cockpit-ovirt-dashboard-0.10.7-0.0.6.el7.centos.noarch
    cockpit-system-130-1.el7.centos.noarch
    cockpit-networkmanager-130-1.el7.centos.noarch
    cockpit-storaged-130-1.el7.centos.noarch
    cockpit-130-1.el7.centos.x86_64
    cockpit-bridge-130-1.el7.centos.x86_64
    cockpit-dashboard-130-1.el7.centos.x86_64

- CentOS 7 Minimal install
    cockpit-system-141-3.el7.centos.noarch
    cockpit-ws-141-3.el7.centos.x86_64
    cockpit-docker-141-3.el7.centos.x86_64
    cockpit-dashboard-141-3.el7.centos.x86_64
    cockpit-141-3.el7.centos.x86_64
    cockpit-bridge-141-3.el7.centos.x86_64
    cockpit-storaged-141-3.el7.centos.noarch
    cockpit-networkmanager-141-3.el7.centos.noarch
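
For reference, this is roughly what I tried on the plain CentOS 7 host;
a hedged sketch, assuming the package and the firewalld "cockpit"
service definition are available there, not a confirmed fix:

    yum install -y cockpit cockpit-ovirt-dashboard
    # enable the Cockpit socket and open its port
    systemctl enable --now cockpit.socket
    firewall-cmd --permanent --add-service=cockpit
    firewall-cmd --reload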

Thanks
Fernando


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Cockpit oVirt support

2017-10-18 Thread FERNANDO FREDIANI

This is pretty interesting and nice to have.

I tried to find the screenshots and new features to see what the new 
webadmin UI looks like, but I am not sure if I am searching in the right place.


https://github.com/oVirt/cockpit-machines-ovirt-provider
or
https://www.ovirt.org/develop/release-management/features/integration/cockpit/

Fernando


On 18/10/2017 09:32, Barak Korren wrote:



On 18 October 2017 at 10:24, Michal Skrivanek wrote:


Hi all,
I’m happy to announce that we finally finished initial
contribution of oVirt specific support into the Cockpit management
platform
See below for more details

There is only a limited set of operations you can do at the
moment, but it may already be interesting for troubleshooting and
simple admin actions where you don't want to launch the full-blown
webadmin UI

Worth noting that if you were ever intimidated by the complexity
of the GWT UI of oVirt portals and it held you back from
contributing, please take another look!

Thanks,
michal


Very nice work!

Where is this going? Are all WebAdmin features planned to be supported 
at some point? It's kinda nice to be able to access and manage the 
systems from any one of the hosts instead of having to know where the 
engine is...



--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com  | TRIED. TESTED. TRUSTED. | 
redhat.com/trusted 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] oVirt's VM backup

2017-09-21 Thread FERNANDO FREDIANI

Is it just me, or does anyone else find the way oVirt/RHEV does backups strange?

At present you have to snapshot the VM (fine by that), but then you 
have to clone AND export it to an Export Domain, then delete the cloned 
VM. That means three copies of the same VM somewhere.


Wouldn't it be more logical to take a snapshot, take the then read-only 
disk and export it directly from any host that can read it, and finally 
remove the snapshot? (A minimal sketch of that idea follows below.)


Why the need to clone AND export? What is the limitation preventing 
pulling this VM directly from the host, decreasing the time the overall 
process takes and, mainly, the amount of storage necessary to do the job?
Oh, and before I forget: with this workflow the disks are hammered a lot 
more, decreasing their lifetime and possibly causing performance issues, 
mainly during the clone process.
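
Purely to illustrate the snapshot part of that idea, a minimal sketch
against the REST API (hypothetical engine URL, credentials, VM and
snapshot ids; the actual disk copy would still need a host that can
read the snapshot):

    # create a snapshot of the VM
    curl -s -k -u 'admin@internal:password' \
         -H 'Content-Type: application/xml' \
         -d '<snapshot><description>backup</description></snapshot>' \
         'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots'
    # ...copy the read-only snapshot disk off-host, then remove the snapshot
    curl -s -k -u 'admin@internal:password' -X DELETE \
         'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots/SNAP_ID'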


Fernando


On 21/09/2017 14:59, Nathanaël Blanchet wrote:


Yes, it seems to be good and the UI is very nice, but I didn't manage to 
make one backup even though the connection to the API is okay. I followed 
the README but nothing happens when launching the backup process...



On 21/09/2017 at 19:34, Niyazi Elvan wrote:

Hi,

You may check my project Bacchus at 
https://github.com/openbacchus/bacchus





On Sep 21, 2017 19:54, "Bernardo Juanicó" wrote:


I didn't know that. We may adapt it in the future, but at first we
will probably just write a basic set of scripts for minimal
backup functionality, since our dev time is limited.

Ill keep you in mind when looking into it.

Regards,

Bernardo

PGP Key

Skype: mattraken

2017-09-21 13:08 GMT-03:00 Nathanaël Blanchet:

Hi Bernardo,

Thanks, I knew this tool, but it is based on sdk3, which will
be removed in the next version (4.2), so I'm looking at the sdk4
project.

You may want to adapt it?


On 21/09/2017 at 17:08, Bernardo Juanicó wrote:

Hi Nathanael,

You may want to take a look at this too:

https://github.com/bjuanico/oVirtBackup


Regards,

Bernardo

PGP Key

Skype: mattraken

2017-09-21 11:00 GMT-03:00 Nathanaël Blanchet:

Hello Victor,

I have some questions about your script


On 07/07/2017 at 23:40, Victor José Acosta Domínguez wrote:

Hello everyone, i created a python tool to backup and
restore oVirt's VMs.

Also i created a little "how to" on my blog:
http://blog.infratic.com/2017/07/create-ovirtrhevs-vm-backup/



  * Backup step is okay, and I get a usable qcow2 image
    of the snapshot VM in the backup VM. It seems to be
    compliant with the official backup API, except on
    step 2:

 1. /Take a snapshot of the virtual machine to be backed
    up - (existing oVirt REST API operation)/
 2. /Back up the virtual machine configuration at the
    time of the snapshot (the disk configuration can be
    backed up as well if needed) - (added capability to
    oVirt as part of the Backup API)/

    I can't see any VM configuration anywhere, only the
    qcow2 disk itself.

 3. /Attach the disk snapshots that were created in (1)
    to the virtual appliance for data backup - (added
    capability to oVirt as part of the Backup API)/
 4. //
 5. /Detach the disk snapshots that were attached in (4)
    from the virtual appliance - (added capability to
    oVirt as part of the Backup API)/

Another case is when the VM to back up has more than one
disk. After I tested it, I found that only one qcow2
disk is saved on the backup VM. This really matters
when the original VM has many disks that are part of LVM; it
makes the VM restoration unusable.

  * About VM restoration, it seems that you are using
    the upload_disk API, so the disk is uploaded to the
    pre-defined storage domain; it is not a real VM
    restoration.

Do you plan to back up and restore a full VM (disks + VM
definition) in a future release?



I hope it helps someone else

Regards

Victor Acosta




___
Users mailing list
Users@ovirt.org 

Re: [ovirt-users] PFSense VLAN Trunking

2017-09-18 Thread FERNANDO FREDIANI
I also wanted to know this; it would be pretty useful for this scenario. 
Great question!


Fernando Frediani


On 17/09/2017 23:33, LukeFlynn wrote:

Hello,

I'm wondering if there is a way to trunk all VLANs to a PFSense VM 
similar to using the "4095" tag in ESXi. I've tried using an untagged 
interface on the same bond to no avail.





Anyone have any ideas? Perhaps it's a problem with the virtio drivers 
and not the network setup itself?


Thanks,

Luke


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] Manual transfer of VMs from DC to DC

2017-09-17 Thread FERNANDO FREDIANI
Alex, porting VMs in oVirt is not as flexible as some may expect or
commonly look for. Perhaps in future versions there will be things
like host-to-host transfer and no need to run commands to convert VMs.
For now you need to use Exports (mount, umount, mount again) and so on.
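
As a rough illustration of the export-domain copy idea from the thread
below (hypothetical paths and hostnames; the domain should be detached
before copying):

    # on the old DC's NFS server
    rsync -avP /exports/export-domain/ backup-host:/exports/export-domain/
    # then share backup-host:/exports/export-domain via NFS and attach it
    # as an export domain on the destination DC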

2017-09-17 18:18 GMT-03:00 Alex K :
> Thanx, I can confirm that this way i may transfer VMs, but I was thinking a
> more dirty and perhaps portable way.
>
> Say I want to get to a external disk just one VM from DC A and copy/import
> it on DC B that has no access to the export domain of DC A.
>
> I've seen also articles converting the VM disk to qcow or raw then importing
> it with some perl script.
>
> I guess that the OVA import/export feature, still to be implemented, is what
> I need for this case.
>
> Thanx,
> Alex
>
> On Sep 17, 2017 10:13, "Fred Rolland"  wrote:
>>
>> Hi,
>>
>> You could import the storage domain from a DC to another DC with all the
>> VMs and disks.
>> See in [1], there is also a video explaining how to do it.
>>
>> Regards,
>> Fred
>>
>> [1]
>> https://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/
>>
>> On Fri, Sep 15, 2017 at 10:40 AM, Abi Askushi 
>> wrote:
>>>
>>> Hi all,
>>>
>>> Is there any way of transferring VMs manually from DC to DC, without the
>>> DCs having connectivity with each other?
>>>
>>> I was thinking to backup all the export domain directory, and then later
>>> rsync this directory VMs to a new NFS share, then import this NFS share as
>>> an export domain on the other DC.
>>>
>>> What do you think?
>>>
>>> Thanx,
>>> Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] update to centos 7.4

2017-09-14 Thread FERNANDO FREDIANI
It was released yesterday. I don't think such a quick upgrade is 
recommended. It might work well, but I wouldn't find it strange if there 
were issues until it is fully tested with current oVirt versions.


Fernando

On 14/09/2017 11:01, Nathanaël Blanchet wrote:

Hi all,

Now that CentOS 7.4 is available, is it recommended to update nodes (and 
the engine OS), knowing that oVirt 4.1 is officially supported on 7.3 or 
later?




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hyperconverged question

2017-09-04 Thread FERNANDO FREDIANI
I had the very same impression. It doesn't look like it works that way, 
then. So for a fully redundant setup where you can lose a complete host 
you must have at least 3 nodes, then?


Fernando


On 01/09/2017 12:53, Jim Kusznir wrote:
Huh... OK, how do I convert the arbiter to a full replica, then?  I was 
misinformed when I created this setup.  I thought the arbiter held 
enough metadata that it could validate or repudiate any one replica 
(kinda like the parity drive for a RAID-4 array).  I was also under 
the impression that one replica + arbiter is enough to keep the 
array online and functional.


--Jim

On Fri, Sep 1, 2017 at 5:22 AM, Charles Kozler wrote:


@ Jim - you have only two data volumes and lost quorum. Arbitrator
only stores metadata, no actual files. So yes, you were running in
degraded mode so some operations were hindered.

@ Sahina - Yes, this actually worked fine for me once I did that.
However, the issue I am still facing, is when I go to create a new
gluster storage domain (replica 3, hyperconverged) and I tell it
"Host to use" and I select that host. If I fail that host, all VMs
halt. I do not recall this in 3.6 or early 4.0. This to me makes
it seem like this is "pinning" a node to a volume and vice versa
like you could, for instance, for a singular hyperconverged to ex:
export a local disk via NFS and then mount it via ovirt domain.
But of course, this has its caveats. To that end, I am using
gluster replica 3, when configuring it I say "host to use: " node
1, then in the connection details I give it node1:/data. I fail
node1, all VMs halt. Did I miss something?

On Fri, Sep 1, 2017 at 2:13 AM, Sahina Bose wrote:

To the OP question: when you set up a gluster storage domain,
you need to specify backup-volfile-servers=<server2>:<server3>,
where server2 and server3 also have bricks running. When
server1 is down and the volume is mounted again, server2 or
server3 are queried to get the gluster volfiles.
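
For example (hypothetical server names; this matches the mnt_options
format quoted later in this thread):

    storage=server1:/data
    mnt_options=backup-volfile-servers=server2:server3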

@Jim, if this does not work, are you using 4.1.5 build with
libgfapi access? If not, please provide the vdsm and gluster
mount logs to analyse

If VMs go to a paused state, this could mean the storage is not
available. You can check "gluster volume status <volname>" to
see if at least 2 bricks are running.

On Fri, Sep 1, 2017 at 11:31 AM, Johan Bernhardsson wrote:

If gluster drops in quorum so that it has fewer votes than
it should, it will stop file operations until quorum is
back to normal. If I remember right, you need two bricks
writable for quorum to be met, and the arbiter is only a
vote to avoid split brain.


Basically what you have is a RAID 5 solution without a
spare. When one disk dies it will run in degraded
mode, and some RAID systems will stop the array until you
have replaced the disk or forced it to run anyway.

You can read up on it here:

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
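
As a hedged illustration (volume name hypothetical), the quorum
behaviour described above maps to these volume options:

    gluster volume get myvol cluster.quorum-type
    gluster volume set myvol cluster.quorum-type auto
    gluster volume set myvol cluster.server-quorum-type server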



/Johan

On Thu, 2017-08-31 at 22:33 -0700, Jim Kusznir wrote:

Hi all:

Sorry to hijack the thread, but I was about to start
essentially the same thread.

I have a 3 node cluster; all three are hosts and gluster
nodes (replica 2 + arbiter).  I DO have the
mnt_options=backup-volfile-servers= set:

storage=192.168.8.11:/engine
mnt_options=backup-volfile-servers=192.168.8.12:192.168.8.13

I had an issue today where 192.168.8.11 went down.  ALL
VMs immediately paused, including the engine (all VMs
were running on host2:192.168.8.12).  I couldn't get any
gluster stuff working until host1 (192.168.8.11) was
restored.

What's wrong / what did I miss?

(This was set up "manually" through the article on
setting up a self-hosted gluster cluster back when 4.0 was
new... I've upgraded it to 4.1 since.)

Thanks!
--Jim


On Thu, Aug 31, 2017 at 12:31 PM, Charles Kozler wrote:

Typo..."Set it up and then failed that **HOST**"

And upon that host going down, the storage domain went
down. I only have hosted storage domain and this new one
- is this why the DC went down and no SPM could be elected?

I dont recall this working this way in early 4.0 or 3.6

   

Re: [ovirt-users] Inter Cluster Traffic

2017-08-22 Thread FERNANDO FREDIANI
How do you make the new cluster use the same storage domain as the 
original one? Storage Domains in oVirt are a bit confusing and less 
flexible than one might expect, and I am not sure it allows that; does it?



On 22/08/2017 12:23, Alan Griffiths wrote:

Hi,

I'm in the process of building a second ovirt cluster within the 
default DC. This new cluster will use the same storage domains as the 
original cluster, and I will slowly migrate VMs from the old cluster 
to the new.


Given that the old and new cluster hosts have a firewall between them 
I need to ensure that all relevant ports are open, with particular 
attention to the correct operation of SPM.


Is it sufficient to open TCP ports 16514 and 54321 to achieve this?

Thanks,

Alan
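
If those are indeed the two ports in question (16514 being libvirt TLS
and 54321 VDSM), a hedged firewalld sketch for the host side would be:

    firewall-cmd --permanent --add-port=16514/tcp
    firewall-cmd --permanent --add-port=54321/tcp
    firewall-cmd --reload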


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




[ovirt-users] oVirt Node with bcache

2017-08-16 Thread FERNANDO FREDIANI

Hello

I just wanted to share a scenario with you and perhaps exchange more 
information with other people that may also have a similar scenario.


For a couple of months I have been running an oVirt Node (CentOS 7.3 
Minimal) with bcache (https://bcache.evilpiepirate.org/) to cache HDDs 
with an SSD. The setup is simple, was made as a proof of 
concept, and has since been working better than expected.
This is a standalone host with 4 disks: 1 for the operating system, 2 
x 2TB 7200 RPM in software RAID 1, and 1 x PCI-E NVMe 400GB SSD which 
serves as the caching device for both reads and writes. The VM storage 
folder is mounted as an ext4 partition on the logical device created by 
bcache (/dev/bcache0). All this is transparent to oVirt, as all it sees 
is a folder to put the VMs in.
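
For anyone curious, the setup was roughly this (a hedged sketch with
hypothetical device names; /dev/md0 is the RAID 1 pair and /dev/nvme0n1
the SSD):

    # register backing and caching devices (auto-attaches them)
    make-bcache -B /dev/md0 -C /dev/nvme0n1
    # cache writes as well as reads
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /vmstorage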


We monitor the IOPS on all block devices individually and see exactly 
the behavior expected: random writes are all done on the SSD 
first and then streamed sequentially to the mechanical drives, with 
pretty impressive performance. Also, in the beginning, while the total 
amount of data was less than 400GB, ALL reads used to come from the 
caching device and therefore didn't use IOPS from the mechanical drives, 
leaving them free to do basically writes. Finally, sequential I/O (as 
defined by bcache) is intelligently passed directly to the mechanical 
drives (but there is not much of it).


Although bcache is present in kernel 3.10, I had to use kernel-ml 4.12 
(from ELRepo), and I also had to compile bcache-tools as I could not 
find it available in any repository.


Regards
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Communication Problems between Engine and Hosts

2017-08-16 Thread FERNANDO FREDIANI

Hello Piotr. Thanks for your reply

I was running version 4.1.1, but since that day I have upgraded to 4.1.5 
(the Engine only; the hosts remain on 4.1.1). I am not sure the logs 
still exist (how long are they normally kept?).


Just to clarify: the hosts didn't become unresponsive, but the 
communication between the Engine and the hosts in question (each in a 
different datacenter) was interrupted; locally the hosts were fine 
and accessible. What was strange was that since the hosts could not 
talk to the Engine they seem to have got 'confused' and started several 
VM live migrations, which was not expected. As a note, I don't have any 
fencing policy enabled.


Regards
Fernando


On 16/08/2017 07:00, Piotr Kliczewski wrote:

Fernando,

Which ovirt version are you running? Please share the logs so I could
check what caused the hosts to become unresponsive.

Thanks,
Piotr

On Wed, Aug 2, 2017 at 5:11 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com> wrote:

Hello.

Yesterday I had a pretty strange problem in one of our architectures. My
oVirt which runs in one Datacenter and controls Nodes locally and also
remotelly lost communication with the remote Nodes in another Datacenter.
To this point nothing wrong as the Nodes can continue working as expected
and running their Virtual Machines each without dependency of the oVirt
Engine.

What happened at some point is that when the communication between Engine
and hosts came back, the hosts got confused and initiated a live migration
of ALL VMs from one host to the other. I also had to restart the vdsmd agent
on all hosts in order to get my environment back to sanity.
What adds even more strangeness to this scenario is that one of the affected
hosts doesn't belong to the same cluster as the others and also had to have
vdsmd restarted.

I understand the Hosts can survive without the Engine online with reduced
possibilities but can communicated between them, but without affecting the
VMs or even needing to do what happened in this scenario.

Am I wrong on any of the assumptions ?

Fernando




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread FERNANDO FREDIANI

Wesley, it doesn't work at all. It seems to be something with Python, not sure.

It has been reported here before, and the person who maintains it was 
involved but didn't reply.


Fernando


On 08/08/2017 16:59, Wesley Stewart wrote:
I am having trouble getting the ovirt agent working on Ubuntu 17.04 
(perhaps it just isnt there yet)


Currently I have two test machines: a 16.04 and a 17.04 Ubuntu server.


*On the 17.04 server*:
Currently installed:
ovirt-guest-agent (1.0.12.2.dfsg-2), and service --status-all reveals 
a few virtualization agents:

 [ - ]  open-vm-tools
 [ - ]  ovirt-guest-agent
 [ + ]  qemu-guest-agent

I can't seem to start ovirt-guest-agent
sudo service ovirt-guest-agent start/restart does nothing

Running sudo systemctl status ovirt-guest-agent.service:
Aug 08 15:31:50 ubuntu-template systemd[1]: Starting oVirt Guest Agent...
Aug 08 15:31:50 ubuntu-template systemd[1]: Started oVirt Guest Agent.
Aug 08 15:31:51 ubuntu-template python[1219]: *** stack smashing 
detected ***: /usr/bin/python terminated
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: 
Main process exited, code=killed, status=6/ABRT
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: 
Unit entered failed state.
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: 
Failed with result 'signal'.


sudo systemctl enable ovirt-guest-agent.service
also does not seem to do anything.

Doing more research, I found:
http://lists.ovirt.org/pipermail/users/2017-July/083071.html
So perhaps the ovirt-guest-agent is broken for Ubuntu 17.04?


*On the 16.04 Server I have:*
Took some fiddling, but I eventually got it working







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI

Exactly, Moacir, that is my point.


A proper distributed filesystem should not rely on any type of RAID, as 
it can make its own redundancy without having to rely on any layer 
underneath (look at Ceph). Using RAID may help with management and, in 
certain scenarios, with replacing a faulty disk, but at a cost, and not 
a cheap one by the way.
That's why, in terms of resource saving, if replica 3 brings the 
issues mentioned, it is well worth having a small arbiter somewhere 
instead of wasting a significant amount of disk space.



Fernando


On 08/08/2017 06:09, Moacir Ferreira wrote:


Fernando,


Let's see what people say... But this is what I understood Red Hat 
says is the best performance model. This is the main reason to open 
this discussion, because as far as I can see, some of you in the 
community do not agree.



But when I think about a "distributed file system" that can make any 
number of copies you want, it does not make sense to use a RAIDed 
brick; what makes sense is to use JBOD.



Moacir



*From:* fernando.fredi...@upx.com.br on 
behalf of FERNANDO FREDIANI <fernando.fredi...@upx.com>

*Sent:* Tuesday, August 8, 2017 3:08 AM
*To:* Moacir Ferreira
*Cc:* Colin Coe; users@ovirt.org
*Subject:* Re: [ovirt-users] Good practices
Moacir, I understand that if you do this type of configuration you 
will be severely impacted on storage performance, specially for 
writes. Even if you have a Hardware RAID Controller with Writeback 
cache you will have a significant performance penalty and may not 
fully use all the resources you mentioned you have.


Fernando

2017-08-07 10:03 GMT-03:00 Moacir Ferreira <moacirferre...@hotmail.com>:


Hi Colin,


Take a look on Devin's response. Also, read the doc he shared that
gives some hints on how to deploy Gluster.


It is more like this: if you want high performance you should have
the bricks created as RAID (5 or 6) by the server's disk
controller and then assemble a JBOD GlusterFS. The attached
document is Gluster specific and not for oVirt. But at this point
I think that having SSD will not be a plus, as when using the RAID
controller Gluster will not be aware of the SSD. Regarding the OS,
my idea is to have a RAID 1, made of 2 low cost HDDs, to install it.


So far, based on the information received, I should create a single
RAID 5 or 6 on each server and then use this disk as a brick to
create my Gluster cluster, made of 2 replicas + 1 arbiter. What is
new for me is the detail that the arbiter does not need a lot of
space, as it only keeps metadata.


Thanks for your response!

Moacir


*From:* Colin Coe <colin@gmail.com>
*Sent:* Monday, August 7, 2017 12:41 PM

*To:* Moacir Ferreira
*Cc:* users@ovirt.org
*Subject:* Re: [ovirt-users] Good practices
Hi

I just thought that you'd do hardware RAID if you had the
controller, or JBOD if you didn't.  In hindsight, a server with
40Gbps NICs is pretty likely to have a hardware RAID controller. 
I've never done JBOD with hardware RAID.  I think having a single
gluster brick on hardware JBOD would be riskier than multiple
bricks, each on a single disk, but that's not based on anything
other than my prejudices.

I thought gluster tiering was for the most frequently accessed
files, in which case all the VM disks would end up in the hot
tier.  However, I have been wrong before...

I just wanted to know where the OS was going as I didn't see it
mentioned in the OP.  Normally, I'd have the OS on a RAID1 but in
your case that's a lot of wasted disk.

Honestly, I think Yaniv's answer was far better than my own and
made the important point about having an arbiter.

Thanks

On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira
<moacirferre...@hotmail.com> wrote:

Hi Colin,


I am in Portugal, so sorry for this late response. It is quite
confusing for me, please consider:

1 - What if the RAID is done by the server's disk
controller, not by software?

2 - For JBOD I am just using gdeploy to deploy it. However, I
am not using the oVirt node GUI to do this.


3 - As the VM .qcow2 files are quite big, tiering would only
help if made by an intelligent system that uses SSD for chunks
of data, not for the entire .qcow2 file. But I guess this is a
problem everybody else has. So, do you know how tiering works
in Gluster?


4 - I am putting the OS on the first disk. However, would you do
differently?

Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI
That's inherent to the way RAID works, regardless of what 'super-ultra' 
powerful hardware controller you may have. RAID 5 or 6 
will never have the same write performance as RAID 10 or 0, for example. 
Writeback caches can deal with bursts well, but they have a limit; 
therefore there will always be a penalty compared to what else you could 
have.


If you have a continuous stream of data (a big VM deployment or a large 
data copy) there will be continuous writes, which will likely fill up 
the cache, making the disks underneath the bottleneck.
That's why in some other scenarios, like ZFS, people have multiple groups 
of RAID 6 (called RAIDZ2), as it improves the write speeds for these types 
of scenarios.


In the scenario given in this thread, with just 3 servers, each with a 
RAID 6, there will be a hard limit on the write performance, especially 
for streamed data, no matter how powerful a write-back hardware 
controller you have.


Also, I agree the 40Gb NICs may not be used fully and 10Gb could do the 
job well, but if they were available at the beginning, why not use them.


Fernando


On 08/08/2017 03:16, Fabrice Bacchella wrote:

On 8 August 2017 at 04:08, FERNANDO FREDIANI <fernando.fredi...@upx.com> wrote:
Even if you have a Hardware RAID Controller with Writeback cache you will have 
a significant performance penalty and may not fully use all the resources you 
mentioned you have.


Nope again. From my experience with HP Smart Array and write-back cache, 
writes that go to the cache are even faster than reads that must go to the 
disks. Of course, if the writes are too fast and too big, they will overflow 
the cache. But today's controllers have multi-gigabyte caches; you would have 
to write a lot to fill them. And if you can afford a 40Gb card, you can afford 
a decent controller.





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Moacir, I understand that if you do this type of configuration you will be
severely impacted on storage performance, specially for writes. Even if you
have a Hardware RAID Controller with Writeback cache you will have a
significant performance penalty and may not fully use all the resources you
mentioned you have.

Fernando

2017-08-07 10:03 GMT-03:00 Moacir Ferreira:

> Hi Colin,
>
>
> Take a look on Devin's response. Also, read the doc he shared that gives
> some hints on how to deploy Gluster.
>
>
> It is more like this: if you want high performance you should have the
> bricks created as RAID (5 or 6) by the server's disk controller and then
> assemble a JBOD GlusterFS. The attached document is Gluster specific and
> not for oVirt. But at this point I think that having SSD will not be a plus,
> as when using the RAID controller Gluster will not be aware of the SSD.
> Regarding the OS, my idea is to have a RAID 1, made of 2 low cost HDDs, to
> install it.
>
>
> So far, based on the information received, I should create a single RAID 5
> or 6 on each server and then use this disk as a brick to create my Gluster
> cluster, made of 2 replicas + 1 arbiter. What is new for me is the detail
> that the arbiter does not need a lot of space, as it only keeps metadata.
>
>
> Thanks for your response!
> Moacir
>
> --
> *From:* Colin Coe 
> *Sent:* Monday, August 7, 2017 12:41 PM
>
> *To:* Moacir Ferreira
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Good practices
>
> Hi
>
> I just thought that you'd do hardware RAID if you had the controller or
> JBOD if you didn't.  In hindsight, a server with 40Gbps NICs is pretty
> likely to have a hardware RAID controller.  I've never done JBOD with
> hardware RAID.  I think having a single gluster brick on hardware JBOD
> would be riskier than multiple bricks, each on a single disk, but that's
> not based on anything other than my prejudices.
>
> I thought gluster tiering was for the most frequently accessed files, in
> which case all the VMs disks would end up in the hot tier.  However, I have
> been wrong before...
>
> I just wanted to know where the OS was going as I didn't see it mentioned
> in the OP.  Normally, I'd have the OS on a RAID1 but in your case that's a
> lot of wasted disk.
>
> Honestly, I think Yaniv's answer was far better than my own and made the
> important point about having an arbiter.
>
> Thanks
>
> On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <
> moacirferre...@hotmail.com> wrote:
>
>> Hi Colin,
>>
>>
>> I am in Portugal, so sorry for this late response. It is quite confusing
>> for me, please consider:
>>
>>
>> 1 - What if the RAID is done by the server's disk controller, not by
>> software?
>>
>> 2 - For JBOD I am just using gdeploy to deploy it. However, I am not
>> using the oVirt node GUI to do this.
>>
>>
>> 3 - As the VM .qcow2 files are quite big, tiering would only help if
>> made by an intelligent system that uses SSD for chunks of data not for the
>> entire .qcow2 file. But I guess this is a problem everybody else has. So,
>> Do you know how tiering works in Gluster?
>>
>>
>> 4 - I am putting the OS on the first disk. However, would you do
>> differently?
>>
>>
>> Moacir
>>
>> --
>> *From:* Colin Coe 
>> *Sent:* Monday, August 7, 2017 4:48 AM
>> *To:* Moacir Ferreira
>> *Cc:* users@ovirt.org
>> *Subject:* Re: [ovirt-users] Good practices
>>
>> 1) RAID5 may be a performance hit.
>>
>> 2) I'd be inclined to do this as JBOD by creating a distributed disperse
>> volume on each server.  Something like
>>
>> echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
>> $(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)
>>
>> 3) I think the above.
>>
>> 4) Gluster does support tiering, but IIRC you'd need the same number of
>> SSD as spindle drives.  There may be another way to use the SSD as a fast
>> cache.
>>
>> Where are you putting the OS?
>>
>> Hope I understood the question...
>>
>> Thanks
>>
>> On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <
>> moacirferre...@hotmail.com> wrote:
>>
>>> I am willing to assemble a oVirt "pod", made of 3 servers, each with 2
>>> CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use
>>> GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and
>>> a dual 10Gb NIC. So my intention is to create a loop like a server triangle
>>> using the 40Gb NICs for virtualization files (VMs .qcow2) access and to
>>> move VMs around the pod (east /west traffic) while using the 10Gb
>>> interfaces for giving services to the outside world (north/south traffic).
>>>
>>>
>>> This said, my first question is: How should I deploy GlusterFS in such
>>> oVirt scenario? My questions are:
>>>
>>>
>>> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and
>>> 

Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-08-07 Thread FERNANDO FREDIANI

Hello.

Despite not getting any more feedback on this topic, I just wanted to 
let people know that since I moved the VM to another oVirt cluster 
running oVirt-Node-NG and kernel 3.10 the problem stopped happening. 
Although I still don't know the cause of it, I suspect it may have to do 
with the kernel the other host (hypervisor) is running (4.12), as that 
is the only one running this kernel, for a specific reason.
To support this suspicion: in the past I had another hypervisor also 
running kernel 4.12, and a VM that does the same job had the same issue. 
After I rebooted the hypervisor back to the default kernel (3.10) the 
problem didn't happen anymore.


If anyone ever faces this or anything similar, please let me know, as I am 
always interested in finding the root of this issue.

Regards
Fernando


On 28/07/2017 15:01, FERNANDO FREDIANI wrote:


Hello Edwardh and all.

I keep getting these disconnects. Were you able to find anything that 
would suggest a change?


As I mentioned, this machine, unlike the others where it never 
happened, uses the ovirtmgmt network as a VM network and has kernel 4.12 
instead of the default 3.10 from CentOS 7.3. It seems a particular 
situation is triggering this behavior, but I could not gather any 
hints yet.


I have tried to run a regular arping to force the bridge to always learn 
the VM's MAC address (a sketch of what I tried is below), but it doesn't 
seem to work, and every once in a while the bridge 'forgets' that 
particular VM MAC address.
I have even rebuilt the VM completely, changing its operating 
system from Ubuntu 16.04 to CentOS 7.3, and the same problem happened.
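
Roughly what I tried (hypothetical interface name and address; a
gratuitous ARP from inside the VM so the bridge keeps relearning the
source MAC):

    # inside the VM, e.g. from cron every minute; 10.0.0.5 is the VM's own IP
    arping -U -c 1 -I eth0 10.0.0.5
    # on the host, check whether the MAC is still learned
    brctl showmacs ovirtmgmt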


Fernando


On 24/07/2017 18:20, FERNANDO FREDIANI wrote:


Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address 
missing.
- I then went to oVirt Engine, under VM's 'Network Interfaces' tab, 
clicked Edit and changed the Link State to Down then to Up and it 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's mac address 
learned again by the bridge.


This node server has the particularity of sharing the ovirtmgmt network 
with VMs. Could that possibly be the cause of the issue in any way?


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but I will next time it happens. The 
source MAC address should be the same MAC as the VM's. I don't see any 
reason for it to change, from within the VM or outside.


What type of things would make the bridge stop learning a given VM 
MAC address?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine whether there is 
traffic trying to get out from there? And with what source MAC address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com> wrote:


Has anyone had problem when using the ovirtmgmt bridge to
connect VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further,
I see the VM's MAC address stops being learned by the bridge, and
the problem is resolved only with a VM reboot.

When I last saw the problem I ran brctl showmacs ovirtmgmt and
it showed me the VM's MAC address with ageing timer 200.19.
After the VM reboot I see the same MAC with ageing timer 0.00.
I don't see it in another environment where ovirtmgmt is
not used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando










___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI

Thanks for the detailed answer, Erekle.

I conclude that it is worth it in any scenario to have an arbiter node in 
order to avoid wasting even more disk space on RAID X + Gluster replication 
on top of it. The cost seems much lower if you consider the running 
costs of the whole storage and compare them with the cost of building the 
arbiter node. Even having a fully redundant arbiter service with 2 nodes 
would make it worth it on a larger deployment.


Regards
Fernando

On 07/08/2017 17:07, Erekle Magradze wrote:


Hi Fernando (sorry for misspelling your name, I used a different 
keyboard),


So let's go with the following scenarios:

1. Let's say you have two servers (replication factor is 2), i.e. two 
bricks per volume. In this case it is strongly recommended to have the 
arbiter node, the metadata storage that will guarantee avoiding the 
split-brain situation. For the arbiter you don't even need a 
disk with lots of space; it's enough to have a tiny SSD, but hosted on 
a separate server. The advantage of such a setup is that you don't need 
RAID 1 for each brick: you have the metadata information stored in 
the arbiter node and brick replacement is easy.


2. If you have an odd number of bricks (let's say 3, i.e. replication 
factor is 3) in your volume and you didn't create the arbiter node, nor 
did you configure the quorum, then the entire load 
for keeping the volume consistent resides on all 3 servers; 
each of them is important and each brick contains key information, and 
they need to cross-check each other (that's what people usually do 
with their first try of gluster :) ). In this case replacing a brick is 
a big pain, and RAID 1 is a good option to have (that's 
the disadvantage, i.e. losing the space and not having the JBOD 
option); the advantage is that you don't have to have an additional 
arbiter node.


3. You have an odd number of bricks and a configured arbiter node. In this 
case you can easily go with JBOD; however, a good practice would be to 
have RAID 1 for the arbiter disks (tiny 128GB SSDs are perfectly 
sufficient for volumes tens of TBs in size).


That's basically it

The rest, about reliability and setup scenarios, you can find in the 
gluster documentation; especially look for the quorum and arbiter node 
configs and options.
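
For scenario 3, a hedged sketch of creating such a volume (hypothetical
hostnames and brick paths):

    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 arbiter1:/bricks/arb1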


Cheers

Erekle

P.S. What I was mentioning regarding good practice is mostly 
related to the operation of gluster, not installation or deployment, 
i.e. not the conceptual understanding of gluster (conceptually it's a 
JBOD system).


On 08/07/2017 05:41 PM, FERNANDO FREDIANI wrote:


Thanks for the clarification Erekle.

However, I am surprised by this way of operating in GlusterFS, as 
it adds another layer of complexity to the system (either a hardware 
or software RAID) below the gluster config and increases the system's 
overall costs.


An important point to consider is: in a RAID configuration you already 
have space 'wasted' in order to build redundancy (either RAID 1, 5, 
or 6). Then, when you have GlusterFS on top of several RAIDs, you 
have the data replicated again, so you end up with the same data 
consuming more space in a group of disks and again on top of 
several RAIDs; depending on the Gluster configuration you have, in a 
RAID 1 config the same data is replicated 4 times.


Yet another downside of having a RAID (especially RAID 5 or 6) is that 
it considerably reduces write speeds, as each group of disks will 
end up having the write speed of a single disk, since all other disks of 
that group have to wait for each other to write as well.


Therefore, if Gluster already replicates data, why does it create this 
big pain you mentioned, if the data is replicated somewhere else and can 
still be retrieved both to serve clients and to reconstruct the 
equivalent disk when it is replaced?


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Frenando,

Here is my experience: if you consider a particular hard drive as a 
brick for a gluster volume and it dies, i.e. it becomes inaccessible, 
it's a huge hassle to discard that brick and exchange it for another 
one, since gluster still tries to access that broken brick and it's 
causing (at least it caused for me) a big pain. Therefore it's better 
to have a RAID as a brick, i.e. have RAID 1 (mirroring) for each 
brick; in this case if a disk is down you can easily exchange it 
and rebuild the RAID without going offline, i.e. switching off the 
volume, doing brick manipulations and switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple golden 
rule which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However, I didn't really understand the recommendation to use 
any RAID with GlusterFS. I always thought that GlusterFS likes to 
work in JBOD mode and control the disks (bricks) directly, so you 
can create whatever distribution rule you wish, and if a single

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
What you mentioned is a specific case, not a generic situation. The 
main point there is that RAID 5 or 6 impacts write performance compared 
to when you write to only 2 given disks at a time. That was the comparison 
made.


Fernando


On 07/08/2017 16:49, Fabrice Bacchella wrote:


On 7 August 2017 at 17:41, FERNANDO FREDIANI 
<fernando.fredi...@upx.com> wrote:




Yet another downside of having a RAID (especially RAID 5 or 6) is that 
it considerably reduces write speeds, as each group of disks will 
end up having the write speed of a single disk, since all other disks of 
that group have to wait for each other to write as well.




That's not true if you have a medium- to high-range hardware RAID. For 
example, HP Smart Array controllers come with a flash cache of about 1 or 
2 GB that hides that from the OS.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI

Thanks for the clarification Erekle.

However, I am surprised by this way of operating in GlusterFS, as it 
adds another layer of complexity to the system (either a hardware or 
software RAID) below the gluster config and increases the system's 
overall costs.


An important point to consider is: in a RAID configuration you already 
have space 'wasted' in order to build redundancy (either RAID 1, 5, or 
6). Then, when you have GlusterFS on top of several RAIDs, you have 
the data replicated again, so you end up with the same data consuming 
more space in a group of disks and again on top of several RAIDs; 
depending on the Gluster configuration you have, in a RAID 1 config the 
same data is replicated 4 times.


Yet another downside of having a RAID (especially RAID 5 or 6) is that it 
considerably reduces write speeds, as each group of disks will end up 
having the write speed of a single disk, since all other disks of that 
group have to wait for each other to write as well.


Therefore, if Gluster already replicates data, why does it create this big 
pain you mentioned, if the data is replicated somewhere else and can still 
be retrieved both to serve clients and to reconstruct the equivalent disk 
when it is replaced?


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Frenando,

Here is my experience: if you consider a particular hard drive as a 
brick for a gluster volume and it dies, i.e. it becomes inaccessible, 
it's a huge hassle to discard that brick and exchange it for another 
one, since gluster still tries to access that broken brick and it's 
causing (at least it caused for me) a big pain. Therefore it's better 
to have a RAID as a brick, i.e. have RAID 1 (mirroring) for each brick; 
in this case if a disk is down you can easily exchange it and 
rebuild the RAID without going offline, i.e. switching off the volume, 
doing brick manipulations and switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple golden 
rule which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However, I didn't really understand the recommendation to use any 
RAID with GlusterFS. I always thought that GlusterFS likes to work in 
JBOD mode and control the disks (bricks) directly, so you can create 
whatever distribution rule you wish, and if a single disk fails you 
just replace it and it obviously has the data replicated from 
another. The only downside of using it this way is that the 
replication data will flow across all servers, but that is not 
much of a big issue.


Anyone can elaborate about Using RAID + GlusterFS and JBOD + GlusterFS.

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat Support 
Team in depth about optimal configuration in regards to setting up 
GlusterFS most efficiently and I wanted to share with you what I 
learned.


In general the Red Hat Virtualization team frowns upon using each DISK 
of the system as just a JBOD. Sure, there is some protection by 
having the data replicated; however, the recommendation is to use 
RAID 6 (preferred) or RAID 5, or RAID 1 at the very least.


Here is the direct quote from Red Hat when I asked about RAID and 
Bricks:

"A typical Gluster configuration would use RAID underneath the 
bricks. RAID 6 is most typical as it gives you 2 disk failure 
protection, but RAID 5 could be used too. Once you have the RAIDed 
bricks, you'd then apply the desired replication on top of that. The 
most popular way of doing this would be distributed replicated with 
2x replication. In general you'll get better performance with larger 
bricks. 12 drives is often a sweet spot. Another option would be to 
create a separate tier using all SSD's."

In order to do SSD tiering, from my understanding you would need 1 x 
NVMe drive in each server, or a 4 x SSD hot tier (it needs to be 
distributed and replicated for the hot tier if not using NVMe). So 
with you only having 1 SSD drive in each server, I'd suggest maybe 
looking into the NVMe option.

Since you're using only 3 servers, what I'd probably suggest is to do 
(2 Replicas + Arbiter Node); this setup actually doesn't require the 
3rd server to have big drives at all, as it only stores metadata 
about the files and not actually a full copy.

Please see the attached document that was given to me by Red Hat for 
more information on this. Hope this information helps you.

--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect

On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com <mailto:moacirferre...@hotmail.com>) wrote:


I am willing to assemble an oVirt "pod", made of 3 servers, each 
with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The 
idea is to use Gl

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Moacir, I believe for using the 3 servers directly connected to each 
other without a switch you have to have a bridge on each server for every 
2 physical interfaces to allow the traffic to pass through at layer 2 (is 
it possible to create this from the oVirt Engine Web Interface?). If your 
ovirtmgmt network is separate from the others (it really should be) that 
should be fine to do.
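
As an aside, a minimal sketch of such a bridge on one host, assuming 
hypothetical interface names ens1f0/ens1f1 for the two 40Gb ports (in a 
real deployment oVirt/VDSM manages its own bridges, so this is only to 
illustrate the idea):

    # Bridge the two back-to-back 40Gb interfaces
    ip link add br40g type bridge
    # A server triangle forms a loop, so enable STP to avoid broadcast storms
    ip link set br40g type bridge stp_state 1
    ip link set ens1f0 master br40g
    ip link set ens1f1 master br40g
    ip link set br40g up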



Fernando


On 07/08/2017 07:13, Moacir Ferreira wrote:


Hi, in-line responses.


Thanks,

Moacir



*From:* Yaniv Kaul 
*Sent:* Monday, August 7, 2017 7:42 AM
*To:* Moacir Ferreira
*Cc:* users@ovirt.org
*Subject:* Re: [ovirt-users] Good practices


On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira 
> wrote:


I am willing to assemble an oVirt "pod", made of 3 servers, each
with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The
idea is to use GlusterFS to provide HA for the VMs. The 3 servers
have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to
create a loop like a server triangle using the 40Gb NICs for
virtualization files (VMs .qcow2) access and to move VMs around
the pod (east /west traffic) while using the 10Gb interfaces for
giving services to the outside world (north/south traffic).


Very nice gear. How are you planning the network exactly? Without a 
switch, back-to-back? (sounds OK to me, just wanted to ensure this is 
what the 'dual' is used for). However, I'm unsure if you have the 
correct balance between the interface speeds (40g) and the disks (too 
many HDDs?).


Moacir: The idea is to have a very high performance network for the 
distributed file system and to prevent bottlenecks when we move one VM 
from a node to another. Using 40Gb NICs I can just connect the servers 
back-to-back. In this case I don't need the expensive 40Gb switch, I 
get very high speed and no contention between north/south traffic with 
east/west.



This said, my first question is: How should I deploy GlusterFS in
such oVirt scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node,
and then create a GlusterFS using them?

I would assume RAID 1 for the operating system (you don't want a 
single point of failure there?) and the rest JBODs. The SSD will be 
used for caching, I reckon? (I personally would add more SSDs instead 
of HDDs, but it does depend on the disk sizes and your space 
requirements.)


Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic 
JBOD or a JBOD assembled using RAID-5 "disks" created by the server's 
disk controller?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while
not consuming too much disk space?


Replica 2 + Arbiter sounds good to me.
Moacir: I agree, and that is what I am using.

4 - Does an oVirt hypervisor pod like I am planning to build, and
the virtualization environment, benefit from tiering when using an
SSD disk? And if yes, will Gluster do it by default or do I have to
configure it to do so?


Yes, I believe using lvmcache is the best way to go.

Moacir: Are you sure? I say that because the qcow2 files will be
quite big. So if tiering is "file based" the SSD would have to be
very, very big, unless Gluster tiering does it by "chunks of data".
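
For what it's worth, lvmcache works at the block level rather than per 
file, so large qcow2 files are not a problem. A minimal sketch of 
attaching an SSD cache to a brick's logical volume (volume group, LV 
names and sizes are hypothetical):

    # Cache pool data and metadata on the SSD (/dev/sdb)
    lvcreate -L 400G -n brick_cache gluster_vg /dev/sdb
    lvcreate -L 1G -n brick_cache_meta gluster_vg /dev/sdb
    lvconvert --type cache-pool --poolmetadata gluster_vg/brick_cache_meta \
        gluster_vg/brick_cache
    # Attach the cache pool to the LV that backs the brick
    lvconvert --type cache --cachepool gluster_vg/brick_cache gluster_vg/brick_data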


At the bottom line, what is the good practice for using GlusterFS
in small pods for enterprises?


Don't forget jumbo frames. libgfapi (coming hopefully in 4.1.5). 
Sharding (enabled out of the box if you use a hyper-converged setup 
via gdeploy).
*Moacir:* Yes! This is another reason to have separate networks for 
north/south and east/west. In that way I can use the standard MTU on 
the 10Gb NICs and jumbo frames on the file/move 40Gb NICs.
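
A quick sketch of the jumbo-frame side of that, with a hypothetical 
interface name for one of the 40Gb ports (on CentOS 7 this would be 
persisted with MTU=9000 in the matching ifcfg file):

    # Raise the MTU on the storage/migration interface only
    ip link set ens1f0 mtu 9000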


Y.


You opinion/feedback will be really appreciated!

Moacir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
For any RAID 5 or 6 configuration I normally follow a simple golden rule 
which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However, I didn't really understand the recommendation to use any 
RAID with GlusterFS. I always thought that GlusterFS likes to work in 
JBOD mode and control the disks (bricks) directly, so you can create 
whatever distribution rule you wish, and if a single disk fails you just 
replace it and the data gets replicated back from another copy. 
The only downside of using it this way is that the replication traffic will 
flow across all servers, but that is not much of an issue.


Can anyone elaborate on using RAID + GlusterFS versus JBOD + GlusterFS?

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat Support 
Team in depth about the optimal configuration for setting up 
GlusterFS most efficiently, and I wanted to share with you what I learned.


In general the Red Hat Virtualization team frowns upon using each disk of 
the system as just a JBOD. Sure, there is some protection by having the 
data replicated; however, the recommendation is to use RAID 6 
(preferred) or RAID 5, or RAID 1 at the very least.


Here is the direct quote from Red Hat when I asked about RAID and Bricks:
"A typical Gluster configuration would use RAID underneath the 
bricks. RAID 6 is most typical as it gives you 2 disk failure 
protection, but RAID 5 could be used too. Once you have the RAIDed 
bricks, you'd then apply the desired replication on top of that. The 
most popular way of doing this would be distributed replicated with 2x 
replication. In general you'll get better performance with larger 
bricks. 12 drives is often a sweet spot. Another option would be to 
create a separate tier using all SSDs."


In order to do SSD tiering, from my understanding, you would need 1 x NVMe 
drive in each server, or a 4 x SSD hot tier (it needs to be distributed, 
replicated for the hot tier if not using NVMe). So with you only 
having 1 SSD drive in each server, I'd suggest maybe looking into the 
NVMe option.

Since you're using only 3 servers, what I'd probably suggest is to do 
2 replicas + an arbiter node; this setup actually doesn't require the 
3rd server to have big drives at all, as it only stores metadata about 
the files and not actually a full copy.

Please see the attached document that was given to me by Red Hat to 
get more information on this. Hope this information helps you.

--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect

On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com ) wrote:


I am willing to assemble an oVirt "pod", made of 3 servers, each with 
2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is 
to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 
40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop 
like a server triangle using the 40Gb NICs for virtualization files 
(VMs .qcow2) access and to move VMs around the pod (east /west 
traffic) while using the 10Gb interfaces for giving services to the 
outside world (north/south traffic).



This said, my first question is: How should I deploy GlusterFS in 
such oVirt scenario? My questions are:



1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, 
and then create a GlusterFS using them?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while 
not consuming too much disk space?


4 - Does an oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD 
disk? And if yes, will Gluster do it by default or do I have to configure 
it to do so?



At the bottom line, what is the good practice for using GlusterFS in 
small pods for enterprises?



You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Communication Problems between Engine and Hosts

2017-08-02 Thread FERNANDO FREDIANI

Hello.

Yesterday I had a pretty strange problem in one of our architectures. My 
oVirt, which runs in one Datacenter and controls Nodes locally and also 
remotely, lost communication with the remote Nodes in another Datacenter.
Up to this point nothing wrong, as the Nodes can continue working as 
expected and running their Virtual Machines, each without depending on 
the oVirt Engine.


What happened at some point is that when the communication between 
Engine and Hosts came back, Hosts got confused and initiated a Live 
Migration of ALL VMs from one to the other. I also had to restart the vdsmd 
agent on all Hosts in order to restore sanity to my environment.
What adds even more strangeness to this scenario is that one of the 
Hosts affected doesn't belong to the same Cluster as the others and still 
had to have vdsmd restarted.


I understand the Hosts can survive without the Engine online, with 
reduced capabilities, and can still communicate between themselves, but 
without affecting the VMs or needing to do what happened in this scenario.


Am I wrong on any of the assumptions ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-28 Thread FERNANDO FREDIANI

Hello Edwardh and all.

I keep getting these disconnects; were you able to find anything that 
would suggest a change?


As I mentioned, this machine, different from the others where it never 
happened, uses the ovirtmgmt network as a VM network and has kernel 4.12 
instead of the default 3.10 from CentOS 7.3. It seems a particular 
situation is triggering this behavior, but I could not gather any hints 
yet.


I have tried to run a regular arping to force the bridge to always learn 
the VMs' MAC addresses, but it doesn't seem to work, and every once in a 
while the bridge 'forgets' that particular VM's MAC address.
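
For reference, the kind of keepalive I was running from inside the VM 
(IP address and interface name are hypothetical):

    # Send a gratuitous ARP every 30s so the bridge relearns the VM's MAC
    while true; do arping -U -c 1 -I eth0 192.168.0.10; sleep 30; done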
I have also even rebuilt the VM completely changing its operating system 
from Ubuntu 16.04 to CentOS 7.3 and the same problem happened.


Fernando


On 24/07/2017 18:20, FERNANDO FREDIANI wrote:


Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address 
missing.
- I then went to oVirt Engine, under VM's 'Network Interfaces' tab, 
clicked Edit and changed the Link State to Down then to Up and it 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's mac address 
learned again by the bridge.


This Node server has the particularity of sharing the ovirtmgmt with 
VMs. Could it possibly be the cause of the issue in any way ?


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but will do the next time it happens. The 
source MAC address should be the same MAC as the VM's. I don't see any 
reason for it to change, from within the VM or outside.


What type of things would make the bridge stop learning a given VM 
mac address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Has anyone had problem when using the ovirtmgmt bridge to
connect VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I ran brctl showmacs ovirtmgmt and
it showed me the VM's MAC address with ageing timer 200.19. After
the VM reboot I see the same MAC with ageing timer 0.00.
I don't see it in another environment where the ovirtmgmt is not
used for VMs.
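
One thing worth checking while debugging this (bridge name taken from 
the thread; the ageing value is only an example):

    # Show learned MACs together with their ageing timers
    brctl showmacs ovirtmgmt
    # The default ageing time is 300s; raising it keeps idle MACs longer
    brctl setageing ovirtmgmt 600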

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt VM backups

2017-07-27 Thread FERNANDO FREDIANI
One thing that I cannot conceive when doing oVirt backups is the need to 
clone the VM in order to copy it. Why, as in VMware, isn't it possible to 
just snapshot and copy the read-only disk?


Fernando


On 27/07/2017 07:14, Abi Askushi wrote:

Hi All,

For VM backups I am using some python script to automate the snapshot 
-> clone -> export -> delete steps (although with some issues when 
trying to back up a Windows 10 VM).


I was wondering if there is there any plan to integrate VM backups in 
the GUI or what other recommended ways exist out there.


Thanx,
Abi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Node

2017-07-25 Thread FERNANDO FREDIANI
Josep, were these hosts CentOS Minimal installs or oVirt-Node-NG 
images? If they were CentOS Minimal installs you must install vdsm 
before adding the host to oVirt Engine.
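
For a minimal CentOS 7 host that would look roughly like this (release 
rpm shown for the 4.1 series; adjust to the engine version in use):

    # Add the oVirt repository matching the engine, then install vdsm
    yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
    yum install -y vdsm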


Fernando


On 25/07/2017 14:13, Jose Vicente Rosello Vila wrote:


Hello users,

I installed ovirt engine 4.1.3.5-1.el7.centos and I tried to install 2 
hosts, but the result was "install failed".

Both nodes have been installed from the CD image.

What can I do?

Thanks,




Josep Vicent Roselló Vila

Àrea de Sistemes d’Informació i Comunicacions

*Universitat Politècnica de València *



Camí de Vera, s/n

46022 VALÈNCIA

Edifici 4L




Tel. +34 963 879 075 (ext.78746)

rose...@asic.upv.es 



Before printing this message, think about whether it is necessary.
Caring for the environment is everyone's responsibility!



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-24 Thread FERNANDO FREDIANI
Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address 
missing.
- I then went to oVirt Engine, under VM's 'Network Interfaces' tab, 
clicked Edit and changed the Link State to Down then to Up and it 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's mac address learned 
again by the bridge.


This Node server has the particularity of sharing the ovirtmgmt with 
VMs. Could it possibly be the cause of the issue in any way ?


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but will do the next time it happens. The 
source MAC address should be the same MAC as the VM's. I don't see any 
reason for it to change, from within the VM or outside.


What type of things would make the bridge stop learning a given VM mac 
address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Has anyone had problem when using the ovirtmgmt bridge to connect
VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I ran brctl showmacs ovirtmgmt and it
showed me the VM's MAC address with ageing timer 200.19. After the
VM reboot I see the same MAC with ageing timer 0.00.
I don't see it in another environment where the ovirtmgmt is not
used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-24 Thread FERNANDO FREDIANI
Not tried this yet Edwardh, but will do the next time it happens. The 
source MAC address should be the same MAC as the VM's. I don't see any 
reason for it to change, from within the VM or outside.


What type of things would make the bridge stop learning a given VM mac 
address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Has anyone had problem when using the ovirtmgmt bridge to connect
VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I ran brctl showmacs ovirtmgmt and it
showed me the VM's MAC address with ageing timer 200.19. After the
VM reboot I see the same MAC with ageing timer 0.00.
I don't see it in another environment where the ovirtmgmt is not
used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-21 Thread FERNANDO FREDIANI

Has anyone had problem when using the ovirtmgmt bridge to connect VMs ?

I am still facing a bizarre problem where some VMs connected to this 
bridge stop passing traffic. Checking the problem further I see its mac 
address stops being learned by the bridge and the problem is resolved 
only with a VM reboot.


When I last saw the problem I ran brctl showmacs ovirtmgmt and it showed 
me the VM's MAC address with ageing timer 200.19. After the VM reboot I 
see the same MAC with ageing timer 0.00.
I don't see it in another environment where the ovirtmgmt is not used 
for VMs.


Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on sdcard?

2017-07-20 Thread FERNANDO FREDIANI
The proposal seems interesting but is manual and susceptible to errors. 
I would much rather have this come out of the box, as it does with 
VMware ESXi.


A 'squashfs' type of image boots up and runs completely in memory. Any 
logging is written and rotated also in memory, which keeps only a certain 
recent period of logs necessary for quick troubleshooting. Whoever wants 
more than that can easily set up a rsyslog server to collect and keep the 
logs for a longer period. With this, only the modified Node 
configuration is written to the SD Card/USB Stick when it changes, which 
is not often, which makes it a reliable solution.
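
A minimal sketch of that rsyslog forwarding, assuming a hypothetical 
collector at 192.168.1.50 (client side, e.g. /etc/rsyslog.d/remote.conf):

    # Forward all facilities/priorities to the central collector over UDP
    *.* @192.168.1.50:514
    # (use @@ instead of @ for TCP)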


I personally have a Linux + libvirt solution installed and running on a 
USB Stick that does exactly this (writes all the logs in memory), and 
it has been running for 3+ years without any issues.


Fernando


On 20/07/2017 03:54, Lionel Caignec wrote:

Ok, thank you.

For now I'm not so advanced on the architecture design; I'm just thinking 
about what I can do.

Lionel

- Original Message -
From: "Yedidyah Bar David" 
To: "Lionel Caignec" 
Cc: "users" 
Sent: Thursday, 20 July 2017 08:03:50
Subject: Re: [ovirt-users] ovirt on sdcard?

On Wed, Jul 19, 2017 at 10:16 PM, Lionel Caignec  wrote:

Hi,

I'm planning to install some new hypervisors (oVirt) and I'm wondering if 
it's possible to get them installed on an SD card.
I know there are write limitations on this kind of storage device.
Is it a viable solution? Is there somewhere a tutorial about tuning oVirt 
on this kind of storage?

Perhaps provide some more details about your plans?

The local disk is normally used only for standard OS-level stuff -
mostly logging. If you put /var/log on NFS/iSCSI/whatever, I think
you should not expect much other local writing.
Didn't test this myself.

People are doing many other things, including putting all of the
root filesystem on remote storage. There are many options, depending
on your hardware, your existing infrastructure, etc.

Best,


Thanks

--
Lionel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Backup oVirt Node configuration

2017-07-18 Thread Fernando Frediani
Folks, I had a need to reinstall an oVirt Node a few times these days. This
meant reconfiguring it all in order to add it back to the oVirt Engine.

What is a better way to back up an oVirt Node configuration, so that when
you reinstall it, or if it fails completely, you can just reinstall it and
restore the backed-up files with network configuration, UUID, VDSM, etc.?

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bizzare oVirt network problem

2017-07-12 Thread FERNANDO FREDIANI

Hello Pavel

What do you mean by another oVirt instance? In one Datacenter it has 2 
different clusters (or Datacenters, in the oVirt way of organizing things), 
but in the other Datacenter the oVirt Node is standalone.

Let me know.

Fernando


On 12/07/2017 16:49, Pavel Gashev wrote:


Fernando,

It looks like you have another oVirt instance in the same network 
segment(s). Don’t you?


*From: *<users-boun...@ovirt.org> on behalf of FERNANDO FREDIANI 
<fernando.fredi...@upx.com>

*Date: *Wednesday, 12 July 2017 at 16:21
*To: *"users@ovirt.org" <users@ovirt.org>
*Subject: *[ovirt-users] Bizzare oVirt network problem

Hello.

I am facing a pretty bizarre problem on two of my Nodes running oVirt. 
A given VM running a few hundred Mbps of traffic simply stops passing 
traffic and only recovers after a reboot. Checking the bridge with 
'brctl showmacs BRIDGE' I see the VM's MAC address missing during this 
event.


It seems the bridge simply unlearns the VM's MAC address, which only 
returns when the VM is rebooted.
This problem happened on two different Nodes running on different 
hardware, in different datacenters, in different network architectures, 
with different switch vendors and different bonding modes.


The main differences these Nodes have compared to others I have and 
which don't show this problem are:

- The CentOS 7 installed is a Minimal installation instead of oVirt-NG
- The Kernel used is 4.12 (elrepo) instead of the default 3.10
- The ovirtmgmt network is used also for the Virtual Machine 
showing this problem.


Does anyone have any idea whether it may have anything to do with oVirt 
(any filters) or any of the components that differ from an oVirt-NG 
installation?


Thanks
Fernando



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Bizzare oVirt network problem

2017-07-12 Thread FERNANDO FREDIANI

Hello.

I am facing a pretty bizarre problem on two of my Nodes running oVirt. A 
given VM running a few hundred Mbps of traffic simply stops passing 
traffic and only recovers after a reboot. Checking the bridge with 
'brctl showmacs BRIDGE' I see the VM's MAC address missing during this 
event.


It seems the bridge simply unlearns the VM's MAC address, which only 
returns when the VM is rebooted.
This problem happened on two different Nodes running on different 
hardware, in different datacenters, in different network architectures, 
with different switch vendors and different bonding modes.


The main differences these Nodes have compared to others I have and 
which don't show this problem are:

- The CentOS 7 installed is a Minimal installation instead of oVirt-NG
- The Kernel used is 4.12 (elrepo) instead of the default 3.10
- The ovirtmgmt network is used also for the Virtual Machine 
showing this problem.


Does anyone have any idea whether it may have anything to do with oVirt (any 
filters) or any of the components that differ from an oVirt-NG installation?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-guest-agent - Ubuntu 16.04

2017-07-04 Thread FERNANDO FREDIANI
I am still getting problems with ovirt-guest-agent on Ubuntu machines in 
any scenario, new or upgraded installation.


One of the VMs has been upgraded to Ubuntu 17.04 (zesty) and the 
upgraded version of ovirt-guest-agent also doesn't start, due to something 
with Python.


When trying to run it manually with: "/usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py" I get the following 
error:
root@hostname:~# /usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py

*** stack smashing detected ***: /usr/bin/python terminated
Aborted (core dumped)

I also tried to install the previous version (16.04) from evilissimo, but 
that doesn't work either.


Fernando


On 30/06/2017 06:16, Sandro Bonazzola wrote:
Adding Laszlo Boszormenyi (GCS) <g...@debian.org 
<mailto:g...@debian.org>> which is the maintainer according to 
http://it.archive.ubuntu.com/ubuntu/ubuntu/ubuntu/pool/universe/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-1.dsc 



On Wed, Jun 28, 2017 at 5:37 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Hello

Is the maintainer of ovirt-guest-agent for Ubuntu on this mail list ?

I have noticed that if you install the ovirt-guest-agent package from
Ubuntu repositories it doesn't start. It throws an error about Python
and never starts. Has anyone noticed the same? The OS in this case
is a clean minimal install of Ubuntu 16.04.

Installing it from the following repository works fine -

http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04

<http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04>

Fernando




--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA <https://www.redhat.com/>

<https://red.ht/sig>  
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] download/export a VM image

2017-07-03 Thread FERNANDO FREDIANI

Have exactlly the same doubt here as well.


On 03/07/2017 12:05, aduckers wrote:

Running a 4.1 cluster with FC SAN storage.  I’ve got a VM that I’ve customized, 
and would now like to pull that out of oVirt in order to share with folks 
outside the environment.
What’s the easiest way to do that?
I see that the export domain is being deprecated, though I can still set one up 
at this time.  Even in the case of an NFS export domain though, it looks like 
I’d need to drill down into the exported file system and find the correct image 
based on VMID (I think..).

Is there a simple way to grab a VM image?

Thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Virtual Machine looses connectivity with no clear explanation

2017-07-03 Thread FERNANDO FREDIANI
I have a rather strange issue which is affecting one of my recently 
deployed Hypervisors. It is a CentOS 7 (not an oVirt Node) which runs only 
3 Virtual Machines.


One of these VMs has reasonable output traffic at peak (500 - 
700Mbps), and the hypervisor underneath is connected to the switch via a 
bonding (mode=2), which in turn creates bond0.XX interfaces which are 
connected to different bridges for each network. The VM in question is 
connected to the bridge "ovirtmgmt".


When the problem happens the VM stops passing traffic and cannot even 
reach the router or other VMs in the same Layer 2. It seems the bridge 
stops passing traffic for that particular VM. Other VMs have worked fine 
since they were created. When this problem happens I just need to go to 
its Console and run a reboot (Ctrl-Alt-Del); I don't even need to Power 
Off and Power On again using oVirt Engine.
I have even re-installed this VM's operating system from scratch but the 
problem persists. I have also changed the vNIC MAC address in case 
(already checked) of conflicting MAC addresses somewhere in that Layer 2.


Lastly, my hypervisor machine (due to a mistake) has been running with 
SELinux disabled; I am not sure whether it could have anything to do with 
this behavior.


Anyway, anyone has ever seen any behavior like that ?

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-guest-agent - Ubuntu 16.04

2017-06-28 Thread FERNANDO FREDIANI

Hello

Is the maintainer of ovirt-guest-agent for Ubuntu on this mail list ?

I have noticed that if you install the ovirt-guest-agent package from Ubuntu 
repositories it doesn't start. It throws an error about Python and never 
starts. Has anyone noticed the same? The OS in this case is a clean 
minimal install of Ubuntu 16.04.


Installing it from the following repository works fine - 
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04


Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt storage best practise

2017-06-14 Thread FERNANDO FREDIANI
I normally assume that any performance gains from directly attaching a 
LUN to a Virtual Machine, rather than using it in the traditional way, are 
too small to compensate for the extra hassle of doing that. I would avoid 
using it as much as I can, unless it is for some very special reason where 
you cannot do it any other way. The only real usage for it so far was 
Microsoft SQL Server clustering requirements.


Fernando


On 14/06/2017 03:23, Idan Shaby wrote:
Direct luns are disks that are not managed by oVirt. Ovirt 
communicates directly with the lun itself, without any other layer in 
between (like lvm in image disks).
The advantage of the direct lun is that it should have better 
performance since there's no overhead of another layer in the middle.
The disadvantage is that you can't take a snapshot of it (when 
attached to a vm, of course), can't make it a part of a template, 
export it, and in general - you don't manage it.



Regards,
Idan

On Mon, Jun 12, 2017 at 10:10 PM, Stefano Bovina > wrote:


Thank you very much.
What about "direct lun" usage and database example?


2017-06-08 16:40 GMT+02:00 Elad Ben Aharon >:

Hi,
Answer inline

On Thu, Jun 8, 2017 at 1:07 PM, Stefano Bovina
> wrote:

Hi,
does a storage best practise document for oVirt exist?


Some examples:

oVirt allows to extend an existing storage domain: Is it
better to keep a 1:1 relation between LUN and oVirt
storage domain?

What do you mean by 1:1 relation? Between storage domain and
the number of LUNs the domain reside on?

If not, is it better to avoid adding LUNs to an already
existing storage domain?

No problems with storage domain extension.


Following the previous questions:

Is it better to have 1 Big oVirt storage domain or many
small oVirt storage domains?

Depends on your needs; be aware of the following:
- Each domain has its own metadata, which allocates ~5GB of the
domain size.
- Each domain is constantly monitored by the system, so a
large number of domains can decrease system performance.
There are also downsides to having big domains, like less
flexibility.

Is there a max number of VMs/disks per storage domain?


In which case is it better to use "direct attached lun"
with respect to an image on an oVirt storage domain?


Example:

Simple web server: -> image
Large database (simple example):
   - root, swap etc: 30GB -> image?
   - data disk: 500GB -> (direct or image?)

Regards,

Stefano

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host name already in use in oVirt Engine

2017-06-13 Thread FERNANDO FREDIANI

Replying to my own email.

I managed to add the host again, but had to do some very manual steps 
(see the sketch after this list for running the database statements):

- Removed /etc/vdsm/vdsm.id from the Host
- engine=# select vds_id from vds_static where host_name = 'host.name';
- engine=# delete from vds_statistics where vds_id = 'host.name.uuid';
- engine=# delete from vds_dynamic where vds_id = 'host.name.uuid';
- engine=# delete from vds_static where vds_id = 'host.name.uuid';
- uuidgen > /etc/vdsm/vdsm.id on the Host
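
A sketch of how those statements can be run on the engine machine (the 
uuid placeholder is whatever the select returns; back up the engine 
database first):

    # Open a psql session on the engine database as the postgres user
    su - postgres -c "psql engine"
    # then, inside psql, run the select and the three deletes shown above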

Should I report this as a bug? When I removed everything from the Admin 
Web Interface it should have done all this cleanup in the database.


Fernando

On 13/06/2017 10:04, FERNANDO FREDIANI wrote:

Hello.

I had a previous Datacenter and Cluster with a Host in it which I have 
removed completely from oVirt Engine. In order to remove it I did the 
following steps:


- Removed all Virtual Machines on top of the Host
- Put the only Local Datastore in Maintenance mode (it didn't allow me to 
remove it for some reason; it said I had to remove the Datacenter instead)
- As the Datastore couldn't be removed, neither could the Host. I then 
removed the Datacenter and it removed everything.


Then I created a new Cluster and tried to add the same Host with the 
same hostname in it and I am getting the message: "Cannot add Host. 
The Host name is already in use, please choose a unique name and try 
again."


It seems that something was left behind in the database which still 
believes that host exists in oVirt Engine. How can I clean that up and 
add it again, as I am not willing to change its name?


Thanks
Fernando


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host name already in use in oVirt Engine

2017-06-13 Thread FERNANDO FREDIANI

Hello.

I had a previous Datacenter and Cluster with a Host in it which I have 
removed completely from oVirt Engine. In order to remove it I did the 
following steps:


- Removed all Virtual Machines on top of the Host
- Put the only Local Datastore in Maintenance mode (it didn't allow me to 
remove it for some reason; it said I had to remove the Datacenter instead)
- As the Datastore couldn't be removed, neither could the Host. I then 
removed the Datacenter and it removed everything.


Then I created a new Cluster and tried to add the same Host with the 
same hostname in it and I am getting the message: "Cannot add Host. The 
Host name is already in use, please choose a unique name and try again."


It seems that something was left behind in the database which still 
believes that host exists in oVirt Engine. How can I clean that up and 
add it again, as I am not willing to change its name?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Question about Clusters and Storage usage

2017-06-12 Thread FERNANDO FREDIANI

Hello folks.

I have here a scenario where I have one Datacenter and inside it I have 
one Cluster which has multiple hosts with Shared storage between them.


Now I am willing to add another standalone host with Local Storage only, 
and logic tells me to add it to the same Datacenter, as they are in 
fact in the same physical Datacenter; but as this host has only Local 
Storage I obviously shouldn't add it to the existing cluster.


Question is: As the Datacenter was created with Storage type Shared
- Should I create a new Cluster and add it to it, even though the 
new Host has only Local Storage
- Or should I create another Datacenter with Storage Type Local, a 
cluster within it and then add the Host there ?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host install common error

2017-06-08 Thread FERNANDO FREDIANI

Hello folks.

One of the most (if not the most) annoying problems of oVirt is the 
well-known message "... installation failed. Command returned failure code 1 
during SSH session ..." which happens quite often in several situations.


Scrubbing the installation logs, it seems that most stuff goes well, but then 
it stops with a message saying: "ERROR otopi.context 
context._executeMethod:151 Failed to execute stage 'Setup validation': 
Cannot locate vdsm package, possible cause is incorrect channels" - 
followed by another message: "DEBUG otopi.context 
context.dumpEnvironment:770 ENV BASE/exceptionInfo=list:'[(<type 
'exceptions.RuntimeError'>, RuntimeError('Cannot locate vdsm package, 
possible cause is incorrect channels',), <traceback object>)]'"


I am not sure why it would complain about the repositories, as this is a 
Minimal CentOS 7 install and the oVirt repository is added by 
oVirt-Engine itself, so I assumed it added the one most appropriate to its 
own version.
I even tried to copy over the same repositories used on the Hosts that 
are installed and working fine but that message shows up again on the 
Install retries.


Does anyone have any other hints where to look ?

For reference my engine version running is: 4.1.1.6-1.el7.centos.

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Performance differences between ext4 and XFS

2017-06-08 Thread FERNANDO FREDIANI
Many thanks for your input Markus. It helps to decide before putting the 
server in production.


Regards
Fernando


On 08/06/2017 02:19, Markus Stockhausen wrote:

Hi Fernando,

we personally like XFS very much. But XFS + qcow2 (even for snapshots 
in oVirt)
comes close to a no-go these days. We are experiencing excessive 
fragmentation.

For more info see unresolved Redhat Info:

https://access.redhat.com/solutions/532663

Even with tuning the XFS allocation policy on the qcow2 directory with

xfs_io -c 'extsize -R 2M' 
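
For completeness, the full invocation would look something like this 
(the directory path is hypothetical):

    # Set a 2MB extent-size hint recursively on the directory with the qcow2 files
    xfs_io -c 'extsize -R 2M' /data/images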

A nice 3rd party explanation can be found here:

https://blog.codecentric.de/en/2017/04/xfs-possible-memory-allocation-deadlock-kmem_alloc/

Markus


*From:* users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf 
of FERNANDO FREDIANI [fernando.fredi...@upx.com]

*Sent:* Wednesday, 7 June 2017 23:35
*To:* users@ovirt.org
*Subject:* [ovirt-users] Performance differences between ext4 and XFS

Just wanted to find out what filesystem people are using to host 
Virtual Machines in qcow2 files on a filesystem on local storage: ext4 
or XFS?


I normally like XFS for big files, which is the case for VMs, but 
wondered if anyone could see any performance advantage when compared 
with ext4.


Fernando


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Performance differences between ext4 and XFS

2017-06-07 Thread FERNANDO FREDIANI
Just wanted to find out what filesystem people are using to host Virtual 
Machines in qcow2 files on a filesystem on local storage: ext4 or XFS?


I normally like XFS for big files, which is the case for VMs, but wondered 
if anyone could see any performance advantage when compared with ext4.


Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-25 Thread FERNANDO FREDIANI

RAID 6 doesn't make exactly 3 copies of data.

I think storage is expensive enough, compared to the total cost of the 
platform, that 3 copies is a waste of storage, or a luxury, given that if 
you have a permanent failure you can still make a new 2nd copy of the 
data, provided you have storage left for that.



On 25/04/2017 10:26, Donny Davis wrote:
I personally want three copies of my data, more akin to RAID 6(ish) so 
in my case replica 3 makes perfect sense.


On Mon, Apr 24, 2017 at 11:34 AM, Denis Chaplygin <dchap...@redhat.com 
<mailto:dchap...@redhat.com>> wrote:


Hello!

On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:

Hi Denis, understood.
What if in the case of adding a fourth host to the running
cluster, will the copy of data be kept only twice in any of
the 4 servers ?


replica volumes can be built only from 2 or 3 bricks. There is no
way to make a replica volume from 4 bricks.

But you may combine distributed volumes and replica volumes [1]:

gluster volume create test-volume replica 2 transport tcp
server1:/b1 server2:/b2 server3:/b3 server4:/b4

test-volume would be like a RAID 10 - you will have two replica
volumes, b1+b2 and b3+b4, combined into a single distributed volume.
In that case you will
have only two copies of your data. Part of your data will be
stored twice on b1 and b2, and another part will be stored
twice on b3 and b4.
You will be able to extend that distributed volume by adding new
replicas.


[1]

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes

<https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Ok, great, thanks for the clarification.

Therefore a replica 3 configuration means the raw storage space cost is 
'similar' to RAID 1, and the actual data exists only 2 times, on two 
different servers.


Regards
Fernando


On 24/04/2017 11:35, Denis Chaplygin wrote:
With arbiter volume you still have a replica 3 volume, meaning that 
you have three participants in your quorum. But only two of those 
participants keep the actual data. Third one, the arbiter, stores only 
some metadata, not the files content, so data is not replicated 3 times.


On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


But then quorum doesn't replicate data 3 times, does it ?

Fernando


On 24/04/2017 10:24, Denis Chaplygin wrote:

Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:

Out of curiosity, why do you and people in general use replica 3
more than replica 2?


The answer is simple - quorum. With just two participants you
don't know what to do when your peer is unreachable. When you
have three participants, you are able to establish a majority. In
that case, when two participants are able to communicate, they
know they form the majority, and the lesser part of the cluster
knows that it should not accept any changes.

If I understand correctly this seems overkill and a waste of
storage, as 2 copies of data (replica 2) seems pretty
reasonable, similar to RAID 1, and still in the worst case the
data can be replicated after a failure. I see that replica 3
helps more on performance at the cost of space.


You are absolutely right. You need two copies of data to provide
data redundancy and you need three (or more) members in cluster
to provide distinguishable majority. Therefore we have arbiter
volumes, thus solving that issue [1].

[1]

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

<https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/>





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Hello.

Out of curiosity, why do you and people in general use replica 3 
more than replica 2?


If I understand correctly this seems overkill and a waste of storage, as 2 
copies of data (replica 2) seems pretty reasonable, similar to RAID 1, 
and still in the worst case the data can be replicated after a failure. I 
see that replica 3 helps more on performance at the cost of space.


Fernando


On 24/04/2017 08:33, Sven Achtelik wrote:


Hi All,

my oVirt setup is 3 Hosts with gluster and replica 3. I always try to 
stay on the current version and I'm applying updates/upgrades if there 
are any. For this I put a host in maintenance and also use the "Stop 
Gluster Service" checkbox. After it's done updating I'll set it back 
to active and wait until the engine sees all bricks again, and then 
I'll go for the next host.


This worked fine for me over the last months, and now that I have more and 
more VMs running, the changes that are written to the gluster volume 
while a host is in maintenance have become a lot more, and it takes pretty 
long for the healing to complete. What I don't understand is that I 
don't really see a lot of network usage in the GUI during that time, 
and it feels quite slow. The network for the gluster is 10G and I'm 
quite happy with its performance; it's just the healing that 
takes long. I noticed that because I couldn't update the third host 
because of unsynced gluster volumes.


Is there any limiting variable that slows down traffic during healing 
and needs to be configured? Or should I maybe change my updating 
process somehow to avoid having so many changes in the queue?
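
If it helps, there are a couple of self-heal tuning knobs worth looking 
at in recent Gluster releases (volume name and values here are only 
examples, not recommendations):

    # More parallel self-heal daemon threads per brick
    gluster volume set myvol cluster.shd-max-threads 4
    # Longer self-heal daemon wait queue
    gluster volume set myvol cluster.shd-wait-qlength 2048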


Thank you,

Sven



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] CentOS 7 and kernel 4.x

2017-04-19 Thread FERNANDO FREDIANI

Hi folks

Is anyone using KVM Nodes running CentOS with an upgraded kernel, like 
Elrepo's 4.5 (lt) or 4.10 (ml), and noticed any improvements due to 
that?


What about oVirt-Node-NG? I don't really like to make many changes to the 
oVirt-Node image, but I wanted to hear from whoever may have done that and 
is having good and stable results. And if so, is there a way to build 
an install image with one of those newer kernels?


Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Latency threshold between Hosted Engine and Hosts

2017-04-17 Thread FERNANDO FREDIANI

Hello.

I have an Engine which is hosted in an optimal location for the people who 
access it, and this Engine manages multiple Datacenters, some close by and 
some far away in terms of latency.


What is the maximum latency advised between the Engine and the hosts for 
healthy operation, or does that not matter much as long as the Engine can 
always reach the hosts?


Currently the maximum latency I have between Engine and Hosts is 110ms 
and sometimes when there is a non-optimal route latency goes up to 
170ms. Should I be concerned about this ?


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-17 Thread FERNANDO FREDIANI
Raskoshnyi <konra...@gmail.com <mailto:konra...@gmail.com>> wrote:

Hi Fernando,
I see each host has a direct nfs mount, but yes, if the main host
through which I connected the nfs storage goes down, the storage
becomes unavailable and all vms are down.

On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:

Hello Konstantin.

That doesn't make much sense: making a whole cluster depend on a
single host. From what I know, any host talks directly to the NFS
storage array or whatever other shared storage you have. Have you
tested whether that host going down affects the others, with the
NFS mounted directly on an NFS storage array?

Fernando

2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com
Re: [ovirt-users] storage redundancy in Ovirt

2017-04-17 Thread FERNANDO FREDIANI
If it works this way then it's a huge downside in the architecture. 
Perhaps someone can clarify in more detail.


Fernando


On 15/04/2017 14:53, Konstantin Raskoshnyi wrote:

Hi Fernando,
I see each host has a direct nfs mount, but yes, if the main host 
through which I connected the nfs storage goes down, the storage becomes 
unavailable and all vms are down.



On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI 
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:


Hello Konstantin.

That doesn't make much sense: making a whole cluster depend on a
single host. From what I know, any host talks directly to the NFS
storage array or whatever other shared storage you have.
Have you tested whether that host going down affects the others,
with the NFS mounted directly on an NFS storage array?

Fernando

2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi
<konra...@gmail.com <mailto:konra...@gmail.com>>:

In oVirt you have to attach storage through a specific host.
If that host goes down the storage is not available.

On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>>
wrote:

Well, make it not go through host1: dedicate a storage
server for running NFS and make both hosts connect to it.
In my view NFS is much easier to manage than any other
type of storage, especially FC and iSCSI, and performance is
pretty much the same, so you won't get better results
from another type, other than in manageability.

Fernando

2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi
<konra...@gmail.com <mailto:konra...@gmail.com>>:

Hi guys,
I have one nfs storage,
it's connected through host1.
host2 also has access to it, I can easily migrate
vms between them.

The question is - if host1 is down - all
infrastructure is down, since all traffic goes through
host1,
is there any way in oVirt to use redundant storage?

Only glusterfs?

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

