[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-20 Thread Alex McWhirter
Largely package / feature support. RHEL is clearly betting the farm on 
OpenShift / OKD. Which is fine, but the decisions to deprecate / remove 
things in RHEL (SPICE/QXL, Gluster) are also reflected in CentOS Stream. 
Even if you want to backport things to Stream as rebuilds of old / 
existing packages to re-enable some of those features, you are now 
fighting a moving target. It would be easier to target RHEL than Stream 
if that is the goal.


Fedora has no such deprecation of features, has a larger package 
library, and more room to grow oVirt into something more compelling. If 
the decision is made to base on CentOS Stream, you might as well base on 
Fedora instead, as neither is going to have the full enterprise life 
cycle of RHEL and both will break things here and there. At least with 
Fedora you don't have to maintain an ever-growing list of packages just 
to keep oVirt's feature set intact.


In short, targeting RHEL over Fedora made sense when CentOS existed as a 
downstream rebuild, when RHV was still a product, and when the entire 
oVirt feature set was supported by RHEL. None of those things are true 
today, and instead of targeting a pseudo-RHEL where you still have to 
maintain a bunch of extra deprecated packages without the lifecycle 
commitment, Fedora makes more sense to me.


My two cents anyway: for my use case, not having Gluster or SPICE is a 
breaking change. While I wouldn't mind contributing to oVirt here and 
there as needed if someone picks up the pieces, I don't have the 
resources to also maintain the growing list of deprecated / cut 
features in the base OS.


On 2023-07-14 02:27, Sandro Bonazzola wrote:

On Fri, 14 Jul 2023 at 00:07, Alex McWhirter wrote:


I would personally put CloudStack in the same category as OpenStack.
Really the only difference is monolithic vs microservices. Both scale
quite well regardless, just different designs. People can weigh the pros
and cons of either to figure out what suits them. CloudStack is
certainly an easier thing to wrap your head around initially.


[cut]

Otherwise, the closest FOSS thing I have found (I spent over a year 
evaluating) is CloudStack. I am still hopeful someone will step up to 
maintain oVirt (Oracle?), but it's clear to me at this point it will 
need to be rebased onto Fedora or something else to keep its feature 
set fully alive.


I'm curious, why do you think a Fedora rebase is necessary to keep oVirt 
alive?
We tried that for years and gave up, as Fedora is moving way too fast to 
keep oVirt aligned with the changes.
CentOS Stream 9's EOL is estimated to be 2027 according to 
https://centos.org/stream9/ and I expect CentOS Stream 10 to show up by 
the end of this summer according to 
https://www.phoronix.com/news/CentOS-Stream-10-Start (well, official GA 
will be in a year but I guess people can start playing with it much 
earlier).


--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING - Red Hat In-Vehicle Operating System

Red Hat EMEA [1]

sbona...@redhat.com

[1]

Red Hat respects your work life balance. Therefore there is no need to 
answer this email out of your office hours.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NTLNZVCFU525GKINY5U6VCHPVC5AFFCT/




Links:
--
[1] https://www.redhat.com/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7PVJ5DMBWB6QPXBWOKU7UOTP2C4KGE62/


[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Alex McWhirter
I would personally put CloudStack in the same category as OpenStack. 
Really the only difference is monolithic vs microservices. Both scale 
quite well regardless, just different designs. People can weigh the pros 
and cons of either to figure out what suits them. CloudStack is 
certainly an easier thing to wrap your head around initially.


There is also OpenNebula, similar to CloudStack in design. However, their 
upgrade tool is closed source, so I can't recommend them as much as I 
would like to, as I think they have a decent product otherwise.


ProxMox currently doesn't work as an oVirt replacement for us, as it 
cannot scale beyond a single cluster and has limited multi-tenancy 
support. I hear they have plans for multi-cluster at some point, but 
currently I regard it as more akin to ESXi without vSphere, whereas I 
would compare oVirt more closely to ESXi with vSphere.


I think XCP-ng has a bright future at the rate they are going; it just 
doesn't currently supply all the features of oVirt.


OKD (OpenShift) virtualization is more or less just KubeVirt. If all you 
need VMs for is a few server instances, it works well enough. Feature-wise 
it's not comparable to everything above, but if you can live with your 
VMs being treated more or less like containers workflow / lifecycle 
wise, and don't need much more than that, it probably gets the job done.


Harvester is another consideration if you think the OKD solution will 
work, but you don't like the complexity. It's based on k3s, provides 
KubeVirt, and is a much simpler install. I think they even provide 
an ISO installer.



The reality is, there is no great oVirt replacement, so to speak. If you 
only need oVirt to manage a single cluster, ProxMox probably fits the 
bill, and XCP-ng as well if you are willing to do some legwork.


Otherwise, the closest FOSS thing I have found (I spent over a year 
evaluating) is CloudStack. I am still hopeful someone will step up to 
maintain oVirt (Oracle?), but it's clear to me at this point it will need 
to be rebased onto Fedora or something else to keep its feature set fully 
alive.



On 2023-07-13 11:10, Volenbovskyi, Konstantin via Users wrote:

Hi,
We switched from Gluster to NFS provided by a SAN array: maybe it was a
matter of a combination of factors (configuration/version/whatever),
but it was unstable for us.
SPICE/QXL in RHEL 9: yeah, I understand that for some people it is
important (I saw that someone is doing some forks, whatever)

I think that oVirt 4.5 (nightly build) might be OK for some time,
but I think that the alternatives are:
-OpenStack for larger setups (but be careful with the distribution - as I
remember, Red Hat is abandoning TripleO and introducing OpenShift-based
tooling for installation of OpenStack)
-ProxMox and CloudStack for all sizes.
-Maybe XCP-ng + (paid?) XenOrchestra, but I trust KVM/QEMU more than 
Xen

OpenShift Virtualization/OKD Virtualization - I don't know...
Actually might be good if someone specifically comments on going from
ovirt to OpenShift Virtualization/OKD Virtualization.

Not sure if this statement below
https://news.ycombinator.com/item?id=32832999 is still correct and
what exactly are the consequences of 'OpenShift Virtualization is just
to give a path/time to migrate to containers'
"The whole purpose behind OpenShift Virtualization is to aid in
organization modernization as a way to consolidate workloads onto a
single platform while giving app dev time to migrate their work to
containers and microservice based deployments."



BR,
Konstantin

On 13.07.23, 09:10, "Alex McWhirter" <a...@triadic.us> wrote:


We still have a few oVirt and RHV installs kicking around, but between
this and some core features we use being removed from el8/9 (gluster,
spice / qxl, and probably others soon at this rate) we've heavily been
shifting gears away from both Red Hat and oVirt. Not to mention the
recent drama...


In the past we toyed around with the idea of helping maintain oVirt, 
but

with the list of things we'd need to support growing beyond oVirt and
into other bits as well, we aren't equipped to fight on multiple fronts
so to speak.


For the moment we've found a home with SUSE / Apache CloudStack, and
when el7 EOL's that's likely going to be our entire stack moving
forward.


On 2023-07-13 02:21, eshwa...@gmail.com wrote:

I am beginning to have very similar thoughts. It's working fine for
me now, but at some point something big is going to break. I already
have VMWare running, and in fact, my two ESXi nodes have the exact
same hardware as my two KVM nodes. Would be simple to do, but I
really don't want to go just yet. At the same time, I don't want to
be the last person turning off the lights. Difficult times.

[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Alex McWhirter
We still have a few oVirt and RHV installs kicking around, but between 
this and some core features we use being removed from el8/9 (gluster, 
spice / qxl, and probably others soon at this rate), we've been heavily 
shifting gears away from both Red Hat and oVirt. Not to mention the 
recent drama...


In the past we toyed around with the idea of helping maintain oVirt, but 
with the list of things we'd need to support growing beyond oVirt and 
into other bits as well, we aren't equipped to fight on multiple fronts 
so to speak.


For the moment we've found a home with SUSE / Apache CloudStack, and 
when el7 EOLs that's likely going to be our entire stack moving 
forward.


On 2023-07-13 02:21, eshwa...@gmail.com wrote:

I am beginning to have very similar thoughts.  It's working fine for
me now, but at some point something big is going to break.  I already
have VMWare running, and in fact, my two ESXi nodes have the exact
same hardware as my two KVM nodes.  Would be simple to do, but I
really don't want to go just yet.  At the same time, I don't want to
be the last person turning off the lights.  Difficult times.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJFIRAT6TNCS5TZUFPGBV5UZSCBW6LE4/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EFBHZ76GZ73HA52XJMGTRGTYCPIGNPHH/


[ovirt-users] Re: About oVirt’s future

2022-11-21 Thread Alex McWhirter
I have some manpower I'm willing to throw at oVirt, but I somewhat need 
to know whether what the community wants and what we want are in line.


1. We'd bring back SPICE and maybe QXL. We are already maintaining forks 
of the oVirt and RH kernels for this. We currently use oVirt for a lot 
of VDI solutions paired with NVIDIA GRID. Not usable over VNC.


2. Hyperconverged storage is important. I'd suggest integrations with 
LINSTOR. It seems well aligned with oVirt. Bringing back Gluster is 
likely the wrong move.


3. oVirt desperately needs VXLAN support, ideally integrating with FRR 
on the backend so an oVirt node can just plug into an existing EVPN/VXLAN 
setup.


4. Some things need to be cut from oVirt, namely hosted engine and maybe 
the Grafana stuff. Not that these aren't nice to have, but hosted engine 
rarely actually works reliably in its current state (compare mailing 
list complaints about hosted engine vs other issues), and the Grafana 
stuff can be pushed into a sub-project.


5. It needs to do containers. Doesn't need to be the behemoth of OKD, 
but something more along the lines of what k3s does, for now.


These are the things I'd like to see done, along with maybe cutting back 
some of the RHEL-specific stuff to allow Debian deployments (which would 
massively help with the user base). Basically, for the past year I've 
been tasked with figuring out whether we are going to fork oVirt 
internally or move to OpenNebula. I prefer the oVirt option if the 
opportunity now exists to take things in another direction.


On 2022-11-15 03:31, Sandro Bonazzola wrote:

On Mon, 14 Nov 2022 at 23:40, Frank Wall wrote:



Hi Didi,

thanks for keeping us updated. However, I'm concerned...

Ultimately, the future of oVirt lies in the hands of the community. 
If

you, as a community member, use and like oVirt, and want to see it
thrive, now is the best time to help with this!


I don't want to be rude, but this sounds to me like no developers
have shown interest in keeping oVirt alive. Is this true? Is no other
company actively developing oVirt anymore?


I've directly contacted all the companies with oVirt downstreams I was 
aware of.
I also contacted almost all the universities that asked for help on 
this mailing list.

I ended up contacting the major RHEL derivative distributions.
So far nobody stepped in to take an active role in the oVirt project.
I saw some patches coming from individual contributors here and there, 
but no company investment so far.



We worked hard over the last year or so on making sure the oVirt
project will be able to sustain development even without much
involvement from us - including moving most of the infrastructure 
from

private systems that were funded by/for oVirt/RHV, elsewhere - code
review from Gerrit to GitHub, and CI (Continuous Integration) from
jenkins to GitHub/Copr/CentOS CBS.


I appreciate the effort to make the source code accessible. However,
I'm also wondering: was any sort of governing organization 
established,

so that development could actually take place when RedHat pulls the
plug?


Yes, oVirt has an open governance: 
https://www.ovirt.org/community/about/governance.html
Right now on the oVirt board, other than Red Hat, there's a member from 
Caltech: https://www.ovirt.org/community/about/board.html


The answer to this is probably related to my previous question, 
whether

or not there are any non-RedHat developers involved.

Ciao
- Frank
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5DQ3OLT3B5QALLFUK4OMKDYJEJXSYP7A/


--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING - Red Hat In-Vehicle OS

Red Hat EMEA [1]

sbona...@redhat.com

[1]

Red Hat respects your work life balance. Therefore there is no need to 
answer this email out of your office hours.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IVCGZXVNOAFE44ASIH6UFZGURL3OUFRW/




Links:
--
[1] https://www.redhat.com/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NV2TD2L4ZCKSLTG46VFNWDNHPZBHGNOC/


[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Alex McWhirter

Oh, I have spent years looking.

ProxMox is probably the closest option, but has no multi-clustering 
support. The clusters are more or less isolated from each other, and 
would need another layer if you needed the ability to migrate between 
them.


XCP-ng, cool. No SPICE support. No open source UI for managing clustered 
storage.


Harvester, probably the closest / newest contender. Needs a lot more 
attention / work.


OpenNebula, more like a DIY AWS than anything else, but it was functional 
the last time I played with it.




Has anyone actually played with OpenShift Virtualization (which replaces 
RHV)? I wonder if OKD supports it with a similar model?


On 2022-02-05 07:40, Thomas Hoberg wrote:

There is unfortunately no formal announcement on the fate of oVirt,
but with RHGS and RHV having a known end-of-life, oVirt may well shut
down in Q2.

So it's time to hunt for an alternative for those of us who came to
oVirt because they had already rejected vSAN or Nutanix.

Let's post what we find here in this thread.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R4YFNNCTW5VVVRKSV2OORQ2UWZ2MTUDD/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYPC6QXF55UCQPMQL5LDU6XMAF2CZOEG/


[ovirt-users] Re: no QXL ?

2021-12-07 Thread Alex McWhirter
It's being removed from RHEL 9, unsure of reasoning. 


So this means that oVirt cannot offer SPICE/QXL on RHEL 9: there is no
SPICE package, QEMU is compiled without SPICE/QXL support, and the kernel
does not ship the QXL video driver. 


It's not that oVirt is killing off SPICE/QXL, but rather that RHEL 9 is, and
oVirt cannot support a feature not available on the host OS unless 3rd-party
packages or a SIG make it available. 


On 2021-12-07 14:14, Patrick Hibbs wrote:

Hello, 

Can I ask why this is being removed? 

The linked bugzilla report doesn't give a reason, and at least two others have expressed concerns over SPICE's deprecation. 

Personally, I would like to know why it's being removed entirely with no recourse instead of becoming an option to enable in the VM config, or an optional RPM that can be installed by the sysadmin. 

Thanks. 

On Tue, 2021-12-07 at 09:41 +0200, Arik Hadas wrote: 

On Tue, Dec 7, 2021 at 8:33 AM Patrick Hibbs  wrote: 

Hello, 

Are we to assume that VNC mode is the only thing that will be supported for the VM consoles moving forward then?  
As the pure SPICE mode only works with QXL display as far as I can tell. 

I ask because the VNC or SPICE+VNC modes haven't worked in my environment for over a year now, and that change 
would effectively prevent the use of any VM console in my environment.  (Use of VNC with remote viewer always gives 
me an authentication error.) Not that it's a normal environment, but that kind of thing should be advertised more. Just in case 
similar issues exist in other deployments. 

Yes, one would need to make sure VNC/VGA works well before upgrading to the next cluster level (in oVirt 4.5). 
In general it is recommended to test the configuration in the new cluster level by setting some representative VMs in the environment to a custom compatibility level and checking that they work properly before upgrading to that cluster level. 

Thanks. 

On Mon, 2021-12-06 at 22:03 +0200, Arik Hadas wrote: 

On Mon, Dec 6, 2021 at 8:45 PM lejeczek via Users  wrote: 


On 06/12/2021 17:42, lejeczek via Users wrote:

Hi.

I've Qemu/Libvirt from 
ovirt-release-master-4.5.0-0.0.master.20211206152702.gitebb0229.el9.noarch 
and it seems QXL is not there.

Is that a fluke or intention?
Do you have QXL working?

Oops... pardon me, these are from CentOS 9 Stream's own repos 
actually. 

Right, and that's the reason for the ongoing work on removing qxl on cluster level 4.7: 
https://bugzilla.redhat.com/show_bug.cgi?id=1976607 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/DZMAQQJMPHD2L4DPVHTET5N4KB4MZDUY/ 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/22NZAQL46WMEFFKQ66EKZBHGE5KCX3MY/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GT5YYVAFM4P7AMCFFCJNYZO75Y6M3H4R/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UVPLNOGVY6DD6S7HKC5JEZ43N4NLHMRY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GA7YUKUDR466NGHP7TMP6TFQLWFMH4WU/


[ovirt-users] Re: no QXL ?

2021-12-07 Thread Alex McWhirter
Additionally, should this go forward, I would be interested in 
maintaining a 3rd-party repo with patched packages to keep SPICE/QXL 
support, if anyone else would like to join.


On 2021-12-07 10:08, Alex McWhirter wrote:

I've sent my concerns to Red Hat; this would force us to look at other
software and more than likely no longer be Red Hat customers.

On 2021-12-07 09:15, Neal Gompa wrote:
On Tue, Dec 7, 2021 at 8:49 AM Rik Theys  
wrote:


Hi,

Will SPICE be deprecated or fully removed in oVirt 4.5?

Since spice is still more performant than VNC and also has more 
features such as USB redirection, why is it being phased out?


Which connection method should we use to connect clients with a VM 
from a pool when USB redirection is also a requirement?




There will be no way to do it. Please file bugs and support cases to
Red Hat telling them these features are needed. Then maybe they'll
reconsider...

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E7RPORLDTLLLSTUQDQPONFJYKA64Z6BP/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WEYNY7DKWYOFNRBT7XBRAY5EVRPUCET7/


[ovirt-users] Re: no QXL ?

2021-12-07 Thread Alex McWhirter
I've sent my concerns to Red Hat; this would force us to look at other 
software and more than likely no longer be Red Hat customers.


On 2021-12-07 09:15, Neal Gompa wrote:
On Tue, Dec 7, 2021 at 8:49 AM Rik Theys  
wrote:


Hi,

Will SPICE be deprecated or fully removed in oVirt 4.5?

Since spice is still more performant than VNC and also has more 
features such as USB redirection, why is it being phased out?


Which connection method should we use to connect clients with a VM 
from a pool when USB redirection is also a requirement?




There will be no way to do it. Please file bugs and support cases to
Red Hat telling them these features are needed. Then maybe they'll
reconsider...

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E7RPORLDTLLLSTUQDQPONFJYKA64Z6BP/


[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-17 Thread Alex McWhirter

On 2021-11-17 13:50, Sina Owolabi wrote:

Ok, thanks.  
Sounds odd, but no problem.  

How do I make the new VM use its own disk, named after itself? 

On Wed, 17 Nov 2021 at 19:45, Alex McWhirter  wrote: 


On 2021-11-17 12:02, notify.s...@gmail.com wrote:

Hi All

I'm very stumped on how to create VMs from templates I've made, but
having them installed with their own disks.
Please can someone guide me on how to do this?
I have oVirt running, with local storage hypervisors.

Anytime I try to use a template, the VM is created and booted with the
template's disk.
I would especially appreciate how to do this with Ansible.
I'm trying to automate CentOS and Ubuntu VMs.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OX5MWWMYAW4OTYE4NFETM4WFL2YEQFBZ/


When you make a VM from a template there are two possibilities.

If the VM type is set to desktop, a qcow overlay is created against the 
template's disk images. Any changes made in the VM are stored in the 
overlay.


If the VM type is set to server, the template's disks are copied to new 
disks. Each will have the same name as the template disk, but it is in 
fact a new disk with the template data copied over.

--
Sent from MetroMail


You can create a template with no disk; VMs created from that
template will also have no disk. Then add a new disk to the VM after you
create it. This is how the default blank template works. You can also
create a template with an empty disk, then every VM created will also
get an empty disk by default. You can always rename disks as well.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TBIPIMGW7C4Q2IKQWZEKJJRCHMKK2FZC/


[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-17 Thread Alex McWhirter

On 2021-11-17 12:02, notify.s...@gmail.com wrote:

Hi All

I'm very stumped on how to create VMs from templates I've made, but
having them installed with their own disks.
Please can someone guide me on how to do this?
I have oVirt running, with local storage hypervisors.

Anytime I try to use a template, the VM is created and booted with the
template's disk.
I would especially appreciate how to do this with Ansible.
I'm trying to automate CentOS and Ubuntu VMs.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OX5MWWMYAW4OTYE4NFETM4WFL2YEQFBZ/


When you make a VM from a template there are two possibilities.

If the VM type is set to desktop, a qcow overlay is created against the 
template's disk images. Any changes made in the VM are stored in the 
overlay.


If the VM type is set to server, the template's disks are copied to new 
disks. Each will have the same name as the template disk, but it is in 
fact a new disk with the template data copied over.

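Since the original question also asked about automation, below is a rough, 
untested sketch of the same thing using the oVirt Python SDK 
(ovirt-engine-sdk4). The engine URL, credentials, cluster, template and VM 
names are placeholders; the key bit is the clone flag on the add call, 
which, if I remember the SDK correctly, mirrors the REST API's clone=true 
query parameter and forces independent copies of the template disks, i.e. 
the "server" behaviour described above.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders - point these at your own engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,  # use ca_file='/path/to/ca.pem' in a real setup
)

vms_service = connection.system_service().vms_service()

# clone=True asks the engine to copy the template disks into new,
# independent disks instead of layering a qcow overlay on top of them.
vm = vms_service.add(
    types.Vm(
        name='web01',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='centos-template'),
    ),
    clone=True,
)

connection.close()

If you'd rather stay in Ansible, the ovirt.ovirt.ovirt_vm module exposes a 
similar clone option as far as I recall, so the same idea should carry over.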
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XKM4XJD7IU6CYDY2GTG3WOLXEZP2QVW4/


[ovirt-users] Re: Cannot to update hosts, nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by vdsm-4.40.90.4-1.el8.x86_64

2021-11-03 Thread Alex McWhirter

On 2021-11-03 16:52, Patrick Lomakin wrote:

I think it's a bug. I couldn't find any "libvirt-daemon-kvm" RPM
package in the CentOS or oVirt repos (only libvirt-daemon-kvm 7.0.0). Try
using the --nobest flag to install updates.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JR4SOVSBBMM4TIJIQXV6JMXZNAXR3QXB/


7.6.0-4 is currently what is available on Stream. I believe 8.4 is no 
longer supported as a host OS (CentOS or RHEL), as they list CentOS 
Stream, Node, and RHEL 8.5 Beta as the supported OSes for hosts.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SP6ENO2V3HMPQ54KTTHHRKFQQ5ZVAJMV/


[ovirt-users] Re: The Engine VM (/32) and this host (/32) will not be in the same IP subnet.

2021-10-27 Thread Alex McWhirter

On 2021-10-27 16:09, Sina Owolabi wrote:

It's really weird.
Just tried again, with the same failure, on a freshly reinstalled 
CentOS 8.

Server has a number of vlan interfaces, on a physical interface
enp2s0f1, all in the defined notation,
one vlan interface has an IP, 10.200.10.3/23,
Second physical interface enp2s0f0 is configured for 10.200.30.3/23,
is the interface with a gateway and DNS, and the router can provide
other IPs with DHCP here, and which I hope to have ovirtmgmt on.
I run hosted-engine --deploy, I select the gateway for the enp2s0f1
vlan (10.200.10.1).

Please indicate the gateway IP address [10.200.30.1]: 10.200.10.1
Please indicate a nic to set ovirtmgmt bridge on (enp2s0f1,
enp2s0f1.1014, enp2s0f1.1016, enp2s0f1.1015, enp2s0f1.1005, enp2s0f0)
[enp2s0f1.1014]: enp2s0f0
  Please specify which way the network connectivity should be
checked (ping, dns, tcp, none) [dns]:dns
  How should the engine VM network be configured? (DHCP,
Static)[DHCP]: Static
  Please enter the IP address to be used for the engine VM []:
10.200.30.10
[ ERROR ] The Engine VM (10.200.30.10/32) and this host
(10.200.30.3/32) will not be in the same IP subnet.
 Static routing configuration are not supported on automatic
VM configuration.

What can I try differently?



Try specifying the subnet along with the engine IP.

I.e. instead of 10.200.30.10, enter 10.200.30.10/23
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JC6ORL2ALO4XNT2NJUISIDIEKSKYSRNO/


[ovirt-users] Re: Add nodes to single node gluster hyperconverged

2021-08-27 Thread Alex McWhirter

On 2021-08-27 13:24, Thomas Hoberg wrote:


I'd rather doubt the GUI would help you there and what's worse, the
GUI doesn't easily tell you what it tries to do. By the time you've
found and understood what it tries from the logfiles, you'd have it
done on your own.


It's an unfortunate thing that the GUI's assumptions can be 
counter-intuitive in many regards, but I can confirm you can do this in 
the GUI as long as the gluster volume was created in the GUI in the first 
place. If not, it will not show up as a volume.



Now whether or not oVirt will then treat such a hand-made 1->3 node
cluster like a 3 node HCI built by itself is something I've never
tried.


It will as long as the above gluster volume requirement is met, although 
if using gluster for hosted engine you typically end up doing that by 
hand.



If I had successfully tested the 3 node to 6 and 9 node expansion, I'd
perhaps be more confident. But it could just turn out that without
fiddling with the postgres database in the management engine this
won't happen.


I do this quite often: add hosts to compute, add bricks to a gluster 
volume, done deal. However, we do tend to use ZFS underneath, which is 
outside the scope of oVirt. It allows us to re-use existing storage 
elements (provided there is enough space) to create temporary volumes 
to assist in the process.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M5VC5AP6V6BV2VNQT46DKNF253W3ECPL/


[ovirt-users] Re: oVirt and the future

2021-08-27 Thread Alex McWhirter

On 2021-08-27 13:09, Thomas Hoberg wrote:
Ubuntu support: I feel ready to bet a case of beer that that won't 
happen.


I'd tend to agree with this. oVirt embeds itself so deeply into the RHEL 
architecture that moving to anything else that doesn't provide
the same provisions would be a huge undertaking. You'd almost need to 
develop a new VDSM from scratch per distro. Supporting the latest Fedora,
however, may be an option with a more realistic amount of required 
effort.



oVirt lives in a niche, which doesn't have a lot of growth left.


Is it, though? I'd almost compare oVirt to VMware's suite of products, 
which sees a ton of use. The problem oVirt faces is the lack of a 
consistent and easy deployment model.



It's really designed to run VMs on premise, but once you're fully VM
and containers, cloud seems even more attractive and then why bother
with oVirt (which has a steep learning curve)?


It depends on the market really; there are industries where cloud is just 
not an option for legal reasons. Cloud is also sometimes significantly 
more expensive than on-prem in many markets, even within the US.



I still see some potential where you need a fault tolerant redundant
physical HCI edge built from small devices like industrial NUCs or
Atoms (remote SMB, factories, ships, railroads, military/expedition,
space stations). But for that the quality of the software would have
to improve in spades.


We use it for this use case (as well as others) and it works well. The 
quality issues with oVirt tend to revolve around deployment. Hosted 
engine is often a mess, and most don't want to deal with a dedicated 
machine, or a VM on a non-clustered machine, to run the engine. We've 
gone as far as having dedicated head nodes that run critical service VMs 
with some pacemaker / corosync magic to move those VMs around if needed 
as a custom solution, simply because hosted engine only works some of the 
time.


oVirt has come quite a long way in this regard, but until you see 
consistent stable releases that deploy effortlessly on a single node for 
testing, people won't test it, and it won't be considered an option, even 
if the software is pretty fantastic beyond that point.



If oVirt in HCI was as reliable as CentOS7 on physical hardware, pure
software update support could perhaps be made to cost no more than the
hardware and you'd have something really interesting.


This kind of goes along with my previous statement: I haven't had a 
critical software bug affect my installs in years, but it takes quite a 
lot of knowledge of the inner workings of oVirt to keep it that way. 
Unfortunately "stable if you know what you are doing" isn't a strong 
selling point.



But that would require large masses (millions) of deployments to make it
worthwhile, which can't happen without somebody doing a huge amount of
initial subsidies. And who would be able (and motivated) to shoulder
that?


I would love RHEV to be that, but it consistently seems to be neglected 
in the Red Hat lineup of products. RHEV tends to be considerably less 
buggy, but it does lag behind significantly at times. A free entry-level 
RHEV tier could change things quite a lot, but it doesn't seem likely in 
my opinion.


But that's really just my personal opinion, Didi is the much better 
authority.


Same, I'd love to see some of these issues resolved, but it's going to 
take a unique situation to make that happen I'm afraid.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWN4DJ2NAQOK2RLDKE3JHNW2SOT2ATEJ/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-08 Thread Alex McWhirter

I've run into a similar problem when using VDO + LVM + XFS stacks, also
with ZFS. 


If you're trying to use ZFS on 4.4, my recommendation is: don't. You have
to run the testing branch at minimum, and quite a few things just don't
work. 


As for VDO, I ran into this issue when using VDO and an NVMe for LVM
caching of the thin pool; VDO would throw a fit and, under high-load
scenarios, VMs would regularly pause.  


VDO with no cache was fine, however; it seems to be related to mixing
device types / block sizes (even if you override block sizes). 

Not sure if that helps. 


On 2021-06-08 12:26, Strahil Nikolov via Users wrote:

Maybe the shard xlator cannot cope with the speed of shard creation. 

Are you using preallocated disks on the Zimbra VM ? 

Best Regards, 
Strahil Nikolov


On Tue, Jun 8, 2021 at 17:57, José Ferradeira via Users 
 wrote: 

Hello, 

running ovirt 4.4.4.7-1.el8 and gluster 8.3. 
When I perform a restore of Zimbra Collaboration Email with features.shard on, the VM pauses with an unknown storage error. 
When I perform a restore of Zimbra Collaboration Email with features.shard off, it fills all the gluster storage domain disks. 

With older versions of gluster and oVirt the same happens. If I use an NFS storage domain it runs OK. 


--

-

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGBFSIHHTDOTTOWFWQKZFQMD56YWHTPZ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MOJJZJG7LCGHDIYULF5572L52JE53T6D/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JWUSPYSFOMFOHPDOPZBLYGRHYYFVCZ5H/


[ovirt-users] Re: Public IP routing question

2021-03-08 Thread Alex McWhirter

You can route it to a private address on your router if you want...

We use EVPN/VXLAN (but regular old VLANs work too). Just put the public 
space on a VLAN and add it as a VLAN-tagged network in oVirt. Only your 
public-facing VMs need addresses in the space.

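If you want to script that part, here is a rough sketch with the oVirt 
Python SDK (ovirt-engine-sdk4) of creating the VLAN-tagged logical 
network; the engine URL, credentials, data center name and VLAN ID are 
placeholders, so treat it as an illustration rather than a drop-in.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders - adjust for your engine and VLAN plan.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

networks_service = connection.system_service().networks_service()

# A VLAN-tagged VM network carrying the public block; only VMs that get a
# vNIC on this network need addresses from the public space.
public_net = networks_service.add(
    types.Network(
        name='public',
        data_center=types.DataCenter(name='Default'),
        vlan=types.Vlan(id=100),          # placeholder VLAN tag
        usages=[types.NetworkUsage.VM],   # plain VM network, not management
        required=False,
    ),
)

connection.close()

You'd still attach the network to the right host NICs in the cluster 
afterwards, same as you would by hand in the UI.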

On 2021-03-08 05:53, David White via Users wrote:

If I have a private network (10.1.0.0/24) that is being used by the
cluster for intra-host communication & replication, how do I get a
block of public IP addresses routed to the virtual cluster?

For example, let's say I have a public /28, and let's use 1.1.1.0/28
for example purposes.

I'll assign 1.1.1.1 to the router.

How can I then route 1.1.1.2 - 1.1.1.16 down to the virtualized oVirt
cluster?

Do I need to assign a public IP address to a 2nd physical NIC on each
host, and put that network onto a totally different physical switch?

Or should I instead setup default routes on the 10.1.0.0/24 network?

I also wanted to follow up on my question below to see if anyone had
any thoughts on how things would function when a portion of the
network is lost.

Sent with ProtonMail [1] Secure Email.

‐‐‐ Original Message ‐‐‐

 On Thursday, March 4, 2021 4:53 AM, David White
 wrote:


I tested oVirt (4.3? I can't remember) last fall on a single host
(hyperconverged).

Now, I'm getting ready to deploy to a 3 physical node (possibly 4)
hyperconverged cluster, and I guess I'll go ahead and go with 4.4.

Although Red Hat's recent shift of CentOS 8 to the Stream model, as
well as the announcement that RHV is going away makes me nervous. I
really don't see any other virtualization software doing quite the
same stuff as oVirt at the moment.

One of my questions is around the back end out-of-band network for
data replication.

What happens if all 3 servers are healthy and the normal network is
fine for serving traffic to the VM consumers, but the switching
network for data replication goes down? Is it possible to configure
oVirt to "fail over" to the front-end network?

I'm also wondering if its possible to do away with a switch all
together, and just link the physical hosts together directly (like a
cross-over cable) for the data replication.

I'm also wondering what would happen in the following scenario:

* All 3 servers are healthy

* The out-of-band data replication network is healthy

* 1 or 2 of the servers suddenly lost network connectivity on the
front-end network

What then? Would everything just keep working, and network traffic
be forced to go out the healthy interface(s) on the remaining hosts?

Sent with ProtonMail [1] Secure Email.




Links:
--
[1] https://protonmail.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GMQTNI6VWTWMG6IQMFGJGBTLCLOARZYK/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EP27ICYZ3FKORZ4UXYTL2GDKO47U6LWR/


[ovirt-users] Re: 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter

I apologize for the spam; this seems to be a long-standing gluster
issue. https://github.com/gluster/glusterfs/issues/597 


Sharding does not support SEEK_DATA/SEEK_HOLE. With a preallocated image
you likely never see this issue, as there are no holes; however, with a
sparse image that's very much not the case. I'm not sure when qemu-img
changed to use these syscalls, as this is not something I experience on
4.3 / CentOS 7. 


I'd be interested if anyone else can replicate this image copy behavior
using raw sparse (thin provision) disks as the source on a gluster
volume with sharding enabled, on oVirt 4.4.4+ (possibly earlier is also
affected) 


If this is something current qemu-img cannot handle, I don't think
supporting sparse disks on sharded gluster volumes is wise. 

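For anyone who wants to check their own volume, here is a small probe I 
would use - just a sketch, and the path and offset are placeholders. It 
calls lseek(SEEK_DATA) directly, which is the call strace showed failing 
with EOPNOTSUPP once qemu-img crossed the first shard; pass an offset past 
the shard size (64 MiB by default, I believe) to actually exercise the 
sharded path.

import errno
import os
import sys

# Placeholders: path to a sparse image on the gluster mount, and an offset
# past the first shard boundary (default shard size is 64 MiB).
path = sys.argv[1] if len(sys.argv) > 1 else '/rhev/data-center/mnt/glusterSD/example/image'
offset = int(sys.argv[2]) if len(sys.argv) > 2 else 64 * 1024 * 1024 + 1

fd = os.open(path, os.O_RDONLY)
try:
    data = os.lseek(fd, offset, os.SEEK_DATA)
    print('SEEK_DATA ok, next data byte at offset %d' % data)
except OSError as e:
    if e.errno == errno.ENXIO:
        print('SEEK_DATA ok, no data at or after offset %d (hole to EOF)' % offset)
    else:
        # EOPNOTSUPP here matches what qemu-img runs into on sharded volumes.
        print('SEEK_DATA failed: %s (errno %d)' % (e.strerror, e.errno))
finally:
    os.close(fd)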

On 2021-03-08 01:06, Alex McWhirter wrote:

This actually looks to be related to sharding. Doing a strace on the qemu-img process, I can see that it is using lseek, but after the first shard this turns to EOPNOTSUPP and qemu-img dies. 

If I temporarily disable sharding, touch a file (so it will not be sharded), then re-enable sharding and use qemu-img to overwrite that new unsharded file, there are no issues. 

On 2021-03-07 18:33, Nir Soffer wrote: 

On Sun, Mar 7, 2021 at 1:14 PM Alex McWhirter  wrote: 

I've been wrestling with this all night, digging through various bits of VDSM code trying to figure why and how this is happening. I need to make some templates, but i simply can't. 

VDSM  command HSMGetAllTasksStatusesVDS failed: value=low level Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', '/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/6f87b073-c4ec-42f2-87da-d1cb6a08a150/f2dfb779-b49c-4ec9-86cd-741f3fe5b781', '/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/84e56da6-8c26-4518-80a4-20bc395214db/039c5ada-ad6d-45c0-8393-bd4db0bbc366'] failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing at byte 738197504: No such file or directory\\n')",) 
This is a gluster issue that was reported in the past. write() cannot return errno ENOENT 
and qemu-img cannot recover from this error. 

Can you reproduce this when running the same qemu-img command from the shell? 

Are you running the latest gluster version? 

Nir 


abortedcode=261

3/7/21 5:44:44 AM 

Following the VDSM logs, i can see the new image gets created, permissions set, etc... but as soon qemu-img starts, it fails like this. I updated all hosts and the engine, rebooted the entire stack, to no avail. So i detached the storage domain, and wiped every host and fresh installed both engine and all nodes, imported the storage domain, and still no dice. Storage domain is gluster volume, single node, created in ovirt. 

It happens when i make a template, copy an image, or make a new vm from a template. I can still create new vms from blank, and upload images via the web ui. Watching the gluster share, i can see the image being created, but its deleted at some point. I appears to not be being deleted by the template / copying process, as immediately after the above error, i get this one. 

VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: 'image=4f359545-01a8-439b-832b-18c26194b066, domain=b4507449-ac40-4e35-be66-56441bb696ac' 

3/7/21 5:44:44 AM 

I thought maybe garbage collection, but don't see any indication of that in the logs. 


Any ideas? I redacted host names from the log output. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/75BZNMZUH23ZWCUHCDYCJSC6EZ45UW5O/ 
___

Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/33JF7NWW7XW4KKWM57H52TK5QDBDA4J5/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AW2L2D66HFFKF7XXWDYZRKW5R36P6HCC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www

[ovirt-users] Re: 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter

This actually looks to be related to sharding. Doing a strace on the
qemu-img process, I can see that it is using lseek, but after the first
shard this turns to EOPNOTSUPP and qemu-img dies. 


If I temporarily disable sharding, touch a file (so it will not be
sharded), then re-enable sharding and use qemu-img to overwrite that new
unsharded file, there are no issues. 


On 2021-03-07 18:33, Nir Soffer wrote:

On Sun, Mar 7, 2021 at 1:14 PM Alex McWhirter  wrote: 

I've been wrestling with this all night, digging through various bits of VDSM code trying to figure why and how this is happening. I need to make some templates, but i simply can't. 


VDSM  command HSMGetAllTasksStatusesVDS failed: value=low level Image copy failed: 
("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', 
'/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/6f87b073-c4ec-42f2-87da-d1cb6a08a150/f2dfb779-b49c-4ec9-86cd-741f3fe5b781',
 
'/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/84e56da6-8c26-4518-80a4-20bc395214db/039c5ada-ad6d-45c0-8393-bd4db0bbc366']
 failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing at byte 738197504: No such file or 
directory\\n')",)


This is a gluster issue that was reported in the past. write() cannot return errno ENOENT 
and qemu-img cannot recover from this error. 

Can you reproduce this when running the same qemu-img command from the shell? 

Are you running the latest gluster version? 

Nir 


abortedcode=261

3/7/21 5:44:44 AM 

Following the VDSM logs, i can see the new image gets created, permissions set, etc... but as soon qemu-img starts, it fails like this. I updated all hosts and the engine, rebooted the entire stack, to no avail. So i detached the storage domain, and wiped every host and fresh installed both engine and all nodes, imported the storage domain, and still no dice. Storage domain is gluster volume, single node, created in ovirt. 

It happens when i make a template, copy an image, or make a new vm from a template. I can still create new vms from blank, and upload images via the web ui. Watching the gluster share, i can see the image being created, but its deleted at some point. I appears to not be being deleted by the template / copying process, as immediately after the above error, i get this one. 

VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: 'image=4f359545-01a8-439b-832b-18c26194b066, domain=b4507449-ac40-4e35-be66-56441bb696ac' 

3/7/21 5:44:44 AM 

I thought maybe garbage collection, but don't see any indication of that in the logs. 


Any ideas? I redacted host names from the log output. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/75BZNMZUH23ZWCUHCDYCJSC6EZ45UW5O/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/33JF7NWW7XW4KKWM57H52TK5QDBDA4J5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AW2L2D66HFFKF7XXWDYZRKW5R36P6HCC/


[ovirt-users] Re: 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter

I see, yes, running the command by hand results in the same error.
The Gluster version is 8.3 (I upgraded to see if 8.4.5 would result in a
different outcome). 

Previously it was gluster 7.9; same issue either way. 


On 2021-03-07 18:33, Nir Soffer wrote:

On Sun, Mar 7, 2021 at 1:14 PM Alex McWhirter  wrote: 

I've been wrestling with this all night, digging through various bits of VDSM code trying to figure why and how this is happening. I need to make some templates, but i simply can't. 


VDSM  command HSMGetAllTasksStatusesVDS failed: value=low level Image copy failed: 
("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', 
'/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/6f87b073-c4ec-42f2-87da-d1cb6a08a150/f2dfb779-b49c-4ec9-86cd-741f3fe5b781',
 
'/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/84e56da6-8c26-4518-80a4-20bc395214db/039c5ada-ad6d-45c0-8393-bd4db0bbc366']
 failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing at byte 738197504: No such file or 
directory\\n')",)


This is a gluster issue that was reported in the past. write() cannot return errno ENOENT 
and qemu-img cannot recover from this error. 

Can you reproduce this when running the same qemu-img command from the shell? 

Are you running the latest gluster version? 

Nir 


abortedcode=261

3/7/21 5:44:44 AM 

Following the VDSM logs, i can see the new image gets created, permissions set, etc... but as soon qemu-img starts, it fails like this. I updated all hosts and the engine, rebooted the entire stack, to no avail. So i detached the storage domain, and wiped every host and fresh installed both engine and all nodes, imported the storage domain, and still no dice. Storage domain is gluster volume, single node, created in ovirt. 

It happens when i make a template, copy an image, or make a new vm from a template. I can still create new vms from blank, and upload images via the web ui. Watching the gluster share, i can see the image being created, but its deleted at some point. I appears to not be being deleted by the template / copying process, as immediately after the above error, i get this one. 

VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: 'image=4f359545-01a8-439b-832b-18c26194b066, domain=b4507449-ac40-4e35-be66-56441bb696ac' 

3/7/21 5:44:44 AM 

I thought maybe garbage collection, but don't see any indication of that in the logs. 


Any ideas? I redacted host names from the log output. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/75BZNMZUH23ZWCUHCDYCJSC6EZ45UW5O/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/33JF7NWW7XW4KKWM57H52TK5QDBDA4J5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QHOEFQ6BMRF43PI44ZFAFLTQC6XW3UG3/


[ovirt-users] 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter

I've been wrestling with this all night, digging through various bits of
VDSM code trying to figure out why and how this is happening. I need to
make some templates, but I simply can't. 


VDSM  command HSMGetAllTasksStatusesVDS failed: value=low level
Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p',
'-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw',
'/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/6f87b073-c4ec-42f2-87da-d1cb6a08a150/f2dfb779-b49c-4ec9-86cd-741f3fe5b781',
'/rhev/data-center/mnt/glusterSD/:_Temp/45740f16-b3c9-4bb5-ba5f-3e64657fb663/images/84e56da6-8c26-4518-80a4-20bc395214db/039c5ada-ad6d-45c0-8393-bd4db0bbc366']
failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing
at byte 738197504: No such file or directory\\n')",) abortedcode=261

3/7/21 5:44:44 AM 


Following the VDSM logs, I can see the new image gets created,
permissions set, etc... but as soon as qemu-img starts, it fails like
this. I updated all hosts and the engine and rebooted the entire stack,
to no avail. So I detached the storage domain, wiped every host, fresh
installed both the engine and all nodes, imported the storage domain,
and still no dice. The storage domain is a gluster volume, single node,
created in oVirt. 


It happens when I make a template, copy an image, or make a new VM from
a template. I can still create new VMs from blank, and upload images via
the web UI. Watching the gluster share, I can see the image being
created, but it's deleted at some point. It appears not to be deleted by
the template / copying process, as immediately after the above error, I
get this one. 


VDSM command DeleteImageGroupVDS failed: Image does not exist in domain:
'image=4f359545-01a8-439b-832b-18c26194b066,
domain=b4507449-ac40-4e35-be66-56441bb696ac' 

3/7/21 5:44:44 AM 


I thought maybe garbage collection, but don't see any indication of that
in the logs. 


Any ideas? I redacted host names from the log output.___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/75BZNMZUH23ZWCUHCDYCJSC6EZ45UW5O/


[ovirt-users] Re: VDI and ovirt

2021-02-23 Thread Alex McWhirter

On 2021-02-23 07:39, cpo cpo wrote:

Is anyone using Ovirt for a Windows 10 VDI deployment?  If so are you
using a connection broker?  If you are what are you using?

Thanks for your time


We use oVirt quite a lot for Windows 10 VDI; 4.4 / EL8 is quite a bit 
nicer SPICE-version-wise.


No broker needed; oVirt does that pretty well on its own. We do use 
squid as a SPICE HTTP proxy, however. We just have pools of VMs based on 
sysprep'd images. We also repurposed some HP thin clients, installed 
CentOS minimal + Openbox, and wrote a small application that accepts user 
credentials and gets a SPICE connection via the oVirt API.
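
For the thin client side, the gist of it is just asking the engine's REST API for a console file and handing it to remote-viewer. A rough sketch with curl (hostnames, credentials and IDs are placeholders, and it assumes the engine returns a .vv file when the graphics console is requested with the virt-viewer MIME type, which recent 4.x releases do for us):

    # find the user's VM and note its id, then list its consoles under
    #   /ovirt-engine/api/vms/VM_ID/graphicsconsoles
    curl -s -k -u 'user@internal:password' -H 'Accept: application/xml' \
        'https://engine.example.com/ovirt-engine/api/vms?search=name%3Dwin10-pool-1'

    # fetch a ready-made remote-viewer connection file for the SPICE console
    curl -s -k -u 'user@internal:password' -H 'Accept: application/x-virt-viewer' \
        'https://engine.example.com/ovirt-engine/api/vms/VM_ID/graphicsconsoles/CONSOLE_ID' \
        -o console.vv
    remote-viewer console.vv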

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/COB73ASC5UIJIC5SZHG43BIULPUD7W4L/


[ovirt-users] Re: vGPU on ovirt 4.3

2021-01-25 Thread Alex McWhirter
IIRC oVirt 4.3 should have the basic hooks in place for mdev 
passthrough. For NVIDIA this means you need the vGPU drivers and a 
license server. These licenses have a recurring cost.


AMD's solution uses SR-IOV and requires a custom kernel module that is 
not well tested, so YMMV.


You can also pass through entire cards in oVirt without any drivers; 
granted, since the number of GPUs you can stuff into a server is limited, 
this is probably not ideal. NVIDIA blocks this on consumer-level cards.


Intel also has an mdev solution that is built into Mesa / the kernel 
already; we have used it with VCA cards in the past, but perhaps the new 
Intel GPUs support it as well? Intel is the only option that will allow 
you to feed the framebuffer data back into SPICE so you can use the 
oVirt console. All other options require you to create a remote session 
to the guest directly.
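
If you want to see what a host can actually offer before buying licenses, the mdev types show up under sysfs once the vendor driver is loaded. A quick check (sketch; these paths only exist with a vGPU-capable driver installed):

    # list mediated device types and how many instances are still free
    for t in /sys/class/mdev_bus/*/mdev_supported_types/*; do
        echo "$t"
        cat "$t/name" "$t/available_instances" 2>/dev/null
    done

In 4.3 the chosen type name then goes into the VM's mdev_type custom property.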


On 2021-01-25 07:49, kim.karga...@noroff.no wrote:

Hi,

We are looking at getting some GPU's for our servers and to use vGPU
passthrough so that our students can do some video renders on the
VM's. Does anyone have good experience with the Nvidia Quadro RTX6000
or RTX8000 and ovirt 4.3?

Thanks.

Kim
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XF4LZOEAC2YTRH5LTL55YKA4BPRZLVZH/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PY4ITOIWACUWHYLAVGOMMWIXUZYNDH5L/


[ovirt-users] Re: CentOS 8 is dead

2020-12-10 Thread Alex McWhirter

On 2020-12-10 15:02, tho...@hoberg.net wrote:

I came to oVirt thinking that it was like CentOS: There might be bugs,
but given the mainline usage in home and corporate labs with light
workloads and nothing special, chances to hit one should be pretty
minor: I like looking for new frontiers atop my OS, not inside.

I have been running CentOS/OpenVZ for years in a previous job, mission
critical 24x7 stuff where minutes of outage meant being grilled for
hours in meetings afterwards. And with PCI-DSS compliance certified.
Never had an issue with OpenVZ/CentOS; all those minute goofs were
human error or Oracle inventing execution plans.

Boy was I wrong about oVirt! Just setting it up took weeks. Ansible
loves eating Gigahertz and I was running on Atoms. I had to learn how
to switch from an i7 in mid-installation to have it finish at all. In
the end I had learned tons of new things, but all I wanted was a
cluster that would work as much out of the box as CentOS or OpenVZ.

Something as fundamental as exporting and importing a VM might simply
not work and not even get fixed.

Migrating HCI from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4 is anything
but smooth, a complete rebuild seems the lesser evil: Now if only
exports and imports worked reliably!

Rebooting an HCI node seems to involve an "I am dying!" aria on the
network, where the whole switch becomes unresponsive for 10 minutes
and the fault-tolerant cluster on it is 100% unresponsive
(including all other machines on that switch). I had so much fun
resyncing gluster file systems and searching through all those log
files for signs as to what was going on!
And the instructions on how to fix gluster issues seem so wonderfully
detailed and vague that it seems one could spend days trying to fix things
or rebuild and restore. It doesn't help that the fate of Gluster very
much seems to hang in the air, when the scalable HCI aspect was the
only reason I ever wanted oVirt.

Could just be an issue with Realtek adapters, because I never observed
something like that with Intel NICs or on (recycled old) enterprise
hardware

I guess official support for a 3 node HCI cluster on passive Atoms
isn't going to happen, unless I make happen 100% myself: It's open
source after all!

Just think what 3/6/9 node HCI based on Raspberry PI would do for the
project! The 9 node HCI should deliver better 10Gbit GlusterFS
performance than most QNAP units at the same cost with a single 10Gbit
interface even with 7:2 erasure coding!

I really think the future of oVirt may be at the edge, not in the
datacenter core.

In short: oVirt is very much beta software and quite simply a
full-time job if you depend on it working over time.

I can't see that getting any better when one beta gets to run on top
of another beta. At the moment my oVirt experience has me doubt RHV on
RHEL would work any better, even if it's cheaper than VMware.

OpenVZ was simply the far better alternative to KVM for most of the
things I needed from virtualization, and it was mainly the hassle of
trying to make that work with RHEL which had me switching to CentOS.
CentOS with OpenVZ was the bedrock of that business for 15 years and
proved to me that Red Hat was hell-bent on making bad decisions on
technological direction.

I would have actually liked to pay a license for each of the physical
hosts we used, but it turned out much less of a bother to forget about
negotiating licensing conditions for OpenVZ containers and use CentOS
instead.

BTW: I am going into a meeting tomorrow, where after two years of
pilot usage, we might just decide to kill our current oVirt farms,
because they didn't deliver on "a free open-source virtualization
solution for your entire enterprise".

I'll keep my Atoms running a little longer, mostly because I have
nothing else to use them for. For a first time in months, they show
zero gluster replication errors, perhaps because for lack of updates
there have been no node reboots. CentOS 7 is stable, but oVirt 4.3 out
of support.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBZ46VXFZZXOMBNLNQTB34ZFYFVGDPB2/


oVirt has more or less always been RHEV's upstream; while not necessarily 
beta, people using oVirt over RHEV have been subject to the occasional 
broken feature or two, at least at early release. If you need the 
stability and support, RHEV is the answer. However, we use oVirt and 
CentOS 8 in production on a fairly large scale and it's not an 
unreasonable amount of work to keep running. It's certainly not a 
set-it-and-forget-it scenario, but it works very well for us.


There are also a ton of moving parts to running oVirt at scale: 
hardware, firmware, network configuration, 

[ovirt-users] Re: Recent news & oVirt future

2020-12-10 Thread Alex McWhirter

On 2020-12-10 15:47, Charles Kozler wrote:

I guess this is probably a question for all current open source projects that red hat runs but -  

Does this mean oVirt will effectively become a rolling release type situation as well? 

How exactly is oVirt going to stay open source and stay in cadence with all the other updates happening around it on packages/etc that it depends on if the streams are rolling release? Do they now need to fork every piece of dependency? 


What exactly does this mean for oVirt going forward and its overall stability?

Notice to Recipient: https://www.fixflyer.com/disclaimer [1] 
___

Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7IUWGES2IG4BELLUPMYGEKN3GC6XVCHA/


Time will tell, but I suspect if anything this will make oVirt
development easier in some regards. oVirt already enables multiple SIG
streams, unofficial updates, etc... A lot of that would be streamlined.
oVirt would be able to target future RHEL much more easily by targeting
CentOS Stream. 


Links:
--
[1] https://www.fixflyer.com/disclaimer___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XEIEAWBERXD7PYQP572JDFJ56QBNP2E/


[ovirt-users] Re: Nodes in CentOS 8.3 and oVirt 4.4.3.12-1.el8 but not able to update cluster version

2020-12-10 Thread Alex McWhirter

You have to put all hosts in the cluster into maintenance mode first;
then you can change the compatibility version. 
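
If you'd rather script it than click through the UI, the same thing can be done against the REST API; a rough sketch (engine address, credentials and host IDs are placeholders):

    # put every host in the cluster into maintenance, then bump the cluster
    # compatibility version and activate the hosts again
    for host in HOST_ID_1 HOST_ID_2 HOST_ID_3; do
        curl -s -k -u 'admin@internal:password' -X POST \
            -H 'Content-Type: application/xml' -d '<action/>' \
            "https://engine.example.com/ovirt-engine/api/hosts/$host/deactivate"
    done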


On 2020-12-10 11:09, Gianluca Cecchi wrote:

Hello, 
my engine is 4.4.3.12-1.el8 and my 3 oVirt nodes (based on plain CentOS due to megaraid_sas kernel module needed) have been updated, bringing them to CentOS 8.3. 
But if I try to put cluster into 4.5 I get the error: 
" 
Error while executing action: Cannot change Cluster Compatibility Version to higher version when there are active Hosts with lower version. 
-Please move Host ov300, ov301, ov200 with lower version to maintenance first. 
" 
Do I have to wait until final 4.4.4 or what is the problem? 
Does 4.5 give anything more apart from the second number... (joking..)? 

On nodes: 


Software

OS Version: RHEL - 8.3 - 1.2011.el8
OS Description: CentOS Linux 8
Kernel Version: 4.18.0 - 240.1.1.el8_3.x86_64
KVM Version: 4.2.0 - 34.module_el8.3.0+555+a55c8938
LIBVIRT Version: libvirt-6.0.0-28.module_el8.3.0+555+a55c8938
VDSM Version: vdsm-4.40.35.1-1.el8
SPICE Version: 0.14.3 - 3.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.3.6-2.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no microcode; 
SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX: conditional cache 
flushes, SMT vulnerable), SRBDS: (Not affected), MELTDOWN: (Mitigation: PTI), 
SPECTRE_V1: (Mitigation: usercopy/swapgs barriers and __user pointer 
sanitization), SPECTRE_V2: (Mitigation: Full generic retpoline, IBPB: 
conditional, IBRS_FW, STIBP: conditional, RSB filling), ITLB_MULTIHIT: (KVM: 
Mitigation: Split huge pages), TSX_ASYNC_ABORT: (Not affected), 
SPEC_STORE_BYPASS: (Mitigation: Speculative Store Bypass disabled via prctl and 
seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled 

Gianluca 
___

Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W7UEBHJA2ACLTMVX2QTMWZVWU4QK2Y6L/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7JT2ZUIG76FQKZRZWVHYPTJ537UVY7F/


[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread Alex McWhirter

On 2020-12-08 14:37, Strahil Nikolov via Users wrote:

Hello All,

I'm really worried about the following news:
https://blog.centos.org/2020/12/future-is-centos-stream/

Did anyone tried to port oVirt to SLES/openSUSE or any Debian-based
distro ?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZC4D4OSYL64DX5VYXDJCHDNRZDRGIT6/


I fail to see a major issue, honestly. If current RHEL is 8.3, CentOS 
Stream is essentially the RC for 8.4... oVirt in and of itself is also 
an upstream project; targeting upstream in advance is likely beneficial 
for all parties involved.


CentOS has been lagging behind RHEL quite a lot, creating its own set 
of issues. Being ahead of the curve is more beneficial than detrimental, 
IMO. The RHEL sources are still being published to the CentOS git; oVirt 
Node could be built against that. Time will tell.


Supported or not, I bet someone forks it anyway.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SM3JM6DHDCRJGH4LMGIKYOLCJKGWJKS4/


[ovirt-users] Re: new hyperconverged setup

2020-12-02 Thread Alex McWhirter

On 2020-12-02 09:56, cpo cpo wrote:

Thanks.  Trying to figure out if I should use dedup/comp now.  If I
don't is the total usable space if I follow your setup guide for my
storage domain going to be 21tb once I am done with everything (3 vm
storage volumes, one per disk)?

Thanks,
Donnie
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5F3VBLSQPRPJPMGHRZSU5KXXNRENM7JA/


The comp/dedupe features use VDO, which IMO imposes a fairly large 
performance hit, at least in my testing on 9x900GB 24-drive RAID 10 
arrays.


My personal recommendation is to just use hardware or mdadm RAID with LVM 
on top (oVirt can do this for you in the UI).
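
Roughly what that looks like when done by hand, in case it helps with sizing; device names, sizes and the 2m chunk size are just examples, and oVirt's brick-creation flow in the UI does the equivalent for you:

    mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=256 /dev/sd[b-e]
    pvcreate /dev/md0
    vgcreate gluster_vg /dev/md0
    # thin pool for the brick; chunk size should line up with the RAID stripe
    lvcreate --type thin-pool -l 90%FREE --chunksize 2m -n brick_pool gluster_vg
    lvcreate -V 5T --thinpool gluster_vg/brick_pool -n brick1 gluster_vg
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1
    mkdir -p /gluster_bricks/brick1
    mount /dev/gluster_vg/brick1 /gluster_bricks/brick1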


If you really need comp/dedupe, we use ZFS in production for that 
purpose. Granted this is not supported by oVirt, so YMMV.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3HUNIDA3JRCX67DXW5RZ2YZCAKAW3T5Q/


[ovirt-users] How do you manage OVN?

2020-11-19 Thread Alex McWhirter

I'm not sure if I'm missing something, but it seems there is no way built
into oVirt to manage OVN outside of network / subnet creation, in
particular routing, both between networks and to external networks. 


Of course you have the OVN utilities, but it seems that the provider API
is the preferred method of interaction? 


As far as I can tell, the only utility that can use this API as intended
is ManageIQ, which is a bit of a behemoth if you only need the OVN portion
of things. 


So is that it then? Interface with the API directly or use ManageIQ?
Just curious what others are doing in regards to OVN.___
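
For reference, the bare-utility route looks roughly like this; a sketch with made-up switch names and subnets, run with ovn-nbctl on the host where ovirt-provider-ovn / the OVN northbound DB lives:

    # router with one port per existing logical switch (net1 / net2 were created from oVirt)
    ovn-nbctl lr-add router0
    ovn-nbctl lrp-add router0 rp-net1 00:00:00:00:01:01 192.168.1.1/24
    ovn-nbctl lrp-add router0 rp-net2 00:00:00:00:02:01 192.168.2.1/24
    # patch each switch to its router port
    ovn-nbctl lsp-add net1 net1-router0
    ovn-nbctl lsp-set-type net1-router0 router
    ovn-nbctl lsp-set-addresses net1-router0 router
    ovn-nbctl lsp-set-options net1-router0 router-port=rp-net1
    ovn-nbctl lsp-add net2 net2-router0
    ovn-nbctl lsp-set-type net2-router0 router
    ovn-nbctl lsp-set-addresses net2-router0 router
    ovn-nbctl lsp-set-options net2-router0 router-port=rp-net2

The obvious caveat is that anything created this way lives outside the provider API, so tooling that only talks to ovirt-provider-ovn won't see it.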
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2VW36DS4QVSPCGUVBAUPJQOZZ5GO73Q/


[ovirt-users] Re: Improve glusterfs performance

2020-10-21 Thread Alex McWhirter

In my experience, the oVirt-optimized defaults are fairly sane. I may
change a few things like enabling read-ahead or increasing the shard
size, but these are minor performance bumps if anything. 


The most important thing is the underlying storage: RAID 10 is ideal
performance-wise, large stripe sizes are preferable for VM workloads,
etc... You want the underlying storage to be as fast as possible, and
dedicated cache devices are a plus. IOPS and latency are often more
important than throughput. 


Network throughput and latency are also very important. I don't think I
would attempt a gluster setup on anything slower than 10GbE; jumbo
frames are a huge help, and switches with large buffers are nice as well.
Do not L3 route gluster (at least not inter-server links) unless you have
a switch that can do line-rate routing. High or inconsistent network
latency will bring gluster to its knees. 


Kernel tuning / gluster volume options do help, but they are not
groundbreaking performance improvements. Usually just little speed
boosts here and there. 
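
For concreteness, the sort of thing I mean, using a volume called data as a placeholder; check the current value first and test any change before rolling it out:

    gluster volume get data features.shard-block-size
    gluster volume set data features.shard-block-size 512MB   # only affects newly created disks
    gluster volume set data performance.read-ahead on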


On 2020-10-21 12:30, eev...@digitaldatatechs.com wrote:





Here is the post link: 

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/I62VBDYPQIWPRE3LKUUVSLHPZJVQBT4X/ 

Here is the actual link: 

https://docs.gluster.org/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/ 

Eric Evans 

Digital Data Services LLC. 

304.660.9080 

From: supo...@logicworks.pt  
Sent: Wednesday, October 21, 2020 11:47 AM

To: eev...@digitaldatatechs.com
Cc: users 
Subject: Re: [ovirt-users] Improve glusterfs performance 

Thanks. 

This is what I found from you related to glusterfs: https://lists.ovirt.org/archives/list/users@ovirt.org/message/DN47OTYUUCOAFDD6QQC333AW5RBBN6SM/ 

But I don't find anything on how to improve gluster. 

José 


-

De: eev...@digitaldatatechs.com
Para: supo...@logicworks.pt, "users" 
Enviadas: Quarta-feira, 21 De Outubro de 2020 15:53:00
Assunto: RE: [ovirt-users] Improve glusterfs performance 

I posted a link in the users list that details how to improve gluster and improve performance. Search gluster. 

Eric Evans 

Digital Data Services LLC. 

304.660.9080 

From: supo...@logicworks.pt  
Sent: Wednesday, October 21, 2020 9:42 AM

To: users 
Subject: [ovirt-users] Improve glusterfs performance 

Hello, 

Can anyone help me in how can I improve the performance of glusterfs to work with oVirt? 

Thanks 


--

-

Jose Ferradeira
http://www.logicworks.pt 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EAWWGYZ4GUNHDBE23XBCOFUG56AZMFRU/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LI6QSA4N3JPXXVXWUW73TYGYOJOUTF5C/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-09 Thread Alex McWhirter

A few things to consider,

What is your RAID situation per host? If you're using mdadm-based soft 
RAID, you need to make sure your drives support power-loss data 
protection. This is mostly only a feature on enterprise drives. 
Essentially it ensures the drives reserve enough energy to flush the 
write cache to disk on power loss. Most modern drives have a non-trivial 
amount of built-in write cache, and losing that data on power loss will 
gladly corrupt files, especially on soft RAID setups.


If you're using hardware RAID, make sure you have disabled the drive-based 
write cache, and that you have a battery / capacitor connected for the 
RAID card's cache module.
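
A quick way to check and flip the on-drive cache for SATA disks behind mdadm (sketch; for hardware RAID use the controller's own CLI instead, and note that turning the cache off costs write performance):

    hdparm -W /dev/sdb            # show the current write-cache setting
    hdparm -W 0 /dev/sdb          # disable the volatile write cache
    smartctl -g wcache /dev/sdb   # cross-check via SMART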


If you're using ZFS, which isn't really supported, you need a good UPS 
and to have it set up to shut systems down cleanly. ZFS will not take 
power outages well. Power-loss data protection is really important too, 
but it's not a fix-all for ZFS, as it also caches writes in system RAM 
quite a bit. A dedicated cache device with power-loss data protection 
can help mitigate that, but really the power issues are a more pressing 
concern in this situation.



As far as gluster is concerned, there is not much that can easily 
corrupt data on power loss. My only thought is that if your switches 
are not also battery-backed, that would be an issue.


On 2020-10-08 08:15, Jarosław Prokopowski wrote:

Hi Guys,

I had a situation 2 times that due to unexpected power outage
something went wrong and VMs on glusterfs where not recoverable.
Gluster heal did not help and I could not start the VMs any more.
Is there a way to make such setup bulletproof?
Does it matter which volume type I choose - raw or qcow2? Or thin
provision versus reallocated?
Any other advise?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRM6H2YENBP3AHQ5JWSFXH6UT6J6SDQS/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZUMGUWDRATJERHSONGMQKHH3T457LVJC/


[ovirt-users] Windows Guest Agent Issues Since 4.3

2020-04-06 Thread Alex McWhirter

Upgraded an installation to 4.3 and updated the guest agent on all VMs;
now all of my Windows VMs have an exclamation point telling me to
install the latest guest agent. Some parts of the guest agent still seem
to work (the IP addresses are still showing in the portal), but not FQDN,
and for some machines the shutdown button no longer functions correctly. 


I also tried a fresh 4.3 install on a test cluster, and at first it
worked great. But after a few days all of the Windows VMs on that setup
are showing the same behavior. Linux guests are fine on both systems.
Not really sure where to start looking.___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WUYJCZD7ASZANSZN5QJP255WFMBYP6BL/


[ovirt-users] Re: Windows VirtIO drivers

2020-04-02 Thread Alex McWhirter
I've never had any of these issues... These are my usual Windows steps. 

1. Boot the fresh VM into the Windows ISO installer. 


2. When I get to the disk screen (blank because I use VirtIO-SCSI), change
the Windows ISO to the oVirt guest tools ISO. 


3. Click Load Driver, browse, load the VirtIO-SCSI driver from the CD, go back
to the disk screen. 


4. Swap the oVirt guest tools ISO for the Windows ISO, click refresh, then
install on the HDD. 


5. When Windows boots, swap ISOs again and run the oVirt guest tools
installer. Sometimes my mouse breaks at this point, but I can click
restart in the admin portal and after that all is well. 


On 2020-04-02 12:30, Shareef Jalloq wrote:

I can't seem to do anything.  If I hit the Windows key and start typing, I get a black popup with no options.  Win+R doesn't do anything aside from spit out a GSpice critical error in the terminal I'm running remote-viewer from. 

You spend a week trying to launch a VM and when you do, you can't do nowt.  :-) 

On Thu, Apr 2, 2020 at 5:24 PM Robert Webb  wrote: 


I have yet to figure that one out. All of the VM's I built I had to go old 
school with the tab key and arrow keys...

Very annoying..


From: Shareef Jalloq 
Sent: Thursday, April 2, 2020 12:21 PM
To: eev...@digitaldatatechs.com
Cc: Robert Webb; users@ovirt.org
Subject: Re: [ovirt-users] Re: Windows VirtIO drivers

Any idea what I need to do to get a working mouse in a Windows VM?  I can click 
on the SPICE viewer but the mouse pointer is fixed to the left hand border.

On Thu, Apr 2, 2020 at 4:38 PM Shareef Jalloq 
mailto:shar...@jalloq.co.uk>> wrote:
Sorry, I missed all these replies.  Thanks for the help.

I just noticed something funny in the Windows installer.  When you get to the 
point where it can't find any installation media, if you Load Driver, a popup 
appears with a Browse and OK/Cancel options.  I assumed that I should be 
browsing to the floppy here but if you do that, you get an error that no signed 
drivers can be found.  Instead, you just have to hit OK when the popup appears 
and that somehow finds the drivers.

I'm assuming that I will need to install the network driver once the 
installation has finished?

On Wed, Mar 25, 2020 at 8:37 PM 
mailto:eev...@digitaldatatechs.com>> wrote:
Ok. Personally, I like having an ISO repository, but I like the fact that's
it will be optional.
Thanks.

Eric Evans
Digital Data Services LLC.
304.660.9080

-Original Message-
From: Robert Webb mailto:rw...@ropeguru.com>>
Sent: Wednesday, March 25, 2020 4:26 PM
To: eev...@digitaldatatechs.com; 'Shareef Jalloq' 
mailto:shar...@jalloq.co.uk>>;
users@ovirt.org
Subject: Re: [ovirt-users] Re: Windows VirtIO drivers

It does work that way.

I found that out as I was testing oVirt and had not created a separate ISO
Domain. I believe it was Strahil who pointed me in the right direction.

So if one does not have an ISO Domain, it is no longer required. Along with
the fact that ISO Domains are or in the process of being deprecated.


From: eev...@digitaldatatechs.com 
mailto:eev...@digitaldatatechs.com>>
Sent: Wednesday, March 25, 2020 3:13 PM
To: Robert Webb; 'Shareef Jalloq'; users@ovirt.org
Subject: RE: [ovirt-users] Re: Windows VirtIO drivers

That may be true, but in the ISO domain, when you open virt viewer you can
change the cd very easily...maybe it works that way as well..

Eric Evans
Digital Data Services LLC.
304.660.9080

-Original Message-
From: Robert Webb mailto:rw...@ropeguru.com>>
Sent: Wednesday, March 25, 2020 2:35 PM
To: eev...@digitaldatatechs.com; 'Shareef Jalloq' 
mailto:shar...@jalloq.co.uk>>;
users@ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers

Don't think you have to use the ISO Domain any longer.

You can upload to a Data Domain and when you highlight the VM in the
management GUI, select the three dots in the top left for extra options and
there is a change cd option. That option will allow for attaching an ISO
from a Data Domain.

That is what I recall when I was using oVirt a month or so ago.


From: eev...@digitaldatatechs.com 
mailto:eev...@digitaldatatechs.com>>
Sent: Wednesday, March 25, 2020 2:28 PM
To: 'Shareef Jalloq'; users@ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers

You have to copy the iso and vfd files to the ISO domain to make them
available to the vm's that need drivers.

engine-iso-uploader options list
# engine-iso-uploader options upload file file file Documentation is found
here: https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html

Eric Evans
Digital Data Services LLC.
304.660.9080

From: Shareef Jalloq 

[ovirt-users] Re: Speed Issues

2020-03-27 Thread Alex McWhirter

On 2020-03-27 05:28, Christian Reiss wrote:

Hey Alex,

you too, thanks for writing.
I'm on 64mb as per default for ovirt. We tried no sharding, 128mb
sharding, 64mb sharding (always with copying the disk). There was no
increase or decrease in disk speed in any way.

Besides losing HA capabilites, what other caveats?

-Chris.

On 24/03/2020 19:25, Alex McWhirter wrote:
Red hat also recommends a shard size of 512mb, it's actually the only 
shard size they support. Also check the chunk size on the LVM thin 
pools running the bricks, should be at least 2mb. Note that changing 
the shard size only applies to new VM disks after the change. Changing 
the chunk size requires making a new brick.


libgfapi brings a huge performance boost, in my opinion its almost a 
necessity unless you have a ton of extra disk speed / network 
throughput. Just be aware of the caveats.


--
 Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
   supp...@alpha-labs.net   \ /Campaign
 X   against HTML
 WEB alpha-labs.net / \   in eMails

 GPG Retrieval https://gpg.christian-reiss.de
 GPG ID ABCD43C5, 0x44E29126ABCD43C5
 GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5

 "It's better to reign in hell than to serve in heaven.",
  John Milton, Paradise lost.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KL6HLEIRQ6GCNP5YK7TY4UY52JOLFIC3/


You don't lose HA, you just lose live migration between separate 
data centers or between gluster volumes. Live migration between nodes in 
the same DC / gluster volume still works fine. Some people have snapshot 
issues; I don't, but plan for problems just in case.


A shard size of 512MB will only affect new VMs, or new VM disks to be exact. 
The LVM chunk size defaults to 2MB on CentOS 7.6+, but it should be a 
multiple of your RAID stripe size. Stripe size should be fairly large; 
we use 512KB stripe sizes on the bricks and 2MB chunk sizes on LVM.


With that and about 90 disks we can saturate 10GbE. We then added some 
SSD cache drives to LVM on the bricks, which helped a lot with 
random IO.
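
Roughly how that SSD cache is wired up with lvmcache (device, size and LV names are made up, and writethrough is the safer default):

    pvcreate /dev/nvme0n1
    vgextend gluster_vg /dev/nvme0n1
    lvcreate --type cache-pool -L 500G -n brick_cachepool gluster_vg /dev/nvme0n1
    lvconvert --type cache --cachemode writethrough \
        --cachepool gluster_vg/brick_cachepool gluster_vg/brick_pool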

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BNTWPEG5EZDAE22XOF5XHXHDJB3J65AP/


[ovirt-users] Re: Speed Issues

2020-03-24 Thread Alex McWhirter
Red Hat also recommends a shard size of 512MB; it's actually the only 
shard size they support. Also check the chunk size on the LVM thin pools 
backing the bricks; it should be at least 2MB. Note that changing the shard 
size only applies to new VM disks after the change. Changing the chunk 
size requires making a new brick.


libgfapi brings a huge performance boost; in my opinion it's almost a 
necessity unless you have a ton of extra disk speed / network 
throughput. Just be aware of the caveats.


On 2020-03-24 14:12, Strahil Nikolov wrote:

On March 24, 2020 7:33:16 PM GMT+02:00, Darrell Budic
 wrote:

Christian,

Adding on to Stahil’s notes, make sure you’re using jumbo MTUs on
servers and client host nodes. Making sure you’re using appropriate
disk schedulers on hosts and VMs is important, worth double checking
that it’s doing what you think it is. If you are only HCI, gluster’s
choose-local on is a good thing, but try

cluster.choose-local: false
cluster.read-hash-mode: 3

if you have separate servers or nodes with are not HCI to allow it
spread reads over multiple nodes.

Test out these settings if you have lots of RAM and cores on your
servers, they work well for me with 20 cores and 64GB ram on my 
servers

with my load:

performance.io-thread-count: 64
performance.low-prio-threads: 32

these are worth testing for your workload.

If you’re running VMs with these, test out libglapi connections, it’s
significantly better for IO latency than plain fuse mounts. If you can
tolerate the issues, the biggest one at the moment being you can’t 
take

snapshots of the VMs with it enabled as of March.

If you have tuned available, I use throughput-performance on my 
servers
and guest-host on my vm nodes, throughput-performance on some HCI 
ones.



I’d test with out the fips-rchecksum setting, that may be creating
extra work for your servers.

If you mounted individual bricks, check that you disabled barriers on
them at mount if appropriate.

Hope it helps,

 -Darrell


On Mar 24, 2020, at 6:23 AM, Strahil Nikolov 

wrote:


On March 24, 2020 11:20:10 AM GMT+02:00, Christian Reiss

 wrote:

Hey Strahil,

seems you're the go-to-guy with pretty much all my issues. I thank

you

for this and your continued support. Much appreciated.


200mb/reads however seems like a broken config or malfunctioning
gluster
than requiring performance tweaks. I enabled profiling so I have

real

life data available. But seriously even without tweaks I would like
(need) 4 times those numbers, 800mb write speed is okay'ish, given

the

fact that 10gbit backbone can be the limiting factor.

We are running BigCouch/CouchDB Applications that really really need
IO.
Not in throughput but in response times. 200mb/s is just way off.

It feels as gluster can/should do more, natively.

-Chris.

On 24/03/2020 06:17, Strahil Nikolov wrote:

Hey Chris,,

You got some options.
1. To speedup the reads in HCI - you can use the option :
cluster.choose-local: on
2. You can adjust the server and client event-threads
3. You can use NFS Ganesha (which connects to all servers via

libgfapi)  as a NFS Server.

In such case you have to use some clustering like ctdb or

pacemaker.

Note:disable cluster.choose-local if you use this one
4 You can try the built-in NFS , although it's deprecated (NFS

Ganesha is fully supported)

5.  Create a gluster profile during the tests. I have seen numerous

improperly selected tests -> so test with real-world  workload.
Synthetic tests are not good.


Best Regards,
Strahil Nikolov


Hey Chris,

What type is your VM ?
Try with 'High Performance' one (there is a  good RH documentation on

that topic).


If the DB load  was  directly on gluster, you could use the settings
in the '/var/lib/gluster/groups/db-workload'  to optimize that, but 
I'm

not sure  if this will bring any performance  on a VM.


1. Check the VM disk scheduler. Use 'noop/none' (depends on

multiqueue is enabled) to allow  the Hypervisor aggregate the I/O
requests from multiple VMs.

Next, set 'noop/none' disk scheduler  on the hosts - these 2 are the

optimal for SSDs and NVME disks  (if I recall corectly you are  using
SSDs)


2. Disable cstates on the host and Guest (there are a lot of articles

about that)


3. Enable MTU 9000 for Hypervisor (gluster node).

4. You can try setting/unsetting the tunables in the db-workload

group and run benchmarks with real workload  .


5.  Some users  reported  that enabling  TCP offload  on the hosts

gave huge  improvement in performance  of gluster  - you can try that.

Of course  there are mixed  feelings - as others report  that

disabling it brings performance. I guess  it is workload  specific.


6.  You can try to tune  the 'performance.readahead'  on your

gluster volume.


Here are some settings  of some users /from an old e-mail/:

performance.read-ahead: on
performance.stat-prefetch: on
performance.flush-behind: on
performance.client-io-threads: on
performance.write-behind-window-size: 64MB (shard  

[ovirt-users] gluster shard size

2020-01-24 Thread Alex McWhirter

Building a new gluster volume this weekend, trying to optimize it fully
for virt. RHGS states that it supports only a 512MB shard size, so I ask:
why is the default for oVirt 64MB?___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5FT6HLJDQCQKQ3BQG7CD2IF2OC6F544T/


[ovirt-users] Re: Libgfapi considerations

2019-12-16 Thread Alex McWhirter
I also use libgfapi in prod. 


1. This is a pretty annoying issue; I wish engine-config would check
whether it is already enabled and just keep it that way. 


2. Edit /etc/libvirt/qemu.conf and set dynamic_ownership to 0; that will
stop the permission changes. 

3. I don't see this error on any of my clusters, all using libgfapi. 


I also have no issues using snapshots with libgfapi, but live migration
between storage domains indeed does not work. 
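
For anyone searching later, the knobs involved look roughly like this (a sketch; the cluster level is just an example, and the qemu.conf change is a blunt workaround rather than a proper fix):

    # on the engine; re-check this after major upgrades, since new cluster
    # levels get their own copy of the setting
    engine-config -s LibgfApiSupported=true --cver=4.3
    systemctl restart ovirt-engine

    # on each host, workaround for the ownership resets from point 2
    sed -i 's/^#\?dynamic_ownership.*/dynamic_ownership = 0/' /etc/libvirt/qemu.conf
    systemctl restart libvirtd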


On 2019-12-16 12:46, Darrell Budic wrote:

I use libgfap in production, the performance is worth a couple of quirks for me. 

- watch major version updates, they'll silently turn it off because the engine starts using a new version variable 
- VM/qemu security quirk that resets ownership when the VM quits, was supposedly fixed in 4.3.6 but I still have it happen to me, a cron'd chown keeps it under control for me 
- some VMs cause a libvirt/vdsmd interaction that results in failed stats query, and the engine thinks my VMs are offline because the stats gathering is stuck. hoped a bug fix in 4.3.6 would take care of this too, but didn't. may be my VMs though, still analyzing for specific file issues 

I need to spend some time doing a little more research and filing/updating some bug reports, but it's been a busy end of year so far... 


-Darrell

On Dec 14, 2019, at 5:47 PM, Strahil Nikolov  wrote: 

According to GlusterFS Storage Domain [1]  
the feature is not the default as it is incompatible with Live Storage Migration. 

Best Regards, 
Strahil Nikolov 

В събота, 14 декември 2019 г., 17:06:32 ч. Гринуич+2, Jayme  написа: 


Are there currently any known issues with using libgfapi in the latest stable 
version of ovirt in hci deployments?  I have recently enabled it and have 
noticed a significant (over 4x) increase in io performance on my vms. I'm 
concerned however since it does not seem to be an ovirt default setting.  Is 
libgfapi considered safe and stable to use in ovirt 4.3 hci? 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FYVTG3NUIXE5LJBBVEGGKHQFOGKJ5CU2/
 ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDZD4W5UYYR6MROFS2OS5HLZCUIJUVIJ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KAWATJCAONOXE2HSLPXKC4YB23JE3KA/




Links:
--
[1]
https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain.html___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLLC3CMLWGSWFMWTYRBTRBOD764N2EEF/


[ovirt-users] Re: VDI

2019-10-06 Thread Alex McWhirter

We use customized versions of SPICE / KVM, the same versions oVirt ships
with for compatibility reasons, with audio patches on the KVM side and
SPICE patches for VP8-encoding the video streams. We've been meaning to
make the repo for our custom patched versions public for a while; if you
are interested I can accelerate that. Note, you also need a patched
version of the SPICE client if yours wasn't built with VP8 support; we have
those too. 


On the experimental side we have another in-progress set of patches that
enable H.264 encoding in SPICE, hardware-accelerated with AMD W5100s,
but this requires a lot of new software to be installed and a new
kernel. CentOS 8 should fix most of that, so we'll probably rebase and
release on that when the time is right. 


The GPUs are not used for the guests at all; we only use them for the
H.264 encoding. AMD was picked to avoid proprietary drivers and stream
limits. No RAM / SR-IOV needed; if you want 3D support you will be
looking more at NVIDIA GRID. 


Anyway, with just the patched software installed and some custom
settings, video playback is about 95% of native quality and takes about
40 Mbit/s per client to stream. Audio has the occasional stutter, but
it's not bad. 


On 2019-10-06 05:09, Leo David wrote:

Thank you for sharing the informations Alex, they are very helpfull. I am now able to get sound from the vms,  although performance is pretty poor even with "adjust for performance" setting in Win10. Cannot even talk about youtube video playing - freezing and crackling.  
Could you please be so kind to share the following infos: 
1. Have you upgraded the "spice-server" installed on the hosts with a  newer version than 1.4.0 ? If so,  could you provide me how could I get these packages ? 
2. What graphic card have you used for getting better graphic performance with the vms ? Im trying to understand what "accepted" card could I use with my 1U chassis servers... 
3. Is it only needed to install the card and the platform will alocate physical video memory to "desktop" vms ? (  Will the card RAM be automatically shared across the desktop tyoe vms running on top  of the host ? ) 
4. Is it necesarilly to activate sr-iov in the hosts bios or any other platform configurations ? 

I am really sorry for asking too many things,  but im just trying to get these vdi vms working at least decent... 
Thank you so much ! 

Leo 


-- Forwarded message -
From: 
Date: Tue, Sep 24, 2019 at 7:50 PM
Subject: Re: [ovirt-users] Re: VDI
To: Leo David  

Audio should just work as long as the VM is of the desktop type. 


On Sep 24, 2019 6:50 AM, Leo David  wrote:

Thank you Alex, 
When you say "gpu backed"  are you referring to sr-iov  to share same gpu to multiple vms ? 
Any thoughts regarding passing audio form the vm to the client ? 
Did you do any update of the spice-server on the hosts ? 

Thanks, 

Leo 

On Tue, Sep 24, 2019 at 12:01 PM  wrote: 

I believe a lot of the package updates in CentOS 8 will solve some of the issues. 

But for now we get around them by disabling all visual effects on our VMS. If you are gpu backing the VMS with something like Nvidia grid the issues are non existent, but for non gpu backed VMS currently disabling all the effects is a must. 


We deploy the changes via gpo directly to the registry, so they take effect on 
first VM boot.

On Sep 24, 2019 2:03 AM, Leo David  wrote:

Thank you Alex from my side as well, very usefull information. I am the middle of vdi implementation as well, and i'm having issues with the spice console since 4.2, and it seems that latest 4.3 is still having the problem. 
What am i confrunting is: 
- spice console is very slaggy and slow for Win10 vms ( not even talking about running videos..) 
- i can't find a way to get audio from the vm 
At the moment i am running 4.3, latest virt-viewer installed on the client, and latest qxl-dod driver installed on the vm. 
Any thoughts on solving video performance and audio redirection ? 
Thank you again, 

Leo 

On Mon, Sep 23, 2019, 22:53 Alex McWhirter  wrote: 

To achieve that all you need to do is create a template of the desktop base vm, make sure the vm type is set to desktop. Afterwards just create new vms from that template. As long as the VM type is set to desktop each new VM will use a qcow overlay on top of the base image. 

Taking this a step further you can then create VM pools from said template, allowing users to dynamically be assigned a new VM on login. Granted pools are usually stateless, so you need to have network file storage. We use pools for windows 10 VDI instances, where we use sysprep to autojoin the new pool vm to the domain where redirected folders are already setup. 

For VDI only use spice protocol. By default we found spice to be semi lackluster, so we do apply custom settings and we have recompiled spice on both servers and clients with h264 support. This is not 100% necessary, but makes 

[ovirt-users] Re: VDI

2019-09-23 Thread Alex McWhirter

To achieve that, all you need to do is create a template of the desktop
base VM and make sure the VM type is set to desktop. Afterwards just create
new VMs from that template. As long as the VM type is set to desktop,
each new VM will use a qcow overlay on top of the base image. 


Taking this a step further, you can then create VM pools from said
template, allowing users to dynamically be assigned a new VM on login.
Granted, pools are usually stateless, so you need to have network file
storage. We use pools for Windows 10 VDI instances, where we use sysprep
to auto-join the new pool VM to the domain, where redirected folders are
already set up. 


For VDI, only use the SPICE protocol. By default we found SPICE to be
semi-lackluster, so we apply custom settings and have recompiled SPICE
on both servers and clients with H.264 support. This is not 100%
necessary, but it makes things like YouTube much more usable. We have also
backported some audio patches to KVM. CentOS 8 should remove the need for
a lot of these customizations that we've had to do. 


As far as updating, pretty much. We create a VM from the template,
update it, then push it back as a new version of the template. The pools
are set to always use the latest template version. Users have to log
out, then back in to the VDI system in order to get the new image, as
logging out will destroy the user's current instance and create a new one
on login. 


On 2019-09-23 15:16, Fabio Marzocca wrote:

Hi Alex, thanks for answering. 

I am approaching and studying oVirt in order to propose the solution to a customer as a replacement for a commercial solution they have now. 
They only need Desktop virtualization. 
Sorry for the silly question, but I can't find a way to deploy a VM (template) to users as a "linked-clone", meaning that the users' image still refers to the original image but modification are written (and afterwards read) from a new location. This technique is called Copy-on-write. 
Can this be achieved with oVirt? 

Then, what is the Best Practice to update WIndows OS for the all the users? Currently they simply "check-out"  the Gold Image, update it and check-in, while all users are running... 

Fabio 

On Mon, Sep 23, 2019 at 8:04 PM Alex McWhirter  wrote: 

yes, we do. All spice, with some customizations done at source level for spice / kvm packages. 

On 2019-09-23 13:44, Fabio Marzocca wrote: 
Is there anyone who uses oVirt as a full VDI environment? I would have a bunch of questions... 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D44YB5VOKNBNCJSOMLKRAZBURFJLAOLM/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DWY46KJGSYJGRXMRY7WWCINAW2Y5ETDI/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter
I have gone in and changed the libvirt configuration files on the cluster
nodes, which has resolved the issue for the time being. 

I can reverse one of them and post the logs to help with the issue,
hopefully tomorrow. 
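
For reference, the manual permission fix mentioned in the original post boils down to something like this (sketch; the path assumes GlusterFS / file-based storage domains mounted under /rhev):

    # put ownership back on any image files that were flipped to root
    find /rhev/data-center/mnt/glusterSD -path '*/images/*' -user root \
        -exec chown -v vdsm:kvm {} +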

On 2019-06-14 17:56, Nir Soffer wrote:

> On Fri, Jun 14, 2019 at 7:05 PM Milan Zamazal  wrote: 
> 
>> Alex McWhirter  writes:
>> 
>>> In this case, i should be able to edit /etc/libvirtd/qemu.conf on all
>>> the nodes to disable dynamic ownership as a temporary measure until
>>> this is patched for libgfapi?
>> 
>> No, other devices might have permission problems in such a case.
> 
> I wonder how libvirt can change the permissions for devices it does not know 
> about? 
> 
> When using libgfapi, we pass libvirt: 
> 
> <disk type='network' device='disk'>
>   <source protocol='gluster' name='volume/----'>
>     <host name='brick1.example.com' transport='tcp'/>
>     <host name='brick2.example.com' transport='tcp'/>
>   </source>
> </disk>
> 
> So libvirt does not have the path to the file, and it cannot change the 
> permissions. 
> 
> Alex, can you reproduce this flow and attach vdsm and engine logs from all 
> hosts 
> to the bug? 
> 
> Nir 
> 
>>> On 2019-06-13 10:37, Milan Zamazal wrote:
>>>> Shani Leviim  writes:
>>>> 
>>>>> Hi,
>>>>> It seems that you hit this bug:
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>>>>> 
>>>>> Adding +Milan Zamazal , Can you please confirm?
>>>> 
>>>> There may still be problems when using GlusterFS with libgfapi:
>>>> https://bugzilla.redhat.com/1719789.
>>>> 
>>>> What's your Vdsm version and which kind of storage do you use?
>>>> 
>>>>> *Regards,*
>>>>> 
>>>>> *Shani Leviim*
>>>>> 
>>>>> 
>>>>> On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter 
>>>>> wrote:
>>>>> 
>>>>>> after upgrading from 4.2 to 4.3, after a vm live migrates it's disk
>>>>>> images are become owned by root:root. Live migration succeeds and
>>>>>> the vm
>>>>>> stays up, but after shutting down the VM from this point, starting
>>>>>> it up
>>>>>> again will cause it to fail. At this point i have to go in and change
>>>>>> the permissions back to vdsm:kvm on the images, and the VM will boot
>>>>>> again.
>>>>>> ___
>>>>>> Users mailing list -- users@ovirt.org
>>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
>>>>>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/36Z6BB5NGYEEFMPRTDYKFJVVBPZFUCBL/

  

Links:
--
[1] http://brick1.example.com
[2] http://brick2.example.com___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YFAVZUB3CTAEADS54ZD73KTFSXXBM2FB/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter
In this case, I should be able to edit /etc/libvirt/qemu.conf on all 
the nodes to disable dynamic ownership as a temporary measure until this 
is patched for libgfapi?



On 2019-06-13 10:37, Milan Zamazal wrote:

Shani Leviim  writes:


Hi,
It seems that you hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795

Adding +Milan Zamazal , Can you please confirm?


There may still be problems when using GlusterFS with libgfapi:
https://bugzilla.redhat.com/1719789.

What's your Vdsm version and which kind of storage do you use?


*Regards,*

*Shani Leviim*


On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter  
wrote:



after upgrading from 4.2 to 4.3, after a VM live migrates its disk
images become owned by root:root. Live migration succeeds and the VM
stays up, but after shutting down the VM from this point, starting it up
again will cause it to fail. At this point I have to go in and change
the permissions back to vdsm:kvm on the images, and the VM will boot
again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N64DCWMJNVUTE2DKVZ4MFN7GATQ5INJL/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter

Yes we are using GlusterFS distributed replicate with libgfapi

VDSM 4.30.17

On 2019-06-13 10:37, Milan Zamazal wrote:

Shani Leviim  writes:


Hi,
It seems that you hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795

Adding +Milan Zamazal , Can you please confirm?


There may still be problems when using GlusterFS with libgfapi:
https://bugzilla.redhat.com/1719789.

What's your Vdsm version and which kind of storage do you use?


*Regards,*

*Shani Leviim*


On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter  
wrote:



after upgrading from 4.2 to 4.3, after a VM live migrates its disk
images become owned by root:root. Live migration succeeds and the VM
stays up, but after shutting down the VM from this point, starting it up
again will cause it to fail. At this point I have to go in and change
the permissions back to vdsm:kvm on the images, and the VM will boot
again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SEBMA6WG6LGCTYVNQSBLTZDCSVK6QRDN/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter

Gluster storage type,

Ill do some migrations and attach logs from the same period shortly


On 2019-06-13 07:47, Benny Zlotnik wrote:

Also, what is the storage domain type? Block or File?

On Thu, Jun 13, 2019 at 2:46 PM Benny Zlotnik  
wrote:


Can you attach vdsm and engine logs?
Does this happen for new VMs as well?

On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter  
wrote:

>
> after upgrading from 4.2 to 4.3, after a VM live migrates its disk
> images become owned by root:root. Live migration succeeds and the VM
> stays up, but after shutting down the VM from this point, starting it up
> again will cause it to fail. At this point I have to go in and change
> the permissions back to vdsm:kvm on the images, and the VM will boot
> again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SCOSASCXXDJFK6XHIWRI7Y77LJBRR7V3/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter

My gluster servers are already on 5.6

On 2019-06-13 06:10, Strahil wrote:

Hi Alex,

Did you migrate from gluster v3 to v5 ?
If yes, then it could be the issue with v5.3  where permissions go 
wrong.
If so, pick ovirt 4.3.4 as it uses a  newer (fixed ) version of gluster 
-> v5.6


Best Regards,
Strahil NikolovOn Jun 13, 2019 09:46, Alex McWhirter  
wrote:


after upgrading from 4.2 to 4.3, after a VM live migrates its disk
images become owned by root:root. Live migration succeeds and the VM
stays up, but after shutting down the VM from this point, starting it up
again will cause it to fail. At this point I have to go in and change
the permissions back to vdsm:kvm on the images, and the VM will boot
again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3BVYYQZNDMVQG22WBZSQZHV7TW5CI4G/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter
engine: 4.3.4.2-1.el7

Node Versions 

OS Version: 
RHEL - 7 - 6.1810.2.el7.centos 

OS Description: 
CentOS Linux 7 (Core) 

Kernel Version: 
3.10.0 - 957.12.2.el7.x86_64 

KVM Version: 
2.12.0 - 18.el7_6.5.1 

LIBVIRT Version: 
libvirt-4.5.0-10.el7_6.10 

VDSM Version: 
vdsm-4.30.17-1.el7 

SPICE Version: 
0.14.0 - 6.el7_6.1 

GlusterFS Version: 
[N/A] 

On 2019-06-13 06:51, Simone Tiraboschi wrote:

> On Thu, Jun 13, 2019 at 11:18 AM Alex McWhirter  wrote: 
> 
>> after upgrading from 4.2 to 4.3, after a VM live migrates its disk 
>> images become owned by root:root. Live migration succeeds and the VM 
>> stays up, but after shutting down the VM from this point, starting it up 
>> again will cause it to fail. At this point I have to go in and change 
>> the permissions back to vdsm:kvm on the images, and the VM will boot 
>> again.
> 
> We had an old bug about that: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795 
> but it's reported as fixed. 
> 
> Can you please detail the exact version of ovirt-engine and vdsm you are 
> using on all of your hosts? 
> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
> 
> -- 
> 
> Simone Tiraboschi 
> 
> He / Him / His 
> 
> Principal Software Engineer 
> 
> Red Hat [1]
> 
> stira...@redhat.com   
> 
> @redhatjobs [2]   redhatjobs [3] @redhatjobs [4]   
> 
> [5]
> 
> [6]

  

Links:
--
[1] https://www.redhat.com/
[2] https://twitter.com/redhatjobs
[3] https://www.facebook.com/redhatjobs
[4] https://instagram.com/redhatjobs
[5] https://red.ht/sig
[6] https://redhat.com/summit___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DQHALQEXSVZTF5UYLMSNHZ4GMQ7EZXJ6/


[ovirt-users] 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter
after upgrading from 4.2 to 4.3, after a VM live migrates its disk 
images become owned by root:root. Live migration succeeds and the VM 
stays up, but after shutting down the VM from this point, starting it up 
again will cause it to fail. At this point I have to go in and change 
the permissions back to vdsm:kvm on the images, and the VM will boot 
again.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/


[ovirt-users] libvirt memory leak?

2019-06-01 Thread Alex McWhirter
After moving from 4.2 -> 4.3, libvirtd seems to be leaking memory; it 
recently crashed a host by eating 123GB of RAM. It seems to follow one 
specific VM around. This is the only VM I have created since 4.3, the 
others were all made in 4.2 then upgraded to 4.3.


What logs would be applicable?

Libvirt 4.5.0
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SB75R63BMUEWC5T4XBTKGQNQXXTQV53M/


[ovirt-users] Gluster Snapshot Datepicker Not Working?

2019-05-10 Thread Alex McWhirter
Updated to 4.3.3.7, and the date picker for gluster snapshots appears to not 
be working? It won't register clicks, and manually typing in times 
doesn't work.


Can anyone else confirm?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BV5WG6HDZ5Q4VYTZYWLDTRF4TAFAYAQ6/


[ovirt-users] Migrating Domain Storage Gluster

2019-05-09 Thread Alex McWhirter
Basically I want to take out all of the HDDs in the main gluster pool 
and replace them with SSDs.


My thought was to put everything in maintenance and copy the data manually 
over to a transient storage server. Then destroy the gluster volume, swap in 
all the new drives, build a new gluster volume with the same name / 
settings, move the data back, and be done.


Any thoughts on this?
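
For the manual copy step I'd probably use rsync so ownership, ACLs and xattrs 
survive the round trip. Rough sketch, paths are just placeholders for wherever 
the old volume and the transient box end up mounted:

# copy the storage domain contents off the old volume
# -a = owner/group/perms/times, -A/-X = ACLs and extended attributes,
# -H = hard links, -S = keep sparse images sparse
rsync -aAXHS --progress /mnt/old-volume/ /mnt/transient/backup/

# after rebuilding the volume with the same name / settings,
# rsync back the same way in the other direction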
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TP6V2OYG5ZXPU53TYB6PVQMOCHFBA3NA/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Alex McWhirter
You don't create the brick on the /dev/sd* device.

You can see where I create the brick on the highlighted multipath device
(see attachment). If for some reason you can't do that, you might need
to run wipefs -a on it, as it probably has some leftover headers from
another FS.
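
Something like this, where the device name is just an example; double check 
which multipath device you are about to wipe, since it is destructive:

# confirm it is the right device and that nothing is mounted on it
lsblk /dev/mapper/<multipath-device>

# clear leftover filesystem / RAID signatures so create brick will accept it
wipefs -a /dev/mapper/<multipath-device>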

On 2019-04-25 08:53, Adrian Quintero wrote:

> I understand, however the "create brick" option is greyed out (not enabled), 
> the only way I could get that option to be enabled is if I manually edit the 
> multipathd.conf file and add 
> - 
> # VDSM REVISION 1.3
> # VDSM PRIVATE
> # BEGIN Added by gluster_hci role
> 
> blacklist {
> devnode "*"
> }
> # END Added by gluster_hci role
> -- 
> 
> Then I go back to the UI and I can use sd* (multipath device). 
> 
> thanks, 
> 
> On Thu, Apr 25, 2019 at 8:41 AM Alex McWhirter  wrote: 
> 
> You create the brick on top of the multipath device. Look for one that is the 
> same size as the /dev/sd* device that you want to use. 
> 
> On 2019-04-25 08:00, Strahil Nikolov wrote: 
> 
> In which menu do you see it this way ? 
> 
> Best Regards, 
> Strahil Nikolov 
> 
> В сряда, 24 април 2019 г., 8:55:22 ч. Гринуич-4, Adrian Quintero 
>  написа: 
> 
> Strahil, 
> this is the issue I am seeing now 
> 
> This is thru the UI when I try to create a new brick. 
> 
> So my concern is if I modify the filters on the OS what impact will that have 
> after server reboots? 
> 
> thanks, 
> 
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote: I 
> have edited my multipath.conf to exclude local disks , but you need to set 
> '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the /dev/mapper/multipath-device notation - as you would do 
> with any linux.
> 
> Best Regards,
> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>> 
>> Thanks Alex, that makes more sense now  while trying to follow the 
>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
>> are locked and indicating " multpath_member" hence not letting me create 
>> new bricks. And on the logs I see 
>> 
>> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
>> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
>> failed", "rc": 5} 
>> Same thing for sdc, sdd 
>> 
>> Should I manually edit the filters inside the OS, what will be the impact? 
>> 
>> thanks again.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>>  
> 
> -- 
> Adrian Quintero 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDQFZKYIQGFSAWW5G37ZKRPFLQUXLALJ/

-- 
Adrian Quintero___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DUPTKNF4TBYVUUDUOYBVZ5AFJKXFUXMB/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Alex McWhirter
You create the brick on top of the multipath device. Look for one that
is the same size as the /dev/sd* device that you want to use. 
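
A quick way to match them up, for example:

# list disks and multipath maps with their sizes
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# or show each multipath map and the /dev/sd* paths behind it
multipath -ll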

On 2019-04-25 08:00, Strahil Nikolov wrote:

> In which menu do you see it this way ? 
> 
> Best Regards, 
> Strahil Nikolov 
> 
> В сряда, 24 април 2019 г., 8:55:22 ч. Гринуич-4, Adrian Quintero 
>  написа: 
> 
> Strahil, 
> this is the issue I am seeing now 
> 
> This is thru the UI when I try to create a new brick. 
> 
> So my concern is if I modify the filters on the OS what impact will that have 
> after server reboots? 
> 
> thanks, 
> 
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote: 
> 
>> I have edited my multipath.conf to exclude local disks , but you need to set 
>> '#VDSM private' as per the comments in the header of the file.
>> Otherwise, use the /dev/mapper/multipath-device notation - as you would do 
>> with any linux.
>> 
>> Best Regards,
>> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>>> 
>>> Thanks Alex, that makes more sense now  while trying to follow the 
>>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
>>> are locked and indicating " multpath_member" hence not letting me create 
>>> new bricks. And on the logs I see 
>>> 
>>> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
>>> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
>>> failed", "rc": 5} 
>>> Same thing for sdc, sdd 
>>> 
>>> Should I manually edit the filters inside the OS, what will be the impact? 
>>> 
>>> thanks again.
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
> 
> -- 
> Adrian Quintero 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDQFZKYIQGFSAWW5G37ZKRPFLQUXLALJ/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2APQ4ZM4F5CPO6KJKYJWH6J54MCLDJX/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter
Every template: it happens whenever you make a desktop VM out of it and then delete 
that VM. If you make a server VM there are no issues.



On 2019-04-24 09:30, Benny Zlotnik wrote:

Does it happen all the time? For every template you create?
Or is it for a specific template?

On Wed, Apr 24, 2019 at 12:59 PM Alex McWhirter  
wrote:


oVirt is 4.2.7.5
VDSM is 4.20.43

Not sure which logs are applicable; I don't see any obvious errors in
vdsm.log or engine.log. After you delete the desktop VM and create
another based on the template, the new VM still starts, it just reports
disk read errors and fails to boot.

On 2019-04-24 05:01, Benny Zlotnik wrote:
> can you provide more info (logs, versions)?
>
> On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter 
> wrote:
>>
>> 1. Create server template from server VM (so it's a full copy of the
>> disk)
>>
>> 2. From template create a VM, override server to desktop, so that it
>> becomes a qcow2 overlay to the template raw disk.
>>
>> 3. Boot VM
>>
>> 4. Shutdown VM
>>
>> 5. Delete VM
>>
>>
>>
>> Template disk is now corrupt, any new machines made from it will not
>> boot.
>>
>>
>> I can't see why this happens as the desktop optimized VM should have
>> just been an overlay qcow file...
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFI6JVQZB53PVGVGHILAICSEBXXTYMZF/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter

oVirt is 4.2.7.5
VDSM is 4.20.43

Not sure which logs are applicable; I don't see any obvious errors in 
vdsm.log or engine.log. After you delete the desktop VM and create 
another based on the template, the new VM still starts, it just reports 
disk read errors and fails to boot.


On 2019-04-24 05:01, Benny Zlotnik wrote:

can you provide more info (logs, versions)?

On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter  
wrote:


1. Create server template from server VM (so it's a full copy of the
disk)

2. From template create a VM, override server to desktop, so that it
becomes a qcow2 overlay to the template raw disk.

3. Boot VM

4. Shutdown VM

5. Delete VM



Template disk is now corrupt, any new machines made from it will not
boot.


I can't see why this happens as the desktop optimized VM should have
just been an overlay qcow file...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TXD5OOS3NMRZCIWW7Q2CGKMIGIBUJNAR/


[ovirt-users] Template Disk Corruption

2019-04-24 Thread Alex McWhirter
1. Create server template from server VM (so it's a full copy of the 
disk)


2. From template create a VM, override server to desktop, so that it 
becomes a qcow2 overlay to the template raw disk.


3. Boot VM

4. Shutdown VM

5. Delete VM



Template disk is now corrupt; any new machines made from it will not 
boot.



I can't see why this happens as the desktop optimized VM should have 
just been an overlay qcow file...
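
One thing I plan to check next time it happens is whether the template's base 
image itself is being modified, or whether the overlay chain is just broken. 
Paths below are placeholders for wherever the images live on the storage domain:

# show the backing chain of the new VM's disk; the backing file should
# point at the template image
qemu-img info --backing-chain /path/to/new-vm-volume

# check the qcow2 overlay for internal corruption
qemu-img check /path/to/new-vm-volume

# and compare a checksum of the template's raw image before / after the
# desktop VM is deleted
md5sum /path/to/template-volume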

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread Alex McWhirter

On 2019-04-22 17:33, adrianquint...@gmail.com wrote:

Found the following and answered part of my own questions, however I
think this sets a new set of Replica 3 Bricks, so if I have 2 hosts
fail from the first 3 hosts then I lose my hyperconverged?

https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/scaling#task-cockpit-gluster_mgmt-expand_cluster

thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YVTQHOOLPM3Z73CJYCPRY6ACZ72KAUW/


When you add the new set of bricks and rebalance, gluster will still 
respect your current replica value of 3. So every file that gets added 
will get two copies placed on other bricks as well. You don't know which 
bricks will get a copy of a given file, so the redundancy is exactly the 
same: you can lose 2 hosts out of all hosts in the cluster.


If that is not enough for you, you can increase the replica value at the 
cost of storage space.
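
If you want to double check after expanding, something like this (volume name 
is just an example) shows whether it is still a replica 3 distributed-replicate:

# "Number of Bricks: 2 x 3 = 6" would mean two distribute subvolumes,
# each replicated on 3 bricks
gluster volume info data1 | grep -E 'Type|Number of Bricks'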

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y3M5W2E4C774RUOHS2HZN6N4HB4CKDVU/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread Alex McWhirter

On 2019-04-22 14:48, adrianquint...@gmail.com wrote:

Hello,
I have a 3 node Hyperconverged setup with gluster and added 3 new
nodes to the cluster for a total of 6 servers.

I am now taking advantage of more compute power but cant scale out my
storage volumes.

Current Hyperconverged setup:
- host1.mydomain.com ---> Bricks: engine data1 vmstore1
- host2.mydomain.com ---> Bricks: engine data1 vmstore1
- host3.mydomain.com ---> Bricks: engine data1 vmstore1

From these 3 servers we get the following Volumes:
- engine(host1:engine, host2:engine, host3:engine)
- data1  (host1:data1, host2:data1, host3:data1)
- vmstore1  (host1:vmstore1, host2:vmstore1, host3:vmstore11)

The following are the newly added servers to the cluster, however as
you can see there are no gluster bricks
- host4.mydomain.com
- host5.mydomain.com
- host6.mydomain.com

I know that the bricks must be added in sets of 3 and per the first 3
hosts that is how it was deployed thru the web UI.

Questions:
-how can i extend the gluster volumes engine, data1 and vmstore using
host4, host5 and host6?
-Do I need to configure gluster volumes manually through the OS CLI in
order for them to span amongst all 6 servers?
-If I configure the fail storage scenario manually will oVirt know
about it?, will it still be hyperconverged?


I have only seen 3 host hyperconverged setup examples with gluster,
but have not found examples for 6, 9 or 12 host cluster with gluster.
I know it might be a lack of understanding from my end on how ovirt
and gluster integrate with one another.

If you can point me in the right direction would be great.

thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJJ2JVIXTGLCHUSGEUMHSIWPKVREPTEJ/


Compute > Hosts > (the new host) > Storage Devices > Create Brick (do all 
three hosts)
Storage > Volumes > (the volume) > Bricks > Add (add all three bricks at 
once)
Storage > Volumes > (the volume) > right click > Rebalance


Rebalance is needed to take advantage of the new storage immediately and 
make the most efficient use of it, but it is very IO heavy, so only do this 
during downtime. You can go ahead and add the bricks now and wait for a 
maintenance window to do the rebalance.
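
If you'd rather do it from the CLI, the rough equivalent of the UI steps above 
(brick paths are just examples, match whatever layout your existing bricks use):

# add one new brick per new host, keeping replica 3
gluster volume add-brick data1 replica 3 \
    host4:/gluster_bricks/data1/data1 \
    host5:/gluster_bricks/data1/data1 \
    host6:/gluster_bricks/data1/data1

# then, in the maintenance window, spread existing data across the new bricks
gluster volume rebalance data1 start
gluster volume rebalance data1 status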

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IEUMXSH332GS2W5WNJ5NYPHZJ472XXU7/


[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-15 Thread Alex McWhirter
On 2019-04-15 13:08, Leo David wrote:

> Thank you Alex ! 
> I will try these performance settings. 
> If someone from the dev guys could validate and recommend those as a good 
> standard configuration, it would be just great. 
> If they are ok, wouldn't it be nice to have them applied from within the UI with 
> the "Optimize for VirtStore" button? 
> Thank you! 
> 
> On Mon, Apr 15, 2019 at 7:39 PM Alex McWhirter  wrote: 
> 
> On 2019-04-14 23:22, Leo David wrote: 
> Hi, 
> Thank you Alex, I was looking for some optimisation settings as well, since I 
> am pretty much in the same boat, using ssd based replicate-distributed 
> volumes across 12 hosts. 
> Could anyone else (maybe even from the ovirt or rhev team) validate these 
> settings or add some other tweaks as well, so we can use them as standard ? 
> Thank you very much again ! 
> 
> On Mon, Apr 15, 2019, 05:56 Alex McWhirter  wrote: 
> 
> On 2019-04-14 20:27, Jim Kusznir wrote: 
> 
> Hi all:
> I've had I/O performance problems pretty much since the beginning of using 
> oVirt.  I've applied several upgrades as time went on, but strangely, none of 
> them have alleviated the problem.  VM disk I/O is still very slow to the 
> point that running VMs is often painful; it notably affects nearly all my 
> VMs, and makes me leary of starting any more.  I'm currently running 12 VMs 
> and the hosted engine on the stack. 
> My configuration started out with 1Gbps networking and hyperconverged gluster 
> running on a single SSD on each node.  It worked, but I/O was painfully slow. 
>  I also started running out of space, so I added an SSHD on each node, 
> created another gluster volume, and moved VMs over to it.  I also ran that on 
> a dedicated 1Gbps network.  I had recurring disk failures (seems that disks 
> only lasted about 3-6 months; I warrantied all three at least once, and some 
> twice before giving up).  I suspect the Dell PERC 6/i was partly to blame; 
> the raid card refused to see/acknowledge the disk, but plugging it into a 
> normal PC showed no signs of problems.  In any case, performance on that 
> storage was notably bad, even though the gig-e interface was rarely taxed. 
> I put in 10Gbps ethernet and moved all the storage on that none the less, as 
> several people here said that 1Gbps just wasn't fast enough.  Some aspects 
> improved a bit, but disk I/O is still slow.  And I was still having problems 
> with the SSHD data gluster volume eating disks, so I bought a dedicated NAS 
> server (supermicro 12 disk dedicated FreeNAS NFS storage system on 10Gbps 
> ethernet).  Set that up.  I found that it was actually FASTER than the 
> SSD-based gluster volume, but still slow.  Lately its been getting slower, 
> too...Don't know why.  The FreeNAS server reports network loads around 4MB/s 
> on its 10Gbe interface, so its not network constrained.  At 4MB/s, I'd sure 
> hope the 12 spindle SAS interface wasn't constrained either.  (and disk 
> I/O operations on the NAS itself complete much faster). 
> So, running a test on my NAS against an ISO file I haven't accessed in 
> months: 
> 
> # dd 
> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>  of=/dev/null bs=1024k count=500  
> 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec) 
> Running it on one of my hosts: 
> 
> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k count=500 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s 
> (I don't know if this is a true apples to apples comparison, as I don't have 
> a large file inside this VM's image).  Even this is faster than I often see. 
> I have a VoIP Phone server running as a VM.  Voicemail and other recordings 
> usually fail due to IO issues opening and writing the files.  Often, the 
> first 4 or so seconds of the recording is missed; sometimes the entire thing 
> just fails.  I didn't use to have this problem, but its definately been 
> getting worse.  I finally bit the bullet and ordered a physical server 
> dedicated for my VoIP System...But I still want to figure out why I'm having 
> all these IO problems.  I read on the list of people running 30+ VMs...I feel 
> that my IO can't take any more VMs with any semblance of reliability.  We 
> have a Quickbooks server on here too (windows), and the performance is 
> abysmal; my CPA is charging me extra because of all the lost staff time 
> waiting on the system to respond and generate reports. 
> I'm at my whits end...I started with gluster on SSD with 1Gbps network, 
> migrated to 10Gbps netw

[ovirt-users] Re: Tuning Gluster Writes

2019-04-15 Thread Alex McWhirter
On 2019-04-15 12:58, Alex McWhirter wrote:

> On 2019-04-15 12:43, Darrell Budic wrote: Interesting. Whose 10g cards and 
> which offload settings did you disable? Did you do that on the servers or the 
> vm host clients or both? 
> 
> On Apr 15, 2019, at 11:37 AM, Alex McWhirter  wrote: 
> 
> I went in and disabled TCP offload on all the NICs, huge performance boost. 
> Went from 110MB/s to 240MB/s seq writes; reads lost a bit of performance, 
> going down to 680MB/s, but that's a decent trade off. Latency is still really 
> high though, need to work on that. I think some more TCP tuning might help.
> 
> Those changes didn't do a whole lot, but I ended up enabling 
> performance.read-ahead on the gluster volume. My blockdev read-ahead values 
> were already 8192, which seemed good enough. Not sure if ovirt set those, or 
> if it's just the defaults of my RAID controller. 
> 
> Anyways, up to 350MB/s writes, 700MB/s reads, which just so happens to correlate 
> with the saturation of my 10G network. Latency is still a slight issue, but 
> at least now I'm not blocking :)
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5COPHAIVCVK42KMMGWZQVMNGDH6Q32ZC/

These are dual port QLogic QLGE cards, plugging into dual Cisco Nexus
3064s with vPC to allow me to LACP across two switches. They are
FCoE/10GbE cards, so on the Cisco switches I had to disable LLDP on the
ports to stop FCoE initiator errors from disabling ports (as I don't use
FCoE atm).

bond options are "mode=4 lacp_rate=1 miimon=100 xmit_hash_policy=1" 

then I have the following /sbin/ifup-local script that triggers when the
Storage network comes up:

#!/bin/bash
# /sbin/ifup-local is called by the network scripts after an interface
# comes up; $1 is the name of the interface/network that was brought up
case "$1" in
  Storage)
    # disable checksum and segmentation offloads on both 10GbE ports
    /sbin/ethtool -K ens2f0 tx off rx off tso off gso off
    /sbin/ethtool -K ens2f1 tx off rx off tso off gso off
    # shrink the transmit queues on the ports, the bond and the bridge
    /sbin/ip link set dev ens2f0 txqueuelen 1
    /sbin/ip link set dev ens2f1 txqueuelen 1
    /sbin/ip link set dev bond2 txqueuelen 1
    /sbin/ip link set dev Storage txqueuelen 1
  ;;
  *)
  ;;
esac
exit 0 

If you have LRO, disable it too IMO; these cards do not do LRO, so it's
not applicable to me.
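
For cards that do support LRO, checking and turning it off per interface is 
just (interface name here is from my setup, substitute your own):

# see whether large-receive-offload is on (or fixed by the driver)
ethtool -k ens2f0 | grep large-receive-offload

# turn it off if the driver allows it
ethtool -K ens2f0 lro off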

This did cut down my read performance by about 50MB/s, but my writes went
from 98-110MB/s to about 240MB/s, and then enabling read-ahead got me to the
350MB/s it should have been.
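
For reference, the read-ahead pieces (volume and device names are placeholders, 
and the 8192 figure is in 512-byte sectors):

# gluster-side read-ahead on the volume
gluster volume set <volume> performance.read-ahead on

# block-device read-ahead on the brick's backing device
blockdev --getra /dev/<brick-device>
blockdev --setra 8192 /dev/<brick-device>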

Oh, and I disabled the offloads on both the VM hosts and the storage machines. Same cards
in all of them.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EZF2G3AWM3GSDR35CLP2SKABALQ6H4KN/


[ovirt-users] Re: Tuning Gluster Writes

2019-04-15 Thread Alex McWhirter
On 2019-04-15 12:43, Darrell Budic wrote:

> Interesting. Whose 10g cards and which offload settings did you disable? Did 
> you do that on the servers or the vm host clients or both? 
> 
> On Apr 15, 2019, at 11:37 AM, Alex McWhirter  wrote: 
> 
> I went in and disabled TCP offload on all the NICs, huge performance boost. 
> Went from 110MB/s to 240MB/s seq writes; reads lost a bit of performance, 
> going down to 680MB/s, but that's a decent trade off. Latency is still really 
> high though, need to work on that. I think some more TCP tuning might help.
> 
> Those changes didn't do a whole lot, but I ended up enabling 
> performance.read-ahead on the gluster volume. My blockdev read-ahead values 
> were already 8192, which seemed good enough. Not sure if ovirt set those, or 
> if it's just the defaults of my RAID controller. 
> 
> Anyways, up to 350MB/s writes, 700MB/s reads, which just so happens to correlate 
> with the saturation of my 10G network. Latency is still a slight issue, but 
> at least now I'm not blocking :)
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5COPHAIVCVK42KMMGWZQVMNGDH6Q32ZC/

These are dual port QLogic QLGE cards, plugging into dual Cisco Nexus
3064s with vPC to allow me to LACP across two switches. They are
FCoE/10GbE cards, so on the Cisco switches I had to disable LLDP on the
ports to stop FCoE initiator errors from disabling ports (as I don't use
FCoE atm).

bond options are "mode=4 lacp_rate=1 miimon=100 xmit_hash_policy=1" 

then I have the following /sbin/ifup-local script that triggers when the
Storage network comes up:

#!/bin/bash
case "$1" in
  Storage)
/sbin/ethtool -K ens2f0 tx off rx off tso off gso off
/sbin/ethtool -K ens2f1 tx off rx off tso off gso off
/sbin/ip link set dev ens2f0 txqueuelen 1
/sbin/ip link set dev ens2f1 txqueuelen 1
/sbin/ip link set dev bond2 txqueuelen 1
/sbin/ip link set dev Storage txqueuelen 1
  ;;
  *)
  ;;
esac
exit 0 

If you have LRO, disable it too IMO; these cards do not do LRO, so it's
not applicable to me.

This did cut down my read performance by about 50MB/s, but my writes went
from 98-110MB/s to about 240MB/s, and then enabling read-ahead got me to the
350MB/s it should have been.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y6ZHB3JSQS6GLXWUWU6N7JBMRN5EUANS/


[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-15 Thread Alex McWhirter
On 2019-04-14 23:22, Leo David wrote:

> Hi, 
> Thank you Alex, I was looking for some optimisation settings as well, since I 
> am pretty much in the same boat, using ssd based replicate-distributed 
> volumes across 12 hosts. 
> Could anyone else (maybe even from the ovirt or rhev team) validate these 
> settings or add some other tweaks as well, so we can use them as standard ? 
> Thank you very much again ! 
> 
> On Mon, Apr 15, 2019, 05:56 Alex McWhirter  wrote: 
> 
> On 2019-04-14 20:27, Jim Kusznir wrote: 
> 
> Hi all:
> I've had I/O performance problems pretty much since the beginning of using 
> oVirt.  I've applied several upgrades as time went on, but strangely, none of 
> them have alleviated the problem.  VM disk I/O is still very slow to the 
> point that running VMs is often painful; it notably affects nearly all my 
> VMs, and makes me leary of starting any more.  I'm currently running 12 VMs 
> and the hosted engine on the stack. 
> My configuration started out with 1Gbps networking and hyperconverged gluster 
> running on a single SSD on each node.  It worked, but I/O was painfully slow. 
>  I also started running out of space, so I added an SSHD on each node, 
> created another gluster volume, and moved VMs over to it.  I also ran that on 
> a dedicated 1Gbps network.  I had recurring disk failures (seems that disks 
> only lasted about 3-6 months; I warrantied all three at least once, and some 
> twice before giving up).  I suspect the Dell PERC 6/i was partly to blame; 
> the raid card refused to see/acknowledge the disk, but plugging it into a 
> normal PC showed no signs of problems.  In any case, performance on that 
> storage was notably bad, even though the gig-e interface was rarely taxed. 
> I put in 10Gbps ethernet and moved all the storage on that none the less, as 
> several people here said that 1Gbps just wasn't fast enough.  Some aspects 
> improved a bit, but disk I/O is still slow.  And I was still having problems 
> with the SSHD data gluster volume eating disks, so I bought a dedicated NAS 
> server (supermicro 12 disk dedicated FreeNAS NFS storage system on 10Gbps 
> ethernet).  Set that up.  I found that it was actually FASTER than the 
> SSD-based gluster volume, but still slow.  Lately its been getting slower, 
> too...Don't know why.  The FreeNAS server reports network loads around 4MB/s 
> on its 10Gbe interface, so its not network constrained.  At 4MB/s, I'd sure 
> hope the 12 spindle SAS interface wasn't constrained either.  (and disk 
> I/O operations on the NAS itself complete much faster). 
> So, running a test on my NAS against an ISO file I haven't accessed in 
> months: 
> 
> # dd 
> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>  of=/dev/null bs=1024k count=500  
> 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec) 
> Running it on one of my hosts: 
> 
> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k count=500 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s 
> (I don't know if this is a true apples to apples comparison, as I don't have 
> a large file inside this VM's image).  Even this is faster than I often see. 
> I have a VoIP Phone server running as a VM.  Voicemail and other recordings 
> usually fail due to IO issues opening and writing the files.  Often, the 
> first 4 or so seconds of the recording is missed; sometimes the entire thing 
> just fails.  I didn't use to have this problem, but its definately been 
> getting worse.  I finally bit the bullet and ordered a physical server 
> dedicated for my VoIP System...But I still want to figure out why I'm having 
> all these IO problems.  I read on the list of people running 30+ VMs...I feel 
> that my IO can't take any more VMs with any semblance of reliability.  We 
> have a Quickbooks server on here too (windows), and the performance is 
> abysmal; my CPA is charging me extra because of all the lost staff time 
> waiting on the system to respond and generate reports. 
> I'm at my whits end...I started with gluster on SSD with 1Gbps network, 
> migrated to 10Gbps network, and now to dedicated high performance NAS box 
> over NFS, and still have performance issues.I don't know how to 
> troubleshoot the issue any further, but I've never had these kinds of issues 
> when I was playing with other VM technologies.  I'd like to get to the point 
> where I can resell virtual servers to customers, but I can't do so with my 
> current performance levels. 
> I'd greatly appreciate help troubleshooting this further. 

[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-14 Thread Alex McWhirter
On 2019-04-14 20:27, Jim Kusznir wrote:

> Hi all:
> 
> I've had I/O performance problems pretty much since the beginning of using 
> oVirt.  I've applied several upgrades as time went on, but strangely, none of 
> them have alleviated the problem.  VM disk I/O is still very slow to the 
> point that running VMs is often painful; it notably affects nearly all my 
> VMs, and makes me leary of starting any more.  I'm currently running 12 VMs 
> and the hosted engine on the stack. 
> 
> My configuration started out with 1Gbps networking and hyperconverged gluster 
> running on a single SSD on each node.  It worked, but I/O was painfully slow. 
>  I also started running out of space, so I added an SSHD on each node, 
> created another gluster volume, and moved VMs over to it.  I also ran that on 
> a dedicated 1Gbps network.  I had recurring disk failures (seems that disks 
> only lasted about 3-6 months; I warrantied all three at least once, and some 
> twice before giving up).  I suspect the Dell PERC 6/i was partly to blame; 
> the raid card refused to see/acknowledge the disk, but plugging it into a 
> normal PC showed no signs of problems.  In any case, performance on that 
> storage was notably bad, even though the gig-e interface was rarely taxed. 
> 
> I put in 10Gbps ethernet and moved all the storage on that none the less, as 
> several people here said that 1Gbps just wasn't fast enough.  Some aspects 
> improved a bit, but disk I/O is still slow.  And I was still having problems 
> with the SSHD data gluster volume eating disks, so I bought a dedicated NAS 
> server (supermicro 12 disk dedicated FreeNAS NFS storage system on 10Gbps 
> ethernet).  Set that up.  I found that it was actually FASTER than the 
> SSD-based gluster volume, but still slow.  Lately its been getting slower, 
> too...Don't know why.  The FreeNAS server reports network loads around 4MB/s 
> on its 10Gbe interface, so its not network constrained.  At 4MB/s, I'd sure 
> hope the 12 spindle SAS interface wasn't constrained either.  (and disk 
> I/O operations on the NAS itself complete much faster). 
> 
> So, running a test on my NAS against an ISO file I haven't accessed in 
> months: 
> 
> # dd 
> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>  of=/dev/null bs=1024k count=500  
> 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec) 
> 
> Running it on one of my hosts: 
> 
> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k count=500 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s 
> 
> (I don't know if this is a true apples to apples comparison, as I don't have 
> a large file inside this VM's image).  Even this is faster than I often see. 
> 
> I have a VoIP Phone server running as a VM.  Voicemail and other recordings 
> usually fail due to IO issues opening and writing the files.  Often, the 
> first 4 or so seconds of the recording is missed; sometimes the entire thing 
> just fails.  I didn't use to have this problem, but its definately been 
> getting worse.  I finally bit the bullet and ordered a physical server 
> dedicated for my VoIP System...But I still want to figure out why I'm having 
> all these IO problems.  I read on the list of people running 30+ VMs...I feel 
> that my IO can't take any more VMs with any semblance of reliability.  We 
> have a Quickbooks server on here too (windows), and the performance is 
> abysmal; my CPA is charging me extra because of all the lost staff time 
> waiting on the system to respond and generate reports. 
> 
> I'm at my whits end...I started with gluster on SSD with 1Gbps network, 
> migrated to 10Gbps network, and now to dedicated high performance NAS box 
> over NFS, and still have performance issues.I don't know how to 
> troubleshoot the issue any further, but I've never had these kinds of issues 
> when I was playing with other VM technologies.  I'd like to get to the point 
> where I can resell virtual servers to customers, but I can't do so with my 
> current performance levels. 
> 
> I'd greatly appreciate help troubleshooting this further. 
> 
> --Jim 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZR64VABNT2SGKLNP3XNTHCGFZXSOJAQF/

Been working on optimizing the same. This is where I'm at currently. 

Gluster volume settings. 

diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.write-behind-window-size: 64MB
performance.flush-behind: on
performance.stat-prefetch: 

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter
On 2019-04-14 17:07, Strahil Nikolov wrote:

> Some kernels do not like values below 5%, thus I prefer to use  
> vm.dirty_bytes & vm.dirty_background_bytes. 
> Try the following ones (comment out the vdsm.conf values ): 
> 
> vm.dirty_background_bytes = 2 
> vm.dirty_bytes = 45000 
> It's more like shooting in the dark , but it might help. 
> 
> Best Regards, 
> Strahil Nikolov 
> 
> В неделя, 14 април 2019 г., 19:06:07 ч. Гринуич+3, Alex McWhirter 
>  написа: 
> 
> On 2019-04-13 03:15, Strahil wrote:
>> Hi,
>> 
>> What is your dirty  cache settings on the gluster servers  ?
>> 
>> Best Regards,
>> Strahil NikolovOn Apr 13, 2019 00:44, Alex McWhirter  
>> wrote:
>>> 
>>> I have 8 machines acting as gluster servers. They each have 12 drives
>>> raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
>>> one).
>>> 
>>> They connect to the compute hosts and to each other over lacp'd 10GB
>>> connections split across two cisco nexus switched with VPC.
>>> 
>>> Gluster has the following set.
>>> 
>>> performance.write-behind-window-size: 4MB
>>> performance.flush-behind: on
>>> performance.stat-prefetch: on
>>> server.event-threads: 4
>>> client.event-threads: 8
>>> performance.io-thread-count: 32
>>> network.ping-timeout: 30
>>> cluster.granular-entry-heal: enable
>>> performance.strict-o-direct: on
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> features.shard: on
>>> cluster.shd-wait-qlength: 1
>>> cluster.shd-max-threads: 8
>>> cluster.locking-scheme: granular
>>> cluster.data-self-heal-algorithm: full
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> cluster.eager-lock: enable
>>> network.remote-dio: off
>>> performance.low-prio-threads: 32
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> auth.allow: *
>>> user.cifs: off
>>> transport.address-family: inet
>>> nfs.disable: off
>>> performance.client-io-threads: on
>>> 
>>> 
>>> I have the following sysctl values on gluster client and servers, 
>>> using
>>> libgfapi, MTU 9K
>>> 
>>> net.core.rmem_max = 134217728
>>> net.core.wmem_max = 134217728
>>> net.ipv4.tcp_rmem = 4096 87380 134217728
>>> net.ipv4.tcp_wmem = 4096 65536 134217728
>>> net.core.netdev_max_backlog = 30
>>> net.ipv4.tcp_moderate_rcvbuf =1
>>> net.ipv4.tcp_no_metrics_save = 1
>>> net.ipv4.tcp_congestion_control=htcp
>>> 
>>> reads with this setup are perfect, benchmarked in VM to be about 
>>> 770MB/s
>>> sequential with disk access times of < 1ms. Writes on the other hand 
>>> are
>>> all over the place. They peak around 320MB/s sequential write, which 
>>> is
>>> what i expect but it seems as if there is some blocking going on.
>>> 
>>> During the write test i will hit 320MB/s briefly, then 0MB/s as disk
>>> access time shoot to over 3000ms, then back to 320MB/s. It averages 
>>> out
>>> to about 110MB/s afterwards.
>>> 
>>> Gluster version is 3.12.15 ovirt is 4.2.7.5
>>> 
>>> Any ideas on what i could tune to eliminate or minimize that blocking?
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/
>>>  
> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/
> 
> Just the vdsm defaults
> 
> vm.dirty_ratio = 5
> vm.dirty_background_ratio = 2
> 
> these boxes only have 8gb of ram as well, so those percentages should be 
> super small. 
> 
> _

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter

On 2019-04-14 13:05, Alex McWhirter wrote:

On 2019-04-14 12:07, Alex McWhirter wrote:

On 2019-04-13 03:15, Strahil wrote:

Hi,

What is your dirty  cache settings on the gluster servers  ?

Best Regards,
Strahil NikolovOn Apr 13, 2019 00:44, Alex McWhirter 
 wrote:


I have 8 machines acting as gluster servers. They each have 12 
drives

raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
one).

They connect to the compute hosts and to each other over lacp'd 10GB
connections split across two cisco nexus switched with VPC.

Gluster has the following set.

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on


I have the following sysctl values on gluster client and servers, 
using

libgfapi, MTU 9K

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf =1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control=htcp

reads with this setup are perfect, benchmarked in VM to be about 
770MB/s
sequential with disk access times of < 1ms. Writes on the other hand 
are
all over the place. They peak around 320MB/s sequential write, which 
is

what i expect but it seems as if there is some blocking going on.

During the write test i will hit 320MB/s briefly, then 0MB/s as disk
access time shoot to over 3000ms, then back to 320MB/s. It averages 
out

to about 110MB/s afterwards.

Gluster version is 3.12.15 ovirt is 4.2.7.5

Any ideas on what i could tune to eliminate or minimize that 
blocking?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/


Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

these boxes only have 8gb of ram as well, so those percentages should
be super small.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4XWDEHYKD2MQUR45QLMMSK6FBX44KIG/


doing a gluster profile my bricks give me some odd numbers.

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00     131.00 us     131.00 us     131.00 us              1       FSTAT
      0.01     104.50 us      77.00 us     118.00 us             14      STATFS
      0.01      95.38 us      45.00 us     130.00 us             16        STAT
      0.10     252.39 us     124.00 us     329.00 us             61      LOOKUP
      0.22      55.68 us      16.00 us     180.00 us            635    FINODELK
      0.43     543.41 us      50.00 us    1760.00 us            125       FSYNC
      1.52     573.75 us      76.00 us    5463.00 us            422    FXATTROP
     97.72    7443.50 us     184.00 us   34917.00 us           2092       WRITE


 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             70      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1792     RELEASE
      0.00       0.00 us       0.00 us       0.00 us          23422  RELEASEDIR
      0.01     126

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter

On 2019-04-14 12:07, Alex McWhirter wrote:

On 2019-04-13 03:15, Strahil wrote:

Hi,

What is your dirty  cache settings on the gluster servers  ?

Best Regards,
Strahil NikolovOn Apr 13, 2019 00:44, Alex McWhirter  
wrote:


I have 8 machines acting as gluster servers. They each have 12 drives
raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
one).

They connect to the compute hosts and to each other over lacp'd 10GB
connections split across two cisco nexus switched with VPC.

Gluster has the following set.

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on


I have the following sysctl values on gluster client and servers, 
using

libgfapi, MTU 9K

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf =1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control=htcp

reads with this setup are perfect, benchmarked in VM to be about 
770MB/s
sequential with disk access times of < 1ms. Writes on the other hand 
are
all over the place. They peak around 320MB/s sequential write, which 
is

what i expect but it seems as if there is some blocking going on.

During the write test i will hit 320MB/s briefly, then 0MB/s as disk
access time shoot to over 3000ms, then back to 320MB/s. It averages 
out

to about 110MB/s afterwards.

Gluster version is 3.12.15 ovirt is 4.2.7.5

Any ideas on what i could tune to eliminate or minimize that 
blocking?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/


Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

These boxes only have 8GB of RAM as well, so those percentages should
be super small.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4XWDEHYKD2MQUR45QLMMSK6FBX44KIG/


Doing a gluster profile, my bricks give me some odd numbers.

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00     131.00 us     131.00 us     131.00 us              1       FSTAT
      0.01     104.50 us      77.00 us     118.00 us             14      STATFS
      0.01      95.38 us      45.00 us     130.00 us             16        STAT
      0.10     252.39 us     124.00 us     329.00 us             61      LOOKUP
      0.22      55.68 us      16.00 us     180.00 us            635    FINODELK
      0.43     543.41 us      50.00 us    1760.00 us            125       FSYNC
      1.52     573.75 us      76.00 us    5463.00 us            422    FXATTROP
     97.72    7443.50 us     184.00 us   34917.00 us           2092       WRITE


 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             70      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1792     RELEASE
      0.00       0.00 us       0.00 us       0.00 us          23422  RELEASEDIR
      0.01     126.20 us      80.00 us     210.00 us
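
For reference, output like the above comes from gluster's built-in profiler; a minimal sketch of how it is gathered, assuming a data volume named "data1" (the volume name here is an assumption):

# start collecting per-brick FOP latency stats on the volume
gluster volume profile data1 start

# run the write test in the VM, then dump the accumulated stats
gluster volume profile data1 info

# stop profiling when done to avoid the small bookkeeping overhead
gluster volume profile data1 stop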

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter

On 2019-04-13 03:15, Strahil wrote:

Hi,

What are your dirty cache settings on the gluster servers?

Best Regards,
Strahil Nikolov

On Apr 13, 2019 00:44, Alex McWhirter wrote:


I have 8 machines acting as gluster servers. They each have 12 drives
raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
one).

They connect to the compute hosts and to each other over LACP'd 10Gb
connections split across two Cisco Nexus switches with vPC.

Gluster has the following set.

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on


I have the following sysctl values on gluster client and servers, 
using

libgfapi, MTU 9K

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf =1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control=htcp

Reads with this setup are perfect, benchmarked in VM to be about 770MB/s
sequential with disk access times of < 1ms. Writes on the other hand are
all over the place. They peak around 320MB/s sequential write, which is
what I expect, but it seems as if there is some blocking going on.

During the write test I will hit 320MB/s briefly, then 0MB/s as disk
access time shoots to over 3000ms, then back to 320MB/s. It averages out
to about 110MB/s afterwards.

Gluster version is 3.12.15, oVirt is 4.2.7.5.

Any ideas on what I could tune to eliminate or minimize that blocking?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/


Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

These boxes only have 8GB of RAM as well, so those percentages should be 
super small.
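
For context, those ratios translate into fairly small absolute numbers on an 8GB box, and they can also be pinned as byte values instead; a rough sketch (the 64MB/256MB figures below are purely illustrative, not vdsm's values):

# with ~8GB of RAM the vdsm defaults work out to roughly:
#   vm.dirty_background_ratio = 2  ->  ~160MB before background writeback starts
#   vm.dirty_ratio            = 5  ->  ~400MB before writers get throttled
awk '/MemTotal/ {printf "background: %d MB, throttle: %d MB\n", $2*0.02/1024, $2*0.05/1024}' /proc/meminfo

# the same limits can be expressed in bytes, which is easier to reason about
# (setting the *_bytes knobs overrides the corresponding *_ratio ones)
sysctl -w vm.dirty_background_bytes=$((64*1024*1024))
sysctl -w vm.dirty_bytes=$((256*1024*1024))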

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4XWDEHYKD2MQUR45QLMMSK6FBX44KIG/


[ovirt-users] Tuning Gluster Writes

2019-04-12 Thread Alex McWhirter
I have 8 machines acting as gluster servers. They each have 12 drives 
raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as 
one).


They connect to the compute hosts and to each other over LACP'd 10Gb 
connections split across two Cisco Nexus switches with vPC.


Gluster has the following set.

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on


I have the following sysctl values on gluster client and servers, using 
libgfapi, MTU 9K


net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf =1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control=htcp
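
In case it helps anyone replicating this, a sketch of persisting those values so they survive a reboot (the fragment name is arbitrary):

# drop the tuning into a sysctl.d fragment on each gluster client and server
cat > /etc/sysctl.d/90-gluster-net.conf <<'EOF'
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control = htcp
EOF

# load every sysctl.d fragment without rebooting
sysctl --system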

Reads with this setup are perfect, benchmarked in VM to be about 770MB/s 
sequential with disk access times of < 1ms. Writes on the other hand are 
all over the place. They peak around 320MB/s sequential write, which is 
what I expect, but it seems as if there is some blocking going on.


During the write test I will hit 320MB/s briefly, then 0MB/s as disk 
access time shoots to over 3000ms, then back to 320MB/s. It averages out 
to about 110MB/s afterwards.
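
(The post doesn't name the benchmark used; a hedged way to reproduce this kind of sequential write test from inside a VM with fio, with path and size as placeholders:)

# buffered sequential write; this is what typically shows the burst / stall / burst pattern
fio --name=seqwrite --filename=/tmp/fio-test.bin --rw=write --bs=1M \
    --size=4G --ioengine=libaio --iodepth=16 --direct=0 --end_fsync=1

# the same run with --direct=1 bypasses the guest page cache and gives a steadier
# view of what the gluster / RAID layer can actually sustain
fio --name=seqwrite-direct --filename=/tmp/fio-test.bin --rw=write --bs=1M \
    --size=4G --ioengine=libaio --iodepth=16 --direct=1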


Gluster version is 3.12.15, oVirt is 4.2.7.5.

Any ideas on what I could tune to eliminate or minimize that blocking?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/


[ovirt-users] Re: All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread Alex McWhirter
What kind of storage are you using? local? 

On 2019-04-05 12:26, John Florian wrote:

> Also, I see in the notification drawer a message that says: 
> 
> Storage domains with IDs [ed4d83f8-41a2-41bd-a0cd-6525d9649edb] could not be 
> synchronized. To synchronize them, please move them to maintenance and then 
> activate. 
> 
> However, when I navigate to Compute > Data Centers > Default, the Maintenance 
> option is greyed out.  Activate in the button bar is also greyed out, but it 
> looks like an option in the r-click context menu although selecting that 
> shows "Error while executing action: Cannot activate Storage. There is no 
> active Host in the Data Center.". 
> 
> I'm just stuck in an endless circle here.
> 
> On Fri, Apr 5, 2019 at 12:04 PM John Florian  wrote: 
> 
>> I am in a severe pinch here.  A while back I upgraded from 4.2.8 to 4.3.3 
>> and only had one step remaining and that was to set the cluster compat level 
>> to 4.3 (from 4.2).  When I tried this it gave the usual warning that each VM 
>> would have to be rebooted to complete, but then I got my first unusual piece 
>> when it then told me next that this could not be completed until each host 
>> was in maintenance mode.  Quirky I thought, but I stopped all VMs and put 
>> both hosts into maintenance mode.  I then set the cluster to 4.3.  Things 
>> didn't want to become active again and I eventually noticed that I was being 
>> told the DC needed to be 4.3 as well.  Don't remember that from before, but 
>> oh well that was easy. 
>> 
>> However, the DC and SD remains down.  The hosts are non-op.  I've powered 
>> everything off and started fresh but still wind up in the same state.  Hosts 
>> will look like their active for a bit (green triangle) but then go non-op 
>> after about a minute.  It appears that my iSCSI sessions are active/logged 
>> in.  The one glaring thing I see in the logs is this in vdsm.log: 
>> 
>> 2019-04-05 12:03:30,225-0400 ERROR (monitor/07bb1bf) [storage.Monitor] 
>> Setting up monitor for 07bb1bf8-3b3e-4dc0-bc43-375b09e06683 failed 
>> (monitor:329)
>> Traceback (most recent call last):
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 326, 
>> in _setupLoop
>> self._setupMonitor()
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 348, 
>> in _setupMonitor
>> self._produceDomain()
>> File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 158, in wrapper
>> value = meth(self, *a, **kw)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 366, 
>> in _produceDomain
>> self.domain = sdCache.produce(self.sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in 
>> produce
>> domain.getRealDomain()
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in 
>> getRealDomain
>> return self._cache._realProduce(self._sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in 
>> _realProduce
>> domain = self._findDomain(sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in 
>> _findDomain
>> return findMethod(sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in 
>> _findUnfetchedDomain
>> raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist: 
>> (u'07bb1bf8-3b3e-4dc0-bc43-375b09e06683',) 
>> 
>> How do I proceed to get back operational?
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A7XPDM3EFUJPXON3YCX3EK66NCMFI6SJ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BSR624W4NMQUOHHJIYX5LUDNY4XTF5QX/


[ovirt-users] Re: migrate hosted-engine vm to another cluster?

2019-01-16 Thread Alex McWhirter
I second this; it's one of the reasons I really dislike hosted engine. I
also see a need for active / passive engine clones to exist, perhaps even
across multiple datacenters. Hosted engine tries to be similar to
VMware's vCenter appliance, but falls short in the HA department. The
best you can get is that the engine will restart on another node should its
current node fail. When people ask me hosted vs. baremetal engine, I
almost always advocate for baremetal if there's enough rack space, as
there are just too many ways I've seen hosted engine become a brick.

FWIW, we run a baremetal engine with DRBD, Pacemaker, and Corosync across
a few machines in an active / passive setup. This has been by far the
most reliable way we've seen to HA the engine. We have these machines
spread out over a few different DCs, with Pacemaker triggering routing
changes to ensure it's always available. Certainly not the easiest way
to do it, but I have yet to see a better solution.
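
For anyone curious what that looks like in practice, a minimal sketch of the Pacemaker side, assuming a DRBD-backed filesystem for the engine data and with the VIP, device, and mount point invented for illustration (the DRBD promotion resource, ordering constraints, and STONITH devices are left out):

# floating IP that the engine FQDN resolves to
pcs resource create engine_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s

# filesystem holding the engine data and database, only mounted on the active node
pcs resource create engine_fs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/var/lib/ovirt-engine-ha fstype=xfs

# the engine service itself
pcs resource create engine_svc systemd:ovirt-engine

# keep everything together and start in order on one node
pcs resource group add engine_group engine_fs engine_vip engine_svc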

On 2019-01-16 19:39, Ryan Barry wrote:

> Re-raising this for discussion. 
> 
> As I commented on the bug, Hosted Engine is such a special use case in terms 
> of setup, configuration, and migration that I'm not sure engine itself is the 
> right place to handle this. We have the option of changing the ha 
> broker|agent to use the engine API to initiate migrations, but there's still 
> a risk that the hosts in the secondary cluster will not be able to reach the 
> storage, etc. 
> 
> It would be great to get this resolved if there's not currently a way to do 
> it, but we need to decide on a long-term direction for it. Currently, HE can 
> run on additional hosts in the datacenter as an emergency fallback, but it 
> reverts once the HE cluster is back out of maintenance. My ideal would be to 
> extend the hosted-engine utility with an additional parameter which reaches 
> out to the Engine API in order to handle the needed database updates after 
> some safety checks (probably over ansible) to ensure that the HE storage 
> domain is reachable from hosts in the other cluster. 
> 
> But I'm not a hosted engine expert. Is there currently a way to do this? If 
> there isn't, do we want to add additional logic to ha agent|broker, or reach 
> out to the Engine? 
> 
> On Tue, Jan 15, 2019 at 8:27 AM Douglas Duckworth  
> wrote: 
> 
> Hi 
> 
> I opened a BugZilla at https://bugzilla.redhat.com/show_bug.cgi?id=1664777 
> but no steps have been shared on how to resolve.  Does anyone know how this 
> can be fixed without destroying the data center and building a new hosted 
> engine? 
> 
> Thanks, 
> 
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit [1] 
> Weill Cornell Medicine 
> 1300 York Avenue 
> New York, NY 10065 
> 
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690 
> 
> On Wed, Jan 9, 2019 at 10:22 AM Douglas Duckworth  
> wrote: 
> Hi 
> 
> Should I open a Bugzilla to resolve this problem? 
> 
> Thanks, 
> 
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit [1] 
> Weill Cornell Medicine 
> 1300 York Avenue 
> New York, NY 10065 
> 
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690 
> 
> On Wed, Dec 19, 2018 at 1:13 PM Douglas Duckworth  
> wrote: 
> Hello 
> 
> I am trying to migrate my hosted-engine VM to another cluster in the same 
> data center.  Hosts in both clusters have the same logical networks and 
> storage.  Yet migrating the VM isn't an option.
> 
> To get the hosted-engine VM on the other cluster I started the VM on host in 
> that other cluster using "hosted-engine --vm-start."   
> 
> However HostedEngine still associated with old cluster as shown attached.  So 
> I cannot live migrate the VM.  Does anyone know how to resolve?  With other 
> VMs one can shut them down then using the "Edit" option.  Though that will 
> not work for HostedEngine. 
> 
> Thanks, 
> 
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit [1] 
> Weill Cornell Medicine 
> 1300 York Avenue 
> New York, NY 10065 
> 
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
 ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2GZK5PUBZIQLGLNZ2UUCSIES6HSZLHC/


-- 

Ryan Barry 

Associate Manager - RHV Virt/SLA 

 rba...@redhat.com   M: +16518159306 [2]   IM: rbarry 

 [3]

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

[ovirt-users] Re: multiple engines (active passive)

2019-01-14 Thread Alex McWhirter
Real HA is complicated, no way around that...

As stated earlier, we also run the engine on bare metal using Pacemaker /
Corosync / DRBD to keep both nodes in perfect sync; failover happens in
a few seconds. We also do daily backups of the engine, but in the 4
years or so that we have been running oVirt, we have luckily never had
to use them with this setup. STONITH is pretty important to set up if you
are running fewer than three nodes as the engine, just to keep split
brain from corrupting everything.

On 2019-01-14 14:16, maoz zadok wrote:

> Well, I really love oVirt, but I don't know.. All the solutions mentioned 
> here are complicated and or dangerous. Including hosted ha engine that fails 
> while deploying(for me). I think that test the backup for recovery is very 
> important and need to be done on a regular basis, What good is a backup if 
> you cannot restore??  I worked for a whole night trying to recover the failed 
> engine...recovery from backup was very painful. does anyone here have a 
> solution for testing backups (not in crises mode)? 
> 
> On Mon, Jan 14, 2019, 20:41  
>> I'm still sort of new to ovirt, but I went through a similar things.  I had 
>> my original engine fail and had to recover, so here is my "oVirt HA plan"
>> 
>> 1. I do NOT use hosted ovirt, I had issues getting it deployed correctly, 
>> and that doesn't help if they engine VM itself has issues.  My engine is 
>> hosted on a completely separated 2-node hyper-converged Hyper-V Cluster.  
>> Unless you have a cluster larger than 3 hosts, I really don't think hosted 
>> ovirt is a good idea, it would make more sense to just load ovirt on another 
>> PC by itself.  
>> 2. I plan on loading another copy of ovirt in a "cold storage" 
>> configuration.  Where it will be loaded on centos, and configured as close 
>> as I can without adding in any hosts.  I'll probably keep it turned off and 
>> try to updated it once a month or so.  
>> 3. If I have another oVirt failure, I know to log in, put the storage 
>> domains into maintenance mode if possible, and copy out any needed config 
>> items i may have missed.  I will the shut it off and delete it.  I then will 
>> reboot all the hosts one at a time to clear out any sanlock issues, then add 
>> the hosts to the new copy of ovirt and import the storage.  I estimate that 
>> process to take around 2-4hrs.  
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AHD3OABM65FWYMRHBFI6IHRZUE27PVWA/
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6XC6QPSLBEICJGEKOXCBFT2JIXT6IYPS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DAWG2NROGDY5IMJVKLVXPAAAEZV24NPO/


[ovirt-users] Re: Ovirt Engine UI bug- cannot attache networks to hosts

2018-12-26 Thread Alex McWhirter
I don't hit this bug, but even when you click "unattached" you still
have to assign the networks to each host individually. Most people use
network labels for this as you can assign them with one action. 

On 2018-12-26 09:54, Leo David wrote:

> Thank you Eitan, 
> Anybody, any ideea if is any work in progress for fixing this UI issue, or 
> has been already fixed ? 
> Happy Holidays to everyone ! 
> 
> On Mon, Dec 17, 2018, 20:47 Eitan Raviv  
> You can use the REST API [1] via any REST client or the 
> python-ovirt-engine-sdk4 [2] to manually or programmaticaly perform any task 
> that can be done via the webadmin UI.
> About the UI malfunction I don't have enough information to reply. I will 
> need to have to try to reproduce it. You can open a bug on [3] to track it. 
> Please provide a detailed description 
> of the shortest flow to reproduce the malfunction and attach any relevant 
> logs.
> 
> Thanks a lot 
> 
> [1] http://ovirt.github.io/ovirt-engine-api-model/4.2/
> [2] https://github.com/oVirt/ovirt-engine-sdk/tree/sdk_4.2/sdk/examples
> [3] https://bugzilla.redhat.com/ 
> 
> On Mon, Dec 17, 2018 at 5:46 PM Leo David  wrote: 
> 
> Hello Everyone, 
> Any updates on this fix ?  
> Also, is thee any other way that I can attach hosts to gluster network  other 
> than from within engine UI ?
> I'm just standing with a ready installed 6 nodes cluster, and I would rather 
> prefer to not start using gluster by passing traffic through default 
> ovirtmgmt network 
> 
> Thank you very much ! 
> 
> Leo 
> 
> On Sun, Dec 16, 2018 at 12:02 PM Eitan Raviv  wrote: 
> 
> The flow that invokes the NPE as described is:
> "click on Networks->gluster-Hosts- Unattached, i get the following error:"
> 
> This does not invoke 'network update spinner' related code. 
> 
> On Fri, Dec 14, 2018 at 3:05 PM Greg Sheremeta  wrote: 
> 
> Ales, could this be related to 
> https://bugzilla.redhat.com/show_bug.cgi?id=1655375 ? 
> Or, Eitan, is it related to the new network updating spinner? 
> 
> code in question:
> 
> @Override
> public SafeHtml getValue(PairQueryable object) {
>     ImageResource imageResource = InterfaceStatusImage.getResource(object.getFirst().getStatistics().getStatus());
>     SafeHtml nicStatus = SafeHtmlUtils.fromTrustedString(AbstractImagePrototype.create(imageResource).getHTML());
>     if (object.getFirst() != null && isNetworkUpdating(object)) {
>         return templates.networkDeviceStatusImgAndNetworkOperationInProgress(nicStatus, constants.networksUpdating());
>     } else if (object.getFirst() != null && !isNetworkUpdating(object)) {
>         return templates.networkDeviceStatusImg(nicStatus);
>     } else if (object.getFirst() == null && isNetworkUpdating(object)) {
>         return templates.networkOperationInProgressDiv(constants.networksUpdating());
>     } else {
>         return null;
>     }
> }
> 
> (something in ^ is NPE) 
> 
> On Thu, Dec 13, 2018 at 12:49 PM Leo David  wrote: 
> 
> Hi, 
> No errors in the engine logs, only some info's... 
> 2018-12-13 17:40:44,775Z INFO  
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-9) 
> [45bb68a9] Running command: CreateUserSessionCommand internal: false.
> 2018-12-13 17:40:44,794Z INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-9) [45bb68a9] EVENT_ID: USER_VDC_LOGIN(30), User 
> admin@internal-authz connecting from '10.10.2.14' using session 
> 'LXv9Geyp5aKURf0nZ53QLn6DP74cHcl6TMdVtPbbQkudThg9vmmOXj0uFhErNykZ1czs5rmMVd302HfboMMUrQ=='
>  logged in. 
> 
> Maybe the following,  but I don;t think it has to do with the UI error: 
> 
> 2018-12-13 17:40:46,458Z INFO  
> [org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-9) [] Can't 
> read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' for request 
> '/ovirt-engine/services/files/spice/SpiceVersion.txt' -- 404
> 2018-12-13 17:41:11,133Z INFO  
> [org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-10) [] Can't 
> read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' for request 
> '/ovirt-engine/services/files/spice/SpiceVersion.txt' -- 404 
> Wondering, I am the only one getting this UI error ? 
> 
> On Thu, Dec 13, 2018 at 6:13 PM Gobinda Das  wrote: 
> 
> Hi Leo, Is there any error in engine? 
> Can you please share engine log as well? 
> 
> On Thu, Dec 13, 2018 at 9:01 PM Leo David  wrote: 
> 
> Thank you very much Eitan ! 
> So on a fresh installation, browser cache cleared, tried different browsers, 
> i still have that error,  and the following error in ui.log: 
> 
> - 2018-12-13 15:21:34,625Z ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
> task-2) [] Permutation name: 
> F7D51C60208EA84178ACC5B48326252F
> 2018-12-13 15:21:34,626Z ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
> task-2) [] Uncaught exception: 
> 

[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter

On 2018-12-20 07:53, Stefan Wolf wrote:

i 've mounted it during the hosted-engine --deploy process
I selected glusterfs
and entered  server:/engine
I dont enter any mount options
yes it is enabled for both. I dont got errors for the second one, but
may it doesn't check after the first fail
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SYX6BQBW2MMV4YIXHG24KMXA7FTWL46X/


try server:/engine -o direct-io-mode=enable
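
i.e. roughly the following if mounting by hand (the mount point is just an example); the same option can also be given in the extra mount options prompted for during hosted-engine --deploy:

# mount the engine volume with fuse direct I/O enabled
mount -t glusterfs -o direct-io-mode=enable server:/engine /mnt/engine-test

# verify the option actually took effect
grep engine /proc/mounts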
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N22AFD47NV336PNLHTGZSQJWMJOGFJCG/


[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter

On 2018-12-20 07:14, Stefan Wolf wrote:

yes i think this too, but as you see at the top

[root@kvm380 ~]# gluster volume info
...
performance.strict-o-direct: on

...
it was already set

i did a one cluster setup with ovirt and I uses this result

Volume Name: engine
Type: Distribute
Volume ID: a40e848b-a8f1-4990-9d32-133b46db6f1d
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: kvm360.durchhalten.intern:/gluster_bricks/engine/engine
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

could there be an other reason?


Are you mounting via the gluster GUI? I'm not sure how it handles 
mounting of manual gluster volumes, but the direct-io-mode=enable mount 
option comes to mind. I assume direct I/O is also enabled on the other 
volume? It needs to be on all of them.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NOGI2JX5NRWKVXNDHY66GI6XJ54HAFKR/


[ovirt-users] Re: Upload via GUI to VMSTORE possible but not ISO Domain

2018-12-20 Thread Alex McWhirter
I've always just used engine-iso-uploader on the engine host to upload
images to the ISO domain, and never really noticed that it doesn't "appear"
to be in the GUI. Very rarely do I need to upload ISOs, so I guess it's
just never really been an issue. I know the disk upload GUI options are
for VM HDD disk images, not ISOs, though, which is why I imagine it
doesn't show up.
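
For anyone looking for the actual command, a typical invocation from the engine host looks roughly like this (the domain name and path are placeholders, and it prompts for the admin password unless one is supplied in a config file):

# list the ISO storage domains the uploader can see
engine-iso-uploader list

# upload an image into a specific ISO domain
engine-iso-uploader --iso-domain=ISO_DOMAIN upload /tmp/CentOS-7-x86_64-Minimal.iso
___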
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3BYPWNPUITXCI43LPJJ7ZMFVHOZDSXMF/


[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter

You need to set strict direct I/O on the volumes:

performance.strict-o-direct on
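
For reference, that maps to something along these lines on the gluster side (volume names are examples; network.remote-dio is usually toggled alongside it in the oVirt-recommended virt settings):

# enable strict O_DIRECT handling on each storage volume
gluster volume set engine performance.strict-o-direct on
gluster volume set data performance.strict-o-direct on
gluster volume set engine network.remote-dio off

# confirm the option is active
gluster volume get engine performance.strict-o-direct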
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2SJGYBAQ33N7JW7NQTY7KPAUH3YQVMIO/


[ovirt-users] Re: Ovirt 4.3 Alpha AMD 2970WX Windows VM creation and NUMA

2018-12-02 Thread Alex McWhirter

On 2018-12-02 14:07, Darin Schmidt wrote:

Not sure if Users is the best place for this as Im using 4.3 to test
support for my AMD 2970WX Threadripper but While trying to setup a
Windows VM, it fails. I have a working CentOS 7 running. Heres what I
get when I try to startup the VM.

VM Windows-Darin is down with error. Exit message: internal error:
process exited while connecting to monitor: WARNING: Image format was
not specified for '/dev/sg0' and probing guessed raw.
 Automatically detecting the format is dangerous for raw
images, write operations on block 0 will be restricted.
 Specify the 'raw' format explicitly to remove the 
restrictions.

2018-12-02T18:43:05.358741Z qemu-kvm: warning: CPU(s) not present in
any NUMA nodes: CPU 10 [socket-id: 10, core-id: 0, thread-id: 0], CPU
11 [socket-id: 11, core-id: 0, thread-id: 0], CPU 12 [socket-id: 12,
core-id: 0, thread-id: 0], CPU 13 [socket-id: 13, core-id: 0,
thread-id: 0], CPU 14 [socket-id: 14, core-id: 0, thread-id: 0], CPU
15 [socket-id: 15, core-id: 0, thread-id: 0]
2018-12-02T18:43:05.358791Z qemu-kvm: warning: All CPU(s) up to
maxcpus should be described in NUMA config, ability to start up with
partial NUMA mappings is obsoleted and will be removed in future
2018-12-02T18:43:05.359052Z qemu-kvm: can't apply global
EPYC-x86_64-cpu.hv-synic=on: Property '.hv-synic' not found.

NUMA doesnt seem to resolve the issue any even if I set NUMA to be 1
and place it on a NUMA that has 12 cores and 64GB ram. I also dont
fully understand NUMA because it creates 4 NUMA sockets even though
technically there should be 2 correct? I dont get how NUMA is setup,
but either way it doesnt help resolve any issues.

[root@ovirt ~]# numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 24 25 26 27 28 29
node 0 size: 65429 MB
node 0 free: 54322 MB
node 1 cpus: 12 13 14 15 16 17 36 37 38 39 40 41
node 1 size: 0 MB
node 1 free: 0 MB
node 2 cpus: 6 7 8 9 10 11 30 31 32 33 34 35
node 2 size: 32754 MB
node 2 free: 27699 MB
node 3 cpus: 18 19 20 21 22 23 42 43 44 45 46 47
node 3 size: 0 MB
node 3 free: 0 MB
node distances:
node   0   1   2   3
  0:  10  16  16  16
  1:  16  10  16  16
  2:  16  16  10  16
  3:  16  16  16  10

link to my messages and vdsm log
https://drive.google.com/open?id=1v-Wjcj7xLZIcR2GKR-P659yE14r1-hsz
https://drive.google.com/open?id=1BHvcS2Dgan68hQkVdOmme1gP1ciSsejL


EPYC-x86_64-cpu.hv-synic=on: Property '.hv-synic' not found.

Is hv-synic a typo somewhere? I would imagine that is supposed to be 
hv-sync?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XAQW6YFUW324ZS2T25LDC7JFZHVG5Z2E/


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-11-30 Thread Alex McWhirter

On 2018-11-30 09:33, Darin Schmidt wrote:

I was curious, I have an AMD Threadripper (2970wx). Do you know where
Ovirt is grepping or other to get the info needed to use the cpu type?
I assume lscpu is possibly where it gets it and is just matching? Id
like to be able to test this on a threadripper.


IIRC, this is coming in 4.3
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UX4UQ5Q27LKDCDHAH36LSVMLKA5R5XJB/


[ovirt-users] Re: Change Default Behaviour

2018-11-28 Thread Alex McWhirter

On 2018-11-28 06:56, Lucie Leistnerova wrote:

Hello Alex,

On 11/27/18 8:02 PM, Alex McWhirter wrote:
In the admin interface, if i create a server template and make a VM 
out of it i get a Clone/Independent VM. If i use a dekstop template a 
get a Thin/Dependent


In the VM portal i only get Thin/Dependent.

How can i change this so that it's always Clone/Dependent for certain 
templates?

The default setting should depend on Optimized for: Desktop/Server on
the template. So when it creates Thin/Dependent VM for Server
template, I think that is a bug.

I created an issue. Thank you for bringing that up.

https://github.com/oVirt/ovirt-web-ui/issues/882


Best regards,
Lucie


OK, yes, that would make sense.

Any ideas where I can look to change this manually as a temporary 
workaround?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4JC6BPX2NKAHDK46WNONUUKFS4U4RJLF/


[ovirt-users] Change Default Behaviour

2018-11-27 Thread Alex McWhirter
In the admin interface, if I create a server template and make a VM out 
of it I get a Clone/Independent VM. If I use a desktop template I get a 
Thin/Dependent VM.


In the VM portal I only get Thin/Dependent.

How can I change this so that it's always Clone/Independent for certain 
templates?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBJCLN4QPSNUFWB2XKS3TJ44VL4KC75J/


[ovirt-users] Re: vGPU not available in "type mdev"

2018-11-27 Thread Alex McWhirter

On 2018-11-27 07:47, Marc Le Grand wrote:

Hello
I followed the tutorial regarding vGPU but it's not working, i i guess
it's a Nvidia licence issue, but i need to be sure.
My node is installed using the node ISO image.
I just removed the nouveau driver ans installed the nVidia one.
My product is : "GK106GL [Quadro K4000]"
My driver is : NVIDIA-Linux-x86_64-390.87
In the host peripherals I see my card (pci__05_00_0) listed, but
the column "type mdev" is empty, as the doc says I should see vGPU i
guess something is missing.
I took a look, there seems to be nVidia vGPU copatible driver, but no
trace of it. The only vGPU stuff i found it's "NVIDIA VIRTUAL GPU" and
i understand I have to pay to use vGPU on my nVidia card.
is this correct or no ?
If i'm supposed to pay is there a free workaround ?
thanks for your help
marc


It's my understanding that NVIDIA vGPU requires specialized cards, like 
the NVIDIA GRID cards or Quadro vDWS cards. There are some early models 
of these cards, like the GRID K1 or K2, that did not need licensing, but 
the newer ones do. I don't think it's possible to get vGPU working on a 
standard Quadro card.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFB757MKTRQGKNZBG3V6YRT6MNVLQ36T/


[ovirt-users] Re: SPICE QXL Crashes Linux Guests

2018-11-26 Thread Alex McWhirter

On 2018-11-25 14:48, Alex McWhirter wrote:

I'm having an odd issue that i find hard to believe could be a bug,
and not some kind of user error, but im at a loss for where else to
look.

when booting a linux ISO with QXL SPICE graphics, the boot hangs as
soon as kernel modesetting kicks in. Tried with latest debian, fedora,
and centos. Sometimes it randomly works, but most often it does not.
QXL / VGA VNC work fine. However if i wait a few minutes after
starting the VM for the graphics to start, then there are no issues
and i can install as usual.

So after install, i reboot, hangs on reboot right after graphics
switch back to text mode with QXL SPICE, not with VNC. So i force
power off, reboot, and wait a while for it to boot. If i did text only
install, when i open a spice console it will hang after typing a few
characters. If i did a graphical install then as long as i waited long
enough for X to start, then it works perfectly fine.

I tried to capture some logs, but since the whole guest OS hangs it's
rather hard to pull off. I did see an occasional error about the mouse
driver, so that's really all i have to go on.

As for the spice client, im using virt-viewer on windows 10 x64, tried
various versions of virt-viewer just to be sure, no change. I also
have a large amount on windows guests with QXL SPICE. These all work
with no issue. Having guest agent installed in the linux guest seems
to make no difference.

There are no out of the ordinary logs on the VDSM hosts, but i can
provide anything you may need. It's not specific to any one host, i
have 10 VM hosts in the cluster, they all do. They are westmere boxes
if that makes a difference.

Any ideas on how i should approach this? VNC works well enough for
text only linux guest, but not being able to reboot my GUI linux
guests without also closing my spice connection is a small pain.


as far as ovirt versions im on the latest, this is a rather fresh
install. just set it up a few days ago, but i've been a long time
ovirt user. I am using a squid spice proxy if that makes a difference.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJWU35TTEUC66K4LQDSJIV2HTB7TI7GI/



I managed to extract a log file from a Debian guest.

https://pastebin.com/wNu69Edn
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/43N4WSHMFWICTL3LT4BJAB2JUQVLVVMS/


[ovirt-users] SPICE QXL Crashes Linux Guests

2018-11-25 Thread Alex McWhirter
I'm having an odd issue that I find hard to believe could be a bug, and 
not some kind of user error, but I'm at a loss for where else to look.


When booting a Linux ISO with QXL SPICE graphics, the boot hangs as soon 
as kernel modesetting kicks in. Tried with the latest Debian, Fedora, and 
CentOS. Sometimes it randomly works, but most often it does not. QXL / 
VGA VNC work fine. However, if I wait a few minutes after starting the VM 
for the graphics to start, then there are no issues and I can install as 
usual.


So after install, I reboot, and it hangs on reboot right after graphics switch 
back to text mode with QXL SPICE, not with VNC. So I force power off, 
reboot, and wait a while for it to boot. If I did a text-only install, 
when I open a SPICE console it will hang after typing a few characters. 
If I did a graphical install, then as long as I waited long enough for X 
to start, it works perfectly fine.


I tried to capture some logs, but since the whole guest OS hangs it's 
rather hard to pull off. I did see an occasional error about the mouse 
driver, so that's really all I have to go on.


As for the SPICE client, I'm using virt-viewer on Windows 10 x64, and tried 
various versions of virt-viewer just to be sure, no change. I also have 
a large number of Windows guests with QXL SPICE. These all work with no 
issue. Having the guest agent installed in the Linux guest seems to make no 
difference.


There are no out-of-the-ordinary logs on the VDSM hosts, but I can 
provide anything you may need. It's not specific to any one host; I have 
10 VM hosts in the cluster, and they all do it. They are Westmere boxes if 
that makes a difference.


Any ideas on how I should approach this? VNC works well enough for 
text-only Linux guests, but not being able to reboot my GUI Linux guests 
without also closing my SPICE connection is a small pain.



As far as oVirt versions, I'm on the latest; this is a rather fresh 
install. I just set it up a few days ago, but I've been a long-time oVirt 
user. I am using a Squid SPICE proxy if that makes a difference.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJWU35TTEUC66K4LQDSJIV2HTB7TI7GI/


Re: [ovirt-users] strange iscsi issue

2015-09-08 Thread Alex McWhirter
Are we talking about a single SSD or an array of them? VMs are usually large 
continuous image files. SSDs are better at delivering many small files than one 
large continuous file.

I believe oVirt forces sync writes by default, but I'm not sure, as I'm using 
NFS. The best thing to do is figure out whether it's a storage issue or a network 
issue.

Try setting your iSCSI server to use async writes; this can be dangerous if 
either server crashes or loses power, so I would just do it for testing 
purposes.

With async writes you should be able to hit near 10Gbps writes, but reads will 
depend on how much data is cached and how much RAM the iSCSI server has.

Are you presenting a raw disk over iSCSI, an image file, or a filesystem LUN 
via ZFS or something similar?
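
If the backing store happens to be ZFS, the async-write experiment above is just a property flip; only do this on a test dataset (the pool/dataset name is a placeholder):

# acknowledge writes before they reach stable storage - TESTING ONLY,
# a crash or power loss can drop the last few seconds of writes
zfs set sync=disabled tank/iscsi-test

# put it back when done
zfs set sync=standard tank/iscsi-test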

Alex sent the message, but his phone sent the typos...

On Sep 8, 2015 1:45 AM, Karli Sjöberg wrote:
>
> tis 2015-09-08 klockan 06:59 +0200 skrev Demeter Tibor:
> > Hi,
> > Thank you for your reply.
> > I'm sorry but I don't think so. This storage is fast, because it is a SSD based storage, and I can read/write to it with fast performance.
> > I know, in virtual environment the I/O always slowest than on physical, but here I have a very large difference.
> > Also, I use ext4 FS.
>
> My suggestion would be to use a filesystem benchmarking tool like bonnie++ to first test the performance locally on the storage server and then redo the same test inside of a virtual machine. Also make sure the VM is using VirtIO disk (either block or SCSI) for best performance. I have tested speeds over 1Gb/s with bonded 1Gb NICS so I know it should work in theory as well as practice.
>
> Oh, and for the record. IO doesn´t have to be bound by the speed of storage, if the host caches in RAM before sending it over the wire. But that in my opinion is dangerous and as far as I know, it´s not actived in oVirt, please correct me if I´m wrong.
>
> /K
>
> > Thanks
> >
> > Tibor
> >
> > - 2015. szept.. 8., 0:40, Alex McWhirter alexmcwhir...@triadic.us írta:
> >
> > > Unless you're using a caching filesystem like zfs, then you're going to be limited by how fast your storage back end can actually right to disk. Unless you have a quite large storage back end, 10gbe is probably faster than your disks can read and write.
> > >
> > > On Sep 7, 2015 4:26 PM, Demeter Tibor wrote:
> > >>
> > >> Hi All,
> > >>
> > >> I have to create a test environment for testing purposes, because we need to testing our new 10gbe infrastructure.
> > >> One server that have a 10gbe nic - this is the vdsm host and ovirt portal.
> > >> One server that have a 10gbe nic - this is the storage.
> > >>
> > >> Its connected to each other throught a dlink 10gbe switch.
> > >>
> > >> Everything good and nice, the server can connect to storage, I can make and run VMs, but the storage performance from inside VM seems to be 1Gb/sec only.
> > >> I did try the iperf command for testing connections beetwen servers, and it was 9.40 GB/sec. I have try to use hdparm -tT /dev/mapper/iscsidevice and also it was 400-450 MB/sec. I've got same result on storage server.
> > >>
> > >> So:
> > >>
> > >> - hdparm test on local storage ~ 400 mb/sec
> > >> - hdparm test on ovirt node server through attached iscsi device ~ 400 Mb/sec
> > >> - hdparm test from inside vm on local virtual disk - 93-102 Mb /sec
> > >>
> > >> The question is : Why?
> > >>
> > >> ps. I Have only one ovirtmgmt device, so there are no other networks. The router is only 1gbe/sec, but i've tested and the traffic does not going through this.
> > >>
> > >> Thanks in advance,
> > >>
> > >> Regards,
> > >> Tibor
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] strange iscsi issue

2015-09-07 Thread Alex McWhirter
Unless you're using a caching filesystem like ZFS, you're going to be 
limited by how fast your storage back end can actually write to disk. Unless 
you have quite a large storage back end, 10GbE is probably faster than your 
disks can read and write.
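
A quick way to sanity-check that on the storage box itself, assuming there's a filesystem to write into (path and size are placeholders): compare a cached write against an O_DIRECT write of the same file.

# goes through the page cache; fsync at the end so the number isn't fiction
dd if=/dev/zero of=/srv/ddtest.bin bs=1M count=4096 conv=fsync

# bypasses the cache entirely; closer to what the disks can actually sustain
dd if=/dev/zero of=/srv/ddtest.bin bs=1M count=4096 oflag=direct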

On Sep 7, 2015 4:26 PM, Demeter Tibor  wrote:
>
> Hi All,
>
> I have to create a test environment for testing purposes, because we need to 
> testing our new 10gbe infrastructure.
> One server that have a 10gbe nic - this is the vdsm host and ovirt portal.
> One server that have a 10gbe nic - this is the storage.
>
> Its connected to each other throught a dlink 10gbe switch.
>
> Everything good and nice, the server can connect to storage, I can make and 
> run VMs, but the storage performance from inside VM seems to be 1Gb/sec only. 
> I did try the iperf command for testing connections beetwen servers, and it 
> was 9.40 GB/sec. I have try to use hdparm -tT /dev/mapper/iscsidevice and 
> also it was 400-450 MB/sec. I've got same result on storage server.
>
> So:
>
> - hdparm test on local storage ~ 400 mb/sec
> - hdparm test on ovirt node server through attached iscsi device ~ 400 Mb/sec
> - hdparm test from inside vm on local virtual disk - 93-102 Mb /sec
>
> The question is : Why?
>
> ps. I Have only one ovirtmgmt device, so there are no other networks. The 
> router is only 1gbe/sec, but i've tested and the traffic does not going 
> through  this.
>
> Thanks in advance,
>
> Regards, 
> Tibor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users