[ovirt-users] Re: Ovirt VLAN Primer

2021-02-01 Thread Ales Musil
On Tue, Feb 2, 2021 at 6:18 AM David Johnson 
wrote:

> Good morning all,
>
> On my ovirt 4.4.4 cluster, I am trying to use VLANs to separate VMs for
> security purposes.
>
> Is there a usable how-to document that describes how to configure the
> VLANs so they actually function without taking the host into
> non-operational mode?
>
> Thank you in advance.
>
> Regards,
> David Johnson
>

Hello,

I assume that you have marked those networks as required. This is handy
for making sure that all hosts in a cluster have the network attached,
which implies that a host is considered non-operational until you assign
all required networks.

To avoid this, you can uncheck "Required" for a new network in the cluster
tab of the "New Logical Network" window. For an existing network, go to
Compute -> Clusters -> $YOUR_CLUSTER -> Logical Networks -> Manage Networks
and uncheck "Required" for the affected network.
This can always be changed back.
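
If you prefer to script it, the same flag should be settable through the
REST API's cluster networks sub-collection. A minimal sketch, not a tested
recipe; the engine FQDN, the password and both UUIDs are placeholders:

    # Sketch: mark an existing cluster network as not required via the
    # v4 REST API (placeholders: ENGINE_FQDN, PASSWORD, CLUSTER_ID,
    # NETWORK_ID).
    curl -s -k -u admin@internal:PASSWORD \
         -X PUT \
         -H "Content-Type: application/xml" \
         -d '<network><required>false</required></network>' \
         "https://ENGINE_FQDN/ovirt-engine/api/clusters/CLUSTER_ID/networks/NETWORK_ID"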

Hopefully this helps.
Regards,
Ales




-- 

Ales Musil

Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil


[ovirt-users] Ovirt VLAN Primer

2021-02-01 Thread David Johnson
Good morning all,

On my ovirt 4.4.4 cluster, I am trying to use VLANs to separate VMs for
security purposes.

Is there a usable how-to document that describes how to configure the
VLANs so they actually function without taking the host into
non-operational mode?

Thank you in advance.

Regards,
David Johnson

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Nir Soffer
On Mon, Feb 1, 2021 at 8:37 PM Gianluca Cecchi
 wrote:
>
>
>
> On Mon, Feb 1, 2021 at 6:51 PM David Teigland  wrote:
>>
>> On Mon, Feb 01, 2021 at 07:18:24PM +0200, Nir Soffer wrote:
>> > Assuming we could use:
>> >
>> > io_timeout = 10
>> > renewal_retries = 8
>> >
>> > The worst case would be:
>> >
>> >  00 sanlock renewal succeeds
>> >  19 storage fails
>> >  20 sanlock try to renew lease 1/7 (timeout=10)
>> >  30 sanlock renewal timeout
>> >  40 sanlock try to renew lease 2/7 (timeout=10)
>> >  50 sanlock renewal timeout
>> >  60 sanlock try to renew lease 3/7 (timeout=10)
>> >  70 sanlock renewal timeout
>> >  80 sanlock try to renew lease 4/7 (timeout=10)
>> >  90 sanlock renewal timeout
>> > 100 sanlock try to renew lease 5/7 (timeout=10)
>> > 110 sanlock renewal timeout
>> > 120 sanlock try to renew lease 6/7 (timeout=10)
>> > 130 sanlock renewal timeout
>> > 139 storage is back
>> > 140 sanlock try to renew lease 7/7 (timeout=10)
>> > 140 sanlock renewal succeeds
>> >
>> > David, what do you think?
>>
>> I wish I could say, it would require some careful study to know how
>> feasible it is.  The timings are intricate and fundamental to correctness
>> of the algorithm.
>> Dave
>>
>
> I was taking values also reading this:
>
> https://access.redhat.com/solutions/5152311
>
> Perhaps it needs some review?

Yes, I think we need to update the effective timeout field. The value
describes how the sanlock and multipath configurations are related, but it
does not represent the maximum outage time.

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Gianluca Cecchi
On Mon, Feb 1, 2021 at 6:51 PM David Teigland  wrote:

> On Mon, Feb 01, 2021 at 07:18:24PM +0200, Nir Soffer wrote:
> > Assuming we could use:
> >
> > io_timeout = 10
> > renewal_retries = 8
> >
> > The worst case would be:
> >
> >  00 sanlock renewal succeeds
> >  19 storage fails
> >  20 sanlock try to renew lease 1/7 (timeout=10)
> >  30 sanlock renewal timeout
> >  40 sanlock try to renew lease 2/7 (timeout=10)
> >  50 sanlock renewal timeout
> >  60 sanlock try to renew lease 3/7 (timeout=10)
> >  70 sanlock renewal timeout
> >  80 sanlock try to renew lease 4/7 (timeout=10)
> >  90 sanlock renewal timeout
> > 100 sanlock try to renew lease 5/7 (timeout=10)
> > 110 sanlock renewal timeout
> > 120 sanlock try to renew lease 6/7 (timeout=10)
> > 130 sanlock renewal timeout
> > 139 storage is back
> > 140 sanlock try to renew lease 7/7 (timeout=10)
> > 140 sanlock renewal succeeds
> >
> > David, what do you think?
>
> I wish I could say, it would require some careful study to know how
> feasible it is.  The timings are intricate and fundamental to correctness
> of the algorithm.
> Dave
>
>
I was taking values also reading this:

https://access.redhat.com/solutions/5152311

Perhaps it needs some review?

Gianluca

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread David Teigland
On Mon, Feb 01, 2021 at 07:18:24PM +0200, Nir Soffer wrote:
> Assuming we could use:
> 
> io_timeout = 10
> renewal_retries = 8
> 
> The worst case would be:
> 
>  00 sanlock renewal succeeds
>  19 storage fails
>  20 sanlock try to renew lease 1/7 (timeout=10)
>  30 sanlock renewal timeout
>  40 sanlock try to renew lease 2/7 (timeout=10)
>  50 sanlock renewal timeout
>  60 sanlock try to renew lease 3/7 (timeout=10)
>  70 sanlock renewal timeout
>  80 sanlock try to renew lease 4/7 (timeout=10)
>  90 sanlock renewal timeout
> 100 sanlock try to renew lease 5/7 (timeout=10)
> 110 sanlock renewal timeout
> 120 sanlock try to renew lease 6/7 (timeout=10)
> 130 sanlock renewal timeout
> 139 storage is back
> 140 sanlock try to renew lease 7/7 (timeout=10)
> 140 sanlock renewal succeeds
> 
> David, what do you think?

I wish I could say, it would require some careful study to know how
feasible it is.  The timings are intricate and fundamental to correctness
of the algorithm.
Dave

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Nir Soffer
On Mon, Feb 1, 2021 at 5:23 PM Gianluca Cecchi
 wrote:
>
> On Mon, Feb 1, 2021 at 4:09 PM Nir Soffer  wrote:
...
>> For 120 seconds, you likey need
>>
>> sanlock:io_timeout=20
>> no_path_retry=32
>>
>
> Shouldn't the above values be for a 160-second timeout? I need 120

120 seconds for sanlock means that sanlock will expire the lease
exactly 120 seconds after the last successful lease renewal. Sanlock
cannot exceed this deadline, since other hosts assume that timeout
when acquiring a lease from a "dead" host.

When using a 15-second io timeout, sanlock renews the lease every
30 seconds.

The best case flow is:

 00 sanlock renewal succeeds
 01 storage fails
 30 sanlock try to renew lease 1/3 (timeout=15)
 45 sanlock renewal timeout
 60 sanlock try to renew lease 2/3 (timeout=15)
 75 sanlock renewal timeout
 90 sanlock tries to renew lease 3/3 (timeout=15)
105 sanlock renewal timeout
120 sanlock expire the lease, kill the vm/vdsm
121 storage is back

If you use 20 seconds io timeout, sanlock checks every 40 seconds.

The best case flow is:

 00 sanlock renewal succeeds
 01 storage fails
 40 sanlock try to renew lease 1/3 (timeout=20)
 60 sanlock renewal timeout
 80 sanlock try to renew lease 2/3 (timeout=20)
100 sanlock renewal timeout
120 sanlock try to renew lease 3/3 (timeout=20)
121 storage is back
122 sanlock renewal succeeds

But we also need to consider the worst-case flow:

 00 sanlock renewal succeeds
 39 storage fails
 40 sanlock try to renew lease 1/3 (timeout=20)
 60 sanlock renewal timeout
 80 sanlock try to renew lease 2/3 (timeout=20)
100 sanlock renewal timeout
120 sanlock try to renew lease 3/3 (timeout=20)
140 sanlock renewal timeout
159 storage is back
160 sanlock expire lease, kill vm/vdsm etc.

So even with a 20-second io timeout, a 120-second outage may not
be survived.

In practice we can assume that the outage starts somewhere in the
middle between two sanlock renewals, so the flow would be:

 00 sanlock renewal succeeds
 20 storage fails
 40 sanlock try to renew lease 1/3 (timeout=20)
 60 sanlock renewal timeout
 80 sanlock try to renew lease 2/3 (timeout=20)
100 sanlock renewal timeout
120 sanlock try to renew lease 3/3 (timeout=20)
140 storage is back
140 sanlock renewal succeeds
160 sanlock expire lease, kill vm/vdsm etc.

So I would start with a 20-second io timeout, and increase it if needed.

These flows assume that the multipath timeout is configured properly.
If multipath uses too short a timeout, it will fail the sanlock renewal
immediately instead of queuing the I/O.

I also did not add the time to detect that storage is available again.
multipath checks paths every 5 seconds (polling_interval), so this
may add up to 5 seconds of delay from the time the storage is up until
multipath detects it and tries to send the queued I/O.
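
As a rough rule of thumb, the numbers in these flows follow a simple
model: renewal is attempted every 2 * io_timeout seconds, there are three
attempts, and the lease expires 8 * io_timeout seconds after the last
successful renewal. A back-of-the-envelope sketch of that model (my
reading of the flows above, not an official formula):

    #!/bin/bash
    # Rough sanlock timing model, as assumed in the flows above.
    io_timeout=${1:-20}
    renewal_interval=$((2 * io_timeout))
    lease_expiry=$((8 * io_timeout))
    echo "renewal attempted every ${renewal_interval}s (3 attempts per window)"
    echo "lease expires ${lease_expiry}s after the last successful renewal"
    echo "outages approaching ${lease_expiry}s are not guaranteed to be"
    echo "survived; in the worst case the margin shrinks by up to"
    echo "${renewal_interval}s, as in the worst-case flow above."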

I think the current way sanlock works is not helpful for dealing
with long outages on the storage side. If we could keep the
io_timeout constant (e.g. 10 seconds) and change the number
of retries instead, the behavior would be better and easier to predict.

Assuming we could use:

io_timeout = 10
renewal_retries = 8

The worst case would be:

 00 sanlock renewal succeeds
 19 storage fails
 20 sanlock try to renew lease 1/7 (timeout=10)
 30 sanlock renewal timeout
 40 sanlock try to renew lease 2/7 (timeout=10)
 50 sanlock renewal timeout
 60 sanlock try to renew lease 3/7 (timeout=10)
 70 sanlock renewal timeout
 80 sanlock try to renew lease 4/7 (timeout=10)
 90 sanlock renewal timeout
100 sanlock try to renew lease 5/7 (timeout=10)
110 sanlock renewal timeout
120 sanlock try to renew lease 6/7 (timeout=10)
130 sanlock renewal timeout
139 storage is back
140 sanlock try to renew lease 7/7 (timeout=10)
140 sanlock renewal succeeds

David, what do you think?

...
> On another host with same config (other luns on the same storage), if I run:
>
> multipath reconfigure -v4 > /tmp/multipath_reconfigure_v4.txt 2>&1
>
> I get this:
> https://drive.google.com/file/d/1VkezFkT9IwsrYD8LoIp4-Q-j2X1dN_qR/view?usp=sharing
>
> anything important inside, concerned with path retry settings?

I don't see anything about no_path_retry there; maybe the logging was
changed, or those are not the right flags to see all the info during
reconfiguration.

I think "multipathd show config" is the canonical way to look at the
current configuration. It shows the actual values multipath will use at
runtime, after the local configuration is applied on top of the built-in
configuration.

Nir

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Gianluca Cecchi
On Mon, Feb 1, 2021 at 4:09 PM Nir Soffer  wrote:
[snip]

>
> The easiest way to get the vdsm defaults is to change the file to use
> an older version (e.g. 1.9), remove the private comment, and run:
>
> vdsm-tool configure --force --module multipath
>
> Vdsm will upgrade your old file to the most recent version, and back up
> your old file to /etc/multipath.conf.timestamp.
>

thanks


> For 120 seconds, you likely need:
>
> sanlock:io_timeout=20
> no_path_retry=32
>
>
Shouldn't the above values be for a 160-second timeout? I need 120


> Because of the different ways sanlock and multipath handle timeouts.
>
> Also note that our QE never tested changing these settings, but your
> feedback on this new configuration is very important.
>

ok



>
> multipath -r delegates the command to the multipathd daemon; this is
> probably the reason you don't see the logs here.
>
> I think this will be more useful:
>
>multipathd reconfigure -v3
>
> I'm not sure about the -v3, check multipathd manual for the details.
>
> Nir
>
>
On another host with same config (other luns on the same storage), if I run:

multipath reconfigure -v4 > /tmp/multipath_reconfigure_v4.txt 2>&1

I get this:
https://drive.google.com/file/d/1VkezFkT9IwsrYD8LoIp4-Q-j2X1dN_qR/view?usp=sharing

anything important inside, concerned with path retry settings?

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Nir Soffer
On Mon, Feb 1, 2021 at 4:47 PM Gianluca Cecchi
 wrote:
>
> On Mon, Feb 1, 2021 at 3:10 PM Nir Soffer  wrote:
> [snip]
>
>> > So at the end I have the multipath.conf default file installed by vdsm (so 
>> > without the # PRIVATE line)
>> > and this in /etc/multipath/conf.d/eql.conf
>> >
>> > devices {
>> > device {
>> > vendor  "EQLOGIC"
>> > product "100E-00"
>>
>> Ben, why is this device missing from multipath builtin devices?
>
>
> I have been using Equallogic storage since oVirt 3.6, so CentOS/RHEL 6,
> and it has never been inside the multipath database as far as I remember.
> But I don't know why.
> The parameters I put in were from the latest EQL best practices, but they
> were last updated in the CentOS 7 era.
> I would like to use the same parameters on CentOS 8 now and see if they
> work ok.
> PS: the EQL product line is somewhat deprecated (in the sense of no new
> features and so on..) but still supported
>
>>
>>
>> > path_selector   "round-robin 0"
>> > path_grouping_policymultibus
>> > path_checkertur
>> > rr_min_io_rq10
>> > rr_weight   priorities
>> > failbackimmediate
>> > features"0"
>>
>> This is never needed, multipath generates this value.
>
>
> Those were the recommended values from EQL
> Latest is dated April 2016 when 8 not out yet:
> http://downloads.dell.com/solutions/storage-solution-resources/(3199-CD-L)RHEL-PSseries-Configuration.pdf
>
>
>>
>>
>> Ben: please correct me if needed
>>
>> > no_path_retry16
>>
>> I don't think that you need this, since you should inherit the value from
>> vdsm multipath.conf, either from the "defaults" section or from the
>> "overrides" section.
>>
>> You must add no_path_retry here only if you want to use another value and
>> don't want to use the vdsm default value.
>
>
> You are right; I see the value of 16 both in defaults and overrides. But I
> also put it inside the device section during my tests, in case it was not
> being picked up, hoping to see output similar to CentOS 7:
>
> 36090a0c8d04f2fc4251c7c08d0a3 dm-14 EQLOGIC ,100E-00
> size=2.4T features='1 queue_if_no_path' hwhandler='0' wp=rw
>
> where you notice the hwhandler='0'
>
> I remember the default value for no_path_retry was originally 4, but it has
> probably been changed to 16 in 4.4, correct?

Yes. Working on configurable sanlock io timeout revealed that these values
should match.

> If I want to see the default that vdsm would create from scratch, should I
> look inside /usr/lib/python3.6/site-packages/vdsm/tool/configurators/multipath.py
> for my version?

Yes.

The easiest way to get the vdsm defaults is to change the file to use
an older version (e.g. 1.9), remove the private comment, and run:

vdsm-tool configure --force --module multipath

Vdsm will upgrade your old file to the most recent version, and back up
your old file to /etc/multipath.conf.timestamp.
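
In shell form, the procedure might look like this (a sketch; it assumes
the "# VDSM REVISION x.y" and "# VDSM PRIVATE" header comments that vdsm
writes at the top of /etc/multipath.conf):

    cp /etc/multipath.conf /root/multipath.conf.saved    # extra safety copy
    # pretend the file is an old revision so vdsm regenerates it
    sed -i 's/^# VDSM REVISION.*/# VDSM REVISION 1.9/' /etc/multipath.conf
    sed -i '/^# VDSM PRIVATE/d' /etc/multipath.conf
    vdsm-tool configure --force --module multipath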

> On my system with vdsm-python-4.40.40-1.el8.noarch I have this inside that 
> file
> _NO_PATH_RETRY = 16

Yes, this matches sanlock default io timeout (10 seconds).

>> Note that if you use your own value, you need to match it to sanlock 
>> io_timeout.
>> See this document for more info:
>> https://github.com/oVirt/vdsm/blob/master/doc/io-timeouts.md
>>
>> > }
>
>
> Yes I set this:
>
> # cat /etc/vdsm/vdsm.conf.d/99-FooIO.conf
> # Configuration for FooIO storage.
>
> [sanlock]
> # Set renewal timeout to 80 seconds
> # (8 * io_timeout == 80).
> io_timeout = 10
>
> And for another environment with Netapp MetroCluster and 2 different sites 
> (I'm with RHV there...) I plan to set no_path_retry to 24 and io_timeout to 
> 15, to manage disaster recovery scenarios and planned maintenance with Netapp 
> node failover through sites taking potentially up to 120 seconds.

For 120 seconds, you likely need:

sanlock:io_timeout=20
no_path_retry=32

because of the different ways sanlock and multipath handle timeouts.
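
The pairing seems to come from multipath queuing I/O for no_path_retry *
polling_interval seconds, which should cover sanlock's 8 * io_timeout
renewal window. A sketch of the arithmetic, assuming the default
polling_interval of 5 seconds (my reading of the values in this thread,
not a documented formula):

    # no_path_retry that matches a given sanlock io_timeout, assuming
    # multipath must queue for the whole 8 * io_timeout renewal window
    # and polling_interval is the default 5 seconds.
    for io_timeout in 10 15 20; do
        echo "io_timeout=${io_timeout}s -> no_path_retry=$((8 * io_timeout / 5))"
    done
    # prints 16 (the vdsm default), 24 and 32 respectively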

Also note that our QE never tested changing these settings, but your
feedback on this new configuration is very important.

...
I would just like to be confident about the no_path_retry setting, because
the multipath output, even with -v2, -v3, -v4, is not so clear to me.
On 7 (as Benjamin suggested 4 years ago.. ;-) I have this:
>
> # multipath -r -v3 | grep no_path_retry
> Feb 01 15:45:27 | 36090a0d88034667163b315f8c906b0ac: no_path_retry = 4 
> (config file default)
> Feb 01 15:45:27 | 36090a0c8d04f2fc4251c7c08d0a3: no_path_retry = 4 
> (config file default)
>
> On CentOS 8.3 I get only standard error...:
>
> # multipath -r -v3
> Feb 01 15:46:32 | set open fds limit to 8192/262144
> Feb 01 15:46:32 | loading /lib64/multipath/libchecktur.so checker
> Feb 01 15:46:32 | checker tur: message table size = 3
> Feb 01 15:46:32 | loading /lib64/multipath/libprioconst.so prioritizer
> Feb 01 15:46:3

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Gianluca Cecchi
On Mon, Feb 1, 2021 at 3:10 PM Nir Soffer  wrote:
[snip]

> > So at the end I have the multipath.conf default file installed by vdsm
> > (so without the # PRIVATE line)
> > and this in /etc/multipath/conf.d/eql.conf
> >
> > devices {
> > device {
> > vendor  "EQLOGIC"
> > product "100E-00"
>
> Ben, why is this device missing from multipath builtin devices?
>

I have been using Equallogic storage since oVirt 3.6, so CentOS/RHEL 6,
and it has never been inside the multipath database as far as I remember.
But I don't know why.
The parameters I put in were from the latest EQL best practices, but they
were last updated in the CentOS 7 era.
I would like to use the same parameters on CentOS 8 now and see if they
work ok.
PS: the EQL product line is somewhat deprecated (in the sense of no new
features and so on..) but still supported


>
> > path_selector   "round-robin 0"
> > path_grouping_policymultibus
> > path_checkertur
> > rr_min_io_rq10
> > rr_weight   priorities
> > failbackimmediate
> > features"0"
>
> This is never needed, multipath generates this value.
>

Those were the recommended values from EQL
Latest is dated April 2016 when 8 not out yet:
http://downloads.dell.com/solutions/storage-solution-resources/(3199-CD-L)RHEL-PSseries-Configuration.pdf



>
> Ben: please correct me if needed
>
> > no_path_retry16
>
> I don't think that you need this, since you should inherit the value from
> vdsm multipath.conf, either from the "defaults" section or from the
> "overrides" section.
>
> You must add no_path_retry here only if you want to use another value and
> don't want to use the vdsm default value.
>

You are right; I see the value of 16 both in defaults and overrides. But I
also put it inside the device section during my tests, in case it was not
being picked up, hoping to see output similar to CentOS 7:

36090a0c8d04f2fc4251c7c08d0a3 dm-14 EQLOGIC ,100E-00
size=2.4T features='1 queue_if_no_path' hwhandler='0' wp=rw

where you notice the hwhandler='0'

I remember the default value for no_path_retry was originally 4, but it has
probably been changed to 16 in 4.4, correct?
If I want to see the default that vdsm would create from scratch, should I
look inside
/usr/lib/python3.6/site-packages/vdsm/tool/configurators/multipath.py for my
version?
On my system with vdsm-python-4.40.40-1.el8.noarch I have this inside that
file
_NO_PATH_RETRY = 16



>
> Note that if you use your own value, you need to match it to sanlock
> io_timeout.
> See this document for more info:
> https://github.com/oVirt/vdsm/blob/master/doc/io-timeouts.md
>
> > }
>

Yes I set this:

# cat /etc/vdsm/vdsm.conf.d/99-FooIO.conf
# Configuration for FooIO storage.

[sanlock]
# Set renewal timeout to 80 seconds
# (8 * io_timeout == 80).
io_timeout = 10

And for another environment with Netapp MetroCluster and 2 different sites
(I'm with RHV there...) I plan to set no_path_retry to 24 and io_timeout to
15, to manage disaster recovery scenarios and planned maintenance with
Netapp node failover through sites taking potentially up to 120 seconds.

> > But still I see this
> >
> > # multipath -l
> > 36090a0c8d04f2fc4251c7c08d0a3 dm-13 EQLOGIC,100E-00
> > size=2.4T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> > `-+- policy='round-robin 0' prio=0 status=active
> >   |- 16:0:0:0 sdc 8:32 active undef running
> >   `- 18:0:0:0 sde 8:64 active undef running
> > 36090a0d88034667163b315f8c906b0ac dm-12 EQLOGIC,100E-00
> > size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> > `-+- policy='round-robin 0' prio=0 status=active
> >   |- 15:0:0:0 sdb 8:16 active undef running
> >   `- 17:0:0:0 sdd 8:48 active undef running
> >
> > that lets me think I'm not using the no_path_retry setting, but
> queue_if_no_path... I could be wrong anyway..
>
> No, this is expected. What it means, if I understand multipath behavior
> correctly, is that the device queues data for no_path_retry *
> polling_interval seconds when all paths have failed (with no_path_retry 16
> and the default polling_interval of 5, that is 80 seconds of queuing).
> After that the device will fail all pending and new I/O until at least
> one path is recovered.
>
> > How can I verify it for sure from the config (without dropping the
> > paths, at least for the moment)?
> > Any option with multipath and/or dmsetup commands?
>
> multipathd show config -> find your device section, it will show the
> current value for no_path_retry.
>
> Nir
>
>
I would just like to be confident about the no_path_retry setting, because
the multipath output, even with -v2, -v3, -v4, is not so clear to me.
On 7 (as Benjamin suggested 4 years ago.. ;-) I have this:

# multipath -r -v3 | grep no_path_retry
Feb 01 15:45:27 | 36090a0d88034667163b315f8c906b0ac: no_path_retry = 4
(config file default)
Feb 01 15:45:27 | 36090a0c8d04f2fc4251c7c08d0a3: no_path_retry = 4
(config file default)

On CentOS 8.3 I get only

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Nir Soffer
On Mon, Feb 1, 2021 at 1:55 PM Gianluca Cecchi
 wrote:
>
> On Sat, Jan 30, 2021 at 6:05 PM Strahil Nikolov  wrote:
>>
>> So you created that extra conf with this content but it didn't work ?
>> multipath -v4 could hint you why it was complaining.
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>
> Ok, I missed the surrounding root part
>
> devices {
>
> }

It seems that we need more examples in the multipath.conf file
installed by vdsm.

> Apparently "multipathd show config" didn't complain...
> Now I put also that and it seems to work, thanks for pointing it
>
> So at the end I have the multipath.conf default file installed by vdsm (so 
> without the # PRIVATE line)
> and this in /etc/multipath/conf.d/eql.conf
>
> devices {
> device {
> vendor  "EQLOGIC"
> product "100E-00"

Ben, why is this device missing from multipath builtin devices?

> path_selector   "round-robin 0"
> path_grouping_policymultibus
> path_checkertur
> rr_min_io_rq10
> rr_weight   priorities
> failbackimmediate
> features"0"

This is never needed, multipath generates this value.

Ben: please correct me if needed

> no_path_retry16

I don't think that you need this, since you should inherit the value from
vdsm multipath.conf, either from the "defaults" section or from the
"overrides" section.

You must add no_path_retry here only if you want to use another value and
don't want to use the vdsm default value.

Note that if you use your own value, you need to match it to sanlock io_timeout.
See this document for more info:
https://github.com/oVirt/vdsm/blob/master/doc/io-timeouts.md

> }
> }
>
> Recreated initrd and rebooted the host and activated it without further 
> problems.
> And "multipathd show config" confirms it.

Yes, this is the recommended way to configure multipath, thanks Strahil for the
good advice!

> But still I see this
>
> # multipath -l
> 36090a0c8d04f2fc4251c7c08d0a3 dm-13 EQLOGIC,100E-00
> size=2.4T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 16:0:0:0 sdc 8:32 active undef running
>   `- 18:0:0:0 sde 8:64 active undef running
> 36090a0d88034667163b315f8c906b0ac dm-12 EQLOGIC,100E-00
> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 15:0:0:0 sdb 8:16 active undef running
>   `- 17:0:0:0 sdd 8:48 active undef running
>
> that makes me think I'm not using the no_path_retry setting but
> queue_if_no_path... I could be wrong anyway..

No, this is expected. What it means, if I understand multipath behavior
correctly, is that the device queues data for no_path_retry *
polling_interval seconds when all paths have failed (with no_path_retry 16
and the default polling_interval of 5, that is 80 seconds of queuing).
After that the device will fail all pending and new I/O until at least one
path is recovered.

> How can I verify it for sure from the config (without dropping the paths,
> at least for the moment)?
> Any option with multipath and/or dmsetup commands?

multipathd show config -> find your device section, it will show the
current value for no_path_retry.
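
For example, something along these lines (a sketch; adjust the -A context
count to the size of your device section):

    multipathd show config | grep -A 20 EQLOGIC | grep no_path_retry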

Nir

[ovirt-users] Re: move ovirt 4.3.9 to RHEL 8

2021-02-01 Thread Nathanaël Blanchet

Hello,

this is my memo after 4 successful el7->el8 ovirt-engine migrations:

# Migration engine el7->el8
sudo dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

dnf module enable -y javapackages-tools pki-deps 389-ds postgresql:12
yum install ovirt-engine -y
yum install ovirt-engine-extension-aaa-ldap-setup -y
ovirt-engine-extension-aaa-ldap-setup
firewall-cmd --permanent 
--add-service={ovirt-http,ovirt-https,ovirt-imageio,ovirt-postgres,ovirt-provider-ovn,ovirt-storageconsole,ovirt-vmconsole,ovirt-vmconsole-proxy,ovirt-websocket-proxy}

firewall-cmd --reload
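
One step the memo leaves implicit is moving the engine data itself; the
usual path is an engine-backup taken on el7 and restored on el8 before
engine-setup. A hedged sketch (file names are placeholders; verify the
exact flags with "engine-backup --help" on your version):

    # on the old el7 engine
    engine-backup --mode=backup --file=engine-el7.backup --log=backup.log
    # on the new el8 machine, after installing the packages above
    engine-backup --mode=restore --file=engine-el7.backup --log=restore.log \
                  --provision-all-databases --restore-permissions
    engine-setup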

good luck

On 30/01/2021 at 03:03, Paul Dyer wrote:

no, but installing ovirt-engine-setup fails in the same way.

But, I was able to find the RHEL8 repo that has javapackages-tools, 
and that one gave me apache-commons-compress.   So, I am again making 
some progress towards my goals.


[root@r8-bacchus yum.repos.d]# subscription-manager repos
--enable=codeready-builder-for-rhel-8-x86_64-rpms
Repository 'codeready-builder-for-rhel-8-x86_64-rpms' is enabled
for this system.

[root@r8-bacchus yum.repos.d]# yum module enable javapackages-tools
Updating Subscription Management repositories.
Red Hat CodeReady Linux Builder for RHEL 8 x86_  3.5 MB/s | 4.4 MB     00:01
Dependencies resolved.
================================================================================
 Package             Architecture   Version          Repository          Size
================================================================================
Enabling module streams:
 javapackages-tools                 201801

Transaction Summary
================================================================================

Is this ok [y/N]: y
Complete!

[root@r8-bacchus yum.repos.d]# yum info apache-commons-compress
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:18 ago on Fri 29 Jan 2021
08:00:02 PM CST.
Available Packages
Name         : apache-commons-compress
Version      : 1.18
Release      : 1.module+el8+2598+06babf2e
Architecture : noarch
Size         : 526 k
Source       :
apache-commons-compress-1.18-1.module+el8+2598+06babf2e.src.rpm
Repository   : codeready-builder-for-rhel-8-x86_64-rpms
Summary      : Java API for working with compressed files and archivers

URL          : http://commons.apache.org/proper/commons-compress/
License      : ASL 2.0
Description  : The Apache Commons Compress library defines an API for working
             : with ar, cpio, Unix dump, tar, zip, gzip, XZ, Pack200 and bzip2
             : files. In version 1.14 read-only support for Brotli decompression
             : has been added, but it has been removed from this package.


On Fri, Jan 29, 2021 at 7:43 PM Derek Atkins wrote:


Ummm.. I realize it's been a while but aren't you supposed to
install ovirt-engine-setup and not ovirt-engine?

-derek
Sent using my mobile device. Please excuse any typos.

On January 29, 2021 8:29:00 PM Paul Dyer <pmdyer...@gmail.com> wrote:


thanks Gianluca for the well-thought-out response. I started over and
tried to install RHEL 8 with ovirt 4.4. I ran into a few problems, which
may be related. First was that the module javapackages-tools could not
be found.

# yum module enable javapackages-tools
Updating Subscription Management repositories.
Last metadata expiration check: 0:17:45 ago on Fri 29 Jan
2021 07:02:20 PM CST.
Error: Problems in request:
missing groups or modules: javapackages-tools


Then, the install of ovirt-engine relies
on apache-commons-compress, but no version could pass module
filtering.

[root@r8-bacchus yum.repos.d]# yum install ovirt-engine
Updating Subscription Management repositories.
Last metadata expiration check: 0:12:16 ago on Fri 29 Jan
2021 07:02:20 PM CST.
Error:
 Problem: package ovirt-engine-4.4.4.7-1.el8.noarch requires
apache-commons-compress, but none of the providers can be
installed
  - cannot install the best candidate for the job
  - package
apache-commons-compress-1.20-1.module+el8.2.1+6727+059d025f.noarch
is filtered out by modular filtering
  - package
apache-commons-compress-1.20-3.module+el8.2.1+7436+4afdca1f.noarch
is filtered out by modular filtering
(try to add '--skip-broken' to skip uninstallable packages or
'--nobest' to use not only best candidate packages)

[root@r8-bacchus yum.repos.d]# yum install ovirt-engine --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:26 ago on Fri 29 Jan
2021 07:02:20 PM CST.
Error:
  

[ovirt-users] Re: ovirt 4.4 and CentOS 8 and multipath with Equallogic

2021-02-01 Thread Gianluca Cecchi
On Sat, Jan 30, 2021 at 6:05 PM Strahil Nikolov 
wrote:

> So you created that extra conf with this content but it didn't work ?
> multipath -v4 could hint you why it was complaining.
>
>
> Best Regards,
> Strahil Nikolov
>
>
Ok, I missed the surrounding root part

devices {

}

Apparently "multipathd show config" didn't complain...
Now I put also that and it seems to work, thanks for pointing it

So at the end I have the multipath.conf default file installed by vdsm (so
without the # PRIVATE line)
and this in /etc/multipath/conf.d/eql.conf

devices {
device {
vendor  "EQLOGIC"
product "100E-00"
path_selector   "round-robin 0"
path_grouping_policymultibus
path_checkertur
rr_min_io_rq10
rr_weight   priorities
failbackimmediate
features"0"
no_path_retry16
}
}

Recreated initrd and rebooted the host and activated it without further
problems.
And "multipathd show config" confirms it.

But still I see this

# multipath -l
36090a0c8d04f2fc4251c7c08d0a3 dm-13 EQLOGIC,100E-00
size=2.4T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 16:0:0:0 sdc 8:32 active undef running
  `- 18:0:0:0 sde 8:64 active undef running
36090a0d88034667163b315f8c906b0ac dm-12 EQLOGIC,100E-00
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 15:0:0:0 sdb 8:16 active undef running
  `- 17:0:0:0 sdd 8:48 active undef running

that makes me think I'm not using the no_path_retry setting but
queue_if_no_path... I could be wrong anyway..
How can I verify it for sure from the config (without dropping the paths,
at least for the moment)?
Any option with multipath and/or dmsetup commands?

Gianluca

[ovirt-users] CVE-2021-3156 && ovirt-node-ng 4.3 && 4.4 (sudo)

2021-02-01 Thread Renaud RAKOTOMALALA
Hello everyone,

I operate several oVirt clusters, including pre-production ones, using
ovirt-node-ng images.

For our traditional clusters we manage the incident individually with a
dedicated rpm; however, for ovirt-node-ng I am not yet up to date with the
process for critical package updates.

Do you have any advice or tips?
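
For reference, the public advisories for CVE-2021-3156 describe a quick
way to check whether a given host's sudo build is affected; a sketch, run
as a regular (non-root) user:

    # A vulnerable sudo answers with an error starting with "sudoedit:",
    # a patched one prints a usage message starting with "usage:".
    sudoedit -s /
    rpm -q sudo    # compare against the fixed build for your distro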

Nice day,
Renaud

[ovirt-users] Re: AFFINITY GROUPS

2021-02-01 Thread LS CHENG
ok

thank you

Are AFFINITY GROUPS extensively used?

Cheers

On Sun, Jan 31, 2021 at 10:42 AM Kim Kargaard 
wrote:

> You can select multiple hosts when selecting which host the VM can run on.
>
> Kim
> --
> *From:* LS CHENG 
> *Sent:* Sunday, January 31, 2021 9:51 AM
> *To:* Strahil Nikolov 
> *Cc:* users@ovirt.org 
> *Subject:* [ovirt-users] Re: AFFINITY GROUPS
>
> Hi
>
> Yes, that can do the trick as well. I am using affinity rules because I am
> trying to extend to more combinations. For example, 4 hosts where a VM can
> run on 2 hosts only, etc.
>
> Thanks
>
> On Sun, Jan 31, 2021 at 9:14 AM Strahil Nikolov 
> wrote:
>
> Why don't you go to VM settings and define that vm01 can run only on
> host_x?
> If the host is down, it won't start on host_y.
>
> Best Regards,
> Strahil Nikolov
> Sent from Yahoo Mail on Android
>
> On Sat, Jan 30, 2021 at 23:09, LS CHENG
>  wrote:
> Hi
>
> I would like to know how AFFINITY GROUPS works.
>
> I have 2 VM in 2 hosts, each VM runs in a host. Let's call
>
> HOST_X where vm01 runs
> HOST_Y where vm02 runs
>
> I have set up an affinity group where it says vm01 relates to HOST_X and
> vm02 relates to HOST_Y. The VM affinity rule is set to negative and the
> host affinity rule to positive. I need both vm01 and vm02 to run on their
> respective physical hosts.
>
> I have a problem: when vm01 is stopped and HOST_Y is rebooted, vm02 starts
> on HOST_X. How can I avoid that?
>
> Thank you
>
> Luis Sanchez
>
>