[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-19 Thread David White via Users
Thank you.
I'm doing some more research & reading on this to make sure I understand 
everything before I do this work.

You wrote:
> If you rebuild the raid, you are destroying the brick, so after mounting it
> back, you will need to reset-brick. If it doesn't work for some reason, you
> can always remove-brick replica 1 host1:/path/to/brick arbiter:/path/to/brick
> and re-add them with add-brick replica 3 arbiter 1.

Would it be safer to remove-brick replica 1 before I destroy the brick?
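
For my own reference, I assume the reset-brick sequence you mention would look
roughly like this, with my hostname/path substituted in (untested on my side):

# gluster volume reset-brick data host1.mgt.example.com:/gluster_bricks/data/data start
(rebuild the RAID, remount the brick)
# gluster volume reset-brick data host1.mgt.example.com:/gluster_bricks/data/data \
  host1.mgt.example.com:/gluster_bricks/data/data commit force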

Also, you suggested creating a fresh logical volume on the node where I'm 
converting from full replica to arbiter.
Would it not suffice to simply go in and erase all of the data?

I'm still not clear on how to force gluster to use a specific server as the 
arbiter node.
Will gluster just "figure it out" if the logical volume on the arbiter is 
smaller than on the other two nodes?
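
My working assumption, from your add-brick example, is that the brick named in
the add-brick command is the one that becomes the arbiter, i.e. something like
this would pin the arbiter to host 3 (please correct me if that's wrong):

# gluster volume add-brick data replica 3 arbiter 1 
host3.mgt.example.com:/gluster_bricks/data/data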

I'm also still a little unclear on why I need to specifically increase the
inode count when I add-brick on the arbiter node. Or is that only when (if?) I
rebuild the logical volume?
Do I need to increase the inode count on the other two servers after I grow the
RAID on the two primary storage servers?
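
If I do end up rebuilding the logical volume on the arbiter node, I assume the
brick prep would look roughly like this. The VG/LV names, size and XFS inode
options below are placeholders/guesses on my part, not something you suggested:

# lvcreate -L 20G -n gluster_lv_arbiter gluster_vg_data
# mkfs.xfs -f -i size=512,maxpct=50 /dev/gluster_vg_data/gluster_lv_arbiter
# mount /dev/gluster_vg_data/gluster_lv_arbiter /gluster_bricks/data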

Below is an overview of the specific steps I think I need to take, in order:

Prepare:
Put cluster into global maintenance mode
Run the following commands:
# gluster volume status data inode
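
(Two assumptions of mine, not from your mail: that "global maintenance" just
means running the hosted-engine command below on one of the hosts, and that
df -i on each brick mount point is a sane way to see the current inode usage.)

# hosted-engine --set-maintenance --mode=global
# df -i /gluster_bricks/data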

host 3 (Arbiter):
# gluster volume remove-brick data replica 2 
host3.mgt.example.com:/gluster_bricks/data/data
# rm -rf /gluster_bricks/data/data/*
# ALTERNATIVELY... rebuild the logical volume? Is this really necessary?
# gluster volume add-brick data replica 3 arbiter 1 
host3.mgt.example.com:/gluster_bricks/data/data

(Let the volumes heal completely) 
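
(I assume I can confirm the heals finished with something like the following
before moving on:)

# gluster volume heal data info summary
# gluster volume heal data info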


host 1
# gluster volume remove-brick data replica 1 
host1.mgt.example.com:/gluster_bricks/data/data
(rebuild the array & reboot the server)
# gluster volume add-brick data replica 3 arbiter 1 
host1.mgt.example.com:/gluster_bricks/data/data

(Let the volumes heal completely)

host 2
# gluster volume remove-brick data replica 1 
host2.mgt.example.com:/gluster_bricks/data/data
(rebuild the array & reboot the server)
# gluster volume add-brick data replica 3 arbiter 1 
host2.mgt.example.com:/gluster_bricks/data/data

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐

On Sunday, July 11th, 2021 at 3:14 AM, Strahil Nikolov via Users 
 wrote:

> > 2a) remove-brick replica 2
> 

> -- if I understand correctly, this will basically just reconfigure the 
> existing volume to replicate between the 2 bricks, and not all 3 ... is this 
> correct?
> 

> Yep, you are kicking out the 3rd node and the volume is converted to replica 
> 2.
> 

> Most probably the command would be gluster volume remove-brick vmstore 
> replica 2 host3:/path/to/brick force
> 

> > 2b) add-brick replica 3 arbiter 1
> 

> -- If I understand correctly, this will reconfigure the volume (again), 
> adding the 3rd server's storage back to the Gluster volume, but only as an 
> arbiter node, correct?
> 

> Yes, I would prefer to create a fresh new LV. Don't forget to raise the inode
> count, as this one will be an arbiter brick (see previous e-mail).
> 

> Once you add via gluster volume add-brick vmstore replica 3 arbiter 1 
> host3:/path/to/new/brick , you will have to wait for all heals to complete
> 

>  
> 

> > 3.  Now with everything healthy, the volume is now a Replica 2 / Arbiter 
> > 1 and I can now stop gluster on each of the 2 servers getting the 
> > storage upgrade, rebuild the RAID on the new storage, reboot, and let 
> > gluster heal itself before moving on to the next server.
> 

> If you rebuild the raid, you are destroying the brick, so after mounting it
> back, you will need to reset-brick. If it doesn't work for some reason, you
> can always remove-brick replica 1 host1:/path/to/brick arbiter:/path/to/brick
> and re-add them with add-brick replica 3 arbiter 1.
> 

> I had some paused VMs after raid reshaping (spinning disks) during the
> healing, but my lab runs on workstations, so do it during the least busy
> hours, and any backups should have completed before the reconfiguration
> rather than during the healing ;)
> 

> Best Regards,
> 

> Strahil Nikolov
> 

> Users mailing list -- users@ovirt.org
> 

> To unsubscribe send an email to users-le...@ovirt.org
> 

> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> 

> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> 

> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECHZQWBMC5H3CDWV67TCLFZMBPGTVGCU/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/

[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Yedidyah Bar David
On Mon, Jul 19, 2021 at 1:54 PM Christoph Timm  wrote:
>
>
>
> Am 19.07.21 um 10:52 schrieb Yedidyah Bar David:
> > On Mon, Jul 19, 2021 at 11:39 AM Christoph Timm  wrote:
> >>
> >> Am 19.07.21 um 10:25 schrieb Yedidyah Bar David:
> >>> On Mon, Jul 19, 2021 at 11:02 AM Christoph Timm  wrote:
>  Am 19.07.21 um 09:27 schrieb Yedidyah Bar David:
> > On Mon, Jul 19, 2021 at 10:04 AM Christoph Timm  wrote:
> >> Hi Didi,
> >>
> >> thank you for the quick response.
> >>
> >>
> >> Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:
> >>> On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  
> >>> wrote:
>  Hi List,
> 
>  I'm trying to understand why my hosted engine is moved from one node 
>  to
>  another from time to time.
>  It is happening sometimes multiple times a day. But there are also 
>  days
>  without it.
> 
>  I can see the following in the ovirt-hosted-engine-ha/agent.log:
>  ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
>  Penalizing score by 1600 due to network status
> 
>  After that the engine will be shutdown and started on another host.
>  The oVirt Admin portal is showing the following around the same time:
>  Invalid status on Data Center Default. Setting status to Non 
>  Responsive.
> 
>  But the whole cluster is working normally during that time.
> 
>  I believe that I have somehow a network issue on my side but I have 
>  no
>  clue what kind of check is causing the network status to be penalized.
> 
>  Does anyone have an idea how to investigate this further?
> >>> Please check also broker.log. Do you see 'dig' failures?
> >> Yes I found them as well.
> >>
> >> Thread-1::WARNING::2021-07-19
> >> 08:02:00,032::network::120::network.Network::(_dns) DNS query failed:
> >> ; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
> >> ;; global options: +cmd
> >> ;; connection timed out; no servers could be reached
> >>
> >>> This happened several times already on our CI infrastructure, but 
> >>> yours is
> >>> the first report from an actual real user. See also:
> >>>
> >>> https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/
> >> So I understand that the following command is triggered to test the
> >> network: "dig +tries=1 +time=5"
> > Indeed.
> >
> >>> I didn't open a bug for this (yet?), also because I never reproduced 
> >>> on my
> >>> own machines and am not sure about the exact failing flow. If this is
> >>> reproducible
> >>> reliably for you, you might want to test the patch I pushed:
> >>>
> >>> https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596
> >> I'm happy to give it a try.
> >> Please confirm that I need to replace this file (network.py) on all my
> >> nodes (CentOS 8.4 based) which can host my engine.
> > It definitely makes sense to do so, but in principle there is no problem
> > with applying it only on some of them. That's especially useful if you 
> > try
> > this first on a test env and try to enforce a reproduction somehow 
> > (overload
> > the network, disconnect stuff, etc.).
>  OK will give it a try and report back.
> >>> Thanks and good luck.
> Do I need to restart anything after that change?

Yes, the broker. This might restart some other services there, so it's best to
put the host into maintenance during this.
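
For example (untested here, and assuming local HA maintenance on that host is
enough in your case):

# hosted-engine --set-maintenance --mode=local
# systemctl restart ovirt-ha-broker
# hosted-engine --set-maintenance --mode=none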

> Also please confirm that the comma after TCP is correct, as there wasn't
> one after the timeout in row 110 before.

It is correct, but not mandatory. We (my team, at least) often add it in such
cases so that a theoretical future patch that adds another parameter does not
need to add it again (thus making that patch smaller and hopefully cleaner).

> >>>
> >>> Other ideas/opinions about how to enhance this part of the monitoring
> >>> are most welcome.
> >>>
> >>> If this phenomenon is new for you, and you can reliably say it's not 
> >>> due to
> >>> a recent "natural" higher network load, I wonder if it's due to some 
> >>> weird
> >>> bug/change somewhere.
> >> I'm quite sure that I see this since we moved to 4.4.(4).
> >> Just for housekeeping, I'm running 4.4.7 now.
> > We use 'dig' as the network monitor since 4.3.5, around one year before 
> > 4.4
> > was released: https://bugzilla.redhat.com/1659052
> >
> > Which version did you use before 4.4?
>  The last 4.3 versions have been 4.3.7, 4.3.9 and 4.3.10 before migrating
>  to 4.4.4.
> >>> I now realize that in the above-linked bug we only changed the default for
> >>> new setups. So if you deployed HE before 4.3.5, upgrading to a later 4.3
> >>> would not change the default (as opposed to upgrading to 4.4, which was
> >>> actually a new deployment with engine backup/restore).

[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Christoph Timm



Am 19.07.21 um 10:52 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 11:39 AM Christoph Timm  wrote:


Am 19.07.21 um 10:25 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 11:02 AM Christoph Timm  wrote:

Am 19.07.21 um 09:27 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 10:04 AM Christoph Timm  wrote:

Hi Didi,

thank you for the quick response.


Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:

Hi List,

I'm trying to understand why my hosted engine is moved from one node to
another from time to time.
It is happening sometimes multiple times a day. But there are also days
without it.

I can see the following in the ovirt-hosted-engine-ha/agent.log:
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Penalizing score by 1600 due to network status

After that the engine will be shutdown and started on another host.
The oVirt Admin portal is showing the following around the same time:
Invalid status on Data Center Default. Setting status to Non Responsive.

But the whole cluster is working normally during that time.

I believe that I have somehow a network issue on my side but I have no
clue what kind of check is causing the network status to be penalized.

Does anyone have an idea how to investigate this further?

Please check also broker.log. Do you see 'dig' failures?

Yes I found them as well.

Thread-1::WARNING::2021-07-19
08:02:00,032::network::120::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
;; global options: +cmd
;; connection timed out; no servers could be reached


This happened several times already on our CI infrastructure, but yours is
the first report from an actual real user. See also:

https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/

So I understand that the following command is triggered to test the
network: "dig +tries=1 +time=5"

Indeed.
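
If it helps to narrow it down, the same check can presumably be reproduced by
hand against each resolver the host is configured with. The IP below is just a
placeholder, not one of yours:

# awk '/^nameserver/ {print $2}' /etc/resolv.conf
# dig +tries=1 +time=5 @192.0.2.53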


I didn't open a bug for this (yet?), also because I never reproduced on my
own machines and am not sure about the exact failing flow. If this is
reproducible
reliably for you, you might want to test the patch I pushed:

https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596

I'm happy to give it a try.
Please confirm that I need to replace this file (network.py) on all my
nodes (CentOS 8.4 based) which can host my engine.

It definitely makes sense to do so, but in principle there is no problem
with applying it only on some of them. That's especially useful if you try
this first on a test env and try to enforce a reproduction somehow (overload
the network, disconnect stuff, etc.).

OK will give it a try and report back.

Thanks and good luck.

Do I need to restart anything after that change?
Also please confirm that the comma after TCP is correct, as there wasn't
one after the timeout in row 110 before.



Other ideas/opinions about how to enhance this part of the monitoring
are most welcome.

If this phenomenon is new for you, and you can reliably say it's not due to
a recent "natural" higher network load, I wonder if it's due to some weird
bug/change somewhere.

I'm quite sure that I see this since we moved to 4.4.(4).
Just for housekeeping, I'm running 4.4.7 now.

We use 'dig' as the network monitor since 4.3.5, around one year before 4.4
was released: https://bugzilla.redhat.com/1659052

Which version did you use before 4.4?

The last 4.3 versions have been 4.3.7, 4.3.9 and 4.3.10 before migrating
to 4.4.4.

I now realize that in the above-linked bug we only changed the default for new
setups. So if you deployed HE before 4.3.5, upgrading to a later 4.3 would not
change the default (as opposed to upgrading to 4.4, which was actually a
new deployment with engine backup/restore). Do you know which version
your cluster was originally deployed with?

Hm, I'm sorry but I don't recall this. I'm quite sure that we started

OK, thanks for trying.


with 4.0 something. But we moved to an HE setup around September 2019.
I don't recall the version, but we also restored the backup from
the old installation into the HE environment, if I'm not wrong.

If indeed this change was the trigger for you, you can rather easily try to
change this to 'ping' and see if this helps - I think it's enough to change
'network_test' to 'ping' in /etc/ovirt-hosted-engine/hosted-engine.conf
and restart the broker - didn't try, though. But generally speaking, I do not
think we want to change the default back to 'ping', but rather make 'dns'
work better/well. We had valid reasons to move away from ping...
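
Concretely, I mean something like this (untested, and assuming the default
config location):

# sed -i 's/^network_test=.*/network_test=ping/' /etc/ovirt-hosted-engine/hosted-engine.conf
# systemctl restart ovirt-ha-broker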

OK I will try this if the tcp change does not help me.


Best regards,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Yedidyah Bar David
On Mon, Jul 19, 2021 at 11:39 AM Christoph Timm  wrote:
>
>
> Am 19.07.21 um 10:25 schrieb Yedidyah Bar David:
> > On Mon, Jul 19, 2021 at 11:02 AM Christoph Timm  wrote:
> >>
> >> Am 19.07.21 um 09:27 schrieb Yedidyah Bar David:
> >>> On Mon, Jul 19, 2021 at 10:04 AM Christoph Timm  wrote:
>  Hi Didi,
> 
>  thank you for the quick response.
> 
> 
>  Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:
> > On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:
> >> Hi List,
> >>
> >> I'm trying to understand why my hosted engine is moved from one node to
> >> another from time to time.
> >> It is happening sometimes multiple times a day. But there are also days
> >> without it.
> >>
> >> I can see the following in the ovirt-hosted-engine-ha/agent.log:
> >> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> >> Penalizing score by 1600 due to network status
> >>
> >> After that the engine will be shutdown and started on another host.
> >> The oVirt Admin portal is showing the following around the same time:
> >> Invalid status on Data Center Default. Setting status to Non 
> >> Responsive.
> >>
> >> But the whole cluster is working normally during that time.
> >>
> >> I believe that I have somehow a network issue on my side but I have no
> >> clue what kind of check is causing the network status to be penalized.
> >>
> >> Does anyone have an idea how to investigate this further?
> > Please check also broker.log. Do you see 'dig' failures?
>  Yes I found them as well.
> 
>  Thread-1::WARNING::2021-07-19
>  08:02:00,032::network::120::network.Network::(_dns) DNS query failed:
>  ; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
>  ;; global options: +cmd
>  ;; connection timed out; no servers could be reached
> 
> > This happened several times already on our CI infrastructure, but yours 
> > is
> > the first report from an actual real user. See also:
> >
> > https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/
>  So I understand that the following command is triggered to test the
>  network: "dig +tries=1 +time=5"
> >>> Indeed.
> >>>
> > I didn't open a bug for this (yet?), also because I never reproduced on 
> > my
> > own machines and am not sure about the exact failing flow. If this is
> > reproducible
> > reliably for you, you might want to test the patch I pushed:
> >
> > https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596
>  I'm happy to give it a try.
>  Please confirm that I need to replace this file (network.py) on all my
>  nodes (CentOS 8.4 based) which can host my engine.
> >>> It definitely makes sense to do so, but in principle there is no problem
> >>> with applying it only on some of them. That's especially useful if you try
> >>> this first on a test env and try to enforce a reproduction somehow 
> >>> (overload
> >>> the network, disconnect stuff, etc.).
> >> OK will give it a try and report back.
> > Thanks and good luck.
> >
> > Other ideas/opinions about how to enhance this part of the monitoring
> > are most welcome.
> >
> > If this phenomenon is new for you, and you can reliably say it's not 
> > due to
> > a recent "natural" higher network load, I wonder if it's due to some 
> > weird
> > bug/change somewhere.
>  I'm quite sure that I see this since we moved to 4.4.(4).
>  Just for housekeeping, I'm running 4.4.7 now.
> >>> We use 'dig' as the network monitor since 4.3.5, around one year before 
> >>> 4.4
> >>> was released: https://bugzilla.redhat.com/1659052
> >>>
> >>> Which version did you use before 4.4?
> >> The last 4.3 versions have been 4.3.7, 4.3.9 and 4.3.10 before migrating
> >> to 4.4.4.
> > I now realize that in the above-linked bug we only changed the default for new
> > setups. So if you deployed HE before 4.3.5, upgrading to a later 4.3 would not
> > change the default (as opposed to upgrading to 4.4, which was actually a
> > new deployment with engine backup/restore). Do you know which version
> > your cluster was originally deployed with?
> Hm, I'm sorry but I don't recall this. I'm quite sure that we started

OK, thanks for trying.

> with 4.0 something. But we moved to an HE setup around September 2019.
> I don't recall the version, but we also restored the backup from
> the old installation into the HE environment, if I'm not wrong.

If indeed this change was the trigger for you, you can rather easily try to
change this to 'ping' and see if this helps - I think it's enough to change
'network_test' to 'ping' in /etc/ovirt-hosted-engine/hosted-engine.conf
and restart the broker - didn't try, though. But generally speaking, I do not
think we want to change the default back to 'ping', but rather make 'dns'
work better/well. We had valid reasons to move away from ping...

[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Christoph Timm


Am 19.07.21 um 10:25 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 11:02 AM Christoph Timm  wrote:


Am 19.07.21 um 09:27 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 10:04 AM Christoph Timm  wrote:

Hi Didi,

thank you for the quick response.


Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:

Hi List,

I'm trying to understand why my hosted engine is moved from one node to
another from time to time.
It is happening sometimes multiple times a day. But there are also days
without it.

I can see the following in the ovirt-hosted-engine-ha/agent.log:
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Penalizing score by 1600 due to network status

After that the engine will be shutdown and started on another host.
The oVirt Admin portal is showing the following around the same time:
Invalid status on Data Center Default. Setting status to Non Responsive.

But the whole cluster is working normally during that time.

I believe that I have somehow a network issue on my side but I have no
clue what kind of check is causing the network status to be penalized.

Does anyone have an idea how to investigate this further?

Please check also broker.log. Do you see 'dig' failures?

Yes I found them as well.

Thread-1::WARNING::2021-07-19
08:02:00,032::network::120::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
;; global options: +cmd
;; connection timed out; no servers could be reached


This happened several times already on our CI infrastructure, but yours is
the first report from an actual real user. See also:

https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/

So I understand that the following command is triggered to test the
network: "dig +tries=1 +time=5"

Indeed.


I didn't open a bug for this (yet?), also because I never reproduced on my
own machines and am not sure about the exact failing flow. If this is
reproducible
reliably for you, you might want to test the patch I pushed:

https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596

I'm happy to give it a try.
Please confirm that I need to replace this file (network.py) on all my
nodes (CentOS 8.4 based) which can host my engine.

It definitely makes sense to do so, but in principle there is no problem
with applying it only on some of them. That's especially useful if you try
this first on a test env and try to enforce a reproduction somehow (overload
the network, disconnect stuff, etc.).

OK will give it a try and report back.

Thanks and good luck.


Other ideas/opinions about how to enhance this part of the monitoring
are most welcome.

If this phenomenon is new for you, and you can reliably say it's not due to
a recent "natural" higher network load, I wonder if it's due to some weird
bug/change somewhere.

I'm quite sure that I see this since we moved to 4.4.(4).
Just for housekeeping, I'm running 4.4.7 now.

We use 'dig' as the network monitor since 4.3.5, around one year before 4.4
was released: https://bugzilla.redhat.com/1659052

Which version did you use before 4.4?

The last 4.3 versions have been 4.3.7, 4.3.9 and 4.3.10 before migrating
to 4.4.4.

I now realize that in the above-linked bug we only changed the default for new
setups. So if you deployed HE before 4.3.5, upgrading to a later 4.3 would not
change the default (as opposed to upgrading to 4.4, which was actually a
new deployment with engine backup/restore). Do you know which version
your cluster was originally deployed with?
Hm, I'm sorry, but I don't recall this. I'm quite sure that we started
with 4.0 something. But we moved to an HE setup around September 2019.
I don't recall the version, but we also restored the backup from
the old installation into the HE environment, if I'm not wrong.


Best regards,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RWZ76D2OZ4ZXEMEOWZVQ75IZHMJP2V6D/


[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Yedidyah Bar David
On Mon, Jul 19, 2021 at 11:02 AM Christoph Timm  wrote:
>
>
> Am 19.07.21 um 09:27 schrieb Yedidyah Bar David:
> > On Mon, Jul 19, 2021 at 10:04 AM Christoph Timm  wrote:
> >> Hi Didi,
> >>
> >> thank you for the quick response.
> >>
> >>
> >> Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:
> >>> On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:
>  Hi List,
> 
>  I'm trying to understand why my hosted engine is moved from one node to
>  another from time to time.
>  It is happening sometimes multiple times a day. But there are also days
>  without it.
> 
>  I can see the following in the ovirt-hosted-engine-ha/agent.log:
>  ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
>  Penalizing score by 1600 due to network status
> 
>  After that the engine will be shutdown and started on another host.
>  The oVirt Admin portal is showing the following around the same time:
>  Invalid status on Data Center Default. Setting status to Non Responsive.
> 
>  But the whole cluster is working normally during that time.
> 
>  I believe that I have somehow a network issue on my side but I have no
>  clue what kind of check is causing the network status to be penalized.
> 
>  Does anyone have an idea how to investigate this further?
> >>> Please check also broker.log. Do you see 'dig' failures?
> >> Yes I found them as well.
> >>
> >> Thread-1::WARNING::2021-07-19
> >> 08:02:00,032::network::120::network.Network::(_dns) DNS query failed:
> >> ; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
> >> ;; global options: +cmd
> >> ;; connection timed out; no servers could be reached
> >>
> >>> This happened several times already on our CI infrastructure, but yours is
> >>> the first report from an actual real user. See also:
> >>>
> >>> https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/
> >> So I understand that the following command is triggered to test the
> >> network: "dig +tries=1 +time=5"
> > Indeed.
> >
> >>> I didn't open a bug for this (yet?), also because I never reproduced on my
> >>> own machines and am not sure about the exact failing flow. If this is
> >>> reproducible
> >>> reliably for you, you might want to test the patch I pushed:
> >>>
> >>> https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596
> >> I'm happy to give it a try.
> >> Please confirm that I need to replace this file (network.py) on all my
> >> nodes (CentOS 8.4 based) which can host my engine.
> > It definitely makes sense to do so, but in principle there is no problem
> > with applying it only on some of them. That's especially useful if you try
> > this first on a test env and try to enforce a reproduction somehow (overload
> > the network, disconnect stuff, etc.).
> OK will give it a try and report back.

Thanks and good luck.

> >
> >>> Other ideas/opinions about how to enhance this part of the monitoring
> >>> are most welcome.
> >>>
> >>> If this phenomenon is new for you, and you can reliably say it's not due 
> >>> to
> >>> a recent "natural" higher network load, I wonder if it's due to some weird
> >>> bug/change somewhere.
> >> I'm quite sure that I see this since we moved to 4.4.(4).
> >> Just for housekeeping, I'm running 4.4.7 now.
> > We use 'dig' as the network monitor since 4.3.5, around one year before 4.4
> > was released: https://bugzilla.redhat.com/1659052
> >
> > Which version did you use before 4.4?
> The last 4.3 versions have been 4.3.7, 4.3.9 and 4.3.10 before migrating
> to 4.4.4.

I now realize that in the above-linked bug we only changed the default for new
setups. So if you deployed HE before 4.3.5, upgrading to a later 4.3 would not
change the default (as opposed to upgrading to 4.4, which was actually a
new deployment with engine backup/restore). Do you know which version
your cluster was originally deployed with?

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6X2DMNPAXCD34624CMBEZTZO4KU64KCG/


[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Christoph Timm


Am 19.07.21 um 09:27 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 10:04 AM Christoph Timm  wrote:

Hi Didi,

thank you for the quick response.


Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:

Hi List,

I'm trying to understand why my hosted engine is moved from one node to
another from time to time.
It is happening sometimes multiple times a day. But there are also days
without it.

I can see the following in the ovirt-hosted-engine-ha/agent.log:
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Penalizing score by 1600 due to network status

After that the engine will be shutdown and started on another host.
The oVirt Admin portal is showing the following around the same time:
Invalid status on Data Center Default. Setting status to Non Responsive.

But the whole cluster is working normally during that time.

I believe that I have somehow a network issue on my side but I have no
clue what kind of check is causing the network status to be penalized.

Does anyone have an idea how to investigate this further?

Please check also broker.log. Do you see 'dig' failures?

Yes I found them as well.

Thread-1::WARNING::2021-07-19
08:02:00,032::network::120::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
;; global options: +cmd
;; connection timed out; no servers could be reached


This happened several times already on our CI infrastructure, but yours is
the first report from an actual real user. See also:

https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/

So I understand that the following command is triggered to test the
network: "dig +tries=1 +time=5"

Indeed.


I didn't open a bug for this (yet?), also because I never reproduced on my
own machines and am not sure about the exact failing flow. If this is
reproducible
reliably for you, you might want to test the patch I pushed:

https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596

I'm happy to give it a try.
Please confirm that I need to replace this file (network.py) on all my
nodes (CentOS 8.4 based) which can host my engine.

It definitely makes sense to do so, but in principle there is no problem
with applying it only on some of them. That's especially useful if you try
this first on a test env and try to enforce a reproduction somehow (overload
the network, disconnect stuff, etc.).

OK will give it a try and report back.



Other ideas/opinions about how to enhance this part of the monitoring
are most welcome.

If this phenomenon is new for you, and you can reliably say it's not due to
a recent "natural" higher network load, I wonder if it's due to some weird
bug/change somewhere.

I'm quite sure that I see this since we moved to 4.4.(4).
Just for housekeeping, I'm running 4.4.7 now.

We use 'dig' as the network monitor since 4.3.5, around one year before 4.4
was released: https://bugzilla.redhat.com/1659052

Which version did you use before 4.4?
The last 4.3 versions have been 4.3.7, 4.3.9 and 4.3.10 before migrating 
to 4.4.4.
  


Best regards,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLU4ULXUXBUFCQV237LLX3OBGYBTEW6Q/


[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Yedidyah Bar David
On Mon, Jul 19, 2021 at 10:04 AM Christoph Timm  wrote:
>
> Hi Didi,
>
> thank you for the quick response.
>
>
> Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:
> > On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:
> >> Hi List,
> >>
> >> I'm trying to understand why my hosted engine is moved from one node to
> >> another from time to time.
> >> It is happening sometimes multiple times a day. But there are also days
> >> without it.
> >>
> >> I can see the following in the ovirt-hosted-engine-ha/agent.log:
> >> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> >> Penalizing score by 1600 due to network status
> >>
> >> After that the engine will be shutdown and started on another host.
> >> The oVirt Admin portal is showing the following around the same time:
> >> Invalid status on Data Center Default. Setting status to Non Responsive.
> >>
> >> But the whole cluster is working normally during that time.
> >>
> >> I believe that I have somehow a network issue on my side but I have no
> >> clue what kind of check is causing the network status to be penalized.
> >>
> >> Does anyone have an idea how to investigate this further?
> > Please check also broker.log. Do you see 'dig' failures?
> Yes I found them as well.
>
> Thread-1::WARNING::2021-07-19
> 08:02:00,032::network::120::network.Network::(_dns) DNS query failed:
> ; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
> ;; global options: +cmd
> ;; connection timed out; no servers could be reached
>
> >
> > This happened several times already on our CI infrastructure, but yours is
> > the first report from an actual real user. See also:
> >
> > https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/
> So I understand that the following command is triggered to test the
> network: "dig +tries=1 +time=5"

Indeed.

> >
> > I didn't open a bug for this (yet?), also because I never reproduced on my
> > own machines and am not sure about the exact failing flow. If this is
> > reproducible
> > reliably for you, you might want to test the patch I pushed:
> >
> > https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596
> I'm happy to give it a try.
> Please confirm that I need to replace this file (network.py) on all my
> nodes (CentOS 8.4 based) which can host my engine.

It definitely makes sense to do so, but in principle there is no problem
with applying it only on some of them. That's especially useful if you try
this first on a test env and try to enforce a reproduction somehow (overload
the network, disconnect stuff, etc.).

> >
> > Other ideas/opinions about how to enhance this part of the monitoring
> > are most welcome.
> >
> > If this phenomenon is new for you, and you can reliably say it's not due to
> > a recent "natural" higher network load, I wonder if it's due to some weird
> > bug/change somewhere.
> I'm quite sure that I see this since we moved to 4.4.(4).
> Just for housekeeping, I'm running 4.4.7 now.

We use 'dig' as the network monitor since 4.3.5, around one year before 4.4
was released: https://bugzilla.redhat.com/1659052

Which version did you use before 4.4?

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PI23BOXRQSK2HTJWIOT2RTFUJFK7LXFT/


[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Christoph Timm

Hi Didi,

thank you for the quick response.


Am 19.07.21 um 07:59 schrieb Yedidyah Bar David:

On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:

Hi List,

I'm trying to understand why my hosted engine is moved from one node to
another from time to time.
It is happening sometimes multiple times a day. But there are also days
without it.

I can see the following in the ovirt-hosted-engine-ha/agent.log:
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Penalizing score by 1600 due to network status

After that the engine will be shutdown and started on another host.
The oVirt Admin portal is showing the following around the same time:
Invalid status on Data Center Default. Setting status to Non Responsive.

But the whole cluster is working normally during that time.

I believe that I have somehow a network issue on my side but I have no
clue what kind of check is causing the network status to be penalized.

Does anyone have an idea how to investigate this further?

Please check also broker.log. Do you see 'dig' failures?

Yes I found them as well.

Thread-1::WARNING::2021-07-19 
08:02:00,032::network::120::network.Network::(_dns) DNS query failed:

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> +tries=1 +time=5
;; global options: +cmd
;; connection timed out; no servers could be reached



This happened several times already on our CI infrastructure, but yours is
the first report from an actual real user. See also:

https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/
So I understand that the following command is triggered to test the 
network: "dig +tries=1 +time=5"


I didn't open a bug for this (yet?), also because I never reproduced on my
own machines and am not sure about the exact failing flow. If this is
reproducible
reliably for you, you might want to test the patch I pushed:

https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596

I'm happy to give it a try.
Please confirm that I need to replace this file (network.py) on all my 
nodes (CentOS 8.4 based) which can host my engine.


Other ideas/opinions about how to enhance this part of the monitoring
are most welcome.

If this phenomenon is new for you, and you can reliably say it's not due to
a recent "natural" higher network load, I wonder if it's due to some weird
bug/change somewhere.

I'm quite sure that I see this since we moved to 4.4.(4).
Just for housekeeping, I'm running 4.4.7 now.


Thanks and best regards,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RBBILRNRT57YNREOKAYWWZFCJE5ACZRY/


[ovirt-users] Re: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status

2021-07-19 Thread Yedidyah Bar David
On Mon, Jul 19, 2021 at 8:39 AM Christoph Timm  wrote:
>
> Hi List,
>
> I'm trying to understand why my hosted engine is moved from one node to
> another from time to time.
> It is happening sometimes multiple times a day. But there are also days
> without it.
>
> I can see the following in the ovirt-hosted-engine-ha/agent.log:
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Penalizing score by 1600 due to network status
>
> After that the engine will be shutdown and started on another host.
> The oVirt Admin portal is showing the following around the same time:
> Invalid status on Data Center Default. Setting status to Non Responsive.
>
> But the whole cluster is working normally during that time.
>
> I believe that I have somehow a network issue on my side but I have no
> clue what kind of check is causing the network status to be penalized.
>
> Does anyone have an idea how to investigate this further?

Please check also broker.log. Do you see 'dig' failures?

This happened several times already on our CI infrastructure, but yours is
the first report from an actual real user. See also:

https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/LIGS5WXGEKWACY5GCK7Z6Q2JYVWJ6JBF/

I didn't open a bug for this (yet?), also because I never reproduced on my
own machines and am not sure about the exact failing flow. If this is
reproducible
reliably for you, you might want to test the patch I pushed:

https://gerrit.ovirt.org/c/ovirt-hosted-engine-ha/+/115596

Other ideas/opinions about how to enhance this part of the monitoring
are most welcome.

If this phenomenon is new for you, and you can reliably say it's not due to
a recent "natural" higher network load, I wonder if it's due to some weird
bug/change somewhere.

Thanks and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5F3I646BN3SFT6QJNGYFXMO27ZPRMJZI/