[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-11 Thread dhanaraj.ramesh--- via Users
I wanted to remove beclovkvma02.bec.net because the node was dead. I have now reinstalled 
that machine and am trying to add it back as a 4th node, beclovkvma04.bec.net. However, since 
the system UUID is the same, I'm not able to add the node to the oVirt Gluster cluster.
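
For anyone hitting the same situation, here is a rough sketch of the usual order of 
operations, assuming the dead host is still registered both as a Gluster peer and as a 
host in the oVirt engine (hostnames are the ones from this thread; verify each step 
against your own setup before running anything):

# 1. Confirm the dead node is still part of the trusted storage pool.
gluster peer status

# 2. Remove the dead node's bricks from every volume first (see the
#    remove-brick discussion later in this thread); glusterd refuses to
#    detach a peer that still hosts bricks.

# 3. Detach the dead peer so the pool forgets its UUID.
gluster peer detach beclovkvma02.bec.net force

# 4. Remove the dead host from the oVirt engine (Compute > Hosts > Remove)
#    before adding the reinstalled machine: as noted above, the engine
#    identifies hosts by hardware UUID, which does not change on reinstall.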
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XXY6FD7G6PUYKEBQ6ZORVYZI4L6NSRFW/


[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-11 Thread dhanaraj.ramesh--- via Users
Hi Strahil Nikolov


Volume Name: datastore1
Type: Distributed-Replicate
Volume ID: bc362259-14d4-4357-96bd-8db6492dc788
Status: Started
Snapshot Count: 0
Number of Bricks: 7 x (2 + 1) = 21
Transport-type: tcp
Bricks:
Brick1: beclovkvma01.bec.net:/data/brick2/brick2
Brick2: beclovkvma02.bec.net:/data/brick2/brick2
Brick3: beclovkvma03.bec.net:/data/brick1/brick2 (arbiter)
Brick4: beclovkvma01.bec.net:/data/brick3/brick3
Brick5: beclovkvma02.bec.net:/data/brick3/brick3
Brick6: beclovkvma03.bec.net:/data/brick1/brick3 (arbiter)
Brick7: beclovkvma01.bec.net:/data/brick4/brick4
Brick8: beclovkvma02.bec.net:/data/brick4/brick4
Brick9: beclovkvma03.bec.net:/data/brick1/brick4 (arbiter)
Brick10: beclovkvma01.bec.net:/data/brick5/brick5
Brick11: beclovkvma02.bec.net:/data/brick5/brick5
Brick12: beclovkvma03.bec.net:/data/brick1/brick5 (arbiter)
Brick13: beclovkvma01.bec.net:/data/brick6/brick6
Brick14: beclovkvma02.bec.net:/data/brick6/brick6
Brick15: beclovkvma03.bec.net:/data/brick1/brick6 (arbiter)
Brick16: beclovkvma01.bec.net:/data/brick7/brick7
Brick17: beclovkvma02.bec.net:/data/brick7/brick7
Brick18: beclovkvma03.bec.net:/data/brick1/brick7 (arbiter)
Brick19: beclovkvma01.bec.net:/data/brick8/brick8
Brick20: beclovkvma02.bec.net:/data/brick8/brick8
Brick21: beclovkvma03.bec.net:/data/brick1/brick8 (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 4
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
performance.low-prio-threads: 32
storage.owner-gid: 36
storage.owner-uid: 36
network.remote-dio: off

I want to remove all the bricks from beclovkvma02.bec.net, as node 2 was dead; the node has 
now been reinstalled.
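
As a quick sanity check before removing anything, `gluster volume status` shows which 
bricks are actually offline (a sketch, using the volume name above):

gluster volume status datastore1
# Bricks on beclovkvma02.bec.net should show "N" in the Online column and
# no PID while that node is down.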
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZC26PQDDYVGE42AKMCX2ADWH7C2OX3FE/


[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-11 Thread Strahil Nikolov via Users
Please provide  'gluster volume info datastore1' and specify which bricks you 
want to remove.

Best Regards,
Strahil Nikolov


On Thu, Nov 11, 2021 at 6:13, dhanaraj.ramesh--- via Users wrote:

Hi Strahil Nikolov

Thank you for the suggestion but it does not help... 



[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 1 
beclovkvma02.bec.net:/data/brick2/brick2 
beclovkvma02.bec.net:/data/brick3/brick3 
beclovkvma02.bec.net:/data/brick4/brick4 
beclovkvma02.bec.net:/data/brick5/brick5 
beclovkvma02.bec.net:/data/brick6/brick6 
beclovkvma02.bec.net:/data/brick7/brick7 
beclovkvma02.bec.net:/data/brick8/brick8 force
Remove-brick force will not migrate files from the removed bricks, so they will 
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: failed: need 14(xN) bricks for reducing 
replica count of the volume from 3 to 1
[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 1 
beclovkvma02.bec.net:/data/brick2/brick2 force
Remove-brick force will not migrate files from the removed bricks, so they will 
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: failed: need 14(xN) bricks for reducing 
replica count of the volume from 3 to 1
[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 2 
beclovkvma02.bec.net:/data/brick2/brick2 force
Remove-brick force will not migrate files from the removed bricks, so they will 
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: failed: need 7(xN) bricks for reducing 
replica count of the volume from 3 to 2
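
The errors are about arithmetic: the volume has 7 distribute subvolumes, so dropping from 
replica 3 (arbiter) to replica 1 means removing 2 bricks per subvolume (14 bricks in one 
command), and dropping to replica 2 means removing 7. A sketch of the replica-1 variant, 
listing the dead node's data bricks together with the matching arbiter bricks exactly as 
they appear in the volume info earlier in this thread. This is only a sketch, not a 
recommendation: remove-brick with force discards the removed copies and leaves only 
beclovkvma01's data, so check heal state and backups first.

gluster volume remove-brick datastore1 replica 1 \
  beclovkvma02.bec.net:/data/brick2/brick2 beclovkvma03.bec.net:/data/brick1/brick2 \
  beclovkvma02.bec.net:/data/brick3/brick3 beclovkvma03.bec.net:/data/brick1/brick3 \
  beclovkvma02.bec.net:/data/brick4/brick4 beclovkvma03.bec.net:/data/brick1/brick4 \
  beclovkvma02.bec.net:/data/brick5/brick5 beclovkvma03.bec.net:/data/brick1/brick5 \
  beclovkvma02.bec.net:/data/brick6/brick6 beclovkvma03.bec.net:/data/brick1/brick6 \
  beclovkvma02.bec.net:/data/brick7/brick7 beclovkvma03.bec.net:/data/brick1/brick7 \
  beclovkvma02.bec.net:/data/brick8/brick8 beclovkvma03.bec.net:/data/brick1/brick8 \
  force

Once the pool is healthy again and the reinstalled node has been peered, its bricks can 
later be added back with add-brick to restore the replica/arbiter layout (in one or two 
steps, depending on the Gluster version).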
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7M7IZI7AP7NWDNMXBXNYZZSSY64UTPMC/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KXZVE62TCDE3G747RXEBAETET2LMHDGU/


[ovirt-users] Re: upgrade dependency issues

2021-11-11 Thread David White via Users
Hi team,
I saw that RHEL 8.5 was released yesterday, so I just put one of my hosts that 
doesn't have local gluster storage into maintenance mode and again attempted an 
update.

The update again failed through the oVirt web UI, and a `yum update` from the 
command line again failed with the same issues as I documented in this email 
thread earlier.
So I ran `yum update --disablerepo ovirt\*` successfully, then I tried a 
standard yum update again.

I'm still getting the same problems. Is there still an issue with upgrading to 
the latest version of oVirt, even when on RHEL 8.5?
I'm not going to paste the full stdout here because there's a lot, but here are 
all of the "Problem" lines.

Should I try --nobest here?

Problem 1: cannot install the best update candidate for package 
vdsm-4.40.80.6-1.el8.x86_64
Problem 2: package vdsm-gluster-4.40.90.4-1.el8.x86_64 requires vdsm = 
4.40.90.4-1.el8, but none of the providers can be installed
Problem 3: package ovirt-host-dependencies-4.4.9-2.el8.x86_64 requires vdsm >= 
4.40.90, but none of the providers can be installed
Problem 4: package ovirt-host-4.4.9-2.el8.x86_64 requires 
ovirt-host-dependencies = 4.4.9-2.el8, but none of the providers can be 
installed
Problem 5: package ovirt-provider-ovn-driver-1.2.34-1.el8.noarch requires vdsm, 
but none of the providers can be installed
Problem 6: package ovirt-hosted-engine-ha-2.4.9-1.el8.noarch requires vdsm >= 
4.40.0, but none of the providers can be installed
Problem 7: problem with installed package vdsm-4.40.80.6-1.el8.x86_64
Problem 8: problem with installed package vdsm-gluster-4.40.80.6-1.el8.x86_64
Problem 9: problem with installed package 
ovirt-provider-ovn-driver-1.2.34-1.el8.noarch
Problem 10: problem with installed package 
ovirt-hosted-engine-ha-2.4.8-1.el8.noarch
Problem 11: package ovirt-hosted-engine-setup-2.5.4-2.el8.noarch requires 
ovirt-hosted-engine-ha >= 2.4, but none of the providers can be installed
Problem 12: problem with installed package 
ovirt-host-dependencies-4.4.8-1.el8.x86_64

I tried rebooting the system, but am getting the same errors, even on the RHEL 
8.5 kernel.
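
A couple of generic dnf checks that might help narrow this down (nothing oVirt-specific, 
just standard dnf commands; note the flag is spelled --nobest):

# Which vdsm builds are visible, and from which repositories?
dnf --showduplicates list vdsm

# Which oVirt repositories are actually enabled on this host?
dnf repolist enabled | grep -i ovirt

# Let the transaction proceed with an older candidate instead of failing:
dnf update --nobest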

As an aside, I'm having a really frustrating time with my network configuration 
somehow getting partially reset on every system reboot, and it always takes me a while 
to get full network connectivity back up. I'm running into that issue yet again, but I 
don't want to derail the topic here... I can send another email if necessary.

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐

On Tuesday, October 26th, 2021 at 9:37 AM, Sandro Bonazzola wrote:

> On Tue, Oct 26, 2021 at 15:32 Gianluca Cecchi wrote:
>
> > On Tue, Oct 26, 2021 at 12:12 PM Sandro Bonazzola wrote:
> >
> > > Thanks for the report, my team is looking into the dependency failures.
> > > oVirt 4.4.9 has been developed on CentOS Stream 8 and some dependencies
> > > are not yet available on RHEL 8.4 and derivatives.
> >
> > OK, fair enough that you only test on CentOS Stream 8, but I think you
> > should at least change what you are going to write in the next release
> > notes, listing only what was actually tested.
> >
> > For 4.4.9 there was:
> >
> > "
> > This release is available now on x86_64 architecture for:
> >
> > -   Red Hat Enterprise Linux 8.4
> > -   CentOS Linux (or similar) 8.4
> > -   CentOS Stream 8
> >
> > This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> > for:
> >
> > -   Red Hat Enterprise Linux 8.4
> > -   CentOS Linux (or similar) 8.4
> > -   oVirt Node NG (based on CentOS Stream 8)
> > -   CentOS Stream 8
> > "
> >
> > So one understands that at least installation/upgrade from 4.4.8 to 4.4.9
> > has been validated when the hosts are on CentOS 8.4 or RHEL 8.4, which are
> > currently the latest released 8.4 builds, while it seems both fail right
> > now, correct?
> >
> > Gianluca
>
> It fails right now, correct.
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbona...@redhat.com
>
> Red Hat respects your work life balance. Therefore there is no need to answer
> this email out of your office hours.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LD6FNYUOBMKQWK6YHRXCLB54XUSSIDEZ/


[ovirt-users] [ANN] Async oVirt engine release for oVirt 4.4.9

2021-11-11 Thread Lev Veyde
On November 11th, 2021, the oVirt project released an async update of the oVirt
engine (4.4.9.5) and engine sdk-java (4.4.6).

Changes:

   - Reload NM configuration instead of service restart (Fixes BZ#2019807)


-- 

Lev Veyde

Senior Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DHJNFCQ4IXI4B5YGJ3J3JNXIG2U6Q6SG/


[ovirt-users] Re: resources.ovirt.org migration

2021-11-11 Thread Denis Volkov
The migration is complete.

In case of issues, please create a ticket in the issue tracking system:
https://issues.redhat.com/projects/CPDEVOPS/issues



On Thu, Nov 11, 2021 at 12:44 PM Denis Volkov  wrote:

> Hello
>
> Server `resources.ovirt.org` hosting repositories for ovirt is going to
> be migrated to new hardware in a different datacenter within the next
> couple of hours.
> No service interruption is expected during the migration. I will follow up
> when the process is complete.
> --
>
> Denis Volkov

-- 

Denis Volkov


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F6ONEK33LN2OA7YGQA7DFKZXKQQ7P6MD/


[ovirt-users] resources.ovirt.org migration

2021-11-11 Thread Denis Volkov
Hello

Server `resources.ovirt.org` hosting repositories for ovirt is going to be
migrated to new hardware in a different datacenter within the next couple
of hours.
No service interruption is expected during the migration. I will follow up
when the process is complete.
-- 

Denis Volkov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D6BMUWRKJAVYIMNOJF3OMHTOUNYFLBCM/