[ovirt-users] Re: unsynced after remove brick

2023-01-19 Thread Dominique Deschênes

Hello, 

Time seems to have solved the problem.
I don't have any errors now.

[root@ovnode2 ~]# gluster volume heal datassd info
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd
Status: Connected
Number of entries: 0

Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd
Status: Connected
Number of entries: 0



Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259


          

- Message received -
From: Dominique D (dominique.desche...@gcgenicom.com)
Date: 19/01/23 08:27
To: users@ovirt.org
Subject: [ovirt-users] unsynced after remove brick

Hello,

Yesterday I had to remove the brick of my first server (HCI with 3 servers) for 
maintenance and to recover the hard disks.

3 servers, with 4 disks per server in RAID 5 and 1 brick per server.

I ran:

gluster volume remove-brick data replica 2 
ovnode1s.telecom.lan:/gluster_bricks/datassd/datassd force
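For reference, the inverse operation that would return the volume to replica 3 once the server is back could look roughly like this. Treat it as a sketch, not a verified procedure: the thread mixes the volume names "data" and "datassd" ("datassd" is assumed here), and the brick directory must be empty or recreated before re-adding.

```shell
# Re-add the repaired brick, then trigger and monitor the self-heal that
# repopulates it from the two surviving replicas.
gluster volume add-brick datassd replica 3 \
    ovnode1s.telecom.lan:/gluster_bricks/datassd/datassd force
gluster volume heal datassd full
gluster volume heal datassd info   # "Number of entries" should fall to 0
```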

After deleting the brick, I had 8 unsynced entries present, and this morning I 
have 6.

What should I do to resolve these unsynced entries?


[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovnode2s.telecom.lan:/gluster_bricks/
datassd/datassd                             49152     0          Y       2431
Brick ovnode3s.telecom.lan:/gluster_bricks/
datassd/datassd                             49152     0          Y       2379
Self-heal Daemon on localhost               N/A       N/A        Y       2442
Self-heal Daemon on ovnode3s.telecom.lan    N/A       N/A        Y       2390

Task Status of Volume datassd
------------------------------------------------------------------------------

[root@ovnode2 ~]# gluster volume heal datassd info
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6

Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZEKDPDGK2LQGCUR5QPNJNMUNOBVGVOB5/

[ovirt-users] Re: gluster 5834 Unsynced entries present

2021-10-01 Thread Dominique Deschênes

Thank you very much. It took a few minutes, but now I don't have any more 
unsynced entries.


[root@ovnode2 glusterfs]# gluster volume heal datassd info | grep entries | 
sort | uniq -c
3 Number of entries: 0
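The one-liner above can be exercised against a canned sample of heal-info output, no cluster needed (the brick names below are illustrative):

```shell
# Count "Number of entries" lines per distinct value, exactly as the
# pipeline in this thread does, using a canned heal-info sample.
sample='Brick ovnode1s:/gluster_bricks/datassd/datassd
Number of entries: 0
Brick ovnode2s:/gluster_bricks/datassd/datassd
Number of entries: 0
Brick ovnode3s:/gluster_bricks/datassd/datassd
Number of entries: 0'
printf '%s\n' "$sample" | grep entries | sort | uniq -c
# prints a single counted line: "3 Number of entries: 0"
```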




Dominique D

- Message received -
From: Strahil Nikolov via Users (users@ovirt.org)
Date: 01/10/21 11:05
To: Dominique D (dominique.desche...@gcgenicom.com), users@ovirt.org
Subject: [ovirt-users] Re: gluster 5834 Unsynced entries present

Put ovnode2 in maintenance (tick the option for stopping Gluster), wait until 
all VMs evacuate and the host is really in maintenance, then activate it back.


Restarting glusterd should also do the trick, but it's always better to ensure 
no Gluster processes have been left running (including the mount points).
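A sketch of that check on the affected host, once it is in maintenance (commands assume systemd and the volume name from this thread; verify against your setup):

```shell
# Restart glusterd, make sure no stale Gluster processes or FUSE mounts
# remain, then re-check pending heals.
systemctl restart glusterd
pgrep -af gluster          # should show only freshly started daemons
mount | grep glusterfs     # no stale gluster mounts expected in maintenance
gluster volume heal datassd info
```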

Best Regards,
Strahil Nikolov


On Fri, Oct 1, 2021 at 17:06, Dominique D wrote:
Yesterday I had a glitch and my second server, ovnode2, restarted.
Here are some errors in the events:
VDSM ovnode3.telecom.lan command SpmStatusVDS failed: Connection timeout for 
host 'ovnode3.telecom.lan', last response arrived 2455 ms ago.
Host ovnode3.telecom.lan is not responding. It will stay in Connecting state 
for a grace period of 86 seconds and after that an attempt to fence the host 
will be issued.
Invalid status on Data Center Default. Setting Data Center status to Non 
Responsive (On host ovnode3.telecom.lan, Error: Network error during 
communication with the Host.).
Executing power management status on Host ovnode3.telecom.lan using Proxy Host 
ovnode1.telecom.lan and Fence Agent ipmilan:10.5.1.16.
Now my 3 bricks show errors on my Gluster volume:

[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovnode1s.telecom.lan:/gluster_bricks/
datassd/datassd                            49152    0          Y      4027
Brick ovnode2s.telecom.lan:/gluster_bricks/
datassd/datassd                            49153    0          Y      2393
Brick ovnode3s.telecom.lan:/gluster_bricks/
datassd/datassd                            49152    0          Y      2347
Self-heal Daemon on localhost              N/A      N/A        Y      2405
Self-heal Daemon on ovnode3s.telecom.lan    N/A      N/A        Y      2366
Self-heal Daemon on 172.16.70.91            N/A      N/A        Y      4043
Task Status of Volume datassd
------------------------------------------------------------------------------
There are no active volume tasks

gluster volume heal datassd info | grep -i "Number of entries:" | grep -v "entries: 0"
Number of entries: 5759
In the webadmin all the bricks are green, with comments for two of them:
ovnode1: Up, 5834 Unsynced entries present
ovnode2: Up
ovnode3: Up, 5820 Unsynced entries present
I tried this, without success:
gluster volume heal datassd
Launching heal operation to perform index self heal on volume datassd has been 
unsuccessful:
Glusterd Syncop Mgmt brick op 'Heal' failed. Please check the glustershd log 
file for details.
What are the next steps?
Thank you




[ovirt-users] Re: Import OVA problem

2021-08-31 Thread Dominique Deschênes

Hi Saif,
 

Here the error in engine.log


2021-08-31 10:22:26,536-04 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-8) [82b7a8d4-15d1-4553-94e5-34506d174ac4] EVENT_ID: 
ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Run query script.
2021-08-31 10:22:29,640-04 ERROR [org.ovirt.engine.core.utils.ovf.OvfManager] 
(default task-8) [82b7a8d4-15d1-4553-94e5-34506d174ac4] Error parsing OVF due 
to Error loading ovf, message The element type "SpecParams" must be terminated 
by the matching end-tag "</SpecParams>".
2021-08-31 10:22:29,662-04 ERROR [org.ovirt.engine.core.utils.ovf.OvfManager] 
(default task-8) [82b7a8d4-15d1-4553-94e5-34506d174ac4] Error parsing OVF due 
to Error loading ovf, message The element type "SpecParams" must be terminated 
by the matching end-tag "</SpecParams>".
2021-08-31 10:22:30,694-04 ERROR [org.ovirt.engine.core.utils.ovf.OvfManager] 
(default task-8) [82b7a8d4-15d1-4553-94e5-34506d174ac4] Error parsing OVF due 
to Error loading ovf, message XML document structures must start and end within 
the same entity.

And my ansible log file 

https://drive.google.com/file/d/1qnd-Ut1tHr5X3jrKcsPsxOz8ObPTuQ7v/view?usp=sharing


Dominique 


- Original Message -
From: Saif Abu Saleh (sabus...@redhat.com)
Date: 31/08/21 09:54
To: Dominique D (dominique.desche...@gcgenicom.com)
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Import OVA problem

Hi Dominique,

Hope you are doing well

In order to investigate this, we need to check the log files.
Can you please try the import again and provide the following log files after 
the import:

- engine.log, from location: 
/var/log/ovirt-engine/engine.log
- ansible log file (depending on the time of the import), from location: 
/var/log/ovirt-engine/ova

Thanks,
Saif

 



On Sun, Aug 29, 2021 at 6:08 PM Dominique D wrote:
I tried exporting 4 VMs (2 CentOS and 2 Windows) to an NFS mount (Export OVA) 
and importing from OVA, and I can't see all the files.

Here are the files in the directory

-rw---+ 1 root root 10739331584 Aug 29 10:27 linux2.ova
-rw---+ 1 root root  1784370176 Aug 28 12:00 linuxtest.ova
-rw---+ 1 root root 64434655232 Aug 28 13:57 vdiw10-2004v1C.ova
-rw---+ 1 vdsm kvm  49040712192 Aug 27 17:21 vdiw10rg06.ova

The only files I see as virtual machines on the source are linuxtest.ova and 
linux2.ova. Do you know why?

ovirt 4.4.6


[ovirt-users] Re: Disk (brick) failure on my stack

2021-06-22 Thread Dominique Deschênes

Hi Strahil


here :


[root@ovnode2 ~]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: c6535ef6-c5c5-4097-8eb4-90b9254570c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.70.91:/gluster_bricks/data/data
Brick2: 172.16.70.92:/gluster_bricks/data/data
Brick3: 172.16.70.93:/gluster_bricks/data/data
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on




Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259


          

- Message received -
From: Strahil Nikolov via Users (users@ovirt.org)
Date: 21/06/21 23:38
To: dominique.desche...@gcgenicom.com, users@ovirt.org
Subject: [ovirt-users] Re: Disk (brick) failure on my stack

I'm not sure about the GUI (though I think it has the option), but on the 
command line you have several options:

1. Use Gluster's 'remove-brick replica 2' (with the force flag) and then 
'add-brick replica 3'.
2. Use the old way: 'replace-brick'.

If you need guidance, please provide the 'gluster volume info ' output.

Best Regards,
Strahil Nikolov
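Option 2 can be sketched for the "data" volume shown earlier in this thread. The replacement brick path is a hypothetical placeholder, and the `commit force` form should be checked against your Gluster version:

```shell
# Replace the dead brick on 172.16.70.92 with a fresh one in a single step;
# Gluster then heals the new brick from the two healthy replicas.
# /gluster_bricks/data_new/data is a placeholder for the rebuilt RAID's path.
gluster volume replace-brick data \
    172.16.70.92:/gluster_bricks/data/data \
    172.16.70.92:/gluster_bricks/data_new/data \
    commit force
gluster volume heal data info    # watch the heal counters drain
```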


On Tue, Jun 22, 2021 at 2:01, Dominique D wrote:
Yesterday I had a disk failure on my stack of 3 oVirt 4.4.1 nodes.
On each server I have 3 bricks (engine, data, vmstore):
brick data: 4 x 600 GB RAID 0, /dev/gluster_vg_sdb/gluster_lv_data mounted at 
/gluster_bricks/data
brick engine: 2 x 1 TB RAID 1, /dev/gluster_vg_sdc/gluster_lv_engine mounted at 
/gluster_bricks/engine
brick vmstore: 2 x 1 TB RAID 1, /dev/gluster_vg_sdc/gluster_lv_vmstore mounted 
at /gluster_bricks/vmstore
Everything was configured through the GUI (hyperconverged and hosted-engine).
It is the RAID 0 on the 2nd server that broke.
All VMs were automatically moved to the other two servers; I haven't lost any 
data. Host 2 is now in maintenance mode.
I am going to buy 4 new SSDs to replace the 4 disks of the defective RAID 0.
Once I erase the faulty RAID 0 and create the new array with the new disks on 
the RAID controller, how do I add it back in oVirt so that it resynchronizes 
with the other data bricks?
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.70.91:/gluster_bricks/data/data  49153   0          Y       79168
Brick 172.16.70.92:/gluster_bricks/data/data  N/A     N/A        N       N/A
Brick 172.16.70.93:/gluster_bricks/data/data  49152   0          Y       3095
Self-heal Daemon on localhost                 N/A     N/A        Y       2528
Self-heal Daemon on 172.16.70.91              N/A     N/A        Y       225523
Self-heal Daemon on 172.16.70.93              N/A     N/A        Y       3121





[ovirt-users] Re: Hosted-engine fail and host reboot

2021-06-04 Thread Dominique Deschênes

Yes, I think so


Dominique Deschênes


- Message received -
From: Strahil Nikolov via Users (users@ovirt.org)
Date: 02/06/21 13:02
To: Dominique Deschênes (dominique.desche...@gcgenicom.com), Yedidyah Bar David 
(d...@redhat.com)
Cc: users (users@ovirt.org)
Subject: [ovirt-users] Re: Hosted-engine fail and host reboot

In https://github.com/gluster/gluster-ansible-infra there is an example with:
vars:
     # Firewall setup
     gluster_infra_fw_ports:
           - 5900-6923/tcp
Maybe that's causing the problem?

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Hosted-engine fail and host reboot

2021-05-31 Thread Dominique Deschênes

Hi,

I tried to deploy hosted-engine without deploying gluster (Hyperconverged) and 
I did not need to remove 6900/tcp.


Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259


          

- Message reçu -
De: Yedidyah Bar David (d...@redhat.com)
Date: 31/05/21 02:36
À: Dominique Deschênes (dominique.desche...@gcgenicom.com)
Cc: Strahil Nikolov (hunter86...@yahoo.com), users (users@ovirt.org)
Objet: Re: [ovirt-users] Re: Hosted-engine fail and host reboot

On Sat, May 29, 2021 at 7:03 PM Dominique Deschênes wrote:
>
> Hi Strahil,
>
> I did that and it worked.

Thanks for the report.

This looks identical to a similar case from a few weeks ago [1].

Any chance you can try checking what/who did this change to your
firewall conf prior to deployment?

It sounds like a new change somewhere.

Thanks and best regards,

[1] 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/5SEB6PJCFTLXKOIBFIECQVJOPBHZJWIR/

>
> firewall-cmd --zone=public --remove-port=6900/tcp
> firewall-cmd --runtime-to-permanent
> hosted-engine --deploy
>
> Thank you
>
> Dominique
>
> - Message reçu -
> 
> De: Strahil Nikolov (hunter86...@yahoo.com)
> Date: 28/05/21 14:10
> À: Dominique D (dominique.desche...@gcgenicom.com), users@ovirt.org
> Objet: Re: [ovirt-users] Re: Hosted-engine fail and host reboot
>
> Maybe you can remove 6900/tcp from firewalld and try again ?
>
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, May 27, 2021 at 19:43, Dominique D wrote:
> It seems to be this problem.
>
> I tried to install it again with version 4.4.6-2021051809 and I get this 
> message.
>
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "ERROR: 
> Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED: 
> '6900:tcp' already in 'public' Non-permanent operation"}
>



--
Didi




[ovirt-users] Re: Hosted-engine fail and host reboot

2021-05-29 Thread Dominique Deschênes

Hi Strahil,

I did that and it worked.

firewall-cmd --zone=public --remove-port=6900/tcp
firewall-cmd --runtime-to-permanent
hosted-engine --deploy
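Before rerunning the deploy, the firewalld state can be double-checked (a small sketch; zone name taken from the commands above):

```shell
# Confirm 6900/tcp is gone from both runtime and permanent configuration
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --permanent --list-ports
```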

Thank you


Dominique 

- Message received -
From: Strahil Nikolov (hunter86...@yahoo.com)
Date: 28/05/21 14:10
To: Dominique D (dominique.desche...@gcgenicom.com), users@ovirt.org
Subject: Re: [ovirt-users] Re: Hosted-engine fail and host reboot

Maybe you can remove 6900/tcp from firewalld and try again ?

Best Regards,
Strahil Nikolov


On Thu, May 27, 2021 at 19:43, Dominique D wrote:
It seems to be this problem.

I tried to install it again with version 4.4.6-2021051809 and I get this 
message.

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "ERROR: 
Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED: 
'6900:tcp' already in 'public' Non-permanent operation"}


[ovirt-users] Gluster Volume Type Distributed

2020-08-27 Thread Dominique Deschênes


Hi Everyone,

I would like to use the Distributed volume type, but it is grayed out; I can 
only use the Replicate type.

Is that normal?


3 ovirt Servers 4.4.1-2020080418

Can I configure a Replicate volume for the engine domain and a Distributed 
volume for the data domain?





Thank you


Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259


          




[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-17 Thread Dominique Deschênes

Hi,

I used the oVirt ISO file ovirt-node-ng-installer-4.4.1-2020070811.el8.iso 
(July 8). I just saw that there is a new version from July 13 
(4.4.1-2020071311); I will try it.


Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259


          

- Message received -
From: Strahil Nikolov (hunter86...@yahoo.com)
Date: 17/07/20 04:03
To: Dominique Deschênes (dominique.desche...@gcgenicom.com), clam2...@gmail.com, 
users@ovirt.org
Subject: Re: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster 
deploy fails insufficient free space no matter how small the volume is set

What version of CentOS 8 are you using: Stream or regular, and which release?

Best Regards,
Strahil Nikolov

On 16 July 2020 at 21:07:57 GMT+03:00, "Dominique Deschênes" wrote:
>
>
>Hi,
>Thank you for your answers.
>
>I tried replacing "package" with "dnf". The Gluster installation then seems
>to work well, but I hit a similar message during the deployment of the
>hosted engine.
>
>Here is the error
>
>[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed":
>false, "msg": "The Python 2 yum module is needed for this module. If
>you require Python 3 support use the `dnf` Ansible module instead."}
>
>
>
>
>
>Dominique Deschênes
>Ingénieur chargé de projets, Responsable TI
>816, boulevard Guimond, Longueuil J4G 1T5
> 450 670-8383 x105  450 670-2259
>
>
>        
>
>- Message received -
>From: clam2...@gmail.com
>Date: 16/07/20 13:40
>To: users@ovirt.org
>Subject: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged
>Gluster deploy fails insufficient free space no matter how small the
>volume is set
>
>Dear Strahil, Dominique and Edward:
>
>I reimaged the three hosts with
>ovirt-node-ng-installer-4.4.1-2020071311.el8.iso just to be sure
>everything was stock (I had upgraded from v4.4) and attempted a
>redeploy with all suggested changes EXCEPT replacing "package" with
>"dnf" --> same failure.  I then made Strahil's recommended replacement
>of "package" with "dnf" and the Gluster deployment succeeded through
>that section of main.yml only to fail a little later at:
>
>- name: Install python-yaml package for Debian systems
> package:
>   name: python-yaml
>   state: present
> when: ansible_distribution == "Debian" or ansible_distribution ==
>"Ubuntu"
>
>I found this notable given that I had not replaced "package" with "dnf"
>in the prior section:
>
>- name: Change to Install lvm tools for debian systems.
> package:
>   name: thin-provisioning-tools
>   state: present
> when: ansible_distribution == "Debian" or ansible_distribution ==
>"Ubuntu"
>
>and deployment had not failed here.  Anyhow, I deleted the two Debian
>statements as I am deploying from Node (CentOS based), cleaned up,
>cleaned up my drives ('dmsetup remove eui.xxx...' and 'wipefs --all
>--force /dev/nvme0n1 /dev/nvmeXn1 ...')  and redeployed again.  This
>time Gluster deployment seemed to execute main.yml OK only to fail in a
>new file, vdo_create.yml:
>
>TASK [gluster.infra/roles/backend_setup : Install VDO dependencies]
>
>task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>
>Expecting that this might continue, I have been looking into the
>documentation of how "package" works and if I can find a root cause for
>this rather than reviewing n *.yml files and replacing "package" with
>"dnf" in all of them.  Thank you VERY much to Strahil for helping me!
>
>If Strahil or anyone else has any additional troubleshooting tips,
>suggestions, insight or solutions I am all ears.  I will continue to
>update as I progress.
>
>Respectfully,
>Charles

[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-16 Thread Dominique Deschênes


Hi,
Thank you for your answers.

I tried replacing "package" with "dnf". The Gluster installation then seems to 
work well, but I hit a similar message during the deployment of the hosted 
engine.

Here is the error

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed": false, 
"msg": "The Python 2 yum module is needed for this module. If you require 
Python 3 support use the `dnf` Ansible module instead."} 





Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259


          

- Message received -
From: clam2...@gmail.com
Date: 16/07/20 13:40
To: users@ovirt.org
Subject: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster
deploy fails insufficient free space no matter how small the volume is set

Dear Strahil, Dominique and Edward:

I reimaged the three hosts with 
ovirt-node-ng-installer-4.4.1-2020071311.el8.iso just to be sure everything was 
stock (I had upgraded from v4.4) and attempted a redeploy with all suggested 
changes EXCEPT replacing "package" with "dnf" --> same failure.  I then made 
Strahil's recommended replacement of "package" with "dnf" and the Gluster 
deployment succeeded through that section of main.yml only to fail a little 
later at:

- name: Install python-yaml package for Debian systems
 package:
   name: python-yaml
   state: present
 when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

I found this notable given that I had not replaced "package" with "dnf" in the 
prior section:

- name: Change to Install lvm tools for debian systems.
 package:
   name: thin-provisioning-tools
   state: present
 when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

and deployment had not failed here.  Anyhow, I deleted the two Debian 
statements as I am deploying from Node (CentOS based), cleaned up, cleaned up 
my drives ('dmsetup remove eui.xxx...' and 'wipefs --all --force /dev/nvme0n1 
/dev/nvmeXn1 ...')  and redeployed again.  This time Gluster deployment seemed 
to execute main.yml OK only to fail in a new file, vdo_create.yml:

TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] 
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}

Expecting that this might continue, I have been looking into the documentation 
of how "package" works and if I can find a root cause for this rather than 
reviewing n *.yml files and replacing "package" with "dnf" in all of them.  
Thank you VERY much to Strahil for helping me!
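The bulk edit being weighed here (swapping the `package` module for `dnf` across the role's task files) can be sketched with `sed`. The demo below runs on a throwaway copy rather than the real files under /etc/ansible/roles/gluster.infra/, and the indentation-preserving expression is an assumption to verify on your own tree:

```shell
# Demonstrate the "package" -> "dnf" module swap on a scratch copy of a
# task file; for the real run, apply the same sed to the role's *.yml files.
task=$(mktemp)
cat > "$task" <<'EOF'
- name: Install VDO dependencies
  package:
    name: vdo
    state: present
EOF
# Replace the module name while preserving indentation; .bak keeps a backup.
sed -i.bak 's/^\([[:space:]]*\)package:/\1dnf:/' "$task"
grep 'dnf:' "$task"   # prints the rewritten line "  dnf:"
rm -f "$task" "$task.bak"
```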

If Strahil or anyone else has any additional troubleshooting tips, suggestions, 
insight or solutions I am all ears.  I will continue to update as I progress.

Respectfully,
Charles