[ovirt-users] Re: Single node hyperconverged issue with 4.5.0.2

2022-05-18 Thread Ritesh Chikatwar
Hello,

Can you share these files with me from the node:
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml
& /etc/ansible/hc_wizard_inventory.yml

Thanks



On Wed, May 18, 2022 at 4:15 PM  wrote:

> I've run into the following issue with oVirt node on a single host using
> the single node hyperconverged wizard:
>
> TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes]
> ***
> failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'engine', 'brick':
> '/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var":
> "item", "changed": true, "cmd": "gluster volume create engine replica
> __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f  transport
> tcp  ovirt01.syd1.fqdn.com:/gluster_bricks/engine/engine  force\n",
> "delta": "0:00:00.086880", "end": "2022-05-18 10:28:49.211929", "item":
> {"arbiter": 0, "brick": "/gluster_bricks/engine/engine", "volname":
> "engine"}, "msg": "non-zero return code", "rc": 1, "start": "2022-05-18
> 10:28:49.125049", "stderr": "replica count should be greater than
> 1\n\nUsage:\nvolume create <NEW-VOLNAME> [[replica <COUNT> [arbiter
> <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data
> <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>]
> <NEW-BRICK> ... [force]", "stderr_lines": ["replica count should be greater
> than 1", "", "Usage:", "volume create <NEW-VOLNAME> [[replica <COUNT>
> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]]
> [disperse-data <COUNT>] [redundancy <COUNT>] [transport
> <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]"], "stdout": "",
> "stdout_lines": []}
> failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'data', 'brick':
> '/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item",
> "changed": true, "cmd": "gluster volume create data replica
> __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f  transport
> tcp  ovirt01.syd1.fqdn.com:/gluster_bricks/data/data  force\n", "delta":
> "0:00:00.088490", "end": "2022-05-18 10:28:49.905458", "item": {"arbiter":
> 0, "brick": "/gluster_bricks/data/data", "volname": "data"}, "msg":
> "non-zero return code", "rc": 1, "start": "2022-05-18 10:28:49.816968",
> "stderr": "replica count should be greater than 1\n\nUsage:\nvolume create
> <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter
> 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>]
> [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]",
> "stderr_lines": ["replica count should be greater than 1", "", "Usage:",
> "volume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica
> 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>]
> [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ...
> [force]"], "stdout": "", "stdout_lines": []}
> failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'vmstore', 'brick':
> '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var":
> "item", "changed": true, "cmd": "gluster volume create vmstore replica
> __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f  transport
> tcp  ovirt01.syd1.fqdn.com:/gluster_bricks/vmstore/vmstore  force\n",
> "delta": "0:00:00.086626", "end": "2022-05-18 10:28:50.604015", "item":
> {"arbiter": 0, "brick": "/gluster_bricks/vmstore/vmstore", "volname":
> "vmstore"}, "msg": "non-zero return code", "rc": 1, "start": "2022-05-18
> 10:28:50.517389", "stderr": "replica count should be greater than
> 1\n\nUsage:\nvolume create <NEW-VOLNAME> [[replica <COUNT> [arbiter
> <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data
> <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>]
> <NEW-BRICK> ... [force]", "stderr_lines": ["replica count should be greater
> than 1", "", "Usage:", "volume create <NEW-VOLNAME> [[replica <COUNT>
> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]]
> [disperse-data <COUNT>] [redundancy <COUNT>] [transport
> <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]"], "stdout": "",
> "stdout_lines": []}
>
> The only non-default settings I changed were the stripe size and number of
> disks. I was following the steps here:
>
> https://www.ovirt.org/dropped/gluster-hyperconverged/chap-Single_node_hyperconverged.html
>
> Any ideas to work around this? I will be deploying to 6 nodes eventually
> but wanted to try out the engine before the rest of my hardware arrives :)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3HBNBFFNUVZSI7P7ZNB6VMQEPMSWIID/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQUU76E7IWQMKYU25A5N2L32YEPM5VPY/


[ovirt-users] Re: Single node hyperconverged issue with 4.5.0.2

2022-05-18 Thread bpbp
Hi Ritesh,

I was able to make some progress by modifying the gluster role; see this issue
I filed earlier: https://github.com/gluster/gluster-ansible-features/issues/55
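
For anyone hitting the same wall, here is a rough sketch of the shape of that
change: on a single node the `replica` clause has to be dropped so gluster
creates a plain single-brick distribute volume. The task and variable names
below are illustrative only, not the actual gluster_hci role code:

```yaml
# Illustrative sketch only (not the real hci_volumes.yml task): build the
# volume-create command without "replica" when there is just one host.
- name: Create the GlusterFS volumes (single node)
  command: >
    gluster volume create {{ item.volname }}
    transport tcp
    {{ hci_host }}:{{ item.brick }}
    force
  loop: "{{ hci_volumes }}"
```

The equivalent manual command, using the paths from the log above, would be
`gluster volume create engine transport tcp
ovirt01.syd1.fqdn.com:/gluster_bricks/engine/engine force`.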

There was a further issue when deploying the hosted engine, which I solved by
modifying the code that checks for an XML element that was missing; see
this post:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IKZ45B2TUCQB6WXZ3B4AFVU2RXZXJQQ/
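
As a generic illustration of that kind of fix (this is not the actual
ovirt-hosted-engine-setup code, and the element name here is made up), the
pattern is to guard the XML lookup so a missing element falls back to a
default instead of raising:

```python
import xml.etree.ElementTree as ET

# Generic sketch, NOT the actual oVirt code: when the XML an installer
# parses lacks an expected element, find() returns None, and an unguarded
# .get() on that result raises AttributeError; a None check avoids it.
doc = ET.fromstring("<domain><devices/></domain>")  # no <graphics> element

elem = doc.find("./devices/graphics")
port = elem.get("port") if elem is not None else None
print(port)  # falls back to None instead of crashing
```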

I can share the files you asked for tomorrow when I am back in the office.

Cheers,
Boden

On Wed, 18 May 2022, at 9:00 PM, Ritesh Chikatwar wrote:
> Hello,
> 
> Can you share these files with me from the node, 
> /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml & 
> /etc/ansible/hc_wizard_inventory.yml
> 
> Thanks
> 
> 
> 
> On Wed, May 18, 2022 at 4:15 PM  wrote:
>> I've run into the following issue with oVirt node on a single host using the 
>> single node hyperconverged wizard:
>> 
>> TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes] 
>> ***
>> failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'engine', 'brick': 
>> '/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": 
>> "item", "changed": true, "cmd": "gluster volume create engine replica 
>> __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f  transport tcp 
>>  ovirt01.syd1.fqdn.com:/gluster_bricks/engine/engine  force\n", "delta": 
>> "0:00:00.086880", "end": "2022-05-18 10:28:49.211929", "item": {"arbiter": 
>> 0, "brick": "/gluster_bricks/engine/engine", "volname": "engine"}, "msg": 
>> "non-zero return code", "rc": 1, "start": "2022-05-18 10:28:49.125049", 
>> "stderr": "replica count should be greater than 1\n\nUsage:\nvolume create 
>> <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 
>> 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] 
>> [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]", 
>> "stderr_lines": ["replica count should be greater than 1", "", "Usage:", 
>> "volume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 
>> 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy 
>> <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]"], 
>> "stdout": "", "stdout_lines": []}
>> failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'data', 'brick': 
>> '/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", 
>> "changed": true, "cmd": "gluster volume create data replica 
>> __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f  transport tcp 
>>  ovirt01.syd1.fqdn.com:/gluster_bricks/data/data  force\n", "delta": 
>> "0:00:00.088490", "end": "2022-05-18 10:28:49.905458", "item": {"arbiter": 
>> 0, "brick": "/gluster_bricks/data/data", "volname": "data"}, "msg": 
>> "non-zero return code", "rc": 1, "start": "2022-05-18 10:28:49.816968", 
>> "stderr": "replica count should be greater than 1\n\nUsage:\nvolume create 
>> <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 
>> 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] 
>> [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]", 
>> "stderr_lines": ["replica count should be greater than 1", "", "Usage:", 
>> "volume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 
>> 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy 
>> <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]"], 
>> "stdout": "", "stdout_lines": []}
>> failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'vmstore', 'brick': 
>> '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": 
>> "item", "changed": true, "cmd": "gluster volume create vmstore replica 
>> __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f  transport tcp 
>>  ovirt01.syd1.fqdn.com:/gluster_bricks/vmstore/vmstore  force\n", "delta": 
>> "0:00:00.086626", "end": "2022-05-18 10:28:50.604015", "item": {"arbiter": 
>> 0, "brick": "/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": 
>> "non-zero return code", "rc": 1, "start": "2022-05-18 10:28:50.517389", 
>> "stderr": "replica count should be greater than 1\n\nUsage:\nvolume create 
>> <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 
>> 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] 
>> [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]", 
>> "stderr_lines": ["replica count should be greater than 1", "", "Usage:", 
>> "volume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 
>> 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy 
>> <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]"], 
>> "stdout": "", "stdout_lines": []}
>> 
>> The only non-default settings I changed were the stripe size and number of 
>> disks. Following the steps here:
>> https://www.ovirt.org/dropped/gluster-hyperconverged/chap-Single_node_hyperconverged.html
>> 
>> Any ideas to work around this? I will be deploying to 6 nodes eventually but 
>> wanted to try out the engine before the rest of my hardware arrives :) 