[ovirt-users] Re: Unable to upgrade cluster level to 4.7 for the hosted engine (only)

2022-05-18 Thread Ryan Bullock
Has anyone had any success with solving this issue? I'm running into it as
well after upgrading to 4.5. I'm unable to change any settings on the
hosted engine, with everything reporting settings as locked.
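Before changing anything, it can help to confirm what the engine database actually stores for the HostedEngine row. A minimal sketch, assuming a standard engine install (the `engine-psql.sh` path and the `vm_static` columns are standard oVirt, but verify against your version):

```shell
# Inspect the HostedEngine VM's stored timezone and origin in the engine DB.
# Run this on the engine host; prints a hint when the script is not present.
PSQL=/usr/share/ovirt-engine/dbscripts/engine-psql.sh
if [ -x "$PSQL" ]; then
    "$PSQL" -c "SELECT vm_name, time_zone, origin FROM vm_static WHERE vm_name = 'HostedEngine';"
else
    echo "engine-psql.sh not found; run this on the oVirt engine host"
fi
```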

Regards,

Ryan Bullock

On Tue, May 10, 2022 at 3:00 AM lists--- via Users  wrote:

> Hello,
>
> I upgraded my engine and nodes to 4.5 a few days ago and am now planning to
> upgrade the cluster compatibility level from 4.6 to 4.7. First I tried
> doing this from the cluster settings, but it fails because the hosted-engine
> settings are locked. So I tried it by hand but again got the locked error;
> I found I can't change any values on the hosted engine. Changing the
> compatibility level on all other VMs worked fine and they are on 4.7 now.
>
> I read about the timezone issue in 4.4.8, so I checked the timezone of my
> hosted engine: it is set to "Standard: (GMT) Greenwich Standard
> Time". To be sure, I ran
> "/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "update vm_static SET
> time_zone='Etc/GMT' where vm_name='HostedEngine';"" and it changed the
> timezone, but the settings are still locked and I am unable to change the
> compatibility level.
>
> Any idea how to solve this?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ENA2IU7N62YFMYOOQJ6NA7JSIF74ZFJ6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OYEZNZD3ZFD4TLIPGVUZWXGEVVGHUKR2/


[ovirt-users] Re: [outage] ovirt.org site is currently down

2022-05-18 Thread Sandro Bonazzola
The site is back up and running.
A report of the incident is here:
https://listman.redhat.com/archives/osci-announce/2022-May/14.html

On Wed, May 18, 2022 at 09:18 Sandro Bonazzola <
sbona...@redhat.com> wrote:

> Hi, ovirt.org is currently down; the Infrastructure team has been alerted.
>
> Thanks
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFBKZPV5TNUO7RO2GKR5GGBJWLWOXZIU/


[ovirt-users] [outage] ovirt.org site is currently down

2022-05-18 Thread Sandro Bonazzola
Hi, ovirt.org is currently down; the Infrastructure team has been alerted.

Thanks
-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IAHL7LVVUR4OJYXYTWVUX4VCWBD4LT6A/


[ovirt-users] Re: Single node hyperconverged issue with 4.5.0.2

2022-05-18 Thread Ritesh Chikatwar
Hello,

Can you share these files from the node:
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml
and /etc/ansible/hc_wizard_inventory.yml?

Thanks



On Wed, May 18, 2022 at 4:15 PM  wrote:

> I've run into the following issue with oVirt node on a single host using
> the single node hyperconverged wizard:
>
> TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes]
> ***
> [quoted failure output trimmed; each volume (engine, data, vmstore) failed
> with "replica count should be greater than 1" -- the full output appears in
> the original message elsewhere in this digest]
>
> The only non-default settings I changed were the stripe size and number of
> disks. Following the steps here:
>
> https://www.ovirt.org/dropped/gluster-hyperconverged/chap-Single_node_hyperconverged.html
>
> Any ideas to work around this? I will be deploying to 6 nodes eventually
> but wanted to try out the engine before the rest of my hardware arrives :)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3HBNBFFNUVZSI7P7ZNB6VMQEPMSWIID/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQUU76E7IWQMKYU25A5N2L32YEPM5VPY/


[ovirt-users] Re: Cluster CPU Type

2022-05-18 Thread Michal Skrivanek


> On 18. 5. 2022, at 1:11, Colin Coe  wrote:
> 
> Hi all
> 
> I'm just putting in some new servers to be used as hosts in RHV 4.3 (soon to 
> go to v4.4 or v4.5) but I'm having problems with the cluster CPU.
> 
> These are HPE DL360 Gen10 with "Xeon Gold 6338N".  Google tells me these are 
> "Ice Lake" but RHV complains unless I set the cluster CPU type to 
> "SandyBridge".
> 
> Is this expected? What do I need to do to get these recognised as the correct
> CPU type?

Hi,
4.3 doesn't support Ice Lake; you'd need oVirt/RHV 4.4 and a 4.5 cluster level.
You may also be missing microcode updates for some of the security
vulnerabilities.
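As a quick check on the hosts themselves, the reported CPU model and microcode revision can be read from /proc/cpuinfo (a generic x86 Linux sketch, not oVirt-specific; these fields are not reported on all architectures):

```shell
# Show the CPU model string and current microcode revision as the kernel
# reports them; on these hosts the model line should mention "Gold 6338N".
grep -m1 'model name' /proc/cpuinfo || echo "model name not reported on this architecture"
grep -m1 'microcode'  /proc/cpuinfo || echo "microcode revision not reported"
```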

Thanks,
michal
> 
> Thanks
> 
> CC
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MVARZPDJABDS7OPOX3KP6ZRGPMAVTAMA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MYBZONN5BLM7LRHECDM2ANYSNBTN4OFA/


[ovirt-users] Re: Single node hyperconverged issue with 4.5.0.2

2022-05-18 Thread bpbp
Hi Ritesh,

I was able to make some progress by modifying the gluster role; see this issue
I filed earlier: https://github.com/gluster/gluster-ansible-features/issues/55

There was a further issue with deploying the hosted engine, which was solved by
modifying the code that checks for an XML element that was missing; see
this post:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IKZ45B2TUCQB6WXZ3B4AFVU2RXZXJQQ/
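For anyone hitting the same error by hand: on a single node there is no second brick to replicate to, so the volume has to be created as a plain distributed volume with no "replica" clause. A hedged sketch using the host and brick path from the failed task (adjust to your own layout; this is a workaround, not what the wizard generates):

```shell
# Create the engine volume as a plain (distributed) single-brick volume,
# i.e. drop the "replica" clause the HCI wizard emitted for a single host.
# Guarded so the sketch is a no-op on machines without the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
    gluster volume create engine transport tcp \
        ovirt01.syd1.fqdn.com:/gluster_bricks/engine/engine force
    gluster volume start engine
else
    echo "gluster CLI not available; run this on the storage node"
fi
```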

I can share the files you asked for tomorrow when I am back in the office.

Cheers,
Boden

On Wed, 18 May 2022, at 9:00 PM, Ritesh Chikatwar wrote:
> Hello,
> 
> Can you share these files with me from the node, 
> /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml & 
> /etc/ansible/hc_wizard_inventory.yml
> 
> Thanks
> 
> 
> 
> On Wed, May 18, 2022 at 4:15 PM  wrote:
>> I've run into the following issue with oVirt node on a single host using the 
>> single node hyperconverged wizard:
>> 
>> TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes]
>> ***
>> [quoted failure output trimmed; each volume (engine, data, vmstore) failed
>> with "replica count should be greater than 1" -- the full output appears in
>> the original message elsewhere in this digest]
>> 
>> The only non-default settings I changed were the stripe size and number of 
>> disks. Following the steps here:
>> https://www.ovirt.org/dropped/gluster-hyperconverged/chap-Single_node_hyperconverged.html
>> 
>> Any ideas to work around this? I will be deploying to 6 nodes eventually but 
>> wanted to try out the engine before the rest of my hardware arrives :) 

[ovirt-users] Re: list-view instead of tiled-view in oVirt VM Portal?

2022-05-18 Thread Radoslaw Szwajkowski
On Tue, May 17, 2022 at 7:51 PM Sharon Gratch  wrote:
>
> We are also planning as part of Patternfly 4 upgrade to maybe reduce the area 
> size of each VM card - which might help as well.
>

Please take a look at https://github.com/oVirt/ovirt-web-ui/pull/1543
A screen at the standard 1920x1080 resolution will display 14 tiles instead of 8.

best regards,
radek
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FT2MGJPJESCUXBS2THMUSSDNUCXH2FEJ/


[ovirt-users] Re: Ovirt-engine , certificate issue

2022-05-18 Thread Angel R. Gonzalez

Hi,

thank you very much for your support.

I've restarted httpd and the issue is resolved. But now I've seen that
one of the nodes is in NonResponsive mode, another is in Connecting mode, and
the system log says:


    "Engine's certification has expired at 2022-05-16. Please renew the 
engine's certification."


Should I run the command "engine-setup --offline" to renew the engine's
certificate?

Do I have to take any other actions before executing that command?
After engine-setup --offline, will the nodes come back up?


Thanks in advance.
Ángel.


El 17/5/22 a las 22:23, Gianluca Cecchi escribió:

On Tue, May 17, 2022 at 7:36 PM Sharon Gratch  wrote:

Hi,

On Tue, May 17, 2022 at 7:33 PM Angel R. Gonzalez
 wrote:

Hello,

I've a issue when I try log in ovirt-engine manager with a
browser. The
error message is:

 PKIX path validation failed:
java.security.cert.CertPathValidatorException: validity check
failed

The ovirt version is 4.4.5.11-1.

I follow the next commands for try resolve it.


> # cp -a /etc/pki/ovirt-engine "/etc/pki/ovirt-engine.$(date "+%Y%m%d")"
> # SUBJECT="$(openssl x509 -subject -noout -in /etc/pki/ovirt-engine/certs/apache.cer | sed 's/subject= //')"
> # /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=apache --password="PASSWORD" --subject="${SUBJECT}"
> # openssl pkcs12 -passin "pass:PASSWORD" -nokeys -in /etc/pki/ovirt-engine/keys/apache.p12 > /etc/pki/ovirt-engine/certs/apache.cer
> # openssl pkcs12 -passin "pass:PASSWORD" -nocerts -nodes -in /etc/pki/ovirt-engine/keys/apache.p12 > /etc/pki/ovirt-engine/keys/apache.key.nopass
> # chmod 0600 /etc/pki/ovirt-engine/keys/apache.key.nopass
> # systemctl restart ovirt-engine.service
But after restarting the issue is the same.
But after restarting the issue is the same.

Any idea?


Maybe try to restart the Apache HTTP server as well:
systemctl restart httpd

If it still doesn't work then please share the errors within the
engine log /var/log/ovirt-engine/engine.log

Thanks,
Sharon



Otherwise you can run
engine-setup --offline
(it will not change anything in the current config and will not try to
update any packages).
Among the questions it asks, it will notice that your certificate is
expired, and you have to answer yes to the question to renew it.

After that you should be able to access the engine again
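Before and after running engine-setup, the expiry dates of the engine certificates can be checked directly with openssl (a hedged sketch; the paths assume a standard engine install, so adjust as needed):

```shell
# Print the notAfter date of each oVirt engine certificate that exists;
# falls back to a message when run off the engine host.
found=0
for cert in /etc/pki/ovirt-engine/ca.pem \
            /etc/pki/ovirt-engine/certs/apache.cer \
            /etc/pki/ovirt-engine/certs/engine.cer; do
    if [ -f "$cert" ]; then
        found=1
        printf '%s: ' "$cert"
        openssl x509 -enddate -noout -in "$cert"
    fi
done
[ "$found" -eq 1 ] || echo "no engine certificates found; run this on the engine host"
```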

HTH,
Gianluca



--
Ángel Ramón González Martín
Responsable de Laboratorios Docentes
Edificio Alan Turing   Planta 3ª, Despacho A-313
Teléfono: 91497 2311 angel.gonza...@uam.es
Escuela Politécnica Superior


Universidad Autónoma de Madrid
C/ Francisco Tomás y Valiente 11, 28049 Madrid

Before printing this email, consider whether it is necessary. Let's take
care of the environment.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNNPWDSUIGL47VYR2ZH7RWI7LY7WN5OM/


[ovirt-users] Single node hyperconverged issue with 4.5.0.2

2022-05-18 Thread bpbp
I've run into the following issue with oVirt node on a single host using the 
single node hyperconverged wizard:

TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes] ***
failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'engine', 'brick':
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item",
"changed": true, "cmd": "gluster volume create engine replica
__omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f  transport tcp
ovirt01.syd1.fqdn.com:/gluster_bricks/engine/engine  force\n", "delta":
"0:00:00.086880", "end": "2022-05-18 10:28:49.211929", "item": {"arbiter": 0,
"brick": "/gluster_bricks/engine/engine", "volname": "engine"}, "msg":
"non-zero return code", "rc": 1, "start": "2022-05-18 10:28:49.125049",
"stderr": "replica count should be greater than 1\n\nUsage:\nvolume create
<NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]]
[disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport
<tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]", "stdout": "", "stdout_lines": []}
failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'data', 'brick':
'/gluster_bricks/data/data', 'arbiter': 0}) => same failure ("replica count
should be greater than 1", rc 1)
failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'vmstore', 'brick':
'/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => same failure ("replica
count should be greater than 1", rc 1)

The only non-default settings I changed were the stripe size and number of 
disks. Following the steps here:
https://www.ovirt.org/dropped/gluster-hyperconverged/chap-Single_node_hyperconverged.html

Any ideas to work around this? I will be deploying to 6 nodes eventually but 
wanted to try out the engine before the rest of my hardware arrives :) 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3HBNBFFNUVZSI7P7ZNB6VMQEPMSWIID/