[ovirt-users] Re: 4.3.3 single node hyperconverged wizard failing because var/log is too small?

2019-05-10 Thread Edward Berger
Thanks! After adding the workaround, I was able to complete the deployment.

On Fri, May 10, 2019 at 1:39 AM Parth Dhanjal  wrote:

> Hey!
>
> oVirt 4.3.3 uses gluster-ansible-roles to deploy the storage.
> There are multiple checks during a deployment.
> The particular check that is failing is part of
> gluster-ansible-features (
> https://github.com/gluster/gluster-ansible-features/tree/master/roles/gluster_hci
> )
>
> A simple workaround is to skip the check: edit the generated inventory
> file in the last step before deployment and add
> gluster_features_force_varlogsizecheck: false
> under the vars section of the file.
>
> Regards
> Parth Dhanjal
>
> On Fri, May 10, 2019 at 5:58 AM Edward Berger  wrote:
>
>> I'm trying to bring up a single node hyperconverged with the current
>> node-ng ISO installation,
>> but it ends with this failure message.
>>
>> TASK [gluster.features/roles/gluster_hci : Check if /var/log has enough
>> disk space] ***
>> fatal: [br014.bridges.psc.edu]: FAILED! => {"changed": true, "cmd": "df
>> -m /var/log | awk '/[0-9]%/ {print $4}'", "delta": "0:00:00.008513", "end":
>> "2019-05-09 20:09:27.914400", "failed_when_result": true, "rc": 0, "start":
>> "2019-05-09 20:09:27.905887", "stderr": "", "stderr_lines": [], "stdout":
>> "7470", "stdout_lines": ["7470"]}
>>
>> /var/log is whatever the installer created by default, so I don't know
>> why it's complaining.
>>
>> [root@br014 ~]# df -kh
>> Filesystem                                                      Size  Used Avail Use% Mounted on
>> /dev/mapper/onn_br014-ovirt--node--ng--4.3.3.1--0.20190417.0+1  3.5T  2.1G  3.3T   1% /
>> devtmpfs                                                         63G     0   63G   0% /dev
>> tmpfs                                                            63G  4.0K   63G   1% /dev/shm
>> tmpfs                                                            63G   18M   63G   1% /run
>> tmpfs                                                            63G     0   63G   0% /sys/fs/cgroup
>> /dev/mapper/onn_br014-home                                      976M  2.6M  907M   1% /home
>> /dev/mapper/onn_br014-tmp                                       976M  2.8M  906M   1% /tmp
>> /dev/mapper/onn_br014-var                                        15G   42M   14G   1% /var
>> /dev/sda2                                                       976M  173M  737M  19% /boot
>> /dev/mapper/onn_br014-var_log                                   7.8G   41M  7.3G   1% /var/log
>> /dev/mapper/onn_br014-var_log_audit                             2.0G  7.6M  1.8G   1% /var/log/audit
>> /dev/mapper/onn_br014-var_crash                                 9.8G   37M  9.2G   1% /var/crash
>> /dev/sda1                                                       200M   12M  189M   6% /boot/efi
>> tmpfs                                                            13G     0   13G   0% /run/user/1000
>> tmpfs                                                            13G     0   13G   0% /run/user/0
>> /dev/mapper/gluster_vg_sdb-gluster_lv_engine                    3.7T   33M  3.7T   1% /gluster_bricks/engine
>> /dev/mapper/gluster_vg_sdc-gluster_lv_data                      3.7T   34M  3.7T   1% /gluster_bricks/data
>> /dev/mapper/gluster_vg_sdd-gluster_lv_vmstore                   3.7T   34M  3.7T   1% /gluster_bricks/vmstore
>>
>> The machine has four 4TB disks: sda holds the oVirt node-ng installation,
>> and the other three disks are used for the gluster volumes.
>>
>> [root@br014 ~]# pvs
>>   PV VG Fmt  Attr PSize  PFree
>>   /dev/sda3  onn_br014  lvm2 a--  <3.64t 100.00g
>>   /dev/sdb   gluster_vg_sdb lvm2 a--  <3.64t <26.02g
>>   /dev/sdc   gluster_vg_sdc lvm2 a--  <3.64t  0
>>   /dev/sdd   gluster_vg_sdd lvm2 a--  <3.64t  0
>>
>> [root@br014 ~]# vgs
>>   VG #PV #LV #SN Attr   VSize  VFree
>>   gluster_vg_sdb   1   1   0 wz--n- <3.64t <26.02g
>>   gluster_vg_sdc   1   2   0 wz--n- <3.64t  0
>>   gluster_vg_sdd   1   2   0 wz--n- <3.64t  0
>>   onn_br0141  11   0 wz--n- <3.64t 100.00g
>>
>> [root@br014 ~]# lvs
>>   LV                                   VG             Attr       LSize  Pool                            Origin Data%  Meta%  Move Log Cpy%Sync Convert
>>   gluster_lv_engine                    gluster_vg_sdb -wi-ao     3.61t
>>   gluster_lv_data                      gluster_vg_sdc Vwi-aot--- 3.61t  gluster_thinpool_gluster_vg_sdc        0.05
>>   gluster_thinpool_gluster_vg_sdc      gluster_vg_sdc twi-aot--- <3.61t                                        0.05   0.13
>>   gluster_lv_vmstore                   gluster_vg_sdd Vwi-aot--- 3.61t  gluster_thinpool_gluster_vg_sdd        0.05
>>   gluster_thinpool_gluster_vg_sdd      gluster_vg_sdd twi-aot--- <3.61t                                        0.05   0.13
>>   home                                 onn_br014      Vwi-aotz-- 1.00g  pool00                                 4.79
>>   ovirt-node-ng-4.3.3.1-0.20190417.0   onn_br014      Vwi---tz-k <3.51t pool00
[ovirt-users] Re: 4.3.3 single node hyperconverged wizard failing because var/log is too small?

2019-05-09 Thread Parth Dhanjal
Hey!

oVirt 4.3.3 uses gluster-ansible-roles to deploy the storage.
There are multiple checks during a deployment.
The particular check that is failing is part of
gluster-ansible-features (
https://github.com/gluster/gluster-ansible-features/tree/master/roles/gluster_hci
)
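
For context, the failing task boils down to roughly the following (a rough
sketch, not the role's literal source: the task name and shell command come
from the failure output quoted below, while the minimum-size value and the
variable names varlog_free and min_varlog_size_mb are assumptions for
illustration):

  # Hypothetical reconstruction of the gluster_hci /var/log size check.
  - name: Check if /var/log has enough disk space
    shell: df -m /var/log | awk '/[0-9]%/ {print $4}'   # available MB on /var/log
    register: varlog_free
    # Fail when the available space is below the role's minimum
    # (assumed to be around 15 GB here; 7470 MB on this host trips it).
    failed_when: varlog_free.stdout | int < min_varlog_size_mb
    when: gluster_features_force_varlogsizecheck | default(true)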

A simple workaround is to skip the check: edit the generated inventory
file in the last step before deployment and add
gluster_features_force_varlogsizecheck: false
under the vars section of the file, as in the sketch below.
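
In the generated inventory file, the addition ends up looking something like
this (a minimal sketch; the group and host names are placeholders, and only
the last line is the actual workaround variable):

  hc_nodes:                      # group name used here is illustrative
    hosts:
      host1.example.com:         # placeholder host entry
    vars:
      gluster_features_force_varlogsizecheck: false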

Regards
Parth Dhanjal

On Fri, May 10, 2019 at 5:58 AM Edward Berger  wrote:

> I'm trying to bring up a single node hyperconverged with the current
> node-ng ISO installation,
> but it ends with this failure message.
>
> TASK [gluster.features/roles/gluster_hci : Check if /var/log has enough
> disk space] ***
> fatal: [br014.bridges.psc.edu]: FAILED! => {"changed": true, "cmd": "df
> -m /var/log | awk '/[0-9]%/ {print $4}'", "delta": "0:00:00.008513", "end":
> "2019-05-09 20:09:27.914400", "failed_when_result": true, "rc": 0, "start":
> "2019-05-09 20:09:27.905887", "stderr": "", "stderr_lines": [], "stdout":
> "7470", "stdout_lines": ["7470"]}
>
> /var/log is whatever the installer created by default, so I don't know
> why it's complaining.
>
> [root@br014 ~]# df -kh
> Filesystem                                                      Size  Used Avail Use% Mounted on
> /dev/mapper/onn_br014-ovirt--node--ng--4.3.3.1--0.20190417.0+1  3.5T  2.1G  3.3T   1% /
> devtmpfs                                                         63G     0   63G   0% /dev
> tmpfs                                                            63G  4.0K   63G   1% /dev/shm
> tmpfs                                                            63G   18M   63G   1% /run
> tmpfs                                                            63G     0   63G   0% /sys/fs/cgroup
> /dev/mapper/onn_br014-home                                      976M  2.6M  907M   1% /home
> /dev/mapper/onn_br014-tmp                                       976M  2.8M  906M   1% /tmp
> /dev/mapper/onn_br014-var                                        15G   42M   14G   1% /var
> /dev/sda2                                                       976M  173M  737M  19% /boot
> /dev/mapper/onn_br014-var_log                                   7.8G   41M  7.3G   1% /var/log
> /dev/mapper/onn_br014-var_log_audit                             2.0G  7.6M  1.8G   1% /var/log/audit
> /dev/mapper/onn_br014-var_crash                                 9.8G   37M  9.2G   1% /var/crash
> /dev/sda1                                                       200M   12M  189M   6% /boot/efi
> tmpfs                                                            13G     0   13G   0% /run/user/1000
> tmpfs                                                            13G     0   13G   0% /run/user/0
> /dev/mapper/gluster_vg_sdb-gluster_lv_engine                    3.7T   33M  3.7T   1% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sdc-gluster_lv_data                      3.7T   34M  3.7T   1% /gluster_bricks/data
> /dev/mapper/gluster_vg_sdd-gluster_lv_vmstore                   3.7T   34M  3.7T   1% /gluster_bricks/vmstore
>
> The machine has four 4TB disks: sda holds the oVirt node-ng installation,
> and the other three disks are used for the gluster volumes.
>
> [root@br014 ~]# pvs
>   PV VG Fmt  Attr PSize  PFree
>   /dev/sda3  onn_br014  lvm2 a--  <3.64t 100.00g
>   /dev/sdb   gluster_vg_sdb lvm2 a--  <3.64t <26.02g
>   /dev/sdc   gluster_vg_sdc lvm2 a--  <3.64t  0
>   /dev/sdd   gluster_vg_sdd lvm2 a--  <3.64t  0
>
> [root@br014 ~]# vgs
>   VG #PV #LV #SN Attr   VSize  VFree
>   gluster_vg_sdb   1   1   0 wz--n- <3.64t <26.02g
>   gluster_vg_sdc   1   2   0 wz--n- <3.64t  0
>   gluster_vg_sdd   1   2   0 wz--n- <3.64t  0
>   onn_br0141  11   0 wz--n- <3.64t 100.00g
>
> [root@br014 ~]# lvs
>   LV                                   VG             Attr       LSize  Pool                            Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   gluster_lv_engine                    gluster_vg_sdb -wi-ao     3.61t
>   gluster_lv_data                      gluster_vg_sdc Vwi-aot--- 3.61t  gluster_thinpool_gluster_vg_sdc                                    0.05
>   gluster_thinpool_gluster_vg_sdc      gluster_vg_sdc twi-aot--- <3.61t                                                                    0.05   0.13
>   gluster_lv_vmstore                   gluster_vg_sdd Vwi-aot--- 3.61t  gluster_thinpool_gluster_vg_sdd                                    0.05
>   gluster_thinpool_gluster_vg_sdd      gluster_vg_sdd twi-aot--- <3.61t                                                                    0.05   0.13
>   home                                 onn_br014      Vwi-aotz-- 1.00g  pool00                                                             4.79
>   ovirt-node-ng-4.3.3.1-0.20190417.0   onn_br014      Vwi---tz-k <3.51t pool00                          root
>   ovirt-node-ng-4.3.3.1-0.20190417.0+1 onn_br014      Vwi-aotz-- <3.51t pool00                          ovirt-node-ng-4.3.3.1-0.20190417.0 0.13
>   pool00                               onn_br014      twi-aotz-- 3.53t                                                                     0.19   1.86
>   root