[ovirt-users] Re: Hyperconverged Setup stuck

2018-12-20 Thread Stefan Wolf
It is gdeploy 2.0.8:

rpm -qa | grep gdeploy
gdeploy-2.0.8-1.el7.noarch


[ovirt-users] Re: Hyperconverged Setup stuck

2018-12-18 Thread Gobinda Das
Which version of gdeploy are you using?

On Tue, Dec 18, 2018 at 6:06 PM Stefan Wolf wrote:

> Hello
>
> I would like to set up a hyperconverged deployment.
>
> I have 3 hosts, each freshly installed.
>
> kvm320 has one additional hard drive, 1 TB SATA.
>
> kvm360 and kvm380 each have two additional hard drives, 300 GB and 600 GB
> SAS.
>
> #gdeploy configuration generated by cockpit-gluster plugin
> [hosts]
> kvm380.durchhalten.intern
> kvm360.durchhalten.intern
> kvm320.durchhalten.intern
>
> [script1:kvm380.durchhalten.intern]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc -h kvm380.durchhalten.intern, kvm360.durchhalten.intern, kvm320.durchhalten.intern
>
> [script1:kvm360.durchhalten.intern]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc -h kvm380.durchhalten.intern, kvm360.durchhalten.intern, kvm320.durchhalten.intern
>
> [script1:kvm320.durchhalten.intern]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h kvm380.durchhalten.intern, kvm360.durchhalten.intern, kvm320.durchhalten.intern
>
> [disktype]
> raid6
>
> [diskcount]
> 12
>
> [stripesize]
> 256
>
> [service1]
> action=enable
> service=chronyd
>
> [service2]
> action=restart
> service=chronyd
>
> [shell2]
> action=execute
> command=vdsm-tool configure --force
>
> [script3]
> action=execute
> file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
> ignore_script_errors=no
>
> [pv1:kvm380.durchhalten.intern]
> action=create
> devices=sdb
> ignore_pv_errors=no
>
> [pv2:kvm380.durchhalten.intern]
> action=create
> devices=sdc
> ignore_pv_errors=no
>
> [pv1:kvm360.durchhalten.intern]
> action=create
> devices=sdb
> ignore_pv_errors=no
>
> [pv2:kvm360.durchhalten.intern]
> action=create
> devices=sdc
> ignore_pv_errors=no
>
> [pv1:kvm320.durchhalten.intern]
> action=create
> devices=sdb
> ignore_pv_errors=no
>
> [vg1:kvm380.durchhalten.intern]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
>
> [vg2:kvm380.durchhalten.intern]
> action=create
> vgname=gluster_vg_sdc
> pvname=sdc
> ignore_vg_errors=no
>
> [vg1:kvm360.durchhalten.intern]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
>
> [vg2:kvm360.durchhalten.intern]
> action=create
> vgname=gluster_vg_sdc
> pvname=sdc
> ignore_vg_errors=no
>
> [vg1:kvm320.durchhalten.intern]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
>
> [lv1:kvm380.durchhalten.intern]
> action=create
> poolname=gluster_thinpool_sdc
> ignore_lv_errors=no
> vgname=gluster_vg_sdc
> lvtype=thinpool
> size=1005GB
> poolmetadatasize=5GB
>
> [lv2:kvm360.durchhalten.intern]
> action=create
> poolname=gluster_thinpool_sdc
> ignore_lv_errors=no
> vgname=gluster_vg_sdc
> lvtype=thinpool
> size=1005GB
> poolmetadatasize=5GB
>
> [lv3:kvm320.durchhalten.intern]
> action=create
> poolname=gluster_thinpool_sdb
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> lvtype=thinpool
> size=41GB
> poolmetadatasize=1GB
>
> [lv4:kvm380.durchhalten.intern]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=100GB
> lvtype=thick
>
> [lv5:kvm380.durchhalten.intern]
> action=create
> lvname=gluster_lv_data
> ignore_lv_errors=no
> vgname=gluster_vg_sdc
> mount=/gluster_bricks/data
> lvtype=thinlv
> poolname=gluster_thinpool_sdc
> virtualsize=500GB
>
> [lv6:kvm380.durchhalten.intern]
> action=create
> lvname=gluster_lv_vmstore
> ignore_lv_errors=no
> vgname=gluster_vg_sdc
> mount=/gluster_bricks/vmstore
> lvtype=thinlv
> poolname=gluster_thinpool_sdc
> virtualsize=500GB
>
> [lv7:kvm360.durchhalten.intern]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=100GB
> lvtype=thick
>
> [lv8:kvm360.durchhalten.intern]
> action=create
> lvname=gluster_lv_data
> ignore_lv_errors=no
> vgname=gluster_vg_sdc
> mount=/gluster_bricks/data
> lvtype=thinlv
> poolname=gluster_thinpool_sdc
> virtualsize=500GB
>
> [lv9:kvm360.durchhalten.intern]
> action=create
> lvname=gluster_lv_vmstore
> ignore_lv_errors=no
> vgname=gluster_vg_sdc
> mount=/gluster_bricks/vmstore
> lvtype=thinlv
> poolname=gluster_thinpool_sdc
> virtualsize=500GB
>
> [lv10:kvm320.durchhalten.intern]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=20GB
> lvtype=thick
>
> [lv11:kvm320.durchhalten.intern]
> action=create
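
For reference, the [pv*], [vg*] and [lv*] sections in a gdeploy config like the one above map onto ordinary LVM operations. Below is a minimal shell sketch of what they amount to on kvm380, with device names, VG/LV names, sizes and mount points taken from the config; it illustrates the intent only, not the exact commands gdeploy generates (its lvcreate/mkfs.xfs options, chunk sizes and fstab handling may differ).

# Confirm the extra disks really are sdb/sdc before running gdeploy:
lsblk -d -o NAME,SIZE,TYPE

# [pv1]/[pv2] and [vg1]/[vg2] on kvm380
pvcreate /dev/sdb /dev/sdc
vgcreate gluster_vg_sdb /dev/sdb
vgcreate gluster_vg_sdc /dev/sdc

# [lv1]: thin pool on the sdc volume group
lvcreate -L 1005G --poolmetadatasize 5G --thinpool gluster_thinpool_sdc gluster_vg_sdc

# [lv4]: thick LV for the engine brick on the sdb volume group
lvcreate -L 100G -n gluster_lv_engine gluster_vg_sdb

# [lv5]/[lv6]: thin LVs for the data and vmstore bricks
lvcreate -V 500G --thin -n gluster_lv_data gluster_vg_sdc/gluster_thinpool_sdc
lvcreate -V 500G --thin -n gluster_lv_vmstore gluster_vg_sdc/gluster_thinpool_sdc

# Format and mount one brick (repeat for data and vmstore)
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_engine
mkdir -p /gluster_bricks/engine
mount /dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine

The kvm360 sections are identical; kvm320 only has sdb, so it gets a single 41 GB thin pool plus a 20 GB thick engine LV on gluster_vg_sdb.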