[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-06 Thread Sachidananda URS
Hi,

On Thu, Feb 7, 2019 at 9:27 AM Sahina Bose  wrote:

> +Sachidananda URS to review user request about systemd mount files
>
> On Tue, Feb 5, 2019 at 10:22 PM feral  wrote:
> >
> > Using SystemD makes way more sense to me. I was just trying to use
> ovirt-node as it was ... intended? Mainly because I have no idea how it all
> works yet, so I've been trying to do the most stockish deployment possible,
> following deployment instructions and not thinking I'm smarter than the
> software :p.
> > I've given up on 4.2 for now, as 4.3 was just released, so giving that a
> try now. Will report back. Hopefully 4.3 enlists systemd for stuff?
> >
>


Unless we have a really complicated mount setup, it is better to use fstab.
We did have certain difficulties while using VDO, so maybe the unit files make
sense for such cases?

However, the systemd.mount(5) manpage suggests that the preferred way to
configure mounts is via /etc/fstab.

src:
https://manpages.debian.org/jessie/systemd/systemd.mount.5.en.html#/ETC/FSTAB


/ETC/FSTAB

Mount units may either be configured via unit files, or via /etc/fstab
(see fstab(5) for details). Mounts listed in /etc/fstab will be converted
into native units dynamically at boot and when the configuration of the
system manager is reloaded. In general, configuring mount points through
/etc/fstab is the preferred approach. See systemd-fstab-generator(8) for
details about the conversion.
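
For illustration, the engine brick mount from the thread below could be written
as a single /etc/fstab line instead of a .mount unit. This is only a sketch
(device, mount point and ordering copied from Strahil's example); the
x-systemd.* options are what systemd-fstab-generator turns into the equivalent
Requires=/Before= dependencies:

/dev/mapper/gluster_vg_md0-gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service,x-systemd.before=glusterd.service,nofail 0 0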



> > On Tue, Feb 5, 2019 at 4:33 AM Strahil Nikolov 
> wrote:
> >>
> >> Dear Feral,
> >>
> >> >On that note, have you also had issues with gluster not restarting on
> reboot, as well as all of the HA stuff failing on reboot after power loss?
> Thus far, the only way I've got the cluster to come back to life, is to
> manually restart glusterd on all nodes, then put the cluster back into
> "not maintenance" mode, and then manually starting the hosted-engine vm.
> This also fails after 2 or 3 power losses, even though the entire cluster
> is happy through the first 2.
> >>
> >>
> >> About the gluster not starting - use systemd.mount unit files.
> >> Here is my setup, and for now it works:
> >>
> >> [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.mount
> >> # /etc/systemd/system/gluster_bricks-engine.mount
> >> [Unit]
> >> Description=Mount glusterfs brick - ENGINE
> >> Requires = vdo.service
> >> After = vdo.service
> >> Before = glusterd.service
> >> Conflicts = umount.target
> >>
> >> [Mount]
> >> What=/dev/mapper/gluster_vg_md0-gluster_lv_engine
> >> Where=/gluster_bricks/engine
> >> Type=xfs
> >> Options=inode64,noatime,nodiratime
> >>
> >> [Install]
> >> WantedBy=glusterd.service
> >> [root@ovirt2 yum.repos.d]# systemctl cat
> gluster_bricks-engine.automount
> >> # /etc/systemd/system/gluster_bricks-engine.automount
> >> [Unit]
> >> Description=automount for gluster brick ENGINE
> >>
> >> [Automount]
> >> Where=/gluster_bricks/engine
> >>
> >> [Install]
> >> WantedBy=multi-user.target
> >> [root@ovirt2 yum.repos.d]# systemctl cat glusterd
> >> # /etc/systemd/system/glusterd.service
> >> [Unit]
> >> Description=GlusterFS, a clustered file-system server
> >> Requires=rpcbind.service gluster_bricks-engine.mount
> gluster_bricks-data.mount gluster_bricks-isos.mount
> >> After=network.target rpcbind.service gluster_bricks-engine.mount
> gluster_bricks-data.mount gluster_bricks-isos.mount
> >> Before=network-online.target
> >>
> >> [Service]
> >> Type=forking
> >> PIDFile=/var/run/glusterd.pid
> >> LimitNOFILE=65536
> >> Environment="LOG_LEVEL=INFO"
> >> EnvironmentFile=-/etc/sysconfig/glusterd
> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid  --log-level
> $LOG_LEVEL $GLUSTERD_OPTIONS
> >> KillMode=process
> >> SuccessExitStatus=15
> >>
> >> [Install]
> >> WantedBy=multi-user.target
> >>
> >> # /etc/systemd/system/glusterd.service.d/99-cpu.conf
> >> [Service]
> >> CPUAccounting=yes
> >> Slice=glusterfs.slice
> >>
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >
> >
> >
> > --
> > _
> > Fact:
> > 1. Ninjas are mammals.
> > 2. Ninjas fight ALL the time.
> > 3. The purpose of the ninja is to flip out and kill people.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4AE6YQHYL7XBTYNCLQPFQY6CY6C7YGX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JMBK3F3LGTYUQ4MAS7GP5JM2ONTY7HCT/


[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-06 Thread Sahina Bose
+Sachidananda URS to review user request about systemd mount files

On Tue, Feb 5, 2019 at 10:22 PM feral  wrote:
>
> Using SystemD makes way more sense to me. I was just trying to use ovirt-node 
> as it was ... intended? Mainly because I have no idea how it all works yet, 
> so I've been trying to do the most stockish deployment possible, following 
> deployment instructions and not thinking I'm smarter than the software :p.
> I've given up on 4.2 for now, as 4.3 was just released, so giving that a try 
> now. Will report back. Hopefully 4.3 enlists systemd for stuff?
>
> On Tue, Feb 5, 2019 at 4:33 AM Strahil Nikolov  wrote:
>>
>> Dear Feral,
>>
>> >On that note, have you also had issues with gluster not restarting on
>> >reboot, as well as all of the HA stuff failing on reboot after power loss?
>> >Thus far, the only way I've got the cluster to come back to life, is to
>> >manually restart glusterd on all nodes, then put the cluster back into
>> >"not maintenance" mode, and then manually starting the hosted-engine vm.
>> >This also fails after 2 or 3 power losses, even though the entire cluster
>> >is happy through the first 2.
>>
>>
>> About the gluster not starting - use systemd.mount unit files.
>> Here is my setup, and for now it works:
>>
>> [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.mount
>> # /etc/systemd/system/gluster_bricks-engine.mount
>> [Unit]
>> Description=Mount glusterfs brick - ENGINE
>> Requires = vdo.service
>> After = vdo.service
>> Before = glusterd.service
>> Conflicts = umount.target
>>
>> [Mount]
>> What=/dev/mapper/gluster_vg_md0-gluster_lv_engine
>> Where=/gluster_bricks/engine
>> Type=xfs
>> Options=inode64,noatime,nodiratime
>>
>> [Install]
>> WantedBy=glusterd.service
>> [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.automount
>> # /etc/systemd/system/gluster_bricks-engine.automount
>> [Unit]
>> Description=automount for gluster brick ENGINE
>>
>> [Automount]
>> Where=/gluster_bricks/engine
>>
>> [Install]
>> WantedBy=multi-user.target
>> [root@ovirt2 yum.repos.d]# systemctl cat glusterd
>> # /etc/systemd/system/glusterd.service
>> [Unit]
>> Description=GlusterFS, a clustered file-system server
>> Requires=rpcbind.service gluster_bricks-engine.mount 
>> gluster_bricks-data.mount gluster_bricks-isos.mount
>> After=network.target rpcbind.service gluster_bricks-engine.mount 
>> gluster_bricks-data.mount gluster_bricks-isos.mount
>> Before=network-online.target
>>
>> [Service]
>> Type=forking
>> PIDFile=/var/run/glusterd.pid
>> LimitNOFILE=65536
>> Environment="LOG_LEVEL=INFO"
>> EnvironmentFile=-/etc/sysconfig/glusterd
>> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid  --log-level 
>> $LOG_LEVEL $GLUSTERD_OPTIONS
>> KillMode=process
>> SuccessExitStatus=15
>>
>> [Install]
>> WantedBy=multi-user.target
>>
>> # /etc/systemd/system/glusterd.service.d/99-cpu.conf
>> [Service]
>> CPUAccounting=yes
>> Slice=glusterfs.slice
>>
>>
>> Best Regards,
>> Strahil Nikolov
>
>
>
> --
> _
> Fact:
> 1. Ninjas are mammals.
> 2. Ninjas fight ALL the time.
> 3. The purpose of the ninja is to flip out and kill people.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4AE6YQHYL7XBTYNCLQPFQY6CY6C7YGX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OU3HJYDT5P4ZQ2WJT7AS6URPEVTC4LRJ/


[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-05 Thread feral
Using SystemD makes way more sense to me. I was just trying to use
ovirt-node as it was ... intended? Mainly because I have no idea how it all
works yet, so I've been trying to do the most stockish deployment possible,
following deployment instructions and not thinking I'm smarter than the
software :p.
I've given up on 4.2 for now, as 4.3 was just released, so giving that a
try now. Will report back. Hopefully 4.3 enlists systemd for stuff?

On Tue, Feb 5, 2019 at 4:33 AM Strahil Nikolov 
wrote:

> Dear Feral,
>
> >On that note, have you also had issues with gluster not restarting on
> reboot, as well as all of the HA stuff failing on reboot after power loss?
> Thus far, the only way I've got the cluster to come back to life, is to
> manually restart glusterd on all nodes, then put the cluster back into
> "not maintenance" mode, and then manually starting the hosted-engine vm.
> This also fails after 2 or 3 power losses, even though the entire cluster
> is happy through the first 2.
>
>
> About the gluster not starting - use systemd.mount unit files.
> Here is my setup, and for now it works:
>
> [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.mount
> # /etc/systemd/system/gluster_bricks-engine.mount
> [Unit]
> Description=Mount glusterfs brick - ENGINE
> Requires = vdo.service
> After = vdo.service
> Before = glusterd.service
> Conflicts = umount.target
>
> [Mount]
> What=/dev/mapper/gluster_vg_md0-gluster_lv_engine
> Where=/gluster_bricks/engine
> Type=xfs
> Options=inode64,noatime,nodiratime
>
> [Install]
> WantedBy=glusterd.service
> [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.automount
> # /etc/systemd/system/gluster_bricks-engine.automount
> [Unit]
> Description=automount for gluster brick ENGINE
>
> [Automount]
> Where=/gluster_bricks/engine
>
> [Install]
> WantedBy=multi-user.target
> [root@ovirt2 yum.repos.d]# systemctl cat glusterd
> # /etc/systemd/system/glusterd.service
> [Unit]
> Description=GlusterFS, a clustered file-system server
> Requires=rpcbind.service gluster_bricks-engine.mount
> gluster_bricks-data.mount gluster_bricks-isos.mount
> After=network.target rpcbind.service gluster_bricks-engine.mount
> gluster_bricks-data.mount gluster_bricks-isos.mount
> Before=network-online.target
>
> [Service]
> Type=forking
> PIDFile=/var/run/glusterd.pid
> LimitNOFILE=65536
> Environment="LOG_LEVEL=INFO"
> EnvironmentFile=-/etc/sysconfig/glusterd
> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid  --log-level
> $LOG_LEVEL $GLUSTERD_OPTIONS
> KillMode=process
> SuccessExitStatus=15
>
> [Install]
> WantedBy=multi-user.target
>
> # /etc/systemd/system/glusterd.service.d/99-cpu.conf
> [Service]
> CPUAccounting=yes
> Slice=glusterfs.slice
>
>
> Best Regards,
> Strahil Nikolov
>


-- 
_
Fact:
1. Ninjas are mammals.
2. Ninjas fight ALL the time.
3. The purpose of the ninja is to flip out and kill people.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4AE6YQHYL7XBTYNCLQPFQY6CY6C7YGX/


[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-05 Thread Strahil Nikolov
Dear Feral,
>On that note, have you also had issues with gluster not restarting on reboot,
>as well as all of the HA stuff failing on reboot after power loss? Thus far,
>the only way I've got the cluster to come back to life, is to manually
>restart glusterd on all nodes, then put the cluster back into "not
>maintenance" mode, and then manually starting the hosted-engine vm. This also
>fails after 2 or 3 power losses, even though the entire cluster is happy
>through the first 2.

About the gluster not starting - use systemd.mount unit files. Here is my setup,
and for now it works:
[root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.mount
# /etc/systemd/system/gluster_bricks-engine.mount
[Unit]
Description=Mount glusterfs brick - ENGINE
Requires = vdo.service
After = vdo.service
Before = glusterd.service
Conflicts = umount.target

[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_engine
Where=/gluster_bricks/engine
Type=xfs
Options=inode64,noatime,nodiratime

[Install]
WantedBy=glusterd.service
[root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.automount
# /etc/systemd/system/gluster_bricks-engine.automount
[Unit]
Description=automount for gluster brick ENGINE

[Automount]
Where=/gluster_bricks/engine

[Install]
WantedBy=multi-user.target
[root@ovirt2 yum.repos.d]# systemctl cat glusterd
# /etc/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
Requires=rpcbind.service gluster_bricks-engine.mount gluster_bricks-data.mount 
gluster_bricks-isos.mount
After=network.target rpcbind.service gluster_bricks-engine.mount 
gluster_bricks-data.mount gluster_bricks-isos.mount
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
Environment="LOG_LEVEL=INFO"
EnvironmentFile=-/etc/sysconfig/glusterd
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid  --log-level $LOG_LEVEL 
$GLUSTERD_OPTIONS
KillMode=process
SuccessExitStatus=15

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/glusterd.service.d/99-cpu.conf
[Service]
CPUAccounting=yes
Slice=glusterfs.slice
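
A rough sketch of how units like these get activated once the files are in
/etc/systemd/system (assuming the exact unit names above):

# pick up the new/changed unit files
systemctl daemon-reload
# enable the brick mount and automount so they are pulled in on boot
systemctl enable --now gluster_bricks-engine.mount gluster_bricks-engine.automount
# check that glusterd now depends on the brick mounts
systemctl list-dependencies glusterd.service | grep gluster_bricks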


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K24KAM7RXA77EWJDNYDFJYDDMNXX7OMB/


[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread feral
Fyi, this is just a vanilla install from the ovirt node 4.2 iso. Install 3
nodes, sync up hosts file and exchange SSH keys, and hit the webui for
hyperconverged deployment. The only settings I enter that make it into the
config are the hostnames.

On Mon, Feb 4, 2019, 8:44 PM Gobinda Das  wrote:

> Sure Greg, I will look into this and get back to you guys.
>
> On Tue, Feb 5, 2019 at 7:22 AM Greg Sheremeta  wrote:
>
>> Sahina, Gobinda,
>>
>> Can you check this thread?
>>
>> On Mon, Feb 4, 2019 at 6:02 PM feral  wrote:
>>
>>> Glusterd was enabled, just crashes on boot. It's a known issue that was
>>> resolved in 3.13, but ovirt-node only has 3.12.
>>> The VM is at that point, paused. So I manually startup glusterd again
>>> and ensure all nodes are online, and then resume the hosted engine.
>>> Sometimes it works, sometimes not.
>>>
>>> I think the issue here is that there are multiple issues with the
>>> current ovirt-node release iso. I was able to get everything working with
>>> CentOS base and installing ovirt manually. Still had the same problem with
>>> the gluster wizard not using any of my settings, but after that, and
>>> ensuring I restart all services after a reboot, things came to life.
>>> Trying to discuss with devs, but so far no luck. I keep hearing that the
>>> previous release of ovirt-node (iso) was just much smoother, but haven't
>>> seen anyone addressing the issues in the current release.
>>>
>>>
>>> On Mon, Feb 4, 2019 at 2:16 PM Edward Berger 
>>> wrote:
>>>
 On each host you should check if systemctl status glusterd shows
 "enabled" and whatever is the gluster events daemon. (I'm not logged in to
 look right now)

 I'm not sure which part of gluster-wizard or hosted-engine engine
 installation is supposed to do the enabling, but I've seen where incomplete
 installs left it disabled.

 If the gluster servers haven't come up properly then there's no working
 image for engine.
 I had a situation where it was in a "paused" state and I had to run
 "hosted-engine --vm-status" on possible nodes to find which one has VM in
 paused state
 then log into that node and run this command..

 virsh -c
 qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume
 HostedEngine


 On Mon, Feb 4, 2019 at 3:23 PM feral  wrote:

> On that note, have you also had issues with gluster not restarting on
> reboot, as well as all of the HA stuff failing on reboot after power loss?
> Thus far, the only way I've got the cluster to come back to life, is to
> manually restart glusterd on all nodes, then put the cluster back into 
> "not
> mainentance" mode, and then manually starting the hosted-engine vm. This
> also fails after 2 or 3 power losses, even though the entire cluster is
> happy through the first 2.
>
> On Mon, Feb 4, 2019 at 12:21 PM feral  wrote:
>
>> Yea, I've been able to build a config manually myself, but sure would
>> be nice if the gdeploy worked (at all), as it takes an hour to deploy 
>> every
>> test, and manually creating the conf, I have to be super conservative 
>> about
>> my sizes, as I'm still not entirely sure what the deploy script actually
>> does. IE: I've got 3 nodes with 1.2TB for the gluster each, but if I try 
>> to
>> build a deployment to make use of more than 900GB, it fails as it's
>> creating the thinpool with whatever size it wants.
>>
>> Just wanted to make sure I wasn't the only one having this issue.
>> Given we know at least two people have noticed, who's the best to 
>> contact?
>> I haven't been able to get any response from devs on any of (the myriad)
>> of issues with the 4.2.8 image.
>>
>
>> Have you reported bugs?
>> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>> is a good generic place to start
>>
>>
>>> Also having a ton of strange issues with the hosted-engine vm deployment.
>>
>
>> Can you elaborate and or report bugs?
>> https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt
>>
>>
>>>
>> On Mon, Feb 4, 2019 at 11:59 AM Edward Berger 
>> wrote:
>>
>>> Yes, I had that issue with a 4.2.8 installation.
>>> I had to manually edit the "web-UI-generated" config to be anywhere
>>> close to what I wanted.
>>>
>>
>> Please report a bug on this, with steps to reproduce.
>> https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt
>>
>>
>>>
>>> I'll attach an edited config as an example.
>>>
>>> On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:
>>>
 New install of ovirt-node 4.2 (from iso). Setup each node with
 networking and ssh keys, and use the hyperconverged gluster deployment
 wizard. None of the user specified settings are ever reflected in the
 gdeployConfig.conf.
 Anyone running into this?

 --
 _
 

[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread Gobinda Das
Sure Greg, I will look into this and get back to you guys.

On Tue, Feb 5, 2019 at 7:22 AM Greg Sheremeta  wrote:

> Sahina, Gobinda,
>
> Can you check this thread?
>
> On Mon, Feb 4, 2019 at 6:02 PM feral  wrote:
>
>> Glusterd was enabled, just crashes on boot. It's a known issue that was
>> resolved in 3.13, but ovirt-node only has 3.12.
>> The VM is at that point, paused. So I manually startup glusterd again and
>> ensure all nodes are online, and then resume the hosted engine. Sometimes
>> it works, sometimes not.
>>
>> I think the issue here is that there are multiple issues with the current
>> ovirt-node release iso. I was able to get everything working with CentOS
>> base and installing ovirt manually. Still had the same problem with the
>> gluster wizard not using any of my settings, but after that, and ensuring I
>> restart all services after a reboot, things came to life.
>> Trying to discuss with devs, but so far no luck. I keep hearing that the
>> previous release of ovirt-node (iso) was just much smoother, but haven't
>> seen anyone addressing the issues in the current release.
>>
>>
>> On Mon, Feb 4, 2019 at 2:16 PM Edward Berger  wrote:
>>
>>> On each host you should check if systemctl status glusterd shows
>>> "enabled" and whatever is the gluster events daemon. (I'm not logged in to
>>> look right now)
>>>
>>> I'm not sure which part of gluster-wizard or hosted-engine engine
>>> installation is supposed to do the enabling, but I've seen where incomplete
>>> installs left it disabled.
>>>
>>> If the gluster servers haven't come up properly then there's no working
>>> image for engine.
>>> I had a situation where it was in a "paused" state and I had to run
>>> "hosted-engine --vm-status" on possible nodes to find which one has VM in
>>> paused state
>>> then log into that node and run this command..
>>>
>>> virsh -c
>>> qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume
>>> HostedEngine
>>>
>>>
>>> On Mon, Feb 4, 2019 at 3:23 PM feral  wrote:
>>>
 On that note, have you also had issues with gluster not restarting on
 reboot, as well as all of the HA stuff failing on reboot after power loss?
 Thus far, the only way I've got the cluster to come back to life, is to
 manually restart glusterd on all nodes, then put the cluster back into "not
 mainentance" mode, and then manually starting the hosted-engine vm. This
 also fails after 2 or 3 power losses, even though the entire cluster is
 happy through the first 2.

 On Mon, Feb 4, 2019 at 12:21 PM feral  wrote:

> Yea, I've been able to build a config manually myself, but sure would
> be nice if the gdeploy worked (at all), as it takes an hour to deploy 
> every
> test, and manually creating the conf, I have to be super conservative 
> about
> my sizes, as I'm still not entirely sure what the deploy script actually
> does. IE: I've got 3 nodes with 1.2TB for the gluster each, but if I try 
> to
> build a deployment to make use of more than 900GB, it fails as it's
> creating the thinpool with whatever size it wants.
>
> Just wanted to make sure I wasn't the only one having this issue.
> Given we know at least two people have noticed, who's the best to contact?
> I haven't been able to get any response from devs on any of (the myriad)
> of issues with the 4.2.8 image.
>

> Have you reported bugs?
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
> is a good generic place to start
>
>
>> Also having a ton of strange issues with the hosted-engine vm deployment.
>

> Can you elaborate and or report bugs?
> https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt
>
>
>>
> On Mon, Feb 4, 2019 at 11:59 AM Edward Berger 
> wrote:
>
>> Yes, I had that issue with a 4.2.8 installation.
>> I had to manually edit the "web-UI-generated" config to be anywhere
>> close to what I wanted.
>>
>
> Please report a bug on this, with steps to reproduce.
> https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt
>
>
>>
>> I'll attach an edited config as an example.
>>
>> On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:
>>
>>> New install of ovirt-node 4.2 (from iso). Setup each node with
>>> networking and ssh keys, and use the hyperconverged gluster deployment
>>> wizard. None of the user specified settings are ever reflected in the
>>> gdeployConfig.conf.
>>> Anyone running into this?
>>>
>>> --
>>> _
>>> Fact:
>>> 1. Ninjas are mammals.
>>> 2. Ninjas fight ALL the time.
>>> 3. The purpose of the ninja is to flip out and kill people.
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> 

[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread Greg Sheremeta
Sahina, Gobinda,

Can you check this thread?

On Mon, Feb 4, 2019 at 6:02 PM feral  wrote:

> Glusterd was enabled, just crashes on boot. It's a known issue that was
> resolved in 3.13, but ovirt-node only has 3.12.
> The VM is at that point, paused. So I manually startup glusterd again and
> ensure all nodes are online, and then resume the hosted engine. Sometimes
> it works, sometimes not.
>
> I think the issue here is that there are multiple issues with the current
> ovirt-node release iso. I was able to get everything working with CentOS
> base and installing ovirt manually. Still had the same problem with the
> gluster wizard not using any of my settings, but after that, and ensuring I
> restart all services after a reboot, things came to life.
> Trying to discuss with devs, but so far no luck. I keep hearing that the
> previous release of ovirt-node (iso) was just much smoother, but haven't
> seen anyone addressing the issues in the current release.
>
>
> On Mon, Feb 4, 2019 at 2:16 PM Edward Berger  wrote:
>
>> On each host you should check if systemctl status glusterd shows
>> "enabled" and whatever is the gluster events daemon. (I'm not logged in to
>> look right now)
>>
>> I'm not sure which part of gluster-wizard or hosted-engine engine
>> installation is supposed to do the enabling, but I've seen where incomplete
>> installs left it disabled.
>>
>> If the gluster servers haven't come up properly then there's no working
>> image for engine.
>> I had a situation where it was in a "paused" state and I had to run
>> "hosted-engine --vm-status" on possible nodes to find which one has VM in
>> paused state
>> then log into that node and run this command..
>>
>> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
>> resume HostedEngine
>>
>>
>> On Mon, Feb 4, 2019 at 3:23 PM feral  wrote:
>>
>>> On that note, have you also had issues with gluster not restarting on
>>> reboot, as well as all of the HA stuff failing on reboot after power loss?
>>> Thus far, the only way I've got the cluster to come back to life, is to
>>> manually restart glusterd on all nodes, then put the cluster back into "not
>>> mainentance" mode, and then manually starting the hosted-engine vm. This
>>> also fails after 2 or 3 power losses, even though the entire cluster is
>>> happy through the first 2.
>>>
>>> On Mon, Feb 4, 2019 at 12:21 PM feral  wrote:
>>>
 Yea, I've been able to build a config manually myself, but sure would
 be nice if the gdeploy worked (at all), as it takes an hour to deploy every
 test, and manually creating the conf, I have to be super conservative about
 my sizes, as I'm still not entirely sure what the deploy script actually
 does. IE: I've got 3 nodes with 1.2TB for the gluster each, but if I try to
 build a deployment to make use of more than 900GB, it fails as it's
 creating the thinpool with whatever size it wants.

 Just wanted to make sure I wasn't the only one having this issue. Given
 we know at least two people have noticed, who's the best to contact? I
 haven't been able to get any response from devs on any of (the myriad)  of
 issues with the 4.2.8 image.

>>>
Have you reported bugs?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
is a good generic place to start


> Also having a ton of strange issues with the hosted-engine vm deployment.

>>>
Can you elaborate and or report bugs?
https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt


>
 On Mon, Feb 4, 2019 at 11:59 AM Edward Berger 
 wrote:

> Yes, I had that issue with a 4.2.8 installation.
> I had to manually edit the "web-UI-generated" config to be anywhere
> close to what I wanted.
>

Please report a bug on this, with steps to reproduce.
https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt


>
> I'll attach an edited config as an example.
>
> On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:
>
>> New install of ovirt-node 4.2 (from iso). Setup each node with
>> networking and ssh keys, and use the hyperconverged gluster deployment
>> wizard. None of the user specified settings are ever reflected in the
>> gdeployConfig.conf.
>> Anyone running into this?
>>
>> --
>> _
>> Fact:
>> 1. Ninjas are mammals.
>> 2. Ninjas fight ALL the time.
>> 3. The purpose of the ninja is to flip out and kill people.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TF56FSFRNGCWEM4VJFOGKKAELJ3ID7NR/
>>
>

 --
 _
 Fact:
 1. Ninjas are mammals.
 2. Ninjas 

[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread feral
Glusterd was enabled, just crashes on boot. It's a known issue that was
resolved in 3.13, but ovirt-node only has 3.12.
The VM is at that point, paused. So I manually startup glusterd again and
ensure all nodes are online, and then resume the hosted engine. Sometimes
it works, sometimes not.

I think the issue here is that there are multiple issues with the current
ovirt-node release iso. I was able to get everything working with CentOS
base and installing ovirt manually. Still had the same problem with the
gluster wizard not using any of my settings, but after that, and ensuring I
restart all services after a reboot, things came to life.
Trying to discuss with devs, but so far no luck. I keep hearing that the
previous release of ovirt-node (iso) was just much smoother, but haven't
seen anyone addressing the issues in the current release.


On Mon, Feb 4, 2019 at 2:16 PM Edward Berger  wrote:

> On each host you should check if systemctl status glusterd shows "enabled"
> and whatever is the gluster events daemon. (I'm not logged in to look right
> now)
>
> I'm not sure which part of gluster-wizard or hosted-engine engine
> installation is supposed to do the enabling, but I've seen where incomplete
> installs left it disabled.
>
> If the gluster servers haven't come up properly then there's no working
> image for engine.
> I had a situation where it was in a "paused" state and I had to run
> "hosted-engine --vm-status" on possible nodes to find which one has VM in
> paused state
> then log into that node and run this command..
>
> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
> resume HostedEngine
>
>
> On Mon, Feb 4, 2019 at 3:23 PM feral  wrote:
>
>> On that note, have you also had issues with gluster not restarting on
>> reboot, as well as all of the HA stuff failing on reboot after power loss?
>> Thus far, the only way I've got the cluster to come back to life, is to
>> manually restart glusterd on all nodes, then put the cluster back into "not
>> mainentance" mode, and then manually starting the hosted-engine vm. This
>> also fails after 2 or 3 power losses, even though the entire cluster is
>> happy through the first 2.
>>
>> On Mon, Feb 4, 2019 at 12:21 PM feral  wrote:
>>
>>> Yea, I've been able to build a config manually myself, but sure would be
>>> nice if the gdeploy worked (at all), as it takes an hour to deploy every
>>> test, and manually creating the conf, I have to be super conservative about
>>> my sizes, as I'm still not entirely sure what the deploy script actually
>>> does. IE: I've got 3 nodes with 1.2TB for the gluster each, but if I try to
>>> build a deployment to make use of more than 900GB, it fails as it's
>>> creating the thinpool with whatever size it wants.
>>>
>>> Just wanted to make sure I wasn't the only one having this issue. Given
>>> we know at least two people have noticed, who's the best to contact? I
>>> haven't been able to get any response from devs on any of (the myriad)  of
>>> issues with the 4.2.8 image.
>>> Also having a ton of strange issues with the hosted-engine vm deployment.
>>>
>>> On Mon, Feb 4, 2019 at 11:59 AM Edward Berger 
>>> wrote:
>>>
 Yes, I had that issue with a 4.2.8 installation.
 I had to manually edit the "web-UI-generated" config to be anywhere
 close to what I wanted.

 I'll attach an edited config as an example.

 On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:

> New install of ovirt-node 4.2 (from iso). Setup each node with
> networking and ssh keys, and use the hyperconverged gluster deployment
> wizard. None of the user specified settings are ever reflected in the
> gdeployConfig.conf.
> Anyone running into this?
>
> --
> _
> Fact:
> 1. Ninjas are mammals.
> 2. Ninjas fight ALL the time.
> 3. The purpose of the ninja is to flip out and kill people.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TF56FSFRNGCWEM4VJFOGKKAELJ3ID7NR/
>

>>>
>>> --
>>> _
>>> Fact:
>>> 1. Ninjas are mammals.
>>> 2. Ninjas fight ALL the time.
>>> 3. The purpose of the ninja is to flip out and kill people.
>>>
>>
>>
>> --
>> _
>> Fact:
>> 1. Ninjas are mammals.
>> 2. Ninjas fight ALL the time.
>> 3. The purpose of the ninja is to flip out and kill people.
>>
>

-- 
_
Fact:
1. Ninjas are mammals.
2. Ninjas fight ALL the time.
3. The purpose of the ninja is to flip out and kill people.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread Edward Berger
On each host you should check if systemctl status glusterd shows "enabled"
and whatever is the gluster events daemon. (I'm not logged in to look right
now)
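
For example (assuming the events daemon's unit is glustereventsd, its usual
name), the quick check on each host would look something like:

systemctl is-enabled glusterd glustereventsd
# if either reports "disabled", enable and start it
systemctl enable --now glusterd glustereventsd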

I'm not sure which part of gluster-wizard or hosted-engine engine
installation is supposed to do the enabling, but I've seen where incomplete
installs left it disabled.

If the gluster servers haven't come up properly then there's no working
image for engine.
I had a situation where it was in a "paused" state and I had to run
"hosted-engine --vm-status" on possible nodes to find which one has VM in
paused state
then log into that node and run this command..

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
resume HostedEngine
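
Put together, the check-and-resume sequence is roughly (a sketch; run the
status command on any HA host, the virsh command on the host holding the
paused VM):

hosted-engine --vm-status
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine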


On Mon, Feb 4, 2019 at 3:23 PM feral  wrote:

> On that note, have you also had issues with gluster not restarting on
> reboot, as well as all of the HA stuff failing on reboot after power loss?
> Thus far, the only way I've got the cluster to come back to life, is to
> manually restart glusterd on all nodes, then put the cluster back into "not
> mainentance" mode, and then manually starting the hosted-engine vm. This
> also fails after 2 or 3 power losses, even though the entire cluster is
> happy through the first 2.
>
> On Mon, Feb 4, 2019 at 12:21 PM feral  wrote:
>
>> Yea, I've been able to build a config manually myself, but sure would be
>> nice if the gdeploy worked (at all), as it takes an hour to deploy every
>> test, and manually creating the conf, I have to be super conservative about
>> my sizes, as I'm still not entirely sure what the deploy script actually
>> does. IE: I've got 3 nodes with 1.2TB for the gluster each, but if I try to
>> build a deployment to make use of more than 900GB, it fails as it's
>> creating the thinpool with whatever size it wants.
>>
>> Just wanted to make sure I wasn't the only one having this issue. Given
>> we know at least two people have noticed, who's the best to contact? I
>> haven't been able to get any response from devs on any of (the myriad)  of
>> issues with the 4.2.8 image.
>> Also having a ton of strange issues with the hosted-engine vm deployment.
>>
>> On Mon, Feb 4, 2019 at 11:59 AM Edward Berger 
>> wrote:
>>
>>> Yes, I had that issue with a 4.2.8 installation.
>>> I had to manually edit the "web-UI-generated" config to be anywhere
>>> close to what I wanted.
>>>
>>> I'll attach an edited config as an example.
>>>
>>> On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:
>>>
 New install of ovirt-node 4.2 (from iso). Setup each node with
 networking and ssh keys, and use the hyperconverged gluster deployment
 wizard. None of the user specified settings are ever reflected in the
 gdeployConfig.conf.
 Anyone running into this?

 --
 _
 Fact:
 1. Ninjas are mammals.
 2. Ninjas fight ALL the time.
 3. The purpose of the ninja is to flip out and kill people.
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/TF56FSFRNGCWEM4VJFOGKKAELJ3ID7NR/

>>>
>>
>> --
>> _
>> Fact:
>> 1. Ninjas are mammals.
>> 2. Ninjas fight ALL the time.
>> 3. The purpose of the ninja is to flip out and kill people.
>>
>
>
> --
> _
> Fact:
> 1. Ninjas are mammals.
> 2. Ninjas fight ALL the time.
> 3. The purpose of the ninja is to flip out and kill people.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5WONWLWDFCWGL7HCEKZXEQVY76QTPOLS/


[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread feral
On that note, have you also had issues with gluster not restarting on
reboot, as well as all of the HA stuff failing on reboot after power loss?
Thus far, the only way I've got the cluster to come back to life, is to
manually restart glusterd on all nodes, then put the cluster back into "not
mainentance" mode, and then manually starting the hosted-engine vm. This
also fails after 2 or 3 power losses, even though the entire cluster is
happy through the first 2.
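
For reference, what I end up doing manually is roughly the following sketch
(host names are placeholders; glusterd gets restarted on every node first):

for h in node1 node2 node3; do ssh "$h" systemctl restart glusterd; done
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-start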

On Mon, Feb 4, 2019 at 12:21 PM feral  wrote:

> Yea, I've been able to build a config manually myself, but sure would be
> nice if the gdeploy worked (at all), as it takes an hour to deploy every
> test, and manually creating the conf, I have to be super conservative about
> my sizes, as I'm still not entirely sure what the deploy script actually
> does. IE: I've got 3 nodes with 1.2TB for the gluster each, but if I try to
> build a deployment to make use of more than 900GB, it fails as it's
> creating the thinpool with whatever size it wants.
>
> Just wanted to make sure I wasn't the only one having this issue. Given we
> know at least two people have noticed, who's the best to contact? I haven't
> been able to get any response from devs on any of (the myriad)  of issues
> with the 4.2.8 image.
> Also having a ton of strange issues with the hosted-engine vm deployment.
>
> On Mon, Feb 4, 2019 at 11:59 AM Edward Berger  wrote:
>
>> Yes, I had that issue with a 4.2.8 installation.
>> I had to manually edit the "web-UI-generated" config to be anywhere close
>> to what I wanted.
>>
>> I'll attach an edited config as an example.
>>
>> On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:
>>
>>> New install of ovirt-node 4.2 (from iso). Setup each node with
>>> networking and ssh keys, and use the hyperconverged gluster deployment
>>> wizard. None of the user specified settings are ever reflected in the
>>> gdeployConfig.conf.
>>> Anyone running into this?
>>>
>>> --
>>> _
>>> Fact:
>>> 1. Ninjas are mammals.
>>> 2. Ninjas fight ALL the time.
>>> 3. The purpose of the ninja is to flip out and kill people.
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TF56FSFRNGCWEM4VJFOGKKAELJ3ID7NR/
>>>
>>
>
> --
> _
> Fact:
> 1. Ninjas are mammals.
> 2. Ninjas fight ALL the time.
> 3. The purpose of the ninja is to flip out and kill people.
>


-- 
_
Fact:
1. Ninjas are mammals.
2. Ninjas fight ALL the time.
3. The purpose of the ninja is to flip out and kill people.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X4VYG2MCSJWCRDA4MGN27CICTTYDS7ZH/


[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread feral
Yea, I've been able to build a config manually myself, but sure would be
nice if the gdeploy worked (at all), as it takes an hour to deploy every
test, and manually creating the conf, I have to be super conservative about
my sizes, as I'm still not entirely sure what the deploy script actually
does. IE: I've got 3 nodes with 1.2TB for the gluster each, but if I try to
build a deployment to make use of more than 900GB, it fails as it's
creating the thinpool with whatever size it wants.
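
For reference, an explicitly sized thinpool section in the gdeploy config would
look something like the sketch below (format taken from the attached example
config further down the thread; the size and poolmetadatasize values are only
illustrative for a ~1.2TB disk):

[lv1:10.200.0.131]
action=create
poolname=gluster_thinpool_md1
ignore_lv_errors=no
vgname=gluster_vg_md1
lvtype=thinpool
size=1100GB
poolmetadatasize=5GB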

Just wanted to make sure I wasn't the only one having this issue. Given we
know at least two people have noticed, who's the best to contact? I haven't
been able to get any response from devs on any of (the myriad)  of issues
with the 4.2.8 image.
Also having a ton of strange issues with the hosted-engine vm deployment.

On Mon, Feb 4, 2019 at 11:59 AM Edward Berger  wrote:

> Yes, I had that issue with a 4.2.8 installation.
> I had to manually edit the "web-UI-generated" config to be anywhere close
> to what I wanted.
>
> I'll attach an edited config as an example.
>
> On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:
>
>> New install of ovirt-node 4.2 (from iso). Setup each node with networking
>> and ssh keys, and use the hyperconverged gluster deployment wizard. None of
>> the user specified settings are ever reflected in the gdeployConfig.conf.
>> Anyone running into this?
>>
>> --
>> _
>> Fact:
>> 1. Ninjas are mammals.
>> 2. Ninjas fight ALL the time.
>> 3. The purpose of the ninja is to flip out and kill people.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TF56FSFRNGCWEM4VJFOGKKAELJ3ID7NR/
>>
>

-- 
_
Fact:
1. Ninjas are mammals.
2. Ninjas fight ALL the time.
3. The purpose of the ninja is to flip out and kill people.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K2GZOMMG3SIVKOPT6O6ZCRYNHTRA6KCB/


[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-04 Thread Edward Berger
Yes, I had that issue with a 4.2.8 installation.
I had to manually edit the "web-UI-generated" config to be anywhere close
to what I wanted.

I'll attach an edited config as an example.

On Mon, Feb 4, 2019 at 2:51 PM feral  wrote:

> New install of ovirt-node 4.2 (from iso). Setup each node with networking
> and ssh keys, and use the hyperconverged gluster deployment wizard. None of
> the user specified settings are ever reflected in the gdeployConfig.conf.
> Anyone running into this?
>
> --
> _
> Fact:
> 1. Ninjas are mammals.
> 2. Ninjas fight ALL the time.
> 3. The purpose of the ninja is to flip out and kill people.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TF56FSFRNGCWEM4VJFOGKKAELJ3ID7NR/
>
#gdeploy configuration generated by cockpit-gluster plugin
# edited by Ed B. 1-24-2019

[hosts]
10.200.0.131
10.200.0.134
10.200.0.135

[script1:10.200.0.131]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md1 -h 10.200.0.131, 10.200.0.134, 10.200.0.135

[script1:10.200.0.134]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md1 -h 10.200.0.131, 10.200.0.134, 10.200.0.135

[script1:10.200.0.135]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md1 -h 10.200.0.131, 10.200.0.134, 10.200.0.135

[disktype]
raid10

[diskcount]
8

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

[pv1:10.200.0.131]
action=create
devices=md1
ignore_pv_errors=no

[pv1:10.200.0.134]
action=create
devices=md1
ignore_pv_errors=no

[pv1:10.200.0.135]
action=create
devices=md1
ignore_pv_errors=no

[vg1:10.200.0.131]
action=create
vgname=gluster_vg_md1
pvname=md1
ignore_vg_errors=no

[vg1:10.200.0.134]
action=create
vgname=gluster_vg_md1
pvname=md1
ignore_vg_errors=no

[vg1:10.200.0.135]
action=create
vgname=gluster_vg_md1
pvname=md1
ignore_vg_errors=no

[lv1:10.200.0.131]
action=create
poolname=gluster_thinpool_md1
ignore_lv_errors=no
vgname=gluster_vg_md1
lvtype=thinpool
size=12000GB
poolmetadatasize=16GB

[lv2:10.200.0.134]
action=create
poolname=gluster_thinpool_md1
ignore_lv_errors=no
vgname=gluster_vg_md1
lvtype=thinpool
size=12000GB
poolmetadatasize=16GB

[lv3:10.200.0.135]
action=create
poolname=gluster_thinpool_md1
ignore_lv_errors=no
vgname=gluster_vg_md1
lvtype=thinpool
size=12000GB
poolmetadatasize=16GB

[lv4:10.200.0.131]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/engine
size=2048GB
lvtype=thick

[lv5:10.200.0.131]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_md1
virtualsize=9800GB

[lv6:10.200.0.131]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_md1
virtualsize=2048GB

[lv7:10.200.0.134]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/engine
size=2048GB
lvtype=thick

[lv8:10.200.0.134]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_md1
virtualsize=9800GB

[lv9:10.200.0.134]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_md1
virtualsize=2048GB

[lv10:10.200.0.135]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/engine
size=2048GB
lvtype=thick

[lv11:10.200.0.135]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_md1
virtualsize=9800GB

[lv12:10.200.0.135]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_md1
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_md1
virtualsize=2048GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3