[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-04 Thread souvaliotimaria
Hello again, I've tried to heal the brick with latest-mtime, but I get the following: gluster volume heal engine split-brain latest-mtime /80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7 Healing
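
For reference, the standard CLI shapes for resolving a gluster split-brain, sketched from the commands quoted in this thread (the file path must be given relative to the volume root):

  gluster volume heal engine info split-brain
  gluster volume heal engine split-brain latest-mtime /<path-inside-volume>
  gluster volume heal engine split-brain source-brick <host>:/gluster_bricks/engine/engine /<path-inside-volume>

latest-mtime keeps the copy with the newest modification time; the source-brick variant instead picks one replica as the winner outright.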

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-04 Thread souvaliotimaria
I tried only the simple healing because I wasn't sure if I'd mess up the gluster more than it already is. I will try latest-mtime in a couple of hours because the system is a production system and I have to do it after office hours. I will come back with an update. Thank you very much for your

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-03 Thread Alex K
On Wed, Mar 3, 2021, 19:13 wrote: > Hello, > > Thank you very much for your reply. > > I get the following from the below gluster commands: > > [root@ov-no1 ~]# gluster volume heal engine info split-brain > Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine >

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-03 Thread souvaliotimaria
Hello, Thank you very much for your reply. I get the following from the below gluster commands: [root@ov-no1 ~]# gluster volume heal engine info split-brain Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine Status: Connected

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-02 Thread Alex K
On Mon, Mar 1, 2021, 15:20 wrote: > Hello again, > > I am back with a brief description of the situation I am in, and questions > about the recovery. > > oVirt environment: 4.3.5.2 Hyperconverged > GlusterFS: Replica 2 + Arbiter 1 > GlusterFS volumes: data, engine, vmstore > > The current

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-01 Thread Sandro Bonazzola
+Gobinda Das, +Satheesaran Sundaramoorthi, maybe you can help here. On Mon, Mar 1, 2021 at 14:20 wrote: > Hello again, > > I am back with a brief description of the situation I am in, and questions > about the recovery. > > oVirt environment: 4.3.5.2 Hyperconverged > GlusterFS:

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-01 Thread souvaliotimaria
Hello again, I am back with a brief description of the situation I am in, and questions about the recovery. oVirt environment: 4.3.5.2 Hyperconverged GlusterFS: Replica 2 + Arbiter 1 GlusterFS volumes: data, engine, vmstore The current situation is the following: - The Cluster is in Global

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2021-01-28 Thread Harry O
Ok guys, now my setup is like this: 2 x servers with 5 x 4TB 7200RPM drives in raidz1 and a 10G storage network (mtu 9000) in each - my gluster_bricks folders; 1 x SFF workstation with 2 x 50GB SSDs in ZFS mirror - my gluster_bricks folder for the arbiter. My gluster vol info looks like this: Volume

[ovirt-users] Re: Gluster release and oVirt 4.4

2021-01-25 Thread Sandro Bonazzola
On Tue, Jan 12, 2021 at 12:28 Sandro Bonazzola <sbona...@redhat.com> wrote: > > > On Tue, Jan 12, 2021 at 01:50 Simon Coter <simon.co...@oracle.com> wrote: > >> Hi, >> >> is there any plan to introduce Gluster-8 for hyper-converged architecture >> with oVirt

[ovirt-users] Re: Gluster Storage

2021-01-25 Thread Vojtech Juranek
On Monday, 25 January 2021 12:22:51 CET dkipla...@outlook.com wrote: > Hi Nikolov, > I have installed oVirt with default settings and it seems I can't find any > detailed steps to set up Gluster storage after that; if there is any link I > will appreciate it if you can share. please check

[ovirt-users] Re: Gluster Storage

2021-01-25 Thread dkiplagat
Hi Nikolov, I have installed oVirt with default settings and it seems I can't find any detailed steps to set up Gluster storage after that; if there is any link I will appreciate it if you can share.

[ovirt-users] Re: Gluster Storage

2021-01-25 Thread Strahil Nikolov via Users
Yes it can. Sent from Yahoo Mail on Android. Hi, I'm new to oVirt and I would like to know if I could deploy oVirt and be able to use it to deploy and manage Gluster storage.

[ovirt-users] Re: Gluster Hyperconverged fails with single disk partitioned

2021-01-21 Thread Shantur Rathore
I have found a workaround for this. The gluster.infra ansible role can exclude and reset lvm filters when the "gluster_infra_lvm" variable is defined. https://github.com/gluster/gluster-ansible-infra/blob/2522d3bd722be86139c57253a86336b2fec33964/roles/backend_setup/tasks/main.yml#L18 1. Go with gluster

[ovirt-users] Re: Gluster Hyperconverged fails with single disk partitioned

2021-01-21 Thread Shantur Rathore
Thanks Derek, I don't think that is the case as per documentation https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/ On Thu, Jan 21, 2021 at 12:17 AM Derek Atkins

[ovirt-users] Re: Gluster Hyperconverged fails with single disk partitioned

2021-01-20 Thread Derek Atkins
oVirt is expecting an LVM volume, not a raw partition. -derek Sent using my mobile device. Please excuse any typos. On January 20, 2021 7:13:45 PM Shantur Rathore wrote: Hi, I am trying to set up a single-host self-hosted hyperconverged setup with GlusterFS. I have custom partitioning where

[ovirt-users] Re: Gluster release and oVirt 4.4

2021-01-12 Thread Sandro Bonazzola
On Tue, Jan 12, 2021 at 01:50 Simon Coter wrote: > Hi, > > is there any plan to introduce Gluster-8 for hyper-converged architecture > with oVirt 4.4? > Just wondering because I can see Gluster-7 is declared EOL on Dec 11, 2020 > (https://www.gluster.org/release-schedule/) >

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-27 Thread wkmail
For that workload (using that particular test with dsync), that is what I saw on mounted gluster given the 7200 drives and simple 1G network. Next week I'll make a point of running your test with bonded ethernet to see if that improves things. Note: our testing uses the following:

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread Harry O
So my gluster performance results are expected?

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread wkmail
Well, I just reviewed my previous test and realized that I made a mistake on the gluster mount test. I had up-arrowed the shell history and used of="/test12.img" instead of "./test12", which meant I was testing on the bare-metal root partition even though I had 'cd'ed into the Gluster

[ovirt-users] Re: "gluster-ansible-roles is not installed on Host" error on Cockpit

2020-11-26 Thread garcialiang . anne
It works well. I needed to run # systemctl restart cockpit and # yum install gluster-ansible again.

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread Ritesh Chikatwar
On Thu, Nov 26, 2020 at 1:54 PM Harry O wrote: > I would love to see something similar to your performance numbers WK. > Here is my gluster volume options and info: > [root@ovirtn1 ~]# gluster v info vmstore > > Volume Name: vmstore > Type: Replicate > Volume ID: stuff > Status: Started >

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread Harry O
New results from centos vm on vmstore: [root@host2 ~]# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync 1+0 records in 1+0 records out 1073741824 bytes (1.1 GB) copied, 26.6353 s, 40.3 MB/s [root@host2 ~]# rm -rf /test12.img [root@host2 ~]# [root@host2 ~]# dd if=/dev/zero of=/test12.img

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread Harry O
I would love to see something similar to your performance numbers, WK. Here are my gluster volume options and info: [root@ovirtn1 ~]# gluster v info vmstore Volume Name: vmstore Type: Replicate Volume ID: stuff Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-25 Thread Strahil Nikolov via Users
The virt settings (highly recommended for virtual usage) enabled SHARDING. ONCE ENABLED, NEVER EVER DISABLE SHARDING !!! Best Regards, Strahil Nikolov At 16:34 -0800 on 25.11.2020 (Wed), WK wrote: > > No, that doesn't look right. > > > > I have a testbed cluster that has a
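
A quick way to confirm the shard settings on a volume (a sketch; the volume name "vmstore" is taken from the adjacent threads):

  gluster volume get vmstore features.shard
  gluster volume get vmstore features.shard-block-size

Disabling features.shard on a volume that already holds sharded VM images corrupts them, which is why it must never be switched off once enabled.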

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-25 Thread WK
No, that doesn't look right. I have a testbed cluster that has a single 1G network (1500 mtu). It is replica 2 + arbiter on top of 7200 rpm spinning drives formatted with XFS. This cluster runs Gluster 6.10 on Ubuntu 18 on some Dell i5-2xxx boxes that were lying around. It uses a stock

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-25 Thread Strahil Nikolov via Users
Any reason to use the dsync flag? Do you have a real workload to test with? Best Regards, Strahil Nikolov At 10:29 + on 25.11.2020 (Wed), Harry O wrote: > Unfortunately I didn't get any improvement by upgrading the network. > > Bare metal (zfs raid1 zvol): > dd if=/dev/zero

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-25 Thread Harry O
Unfortunately I didn't get any improvement by upgrading the network. Bare metal (zfs raid1 zvol): dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync 1+0 records in 1+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.6471 s, 68.6 MB/s Centos VM on gluster volume: dd
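
The dsync flag forces a flush on every write, so this measures synchronous write latency rather than raw throughput. A sketch of both variants (paths as used in the thread); the second, with conv=fdatasync, flushes once at the end and is usually closer to how VM images are actually written:

  dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync
  dd if=/dev/zero of=/gluster_bricks/test2.img bs=1M count=1024 conv=fdatasync
  rm -f /gluster_bricks/test1.img /gluster_bricks/test2.img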

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-23 Thread Harry O
Thanks for looking into this. I will try the stuff out.

[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-23 Thread WK
On 11/23/2020 5:56 AM, Harry O wrote: Hi, Can anyone help me with the performance on my 3-node gluster on zfs (it is set up with one arbiter)? The performance on the single VM I have on it (with engine) is 50% worse than a single bare-metal disk on writes. I have enabled "Optimize for virt

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
o de 2020 17:11:14 Subject: Re: [ovirt-users] Re: Gluster Domain Storage full Nope, officially oVirt supports only replica 3 (replica 3 arbiter 1) or replica 1 (which actually is a single-brick distributed) volumes. If you have issues related to the Gluster volume, like this case, the communit

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
and configure it again. Thanks José From: "Strahil Nikolov" To: supo...@logicworks.pt Cc: "users" Sent: Tuesday, 27 October 2020 17:11:14 Subject: Re: [ovirt-users] Re: Gluster Domain Storage full Nope, officially oVirt supports only replica 3 (replica 3 arb

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
c: "users" Enviadas: Terça-feira, 27 De Outubro de 2020 15:49:22 Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full You have exactly 90% used space. The Gluster's default protection value is exactly 10%: Option: cluster.min-free-disk Default Value: 10% Description: Percentage/S

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
12MB features.shard: on storage.owner-gid: 36 storage.owner-uid: 36 transport.address-family: inet nfs.disable: on From: "Strahil Nikolov" To: supo...@logicworks.pt Cc: "users" Sent: Tuesday, 27 October 2020 1:00:08 Subject: Re:

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
-uid: 36 transport.address-family: inet nfs.disable: on From: "Strahil Nikolov" To: supo...@logicworks.pt Cc: "users" Sent: Tuesday, 27 October 2020 1:00:08 Subject: Re: [ovirt-users] Re: Gluster Domain Storage full So what

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
storage.owner-uid: 36 transport.address-family: inet nfs.disable: on From: "Strahil Nikolov" To: supo...@logicworks.pt Cc: "users" Sent: Tuesday, 27 October 2020 1:00:08 Subject: Re: [ovirt-users] Re: Gluster Domain Storage full So what is the output of

[ovirt-users] Re: Gluster Domain Storage full

2020-10-26 Thread Strahil Nikolov via Users
remove all image's volumes: (u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',) Any idea? Thanks José From: "Strahil Nikolov" To: supo...@logicworks.pt Cc: "users" Sent: Tuesday, 22 September 2020 13:36:27

[ovirt-users] Re: Gluster volume not responding

2020-10-23 Thread Strahil Nikolov via Users
Most probably, but I have no clue. You can set the host into maintenance and then activate it, so the volume gets mounted properly. Best Regards, Strahil Nikolov On Friday, October 23, 2020 at 03:16:42 GMT+3, Simon Scott wrote: Hi Strahil, All networking configs have

[ovirt-users] Re: Gluster volume not responding

2020-10-22 Thread Simon Scott
Hi Strahil, All networking configs have been checked and are correct. I just looked at the gluster volume and noticed the mount option 'logbsize=256k' on two nodes but not on the third node. Status of volume: pltfm_data01 Brick : Brick

[ovirt-users] Re: Gluster Domain Storage full

2020-10-18 Thread Strahil Nikolov via Users
-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',) Any idea? Thanks José From: "Strahil Nikolov" To: supo...@logicworks.pt Cc: "users" Sent: Tuesday, 22 September 2020 13:36:27 Subject: Re: [ovirt-users] Re: Gluster Domain Stor

[ovirt-users] Re: Gluster Domain Storage full

2020-10-15 Thread suporte
idea? Thanks José From: "Strahil Nikolov" To: supo...@logicworks.pt Cc: "users" Sent: Tuesday, 22 September 2020 13:36:27 Subject: Re: [ovirt-users] Re: Gluster Domain Storage full Any option to extend the Gluster volume? Other approaches are quite

[ovirt-users] Re: Gluster volume not responding

2020-10-11 Thread Strahil Nikolov via Users
Hi Simon, Usually it is the network, but you need real-world data. I would open screen sessions and run ping continuously. Something like this: while true; do echo -n "$(date) "; timeout -s 9 1 ping -c 1 ovirt2 | grep icmp_seq; sleep 1; done | tee -a /tmp/icmp_log Are all systems in the same
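
The same one-liner expanded for readability (hostname "ovirt2" and the log path are from the quoted command):

  while true; do
      echo -n "$(date) "
      timeout -s 9 1 ping -c 1 ovirt2 | grep icmp_seq
      sleep 1
  done | tee -a /tmp/icmp_log

timeout kills any ping that hangs for more than a second, so an outage shows up in the log as dated lines with no icmp_seq reply.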

[ovirt-users] Re: Gluster volume not responding

2020-10-11 Thread Simon Scott
Thanks Strahil. I have found between 1 and 4 Gluster peer rpc-clnt-ping timer expired messages in the rhev-data-center-mnt-glusterSD-hostname-strg:_pltfm_data01.log on the storage network IP. Of the 6 hosts only 1 does not have these timeouts. Fencing has been disabled, but can you identify which

[ovirt-users] Re: Gluster volume not responding

2020-10-08 Thread Strahil Nikolov via Users
Hi Simon, I doubt the system needs tuning from a network perspective. I guess you can run some 'screen's which are pinging another system and logging everything to a file. Best Regards, Strahil Nikolov On Friday, October 9, 2020 at 01:05:22 GMT+3, Simon Scott wrote: Thanks

[ovirt-users] Re: Gluster volume not responding

2020-10-08 Thread Strahil Nikolov via Users
I have seen many "checks" that are "OK"... Have you checked that backups are not used over the same network? I would disable the power management (fencing), so I can find out what has happened to the systems. Best Regards, Strahil Nikolov On Thursday, October 8, 2020 at 22:43:34

[ovirt-users] Re: Gluster volume not responding

2020-10-08 Thread Strahil Nikolov via Users
>Every Monday and Wednesday morning there are gluster connectivity timeouts >>but all checks of the network and network configs are ok. Based on this I make the following conclusions: 1. The issue is recurring 2. You most probably have a network issue Have you checked the following: - are

[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-30 Thread penguin pages
I think this is the issue. When HCI deployed the nodes and consumed the drives to set up "engine", "data" and "vmstore", the GUI was set for the "storage" network via hostnames correctly. And I think, based on watching replication traffic, it is using the 10Gb "storage" network. CLI shows peers on that

[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-30 Thread Strahil Nikolov via Users
If you can do it from the CLI, use the CLI, as it offers far more control than the UI can provide. Usually I use the UI for monitoring and basic stuff like starting/stopping a brick or setting the 'virt' group via 'Optimize for Virt' (or whatever it was called). Best Regards, Strahil Nikolov

[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-30 Thread penguin pages
I have a network called "Storage" but not one called "gluster logical network". Front end 172.16.100.0/24 for mgmt and vms (1Gb) "ovirtmgmt". Back end 172.16.101.0/24 for storage (10Gb) "Storage". And yes.. I was never able to figure out how to use the UI to create bricks.. so I just was bad and

[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-28 Thread Gobinda Das
Hi Jeremey, I think the problem is that you have not created a "gluster logical network" from the oVirt manager. So when the bricks are listed, because you have only the mgmt network, they are mapped to that network. Could you please confirm whether you have a Gluster logical network created which maps to the 10G

[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-24 Thread Ritesh Chikatwar
Jeremy, This looks like a bug. Are you using an IPv4 or IPv6 network? Ritesh On Thu, Sep 24, 2020 at 12:14 PM Gobinda Das wrote: > But I think this only syncs gluster brick status, not the entire object. > Looks like this is a bug. > @Ritesh Chikatwar Could you please check what data > we are

[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-24 Thread Gobinda Das
But I think this only syncs gluster brick status, not the entire object. Looks like this is a bug. @Ritesh Chikatwar Could you please check what data we are getting from vdsm during the gluster sync job run? Are we saving the exact data or customizing anything? On Thu, Sep 24, 2020 at 11:01 AM Gobinda Das

[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-23 Thread Gobinda Das
We do have a gluster volume UI sync issue, and this is fixed in ovirt-4.4.2. BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1860775 On Wed, Sep 23, 2020 at 8:50 PM Jeremey Wise wrote: > > I just noticed when the HCI setup built the gluster engine / data / vmstore > volumes... it did correctly use the

[ovirt-users] Re: Gluster Name too long

2020-09-23 Thread Parth Dhanjal
Hey! This is a known bug targeted for oVirt 4.4.3. Firstly, multipath should ideally be used when you are not using a RHVH system. Then disabling the "blacklist gluster device" option will ensure that the ansible inventory file doesn't blacklist your device. In case you have a multipath and you

[ovirt-users] Re: Gluster Domain Storage full

2020-09-22 Thread Strahil Nikolov via Users
Any option to extend the Gluster volume? Other approaches are quite destructive. I guess you can obtain the VM's xml via virsh and then copy the disks to another pure-KVM host. Then you can start the VM while you are recovering from the situation. virsh -c
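
The quoted virsh command is truncated; a heavily hedged sketch of the idea follows. The SASL auth file path is the one shipped on hosted-engine hosts, and every other name is a placeholder:

  virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' dumpxml MyVM > MyVM.xml
  rsync -avP /rhev/data-center/mnt/glusterSD/<server>:_data/<sd-uuid>/images/<img-uuid>/ kvmhost:/var/lib/libvirt/images/MyVM/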

[ovirt-users] Re: Gluster Domain Storage full

2020-09-22 Thread suporte
Hello Strahil, I just set cluster.min-free-disk to 1%: # gluster volume info data Volume Name: data Type: Distribute Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 Status: Started Snapshot Count: 0 Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: node2.domain.com:/home/brick1

[ovirt-users] Re: Gluster Domain Storage full

2020-09-21 Thread Strahil Nikolov via Users
Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume option. You can power off the VM, then set cluster.min-free-disk to 1% and immediately move any of the VM's disks to another storage domain. Keep in mind that filling your bricks is bad, and if you eat that reserve,
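
A sketch of that sequence with the gluster CLI; the volume name "data" is taken from the follow-up message, and reset restores the default 10% once the disk has been moved:

  gluster volume get data cluster.min-free-disk
  gluster volume set data cluster.min-free-disk 1%
  # ...move the VM's disk to another storage domain, then:
  gluster volume reset data cluster.min-free-disk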

[ovirt-users] Re: Gluster quorum issue on 3-node HCI with extra 5-nodes as compute and storage nodes

2020-09-15 Thread Strahil Nikolov via Users
As I mentioned in the Gluster slack, start with providing the output of some CLI commands: gluster pool list, gluster peer status, gluster volume list, gluster volume status. Best Regards, Strahil Nikolov On Monday, September 14, 2020 at 16:24:04 GMT+3, tho...@hoberg.net wrote:

[ovirt-users] Re: Gluster quorum issue on 3-node HCI with extra 5-nodes as compute and storage nodes

2020-09-14 Thread Thomas Hoberg
On 14.09.2020 at 15:23, tho...@hoberg.net wrote: Sorry twice now: 1. It is a duplicate post, because the delay for posts to show up on the web site is ever longer (as I am responding via mail, the first post is still not shown...) 2. It seems to have been a wild goose chase: The gluster

[ovirt-users] Re: Gluster Name too long

2020-09-13 Thread Strahil Nikolov via Users
I would prefer entries in /dev/disk/by-id. Have you tried not specifying the "/dev/", like 'mapper/XXYYY'? Best Regards, Strahil Nikolov On Sunday, September 13, 2020 at 08:56:30 GMT+3, Jeremey Wise wrote: Deployment on a three-node cluster using the oVirt HCI wizard. I think
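
A minimal sketch of picking a stable device name to hand to the deployment instead of /dev/sdX (the example serial is hypothetical):

  ls -l /dev/disk/by-id/ | grep -v -- -part
  # e.g. use /dev/disk/by-id/wwn-0x5000c500a1b2c3d4 rather than /dev/sdb

by-id names are derived from hardware serials, so they survive reboots and device reordering.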

[ovirt-users] Re: Gluster Volume Type Distributed

2020-08-28 Thread Strahil Nikolov via Users
Yes it is. You can still install and set up Gluster all by yourself (lots of manual steps) and then use that as storage. Yet, replica 1 and replica 3 (or replica 3 arbiter 1) are the only ones supported in oVirt. Best Regards, Strahil Nikolov On Thursday, August 27, 2020 at 18:28:55

[ovirt-users] Re: Gluster Volume Type Distributed

2020-08-27 Thread thomas
Replicated is pretty much hard-coded into all the Ansible scripts for the HCI setup, so you can't do anything but replicated, and only choose between arbiter or full replica there. Distributed doesn't give you anything with three nodes, but with five, seven, nine or really high numbers it becomes quite

[ovirt-users] Re: Gluster Volume Type Distributed

2020-08-27 Thread Jason Brooks
On Thu, Aug 27, 2020 at 8:30 AM Dominique Deschênes wrote: > > Hi Everyone, > > I would like to use the Distributed volume type but the volume type is grayed out. I > can only use the replicate type. > > Is that normal? > > 3 oVirt servers 4.4.1-2020080418 > > Can I configure a replicate volume for the

[ovirt-users] Re: Gluster error in server log.

2020-06-05 Thread Strahil Nikolov via Users
Have you tried restarting the engine? Best Regards, Strahil Nikolov On Friday, June 5, 2020 at 11:56:37 GMT+3, Krist van Besien wrote: Hello all, On my oVirt HC cluster I constantly get the following kinds of errors: From /var/log/ovirt-engine/engine.log 2020-06-05

[ovirt-users] Re: [Gluster-users] Re: Single instance scaleup.

2020-06-05 Thread Krist van Besien
Hi all. I actually did something like that myself. I started out with a single-node HC cluster. I then added another node (and plan to add a third). This is what I did: 1) Set up the new node. Make sure that you have all dependencies. (In my case I started with a CentOS 8 machine, and

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-05-05 Thread Gobinda Das
I would recommend doing the cleanup from cockpit, or if you are using a CLI-based deployment then use "/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_cleanup.yml" with your inventory. Then try to deploy again. Cleanup takes care of everything. On Thu, Apr 30, 2020 at 9:59

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-30 Thread Strahil Nikolov
On April 30, 2020 12:31:59 PM GMT+03:00, Shareef Jalloq wrote: >Changing to /dev/mapper names seems to work but if anyone can tell me >why >the /dev/sd* naming is filtered that would help my understanding. > >On Thu, Apr 30, 2020 at 10:13 AM Shareef Jalloq >wrote: > >> Having no luck here.

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-30 Thread Shareef Jalloq
It's running now using the /dev/mapper/by-id name so I'll just stick with that and use this in the future. Thanks. On Thu, Apr 30, 2020 at 3:43 PM Strahil Nikolov wrote: > On April 29, 2020 8:21:58 PM GMT+03:00, Shareef Jalloq < > shar...@jalloq.co.uk> wrote: > >Actually, now I've fixed that,

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-30 Thread Strahil Nikolov
On April 29, 2020 8:21:58 PM GMT+03:00, Shareef Jalloq wrote: >Actually, now I've fixed that, indeed, the deployment now fails with an >lvm >filter error. I'm not familiar with filters but there aren't any >uncommented instances of 'filter' in /etc/lvm/lvm.conf. > > > >On Wed, Apr 29, 2020 at

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-30 Thread Strahil Nikolov
On April 29, 2020 7:42:55 PM GMT+03:00, Shareef Jalloq wrote: >Ah of course. I was assuming something had gone wrong with the >deployment >and it couldn't clean up its own mess. I'll raise a bug on the >documentation. > >Strahil, what are the other options to using /dev/sdxxx? > >On Wed, Apr

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-30 Thread Shareef Jalloq
Changing to /dev/mapper names seems to work but if anyone can tell me why the /dev/sd* naming is filtered that would help my understanding. On Thu, Apr 30, 2020 at 10:13 AM Shareef Jalloq wrote: > Having no luck here. I've had a read on the LVM config usage and there > were no filters enabled

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-30 Thread Shareef Jalloq
Having no luck here. I've had a read on the LVM config usage and there were no filters enabled in lvm.conf. I enabled debug logging and can see the default global filter being applied. I then manually forced the 'all' filter and 'pvcreate /dev/sdb' still tells me it is excluded by a filter. The

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-29 Thread Shareef Jalloq
Actually, now I've fixed that, indeed, the deployment now fails with an lvm filter error. I'm not familiar with filters but there aren't any uncommented instances of 'filter' in /etc/lvm/lvm.conf. On Wed, Apr 29, 2020 at 5:42 PM Shareef Jalloq wrote: > Ah of course. I was assuming something

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-29 Thread Shareef Jalloq
Ah of course. I was assuming something had gone wrong with the deployment and it couldn't clean up its own mess. I'll raise a bug on the documentation. Strahil, what are the other options to using /dev/sdxxx? On Wed, Apr 29, 2020 at 10:17 AM Strahil Nikolov wrote: > On April 29, 2020 2:39:05

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-29 Thread Strahil Nikolov
On April 29, 2020 2:39:05 AM GMT+03:00, Jayme wrote: >Has the drive been used before, it might have existing >partition/filesystem >on it? If you are sure it's fine to overwrite try running wipefs -a >/dev/sdb on all hosts. Also make sure there aren't any filters setup in >lvm.conf (there

[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-28 Thread Jayme
Has the drive been used before? It might have an existing partition/filesystem on it. If you are sure it's fine to overwrite, try running wipefs -a /dev/sdb on all hosts. Also make sure there aren't any filters set up in lvm.conf (there shouldn't be on a fresh install, but worth checking). On Tue, Apr
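
A sketch of those two checks; /dev/sdb is the device discussed in this thread, so verify with lsblk before wiping anything:

  lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/sdb    # confirm nothing is mounted or in use
  wipefs -a /dev/sdb                          # drop stale partition/filesystem signatures
  grep -nE '^[[:space:]]*(global_)?filter' /etc/lvm/lvm.conf    # any active lvm filters?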

[ovirt-users] Re: Gluster problems with new disk and device name change and overlap

2020-04-07 Thread Gianluca Cecchi
On Tue, Apr 7, 2020 at 12:22 PM Strahil Nikolov wrote: > > > The simplest way would be to say that 'blacklisting everything in > multipath.conf' will solve your problems. > In reality it is a little bit more complicated. > > Your arguments are interesting, Strahil; to be dug into more on my part. In

[ovirt-users] Re: Gluster problems with new disk and device name change and overlap

2020-04-07 Thread Strahil Nikolov
On April 7, 2020 10:45:18 AM GMT+03:00, Gianluca Cecchi wrote: >Hi, >I have configured a single host HCI environment through the GUI wizard >in >4.3.9. >Initial setup has this layout of disks, as seen by the operating >system: >/dev/sda -> for ovirt-node-ng OS >/dev/nvme0n1 --> for gluster,

[ovirt-users] Re: Gluster permissions HCI

2020-03-25 Thread Gianluca Cecchi
On Wed, Mar 25, 2020 at 8:32 PM Strahil Nikolov wrote: > Hello All, > can someone assist me with some issue. > > Could you check the ownership of some folders for me ? > > 1. ls -l /rhev/data-center/mnt/glusterSD > 2. ls -l /rhev/data-center/mnt/glusterSD/_ > 3. ls -l

[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Strahil Nikolov
On March 19, 2020 1:30:25 PM GMT+02:00, Jayme wrote: >It applies a profile for the virt group. You can get more info here: >https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/app-virt_profile > >Or you can

[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Jayme
It applies a profile for the virt group. You can get more info here: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/app-virt_profile Or you can look at the file directly, it’s basically just a list of

[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Christian Reiss
Yeah, that button scares me. What does it do, precisely? On 19/03/2020 11:18, Jayme wrote: At the very least you should make sure to apply the gluster virt profile to vm volumes. This can also be done using optimize for virt store in the ovirt GUI -- with kind regards, mit freundlichen

[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Jayme
At the very least you should make sure to apply the gluster virt profile to vm volumes. This can also be done using optimize for virt store in the ovirt GUI On Thu, Mar 19, 2020 at 6:54 AM Christian Reiss wrote: > Hey folks, > > quick question. For running Gluster / oVirt I found several

[ovirt-users] Re: [Gluster-users] Image File Owner change Situation. (root:root)

2020-03-13 Thread Olaf Buitelaar
Hi Robert, there were several issues with ownership in oVirt; for example, see https://bugzilla.redhat.com/show_bug.cgi?id=1666795 Maybe you're encountering these issues during the upgrade process. Also, if you're using gluster as backend storage, there might be some permission issues in the 6.7

[ovirt-users] Re: [Gluster-users] ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-07 Thread Paolo Margara
On 07/02/20 14:51, Strahil Nikolov wrote: > On February 7, 2020 10:30:19 AM GMT+02:00, Christian Reiss > wrote: >> Hey, >> >> the ACL issue did not occur during an upgrade. I did upgrade the broken >> >> cluster to get rid of the error. The version was oVirt node 4.3.7 with >> the shipped

[ovirt-users] Re: [Gluster-users] ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-07 Thread Strahil Nikolov
On February 7, 2020 10:30:19 AM GMT+02:00, Christian Reiss wrote: >Hey, > >the ACL issue did not occur during an upgrade. I did upgrade the broken > >cluster to get rid of the error. The version was oVirt node 4.3.7 with >the shipped gluster version. I upgraded to 4.3.8 with Gluster 6.7 and

[ovirt-users] Re: [Gluster-users] ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-07 Thread Christian Reiss
Hey, the ACL issue did not occur during an upgrade. I did upgrade the broken cluster to get rid of the error. The version was oVirt node 4.3.7 with the shipped gluster version. I upgraded to 4.3.8 with Gluster 6.7 and let's see how production ready this really is. -Chris. On 07/02/2020

[ovirt-users] Re: [Gluster-users] ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-06 Thread Paolo Margara
Hi, this is interesting; does this always happen with gluster 6.6 or only in certain cases? I ask this because I have two oVirt clusters with gluster, both with gluster v6.6; in one case I've upgraded from 6.5 to 6.6 like Strahil, and I haven't hit this bug. When upgrading my clusters I follow exactly

[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread Strahil Nikolov
On February 1, 2020 12:00:43 PM GMT+02:00, a...@pioner.kz wrote: >Hi! >I did it with a working Gluster. Just copy the missing files from one of the >hosts and start a heal on the volume after this. >But the main thing I don't understand is why this is happening with this >issue. I saw this many times after maintenance

[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread Strahil Nikolov
On February 1, 2020 10:53:59 AM GMT+02:00, Christian Reiss wrote: >Hey Strahil, > >thanks for your answer. > >On 01/02/2020 08:18, Strahil Nikolov wrote: >> There is an active thread in gluster-users, so it will be nice to >mention this there. >> >> About the sync, you can find the paths via:

[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread asm
Hi! I did it with a working Gluster. Just copy the missing files from one of the hosts and start a heal on the volume after this. But the main thing I don't understand is why this is happening with this issue. I saw this many times after maintenance of one host, for example.

[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread Christian Reiss
Hey Strahil, thanks for your answer. On 01/02/2020 08:18, Strahil Nikolov wrote: There is an active thread in gluster-users, so it will be nice to mention this there. About the sync, you can find the paths via: 1. Mount: mount -t glusterfs -o aux-gfid-mount vm1:test /mnt/testvol 2. Find
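
Continuing the quoted recipe as a hedged sketch ("vm1:test" and the gfid are placeholders from the example): once the volume is mounted with aux-gfid-mount, the pathinfo virtual xattr resolves a gfid to its backend brick paths:

  mount -t glusterfs -o aux-gfid-mount vm1:test /mnt/testvol
  getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/<gfid>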

[ovirt-users] Re: Gluster Heal Issue

2020-01-31 Thread Strahil Nikolov
On February 1, 2020 1:34:30 AM GMT+02:00, Jayme wrote: >I have run into this exact issue before and resolved it by simply >syncing >over the missing files and running a heal on the volume (can take a >little >time to correct) > > >On Fri, Jan 31, 2020 at 7:05 PM Christian Reiss > >wrote: > >> Hey

[ovirt-users] Re: Gluster Heal Issue

2020-01-31 Thread Jayme
I have run into this exact issue before and resolved it by simply syncing over the missing files and running a heal on the volume (can take a little time to correct) On Fri, Jan 31, 2020 at 7:05 PM Christian Reiss wrote: > Hey folks, > > in our production setup with 3 nodes (HCI) we took one

[ovirt-users] Re: Gluster storage options

2020-01-23 Thread Jayme
Yes you should install node on separate boot drives and add your additional drives for gluster. You do not have to do anything with gluster beforehand. The ovirt installer will prepare the drives and do all the needed gluster configuration with gdeploy On Thu, Jan 23, 2020 at 4:32 AM Shareef

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-21 Thread Stefan Wolf
Hello >I hope you plan to add another brick or arbiter, as you are now prone to >split-brain and other issues. Yes, I will add another one, but I think this is not a problem. I've set cluster.server-quorum-ratio to 51% to avoid the split-brain problem. Of course I know I just have failure

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-21 Thread Strahil Nikolov
On January 21, 2020 7:11:19 AM GMT+02:00, Stefan Wolf wrote: >Hi Strahil, > >yes, it is a replica 4 set. >I've tried to stop and start every gluster server, >and I've rebooted every server. > >Or should I remove the brick and add it again? > >bye >stefan

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Stefan Wolf
Hi Strahil, yes, it is a replica 4 set. I've tried to stop and start every gluster server, and I've rebooted every server. Or should I remove the brick and add it again? bye stefan

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Strahil Nikolov
On January 20, 2020 8:15:03 PM GMT+02:00, Stefan Wolf wrote: >yes, I've already tried a full heal a week ago. > >How do I perform a manual heal? > >I only have this gfid: > >Status: Connected >Number of entries: 868 > >I've tried to heal it with: >[root@kvm10 ~]# gluster

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Stefan Wolf
Yes, I've already tried a full heal a week ago. How do I perform a manual heal? I only have this gfid: Status: Connected Number of entries: 868 I've tried to heal it with: [root@kvm10 ~]# gluster volume heal data split-brain latest-mtime gfid:c2b47c5c-89b6-49ac-bf10-1733dd8f0902

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Jayme
I would try running a full heal first and give it some time to see if it clears up, i.e. gluster volume heal <volname> full. If that doesn't work, you could try stat on every file to trigger healing, doing something like this: find /fuse-mountpoint -iname '*' -exec stat {} \; On Mon, Jan 20, 2020 at 12:16
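
Put together as a sketch, with <volname> standing in for the volume name:

  gluster volume heal <volname> full
  gluster volume heal <volname> info                   # watch the pending-entry count drop
  find /fuse-mountpoint -exec stat {} \; > /dev/null   # force a lookup (and heal check) on every file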
