[ovirt-users] Re: Cannot copy or move disks

2020-11-16 Thread Sahina Bose
a bug on gluster side. > From: "Nir Soffer" > To: supo...@logicworks.pt > Cc: "users", "Sahina Bose", "Krutika Dhananjay", "Nisan, Tal" > Sent: Sunday, 1

[ovirt-users] Re: CLI for HCI setup

2020-09-23 Thread Sahina Bose
There are ansible playbooks that you can use - https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansible-deployment On Thu, Sep 3, 2020 at 12:26 AM Michael Thomas wrote: > Is there a CLI for setting up a hyperconverged environment with > glusterfs? The docs that I've found
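The referenced repository ships a ready-made hyperconverged deployment playbook. A minimal sketch of invoking it follows; the inventory and playbook file names are taken from the repo's samples and may change between versions, so verify them against the checked-out tree:

```shell
# Clone gluster-ansible and run the hyperconverged deployment playbook.
# File names below follow the repository's samples -- check the repo's
# README for your version before running.
git clone https://github.com/gluster/gluster-ansible.git
cd gluster-ansible/playbooks/hc-ansible-deployment
# Edit the sample inventory (gluster_inventory.yml) with your host names,
# block devices and volume layout, then run the deployment:
ansible-playbook -i gluster_inventory.yml hc_deployment.yml
```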

[ovirt-users] Re: Multiple Gluster ACL issues with oVirt

2020-06-21 Thread Sahina Bose
Thanks Strahil. Adding Sas and Ravi for their inputs. On Sun, 21 Jun 2020 at 6:11 PM, Strahil Nikolov wrote: > Hello Sahina, Sandro, > > I have noticed that the ACL issue with Gluster ( > https://github.com/gluster/glusterfs/issues/876) is happening to > multiple oVirt users (so far at least

[ovirt-users] Re: Hyperconverged setup - storage architecture - scaling

2020-01-09 Thread Sahina Bose
ted. You would need to create a custom ansible playbook that sets up the gluster volumes and add the hosts to the existing engine. (or do the creation of cluster and gluster volumes via the engine UI) > Please let me know. > > Thank You > > C Williams > > On Tue, Jan 29, 2019 a

[ovirt-users] Re: ovirt 4.3.7 + Gluster in hyperconverged (production design)

2020-01-02 Thread Sahina Bose
On Tue, Dec 24, 2019 at 3:26 AM wrote: > Hi, > After playing a bit with oVirt and Gluster in our pre-production > environment for the last year, we have decided to move forward with a our > production design using ovirt 4.3.7 + Gluster in a hyperconverged setup. > > For this we are looking get

[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-11 Thread Sahina Bose
+Sunny Kumar On Thu, Dec 12, 2019 at 6:33 AM Strahil wrote: > Hi Adrian, > > Have you checked the passwordless rsync between master and slave volume > nodes ? > > Best Regards, > Strahil NikolovOn Dec 11, 2019 22:36, adrianquint...@gmail.com wrote: > > > > Hi, > > I am trying to setup
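The passwordless-access check suggested above can be done with a few commands; the volume and host names below (mastervol, slavehost, slavevol) are placeholders for your own setup:

```shell
# 1. Passwordless SSH from a master node to the slave node must succeed
#    without a password prompt:
ssh root@slavehost 'echo ok'
# 2. Check the geo-replication session status:
gluster volume geo-replication mastervol slavehost::slavevol status
# 3. If the session keys are missing, regenerate and push them:
gluster system:: execute gsec_create
gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
```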

[ovirt-users] Re: Beginning oVirt / Hyperconverged

2019-11-25 Thread Sahina Bose
On Tue, Nov 12, 2019 at 3:27 AM wrote: > I guess this is a little late now... but: > > I wanted to do the same, especially because the additional servers (beyond > 3) would lower the relative storage cost overhead when using erasure > coding, but it's 'off the trodden path' and I cannot

[ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing

2019-11-20 Thread Sahina Bose
e what is it complaining about > this time. > > Best Regards, > Strahil Nikolov > > >Hi Sahina, > > >I have a strange situation: > >1. When I try to access the file via 'sudo -u vdsm dd if=disk of=test > bs=4M' the command fails on aprox 60MB. > >2. If I

[ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing

2019-11-18 Thread Sahina Bose
9981f67b/images/94f763e9-fd96-4bee-a6b2-31af841a918b/5b1d3113-5cca-4582-9029-634b16338a2f. Was it reset after upgrade? Are you able to copy this file to a different location and try running a VM with this image? Any errors in the mount log of gluster1:_data__fast volume? > Best Regards, > St

[ovirt-users] Re: Gluster & Hyper Converged setup

2019-11-18 Thread Sahina Bose
You will need to edit the configuration to provide the correct device during installation. Check the output of lsblk. On Mon, Nov 18, 2019 at 5:19 PM wrote: > Logical Volumes Create new Logical Volume > 1.35 TiB Pool for Thin Volumes pool00 > 1 GiB ext4 File System /dev/onn_ovirt1/home > 1.32 TiB Inactive volume
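A sketch of the device check (device names will differ per host):

```shell
# List block devices with size, filesystem and mount point to pick the
# disk that should back the gluster bricks:
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
# A device with a non-empty FSTYPE or an active mount must be wiped
# before the deployment can use it (destructive -- verify the device
# name first):
# wipefs -a /dev/sdX
```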

[ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing

2019-11-18 Thread Sahina Bose
On Mon, Nov 18, 2019 at 2:58 PM Sandro Bonazzola wrote: > +Sahina Bose +Gobinda Das +Nir > Soffer +Tal Nisan can you please > help here? > > > On Sun, 17 Nov 2019 at 16:00, Strahil Nikolov < > hunter86...@yahoo.com> wrote: > >> So far,

[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-17 Thread Sahina Bose
safe to do this on a live >> system or do all VMs need to be brought down first? How does resetting the >> brick fix the issue with gluster peers using the server hostnames which are >> attached to IPs on the ovirtmanagement network? >> >> On Thu, Oct 17, 2019 at

[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-17 Thread Sahina Bose
On Wed, Oct 16, 2019 at 8:38 PM Jayme wrote: > Is there a way to fix this on a hci deployment which is already in > operation? I do have a separate gluster network which is chosen for > migration and gluster network but when I originally deployed I used just > one set of host names which

[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-17 Thread Sahina Bose
"Host host1.example.com cannot access the Storage Domain(s) attached to the Data Center Default-DC1." Can you check the vdsm logs from this host to check why the storage domains are not attached? On Thu, Oct 17, 2019 at 9:43 AM Strahil wrote: > Ssh to host and check the status of : >

[ovirt-users] Re: is it possible to add host on which ovirt is installed ?

2019-10-09 Thread Sahina Bose
On Tue, Oct 8, 2019 at 4:18 PM wrote: > Hi. > > I am confused now. > > removed all previous configuration and follwed instruction: > > > https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html > > Than I read: > > Deploying on oVirt Node based Hosts >

[ovirt-users] Re: Change hostname of physical hosts under an oVirt and Gluster combination

2019-09-24 Thread Sahina Bose
9-09-24 09:07:27,386-04 WARN > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler6) [51432e7f] Could not add brick > 'mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02' to volume > 'f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0' - server uuid > 'ad7d956a-a121-

[ovirt-users] Re: Change hostname of physical hosts under an oVirt and Gluster combination

2019-09-24 Thread Sahina Bose
On Mon, Sep 23, 2019 at 6:34 AM TomK wrote: > Or in other words, how do I remove all resources, clusters, datacenters, > hosts and readd them under different names? > Does this answer your question -

[ovirt-users] Re: Creating Bricks via the REST API

2019-09-24 Thread Sahina Bose
On Mon, Sep 23, 2019 at 9:08 PM Amit Bawer wrote: > +Sahina Bose > Thanks for your clarification Julian, creating new Gluster brick from > ovirt's REST API is not currently supported, only from UI. > That's right. We do have a bug tracking this - https://bugzilla.redhat.com/sho

[ovirt-users] Re: Gluster: Bricks remove failed

2019-09-17 Thread Sahina Bose
Ok, so you want to reduce the replica count to 2? In this case there will not be any data migration. +Ravishankar Narayanankutty On Thu, Sep 12, 2019 at 2:12 PM wrote: > Hi Sahina. > It was > > gluster volume status engine > Status of volume: engine > Gluster process
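A sketch of such a replica-count reduction (the volume name comes from the thread; the brick host/path is a placeholder). Because the remaining bricks already hold full copies, no data migration is triggered:

```shell
# Remove one brick per replica set, dropping the volume from replica 3
# to replica 2; "force" is required when reducing the replica count:
gluster volume remove-brick engine replica 2 \
    host3:/gluster_bricks/engine/engine force
# Confirm the new brick layout afterwards:
gluster volume info engine
```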

[ovirt-users] Re: Gluster: Bricks remove failed

2019-09-11 Thread Sahina Bose
Can you provide the output of gluster volume info before the remove-brick was done. It's not clear if you were reducing the replica count or removing a replica subvolume. On Wed, Sep 11, 2019 at 4:23 PM wrote: > Hi. > There is an ovirt-hosted-engine on gluster volume engine > Replicate replica

[ovirt-users] Re: ovirt with glusterfs data domain - very slow writing speed on Windows server virtual machine

2019-09-11 Thread Sahina Bose
More details on the tests you ran, and also gluster profile data while you were running the tests can help analyse. Similar to my request to another user on thread, you can also help provide some feedback on the data gathering ansible scripts by trying out

[ovirt-users] Re: Does cluster upgrade wait for heal before proceeding to next host?

2019-09-11 Thread Sahina Bose
unclear to me if the update >>> continues to run while the volumes are healing and resumes when they are >>> done. There doesn’t seem to be any indication in the ui (unless I’m >>> mistaken) >>> >> >> Adding @Martin Perina , @Sahina Bose >>and

[ovirt-users] Re: oVirt 4.3.5 WARN no gluster network found in cluster

2019-09-11 Thread Sahina Bose
On Mon, Aug 26, 2019 at 10:57 PM wrote: > Hi, > Yes I have glusternet shows the role of "migration" and "gluster". > Hosts show 1 network connected to management and the other to logical > network "glusternet" > What does "ping host1.example.com" return? Does it return the IP address of the

[ovirt-users] Re: oVirt 4.3.5 glusterfs 6.3 performance tunning

2019-09-11 Thread Sahina Bose
+Krutika Dhananjay +Sachidananda URS Adrian, can you provide more details on the performance issue you're seeing? We do have some scripts to collect data to analyse. Perhaps you can run this and provide us the details in a bug. The ansible scripts to do this is still under review -

[ovirt-users] Re: Procedure to replace out out of three hyperconverged nodes

2019-09-10 Thread Sahina Bose
On Mon, Aug 19, 2019 at 10:55 PM wrote: > On my silent Atom based three node Hyperconverged journey I hit upon a > snag: Evidently they are too slow for Ansible. > > The Gluster storage part went all great and perfect on fresh oVirt node > images that I had configured to leave an empty partition

[ovirt-users] Re: gluster

2019-09-10 Thread Sahina Bose
On Mon, Sep 9, 2019 at 7:22 PM Kaustav Majumder wrote: > Well almost. Create a new cluster and (Check) Enable Gluster Service . > Upon adding new hosts to this cluster (via ui) gluster will be > automatically configured on them. > > > On Mon, Sep 9, 2019 at 6:56 PM wrote: > >> So doing this

[ovirt-users] Re: LiveStoreageMigration failed

2019-07-22 Thread Sahina Bose
On Thu, Jul 18, 2019 at 9:02 PM Strahil Nikolov wrote: > According to this one (GlusterFS Storage Domain — oVirt) libgfapi support > is disabled by default due to incompatibility with Live Storage Migration. > VM can not be migrated to the GlusterFS storage domain. > I guess someone from the

[ovirt-users] Re: hosted engine, Hyperconverged recovery

2019-07-17 Thread Sahina Bose
On Wed, Jul 17, 2019 at 7:50 PM Doron Fediuck wrote: > Adding relevant folks. > Sahina? > > On Thu, 11 Jul 2019 at 00:24, William Kwan wrote: > >> Hi, >> >> I need some direction to make sure we won't make more mistake in >> recovering a 3-node self hosted engine with Gluster. >> >> Someone

[ovirt-users] Re: [Gluster-users] Update 4.2.8 --> 4.3.5

2019-07-16 Thread Sahina Bose
On Thu, Jul 11, 2019 at 11:15 PM Strahil wrote: > I'm adding gluster-users as I'm not sure if you can go gluster v3 -> v6 > directly. > > Theoretically speaking, there should be no problem - but I don't know > if you will observe any issues. > > @Gluster-users, > > Can someone share their

[ovirt-users] Re: oVirt hyerconverged more than 12 node

2019-07-10 Thread Sahina Bose
On Mon, Jul 1, 2019 at 2:06 AM Strahil Nikolov wrote: > I suspect this limitation is due to support obligations that come with a > subscription from Red Hat. > In oVirt , you don't have such agreement and thus no support even with a > 3-node cluster. > > This is correct. Gluster scales up and

[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-07-10 Thread Sahina Bose
Did you manage to get past this error? On Sat, Jun 29, 2019 at 3:32 AM Edward Berger wrote: > Maybe there is something already on the disk from before? > gluster setup wants it completely blank, no detectable filesystem, no > raid, etc. > see what is there with fdisk -l, see what PVs exist with

[ovirt-users] Re: HE deployment failing

2019-07-08 Thread Sahina Bose
On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal wrote: > Hey! > > I used cockpit to deploy gluster. > And the problem seems to be with > 10.xx.xx.xx9:/engine 50G 3.6G 47G 8% > /rhev/data-center/mnt/glusterSD/10.70.41.139:_engine > > Engine volume has 500G available

[ovirt-users] Re: Issues when Creating a Gluster Brick with Cache

2019-06-24 Thread Sahina Bose
On Mon, Jun 24, 2019 at 11:39 AM Robert Crawford < robert.crawford4.14...@gmail.com> wrote: > Hey Everyone, > > When in the server manager and creating a brick from the storage device > the brick will fail whenever i attach a cache device to it. > > I'm not really sure why? It just says unknown.

[ovirt-users] Re: Gluster 3.12 vs Engine 4.3.5

2019-06-24 Thread Sahina Bose
On Thu, Jun 20, 2019 at 11:17 AM Николаев Алексей < alexeynikolaev.p...@yandex.ru> wrote: > Hi community! > > Is it possible to continue using independent gluster 3.12 as a data domain > with engine 4.3.5? > Yes. ___ > Users mailing list --

[ovirt-users] Re: Feature Request: oVirt to warn when VDO is getting full

2019-06-04 Thread Sahina Bose
On Tue, Jun 4, 2019 at 3:26 PM Strahil wrote: > Hello All, > > I would like to ask how many of you use VDO before asking the oVirt Devs > to assess a feature in oVirt for monitoring the size of the VDOs on > hyperconverged systems. > > I think such warning, will save a lot of headaches, but it

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-21 Thread Sahina Bose
+Sachidananda URS On Wed, May 22, 2019 at 1:14 AM wrote: > I'm sorry, i'm still working on my linux knowledge, here is the output of > my blkid on one of the servers: > > /dev/nvme0n1: PTTYPE="dos" > /dev/nvme1n1: PTTYPE="dos" > /dev/mapper/eui.6479a71892882020: PTTYPE="dos" >

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sahina Bose
Monday, 20 May 2019, 14:56:11 GMT+3, Sahina Bose < > sab...@redhat.com> wrote: > > > To scale existing volumes - you need to add bricks and run rebalance on > the gluster volume so that data is correctly redistributed as Alex > mentioned. > We do support expandi

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sahina Bose
"}, > "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5} > failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': > u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a > filter.\n",

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Sahina Bose
To scale existing volumes - you need to add bricks and run rebalance on the gluster volume so that data is correctly redistributed as Alex mentioned. We do support expanding existing volumes as the bug https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed As to procedure to expand
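The add-bricks-and-rebalance procedure can be sketched as follows (volume and brick names are placeholders; bricks must be added in multiples of the replica count):

```shell
# Add one new replica set (three bricks for a replica 3 volume):
gluster volume add-brick data \
    host4:/gluster_bricks/data/data \
    host5:/gluster_bricks/data/data \
    host6:/gluster_bricks/data/data
# Redistribute existing data across the enlarged volume:
gluster volume rebalance data start
gluster volume rebalance data status   # poll until "completed"
```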

[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-20 Thread Sahina Bose
On Sun, May 19, 2019 at 4:11 PM Strahil wrote: > I would recommend you to postpone your upgrade if you use gluster > (without the API) , as creation of virtual disks via UI on gluster is > having issues - only preallocated can be created. > +Gobinda Das +Satheesaran Sundaramoorthi Sas, can

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-20 Thread Sahina Bose
Adding Sachi On Thu, May 9, 2019 at 2:01 AM wrote: > This only started to happen with oVirt node 4.3, 4.2 didn't have issue. > Since I updated to 4.3, every reboot the host goes into emergency mode. > First few times this happened I re-installed O/S from scratch, but after > some digging I

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Sahina Bose
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer wrote: > On Fri, May 17, 2019 at 7:54 AM Gobinda Das wrote: > >> From RHHI side default we are setting below volume options: >> >> { group: 'virt', >> storage.owner-uid: '36', >> storage.owner-gid: '36', >> network.ping-timeout: '30',
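The defaults quoted above are applied as gluster volume options; a sketch against a volume named data (a placeholder):

```shell
# Apply the virt profile plus the ownership/timeout settings listed
# in the thread:
gluster volume set data group virt
gluster volume set data storage.owner-uid 36
gluster volume set data storage.owner-gid 36
gluster volume set data network.ping-timeout 30
# Review the resulting option set:
gluster volume info data
```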

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Sahina Bose
On Fri, 17 May 2019 at 2:13 AM, Strahil Nikolov wrote: > > >This may be another issue. This command works only for storage with 512 > bytes sector size. > > >Hyperconverge systems may use VDO, and it must be configured in > compatibility mode to >support > >512 bytes sector size. > > >I'm not

[ovirt-users] Re: Gluster service failure

2019-05-14 Thread Sahina Bose
This looks like a bug displaying status in the UI (similar to https://bugzilla.redhat.com/show_bug.cgi?id=1381175 ?). Could you also attach engine logs from the timeframe that you notice the issue in UI. Do all nodes in the cluster return peer status as Connected? (Engine logs will help determine

[ovirt-users] Re: hosted-engine and GlusterFS on Vlan help

2019-05-14 Thread Sahina Bose
On Tue, Oct 4, 2016 at 9:51 PM, Hanson wrote: > Running iperf3 between node1 & node2, I can achieve almost 10gbps without > ever going out to the gateway... > > So switching between port to port on the switch is working properly on the > vlan. > > This must be a problem in the gluster settings?

[ovirt-users] Re: 4.0 - 2nd node fails on deploy

2019-05-14 Thread Sahina Bose
2818>), in state , has disconnected from glusterd. > Thanks > > > > *From:* Sahina Bose [mailto:sab...@redhat.com] > *Sent:* 05 October 2016 08:11 > *To:* Jason Jeffrey ; gluster-us...@gluster.org; > Ravishankar Narayanankutty > *Cc:* Simone Tiraboschi ;

[ovirt-users] Re: 4.0 - 2nd node fails on deploy

2019-05-14 Thread Sahina Bose
[Adding gluster-users ML] The brick logs are filled with errors : [2016-10-05 19:30:28.659061] E [MSGID: 113077] [posix-handle.c:309:posix_handle_pump] 0-engine-posix: malformed internal link

[ovirt-users] Re: Ovirt nodeNG RDMA support?

2019-05-07 Thread Sahina Bose
Rafi, can you take a look? On Mon, May 6, 2019 at 10:29 PM wrote: > > this is what I see in the logs when I try to add RDMA: > > [2019-05-06 16:54:50.305297] I [MSGID: 106521] > [glusterd-op-sm.c:2953:glusterd_op_set_volume] 0-management: changing > transport-type for volume storage_ssd to

[ovirt-users] Re: Ovirt nodeNG RDMA support?

2019-05-02 Thread Sahina Bose
On Tue, Apr 30, 2019 at 3:35 PM Sandro Bonazzola wrote: > > > On Fri, 26 Apr 2019 at 01:56, wrote: > >> When I was able to load CentOS as a host OS, was able to use RDMA, but >> it seems like the 4.3x branch of nodeNG is missing RDMA support? I enabled >> rdma and started

[ovirt-users] Re: Gluster and few iSCSI Datastores in one Data Center

2019-04-23 Thread Sahina Bose
There seem to be communication issues between vdsmd and supervdsmd services. Can you check the status of both on the nodes? Perhaps try restarting these On Tue, Apr 23, 2019 at 6:01 PM wrote: > > I decided to add another cluster to the existing data center (Enable Virt > Service + Enable
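The status check and restart can be sketched as follows (run on each affected node; supervdsmd is restarted first because vdsmd depends on it):

```shell
# Check both services, then restart them in dependency order:
systemctl status supervdsmd vdsmd
systemctl restart supervdsmd vdsmd
# Look for communication errors after the restart:
journalctl -u vdsmd --since "1 hour ago" | grep -i error
```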

[ovirt-users] Re: Gluster suggestions

2019-04-16 Thread Sahina Bose
a32 c3 d3 > > //Magnus > > > ____ > From: Sahina Bose > Sent: 16 April 2019 10:55 > To: Magnus Isaksson > Cc: users > Subject: Re: [ovirt-users] Gluster suggestions > > On Tue, Apr 16, 2019 at 1:42 PM wrote: > > > > Hello > > > > I would l

[ovirt-users] Re: hosted engine does not start

2019-04-16 Thread Sahina Bose
-only > > mount: wrong fs type, bad option, bad superblock on /dev/loop0, > >missing codepage or helper program, or other error > > > > In some cases useful info is found in syslog - try > >dmesg | tail or so. > > [root@kvm360 /]# > > >

[ovirt-users] Re: hosted engine does not start

2019-04-16 Thread Sahina Bose
On Tue, Apr 16, 2019 at 1:07 AM Stefan Wolf wrote: > > Hello all, > > > > after a powerloss the hosted engine won’t start up anymore. > > I ‘ve the current ovirt installed. > > Storage is glusterfs und it is up and running > > > > It is trying to start up hosted engine but it does not work, but I

[ovirt-users] Re: Gluster suggestions

2019-04-16 Thread Sahina Bose
On Tue, Apr 16, 2019 at 1:42 PM wrote: > > Hello > > I would like some suggestions on what type of solution with Gluster i should > use. > > I have 4 hosts with 3 disks each, i want to user as much space as possible > but also some redundancy, like raid5 or 6 > The 4 hosts are running oVirt on

[ovirt-users] Re: Gluster arbiter volume storage domain - change

2019-04-16 Thread Sahina Bose
On Tue, Apr 16, 2019 at 1:39 PM Leo David wrote: > > Hi Everyone, > I have wrongly configured the main gluster volume ( 12 identical 1tb ssd > disks, replica 3 distributed-replicated, across 6 nodes - 2 per node ) with > arbiter one. > Oviously I am wasting storage space in this scenario with

[ovirt-users] Re: Disk Locked

2019-04-10 Thread Sahina Bose
On Wed, Apr 3, 2019 at 5:33 PM Николаев Алексей wrote: > > Hi comminuty! > > I have issue like this https://bugzilla.redhat.com/show_bug.cgi?id=1506373 on > ovirt-engine 4.2.8.2-1.el7. > > Description of problem: > VM disk left in LOCKED state when added. > > Version-Release number of selected

[ovirt-users] Re: change self-hosted engine storage domain

2019-04-10 Thread Sahina Bose
On Wed, Apr 10, 2019 at 8:51 PM wrote: > > Is it possible to change a self-hosted engine's storage domain's settings? > > I setup a 3-node ovirt + gluster cluster with a dedicated 'engine' storage > domain. I can see through the administration portal that the engine storage > domain is using a

[ovirt-users] Re: Second host fail to activate (hosted-engine)

2019-04-10 Thread Sahina Bose
On Wed, Apr 10, 2019 at 1:45 AM Ricardo Alonso wrote: > > After installing the second host via the web gui (4.3.2.1-1.el7), it fails to > activate telling that wasn't possible to connect to the storage pool default > (glusterfs). Those are the logs: > > vdsm.log > > 2019-04-09 15:54:07,409-0400

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Sahina Bose
> Any calls to "START connectStorageServer" in vdsm.log? > Should I perform an "engine-cleanup", delete lvms from Cockpit and start it > all over ? I doubt if that would resolve issue since you did clean up files from the mount. > Did anyone succes

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Sahina Bose
Is it possible you have not cleared the gluster volume between installs? What's the corresponding error in vdsm.log? On Tue, Apr 2, 2019 at 4:07 PM Leo David wrote: > > And there it is the last lines on the ansible_create_storage_domain log: > > 2019-04-02 10:53:49,139+0100 DEBUG var changed:

[ovirt-users] Re: oVirt Survey 2019 results

2019-04-02 Thread Sahina Bose
On Tue, Apr 2, 2019 at 12:07 PM Sandro Bonazzola wrote: > Thanks to the 143 participants to oVirt Survey 2019! > The survey is now closed and results are publicly available at > https://bit.ly/2JYlI7U > We'll analyze collected data in order to improve oVirt thanks to your > feedback. > > As a

[ovirt-users] Re: Actual size bigger than virtual size

2019-03-29 Thread Sahina Bose
On Fri, Mar 29, 2019 at 6:02 PM wrote: > > Hi, > > Any help? > > Thanks > > José > > > From: supo...@logicworks.pt > To: "users" > Sent: Wednesday, March 27, 2019 11:21:41 AM > Subject: Actual size bigger than virtual size > > Hi, > > I have an all in one ovirt

[ovirt-users] Re: CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES

2019-03-29 Thread Sahina Bose
On Fri, Mar 29, 2019 at 3:29 AM Arsène Gschwind wrote: > On Thu, 2019-03-28 at 12:18 +, Arsène Gschwind wrote: > > On Wed, 2019-03-27 at 12:19 +0530, Sahina Bose wrote: > > On Wed, Mar 27, 2019 at 1:59 AM Arsène Gschwind > > < > > arsene.gschw...@unibas.ch >

[ovirt-users] Re: Gluster VM image Resync Time

2019-03-27 Thread Sahina Bose
On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair wrote: > > Hi Strahil, > > Ok. Looks like sharding should make the resyncs faster. > > I searched for more info on it, but couldn't find much. > I believe it will still have to compare each shard to determine whether there > are any changes that need

[ovirt-users] Re: CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES

2019-03-27 Thread Sahina Bose
On Wed, Mar 27, 2019 at 1:59 AM Arsène Gschwind wrote: > > On Tue, 2019-03-26 at 18:09 +0530, Sahina Bose wrote: > > On Tue, Mar 26, 2019 at 3:00 PM Kaustav Majumder < > > kmaju...@redhat.com > > > wrote: > > > Let me rephrase > > >

[ovirt-users] Re: VM disk corruption with LSM on Gluster

2019-03-26 Thread Sahina Bose
+Krutika Dhananjay and gluster ml On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote: > > Hello, > > tl;dr We have disk corruption when doing live storage migration on oVirt > 4.2 with gluster 3.12.15. Any idea why? > > We have a 3-node oVirt cluster that is both compute and gluster-storage.

[ovirt-users] Re: CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES

2019-03-26 Thread Sahina Bose

[ovirt-users] Re: OVirt Gluster Fail

2019-03-25 Thread Sahina Bose
You will first need to restore connectivity between the gluster peers for heal to work. So restart glusterd on all hosts as Strahil mentioned, and check if "gluster peer status" returns the other nodes as connected. If not, please check the glusterd log to see what's causing the issue. Share the
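The recovery steps above, sketched as commands:

```shell
# On every host, restart the gluster management daemon:
systemctl restart glusterd
# Each peer should report "Peer in Cluster (Connected)":
gluster peer status
# If a peer stays disconnected, check the glusterd log for the cause:
less /var/log/glusterfs/glusterd.log
```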

[ovirt-users] Re: VM has been paused due to a storage I/O error

2019-03-19 Thread Sahina Bose
Can you check the gluster mount logs to check if there are storage related errors. For the VM that's paused, check which storage domain and gluster volume the OS disk is on. For instance, if the name of the gluster volume is data, check the logs under
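On a hyperconverged host the fuse mount log is named after the mount path. Assuming a volume named data served from a host named gluster1 (both placeholders), checking it looks like:

```shell
# The mount log file name is derived from the mount point under
# /rhev/data-center/mnt/glusterSD/ -- the exact name varies with the
# server and volume names:
LOG=/var/log/glusterfs/rhev-data-center-mnt-glusterSD-gluster1:_data.log
# Scan for error/warning lines around the time the VM paused:
grep -E '\] (E|W) \[' "$LOG" | tail -50
```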

[ovirt-users] Re: vdsm should decouple with managed glusterfs services

2019-03-18 Thread Sahina Bose
Perina - do you know if this is possible? > Regards, > Levin > > > On 18/3/2019, 17:40, "Sahina Bose" wrote: > > On Sun, Mar 17, 2019 at 12:56 PM wrote: > > > > Hi, I had experience two time of 3-node hyper-converged 4.2.8 ovirt > cluster t

[ovirt-users] Re: vdsm should decouple with managed glusterfs services

2019-03-18 Thread Sahina Bose
On Sun, Mar 17, 2019 at 12:56 PM wrote: > > Hi, I had experience two time of 3-node hyper-converged 4.2.8 ovirt cluster > total outage due to vdsm reactivate the unresponsive node, and cause the > multiple glusterfs daemon restart. As a result, all VM was paused and some of > disk image was

[ovirt-users] Re: alertMessage, [Warning! Low confirmed free space on gluster volume M2Stick1]

2019-03-11 Thread Sahina Bose
+Denis Chapligin On Wed, Mar 6, 2019 at 2:03 PM Robert O'Kane wrote: > > Hello, > > With my first "in Ovirt" made Gluster Storage I am getting some annoying > Warnings. > > On the Hypervisor(s) engine.log : > > 2019-03-05 13:07:45,281+01 INFO >

[ovirt-users] Re: "gluster-ansible-roles is not installed on Host" error on Cockpit

2019-03-11 Thread Sahina Bose
We do have an updated rpm gluster-ansible-roles. +Sachidananda URS On Sun, Mar 10, 2019 at 7:00 PM Hesham Ahmed wrote: > > sac-gluster-ansible is there and is enabled: > > [sac-gluster-ansible] > enabled=1 > name = Copr repo for gluster-ansible owned by sac > baseurl = >

[ovirt-users] Re: gdeployConfig.conf errors (Hyperconverged setup using GUI)

2019-03-11 Thread Sahina Bose
+Gobinda Das +Dhanjal Parth On Mon, Mar 11, 2019 at 1:42 AM wrote: > > Hello I am trying to run a Hyperconverged setup "COnfigure gluster storage > and ovirt hosted engine", however I get the following error > >

[ovirt-users] Re: Advice around ovirt 4.3 / gluster 5.x

2019-03-04 Thread Sahina Bose
Adding gluster ml On Mon, Mar 4, 2019 at 7:17 AM Guillaume Pavese wrote: > > I got that too so upgraded to gluster6-rc0 but still, this morning one engine > brick is down : > > [2019-03-04 01:33:22.492206] E [MSGID: 101191] > [event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to

[ovirt-users] Re: error: "cannot set lock, no free lockspace" (localized)

2019-02-28 Thread Sahina Bose
irtError ('virDomainCreateWithFlags() failed', dom=self) libvirtError: Failed to acquire lock: no space left on device [2019-02-28 On Fri, Mar 1, 2019 at 1:08 PM Mike Lykov wrote: > > On 01.03.2019 9:51, Sahina Bose wrote: > > Any errors in vdsm.log or gluster mo

[ovirt-users] Re: error: "cannot set lock, no free lockspace" (localized)

2019-02-28 Thread Sahina Bose
Any errors in vdsm.log or gluster mount log for this volume? On Wed, Feb 27, 2019 at 1:07 PM Mike Lykov wrote: > > > Hi all. I have a HCI setup, glusterfs 3.12, ovirt 4.2.7, 4 nodes > > Yesterday I see 3 VMs detected by engine as "not responding" (it is marked as > HA VMs) > (it all located on

[ovirt-users] Re: [oVirt 4.3.1-RC2 Test Day] Hyperconverged HE Deployment

2019-02-28 Thread Sahina Bose
On Wed, Feb 27, 2019 at 4:06 PM Guillaume Pavese wrote: > > Hi, I tried again today to deploy HE on Gluster with oVirt 4.3.1 RC2 on a > clean Nested environment (no precedent deploy attempts to clean before...). > > Gluster was deployed without problem from cockpit. > I then snapshoted my vms

[ovirt-users] Re: VM poor iops

2019-02-28 Thread Sahina Bose
r > accessing the volume. > Please correct me if this is wrong. > Have a nice day, In single instance deployments too, the option ensures all writes (with o-direct flag) are flushed to disk and not cached. > > Leo > > > On Tue, Feb 26, 2019, 08:24 Sahina Bose wrote: >>

[ovirt-users] Re: VM poor iops

2019-02-25 Thread Sahina Bose
On Fri, Sep 14, 2018 at 3:35 PM Paolo Margara wrote: > Hi, > > but performance.strict-o-direct is not one of the options enabled by > gdeploy during installation because it's supposed to give some sort of > benefit? > See
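The option under discussion can be inspected and toggled per volume (the volume name data is a placeholder):

```shell
# Show the current value, then enable strict O_DIRECT handling:
gluster volume get data performance.strict-o-direct
gluster volume set data performance.strict-o-direct on
# network.remote-dio is typically disabled alongside it:
gluster volume set data network.remote-dio disable
```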

[ovirt-users] Re: Problem deploying self-hosted engine on ovirt 4.3.0

2019-02-25 Thread Sahina Bose
On Mon, Feb 25, 2019 at 2:51 PM matteo fedeli wrote: > > oh, ovirt-engine-appliance where I can found? ovirt-engine-appliance rpm is present in the oVirt repo (https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/x86_64/ for 4.3) > > At the end I wait in total 3 hour (is not too much?) and the

[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-02-25 Thread Sahina Bose
On Thu, Feb 21, 2019 at 8:47 PM wrote: > > Hello, > I have a 3 node ovirt 4.3 cluster that I've setup and using gluster > (Hyperconverged setup) > I need to increase the amount of storage and compute so I added a 4th host > (server4.example.com) if it is possible to expand the amount of bricks

[ovirt-users] Re: Stuck completing last step of 4.3 upgrade

2019-02-25 Thread Sahina Bose
u can log a bug with these logs, that would be great - please use https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS to log the bug. > > Jason aka Tristam > > > On Feb 14, 2019, at 1:12 AM, Sahina Bose wrote: > > On Thu, Feb 14, 2019 at 2:39 AM Ron Jerome wrote: &

[ovirt-users] Re: Ovirt 4.2.8.. Possible bug?

2019-02-24 Thread Sahina Bose
This looks like a bug if you selected JBOD but it is not reflected in the generated gdeploy config file. +Gobinda Das ? On Fri, Feb 22, 2019 at 8:59 PM Sandro Bonazzola wrote: > > > > On Fri 22 Feb 2019, 15:31 matteo fedeli wrote: >> >> sorry, but I don't understand... > > > > I added to the

[ovirt-users] Re: Gluster setup Problem

2019-02-24 Thread Sahina Bose
+Gobinda Das +Dhanjal Parth can you please check? On Fri, Feb 22, 2019 at 11:52 PM Matthew Roth wrote: > > I have 3 servers, Node 1 is 3tb /dev/sda, Node 2, 3tb /dev/sdb, node3 3tb > /dev/sdb > > I start the process for gluster deployment. I change node 1 to sda and all > the other ones to

[ovirt-users] Re: Ovirt Node 4.3 Gluster Install adding bricks

2019-02-24 Thread Sahina Bose
On Thu, Feb 21, 2019 at 7:47 PM wrote: > > Sorry if this seems simple, but trial and error is how I learn. So the > basics. I installed Node 4.3 on 3 hosts, and was following the setup for > self-hosted engine. The setup fails when detecting peers and indicates that > they are already part of

[ovirt-users] Re: Ovirt Glusterfs

2019-02-24 Thread Sahina Bose
The options set on the gluster volume are tuned for data consistency and reliability. Some of the changes that you can try: 1. use gfapi. However, this will not provide HA if the server used to access the gluster volume is down (the backup-volfile-servers are not used in the case of gfapi). You
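To make the trade-off concrete: with a FUSE mount, the remaining replica hosts are passed as backup volfile servers so the mount survives the primary server going down; this is the failover that gfapi access loses. A sketch (host and volume names are hypothetical) of building that option string:

```shell
# Join the remaining replica hosts into the FUSE mount option that
# provides volfile-server failover. Host/volume names are hypothetical.
backup_opts() {
  local joined="" h
  for h in "$@"; do
    joined="${joined:+$joined:}$h"
  done
  echo "backup-volfile-servers=$joined"
}

echo "mount -t glusterfs -o $(backup_opts host2 host3) host1:/data /mnt/data"
```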

[ovirt-users] Re: Problem deploying self-hosted engine on ovirt 4.3.0

2019-02-24 Thread Sahina Bose
You can interrupt and continue from hosted engine setup the next time. Please download the ovirt-engine-appliance rpm prior to install to speed things up. On Mon, Feb 25, 2019 at 4:56 AM matteo fedeli wrote: > > after several attempts I managed to install and deploy the ovel gluster > but

[ovirt-users] Re: Ovirt Cluster completely unstable

2019-02-14 Thread Sahina Bose
On Thu, Feb 14, 2019 at 8:24 PM Jayme wrote: > https://bugzilla.redhat.com/show_bug.cgi?id=1677160 doesn't seem relevant > to me? Is that the correct link? > > Like I mentioned in a previous email I'm also having problems with Gluster > bricks going offline since upgrading to oVirt 4.3
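For readers hitting the same bricks-offline symptom after an upgrade, the usual first checks are the volume status and, if bricks show as down, a forced start. A dry-run sketch (the volume name "data" is hypothetical; commands are echoed, not executed):

```shell
# Dry-run sketch for the gluster-bricks-offline symptom: commands are
# echoed, not executed, and the volume name "data" is hypothetical.
vol=data
echo "gluster volume status $vol"        # bricks listed as N/A are down
echo "gluster volume start $vol force"   # restarts only bricks that are not running
```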

[ovirt-users] Re: Problem installing hyperconverged setup

2019-02-14 Thread Sahina Bose
You can edit per host in the cockpit UI if you have non-uniform hosts. If you still run into issues, please paste the generated gdeploy config file to check On Wed, Feb 13, 2019 at 8:54 PM Edward Berger wrote: > > I don't believe the wizard followed your wishes if it comes up with 1005gb > for

[ovirt-users] Re: Ovirt Cluster completely unstable

2019-02-14 Thread Sahina Bose
On Thu, Feb 14, 2019 at 4:56 AM wrote: > > I'm abandoning my production ovirt cluster due to instability. I have a 7 > host cluster running about 300 vms and have been for over a year. It has > become unstable over the past three days. I have random hosts, both compute > and storage

[ovirt-users] Re: Stuck completing last step of 4.3 upgrade

2019-02-13 Thread Sahina Bose
On Thu, Feb 14, 2019 at 2:39 AM Ron Jerome wrote: > > > > > > Can you be more specific? What things did you see, and did you report bugs? > > I've got this one: https://bugzilla.redhat.com/show_bug.cgi?id=1649054 > and this one: https://bugzilla.redhat.com/show_bug.cgi?id=1651246 > and I've got

[ovirt-users] Re: ovirt-4.3 hyperconverged deployment - no option for "disc count" for JBOD

2019-02-12 Thread Sahina Bose
d 300MBps, the VM is pushing 280MBps average. Both using XFS. > > So why is ovirt's guest disc performance (native and gluster) so poor? Why > is it consistently giving me about 1/10th to 1/80th of the hosts disc > throughput? > > > > > On Mon, Feb 11, 2019 at 5:01 AM Sahina Bose w

[ovirt-users] Re: Error starting hosted engine

2019-02-11 Thread Sahina Bose
On Tue, Feb 12, 2019 at 10:51 AM Endre Karlson wrote: > It's an upgrade from 4.2.x < latest version of 4.2 series. I upgraded by > adding the 4.3 repo and doing the steps on the upgrade guide page > https://www.ovirt.org/release/4.3.0/#centos--rhel > Seems like you're running into

[ovirt-users] Re: ovirt-4.3 hyperconverged deployment - no option for "disc count" for JBOD

2019-02-11 Thread Sahina Bose
On Wed, Feb 6, 2019 at 10:45 PM feral wrote: > On that note, this was already reported several times a few months back, > but apparently was fixed in gdeploy-2.0.2-29.el7rhgs.noarch. I'm guessing > ovirt-node-4.3 just hasn't updated to that version yet? > +Niels de Vos +Sachidananda URS Any

[ovirt-users] Re: glusterevents daemon fails after upgrade from 4.2.8 to 4.3

2019-02-10 Thread Sahina Bose
On Fri, Feb 8, 2019 at 11:31 AM Aravinda wrote: > > Looks like a Python 3 porting issue. I will work on the fix soon. Thanks. Do we have a bug in gluster to track this? > > > On Thu, 2019-02-07 at 13:27 +0530, Sahina Bose wrote: > > +Aravinda Vishwanathapura Krishna Murth

[ovirt-users] Re: "Volume Option cluster.granular-entry-heal=enable could not be set" when using "Optimize for Virt store"

2019-02-07 Thread Sahina Bose
On Wed, Feb 6, 2019 at 4:17 PM Jorick Astrego wrote: > Hi again, > > When using the option "Optimize for Virt store", I get the following error: > > 2019-02-06 10:25:02,353+01 ERROR > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >
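When the engine fails to apply the option, it can also be applied by hand. A dry-run sketch (the volume name is hypothetical; commands are echoed, not executed, and the dedicated heal sub-command performs extra safety checks on some Gluster versions):

```shell
# Dry-run sketch: apply granular-entry-heal manually when the engine
# fails to. Volume name "data" is hypothetical; commands are echoed.
vol=data
# The dedicated heal sub-command form:
echo "gluster volume heal $vol granular-entry-heal enable"
# The generic volume-set form, which is what the engine attempts:
echo "gluster volume set $vol cluster.granular-entry-heal enable"
```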

[ovirt-users] Re: glusterevents daemon fails after upgrade from 4.2.8 to 4.3

2019-02-07 Thread Sahina Bose
+Aravinda Vishwanathapura Krishna Murthy can you take a look? oVirt 4.3 has Gluster 5. On Wed, Feb 6, 2019 at 7:35 PM Edward Berger wrote: > > I upgraded some nodes from 4.2.8 to 4.3 and now when I look at the cockpit > "services" > tab I see a red failure for Gluster Events Notifier and clicking

[ovirt-users] Re: unable to migrate hosted-engine to oVirt 4.3 updated nodes

2019-02-06 Thread Sahina Bose
On Thu, Feb 7, 2019 at 8:57 AM Edward Berger wrote: > > I'm seeing migration failures for the hosted-engine VM from a 4.2.8 node to a > 4.3.0 node, so I can complete the node upgrades. You may be running into https://bugzilla.redhat.com/show_bug.cgi?id=1641798. Can you check the version of libvirt

[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-06 Thread Sahina Bose
+Sachidananda URS to review user request about systemd mount files On Tue, Feb 5, 2019 at 10:22 PM feral wrote: > > Using SystemD makes way more sense to me. I was just trying to use ovirt-node > as it was ... intended? Mainly because I have no idea how it all works yet, > so I've been trying
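For the systemd-mount approach discussed in this thread, a brick mount can be expressed as a unit file instead of an fstab line. A hedged sketch (device path, mount point, and filesystem options are all assumptions; note that systemd requires the unit file name to encode the mount point, with slashes replaced by dashes):

```ini
# /etc/systemd/system/gluster_bricks-engine.mount  (hypothetical paths)
# The file name must match the mount point:
#   /gluster_bricks/engine -> gluster_bricks-engine.mount
[Unit]
Description=Gluster brick for the engine volume
Before=glusterd.service

[Mount]
What=/dev/mapper/gluster_vg-engine_lv
Where=/gluster_bricks/engine
Type=xfs
Options=inode64,noatime

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable --now gluster_bricks-engine.mount` mounts the brick at boot before glusterd starts.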
