Re: [ovirt-users] oVirt-Gluster Hyperconvergence: Graceful shutdown and startup

2018-01-04 Thread Bernhard Seidl
On 04.01.2018 at 11:56, Sahina Bose wrote: > > > On Wed, Jan 3, 2018 at 9:27 PM, Bernhard Seidl > wrote: > > Hi all and a happy new year, > > I am just testing oVirt 4.2 using a three node gluster hyperconvergence > and self

Re: [ovirt-users] oVirt-Gluster Hyperconvergence: Graceful shutdown and startup

2018-01-04 Thread Sahina Bose
On Wed, Jan 3, 2018 at 9:27 PM, Bernhard Seidl wrote: > Hi all and a happy new year, > > I am just testing oVirt 4.2 using a three node gluster hyperconvergence > and self hosted engine setup. How should this setup be shut down and > started again? Here is what I tried an

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-08 Thread Abi Askushi
Filed the *Bug 1459855* Alex On Thu, Jun 8, 2017 at 1:16 PM, Abi Askushi wrote: > Hi Denis, > > Ok I will file a bug for this. > I am not sure if I will be able to provide troubleshooting info for much > longer as I

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-08 Thread Abi Askushi
Hi Denis, Ok I will file a bug for this. I am not sure if I will be able to provide troubleshooting info for much longer as I have already put forward the replacement of the disks with 512 ones. Alex On Thu, Jun 8, 2017 at 11:48 AM, Denis Chaplygin wrote: > Hello Alex, > > > On

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-08 Thread Denis Chaplygin
Hello Alex, On Wed, Jun 7, 2017 at 11:39 AM, Abi Askushi wrote: > Hi Sahina, > > Did you have the chance to check the logs and have any idea how this may > be addressed? > It seems to be a VDSM issue, as VDSM uses direct IO (and it actually calls dd) and assumes that

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-07 Thread Abi Askushi
Hi Sahina, Did you have the chance to check the logs and have any idea how this may be addressed? Thanx, Alex On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose wrote: > Can we have the gluster mount logs and brick logs to check if it's the > same issue? > > On Sun, Jun 4, 2017

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-06 Thread Abi Askushi
Just to note that the mentioned logs below are from the dd with bs=512, which were failing. Attached the full logs from mount and brick. Alex On Tue, Jun 6, 2017 at 3:18 PM, Abi Askushi wrote: > Hi Krutika, > > My comments inline. > > Also attached the strace of: >

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-06 Thread Abi Askushi
Hi Krutika, My comments inline. Also attached the strace of: strace -y -ff -o /root/512-trace-on-root.log dd if=/dev/zero of=/mnt/test2.img oflag=direct bs=512 count=1 and of: strace -y -ff -o /root/4096-trace-on-root.log dd if=/dev/zero of=/mnt/test2.img oflag=direct bs=4096 count=16 I have

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-06 Thread Krutika Dhananjay
I stand corrected. Just realised the strace command I gave was wrong. Here's what you would actually need to execute: strace -y -ff -o -Krutika On Tue, Jun 6, 2017 at 3:20 PM, Krutika Dhananjay wrote: > OK. > > So for the 'Transport endpoint is not connected' issue,

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-06 Thread Krutika Dhananjay
OK. So for the 'Transport endpoint is not connected' issue, could you share the mount and brick logs? Hmmm.. 'Invalid argument' error even on the root partition. What if you change bs to 4096 and run? The logs I showed in my earlier mail shows that gluster is merely returning the error it got

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
Also when testing with dd I get the following: *Testing on the gluster mount: * dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/test2.img oflag=direct bs=512 count=1 dd: error writing ‘/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/test2.img’: *Transport endpoint is

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
The question that arises is what is needed to make gluster aware of the 4K physical sectors presented to it (the logical sector is also 4K). The offset (127488) in the log does not seem aligned to 4K. Alex On Mon, Jun 5, 2017 at 2:47 PM, Abi Askushi wrote: > Hi Krutika,
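A quick arithmetic check (a sketch, not from the original thread) confirms the observation: 127488 is a multiple of 512 but not of 4096, so a direct write at that offset is legal on a 512-byte-sector disk but rejected on a 4K-native one:

```shell
# Check the alignment of the offset quoted from the brick log (127488)
# against both candidate sector sizes:
offset=127488
echo "offset % 512  = $(( offset % 512 ))"   # 0   -> aligned for 512-byte sectors
echo "offset % 4096 = $(( offset % 4096 ))"  # 512 -> misaligned for 4K sectors
```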

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
Hi Krutika, I am saying that I am facing this issue with 4k drives. I never encountered this issue with 512 drives. Alex On Jun 5, 2017 14:26, "Krutika Dhananjay" wrote: > This seems like a case of O_DIRECT reads and writes gone wrong, judging by > the 'Invalid argument'

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Krutika Dhananjay
This seems like a case of O_DIRECT reads and writes gone wrong, judging by the 'Invalid argument' errors. The two operations that have failed on gluster bricks are: [2017-06-05 09:40:39.428979] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
Hi Sahina, Attached are the logs. Let me know if something else is needed. I have 5 disks (with 4K physical sector) in RAID5. The RAID has a 64K stripe size at the moment. I have prepared the storage as below: pvcreate --dataalignment 256K /dev/sda4 vgcreate --physicalextentsize 256K gluster /dev/sda4
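As a sanity check on the figures above (a sketch, not part of the original mail): a 5-disk RAID5 set has 4 data disks, so the full stripe width is 4 × 64K = 256K, which matches the value passed to --dataalignment and --physicalextentsize:

```shell
# Derive the LVM alignment from the RAID geometry quoted above:
disks=5        # total disks in the RAID5 set
stripe_kb=64   # per-disk stripe (chunk) size in KiB
full_stripe_kb=$(( (disks - 1) * stripe_kb ))   # RAID5 keeps n-1 data disks per stripe
echo "full stripe width: ${full_stripe_kb}K"

# The corresponding LVM commands from the thread would then be:
# pvcreate --dataalignment ${full_stripe_kb}K /dev/sda4
# vgcreate --physicalextentsize ${full_stripe_kb}K gluster /dev/sda4
```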

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Sahina Bose
Can we have the gluster mount logs and brick logs to check if it's the same issue? On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi wrote: > I clean installed everything and ran into the same. > I then ran gdeploy and encountered the same issue when deploying engine. >

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Maor Lipchuk
On Sun, Jun 4, 2017 at 8:51 PM, Abi Askushi wrote: > I clean installed everything and ran into the same. > I then ran gdeploy and encountered the same issue when deploying engine. > Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it > has to do with

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Abi Askushi
I clean installed everything and ran into the same. I then ran gdeploy and encountered the same issue when deploying engine. Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it has to do with alignment. The weird thing is that gluster volumes are all ok, replicating normally

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Abi Askushi
Hi Maor, My disks are of 4K block size and from this bug it seems that gluster replica needs 512B block size. Is there a way to make gluster function with 4K drives? Thank you! On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk wrote: > Hi Alex, > > I saw a bug that might be
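To confirm which case a given brick disk falls into, the kernel exposes both sector sizes in sysfs; a hedged diagnostic sketch (the device name "sda" is an assumption — substitute the actual brick disk):

```shell
# Print logical vs physical sector size of a block device.
# 512/512 = legacy, 512/4096 = 512e (emulated), 4096/4096 = 4K native.
DEV=${DEV:-sda}
echo "logical:  $(cat /sys/block/"$DEV"/queue/logical_block_size)"
echo "physical: $(cat /sys/block/"$DEV"/queue/physical_block_size)"
```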

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Maor Lipchuk
Hi Alex, I saw a bug that might be related to the issue you encountered at https://bugzilla.redhat.com/show_bug.cgi?id=1386443 Sahina, do you have any advice? Do you think that BZ1386443 is related? Regards, Maor On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi wrote: >

Re: [ovirt-users] Ovirt/Gluster replica 3 distributed-replicated problem

2016-09-29 Thread Ravishankar N
On 09/29/2016 08:03 PM, Davide Ferrari wrote: It's strange, I've tried to trigger the error again by putting vm04 in maintenance and stopping the gluster service (from the ovirt gui) and now the VM starts correctly. Maybe the arbiter indeed blamed the brick that was still up before, but how's that

Re: [ovirt-users] Ovirt/Gluster replica 3 distributed-replicated problem

2016-09-29 Thread Davide Ferrari
It's strange, I've tried to trigger the error again by putting vm04 in maintenance and stopping the gluster service (from the ovirt gui) and now the VM starts correctly. Maybe the arbiter indeed blamed the brick that was still up before, but how's that possible? The only (maybe big) difference with

Re: [ovirt-users] Ovirt/Gluster replica 3 distributed-replicated problem

2016-09-29 Thread Ravishankar N
On 09/29/2016 05:18 PM, Sahina Bose wrote: Yes, this is a GlusterFS problem. Adding gluster users ML On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari > wrote: Hello maybe this is more glusterfs than ovirt related but since OVirt

Re: [ovirt-users] Ovirt/Gluster replica 3 distributed-replicated problem

2016-09-29 Thread Sahina Bose
Yes, this is a GlusterFS problem. Adding gluster users ML On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari wrote: > Hello > > maybe this is more glusterfs than ovirt related but since OVirt integrates > Gluster management and I'm experiencing the problem in an ovirt cluster,

Re: [ovirt-users] oVirt Gluster Hyperconverged problem

2016-09-20 Thread knarra
Hello Hanson, Below is the procedure to replace a host with the same FQDN where the existing host OS has to be re-installed. If the ovirt version you are running is 4.0, steps 14 and 15 are not required. You could reinstall the host from the UI with the HostedEngine->Deploy option. * 1. Move

Re: [ovirt-users] oVirt Gluster Hyperconverged problem

2016-09-19 Thread knarra
Hi, Pad [1] contains the procedure to replace a host with the same FQDN where the existing host OS has to be re-installed. [1] https://paste.fedoraproject.org/431252/47435076/ Thanks kasturi. On 09/20/2016 06:27 AM, Hanson wrote: Hi Guys, I encountered an unfortunate circumstance today.

Re: [ovirt-users] oVirt + Gluster Hyperconverged

2016-07-18 Thread Hanson Turner
Hi Fernando, Not anything spectacular that I have seen, but I'm using 16GB minimum on each node. Probably want to set up your hosted-engine as 2 CPU, 4096 MB RAM. I believe those are the minimum requirements. Thanks, Hanson On 07/15/2016 09:48 AM, Fernando Frediani wrote: Hi folks, I have a few servers

Re: [ovirt-users] oVirt & gluster - which copy is written to?

2016-05-17 Thread Sahina Bose
On 05/12/2016 05:35 AM, Bill Bill wrote: Hello, Let’s say I have a 5 node converged cluster of oVirt & glusterFS with a replica count of “3”. Host1 - replica Host2 - replica Host3 - replica Host4 Host5 If I spin a VM up on Host1 – does the first replica get created local to that

Re: [ovirt-users] oVirt & gluster - which copy is written to?

2016-05-16 Thread Alexander Wels
On Thursday, May 12, 2016 12:05:22 AM Bill Bill wrote: > Hello, > > Let’s say I have a 5 node converged cluster of oVirt & glusterFS with a > replica count of “3”. > Host1 - replica > Host2 - replica > Host3 - replica > Host4 > Host5 > > If I spin a VM up on Host1 – does the first replica get
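Wherever the copies land, a FUSE client can report which bricks actually hold a given file via gluster's pathinfo virtual xattr; a hedged diagnostic sketch (the mount path and file name are illustrative, not from the original thread):

```shell
# Ask the gluster client which bricks store this file; the output lists
# the backing brick paths, so you can see whether one is local to Host1:
getfattr -n trusted.glusterfs.pathinfo \
    /rhev/data-center/mnt/glusterSD/host:_vol/images/some-disk.img
```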

Re: [ovirt-users] oVirt + gluster + selfhosted + bonding

2015-09-16 Thread Simone Tiraboschi
On Tue, Sep 15, 2015 at 9:58 PM, Joachim Tingvold wrote: > Hi, > > First-time user of oVirt, so bear with me. > > Trying to get redundant oVirt + gluster set up. Have four hosts; > > gluster1 (CentOS7) > gluster2 (CentOS7) > ovirt1 (CentOS7) > ovirt2 (CentOS7) > >

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-09-03 Thread Nicolas Ecarnot
On 06/08/2015 16:36, Tim Macy wrote: Nicolas, I have the same setup: dedicated physical system running the engine on CentOS 6.6, three hosts running CentOS 7.1 with Gluster and KVM, and the firewall is disabled on all hosts. I also followed the same documents to build my environment so I assume they

Re: [ovirt-users] Ovirt/Gluster

2015-08-28 Thread Sander Hoentjen
On 08/21/2015 06:12 PM, Ravishankar N wrote: On 08/21/2015 07:57 PM, Sander Hoentjen wrote: Maybe I should formulate some clear questions: 1) Am I correct in assuming that an issue on one of 3 gluster nodes should not cause downtime for VM's on other nodes? From what I understand, yes.

Re: [ovirt-users] Ovirt/Gluster

2015-08-21 Thread Sander Hoentjen
On 08/21/2015 11:30 AM, Ravishankar N wrote: On 08/21/2015 01:21 PM, Sander Hoentjen wrote: On 08/21/2015 09:28 AM, Ravishankar N wrote: On 08/20/2015 02:14 PM, Sander Hoentjen wrote: On 08/19/2015 09:04 AM, Ravishankar N wrote: On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote: +

Re: [ovirt-users] Ovirt/Gluster

2015-08-21 Thread Sander Hoentjen
On 08/21/2015 02:21 PM, Ravishankar N wrote: On 08/21/2015 04:32 PM, Sander Hoentjen wrote: On 08/21/2015 11:30 AM, Ravishankar N wrote: On 08/21/2015 01:21 PM, Sander Hoentjen wrote: On 08/21/2015 09:28 AM, Ravishankar N wrote: On 08/20/2015 02:14 PM, Sander Hoentjen wrote:

Re: [ovirt-users] Ovirt/Gluster

2015-08-21 Thread Ravishankar N
On 08/21/2015 04:32 PM, Sander Hoentjen wrote: On 08/21/2015 11:30 AM, Ravishankar N wrote: On 08/21/2015 01:21 PM, Sander Hoentjen wrote: On 08/21/2015 09:28 AM, Ravishankar N wrote: On 08/20/2015 02:14 PM, Sander Hoentjen wrote: On 08/19/2015 09:04 AM, Ravishankar N wrote:

Re: [ovirt-users] Ovirt/Gluster

2015-08-21 Thread Ravishankar N
On 08/21/2015 07:57 PM, Sander Hoentjen wrote: Maybe I should formulate some clear questions: 1) Am I correct in assuming that an issue on one of 3 gluster nodes should not cause downtime for VM's on other nodes? From what I understand, yes. Maybe the ovirt folks can confirm. I can tell you

Re: [ovirt-users] Ovirt/Gluster

2015-08-21 Thread Ravishankar N
On 08/21/2015 01:21 PM, Sander Hoentjen wrote: On 08/21/2015 09:28 AM, Ravishankar N wrote: On 08/20/2015 02:14 PM, Sander Hoentjen wrote: On 08/19/2015 09:04 AM, Ravishankar N wrote: On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote: + Ravi from gluster. Regards, Ramesh -

Re: [ovirt-users] Ovirt/Gluster

2015-08-21 Thread Ravishankar N
On 08/20/2015 02:14 PM, Sander Hoentjen wrote: On 08/19/2015 09:04 AM, Ravishankar N wrote: On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote: + Ravi from gluster. Regards, Ramesh - Original Message - From: Sander Hoentjen san...@hoentjen.eu To: users@ovirt.org Sent: Tuesday,

Re: [ovirt-users] Ovirt/Gluster

2015-08-21 Thread Sander Hoentjen
On 08/21/2015 09:28 AM, Ravishankar N wrote: On 08/20/2015 02:14 PM, Sander Hoentjen wrote: On 08/19/2015 09:04 AM, Ravishankar N wrote: On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote: + Ravi from gluster. Regards, Ramesh - Original Message - From: Sander Hoentjen

Re: [ovirt-users] Ovirt/Gluster

2015-08-19 Thread Ravishankar N
On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote: + Ravi from gluster. Regards, Ramesh - Original Message - From: Sander Hoentjen san...@hoentjen.eu To: users@ovirt.org Sent: Tuesday, August 18, 2015 3:30:35 PM Subject: [ovirt-users] Ovirt/Gluster Hi, We are looking for some easy to

Re: [ovirt-users] Ovirt/Gluster

2015-08-18 Thread Ramesh Nachimuthu
+ Ravi from gluster. Regards, Ramesh - Original Message - From: Sander Hoentjen san...@hoentjen.eu To: users@ovirt.org Sent: Tuesday, August 18, 2015 3:30:35 PM Subject: [ovirt-users] Ovirt/Gluster Hi, We are looking for some easy to manage self contained VM hosting. Ovirt with

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-09 Thread Vered Volansky
On Thu, Aug 6, 2015 at 3:24 PM, Nicolas Ecarnot nico...@ecarnot.net wrote: Hi Vered, Thanks for answering. On 06/08/2015 11:08, Vered Volansky wrote: But from time to time there seems to appear a severe hiccup which I have great difficulty diagnosing. The messages in the web gui

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-07 Thread Nicolas Ecarnot
On 07/08/2015 02:17, Donny Davis wrote: I have the same setup, and my only issue is at the switch level with CTDB. The IP does failover, however until I issue a ping from the interface ctdb is connected to, the storage will not connect. If I go to the host with the CTDB vip, and issue a ping

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-06 Thread Nicolas Ecarnot
Hi Tim, Nice to read that someone else is fighting with a similar setup :) On 06/08/2015 16:36, Tim Macy wrote: Nicolas, I have the same setup: dedicated physical system running the engine on CentOS 6.6, three hosts running CentOS 7.1 with Gluster and KVM, and the firewall is disabled on all hosts.

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-06 Thread Donny Davis
I have the same setup, and my only issue is at the switch level with CTDB. The IP does failover, however until I issue a ping from the interface ctdb is connected to, the storage will not connect. If I go to the host with the CTDB vip, and issue a ping from the interface ctdb is on, everything

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-06 Thread Sahina Bose
On 08/06/2015 02:38 PM, Vered Volansky wrote: - Original Message - From: Nicolas Ecarnot nico...@ecarnot.net To: users@ovirt.org Users@ovirt.org Sent: Wednesday, August 5, 2015 5:32:38 PM Subject: [ovirt-users] ovirt+gluster+NFS : storage hicups Hi, I used the two links below to

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-06 Thread Nicolas Ecarnot
Hi Vered, Thanks for answering. On 06/08/2015 11:08, Vered Volansky wrote: But from time to time there seems to appear a severe hiccup which I have great difficulty diagnosing. The messages in the web gui are not very precise, and not consistent: - some tell about some host having

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-06 Thread Nicolas Ecarnot
On 06/08/2015 14:26, Sahina Bose wrote: - Host serv-vm-al03 cannot access the Storage Domain(s) UNKNOWN attached to the Data Center Just waiting a couple of seconds led to a self heal with no action. - Repeated Detected change in status of brick serv-vm-al03:/gluster/data/brick of

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-08-06 Thread Vered Volansky
- Original Message - From: Nicolas Ecarnot nico...@ecarnot.net To: users@ovirt.org Users@ovirt.org Sent: Wednesday, August 5, 2015 5:32:38 PM Subject: [ovirt-users] ovirt+gluster+NFS : storage hicups Hi, I used the two links below to setup a test DC :

Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread David King
Paul, Thanks for the response. You mention that the issue is orphaned files during updates when one node is down. However I am less concerned about adding and removing files because the file server will be predominately VM disks so the file structure is fairly static. Those VM files will be

Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread Vijay Bellur
On 08/29/2014 07:34 PM, David King wrote: Paul, Thanks for the response. You mention that the issue is orphaned files during updates when one node is down. However I am less concerned about adding and removing files because the file server will be predominately VM disks so the file structure

Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread Paul Robert Marino
On Fri, Aug 29, 2014 at 12:25 PM, Vijay Bellur vbel...@redhat.com wrote: On 08/29/2014 07:34 PM, David King wrote: Paul, Thanks for the response. You mention that the issue is orphaned files during updates when one node is down. However I am less concerned about adding and removing files

Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread David King
Hi Paul, I would prefer to do a direct mount for local disk. However I am not certain how to configure a single system with both local storage and gluster replicated storage. - The “Configure Local Storage” option for Hosts wants to make a datacenter and cluster for the system. I presume

Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-28 Thread Paul Robert Marino
I'll try to answer some of these. 1) It's not a serious problem per se; the issue is if one node goes down and you delete a file while the second node is down, it will be restored when the second node comes back, which may cause orphaned files, whereas if you use 3 servers they will use quorum to
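The quorum behaviour described above is governed by standard gluster volume options; a hedged config sketch (the volume name "data" is illustrative, and these lines are not from the original thread):

```shell
# Client-side quorum: with replica 3, writes require a majority of bricks,
# which prevents the delete-then-restore scenario described above.
gluster volume set data cluster.quorum-type auto
# Server-side quorum: bricks stop serving if the trusted pool loses majority.
gluster volume set data cluster.server-quorum-type server
```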

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-09 Thread Sven Kieske
On 08.07.2014 at 21:15, Martin Sivak wrote: Hi, I do not recommend running hosted engine on top of GlusterFS. Not even on top of the NFS compatibility layer GlusterFS provides. There have been a lot of issues with setups like that. GlusterFS does not ensure that the metadata writes are

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-09 Thread Jason Brooks
- Original Message - From: Nicolas Ecarnot nico...@ecarnot.net To: users@ovirt.org Sent: Tuesday, July 8, 2014 12:55:10 PM Subject: Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets On 08/07/2014 21:15, Martin Sivak wrote: Hi, I do

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-08 Thread Sandro Bonazzola
On 07/07/2014 at 15:38, Simone Marchioni wrote: Hi, I'm trying to install oVirt 3.4 + gluster looking at the following guides: http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/ http://community.redhat.com/blog/2014/03/up-and-running-with-ovirt-3-4/ It went smoothly until

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-08 Thread Andrew Lau
On Wed, Jul 9, 2014 at 12:23 AM, Sandro Bonazzola sbona...@redhat.com wrote: On 07/07/2014 at 15:38, Simone Marchioni wrote: Hi, I'm trying to install oVirt 3.4 + gluster looking at the following guides: http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-08 Thread Simone Marchioni
On 08/07/2014 at 16:47, Andrew Lau wrote: On Wed, Jul 9, 2014 at 12:23 AM, Sandro Bonazzola sbona...@redhat.com wrote: On 07/07/2014 at 15:38, Simone Marchioni wrote: Hi, I'm trying to install oVirt 3.4 + gluster looking at the following guides:

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-08 Thread Martin Sivak
Hi, I do not recommend running hosted engine on top of GlusterFS. Not even on top of the NFS compatibility layer GlusterFS provides. There have been a lot of issues with setups like that. GlusterFS does not ensure that the metadata writes are atomic and visible to all nodes at the same time

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-08 Thread Nicolas Ecarnot
On 08/07/2014 21:15, Martin Sivak wrote: Hi, I do not recommend running hosted engine on top of GlusterFS. Not even on top of the NFS compatibility layer GlusterFS provides. Martin, It is very disturbing for us, end users, to read the comment above and the web page below, both written

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-08 Thread Andrew Lau
Hi Martin, Is that because of how the replication works? What if you had the kernel-nfs server running on top of the gluster nfs share and a virtual IP to allow the hosted-engine to only access one of the shares. Thanks, Andrew On Wed, Jul 9, 2014 at 5:15 AM, Martin Sivak msi...@redhat.com

Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Jeremiah Jahn
I am. On Mon, Apr 21, 2014 at 1:50 PM, Joop jvdw...@xs4all.nl wrote: Ovirt User wrote: Hello, is anyone using ovirt with glusterFS as a storage domain in a production environment? Not directly production but almost. Having problems? Regards, Joop

Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Gabi C
Seconded! 3 nodes acting as both VM and gluster hosts, engine on a separate VM (ESXi) machine On Tue, Apr 22, 2014 at 3:53 PM, Jeremiah Jahn jerem...@goodinassociates.com wrote: I am. On Mon, Apr 21, 2014 at 1:50 PM, Joop jvdw...@xs4all.nl wrote: Ovirt User wrote: Hello,

Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Jeremiah Jahn
Nothing too complicated. SL 6x and 5, 8 VM hosts running on a Hitachi Blade Symphony. 25 server guests, 15+ desktop Windows guests, 3x 12TB storage servers (all 1TB-based RAID 10 SSDs), 10Gbs PTP between 2 of the servers, with one geolocation server offsite. Most of the server images/luns are

Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Jeremiah Jahn
oh, and engine is currently on a separate server, but will probably move to one of the storage servers. On Tue, Apr 22, 2014 at 8:43 AM, Jeremiah Jahn jerem...@goodinassociates.com wrote: Nothing too complicated. SL 6x and 5 8 vm hosts running on a hitachi blade symphony. 25 Server guests

Re: [ovirt-users] Ovirt + GLUSTER

2014-04-21 Thread Joop
Ovirt User wrote: Hello, is anyone using ovirt with glusterFS as a storage domain in a production environment? Not directly production but almost. Having problems? Regards, Joop ___ Users mailing list Users@ovirt.org