Re: [Gluster-users] QEMU

2016-06-14 Thread Kevin Lemonnier
> Are you willing to share your hardware/network configuration? We are
> planning a cluster similar to yours.

Sure, we are using these: https://www.ovh.com/fr/serveurs_dedies/details-servers.xml?range=ENT=2016-SP-64 The last variant, with 3 SAS drives + a caching SSD behind hardware RAID. > > I

Re: [Gluster-users] QEMU

2016-06-14 Thread Gandalf Corvotempesta
2016-06-14 0:05 GMT+02:00 Kevin Lemonnier:
> Yep, using it with Proxmox so it's QEMU.
> Performance seems good, but it does depend on the setup.
> I know currently our production cluster is using 3.7.6 on 3 nodes
> across 2 datacenters, with 10ms ping between them. Seemed

[Gluster-users] to RAID or not?

2016-06-14 Thread Gandalf Corvotempesta
Let's assume a small cluster made of 3 servers, 12 disks/bricks each. This cluster would be expanded to a maximum of 15 servers in the near future. What do you suggest, a JBOD or a RAID? Which RAID level? 15 servers with 12 disks/bricks in JBOD are 180 bricks. Is this an acceptable value? Multiple
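For scale, the brick-count trade-off in the question can be sketched with a quick calculation (the two-RAID-sets-per-server layout is a hypothetical example, not something proposed in the thread):

```shell
# Brick counts for the two layouts, using the numbers from the question
# (15 servers, 12 disks each). The RAID grouping below is hypothetical.
servers=15
disks_per_server=12

# JBOD: every disk is its own brick.
jbod_bricks=$((servers * disks_per_server))
echo "JBOD bricks: $jbod_bricks"        # 180

# Hardware RAID, e.g. two 6-disk RAID-6 sets per server: one brick per set.
raid_sets_per_server=2
raid_bricks=$((servers * raid_sets_per_server))
echo "RAID bricks: $raid_bricks"        # 30
```

Fewer, larger bricks mean fewer brick processes and ports per node, at the cost of RAID rebuild overhead; JBOD pushes all redundancy into Gluster replication.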

Re: [Gluster-users] NFS ganesha client not showing files after crash

2016-06-14 Thread Jiffin Tony Thottan
On 06/06/16 08:20, Alan Hartless wrote:
> Hi Jiffin, Thanks! I have 3.7.11-ubuntu1~trusty1 installed and am using
> NFSv4 mount protocols. Doing a forced lookup lists the root directories
> but shows 0 files in each.

Hi, Sorry for the delayed reply. You might need to do the explicit lookup on
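An explicit lookup can be forced per entry from the client side; a minimal sketch, assuming a hypothetical mount point `/mnt/gv0` (not from the thread):

```shell
# Force an explicit lookup on every entry under an NFS-ganesha mount.
# /mnt/gv0 is a placeholder for your actual mount point.
for entry in /mnt/gv0/*; do
    stat "$entry" > /dev/null    # stat() triggers a fresh LOOKUP on the server
done
```

This only helps if the server-side state is otherwise healthy; entries that never get a lookup after a crash may continue to show as missing.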

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC

2016-06-14 Thread Saravanakumar Arumugam
Hi, This meeting is scheduled for anyone who is interested in learning more about, or assisting with, the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in

[Gluster-users] Minutes : Gluster Community Bug Triage meeting (Today)

2016-06-14 Thread Saravanakumar Arumugam
Hi, Please find the minutes of the June 14 Bug Triage meeting. Meeting summary: 1. agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (Saravanakmr

[Gluster-users] Fwd: glusterfs .vib package

2016-06-14 Thread atris adam
I want to use glusterfs as storage for ESXi; I have mounted it via NFS. I want to make a VM with the eager zeroed thick option, but it is disabled. I have searched a lot and found out that NFS does not allow creating thick VMs unless hardware acceleration is supported on the storage. ESXI

Re: [Gluster-users] Fwd: glusterfs .vib package

2016-06-14 Thread Ben Werthmann
Atris, I can't speak for the Gluster project, but as I understand it, there is no .vib package to enable NFS VAAI in the builtin Gluster NFS server. The direction appears to be to phase out Gluster's built-in NFS service in favor of nfs-ganesha. In 3.8 Gluster's built-in NFS is off by default.
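If the legacy built-in NFS server is still needed on 3.8, it can be re-enabled per volume; a sketch, assuming a hypothetical volume name `gv0`:

```shell
# Re-enable Gluster's built-in NFS server on a 3.8 volume
# (gv0 is a placeholder volume name).
gluster volume set gv0 nfs.disable off

# Verify the NFS server process came up for the volume:
gluster volume status gv0 nfs
```

Note this only restores plain NFSv3 service; it does not add VAAI support, which is what thick provisioning from ESXi would require.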

Re: [Gluster-users] issues recovering machine in gluster

2016-06-14 Thread Gandalf Corvotempesta
On 15 Jun 2016 at 07:09, "Atin Mukherjee" wrote:
> To get rid of this situation you'd need to stop all the running glusterd
> instances and go into the /var/lib/glusterd/peers folder on all the nodes
> and manually correct the UUID file names and their content if required. If
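The procedure Atin describes might look roughly like the following sketch; all `<uuid>` placeholders are hypothetical and must be taken from the nodes themselves, not from this example:

```shell
# Sketch of manually correcting peer UUIDs, per the instructions above.
# Run with glusterd stopped on EVERY node first.
systemctl stop glusterd

cd /var/lib/glusterd/peers
ls                      # one file per peer, named after that peer's UUID
cat <peer-uuid>         # contains uuid=..., state=..., hostname1=...

# A node's own real UUID lives in /var/lib/glusterd/glusterd.info on that
# node. If a peer file's name or its uuid= line disagrees, fix both:
mv <wrong-uuid> <correct-uuid>
sed -i 's/^uuid=.*/uuid=<correct-uuid>/' <correct-uuid>

# Only once every node's peers/ directory is consistent:
systemctl start glusterd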

Re: [Gluster-users] moka111 - memory leak problem as written in IRC

2016-06-14 Thread Atin Mukherjee
A little late as it went off my radar, apologies. From st02's glusterd log, it looks like profile commands were executed very frequently. [2016-06-02 10:35:02.232596] I [MSGID: 106494] [glusterd-handler.c:3070:__glusterd_handle_cli_profile_volume] 0-management: Received volume profile req
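If a monitoring script is issuing profile requests in a tight loop, stopping profiling is one command; a sketch, assuming a hypothetical volume name `gv0`:

```shell
# Stop volume profiling to halt the stream of profile requests
# (gv0 is a placeholder volume name).
gluster volume profile gv0 stop

# Then watch glusterd's memory usage to see whether growth levels off:
gluster volume status gv0 mem
```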

Re: [Gluster-users] Disk failed, how do I remove brick?

2016-06-14 Thread Nithya Balachandran
On Fri, Jun 10, 2016 at 1:25 AM, Phil Dumont wrote:
> Just started trying gluster, to decide if we want to put it into
> production.
>
> Running version 3.7.11-1
>
> Replicated, distributed volume, two servers, 20 bricks per server:
>
> [root@storinator1 ~]#
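For a replicated volume, a dead brick is usually swapped rather than removed, so the replica count stays intact; a hedged sketch using 3.7.x syntax (server and path names are placeholders, not from the thread):

```shell
# Replace a failed brick in a replicated volume (3.7.x supports
# replace-brick only with "commit force"). Names below are placeholders.
gluster volume replace-brick gv0 \
    storinator1:/bricks/failed/brick \
    storinator1:/bricks/new/brick \
    commit force

# Self-heal then copies data from the surviving replica onto the new brick:
gluster volume heal gv0 full
```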

Re: [Gluster-users] issues recovering machine in gluster

2016-06-14 Thread Atin Mukherjee
So the issue looks like an incorrect UUID got populated in the peer configuration, which led to this inconsistency, and here is the log entry to prove it. I have a feeling that the steps were not performed properly or you missed copying the older UUID of the failed node to the new one.

Re: [Gluster-users] issues recovering machine in gluster

2016-06-14 Thread Atin Mukherjee
On 06/15/2016 11:06 AM, Gandalf Corvotempesta wrote:
> On 15 Jun 2016 at 07:09, "Atin Mukherjee" wrote:
>> To get rid of this situation you'd need to stop all the running glusterd
>> instances and go into the /var/lib/glusterd/peers folder on

[Gluster-users] Disk failed, how do I remove brick?

2016-06-14 Thread Phil Dumont
Just started trying gluster, to decide if we want to put it into production.

Running version 3.7.11-1

Replicated, distributed volume, two servers, 20 bricks per server:

[root@storinator1 ~]# gluster volume status gv0
Status of volume: gv0
Gluster process TCP Port