Re: [Gluster-users] VM going down

2017-05-10 Thread Lindsay Mathieson
On 11/05/2017 9:51 AM, Alessandro Briosi wrote: On one it reports about Leaked clusters but I don't think this might cause the problem (or not?) Should be fine -- Lindsay Mathieson ___ Gluster-users mailing list Gluster-users@gluster.org http://list

Re: [Gluster-users] VM going down

2017-05-10 Thread Alessandro Briosi
On 09/05/2017 23:41, Lindsay Mathieson wrote: On 10/05/2017 12:59 AM, Alessandro Briosi wrote: Also the seek errors were there before when there was no arbiter (only 2 replicas). And finally the seek error is triggered when the VM is started (at least the one in the logs). Could there be a co

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pat Haley
Hi Pranith, Since we are mounting the partitions as the bricks, I tried the dd test writing to /.glusterfs/. The results without oflag=sync were 1.6 Gb/s (faster than gluster but not as fast as I was expecting given the 1.2 Gb/s to the no-gluster area w/ fewer disks). Pat On 05/10/2017 0

Re: [Gluster-users] Quota limits gone after upgrading to 3.8

2017-05-10 Thread mabi
Hi Sanoj, I do understand that my quotas are still working on my GlusterFS volume but not displayed in the output of the volume quota list command. I now did the test of re-adding the quota by running for example: gluster volume quota myvolume limit-usage /directoryX 50GB After that I ran the v
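The re-add step mentioned above, plus the list check, would look like this (the volume name `myvolume` and path `/directoryX` are the examples from the message; this is a sketch, not a verified fix for the missing-limits issue):

```shell
# Re-apply the limit that vanished from the list output after the upgrade
gluster volume quota myvolume limit-usage /directoryX 50GB

# Check whether the limit is displayed again
gluster volume quota myvolume list
```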

Re: [Gluster-users] Is there difference when Nfs-Ganesha is unavailable

2017-05-10 Thread ML Wong
Soumya, I should have mentioned in my first email. The VIP was always able to fail over to the remaining nodes. But in many of my tests, the failover IP just did not carry over the state for the NFS client, so it always looks like the NFS server is unavailable. Thanks for your response. Any

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pranith Kumar Karampuri
On Wed, May 10, 2017 at 10:15 PM, Pat Haley wrote: > > Hi Pranith, > > Not entirely sure (this isn't my area of expertise). I'll run your answer > by some other people who are more familiar with this. > > I am also uncertain about how to interpret the results when we also add > the dd tests writ

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pat Haley
Hi Pranith, Not entirely sure (this isn't my area of expertise). I'll run your answer by some other people who are more familiar with this. I am also uncertain about how to interpret the results when we also add the dd tests writing to the /home area (no gluster, still on the same machine)

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pranith Kumar Karampuri
Okay good. At least this validates my doubts. Handling O_SYNC in gluster NFS and fuse is a bit different. When an application opens a file with O_SYNC on a fuse mount, each write syscall has to be written to disk as part of the syscall, whereas in the case of NFS, there is no concept of open. NFS perfor
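The difference described above is exactly what dd exposes: with `oflag=sync` every write must reach stable storage before the syscall returns, while without it the kernel can buffer. A minimal local sketch of the two invocations (writing to /tmp rather than a gluster mount, just to illustrate the flag):

```shell
# Buffered writes: the kernel page cache absorbs them; throughput looks high
dd if=/dev/zero of=/tmp/dd_buffered.out bs=1M count=16 2>/dev/null

# O_SYNC writes: each write(2) must hit stable storage before returning;
# on a FUSE mount that is a round trip per write, while an NFS client can
# batch writes and COMMIT later, which is why the two mounts behave differently
dd if=/dev/zero of=/tmp/dd_sync.out bs=1M count=16 oflag=sync 2>/dev/null

# Both files end up the same size; only the timing differs
ls -l /tmp/dd_buffered.out /tmp/dd_sync.out
```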

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pat Haley
Without the oflag=sync and only a single test of each, the FUSE is going faster than NFS: FUSE: mseas-data2(dri_nascar)% dd if=/dev/zero count=4096 bs=1048576 of=zeros.txt conv=sync 4096+0 records in 4096+0 records out 4294967296 bytes (4.3 GB) copied, 7.46961 s, 575 MB/s NFS: mseas-data2(H

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pranith Kumar Karampuri
Could you let me know the speed without oflag=sync on both the mounts? No need to collect profiles. On Wed, May 10, 2017 at 9:17 PM, Pat Haley wrote: > > Here is what I see now: > > [root@mseas-data2 ~]# gluster volume info > > Volume Name: data-volume > Type: Distribute > Volume ID: c162161e-2a

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pat Haley
Here is what I see now: [root@mseas-data2 ~]# gluster volume info Volume Name: data-volume Type: Distribute Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18 Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: mseas-data2:/mnt/brick1 Brick2: mseas-data2:/mnt/brick2 Options Rec

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pranith Kumar Karampuri
Is this the volume info you have? > [root@mseas-data2 ~]# gluster volume info > Volume Name: data-volume > Type: Distribute > Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18 > Status: Started > Number

Re: [Gluster-users] VM going down

2017-05-10 Thread Pranith Kumar Karampuri
On Wed, May 10, 2017 at 7:11 PM, Niels de Vos wrote: > On Wed, May 10, 2017 at 04:08:22PM +0530, Pranith Kumar Karampuri wrote: > > On Tue, May 9, 2017 at 7:40 PM, Niels de Vos wrote: > > > > > ... > > > > > client from > > > > > srvpve2-162483-2017/05/08-10:01:06:189720-datastore2-client-0-0-0

Re: [Gluster-users] Slow write times to gluster disk

2017-05-10 Thread Pat Haley
Hi, We finally managed to do the dd tests for an NFS-mounted gluster file system. The profile results during that test are in http://mseas.mit.edu/download/phaley/GlusterUsers/profile_gluster_nfs_test The summary of the dd tests are * writing to gluster disk mounted with fuse: 5 Mb/s *

Re: [Gluster-users] VM going down

2017-05-10 Thread Niels de Vos
On Wed, May 10, 2017 at 04:08:22PM +0530, Pranith Kumar Karampuri wrote: > On Tue, May 9, 2017 at 7:40 PM, Niels de Vos wrote: > > > ... > > > > client from > > > > srvpve2-162483-2017/05/08-10:01:06:189720-datastore2-client-0-0-0 > > > > (version: 3.8.11) > > > > [2017-05-08 10:01:06.237433] E [

Re: [Gluster-users] small files optimizations

2017-05-10 Thread Pranith Kumar Karampuri
I think you should take a look at: https://wiki.dovecot.org/MailLocation/SharedDisk if not already. On Wed, May 10, 2017 at 12:52 PM, wrote: > On Wed, May 10, 2017 at 09:14:59AM +0200, Gandalf Corvotempesta wrote: > > Yes much clearer but I think this makes some trouble like space available >

Re: [Gluster-users] VM going down

2017-05-10 Thread Pranith Kumar Karampuri
On Tue, May 9, 2017 at 7:40 PM, Niels de Vos wrote: > ... > > > client from > > > srvpve2-162483-2017/05/08-10:01:06:189720-datastore2-client-0-0-0 > > > (version: 3.8.11) > > > [2017-05-08 10:01:06.237433] E [MSGID: 113107] > [posix.c:1079:posix_seek] > > > 0-datastore2-posix: seek failed on fd

Re: [Gluster-users] gdeploy not starting all the daemons for NFS-ganesha :(

2017-05-10 Thread Manisha Saini
Hi hvjunk, From the logs it seems the ganesha cluster is not yet created/running, and performing refresh config is failing. As the cluster is not running and the volume "ganesha" is not yet exported, performing refresh config on the volume will fail. For creating the ganesha cluster, there
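The ordering described above, as a sketch (the volume name "ganesha" is taken from the thread, and this assumes the HA configuration, e.g. `ganesha-ha.conf`, is already in place; treat it as an outline, not a verified procedure):

```shell
# Bring up the NFS-Ganesha HA cluster first
gluster nfs-ganesha enable

# Only once the cluster is running, export the volume;
# refresh config against an unexported volume will fail
gluster volume set ganesha ganesha.enable on
```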

[Gluster-users] GlusterFS+heketi+Kubernetes snapshots fail

2017-05-10 Thread Chris Jones
Hi All, This was discussed briefly on IRC, but got no resolution. I have a Kubernetes cluster running heketi and GlusterFS 3.10.1. When I try to create a snapshot, I get: snapshot create: failed: Commit failed on localhost. Please check log file for details. glusterd log: http://termbin.co

[Gluster-users] Poor performance for nfs client on windows than on linux

2017-05-10 Thread ChiKu
Hello, I'm testing glusterfs for Windows clients. I created 2 servers for glusterfs (3.10.1, replica 2) on CentOS 7.3. Right now I just use the default settings, and my test case is a lot of small files in a folder. The Windows NFS client performs much worse than the Linux NFS client. I don't understa

[Gluster-users] Community meeting 2017-05-10

2017-05-10 Thread Kaushal M
Hi all, Today's meeting is scheduled to happen in 6 hours at 1500UTC. The meeting pad is at https://bit.ly/gluster-community-meetings . Please add your updates and topics for discussion. I had forgotten to send out the meeting minutes and logs for the last meeting which happened on 2017-04-26. Th

[Gluster-users] plan to use as a nas for homedirs

2017-05-10 Thread Laurent Bardi
Hi, I have 2x (40TB DAS attached to a machine running ESX (64 GB RAM)). I plan to use them as an active-active cluster with gluster + ctdb (serving only SMB, no NFS). The goal is to store home directories and profiles for all my client machines (~150 Win7 clients). Any advice is welcome! -Should

Re: [Gluster-users] small files optimizations

2017-05-10 Thread lemonnierk
On Wed, May 10, 2017 at 09:14:59AM +0200, Gandalf Corvotempesta wrote: > Yes much clearer but I think this makes some trouble like space available > shown by gluster. Or not? Not really, you'll just see "used space" on your volumes that you won't be able to track down, keep in mind that the used s

Re: [Gluster-users] small files optimizations

2017-05-10 Thread Gandalf Corvotempesta
Yes, much clearer, but I think this causes some trouble, like the space available shown by gluster. Or not? On 10 May 2017, 9:10 AM, wrote: > > As gluster doesn't support to "share" bricks with multiple volumes, I > would > > like to create a single volume with VMs and maildirs > > Just create two b

Re: [Gluster-users] small files optimizations

2017-05-10 Thread lemonnierk
> As gluster doesn't support to "share" bricks with multiple volumes, I would > like to create a single volume with VMs and maildirs Just create two bricks for two volumes ? For example if your disk is /mnt/storage, have a /mnt/storage/brick_VM and a /mnt/storage/brick_mail or something like that.
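The layout suggested above, as a sketch (hostnames and volume names are illustrative; a real deployment would normally put the brick directories on a dedicated brick mount):

```shell
# One filesystem, two brick directories, one per volume
mkdir -p /mnt/storage/brick_VM /mnt/storage/brick_mail

# Create a separate volume on each brick directory
gluster volume create vm_vol   server1:/mnt/storage/brick_VM   server2:/mnt/storage/brick_VM
gluster volume create mail_vol server1:/mnt/storage/brick_mail server2:/mnt/storage/brick_mail
```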

[Gluster-users] small files optimizations

2017-05-10 Thread Gandalf Corvotempesta
Currently, which are the best small-file optimizations that we can enable on a gluster storage? I'm planning to move a couple of dovecot servers, with thousands of mail files (from a couple of KB to less than 10-20MB). Are these optimizations compatible with a VM workload, like sharding? As gluster do