[Gluster-users] Gluster 3.9 repository?

2017-01-11 Thread Andrus, Brian Contractor
All, I notice on the main page, Gluster 3.9 is listed as "Gluster 3.9 is the latest major release as of November 2016." Yet, if you click on the download link, there is no mention of Gluster 3.9 on that page at all. It has: "GlusterFS version 3.8 is the latest version at the moment." Is there going

Re: [Gluster-users] Replace replicate in single brick

2017-01-10 Thread Andrus, Brian Contractor
> Cc: gluster-users@gluster.org; Andrus, Brian Contractor <bdand...@nps.edu> Subject: Re: [Gluster-users] Replace replicate in single brick On 10 Jan 2017 05:59, "Ravishankar N" <ravishan...@redhat.com> wrote: If you are using glusterfs

[Gluster-users] Bricks show much space, mount shows little...

2016-04-04 Thread Andrus, Brian Contractor
All, I am a little confused here... I have a 7.2 terabyte mirrored gluster implementation. The backing filesystem is ZFS. This is what I am seeing:
node45: Filesystem  Size  Used  Avail  Use%  Mounted on
node45: /dev/sdb1   190M  33M   148M   19%   /boot
node45: /dev/sdb3   854G  12G
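
A quick way to compare what the bricks report against what the mounted volume reports is to look at both sides; a hedged sketch, with the volume name and brick path as placeholders:

  # On each server: space as the backing filesystem sees it
  df -h /path/to/brick
  zfs list
  # Cluster-wide: free and total space per brick as gluster sees it
  gluster volume status <VOLNAME> detail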

Re: [Gluster-users] Moved files from one directory to another, now gone

2016-03-03 Thread Andrus, Brian Contractor
want to try and track it down. Brian Andrus From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com] Sent: Thursday, January 28, 2016 10:36 PM To: Andrus, Brian Contractor <bdand...@nps.edu>; gluster-users@gluster.org Subject: Re: [Gluster-users] Moved files from one directory to another

[Gluster-users] ZFS and snapshots

2016-02-04 Thread Andrus, Brian Contractor
All, It seems that snapshotting for volumes based on ZFS is still 'in the works'. Is that the case?
snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of DATA are thinly provisioned LV.
Using glusterfs-3.7.6-1.el6.x86_64 Brian Andrus
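
For reference, the failing command is the volume-level snapshot (a sketch; 'snap1' is a placeholder name, DATA is the volume named in the error). Gluster's own snapshot feature is built on LVM thin provisioning, which is why ZFS-backed bricks fail this check even though ZFS can snapshot the datasets itself:

  gluster snapshot create snap1 DATA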

[Gluster-users] Moved files from one directory to another, now gone

2016-01-28 Thread Andrus, Brian Contractor
All, I have a glusterfs setup with a disperse volume over 3 zfs bricks, one on each node. I just did a 'mv' of some log files from one directory to another and when I look in the directory, they are not there at all! Neither is any of the data I used to have. It is completely empty. I try
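
Two read-only checks that are usually the first step here (a hedged sketch; the volume name and brick path are placeholders):

  # Is anything pending heal on the disperse volume?
  gluster volume heal <VOLNAME> info
  # Do the files still exist on the backing bricks themselves?
  ls -la /path/to/brick/<directory>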

[Gluster-users] Glusterd on one node using 89% of memory

2016-01-26 Thread Andrus, Brian Contractor
All, I have one (of 4) gluster nodes that is using almost all of the available memory on my box. It has been growing and is up to 89%. I have already done 'echo 2 > /proc/sys/vm/drop_caches' and there seems to be no effect. Are there any gotchas to just restarting glusterd? This is a CentOS 6.6 system
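
For what it's worth, glusterd is only the management daemon; the brick processes (glusterfsd) and client mounts are separate processes, so restarting it is normally done with the SysV init script on CentOS 6. A hedged sketch, not advice specific to this cluster:

  service glusterd restart     # management daemon only; bricks keep running
  gluster peer status          # confirm peers reconnect afterwards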

[Gluster-users] concurrent writes not all being written

2015-12-13 Thread Andrus, Brian Contractor
All, I have a small gluster filesystem on 3 nodes. I have a Perl program that multi-threads and each thread writes its output to one of 3 files depending on some results. My trouble is that I am seeing missing lines from the output. The input is a file of 500 lines. Depending on the line, it

[Gluster-users] Why oh why does gluster delay?

2015-11-30 Thread Andrus, Brian Contractor
All, I am seeing, VERY consistently, that when I do a 'gluster peer status' or 'gluster pool list', the system 'hangs' for up to 1 minute before spitting back results. I have 10 nodes, all on the same network, and currently ZERO volumes or bricks configured. Just trying to get good performance
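
When the CLI stalls like this with no volumes defined, two cheap diagnostics are timing the command and checking that every peer's hostname resolves quickly from every node, since an unreachable peer or slow name resolution is a common cause of exactly this kind of delay. A hedged sketch with placeholder node names:

  time gluster pool list
  getent hosts node01 node02 node03
  # Management traffic uses TCP 24007; verify it is reachable
  telnet node01 24007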

[Gluster-users] New install of 3.7.6 peers disconnect intermittently

2015-11-25 Thread Andrus, Brian Contractor
All, I am trying to do an install of gluster 3.7.6. The OS is CentOS 6.5. I start doing peer probe commands to add the servers, but I am constantly getting intermittent rpc_clnt_ping_timer_expired messages in the logs, and arbitrary servers show 'Disconnected' when I try doing 'gluster pool list'. I
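
Intermittent ping-timer expiries while probing peers are often a connectivity or firewall problem rather than gluster itself; on CentOS 6, iptables is enabled by default and glusterd's management port is TCP 24007 (bricks take further ports once volumes exist). A hedged sketch of opening it on each node:

  iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
  service iptables save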

[Gluster-users] port 988

2015-07-31 Thread Andrus, Brian Contractor
All, I am seeing a problem with conflicting ports. I am running a relatively simple gluster implementation (4 x 2 = 8), but on the same nodes I also run Lustre. I find that since gluster starts first, it seems to take over port 988, which lnet needs. Unfortunately, I do not see where I can affect
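
If the collision comes from gluster binding its client connections to privileged source ports (below 1024), one approach from the 3.6/3.7 era was to allow unprivileged ('insecure') ports instead. The option names below are from memory and should be checked against the installed version; <VOLNAME> is a placeholder:

  gluster volume set <VOLNAME> server.allow-insecure on
  gluster volume set <VOLNAME> client.bind-insecure on
  # plus, in /etc/glusterfs/glusterd.vol on each server:
  #   option rpc-auth-allow-insecure on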

Re: [Gluster-users] Gluster healing VM images

2015-07-24 Thread Andrus, Brian Contractor
I had this as well. It was a BIG pain for me. FWIW, after upgrading to gluster 3.7, I have not had an issue. YMMV Brian Andrus ITACS/Research Computing Naval Postgraduate School Monterey, California voice: 831-656-6238 -Original Message- From: gluster-users-boun...@gluster.org

Re: [Gluster-users] downgrade client from 3.7.1 to 3.6.3

2015-06-11 Thread Andrus, Brian Contractor
Usually works: yum downgrade packagename Brian Andrus ITACS/Research Computing Naval Postgraduate School Monterey, California voice: 831-656-6238 -Original Message- From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Kingsley Sent:
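
To land on the exact target release rather than whatever older build happens to be in the enabled repos, the version can be named explicitly (a sketch; the precise package set and release tag depend on the repository in use):

  yum downgrade glusterfs-3.6.3 glusterfs-fuse-3.6.3 glusterfs-libs-3.6.3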

Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on cluster node restart

2015-06-09 Thread Andrus, Brian Contractor
Andrus, Brian Contractor wrote: I have similar issues with gluster and am starting to wonder if it really is stable for VM images. My setup is simple: 1X2=2. I am mirroring a disk, basically. Trouble has been that the VM images (qcow2 files) go split-brained when one of the VMs gets busier than

Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on cluster node restart

2015-06-04 Thread Andrus, Brian Contractor
I have similar issues with gluster and am starting to wonder if it really is stable for VM images. My setup is simple: 1X2=2. I am mirroring a disk, basically. Trouble has been that the VM images (qcow2 files) go split-brained when one of the VMs gets busier than usual. Once that happens, heal
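
For anyone hitting the same thing, the 3.7-era commands for listing and resolving split-brained files look roughly like this (a hedged sketch; <VOLNAME> and the file path are placeholders):

  gluster volume heal <VOLNAME> info split-brain
  # 3.7 added CLI-driven resolution, e.g. keeping the larger copy:
  gluster volume heal <VOLNAME> split-brain bigger-file <path-on-volume>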

[Gluster-users] replacing bad hd in a 1x2 replica

2015-06-04 Thread Andrus, Brian Contractor
All, I have noticed that one of my HDs in a 1x2 replica is throwing sector errors. Now I am wondering about the best way to fix or replace it. The backing filesystem is XFS, so xfs_repair can only be used when it is offline. So, can I unmount the brick, run the repairs to block out the bad sectors
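
One hedged outline of the usual sequence for servicing a single brick in a replica pair (device, path and volume name are placeholders; the heal at the end is what resyncs the repaired brick from its healthy partner):

  # Stop this brick's glusterfsd process, then:
  umount /path/to/brick
  xfs_repair /dev/sdX1
  mount /path/to/brick
  gluster volume start <VOLNAME> force    # brings the brick process back
  gluster volume heal <VOLNAME> full      # full resync from the good copy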