Re: [Gluster-users] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community

2014-03-13 Thread Sabuj Pattanayek
has the 32 group limit been fixed yet? If not, how about that :) ? https://bugzilla.redhat.com/show_bug.cgi?id=789961 On Thu, Mar 13, 2014 at 8:01 AM, Jeff Darcy jda...@redhat.com wrote: I am a little bit impressed by the lack of action on this topic. I hate to be that guy, especially being

[Gluster-users] 32 groups issue with 3.4.1, posix and nfsv4 acls

2014-01-11 Thread Sabuj Pattanayek
Hi all, We're still having problems with the 32 group limit for gluster. We have several users who are in more than 32 AD groups. What version is expected to have these increased, and when will we see POSIX ACL and NFSv4 ACL support? Thanks, Sabuj

[Gluster-users] ls seems to be stuck, but keeps looping in a directory in a distributed setup

2013-04-15 Thread Sabuj Pattanayek
Hi all, Anyone ever seen a problem where ls seems to be stuck (can't ctrl+c out, only kill -9), but is actually looping forever in a directory under a simple distributed setup? Seems like inodes have somehow got into a loop? Any suggestions other than running fsck on the underlying filesystems?

Re: [Gluster-users] ls seems to be stuck, but keeps looping in a directory in a distributed setup

2013-04-15 Thread Sabuj Pattanayek
It doesn't look like there's a problem in the actual filesystem since I can go into the directories from the filesystem level and ls works fine. Data looks ok also. On Mon, Apr 15, 2013 at 9:01 AM, Sabuj Pattanayek sab...@gmail.com wrote: Hi all, Anyone ever seen a problem where ls seems

Re: [Gluster-users] ls seems to be stuck, but keeps looping in a directory in a distributed setup

2013-04-15 Thread Sabuj Pattanayek
On Mon, Apr 15, 2013 at 9:21 AM, Michael Brown mich...@netdirect.ca wrote: Yes, actually. Are you running with ext4 bricks? yes http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/ and I just did upgrade to 2.6.32-279.22.1 on the bricks. If so, try disabling

Re: [Gluster-users] Fw: performance evaluation of distributed storage systems

2013-01-22 Thread Sabuj Pattanayek
I didn't see anything in the paper that mentioned the specs on your load generating clients? Where can I get your test scripts for the structured filesystem tests other than specsfs2k8? I assume you used distributed + replicated with gluster? Is compuverde using a similar algorithm or is it

[Gluster-users] 1024 char limit for auth.allow and automatically re-reading auth.allow without having to restart glusterd?

2013-01-15 Thread Sabuj Pattanayek
Hi, Anyone know if the 1024 char limit for auth.allow still exists in the latest production version (it seems to be there in 3.2.5)? Also, anyone know if the new versions check whether auth.allow has been updated without having to restart glusterd? Is there any way to restart glusterd without killing it

Re: [Gluster-users] 1024 char limit for auth.allow and automatically re-reading auth.allow without having to restart glusterd?

2013-01-15 Thread Sabuj Pattanayek
Would also be nice if auth.allow could take DNS entries similar to the way that /etc/exports for NFS can read these types of entries, e.g. *.blah.foo.edu On Tue, Jan 15, 2013 at 5:59 PM, Sabuj Pattanayek sab...@gmail.com wrote: Hi, Anyone know if the 1024 char limit for auth.allow still exists
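For reference, a sketch of what auth.allow accepts today: a comma-separated list of IPs with * wildcards, set via the CLI, rather than the /etc/exports-style DNS wildcards being requested here ("myvol" and the addresses are placeholders):

```
gluster volume set myvol auth.allow '10.0.1.*,192.168.1.100'
```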

Re: [Gluster-users] Infiniband performance issues answered?

2012-12-18 Thread Sabuj Pattanayek
i think qperf just writes to and from memory on both systems so that it can best test the network and not disk, then tosses the packets away On Tue, Dec 18, 2012 at 3:34 AM, Andrew Holway a.hol...@syseleven.de wrote: On Dec 18, 2012, at 2:15 AM, Sabuj Pattanayek wrote: I have R610's

Re: [Gluster-users] Infiniband performance issues answered?

2012-12-17 Thread Sabuj Pattanayek
I have R610's with a similar setup but with HT turned on and I'm getting 3.5GB/s for one way RDMA tests between two QDR connected clients using mellanox connectx x4 PCI-E cards in x8 slots. 1GB/s with IPoIB connections (seem to be limited to 10gbe). Note, I had problems with the 1.x branch of OFED

Re: [Gluster-users] Infiniband performance issues answered?

2012-12-17 Thread Sabuj Pattanayek
and yes on some Dells you'll get strange network and RAID controller performance characteristics if you turn on the BIOS power management. On Mon, Dec 17, 2012 at 7:15 PM, Sabuj Pattanayek sab...@gmail.com wrote: I have R610's with a similar setup but with HT turned on and I'm getting 3.5GB/s

Re: [Gluster-users] NASA uses gluster..

2012-10-26 Thread Sabuj Pattanayek
On Fri, Oct 26, 2012 at 1:48 AM, Robert Klemme shortcut...@googlemail.com wrote: On Tue, Oct 23, 2012 at 12:11 AM, Paul Simpson p...@realisestudio.com wrote: thought this might be of interest to you all out there:

Re: [Gluster-users] Gluster 2.6 and infiniband

2012-06-07 Thread Sabuj Pattanayek
To make a long story short, I made rdma client connect files and mounted with them directly : #/etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol /pirdist glusterfs transport=rdma 0 0 #/etc/glusterd/vols/pirstripe/pirstripe.rdma-fuse.vol /pirstripe glusterfs
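Spelled out as /etc/fstab entries, the workaround described above (mounting the volume through its auto-generated RDMA client volfile) would look roughly like this; the volfile paths and mount points are taken from the message, and the exact paths vary by gluster version:

```
# /etc/fstab — mount gluster volumes directly via their RDMA client volfiles
/etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol      /pirdist    glusterfs  transport=rdma  0 0
/etc/glusterd/vols/pirstripe/pirstripe.rdma-fuse.vol  /pirstripe  glusterfs  transport=rdma  0 0
```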

Re: [Gluster-users] Performance issues with striped volume over Infiniband

2012-04-19 Thread Sabuj Pattanayek
On Thu, Apr 19, 2012 at 2:00 PM, Ionescu, A. a.ione...@student.vu.nl wrote: Thanks for your answer, Sabuj. However, I am not sure I understand what you mean by trying with ipoib. Do you mean specifying transport tcp and using the ipoib ips/hostnames for the bricks? If you manage to find a

Re: [Gluster-users] Performance issues with striped volume over Infiniband

2012-04-18 Thread Sabuj Pattanayek
I've seen the same 100MB/s limit (depending on block size of transfer) with 5 bricks in a stripe and have yet to try ipoib, which I hear improves performance over rdma for some reason. On Wed, Apr 18, 2012 at 5:05 AM, Ionescu, A. a.ione...@student.vu.nl wrote: Dear Gluster Users, We are facing

Re: [Gluster-users] Random disconnect

2012-04-13 Thread Sabuj Pattanayek
I have lots of clients hitting my bricks and I noticed that the bricks thought that they were being syn flooded so they were sending out syn cookies. I was also having these sorts of disconnects so I turned off syn cookies, not sure yet if it helped but I haven't had a disconnect since then. On

Re: [Gluster-users] files on stripe not visible with ls or stat, but files visible at the filesystem layer can still be cat'd / viewed through the gluster fuse layer (you just have to know the file name)

2012-04-06 Thread Sabuj Pattanayek
- Original Message - From: Sabuj Pattanayek sab...@gmail.com To: gluster-users@gluster.org Sent: Thu, 05 Apr 2012 17:53:46 -0400 (EDT) Subject: Re: [Gluster-users] files on stripe not visible with ls or stat, but files visible at the filesystem layer can still be cat'd / viewed

Re: [Gluster-users] Newbie's understanding of GlusterFS

2012-03-30 Thread Sabuj Pattanayek
If I had 10 bricks in a replicated volume, would the effective storage capacity be half the total storage capacity, as is the case with RAID1? Yes. Next area I am not sure of: does GlusterFS only store complete files or chunks of files on bricks? For stripes it stores a sparse file that

Re: [Gluster-users] Optimal XFS formatting?

2012-03-30 Thread Sabuj Pattanayek
mount -t xfs -o rw,noatime,nodiratime,logbufs=8 nodiratime is redundant, noatime will do both. ___ Gluster-users mailing list Gluster-users@gluster.org http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
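The equivalent mount line with the redundant flag dropped would look like this (device and mount point are placeholders; noatime already suppresses directory atime updates too):

```
mount -t xfs -o rw,noatime,logbufs=8 /dev/sdX1 /export/brick
```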

Re: [Gluster-users] [Gluster-devel] Targeting Bugs for GlusterFS 3.3 Beta 3

2012-03-27 Thread Sabuj Pattanayek
There is no saved search named 'Bugs targeted for GlusterFS 3.3.0 beta3'. On Tue, Mar 27, 2012 at 1:01 PM, John Mark Walker johnm...@redhat.com wrote: This is a list of bugs that Vijay has tagged for the 3.3.0 beta3 milestone:

Re: [Gluster-users] QA builds for 3.2.6 and 3.3 beta3

2012-03-15 Thread Sabuj Pattanayek
I was looking at it the other day in the source package, but I couldn't tell which changes were new to 3.2.6 from 3.2.5, would have to download the older source and compare the changelog files. On Thu, Mar 15, 2012 at 6:42 AM, David Coulson da...@davidcoulson.net wrote: Is there a change log

Re: [Gluster-users] Usage Case: just not getting the performance I was hoping for

2012-03-15 Thread Sabuj Pattanayek
On Thu, Mar 15, 2012 at 7:51 AM, Paul Simpson p...@realisestudio.com wrote: same - XFS works v well for us too.  maybe this is just a stripe issue? No, it doesn't report the size of sparse files correctly: http://oss.sgi.com/archives/xfs/2011-06/msg00225.html
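The apparent-vs-allocated distinction behind this XFS sparse-file issue is easy to see with GNU coreutils; this sketch just prints the two numbers that stat and du report for a freshly punched sparse file (it does not reproduce the XFS bug itself):

```shell
# Create a 1 GiB sparse file: only metadata is written, no data blocks.
f=$(mktemp)
truncate -s 1G "$f"

apparent=$(stat -c %s "$f")         # logical (apparent) size in bytes
on_disk_kb=$(du -k "$f" | cut -f1)  # KiB actually allocated on disk

echo "apparent=${apparent}B on_disk=${on_disk_kb}KiB"
rm -f "$f"
```

On a healthy filesystem the allocated size stays near zero while the apparent size is the full 1 GiB; the bug discussed in the linked thread was XFS mis-reporting the latter for striped gluster's sparse files.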

[Gluster-users] default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?

2012-02-23 Thread Sabuj Pattanayek
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I used the following option in a volume stripe block in the

Re: [Gluster-users] default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?

2012-02-23 Thread Sabuj Pattanayek
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I

Re: [Gluster-users] default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?

2012-02-23 Thread Sabuj Pattanayek
This seems to be a bug in XFS as Joe pointed out: http://oss.sgi.com/archives/xfs/2011-06/msg00233.html http://stackoverflow.com/questions/6940516/create-sparse-file-with-alternate-data-and-hole-on-ext3-and-xfs It seems

Re: [Gluster-users] Parallel cp?

2012-02-05 Thread Sabuj Pattanayek
On Sun, Feb 5, 2012 at 6:00 AM, Anand Avati anand.av...@gmail.com wrote: On Sun, Feb 5, 2012 at 4:41 AM, Brian Candler b.cand...@pobox.com wrote: I reckon that to quickly copy one glusterfs volume to another, I will need a multi-threaded 'cp'.  That is, something which will take the list of

[Gluster-users] will future versions of gluster support stripe and distribute volumes created with v3.2+

2012-01-27 Thread Sabuj Pattanayek
Hi, Will future versions of gluster support stripe and distribute volumes created with v3.2+ ? Thanks, Sabuj

Re: [Gluster-users] Optimal XFS formatting?

2011-10-20 Thread Sabuj Pattanayek
Hi, I've seen that EXT4 has better random I/O performance than XFS, especially on small reads and writes. For large sequential reads and writes XFS is a little bit better. For really large sequential reads and write EXT4 and XFS are about the same. I used to format XFS using this: mkfs.xfs -l

Re: [Gluster-users] Optimal XFS formatting?

2011-10-20 Thread Sabuj Pattanayek
resemble your workload, to compare xfs vs ext4 both with and without glusterfs. On 10/20/2011 03:36 PM, Sabuj Pattanayek wrote: Hi, I've seen that EXT4 has better random I/O performance than XFS, especially on small reads and writes. For large sequential reads and writes XFS

Re: [Gluster-users] Optimal XFS formatting?

2011-10-20 Thread Sabuj Pattanayek
The issues with both random-access performance and fsck times vary a lot according to *exactly* which version of each you're using.  I'm in the same Yup, our tests recently were done directly to XFS using bonnie, iozone, fio, and tiobench on centos6, which is not using the most bleeding edge

Re: [Gluster-users] gluster client performance

2011-07-26 Thread Sabuj Pattanayek
3. What is the IB bandwidth that you are getting between the compute node and the glusterfs storage node? You can run the tool rdma_bw to get the details: This is what I got on bidirectional : 2638: Bandwidth peak (#0 to #785): 6052.22 MB/sec 2638: Bandwidth average: 6050.02 MB/sec 2638:
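A rough sketch of running that bidirectional test between two IB-connected hosts; rdma_bw shipped with the older OFED perftest suite and its flags vary by version (newer perftest replaces it with ib_send_bw), so treat the exact invocation as an assumption:

```
# On the server node:
rdma_bw
# On the client node (-b requests a bidirectional run in the perftest tools):
rdma_bw server-hostname -b
```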

Re: [Gluster-users] gluster client performance

2011-07-25 Thread Sabuj Pattanayek
version of gluster and any of the configurations (if they apply) that I've posted. It might compel me to upgrade. HTH, Sabuj Pattanayek For some background, our compute cluster has 64 compute nodes. The gluster storage pool has 10 Dell PowerEdge R515 servers, each with 12 x 2 TB disks. We have

Re: [Gluster-users] 3.0.5 RDMA clients seem broken

2011-07-25 Thread Sabuj Pattanayek
n.b. we are updating them to 3.2.2 tomorrow. anyone know if upgrading from 3.0.x to 3.2.x works without data loss for gluster storage nodes where clients are accessing the storage nodes in a stripe (or distribute for that matter)?

[Gluster-users] tar file shrank, padding with zeros errors

2011-04-04 Thread Sabuj Pattanayek
Hi, I'm trying to tar files directly off of a striped gluster mount to tape. For multi-gb files, tar sometimes throws an error saying that the file has shrunk by some bytes and it is being padded with zeros, found some info about this in relation to fuse controller file systems:

[Gluster-users] gluster 3.0.x change client IP access list without restarting and questions on upgrade to 3.1

2011-04-01 Thread Sabuj Pattanayek
Hi, Doing a quick google search http://www.google.com/search?hl=enq=glusterfs%20update%20configuration%20without%20restartbtnG=Google+Search shows that 3.1 supports dynamically changing the glusterfsd.vol configuration without having to restart the servers, for example adding or removing

Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Sabuj Pattanayek
for the amount of features that you get with backuppc, it's worth the fairly painless setup. Btw, we've found that it's better/faster to use tar via backuppc (it supports rsync as well) to do the backups rather than rsync in backuppc. Rsync can be really slow if you have thousands/millions of

Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Sabuj Pattanayek
for the amount of features that you get with backuppc, it's worth the fairly painless setup. Btw, we've found that it's better/faster to use tar via backuppc (it supports rsync as well) to do the backups rather than rsync in backuppc. Rsync can be really slow

[Gluster-users] crash when using the cp command to copy files off a striped gluster dir but not when using rsync

2010-03-02 Thread Sabuj Pattanayek
14 06:36:37 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux Any ideas? Thanks, Sabuj Pattanayek

Re: [Gluster-users] crash when using the cp command to copy files off a striped gluster dir but not when using rsync

2010-03-02 Thread Sabuj Pattanayek
Also, I cannot duplicate the crash when I copy using cp off the distributed endpoint, i.e. cp works fine there.

Re: [Gluster-users] gigabit to infiniband storage network

2010-02-02 Thread Sabuj Pattanayek
-dist.allow 10.0.1.* subvolumes iothreads iothreads-dist end-volume HTH, Sabuj Pattanayek On Tue, Feb 2, 2010 at 1:49 AM, teg...@renget.se wrote: I'm planning a storage solution for our HPC environment which is built on both gigabit and infiniband switches. The gluster storage nodes

Re: [Gluster-users] quota question

2010-01-04 Thread Sabuj Pattanayek
The only way to achieve this currently is to use quotas at the filesystem level. Given n gluster storage servers I just divide the total required quota for a user/group by n and set that part of the quota on each storage server. On Mon, Jan 4, 2010 at 7:54 PM, Konstantin Sharlaimov
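The divide-by-n workaround described above can be sketched as follows; the totals, user name, brick path, and the setquota invocation are all illustrative (quota syntax varies by filesystem and quota-tools version):

```shell
# With no cluster-wide quota support, split the user's total quota evenly
# across the gluster storage servers and apply that slice on each one.
total_gb=10240      # user's total quota across the volume (10 TB, illustrative)
n_servers=5         # gluster storage servers holding the bricks

per_server_gb=$(( total_gb / n_servers ))
echo "apply ${per_server_gb} GB per server"

# Then on each storage server, set the slice with the local filesystem's
# quota tooling, e.g. (hypothetical user/path, syntax varies):
#   setquota -u alice 0 ${per_server_gb}G 0 0 /export/brick
```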

Re: [Gluster-users] is it possible to use ib-verbs and tcp transport-types on the same volume?

2009-08-10 Thread Sabuj Pattanayek
You want to break it out into two server volumes: # Server settings volume tcp-server  type protocol/server  option transport-type tcp/server  subvolumes brick  option auth.addr.brick.allow * end-volume volume ib-server  type protocol/server  option transport-type ib-verbs/server
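Completing the pattern the message starts: the ib-verbs server volume mirrors the tcp one, with only the transport-type differing (this follows the legacy 2.x/3.0 volfile syntax shown in the snippet; the brick subvolume name is taken from it):

```
volume tcp-server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

volume ib-server
  type protocol/server
  option transport-type ib-verbs/server
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
```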