Has the 32 group limit been fixed yet? If not, how about fixing it? :)
https://bugzilla.redhat.com/show_bug.cgi?id=789961
On Thu, Mar 13, 2014 at 8:01 AM, Jeff Darcy jda...@redhat.com wrote:
I am a little bit impressed by the lack of action on this topic. I hate to be that guy, especially being
Hi all,
We're still having problems with the 32 group limit for gluster. We have several users who are in 32 AD groups. What version is expected to have this limit increased, and when will we see POSIX ACL and NFSv4 ACL support?
Thanks,
Sabuj
Hi all,
Has anyone ever seen a problem where ls seems to be stuck (can't Ctrl+C out of it, only kill -9), but is actually looping forever in a directory under a simple distributed setup? It seems like inodes have somehow gotten into a loop. Any suggestions other than running fsck on the underlying filesystems? It doesn't look like there's a problem in the actual filesystems, since I can go into the directories at the filesystem level and ls works fine there. The data looks OK as well.
On Mon, Apr 15, 2013 at 9:01 AM, Sabuj Pattanayek sab...@gmail.com wrote:
Hi all,
Anyone ever seen a problem where ls seems
On Mon, Apr 15, 2013 at 9:21 AM, Michael Brown mich...@netdirect.ca wrote:
Yes, actually.
Are you running with ext4 bricks?
yes
http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
and I just upgraded to 2.6.32-279.22.1 on the bricks.
If so, try disabling
I didn't see anything in the paper that mentioned the specs of your load-generating clients. Where can I get your test scripts for the structured filesystem tests, other than specsfs2k8? I assume you used distributed + replicated with gluster? Is Compuverde using a similar algorithm, or is it
Hi,
Anyone know if the 1024-character limit for auth.allow still exists in the latest production version (it seems to be there in 3.2.5)? Also, does anyone know if the newer versions check whether auth.allow has been updated without having to restart glusterd? Is there any way to restart glusterd without killing it?
It would also be nice if auth.allow could take DNS entries similar to the way that /etc/exports for NFS can read these types of entries, e.g. *.blah.foo.edu
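(For reference, on the 3.x CLI the usual way to change auth.allow is something like the following, which applies without restarting glusterd; the volume name and addresses are placeholders, and I'm not sure whether wildcard DNS names are accepted anywhere yet:
gluster volume set myvol auth.allow "10.0.1.*,192.168.5.100"
gluster volume info myvol    # confirm the option took effect
)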
On Tue, Jan 15, 2013 at 5:59 PM, Sabuj Pattanayek sab...@gmail.com wrote:
Hi,
Anyone know if the 1024 char limit for auth.allow still exists
I think qperf just writes to and from memory on both systems so that it can best test the network and not the disks, then tosses the packets away.
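(In case it's useful, qperf is typically run roughly like this; "server1" is a placeholder hostname:
qperf                              # on the server node, just listens
qperf server1 tcp_bw tcp_lat       # TCP bandwidth and latency from the client
qperf server1 rc_rdma_write_bw     # RDMA write bandwidth over a reliable connection
)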
On Tue, Dec 18, 2012 at 3:34 AM, Andrew Holway a.hol...@syseleven.de wrote:
On Dec 18, 2012, at 2:15 AM, Sabuj Pattanayek wrote:
I have R610's
I have R610's with a similar setup but with HT turned on, and I'm getting 3.5GB/s for one-way RDMA tests between two QDR-connected clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and 1GB/s with IPoIB connections (which seem to be limited to 10GbE). Note, I had problems with the 1.x branch of OFED
And yes, on some Dells you'll get strange network and RAID controller performance characteristics if you turn on BIOS power management.
On Mon, Dec 17, 2012 at 7:15 PM, Sabuj Pattanayek sab...@gmail.com wrote:
I have R610's with a similar setup but with HT turned on and I'm
getting 3.5GB/s
On Fri, Oct 26, 2012 at 1:48 AM, Robert Klemme
shortcut...@googlemail.com wrote:
On Tue, Oct 23, 2012 at 12:11 AM, Paul Simpson p...@realisestudio.com wrote:
thought this might be of interest to you all out there:
To make a long story short, I made rdma client connect files and mounted with them directly:
#/etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol /pirdist glusterfs transport=rdma 0 0
#/etc/glusterd/vols/pirstripe/pirstripe.rdma-fuse.vol /pirstripe glusterfs
On Thu, Apr 19, 2012 at 2:00 PM, Ionescu, A. a.ione...@student.vu.nl wrote:
Thanks for your answer, Sabuj. However, I am not sure I understand what you mean by trying with IPoIB.
Do you mean specifying transport tcp and using the IPoIB IPs/hostnames for the bricks?
If you manage to find a
I've seen the same 100MB/s limit (depending on the block size of the transfer) with 5 bricks in a stripe, and have yet to try IPoIB, which I hear improves performance over RDMA for some reason.
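(For illustration, mounting over IPoIB is usually just a matter of pointing the native client at a name that resolves to the server's IPoIB address; the names below are placeholders and assume ib0 already has an IP configured:
mount -t glusterfs ib-storage1:/pirstripe /pirstripe
or in fstab:
ib-storage1:/pirstripe  /pirstripe  glusterfs  defaults,_netdev  0 0
The traffic then goes over the IB fabric but uses the tcp transport.)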
On Wed, Apr 18, 2012 at 5:05 AM, Ionescu, A. a.ione...@student.vu.nl wrote:
Dear Gluster Users,
We are facing
I have lots of clients hitting my bricks, and I noticed that the bricks thought they were being SYN flooded, so they were sending out SYN cookies. I was also having these sorts of disconnects, so I turned off SYN cookies; not sure yet if it helped, but I haven't had a disconnect since then.
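(For anyone else hitting this, SYN cookies are typically disabled on Linux like so; whether that's wise depends on your exposure to real SYN floods:
sysctl net.ipv4.tcp_syncookies                              # check the current value
sysctl -w net.ipv4.tcp_syncookies=0                         # disable at runtime
echo 'net.ipv4.tcp_syncookies = 0' >> /etc/sysctl.conf      # persist across reboots
A gentler alternative is to raise net.ipv4.tcp_max_syn_backlog so bursts of legitimate connections don't look like a flood.)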
On
- Original Message -
From: Sabuj Pattanayek sab...@gmail.com
To: gluster-users@gluster.org
Sent: Thu, 05 Apr 2012 17:53:46 -0400 (EDT)
Subject: Re: [Gluster-users] files on stripe not visible with ls or stat, but
files visible at the filesystem layer can still be cat'd / viewed
If I had 10 bricks in a replicated volume, would the effective storage capacity be half the total storage capacity, as is the case with RAID1?
Yes.
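(To illustrate with a hypothetical layout, not the poster's actual volume: four 1 TB bricks created as replica 2 give roughly 2 TB usable, since every file is written to two bricks:
gluster volume create myvol replica 2 server1:/export/brick server2:/export/brick server3:/export/brick server4:/export/brick
# 4 x 1 TB raw = 4 TB; replica 2 => ~2 TB usable
)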
Next area I am not sure of: does GlusterFS only store complete files or chunks of files on bricks?
for stripes it stores a sparse file that
mount -t xfs -o rw,noatime,nodiratime,logbufs=8
nodiratime is redundant here; noatime covers both file and directory access times.
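(So the equivalent mount with the redundant flag dropped would be something like the following, with a placeholder device and mountpoint:
mount -t xfs -o rw,noatime,logbufs=8 /dev/sdb1 /bricks/brick1
)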
There is no saved search named 'Bugs targeted for GlusterFS 3.3.0 beta3'.
On Tue, Mar 27, 2012 at 1:01 PM, John Mark Walker johnm...@redhat.com wrote:
This is a list of bugs that Vijay has tagged for the 3.3.0 beta3 milestone:
I was looking at it the other day in the source package, but I couldn't tell which changes were new in 3.2.6 versus 3.2.5; I would have to download the older source and compare the ChangeLog files.
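(For what it's worth, that comparison would look roughly like this, assuming both release tarballs are already downloaded and that each ships a ChangeLog file:
tar xf glusterfs-3.2.5.tar.gz
tar xf glusterfs-3.2.6.tar.gz
diff -u glusterfs-3.2.5/ChangeLog glusterfs-3.2.6/ChangeLog | less
)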
On Thu, Mar 15, 2012 at 6:42 AM, David Coulson da...@davidcoulson.net wrote:
Is there a change log
On Thu, Mar 15, 2012 at 7:51 AM, Paul Simpson p...@realisestudio.com wrote:
Same here - XFS works very well for us too. Maybe this is just a stripe issue?
No, it doesn't report the size of sparse files correctly:
http://oss.sgi.com/archives/xfs/2011-06/msg00225.html
Hi,
I've been migrating data from an old striped 3.0.x gluster install to
a 3.3 beta install. I copied all the data to a regular XFS partition
(4K blocksize) from the old gluster striped volume and it totaled
9.2TB. With the old setup I used the following option in a volume
stripe block in the
, 2012 at 5:12 PM, Sabuj Pattanayek sab...@gmail.com wrote:
Hi,
I've been migrating data from an old striped 3.0.x gluster install to
a 3.3 beta install. I copied all the data to a regular XFS partition
(4K blocksize) from the old gluster striped volume and it totaled
9.2TB. With the old setup I
, 2012 at 12:50 AM, Sabuj Pattanayek sab...@gmail.com wrote:
This seems to be a bug in XFS as Joe pointed out :
http://oss.sgi.com/archives/xfs/2011-06/msg00233.html
http://stackoverflow.com/questions/6940516/create-sparse-file-with-alternate-data-and-hole-on-ext3-and-xfs
It seems
On Sun, Feb 5, 2012 at 6:00 AM, Anand Avati anand.av...@gmail.com wrote:
On Sun, Feb 5, 2012 at 4:41 AM, Brian Candler b.cand...@pobox.com wrote:
I reckon that to quickly copy one glusterfs volume to another, I will need a
multi-threaded 'cp'. That is, something which will take the list of
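(One rough way to approximate a multi-threaded cp with standard tools, sketched with placeholder paths; it only copies regular files, so empty directories, symlinks, and permissions would need separate handling:
cd /mnt/src-volume
find . -type f -print0 | xargs -0 -P 8 -I{} cp --parents {} /mnt/dst-volume/
)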
Hi,
Will future versions of gluster support stripe and distribute volumes
created with v3.2+ ?
Thanks,
Sabuj
Hi,
I've seen that EXT4 has better random I/O performance than XFS,
especially on small reads and writes. For large sequential reads and
writes XFS is a little bit better. For really large sequential reads
and writes, EXT4 and XFS are about the same. I used to format XFS using
this:
mkfs.xfs -l
resemble your workload, to compare xfs vs ext4 both with and without
glusterfs.
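(As an illustration of that kind of comparison, a small fio job along these lines could be run against an XFS brick, an ext4 brick, and then a glusterfs mount of each; the parameters are purely illustrative, not tuned to anyone's workload:
fio --name=randrw --directory=/mnt/test --size=4g --rw=randrw --rwmixread=70 \
    --bs=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 \
    --runtime=120 --time_based --group_reporting
Direct I/O may not work through the FUSE mount on older glusterfs releases, so --direct=1 might have to be dropped there.)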
On 10/20/2011 03:36 PM, Sabuj Pattanayek wrote:
Hi,
I've seen that EXT4 has better random I/O performance than XFS,
especially on small reads and writes. For large sequential reads
and
writes XFS
The issues with both random-access performance and fsck times vary a lot
according to *exactly* which version of each you're using. I'm in the same
Yup, our tests were recently done directly against XFS using bonnie, iozone, fio, and tiobench on CentOS 6, which is not using the most bleeding edge
3. What is the IB bandwidth that you are getting between the compute node
and the glusterfs storage node? You can run the tool rdma_bw to get the
details:
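(For anyone unfamiliar with it, rdma_bw ships with older OFED releases and is typically run roughly like this; the hostname is a placeholder and the available flags vary between versions, so check its help output for things like message size, iteration count, and bidirectional mode:
rdma_bw              # on the server node
rdma_bw ibserver     # on the client node
)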
This is what I got on the bidirectional test:
2638: Bandwidth peak (#0 to #785): 6052.22 MB/sec
2638: Bandwidth average: 6050.02 MB/sec
2638:
version of
gluster and any of the configurations (if they apply) that I've
posted. It might compel me to upgrade.
HTH,
Sabuj Pattanayek
For some background, our compute cluster has 64 compute nodes. The gluster
storage pool has 10 Dell PowerEdge R515 servers, each with 12 x 2 TB disks.
We have
n.b. we are updating them to 3.2.2 tomorrow.
anyone know if upgrading from 3.0.x to 3.2.x works without data loss
for gluster storage nodes where clients are accessing the storage
nodes in a stripe (or distribute for that matter)?
Hi,
I'm trying to tar files directly off of a striped gluster mount to
tape. For multi-GB files, tar sometimes throws an error saying that the file has shrunk by some bytes and is being padded with zeros; I found some info about this in relation to FUSE-based file systems:
Hi,
Doing a quick google search
http://www.google.com/search?hl=en&q=glusterfs%20update%20configuration%20without%20restart&btnG=Google+Search
shows that 3.1 supports dynamically changing the glusterfsd.vol
configuration without having to restart the servers, for example
adding or removing
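(For illustration, a couple of operations that take effect on a running volume without restarting anything; the volume, server, and brick names are hypothetical:
gluster volume add-brick myvol server3:/export/brick1
gluster volume set myvol performance.cache-size 256MB
)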
For the number of features that you get with backuppc, it's worth the
fairly painless setup. Btw, we've found that it's better/faster to use
tar via backuppc (it supports rsync as well) to do the backups rather
than rsync in backuppc. Rsync can be really slow if you have
thousands/millions of
, Sabuj Pattanayek sab...@gmail.com wrote:
for the amount of features that you get with backuppc, it's worth the
fairly painless setup. Btw, we've found that it's better/faster to use
tar via backuppc (it supports rsync as well) to do the backups rather
than rsync in backuppc. Rsync can be really slow
14 06:36:37 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Any ideas?
Thanks,
Sabuj Pattanayek
Also, I cannot duplicate the crash when I copy using cp off the
distributed endpoint, i.e. cp works fine there.
-dist.allow 10.0.1.*
subvolumes iothreads iothreads-dist
end-volume
HTH,
Sabuj Pattanayek
On Tue, Feb 2, 2010 at 1:49 AM, teg...@renget.se wrote:
I'm planning a storage solution for our HPC environment which is built on
both gigabit and infiniband switches. The gluster storage nodes
The only way to achieve this currently is to use quotas at the filesystem level. Given n gluster storage servers, I just divide the total required quota for a user/group by n and set that part of the quota on each storage server.
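(A hypothetical example of the arithmetic: to give user jdoe 1 TB in total across 5 storage servers, set a 200 GB filesystem quota on each brick; with standard Linux quotas that is roughly:
setquota -u jdoe 0 209715200 0 0 /bricks/brick1    # block limits are in 1 KiB units; run on every server
On XFS bricks the same idea works via xfs_quota instead.)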
On Mon, Jan 4, 2010 at 7:54 PM, Konstantin Sharlaimov
You want to break it out into two server volumes:
# Server settings
volume tcp-server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  option auth.addr.brick.allow *
end-volume

volume ib-server
  type protocol/server
  option transport-type ib-verbs/server
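The excerpt is cut off there, but presumably the ib-server block continues in parallel with the tcp one, along these lines (a sketch based on the tcp-server block above, not the original text):
volume ib-server
  type protocol/server
  option transport-type ib-verbs/server
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
Gigabit clients then connect via the tcp transport and InfiniBand clients via ib-verbs, both exporting the same brick.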