Hello,
I have a problem with gluster 3.2.6 and InfiniBand. With gluster 3.3
it works OK, but with 3.2.6 I have the following problems:
when I try to mount the rdma volume using the command mount -t glusterfs
192.168.100.1:/atlas1.rdma mount, I get:
[2012-06-07 04:30:18.894337] I
Hello,
at first it was tcp, then tcp,rdma.
You are right that without the tcp definition, .rdma is not working. But
now I have another problem.
I'm trying tcp / rdma; I'm even trying tcp/rdma using a normal network
card (not using the InfiniBand IP but a normal 1 Gbit network
card) and I still have the same speed,
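(For context, defining both transports and then selecting rdma at mount
time looks roughly like this; a sketch only, with the brick path
/export/atlas1 and mount point /mnt/atlas1 as placeholders:

    # create the volume with both transports enabled
    gluster volume create atlas1 transport tcp,rdma 192.168.100.1:/export/atlas1
    gluster volume start atlas1
    # appending .rdma to the volume name selects the rdma transport
    mount -t glusterfs 192.168.100.1:/atlas1.rdma /mnt/atlas1
)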
Hi,
Sorry this reply won't be of any help to your problem, but I am too curious to
understand how it can be even slower when mounting using the Gluster client, which I
would expect to always be quicker than NFS or anything else.
If you find the reason, report it back to the list and share with us please. I
Hello there.
That's really interesting, because we're thinking about using GlusterFS too with a
similar setup/scenario.
I read about a really strange setup with a GlusterFS native client mount on
the web servers and an NFS mount on top of that, so you get GlusterFS failover +
NFS caching.
Can't find the
Here's the link:
http://community.gluster.org/a/nfs-performance-with-fuse-client-redundancy/
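For anyone curious, that setup boils down to entries like the following
(a sketch only; the hostnames, paths, and fsid value are my assumptions,
not from the link):

    # /etc/fstab on the web server: mount the volume with the native client
    gluster1:/webdata /mnt/webdata glusterfs defaults,_netdev 0 0
    # /etc/exports on the same machine: re-export the FUSE mount via kernel NFS
    # (an explicit fsid is required when exporting a FUSE filesystem)
    /mnt/webdata localhost(rw,fsid=10,no_subtree_check)
    # a loopback NFS mount then adds NFS client-side caching on top of
    # the gluster failover
    mount -t nfs -o vers=3 localhost:/mnt/webdata /var/www/shared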
Sent again with a reply to all.
Gerald
- Original Message -
From: Christian Meisinger em_got...@gmx.net
To: olav johansen luxis2...@gmail.com
Cc: gluster-users@gluster.org
Sent: Thursday,
To make a long story short, I made rdma client connect files and
mounted with them directly (fstab entries):
#/etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol /pirdist glusterfs transport=rdma 0 0
#/etc/glusterd/vols/pirstripe/pirstripe.rdma-fuse.vol /pirstripe glusterfs transport=rdma 0 0
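(Mounting one of those volfiles by hand would look something like this;
a sketch, assuming the 3.2.x volfile paths shown above:

    # mount the RDMA client volfile directly, bypassing the server volfile fetch
    mount -t glusterfs /etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol /pirdist
)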
Brian,
Small correction: 'sending queries to *both* servers to check they are in
sync - even read accesses.' Read fops like stat/getxattr etc are sent to only
one brick.
Pranith.
- Original Message -
From: Brian Candler b.cand...@pobox.com
To: Fernando Frediani (Qube)
Hey everyone,
I currently have an NFS server that I need to make highly available. I was
thinking I would use Gluster, but since there's no way to match Gluster's built-in
NFS server to my current NFS exports file, I can't use the Gluster NFS
server. So I was thinking I could have two bricks
On Thu, Jun 07, 2012 at 08:34:56AM -0400, Pranith Kumar Karampuri wrote:
Brian,
Small correction: 'sending queries to *both* servers to check they are in
sync - even read accesses.' Read fops like stat/getxattr etc are sent to only
one brick.
Is that new behaviour for 3.3? My
Hi Brian,
The 'stat' command comes in as the fop (file operation) 'lookup' to the gluster mount,
which triggers self-heal. So the behavior is still the same.
I was referring to the fop 'stat', which will be performed on only one of the
bricks.
Unfortunately most of the commands and fops have the same name.
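(Concretely, from the shell side; the mount path here is hypothetical:

    # the command-line 'stat' is seen by gluster as a LOOKUP fop, which is
    # checked against both replicas and triggers self-heal if needed;
    # the 'stat' fop itself is served by a single brick
    stat /mnt/gluster/somefile
)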
Hi everybody,
we are testing Gluster 3.3 as an alternative to our current Nexenta-based
storage. With the introduction of granular-based locking, gluster seems like a
viable alternative for VM storage.
Regrettably we cannot get it to work even for the most rudimentary tests. We
have a two brick
Here are a couple of wrinkles I have come across while trying gluster 3.3.0
under ubuntu-12.04.
(1) At one point I decided to delete some volumes and recreate them. But
it would not let me recreate them:
root@dev-storage2:~# gluster volume create fast
dev-storage1:/disk/storage1/fast
Brian,
The first point (1) is working as intended. Allowing something like
that can get the volume into a very complicated state.
Please go through the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=812214
Pranith
- Original Message -
From: Brian Candler
Hi Atha,
I have a very similar setup and behaviour here.
I have two bricks with replication, and I am able to mount the NFS share and deploy a
machine there, but when I try to power it on it simply doesn't work; it gives a
message saying that it couldn't find some files.
I wonder if anyone
There were suggestions in previous emails about how to improve Gluster in
the development of the next version, 3.4.
Well, I guess we can all put up a list, see which items are most popular and
useful to most people, and then send it to the developers for consideration.
My list starts with:
RAID 1E
On Wed 06 Jun 2012 10:25:38 PM PDT, Vijay Bellur wrote:
On 06/07/2012 03:22 AM, Jason Brooks wrote:
I've been testing on CentOS 6.2. The only command from the Admin guide
I've run successfully has been: curl -v -H 'X-Storage-User: test:tester'
-H 'X-Storage-Pass:testing' -k
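(For reference, the complete auth request from the admin guide takes roughly
this form; the endpoint URL and port are my assumption, adjust to your
proxy-server config:

    curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass:testing' \
        -k https://127.0.0.1:443/auth/v1.0
)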
Hi All,
Thanks for the great feedback. I had changed IPs, and when checking the log I
noticed one server wasn't connecting correctly.
To rule out any mistakes on my side I've re-done the bricks from scratch with clean
configurations, mount info attached below; still not performing
'great' compared to a
Hi Fernando,
thanks for the reply. I'm seeing exactly the same behavior. I'm wondering if it
somehow has to do with locking. I read here
(http://community.gluster.org/q/can-not-mount-nfs-share-without-nolock-option/)
that locking on NFS was not implemented in 3.2.x and it is now in 3.3. I
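(For reference, the workaround described in that link is to disable NLM
locking at mount time, roughly as follows; the server and volume names are
placeholders:

    # 3.2.x gluster NFS had no lock support, so locking must be disabled client-side
    mount -t nfs -o vers=3,nolock server:/volume /mnt/volume
)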
Hi Brian,
Answers inline.
Here are a couple of wrinkles I have come across while trying gluster 3.3.0
under ubuntu-12.04.
(1) At one point I decided to delete some volumes and recreate them. But
it would not let me recreate them:
root@dev-storage2:~# gluster volume create fast
One can use the clear_xattrs.sh script with the bricks as arguments to remove
all the xattrs set on the bricks. It recursively deletes all
xattrs from the bricks' files. After running this script on the bricks, we can
re-use them.
Regards,
Rajesh Amaravathi,
Software Engineer, GlusterFS
RedHat Inc.
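(For anyone without the script handy, the same cleanup can be approximated
manually, along these lines; a sketch, using the brick path from earlier in
the thread as an example, and the exact xattr set depends on the gluster
version:

    # remove the xattrs that mark the directory as an already-used brick
    setfattr -x trusted.glusterfs.volume-id /disk/storage1/fast
    setfattr -x trusted.gfid /disk/storage1/fast
    # clear gfid xattrs left on the brick's files, then drop the .glusterfs tree
    find /disk/storage1/fast -mindepth 1 -exec setfattr -x trusted.gfid {} \; 2>/dev/null
    rm -rf /disk/storage1/fast/.glusterfs
)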