Hi all,
In about 20 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
webchat: http://webchat.freenode.net/?channels=gluster-meeting
- date: every Wednesday
- time: 7:00 EST, 12:00 UTC, 13:00 CET,
I'm not sure what you mean by mode 1000. Are you referring to the .gluster
directory or the brick files themselves?
From: Joe Julian [mailto:j...@julianfamily.org]
Sent: Sunday, February 15, 2015 10:54 AM
To: Thomas Holkenbrink; 'gluster-users@gluster.org'
Subject: Re: [Gluster-users] Gluster
Hi,
I would like to know whether the splitmount utility is still useful for
Gluster versions above 3.3. My understanding is that since version 3.3 I
also need to delete the hard link in a split-brain case; is that correct?
Thanks
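For context, the hard link in question lives under the brick's .glusterfs directory. A hedged sketch of clearing a split-brain copy from one brick, with placeholder brick path, file name, and volume name (these are assumptions, not anything from the thread):

```shell
# Hypothetical example: BRICK, FILE and the volume name "myvol" are
# placeholders; adjust for your own layout.
BRICK=/data/brick1
FILE=dir/afile.log

# Read the file's GFID from the trusted.gfid xattr on the brick copy.
GFID=$(getfattr -n trusted.gfid -e hex "$BRICK/$FILE" \
       | awk -F= '/trusted.gfid/ {print substr($2,3)}')

# The GFID hard link lives at .glusterfs/aa/bb/<uuid>, where aa and bb
# are the first two byte pairs of the GFID.
LINK="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/${GFID:0:8}-${GFID:8:4}-${GFID:12:4}-${GFID:16:4}-${GFID:20:12}"

# Remove both the file and its GFID hard link, then trigger a heal.
rm -f "$BRICK/$FILE" "$LINK"
gluster volume heal myvol
```

Removing only the visible file and leaving the .glusterfs link behind is what typically leaves the split-brain unresolved.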
___
Gluster-users mailing list
Gluster-users@gluster.org
Hi all,
I'm having this problem after upgrading from 3.5.3 to 3.6.2.
At the moment I am still waiting for a heal to finish (on a 31TB volume
with 42 bricks, replicated over three nodes).
Tom,
how did you remove the duplicates?
with 42 bricks I will not be able to do this manually..
Did a:
find
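The command above is cut off in the archive. As a general illustration only (not the poster's actual command): the "mode 1000" files mentioned earlier in this thread are DHT link files, zero-length files on a brick carrying only the sticky bit, and listing them is often the first step when hunting duplicate directory entries. Brick path is a placeholder:

```shell
# General sketch: find DHT link files (mode 1000, zero length) on a
# brick, skipping the internal .glusterfs tree.
find /data/brick1 -path '*/.glusterfs' -prune -o \
     -type f -perm 1000 -size 0 -print
```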
Hi Olav,
I have a hunch that our problem was caused by improper unmounting of the
gluster volume. I have since found that the proper order should be: kill all
jobs using the volume, unmount the volume on the clients, gluster volume
stop, then stop the gluster service (if necessary).
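The shutdown order above can be sketched as follows, assuming a volume named "gv0" mounted at /mnt/gv0 (both names are placeholders):

```shell
# 1. On each client: kill any jobs still using the volume, then unmount.
fuser -km /mnt/gv0
umount /mnt/gv0

# 2. On one server: stop the volume.
gluster volume stop gv0

# 3. On each server, if necessary: stop the gluster service.
service glusterd stop    # or: systemctl stop glusterd
```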
In my case, I wrote a
Bump!
On 2015-02-16 16:19, Ernie Dunbar wrote:
Hi list.
I've searched around and I've found that nobody seems to have asked this
question before. Is it 100% necessary to have Gluster bricks that are
formatted with XFS, and is it also 100% necessary that it needs to be its own
I set up a distributed, replicated volume consisting of just 2 bricks on
two physical nodes. The nodes are peered over a dedicated gigabit Ethernet
link and can be accessed from the clients via a separate gigabit Ethernet NIC.
Doing a simple dd performance test I see about 11 MB/s for read and
write.
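A dd test along these lines might look as follows; the mount point /mnt/gv0 is a placeholder, not the poster's actual path:

```shell
# Write test: 1 GiB in 1 MiB blocks. oflag=direct bypasses the page
# cache so the number reflects the network and bricks, not local RAM.
dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=1024 oflag=direct

# Read test: iflag=direct avoids reading back from the client cache.
dd if=/mnt/gv0/ddtest of=/dev/null bs=1M iflag=direct

rm -f /mnt/gv0/ddtest
```

Without the direct flags, dd on a freshly written file mostly measures client-side caching rather than Gluster throughput.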
- Original Message -
From: Lars Hanke deb...@lhanke.de
To: gluster-users@gluster.org
Sent: Wednesday, February 18, 2015 3:01:54 PM
Subject: [Gluster-users] Poor Gluster performance
I set up a distributed, replicated volume consisting of just 2 bricks on
two physical nodes. The
Thanks Tom and Joe,
for the fast response!
Before I started my upgrade I stopped all clients using the volume and
stopped all VM's with VHD on the volume, but I guess, and this may be
the missing thing to reproduce this in a lab, I did not detach a NFS
shared storage mount from a XenServer
- Original Message -
From: Lars Hanke deb...@lhanke.de
To: Ben Turner btur...@redhat.com
Cc: gluster-users@gluster.org
Sent: Wednesday, February 18, 2015 5:09:19 PM
Subject: Re: [Gluster-users] Poor Gluster performance
On 18.02.2015 at 22:05, Ben Turner wrote:
- Original
Hi,
XFS is recommended but not 100% necessary. However, there are some known
bugs with Gluster on other filesystems, e.g. ext4. I learnt that the hard way :)
Gluster needs bricks, and bricks need to have a size. In a replicated setup
it is strongly recommended to have bricks of the same size. That
is, those that
On 18.02.2015 at 22:05, Ben Turner wrote:
- Original Message -
From: Lars Hanke deb...@lhanke.de
To: gluster-users@gluster.org
Sent: Wednesday, February 18, 2015 3:01:54 PM
Subject: [Gluster-users] Poor Gluster performance
I set up a distributed, replicated volume consisting of just 2
On Wed, Feb 18, 2015 at 12:41:40PM +0100, Niels de Vos wrote:
Hi all,
In about 20 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
webchat: http://webchat.freenode.net/?channels=gluster-meeting
I could probably chip in too. I've run tons of my own science
experiments on Rackspace instead of our own hardware, because that makes
my results more reproducible by others. If we can enable more people to
do likewise, that benefits everyone.
P.S. Hi Jesse. Small world, huh?
Hi all,
I am setting up a Gluster replica with 2 bricks and 2 native client mounts
on the same brick servers.
In server1 :
volume logs brick 1 on /data/brick1 (XFS)
mount server1:/logs on /mount/logs
In server2:
volume logs brick 2 on
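The message is cut off in the archive. A hedged sketch of the setup it describes, with assumed brick paths (the server2 brick path in particular is a guess, since the original is truncated):

```shell
# From server1: peer the two nodes and create a 2-brick replica volume.
gluster peer probe server2

gluster volume create logs replica 2 \
    server1:/data/brick1 server2:/data/brick1
gluster volume start logs

# Native (FUSE) client mount on each server:
mount -t glusterfs server1:/logs /mount/logs    # on server1
mount -t glusterfs server2:/logs /mount/logs    # on server2
```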
On 18 Feb 2015, at 13:50, Jesse Noller jesse.nol...@rackspace.com wrote:
Sorry, this is what I get for logging off for the night: short answer is YES.
At the end of the day I want to showcase Gluster's awesomeness and also be
able to show users how to do it right in the cloud for shared, fault
Sup Jeff!
I'm pro doing it as a *user* would; always
Jesse
On Feb 18, 2015, at 7:50 AM, Jeff Darcy jda...@redhat.com wrote:
I could probably chip in too. I've run tons of my own science
experiments on Rackspace instead of our own hardware, because that makes
my results more reproducible
Sorry, this is what I get for logging off for the night: short answer is YES.
At the end of the day I want to showcase Gluster's awesomeness and also be able
to show users how to do it right in the cloud for shared, fault tolerant file
systems
Jesse
On Feb 17, 2015, at 5:53 PM, Justin Clift
- Original Message -
From: Justin Clift jus...@gluster.org
To: Benjamin Turner bennytu...@gmail.com
Cc: Gluster Users gluster-users@gluster.org, Gluster Devel
gluster-de...@gluster.org, Jesse Noller
jesse.nol...@rackspace.com
Sent: Tuesday, February 17, 2015 6:52:48 PM
Subject: Re:
Looks like we have four volunteers:
* Ben Turner (primary GlusterFS perf tuning guy)
* Jeff Darcy (greybeard GlusterFS developer and scalability expert)
* Josh Boon (experienced GlusterFS guy - Ubuntu focused)
* Nico Schottelius (newer GlusterFS guy - familiar with Ubuntu/CentOS)
This
On to the logistics:
When: I'm looking at sometime during the second week of May (May 11-15).
Alternatively, the third week of April (April 13-19), though I'm
concerned about being able to get it all in place before then. I'd like
to have at least one day worth of scheduled presentations,
On 18.02.2015 at 23:26, Ben Turner wrote:
- Original Message -
From: Lars Hanke deb...@lhanke.de
To: Ben Turner btur...@redhat.com
Cc: gluster-users@gluster.org
Sent: Wednesday, February 18, 2015 5:09:19 PM
Subject: Re: [Gluster-users] Poor Gluster performance
On 18.02.2015 at 22:05
This looks like the NICs may only be negotiating to 100Mb (a theoretical
maximum of 12.5 MB/sec). Can you check ethtool on all of your NICs? Also, I
like to run iperf between servers and clients, and between servers, before I
do anything with gluster. If you aren't getting ~line speed with iperf
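The two checks suggested above might look like this; the interface and host names are placeholders:

```shell
# Confirm each NIC negotiated gigabit, not 100Mb/s:
ethtool eth0 | grep Speed      # expect "Speed: 1000Mb/s"

# Baseline raw TCP throughput before involving Gluster:
iperf -s                       # on the server
iperf -c server1 -t 30         # on each client and peer node
```

If iperf already tops out near 12 MB/s, the bottleneck is the network, and no amount of Gluster tuning will help.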