The main issue so far seems to be related to Courier-IMAP: when moving
messages between IMAP folders, some messages are duplicated with
io-threads enabled on the client side. The issue looks to be on the
Courier-IMAP side, though I haven't seen this with NFS, and the duplicate
messages
# time dd if=/dev/zero of=/mnt/gluster/disktest count=10240 bs=1024
Please try with a larger block size, like 64KB or higher. Small block
sizes are very expensive for network or FUSE-based filesystems.
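For example, something along these lines moves the same 10MB in 64KB blocks
(same target path as in the test above):
# time dd if=/dev/zero of=/mnt/gluster/disktest count=160 bs=64k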
Avati
Patrick,
Can you try the latest codebase? Some quota fixes have gone in
since your first mail. Also, quota is best used on the server side for
now. We are still working on making it work well on the client side.
Avati
On Fri, Jan 30, 2009 at 9:57 PM, Patrick Ruckstuhl patr...@tario.org
On Tue, Feb 10, 2009 at 7:23 AM, Craig Flockhart
craigflockh...@yahoo.com wrote:
Thanks, this now works, although I get poor write performance like a
previous poster complained about: Around 7MB/s with replicated distributed
and around 5MB/s with distributed replicated pairs. This compares to
On Tue, Feb 10, 2009 at 3:18 PM, Chris Maciejewski ch...@wima.co.uk wrote:
It seems to me there is a performance problem with GlusterFS 2.0.0rc1.
Please see the results of the simple test below:
$ dd if=/dev/zero of=hundred-meg-file count=10 bs=1000
for 2.0.0.rc1:
10+0
How could I improve my performance? What should I do? Waiting for your
reply, thanks a lot.
Use the write-behind translator to improve write performance, and
try a newer version of GlusterFS.
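For reference, a minimal write-behind stanza in a client volfile might look
something like this (a sketch; 'client0' is a hypothetical name for your
protocol/client volume and should match your own volfile):
volume writebehind
  type performance/write-behind
  subvolumes client0
end-volume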
Avati
Hello,
Is there any way to figure out the number of clients connected to a server
process?
No straightforward way currently. You can find it out from the output
of the netstat command (grepping for the listen port).
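As a rough sketch, assuming the default 2.0.x listen port of 6996 (substitute
whatever listen-port your server volume actually uses):
netstat -tn | grep ':6996' | grep ESTABLISHED | wc -l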
Avati
Sorry for the multiple questions, they just keep coming as I study GlusterFS
capabilities :).
Is there any advantage to using the NUFA translator with AFR?
I mean, according to the documentation, NUFA targets the local server first,
then remote servers. But with AFR, Gluster writes the file to
Can you please get us a backtrace of the core dump?
Thanks,
Avati
On Wed, Apr 1, 2009 at 9:45 PM, Greg g...@easyflirt.com wrote:
Anand Avati wrote:
Greg,
Can you describe the sequence of operations performed on the
mountpoint before you faced
It is unfair to expect high throughput for IO with O_DIRECT on a
network filesystem. The fact that a loopback interface is in the picture
can bias opinion, but in reality, if you were to
compare GlusterFS direct IO performance with the direct IO performance of
any other network
what is the git commit id of the snapshot you are using?
I've updated to head; now the fd issue seems gone, but autoheal is gone as well,
apparently...
Can you please explain what you mean by autoheal being gone as well?
Avati
What happens when you stat a filename that you know exists on only
one subvolume? Does it get healed then, at least?
Avati
On Tue, Apr 28, 2009 at 9:43 PM, Luca Barbato lu_z...@gentoo.org wrote:
Anand Avati wrote:
what is the git commit id of the snapshot you are using?
I've updated to head
lab:~# 2009-05-06 14:30:54 D [glusterfsd.c:339:_get_specfp] glusterfs:
loading volume file /etc/glusterfs/glusterfs.vol
Thanks to IRC, the solution was to put an IP address instead of a hostname in
option remote-host.
The fix for this is in the repository. You should be able to specify long
hostnames in the
but still it can't sustain a bonnie run:
sudo bonnie -d /mnt/nfs -m NFS
File '/mnt/nfs/Bonnie.61622', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start
Paolo,
Do you have a core dump on your system (in /)? Can you get us a backtrace
from it?
Thanks,
Avati
- Original Message -
From: Paolo Pisati paolo.pis...@tomato.it
To: Paolo Pisati paolo.pis...@contactlab.com
Cc: Anand Avati av...@gluster.com, Raghavendra G
raghaven...@gluster.com
Paolo,
There are known issues with FreeBSD at the moment (including the one
you are facing). Please do let us know if you face the same issues on
a GNU/Linux system.
Avati
On 5/26/09, Paolo Pisati paolo.pis...@tomato.it wrote:
Anand Avati wrote:
Paolo,
Do you have a core dump on your
This reply to the thread comes after a while. A patch
went in recently that fixes endian compatibility issues, specifically
reported on SPARC and PPC against x86. Thank you all for your
patience.
Avati
On Fri, Dec 5, 2008 at 12:26 AM, Anand Avati av...@zresearch.com wrote:
My guess
I want to use Java NIO with mmap, so I had to update to kernel 2.6.27 or
newer. The FUSE in kernel 2.6.30 is API 7.11.
How can I give you the details with gdb? What should I do?
Do you have a file in your system / with a name like /core. ? If you do,
run this command -
sh$ gdb glusterfs
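For instance, assuming a core file named /core.12345 (the PID suffix here is
hypothetical), the backtrace can be pulled out like this:
sh$ gdb glusterfs -c /core.12345
(gdb) bt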
The 2.0.3 release of gluster appears so far to have fixed the crash issue I
was experiencing. What was the specific patch that fixed it, I was
wondering?
It was http://patches.gluster.com/patch/664/. A less ugly fix is lined
up for 2.1
Great job either way! It appears that with fuse 2.8
On Wed, Jul 8, 2009 at 5:32 AM, Justice London jlon...@lawinfo.com wrote:
Actually, I spoke too soon. NFS still crashes, even if the mountpoint
doesn’t.
Justice, 2.0.3 fixes issues with 2.8.0-pre2. fuse-2.8.0-pre3 needs one
more fix (http://patches.gluster.com/patch/693/) which is lined up for
Can you submit the client logs? Which version are you using? The
client should detect the server going down with its internal ping-pong
mechanism (the default ping-timeout being 10 seconds).
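If you need to tune that timeout, it is (to my knowledge) a protocol/client
volfile option; a hedged sketch with hypothetical names 'client0' and 'server1':
volume client0
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick
  option ping-timeout 10
end-volume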
Avati
On Tue, Jul 14, 2009 at 4:16 PM, Justice London jlon...@lawinfo.com wrote:
When I reboot any of the
Nevertheless I would not say it is stable. At least I can crash it within
minutes to days of runtime.
Can you report the crash you faced in 2.0.4? If possible, file a crash
bug with the backtrace at bugs.gluster.com.
Thanks
Avati
I've been running 2.0.3 with two backend bricks and a frontend client of
mod_gluster/Apache 2.2.11+worker for a few weeks now without much issue.
Last night I upgraded to 2.0.6, only to find out that mod_gluster has been
removed and the recommendation is to use the booster library - which is fine
patchset: v2.0.5-25-g8dfdde5
signal received: 11
time of crash: 2009-08-14 21:51:01
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.6rc4
Hi,
we had 2.0.4 running. I am sorry, that's all I have in the logfile...
Don't you have a core file in the / of your system? Can you describe what
operation was running on the mountpoint when you faced the crash?
Avati
We tried to reproduce the freeze, but instead a classic hang showed up.
This one is nice, as everything seems fine with the boxes; only the glusterfs
mountpoints hang on both clients. There are no unusual outputs in the
standard glusterfs logfiles, so we can only show you this:
These are
It is perfectly clear to us that glusterfs(d) is the reason for the box
becoming unstable and producing a hang even on a local fs (you cannot df on
the exported partition, for example).
We will therefore continue with debugging as described before.
glusterfsd is just another application as far
OK, please stay serious. As described in my original email from the 19th,
effectively _all_ four physical boxes have not-moving (I refuse to use 'hanging'
here) gluster processes. The mount points on the clients hang (which made the
bonnie runs stop), the primary server looks pretty much OK, but obviously does
Replies inline
Back to our original problem of all-hanging glusterfs servers and clients.
Today we got another hang with the same look and feel, but this time we got
something in the logs; please read and tell us how to proceed further.
Configuration is as before. I send the whole log since
On Thu, Aug 20, 2009 at 12:05 PM, Wei Dong wdong@gmail.com wrote:
I'm experiencing extremely slow auto-heal rate with glusterfs and I want to
hear from you guys to see if it seems reasonable or something's wrong.
I did a reconfiguration of the glusterfs running on our lab cluster.
Well, that never happened before when using NFS with the same
computers, same disks, etc. for almost 2 years, so it's more
than possible that glusterfs is the one triggering this
supposed ext3 bug. But apart from this:
a) the documentation says: All operations that do not modify the file
Jeff,
We are working on bug 144. We think one of the changes we plan to
bring in 2.0.7 will fix this problem. The discussion in this thread is
about those situations where the server (machine hosting the
storage/posix volume) hangs the backend filesystem (verified by kernel
console logs) and
I'm wondering if there's some way for glusterfs to detect the flaws of the
underlying operating system. I believe there are no bug-free filesystems in
the universe, so I believe it is the job of the glusterfs developers to
specify which underlying filesystems are tested and supported. It's not
If I'm not wrong, it will be synced once you try to access any
of the unsynced files, except if the file does not exist on the
first volume.
That is not quite true. If all the modifications were performed from
the mountpoint, then it does not matter which server went down. If
files were removed
I am curious to know what environment the developers of GlusterFS use to
develop and test with. If, for example, releases are being pushed
from tests done on CentOS 5.2 x64, or Ubuntu 8.04 32-bit, or whatever.
If I'm going to set up any new gluster machines, I'd like them to be as
I doubt that this can be a real solution. My guess is that glusterfsd runs
into some race condition where it locks itself up completely.
It is no fun to debug something like this on a production setup. Best would
be to have debugging output sent from the servers' glusterfsd directly
Although it is clear that the bug itself is a kernel bug, it's also
clear that glusterfs is triggering that bug. The same system under
the same load, but using NFS instead of gluster, does not have this
problem. The problem also does not happen when copying lots of data
using scp. Also, I have
The lockups are on the server side, not the client side, and FUSE is
not used on the server side.
Which actually reinforces the point that glusterfs has very little to
do with this kernel lockup. It is not even performing the special FUSE
protocol communication with the kernel in question. Just plain
On 9/13/09, Wei Dong wdong@gmail.com wrote:
OK, so the previous good results were indeed too good to be true. Here's a
more reasonable evaluation:
http://www.cs.princeton.edu/~wdong/gluster/large.gif where
I increased the number of images created by 10x, so everything no longer fits
in main
It's not perfect - but at least any calls that fall through will still be
handled properly. For example, if an application calls fopen(), which is not
on the GlusterFS list of overridden system calls the last time I checked,
then at least the fopen() will be intercepted by FUSE rather than
On 9/14/09, Stephan von Krawczynski sk...@ithnet.com wrote:
2. Running bash wasn't a very useful scenario, when the LD_PRELOAD
variable can be added for the bash environment as a whole. For example,
if you just do export LD_PRELOAD=blah on the command line, you can
actually have every
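A sketch of what that looks like in practice (the booster library path is an
assumption; it varies with the install prefix and version):
export LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so
ls /mnt/glusterfs   # every command started from this shell now loads booster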
I noticed that the glusterfs client tries to set ulimit -n to 1M. When I
run booster as a non-privileged user, the following line appears several
times in the log file:
[2009-09-14 09:15:22] W [client-protocol.c:6010:init] brick-0-0-0: WARNING:
Failed to set 'ulimit -n 1M': Operation not
The storage/posix xlator absolutely needs to run as root for it to be
able to create files as different owners in the backend.
Avati
On Wed, Sep 16, 2009 at 6:44 PM, Wei Dong wdong@gmail.com wrote:
I'm interested in this too. I think other than access control, there's no
reason preventing
Hi
It also hangs if there is no glusterfs server available
Is this during runtime or during mount time? For me, it can happen at mount
time when I reboot all servers at the same time. Once I kill -9 the 'mount'
process, umount the mount point, and then remount, the freeze clears. Still,
For me, it does not clear after 3 mins or 3 hours. I restarted the machines
at midnight, and the first time I tried again was around 1pm the next day
(13 hours). I easily recognize the symptoms as the /bin/mount remains in the
process tree. I can't get a strace -p on the /bin/mount process
Oh - to further clarify - the exact equivalent symptoms (/bin/mount being
frozen for 13 hours, requiring kill -9 to clear the condition) happened on
all three machines. So, it wasn't a one off. If I reboot the machines one by
one - there are some 10 second pauses (expected) but everything is
[r...@wcarh033]~# ps -ef | grep gluster
root      1548     1  0 21:00 ?        00:00:00 /opt/glusterfs/sbin/glusterfsd -f /etc/glusterfs/glusterfsd.vol
root      1861     1  0 21:00 ?        00:00:00 /opt/glusterfs/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/tools.vol
I'm starting gluster like this:
sudo glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
sudo glusterfsd -f /etc/glusterfs/glusterfs-server.vol
And do the following to try and trigger it to replicate
sudo ls -alRh /mnt/glusterfs/
sudo ls -alRh /data/export
sudo ls -alRh
Hate to post again, but anyone have any ideas on this?
What does the server log have to say? Can you also check if port-1 is
the active port in ibv_devinfo? Looks like ib-verbs messaging is not
happening. Does ibv_srq_pingpong give sane results?
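A quick way to check, assuming the standard OFED tools are installed (look for
PORT_ACTIVE next to the port GlusterFS is configured to use):
ibv_devinfo | grep -E 'port:|state:'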
Avati
What does the server log have to say? Can you also check if port-1 is
the active port in ibv_devinfo? Looks like ib-verbs messaging is not
happening. Does ibv_srq_pingpong give sane results?
[3.890311] ib_mthca: Mellanox InfiniBand HCA driver v1.0 (April 4,
2008)
[3.890315]
[r...@xen1 ~]# ibv_srq_pingpong 10.13.0.220
local address: LID 0x000b, QPN 0x300406, PSN 0x1ace41
local address: LID 0x000b, QPN 0x300407, PSN 0x6ba197
local address: LID 0x000b, QPN 0x300408, PSN 0xa6f895
local address: LID 0x000b, QPN 0x300409, PSN 0xf054c0
local
http://www.gluster.com/community/documentation/index.php/Translators/cluster/distribute
It seems to suggest that for 'lookup-unhashed' the default is 'on'.
Perhaps try turning it 'off'?
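A hedged volfile sketch of that option (the brick subvolume names are
hypothetical):
volume distribute
  type cluster/distribute
  option lookup-unhashed off
  subvolumes brick1 brick2
end-volume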
Wei,
There are two things we would like you to try. The first is what Mark
has just pointed out, the
All,
GlusterFS 2.0.7 has been released. It is a maintenance release with a few
bug fixes. GlusterFS 2.0.7 has improvements in libglusterfsclient/booster.
Users making use of libglusterfsclient/booster are recommended to upgrade
to 2.0.7.
Does this fix the problem where restarting a xen
-- 1 root root 191M 2009-09-23 21:28 test03a
Any ideas what I might be doing wrong?
On Wed, Sep 23, 2009 at 5:55 AM, Anand Avati av...@gluster.com wrote:
On 9/23/09, Georgecooldude georgecoold...@gmail.com wrote:
Anyone have any ideas on the below? Thanks
Benjamin,
Are the clients and servers all running the same version of
GlusterFS? Can you check by running glusterfs --version on all the
nodes involved?
Avati
On Tue, Oct 6, 2009 at 3:36 AM, Benjamin Long
benjamin.l...@longbros.com wrote:
Greetings All,
I've been fighting with a
On Thu, Oct 8, 2009 at 8:04 AM, Nathan Stratton nat...@robotics.net wrote:
I know it is hard to tell, but any idea when 2.1 will be stable? I need to
run Xen without --disable-direct-io, and my understanding is that works in
2.1. What kernel is required for that? Does it require the latest
The return on the effort of increasing the FUSE channel size is
hardly anything. Though you will see a nice increase in performance
when the block size is increased, because of the reduction in context
switches, there is a peak after which increasing the block size
further has no benefit.
Are your files present in the backend directories?
Avati
On Wed, Oct 14, 2009 at 6:36 PM, Andre Felipe Machado
andremach...@techforce.com.br wrote:
Hello,
I am trying to optimize glusterfs (back to debian 2.0.4 official squeeze
packages installed on lenny) distributed for small files (php
On Wed, Jan 6, 2010 at 2:11 PM, Davide D'Amico
davide.dam...@contactlab.com wrote:
Il giorno 06/gen/2010, alle ore 02.58, Elliot Finley ha scritto:
I should have been more specific. We were only using NFS on the FreeBSD
servers to connect to Gluster. We did not use the native Gluster client
On Thu, Jan 7, 2010 at 4:44 PM, Nick Birkett
n...@streamline-computing.com wrote:
Is it possible to use a gluster hash when some of the servers are missing?
E.g. if glusterfs.vol has 8 servers but only 6 are available.
I don't mind files being missing (due to being on offline servers).
I do
On Tue, Jan 26, 2010 at 1:20 AM, Larry Bates larry.ba...@vitalesafe.com wrote:
I recently moved my backend storage to GlusterFS V3.0 and with one exception
everything is running great. That exception is that I had a daemon using
inotify that was watching my storage for new files. Upon arrival
Hello community,
We wanted to know how many of you are using GlusterFS via the
libglusterfsclient API, since we were about to make some changes in
the library interface. libglusterfsclient was initially designed for
building application plugins (like mod_glusterfs, booster) but we are
not sure
Joe,
Do you have access to the core dump from the crash? If you do,
please post the output of 'thread apply all bt full' within gdb on the
core.
Thanks,
Avati
On Wed, Apr 28, 2010 at 2:26 PM, Joe Warren-Meeks
j...@encoretickets.co.uk wrote:
Hey guys,
Any clues or pointers with this problem?
Here you go!
Anything else I can do?
Joe, can you please rerun the gdb command as:
# gdb /usr/local/sbin/glusterfs -c /core.13560
Without giving the glusterfs binary as a parameter, the backtrace is
missing all the symbols, and the numerical addresses alone are not very
useful.
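Once gdb has loaded the binary with its symbols, the full trace requested
earlier can be captured with:
(gdb) thread apply all bt full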
Thanks,
On Wed, Apr 28, 2010 at 11:15 PM, Joe Warren-Meeks
j...@encoretickets.co.uk wrote:
Oops, I'm an idiot, sorry about that.. here you go!
Thanks! We have a good understanding of the issue now. Please add
yourself to the CC list at
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=868 to
Barry,
Some clarifications below -
On Fri, Jun 4, 2010 at 9:17 PM, Barry Jaspan barry.jas...@acquia.com wrote:
I am concerned about this patch:
updated write-behind default values - http://patches.gluster.com/patch/3223/
It changes the default value for performance/writebehind's
If write() succeeds, is it not possible for flush() or close() to fail
later? My understanding is you need to check for errors from both. For
example, write() might just write to an in-memory buffer, then the disk
might fill up before the change is flushed out.
A close() by itself does
This behavior is fixed in the repository code and will be available in
3.1.1.
Avati
On Thu, Nov 18, 2010 at 11:06 AM, Devin Reade g...@gno.org wrote:
I'm trying out a configuration where, for the purpose of gluster,
the universe consists of only two CentOS 5.5 hosts (outside of gluster,
this
We are tracking this bug at -
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2038
Please add yourself to the cc-list to receive further updates.
Avati
On Thu, Nov 18, 2010 at 5:22 PM, David Lloyd
david.ll...@v-consultants.co.uk wrote:
if you try to mount a gluster fs volume from
GlusterFS code is 32-bit clean. It should work just fine (on GNU/Linux).
However, we do not QA our releases on 32-bit machines. Hence we neither
release 32-bit binaries nor support them officially. This may change in the
future, but no promises.
Avati
On Tue, Nov 16, 2010 at 1:04 PM, Christian
This bug was also fixed post 3.1.0 in commit
413b2a5f9b77fd3d7f3b26c848482ec7b914102f
Avati
On Mon, Nov 22, 2010 at 8:18 PM, Hugo Cisneiros (Eitch)
hugo.cisnei...@gmail.com wrote:
Hi :)
In another thread, I had problems with the quick-read translator that
was fixed on 3.1.1. Since I'm
Can you verify that ibv_srq_pingpong works from the server where this log
file is from?
Thanks,
Avati
On Wed, Dec 1, 2010 at 7:44 PM, Jeremy Stout stout.jer...@gmail.com wrote:
Whenever I try to start or mount a GlusterFS 3.1.1 volume that uses
RDMA, I'm seeing the following error messages in
Release 3.0.x has the same behavior as what you describe here, which is not
really an issue. Please verify whether some other component in your system
(kernel or FUSE kernel module) has changed, which would have changed the
behavior for you.
Avati
On Thu, Dec 2, 2010 at 7:29 PM, Samuel Hassine sam...@hassine.fr
Do both read and write throughput peak at 1Gbit/s? What is the block size
used for performing I/O? Can you get the output of -
1. dd if=/dev/zero of=/mnt/stripe/file bs=1M count=1K
2. dd if=/mnt/stripe/file of=/dev/null bs=1M count=1K
Just one instance of dd is enough as the client network
Can you post the kernel backtrace?
Avati
On Sun, Dec 19, 2010 at 11:21 PM, Jacco Beije ja...@redproc.com wrote:
Craig Carl cr...@... writes:
Max -
Can you try again with Gluster 3.1.1, please install from source -
That does not seem to be the complete call trace. Can you post the complete
trace? This seems like a kernel bug strangely triggered by gluster
Avati
On Thu, Dec 23, 2010 at 11:48 PM, Jacco Beije ja...@redproc.com wrote:
Can you post the kernel backtrace?
Avati
redid test to get
This looks like an issue in your setup. One of the clients has very likely not
connected to both the servers. You say the client volfiles are exactly the same.
Can you verify that DNS (or /etc/hosts entries) resolves consistently on all your
client nodes?
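A simple way to check is to run something like this on every client and compare
the answers (the server names are hypothetical):
getent hosts server1 server2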
Avati
On 14 January 2011 15:44, Georg Höllrigl
On Sun, Jan 16, 2011 at 5:02 PM, Max Ivanov ivanov.ma...@gmail.com wrote:
time tar cf - M | pv /dev/null 15.8 MB/sec (native) 3.48MB/sec
(FUSE) 254 Kb/sec (NFS)
This test shows why glusterfs native protocol is better than NFS when you
need to scale out storage. Even with a context
On 15.01.2011 22:24, Anand Avati wrote:
This looks like an issue in your setup. One of the clients has very likely not
connected to both the servers. You say the client volfiles are exactly the same.
Can you verify that DNS (or /etc/hosts entries) resolves consistently on all your
client nodes?
Avati
David,
The problem you are facing is something we are already investigating.
We haven't root-caused it yet, but from what we have seen, this happens
only on / and only for the metadata changelog. It shows up as just annoying
logs, but it should not affect your functionality.
Avati
On
Can you post the client log files from your machine? Are you sure
connectivity is fine between the client and servers? (some kind of network
split pointing your machine to one server and the other client to another
server?)
Avati
On Sat, Jan 29, 2011 at 11:58 AM, paul simpson
Can you post your server logs? What happens if you run 'df -k' on your
backend export filesystems?
Thanks
Avati
On Mon, Jan 17, 2011 at 5:27 AM, Joe Warren-Meeks
j...@encoretickets.co.uk wrote:
(Sorry about top-posting.)
Just changing the timeout would only mask the problem. The real issue is
:/mnt/data17
Brick102: clustr-06:/mnt/data17
Brick103: clustr-01:/mnt/data18
Brick104: clustr-02:/mnt/data18
Brick105: clustr-03:/mnt/data18
[...]
P
On Mon, Jan 31, 2011 at 4:26 PM, Anand Avati anand.av...@gmail.com
wrote:
Can you post your server logs? What happens if you run 'df -k
229 Jan 16 21:15 glusterd.vol
-rw-r--r-- 1 root root 1908 Jan 16 21:15 glusterfsd.vol.sample
-rw-r--r-- 1 root root 2005 Jan 16 21:15 glusterfs.vol.sample
I created all of the configs via the gluster commandline tool.
Thanks
P
On Thu, Feb 3, 2011 at 6:39 PM, Anand Avati anand.av
It is very likely the brick process is failing to start. Please look at the
brick log on that server. (in /var/log/glusterfs/bricks/* )
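For example, something like:
tail -n 50 /var/log/glusterfs/bricks/*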
Avati
On Fri, Feb 4, 2011 at 10:19 AM, phil cryer p...@cryer.us wrote:
I have glusterfs 3.1.2 running on Debian, I'm able to start the volume
and now mount
Locking is more robust in the 3.1.x releases. Please upgrade.
Avati
On Fri, Feb 4, 2011 at 12:13 PM, Ken S. shaw...@gmail.com wrote:
I'm having some problems getting two nodes to mount a shared gluster
volume where I have the MySQL data files stored. The databases are
InnoDB. Creating the
Some commits to improve RDMA stability have gone in post 3.1.1. Can you
check whether 3.1.2 has those issues as well?
Avati
On Sun, Feb 6, 2011 at 10:35 AM, Claudio Baeza Retamal
clau...@dim.uchile.cl wrote:
Dear friends,
I have several problems with stability and reliability in a small-to-middle
Can you please file a bug with the gdb backtrace on the core dump file?
Avati
On Wed, Feb 16, 2011 at 3:03 PM, Piotr Kandziora ravee...@gmail.com wrote:
Hi,
I am testing GlusterFS 3.1.2 on Ubuntu with NFS support.
After a few weeks of error-free working, unfortunately GlusterFS crashed.
Anthony,
That is still experimental code from an unreleased feature. Once we
officially release it, the documentation and release notes will point you in
the right direction.
Avati
On Mon, Mar 7, 2011 at 8:19 AM, anthony garnier sokar6...@hotmail.com wrote:
Hi,
I'm currently testing the
This issue has been fixed and will be available in 3.1.3.
Avati
On Thu, Mar 10, 2011 at 8:25 AM, Etienne Lyard etienne.ly...@unige.ch wrote:
Hello,
I tried more or less the same setup as you did (CentOS 5.5, gluster 3.1.2)
but I got different behaviour: in replica, on single brick down
Gerd,
Can you check whether the files are available in the backend but not accessible
from the mountpoint, or have gone missing altogether?
Avati
On Fri, Apr 15, 2011 at 12:49 AM, Gerd W. Naschenweng
gerd.naschenw...@bidorbuy.co.za wrote:
This is highly frustrating and we notice that uploaded
Thanks for the post, Joe. We introduced the diff-based self-heal algorithm
in the 3.1 release.
Avati
On Tue, May 3, 2011 at 3:38 AM, Joe Landman land...@scalableinformatics.com
wrote:
Hi folks
We've fielded a number of mirroring questions offline as well as
watched/participated in discussions
A split-brain situation occurs only under a specific sequence of
events and modifications, where the filesystem cannot decide which of the two
copies of the file is up to date. It might so happen that the two changes were
actually the same change, and hence the two copies of your file might
Udo,
Do you know what kind of access was performed on those files? Were they
just copied in (via cp), were they rsync'ed over an existing set of data?
Was it data carried over from 3.1 into a 3.2 system? We hate to lose users
(community users or paid customers equally) and will do our best to
3.1.4 - gained a few bugs
Can someone throw more light on this? We do not have any open bugs in
bugzilla marked against 3.1.4 - which means either
a) they were reported and fixed, but we haven't made a release yet, or
b) we have not fixed them because we have not yet heard about them!
Avati
Can you get us a backtrace from the core dump? It will be very helpful.
Thanks,
Avati
On Mon, May 23, 2011 at 12:06 PM, Cal Leeming [Simplicity Media Ltd]
cal.leem...@simplicitymedialtd.co.uk wrote:
Every now and then I keep getting the following error:
mkdir: cannot create directory
Joshua,
You are right. Even though the GlusterFS native client provides redundancy and
high availability, the act of mounting itself goes through a single server.
The standard way to work around this is to have a ucarp-based VIP just for the
purpose of mounting. Other ways include techniques mentioned above
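Going back to the ucarp approach, as a rough sketch: with a ucarp-managed VIP
of, say, 192.168.1.100 (a hypothetical address and volume name) floating between
the servers, clients simply mount through the VIP:
mount -t glusterfs 192.168.1.100:/myvolume /mnt/glusterfs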
This is interesting. write-behind guarantees ordering of read/write such
that you always read written data. Do you happen to know if Mercurial reads
and writes from different file descriptors on the same file? Can you give us
logs and configuration details of your setup?
Avati
On Mon, Jun 6,
Can you please share NFS and brick logs from the duration of the link going
down? Gluster should have worked in the situation you described.
Avati
On Fri, Jun 10, 2011 at 3:27 PM, Daniel Manser dan...@clienta.ch wrote:
Dear community,
I have a 2-node gluster cluster with one replicated
Do you find anything in the client logs?
On Fri, Jun 10, 2011 at 3:20 AM, Alan Zapolsky a...@droptheworld.com wrote:
Hello,
I have a PHP web application that uses Gluster to store its files.
There are a few areas of the application that perform multiple
operations on small to medium size