Hello.
I saw differing md5sums with version 2.0.6 on node ftp2 at
[2009-08-25 ~18:00].
I had also read about this bug in the Gluster changelog before the update.
Then I updated my software and saw differing md5sums again
(you can see some errors in
Hi
In our case the hung server is:
Dell PE2900
2 x x5...@3.33 8Gb RAM
SAS 6/iR Integrated RAID Controller
7 x SEAGATE ST31000640SS
1 x SEAGATE ST3300656SS
Debian testing
Kernel 2.6.26-2
The server hung when writing to a unified volume (7 x 1Tb +
namespace and system on the ST3300656SS).
We
Raghavendra G wrote:
Hi Kirby,
As of now, dht balances storage only during file creation.
Files which are already created are not moved between nodes when free
storage runs low.
During creation, dht tries to balance storage between its children. It
accepts an option
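To illustrate the idea, here is a hypothetical sketch (not Gluster's actual dht code; the function name and threshold are made up): a new file lands on the child chosen by hashing the file name, unless that child's free space is below a minimum-free threshold, in which case another child is used. Files that already exist are never migrated.

```python
import hashlib

def pick_child(filename, children, free_space, min_free=0.10):
    """Hypothetical dht-like placement decision for a *new* file.

    children:   ordered list of child volume names
    free_space: dict mapping child -> fraction of disk free (0.0..1.0)
    min_free:   skip the hashed child if its free space is below this
    """
    # Deterministic placement: hash the file name to pick a child.
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    primary = children[h % len(children)]
    if free_space[primary] >= min_free:
        return primary
    # Hashed child is too full: fall back to the child with the most
    # free space. Files already created elsewhere stay where they are.
    return max(children, key=lambda c: free_space[c])
```

Because placement is decided only at creation time, adding a new empty brick does not rebalance old data; only newly created files favor it.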
Hi. I'm trying to launch glusterfs 1.3.11 on FreeBSD 7.1-PRERELEASE. I
don't want to try 2.0.6 because AFR is not working on FreeBSD in 2.0.
server config:
=== cut ===
volume pos1
type storage/posix
option directory /home/export/b1
end-volume
volume pos2
type storage/posix
option directory
Hi!
These empty files with the sticky bit are so-called link-files. Their
extended attributes contain the location of the server where the real
file resides. These link-files are used by the distribute translator when
the actual location of a file differs from the location computed from the
name hash (see
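A rough mock of the link-file idea (hypothetical, not Gluster's on-disk format: the real attribute lives in the root-only trusted.* namespace on the brick, and real link-files have mode 1000): an empty file with the sticky bit set, whose extended attribute names the subvolume holding the actual data.

```python
import os
import stat

# Mock link-file: zero bytes, sticky bit set, xattr pointing at the
# subvolume that actually stores the file. We use the user.* xattr
# namespace and keep rw permission so this runs unprivileged (real
# link-files are mode 1000 and use trusted.* attributes).
path = "linkfile.demo"
open(path, "w").close()
os.setxattr(path, "user.demo.linkto", b"pos2")   # Linux-only call
os.chmod(path, 0o1644)                           # sticky bit + rw

st = os.stat(path)
is_linkfile = st.st_size == 0 and bool(st.st_mode & stat.S_ISVTX)
target = os.getxattr(path, "user.demo.linkto")
os.remove(path)
```

On a real brick you would look for zero-byte sticky-bit files and read their extended attributes (as root) with getfattr to find where the data really lives.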
client launcher:
=== cut ===
#!/bin/sh
/usr/local/gluster2/sbin/glusterfs \
-N -L DEBUG \
-l /var/log/glusterfs2-client.log \
-f /home/raven/temp/gluster/2/etc/glusterfs-client.vol \
--volume unify \
/mnt
=== cut ===
server launcher:
=== cut ===
#!/bin/sh
/usr/local/gluster2/sbin/glusterfsd \
-N
Hello all,
after playing around for some weeks we decided to do some real-world tests
with glusterfs. Therefore we took an NFS client and mounted the very same data
with glusterfs. The client does some logfile processing every 5 minutes and
needs around 3.5 minutes of runtime in an NFS setup.
We found
Stephan von Krawczynski wrote:
Hello all,
after playing around for some weeks we decided to do some real-world tests
with glusterfs. Therefore we took an NFS client and mounted the very same data
with glusterfs. The client does some logfile processing every 5 minutes and
needs around 3.5 mins
On Mon, 31 Aug 2009 19:48:46 +0530
Shehjar Tikoo shehj...@gluster.com wrote:
Stephan von Krawczynski wrote:
Hello all,
after playing around for some weeks we decided to do some real-world tests
with glusterfs. Therefore we took an NFS client and mounted the very same
data with
Hello all,
as told earlier, we tried to replace an NFS server/client combination in a
semi-production environment with a trivial one-server gluster setup. We
thought at first that this fairly simple setup would allow some more testing.
Unfortunately we have to stop those tests because it turns out
Hi,
I'm testing AFR replication and noticed a strange thing, probably a
bug in gluster 2. Say there are 2 servers in AFR mode and 1 of them
reboots;
then a few files get modified on the remaining node, and none of the
changes propagate when the other node comes back.
Then I noticed this on gluster
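For context, the mechanism that is supposed to cover this case can be sketched as a toy simulation (hypothetical code, not AFR's implementation; real AFR records pending operations in per-file trusted.afr.* extended attributes): each live replica counts the writes its peer missed, and files with a nonzero pending count are copied over when the peer returns.

```python
class Replica:
    """Toy stand-in for one AFR server."""
    def __init__(self, name):
        self.name = name
        self.up = True
        self.files = {}     # filename -> contents
        self.pending = {}   # filename -> writes the peer missed

def write(filename, data, replicas):
    """Apply a write on every live replica, noting what a down peer missed."""
    live = [r for r in replicas if r.up]
    for r in live:
        r.files[filename] = data
        if len(live) < len(replicas):
            r.pending[filename] = r.pending.get(filename, 0) + 1

def self_heal(source, target):
    """Copy files the target missed, then clear the pending counters."""
    for f in list(source.pending):
        target.files[f] = source.files[f]
        del source.pending[f]

srv1, srv2 = Replica("srv1"), Replica("srv2")
srv2.up = False                         # one server reboots
write("data.log", "v2", [srv1, srv2])   # only srv1 sees the write
srv2.up = True                          # server comes back
self_heal(srv1, srv2)                   # propagate the missed change
```

In gluster 2.x this heal was triggered lazily, on lookup of each file, so a common workaround was to walk the whole mount (e.g. stat or read every file) to force the changes to propagate.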
We have no problem using the volume for reading data, and if
the server is not under heavy load it works well for writing too;
it only hangs when the server is using 100% CPU for math-intensive
calculations and we try to write a lot of data to the glusterfs
volume (it usually
takes some hours for it
I posted this on irc (#gluster) a couple of times in the last few days but
got no response. Trying my luck here:
chetan I'm seeing this crash in 2.0.1 server codebase
#4 0x7f6a42848a56 in free () from /lib/libc.so.6
#5 0x7f6a4157ff99 in __socket_ioq_new (this=&lt;value optimized out&gt;,