On 10/04/2012 09:39, 7220022 wrote:
Are there plans to add provisioning of spare bricks in a replicated (or
distributed-replicated) configuration? E.g., when a brick in a mirror
set dies, the system rebuilds it automatically on a spare, similar to
how it's done by RAID controllers.
Nor would it
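For reference, the closest manual equivalent in current releases is the
replace-brick command; a rough sketch only (the volume name and brick paths
are made-up placeholders, and the exact set of subcommands varies between
releases):

  gluster volume replace-brick myvol server3:/export/brick1 spare1:/export/brick1 start
  gluster volume replace-brick myvol server3:/export/brick1 spare1:/export/brick1 status
  gluster volume replace-brick myvol server3:/export/brick1 spare1:/export/brick1 commit

Automating that (detect a dead brick, pick a spare, run the migration) is
the part that would need new tooling.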
On 03/29/2012 03:39 PM, Pascal wrote:
Hello everyone,
I would like to know if it is possible to set up a GlusterFS
installation which is comparable to a RAID 6? I did some research in
the community and several mailing lists, and all I could find were
similar requests from 2009
(http://gluster.o
On 09/11/2011 19:09, Magnus Näslund wrote:
On 11/09/2011 06:51 PM, Gordan Bobic wrote:
My main concern with such data volumes would be the error rates of
modern disks. If your FS doesn't have automatic checking and block level
checksums, you will suffer data corruption, silent or othe
On Wed, 09 Nov 2011 17:50:00 +0100, Magnus Näslund
wrote:
[...]
We want the data replicated at least 3 times physically (box-wise),
so we've ordered 3 test servers with 24x3TB "enterprise" SATA disks
each with an areca card + bbu. We'll probably be running the tests
feeding raid volumes to glu
On Wed, 24 Aug 2011 11:50:45 +0200, m...@netbsd.org (Emmanuel Dreyfus)
wrote:
Pavan T C wrote:
The glusterfs server process uses iomem pools. Initially, it starts with
one pool. Under memory pressure, it allocates more, but does not give
back the pool when the pressure decreases. It can be d
On Wed, 24 Aug 2011 11:48:04 +0530, Pavan T C wrote:
On Wednesday 24 August 2011 09:02 AM, Emmanuel Dreyfus wrote:
Emmanuel Dreyfus wrote:
Another issue: the glusterfs client eats a lot of memory. It grows as
big as 222 MB in a few minutes. Here is top(1) output:
PID USERNAME PRI NICE SIZE
Saw this article earlier:
http://www.theregister.co.uk/2011/06/14/calxeda_arm_server_software_partners/
It mentions that Gluster is on-board for ARM support. Does that mean the
tune has been changed since this:
http://permalink.gmane.org/gmane.comp.file-systems.gluster.devel/1669
Gordan
Hans K. Rosbach wrote:
On Wed, 2011-06-08 at 12:34 +0100, Gordan Bobic wrote:
Hans K. Rosbach wrote:
-SCTP support, this might not be a silver bullet but it feels
[...]
Features that might need glusterfs code changes:
[...]
-Multihoming (failover when one nic dies)
How is this
Hans K. Rosbach wrote:
-SCTP support, this might not be a silver bullet but it feels
[...]
Features that might need glusterfs code changes:
[...]
-Multihoming (failover when one nic dies)
How is this different to what can be achieved (probably much more
cleanly) with NIC bonding?
[.
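For context, the bonding being referred to is the standard Linux bonding
driver in active-backup mode; a minimal sketch, assuming interfaces
eth0/eth1 and an address picked purely as an example:

  modprobe bonding mode=active-backup miimon=100   # bond0 is created by the module
  ip link set bond0 up
  ifenslave bond0 eth0 eth1                        # eth1 takes over if eth0 dies
  ip addr add 192.168.1.10/24 dev bond0

This handles the single-NIC-failure case transparently to glusterfs, which
is the point being made above.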
On 10/04/2011 20:17, Meisam Mohammadkhani wrote:
Dear Gordan,
Actually, our little tests showed that replicated files could take more
than a minute to become accessible with GlusterFS.
I suspect the reason you saw that delay is because you put the files on
one node, and it needed healing when you ac
On 10/04/2011 15:27, Meisam Mohammadkhani wrote:
Hi All,
I'm new to GlusterFS. I'm searching for a solution for our enterprise
application that is responsible for saving (and manipulating) historical
data from industrial devices. Currently, we have two stations that work
as hot-redundant copies of each other.
On 01/06/2011 06:52 PM, Christopher Hawkins wrote:
This is not scalable to more than a few servers, but one possibility
for this type of setup is to use LVS in conjunction with Gluster
storage.
Say there were 3 gluster storage nodes doing a 3x replicate. One of
them (or a small 4th server) could
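The LVS piece of such a setup would look roughly like the ipvsadm sketch
below; the virtual IP, real-server addresses and port are placeholders, and
it assumes the exported service sits on a single TCP port:

  # on the director: round-robin a virtual IP across the three gluster nodes
  ipvsadm -A -t 192.168.1.100:2049 -s rr
  ipvsadm -a -t 192.168.1.100:2049 -r 192.168.1.1 -g
  ipvsadm -a -t 192.168.1.100:2049 -r 192.168.1.2 -g
  ipvsadm -a -t 192.168.1.100:2049 -r 192.168.1.3 -g

As noted, this doesn't scale much beyond a handful of nodes, but it does
give clients one stable address.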
Fredrik Widlund wrote:
If you're re-exporting a gluster filesystem, the re-exporting node will act as a proxy. As a concept this is fairly natural, and in itself it shouldn't be a problem.
It's natural, but Sylar was saying it's inefficient - which it is,
because you end up hammering one node
沈允中 wrote:
And now I know that it's impossible for nfsd to re-export a FUSE-mounted
filesystem.
But the workflow of the Gluster native nfsd is not smart, just as the
white paper mentioned.
Gluster acts stupid when not using the glusterfs protocol.
1. A client asks server A for a file but server
沈允中 wrote:
> Hi,
> Thanks for the advice.
> My problem is just like you said.
> But is there any alternative way that I can solve my problem?
> Because vmware really doesn't have the glusterfs protocol to mount.
> I know that the Gluster.com may publish their VMStor product to improve this.
> Howe
沈允中 wrote:
> Hi All:
>
> I wanted to use GlusterFS as a share storage to connect with vmware.
>
> But the NFS protocol had poor performance when the deployment got
> larger. (I have 20 servers in the GlusterFS.)
>
> So I figured out a way when I saw the wiki of Ceph
>
> http://ceph.newdream.net
On 12/24/2010 01:53 AM, Mukund Buddhikot wrote:
Hello Gluster Folks,
One can use dd, iometer, "internal" data sets and OLTP workloads for
performance testing of GlusterFS.
Is there a "standardized" workload/benchmark for DFS such as GlusterFS..
The ones you mention are "standard" benchmarks.
On 10/13/2010 01:22 PM, Beat Rubischon wrote:
Hi Gordan!
Quoting (13.10.10 10:06):
What sort of a cluster are you running with that many nodes? RHCS?
Heartbeat? Something else entirely? In what arrangement?
High performance clusters. The main target Gluster was made for :-)
I'm curious ab
Beat Rubischon wrote:
Hello!
Quoting (13.10.10 09:11):
I would also expect to see network issues as a cluster grows.
Performance reducing as the node count increases isn't seen as a bigger
issue?
I have pretty bad experience with multicast. Running several clusters in
the range 500-1000 n
Craig Carl wrote:
Gordon -
PGM is still experimental so it will be a while before we would
consider implementing it,
Just because it isn't officially ratified doesn't mean it's not stable.
PGM has been stable enough to be implemented in Windows since around
2002, and there have been libr
Are there any plans to implement Replicate/AFR using broadcasts (on a
LAN) or multicast? At the moment, write performance degrades linearly
(O(n)) as additional replica nodes are added, since each write has to be
sent to every node separately, which gets in the way of scalability. It'd
be really useful if this could be done using reliable
multicast (e
On 09/26/2010 12:54 AM, Ed W wrote:
On 24/09/2010 09:42, Gordan Bobic wrote:
This sounds remarkably similar to how DLM in GFS works. It caches file
locks, so performance is reasonable where a set of files is only
accessed from one of the nodes. Might it be easier to interface with
DLM for
This sounds remarkably similar to how DLM in GFS works. It caches file
locks, so performance is reasonable where a set of files is only
accessed from one of the nodes. Might it be easier to interface with DLM
for locking control instead of implementing such a thing from scratch?
The bulk of th
On 08/11/2010 03:40 PM, Roland Fischer wrote:
Hi,
I have a big question about how to build a good directory and
file structure, just to get the best performance out of GlusterFS. I want
to put a large amount of pictures and videos on the LUN, so it's very
mixed up between small and large files. Th
Emmanuel,
IIRC, libfuse was dropped with glfs v3.x for performance reasons. Last
version that works through libfuse was v2.x branch.
Gordan
On Fri, 2010-07-23 at 13:59 +0200, Emmanuel Dreyfus wrote:
> Hello
>
> I gave a try to glusterfs on NetBSD. That produced a few patches (see below),
> and
phil cryer wrote:
I'm testing with lsyncd now to sync remote clusters (ours running
gluster, others may be, may not be) and it's working well.
Just make sure you are running the latest trunk version. The exclude
file handling bugs were fixed about 5 hours ago. :-)
Also look
at http://code.g
anthony garnier wrote:
Hi,
I'd like to know if the Gluster developers intend to develop a module
option for asynchronous replication?
Let me know if you have any information.
I think you may be trying to use the wrong tool for the job here. If you
really want asynchronous replication (no c
On 02/04/2010 14:49, Stephan von Krawczynski wrote:
Following a recent talk on the IRC channel, it came to my mind that
caching lookups could (in this particular situation) greatly improve
performance.
Maybe some of the devs can explain whether this is plausible, but I
somewhat doubt i
On 02/04/2010 12:32, Olivier Le Cam wrote:
Following a recent talk on the IRC channel, it came to my mind that
caching lookups could (in this particular situation) greatly improve
performance.
Maybe some of the devs can explain whether this is plausible, but I
somewhat doubt it. You w
Stephan von Krawczynski wrote:
On Thu, 25 Mar 2010 10:43:10 +
Gordan Bobic wrote:
Stephan von Krawczynski wrote:
On Thu, 25 Mar 2010 09:56:24 +
Gordan Bobic wrote:
If I have your mentioned scenario right, including what you believe
should happen:
* First node goes down
Stephan von Krawczynski wrote:
On Thu, 25 Mar 2010 09:56:24 +
Gordan Bobic wrote:
If I have your mentioned scenario right, including what you believe
should happen:
* First node goes down. Simple enough.
* Second node has new file operations performed on it that the first
Michael Cassaniti wrote:
On 03/25/10 10:21, Gordan Bobic wrote:
Christopher Hawkins wrote:
Correct me if I'm wrong, but something I would add to this debate is
the type of split brain we are talking about. Glusterfs is quite
different from GFS or OCFS2 in a key way, in that it is an ov
Christopher Hawkins wrote:
Correct me if I'm wrong, but something I would add to this debate is the type
of split brain we are talking about. Glusterfs is quite different from GFS or
OCFS2 in a key way, in that it is an overlay FS that uses locking to control
who writes to the underlying files
On 03/23/2010 07:23 PM, Ed W wrote:
On 18/03/2010 16:59, Christopher Hawkins wrote:
I see what you mean. Hopefully that behavior is fixed in 3.0. Though
in my case, I would still like fast disconnect because the data mirror
is active / passive. There should be no problems for glusterfs to
figure
Brandon Lamb wrote:
Hello,
We have started looking into some used infiniband hardware to up our
network from the typical gigabit ethernet to the 10gigabit infiniband
stuff. I'm curious to hear from anyone that has maybe done this or has
been able to see how much of a performance gain there is. M
Sounds like you are seeing the same disconnection issues I reported a while
back.
Gordan
"Daniel Maher" wrote:
>Raghavendra G wrote:
>> * What does the top output corresponding to glusterfs say? what is the
>> memory usage and cpu usage?
>> * Do you find anything interesting in glusterfs clie
Anand Avati wrote:
Gordan, and others using a config like Bug 542
(http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=542)
This corruption issue can show up (but not always) when you have
loaded both io-cache and write-behind below replicate (syntactically
before replicate in the volfile
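"Below replicate" here means the performance translators sit between
replicate and the bricks in the client volfile, roughly as in the
hand-written fragment below. This is an illustration of the problematic
ordering only, not a working or recommended config, and the volume names
are made up:

  volume client1
    type protocol/client
    option remote-host server1
    option remote-subvolume brick1
  end-volume

  # write-behind loaded *below* (before) replicate -- the ordering warned about
  volume wb1
    type performance/write-behind
    subvolumes client1
  end-volume

  # client2 would be defined like client1, pointing at the second server
  volume afr
    type cluster/replicate
    subvolumes wb1 client2
  end-volume

The usual layout keeps write-behind and io-cache above replicate, i.e. with
the replicate volume as their subvolume.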
nicolas prochazka wrote:
Hello,
For your information, glusterfs 3.0.2 still reboots my servers.
Server 1 / Client 1 | Server 2 / Client 2
Client 1 / Client 2 in replicate mode to servers 1 / 2, no performance translator
Server 1 / Server 2: 10.98.98.x/24 for the gluster interfac
Hmm... I think asynchronous replication is conceptually not in line with
the design of glfs.
If you want something similar with asynchronous replication (obviously,
no hope of POSIX locks), you may want to look into SeznamFS. I can
probably send you some .spec files if you are intending to use
Will this be included in the 3.0.2 release?
Gordan
Samuel Hassine wrote:
Hi all,
I just patched the GlusterFS Source code with
http://patches.gluster.com/patch/2716/ .
For PHP Sessions, the problem is solved, it works with the locks
translator.
For internal server errors under heavy traffic, sam
Tejas N. Bhise wrote:
Thanks for your feedback on unfsd. We will have a look.
Thanks. The performance is similar between 2.0.9 without any performance
translators and 3.0.2rc2 with io-cache+writebehind. Without any
performance translators, 2.0.9 seems to be much more responsive.
How's the
Tejas N. Bhise wrote:
Besides that, if you have recently upgraded to 3.0.0,
please consider 3.0.2 which would be out very soon
(you can even try 3.0.2rc1). It has much better
performance than previous versions too.
I'm no longer that convinced. My observation is that latencies in the
glfs+u
Ran wrote:
The line that you need to add is the one with "writeback" in it.
If you are running qemu-kvm manually, you'll need to add the
"cache=writeback" to your list of -drive option parameters.
All of this, of course, doesn't preclude applying ionice to the qemu
container processes.
ioni
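Putting the two suggestions together, a hedged sketch; the image path,
memory size and the ionice class/priority are arbitrary examples, not
values from the thread:

  # start the guest with writeback caching on its gluster-backed image,
  # and at reduced ("best effort", lowest priority) disk I/O priority
  ionice -c2 -n7 qemu-kvm -m 1024 \
    -drive file=/mnt/glusterfs/vm1.img,if=virtio,cache=writeback

The same ionice settings can also be applied to an already-running guest
via ionice -p <pid>.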
Alex Attarian wrote:
I'll give that a try Gordan! Thanks for the info!
My other question is: does gluster perform a read of the extended
attributes on all servers regardless of read-subvolume?
This is what my tcpdump shows:
,3..h...:O5*.L.
..+c...c..D.
Jeff Darcy wrote:
Unfortunately, I/O traffic shaping is still in its infancy
compared to what's available for networking - or perhaps even "infancy"
is too generous. As far as the I/O stack is concerned, all of the
traffic is coming from the glusterfsd process(es) without
differentiation, so ev
Jeff Darcy wrote:
On 01/31/2010 09:06 AM, Ran wrote:
You guys are talking about network IO, I'm talking about the gluster server disk IO.
The idea to shape the traffic does make sense, since the virt machine
servers do use the network to get to the disks (gluster),
but what about if there are, say, 5 KVM serv
Stephan von Krawczynski wrote:
On Mon, 01 Feb 2010 11:10:16 +
Gordan Bobic wrote:
Stephan von Krawczynski wrote:
On Sun, 31 Jan 2010 13:37:49 +
Gordan Bobic wrote:
Stephan von Krawczynski wrote:
On Sun, 31 Jan 2010 00:29:55 +
Gordan Bobic wrote:
Slightly offtopic I would
Daniel Maher wrote:
Gordan Bobic wrote:
That's hardly unexpected. If you are using client-side replicate, I'd
expect to see the bandwidth requirements multiply with the number of
replicas. For all clustered configurations (not limited to glfs) I use
a separate LAN for cluster com
Stephan von Krawczynski wrote:
On Sun, 31 Jan 2010 13:37:49 +
Gordan Bobic wrote:
Stephan von Krawczynski wrote:
On Sun, 31 Jan 2010 00:29:55 +
Gordan Bobic wrote:
Slightly offtopic I would like to ask if you, too, experienced glusterfs using
a lot more bandwidth than a comparable
Ran wrote:
You guys are talking about network IO, I'm talking about the gluster server disk IO.
I also explained how you could use ionice to throttle the entire VM's
disk I/O.
The idea to shape the traffic does make sense, since the virt machine
servers do use the network to get to the disks (gluster).
I seem to remember something like this being mentioned as broken on the
list a while back. I wouldn't even bother asking about issues like this
before trying the latest stable version (2.0.9).
Gordan
Alex Attarian wrote:
Is anyone willing to look at this and maybe provide some answers? Is
th
Traffic shaping with tc and iptables are your friends. ;)
Of course, if you are genuinely running out of bandwidth nothing will
solve the lack of bandwidth, but if you merely need to make sure it is
distributed more fairly and sensibly between the machines, it can be done.
Typically I would pu
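A rough sketch of the kind of thing meant; the interface, rates and the
Gluster port range here are assumptions, not a tested configuration:

  # put gluster traffic leaving eth0 into its own HTB class so it cannot
  # starve everything else; everything unclassified falls into 1:20
  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1: classid 1:10 htb rate 600mbit ceil 900mbit
  tc class add dev eth0 parent 1: classid 1:20 htb rate 300mbit ceil 1000mbit
  iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 24007:24047 \
    -j CLASSIFY --set-class 1:10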
Stephan von Krawczynski wrote:
On Sun, 31 Jan 2010 00:29:55 +
Gordan Bobic wrote:
Stephan von Krawczynski wrote:
On Fri, 29 Jan 2010 18:41:10 +
Gordan Bobic wrote:
I'm seeing things like this in the logs, coupled with things locking up
for a while until the timeout is com
Stephan von Krawczynski wrote:
On Fri, 29 Jan 2010 18:41:10 +
Gordan Bobic wrote:
I'm seeing things like this in the logs, coupled with things locking up
for a while until the timeout is complete:
[2010-01-29 18:29:01] E
[client-protocol.c:415:client_ping_timer_expired] home2: S
I'm seeing things like this in the logs, coupled with things locking up
for a while until the timeout is complete:
[2010-01-29 18:29:01] E
[client-protocol.c:415:client_ping_timer_expired] home2: Server
10.2.0.10:6997 has not responded in the last 42 seconds, disconnecting.
[2010-01-29 18:29:0
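The 42 seconds above is the client-side ping-timeout. If memory serves it
was tunable per protocol/client volume in the volfile, along the lines of
the fragment below; treat the option name and placement as an assumption
rather than gospel:

  volume home2
    type protocol/client
    option remote-host 10.2.0.10
    option remote-subvolume home2-brick
    # seconds of silence before the server is declared dead (default 42)
    option ping-timeout 10
  end-volume

Lowering it shortens the lock-up window, at the cost of more spurious
disconnects under heavy load.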
nicolas prochazka wrote:
Hi,
client replicate (see my previous email)
server 1: glusterfsd and glusterfs (replicate s1 s2)
server 2: glusterfsd and glusterfs (replicate s1 s2)
cut the cable between server 1 and server 2
wait for five minutes
the two servers reboot.
I think it is a fuse impa
Anand Avati wrote:
On Thu, Jan 28, 2010 at 4:44 PM, Gordan Bobic wrote:
Thanks for your answer Gordan. Do you have more information about
3.0.2 ?
None, other than that Avati said a couple of days ago that it will be
available "very quickly".
Do you think the 3.0.1 could be a
Samuel Hassine wrote:
Hi,
I downloaded and installed GlusterFS 3.0.2rc1, and I have the same problem.
Furthermore, I discovered that not only do PHP sessions not work, but I
also tested moving my /etc/apache2 into the GlusterFS. In 2.0.9 it worked
very well, and now Apache2 could not read all module
Samuel Hassine wrote:
Hi,
Thanks for your answer Gordan. Do you have more information about
3.0.2 ?
None, other than that Avati said a couple of days ago that it will be
available "very quickly".
Do you think 3.0.1 could be a temporary solution or do I have to wait?
I don't think I cou
Samuel Hassine wrote:
Hi all,
I have a problem here. I am using a gluster file system as a simple
distributed storage (one storage server and multiple clients). I want to
put php sessions inside this partition.
In GlusterFS 2.0.9, it worked very well. PHP could write and read as
quickly as po
Avati wrote:
Please hold back using 3.0.1. We found some issues and are making
3.0.2 very quickly. Apologies for all the inconvenience.
Avati
On Tue, Jan 26, 2010 at 6:30 PM, Gordan Bobic wrote:
I upgraded to 3.0.1 last night and it still doesn't seem as stable as 2.0.9.
Things I have bumped
Mickey Mazarick wrote:
Just a suggestion, but maybe we can label releases as rc once it has
been tested by the dev team but not by the community at large.
That way the more conservative deployments can feel better and we know
what we're getting into.
I've been saying that for a while. The comp
Mickey Mazarick wrote:
I'm having problems with the server process crashing periodically under
medium load. It's not consistent but never happens with the 2.0.9 release.
I've tried 3.0 and the latest git checkout. Are there any known bugs with
ib that would cause this?
No, 3.0.x isn't stable ye
Anand Avati wrote:
Please hold back using 3.0.1. We found some issues and are making
3.0.2 very quickly. Apologies for all the inconvenience.
Duly noted. Do you have a guesstimate for when it'll be available?
Gordan
I upgraded to 3.0.1 last night and it still doesn't seem as stable as
2.0.9. Things I have bumped into since the upgrade:
1) I've had unfsd lock up hard when exporting the volume, it couldn't be
"kill -9"-ed. This happened just after a spurious disconnect (see 2).
2) Seeing random disconnects
I've just observed another case of corruption similar to what I reported
a while back with .viminfo files getting corrupted (in that case, by
somehow being clobbered by a shared library fragment from what I could
tell).
This time, however, it was much more sinister, although probably the
same
Shehjar Tikoo wrote:
The answer to your question is, yes, it will be possible to export your
local file system with knfsd and glusterfs distributed-replicated
volumes with Gluster NFS translator BUT not in the first release.
See comment above. Isn't that all the more reason to double check
p
On 08/01/2010 21:15, Martin Fick wrote:
--- On Fri, 1/8/10, Gordan Bobic wrote:
...
On writes, NFS gets 4.4MB/s, GlusterFS (server
side AFR) gets 4.6MB/s. Pretty even.
On reads GlusterFS gets 117MB/s, NFS gets 119MB/s
(on the first read after flushing the caches, after that it
goes up to
Can you post your configs and what your logs (client and server) say?
Have you got the fuse package installed (should be a rpm dependency if
you used rpm to install)? Do you have the fuse service started? Is the
fuse.ko module loaded?
Gordan
lesonus wrote:
I install and test GLusterFS 2.6 2
ly the find command.
Gordan
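For anyone hitting the same wall, the usual quick checks for the questions
above are (package names assume an RPM-based distro):

  rpm -q fuse glusterfs     # is the fuse package (and glusterfs) installed?
  lsmod | grep fuse         # is the fuse.ko module loaded?
  modprobe fuse             # load it if it isn't
  ls -l /dev/fuse           # the device node should exist once the module is in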
Gordan Bobic wrote:
During my performance testing of various combinations of glfs and nfs
using the kernel build process, I came across something in some
configurations.
Specifically, during make clean after building:
find: WARNING: Hard link count is wrong for ./arch/x
During my performance testing of various combinations of glfs and nfs
using the kernel build process, I came across something in some
configurations.
Specifically, during make clean after building:
find: WARNING: Hard link count is wrong for ./arch/x86_64/kernel: this
may be a bug in your fil
When glusterfs is being used as rootfs, doing this:
echo 3 > /proc/sys/vm/drop_caches
makes the bash that issued the command go zombie and it cannot be killed
or stopped. This happens on all my glfs-root servers. The only cure is
to reboot the machine. There is no actual crash per se - the mac
Shehjar Tikoo wrote:
The answer to that lies in another question, "why would anyone use
a standardized NFS over GlusterFS?"
Here are three points from pnfs.com on why:
1. Ensures Interoperability among vendor solutions
2. Allows Choice of best-of-breed products
3. Eliminates Risks of deploying
Anand Avati wrote:
On Tue, Jan 5, 2010 at 5:31 PM, Gordan Bobic wrote:
I've noticed a very high incidence of the problem I reported a while back,
that manifests itself in open files getting corrupted on commit, possibly
during conditions that involve server disconnections due to timeouts
Shehjar Tikoo wrote:
Gordan Bobic wrote:
Gordan Bobic wrote:
With native NFS there'll be no need to first mount a glusterFS
FUSE based volume and then export it as NFS. The way it has been
developed is that
any glusterfs volume in the volfile can be exported using NFS by adding
a
Shehjar Tikoo wrote:
Gordan Bobic wrote:
Martin Fick wrote:
--- On Wed, 1/6/10, Gordan Bobic wrote:
With native NFS there'll be no need to first mount a
glusterFS
FUSE based volume and then export it as NFS. The way
it has been developed is that
any glusterfs volume in the volfil
Anand Avati wrote:
So - I did a redneck test instead - dd 64MB of /dev/zero to a file on the
mounted partition.
On writes, NFS gets 4.4MB/s, GlusterFS (server side AFR) gets 4.6MB/s.
Pretty even.
On reads GlusterFS gets 117MB/s, NFS gets 119MB/s (on the first read after
flushing the caches, afte
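The "redneck test" mentioned is just a plain dd; something along these
lines, where the mount point is a placeholder and conv=fsync is added so
the write actually reaches the servers rather than stopping in the page
cache:

  dd if=/dev/zero of=/mnt/glusterfs/ddtest bs=1M count=64 conv=fsync   # write
  echo 3 > /proc/sys/vm/drop_caches                                    # flush caches
  dd if=/mnt/glusterfs/ddtest of=/dev/null bs=1M                       # read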
Anand Avati wrote:
On Thu, Jan 7, 2010 at 4:51 AM, Gordan Bobic wrote:
I'm seeing something peculiar. lsof is showing glusterfs as having deleted
file handles open in the backing store. That's fine, those file handles are
open by the process running on top of it. But the process runn
I'm seeing something peculiar. lsof is showing glusterfs as having
deleted file handles open in the backing store. That's fine, those file
handles are open by the process running on top of it. But the process
running on top of glusterfs isn't showing the same file handle as
deleted. That seems
Gordan Bobic wrote:
With native NFS there'll be no need to first mount a glusterFS
FUSE based volume and then export it as NFS. The way it has been
developed is that
any glusterfs volume in the volfile can be exported using NFS by adding
an NFS volume over it in the volfile. This is some
Martin Fick wrote:
--- On Wed, 1/6/10, Gordan Bobic wrote:
With native NFS there'll be no need to first mount a
glusterFS
FUSE based volume and then export it as NFS. The way
it has been developed is that
any glusterfs volume in the volfile can be exported
using NFS by adding
a
Shehjar Tikoo wrote:
- "Gordan Bobic" wrote:
I'm not sure if this is the right place to ask, but Google has failed
me
and it is rather related to the upcoming Gluster NFS export feature.
What I'm interested in knowing is whether it will be possible to run
Gluster
Shehjar Tikoo wrote:
Gordan Bobic wrote:
I can't figure out why this might be the case, but it would appear
that when unfsd is bound to a custom port and not registered with
portmap, the performance is massively improved.
I changed my init.d/unfsd script as follows, in the start o
I can't figure out why this might be the case, but it would appear that
when unfsd is bound to a custom port and not registered with portmap,
the performance is massively improved.
I changed my init.d/unfsd script as follows, in the start option:
- /usr/sbin/unfsd -i ${pidfile}
+ /usr/sbin/unf
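The changed line is truncated above. Purely as a guess at the shape of such
a change (the port number is arbitrary; per the unfs3 man page, -p disables
portmapper registration and -n/-m pin the NFS/MOUNT ports):

  # a guess, not the author's actual line:
  + /usr/sbin/unfsd -p -n 32049 -m 32049 -i ${pidfile}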
On 05/01/2010 03:11, "廖再学" wrote:
> All:
> Now,i should scale the system and the system is not down ,please help
> me and how to operation
Parse error.
Can you elaborate on what exactly you are trying to do?
Gordan
I'm not sure if this is the right place to ask, but Google has failed me
and it is rather related to the upcoming Gluster NFS export feature.
What I'm interested in knowing is whether it will be possible to run
Gluster NFS export for glfs mounts while using knfsd as per standard for
exporting n
I've noticed a very high incidence of the problem I reported a while
back, that manifests itself in open files getting corrupted on commit,
possibly during conditions that involve server disconnections due to
timeouts (very high disk load). Specifically, I've noticed that my
.viminfo file got c
Vikas Gorur wrote:
Gordan Bobic wrote:
That is plausible since I am using single-process client and server.
Is there a way to tell on a running glfs cluster which node is the
current lock master? The process creating the load was running on the
first listed volume, so I would have expected
Vikas Gorur wrote:
Gordan Bobic wrote:
Hi,
I'm noticing something that could be considered weird. I have a 3-node
rootfs-on-gluster cluster, and I'm doing a big yum update on one of
the nodes. On that node and one of the other nodes, glusterfsd is
using about 10-15% CPU. But on th
Hi,
I'm noticing something that could be considered weird. I have a 3-node
rootfs-on-gluster cluster, and I'm doing a big yum update on one of the
nodes. On that node and one of the other nodes, glusterfsd is using
about 10-15% CPU. But on the 3rd node, glusterfs is using < 0.3% CPU.
I'm gue
On 22/12/2009 00:39, Anand Babu Periasamy wrote:
2009 Gluster Hacker - Award goes to "Gordan Bobic" for consistently
contributing to the Gluster community worldwide. He has always been
there to criticize constructively and to help others selflessly. We hope
everyone in the community will also fi
Hi,
I tried updating my root-on-GlusterFS cluster to 3.0.0pre1, and there
appears to be a problem that causes the whole thing to lock up pretty
quickly.
One node comes up OK, but the second node typically locks up half way
through the boot-up process, and from there on all access to the file
Harshavardhana wrote:
Gordan,
Recent RHEL and CentOS kernel update of the -164 patch set has included
fuse in their rpms. The ABI version in 2.6.18-164 is 7.10 for fuse. This
has been a phenomenal improvement, as the ABI version even up to
fuse-2.7.4glfs11 was 7.8. So pretty much
with n
Anand Avati wrote:
So why does it require the fuse package to install and fuse-devel to build?
It's just something we missed in the glusterfs.spec RPM specification.
The tarball builds without external fuse libs. Will be fixed soon.
There is still a minor requirement for fuse. The init scrip
Anand Avati wrote:
Key highlights of 3.0 are
* Background self-healing: Applications won't be blocked anymore during
healing operations.
* Checksum based healing: Rsync like healing mechanism to heal only the
inconsistent blocks within a file.
* Healing on the fly: Files can be healed even wh
On 02/11/2009 21:18, Anand Babu Periasamy wrote:
Gordan Bobic wrote:
It would appear that this requires fuse-devel to build, at least as an
RPM. My understanding was that 3.0 doesn't require fuse libraries and
talks directly to the kernel module. Is that an error in my
understanding or an