Anand Avati wrote:
> What happens when you stat a filename, which you know exists on only
> one subvolume? Does it get healed then, at least?

Yes, it does.
--
Luca Barbato
Gentoo Council Member
Gentoo/linux Gentoo/PPC
http://dev.gentoo.org/~lu_zero
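The behaviour confirmed above — a stat/lookup on a file missing from one subvolume triggering the heal — is what makes the common "walk the whole tree" workaround effective. A minimal sketch; the mountpoint is a placeholder (a throwaway directory here, so the snippet runs anywhere), not a path taken from this thread:

```shell
# Stand-in for a real glusterfs mountpoint; substitute your own mount.
MOUNT=${MOUNT:-/tmp/gluster-demo}
mkdir -p "$MOUNT"
touch "$MOUNT/example.txt"

# stat every entry under the mount: in a replicate setup, the lookup
# performed for each path is what gives the translator the chance to
# heal files that exist on only one subvolume.
find "$MOUNT" -print0 | xargs -0 stat > /dev/null && echo "walk complete"
```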
What happens when you stat a filename, which you know exists on only
one subvolume? Does it get healed then, at least?

Avati

On Tue, Apr 28, 2009 at 9:43 PM, Luca Barbato wrote:
> Anand Avati wrote:
>> what is the git commit id of the snapshot you are using?
>
> I've updated to head, now the fd issue seems gone, autoheal is gone as
> well, apparently...
Anand Avati wrote:
>>> what is the git commit id of the snapshot you are using?
>>
>> I've updated to head, now the fd issue seems gone, autoheal is gone as
>> well, apparently...
>
> can you please explain what you mean by autoheal being gone as well?

I have a setup with 2 replicate over 3 bricks;
each node h
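For readers without the configuration in front of them, a replicate setup in the text-volfile syntax of that era would look roughly like the fragment below. This is a simplified single-pair sketch, and every volume name and host in it is invented for illustration — none of it is taken from Luca's actual configuration:

```
# Hypothetical client-side volfile fragment; all names are assumptions.
volume remote1
  type protocol/client
  option remote-host node1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option remote-host node2
  option remote-subvolume brick
end-volume

volume afr0
  type cluster/replicate
  subvolumes remote1 remote2
end-volume
```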
>> what is the git commit id of the snapshot you are using?
>
> I've updated to head, now the fd issue seems gone, autoheal is gone as
> well, apparently...

can you please explain what you mean by autoheal being gone as well?

Avati
___
Gluster-users mailing list
Anand Avati wrote:
> what is the git commit id of the snapshot you are using?

I've updated to head, now the fd issue seems gone, autoheal is gone as
well, apparently...

lu
--
Luca Barbato
Gentoo Council Member
Gentoo/linux Gentoo/PPC
http://dev.gentoo.org/~lu_zero
Anand Avati wrote:
>> currently one node seems to have surpassed about 3M open files even if
>> the samba server claims to have about 75k files currently open.
>> I'm using a recent gluster snapshot from git.
>
> what is the git commit id of the snapshot you are using?

I cannot tell with much precision.
> Recently a gluster volume I set up got mounted on a server that exports
> it through samba. It appears to work up to a point. Unexpectedly, on
> heavy usage the nodes hit the limit on open file descriptors really
> easily.
>
> Does anybody else have experience with this? Is that kind of usage
> supported?

currently one node seems to have surpassed about 3M open files even if the
samba server claims to have about 75k files currently open.
I'm using a recent gluster snapshot from git.

Recently a gluster volume I set up got mounted on a server that exports
it through samba. It appears to work up to a point. Unexpectedly, on
heavy usage the nodes hit the limit on open file descriptors really
easily.

Does anybody else have experience with this? Is that kind of usage
supported?
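Given the symptom — millions of descriptors held while samba reports only about 75k open files — a quick per-node check is to compare what the process actually holds in /proc against its limit. A Linux-only sketch; the current shell's pid stands in for the real glusterfsd pid, which on an actual node you would look up first (e.g. with pidof):

```shell
# The current shell's pid is a stand-in; on a real node use something
# like: pid=$(pidof glusterfsd)
pid=$$

# Descriptors the process actually holds right now.
nfds=$(ls "/proc/$pid/fd" | wc -l)

# Soft limit on open files, for comparison.
limit=$(ulimit -Sn)

echo "pid $pid holds $nfds fds (soft limit: $limit)"
```

If the held count climbs toward the limit while the application-level count stays flat, descriptors are being leaked rather than legitimately used.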