Thank you for the answer.
If I have understood correctly, you suggest disabling NUFA to verify
whether it is the source of the problem.
Is that correct?
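For reference, NUFA can normally be toggled as a volume option. A minimal sketch, assuming a volume named `gfs` (a hypothetical name; `cluster.nufa` is the usual option key, but confirm against your version's `gluster volume set help` output):

```shell
# Disable the NUFA translator on the volume (hypothetical volume name "gfs")
gluster volume set gfs cluster.nufa off

# Confirm the option took effect
gluster volume get gfs cluster.nufa
```

Re-enable it with `cluster.nufa on` once the test is finished.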
On Mon, 06/06/2016 at 15:18 -0400, Jeff Darcy wrote:
> >
> > This could be because of nufa xlator. As you say the files are
> > present on the
>
AFAICT, that posix_flush warning might have been fixed in
028afb21a7793d3efbb9db431bde37ec332d9839 which is in 3.7.11
On 06/06/2016 10:52 PM, ABHISHEK PALIWAL wrote:
I am still facing this issue. Any suggestions?
On Fri, May 27, 2016 at 10:48 AM, ABHISHEK PALIWAL
Hi,
I have a distributed volume, so no replication.
When a brick goes missing for some reason, it is still possible to
access files on the remaining bricks, while writes that hash to the
missing brick will fail. That is reasonably sane, but it can be a bit
confusing to users.
A few other ways to
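The failure mode described above can be illustrated with a toy model. This is illustrative only: real DHT uses per-directory hash-range layouts rather than simple modulo placement, and the brick names here are made up.

```python
import hashlib

def pick_brick(filename, bricks):
    # Toy placement: hash the file name and map it onto one brick.
    # (Real DHT hashes into a 32-bit space divided into per-brick ranges.)
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

bricks = ["brick1", "brick2", "brick3"]
online = {"brick1", "brick3"}          # pretend brick2 is down

target = pick_brick("report.txt", bricks)
if target in online:
    print(f"report.txt can be created on {target}")
else:
    print(f"creating report.txt fails: {target} is offline")
```

Reads of files living on the surviving bricks still work; only names that hash to the missing brick are affected, which is why the failures look sporadic to users.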
Hello
I get this message in the log, but I have trouble figuring out
what it means. Any hints?
[2016-06-07 06:41:17.366490] I [MSGID: 109036]
[dht-common.c:8173:dht_log_new_layout_for_dir_selfheal] 0-gfs-dht: Setting
layout of /ftp/shadow/MMC/20160606/30/pdata/1 with [Subvol_name:
gfs-replicate-0,
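For context, the "I" in the log line marks message 109036 as informational: DHT is recording the hash ranges it assigned to the directory's subvolumes during layout self-heal. A rough sketch of the idea (a simplified assumption: real layouts are stored in the trusted.glusterfs.dht xattr and need not be equal slices):

```python
def split_hash_space(subvols):
    # Divide the 32-bit hash space into one contiguous range per
    # subvolume, the way a fresh directory layout roughly looks.
    total = 2 ** 32
    step = total // len(subvols)
    layout, start = [], 0
    for i, name in enumerate(subvols):
        end = total - 1 if i == len(subvols) - 1 else start + step - 1
        layout.append((name, start, end))
        start = end + 1
    return layout

for name, lo, hi in split_hash_space(["gfs-replicate-0", "gfs-replicate-1"]):
    print(f"{name}: {lo:#010x}-{hi:#010x}")
```

Each file name hashes to a point in that space, and the subvolume whose range contains the point holds the file.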
On Tue, Jun 7, 2016 at 2:01 PM, Emmanuel Dreyfus wrote:
> Hello
>
> I get this message in the log, but I have trouble figuring out
> what it means. Any hints?
>
> [2016-06-07 06:41:17.366490] I [MSGID: 109036]
> [dht-common.c:8173:dht_log_new_layout_for_dir_selfheal] 0-gfs-dht:
Hi,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in
> Thank you for the answer.
> If I have understood correctly, you suggest disabling NUFA to verify
> whether it is the source of the problem.
> Is that correct?
That would certainly provide a very useful data point.
___
Gluster-users mailing list