Is there any update?
On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL wrote:
> Hi,
>
> We are currently experiencing a serious issue w.r.t volume space usage by
> glusterfs.
>
> In the below outputs, we can see that the size of the real data in /c
> (glusterfs volume) is nearly 1 GB but the “.glusterfs” directory inside the
> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB.
The longevity cluster has been updated to glusterfs-3.10.1 (from 3.8.5).
General information on the longevity cluster is at [1].
In the previous update, sharding was enabled on the gluster volume. This
time I have added an NFS-Ganesha server on one machine; its memory
usage is being sampled.
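The message does not describe how the sampling is done, so here is a minimal sketch of one common approach, assuming a Linux host where per-process memory can be read from /proc (the interval and sample count are arbitrary assumptions):

```python
# Hypothetical memory sampler; not the actual script used on the
# longevity cluster. Reads VmRSS from /proc/<pid>/status (Linux only).
import time


def rss_kib(pid="self"):
    """Return the resident set size (VmRSS) of a process in KiB."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # kernel reports the value in kB
    raise RuntimeError("VmRSS not found in /proc/%s/status" % pid)


def sample(pid, interval_s, count):
    """Collect `count` RSS samples, `interval_s` seconds apart."""
    samples = []
    for _ in range(count):
        samples.append(rss_kib(pid))
        time.sleep(interval_s)
    return samples
```

Pointing this at the ganesha.nfsd PID at a regular interval gives a simple time series for spotting memory growth.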
Like commit_hash, I hope you are doing this on directories only;
nevertheless, it is good to look into the brick logs and client logs. If
the logs are not helping, gdb will definitely help here.
If possible, you can share your code with us, so that more people can
look into it and help you debug it, or giv
Hi,
I declared a new variable in the dht_layout->list structure, similar to
commit_hash, but I cannot update this field globally.
The update is only local: apart from the client that performs it, the
servers and the other clients cannot see the change.
To update this field I proceed like the commit_hash
functions, dht_upda
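The propagation problem described above can be sketched in miniature. This is a conceptual model only; the class and field names are hypothetical and are not the real GlusterFS dht_layout structures. The point is that commit_hash is visible everywhere because it is persisted in the directory's on-disk xattr (which every client re-reads on layout refresh), whereas a field set only in one client's in-memory layout stays local:

```python
# Stands in for the directory's on-disk extended attributes
# (e.g. trusted.glusterfs.dht), shared by all clients via the bricks.
disk_xattr = {"commit_hash": 1, "my_field": 0}


class ClientLayout:
    """A client's in-memory copy of the directory layout (hypothetical)."""

    def __init__(self):
        self.commit_hash = 0
        self.my_field = 0

    def refresh(self):
        # Clients only learn of changes by re-reading the on-disk xattr.
        self.commit_hash = disk_xattr["commit_hash"]
        self.my_field = disk_xattr["my_field"]


a, b = ClientLayout(), ClientLayout()
a.refresh(); b.refresh()

# Updating only the local in-memory structure: client b never sees it.
a.my_field = 42
b.refresh()
assert b.my_field == 0

# The commit_hash pattern: write through to the on-disk xattr first
# (the real code does this with a setxattr on the directory), then
# every client picks it up on its next layout refresh.
disk_xattr["my_field"] = 42
a.refresh(); b.refresh()
assert a.my_field == b.my_field == 42
```

So for the new field to become visible to servers and other clients, it has to be written to the directory's xattr the way the commit_hash update path does, not just assigned in the local layout.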
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-04-05-93e3c9ab
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman
If the data was written to the minimum number of bricks, heal will take
place on the failed brick only.
Data will be read from the good bricks, re-encoded, and only the fragment
belonging to the failed brick will be written.
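The heal flow above can be illustrated with a toy erasure code. This sketch uses a simple 2+1 XOR parity scheme instead of the real systematic Reed-Solomon code GlusterFS EC uses, and all names are hypothetical; it only demonstrates the key property that the surviving fragments are enough to reconstruct, and rewrite, just the failed brick's fragment:

```python
# Toy 2+1 erasure code: two data fragments plus one XOR parity fragment.
# Any single missing fragment is the XOR of the two surviving ones.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def encode(data):
    """Split data into two fragments plus one XOR parity fragment."""
    half = len(data) // 2
    d0, d1 = data[:half], data[half:]
    return [d0, d1, xor_bytes(d0, d1)]


def heal(fragments, failed):
    """Reconstruct ONLY the failed fragment from the surviving ones."""
    good = [f for i, f in enumerate(fragments) if i != failed]
    return xor_bytes(good[0], good[1])


data = b"abcdefgh"
frags = encode(data)

lost = frags[1]            # brick 1 went down during the write
frags[1] = None
frags[1] = heal(frags, 1)  # read good bricks, decode, rewrite brick 1 only
assert frags[1] == lost
assert frags[0] + frags[1] == data
```

Bricks 0 and 2 are never rewritten during the heal; only the fragment for the brick that missed the write is regenerated.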
----- Original Message -----
From: "jayakrishnan mm"
To: "Gluster Devel"
Sent:
Hi,
I am using GlusterFS 3.7.15.
What type of algorithm is used in EC healing? I mean, if a brick fails
during a write and comes back online later, will all the bricks be
re-written, or is only the failed brick written with the new data?
Best regards,
JK
Hi,
We are currently experiencing a serious issue w.r.t volume space usage by
glusterfs.
In the below outputs, we can see that the size of the real data in /c
(glusterfs volume) is nearly 1 GB, but the “.glusterfs” directory inside the
brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB.
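One thing worth checking here (an assumption about the cause, not a confirmed diagnosis): for regular files, the entries under “.glusterfs” are hard links to the real data files on the brick, so they should not add to actual disk usage; entries there whose link count has dropped to 1 are orphaned GFID files that do consume extra space. A minimal sketch that compares naive size-summing against inode-deduplicated usage, which is how hard links should be accounted:

```python
# Sketch: measure a directory tree two ways. Hard-linked copies inflate
# the naive sum but not the deduplicated one.
import os


def usage(root):
    """Return (naive_sum, dedup_sum) in bytes: total st_size over every
    path vs. counting each (device, inode) pair only once."""
    naive, dedup, seen = 0, 0, set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            st = os.lstat(os.path.join(dirpath, name))
            naive += st.st_size
            if (st.st_dev, st.st_ino) not in seen:
                seen.add((st.st_dev, st.st_ino))
                dedup += st.st_size
    return naive, dedup
```

If the deduplicated usage over the whole brick is still far above the ~1 GB of real data, listing “.glusterfs” entries with `st_nlink == 1` would be the next place to look.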