Thanks for your response Pranith and Mathieu,
Pranith: To answer your question, I am planning to use this storage for two
main workloads.
1. As a shared storage for VMs.
2. As an NFS storage for files.
We are an online backup company, so we store a few hundred terabytes of data.
Mathieu: I apprec
Just in case, here is the Valgrind output from the FUSE client with 3.7.6 +
the API-related patches we discussed before:
https://gist.github.com/cd6605ca19734c1496a4
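For anyone wanting to reproduce such a trace, a minimal sketch (the server name "server1", volume name "testvol" and mount point are placeholders): the client has to stay in the foreground (-N) so Valgrind can follow it.

```shell
# Mount the volume through a foreground glusterfs client under Valgrind;
# "server1" and "testvol" are placeholders for your volfile server/volume.
valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
    glusterfs --volfile-server=server1 --volfile-id=testvol -N /mnt/test
# After unmounting, inspect /tmp/glusterfs-valgrind.log for leak records.
```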
12.01.2016 08:24, Soumya Koduri wrote:
For the FUSE client, I tried vfs drop_caches as suggested by Vijay in an
earlier mail. Though all the ino
I tried as suggested:
echo 3 > /proc/sys/vm/drop_caches
sync
It lowered usage a bit:
before:
[image: inline screenshot 2]
after:
[image: inline screenshot 1]
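For reference, the kernel documentation suggests flushing dirty pages before dropping the caches, i.e. running sync first:

```shell
# Write back dirty pages first, then drop page cache, dentries and inodes
# (value 3 = pagecache + reclaimable slab objects); needs root.
sync
echo 3 > /proc/sys/vm/drop_caches
```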
Regards,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:34 GMT+01:00 Mathieu Chateau :
Hello,
I also experience high memory usage on my gluster clients. Sample:
[image: inline screenshot 1]
Can I help in testing/debugging ?
Regards,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:24 GMT+01:00 Soumya Koduri :
>
>
> On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote:
>
>> Br
Hello,
For any system, 36 disks raises the probability of a disk failure. Do you
plan to run GlusterFS with only one server?
You should think about failure at each level and be prepared for it:
- Motherboard failure (full server down)
- Disks failure
- Network cable failure
- File system corruption (
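As a sketch of what being prepared at each level can look like, a replicated volume keeps each file on several independent servers (the hostnames and brick paths below are hypothetical):

```shell
# Hypothetical replica-3 volume: each file is stored on three servers, so
# losing one motherboard, disk or cable leaves the data available.
gluster volume create backupvol replica 3 \
    server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
gluster volume start backupvol
```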
On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote:
A brief test shows that Ganesha has stopped leaking and crashing, so it
seems good to me.
Thanks for checking.
Nevertheless, back to my original question: what about FUSE client? It
is still leaking despite all the fixes applied. Should it
hi Francesco,
I can take a look at this if you could give me a statedump of
the process which is consuming so much memory. If the process consuming
so much memory is the client process, please use
https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md#how-to-generate-st
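In short, per that document, a client-side statedump is triggered by sending SIGUSR1 to the glusterfs process (the paths below assume the default statedump directory):

```shell
# Send SIGUSR1 to the FUSE client; this assumes a single glusterfs
# client process on the box - pick the right PID otherwise.
kill -USR1 "$(pidof glusterfs)"
# The dump appears under the default statedump directory:
ls -l /var/run/gluster/
```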
On 12/01/16 13:38, Pranith Kumar Karampuri wrote:
is 3.7.7 far off?
It is getting delayed because of me. Let me see what I can do. I will
update you with something by the end of this week.
Oh, no hurry - didn't mean to pressure you - just curious. It'll be
ready when it's ready.
Thanks,
--
Linds
On 01/08/2016 06:30 PM, Patrick Kaiser wrote:
hi,
I am running a distributed replicated gluster fs setup with 4 nodes.
Currently I have no problems, but I was wondering: when I run
gluster volume status
I see different free disk space on every node.
I am wondering if I should not h
On 01/10/2016 04:44 AM, Lindsay Mathieson wrote:
On 9/01/2016 12:34 PM, Ravishankar N wrote:
If you're trying arbiter, it would be good if you can compile the 3.7
branch and use it since it has an important fix
(http://review.gluster.org/#/c/12479/) that will only make it to
glusterfs-3.7.7.
On 01/12/2016 04:34 AM, Pawan Devaiah wrote:
Hi All,
We have a fairly powerful server sitting at the office with 128 GB of RAM
and 36 x 4 TB drives. I am planning to utilize this server as a
backend storage with GlusterFS on it.
I have been doing a lot of reading on GlusterFS, but I do not see any
On 01/12/2016 08:36 AM, ngsflow wrote:
Hello:
I have a six-node cluster in 3 x 2 (distributed-replicated) mode, with
glusterfs 3.7.6 and CentOS 6.3.
The problem is that subvolumes report errors from time to time when dealing
with a bunch of small files, e.g. during tar or cp commands.
the following is
On 01/12/2016 08:52 AM, Lindsay Mathieson wrote:
On 11/01/16 15:37, Krutika Dhananjay wrote:
Kyle,
Based on the testing we have done from our end, we've found that 512MB
is a good number that is neither too big nor too small,
and provides good performance both on the IO side and with respect to
self-heal.
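For reference, that block size maps to the sharding options on the volume (the volume name below is a placeholder; sharding only applies to files created after it is enabled):

```shell
# Enable sharding with the 512MB block size discussed above;
# "myvol" is a placeholder volume name.
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 512MB
```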
Hi Krutika, I experimented a lot
Hi All,
We discussed the following proposal for 3.8 on the maintainers mailing
list, and there was general consensus that the changes are a step in
the right direction. We would like to hear your thoughts on it.
Changes to 3.8 Plan:
1. Include 4.0 features such a
11.01.2016 12:26, Soumya Koduri wrote:
I
I have made changes to fix the lookup leak in a different way (as
discussed with Pranith) and uploaded them in the latest patch set #4
- http://review.gluster.org/#/c/13096/
Please check if it resolves the memory leak and hopefully doesn't result in
any assertion :)
Thanks,
Soumya
On 01