Actually, you raised a very good point.

Why does it need to rely on FUSE? Why can't it be something that runs in the 
kernel, with no reliance on FUSE? I imagine that would require a lot of 
engineering, but the benefits go without saying.
Does anyone know a bit about the architecture of Isilon, or of other 
POSIX-compliant distributed filesystems?

Fernando

-----Original Message-----
From: Joe Landman [mailto:[email protected]] 
Sent: 06 November 2012 12:39
To: Fernando Frediani (Qube)
Cc: '[email protected]'
Subject: Re: [Gluster-users] Very slow directory listing and high CPU usage on 
replicated volume

On 11/06/2012 04:35 AM, Fernando Frediani (Qube) wrote:
> Joe,
>
> I don't think we have to accept this, as it is not an acceptable thing.

I understand your unhappiness with it.  But it's "free", and you sometimes 
have to accept what you get for "free".

> I have seen countless people complaining about this problem for a 
> while, and it seems no improvements have been made. The thing about the 
> ramdisk, although it might help, looks more like chewing gum. I have seen 
> other distributed filesystems that don't suffer from the same problem, 
> so why does Gluster have to?

This goes to an aspect of the implementation.  FUSE makes metadata ops (and 
other very small IOs) problematic (as in time consuming): every such operation 
has to cross from the kernel back out to the user-space FUSE daemon, and on a 
Gluster client usually over the network as well.  There are no easy fixes for 
this without engineering a new kernel subsystem to incorporate Gluster 
(unlikely), or redesigning FUSE so this is not an issue.  I am not sure either 
is likely.
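
To make that concrete, here is a minimal Python sketch that times stat() 
calls on a local directory versus a FUSE mount.  The two paths are 
assumptions (substitute your own local path and Gluster mount point); the 
point is only to show the per-call gap, since each stat() on the FUSE side 
has to leave the kernel for the user-space daemon and usually touch the 
network.

    #!/usr/bin/env python3
    # Rough timing of metadata ops (stat) on two mount points.
    # LOCAL_DIR and FUSE_DIR are hypothetical; point them at a local
    # filesystem and a Gluster FUSE mount to compare.
    import os
    import time

    LOCAL_DIR = "/tmp"            # assumed local filesystem path
    FUSE_DIR = "/mnt/glusterfs"   # assumed Gluster FUSE mount point

    def avg_stat_seconds(path, rounds=3):
        """Average seconds per stat() over the directory's entries."""
        entries = [os.path.join(path, name) for name in os.listdir(path)]
        if not entries:
            return None
        start = time.monotonic()
        for _ in range(rounds):
            for entry in entries:
                try:
                    os.stat(entry)
                except OSError:
                    pass  # entry vanished since listdir; skip it
        elapsed = time.monotonic() - start
        return elapsed / (rounds * len(entries))

    for label, path in (("local", LOCAL_DIR), ("fuse", FUSE_DIR)):
        if not os.path.isdir(path):
            print(f"{label:5s} {path}: not mounted, skipping")
            continue
        avg = avg_stat_seconds(path)
        if avg is not None:
            print(f"{label:5s} {path}: {avg * 1e6:8.1f} us per stat()")

Run it against a directory with a few thousand entries on each side and the 
per-stat() average on the FUSE mount will generally be far higher, which is 
the same cost a recursive directory listing pays on every entry.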

Red Hat may be willing to talk to you about these issues if you give them 
money for subscriptions.  They eventually relented on xfs.


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: [email protected]
web  : http://scalableinformatics.com
        http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users