[cc-ing pvfs developers]

[EMAIL PROTECTED] wrote on Thu, 16 Aug 2007 14:03 -0400:
> Sorry I took a while to get back to you.  So, anyway here are some  
> answers to your questions.
> On Aug 14, 2007, at 6:11 PM, Pete Wyckoff wrote:
> 
> >- All data is copied through a character device between kernel
> >and fuse application.  Is there a way to avoid this?
> 
> Not without rewriting the Fuse code.
> >
> >- Kernel does operations at page cache size granularity (4k).
> >PVFS servers really want to see larger operations.  Is there
> >some way to get the total VFS operation size instead of
> >having the IO chunked up like this?  Direct IO?
> 
> With the direct_io option, data gets grouped into 128K chunks.  This  
> is hard-coded to 32 pages in the Fuse kernel code.
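[Editor's note: a quick sanity check on the numbers above, assuming the usual 4 KiB x86 page size:]

```python
PAGE_SIZE = 4096       # typical x86 page size, matching the 4k granularity mentioned above
FUSE_MAX_PAGES = 32    # the hard-coded page limit described for the FUSE kernel code

chunk = PAGE_SIZE * FUSE_MAX_PAGES
print(chunk, chunk // 1024)  # 131072 128  -> i.e. 128 KiB per direct_io chunk
```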
> 
> >- Kernel will use the page cache.  This isn't done in the
> >current pvfs kernel module to avoid false sharing and cases like
> >having data sit on one client but not visible by another.  Is
> >there a way to avoid the page cache?  Direct IO again?  Some way to
> >force it on even if app doesn't say O_DIRECT?
> 
> Using direct_io avoids the page cache.  Basically, direct_io calls  
> back almost immediately into the user space code rather than going  
> through the generic_file_read/write path.  What that also means is  
> that some attrs may not get updated properly if they are getting  
> cached in the kernel.
> 
> I've enabled direct_io by default for performance and to avoid the  
> caching issues you've talked about.
> 
> O_DIRECT doesn't work in FUSE.  The kernel throws it back on open  
> with an EINVAL.
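
[Editor's note: an application that wants to keep working on a FUSE mount therefore has to be prepared for EINVAL on O_DIRECT opens.  A minimal sketch of that fallback, assuming Linux and a hypothetical helper name:]

```python
import errno
import os
import tempfile

# O_DIRECT is Linux-specific; fall back to its conventional value for illustration.
O_DIRECT = getattr(os, "O_DIRECT", 0o40000)

def open_maybe_direct(path):
    """Try to open with O_DIRECT; if the filesystem rejects it with
    EINVAL (as FUSE did at the time), fall back to a plain open."""
    try:
        return os.open(path, os.O_RDONLY | O_DIRECT)
    except OSError as e:
        if e.errno != errno.EINVAL:
            raise
        return os.open(path, os.O_RDONLY)

# Usage: works whether or not the underlying filesystem supports O_DIRECT.
with tempfile.NamedTemporaryFile() as f:
    fd = open_maybe_direct(f.name)
    os.close(fd)
```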
> 
> Hope I could help.  If you are planning on using the Fuse code that I  
> wrote, there needs to be some significant testing with your workloads.   
> Also, one major problem with my code is that it doesn't work with  
> multithreaded fuse.

Thanks for the answers.  I'm copying the developers list, as there are
others who may be more motivated to work on this in the short term.
The direct_io option is promising.

                -- Pete
_______________________________________________
Pvfs2-developers mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers
