yubert wrote:
> Hi all,
> I tried to Google for answers to the following questions but couldn't
> seem to find any, so I'm hoping someone here can help.
> 1. How are PVFS2 file system quotas configured and enabled?
We don't support them.
> 2. Does PVFS2 use write-through or write-back caching?
We don't cache on the client.
> 3. Where can I find info on how the metadata and data are stored?
You can read previous discussions on pvfs2-developers, or you can look
at the code, or you can ask. Basically we use Berkeley DB for metadata
storage and individual local files for data (one per file per server).
> 4. What’s the maximum file size?
Whatever your local file system will store.
> 5. What’s the maximum file name size?
Dunno off the top of my head.
> 6. What’s the max number of files per directory?
Whatever Berkeley DB will let you keep in a single database.
> 7. What’s the max number of simultaneous open files within a single
> file system?
No such limit as far as I know. The only practical bound is probably
the number of file descriptors per process.
> 8. Is there a limit to the number of native clients that PVFS2 can
> support?
This is interconnect dependent. The BMI implementations will probably
hit a limit due to memory use at some point, but we have not seen this
in practice, and it would be easy enough to fix.
> 9. Is there a max number of file systems per storage server?
We don't test with more than one, although in theory the code supports
more. There's no real limit.
> 10. What’s the max file system size?
Depends on what your local file systems will do, since we're using them
to store data.
In short, all these maximum size issues basically depend on the
underlying Berkeley DB and file systems (e.g. ext3) that you choose.
You'd have to have a *very* large file system before any of these issues
would bite you on current kernels and versions of these codes. And if
you did hit such a limit, it wouldn't be difficult to hack Trove to use
a slightly different approach to storing data that would work around
issues (e.g. splitting data across multiple file systems, or using more
than one DB, or using something else entirely).
Regards,
Rob
_______________________________________________
Pvfs2-developers mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers