Dear all,
For the sake of completeness, I would like to revive this thread and
share some of my findings. As I wrote below, I intended to make better
use of large disks for AFS cache partitions, but my initial attempts
actually rendered the AFS clients unusable for our requirements.
One problem we faced was the high memory consumption of the libafs
module. This is related to the number of cache files OpenAFS creates
for a given size of the cache device. In my default installation, one
cache file was created per 32 1k-blocks. For a ~250GB cache, this
amounted to ~7900000 cache files, tying up ~8GB of RAM. According to
fs getcacheparms, it turned out that with our typical data, all cache
blocks were already filled when only ~10% of the cache files were in
use. I therefore decided to set both the -blocks and -files parameters
such that I get one cache file per 384 1k-blocks. With this setting,
block usage and cache file usage correspond very well.
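As a back-of-the-envelope check of the numbers above (the ~1KB of
kernel memory per cache file and the round 250GB figure are my own
estimates, not values from the afsd documentation):

```shell
# Rough check of the cache-file arithmetic.
# Assumption (mine): ~1KB of kernel memory per cache file.
cache_kb=$((250 * 1024 * 1024))      # ~250GB cache, in 1k-blocks

files_default=$((cache_kb / 32))     # one file per 32 1k-blocks
files_tuned=$((cache_kb / 384))      # one file per 384 1k-blocks
ram_mb=$((files_default / 1024))     # ~1KB per file -> MB of RAM

echo "default: $files_default files"   # ~8.2 million files
echo "tuned:   $files_tuned files"     # ~680000 files
echo "RAM:     ~$ram_mb MB"            # ~8000 MB, i.e. ~8GB
```

This reproduces the ~8GB figure I saw, and shows the 384-blocks-per-file
ratio cuts the file count (and the per-file memory) by a factor of 12.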
Another problem was caused by setting the -chunksize parameter. In
many of our use cases, we write only portions of a large file at a
time. After increasing the -chunksize value, the performance of
fseek-and-write operations on large files dropped from ~15MB/s to
~70kB/s for partial access. The best solution I found here was to
leave the default -chunksize setting unmodified.
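For context on why a small change to -chunksize has such a large
effect: as I understand it, the -chunksize argument is the base-2
logarithm of the chunk size in bytes (the disk-cache default is, I
believe, 18, i.e. 256KB), so -chunksize 30 means 1GB chunks:

```shell
# -chunksize n means chunks of 2^n bytes.
echo $((1 << 18))   # 262144 bytes = 256KB (the default, I believe)
echo $((1 << 30))   # 1073741824 bytes = 1GB (-chunksize 30)
```

With 1GB chunks, even a small partial write touches a huge chunk,
which would be consistent with the slowdown I observed.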
Although I'm fine with my settings now, I'd be interested in
explanations for these findings.
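For reference, an afsd invocation along these lines might look as
follows (the -blocks and -files numbers are illustrative for a ~250GB
cache at one file per 384 1k-blocks, not my exact production values):

```shell
# Illustrative only: ~250GB cache (in 1k-blocks), one cache file per
# 384 1k-blocks, -chunksize left at its default.
/usr/bin/afsd -fakestat -blocks 262144000 -files 682666
```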
Best regards,
Volkmar
Quoting Volkmar Glauche <[email protected]>:
Dear all,
we are running OpenAFS clients with up to 1TB space available for cache
partitions. These are multiuser cluster nodes where cache space should
be as large as possible. Until now, a limit of ~200GB had
(inadvertently) been imposed on the cache size by restrictive afsd
options.
I have now removed most of these options; my current command line for
afsd looks like
/usr/bin/afsd -chunksize 30 -fakestat -blocks <SPACE_ON_DEVICE>
Now it takes a very long time (~hours) to start up afsd, in some cases
the afs cache scan even fails with a kernel panic. Is there any way to
make efficient use of ~1TB cache partitions?
Volkmar
--
Freiburg Brain Imaging
http://fbi.uniklinik-freiburg.de/
Tel. +761 270-54783
Fax. +761 270-54819
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
--
Freiburg Brain Imaging
http://fbi.uniklinik-freiburg.de/
Tel. +761 270-54110
Fax. +761 270-53100