----- Original Message -----
| Currently the default behaviour when the journal size is not specified
| is to use a default size of 128M, which means that mkfs.gfs2 can run out
| of space while writing to a small device. The hard default also means
| that some xfstests fail with gfs2 as they try to create small file systems.
| This patch addresses these problems by setting sensible default journal
| sizes depending on the size of the file system. Journal sizes specified
| by the user are limited to half of the fs. As the minimum journal size
| is 8MB, that means we effectively get a hard minimum file system size of
| 16MB (per journal).
| Signed-off-by: Andrew Price <anpr...@redhat.com>
| v2: Andreas found that using 25% of the fs for journals was too large, so this
| version separates the default journal size calculation from the check
| for user-provided journal sizes, which allows for more sensible defaults.
| The default journal sizes for fs size ranges were taken from e2fsprogs.
| gfs2/libgfs2/libgfs2.h |  2 ++
| gfs2/man/mkfs.gfs2.8   |  5 +++--
| gfs2/mkfs/main_mkfs.c  | 56
| tests/edit.at          |  2 +-
| tests/mkfs.at          | 10 ++++++++++
| tests/testsuite.at     |  6 ++++++
| 6 files changed, 76 insertions(+), 5 deletions(-)
| +	if (num_blocks < 8192*1024)	/* 32 GB */
| +		return (32768);		/* 128 MB */
| +	if (num_blocks < 16384*1024)	/* 64 GB */
| +		return (65536);		/* 256 MB */
| +	if (num_blocks < 32768*1024)	/* 128 GB */
| +		return (131072);	/* 512 MB */
| +	return 262144;	/* 1 GB */
Perhaps you can adjust the indentation on the comment so it's clear
that the journal size is 1GB in this case, not the file system size?
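Something like this, perhaps, spelling out which size is which (just
illustrative):

	if (num_blocks < 32768*1024)	/* fs smaller than 128 GB */
		return (131072);	/* use a 512 MB journal */
	return 262144;			/* use a 1 GB journal */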
Here are some random thoughts on the matter:
I'm not sure I like the default journal size going up so quickly at
32GB. In most cases, 128MB journals should be adequate. I'd like to
see a much higher threshold that still uses 128MB journals.
Unless there's a high level of metadata pressure, after a certain point,
it's just wasted space.
I'd rather see 128MB journals go up to file systems of 1TB, for example.
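As a rough sketch of what I mean (the tiers above 1TB are just a guess
on my part, and this assumes 4K blocks like the rest of the patch):

	if (num_blocks < 262144*1024)	/* 1 TB */
		return (32768);		/* 128 MB journal */
	if (num_blocks < 524288*1024)	/* 2 TB */
		return (65536);		/* 256 MB journal */
	return 131072;			/* 512 MB journal */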
I'm not sure it's ever worthwhile to use a 1GB journal, but I suppose
with today's faster storage and faster machines, maybe it would be.
Barry recently got some new super-fast storage; perhaps we should ask
him to test some metadata-intensive benchmark to see if we can ever
push it to the point of waiting for journal writes. I'd use
instrumentation to tell us whenever journal writes need to wait for
journal space. Of course, a lot of that hinges on the bug I'm currently
working on where we often artificially wait too long for journal space.
(IOW, this is less of a concern when I get the bug fixed).
Also, don't forget that GFS2, unlike other file systems, requires a
journal for each node, and that should also be factored into the
calculations. So if you have a 1TB file system and it chooses a journal
size of 1GB, but it's a 16-node cluster, you're using 16GB of space
for the journals. That's maybe not a tragedy, but it's not likely to
give them any performance benefit either. Unless they need jdata, for
example, which is heavy on journal writes.
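For example, mkfs.gfs2 already knows the journal count from -j, so the
default could be capped by the total space the journals would consume.
A hypothetical sketch (the function name and the 12.5% cap are mine,
not anything from the patch; 4K blocks assumed):

	static uint64_t cap_journal_size(uint64_t jsize, uint64_t num_blocks,
					 unsigned num_journals)
	{
		/* Cap the combined journals at 12.5% of the fs */
		uint64_t max_each = (num_blocks / 8) / num_journals;

		if (jsize > max_each)
			jsize = max_each;
		if (jsize < 2048)	/* keep the 8MB minimum (2048 x 4K blocks) */
			jsize = 2048;
		return jsize;
	}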
Don't forget also that at a certain size, GFS2 journals can cross
resource group boundaries, and therefore have multiple segments to
manage. It may not be a big deal to carve out a 1GB journal when the
file system is shiny and new, but after two years of use the file system
may be severely fragmented, so gfs2_jadd may end up adding badly
fragmented journals, especially big ones. Adding a 128MB journal is
less likely to run into fragmentation concerns than a 1GB one. Writing
to a fragmented journal then slows things down, because the journal
extent map needed to reference it becomes complex, and that map is
used for every journal block written.
Red Hat File Systems