Re: [Cluster-devel] [PATCH v2] mkfs.gfs2: Scale down journal size for smaller devices

2018-02-14  Andrew Price

Hi Bob,

On 14/02/18 14:13, Bob Peterson wrote:

Hi,

Comments below.

- Original Message -
| Currently the default behaviour when the journal size is not specified
| is to use a default size of 128M, which means that mkfs.gfs2 can run out
| of space while writing to a small device. The hard default also means
| that some xfstests fail with gfs2 as they try to create small file
| systems.
|
| This patch addresses these problems by setting sensible default journal
| sizes depending on the size of the file system. Journal sizes specified
| by the user are limited to half of the fs. As the minimum journal size
| is 8MB that means we effectively get a hard minimum file system size of
| 16MB (per journal).
|
| Signed-off-by: Andrew Price 
| ---
|
| v2: Andreas found that using 25% of the fs for journals was too large so this
| version separates the default journal size calculation from the check
| used
| for user-provided journal sizes, which allows for more sensible defaults.
| The default journal sizes for fs size ranges were taken from e2fsprogs.
|
|  gfs2/libgfs2/libgfs2.h |  2 ++
|  gfs2/man/mkfs.gfs2.8   |  5 +++--
|  gfs2/mkfs/main_mkfs.c  | 56 --
|  tests/edit.at  |  2 +-
|  tests/mkfs.at  | 10 +
|  tests/testsuite.at |  6 ++
|  6 files changed, 76 insertions(+), 5 deletions(-)
(snip)
| + if (num_blocks < 8192*1024) /* 32 GB */
| + return (32768); /* 128 MB */
| + if (num_blocks < 16384*1024)/* 64 GB */
| + return (65536); /* 256 MB */
| + if (num_blocks < 32768*1024)/* 128 GB */
| + return (131072);/* 512 MB */
| + return 262144;  /* 1 GB */

Perhaps you can adjust the indentation on the comment so it's clear
that the journal size is 1GB in this case, not the file system size?


The journal size comments are already aligned but I guess I could nudge 
the "1 GB" over a little :)



Here are some random thoughts on the matter:

I'm not sure I like the default journal size going up so quickly at
32GB. In most cases, 128MB journals should be adequate. I'd like to
see a much higher threshold that still uses 128MB journals.
Unless there's a high level of metadata pressure, after a certain point,
it's just wasted space.

I'd rather see 128MB journals go up to file systems of 1TB, for example.
I'm not sure it's ever worthwhile to use a 1GB journal, but I suppose
with today's faster storage and faster machines, maybe it would be.
Barry recently got some new super-fast storage; perhaps we should ask
him to test some metadata-intense benchmark to see if we can ever
push it to the point of waiting for journal writes. I'd use
instrumentation to tell us whenever journal writes need to wait for
journal space. Of course, a lot of that hinges on the bug I'm currently
working on where we often artificially wait too long for journal space.
(IOW, this is less of a concern when I get the bug fixed).


Good points. It would be useful to see some performance numbers with 
different journal/device sizes. For now, based on your comments, perhaps 
we can do something like


  fs size     jsize (at 4K blocks)
   < 512M       8M
   <   2G      16M
   <   8G      32M
   <  16G      64M
   <   1T     128M
   <  10T     512M
   >= 10T       1G

So we keep the current default of 128M journals for file systems between 
16G and 1T, and we keep the lower values the same to cater for Andreas' 
test cases. Above 1T, even a gigabyte of journal is not much wasted space, 
so we might as well scale up to the maximum.
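
As a rough sketch of how that table might look in code (purely illustrative: the function name is made up, it assumes 4K blocks, and it returns the journal size in fs blocks):

  #include <stdint.h>

  /* Illustrative only: pick a default journal size (in fs blocks) from
   * the file system size, assuming a 4K block size. The thresholds are
   * just the table above converted to block counts. */
  static unsigned int default_journal_blocks(uint64_t num_blocks)
  {
      if (num_blocks < 131072)          /* fs < 512 MB */
          return 2048;                  /* 8 MB journal */
      if (num_blocks < 524288)          /* fs < 2 GB */
          return 4096;                  /* 16 MB */
      if (num_blocks < 2097152)         /* fs < 8 GB */
          return 8192;                  /* 32 MB */
      if (num_blocks < 4194304)         /* fs < 16 GB */
          return 16384;                 /* 64 MB */
      if (num_blocks < 268435456)       /* fs < 1 TB */
          return 32768;                 /* 128 MB */
      if (num_blocks < 2684354560ULL)   /* fs < 10 TB */
          return 131072;                /* 512 MB */
      return 262144;                    /* 1 GB */
  }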



Also, don't forget that GFS2, unlike other file systems, requires a
journal for each node, and that should also be factored into the
calculations.


Yes, the changes added in sbd_init() that do a '/ opts->journals' take 
the number of journals into account.
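
To illustrate the idea (hypothetical variable names only, not the actual sbd_init() code):

  /* Sketch: with several journals, cap each journal so that all the
   * journals together never take more than half of the file system.
   * The input variables are assumed to come from the caller. */
  uint64_t fs_blocks = device_bytes / block_size;
  uint64_t max_per_journal = (fs_blocks / 2) / num_journals;

  if (journal_blocks > max_per_journal)
      journal_blocks = max_per_journal;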



Don't forget also that at a certain size, GFS2 journals can cross
resource group boundaries,


For a while now that's only been true of journals added with gfs2_jadd; 
mkfs.gfs2 always creates single-extent journals.



and therefore have multiple segments to
manage. It may not be a big deal to carve out a 1GB journal when the
file system is shiny and new, but after two years of use, the file system
may be severely fragmented, so gfs2_jadd may add journals that are
severely fragmented, especially if they're big. Adding a 128MB journal
is less likely to get into fragmentation concerns than a 1GB journal.
Writing to a fragmented journal then becomes a slow-down because the
journal extent map needed to reference it becomes complex, and it's
used for every journal block written.

All good points to consider. I haven't touched gfs2_jadd yet, but perhaps 
it would be better to leave it as-is in that case. That said, we should 
be fallocate()ing new journals, and fallocate() should be doing its best 
to avoid fragmentation, although I accept it won't always manage it.
Re: [Cluster-devel] [PATCH v2] mkfs.gfs2: Scale down journal size for smaller devices

2018-02-14  Bob Peterson
Hi,

Comments below.

- Original Message -
| Currently the default behaviour when the journal size is not specified
| is to use a default size of 128M, which means that mkfs.gfs2 can run out
| of space while writing to a small device. The hard default also means
| that some xfstests fail with gfs2 as they try to create small file
| systems.
| 
| This patch addresses these problems by setting sensible default journal
| sizes depending on the size of the file system. Journal sizes specified
| by the user are limited to half of the fs. As the minimum journal size
| is 8MB that means we effectively get a hard minimum file system size of
| 16MB (per journal).
| 
| Signed-off-by: Andrew Price 
| ---
| 
| v2: Andreas found that using 25% of the fs for journals was too large so this
| version separates the default journal size calculation from the check
| used
| for user-provided journal sizes, which allows for more sensible defaults.
| The default journal sizes for fs size ranges were taken from e2fsprogs.
| 
|  gfs2/libgfs2/libgfs2.h |  2 ++
|  gfs2/man/mkfs.gfs2.8   |  5 +++--
|  gfs2/mkfs/main_mkfs.c  | 56 --
|  tests/edit.at  |  2 +-
|  tests/mkfs.at  | 10 +
|  tests/testsuite.at |  6 ++
|  6 files changed, 76 insertions(+), 5 deletions(-)
(snip)
| + if (num_blocks < 8192*1024) /* 32 GB */
| + return (32768); /* 128 MB */
| + if (num_blocks < 16384*1024)/* 64 GB */
| + return (65536); /* 256 MB */
| + if (num_blocks < 32768*1024)/* 128 GB */
| + return (131072);/* 512 MB */
| + return 262144;  /* 1 GB */

Perhaps you can adjust the indentation on the comment so it's clear
that the journal size is 1GB in this case, not the file system size?

Here are some random thoughts on the matter:

I'm not sure I like the default journal size going up so quickly at
32GB. In most cases, 128MB journals should be adequate. I'd like to
see a much higher threshold that still uses 128MB journals.
Unless there's a high level of metadata pressure, after a certain point,
it's just wasted space.

I'd rather see 128MB journals go up to file systems of 1TB, for example.
I'm not sure it's ever worthwhile to use a 1GB journal, but I suppose
with today's faster storage and faster machines, maybe it would be.
Barry recently got some new super-fast storage; perhaps we should ask
him to test some metadata-intense benchmark to see if we can ever
push it to the point of waiting for journal writes. I'd use
instrumentation to tell us whenever journal writes need to wait for
journal space. Of course, a lot of that hinges on the bug I'm currently
working on where we often artificially wait too long for journal space.
(IOW, this is less of a concern when I get the bug fixed).

Also, don't forget that GFS2, unlike other file systems, requires a
journal for each node, and that should also be factored into the
calculations. So if you have a 1TB file system and it chooses a journal
size of 1GB, but it's a 16-node cluster, you're using 16GB of space
for the journals. That's maybe not a tragedy, but it's not likely to
give them any performance benefit either, unless they need jdata, for
example, which is heavy on journal writes.

Don't forget also that at a certain size, GFS2 journals can cross
resource group boundaries, and therefore have multiple segments to
manage. It may not be a big deal to carve out a 1GB journal when the
file system is shiny and new, but after two years of use, the file system
may be severely fragmented, so gfs2_jadd may add journals that are
severely fragmented, especially if they're big. Adding a 128MB journal
is less likely to get into fragmentation concerns than a 1GB journal.
Writing to a fragmented journal then becomes a slow-down because the
journal extent map needed to reference it becomes complex, and it's
used for every journal block written.

Regards

Bob Peterson
Red Hat File Systems