Re: xfs: does mkfs.xfs require fancy switches to get decent performance? (was Tux3 Report: How fast can we fsync?)

2015-05-11 Thread David Lang

On Mon, 11 May 2015, Daniel Phillips wrote:


> On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>>> It is a fact of life that when you change one aspect of an intimately
>>>> interconnected system, something else will change as well. You have
>>>> naive/nonexistent free space management now; when you design something
>>>> workable there it is going to impact everything else you've already
>>>> done. It's an easy bet that the impact will be negative, the only
>>>> question is to what degree.
>>>
>>> You might lose that bet. For example, suppose we do strictly linear
>>> allocation each delta, and just leave nice big gaps between the deltas
>>> for future expansion. Clearly, we run at similar or identical speed to
>>> the current naive strategy until we must start filling in the gaps, and
>>> at that point our layout is not any worse than XFS, which started bad
>>> and stayed that way.
>>
>> Umm, are you sure. If some areas of disk are faster than others is
>> still true on todays harddrives, the gaps will decrease the
>> performance (as you'll use up the fast areas more quickly).
>
> That's why I hedged my claim with similar or identical. The
> difference in media speed seems to be a relatively small effect
> compared to extra seeks. It seems that XFS puts big spaces between
> new directories, and suffers a lot of extra seeks because of it.
> I propose to batch new directories together initially, then change
> the allocation goal to a new, relatively empty area if a big batch
> of files lands on a directory in a crowded region. The big gaps
> would be on the order of delta size, so not really very big.


This is an interesting idea, but what happens if the files don't arrive as a big
batch, but rather trickle in over time (think a logserver that is putting files
into a bunch of directories at a fairly modest rate per directory)?


And when you then decide that you have to move the directory/file info, doesn't 
that create a potentially large amount of unexpected IO that could end up 
interfering with what the user is trying to do?


David Lang

___
Tux3 mailing list
Tux3@phunq.net
http://phunq.net/mailman/listinfo/tux3


Re: xfs: does mkfs.xfs require fancy switches to get decent performance? (was Tux3 Report: How fast can we fsync?)

2015-05-11 Thread Theodore Ts'o
On Tue, May 12, 2015 at 12:12:23AM +0200, Pavel Machek wrote:
> Umm, are you sure. If some areas of disk are faster than others is
> still true on todays harddrives, the gaps will decrease the
> performance (as you'll use up the fast areas more quickly).

It's still true.  The difference between O.D. and I.D. (outer diameter
vs inner diameter) LBA's is typically a factor of 2.  This is why
short-stroking works as a technique, and another way that people
doing competitive benchmarking can screw up and produce misleading
numbers.  (If you use partitions instead of the whole disk, you have
to use the same partition in order to make sure you aren't comparing
apples with oranges.)
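
To see the effect directly, here is a rough, illustrative sketch (not part of
this thread) that reads a few hundred megabytes near LBA 0 and again near the
end of a whole-disk device and reports the throughput of each region. The
device path is whatever disk you point it at; on a typical HDD the outer
number should come out roughly twice the inner one.

/* Illustrative only: compare sequential read throughput near the start
 * (outer tracks) and the end (inner tracks) of a raw disk.  Read-only;
 * point it at a whole-disk device, not a partition. */
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <time.h>
#include <unistd.h>

#define CHUNK (1 << 20)                /* 1 MiB per read */
#define TOTAL (256 << 20)              /* 256 MiB per region */

static double mb_per_sec(int fd, off_t start)
{
        struct timespec t0, t1;
        void *buf;
        off_t off;

        if (posix_memalign(&buf, 4096, CHUNK))
                exit(1);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (off = 0; off < TOTAL; off += CHUNK)
                if (pread(fd, buf, CHUNK, start + off) != CHUNK) {
                        perror("pread");
                        exit(1);
                }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        free(buf);
        return (TOTAL / 1048576.0) /
               ((t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
}

int main(int argc, char **argv)
{
        unsigned long long bytes;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0 || ioctl(fd, BLKGETSIZE64, &bytes) < 0) {
                perror(argv[1]);
                return 1;
        }
        printf("outer (LBA 0):   %.1f MB/s\n", mb_per_sec(fd, 0));
        printf("inner (LBA max): %.1f MB/s\n",
               mb_per_sec(fd, (off_t)(bytes - TOTAL)));
        return 0;
}

Build it with cc -O2 and run it as root against the raw disk; it only ever
reads, but the numbers are indicative rather than rigorous.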

Cheers,

- Ted

___
Tux3 mailing list
Tux3@phunq.net
http://phunq.net/mailman/listinfo/tux3


Re: xfs: does mkfs.xfs require fancy switches to get decent performance? (was Tux3 Report: How fast can we fsync?)

2015-05-11 Thread Daniel Phillips
Hi David,

On 05/11/2015 05:12 PM, David Lang wrote:
> On Mon, 11 May 2015, Daniel Phillips wrote:
>> On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>>>> It is a fact of life that when you change one aspect of an intimately
>>>>> interconnected system, something else will change as well. You have
>>>>> naive/nonexistent free space management now; when you design something
>>>>> workable there it is going to impact everything else you've already
>>>>> done. It's an easy bet that the impact will be negative, the only
>>>>> question is to what degree.
>>>>
>>>> You might lose that bet. For example, suppose we do strictly linear
>>>> allocation each delta, and just leave nice big gaps between the deltas
>>>> for future expansion. Clearly, we run at similar or identical speed to
>>>> the current naive strategy until we must start filling in the gaps, and
>>>> at that point our layout is not any worse than XFS, which started bad
>>>> and stayed that way.
>>>
>>> Umm, are you sure. If some areas of disk are faster than others is
>>> still true on todays harddrives, the gaps will decrease the
>>> performance (as you'll use up the fast areas more quickly).
>>
>> That's why I hedged my claim with similar or identical. The
>> difference in media speed seems to be a relatively small effect
>> compared to extra seeks. It seems that XFS puts big spaces between
>> new directories, and suffers a lot of extra seeks because of it.
>> I propose to batch new directories together initially, then change
>> the allocation goal to a new, relatively empty area if a big batch
>> of files lands on a directory in a crowded region. The big gaps
>> would be on the order of delta size, so not really very big.
>
> This is an interesting idea, but what happens if the files don't arrive as a big
> batch, but rather trickle in over time (think a logserver that is putting files
> into a bunch of directories at a fairly modest rate per directory)?

If files are trickling in then we can afford to spend a lot more time
finding nice places to tuck them in. Log server files are an especially
irksome problem for a redirect-on-write filesystem because the final
block tends to be rewritten many times and we must move it to a new
location each time, so every extent ends up as one block. Oh well. If
we just make sure to have some free space at the end of the file that
only that file can use (until everywhere else is full), then the
long-term result will be slightly ravelled blocks that nonetheless tend
to be on the same track or flash block as their logically contiguous
neighbours. There will be just zero or one empty data block mixed
into the file tail as we commit the tail block over and over with the
same allocation goal. Sometimes there will be a block or two of
metadata as well, which will eventually bake themselves into the
middle of contiguous data and stop moving around.

Putting this together, we have:

  * At delta flush, break out all the log type files
  * Dedicate some block groups to append type files
  * Leave lots of space between files in those block groups
  * Peek at the last block of the file to set the allocation goal

Something like that. What we don't want is to throw those files into
the middle of a lot of rewrite-all files, messing up both kinds of file.
We don't care much about keeping these files near the parent directory
because one big seek per log file in a grep is acceptable; we just need
to avoid thousands of big seeks within the file, and not dribble single
blocks all over the disk.
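
As a rough sketch of that policy (made-up names and constants, not actual
Tux3 code), the goal selection at delta flush might look something like
this:

/* Sketch of the allocation-goal policy described above -- hypothetical
 * structures and numbers, for illustration only.  Append-type (log) files
 * extend from their current last block inside a reserved, widely spaced
 * region; ordinary files keep a goal near their parent directory. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t block_t;

struct file_info {                    /* hypothetical per-file summary */
        block_t last_block;           /* physical address of the file's final block, 0 if none */
        block_t dir_goal;             /* goal derived from the parent directory */
        bool    append_type;          /* flagged as a log/append-pattern file */
};

enum {
        APPEND_BASE  = 1 << 20,       /* first block reserved for append-type files */
        FILE_SPACING = 1 << 14,       /* generous slack reserved per append file */
};

/* Choose the allocation goal for a file's next extent at delta flush. */
static block_t alloc_goal(const struct file_info *f, unsigned append_slot)
{
        if (!f->append_type)
                return f->dir_goal;           /* ordinary file: stay near the directory */
        if (f->last_block)
                return f->last_block + 1;     /* peek at the tail and extend in place */
        /* New append file: place it in the reserved region, leaving
         * FILE_SPACING blocks of slack after the previous append file. */
        return APPEND_BASE + (block_t)append_slot * FILE_SPACING;
}

int main(void)
{
        struct file_info logfile  = { .last_block = 0, .dir_goal = 4096, .append_type = true };
        struct file_info datafile = { .last_block = 0, .dir_goal = 4096, .append_type = false };

        printf("new log file goal:  %llu\n", (unsigned long long)alloc_goal(&logfile, 0));
        logfile.last_block = alloc_goal(&logfile, 0) + 7;   /* pretend 8 blocks were written */
        printf("log file tail goal: %llu\n", (unsigned long long)alloc_goal(&logfile, 0));
        printf("ordinary file goal: %llu\n", (unsigned long long)alloc_goal(&datafile, 0));
        return 0;
}

The only point is the shape of the decision: ordinary files chase their
parent directory, append files chase their own tail inside a reserved,
widely spaced region.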

It would also be nice to merge together extents somehow as the final
block is rewritten. One idea is to retain the final block dirty until
the next delta, and write it again into a contiguous position, so the
final block is always flushed twice. We already have the opportunistic
merge logic, but the redirty behavior and making sure it only happens
to log files would be a bit fiddly.

We will also play the incremental defragmentation card at some point,
but first we should try hard to control fragmentation in the first
place. Tux3 is well suited to online defragmentation because the delta
commit model makes it easy to move things around efficiently and safely,
but it does generate extra IO, so as a basic mechanism it is not ideal.
When we get to piling on features, that will be high on the list,
because it is relatively easy, and having that fallback gives a certain
sense of security.

> And when you then decide that you have to move the directory/file info, doesn't
> that create a potentially large amount of unexpected IO that could end up
> interfering with what the user is trying to do?

Right, we don't like that and don't plan to rely on it. What we hope
for is behavior that, when you slowly stir the pot, tends to improve the
layout just as often as it degrades it. It may indeed become harder to
find ideal places to put things as time goes by, but we also gain more
information to base decisions on.

Regards,

Daniel

___
Tux3 mailing list
Tux3@phunq.net
http://phunq.net/mailman/listinfo/tux3