On Tue, May 06, 2014 at 01:26:44PM +0200, Hendrik Siedelmann wrote:
> On 06.05.2014 13:19, Hugo Mills wrote:
> >On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
> >>On 06.05.2014 12:59, Hugo Mills wrote:
> >>>On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
> >>>>Hello all!
> >>>>
> >>>>I would like to use btrfs (or anything else, actually) to maximize
> >>>>raid0 performance. Basically I have a relatively constant stream of
> >>>>data that simply has to be written out to disk. So my question is:
> >>>>how does the block allocator decide which device to write to, can
> >>>>this decision be dynamic, and could it incorporate timing/throughput
> >>>>decisions? I'm willing to write code, I just have no clue as to how
> >>>>this works right now. I read somewhere that the decision is based on
> >>>>free space; is this still true?
> >>>
> >>>   For (current) RAID-0 allocation, the block group allocator will
> >>>use as many chunks as there are devices with free space (down to a
> >>>minimum of 2). Data is then striped across those chunks in 64 KiB
> >>>stripes. Thus, the first block group will be N GiB of usable space,
> >>>striped across N devices.
> >>
> >>So do I understand this correctly that (assuming we have enough
> >>space) data will be spread equally between the disks, independent of
> >>write speeds? So one slow device would slow down the whole raid?
> >
> >   Yes. Exactly the same as it would be with DM RAID-0 on the same
> >configuration. There's not a lot we can do about that at this point.
>
> So striping is fixed, but which disk takes part with a chunk is
> dynamic? But for large workloads slower disks could 'skip a chunk',
> as chunk allocation is dynamic, correct?
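To make the striping described above concrete: within a RAID-0 block
group, a logical offset maps to a device round-robin, one 64 KiB stripe
at a time. The C sketch below (illustrative names only, not btrfs's
actual mapping code) shows why a continuous stream can never skip a
slow device: every Nth stripe lands on it.

/* Illustrative model of the 64 KiB RAID-0 striping described above.
 * A sketch only -- not btrfs's actual allocator code. */
#include <stdio.h>
#include <stdint.h>

#define STRIPE_LEN (64 * 1024)   /* 64 KiB stripe size */

struct stripe_loc {
    unsigned dev;        /* which of the N devices */
    uint64_t dev_off;    /* byte offset within that device's chunk */
};

static struct stripe_loc map_raid0(uint64_t logical, unsigned ndevs)
{
    uint64_t stripe_nr = logical / STRIPE_LEN;  /* which 64 KiB stripe */
    struct stripe_loc loc = {
        .dev     = stripe_nr % ndevs,           /* round-robin device */
        .dev_off = (stripe_nr / ndevs) * STRIPE_LEN
                   + logical % STRIPE_LEN,
    };
    return loc;
}

int main(void)
{
    /* Every device gets every Nth stripe, so one slow device throttles
     * the whole array: the stream cannot get ahead of it. */
    for (uint64_t off = 0; off < 4 * STRIPE_LEN; off += STRIPE_LEN) {
        struct stripe_loc l = map_raid0(off, 3);
        printf("logical %7llu -> dev %u, offset %llu\n",
               (unsigned long long)off, l.dev,
               (unsigned long long)l.dev_off);
    }
    return 0;
}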
   You'd have to rewrite the chunk allocator to do this, _and_ provide
different RAID levels for different subvolumes. The chunk/block group
allocator right now uses only one rule for allocating data, and one for
allocating metadata. Now, both of those features are planned, and
_might_ between them cover the use-case you're talking about, but I'm
not certain it's necessarily a sensible thing to do in this case.

   My question is: if you actually care about the performance of this
system, why are you buying some slow devices to drag the performance of
your fast devices down? It seems like a recipe for disaster...

> >>>   There's a second level of allocation (which I haven't looked at
> >>>at all), which is how the FS decides where to put data within the
> >>>allocated block groups. I think it will almost certainly be
> >>>beneficial in your case to use prealloc extents, which will turn
> >>>your continuous write into large contiguous sections of striping.
> >>
> >>Why does prealloc change anything? For me latency does not matter,
> >>only continuous throughput!
> >
> >   It makes the extent allocation algorithm much simpler, because it
> >can then allocate in larger chunks and do more linear writes.
>
> Is this still true if I do very large writes? Or do those get broken
> down by the kernel somewhere?

   I guess it'll depend on the approach you use to do these "very
large" writes, and on the exact definition of "very large". This is not
an area I know a huge amount about.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
       --- "I am the author. You are the audience. I outrank you!" ---
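For the prealloc suggestion above, one way to get preallocated extents
from user space is fallocate(2) on the output file before streaming
into it, so the FS can lay out large contiguous (striped) extents up
front. A minimal sketch with arbitrary sizes -- the 1 GiB reservation
and 8 MiB write buffer are assumptions, not from the thread:

/* Preallocate, then stream: a sketch, not a tuned implementation. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const off_t prealloc_len = 1024L * 1024 * 1024;  /* 1 GiB, arbitrary */
    int fd = open("stream.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Reserve the whole range in one go; mode 0 allocates the blocks
     * and extends the file size. */
    if (fallocate(fd, 0, 0, prealloc_len) < 0)
        perror("fallocate");      /* fall back to plain writes */

    /* Stream data in large aligned writes into the preallocated range. */
    char *buf;
    size_t bufsz = 8 * 1024 * 1024;                  /* 8 MiB, arbitrary */
    if (posix_memalign((void **)&buf, 4096, bufsz)) return 1;
    /* ... fill buf from the data source, loop until the stream ends ... */
    if (write(fd, buf, bufsz) < 0) perror("write");

    free(buf);
    close(fd);
    return 0;
}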