On Fri, Mar 19, 2010 at 9:59 PM, Josef Bacik <[email protected]> wrote:
> On Fri, Mar 19, 2010 at 11:09:25AM +0800, Yan, Zheng  wrote:
>> On Thu, Mar 18, 2010 at 11:47 PM, Josef Bacik <[email protected]> wrote:
>> > A user reported a bug a few weeks back where if he set max_extent=1m
>> > and then did a dd and then stopped it, we would panic.  This is
>> > because I miscalculated how many extents would be needed for
>> > splits/merges.  It turns out I didn't actually take max_extent into
>> > account properly, since we only ever add 1 extent for a write, which
>> > isn't quite right for the case where, say, max_extent is 4k and we do
>> > 8k writes.  That would result in more than 1 extent.  So this patch
>> > makes us properly figure out how many extents are needed for the
>> > amount of data being written, and deals with splitting and merging
>> > better.  I've tested this ad nauseam and it works well.  This version
>> > doesn't depend on my per-cpu stuff.
>> > Thanks,
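>> >
>> > The accounting fix boils down to ceiling division of the write size
>> > by max_extent rather than always reserving for a single extent.  A
>> > hypothetical sketch of that arithmetic (illustrative only, not the
>> > actual btrfs kernel code; the helper name is made up):
>> >
>> >     def extents_needed(write_bytes, max_extent):
>> >         # Ceiling division: an 8k write with max_extent=4k needs 2
>> >         # reserved extents, not 1.
>> >         return -(-write_bytes // max_extent)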
>>
>> Why not remove the max_extent check altogether? The maximum length of
>> a file extent is also affected by the fragmentation level of free
>> space. It doesn't make sense to introduce complex code to address one
>> factor while losing sight of another. I think reserving one unit of
>> metadata for each delalloc extent in the extent IO tree should be OK,
>> because even if a delalloc extent ends up as multiple file extents,
>> those file extents are adjacent in the b-tree.
>>
>
> Do you mean remove the option for max_extent altogether, or just remove all of
> my code for taking it into account?  Thanks,
>

All of the code for taking max_extent into account.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html