On 1 Sep 2014, at 9:22 pm, Mike Belopuhov <m...@belopuhov.com> wrote:

> On 1 September 2014 13:06, Stefan Fritsch <s...@sfritsch.de> wrote:
>> On Mon, 1 Sep 2014, Mike Belopuhov wrote:
>> 
>>> On 29 August 2014 22:39, Stefan Fritsch <s...@sfritsch.de> wrote:
>>>> Yes, that seems to be what happens. But if every adapter needs to support
>>>> transfers of MAXBSIZE == MAXPHYS anyway, there would be no need for the
>>>> adapter to be able to override the default minphys function with its own.
>>>> And adapters that only support smaller transfers would need to have logic
>>>> in their driver to be able to split the transfer into smaller chunks.
>>>> 
>>> 
>>> i believe that if you start digging you realise that (at least at some 
>>> point)
>>> the least common denominator is (or was) 64k, meaning that even the shittiest
>>> controller on vax can do 64k.  which means that we don't have code for a
>>> filesystem or buffercache to probe the controller for a supported transfer
>>> size.
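
for context, the default clamp that physio applies is tiny -- roughly this
(reconstructed from memory as a sketch, not a verbatim copy of the tree):

    /* default minphys: cap a raw transfer at the global MAXPHYS */
    void
    minphys(struct buf *bp)
    {
            if (bp->b_bcount > MAXPHYS)
                    bp->b_bcount = MAXPHYS;
    }

so as long as every controller really can do MAXPHYS, that one clamp is all
anybody needs.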
>> 
>> That's possible. But what is really strange is, why does openbsd then have
>> an infrastructure to set different max transfer sizes for physio on a
>> per-adapter basis? This makes no sense. Either the drivers have to support
>> 64k transfers, in which case most of the minphys infrastructure is
>> useless, or they don't have to. In the latter case the minphys
>> infrastructure would need to be used in all code paths.
>> 
> 
> i haven't found a controller that does less than MAXPHYS.
> perhaps they meant to improve the situation but stopped short.
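
the override stefan mentions amounts to a driver handing physio a clamp of its
own instead of the default one.  a rough sketch of what such a hook looks like
for a controller capped at 64k (foo_minphys and FOO_MAXXFER are made-up names,
not something from the tree):

    #define FOO_MAXXFER     (64 * 1024)     /* hypothetical hardware limit */

    void
    foo_minphys(struct buf *bp)
    {
            if (bp->b_bcount > FOO_MAXXFER)
                    bp->b_bcount = FOO_MAXXFER;
            minphys(bp);            /* still honour the global MAXPHYS cap */
    }

the catch stefan points out is that this only covers the physio path; nothing
forces the other code paths through the same clamp.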

if we wanted to raise MAXPHYS, we'd need some mechanism to support older
controllers that can't do transfers greater than 64k.
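
the splitting such a mechanism would have to do is conceptually simple --
carve a big request into chunks the old controller can take and issue them
back to back.  very rough sketch only (a real version would have to clone
struct buf and deal with completion and errors; OLD_MAXXFER and the issue
callback are invented for illustration):

    #define OLD_MAXXFER     (64 * 1024)

    void
    split_xfer(daddr_t blkno, char *data, size_t resid,
        void (*issue)(daddr_t, char *, size_t))
    {
            size_t chunk;

            while (resid > 0) {
                    chunk = resid > OLD_MAXXFER ? OLD_MAXXFER : resid;
                    (*issue)(blkno, data, chunk);
                    blkno += chunk >> DEV_BSHIFT;   /* 512-byte blocks */
                    data += chunk;
                    resid -= chunk;
            }
    }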

> 
>>>> I think it makes more sense to have that logic in one place to be used by
>>>> all drivers.
>>> 
>>> perhaps, but unless the filesystem issues breads for larger blocks, the only
>>> benefit would be physio itself, which doesn't really justify the change.
>> 
>> You lost me there. Why should this depend on how many larger-block requests
>> some filesystem makes? If there is any possibility that filesystems make such
>> requests, someone needs to split them. The question is who: the driver or the
>> block layer.
>> 
>> 
> 
> well, the filesystem doesn't issue requests for more than 64k at a time.
> newfs won't allow you to build a filesystem with 128k blocks.
> 
>> BTW, for my use case I don't actually want to limit the block size, but
>> rather the number of DMA segments. But the most reasonable way to do that
>> seemed to be to set minphys to max_segments * pagesize. If we change how
>> these things work, we could take the number of DMA segments into account.
> 
> can't help you with this one.
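
for what it's worth, one way to express what stefan describes is to fold the
segment limit into the clamp, i.e. limit b_bcount to max_segments * pagesize.
a sketch with invented names (BAR_MAXSEGS is hypothetical):

    void
    bar_minphys(struct buf *bp)
    {
            /*
             * BAR_MAXSEGS DMA segments, each covering at most one page.
             * note a buffer that isn't page aligned can straddle one
             * extra page, so a stricter clamp would use
             * (BAR_MAXSEGS - 1) * PAGE_SIZE.
             */
            if (bp->b_bcount > BAR_MAXSEGS * PAGE_SIZE)
                    bp->b_bcount = BAR_MAXSEGS * PAGE_SIZE;
            minphys(bp);
    }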

