On 02.05.2012 18:35, Kevin Wolf wrote:
>> As I already mentioned, the virtio protocol has the same defect (but there
>> it is less serious due to usage of larger units).  And that's where the
>> additional overflow needs to be ELIMINATED, not just checked.  Ie, the
>> protocol should be changed somehow - the only issue is that I don't know
>> how to do that now, once it has been in use for quite some time.
> Even if you create a new version of the protocol (introduce a new
> feature flag or whatever), newer qemus will still have to deal with
> older guests and vice versa.

Sure.  And for these, the checks indeed should be done in lower layers.

> But now that you're extending the property value to 32 bits, but only 25
> bits can be really used, the property type doesn't even theoretically
> suffice as a check.

So, what do you propose?  To add a check into virtio-blk.c (and into
whatever other place uses this variable) too?  If so, and it indeed
appears to be the right thing to do, please show me the right place,
since I'm not very familiar with that code...

> But I can't see which structure is only used by virtio-blk, though. The
> min_io_size property is contained in DEFINE_BLOCK_PROPERTIES(), which is
> used by every qdevified block device. Most of them ignore min_io_size,
> but virtio-blk and scsi-disk don't.

I mean the BlockConf struct.  It isn't used anywhere but in virtio-blk.c.

But again, since I'm not familiar with the code, I might be wrong.


> That wouldn't be a good interface. Let's just take a 32 bit number and
> add the checks in the devices.

That's fine.  The only thing left to do is to find the proper places for
the checks.  Help?

> Just curious... What values do you want to use? The 32 MB minimum I/O
> size that are possible with 16 bits and 512 byte sectors already sounds
> insanely large.

I don't "use" any values.  I merely pass whatever is defined on my systems
down to the guest.

The md layer in the kernel - the raid4, raid5 and raid6 implementation - sets
min_io_size to the chunk size, and opt_io_size to the stripe size.  It is not
uncommon at all to have a chunk size of 1 MB or more.  I've no idea how useful
this information is in general, but with it present (my small raid5 array has a
256 KB chunk size), an xfs created in the guest performs much faster than
without this information (which means the usual 512 bytes there).

This is how I discovered the issue - I wondered why an xfs created in the guest
is so much slower than the same xfs created on the host.  I/O sizes immediately
came to mind, so I added these, but it was still slow.  Then I noticed that
min_io_size isn't passed correctly, increased the size of this type, and voila,
xfs created in the guest now behaves as fast as one created on the host.
Something like that.

There's an obvious bug in there, but it is not obvious to me where/how it
should be fixed.  Maybe the sizes used by md raid5 are insane, that's arguable,
but this is what is in use now (and has been since the day the I/O sizes were
added to md in the first place), and this is what makes things fly.


