Re: tx_queue_size with tap device

2022-07-28 Thread Michael S. Tsirkin
On Thu, Jul 28, 2022 at 10:51:22AM +0200, Markus Frank wrote:
> Hello,
> 
> I have a few questions concerning these commits.
> 
> commit 9b02e1618cf26aa52cf786f215d757506dda14f8
> commit 2eef278b9e6326707410eed23be40e57f6c331b7
> commit 0ea5778f066ea5c5e73246a4c11f0773edc4c45d
> 
> Therefore I added the developers in CC.
> 
> Do backends other than vhost-user or vhost-vdpa really not support the
> maximum tx_queue_size, and if so, why?
> For example, when using tap as the backend and adding NET_CLIENT_DRIVER_TAP
> as a case to the switch statement so that it can use the maximum size,
> ethtool also shows
> the configured tx_queue_size of 1024.
> 
> Commit 9b02e1618cf2 states that the maximum tx_queue_size for other
> backends is 512. Is this still the case? If so, shouldn't it be possible
> to default to that (512) when something higher (1024) is configured,
> instead of falling back to 256?
> 
> Also, is there another way to check/test the tx_queue_size besides ethtool?
> 
> Thanks in advance.
> 
> Markus

This is the QEMU virtio queue size, which limits two things at the same time:
1- how many packets can be in the queue
2- how long a scatter-gather list can be

Point 2 is what we are trying to limit here, so it has nothing to do
with ethtool.

That did not change.

In practical terms, Linux guests will not push scatter-gather lists
larger than 512 entries even if the queue size is larger, so you most
likely can increase the tx queue size and things will still work;
they are just not guaranteed by the spec.

As for changing the defaults: whether that is a gain or a loss will
depend on the workload.
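For experimentation, the queue size can be set explicitly on the device
instead of relying on the default. A typical invocation looks roughly like
this (memory size, netdev id, and most flags are illustrative placeholders;
only the `tx_queue_size` property is the one under discussion):

```shell
# Illustrative command line; ids and values are placeholders.
qemu-system-x86_64 \
    -m 1G \
    -netdev tap,id=net0 \
    -device virtio-net-pci,netdev=net0,tx_queue_size=512
```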

-- 
MST




Re: Snapshot customizing

2022-07-28 Thread Peter Maydell
On Wed, 27 Jul 2022 at 18:45, Jayakrishna Vadayath  
wrote:
> Yes I did profile it afterwards and found that the majority of time is being 
> spent on loading the RAM.
> I'm running QEMU with "-m 1G" and I do see that there's close to 1 gigabyte 
> of memory being loaded from the snapshot.
>
> In my use case, I need to repeatedly revert to a snapshot after 
> executing a small user-space program.
> I was wondering if it would be possible to identify the dirty pages in RAM 
> after this execution and only restore those pages from the snapshot when a 
> restore is encountered.
>
> I understand that this scenario might be very unique, but I just wanted to 
> know if such an idea would be feasible or not.

Potentially most of the machinery is present for that (tracking dirty
pages); but it would probably be a moderate development effort to
get it connected up to handle the snapshot revert case correctly,
and as you say it's for an extremely niche use case.

thanks
-- PMM



tx_queue_size with tap device

2022-07-28 Thread Markus Frank

Hello,

I have a few questions concerning these commits.

commit 9b02e1618cf26aa52cf786f215d757506dda14f8
commit 2eef278b9e6326707410eed23be40e57f6c331b7
commit 0ea5778f066ea5c5e73246a4c11f0773edc4c45d

Therefore I added the developers in CC.

Do backends other than vhost-user or vhost-vdpa really not support the
maximum tx_queue_size, and if so, why?
For example, when using tap as the backend and adding NET_CLIENT_DRIVER_TAP
as a case to the switch statement so that it can use the maximum size,
ethtool also shows
the configured tx_queue_size of 1024.

Commit 9b02e1618cf2 states that the maximum tx_queue_size for other
backends is 512. Is this still the case? If so, shouldn't it be possible
to default to that (512) when something higher (1024) is configured,
instead of falling back to 256?

Also, is there another way to check/test the tx_queue_size besides ethtool?

Thanks in advance.

Markus