On Mon, Dec 02, 2019 at 02:06:53PM +0100, Anton Yakovlev wrote:
> Hi,
> Sorry for the late reply, I was not available for a week.
> On 25.11.2019 10:31, Gerd Hoffmann wrote:
> >    Hi,
> > 
> > > > Instead of PCM_OUTPUT and PCM_INPUT I would put the number of input and
> > > > output streams into the device config space (and explicitly allow count
> > > > being zero).
> > > 
> > > There are several options for the number of streams:
> > > 
> > > 1. The device can provide one input and/or output stream.
> > > 2. The device can provide several input and/or output streams, and
> > > all of them share the same capabilities.
> > > 3. The device can provide several input and/or output streams, and
> > > they can have different sets of capabilities.
> > > 
> > > The overall design depends on the chosen option. The current draft
> > > is based on p.1, but we have never discussed what is the best
> > > solution here.
> > 
> > (3) looks most useful to me.
> Then we need to refactor the device configuration. I would propose to
> put into the configuration space only the total number of all available
> PCM streams and introduce a special control request to query the
> per-stream configuration. A response should contain the stream type
> (input/output) and all its capabilities.

Yes, that will work fine too.

> > > > PCM_MSG -- I would drop the feature bit and make that mandatory, so we
> > > > have a common baseline on which all drivers and devices can agree on.
> > > 
> > > Then we need to inform the device what "transport" will be in use
> > > (I assumed it would be feature negotiation).
> > 
> > Whenever other transports (i.e. via shared memory) are supported: yes,
> > that should be a feature bit.
> > 
> > Not sure about choosing the transport.  If both msg (i.e. via virtqueue)
> > and shared memory are available, does it make sense to allow the driver
> > to choose the transport each time it starts a stream?
> A shared memory based transport will in any case require some additional
> actions. For the HOST_MEM case the driver will need to get access to the
> host buffer somehow. In the GUEST_MEM case the driver will need to
> provide a buffer for the host.
> At first sight, we could extend the set_params request with the
> transport_type field and some additional information.

Or have a per-transport set_params request command.

> For example, in case of GUEST_MEM the request could be followed by a
> buffer sg-list.

I'm not convinced guest_mem is that useful.  host_mem allows giving the
guest access to the buffers used by the host's sound hardware, which is
probably what you need if the MSG transport can't handle the latency
requirements you have.

> > > > > struct virtio_snd_pcm_status {
> > > > >       le32 status;
> > > > >       le32 actual_length;
> > > > > };
> > > > 
> > > > Hmm.  Why actual_length?  Do we want allow short transfers?  Why?
> > > > Wouldn't it be more useful to just fill the buffer completely?
> > > 
> > > In the current design we have no buffer size requirements. It means
> > > that, in theory, the device and the driver could have buffers with
> > > different sizes.
> > 
> > Should we define that the (virtio) buffer size must be period_bytes then?
> That would make no sense, since the driver chooses period_bytes at runtime.

Yep, the driver will choose period_bytes when starting a stream.
The driver will also queue the buffers (for the MSG transport),
so it should be no problem for the driver to make the two match.

> If we are going to introduce any buffer constraints, they must be set by
> the device in the stream configuration.

If we want to allow the device to specify the min/max period_bytes it can
handle, then yes, that should go into the stream configuration.

Or we use negotiation: the driver asks for a period_bytes value in
set-params, the device picks the closest period_bytes value it can handle
and returns that.

> > > Also, the capture stream is a special case. Currently we don't state
> > > explicitly whether a read request is blockable or not.
> > 
> > The concept of "blockable" doesn't exist at that level.  The driver
> > submits buffers to the device, the device fills them and notifies the
> > driver when the buffer is full.  It simply doesn't work like a read(2)
> > syscall.
> But you described exactly the "blockable" case: an I/O request is
> completed not immediately but upon some condition (the buffer is full).
> In case of a message-based transport, both the device and the driver
> will have their own buffers.

Well, no.  The device doesn't need any buffers, it can use the buffers
submitted by the driver.  Typical workflow:

  (1) The driver puts a bunch of empty buffers into the rx (record/read)
      virtqueue (each being period_bytes in size).
  (2) The driver starts recording.
  (3) The device fills the first buffer with recorded sound data.
  (4) When the buffer is full the device returns it to the driver,
      takes the next from the virtqueue to continue recording.
  (5) The driver takes the filled buffer and does whatever it wants to
      do with the data (typically pass it on to the userspace app).
  (6) The driver submits a new empty buffer to the virtqueue to make
      sure the device doesn't run out of buffers.

So, it's not a "here is a buffer, fill it please", "here is the next,
..." ping pong game between driver and device.  There is a queue with
multiple buffers instead, and the device fills them one by one.

Note: The device implementation can have additional buffering of course.
When using alsa, the library records into its own private buffers, then
copies the data to the buffer provided by the application when you call
snd_pcm_readi().  But this is not required.  The device implementation
could also pass the empty guest buffers directly on to the host sound
hardware (if the host hardware and host API allow that).

> And for capturing, these buffers might be filled at different speeds.
> For example, in order to improve latency, the device could complete
> requests immediately and fill in buffers with whatever it has at the
> moment.

Latency obviously depends on period_bytes.  If the driver cares about
latency it should simply work with lots of small buffers instead of a
few big ones.


To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org
