Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-06-03 Thread Daniel Vetter
On Tue, Jun 03, 2014 at 01:42:03AM +, Lin, Mengdong wrote:
  -Original Message-
  From: Daniel Vetter [mailto:daniel.vet...@ffwll.ch] 
 
   Hi Daniel,
  
   Would you please share more info about your idea?
  
   - What would an avsink device represent here?
E.g. on Intel platforms, will the whole display device have a child
   avsink device or multiple avsink devices for each DDI port?
  
  My idea would be to have one for each output pipe (i.e. the link between
  audio and gfx), not one per ddi. Gfx driver would then let audio know
  when a screen is connected and which one (e.g. exact model serial from
  edid).
  This is somewhat important for dp mst where there's no longer a fixed
  relationship between audio pin and screen
 
 Thanks. But if we use avsink devices, I'd prefer to have an avsink device per
 DDI or several avsink devices per DDI, because:
 1. Without DP MST, there is a fixed mapping between each audio codec pin and
 DDI;
 2. With DP MST, the above pin:DDI mapping is still valid (at least on Intel
 platforms),
   and there is also a fixed mapping between each device (screen) and the
 pin/DDI it is connected to.
 3. The HD-Audio driver creates a PCM (audio stream) device for each pin.
   Keeping this behavior can make the audio driver work on platforms that do
 not implement the sound/gfx sync channel.
   And I guess in the future the audio driver will create more than one PCM
 device for a DP MST-capable pin, according to how many devices a DDI can
 support.
 
 4. A display mode change can change the pipe connected to a DDI even if the
 monitor stays on the same DDI.
   If we have an avsink device per pipe, the audio driver will have to check
 another avsink device in this case, which seems inconvenient.

All this can also be solved by making the connector/avsink/sound pcm known
to userspace and letting userspace figure it out. A few links in sysfs should
be good enough, plus exposing the full edid on the sound pcm side (so that
userspace can compare the serial number in the edid).
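If the full EDID is exposed on both sides, the comparison userspace would do is mechanical. A minimal sketch in plain C (user space, not kernel code; `same_monitor` is a hypothetical helper, but the byte offsets follow the standard EDID base block, where bytes 8-11 hold the manufacturer/product IDs and bytes 12-15 the 32-bit serial number):

```c
#include <stdint.h>
#include <string.h>

/* Extract the 32-bit monitor serial number from an EDID base block
 * (bytes 12-15, little-endian per the EDID specification). */
static uint32_t edid_serial(const uint8_t *edid)
{
    return (uint32_t)edid[12] | (uint32_t)edid[13] << 8 |
           (uint32_t)edid[14] << 16 | (uint32_t)edid[15] << 24;
}

/* Hypothetical check: does the EDID exposed on the sound PCM side
 * describe the same monitor as the one reported by the DRM connector?
 * Compares manufacturer ID + product code (bytes 8-11) and the serial. */
static int same_monitor(const uint8_t *pcm_edid, const uint8_t *drm_edid)
{
    return memcmp(&pcm_edid[8], &drm_edid[8], 4) == 0 &&
           edid_serial(pcm_edid) == edid_serial(drm_edid);
}
```

Userspace would read the two EDID blobs (e.g. via sysfs) and use a check like this to associate a PCM device with a connector.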

   - And for the relationship between audio driver and the avsink device,
   which would be the master and which would be the component?
  
  1:1 for avsink:alsa pin (iirc it's called a pin, not sure about the name).
  That way the audio driver has a clear point for getting at the eld and
  similar information.
 
 Since the audio driver usually already binds to some device (PCI or platform
 device),
 I think the audio driver cannot bind to the new avsink devices created by the
 display driver, and we need a new driver to handle these devices and the
 communication.
 
 While the display driver creates the new endpoint avsink devices, the audio 
 driver can also create the same number of audio endpoint devices.
 And we could let the audio endpoint device be the master and its peer display 
 endpoint device be the component.
 Thus the master/component framework can help us to bind/unbind each pair of 
 display/audio endpoint devices.
 
 Is it doable? If okay, I'll modify the RFC and see if there are other gaps.

Yeah, that should be doable. gfx creates avsink devices, audio binds to
them using the component framework.
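As a rough sketch of that split, using the kernel's component framework (drivers/base/component.h); the avsink device creation, `avsink_compare`, and the ops callbacks are hypothetical names here, so treat this as an outline rather than a tested patch:

```c
/* gfx side: register each avsink child device as a component. */
static const struct component_ops avsink_comp_ops = {
	.bind	= avsink_comp_bind,	/* hypothetical: publish ELD/EDID */
	.unbind	= avsink_comp_unbind,
};

static int avsink_probe(struct platform_device *pdev)
{
	return component_add(&pdev->dev, &avsink_comp_ops);
}

/* audio side: act as the aggregate master and bind once every
 * expected avsink component has shown up. */
static int hda_register_master(struct device *dev)
{
	struct component_match *match = NULL;

	component_match_add(dev, &match, avsink_compare, NULL);
	return component_master_add_with_match(dev, &hda_master_ops, match);
}
```

The framework then guarantees the master's bind() runs only after every matched component is available, which addresses the load-time ordering problem.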

   In addition, the component framework does not touch PM now.
   And introducing PM into the component framework does not seem easy, since
   there can be potential conflicts caused by the parent-child relationships
   of the involved devices.
  
  Yeah, the entire PM situation seems to be a bit bad. It also looks like on
  resume/suspend we still have problems, at least on the audio side since
  we need to coordinate between two completely different underlying devices.
  But at least with the parent-child relationship we have a guarantee that
  the avsink won't be suspended after the gfx device is already off.
  -Daniel
 
 Yes. You're right.
 And we could find a way to hide the Intel-specific display power well
 from the audio driver by using the runtime PM API on these devices.

Yeah, that's one of the goals I have here.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-06-02 Thread Lin, Mengdong
 -Original Message-
 From: Daniel Vetter [mailto:daniel.vet...@ffwll.ch] 

  Hi Daniel,
 
  Would you please share more info about your idea?
 
  - What would an avsink device represent here?
   E.g. on Intel platforms, will the whole display device have a child
  avsink device or multiple avsink devices for each DDI port?
 
 My idea would be to have one for each output pipe (i.e. the link between
 audio and gfx), not one per ddi. Gfx driver would then let audio know
 when a screen is connected and which one (e.g. exact model serial from
 edid).
 This is somewhat important for dp mst where there's no longer a fixed
 relationship between audio pin and screen

Thanks. But if we use avsink devices, I'd prefer to have an avsink device per
DDI or several avsink devices per DDI, because:
1. Without DP MST, there is a fixed mapping between each audio codec pin and
DDI;
2. With DP MST, the above pin:DDI mapping is still valid (at least on Intel
platforms),
  and there is also a fixed mapping between each device (screen) and the
pin/DDI it is connected to.
3. The HD-Audio driver creates a PCM (audio stream) device for each pin.
  Keeping this behavior can make the audio driver work on platforms that do
not implement the sound/gfx sync channel.
  And I guess in the future the audio driver will create more than one PCM
device for a DP MST-capable pin, according to how many devices a DDI can support.

4. A display mode change can change the pipe connected to a DDI even if the
monitor stays on the same DDI.
  If we have an avsink device per pipe, the audio driver will have to check
another avsink device in this case, which seems inconvenient.

  - And for the relationship between audio driver and the avsink device,
  which would be the master and which would be the component?
 
 1:1 for avsink:alsa pin (iirc it's called a pin, not sure about the name).
 That way the audio driver has a clear point for getting at the eld and
 similar information.

Since the audio driver usually already binds to some device (PCI or platform
device),
I think the audio driver cannot bind to the new avsink devices created by the
display driver, and we need a new driver to handle these devices and the
communication.

While the display driver creates the new endpoint avsink devices, the audio 
driver can also create the same number of audio endpoint devices.
And we could let the audio endpoint device be the master and its peer display 
endpoint device be the component.
Thus the master/component framework can help us to bind/unbind each pair of 
display/audio endpoint devices.

Is it doable? If okay, I'll modify the RFC and see if there are other gaps.

  In addition, the component framework does not touch PM now.
  And introducing PM into the component framework does not seem easy, since
  there can be potential conflicts caused by the parent-child relationships of
  the involved devices.
 
 Yeah, the entire PM situation seems to be a bit bad. It also looks like on
 resume/suspend we still have problems, at least on the audio side since
 we need to coordinate between two completely different underlying devices.
 But at least with the parent-child relationship we have a guarantee that
 the avsink won't be suspended after the gfx device is already off.
 -Daniel

Yes. You're right.
And we could find a way to hide the Intel-specific display power well from
the audio driver by using the runtime PM API on these devices.

Thanks
Mengdong


Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-22 Thread Lin, Mengdong
 -Original Message-
 From: Vetter, Daniel
 Sent: Tuesday, May 20, 2014 11:08 PM
 
 On 20/05/2014 16:57, Thierry Reding wrote:
  On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
  On Tue, May 20, 2014 at 4:29 PM, Imre Deak imre.d...@intel.com
 wrote:
   On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
   This RFC is based on previous discussion to set up a generic
   communication channel between display and audio driver and an
   internal design of Intel MCG/VPG HDMI audio driver. It's still
   an initial draft and your advice would be appreciated to
   improve the design.
   
   The basic idea is to create a new avsink module and let both
   drm and alsa depend on it.
   This new module provides a framework and APIs for
   synchronization between the display and audio driver.
   
   1. Display/Audio Client
   
   The avsink core provides APIs to create, register and lookup a
   display/audio client.
   A specific display driver (eg. i915) or audio driver (eg.
   HD-Audio
   driver) can create a client, add some resources objects (shared
   power wells, display outputs, and audio inputs, register ops)
   to the client, and then register this client to avsink core.
   The peer driver can look up a registered client by a name or
   type, or both. If a client gives a valid peer client name on
   registration, avsink core will bind the two clients as peer for
   each other. And we expect a display client and an audio client
   to be peers for each other in a system.
   
   One problem we have at the moment is the order of calling the
   system suspend/resume handlers of the display driver wrt. that of
   the audio driver. Since the power well control is part of the
   display HW block, we need to run the display driver's resume
   handler first, initialize the HW, and only then let the audio
   driver's resume handler run. For similar reasons we have to call
   the audio suspend handler first and only then the display driver
   resume handler. Currently we solve this using the display
   driver's late/early suspend/resume hooks, but we'd need a more robust
 solution.
   
   This seems to be a similar issue to the load time ordering
    problem that you describe later. Having a real device for avsink
    that would be a child of the display device would solve the
    ordering issue in both cases. I admit I haven't looked into whether
    this is feasible, but I would like to see some solution to this as
   part of
 the plan.
  
  Yeah, this is a big reason why I want real devices - we have piles
  of infrastructure to solve these ordering issues as soon as there's
  a struct device around. If we don't use that, we need to reinvent
  all those wheels ourselves.
  To make the driver core's magic work I think you'd need to find a way
  to reparent the audio device under the display device. Presumably they
  come from two different parts of the device tree (two different PCI
  devices I would guess for Intel, two different platform devices on
  SoCs). Changing the parent after a device has been registered doesn't
  work as far as I know. But even assuming that would work, I have
  trouble imagining what the implications would be on the rest of the driver
 model.
 
  I faced similar problems with the Tegra DRM driver, and the only way I
  can see to make this kind of interaction between devices work is by
  tacking on an extra layer outside the core driver model.

 That's why we need a new avsink device which is a proper child of the gfx
 device, and the audio driver needs to use the componentized device
 framework so that the suspend/resume ordering works correctly. Or at least
 that's been my idea, might be we have some small gaps here and there.
 -Daniel

Hi Daniel,

Would you please share more info about your idea?

- What would an avsink device represent here?
 E.g. on Intel platforms, will the whole display device have a child avsink 
device or multiple avsink devices for each DDI port?

- And for the relationship between audio driver and the avsink device, which 
would be the master and which would be the component?

In addition, the component framework does not touch PM now. 
And introducing PM into the component framework does not seem easy, since there
can be potential conflicts caused by the parent-child relationships of the
involved devices.

Thanks
Mengdong


Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-22 Thread Daniel Vetter
On Thu, May 22, 2014 at 02:59:56PM +, Lin, Mengdong wrote:
  -Original Message-
  From: Vetter, Daniel
  Sent: Tuesday, May 20, 2014 11:08 PM
  
  On 20/05/2014 16:57, Thierry Reding wrote:
   On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
   On Tue, May 20, 2014 at 4:29 PM, Imre Deak imre.d...@intel.com
  wrote:
On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
This RFC is based on previous discussion to set up a generic
communication channel between display and audio driver and an
internal design of Intel MCG/VPG HDMI audio driver. It's still
an initial draft and your advice would be appreciated to
improve the design.

The basic idea is to create a new avsink module and let both
drm and alsa depend on it.
This new module provides a framework and APIs for
synchronization between the display and audio driver.

1. Display/Audio Client

The avsink core provides APIs to create, register and lookup a
display/audio client.
A specific display driver (eg. i915) or audio driver (eg.
HD-Audio
driver) can create a client, add some resources objects (shared
power wells, display outputs, and audio inputs, register ops)
to the client, and then register this client to avsink core.
The peer driver can look up a registered client by a name or
type, or both. If a client gives a valid peer client name on
registration, avsink core will bind the two clients as peer for
each other. And we expect a display client and an audio client
to be peers for each other in a system.

One problem we have at the moment is the order of calling the
system suspend/resume handlers of the display driver wrt. that of
the audio driver. Since the power well control is part of the
display HW block, we need to run the display driver's resume
handler first, initialize the HW, and only then let the audio
driver's resume handler run. For similar reasons we have to call
the audio suspend handler first and only then the display driver
resume handler. Currently we solve this using the display
driver's late/early suspend/resume hooks, but we'd need a more robust
  solution.

This seems to be a similar issue to the load time ordering
problem that you describe later. Having a real device for avsink
that would be a child of the display device would solve the
ordering issue in both cases. I admit I haven't looked into whether
this is feasible, but I would like to see some solution to this as
part of
  the plan.
   
   Yeah, this is a big reason why I want real devices - we have piles
   of infrastructure to solve these ordering issues as soon as there's
   a struct device around. If we don't use that, we need to reinvent
   all those wheels ourselves.
   To make the driver core's magic work I think you'd need to find a way
   to reparent the audio device under the display device. Presumably they
   come from two different parts of the device tree (two different PCI
   devices I would guess for Intel, two different platform devices on
   SoCs). Changing the parent after a device has been registered doesn't
   work as far as I know. But even assuming that would work, I have
   trouble imagining what the implications would be on the rest of the driver
  model.
  
   I faced similar problems with the Tegra DRM driver, and the only way I
   can see to make this kind of interaction between devices work is by
   tacking on an extra layer outside the core driver model.
 
  That's why we need a new avsink device which is a proper child of the gfx
  device, and the audio driver needs to use the componentized device
  framework so that the suspend/resume ordering works correctly. Or at least
  that's been my idea, might be we have some small gaps here and there.
  -Daniel
 
 Hi Daniel,
 
 Would you please share more info about your idea?
 
 - What would an avsink device represent here?
  E.g. on Intel platforms, will the whole display device have a child
  avsink device or multiple avsink devices for each DDI port?

My idea would be to have one for each output pipe (i.e. the link between
audio and gfx), not one per ddi. Gfx driver would then let audio know when
a screen is connected and which one (e.g. exact model serial from edid).
This is somewhat important for dp mst where there's no longer a fixed
relationship between audio pin and screen

 
 - And for the relationship between audio driver and the avsink device,
 which would be the master and which would be the component?

1:1 for avsink:alsa pin (iirc it's called a pin, not sure about the name).
That way the audio driver has a clear point for getting at the eld and
similar information.

 In addition, the component framework does not touch PM now. 
 And introducing PM into the component framework does not seem easy, since
 there can be potential conflicts caused by the parent-child relationships of
 the involved devices.


Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-21 Thread Raymond Yau
 
  This RFC is based on previous discussion to set up a generic
communication channel between display and audio driver and
  an internal design of Intel MCG/VPG HDMI audio driver. It's still an
initial draft and your advice would be appreciated
  to improve the design.
 
  The basic idea is to create a new avsink module and let both drm and
alsa depend on it.


  1. Display/Audio Client
 
  The avsink core provides APIs to create, register and lookup a
display/audio client.


 For HD-audio HDMI, both the controller and the codec drivers would need
 avsink access.  So will both drivers register their own clients?

http://nvidia.custhelp.com/app/answers/detail/a_id/2544/~/my-nvidia-graphics-card-came-with-an-internal-spdif-pass-through-audio-cable-to

http://www.intel.com/support/motherboards/desktop/sb/CS-032871.htm

Does it mean that those graphics cards whose HDMI audio uses the motherboard's
internal S/PDIF pass-through connector will no longer be supported, since the
graphics card then has no way to communicate with the audio driver?

https://git.kernel.org/cgit/linux/kernel/git/tiwai/sound.git/commit/sound/pci/hda/hda_auto_parser.c?id=3f25dcf691ebf45924a34b9aaedec78e5a255798

Should ALSA regard this kind of digital device as HDMI or S/PDIF?


Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-20 Thread Jaroslav Kysela
On 20.5.2014 14:43, Thierry Reding wrote:
 On Tue, May 20, 2014 at 12:04:38PM +0200, Daniel Vetter wrote:
 Also adding dri-devel and linux-media. Please don't forget these lists for
 the next round.
 -Daniel

 On Tue, May 20, 2014 at 12:02:04PM +0200, Daniel Vetter wrote:
 Adding Greg just as an fyi since we've chatted briefly about the avsink
 bus. Comments below.
 -Daniel

 On Tue, May 20, 2014 at 02:52:19AM +, Lin, Mengdong wrote:
 This RFC is based on previous discussion to set up a generic communication 
 channel between display and audio driver and
 an internal design of Intel MCG/VPG HDMI audio driver. It's still an 
 initial draft and your advice would be appreciated
 to improve the design.

 The basic idea is to create a new avsink module and let both drm and alsa 
 depend on it.
 This new module provides a framework and APIs for synchronization between 
 the display and audio driver.

 1. Display/Audio Client

 The avsink core provides APIs to create, register and lookup a 
 display/audio client.
 A specific display driver (eg. i915) or audio driver (eg. HD-Audio driver) 
 can create a client, add some resources
 objects (shared power wells, display outputs, and audio inputs, register 
 ops) to the client, and then register this
 client to avsink core. The peer driver can look up a registered client by 
 a name or type, or both. If a client gives
 a valid peer client name on registration, avsink core will bind the two 
 clients as peer for each other. And we
 expect a display client and an audio client to be peers for each other in 
 a system.

  int avsink_new_client(const char *name,
                        int type,   /* client type: display or audio */
                        struct module *module,
                        void *context,
                        const char *peer_name,
                        struct avsink_client **client_ret);

  int avsink_free_client(struct avsink_client *client);
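To make the intended semantics concrete, here is a toy user-space model of the proposed registration and peer-binding behavior (plain C, not kernel code; the `struct module` argument and all locking are omitted, and every name beyond the quoted API is illustrative):

```c
#include <stddef.h>
#include <string.h>

#define AVSINK_DISPLAY 0
#define AVSINK_AUDIO   1
#define MAX_CLIENTS    8

struct avsink_client {
	const char *name;
	int type;			/* AVSINK_DISPLAY or AVSINK_AUDIO */
	void *context;
	const char *peer_name;
	struct avsink_client *peer;	/* set once both sides are registered */
};

static struct avsink_client clients[MAX_CLIENTS];
static int nclients;

/* Look up a registered client by name. */
static struct avsink_client *avsink_lookup(const char *name)
{
	for (int i = 0; i < nclients; i++)
		if (strcmp(clients[i].name, name) == 0)
			return &clients[i];
	return NULL;
}

/* Register a client; if the named peer is already registered,
 * bind the two as peers of each other. */
static int avsink_new_client(const char *name, int type, void *context,
			     const char *peer_name,
			     struct avsink_client **client_ret)
{
	struct avsink_client *c, *peer;

	if (nclients == MAX_CLIENTS)
		return -1;
	c = &clients[nclients++];
	c->name = name;
	c->type = type;
	c->context = context;
	c->peer_name = peer_name;
	c->peer = NULL;

	peer = peer_name ? avsink_lookup(peer_name) : NULL;
	if (peer) {
		c->peer = peer;
		peer->peer = c;
	}
	*client_ret = c;
	return 0;
}
```

With this model, i915 could register as a display client naming the HDA client as its peer; whichever side registers second completes the binding.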


 Hm, my idea was to create a new avsink bus and let vga drivers register
 devices on that thing and audio drivers register as drivers. There's a bit
 more work involved in creating a full-blown bus, but it has a lot of
 upsides:
 - Established infrastructure for matching drivers (i.e. audio drivers)
   against devices (i.e. avsinks exported by gfx drivers).
 - Module refcounting.
 - power domain handling and well-integrated into runtime pm.
 - Allows integration into componentized device framework since we're
   dealing with a real struct device.
 - Better decoupling between gfx and audio side since registration is done
   at runtime.
 - We can attach driver-private data which the audio driver needs.
 
 I think this would be another case where the interface framework[0]
 could potentially be used. It doesn't give you all of the above, but
 there's no reason it couldn't be extended. Then again, adding too much
 would end up duplicating more of the driver core, so if something really
 heavy-weight is required here, then the interface framework is not the
 best option.
 
 [0]: https://lkml.org/lkml/2014/5/13/525

This looks like the right direction. I would go this way rather than
create specific A/V grouping mechanisms. This seems to be applicable to
more use cases.

Jaroslav

-- 
Jaroslav Kysela pe...@perex.cz
Linux Kernel Sound Maintainer
ALSA Project; Red Hat, Inc.


Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-20 Thread Thierry Reding
On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
 On Tue, May 20, 2014 at 4:29 PM, Imre Deak imre.d...@intel.com wrote:
  On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
  This RFC is based on previous discussion to set up a generic
  communication channel between display and audio driver and
  an internal design of Intel MCG/VPG HDMI audio driver. It's still an
  initial draft and your advice would be appreciated
  to improve the design.
 
  The basic idea is to create a new avsink module and let both drm and
  alsa depend on it.
  This new module provides a framework and APIs for synchronization
  between the display and audio driver.
 
  1. Display/Audio Client
 
  The avsink core provides APIs to create, register and lookup a
  display/audio client.
  A specific display driver (eg. i915) or audio driver (eg. HD-Audio
  driver) can create a client, add some resources
  objects (shared power wells, display outputs, and audio inputs,
  register ops) to the client, and then register this
   client to avsink core. The peer driver can look up a registered
  client by a name or type, or both. If a client gives
  a valid peer client name on registration, avsink core will bind the
  two clients as peer for each other. And we
  expect a display client and an audio client to be peers for each other
  in a system.
 
  One problem we have at the moment is the order of calling the system
  suspend/resume handlers of the display driver wrt. that of the audio
  driver. Since the power well control is part of the display HW block, we
  need to run the display driver's resume handler first, initialize the
  HW, and only then let the audio driver's resume handler run. For similar
  reasons we have to call the audio suspend handler first and only then
  the display driver resume handler. Currently we solve this using the
  display driver's late/early suspend/resume hooks, but we'd need a more
  robust solution.
 
  This seems to be a similar issue to the load time ordering problem that
  you describe later. Having a real device for avsink that would be a
  child of the display device would solve the ordering issue in both
  cases. I admit I haven't looked into whether this is feasible, but I would
  like to see some solution to this as part of the plan.
 
 Yeah, this is a big reason why I want real devices - we have piles of
 infrastructure to solve these ordering issues as soon as there's a
 struct device around. If we don't use that, we need to reinvent all
 those wheels ourselves.

To make the driver core's magic work I think you'd need to find a way to
reparent the audio device under the display device. Presumably they come
from two different parts of the device tree (two different PCI devices I
would guess for Intel, two different platform devices on SoCs). Changing
the parent after a device has been registered doesn't work as far as I
know. But even assuming that would work, I have trouble imagining what
the implications would be on the rest of the driver model.

I faced similar problems with the Tegra DRM driver, and the only way I
can see to make this kind of interaction between devices work is by
tacking on an extra layer outside the core driver model.

Thierry




Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-20 Thread Daniel Vetter

On 20/05/2014 16:57, Thierry Reding wrote:

On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:

On Tue, May 20, 2014 at 4:29 PM, Imre Deak imre.d...@intel.com wrote:

 On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:

 This RFC is based on previous discussion to set up a generic
 communication channel between display and audio driver and
 an internal design of Intel MCG/VPG HDMI audio driver. It's still an
 initial draft and your advice would be appreciated
 to improve the design.
 
 The basic idea is to create a new avsink module and let both drm and
 alsa depend on it.
 This new module provides a framework and APIs for synchronization
 between the display and audio driver.
 
 1. Display/Audio Client
 
 The avsink core provides APIs to create, register and lookup a
 display/audio client.
 A specific display driver (eg. i915) or audio driver (eg. HD-Audio
 driver) can create a client, add some resources
 objects (shared power wells, display outputs, and audio inputs,
 register ops) to the client, and then register this
  client to avsink core. The peer driver can look up a registered
 client by a name or type, or both. If a client gives
 a valid peer client name on registration, avsink core will bind the
 two clients as peer for each other. And we
 expect a display client and an audio client to be peers for each other
 in a system.

 
 One problem we have at the moment is the order of calling the system
 suspend/resume handlers of the display driver wrt. that of the audio
 driver. Since the power well control is part of the display HW block, we
 need to run the display driver's resume handler first, initialize the
 HW, and only then let the audio driver's resume handler run. For similar
 reasons we have to call the audio suspend handler first and only then
 the display driver resume handler. Currently we solve this using the
 display driver's late/early suspend/resume hooks, but we'd need a more
 robust solution.
 
 This seems to be a similar issue to the load time ordering problem that
 you describe later. Having a real device for avsink that would be a
 child of the display device would solve the ordering issue in both
 cases. I admit I haven't looked into whether this is feasible, but I would
 like to see some solution to this as part of the plan.


Yeah, this is a big reason why I want real devices - we have piles of
infrastructure to solve these ordering issues as soon as there's a
struct device around. If we don't use that, we need to reinvent all
those wheels ourselves.

To make the driver core's magic work I think you'd need to find a way to
reparent the audio device under the display device. Presumably they come
from two different parts of the device tree (two different PCI devices I
would guess for Intel, two different platform devices on SoCs). Changing
the parent after a device has been registered doesn't work as far as I
know. But even assuming that would work, I have trouble imagining what
the implications would be on the rest of the driver model.

I faced similar problems with the Tegra DRM driver, and the only way I
can see to make this kind of interaction between devices work is by
tacking on an extra layer outside the core driver model.
That's why we need a new avsink device which is a proper child of the 
gfx device, and the audio driver needs to use the componentized device 
framework so that the suspend/resume ordering works correctly. Or at 
least that's been my idea, might be we have some small gaps here and there.

-Daniel



Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-20 Thread Thierry Reding
On Tue, May 20, 2014 at 05:07:51PM +0200, Daniel Vetter wrote:
 On 20/05/2014 16:57, Thierry Reding wrote:
 On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
On Tue, May 20, 2014 at 4:29 PM, Imre Deak imre.d...@intel.com wrote:
  On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
  This RFC is based on previous discussion to set up a generic
  communication channel between display and audio driver and
  an internal design of Intel MCG/VPG HDMI audio driver. It's still an
  initial draft and your advice would be appreciated
  to improve the design.
  
  The basic idea is to create a new avsink module and let both drm and
  alsa depend on it.
  This new module provides a framework and APIs for synchronization
  between the display and audio driver.
  
  1. Display/Audio Client
  
  The avsink core provides APIs to create, register and lookup a
  display/audio client.
  A specific display driver (eg. i915) or audio driver (eg. HD-Audio
  driver) can create a client, add some resources
  objects (shared power wells, display outputs, and audio inputs,
  register ops) to the client, and then register this
   client to avsink core. The peer driver can look up a registered
  client by a name or type, or both. If a client gives
  a valid peer client name on registration, avsink core will bind the
  two clients as peer for each other. And we
  expect a display client and an audio client to be peers for each other
  in a system.
  
  One problem we have at the moment is the order of calling the system
  suspend/resume handlers of the display driver wrt. that of the audio
  driver. Since the power well control is part of the display HW block, we
  need to run the display driver's resume handler first, initialize the
  HW, and only then let the audio driver's resume handler run. For similar
  reasons we have to call the audio suspend handler first and only then
  the display driver resume handler. Currently we solve this using the
  display driver's late/early suspend/resume hooks, but we'd need a more
  robust solution.
  
  This seems to be a similar issue to the load time ordering problem that
you describe later. Having a real device for avsink that would be a
child of the display device would solve the ordering issue in both
cases. I admit I haven't looked into whether this is feasible, but I would
  like to see some solution to this as part of the plan.
 
 Yeah, this is a big reason why I want real devices - we have piles of
 infrastructure to solve these ordering issues as soon as there's a
 struct device around. If we don't use that, we need to reinvent all
 those wheels ourselves.
 To make the driver core's magic work I think you'd need to find a way to
 reparent the audio device under the display device. Presumably they come
 from two different parts of the device tree (two different PCI devices I
 would guess for Intel, two different platform devices on SoCs). Changing
 the parent after a device has been registered doesn't work as far as I
 know. But even assuming that would work, I have trouble imagining what
 the implications would be on the rest of the driver model.
 
 I faced similar problems with the Tegra DRM driver, and the only way I
 can see to make this kind of interaction between devices work is by
 tacking on an extra layer outside the core driver model.
 That's why we need a new avsink device which is a proper child of the gfx
 device, and the audio driver needs to use the componentized device framework
 so that the suspend/resume ordering works correctly. Or at least that's been
 my idea, might be we have some small gaps here and there.

The component/master helpers don't allow you to do that. Essentially
what it does is provide a way to glue together multiple devices (the
components) to produce a meta-device (the master). What you get is a
pair of .bind()/.unbind() functions that are called on each of the
components when the master binds or unbinds the meta-device. I don't
see how that could be made to work for suspend/resume.

Thierry




Re: [Intel-gfx] [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-20 Thread Daniel Vetter
On Tue, May 20, 2014 at 5:15 PM, Thierry Reding
thierry.red...@gmail.com wrote:
 The component/master helpers don't allow you to do that. Essentially
 what it does is provide a way to glue together multiple devices (the
 components) to produce a meta-device (the master). What you get is a
 pair of .bind()/.unbind() functions that are called on each of the
 components when the master binds or unbinds the meta-device. I don't
 see how that could be made to work for suspend/resume.

Well, we could add a pm_ops pointer to the master and auto-register a
pile of suspend/resume hooks to all the component devices. Then we'd
suspend the master as soon as the first component gets suspended and
resume it only when the last component is resumed. Should be doable
with a bunch of refcounts.

On top of that we should be able to use runtime pm to do fine-grained
pm control for each component. So in my naive world here (never used
the component stuff myself after all) this should all work out ;-)
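A plain-C model of the refcounting idea (not the real pm_ops plumbing, and every name is illustrative; note that for a shared power well the master has to stay powered until the last component has suspended, so the sketch counts active components):

```c
#include <stdbool.h>

/* Toy model: the master (e.g. the gfx-side power domain) is resumed
 * before the first component becomes active and suspended only after
 * the last active component has suspended. */
struct master_pm {
	int active;		/* number of resumed components */
	bool master_on;
};

static void component_resume(struct master_pm *m)
{
	if (m->active++ == 0)
		m->master_on = true;	/* first user: power up the master */
}

static void component_suspend(struct master_pm *m)
{
	if (--m->active == 0)
		m->master_on = false;	/* last user gone: power down */
}
```

Runtime PM's usage counters implement essentially this pattern already, which is why routing the per-component control through runtime PM looks attractive.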
-Daniel