Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Thu, Jun 23, 2016 at 12:38:48PM +0200, Henrik Austad wrote:

> Richard: is it fair to assume that if ptp4l is running and is part of a PTP
> domain, ktime_get() will return PTP-adjusted time for the system?

No.

> Or do I also need to run phc2sys in order to sync the system-time
> to PTP-time?

Yes, unless you are using SW time stamping, in which case ptp4l will steer
the system clock directly.

HTH,
Richard
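For concreteness, the two-daemon setup Richard describes could look like the
following (interface name and config file are placeholders, not taken from
the thread):

```
# ptp4l disciplines the NIC's PHC to the PTP/gPTP domain on eth0.
ptp4l -i eth0 -f gPTP.cfg -m &

# phc2sys then steers the system clock (CLOCK_REALTIME) to the PHC,
# waiting (-w) until ptp4l has locked before starting.
phc2sys -s eth0 -w -m &
```

With SW time stamping only, the second step is unnecessary: ptp4l steers the
system clock itself.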
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 21, 2016 at 10:45:18AM -0700, Pierre-Louis Bossart wrote:
> On 6/20/16 5:18 AM, Richard Cochran wrote:
>> On Mon, Jun 20, 2016 at 01:08:27PM +0200, Pierre-Louis Bossart wrote:
>>> The ALSA API provides support for 'audio' timestamps (playback/capture
>>> rate defined by audio subsystem) and 'system' timestamps (typically
>>> linked to TSC/ART) with one option to take synchronized timestamps
>>> should the hardware support them.
>>
>> Thanks for the info. I just skimmed
>> Documentation/sound/alsa/timestamping.txt.
>>
>> That is fairly new, only since v4.1. Are there any apps in the wild
>> that I can look at? AFAICT, OpenAVB, gstreamer, etc, don't use the
>> new API.
>
> The ALSA API supports a generic .get_time_info callback; its
> implementation is for now limited to a regular 'DMA' or 'link' timestamp
> for HDaudio, the difference being which counters are used and how close
> they are to the link serializer. The synchronized part is still WIP but
> should come 'soon'.

Interesting, would you mind CCing me in on those patches?

>>> The intent was that the 'audio' timestamps are translated to a shared
>>> time reference managed in userspace by gPTP, which in turn would define
>>> if (adaptive) audio sample rate conversion is needed. There is no
>>> support at the moment for a 'play_at' function in ALSA, only means to
>>> control a feedback loop.
>>
>> Documentation/sound/alsa/timestamping.txt says:
>>
>>     If supported in hardware, the absolute link time could also be used
>>     to define a precise start time (patches WIP)
>>
>> Two questions:
>>
>> 1. Where are the patches? (If some are coming, I would appreciate
>>    being on CC!)
>>
>> 2. Can you mention specific HW that would support this?
>
> You can experiment with the 'dma' and 'link' timestamps today on any
> HDaudio-based device. Like I said, the synchronized part has not been
> upstreamed yet (delays + dependency on ART-to-TSC conversions that made it
> into the kernel recently).

Ok, I think I see a way to hook this into timestamps from the skbuff on
incoming frames, and a somewhat messy way on outgoing. Having time coupled
with 'avail' and 'delay' is useful, and from the looks of it, 'link' time
is the appropriate level to add this.

I'm working on storing the time in the tsn_link struct I use, and then
reading that from the avb_alsa-shim. Details are still a bit fuzzy, but I
plan to do that and then see what audio-time gives me once it is up and
running.

Richard: is it fair to assume that if ptp4l is running and is part of a PTP
domain, ktime_get() will return PTP-adjusted time for the system? Or do I
also need to run phc2sys in order to sync the system-time to PTP-time?
Note that this is for outgoing traffic; Rx should perhaps use the timestamp
in the skb.

Hooking into ktime_get() instead of directly into the PTP subsystem (if
that is even possible) makes it a lot easier to debug when running this in
a VM, as it doesn't *have* to use PTP-time when I'm crashing a new kernel :)

Thanks!

--
Henrik Austad
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On 6/21/16 12:40 PM, Richard Cochran wrote:
> On Tue, Jun 21, 2016 at 10:45:18AM -0700, Pierre-Louis Bossart wrote:
>> You can experiment with the 'dma' and 'link' timestamps today on any
>> HDaudio-based device. Like I said the synchronized part has not been
>> upstreamed yet (delays + dependency on ART-to-TSC conversions that made
>> it in the kernel recently)
>
> Can you point me to any open source apps using the dma/link timestamps?

Those timestamps are only used in custom applications at the moment, not
'mainstream' open source. It takes time for new kernel capabilities to
trickle into userspace; applications usually align on the lowest hardware
common denominator. In addition, most applications don't access the kernel
directly but go through an audio server or HAL, which needs to be updated
as well, so it's a two-level dependency. These timestamps can be directly
mapped to the Android AudioTrack.getTimestamp API, though.
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 21, 2016 at 10:45:18AM -0700, Pierre-Louis Bossart wrote:
> You can experiment with the 'dma' and 'link' timestamps today on any
> HDaudio-based device. Like I said the synchronized part has not been
> upstreamed yet (delays + dependency on ART-to-TSC conversions that made it
> in the kernel recently)

Can you point me to any open source apps using the dma/link timestamps?

Thanks,
Richard
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On 6/20/16 5:18 AM, Richard Cochran wrote:
> On Mon, Jun 20, 2016 at 01:08:27PM +0200, Pierre-Louis Bossart wrote:
>> The ALSA API provides support for 'audio' timestamps (playback/capture
>> rate defined by audio subsystem) and 'system' timestamps (typically
>> linked to TSC/ART) with one option to take synchronized timestamps
>> should the hardware support them.
>
> Thanks for the info. I just skimmed
> Documentation/sound/alsa/timestamping.txt.
>
> That is fairly new, only since v4.1. Are there any apps in the wild
> that I can look at? AFAICT, OpenAVB, gstreamer, etc, don't use the
> new API.

The ALSA API supports a generic .get_time_info callback; its implementation
is for now limited to a regular 'DMA' or 'link' timestamp for HDaudio, the
difference being which counters are used and how close they are to the link
serializer. The synchronized part is still WIP but should come 'soon'.

>> The intent was that the 'audio' timestamps are translated to a shared
>> time reference managed in userspace by gPTP, which in turn would define
>> if (adaptive) audio sample rate conversion is needed. There is no
>> support at the moment for a 'play_at' function in ALSA, only means to
>> control a feedback loop.
>
> Documentation/sound/alsa/timestamping.txt says:
>
>     If supported in hardware, the absolute link time could also be used
>     to define a precise start time (patches WIP)
>
> Two questions:
>
> 1. Where are the patches? (If some are coming, I would appreciate
>    being on CC!)
>
> 2. Can you mention specific HW that would support this?

You can experiment with the 'dma' and 'link' timestamps today on any
HDaudio-based device. Like I said, the synchronized part has not been
upstreamed yet (delays + dependency on ART-to-TSC conversions that made it
into the kernel recently).
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On 6/20/16 5:31 AM, Richard Cochran wrote:
> On Mon, Jun 20, 2016 at 02:18:38PM +0200, Richard Cochran wrote:
>> Documentation/sound/alsa/timestamping.txt says:
>>
>>     Examples of timestamping with HDaudio:
>>
>>     1. DMA timestamp, no compensation for DMA+analog delay
>>     $ ./audio_time -p --ts_type=1
>
> Where is this "audio_time" program of which you speak?

alsa-lib/test
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Tue, 21 Jun 2016 08:38:57 +0200, Richard Cochran wrote:
> On Tue, Jun 21, 2016 at 07:54:32AM +0200, Takashi Iwai wrote:
>>> I still would appreciate an answer to my other questions, though...
>>
>> Currently HD-audio (both ASoC and legacy ones) are the only drivers
>> providing the link timestamp. In the recent code, it's PCM
>> get_time_info ops, so you can easily grep it.
>
> Yes, I found that myself, thanks.
>
>> HTH,
>
> No it doesn't help me, because I asked three questions, and none were
> about the link timestamp.

?? The extended audio timestamp is essentially there to return the link
timestamp. Just the term has changed over time...

Takashi
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 21, 2016 at 07:54:32AM +0200, Takashi Iwai wrote:
>> I still would appreciate an answer to my other questions, though...
>
> Currently HD-audio (both ASoC and legacy ones) are the only drivers
> providing the link timestamp. In the recent code, it's PCM
> get_time_info ops, so you can easily grep it.

Yes, I found that myself, thanks.

> HTH,

No it doesn't help me, because I asked three questions, and none were
about the link timestamp.

Thanks,
Richard
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Mon, 20 Jun 2016 17:21:26 +0200, Richard Cochran wrote:
> On Mon, Jun 20, 2016 at 02:31:48PM +0200, Richard Cochran wrote:
>> Where is this "audio_time" program of which you speak?
>
> Never mind, found it in alsa-lib.
>
> I still would appreciate an answer to my other questions, though...

Currently HD-audio (both ASoC and legacy ones) are the only drivers
providing the link timestamp. In the recent code, it's PCM
get_time_info ops, so you can easily grep it.

HTH,
Takashi
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 20, 2016 at 02:31:48PM +0200, Richard Cochran wrote:
> Where is this "audio_time" program of which you speak?

Never mind, found it in alsa-lib.

I still would appreciate an answer to my other questions, though...

Thanks,
Richard
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 20, 2016 at 02:18:38PM +0200, Richard Cochran wrote:
> Documentation/sound/alsa/timestamping.txt says:

    Examples of timestamping with HDaudio:

    1. DMA timestamp, no compensation for DMA+analog delay
    $ ./audio_time -p --ts_type=1

Where is this "audio_time" program of which you speak?

Thanks,
Richard
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 20, 2016 at 01:08:27PM +0200, Pierre-Louis Bossart wrote:
> The ALSA API provides support for 'audio' timestamps (playback/capture
> rate defined by audio subsystem) and 'system' timestamps (typically
> linked to TSC/ART) with one option to take synchronized timestamps should
> the hardware support them.

Thanks for the info. I just skimmed
Documentation/sound/alsa/timestamping.txt.

That is fairly new, only since v4.1. Are there any apps in the wild
that I can look at? AFAICT, OpenAVB, gstreamer, etc, don't use the
new API.

> The intent was that the 'audio' timestamps are translated to a shared
> time reference managed in userspace by gPTP, which in turn would define
> if (adaptive) audio sample rate conversion is needed. There is no support
> at the moment for a 'play_at' function in ALSA, only means to control a
> feedback loop.

Documentation/sound/alsa/timestamping.txt says:

    If supported in hardware, the absolute link time could also be used
    to define a precise start time (patches WIP)

Two questions:

1. Where are the patches? (If some are coming, I would appreciate
   being on CC!)

2. Can you mention specific HW that would support this?

Thanks,
Richard
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 20, 2016 at 01:08:27PM +0200, Pierre-Louis Bossart wrote:
>> Presentation time is either set by
>> a) Local sound card performing capture (in which case it will be
>>    'capture time')
>> b) Local media application sending a stream across the network
>>    (time when the sample should be played out remotely)
>> c) Remote media application streaming data *to* host, in which case it
>>    will be local presentation time on local soundcard
>>
>>> This value is dominant to the number of events included in an IEC
>>> 61883-1 packet. If this TSN subsystem decides it, most of these items
>>> don't need to be in ALSA.
>>
>> Not sure if I understand this correctly.
>>
>> TSN should have a reference to the timing-domain of each *local*
>> sound-device (for local capture or playback) as well as the shared
>> time-reference provided by gPTP.
>>
>> Unless an End-station acts as GrandMaster for the gPTP-domain, time set
>> forth by gPTP is immutable and cannot be adjusted. It follows that the
>> sample-frequency of the local audio-devices must be adjusted, or the
>> audio-streams to/from said devices must be resampled.
>
> The ALSA API provides support for 'audio' timestamps
> (playback/capture rate defined by audio subsystem) and 'system'
> timestamps (typically linked to TSC/ART) with one option to take
> synchronized timestamps should the hardware support them.

Ok, this sounds promising, and very much in line with what AVB would need.

> The intent was that the 'audio' timestamps are translated to a
> shared time reference managed in userspace by gPTP, which in turn
> would define if (adaptive) audio sample rate conversion is needed.
> There is no support at the moment for a 'play_at' function in ALSA,
> only means to control a feedback loop.

Ok, I understand that the 'play_at' is difficult to obtain, but it sounds
like it is doable to achieve something useful. Looks like I will be looking
into what to put in the .trigger handler in the ALSA shim and experimenting
with this to see how it makes sense to connect it to the TSN-stream.

Thanks!

--
Henrik Austad
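The .trigger idea Henrik mentions can be sketched standalone. The types and
names below are invented stand-ins (not from the RFC patch set); the point
is only the shape of a trigger handler that latches a start time into the
link state so the transmit path can stamp outgoing 1722 frames against it:

```c
#include <stdint.h>

/* Stand-ins for ALSA trigger commands -- real kernel code would use the
 * constants from <sound/pcm.h>. */
#define SNDRV_PCM_TRIGGER_STOP  0
#define SNDRV_PCM_TRIGGER_START 1

/* Simplified mock of the tsn_link state the thread refers to. */
struct tsn_link {
    uint64_t start_time_ns;   /* latched when playback starts */
    int running;
};

/* Sketch of what an avb_alsa-shim .trigger handler could do: on START,
 * record the current (PTP-correlated) time so the Tx path has a reference
 * for presentation timestamps; on STOP, just mark the stream idle. */
static int avb_pcm_trigger(struct tsn_link *link, int cmd, uint64_t now_ns)
{
    switch (cmd) {
    case SNDRV_PCM_TRIGGER_START:
        link->start_time_ns = now_ns;
        link->running = 1;
        return 0;
    case SNDRV_PCM_TRIGGER_STOP:
        link->running = 0;
        return 0;
    default:
        return -1;  /* -EINVAL in real kernel code */
    }
}
```

In a real driver the `now_ns` argument would come from the clock discussed
above (system time steered by phc2sys, or the PHC directly).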
Re: [alsa-devel] [very-RFC 0/8] TSN driver for the kernel
> Presentation time is either set by
> a) Local sound card performing capture (in which case it will be
>    'capture time')
> b) Local media application sending a stream across the network
>    (time when the sample should be played out remotely)
> c) Remote media application streaming data *to* host, in which case it
>    will be local presentation time on local soundcard
>
>> This value is dominant to the number of events included in an IEC
>> 61883-1 packet. If this TSN subsystem decides it, most of these items
>> don't need to be in ALSA.
>
> Not sure if I understand this correctly.
>
> TSN should have a reference to the timing-domain of each *local*
> sound-device (for local capture or playback) as well as the shared
> time-reference provided by gPTP.
>
> Unless an End-station acts as GrandMaster for the gPTP-domain, time set
> forth by gPTP is immutable and cannot be adjusted. It follows that the
> sample-frequency of the local audio-devices must be adjusted, or the
> audio-streams to/from said devices must be resampled.

The ALSA API provides support for 'audio' timestamps (playback/capture
rate defined by audio subsystem) and 'system' timestamps (typically linked
to TSC/ART) with one option to take synchronized timestamps should the
hardware support them.

The intent was that the 'audio' timestamps are translated to a shared time
reference managed in userspace by gPTP, which in turn would define if
(adaptive) audio sample rate conversion is needed. There is no support at
the moment for a 'play_at' function in ALSA, only means to control a
feedback loop.
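The feedback loop Pierre-Louis describes boils down to simple arithmetic:
given two synchronized (audio, system) timestamp pairs, the ratio of elapsed
audio time to elapsed system/gPTP time tells you whether the device runs
fast or slow. A hedged sketch (the function name is invented; real code
would get the pairs from snd_pcm_status()):

```c
#include <stdint.h>

/* Estimate the media-clock frequency ratio from two (audio, system)
 * timestamp pairs. A ratio > 1.0 means the sound card advances audio time
 * faster than the shared system/gPTP time, so an adaptive resampler would
 * need to compensate accordingly. Illustrative math only; assumes
 * sys1_ns > sys0_ns. */
static double media_clock_ratio(uint64_t audio0_ns, uint64_t sys0_ns,
                                uint64_t audio1_ns, uint64_t sys1_ns)
{
    double d_audio = (double)(audio1_ns - audio0_ns);
    double d_sys = (double)(sys1_ns - sys0_ns);
    return d_audio / d_sys;
}
```

This is exactly the quantity a userspace gPTP-aware daemon would feed into
its sample rate conversion decision.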
Re: [very-RFC 0/8] TSN driver for the kernel
On Sun, Jun 19, 2016 at 11:46:29AM +0200, Richard Cochran wrote:
> On Sun, Jun 19, 2016 at 12:45:50AM +0200, Henrik Austad wrote:
>> edit: this turned out to be a somewhat lengthy answer. I have tried to
>> shorten it down somewhere. it is getting late and I'm getting
>> increasingly incoherent (Richard probably knows what I'm talking
>> about ;) so I'll stop for now.
>
> Thanks for your responses, Henrik. I think your explanations are on spot.
>
>> note that an adjustable sample-clock is not a *requirement* but in
>> general you'd want to avoid resampling in software.
>
> Yes, but..
>
> Adjusting the local clock rate to match the AVB network rate is
> essential. You must be able to *continuously* adjust the rate in
> order to compensate drift. Again, there are exactly two ways to do
> it, namely in hardware (think VCO) or in software (dynamic
> resampling).

Don't get me wrong, having an adjustable clock for the sampling is
essential, but it is not *required*.

> What you cannot do is simply buffer the AV data and play it out
> blindly at the local clock rate.

No, you cannot do that, that would not be pretty :)

> Regarding the media clock, if I understand correctly, there the talker
> has two possibilities. Either the talker samples the stream at the
> gPTP rate, or the talker must tell the listeners the relationship
> (phase offset and frequency ratio) between the media clock and the
> gPTP time. Please correct me if I got the wrong impression...

Last things first; AFAIK, there is no way for the Talker to tell a
Listener the phase offset/frequency ratio other than how each
end-station/bridge in the gPTP-domain calculates this on psync_update
event messages. I could be wrong though, and different encoding formats
can probably convey such information. I have not seen any such mechanisms
in the underlying 1722 format though.

So a Talker should send a stream sampled as if the gPTP time drove the
AD/DA sample frequency directly. Whether the local sampling is driven by
gPTP or resampled to match gPTP-time prior to transmit is left as an
implementation detail for the end-station.

Did all that make sense?

Thanks!

--
Henrik Austad
Re: [very-RFC 0/8] TSN driver for the kernel
On Sun, Jun 19, 2016 at 11:46:29AM +0200, Richard Cochran wrote: > On Sun, Jun 19, 2016 at 12:45:50AM +0200, Henrik Austad wrote: > > edit: this turned out to be a somewhat lengthy answer. I have tried to > > shorten it down somewhere. it is getting late and I'm getting increasingly > > incoherent (Richard probably knows what I'm talking about ;) so I'll stop > > for now. > > Thanks for your responses, Henrik. I think your explanations are on spot. > > > note that an adjustable sample-clock is not a *requirement* but in general > > you'd want to avoid resampling in software. > > Yes, but.. > > Adjusting the local clock rate to match the AVB network rate is > essential. You must be able to *continuously* adjust the rate in > order to compensate drift. Again, there are exactly two ways to do > it, namely in hardware (think VCO) or in software (dynamic > resampling). Don't get me wrong, having an adjustable clock for the sampling is essential -but it si not -required-. > What you cannot do is simply buffer the AV data and play it out > blindly at the local clock rate. No, that you cannot do that, that would not be pretty :) > Regarding the media clock, if I understand correctly, there the talker > has two possibilities. Either the talker samples the stream at the > gPTP rate, or the talker must tell the listeners the relationship > (phase offset and frequency ratio) between the media clock and the > gPTP time. Please correct me if I got the wrong impression... Last first; AFAIK, there is no way for the Talker to tell a Listener the phase offset/freq ratio other than how each end-station/bridge in the gPTP-domain calculates this on psync_update event messages. I could be wrong though, and different encoding formats can probably convey such information. I have not seen any such mechanisms in the underlying 1722 format though. So a Talker should send a stream sampled as if the gPTP time drove the AD/DA sample frequency directly. 
Whether the local sampling is driven by gPTP or resampled to match gPTP-time prior to transmit is left as an implementation detail for the end-station. Did all that make sense? Thanks! -- Henrik Austad
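Henrik's point that a Talker should send a stream sampled as if gPTP drove the AD/DA clock can be sketched numerically: each sample effectively carries a timestamp on the gPTP timeline, and a Talker whose local clock is not slaved to gPTP has to know its rate ratio against gPTP time. A minimal sketch; the helper names are made up for illustration, not any kernel or ALSA API:

```python
NS_PER_SEC = 1_000_000_000

def gptp_sample_times(t0_ns, rate_hz, n):
    """Timestamps each sample would carry if gPTP time drove the AD/DA
    sample clock directly (t0_ns is the gPTP time of the first sample)."""
    period_ns = NS_PER_SEC / rate_hz        # ~20833.3 ns at 48 kHz
    return [round(t0_ns + i * period_ns) for i in range(n)]

def rate_ratio(local_elapsed_ns, gptp_elapsed_ns):
    """Frequency ratio of the local sample clock to gPTP time; a Talker
    whose clock is not slaved to gPTP must resample by this ratio."""
    return local_elapsed_ns / gptp_elapsed_ns
```

At 48 kHz the per-sample spacing on the gPTP timeline is about 20.8 us; a ratio different from 1.0 is exactly the drift that either the hardware clock or a software resampler has to absorb.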
Re: [very-RFC 0/8] TSN driver for the kernel
On Sun, Jun 19, 2016 at 12:45:50AM +0200, Henrik Austad wrote: > edit: this turned out to be a somewhat lengthy answer. I have tried to > shorten it down somewhat. it is getting late and I'm getting increasingly > incoherent (Richard probably knows what I'm talking about ;) so I'll stop > for now. Thanks for your responses, Henrik. I think your explanations are spot on. > note that an adjustable sample-clock is not a *requirement* but in general > you'd want to avoid resampling in software. Yes, but.. Adjusting the local clock rate to match the AVB network rate is essential. You must be able to *continuously* adjust the rate in order to compensate for drift. Again, there are exactly two ways to do it, namely in hardware (think VCO) or in software (dynamic resampling). What you cannot do is simply buffer the AV data and play it out blindly at the local clock rate. Regarding the media clock, if I understand correctly, there the talker has two possibilities. Either the talker samples the stream at the gPTP rate, or the talker must tell the listeners the relationship (phase offset and frequency ratio) between the media clock and the gPTP time. Please correct me if I got the wrong impression... Thanks, Richard
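Both of Richard's compensation options (a hardware VCO or software dynamic resampling) need a continuously updated control signal derived from the observed drift. A toy sketch of such a feedback loop; the gains are arbitrary illustration values, not tuned for any real device:

```python
class RateServo:
    """Tiny proportional-integral servo: feed it the observed offset (ns)
    between the playout position and AVB network time, get back a rate
    correction to apply to a VCO or a software resampler."""

    def __init__(self, kp=1e-9, ki=1e-10):
        self.kp = kp
        self.ki = ki
        self.integral = 0.0

    def update(self, offset_ns):
        self.integral += offset_ns
        # > 1.0 means we are behind network time: speed the local rate up.
        return 1.0 + self.kp * offset_ns + self.ki * self.integral
```

The sign convention and gains are assumptions; the point is only that the correction must be recomputed continuously while the stream plays, which is what rules out blind buffered playout.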
Re: [very-RFC 0/8] TSN driver for the kernel
On Sat, Jun 18, 2016 at 02:22:13PM +0900, Takashi Sakamoto wrote: > Hi, Hi Takashi, You raise a lot of valid points and questions, I'll try to answer them. edit: this turned out to be a somewhat lengthy answer. I have tried to shorten it down somewhat. it is getting late and I'm getting increasingly incoherent (Richard probably knows what I'm talking about ;) so I'll stop for now. Please post a follow-up with everything that's not clear! Thanks! > Sorry to be late. During the week I have little time for this thread > because I'm working on alsa-lib[1]. Besides, I'm not a full-time developer > for this kind of work. In short, I use my limited private time for this > discussion. Thank you for taking the time to reply to this thread then, it is much appreciated. > On Jun 15 2016 17:06, Richard Cochran wrote: > > On Wed, Jun 15, 2016 at 12:15:24PM +0900, Takashi Sakamoto wrote: > >>> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote: > I have seen audio PLL/multiplier chips that will take, for example, a > 10 kHz input and produce your 48 kHz media clock. With the right HW > design, you can tell your PTP Hardware Clock to produce a 1 PPS, > and you will have a synchronized AVB endpoint. The software is all > there already. Somebody should tell the ALSA guys about it. > >> > >> Just out of curiosity, could I ask you for more explanation of it on the > >> ALSA side? > > > > (Disclaimer: I really don't know too much about ALSA, except that it is > > fairly big and complex ;) > > This morning I read IEEE 1722:2011 and realized that it refers to > IEC 61883-1/6 only quite roughly and leaves many ambiguities to end > applications. As far as I know, 1722 aims to describe how the data is wrapped in an AVTPDU (and likewise for control-data), not how the end-station should implement it. If there are ambiguities, would you mind listing a few? It would serve as a useful guide for spotting other pitfalls as well (thanks!) 
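As a rough illustration of the AVTPDU wrapping Henrik describes (data plus a presentation timestamp in front of the payload), here is a deliberately simplified framing sketch. The real IEEE 1722 header carries more fields and flags (sv/mr/tv bits, gateway_info, and a CIP header for 61883-6 streams), so treat this as an illustration of the idea, not the wire format:

```python
import struct

def pack_avtpdu_sketch(seq, stream_id, avtp_timestamp, payload):
    """Simplified AVTPDU-like framing: one subtype byte, a sequence
    number, the payload length, a 64-bit stream ID and a 32-bit
    presentation timestamp, followed by the audio payload."""
    SUBTYPE_61883 = 0x00   # 61883/IIDC encapsulation subtype in IEEE 1722:2011
    header = struct.pack("!BBHQI",
                         SUBTYPE_61883,
                         seq & 0xFF,
                         len(payload),
                         stream_id,
                         avtp_timestamp & 0xFFFFFFFF)
    return header + payload
```

The key property the standard does fix is that the timestamp travels with the data, while everything about how the end-station produces or consumes the samples is left open, which is exactly the ambiguity being discussed.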
> (In my opinion, the authors just focus on packets with timestamps, > without enough consideration of how to implement endpoint applications > which perform semi-realtime sampling, fetching, queueing and so on, as > do you. They're satisfied just by handling packets with timestamps, without > enough consideration of actual hardware/software applications.) You are correct, none of the standards explain exactly how it should be implemented, only what the end result should look like. One target of this collection of standards is embedded, dedicated AV equipment, and the authors have no way of knowing (nor should they care, I think) the underlying architecture of these. > > Here is what I think ALSA should provide: > > > > - The DA and AD clocks should appear as attributes of the HW device. This would be very useful and helpful when determining if the HW clock is falling behind or racing ahead of the gPTP time domain. It will also help in finding the capture time or calculating when a sample in the buffer will be played back by the device. > > - There should be a method for measuring the DA/AD clock rate with > > respect to both the system time and the PTP Hardware Clock (PHC) > > time. as above. > > - There should be a method for adjusting the DA/AD clock rate if > > possible. If not, then ALSA should fall back to sample rate > > conversion. This is not a requirement from the standard, but it will help avoid costly resampling. At least it should be possible to detect the *need* for resampling so that we can try to avoid underruns. > > - There should be a method to determine the time delay from the point > > when the audio data are enqueued into ALSA until they pass through > > the D/A converter. If this cannot be known precisely, then the > > library should provide an estimate with an error bound. > > > > - I think some AVB use cases will need to know the time delay from A/D > > until the data are available to the local application. 
(Distributed > > microphones? I'm not too sure about that.) Yes, if you have multiple microphones that you want to combine into a stream and do signal processing, some cases require sample-sync (so within 1 us accuracy for 48 kHz). > > - If the DA/AD clocks are connected to other clock devices in HW, > > there should be a way to find this out in SW. For example, if SW > > can see the PTP-PHC-PLL-DA relationship from the above example, then > > it knows how to synchronize the DA clock using the network. > > > > [ Implementing this point involves other subsystems beyond ALSA. It > > isn't really necessary for people designing AVB systems, since > > they know their designs, but it would be nice to have for writing > > generic applications that can deal with any kind of HW setup. ] > > Depends on which subsystem decides "AVTP presentation time"[3]. Presentation time is either set by a) Local sound card performing capture (in which case it will be 'capture
Re: [very-RFC 0/8] TSN driver for the kernel
Hi, Sorry to be late. During the week I have little time for this thread because I'm working on alsa-lib[1]. Besides, I'm not a full-time developer for this kind of work. In short, I use my limited private time for this discussion. On Jun 15 2016 17:06, Richard Cochran wrote: > On Wed, Jun 15, 2016 at 12:15:24PM +0900, Takashi Sakamoto wrote: >>> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote: I have seen audio PLL/multiplier chips that will take, for example, a 10 kHz input and produce your 48 kHz media clock. With the right HW design, you can tell your PTP Hardware Clock to produce a 1 PPS, and you will have a synchronized AVB endpoint. The software is all there already. Somebody should tell the ALSA guys about it. >> >> Just out of curiosity, could I ask you for more explanation of it on the >> ALSA side? > > (Disclaimer: I really don't know too much about ALSA, except that it is > fairly big and complex ;) This morning I read IEEE 1722:2011 and realized that it refers to IEC 61883-1/6 only quite roughly and leaves many ambiguities to end applications. (In my opinion, the authors just focus on packets with timestamps, without enough consideration of how to implement endpoint applications which perform semi-realtime sampling, fetching, queueing and so on, as do you. They're satisfied just by handling packets with timestamps, without enough consideration of actual hardware/software applications.) > Here is what I think ALSA should provide: > > - The DA and AD clocks should appear as attributes of the HW device. > > - There should be a method for measuring the DA/AD clock rate with > respect to both the system time and the PTP Hardware Clock (PHC) > time. > > - There should be a method for adjusting the DA/AD clock rate if > possible. If not, then ALSA should fall back to sample rate > conversion. > > - There should be a method to determine the time delay from the point > when the audio data are enqueued into ALSA until they pass through > the D/A converter. 
If this cannot be known precisely, then the > library should provide an estimate with an error bound. > > - I think some AVB use cases will need to know the time delay from A/D > until the data are available to the local application. (Distributed > microphones? I'm not too sure about that.) > > - If the DA/AD clocks are connected to other clock devices in HW, > there should be a way to find this out in SW. For example, if SW > can see the PTP-PHC-PLL-DA relationship from the above example, then > it knows how to synchronize the DA clock using the network. > > [ Implementing this point involves other subsystems beyond ALSA. It > isn't really necessary for people designing AVB systems, since > they know their designs, but it would be nice to have for writing > generic applications that can deal with any kind of HW setup. ] Depends on which subsystem decides the "AVTP presentation time"[3]. This value determines the number of events included in an IEC 61883-1 packet. If this TSN subsystem decides it, most of these items don't need to be in ALSA. As far as I know, the number of AVTPDUs per second does not seem to be fixed. So no application can calculate the timestamp on its own unless the TSN implementation gives the information to each application. For your information, in the current ALSA implementation of IEC 61883-1/6 on the IEEE 1394 bus, the presentation timestamp is decided on the ALSA side. The number of isochronous packets transmitted per second is fixed at 8,000 in IEEE 1394, and the number of data blocks in an IEC 61883-1 packet is deterministic according to the 'sampling transfer frequency' in IEC 61883-6 and the isochronous cycle count passed from the Linux FireWire subsystem. In the TSN subsystem, as in the FireWire subsystem, the callback for filling the payload should carry the information of 'when the packet is scheduled to be transmitted'. With that information, each application can calculate the number of events in the packet and the presentation timestamp. 
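Takashi's arithmetic can be sketched directly. The constants are the nominal IEEE 1394 values he cites (8,000 isochronous cycles per second; the 1394 cycle timer runs at 24.576 MHz), and the helper names are illustrative, not the FireWire subsystem's API:

```python
PACKETS_PER_SEC = 8000            # IEEE 1394 isochronous cycles per second
TICKS_PER_SEC = 24_576_000        # 1394 cycle timer rate, 24.576 MHz

def events_per_packet(rate_hz):
    """Average data blocks (audio events) per IEC 61883-6 packet."""
    return rate_hz / PACKETS_PER_SEC   # 48 kHz -> 6.0, 44.1 kHz -> 5.5125

def presentation_timestamp(tx_cycle, transfer_delay_ticks):
    """SYT-style presentation time: the cycle the packet is scheduled to
    go out in, plus a fixed transfer delay, in cycle-timer ticks."""
    ticks_per_cycle = TICKS_PER_SEC // PACKETS_PER_SEC  # 3072 ticks / 125 us
    return tx_cycle * ticks_per_cycle + transfer_delay_ticks
```

Note the 44.1 kHz case is non-integer, which is why the per-packet block count is only deterministic given the cycle count: the fractional remainder has to be distributed across cycles.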
Of course, this timestamp should be handled as 'avtp_timestamp' in packet queueing. >> In ALSA, sampling rate conversion should be in userspace, not in kernel >> land. In alsa-lib, sampling rate conversion is implemented in a shared object. >> When userspace applications start playback/capture, depending on the PCM >> node they access, these applications load the shared object and convert PCM >> frames from a buffer in userspace to the mmapped DMA buffer, then commit them. > > The AVB use case places an additional requirement on the rate > conversion. You will need to adjust the frequency on the fly, as the > stream is playing. I would guess that ALSA doesn't have that option? In the current ALSA kernel/userspace interfaces, this cannot be supported at all. Please explain this requirement: where does it come from, and which specification and clause describe it (802.1AS or 802.1Q?). As far as I read IEEE 1722, I cannot
Re: [very-RFC 0/8] TSN driver for the kernel
On Wed, Jun 15, 2016 at 09:04:41AM +0200, Richard Cochran wrote: > On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote: > > Whereas I want to do > > > > aplay some_song.wav > > Can you please explain how your patches accomplish this? Never mind. Looking back, I found it in patch #7. Thanks, Richard
Re: [very-RFC 0/8] TSN driver for the kernel
On Wed, Jun 15, 2016 at 12:15:24PM +0900, Takashi Sakamoto wrote: > > On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote: > >> I have seen audio PLL/multiplier chips that will take, for example, a > >> 10 kHz input and produce your 48 kHz media clock. With the right HW > >> design, you can tell your PTP Hardware Clock to produce a 1 PPS, > >> and you will have a synchronized AVB endpoint. The software is all > >> there already. Somebody should tell the ALSA guys about it. > Just out of curiosity, could I ask you for more explanation of it on the > ALSA side? (Disclaimer: I really don't know too much about ALSA, except that it is fairly big and complex ;) Here is what I think ALSA should provide: - The DA and AD clocks should appear as attributes of the HW device. - There should be a method for measuring the DA/AD clock rate with respect to both the system time and the PTP Hardware Clock (PHC) time. - There should be a method for adjusting the DA/AD clock rate if possible. If not, then ALSA should fall back to sample rate conversion. - There should be a method to determine the time delay from the point when the audio data are enqueued into ALSA until they pass through the D/A converter. If this cannot be known precisely, then the library should provide an estimate with an error bound. - I think some AVB use cases will need to know the time delay from A/D until the data are available to the local application. (Distributed microphones? I'm not too sure about that.) - If the DA/AD clocks are connected to other clock devices in HW, there should be a way to find this out in SW. For example, if SW can see the PTP-PHC-PLL-DA relationship from the above example, then it knows how to synchronize the DA clock using the network. [ Implementing this point involves other subsystems beyond ALSA. 
It isn't really necessary for people designing AVB systems, since they know their designs, but it would be nice to have for writing generic applications that can deal with any kind of HW setup. ] > In ALSA, sampling rate conversion should be in userspace, not in kernel > land. In alsa-lib, sampling rate conversion is implemented in a shared object. > When userspace applications start playback/capture, depending on the PCM > node they access, these applications load the shared object and convert PCM > frames from a buffer in userspace to the mmapped DMA buffer, then commit them. The AVB use case places an additional requirement on the rate conversion. You will need to adjust the frequency on the fly, as the stream is playing. I would guess that ALSA doesn't have that option? Thanks, Richard
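The first two items on Richard's list amount to simple bookkeeping once paired (frame counter, PHC timestamp) readings are available from the driver. A minimal sketch under that assumption; these helpers are illustrative, not an existing ALSA interface:

```python
def measured_rate_hz(frames_0, phc_ns_0, frames_1, phc_ns_1):
    """DA/AD clock rate against the PHC: frames elapsed divided by PHC
    seconds elapsed between two (frame counter, PHC timestamp) readings."""
    return (frames_1 - frames_0) * 1e9 / (phc_ns_1 - phc_ns_0)

def playout_delay_ns(queued_frames, rate_hz, fixed_latency_ns=0):
    """Estimated enqueue-to-D/A delay: frames still buffered ahead of the
    hardware pointer at the measured rate, plus any fixed converter
    latency. A crude model, assumed for illustration."""
    return queued_frames * 1e9 / rate_hz + fixed_latency_ns
```

Substituting the system clock for the PHC in the first helper gives the rate relative to system time; the difference between the two measurements is precisely what the adjust-or-resample decision would be based on.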
Re: [very-RFC 0/8] TSN driver for the kernel
On Wed, Jun 15, 2016 at 09:04:41AM +0200, Richard Cochran wrote: > On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote: > > Whereas I want to do > > > > aplay some_song.wav > > Can you please explain how your patches accomplish this? In short:

modprobe tsn
modprobe avb_alsa
mkdir /sys/kernel/config/eth0/link
cd /sys/kernel/config/eth0/link
echo alsa > enabled
aplay -Ddefault:CARD=avb some_song.wav

Likewise on the receiver side, except add 'Listener' to the end_station attribute:

arecord -c2 -r48000 -f S16_LE -Ddefault:CARD=avb > some_recording.wav

I've not had time to fully fix the hw-params for ALSA, so some manual tweaking of arecord is required. Again, this is a very early attempt to get something useful done with TSN; I know there are rough edges, and I know buffer handling and timestamping are not finished. Note: if you don't have an Intel card, load tsn in debug mode and it will let you use all NICs present:

modprobe tsn in_debug=1

-- Henrik Austad
Re: [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote: > Where is your media-application in this? Um, that *is* a media application. It plays music on the sound card. > You only loop the audio from > network to the dsp, is the media-application attached to the dsp-device? Sorry, I thought the old OSS API would be familiar and easy to understand. The /dev/dsp is the sound card. Thanks, Richard
Re: [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote: > Whereas I want to do > > aplay some_song.wav Can you please explain how your patches accomplish this? Thanks, Richard
Re: [very-RFC 0/8] TSN driver for the kernel
Hi Richard,

On Tue, 14 Jun 2016 19:04:44 +0200, Richard Cochran wrote:
>> Well, I guess I should have said, I am not too familiar with the
>> breadth of current audio hardware, high end or low end. Of course I
>> would like to see even consumer devices work with AVB, but it is up to
>> the ALSA people to make that happen. So far, nothing has been done,
>> afaict.

In the OSS world there are few developers for this kind of device, even within the ALSA project. Furthermore, manufacturers of recording equipment have no interest in OSS. In short, all we can do for these devices is reverse-engineer them. For Ethernet-AVB models, that might mean nothing more than transmitting and receiving packets and reading them. The devices are still black boxes, and we have no way to reveal their details. So when you need those details to implement something on your side, few developers can tell you, I think.

Regards

Takashi Sakamoto
Re: [very-RFC 0/8] TSN driver for the kernel
Hi Richard,

> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
>> 3. ALSA support for tunable AD/DA clocks. The rate of the Listener's
>>    DA clock must match that of the Talker and the other Listeners.
>>    Either you adjust it in HW using a VCO or similar, or you do
>>    adaptive sample rate conversion in the application. (And that is
>>    another reason for *not* having a shared kernel buffer.) For the
>>    Talker, either you adjust the AD clock to match the PTP time, or
>>    you measure the frequency offset.
>>
>> I have seen audio PLL/multiplier chips that will take, for example, a
>> 10 kHz input and produce your 48 kHz media clock. With the right HW
>> design, you can tell your PTP Hardware Clock to produce a 1 PPS,
>> and you will have a synchronized AVB endpoint. The software is all
>> there already. Somebody should tell the ALSA guys about it.

Just out of curiosity, could I ask you for more explanation of this on the ALSA side?

A similar mechanism for synchronizing endpoints was also applied to audio and music units on the IEEE 1394 bus. According to IEC 61883-1/6, some of these units can generate a presentation timestamp from the header information of 8,000 packets per second and use that signal as the sampling clock[1]. There are many differences between IEC 61883-1/6 on IEEE 1394 and Audio Video Bridging on Ethernet[2], especially for synchronization, but on this point of transferring a synchronization signal together with time-based data, we have similar requirements for the software implementation, I think.

My motivation for joining this discussion is to work out what it takes to implement packet-oriented drivers in the ALSA kernel land, and to improve my drivers handling IEC 61883-1/6 on the IEEE 1394 bus.

>> I don't know if ALSA has anything for sample rate conversion or not,
>> but haven't seen anything that addresses distributed synchronized
>> audio applications.

In ALSA, sampling rate conversion is done in userspace, not in kernel land. In alsa-lib, it is implemented in a shared object. When userspace applications start playback or capture, depending on which PCM node they access, they load the shared object, convert PCM frames from the userspace buffer into the mmapped DMA buffer, and then commit them.

Before establishing a PCM substream, userspace applications and in-kernel drivers negotiate the sampling rate, PCM frame format, PCM buffer size, and so on (see snd_pcm_hw_params() and ioctl(SNDRV_PCM_IOCTL_HW_PARAMS)). Thus, as long as in-kernel drivers know the specifications of the endpoints, userspace applications can start PCM substreams correctly.

[1] For details, please refer to the 1394TA specification I introduced: http://www.spinics.net/lists/netdev/msg381259.html
[2] I guess the IEC 61883-1/6 packet for Ethernet-AVB is a mutant of the original specifications.

Regards

Takashi Sakamoto
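[Editor's note: the hw_params negotiation described above can be sketched with the public alsa-lib API. This is a minimal, untested illustration only: the "default" device name and the 48 kHz / S16 / stereo values are arbitrary, and error handling is trimmed.]

```c
/* Sketch of the userspace side of the hw_params handshake: the
 * application proposes a configuration and the driver refines it. */
#include <alsa/asoundlib.h>

int open_pcm_48k_s16(snd_pcm_t **pcm)
{
        snd_pcm_hw_params_t *hw;
        unsigned int rate = 48000;      /* driver may adjust this */

        if (snd_pcm_open(pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                return -1;
        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(*pcm, hw);        /* full config space */
        snd_pcm_hw_params_set_access(*pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(*pcm, hw, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(*pcm, hw, 2);
        snd_pcm_hw_params_set_rate_near(*pcm, hw, &rate, 0);
        /* This is the call that ends in ioctl(SNDRV_PCM_IOCTL_HW_PARAMS): */
        return snd_pcm_hw_params(*pcm, hw);
}
```

An AVB shim would answer this negotiation in the kernel with whatever constraints the reserved stream imposes (rate, format, channel count).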
Re: [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 14, 2016 at 08:26:15PM +0200, Richard Cochran wrote:
> On Tue, Jun 14, 2016 at 11:30:00AM +0200, Henrik Austad wrote:
> > So loop data from kernel -> userspace -> kernelspace and finally back to
> > userspace and the media application?
>
> Huh? I wonder where you got that idea. Let me show an example of
> what I mean.
>
> void listener()
> {
>         int in = socket();
>         int out = open("/dev/dsp");
>         char buf[];
>
>         while (1) {
>                 recv(in, buf, packetsize);
>                 write(out, buf + offset, datasize);
>         }
> }
>
> See?

Where is your media application in this? You only loop the audio from the network to the dsp; is the media application attached to the dsp device? Whereas I want to do

    aplay some_song.wav

or mplayer, or spotify, or ...

> > Yes, I know some audio apps "use networking", I can stream netradio, I can
> > use jack to connect devices using RTP and probably a whole lot of other
> > applications do similar things. However, AVB is more about using the
> > network as a virtual sound-card.
>
> That is news to me. I don't recall ever having seen AVB described
> like that before.
>
> > For the media application, it should not
> > have to care if the device it is using is a soundcard inside the box or a
> > set of AVB-capable speakers somewhere on the network.
>
> So you would like a remote listener to appear in the system as a local
> PCM audio sink? And a remote talker would be like a local media URL?
> Sounds unworkable to me, but even if you were to implement it, the
> logic would surely belong in alsa-lib and not in the kernel. Behind
> the emulated device, the library would run a loop like the example,
> above.
>
> In any case, your patches don't implement that sort of thing at all,
> do they?

Subject: [very-RFC 7/8] AVB ALSA - Add ALSA shim for TSN

Did you even bother to look?

--
Henrik Austad
Re: [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 14, 2016 at 11:30:00AM +0200, Henrik Austad wrote:
> So loop data from kernel -> userspace -> kernelspace and finally back to
> userspace and the media application?

Huh? I wonder where you got that idea. Let me show an example of what I mean.

    void listener()
    {
            int in = socket();
            int out = open("/dev/dsp");
            char buf[];

            while (1) {
                    recv(in, buf, packetsize);
                    write(out, buf + offset, datasize);
            }
    }

See?

> Yes, I know some audio apps "use networking", I can stream netradio, I can
> use jack to connect devices using RTP and probably a whole lot of other
> applications do similar things. However, AVB is more about using the
> network as a virtual sound-card.

That is news to me. I don't recall ever having seen AVB described like that before.

> For the media application, it should not
> have to care if the device it is using is a soundcard inside the box or a
> set of AVB-capable speakers somewhere on the network.

So you would like a remote listener to appear in the system as a local PCM audio sink? And a remote talker would be like a local media URL? Sounds unworkable to me, but even if you were to implement it, the logic would surely belong in alsa-lib and not in the kernel. Behind the emulated device, the library would run a loop like the example above.

In any case, your patches don't implement that sort of thing at all, do they?

Thanks,
Richard
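[Editor's note: for reference, a compilable version of the sketch above. The packet size, payload offset, and UDP port are hypothetical, a plain UDP socket stands in for whatever transport the stream actually uses, and /dev/dsp is the legacy OSS device node.]

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define PACKETSIZE 512          /* hypothetical stream packet size */
#define OFFSET     24           /* hypothetical header length      */

int main(void)
{
        int in = socket(AF_INET, SOCK_DGRAM, 0);
        int out = open("/dev/dsp", O_WRONLY);
        struct sockaddr_in sa = {
                .sin_family = AF_INET,
                .sin_addr.s_addr = htonl(INADDR_ANY),
                .sin_port = htons(5004),        /* hypothetical port */
        };
        char buf[PACKETSIZE];

        if (in < 0 || out < 0 ||
            bind(in, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                perror("setup");
                return 1;
        }
        while (1) {
                /* strip the header, hand the payload to the sound card */
                ssize_t n = recv(in, buf, sizeof(buf), 0);
                if (n > OFFSET)
                        write(out, buf + OFFSET, n - OFFSET);
        }
}
```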
Re: [very-RFC 0/8] TSN driver for the kernel
On Tue, Jun 14, 2016 at 12:18:44PM +0100, One Thousand Gnomes wrote:
> On Mon, 13 Jun 2016 21:51:36 +0200
> Richard Cochran wrote:
> >
> > Actually, we already have support for tunable clock-like HW elements,
> > namely the dynamic posix clock API. It is trivial to write a driver
> > for VCO or the like. I am just not too familiar with the latest high
> > end audio devices.
>
> Why high end? Even the most basic USB audio is frame based and
> isochronous to the USB clock. It also reports back the delay
> properties.

Well, I guess I should have said, I am not too familiar with the breadth of current audio hardware, high end or low end. Of course I would like to see even consumer devices work with AVB, but it is up to the ALSA people to make that happen. So far, nothing has been done, afaict.

Thanks,
Richard
Re: [very-RFC 0/8] TSN driver for the kernel
On Mon, 13 Jun 2016 21:51:36 +0200 Richard Cochran wrote:
> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> > 3. ALSA support for tunable AD/DA clocks. The rate of the Listener's
> >    DA clock must match that of the Talker and the other Listeners.
> >    Either you adjust it in HW using a VCO or similar, or you do
> >    adaptive sample rate conversion in the application. (And that is
> >    another reason for *not* having a shared kernel buffer.) For the
> >    Talker, either you adjust the AD clock to match the PTP time, or
> >    you measure the frequency offset.
>
> Actually, we already have support for tunable clock-like HW elements,
> namely the dynamic posix clock API. It is trivial to write a driver
> for VCO or the like. I am just not too familiar with the latest high
> end audio devices.

Why high end? Even the most basic USB audio is frame based and isochronous to the USB clock. It also reports back the delay properties.

Alan
Re: [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 13, 2016 at 09:32:10PM +0200, Richard Cochran wrote:
> On Mon, Jun 13, 2016 at 03:00:59PM +0200, Henrik Austad wrote:
> > On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> > > Which driver is that?
> >
> > drivers/net/ethernet/renesas/
>
> That driver is merely a PTP capable MAC driver, nothing more.
> Although AVB is in the device name, the driver doesn't implement
> anything beyond the PTP bits.

Yes, I think they do the rest from userspace, not sure though :)

> > What is the rationale for no new sockets? To avoid cluttering? or do
> > sockets have a drawback I'm not aware of?
>
> The current raw sockets will work just fine. Again, there should be an
> application that sits in between with the network socket and the audio
> interface.

So loop data from kernel -> userspace -> kernelspace and finally back to userspace and the media application?

I agree that you need a way to pipe the incoming data directly from the network to userspace for those TSN users that can handle it. But again, for media applications that don't know (or care) about AVB, it should be fed to ALSA/V4L2 directly and not jump between kernel and userspace an extra round. I get the point of not including every single audio/video encoder in the kernel, but raw audio should be piped directly to ALSA. V4L2 has a way of piping encoded video through the system and to the media application (in order to support cameras that do encoding). The same approach should be doable for AVB, no? (Someone from ALSA/V4L2 should probably comment on this.)

> > Why is configfs wrong?
>
> Because the application will use the already existing network and
> audio interfaces to configure the system.

Configuring this via the audio interface is going to be a challenge since you need to configure the stream through the network before you can create the audio interface. If not, you will have to either drop data or block the caller until the link has been fully configured. This is actually the reason why configfs is used in the series now, as it allows userspace to figure out all the different attributes and configure the link before letting ALSA start pushing data.

> > > Lets take a look at the big picture. One aspect of TSN is already
> > > fully supported, namely the gPTP. Using the linuxptp user stack and a
> > > modern kernel, you have a complete 802.1AS-2011 solution.
> >
> > Yes, I thought so, which is also why I have put that to the side and why
> > I'm using ktime_get() for timestamps at the moment. There's also the issue
> > of hooking the time into ALSA/V4L2
>
> So lets get that issue solved before anything else. It is absolutely
> essential for TSN. Without the synchronization, you are only playing
> audio over the network. We already have software for that.

Yes, I agree, presentation time and local time need to be handled properly. The same for adjusting sample rate etc. This is a lot of work, so I hope you can understand why I started out with a simple approach to spark a discussion before moving on to the larger bits.

> > > 2. A user space audio application that puts it all together, making
> > >    use of the services in #1, the linuxptp gPTP service, the ALSA
> > >    services, and the network connections. This program will have all
> > >    the knowledge about packet formats, AV encodings, and the local HW
> > >    capabilities. This program cannot yet be written, as we still need
> > >    some kernel work in the audio and networking subsystems.
> >
> > Why?
>
> Because user space is the right place for the knowledge of the myriad
> formats and options.

See my response above; better to let anything but uncompressed raw data go through userspace.

> > the whole point should be to make it as easy for userspace as
> > possible. If you need to tailor each individual media-application to use
> > AVB, it is not going to be very useful outside pro-Audio. Sure, there will
> > be challenges, but one key element here should be to *not* require
> > upgrading every single media application.
> >
> > Then, back to the suggestion of adding a TSN_SOCKET (which you didn't like,
> > but can we agree on a term "raw interface to TSN", and mode of transport
> > can be defined later?), was to let those applications that are TSN-aware
> > do what they need to do, whether it is controlling robots or media
> > streams.
>
> First you say you don't want to upgrade media applications, but then
> you invent a new socket type. That is a contradiction in terms.

Hehe, no, bad phrasing on my part. I want *both* (hence the shim interface) :)

> Audio apps already use networking, and they already use the audio
> subsystem. We need to help them get their job done by providing the
> missing kernel interfaces. They don't need extra magic buffering in the
> kernel. They already can buffer audio data by themselves.

Yes, I know some audio apps "use networking"; I can stream netradio, I can use jack to connect devices using RTP, and probably a whole lot of other applications do similar things.
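[Editor's note: on the presentation-time handling discussed above: the AVTP header carries only the low 32 bits of the gPTP time in nanoseconds, so a receiver has to widen the value against its own clock before comparing. A sketch of that step; avtp_expand_timestamp is a hypothetical helper, not part of the patch series.]

```c
#include <stdint.h>

/* IEEE 1722 AVTP timestamps are the gPTP time in ns, truncated to
 * 32 bits (so they wrap every ~4.29 s).  Reconstruct the full 64-bit
 * presentation time by picking the candidate closest to "now". */
uint64_t avtp_expand_timestamp(uint32_t ts32, uint64_t now_ns)
{
        uint64_t base = now_ns & ~0xffffffffULL;  /* current 2^32 ns epoch */
        uint64_t cand = base | ts32;

        /* More than half an epoch in the past?  Then the timestamp
         * belongs to the next wrap; more than half ahead, the previous. */
        if (cand + 0x80000000ULL < now_ns)
                cand += 0x100000000ULL;
        else if (cand > now_ns + 0x80000000ULL)
                cand -= 0x100000000ULL;
        return cand;
}
```

The widened value can then be compared against the local gPTP clock to decide when to hand samples to the DA converter.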
Re: [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 13, 2016 at 08:56:44AM -0700, John Fastabend wrote:
> On 16-06-13 04:47 AM, Richard Cochran wrote:
> [...]
> > Here is what is missing to support audio TSN:
> >
> > * User Space
> >
> > 1. A proper userland stack for AVDECC, MAAP, FQTSS, and so on. The
> >    OpenAVB project does not offer much beyond simple examples.
> >
> > 2. A user space audio application that puts it all together, making
> >    use of the services in #1, the linuxptp gPTP service, the ALSA
> >    services, and the network connections. This program will have all
> >    the knowledge about packet formats, AV encodings, and the local HW
> >    capabilities. This program cannot yet be written, as we still need
> >    some kernel work in the audio and networking subsystems.
> >
> > * Kernel Space
> >
> > 1. Providing frames with a future transmit time. For normal sockets,
> >    this can be in the CMSG data. For mmap'ed buffers, we will need a
> >    new format. (I think Arnd is working on a new layout.)
> >
> > 2. Time based qdisc for transmitted frames. For MACs that support
> >    this (like the i210), we only have to place the frame into the
> >    correct queue. For normal HW, we want to be able to reserve a time
> >    window in which non-TSN frames are blocked. This is some work, but
> >    in the end it should be a generic solution that not only works
> >    "perfectly" with TSN HW but also provides best effort service using
> >    any NIC.
>
> When I looked at this awhile ago I convinced myself that it could fit
> fairly well into the DCB stack (DCB is also part of 802.1Q). A lot of
> the traffic class to queue mappings and priorities could be handled here.
> It might be worth taking a look at ./net/sched/mqprio.c and ./net/dcb/.

Interesting, I'll have a look at dcb and mqprio; I'm not familiar with those systems. Thanks for pointing those out!

I hope that the complexity doesn't run wild, though; TSN is not aimed at data centers, and a lot of the endpoints are going to be embedded devices. Introducing a massive stack for handling every eventuality in 802.1Q would be counterproductive.

> Unfortunately I didn't get too far along but we probably don't want
> another mechanism to map hw queues/tcs/etc if the existing interfaces
> work or can be extended to support this.

Sure, I get that, as long as the complexity for setting up a link doesn't go through the roof :)

Thanks!

--
Henrik Austad
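[Editor's note: kernel item 1 above ("frames with a future transmit time" in the cmsg data) eventually landed in mainline as SO_TXTIME in Linux 4.19, well after this thread. A sketch of the send path under that later API, not anything in this patch set; the socket must first have SO_TXTIME enabled via setsockopt, and a time-aware qdisc such as "etf" must sit below it.]

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#ifndef SO_TXTIME
#define SO_TXTIME  61          /* from <asm-generic/socket.h>, Linux >= 4.19 */
#endif
#ifndef SCM_TXTIME
#define SCM_TXTIME SO_TXTIME
#endif

/* Queue one frame for transmission at txtime_ns (nanoseconds on the
 * clock selected when SO_TXTIME was enabled, typically CLOCK_TAI).
 * The etf qdisc, or the i210's LaunchTime feature behind it, holds the
 * frame until that instant. */
int send_at(int fd, const void *frame, size_t len, uint64_t txtime_ns)
{
        char cbuf[CMSG_SPACE(sizeof(txtime_ns))];
        struct iovec iov = { .iov_base = (void *)frame, .iov_len = len };
        struct msghdr msg = {
                .msg_iov = &iov, .msg_iovlen = 1,
                .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type  = SCM_TXTIME;
        cm->cmsg_len   = CMSG_LEN(sizeof(txtime_ns));
        memcpy(CMSG_DATA(cm), &txtime_ns, sizeof(txtime_ns));
        return sendmsg(fd, &msg, 0);
}
```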
Re: [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote: > 3. ALSA support for tunable AD/DA clocks. The rate of the Listener's >DA clock must match that of the Talker and the other Listeners. >Either you adjust it in HW using a VCO or similar, or you do >adaptive sample rate conversion in the application. (And that is >another reason for *not* having a shared kernel buffer.) For the >Talker, either you adjust the AD clock to match the PTP time, or >you measure the frequency offset. Actually, we already have support for tunable clock-like HW elements, namely the dynamic posix clock API. It is trivial to write a driver for a VCO or the like. I am just not too familiar with the latest high end audio devices. I have seen audio PLL/multiplier chips that will take, for example, a 10 kHz input and produce your 48 kHz media clock. With the right HW design, you can tell your PTP Hardware Clock to produce a 1 PPS, and you will have a synchronized AVB endpoint. The software is all there already. Somebody should tell the ALSA guys about it. I don't know if ALSA has anything for sample rate conversion or not, but I haven't seen anything that addresses distributed synchronized audio applications. Thanks, Richard
Re: [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 13, 2016 at 03:00:59PM +0200, Henrik Austad wrote: > On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote: > > People have been asking me about TSN and Linux, and we've made some > > thoughts about it. The interest is there, and so I am glad to see > > discussion on this topic. > > I'm not aware of any such discussions, could you point me to where TSN has > been discussed, it would be nice to see other peoples thought on the matter > (which was one of the ideas behind this series in the first place) To my knowledge, there hasn't been any previous TSN talk on lkml. (You have just now started the discussion ;) Sorry for not being clear. Richard
Re: [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 13, 2016 at 03:00:59PM +0200, Henrik Austad wrote: > On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote: > > Which driver is that? > > drivers/net/ethernet/renesas/ That driver is merely a PTP capable MAC driver, nothing more. Although AVB is in the device name, the driver doesn't implement anything beyond the PTP bits. > What is the rationale for no new sockets? To avoid cluttering? or do > sockets have a drawback I'm not aware of? The current raw sockets will work just fine. Again, there should be an application that sits between the network socket and the audio interface. > Why is configfs wrong? Because the application will use the already existing network and audio interfaces to configure the system. > > Lets take a look at the big picture. One aspect of TSN is already > > fully supported, namely the gPTP. Using the linuxptp user stack and a > > modern kernel, you have a complete 802.1AS-2011 solution. > > Yes, I thought so, which is also why I have put that to the side and why > I'm using ktime_get() for timestamps at the moment. There's also the issue > of hooking the time into ALSA/V4L2 So let's get that issue solved before anything else. It is absolutely essential for TSN. Without the synchronization, you are only playing audio over the network. We already have software for that. > > 2. A user space audio application that puts it all together, making >use of the services in #1, the linuxptp gPTP service, the ALSA >services, and the network connections. This program will have all >the knowledge about packet formats, AV encodings, and the local HW >capabilities. This program cannot yet be written, as we still need >some kernel work in the audio and networking subsystems. > > Why? Because user space is the right place for the knowledge of the myriad formats and options. > the whole point should be to make it as easy for userspace as > possible. 
If you need to tailor each individual media-application to use > AVB, it is not going to be very useful outside pro-Audio. Sure, there will > be challenges, but one key element here should be to *not* require > upgrading every single media application. > > Then, back to the suggestion of adding a TSN_SOCKET (which you didn't like, > but can we agree on a term "raw interface to TSN", and mode of transport > can be defined later? ), was to let those applications that are TSN-aware > to do what they need to do, whether it is controlling robots or media > streams. First you say you don't want to upgrade media applications, but then you invent a new socket type. That is a contradiction in terms. Audio apps already use networking, and they already use the audio subsystem. We need to help them get their job done by providing the missing kernel interfaces. They don't need extra magic buffering in the kernel. They already can buffer audio data by themselves. > > * Kernel Space > > > > 1. Providing frames with a future transmit time. For normal sockets, >this can be in the CMESG data. For mmap'ed buffers, we will need a >new format. (I think Arnd is working on a new layout.) > > Ah, I was unaware of this, both CMESG and mmap buffers. > > What is the accuracy of deferred transmit? If you have a class A stream, > you push out a new frame every 125 us, you may end up with > accuracy-constraints lower than that if you want to be able to state "send > frame X at time Y". I have no idea what you are asking here. Sorry, Richard
Re: [very-RFC 0/8] TSN driver for the kernel
On 16-06-13 04:47 AM, Richard Cochran wrote: > Henrik, > > On Sun, Jun 12, 2016 at 01:01:28AM +0200, Henrik Austad wrote: >> There are at least one AVB-driver (the AV-part of TSN) in the kernel >> already, > > Which driver is that? > >> however this driver aims to solve a wider scope as TSN can do >> much more than just audio. A very basic ALSA-driver is added to the end >> that allows you to play music between 2 machines using aplay in one end >> and arecord | aplay on the other (some fiddling required) We have plans >> for doing the same for v4l2 eventually (but there are other fishes to >> fry first). The same goes for a TSN_SOCK type approach as well. > > Please, no new socket type for this. > >> What remains >> - tie to (g)PTP properly, currently using ktime_get() for presentation >> time >> - get time from shim into TSN and vice versa > > ... and a whole lot more, see below. > >> - let shim create/manage buffer > > (BTW, shim is a terrible name for that.) > > [sigh] > > People have been asking me about TSN and Linux, and we've made some > thoughts about it. The interest is there, and so I am glad to see > discussion on this topic. > > Having said that, your series does not even begin to address the real > issues. I did not review the patches too carefully (because the > important stuff is missing), but surely configfs is the wrong > interface for this. In the end, we will be able to support TSN using > the existing networking and audio interfaces, adding appropriate > extensions. > > Your patch features a buffer shared by networking and audio. This > isn't strictly necessary for TSN, and it may be harmful. The > Listeners are supposed to calculate the delay from frame reception to > the DA conversion. They can easily include the time needed for a user > space program to parse the frames, copy (and combine/convert) the > data, and re-start the audio transfer. A flexible TSN implementation > will leave all of the format and encoding task to the userland. 
After > all, TSN will soon include more than just AV data, as you know. > > Lets take a look at the big picture. One aspect of TSN is already > fully supported, namely the gPTP. Using the linuxptp user stack and a > modern kernel, you have a complete 802.1AS-2011 solution. > > Here is what is missing to support audio TSN: > > * User Space > > 1. A proper userland stack for AVDECC, MAAP, FQTSS, and so on. The >OpenAVB project does not offer much beyond simple examples. > > 2. A user space audio application that puts it all together, making >use of the services in #1, the linuxptp gPTP service, the ALSA >services, and the network connections. This program will have all >the knowledge about packet formats, AV encodings, and the local HW >capabilities. This program cannot yet be written, as we still need >some kernel work in the audio and networking subsystems. > > * Kernel Space > > 1. Providing frames with a future transmit time. For normal sockets, >this can be in the CMESG data. For mmap'ed buffers, we will need a >new format. (I think Arnd is working on a new layout.) > > 2. Time based qdisc for transmitted frames. For MACs that support >this (like the i210), we only have to place the frame into the >correct queue. For normal HW, we want to be able to reserve a time >window in which non-TSN frames are blocked. This is some work, but >in the end it should be a generic solution that not only works >"perfectly" with TSN HW but also provides best effort service using >any NIC. > When I looked at this a while ago I convinced myself that it could fit fairly well into the DCB stack (DCB is also part of 802.1Q). A lot of the traffic class to queue mappings and priorities could be handled here. It might be worth taking a look at ./net/sched/mqprio.c and ./net/dcb/. Unfortunately I didn't get too far along but we probably don't want another mechanism to map hw queues/tcs/etc if the existing interfaces work or can be extended to support this. > 3. 
ALSA support for tunable AD/DA clocks. The rate of the Listener's >DA clock must match that of the Talker and the other Listeners. >Either you adjust it in HW using a VCO or similar, or you do >adaptive sample rate conversion in the application. (And that is >another reason for *not* having a shared kernel buffer.) For the >Talker, either you adjust the AD clock to match the PTP time, or >you measure the frequency offset. > > 4. ALSA support for time triggered playback. The patch series >completely ignore the critical issue of media clock recovery. The >Listener must buffer the stream in order to play it exactly at a >specified time. It cannot simply send the stream ASAP to the audio >HW, because some other Listener might need longer. AFAICT, there >is nothing in ALSA that allows you to say, sample X should be >played at time Y. > > These are some ideas about
Re: [very-RFC 0/8] TSN driver for the kernel
On Monday, June 13, 2016 1:47:13 PM CEST Richard Cochran wrote: > * Kernel Space > > 1. Providing frames with a future transmit time. For normal sockets, >this can be in the CMESG data. For mmap'ed buffers, we will need a >new format. (I think Arnd is working on a new layout.) > After some back and forth, I think the conclusion for now was that the timestamps in the current v3 format are sufficient until 2106 as long as we treat them as 'unsigned', so we don't need the new format for y2038, but if we get a new format, that should definitely use 64-bit timestamps because that is the right thing to do. Arnd
Re: [very-RFC 0/8] TSN driver for the kernel
On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote: > Henrik, Hi Richard, > On Sun, Jun 12, 2016 at 01:01:28AM +0200, Henrik Austad wrote: > > There are at least one AVB-driver (the AV-part of TSN) in the kernel > > already, > > Which driver is that? drivers/net/ethernet/renesas/ > > however this driver aims to solve a wider scope as TSN can do > > much more than just audio. A very basic ALSA-driver is added to the end > > that allows you to play music between 2 machines using aplay in one end > > and arecord | aplay on the other (some fiddling required) We have plans > > for doing the same for v4l2 eventually (but there are other fishes to > > fry first). The same goes for a TSN_SOCK type approach as well. > > Please, no new socket type for this. The idea was to create a tsn-driver and then allow userspace to use it either for media or for whatever else they'd like - and then a socket made sense. Or so I thought :) What is the rationale for no new sockets? To avoid cluttering? or do sockets have a drawback I'm not aware of? > > What remains > > - tie to (g)PTP properly, currently using ktime_get() for presentation > > time > > - get time from shim into TSN and vice versa > > ... and a whole lot more, see below. > > > - let shim create/manage buffer > > (BTW, shim is a terrible name for that.) So something thin that is placed between two subsystems should rather be called... flimsy? The point of the name was to indicate that it glued 2 pieces together. If you have a better suggestion, I'm all ears. > [sigh] > > People have been asking me about TSN and Linux, and we've made some > thoughts about it. The interest is there, and so I am glad to see > discussion on this topic. 
I'm not aware of any such discussions, could you point me to where TSN has been discussed? It would be nice to see other people's thoughts on the matter (which was one of the ideas behind this series in the first place) > Having said that, your series does not even begin to address the real > issues. Well, in all honesty, I did say so :) It is marked as "very-RFC", and not for being included in the kernel as-is. I also made a short list of the most crucial bits missing. I know there are real issues, but solving these won't matter if you don't have anything useful to do with it. I decided to start by adding a thin ALSA-driver and then continue to work with the kernel infrastructure. Having something that works-ish makes it a lot easier to test and get others interested, especially when you are not deeply involved in a subsystem. At one point you get to where you need input from others more intimate with the inner workings of the different subsystems to see how things should be created without making too much of a mess. So that's where we are :) My primary motivation was to a) gather feedback (which you have provided, and for which I am very grateful) b) get the discussion going on how/if TSN should be added to the kernel > I did not review the patches too carefully (because the > important stuff is missing), but surely configfs is the wrong > interface for this. Why is configfs wrong? Unless you want to implement discovery and enumeration and srp-negotiation in the kernel, you need userspace to handle this. Once userspace has done all that (found priority-codes, streamIDs, vlanIDs and all the required bits), then userspace can create a new link. For that I find ConfigFS to be quite useful and up to the task. In my opinion, it also makes for a much tidier and saner interface than some obscure dark-magic ioctl() > In the end, we will be able to support TSN using > the existing networking and audio interfaces, adding appropriate > extensions. 
I surely hope so, but as I'm not deep into the networking part of the kernel, finding those appropriate extensions is hard - which is why we started writing a standalone module. > Your patch features a buffer shared by networking and audio. This > isn't strictly necessary for TSN, and it may be harmful. At one stage, data has to flow in/out of the network, and whoever's using TSN probably needs to store data somewhere as well, so you need some form of buffering at one place in the path the data flows through. That being said, one of the bits on my plate is to remove the "TSN-hosted-buffer" and let TSN read/write data via the shim_ops. What the best set of functions is remains to be seen, but it should provide a way to move data from either a single frame or a "few frames" to the shime (err.. ;) > The > Listeners are supposed to calculate the delay from frame reception to > the DA conversion. They can easily include the time needed for a user > space program to parse the frames, copy (and combine/convert) the > data, and re-start the audio transfer. A flexible TSN implementation > will leave all of the format and encoding task to the userland. After > all, TSN will soon include more than just AV data, as you know. Yes,
Re: [very-RFC 0/8] TSN driver for the kernel
Henrik, On Sun, Jun 12, 2016 at 01:01:28AM +0200, Henrik Austad wrote: > There is at least one AVB-driver (the AV-part of TSN) in the kernel > already, Which driver is that? > however this driver aims to solve a wider scope as TSN can do > much more than just audio. A very basic ALSA-driver is added to the end > that allows you to play music between 2 machines using aplay in one end > and arecord | aplay on the other (some fiddling required). We have plans > for doing the same for v4l2 eventually (but there are other fishes to > fry first). The same goes for a TSN_SOCK type approach as well. Please, no new socket type for this. > What remains > - tie to (g)PTP properly, currently using ktime_get() for presentation > time > - get time from shim into TSN and vice versa ... and a whole lot more, see below. > - let shim create/manage buffer (BTW, shim is a terrible name for that.) [sigh] People have been asking me about TSN and Linux, and we've given it some thought. The interest is there, and so I am glad to see discussion on this topic. Having said that, your series does not even begin to address the real issues. I did not review the patches too carefully (because the important stuff is missing), but surely configfs is the wrong interface for this. In the end, we will be able to support TSN using the existing networking and audio interfaces, adding appropriate extensions. Your patch features a buffer shared by networking and audio. This isn't strictly necessary for TSN, and it may be harmful. The Listeners are supposed to calculate the delay from frame reception to the DA conversion. They can easily include the time needed for a user space program to parse the frames, copy (and combine/convert) the data, and re-start the audio transfer. A flexible TSN implementation will leave all of the format and encoding tasks to the userland. After all, TSN will soon include more than just AV data, as you know. Let's take a look at the big picture.
One aspect of TSN is already fully supported, namely the gPTP. Using the linuxptp user stack and a modern kernel, you have a complete 802.1AS-2011 solution. Here is what is missing to support audio TSN:

* User Space

1. A proper userland stack for AVDECC, MAAP, FQTSS, and so on. The OpenAVB project does not offer much beyond simple examples.

2. A user space audio application that puts it all together, making use of the services in #1, the linuxptp gPTP service, the ALSA services, and the network connections. This program will have all the knowledge about packet formats, AV encodings, and the local HW capabilities. This program cannot yet be written, as we still need some kernel work in the audio and networking subsystems.

* Kernel Space

1. Providing frames with a future transmit time. For normal sockets, this can be in the CMSG data. For mmap'ed buffers, we will need a new format. (I think Arnd is working on a new layout.)

2. Time based qdisc for transmitted frames. For MACs that support this (like the i210), we only have to place the frame into the correct queue. For normal HW, we want to be able to reserve a time window in which non-TSN frames are blocked. This is some work, but in the end it should be a generic solution that not only works "perfectly" with TSN HW but also provides best effort service using any NIC.

3. ALSA support for tunable AD/DA clocks. The rate of the Listener's DA clock must match that of the Talker and the other Listeners. Either you adjust it in HW using a VCO or similar, or you do adaptive sample rate conversion in the application. (And that is another reason for *not* having a shared kernel buffer.) For the Talker, either you adjust the AD clock to match the PTP time, or you measure the frequency offset.

4. ALSA support for time triggered playback. The patch series completely ignores the critical issue of media clock recovery. The Listener must buffer the stream in order to play it exactly at a specified time.
It cannot simply send the stream ASAP to the audio HW, because some other Listener might need longer. AFAICT, there is nothing in ALSA that allows you to say, sample X should be played at time Y. These are some ideas about implementing TSN. Maybe some of it is wrong (especially about ALSA), but we definitely need a proper design to get the kernel parts right. There is plenty of work to do, but we really don't need some hacky, in-kernel buffer with hard coded audio formats. Thanks, Richard
[very-RFC 0/8] TSN driver for the kernel
Hi all

(series based on v4.7-rc2, now with the correct netdev)

This is a *very* early RFC for a TSN-driver in the kernel. It has been floating around in my repo for a while and I would appreciate some feedback on the overall design to avoid doing some major blunders.

TSN: Time Sensitive Networking, formerly known as AVB (Audio/Video Bridging).

There is at least one AVB-driver (the AV-part of TSN) in the kernel already, however this driver aims to solve a wider scope as TSN can do much more than just audio. A very basic ALSA-driver is added at the end that allows you to play music between 2 machines using aplay in one end and arecord | aplay on the other (some fiddling required). We have plans for doing the same for v4l2 eventually (but there are other fishes to fry first). The same goes for a TSN_SOCK type approach as well.

TSN is all about providing infrastructure. Although there are a few very interesting uses for TSN (reliable, deterministic network for audio and video), once you have that reliable link, you can do a lot more.

Some notes on the design:

The driver is directed via ConfigFS as we need userspace to handle stream-reservation (MSRP), discovery and enumeration (IEEE 1722.1) and whatever other management is needed. Once we have all the required attributes, we can create a link using mkdir, and use write() to set the attributes. Once ready, specify the 'shim' (basically a thin wrapper between TSN and another subsystem) and we start pushing out frames.

The network part: it ties directly into the rx-handler for receive and writes skb's using netdev_start_xmit(). This could probably be improved. Two new fields in netdev_ops have been introduced, and the Intel igb-driver has been updated (as this is available as a PCI-e card).
The igb-driver works-ish.

What remains:
- tie to (g)PTP properly, currently using ktime_get() for presentation time
- get time from shim into TSN and vice versa
- let shim create/manage buffer

Henrik Austad (8):
  TSN: add documentation
  TSN: Add the standard formerly known as AVB to the kernel
  Adding TSN-driver to Intel I210 controller
  Add TSN header for the driver
  Add TSN machinery to drive the traffic from a shim over the network
  Add TSN event-tracing
  AVB ALSA - Add ALSA shim for TSN
  MAINTAINERS: add TSN/AVB-entries

 Documentation/TSN/tsn.txt                 | 147 +
 MAINTAINERS                               |  14 +
 drivers/media/Kconfig                     |  15 +
 drivers/media/Makefile                    |   3 +-
 drivers/media/avb/Makefile                |   5 +
 drivers/media/avb/avb_alsa.c              | 742 +++
 drivers/media/avb/tsn_iec61883.h          | 124
 drivers/net/ethernet/intel/Kconfig        |  18 +
 drivers/net/ethernet/intel/igb/Makefile   |   2 +-
 drivers/net/ethernet/intel/igb/igb.h      |  19 +
 drivers/net/ethernet/intel/igb/igb_main.c |  10 +-
 drivers/net/ethernet/intel/igb/igb_tsn.c  | 396
 include/linux/netdevice.h                 |  32 +
 include/linux/tsn.h                       | 806
 include/trace/events/tsn.h                | 349 +++
 net/Kconfig                               |   1 +
 net/Makefile                              |   1 +
 net/tsn/Kconfig                           |  32 +
 net/tsn/Makefile                          |   6 +
 net/tsn/tsn_configfs.c                    | 623 +++
 net/tsn/tsn_core.c                        | 975 ++
 net/tsn/tsn_header.c                      | 203 +++
 net/tsn/tsn_internal.h                    | 383
 net/tsn/tsn_net.c                         | 403
 24 files changed, 5306 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/TSN/tsn.txt
 create mode 100644 drivers/media/avb/Makefile
 create mode 100644 drivers/media/avb/avb_alsa.c
 create mode 100644 drivers/media/avb/tsn_iec61883.h
 create mode 100644 drivers/net/ethernet/intel/igb/igb_tsn.c
 create mode 100644 include/linux/tsn.h
 create mode 100644 include/trace/events/tsn.h
 create mode 100644 net/tsn/Kconfig
 create mode 100644 net/tsn/Makefile
 create mode 100644 net/tsn/tsn_configfs.c
 create mode 100644 net/tsn/tsn_core.c
 create mode 100644 net/tsn/tsn_header.c
 create mode 100644 net/tsn/tsn_internal.h
 create mode 100644 net/tsn/tsn_net.c

--
2.7.4
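The ConfigFS workflow described in the cover letter (mkdir to create a link, write() to set attributes, then pick a shim) might look roughly like this from a shell. The mount point, directory layout and attribute names below are hypothetical sketches, not taken from the patches; the real layout would be documented in the series' Documentation/TSN/tsn.txt:

```shell
# Hypothetical ConfigFS session for bringing up one TSN stream.
# Userspace has already run MSRP/AVDECC and knows the stream parameters.
mount -t configfs none /sys/kernel/config

# Creating the directory instantiates the link object in the driver.
mkdir /sys/kernel/config/tsn/eth0/stream0
cd /sys/kernel/config/tsn/eth0/stream0

# Plain write()s set the attributes negotiated in userspace
# (illustrative names: stream_id, vlan_id, pcp, shim, enabled).
echo 00:01:02:03:04:05:00:00 > stream_id
echo 42   > vlan_id
echo 3    > pcp
echo alsa > shim      # the 'shim' bridging TSN to another subsystem

# Finally enable the stream and frames start flowing.
echo 1 > enabled
```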
Re: [very-RFC 0/8] TSN driver for the kernel
On Sun, Jun 12, 2016 at 12:22:13AM +0200, Henrik Austad wrote: > Hi all Sorry.. I somehow managed to mess up the address to netdev, so if you feel like replying to this, use this as it has the correct netdev-address. again, sorry > (series based on v4.7-rc2) > > This is a *very* early RFC for a TSN-driver in the kernel. It has been > floating around in my repo for a while and I would appreciate some > feedback on the overall design to avoid doing some major blunders. > > TSN: Time Sensitive Networking, formerly known as AVB (Audio/Video > Bridging). > > There is at least one AVB-driver (the AV-part of TSN) in the kernel > already, however this driver aims to solve a wider scope as TSN can do > much more than just audio. A very basic ALSA-driver is added to the end > that allows you to play music between 2 machines using aplay in one end > and arecord | aplay on the other (some fiddling required). We have plans > for doing the same for v4l2 eventually (but there are other fishes to > fry first). The same goes for a TSN_SOCK type approach as well. > > TSN is all about providing infrastructure. Although there are a few > very interesting uses for TSN (reliable, deterministic network for audio > and video), once you have that reliable link, you can do a lot more. > > Some notes on the design: > > The driver is directed via ConfigFS as we need userspace to handle > stream-reservation (MSRP), discovery and enumeration (IEEE 1722.1) and > whatever other management is needed. Once we have all the required > attributes, we can create a link using mkdir, and use write() to set the > attributes. Once ready, specify the 'shim' (basically a thin wrapper > between TSN and another subsystem) and we start pushing out frames. > > The network part: it ties directly into the rx-handler for receive and > writes skb's using netdev_start_xmit(). This could probably be > improved.
2 new fields in netdev_ops have been introduced, and the Intel > igb-driver has been updated (as this is available as a PCI-e card). The > igb-driver works-ish > > > What remains > - tie to (g)PTP properly, currently using ktime_get() for presentation > time > - get time from shim into TSN and vice versa > - let shim create/manage buffer > > Henrik Austad (8): > TSN: add documentation > TSN: Add the standard formerly known as AVB to the kernel > Adding TSN-driver to Intel I210 controller > Add TSN header for the driver > Add TSN machinery to drive the traffic from a shim over the network > Add TSN event-tracing > AVB ALSA - Add ALSA shim for TSN > MAINTAINERS: add TSN/AVB-entries > > Documentation/TSN/tsn.txt | 147 + > MAINTAINERS | 14 + > drivers/media/Kconfig | 15 + > drivers/media/Makefile| 3 +- > drivers/media/avb/Makefile| 5 + > drivers/media/avb/avb_alsa.c | 742 +++ > drivers/media/avb/tsn_iec61883.h | 124 > drivers/net/ethernet/intel/Kconfig| 18 + > drivers/net/ethernet/intel/igb/Makefile | 2 +- > drivers/net/ethernet/intel/igb/igb.h | 19 + > drivers/net/ethernet/intel/igb/igb_main.c | 10 +- > drivers/net/ethernet/intel/igb/igb_tsn.c | 396 > include/linux/netdevice.h | 32 + > include/linux/tsn.h | 806 > include/trace/events/tsn.h| 349 +++ > net/Kconfig | 1 + > net/Makefile | 1 + > net/tsn/Kconfig | 32 + > net/tsn/Makefile | 6 + > net/tsn/tsn_configfs.c| 623 +++ > net/tsn/tsn_core.c| 975 > ++ > net/tsn/tsn_header.c | 203 +++ > net/tsn/tsn_internal.h| 383 > net/tsn/tsn_net.c | 403 > 24 files changed, 5306 insertions(+), 3 deletions(-) > create mode 100644 Documentation/TSN/tsn.txt > create mode 100644 drivers/media/avb/Makefile > create mode 100644 drivers/media/avb/avb_alsa.c > create mode 100644 drivers/media/avb/tsn_iec61883.h > create mode 100644 drivers/net/ethernet/intel/igb/igb_tsn.c > create mode 100644 include/linux/tsn.h > create mode 100644 include/trace/events/tsn.h > create mode 100644 net/tsn/Kconfig > create mode 100644 net/tsn/Makefile > create 
mode 100644 net/tsn/tsn_configfs.c > create mode 100644 net/tsn/tsn_core.c > create mode 100644 net/tsn/tsn_header.c > create mode 100644 net/tsn/tsn_internal.h > create mode 100644 net/tsn/tsn_net.c > > -- > 2.7.4 -- Henrik Austad
[very-RFC 0/8] TSN driver for the kernel
Hi all

(series based on v4.7-rc2)

This is a *very* early RFC for a TSN-driver in the kernel. It has been floating around in my repo for a while and I would appreciate some feedback on the overall design to avoid doing some major blunders.

TSN: Time Sensitive Networking, formerly known as AVB (Audio/Video Bridging).

There is at least one AVB-driver (the AV-part of TSN) in the kernel already, however this driver aims to solve a wider scope as TSN can do much more than just audio. A very basic ALSA-driver is added at the end that allows you to play music between 2 machines using aplay in one end and arecord | aplay on the other (some fiddling required). We have plans for doing the same for v4l2 eventually (but there are other fishes to fry first). The same goes for a TSN_SOCK type approach as well.

TSN is all about providing infrastructure. Although there are a few very interesting uses for TSN (reliable, deterministic network for audio and video), once you have that reliable link, you can do a lot more.

Some notes on the design:

The driver is directed via ConfigFS as we need userspace to handle stream-reservation (MSRP), discovery and enumeration (IEEE 1722.1) and whatever other management is needed. Once we have all the required attributes, we can create a link using mkdir, and use write() to set the attributes. Once ready, specify the 'shim' (basically a thin wrapper between TSN and another subsystem) and we start pushing out frames.

The network part: it ties directly into the rx-handler for receive and writes skb's using netdev_start_xmit(). This could probably be improved. Two new fields in netdev_ops have been introduced, and the Intel igb-driver has been updated (as this is available as a PCI-e card).
The igb-driver works-ish.

What remains:
- tie to (g)PTP properly, currently using ktime_get() for presentation time
- get time from shim into TSN and vice versa
- let shim create/manage buffer

Henrik Austad (8):
  TSN: add documentation
  TSN: Add the standard formerly known as AVB to the kernel
  Adding TSN-driver to Intel I210 controller
  Add TSN header for the driver
  Add TSN machinery to drive the traffic from a shim over the network
  Add TSN event-tracing
  AVB ALSA - Add ALSA shim for TSN
  MAINTAINERS: add TSN/AVB-entries

 Documentation/TSN/tsn.txt                 | 147 +
 MAINTAINERS                               |  14 +
 drivers/media/Kconfig                     |  15 +
 drivers/media/Makefile                    |   3 +-
 drivers/media/avb/Makefile                |   5 +
 drivers/media/avb/avb_alsa.c              | 742 +++
 drivers/media/avb/tsn_iec61883.h          | 124
 drivers/net/ethernet/intel/Kconfig        |  18 +
 drivers/net/ethernet/intel/igb/Makefile   |   2 +-
 drivers/net/ethernet/intel/igb/igb.h      |  19 +
 drivers/net/ethernet/intel/igb/igb_main.c |  10 +-
 drivers/net/ethernet/intel/igb/igb_tsn.c  | 396
 include/linux/netdevice.h                 |  32 +
 include/linux/tsn.h                       | 806
 include/trace/events/tsn.h                | 349 +++
 net/Kconfig                               |   1 +
 net/Makefile                              |   1 +
 net/tsn/Kconfig                           |  32 +
 net/tsn/Makefile                          |   6 +
 net/tsn/tsn_configfs.c                    | 623 +++
 net/tsn/tsn_core.c                        | 975 ++
 net/tsn/tsn_header.c                      | 203 +++
 net/tsn/tsn_internal.h                    | 383
 net/tsn/tsn_net.c                         | 403
 24 files changed, 5306 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/TSN/tsn.txt
 create mode 100644 drivers/media/avb/Makefile
 create mode 100644 drivers/media/avb/avb_alsa.c
 create mode 100644 drivers/media/avb/tsn_iec61883.h
 create mode 100644 drivers/net/ethernet/intel/igb/igb_tsn.c
 create mode 100644 include/linux/tsn.h
 create mode 100644 include/trace/events/tsn.h
 create mode 100644 net/tsn/Kconfig
 create mode 100644 net/tsn/Makefile
 create mode 100644 net/tsn/tsn_configfs.c
 create mode 100644 net/tsn/tsn_core.c
 create mode 100644 net/tsn/tsn_header.c
 create mode 100644 net/tsn/tsn_internal.h
 create mode 100644 net/tsn/tsn_net.c

--
2.7.4