Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-20 Thread Takashi Sakamoto

On 2016-06-20 18:06, Henrik Austad wrote:

On Sun, Jun 19, 2016 at 11:45:47PM +0900, Takashi Sakamoto wrote:

(remove C.C. to lkml. This is not such a major feature.)

On Jun 19 2016 07:45, Henrik Austad wrote:

snip

802.1Q gives you low latency through the network, but more importantly, no
dropped frames. gPTP gives you a central reference to time.


When such a long message is required, it means that we don't share
enough premises for this discussion.


Isn't a discussion part of how information is conveyed and finding parts
that require more knowledge?


You are only interested in gPTP and transferring AVTPDUs, with no
interest in the other points, such as "what the basic ideas of TSN come
from", "the reason that IEEE 1722 refers to the IEC 61883 series,
which was originally designed for the IEEE 1394 bus", and "the reason
that I was motivated to join this discussion even though I am not a
netdev developer".


I'm sorry, I'm not sure I follow you here. What do you mean, I don't have
any interest in where TSN comes from? Or the reason why 1722 uses IEC 61883?
What about "they picked 61883 because it made sense"?

gPTP itself is *not* about transferring audio-data, it is about agreeing on
a common time so that when you *do* transfer audio-data, the samplerate
actually means something.

Let me ask you this; if you have 2 sound-cards in your computer and you
want to attach a mic to one and speakers to the other, how do you solve
streaming of audio from the mic to the speaker? If your answer does not
contain something akin to "different timing-domain issues", I'd be very
surprised.

If you are interested in TSN for transferring *anything*, _including_
audio, you *have* to take gPTP into consideration. Just as you have to
think about stream reservation, compliant hardware and all the different
subsystems you are going to run into, either via kernel or via userspace.


Here, could I ask you a question? Do you know the role of the cycle start
packet of IEEE Std 1394?


No, I do not.

I have only passing knowledge of the firewire standard, I've looked at the
encoding described in 1722 and added that to the alsa shim as an example of
how to use TSN. As I stated, this was a *very* early version and I would
like to use TSN for audio - and more work is needed.


If you think it's not related to this discussion, please tell me.
Then I'll drop out of this thread.


There are tons of details left and right, and as I said, I'm not all too
familiar with firewire. I know that one of the authors behind the firewire
standard happened to be part of the 1722 standard.

I am currently working my way through the firewire-stack paper you've
written, and I have gotten a lot of pointers to other areas I need to dig
into, so I should be busy for a while.

That being said, Richard's point about a way to find the sample-rate of a
hardware device, and ways to influence it, is important for AVB/TSN.


History Repeats itself.


?


OK. Bye.


Takashi Sakamoto
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-20 Thread Henrik Austad
On Sun, Jun 19, 2016 at 11:45:47PM +0900, Takashi Sakamoto wrote:
> (remove C.C. to lkml. This is not such a major feature.)
> 
> On Jun 19 2016 07:45, Henrik Austad wrote:
> >snip
> >
> >802.1Q gives you low latency through the network, but more importantly, no
> >dropped frames. gPTP gives you a central reference to time.
> 
> When such a long message is required, it means that we don't share
> enough premises for this discussion.

Isn't a discussion part of how information is conveyed and finding parts 
that require more knowledge?

> You are only interested in gPTP and transferring AVTPDUs, with no
> interest in the other points, such as "what the basic ideas of TSN come
> from", "the reason that IEEE 1722 refers to the IEC 61883 series,
> which was originally designed for the IEEE 1394 bus", and "the reason
> that I was motivated to join this discussion even though I am not a
> netdev developer".

I'm sorry, I'm not sure I follow you here. What do you mean, I don't have
any interest in where TSN comes from? Or the reason why 1722 uses IEC 61883?
What about "they picked 61883 because it made sense"?

gPTP itself is *not* about transferring audio-data, it is about agreeing on
a common time so that when you *do* transfer audio-data, the samplerate
actually means something.

Let me ask you this; if you have 2 sound-cards in your computer and you
want to attach a mic to one and speakers to the other, how do you solve
streaming of audio from the mic to the speaker? If your answer does not
contain something akin to "different timing-domain issues", I'd be very
surprised.

If you are interested in TSN for transferring *anything*, _including_ 
audio, you *have* to take gPTP into consideration. Just as you have to 
think about stream reservation, compliant hardware and all the different 
subsystems you are going to run into, either via kernel or via userspace.

> Here, could I ask you a question? Do you know the role of the cycle start
> packet of IEEE Std 1394?

No, I do not.

I have only passing knowledge of the firewire standard, I've looked at the 
encoding described in 1722 and added that to the alsa shim as an example of 
how to use TSN. As I stated, this was a *very* early version and I would 
like to use TSN for audio - and more work is needed.

> If you think it's not related to this discussion, please tell me.
> Then I'll drop out of this thread.

There are tons of details left and right, and as I said, I'm not all too
familiar with firewire. I know that one of the authors behind the firewire
standard happened to be part of the 1722 standard.

I am currently working my way through the firewire-stack paper you've
written, and I have gotten a lot of pointers to other areas I need to dig
into, so I should be busy for a while.

That being said, Richard's point about a way to find the sample-rate of a
hardware device, and ways to influence it, is important for AVB/TSN.

> History Repeats itself.

?

> Takashi Sakamoto

-- 
Henrik Austad


signature.asc
Description: Digital signature


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-20 Thread Henrik Austad
On Sun, Jun 19, 2016 at 11:46:29AM +0200, Richard Cochran wrote:
> On Sun, Jun 19, 2016 at 12:45:50AM +0200, Henrik Austad wrote:
> > edit: this turned out to be a somewhat lengthy answer. I have tried to
> > shorten it down in places. It is getting late and I'm getting increasingly
> > incoherent (Richard probably knows what I'm talking about ;) so I'll stop
> > for now.
> 
> Thanks for your responses, Henrik.  I think your explanations are spot on.
> 
> > note that an adjustable sample-clock is not a *requirement* but in general 
> > you'd want to avoid resampling in software.
> 
> Yes, but..
> 
> Adjusting the local clock rate to match the AVB network rate is
> essential.  You must be able to *continuously* adjust the rate in
> order to compensate drift.  Again, there are exactly two ways to do
> it, namely in hardware (think VCO) or in software (dynamic
> resampling).

Don't get me wrong, having an adjustable clock for the sampling is
essential - but it is not *required*.

> What you cannot do is simply buffer the AV data and play it out
> blindly at the local clock rate.

No, you cannot do that, that would not be pretty :)

> Regarding the media clock, if I understand correctly, there the talker
> has two possibilities.  Either the talker samples the stream at the
> gPTP rate, or the talker must tell the listeners the relationship
> (phase offset and frequency ratio) between the media clock and the
> gPTP time.  Please correct me if I got the wrong impression...

Last first; AFAIK, there is no way for the Talker to tell a Listener the
phase offset/freq ratio other than how each end-station/bridge in the
gPTP domain calculates this from psync_update event messages. I could be
wrong though, and different encoding formats can probably convey such
information. I have not seen any such mechanism in the underlying 1722
format though.

So a Talker should send a stream sampled as if the gPTP time drove the 
AD/DA sample frequency directly. Whether the local sampling is driven by 
gPTP or resampled to match gPTP-time prior to transmit is left as an 
implementation detail for the end-station.

Did all that make sense?

Thanks!
-- 
Henrik Austad




Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-19 Thread Takashi Sakamoto

(remove C.C. to lkml. This is not such a major feature.)

On Jun 19 2016 07:45, Henrik Austad wrote:

snip

802.1Q gives you low latency through the network, but more importantly, no
dropped frames. gPTP gives you a central reference to time.


When such a long message is required, it means that we don't share enough
premises for this discussion.


You are only interested in gPTP and transferring AVTPDUs, with no
interest in the other points, such as "what the basic ideas of TSN come
from", "the reason that IEEE 1722 refers to the IEC 61883 series, which
was originally designed for the IEEE 1394 bus", and "the reason that I
was motivated to join this discussion even though I am not a netdev
developer".


Here, could I ask you a question? Do you know the role of the cycle start
packet of IEEE Std 1394?


If you think it's not related to this discussion, please tell me.
Then I'll drop out of this thread.



History Repeats itself.

Takashi Sakamoto


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-19 Thread Richard Cochran
On Sun, Jun 19, 2016 at 12:45:50AM +0200, Henrik Austad wrote:
> edit: this turned out to be a somewhat lengthy answer. I have tried to
> shorten it down in places. It is getting late and I'm getting increasingly
> incoherent (Richard probably knows what I'm talking about ;) so I'll stop
> for now.

Thanks for your responses, Henrik.  I think your explanations are spot on.

> note that an adjustable sample-clock is not a *requirement* but in general 
> you'd want to avoid resampling in software.

Yes, but..

Adjusting the local clock rate to match the AVB network rate is
essential.  You must be able to *continuously* adjust the rate in
order to compensate drift.  Again, there are exactly two ways to do
it, namely in hardware (think VCO) or in software (dynamic
resampling).

What you cannot do is simply buffer the AV data and play it out
blindly at the local clock rate.

Regarding the media clock, if I understand correctly, there the talker
has two possibilities.  Either the talker samples the stream at the
gPTP rate, or the talker must tell the listeners the relationship
(phase offset and frequency ratio) between the media clock and the
gPTP time.  Please correct me if I got the wrong impression...

Thanks,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-18 Thread Henrik Austad
On Sat, Jun 18, 2016 at 02:22:13PM +0900, Takashi Sakamoto wrote:
> Hi,

Hi Takashi,

You raise a lot of valid points and questions, I'll try to answer them.

edit: this turned out to be a somewhat lengthy answer. I have tried to
shorten it down in places. It is getting late and I'm getting increasingly
incoherent (Richard probably knows what I'm talking about ;) so I'll stop
for now.

Please post a follow-up with everything that's not clear!
Thanks!

> Sorry to be late. On weekdays, I have little time for this thread
> because I am working on alsa-lib[1]. Besides, I'm not a full-time
> developer for this kind of work. In short, I use my limited private
> time for this discussion.

Thank you for taking the time to reply to this thread then, it is much
appreciated.

> On Jun 15 2016 17:06, Richard Cochran wrote:
> > On Wed, Jun 15, 2016 at 12:15:24PM +0900, Takashi Sakamoto wrote:
> >>> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
>  I have seen audio PLL/multiplier chips that will take, for example, a
>  10 kHz input and produce your 48 kHz media clock.  With the right HW
>  design, you can tell your PTP Hardware Clock to produce a 1 PPS,
>  and you will have a synchronized AVB endpoint.  The software is all
>  there already.  Somebody should tell the ALSA guys about it.
> >>
> >> Just from my curiosity, could I ask you more explanation for it in ALSA
> >> side?
> > 
> > (Disclaimer: I really don't know too much about ALSA, except that it is
> > fairly big and complex ;)
> 
> This morning, I read IEEE 1722:2011 and realized that it refers quite
> loosely to IEC 61883-1/6 and leaves many ambiguities to end
> applications.

As far as I know, 1722 aims to describe how the data is wrapped in AVTPDUs
(and likewise for control data), not how the end-station should implement
it.

If there are ambiguities, would you mind listing a few? It would serve as a
useful guide for spotting other pitfalls as well (thanks!)

> (In my opinion, the authors just focus on packets with timestamps,
> without considering enough how to implement endpoint applications
> which perform semi-real-time sampling, fetching, queueing and so on,
> just as you do. They're satisfied just by handling packets with
> timestamps, without enough consideration of actual hardware/software
> applications.)

You are correct, none of the standards explain exactly how it should be
implemented, only what the end result should look like. One target of this
collection of standards is embedded, dedicated AV equipment, and the
authors have no way of knowing (nor should they care, I think) the
underlying architecture of these.

> > Here is what I think ALSA should provide:
> > 
> > - The DA and AD clocks should appear as attributes of the HW device.

This would be very useful and helpful when determining whether the HW
clock is falling behind or racing ahead of the gPTP time domain. It will
also help in finding the capture time, or in calculating when a sample in
the buffer will be played back by the device.

> > - There should be a method for measuring the DA/AD clock rate with
> >   respect to both the system time and the PTP Hardware Clock (PHC)
> >   time.

as above.

> > - There should be a method for adjusting the DA/AD clock rate if
> >   possible.  If not, then ALSA should fall back to sample rate
> >   conversion.

This is not a requirement from the standard, but will help avoid costly 
resampling. At least it should be possible to detect the *need* for 
resampling so that we can try to avoid underruns.

> > - There should be a method to determine the time delay from the point
> >   when the audio data are enqueued into ALSA until they pass through
> >   the D/A converter.  If this cannot be known precisely, then the
> >   library should provide an estimate with an error bound.
> > 
> > - I think some AVB use cases will need to know the time delay from A/D
> >   until the data are available to the local application.  (Distributed
> >   microphones?  I'm not too sure about that.)

yes, if you have multiple microphones that you want to combine into a
stream and do signal processing on, some cases require sample-sync (so
within 1 us accuracy for 48kHz).

> > - If the DA/AD clocks are connected to other clock devices in HW,
> >   there should be a way to find this out in SW.  For example, if SW
> >   can see the PTP-PHC-PLL-DA relationship from the above example, then
> >   it knows how to synchronize the DA clock using the network.
> > 
> >   [ Implementing this point involves other subsystems beyond ALSA.  It
> > isn't really necessary for people designing AVB systems, since
> > they know their designs, but it would be nice to have for writing
> > generic applications that can deal with any kind of HW setup. ]
> 
> Depends on which subsystem decides "AVTP presentation time"[3]. 

Presentation time is either set by
a) Local sound card performing capture (in which case it will be 'capture 
 

Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-17 Thread Takashi Sakamoto
Hi,

Sorry to be late. On weekdays, I have little time for this thread
because I am working on alsa-lib[1]. Besides, I'm not a full-time
developer for this kind of work. In short, I use my limited private time
for this discussion.

On Jun 15 2016 17:06, Richard Cochran wrote:
> On Wed, Jun 15, 2016 at 12:15:24PM +0900, Takashi Sakamoto wrote:
>>> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
 I have seen audio PLL/multiplier chips that will take, for example, a
 10 kHz input and produce your 48 kHz media clock.  With the right HW
 design, you can tell your PTP Hardware Clock to produce a 1 PPS,
 and you will have a synchronized AVB endpoint.  The software is all
 there already.  Somebody should tell the ALSA guys about it.
>>
>> Just from my curiosity, could I ask you more explanation for it in ALSA
>> side?
> 
> (Disclaimer: I really don't know too much about ALSA, except that it is
> fairly big and complex ;)

This morning, I read IEEE 1722:2011 and realized that it refers quite
loosely to IEC 61883-1/6 and leaves many ambiguities to end
applications.

(In my opinion, the authors just focus on packets with timestamps,
without considering enough how to implement endpoint applications
which perform semi-real-time sampling, fetching, queueing and so on,
just as you do. They're satisfied just by handling packets with
timestamps, without enough consideration of actual hardware/software
applications.)

> Here is what I think ALSA should provide:
> 
> - The DA and AD clocks should appear as attributes of the HW device.
> 
> - There should be a method for measuring the DA/AD clock rate with
>   respect to both the system time and the PTP Hardware Clock (PHC)
>   time.
> 
> - There should be a method for adjusting the DA/AD clock rate if
>   possible.  If not, then ALSA should fall back to sample rate
>   conversion.
> 
> - There should be a method to determine the time delay from the point
>   when the audio data are enqueued into ALSA until they pass through
>   the D/A converter.  If this cannot be known precisely, then the
>   library should provide an estimate with an error bound.
> 
> - I think some AVB use cases will need to know the time delay from A/D
>   until the data are available to the local application.  (Distributed
>   microphones?  I'm not too sure about that.)
> 
> - If the DA/AD clocks are connected to other clock devices in HW,
>   there should be a way to find this out in SW.  For example, if SW
>   can see the PTP-PHC-PLL-DA relationship from the above example, then
>   it knows how to synchronize the DA clock using the network.
> 
>   [ Implementing this point involves other subsystems beyond ALSA.  It
> isn't really necessary for people designing AVB systems, since
> they know their designs, but it would be nice to have for writing
> generic applications that can deal with any kind of HW setup. ]

Depends on which subsystem decides "AVTP presentation time"[3]. This
value determines the number of events included in an IEC 61883-1
packet. If this TSN subsystem decides it, most of these items don't need
to be in ALSA.

As far as I know, the number of AVTPDUs per second seems not to be
fixed. So each application is not allowed to calculate the timestamp in
its own way unless the TSN implementation gives the information to each
application.

For your information, in the current ALSA implementation of IEC 61883-1/6
on the IEEE 1394 bus, the presentation timestamp is decided on the ALSA
side. The number of isochronous packets transmitted per second is fixed
at 8,000 in IEEE 1394, and the number of data blocks in an IEC 61883-1
packet is deterministic according to the 'sampling transfer frequency' in
IEC 61883-6 and the isochronous cycle count passed from the Linux
FireWire subsystem.

In the TSN subsystem, like the FireWire subsystem, the callback for
filling the payload should have information about 'when the packet is
scheduled to be transmitted'. With that information, each application can
calculate the number of events in the packet and the presentation
timestamp. Of course, this timestamp should be handled as
'avtp_timestamp' in packet queueing.

>> In ALSA, sampling rate conversion should be in userspace, not in kernel
>> land. In alsa-lib, sampling rate conversion is implemented in a shared
>> object. When userspace applications start playback/capture, depending on
>> the PCM node to access, these applications load the shared object and
>> convert PCM frames from a buffer in userspace to the mmapped DMA-buffer,
>> then commit them.
> 
> The AVB use case places an additional requirement on the rate
> conversion.  You will need to adjust the frequency on the fly, as the
> stream is playing.  I would guess that ALSA doesn't have that option?

With the current ALSA kernel/userspace interfaces, this requirement
cannot be supported at all.

Please explain this requirement: where it comes from, and which
specification and clause describe it (802.1AS or 802.1Q?). As far as I
read IEEE 1722, I cannot 

Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-15 Thread Richard Cochran
On Wed, Jun 15, 2016 at 09:04:41AM +0200, Richard Cochran wrote:
> On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote:
> > Whereas I want to do 
> > 
> > aplay some_song.wav
> 
> Can you please explain how your patches accomplish this?

Never mind.  Looking back, I found it in patch #7.

Thanks,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-15 Thread Richard Cochran
On Wed, Jun 15, 2016 at 12:15:24PM +0900, Takashi Sakamoto wrote:
> > On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> >> I have seen audio PLL/multiplier chips that will take, for example, a
> >> 10 kHz input and produce your 48 kHz media clock.  With the right HW
> >> design, you can tell your PTP Hardware Clock to produce a 1 PPS,
> >> and you will have a synchronized AVB endpoint.  The software is all
> >> there already.  Somebody should tell the ALSA guys about it.
> 
> Just from my curiosity, could I ask you more explanation for it in ALSA
> side?

(Disclaimer: I really don't know too much about ALSA, except that it is
fairly big and complex ;)

Here is what I think ALSA should provide:

- The DA and AD clocks should appear as attributes of the HW device.

- There should be a method for measuring the DA/AD clock rate with
  respect to both the system time and the PTP Hardware Clock (PHC)
  time.

- There should be a method for adjusting the DA/AD clock rate if
  possible.  If not, then ALSA should fall back to sample rate
  conversion.

- There should be a method to determine the time delay from the point
  when the audio data are enqueued into ALSA until they pass through
  the D/A converter.  If this cannot be known precisely, then the
  library should provide an estimate with an error bound.

- I think some AVB use cases will need to know the time delay from A/D
  until the data are available to the local application.  (Distributed
  microphones?  I'm not too sure about that.)

- If the DA/AD clocks are connected to other clock devices in HW,
  there should be a way to find this out in SW.  For example, if SW
  can see the PTP-PHC-PLL-DA relationship from the above example, then
  it knows how to synchronize the DA clock using the network.

  [ Implementing this point involves other subsystems beyond ALSA.  It
isn't really necessary for people designing AVB systems, since
they know their designs, but it would be nice to have for writing
generic applications that can deal with any kind of HW setup. ]

> In ALSA, sampling rate conversion should be in userspace, not in kernel
> land. In alsa-lib, sampling rate conversion is implemented in a shared
> object. When userspace applications start playback/capture, depending on
> the PCM node to access, these applications load the shared object and
> convert PCM frames from a buffer in userspace to the mmapped DMA-buffer,
> then commit them.

The AVB use case places an additional requirement on the rate
conversion.  You will need to adjust the frequency on the fly, as the
stream is playing.  I would guess that ALSA doesn't have that option?

Thanks,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-15 Thread Henrik Austad
On Wed, Jun 15, 2016 at 09:04:41AM +0200, Richard Cochran wrote:
> On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote:
> > Whereas I want to do 
> > 
> > aplay some_song.wav
> 
> Can you please explain how your patches accomplish this?

In short:

modprobe tsn
modprobe avb_alsa
mkdir /sys/kernel/config/eth0/link
cd /sys/kernel/config/eth0/link

echo alsa > enabled
aplay -Ddefault:CARD=avb some_song.wav

Likewise on the receiver side, except add 'Listener' to end_station 
attribute

arecord -c2 -r48000 -f S16_LE -Ddefault:CARD=avb > some_recording.wav

I've not had time to fully fix the hw-params for alsa, so some manual
tweaking of arecord is required.


Again, this is a very early attempt to get something useful done with TSN.
I know there are rough edges, and I know buffer handling and timestamping
are not finished.


Note: if you don't have an Intel card, load tsn in debug-mode and it will
let you use all NICs present.

modprobe tsn in_debug=1


-- 
Henrik Austad




Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-15 Thread Richard Cochran
On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote:
> Where is your media-application in this?

Um, that *is* a media application.  It plays music on the sound card.

> You only loop the audio from 
> network to the dsp, is the media-application attached to the dsp-device?

Sorry, I thought the old OSS API would be familiar and easy to
understand.  The /dev/dsp is the sound card.

Thanks,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-15 Thread Richard Cochran
On Tue, Jun 14, 2016 at 10:38:10PM +0200, Henrik Austad wrote:
> Whereas I want to do 
> 
> aplay some_song.wav

Can you please explain how your patches accomplish this?

Thanks,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread Takashi Sakamoto

Hi Richard,

On Tue, 14 Jun 2016 19:04:44 +0200, Richard Cochran wrote:
>> Well, I guess I should have said, I am not too familiar with the
>> breadth of current audio hardware, high end or low end.  Of course I
>> would like to see even consumer devices work with AVB, but it is up to
>> the ALSA people to make that happen.  So far, nothing has been done,
>> afaict.

In the OSS world, there are few developers for this kind of device, even
in the alsa-project. Furthermore, manufacturers of recording equipment
have no interest in OSS.


In short, what we can do for these devices is just reverse-engineering.
For Ethernet-AVB models, that might just be to transmit or receive
packets, and read them. The devices are still black-boxes and we have no
way to reveal their details.


So when you need the details to implement something on your side, few
developers can tell you, I think.



Regards

Takashi Sakamoto


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread Takashi Sakamoto

Hi Richard,

> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
>> 3. ALSA support for tunable AD/DA clocks.  The rate of the Listener's
>>DA clock must match that of the Talker and the other Listeners.
>>Either you adjust it in HW using a VCO or similar, or you do
>>adaptive sample rate conversion in the application. (And that is
>>another reason for *not* having a shared kernel buffer.)  For the
>>Talker, either you adjust the AD clock to match the PTP time, or
>>you measure the frequency offset.
>>
>> I have seen audio PLL/multiplier chips that will take, for example, a
>> 10 kHz input and produce your 48 kHz media clock.  With the right HW
>> design, you can tell your PTP Hardware Clock to produce a 1 PPS,
>> and you will have a synchronized AVB endpoint.  The software is all
>> there already.  Somebody should tell the ALSA guys about it.

Just from my curiosity, could I ask you more explanation for it in ALSA 
side?


A similar mechanism to synchronize endpoints was also applied to audio
and music units on the IEEE 1394 bus. According to IEC 61883-1/6, some of
these actual units can generate a presentation timestamp from the header
information of the 8,000 packets per second, and utilize the signal as a
sampling clock[1].


There are many differences between IEC 61883-1/6 on the IEEE 1394 bus and
Audio Video Bridging on Ethernet[2], especially for synchronization, but
on this point of transferring a synchronization signal and time-based
data, we have similar requirements for software implementations, I think.


My motivation for joining this discussion is to help make it clear how to
implement packet-oriented drivers in ALSA kernel-land, and to enhance my
work on drivers that handle IEC 61883-1/6 on the IEEE 1394 bus.


>> I don't know if ALSA has anything for sample rate conversion or not,
>> but haven't seen anything that addresses distributed synchronized
>> audio applications.

In ALSA, sampling rate conversion should be in userspace, not in kernel
land. In alsa-lib, sampling rate conversion is implemented in a shared
object. When userspace applications start playback/capture, depending on
the PCM node to access, these applications load the shared object and
convert PCM frames from a buffer in userspace to the mmapped DMA-buffer,
then commit them.


Before establishing a PCM substream, userspace applications and
in-kernel drivers communicate to decide the sampling rate, PCM frame
format, the size of the PCM buffer, and so on (see snd_pcm_hw_params()
and ioctl(SNDRV_PCM_IOCTL_HW_PARAMS)). Thus, as long as in-kernel drivers
know the specifications of the endpoints, userspace applications can
start PCM substreams correctly.



[1] In detail, please refer to specification of 1394TA I introduced:
http://www.spinics.net/lists/netdev/msg381259.html
[2] I guess that the IEC 61883-1/6 packet for Ethernet-AVB is a mutant
of the original specifications.



Regards

Takashi Sakamoto


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread Henrik Austad
On Tue, Jun 14, 2016 at 08:26:15PM +0200, Richard Cochran wrote:
> On Tue, Jun 14, 2016 at 11:30:00AM +0200, Henrik Austad wrote:
> > So loop data from kernel -> userspace -> kernelspace and finally back to 
> > userspace and the media application?
> 
> Huh?  I wonder where you got that idea.  Let me show an example of
> what I mean.
> 
>   void listener()
>   {
>   int in = socket();
>   int out = open("/dev/dsp");
>   char buf[];
> 
>   while (1) {
>   recv(in, buf, packetsize);
>   write(out, buf + offset, datasize);
>   }
>   }
> 
> See?

Where is your media-application in this? You only loop the audio from 
network to the dsp, is the media-application attached to the dsp-device?

Whereas I want to do 

aplay some_song.wav
or mplayer
or spotify
or ..


> > Yes, I know some audio apps "use networking", I can stream netradio, I can 
> > use jack to connect devices using RTP and probably a whole lot of other 
> > applications do similar things. However, AVB is more about using the 
> > network as a virtual sound-card.
> 
> That is news to me.  I don't recall ever having seen AVB described
> like that before.
> 
> > For the media application, it should not 
> > have to care if the device it is using is a soundcard inside the box or a 
> > set of AVB-capable speakers somewhere on the network.
> 
> So you would like a remote listener to appear in the system as a local
> PCM audio sink?  And a remote talker would be like a local media URL?
> Sounds unworkable to me, but even if you were to implement it, the
> logic would surely belong in alsa-lib and not in the kernel.  Behind
> the emulated device, the library would run a loop like the example,
> above.
> 
> In any case, your patches don't implement that sort of thing at all,
> do they?

Subject: [very-RFC 7/8] AVB ALSA - Add ALSA shim for TSN

Did you even bother to look?

-- 
Henrik Austad




Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread Richard Cochran
On Tue, Jun 14, 2016 at 11:30:00AM +0200, Henrik Austad wrote:
> So loop data from kernel -> userspace -> kernelspace and finally back to 
> userspace and the media application?

Huh?  I wonder where you got that idea.  Let me show an example of
what I mean.

void listener()
{
int in = socket();
int out = open("/dev/dsp");
char buf[];

while (1) {
recv(in, buf, packetsize);
write(out, buf + offset, datasize);
}
}

See?

> Yes, I know some audio apps "use networking", I can stream netradio, I can 
> use jack to connect devices using RTP and probably a whole lot of other 
> applications do similar things. However, AVB is more about using the 
> network as a virtual sound-card.

That is news to me.  I don't recall ever having seen AVB described
like that before.

> For the media application, it should not 
> have to care if the device it is using is a soundcard inside the box or a 
> set of AVB-capable speakers somewhere on the network.

So you would like a remote listener to appear in the system as a local
PCM audio sink?  And a remote talker would be like a local media URL?
Sounds unworkable to me, but even if you were to implement it, the
logic would surely belong in alsa-lib and not in the kernel.  Behind
the emulated device, the library would run a loop like the example,
above.

In any case, your patches don't implement that sort of thing at all,
do they?

Thanks,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread Richard Cochran
On Tue, Jun 14, 2016 at 12:18:44PM +0100, One Thousand Gnomes wrote:
> On Mon, 13 Jun 2016 21:51:36 +0200
> Richard Cochran  wrote:
> > 
> > Actually, we already have support for tunable clock-like HW elements,
> > namely the dynamic posix clock API.  It is trivial to write a driver
> > for VCO or the like.  I am just not too familiar with the latest high
> > end audio devices.
> 
> Why high end ? Even the most basic USB audio is frame based and
> isochronous to the USB clock. It also reports back the delay
> properties.

Well, I guess I should have said, I am not too familiar with the
breadth of current audio hardware, high end or low end.  Of course I
would like to see even consumer devices work with AVB, but it is up to
the ALSA people to make that happen.  So far, nothing has been done,
afaict.

Thanks,
Richard



Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread One Thousand Gnomes
On Mon, 13 Jun 2016 21:51:36 +0200
Richard Cochran  wrote:

> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> > 3. ALSA support for tunable AD/DA clocks.  The rate of the Listener's
> >DA clock must match that of the Talker and the other Listeners.
> >Either you adjust it in HW using a VCO or similar, or you do
> >adaptive sample rate conversion in the application. (And that is
> >another reason for *not* having a shared kernel buffer.)  For the
> >Talker, either you adjust the AD clock to match the PTP time, or
> >you measure the frequency offset.  
> 
> Actually, we already have support for tunable clock-like HW elements,
> namely the dynamic posix clock API.  It is trivial to write a driver
> for VCO or the like.  I am just not too familiar with the latest high
> end audio devices.

Why high end ? Even the most basic USB audio is frame based and
isochronous to the USB clock. It also reports back the delay
properties.

Alan


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread Henrik Austad
On Mon, Jun 13, 2016 at 09:32:10PM +0200, Richard Cochran wrote:
> On Mon, Jun 13, 2016 at 03:00:59PM +0200, Henrik Austad wrote:
> > On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> > > Which driver is that?
> > 
> > drivers/net/ethernet/renesas/
> 
> That driver is merely a PTP capable MAC driver, nothing more.
> Although AVB is in the device name, the driver doesn't implement
> anything beyond the PTP bits.

Yes, I think they do the rest from userspace, not sure though :)

> > What is the rationale for no new sockets? To avoid cluttering? or do 
> > sockets have a drawback I'm not aware of?
> 
> The current raw sockets will work just fine.  Again, there should be an
> application that sits in between the network socket and the audio
> interface.

So loop data from kernel -> userspace -> kernelspace and finally back to 
userspace and the media application? I agree that you need a way to pipe 
the incoming data directly from the network to userspace for those TSN 
users that can handle it. But again, for media-applications that don't know 
(or care) about AVB, it should be fed to ALSA/v4l2 directly and not jump 
between kernel and userspace an extra round.

I get the point of not including every single audio/video encoder in the 
kernel, but raw audio should be piped directly to ALSA. V4L2 has a way of 
piping encoded video through the system and to the media application (in 
order to support cameras that do encoding). The same approach should be 
doable for AVB, no? (someone from alsa/v4l2 should probably comment on 
this)

> > Why is configfs wrong?
> 
> Because the application will use the already existing network and
> audio interfaces to configure the system.

Configuring this via the audio-interface is going to be a challenge since 
you need to configure the stream through the network before you can create 
the audio interface. If not, you will have to either drop data or block the 
caller until the link has been fully configured.

This is actually the reason why configfs is used in the series now, as it 
allows userspace to figure out all the different attributes and configure 
the link before letting ALSA start pushing data.

> > > Lets take a look at the big picture.  One aspect of TSN is already
> > > fully supported, namely the gPTP.  Using the linuxptp user stack and a
> > > modern kernel, you have a complete 802.1AS-2011 solution.
> > 
> > Yes, I thought so, which is also why I have put that to the side and why 
> > I'm using ktime_get() for timestamps at the moment. There's also the issue 
> > of hooking the time into ALSA/V4L2
> 
> So lets get that issue solved before anything else.  It is absolutely
> essential for TSN.  Without the synchronization, you are only playing
> audio over the network.  We already have software for that.

Yes, I agree, presentation-time and local time need to be handled 
properly. The same goes for adjusting sample-rate etc. This is a lot of 
work, so I hope you can understand why I started out with a simple approach 
to spark a discussion before moving on to the larger bits.

> > > 2. A user space audio application that puts it all together, making
> > >use of the services in #1, the linuxptp gPTP service, the ALSA
> > >services, and the network connections.  This program will have all
> > >the knowledge about packet formats, AV encodings, and the local HW
> > >capabilities.  This program cannot yet be written, as we still need
> > >some kernel work in the audio and networking subsystems.
> > 
> > Why?
> 
> Because user space is the right place for the knowledge of the myriad
> formats and options.

See response above; better to let anything but uncompressed raw data trickle 
through.

> > the whole point should be to make it as easy for userspace as 
> > possible. If you need to tailor each individual media-application to use 
> > AVB, it is not going to be very useful outside pro-Audio. Sure, there will 
> > be challenges, but one key element here should be to *not* require 
> > upgrading every single media application.
> > 
> > Then, back to the suggestion of adding a TSN_SOCKET (which you didn't like, 
> > but can we agree on a term "raw interface to TSN", and mode of transport 
> > can be defined later? ), was to let those applications that are TSN-aware 
> > to do what they need to do, whether it is controlling robots or media 
> > streams.
> 
> First you say you don't want to upgrade media applications, but then
> you invent a new socket type.  That is a contradiction in terms.

Hehe, no, bad phrasing on my part. I want *both* (hence the shim-interface) 
:)

> Audio apps already use networking, and they already use the audio
> subsystem.  We need to help them get their job done by providing the
> missing kernel interfaces.  They don't need extra magic buffering the
> kernel.  They already can buffer audio data by themselves.

Yes, I know some audio apps "use networking", I can stream netradio, I can 
use jack to 

Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-14 Thread Henrik Austad
On Mon, Jun 13, 2016 at 08:56:44AM -0700, John Fastabend wrote:
> On 16-06-13 04:47 AM, Richard Cochran wrote:
> > [...]
> > Here is what is missing to support audio TSN:
> > 
> > * User Space
> > 
> > 1. A proper userland stack for AVDECC, MAAP, FQTSS, and so on.  The
> >OpenAVB project does not offer much beyond simple examples.
> > 
> > 2. A user space audio application that puts it all together, making
> >use of the services in #1, the linuxptp gPTP service, the ALSA
> >services, and the network connections.  This program will have all
> >the knowledge about packet formats, AV encodings, and the local HW
> >capabilities.  This program cannot yet be written, as we still need
> >some kernel work in the audio and networking subsystems.
> > 
> > * Kernel Space
> > 
> > 1. Providing frames with a future transmit time.  For normal sockets,
> >this can be in the CMSG data.  For mmap'ed buffers, we will need a
> >new format.  (I think Arnd is working on a new layout.)
> > 
> > 2. Time based qdisc for transmitted frames.  For MACs that support
> >this (like the i210), we only have to place the frame into the
> >correct queue.  For normal HW, we want to be able to reserve a time
> >window in which non-TSN frames are blocked.  This is some work, but
> >in the end it should be a generic solution that not only works
> >"perfectly" with TSN HW but also provides best effort service using
> >any NIC.
> > 
> 
> When I looked at this awhile ago I convinced myself that it could fit
> fairly well into the DCB stack (DCB is also part of 802.1Q). A lot of
> the traffic class to queue mappings and priories could be handled here.
> It might be worth taking a look at ./net/sched/mqprio.c and ./net/dcb/.

Interesting, I'll have a look at dcb and mqprio, I'm not familiar with 
those systems. Thanks for pointing those out!
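For the archive: the mqprio mapping John refers to is driven from userspace with tc. A hypothetical invocation, assuming an i210-style NIC with four queues (all numbers illustrative: priority 3 to TC0 for class A, priority 2 to TC1 for class B, everything else best-effort):

```sh
# Map the SR classes onto dedicated HW queues with mqprio (sketch only).
tc qdisc add dev eth0 root mqprio num_tc 3 \
    map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
    queues 1@0 1@1 2@2 hw 0
```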

I hope that the complexity doesn't run crazy though; TSN is not aimed at 
datacenters, a lot of the endpoints are going to be embedded devices, and 
introducing a massive stack for handling every eventuality in 802.1Q is 
going to be counterproductive.

> Unfortunately I didn't get too far along but we probably don't want
> another mechanism to map hw queues/tcs/etc if the existing interfaces
> work or can be extended to support this.

Sure, I get that, as long as the complexity for setting up a link doesn't 
go through the roof :)

Thanks!

-- 
Henrik Austad




Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-13 Thread Richard Cochran
On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> 3. ALSA support for tunable AD/DA clocks.  The rate of the Listener's
>DA clock must match that of the Talker and the other Listeners.
>Either you adjust it in HW using a VCO or similar, or you do
>adaptive sample rate conversion in the application. (And that is
>another reason for *not* having a shared kernel buffer.)  For the
>Talker, either you adjust the AD clock to match the PTP time, or
>you measure the frequency offset.

Actually, we already have support for tunable clock-like HW elements,
namely the dynamic posix clock API.  It is trivial to write a driver
for VCO or the like.  I am just not too familiar with the latest high
end audio devices.

I have seen audio PLL/multiplier chips that will take, for example, a
10 kHz input and produce your 48 kHz media clock.  With the right HW
design, you can tell your PTP Hardware Clock to produce a 1 PPS,
and you will have a synchronized AVB endpoint.  The software is all
there already.  Somebody should tell the ALSA guys about it.

I don't know if ALSA has anything for sample rate conversion or not,
but haven't seen anything that addresses distributed synchronized
audio applications.

Thanks,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-13 Thread Richard Cochran
On Mon, Jun 13, 2016 at 03:00:59PM +0200, Henrik Austad wrote:
> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> > People have been asking me about TSN and Linux, and we've made some
> > thoughts about it.  The interest is there, and so I am glad to see
> > discussion on this topic.
> 
> I'm not aware of any such discussions, could you point me to where TSN has 
> been discussed, it would be nice to see other peoples thought on the matter 
> (which was one of the ideas behind this series in the first place)

To my knowledge, there hasn't been any previous TSN talk on lkml.

(You have just now started the discussion ;)

Sorry for not being clear.  

Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-13 Thread Richard Cochran
On Mon, Jun 13, 2016 at 03:00:59PM +0200, Henrik Austad wrote:
> On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> > Which driver is that?
> 
> drivers/net/ethernet/renesas/

That driver is merely a PTP capable MAC driver, nothing more.
Although AVB is in the device name, the driver doesn't implement
anything beyond the PTP bits.
 
> What is the rationale for no new sockets? To avoid cluttering? or do 
> sockets have a drawback I'm not aware of?

The current raw sockets will work just fine.  Again, there should be an
application that sits in between the network socket and the audio
interface.
 
> Why is configfs wrong?

Because the application will use the already existing network and
audio interfaces to configure the system.

> > Lets take a look at the big picture.  One aspect of TSN is already
> > fully supported, namely the gPTP.  Using the linuxptp user stack and a
> > modern kernel, you have a complete 802.1AS-2011 solution.
> 
> Yes, I thought so, which is also why I have put that to the side and why 
> I'm using ktime_get() for timestamps at the moment. There's also the issue 
> of hooking the time into ALSA/V4L2

So lets get that issue solved before anything else.  It is absolutely
essential for TSN.  Without the synchronization, you are only playing
audio over the network.  We already have software for that.
 
> > 2. A user space audio application that puts it all together, making
> >use of the services in #1, the linuxptp gPTP service, the ALSA
> >services, and the network connections.  This program will have all
> >the knowledge about packet formats, AV encodings, and the local HW
> >capabilities.  This program cannot yet be written, as we still need
> >some kernel work in the audio and networking subsystems.
> 
> Why?

Because user space is the right place for the knowledge of the myriad
formats and options.

> the whole point should be to make it as easy for userspace as 
> possible. If you need to tailor each individual media-application to use 
> AVB, it is not going to be very useful outside pro-Audio. Sure, there will 
> be challenges, but one key element here should be to *not* require 
> upgrading every single media application.
> 
> Then, back to the suggestion of adding a TSN_SOCKET (which you didn't like, 
> but can we agree on a term "raw interface to TSN", and mode of transport 
> can be defined later? ), was to let those applications that are TSN-aware 
> to do what they need to do, whether it is controlling robots or media 
> streams.

First you say you don't want to upgrade media applications, but then
you invent a new socket type.  That is a contradiction in terms.

Audio apps already use networking, and they already use the audio
subsystem.  We need to help them get their job done by providing the
missing kernel interfaces.  They don't need extra magic buffering the
kernel.  They already can buffer audio data by themselves.

> > * Kernel Space
> > 
> > 1. Providing frames with a future transmit time.  For normal sockets,
> >this can be in the CMSG data.  For mmap'ed buffers, we will need a
> >new format.  (I think Arnd is working on a new layout.)
> 
> Ah, I was unaware of this, both CMSG and mmap buffers.
> 
> What is the accuracy of deferred transmit? If you have a class A stream, 
> you push out a new frame every 125 us, you may end up with 
> accuracy-constraints lower than that if you want to be able to state "send 
> frame X at time Y".

I have no idea what you are asking here.
 
Sorry,
Richard


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-13 Thread John Fastabend
On 16-06-13 04:47 AM, Richard Cochran wrote:
> Henrik,
> 
> On Sun, Jun 12, 2016 at 01:01:28AM +0200, Henrik Austad wrote:
>> There is at least one AVB-driver (the AV-part of TSN) in the kernel
>> already,
> 
> Which driver is that?
> 
>> however this driver aims to solve a wider scope as TSN can do
>> much more than just audio. A very basic ALSA-driver is added to the end
>> that allows you to play music between 2 machines using aplay in one end
>> and arecord | aplay on the other (some fiddling required) We have plans
>> for doing the same for v4l2 eventually (but there are other fishes to
>> fry first). The same goes for a TSN_SOCK type approach as well.
> 
> Please, no new socket type for this.
>  
>> What remains
>> - tie to (g)PTP properly, currently using ktime_get() for presentation
>>   time
>> - get time from shim into TSN and vice versa
> 
> ... and a whole lot more, see below.
> 
>> - let shim create/manage buffer
> 
> (BTW, shim is a terrible name for that.)
> 
> [sigh]
> 
> People have been asking me about TSN and Linux, and we've made some
> thoughts about it.  The interest is there, and so I am glad to see
> discussion on this topic.
> 
> Having said that, your series does not even begin to address the real
> issues.  I did not review the patches too carefully (because the
> important stuff is missing), but surely configfs is the wrong
> interface for this.  In the end, we will be able to support TSN using
> the existing networking and audio interfaces, adding appropriate
> extensions.
> 
> Your patch features a buffer shared by networking and audio.  This
> isn't strictly necessary for TSN, and it may be harmful.  The
> Listeners are supposed to calculate the delay from frame reception to
> the DA conversion.  They can easily include the time needed for a user
> space program to parse the frames, copy (and combine/convert) the
> data, and re-start the audio transfer.  A flexible TSN implementation
> will leave all of the format and encoding task to the userland.  After
> all, TSN will include more than just AV data, as you know.
> 
> Lets take a look at the big picture.  One aspect of TSN is already
> fully supported, namely the gPTP.  Using the linuxptp user stack and a
> modern kernel, you have a complete 802.1AS-2011 solution.
> 
> Here is what is missing to support audio TSN:
> 
> * User Space
> 
> 1. A proper userland stack for AVDECC, MAAP, FQTSS, and so on.  The
>OpenAVB project does not offer much beyond simple examples.
> 
> 2. A user space audio application that puts it all together, making
>use of the services in #1, the linuxptp gPTP service, the ALSA
>services, and the network connections.  This program will have all
>the knowledge about packet formats, AV encodings, and the local HW
>capabilities.  This program cannot yet be written, as we still need
>some kernel work in the audio and networking subsystems.
> 
> * Kernel Space
> 
> 1. Providing frames with a future transmit time.  For normal sockets,
> >this can be in the CMSG data.  For mmap'ed buffers, we will need a
>new format.  (I think Arnd is working on a new layout.)
> 
> 2. Time based qdisc for transmitted frames.  For MACs that support
>this (like the i210), we only have to place the frame into the
>correct queue.  For normal HW, we want to be able to reserve a time
>window in which non-TSN frames are blocked.  This is some work, but
>in the end it should be a generic solution that not only works
>"perfectly" with TSN HW but also provides best effort service using
>any NIC.
> 

When I looked at this awhile ago I convinced myself that it could fit
fairly well into the DCB stack (DCB is also part of 802.1Q). A lot of
the traffic class to queue mappings and priories could be handled here.
It might be worth taking a look at ./net/sched/mqprio.c and ./net/dcb/.

Unfortunately I didn't get too far along but we probably don't want
another mechanism to map hw queues/tcs/etc if the existing interfaces
work or can be extended to support this.

> 3. ALSA support for tunable AD/DA clocks.  The rate of the Listener's
>DA clock must match that of the Talker and the other Listeners.
>Either you adjust it in HW using a VCO or similar, or you do
>adaptive sample rate conversion in the application. (And that is
>another reason for *not* having a shared kernel buffer.)  For the
>Talker, either you adjust the AD clock to match the PTP time, or
>you measure the frequency offset.
> 
> 4. ALSA support for time triggered playback.  The patch series
>completely ignore the critical issue of media clock recovery.  The
>Listener must buffer the stream in order to play it exactly at a
>specified time.  It cannot simply send the stream ASAP to the audio
>HW, because some other Listener might need longer.  AFAICT, there
>is nothing in ALSA that allows you to say, sample X should be
>played at time Y.
> 
> These are some ideas about 

Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-13 Thread Arnd Bergmann
On Monday, June 13, 2016 1:47:13 PM CEST Richard Cochran wrote:
> * Kernel Space
> 
> 1. Providing frames with a future transmit time.  For normal sockets,
>this can be in the CMSG data.  For mmap'ed buffers, we will need a
>new format.  (I think Arnd is working on a new layout.)
> 

After some back and forth, I think the conclusion for now was that
the timestamps in the current v3 format are sufficient until 2106
as long as we treat them as 'unsigned', so we don't need the new
format for y2038, but if we get a new format, that should definitely
use 64-bit timestamps because that is the right thing to do.

Arnd


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-13 Thread Henrik Austad
On Mon, Jun 13, 2016 at 01:47:13PM +0200, Richard Cochran wrote:
> Henrik,

Hi Richard,

> On Sun, Jun 12, 2016 at 01:01:28AM +0200, Henrik Austad wrote:
> > There is at least one AVB-driver (the AV-part of TSN) in the kernel
> > already,
> 
> Which driver is that?

drivers/net/ethernet/renesas/

> > however this driver aims to solve a wider scope as TSN can do
> > much more than just audio. A very basic ALSA-driver is added to the end
> > that allows you to play music between 2 machines using aplay in one end
> > and arecord | aplay on the other (some fiddling required) We have plans
> > for doing the same for v4l2 eventually (but there are other fishes to
> > fry first). The same goes for a TSN_SOCK type approach as well.
> 
> Please, no new socket type for this.

The idea was to create a tsn-driver and then allow userspace to use it 
either for media or for whatever else they'd like - and then a socket made 
sense. Or so I thought :)

What is the rationale for no new sockets? To avoid cluttering? or do 
sockets have a drawback I'm not aware of?

> > What remains
> > - tie to (g)PTP properly, currently using ktime_get() for presentation
> >   time
> > - get time from shim into TSN and vice versa
> 
> ... and a whole lot more, see below.
> 
> > - let shim create/manage buffer
> 
> (BTW, shim is a terrible name for that.)

So what should something thin placed between two subsystems rather be 
called.. flimsy? The point of the name was to indicate that it glues two 
pieces together. If you have a better suggestion, I'm all ears.

> [sigh]
> 
> People have been asking me about TSN and Linux, and we've made some
> thoughts about it.  The interest is there, and so I am glad to see
> discussion on this topic.

I'm not aware of any such discussions, could you point me to where TSN has 
been discussed, it would be nice to see other peoples thought on the matter 
(which was one of the ideas behind this series in the first place)

> Having said that, your series does not even begin to address the real
> issues. 

Well, in all honesty, I did say so :) It is marked as "very-RFC", and not 
meant for inclusion in the kernel as-is. I also made a short list of the 
most crucial bits missing.

I know there are real issues, but solving these won't matter if you don't 
have anything useful to do with it. I decided to start by adding a thin 
ALSA-driver and then continue to work with the kernel infrastructure. 
Having something that works-ish makes it a lot easier to test and to get 
others interested, especially when you are not deeply involved in a 
subsystem.

At some point you need input from others more intimate with the inner 
workings of the different subsystems to see how things should be created 
without making too much of a mess. So here we are :)

My primary motivation was to
a) gather feedback (which you have provided, and for which I am very 
   grateful)
b) get the discussion going on how/if TSN should be added to the kernel

> I did not review the patches too carefully (because the
> important stuff is missing), but surely configfs is the wrong
> interface for this. 

Why is configfs wrong?

Unless you want to implement discovery and enumeration and srp-negotiation 
in the kernel, you need userspace to handle this. Once userspace has done 
all that (found priority-codes, streamIDs, vlanIDs and all the required 
bits), then userspace can create a new link. For that I find ConfigFS to be 
quite useful and up to the task.

In my opinion, it also makes for a much tidier and saner interface than 
some obscure dark-magic ioctl()

> In the end, we will be able to support TSN using
> the existing networking and audio interfaces, adding appropriate
> extensions.

I surely hope so, but as I'm not deep into the networking part of the 
kernel, finding those appropriate extensions is hard - which is why we 
started writing a standalone module.

> Your patch features a buffer shared by networking and audio.  This
> isn't strictly necessary for TSN, and it may be harmful. 

At one stage, data has to flow in/out of the network, and whoever's using 
TSN probably needs to store data somewhere as well, so you need some form 
of buffering at some place in the path the data flows through.

That being said, one of the bits on my plate is to remove the 
"TSN-hosted-buffer" and let TSN read/write data via the shim_ops. What the 
best set of functions is remains to be seen, but it should provide a 
way to move data from either a single frame or a "few frames" to the shime 
(err..  ;)

> The
> Listeners are supposed to calculate the delay from frame reception to
> the DA conversion.  They can easily include the time needed for a user
> space program to parse the frames, copy (and combine/convert) the
> data, and re-start the audio transfer.  A flexible TSN implementation
> will leave all of the format and encoding task to the userland.  After
> all, TSN will include more than just AV data, as you know.

Yes, 

Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-13 Thread Richard Cochran
Henrik,

On Sun, Jun 12, 2016 at 01:01:28AM +0200, Henrik Austad wrote:
> There is at least one AVB-driver (the AV-part of TSN) in the kernel
> already,

Which driver is that?

> however this driver aims to solve a wider scope as TSN can do
> much more than just audio. A very basic ALSA-driver is added to the end
> that allows you to play music between 2 machines using aplay in one end
> and arecord | aplay on the other (some fiddling required) We have plans
> for doing the same for v4l2 eventually (but there are other fishes to
> fry first). The same goes for a TSN_SOCK type approach as well.

Please, no new socket type for this.
 
> What remains
> - tie to (g)PTP properly, currently using ktime_get() for presentation
>   time
> - get time from shim into TSN and vice versa

... and a whole lot more, see below.

> - let shim create/manage buffer

(BTW, shim is a terrible name for that.)

[sigh]

People have been asking me about TSN and Linux, and we've made some
thoughts about it.  The interest is there, and so I am glad to see
discussion on this topic.

Having said that, your series does not even begin to address the real
issues.  I did not review the patches too carefully (because the
important stuff is missing), but surely configfs is the wrong
interface for this.  In the end, we will be able to support TSN using
the existing networking and audio interfaces, adding appropriate
extensions.

Your patch features a buffer shared by networking and audio.  This
isn't strictly necessary for TSN, and it may be harmful.  The
Listeners are supposed to calculate the delay from frame reception to
the DA conversion.  They can easily include the time needed for a user
space program to parse the frames, copy (and combine/convert) the
data, and re-start the audio transfer.  A flexible TSN implementation
will leave all of the format and encoding tasks to userland.  After
all, TSN will soon include more than just AV data, as you know.

Lets take a look at the big picture.  One aspect of TSN is already
fully supported, namely the gPTP.  Using the linuxptp user stack and a
modern kernel, you have a complete 802.1AS-2011 solution.
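
For reference, an 802.1AS endpoint can be brought up with linuxptp's
ptp4l using a gPTP profile. The fragment below is abridged from the
gPTP.cfg that ships with linuxptp (option names recalled from memory,
so verify against your version):

```ini
[global]
# 802.1AS (gPTP) runs PTP directly over L2 with peer-to-peer delay
gmCapable               1
priority1               248
priority2               248
logSyncInterval         -3
syncReceiptTimeout      3
neighborPropDelayThresh 800
transportSpecific       0x1
ptp_dst_mac             01:80:C2:00:00:0E
network_transport       L2
delay_mechanism         P2P
```

Started as `ptp4l -f gPTP.cfg -i eth0`, with phc2sys used additionally
if applications need gPTP-aligned time from the system clock.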

Here is what is missing to support audio TSN:

* User Space

1. A proper userland stack for AVDECC, MAAP, FQTSS, and so on.  The
   OpenAVB project does not offer much beyond simple examples.

2. A user space audio application that puts it all together, making
   use of the services in #1, the linuxptp gPTP service, the ALSA
   services, and the network connections.  This program will have all
   the knowledge about packet formats, AV encodings, and the local HW
   capabilities.  This program cannot yet be written, as we still need
   some kernel work in the audio and networking subsystems.

* Kernel Space

1. Providing frames with a future transmit time.  For normal sockets,
   this can be in the CMSG data.  For mmap'ed buffers, we will need a
   new format.  (I think Arnd is working on a new layout.)

2. Time based qdisc for transmitted frames.  For MACs that support
   this (like the i210), we only have to place the frame into the
   correct queue.  For normal HW, we want to be able to reserve a time
   window in which non-TSN frames are blocked.  This is some work, but
   in the end it should be a generic solution that not only works
   "perfectly" with TSN HW but also provides best effort service using
   any NIC.

3. ALSA support for tunable AD/DA clocks.  The rate of the Listener's
   DA clock must match that of the Talker and the other Listeners.
   Either you adjust it in HW using a VCO or similar, or you do
   adaptive sample rate conversion in the application. (And that is
   another reason for *not* having a shared kernel buffer.)  For the
   Talker, either you adjust the AD clock to match the PTP time, or
   you measure the frequency offset.

4. ALSA support for time triggered playback.  The patch series
   completely ignores the critical issue of media clock recovery.  The
   Listener must buffer the stream in order to play it exactly at a
   specified time.  It cannot simply send the stream ASAP to the audio
   HW, because some other Listener might need longer.  AFAICT, there
   is nothing in ALSA that allows you to say, sample X should be
   played at time Y.

These are some ideas about implementing TSN.  Maybe some of it is
wrong (especially about ALSA), but we definitely need a proper design
to get the kernel parts right.  There is plenty of work to do, but we
really don't need some hacky, in-kernel buffer with hard coded audio
formats.

Thanks,
Richard
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-12 Thread Henrik Austad
On Sun, Jun 12, 2016 at 07:43:34PM +0900, Takashi Sakamoto wrote:
> On Jun 12 2016 17:31, Henrik Austad wrote:
> > On Sun, Jun 12, 2016 at 01:30:24PM +0900, Takashi Sakamoto wrote:
> >> On Jun 12 2016 12:38, Takashi Sakamoto wrote:
> >>> In your patchset, there's no actual code about how to handle any
> >>> interrupt contexts (software / hardware), how to handle packet payload,
> >>> and so on. Especially, for the recent sound subsystem, the timing of
> >>> generating interrupts and which context does what work are important to
> >>> reduce playback/capture latency and power consumption.
> >>>
> >>> Of course, your intention with this patchset is to show your early concept
> >>> of the TSN feature. Nevertheless, both explanation and code are
> >>> important to the other developers who have little knowledge about TSN,
> >>> AVB and AES-64, such as me.
> >>
> >> Oops. Your 5th patch was skipped by alsa-project.org. I guess that the
> >> size of the patch is too large for the list service. I can see it here:
> >> http://marc.info/?l=linux-netdev&m=146568672728661&w=2
> >>
> >> As far as I can see from the patch, packets are queued in hrtimer
> >> callbacks every second.
> > 
> > Actually, the hrtimer fires every 1ms, and that part is something I have to
> > do something about, also because it sends off the same number of frames
> > every time, regardless of how accurate the internal timer is to the rest of
> > the network (there's no backpressure from the networking layer).
> > 
> >> (This is a high-level discussion and it's OK to ignore it for the
> >> moment. When writing packet-oriented drivers for the sound subsystem, you
> >> need to pay special attention to the accuracy of the number of PCM frames
> >> transferred at a given moment, and the granularity of the number of PCM
> >> frames transferred by one operation. In this case, snd_avb_hw,
> >> snd_avb_pcm_pointer(), tsn_buffer_write_net() and tsn_buffer_read_net()
> >> are involved in this discussion. You can see ALSA developers' struggle
> >> in the USB audio device class drivers and (of course) the IEC 61883-1/6 drivers.)
> > 
> > Ah, good point. Any particular parts of the USB-subsystem I should start 
> > looking at?
> 
> I don't think studying the USB Audio Device Class driver is a better
> way for you unless you're interested in the ALSA or USB subsystems.
> 
> (But for your information, snd-usb-audio is in sound/usb/* of the Linux
> kernel. The IEC 61883-1/6 drivers are in sound/firewire/*.)

Ok, thanks, I'll definitely be looking at the firewire bit.

> We need a different strategy for each transmission backend.
> 
> > Knowing where to start looking is a tremendous help
> 
> It's not well-documented, and not well-generalized for packet-oriented
> drivers. Most developers who have enough knowledge about it work on
> DMA-oriented drivers for mobile platforms and have little interest in
> packet-oriented drivers. You need to find your own way.
> 
> Currently I have little advice for you, because I'm also finding my way
> with drivers that process IEC 61883-1/6 packets on the IEEE 1394 bus with
> enough accuracy and granularity. The paper I introduced describes that
> way (but is not mature).
> 
> I hope you get more help from the other developers. Your work is more
> significant to the Linux system than mine.
> 
> (And I hope your future work meets no ignorance and no unreasonable
> hostility from coarse users.)

Ah well, I have asbestos-underwear so that should be fine :)

Thanks for the pointers, I really appreciate them!



-- 
Henrik Austad




Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-12 Thread Takashi Sakamoto
On Jun 12 2016 17:31, Henrik Austad wrote:
> On Sun, Jun 12, 2016 at 01:30:24PM +0900, Takashi Sakamoto wrote:
>> On Jun 12 2016 12:38, Takashi Sakamoto wrote:
>>> In your patchset, there's no actual code about how to handle any
>>> interrupt contexts (software / hardware), how to handle packet payload,
>>> and so on. Especially, for the recent sound subsystem, the timing of
>>> generating interrupts and which context does what work are important to
>>> reduce playback/capture latency and power consumption.
>>>
>>> Of course, your intention with this patchset is to show your early concept
>>> of the TSN feature. Nevertheless, both explanation and code are
>>> important to the other developers who have little knowledge about TSN,
>>> AVB and AES-64, such as me.
>>
>> Oops. Your 5th patch was skipped by alsa-project.org. I guess that the
>> size of the patch is too large for the list service. I can see it here:
>> http://marc.info/?l=linux-netdev&m=146568672728661&w=2
>>
>> As far as I can see from the patch, packets are queued in hrtimer
>> callbacks every second.
> 
> Actually, the hrtimer fires every 1ms, and that part is something I have to
> do something about, also because it sends off the same number of frames
> every time, regardless of how accurate the internal timer is to the rest of
> the network (there's no backpressure from the networking layer).
> 
>> (This is a high-level discussion and it's OK to ignore it for the
>> moment. When writing packet-oriented drivers for the sound subsystem, you
>> need to pay special attention to the accuracy of the number of PCM frames
>> transferred at a given moment, and the granularity of the number of PCM
>> frames transferred by one operation. In this case, snd_avb_hw,
>> snd_avb_pcm_pointer(), tsn_buffer_write_net() and tsn_buffer_read_net()
>> are involved in this discussion. You can see ALSA developers' struggle
>> in the USB audio device class drivers and (of course) the IEC 61883-1/6 drivers.)
> 
> Ah, good point. Any particular parts of the USB-subsystem I should start 
> looking at?

I don't think studying the USB Audio Device Class driver is a better
way for you unless you're interested in the ALSA or USB subsystems.

(But for your information, snd-usb-audio is in sound/usb/* of the Linux
kernel. The IEC 61883-1/6 drivers are in sound/firewire/*.)

We need a different strategy for each transmission backend.

> Knowing where to start looking is a tremendous help

It's not well-documented, and not well-generalized for packet-oriented
drivers. Most developers who have enough knowledge about it work on
DMA-oriented drivers for mobile platforms and have little interest in
packet-oriented drivers. You need to find your own way.

Currently I have little advice for you, because I'm also finding my way
with drivers that process IEC 61883-1/6 packets on the IEEE 1394 bus with
enough accuracy and granularity. The paper I introduced describes that
way (but is not mature).

I hope you get more help from the other developers. Your work is more
significant to the Linux system than mine.

(And I hope your future work meets no ignorance and no unreasonable
hostility from coarse users.)


Regards

Takashi Sakamoto





Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-12 Thread Takashi Sakamoto
On Jun 12 2016 17:28, Henrik Austad wrote:
> On Sun, Jun 12, 2016 at 12:38:36PM +0900, Takashi Sakamoto wrote:
>> I'm one of the maintainers for the ALSA firewire stack, which handles IEC
>> 61883-1/6 and vendor-unique packets on the IEEE 1394 bus for consumer
>> recording equipment.
>> (I'm not in MAINTAINERS because I'm a shy boy.)
>>
>> IEC 61883-6 describes that one packet can multiplex several types of
>> data in its data channels; i.e. Multi Bit Linear Audio data (PCM
>> samples), One Bit Audio Data (DSD), MIDI messages and so on.
> 
> Hmm, that I did not know; not sure how that applies to AVB, but definitely
> something I have to look into.

For your information, let me describe it a bit more.

You can see the pre-standardization specification for IEC 61883-6 on the
website of the 1394 Trade Association. Look for 'Audio and Music Data
Transmission Protocol 2.3 (October 13, 2010, 1394TA)':
http://1394ta.org/specifications/

In 'clause 12. AM824 SEQUENCE ADAPTATION LAYERS', you can see that one
data block can include several types of data.


But I can imagine that the joint group for AVB refers to IEC 61883-6
only loosely. In this case, the AVB specification might describe one data
block as transferring one type of data, to drop unreasonable complexity.

>> If you handle the packet payload in 'struct snd_pcm_ops.copy', a process
>> context of an ALSA PCM application performs the work. Thus, there is no
>> chance to multiplex data with the other types.
> 
> The driver is not adhering fully to any standards right now; the amount of
> detail is quite high - but I'm slowly improving as I go through the
> standards. Getting on top of all the standards and all the different
> subsystems is definitely a work in progress (it's a lot to digest!)

To my taste, the driver need not be compliant with any standards. It's
enough for it just to do its job, without bad side effects on the Linux
system. Based on this concept, the current ALSA firewire stack just
supports PCM frames and MIDI messages.

Here, I should tell you that actual devices tend not to be compliant
with any standards and have lost interoperability.

(Especially, most audio and music units on the IEEE 1394 bus ignore some
items in the standards. In short, they have already lost
interoperability.)

So here, we just consider what actual devices do, instead of following
any standards.


Regards

Takashi Sakamoto





Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-12 Thread Henrik Austad
On Sun, Jun 12, 2016 at 01:30:24PM +0900, Takashi Sakamoto wrote:
> On Jun 12 2016 12:38, Takashi Sakamoto wrote:
> > In your patchset, there's no actual code about how to handle any
> > interrupt contexts (software / hardware), how to handle packet payload,
> > and so on. Especially, for the recent sound subsystem, the timing of
> > generating interrupts and which context does what work are important to
> > reduce playback/capture latency and power consumption.
> > 
> > Of course, your intention with this patchset is to show your early concept
> > of the TSN feature. Nevertheless, both explanation and code are
> > important to the other developers who have little knowledge about TSN,
> > AVB and AES-64, such as me.
> 
> Oops. Your 5th patch was skipped by alsa-project.org. I guess that the
> size of the patch is too large for the list service. I can see it here:
> http://marc.info/?l=linux-netdev&m=146568672728661&w=2
> 
> As far as I can see from the patch, packets are queued in hrtimer
> callbacks every second.

Actually, the hrtimer fires every 1ms, and that part is something I have
to do something about, also because it sends off the same number of
frames every time, regardless of how accurate the internal timer is to
the rest of the network (there's no backpressure from the networking
layer).

> (This is a high-level discussion and it's OK to ignore it for the
> moment. When writing packet-oriented drivers for the sound subsystem, you
> need to pay special attention to the accuracy of the number of PCM frames
> transferred at a given moment, and the granularity of the number of PCM
> frames transferred by one operation. In this case, snd_avb_hw,
> snd_avb_pcm_pointer(), tsn_buffer_write_net() and tsn_buffer_read_net()
> are involved in this discussion. You can see ALSA developers' struggle
> in the USB audio device class drivers and (of course) the IEC 61883-1/6 drivers.)

Ah, good point. Any particular parts of the USB subsystem I should start
looking at? Knowing where to start looking is a tremendous help.

Thanks for the feedback!

-- 
Henrik Austad




Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-12 Thread Henrik Austad
On Sun, Jun 12, 2016 at 12:38:36PM +0900, Takashi Sakamoto wrote:
> Hi,
> 
> I'm one of the maintainers for the ALSA firewire stack, which handles IEC
> 61883-1/6 and vendor-unique packets on the IEEE 1394 bus for consumer
> recording equipment.
> (I'm not in MAINTAINERS because I'm a shy boy.)
> 
> IEC 61883-6 describes that one packet can multiplex several types of
> data in its data channels; i.e. Multi Bit Linear Audio data (PCM
> samples), One Bit Audio Data (DSD), MIDI messages and so on.

Hmm, that I did not know; not sure how that applies to AVB, but definitely
something I have to look into.

> If you handle the packet payload in 'struct snd_pcm_ops.copy', a process
> context of an ALSA PCM application performs the work. Thus, there is no
> chance to multiplex data with the other types.

Hmm, ok, I didn't know that; that is something I need to look into - and
incidentally one of the reasons why I posted the series now instead of a
few more months down the road - thanks!

The driver is not adhering fully to any standards right now; the amount of
detail is quite high - but I'm slowly improving as I go through the
standards. Getting on top of all the standards and all the different
subsystems is definitely a work in progress (it's a lot to digest!)

> To prevent this situation, the current ALSA firewire stack handles the
> packet payload in the software interrupt context of the OHCI 1394
> isochronous context. As a result, the software stack supports both PCM
> substreams and MIDI substreams.
> 
> In your patchset, there's no actual code about how to handle any
> interrupt contexts (software / hardware), how to handle packet payload,
> and so on. Especially, for the recent sound subsystem, the timing of
> generating interrupts and which context does what work are important to
> reduce playback/capture latency and power consumption.

See reply in other mail :)

> Of course, your intention with this patchset is to show your early concept
> of the TSN feature. Nevertheless, both explanation and code are
> important to the other developers who have little knowledge about TSN,
> AVB and AES-64, such as me.

Yes, that is one of the things I aimed for, and also getting feedback on
the overall thinking.

> And I might cooperate to prepare a common IEC 61883 layer. For the actual
> code of the ALSA firewire stack, please see the mainline kernel. For
> actual devices using IEC 61883-1/6 and the IEEE 1394 bus, please refer to
> my report from 2014. At least, you can get to know what to consider when
> developing upper drivers near ALSA userspace applications.
> https://github.com/takaswie/alsa-firewire-report

Thanks, I'll dig into that, much appreciated

> (But I should confirm that the report includes my misunderstandings in
> clauses 3.4 and 6.2. I need more time...)

ok, good to know

Thank you for your input, very much appreciated!

-- 
Henrik Austad




Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-11 Thread Takashi Sakamoto
On Jun 12 2016 12:38, Takashi Sakamoto wrote:
> In your patchset, there's no actual code about how to handle any
> interrupt contexts (software / hardware), how to handle packet payload,
> and so on. Especially, for the recent sound subsystem, the timing of
> generating interrupts and which context does what work are important to
> reduce playback/capture latency and power consumption.
> 
> Of course, your intention with this patchset is to show your early concept
> of the TSN feature. Nevertheless, both explanation and code are
> important to the other developers who have little knowledge about TSN,
> AVB and AES-64, such as me.

Oops. Your 5th patch was skipped by alsa-project.org. I guess that the
size of the patch is too large for the list service. I can see it here:
http://marc.info/?l=linux-netdev&m=146568672728661&w=2

As far as I can see from the patch, packets are queued in hrtimer
callbacks every second.

(This is a high-level discussion and it's OK to ignore it for the
moment. When writing packet-oriented drivers for the sound subsystem, you
need to pay special attention to the accuracy of the number of PCM frames
transferred at a given moment, and the granularity of the number of PCM
frames transferred by one operation. In this case, snd_avb_hw,
snd_avb_pcm_pointer(), tsn_buffer_write_net() and tsn_buffer_read_net()
are involved in this discussion. You can see ALSA developers' struggle
in the USB audio device class drivers and (of course) the IEC 61883-1/6
drivers.)


Regards

Takashi Sakamoto


Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-11 Thread Takashi Sakamoto
Hi,

I'm one of the maintainers for the ALSA firewire stack, which handles IEC
61883-1/6 and vendor-unique packets on the IEEE 1394 bus for consumer
recording equipment.
(I'm not in MAINTAINERS because I'm a shy boy.)

IEC 61883-6 describes that one packet can multiplex several types of
data in its data channels; i.e. Multi Bit Linear Audio data (PCM
samples), One Bit Audio Data (DSD), MIDI messages and so on.

If you handle the packet payload in 'struct snd_pcm_ops.copy', a process
context of an ALSA PCM application performs the work. Thus, there is no
chance to multiplex data with the other types.

To prevent this situation, the current ALSA firewire stack handles the
packet payload in the software interrupt context of the OHCI 1394
isochronous context. As a result, the software stack supports both PCM
substreams and MIDI substreams.

In your patchset, there's no actual code about how to handle any
interrupt contexts (software / hardware), how to handle packet payload,
and so on. Especially, for the recent sound subsystem, the timing of
generating interrupts and which context does what work are important to
reduce playback/capture latency and power consumption.

Of course, your intention with this patchset is to show your early concept
of the TSN feature. Nevertheless, both explanation and code are
important to the other developers who have little knowledge about TSN,
AVB and AES-64, such as me.

And I might cooperate to prepare a common IEC 61883 layer. For the actual
code of the ALSA firewire stack, please see the mainline kernel. For
actual devices using IEC 61883-1/6 and the IEEE 1394 bus, please refer to
my report from 2014. At least, you can get to know what to consider when
developing upper drivers near ALSA userspace applications.
https://github.com/takaswie/alsa-firewire-report

(But I should confirm that the report includes my misunderstandings in
clauses 3.4 and 6.2. I need more time...)


Regards

Takashi Sakamoto

On Jun 12 2016 08:01, Henrik Austad wrote:
> Hi all
> (series based on v4.7-rc2, now with the correct netdev)
> 
> This is a *very* early RFC for a TSN-driver in the kernel. It has been
> floating around in my repo for a while and I would appreciate some
> feedback on the overall design to avoid doing some major blunders.
> 
> TSN: Time Sensitive Networking, formerly known as AVB (Audio/Video
> Bridging).
> 
> There is at least one AVB driver (the AV part of TSN) in the kernel
> already; however, this driver aims to solve a wider scope, as TSN can do
> much more than just audio. A very basic ALSA driver is added at the end
> that allows you to play music between 2 machines using aplay on one end
> and arecord | aplay on the other (some fiddling required). We have plans
> for doing the same for v4l2 eventually (but there are other fish to
> fry first). The same goes for a TSN_SOCK type approach as well.
> 
> TSN is all about providing infrastructure. Although there are a few
> very interesting uses for TSN (reliable, deterministic networking for audio
> and video), once you have that reliable link, you can do a lot more.
> 
> Some notes on the design:
> 
> The driver is directed via ConfigFS, as we need userspace to handle
> stream reservation (MSRP), discovery and enumeration (IEEE 1722.1) and
> whatever other management is needed. Once we have all the required
> attributes, we can create a link using mkdir, and use write() to set the
> attributes. Once ready, specify the 'shim' (basically a thin wrapper
> between TSN and another subsystem) and we start pushing out frames.
> 
> The network part: it ties directly into the rx-handler for receive and
> writes skbs using netdev_start_xmit(). This could probably be
> improved. 2 new fields in netdev_ops have been introduced, and the Intel
> igb driver has been updated (as this is available as a PCI-e card). The
> igb driver works-ish.
> 
> 
> What remains
> - tie to (g)PTP properly, currently using ktime_get() for presentation
>   time
> - get time from shim into TSN and vice versa
> - let shim create/manage buffer
> 
> Henrik Austad (8):
>   TSN: add documentation
>   TSN: Add the standard formerly known as AVB to the kernel
>   Adding TSN-driver to Intel I210 controller
>   Add TSN header for the driver
>   Add TSN machinery to drive the traffic from a shim over the network
>   Add TSN event-tracing
>   AVB ALSA - Add ALSA shim for TSN
>   MAINTAINERS: add TSN/AVB-entries
> 
>  Documentation/TSN/tsn.txt | 147 +
>  MAINTAINERS   |  14 +
>  drivers/media/Kconfig |  15 +
>  drivers/media/Makefile|   3 +-
>  drivers/media/avb/Makefile|   5 +
>  drivers/media/avb/avb_alsa.c  | 742 +++
>  drivers/media/avb/tsn_iec61883.h  | 124 
>  drivers/net/ethernet/intel/Kconfig|  18 +
>  drivers/net/ethernet/intel/igb/Makefile   |   2 +-
>  drivers/net/ethernet/intel/igb/igb.h  |  19 +
>  drivers/net/ethernet/intel/igb/igb_main.c |  10 +-
>  

Re: [very-RFC 0/8] TSN driver for the kernel

2016-06-11 Thread Henrik Austad
On Sun, Jun 12, 2016 at 12:22:13AM +0200, Henrik Austad wrote:
> Hi all

Sorry, I somehow managed to mess up the address for netdev, so if you feel
like replying to this, use this one as it has the correct netdev address.

Again, sorry.

> (series based on v4.7-rc2)
> 
> This is a *very* early RFC for a TSN-driver in the kernel. It has been
> floating around in my repo for a while and I would appreciate some
> feedback on the overall design to avoid doing some major blunders.
> 
> TSN: Time Sensitive Networking, formerly known as AVB (Audio/Video
> Bridging).
> 
> There is at least one AVB driver (the AV part of TSN) in the kernel
> already; however, this driver aims to solve a wider scope, as TSN can do
> much more than just audio. A very basic ALSA driver is added at the end
> that allows you to play music between 2 machines using aplay on one end
> and arecord | aplay on the other (some fiddling required). We have plans
> for doing the same for v4l2 eventually (but there are other fish to
> fry first). The same goes for a TSN_SOCK type approach as well.
> 
> TSN is all about providing infrastructure. Although there are a few
> very interesting uses for TSN (reliable, deterministic networking for audio
> and video), once you have that reliable link, you can do a lot more.
> 
> Some notes on the design:
> 
> The driver is directed via ConfigFS, as we need userspace to handle
> stream reservation (MSRP), discovery and enumeration (IEEE 1722.1) and
> whatever other management is needed. Once we have all the required
> attributes, we can create a link using mkdir, and use write() to set the
> attributes. Once ready, specify the 'shim' (basically a thin wrapper
> between TSN and another subsystem) and we start pushing out frames.
> 
> The network part: it ties directly into the rx-handler for receive and
> writes skbs using netdev_start_xmit(). This could probably be
> improved. 2 new fields in netdev_ops have been introduced, and the Intel
> igb driver has been updated (as this is available as a PCI-e card). The
> igb driver works-ish.
> 
> 
> What remains
> - tie to (g)PTP properly, currently using ktime_get() for presentation
>   time
> - get time from shim into TSN and vice versa
> - let shim create/manage buffer
> 
> Henrik Austad (8):
>   TSN: add documentation
>   TSN: Add the standard formerly known as AVB to the kernel
>   Adding TSN-driver to Intel I210 controller
>   Add TSN header for the driver
>   Add TSN machinery to drive the traffic from a shim over the network
>   Add TSN event-tracing
>   AVB ALSA - Add ALSA shim for TSN
>   MAINTAINERS: add TSN/AVB-entries
> 
>  Documentation/TSN/tsn.txt | 147 +
>  MAINTAINERS   |  14 +
>  drivers/media/Kconfig |  15 +
>  drivers/media/Makefile|   3 +-
>  drivers/media/avb/Makefile|   5 +
>  drivers/media/avb/avb_alsa.c  | 742 +++
>  drivers/media/avb/tsn_iec61883.h  | 124 
>  drivers/net/ethernet/intel/Kconfig|  18 +
>  drivers/net/ethernet/intel/igb/Makefile   |   2 +-
>  drivers/net/ethernet/intel/igb/igb.h  |  19 +
>  drivers/net/ethernet/intel/igb/igb_main.c |  10 +-
>  drivers/net/ethernet/intel/igb/igb_tsn.c  | 396 
>  include/linux/netdevice.h |  32 +
>  include/linux/tsn.h   | 806 
>  include/trace/events/tsn.h| 349 +++
>  net/Kconfig   |   1 +
>  net/Makefile  |   1 +
>  net/tsn/Kconfig   |  32 +
>  net/tsn/Makefile  |   6 +
>  net/tsn/tsn_configfs.c| 623 +++
>  net/tsn/tsn_core.c| 975 
> ++
>  net/tsn/tsn_header.c  | 203 +++
>  net/tsn/tsn_internal.h| 383 
>  net/tsn/tsn_net.c | 403 
>  24 files changed, 5306 insertions(+), 3 deletions(-)
>  create mode 100644 Documentation/TSN/tsn.txt
>  create mode 100644 drivers/media/avb/Makefile
>  create mode 100644 drivers/media/avb/avb_alsa.c
>  create mode 100644 drivers/media/avb/tsn_iec61883.h
>  create mode 100644 drivers/net/ethernet/intel/igb/igb_tsn.c
>  create mode 100644 include/linux/tsn.h
>  create mode 100644 include/trace/events/tsn.h
>  create mode 100644 net/tsn/Kconfig
>  create mode 100644 net/tsn/Makefile
>  create mode 100644 net/tsn/tsn_configfs.c
>  create mode 100644 net/tsn/tsn_core.c
>  create mode 100644 net/tsn/tsn_header.c
>  create mode 100644 net/tsn/tsn_internal.h
>  create mode 100644 net/tsn/tsn_net.c
> 
> --
> 2.7.4

-- 
Henrik Austad