Re: [LAD] PipeWire, and "a more generic seeking and timing framework"

2018-02-19 Thread Jonathan E. Brickman
Many thanks, Paul, for this and much more.  My lame excuse is that I have
had a chunk of my head buried in a singular problem for a long time, and
those librarians are very tired :-)
Given that reality check, then, maybe instituting multiple JACK
subgraphs, with time-decoupling via a Pulse-style transport, is a way to
get everything done!
J.E.B.
On Mon, 2018-02-19 at 18:04 -0500, Paul Davis wrote:
> JACK is already much closer to the hardware than the networking
> stack.
> 
> At the conclusion of the jack process callback, it writes samples
> *directly into the memory mapped buffer being used by the audio
> hardware*. The process callback is  preemptively (and with realtime
> scheduling) triggered directly from the interrupt handler of the
> audio interface.
> 
> JACK does not use a round-robin approach to its clients. It creates a
> data (flow) graph based on their interconnections and executes them
> (serially or in parallel) in the order dictated by the graph. 
> 
> 
> 
> On Mon, Feb 19, 2018 at 5:57 PM, Jonathan Brickman <j...@ponderworthy.com> wrote:
> > Not really sure the subgraph is so good -- one of the things JACK
> > gives us is the extremely solid knowledge of what it just did, is
> > doing now, and will do next period.  If I run Pulse with JACK, it's
> > JACK controlling the hardware and Pulse feeding into it, not the
> > other way around, because Pulse is not tightly synchronized,
> > whereas JACK is.  But if you can make it work as well, more power
> > to you.
> > 
> > Concerning seeking and timing, though, I have had to wonder.  My
> > impression of JACK for a long time (and more learned ladies and
> > gentlemen, please correct) is that it uses a basically round-robin
> > approach to its clients, with variation.  I have had to wonder,
> > especially given my need for this, how practical a model might be
> > possible, using preemptive multitasking or even Ethernet-style
> > collision avoidance through entropic data, at current CPU speeds. 
> > It's chopped into frames, right?  Couldn't audio and MIDI data be
> > mapped into networking frames and then thrown around using the
> > kernel networking stack?  The timestamps are there...the
> > connectivity is there...have to do interesting translations... :-)  
> > Could be done at the IP level or even lower I would think.  The
> > lower you go, the more power you get, because you're closer to the
> > kernel at every step.
> > 
> > 
-- 
Jonathan E. Brickman   j...@ponderworthy.com   (785)233-9977
Hear us at http://ponderworthy.com -- CDs and MP3s now available!
Music of compassion; fire, and life!!!
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] PipeWire, and "a more generic seeking and timing framework"

2018-02-19 Thread Paul Davis
JACK is already much closer to the hardware than the networking stack.

At the conclusion of the jack process callback, it writes samples *directly
into the memory mapped buffer being used by the audio hardware*. The
process callback is  preemptively (and with realtime scheduling) triggered
directly from the interrupt handler of the audio interface.
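For anyone who hasn't written one, here is roughly what that contract looks like
from the client side: a minimal C pass-through client against the public JACK API
(the client and port names here are arbitrary). The server invokes process() once
per period from its realtime context; all the client does is fill the buffers it
is handed, which live in shared memory and are routed onward by JACK at the end
of the cycle.

#include <jack/jack.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

/* Called by the JACK server once per period, from a realtime thread
 * that is ultimately woken by the audio interface's interrupt. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port,  nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    memcpy(out, in, nframes * sizeof(*out));   /* straight pass-through */
    return 0;                                  /* 0 = keep running */
}

int main(void)
{
    jack_client_t *client = jack_client_open("passthru", JackNullOption, NULL);
    if (!client) { fprintf(stderr, "jackd not running?\n"); return 1; }

    jack_set_process_callback(client, process, NULL);
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);

    jack_activate(client);   /* from here on, process() runs every period */
    sleep(30);               /* keep the client alive for a while */
    jack_client_close(client);
    return 0;
}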

JACK does not use a round-robin approach to its clients. It creates a data
(flow) graph based on their interconnections and executes them (serially or
in parallel) in the order dictated by the graph.
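To make "the order dictated by the graph" concrete, here is a toy topological
sort in C (not JACK's actual scheduler, just an illustration with made-up client
names): given producer-to-consumer connections, Kahn's algorithm yields a serial
order in which every client runs after the clients it reads from.

/* Toy illustration of data-flow ordering, not JACK's scheduler.
 * Clients are nodes; an edge a->b means b reads audio produced by a. */
#include <stdio.h>

#define N 4  /* hypothetical clients: 0=capture, 1=synth, 2=mixer, 3=playback */

int main(void)
{
    int edges[][2] = { {0, 2}, {1, 2}, {2, 3} };   /* producer -> consumer */
    int nedges = 3, indeg[N] = {0}, order[N], queue[N], qh = 0, qt = 0, done = 0;

    for (int i = 0; i < nedges; i++) indeg[edges[i][1]]++;
    for (int v = 0; v < N; v++) if (indeg[v] == 0) queue[qt++] = v;

    while (qh < qt) {
        int v = queue[qh++];
        order[done++] = v;
        for (int i = 0; i < nedges; i++)
            if (edges[i][0] == v && --indeg[edges[i][1]] == 0)
                queue[qt++] = edges[i][1];
    }

    printf("serial execution order:");
    for (int i = 0; i < done; i++) printf(" %d", order[i]);
    printf("\n");   /* prints: 0 1 2 3 -- sources first, playback last */
    return 0;
}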


On Mon, Feb 19, 2018 at 5:57 PM, Jonathan Brickman <j...@ponderworthy.com> wrote:

> Not really sure the subgraph is so good -- one of the things JACK gives us
> is the extremely solid knowledge of what it just did, is doing now, and
> will do next period.  If I run Pulse with JACK, it's JACK controlling the
> hardware and Pulse feeding into it, not the other way around, because Pulse
> is not tightly synchronized, whereas JACK is.  But if you can make it work
> as well, more power to you.
>
> Concerning seeking and timing, though, I have had to wonder.  My
> impression of JACK for a long time (and more learned ladies and gentlemen,
> please correct) is that it uses a basically round-robin approach to its
> clients, with variation.  I have had to wonder, especially given my need
> for this, how practical a
> model might be possible, using preemptive multitasking or even
> Ethernet-style collision avoidance through entropic data, at current CPU
> speeds.  It's chopped into frames, right?  Couldn't audio and MIDI data be
> mapped into networking frames and then thrown around using the kernel
> networking stack?  The timestamps are there...the connectivity is
> there...have to do interesting translations... :-)  Could be done at the IP
> level or even lower I would think.  The lower you go, the more power you
> get, because you're closer to the kernel at every step.
>
> --
> Jonathan E. Brickman   j...@ponderworthy.com   (785)233-9977
> Hear us at http://ponderworthy.com -- CDs and MP3s now available!
> Music of compassion; fire, and life!!!
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


[LAD] PipeWire, and "a more generic seeking and timing framework"

2018-02-19 Thread Jonathan Brickman
Not really sure the subgraph is so good -- one of the things JACK gives us
is the extremely solid knowledge of what it just did, is doing now, and
will do next period.  If I run Pulse with JACK, it's JACK controlling the
hardware and Pulse feeding into it, not the other way around, because Pulse
is not tightly synchronized, whereas JACK is.  But if you can make it work
as well, more power to you.

Concerning seeking and timing, though, I have had to wonder.  My impression
of JACK for a long time (and more learned ladies and gentlemen, please
correct) is that it uses a basically round-robin approach to its clients,
with variation.  I have had to wonder, especially given my need for this,
how practical a model might be
possible, using preemptive multitasking or even Ethernet-style collision
avoidance through entropic data, at current CPU speeds.  It's chopped into
frames, right?  Couldn't audio and MIDI data be mapped into networking
frames and then thrown around using the kernel networking stack?  The
timestamps are there...the connectivity is there...have to do interesting
translations... :-)  Could be done at the IP level or even lower I would
think.  The lower you go, the more power you get, because you're closer to
the kernel at every step.
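For what it's worth, the rough shape of that idea looks something like the
sketch below: one period of samples plus a timestamp handed to the kernel
networking stack as a UDP datagram. The frame layout, port number, and loopback
address are invented for illustration; systems that actually do this (JACK's own
netjack, for one) add far more machinery for clock recovery and loss handling.

/* Hypothetical sketch only: one period of audio pushed through the kernel
 * networking stack as a timestamped UDP datagram.  Frame layout, port and
 * address are made up for illustration. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define NFRAMES 64                       /* samples per period (example) */

struct audio_packet {
    uint64_t timestamp_ns;               /* when this period starts */
    uint32_t nframes;
    float    samples[NFRAMES];           /* mono, 32-bit float */
};

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port   = htons(47000) };  /* arbitrary */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    struct audio_packet pkt = { .nframes = NFRAMES };
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    pkt.timestamp_ns = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    memset(pkt.samples, 0, sizeof(pkt.samples));              /* silence */

    sendto(sock, &pkt, sizeof(pkt), 0,
           (struct sockaddr *)&dst, sizeof(dst));
    close(sock);
    return 0;
}

Whether that path could ever beat a shared-memory graph is exactly the point
Paul makes in his reply: the networking stack sits farther from the audio
hardware than JACK does, not closer.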

-- 
Jonathan E. Brickman   j...@ponderworthy.com   (785)233-9977
Hear us at http://ponderworthy.com -- CDs and MP3s now available!
Music of compassion; fire, and life!!!
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev