Re: [PD] Choices of IPC when using fast-forward

2022-03-22 Thread Charles Z Henry
On Thu, Mar 17, 2022 at 3:06 AM cyrille henry  wrote:
>
> Hello Chuck,
>
>
> On 16/03/2022 at 22:00, Charles Z Henry wrote:
>
> [...]
>
> >
> > My conclusion there was that shmem can be used for asynchronous
> > inter-process communication with minimal risk to real-time.
> it can also be used between 2 synchronous processes!

I see that in the examples now.  You've used [pd~]  and it's pretty
cool how that works.

>   It's very
> > good as a fundamental object--it does not block, it does not
> > synchronize.
> that's the aim!
>
> > Notable limitations:
> > 1. Every process needs to know/use the same size for shmem ID's.
> is that a real limitation?

I was unsure about this ordering, but I have run some tests to clarify.

P1:[allocate ID size(
P2:[allocate ID size(  <-- Order does not matter
P1:[memset array1(  <--
P2:[memdump array2(

You can write the data before the 2nd process has run allocate and it
still works fine.
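
Just to write down why I think the order doesn't matter: my guess at the
mechanism, assuming shmem follows the same System V shmget/shmat pattern
as pix_share_write (an assumption--I haven't re-read the source), with a
made-up key and size:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 4711;                /* stands in for the ID in [allocate ID size( */
    size_t size = 1024 * sizeof(float);

    /* create-or-attach: whichever process gets here first creates the
       segment, the other one just attaches to the same memory */
    int id = shmget(key, size, IPC_CREAT | 0666);
    if (id < 0) { perror("shmget"); return 1; }

    float *buf = (float *)shmat(id, NULL, 0);
    if (buf == (void *)-1) { perror("shmat"); return 1; }

    buf[0] = 1.0f;                   /* a write done before the peer attaches... */
    printf("%f\n", buf[0]);          /* ...is still readable after it attaches */

    shmdt(buf);                      /* detach; the segment itself persists */
    return 0;
}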

> Do you have a practical example where one needs to share memory of different
> sizes?
>
> > 2. Once allocated, a shmem cannot be re-sized.
>
> There is an allocate message now!
> It allows changing the ID and the memory size.
> But Antoine had problems compiling it on Windows, so the latest version is only
> available in Deken for Linux and macOS.
> I'd be happy if anyone else wants to give a Windows compilation a try...

Re-sizing now works great too.  The only thing I noticed was not to send
extra allocate messages in the first process before the 2nd process
has had a chance to run its allocate.

There were also some orderings of allocate on P1 and P2 with different
sizes that still worked, even though that's not really how anyone
should use it.
None of my intentional misuse caused any crashes.

> > 3. Writing to/from an extremely large array all at once poses a risk
> > to real-time.
> yes, obviously, moving data from one memory position to another takes time.
> It's far from ideal, but in this "extremely large array" situation you can
> spread your read/write over time.
>
> best
> Cyrille

I think I misused the word "limitations"--sorry I used that word.
They're more like obstacles I was looking at.
I was working out some way to hand data around, while only allocating
shmem in 2^N sizes that hang around and get re-used instead of
re-allocated.
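
Something like this is the size-class rule I have in mind--just a C
sketch, the helper is hypothetical and not anything in shmem:

#include <stddef.h>

/* round a requested length up to the next power of two, so a small
   fixed set of segments can be re-used instead of re-allocated */
static size_t next_pow2(size_t n)
{
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* e.g. next_pow2(10 * 48000) == 524288 samples, so one 2^19 segment
   covers any measurement up to ~10.9 s at 48 kHz */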

I'll work my way up to testing how big a data transfer has to be to
cause a problem for real time.





Re: [PD] Choices of IPC when using fast-forward

2022-03-17 Thread Charles Z Henry
On Thu, Mar 17, 2022 at 3:26 AM IOhannes m zmölnig  wrote:
>
>
> On 3/17/22 08:58, cyrille henry wrote:
> >
> >> Notable limitations:
> >> 1. Every process needs to know/use the same size for shmem ID's.
> > is that a real limitation?
> > Do you have a practical example where one needs to share memory of
> > different sizes?
>
> i don't think this is the problem that chuck is referring to.
> afaiu, it's rather that the two processes need to have a priori
> knowledge of two different "thingies" in order to share some memory
> (without bad surprises): the ID and the size.
>
> from a UX pov the question is: why is it not possible to share only a single
> "thingy" (the ID) and have the other be shared implicitly?
>
> fmgdsaf
> IOhannes

Yes, it's exactly that--there's always at least one shared piece of
information that's hard-coded in both patches, if you want to
communicate solely through shmem.  It's trivially extended, though:
all processes agree to read from one chosen shmem ID of size 2 on
startup and know that it contains the ID/size of a variable-length
shmem, which then becomes known to everyone.  Before you know it,
you're writing a whole protocol.
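
In C terms, a rough sketch of that bootstrap step might look like the
following (the key, struct, and function are hypothetical--nothing from
the shmem API--and error handling is left out):

#include <sys/ipc.h>
#include <sys/shm.h>

#define DIRECTORY_KEY 9000   /* the one hard-coded thing everybody agrees on */

struct directory {
    int data_key;            /* ID of the variable-length segment */
    int data_size;           /* its current size */
};

/* writer side: publish where the real data lives */
static void publish(int data_key, int data_size)
{
    int id = shmget(DIRECTORY_KEY, sizeof(struct directory), IPC_CREAT | 0666);
    struct directory *d = (struct directory *)shmat(id, NULL, 0);
    d->data_key = data_key;
    d->data_size = data_size;
    shmdt(d);
}

/* reader side: attach DIRECTORY_KEY the same way, read data_key/data_size,
   then shmget/shmat the segment it points at */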

What's the best method for callbacks from a process that has completed
its task and has data ready to be staged out?
The toplevel process has to be reachable from multiple
processes--so that seems like it should just be a UDP port.  I'm unsure
on this point, though.
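
For what it's worth, my current guess (untested) is a single
[netreceive -u <port>] in the toplevel patch: UDP is connectionless, so
any number of worker processes could fire a short "done" message at the
same port with [netsend -u] once their data is staged and ready.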

Best,
Chuck





Re: [PD] Choices of IPC when using fast-forward

2022-03-17 Thread Charles Z Henry
On Thu, Mar 17, 2022 at 3:06 AM cyrille henry  wrote:
>
> Hello Chuck,
>
>
> On 16/03/2022 at 22:00, Charles Z Henry wrote:
>
> [...]
>
> >
> > My conclusion there was that shmem can be used for asynchronous
> > inter-process communication with minimal risk to real-time.
> it can also be used between 2 synchronous processes!
>
>   It's very
> > good as a fundamental object--it does not block, it does not
> > synchronize.
> that's the aim!
>
> > Notable limitations:
> > 1. Every process needs to know/use the same size for shmem ID's.
> is that a real limitation?
> Do you have a practical example where one needs to share memory of different
> sizes?

The basic thing I'm building is a network analyzer.  Send out a signal
X and record the response Y from the adc.
Then, you can recover the impulse response.  I want to apply it to
passive electrical circuits (and loudspeakers) for measurement, and to
room reverberation.

To get a better estimate of the impulse response, you can increase the
measurement time.  The noise terms grow slower than the signal terms
do.
There's no way of knowing before the 1st measurement how long the
measurement needs to be.  So, it's always going to be adjusted at
least once.
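
(To put a rough number on it, the way I understand the usual averaging
argument: the part of the recording correlated with X adds up coherently
with the measurement length T, while uncorrelated noise only grows like
sqrt(T), so the SNR of the recovered impulse response should improve
roughly as sqrt(T)--doubling the measurement time buys about 3 dB.)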


> > 2. Once allocated, a shmem cannot be re-sized.
>
> There is an allocate message now!
> It allows changing the ID and the memory size.

Ah, that's awesome!  I'll read it today.  I think I will have some questions

> But Antoine had problems compiling it on Windows, so the latest version is only
> available in Deken for Linux and macOS.
> I'd be happy if anyone else wants to give a Windows compilation a try...
>
> > 3. Writing to/from an extremely large array all at once poses a risk
> > to real-time.
> yes, obviously, moving data from one memory position to another takes time.
> It's far from ideal, but in this "extremely large array" situation you can
> spread your read/write over time.
>
> best
> Cyrille

Yes, that's a key feature I want to implement with abstractions.

There's no risk in doing the memory management and big transfers from
shmem to array in a "fast forward" or non-realtime process.
The toplevel, realtime audio process should manage the staging of
data and have that ability to spread it out over time.
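
A sketch of the kind of chunked copy I mean, in C terms (the names and
the CHUNK value are hypothetical, not the external's API):

#include <string.h>
#include <stddef.h>

#define CHUNK 4096   /* floats per tick; tune against the audio block budget */

typedef struct {
    const float *src;   /* attached shmem */
    float *dst;         /* destination array contents */
    size_t total;       /* floats to move in all */
    size_t done;        /* floats moved so far */
} staging;

/* call once per clock tick; returns 1 while work remains, 0 when finished */
static int staging_tick(staging *s)
{
    size_t n = s->total - s->done;
    if (n > CHUNK)
        n = CHUNK;
    memcpy(s->dst + s->done, s->src + s->done, n * sizeof(float));
    s->done += n;
    return s->done < s->total;
}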





Re: [PD] Choices of IPC when using fast-forward

2022-03-17 Thread Lucas Cordiviola

On 17/03/2022 04:58, cyrille henry wrote:

so the latest version is only available in Deken for Linux and macOS.
I'd be happy if anyone else wants to give a Windows compilation a
try...


Is shmem [v1.0] the latest?

There are Windows versions for it.

If there is a new version, where can I find the sources?


--

Telepathic message assisted by machines.






Re: [PD] Choices of IPC when using fast-forward

2022-03-17 Thread cyrille henry



On 17/03/2022 at 09:24, IOhannes m zmölnig wrote:


On 3/17/22 08:58, cyrille henry wrote:



Notable limitations:
1. Every process needs to know/use the same size for shmem ID's.

is that a real limitation?
Do you have a practical example where one needs to share memory of different
sizes?


i don't think this is the problem that chuck is referring to.
afaiu, it's rather that the two processes need to have a priori knowledge of two 
different "thingies" in order to share some memory (without bad surprises): the 
ID and the size.


Things are like that because I copied code from your objects "pix_share_read" and
"pix_share_write"!


from a UX pov the question is: why is it not possible to share only a single
"thingy" (the ID) and have the other be shared implicitly?


Yes, automatically sharing the memory size could be possible, and can be useful 
in some situations.
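
One possible layout (just thinking out loud, not a patch yet) would be to
put the size into the segment itself, so only the ID has to be shared:

struct shmem_segment {
    int   nfloats;   /* how many floats follow */
    float data[];    /* the shared samples (C99 flexible array member) */
};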

Since I don't spend a lot of time in Pd currently, one should not expect a
new version soon. But I accept patches!

cheers
c



fmgdsaf
IOhannes







Re: [PD] Choices of IPC when using fast-forward

2022-03-17 Thread IOhannes m zmölnig


On 3/17/22 08:58, cyrille henry wrote:



Notable limitations:
1. Every process needs to know/use the same size for shmem ID's.

is that a real limitation?
Do you have a practical example where one needs to share memory of
different sizes?


i don't think this is the problem that chuck is referring to.
afaiu, it's rather that the two processes need to have a priori 
knowledge of two different "thingies" in order to share some memory 
(without bad surprises): the ID and the size.


from a UX pov the question is: why is it not possible to share only a single
"thingy" (the ID) and have the other be shared implicitly?


fmgdsaf
IOhannes




Re: [PD] Choices of IPC when using fast-forward

2022-03-17 Thread cyrille henry

Hello Chuck,


On 16/03/2022 at 22:00, Charles Z Henry wrote:

[...]



My conclusion there was that shmem can be used for asynchronous
inter-process communication with minimal risk to real-time. 

it can also be used between 2 synchronous processes!

 It's very

good as a fundamental object--it does not block, it does not
synchronize.

that's the aim!


Notable limitations:
1. Every process needs to know/use the same size for shmem ID's.

is that a real limitation?
Do you have a practical example where one needs to share memory of different
sizes?


2. Once allocated, a shmem cannot be re-sized.


There is an allocate message now!
It allows changing the ID and the memory size.
But Antoine had problems compiling it on Windows, so the latest version is only
available in Deken for Linux and macOS.
I'd be happy if anyone else wants to give a Windows compilation a try...


3. Writing to/from an extremely large array all at once poses a risk
to real-time.

yes, obviously, moving data from one memory position to another takes time.
It's far from ideal, but in this "extremely large array" situation you can
spread your read/write over time.

best
Cyrille



I'd like, then, to write a pair of management abstractions for using fast
forward and shmem that make it easy to stage in/out large,
variable-length data.

Anybody else have best practices for IPC when using fast forward?
Having a listener on the 2nd process between computing sprints in the
"fast forward" process completely changes how I can do things.

other: is there a good way to start/stop processes other than
[ggee/shell]? cross-platform?

Chuck









Re: [PD] Choices of IPC when using fast-forward

2022-03-16 Thread Roman Haefeli
On Wed, 2022-03-16 at 16:00 -0500, Charles Z Henry wrote:
> 
> other: is there a good way to start/stop processes other than
> [ggee/shell]? cross-platform?

There is [command], which works on Linux and macOS, but not on Windows.

Roman

