Re: [PD] Delay time limit bug (was: PVoc patch "bug"?)

2015-09-22 Thread Alexandre Torres Porres
here's another example: there's a delay line with a size of 2048 samples,
in a patch with a block size of 2048, and the delay line is only able to
delay a maximum of 64 samples

2015-09-22 14:07 GMT-03:00 Alexandre Torres Porres :

>
>
> 2015-09-22 5:56 GMT-03:00 Christof Ressi :
>
>> You're totally right that the sentence >The delay time is always at
>> least one sample *and at most the length of the delay line (specified by
>> the delwrite~)*< is misleading.
>>
>
> well, I still consider it to be a bug; it's not just that the sentence is
> misleading, it's simply not happening because of a bug. There's nothing that
> should prevent you from reading a delay line up to the maximum that was
> specified; if it can't, then the object is buggy. If it has a limitation of
> a block less or so, then there's a simple way to fix it: just add an extra
> block to the delay line and make it work. Anyway, I filed this as a bug
> report yesterday; I hope it gets looked at soon, hopefully it'll be fixed
> for the next Pd release (0.47).
>
>
>
>> BTW: There's a funny issue when the blocksize of the [delread~] is
>> smaller than the blocksize of the [delwrite~]: In that case the
>> [delread~] is reading more often than the delay line itself is actually
>> updated, so you get repetitions of blocks.
>>
>
> Again, I think you can always code it to work around these issues. But in
> this case, I don't see why not to have them both in the same block.
>
>
>
>> > actually, I made some tests and it is the (buffersize - window size +
>> > one block of 64 samples).
>> Are you sure?
>>
>
> yep, check the patch I sent, works on vanilla.
>
> cheers
>
>
>
>> *Gesendet:* Montag, 21. September 2015 um 23:05 Uhr
>> *Von:* "Alexandre Torres Porres" 
>> *An:* "Christof Ressi" , "Miller Puckette" <
>> mpuck...@imusic1.ucsd.edu>, "pd-list@lists.iem.at" 
>> *Betreff:* Delay time limit bug (was: PVoc patch "bug"?)
>> > the actual limit of the delay line is the buffersize minus the window
>> > size
>>
>> actually, I made some tests and it is the (buffersize - window size +
>> one block of 64 samples).
>>
>> But anyway, this limitation is what I perceived, but I fail to see why
>> any such limitation should happen. If the delay is "x" long, we should be
>> able to read from "x" behind in time... if not, there's a bug in it. That's
>> how I see it, and why I marked this issue as a potential bug.
>>
>> From the [vd~] help file, it says
>>
>> "The delay time is always at least one sample *and at most the length of
>> the delay line (specified by the delwrite~)*"
>>
>> So if we can't read it at most from the specified delay line, there's a
>> bug!
>>
>> > since the delay line is only written for every block and you want to
>> read
>> > the last N samples from the delay line, [vd~] simply clips to the
>> > maximum reading index.
>>
>> Again, I fail to see a reason here. If such a limitation happens, maybe
>> the object could be coded in a way that allows some extra headroom to
>> make a full-length read-out possible.
>>
>> But I thought that maybe the order forcing of delay objects could be
>> something to take into consideration. Well, I did the order forcing and
>> many such tests, but nothing really changed!
>>
>> I have then the latest version attached. I'm copying miller here and also
>> sending to the list. I'll also post this as a bug report.
>>
>> cheers
>>
>>
>> 2015-09-21 16:45 GMT-03:00 Christof Ressi :
>>>
>>> Hey, as I suspected, you are simply hitting the limit of the delay line.
>>> You can test this on your own with the patch I've sent you. Note that the
>>> actual limit of the delay line is the buffersize minus the window size,
>>> since the delay line is only written for every block and you want to read
>>> the last N samples from the delay line. [vd~] simply clips to the maximum
>>> reading index. Note that there isn't any phase difference anymore between
>>> the two windows after both have exceeded the limit.
>>>
>>> Cheers
>>>
>>> *Gesendet:* Montag, 21. September 2015 um 19:53 Uhr
>>> *Von:* "Alexandre Torres Porres" 
>>> *An:* "Christof Ressi" , "pd-list@lists.iem.at" <
>>> pd-list@lists.iem.at>
>>> *Betreff:* Re: Re: PVoc patch "bug"?
>>> I've simplified the patch a lot so many things can be discarded.
>>>
>>> The window size shouldn't affect anything as the reading point in the
>>> delay line is fixed. Now I don't have [vline~] or anything, just a steady
>>> signal fed to [vd~], when we get close to the end of the delay line it just
>>> gets ruined, and that's all that there is to it. There's no flaw in the
>>> patch, nothing I didn't think of. It's really something very mysterious or
>>> perhaps a bug.
>>>
>>> The patch is now simpler and also vanilla compatible. I tried it in the
>>> new Pd Vanilla 0.46-7 and I got the same weird behaviour.
>>>
>>> Check attachment please
>>>
>>> 

Re: [PD] Pduino and arudino mini pro/raspi debian- Pduino or Comport?

2015-09-22 Thread Pagano, Patrick
I am saying I was using a Raspberry Pi with an Arduino Uno and the comport help
patch was working fine.
When I switched to the Arduino Pro Mini that patch was not importing the values
from the device in a readable format; when I switched to the Pduino patch, for
whatever reason the values were showing up.  So I assume it has to do with the
patches, and perhaps the Pduino patch is more suited to the newer Arduinos. I
will try the pd-pduino stuff tonight.

Patrick Pagano B.S, M.F.A
Audio and Projection Design Faculty
Digital Worlds Institute
University of Florida, USA
(352)294-2020


From: Pd-list  on behalf of IOhannes m zmoelnig 

Sent: Tuesday, September 22, 2015 3:26 AM
To: pd-l...@mail.iem.at
Subject: Re: [PD] Pduino and arudino mini pro/raspi debian- Pduino or   Comport?

On 2015-09-22 04:47, Pagano, Patrick wrote:
> I could not get the arduino pro mini to work with just comport no matter how 
> i tried, so i installed "Pduino" and [...] got it to work.

i'm not sure i understand what you are trying to say here.
"Pduino" is really just an *abstraction* built around [comport], so does
this mean that the problem was simply your patch?

>
> after installing pd-mapping, pd-pure, pd-moocow [pdstring]

btw, Debian now also has a package "pd-pduino" (but only since
recently), which will pull in all needed packages.

gfmser
IOhannes


___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Delay time limit bug (was: PVoc patch "bug"?)

2015-09-22 Thread Alexandre Torres Porres
2015-09-22 15:41 GMT-03:00 IOhannes m zmölnig :

> this seems to be the easy route, just as [s~]/[r~] or [throw~]/[catch~]:
> simply forbid the use of [delread~]/[vd~] if the block-sizes differ.
>

I think it's an elegant solution, and can't see why it would be a problem.
But then, I was suggesting it because Christof was pointing out how

"it would have to keep track of ALL the objects reading from the buffer and
the individual blocksizes they are operating at. Which would be highly
inefficient and give no practical benefit."

or forbid any use of [delread~]/[vd~] if the block-size is not 64 (which
> is really  the behaviour of [s~]/[r~])
>

now that's bad, because it'd ruin the usage of delay lines in FFT patches,
like spectral delays or the phase vocoder patch I'm working on.

cheers
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Understanding the mechanics of rebuilding Pd's DSP graph

2015-09-22 Thread Jonathan Wilkes via Pd-list
In C, what's the overhead of having function_call(return array->x_size)
instead of array->x_size inside a perform routine?

If that's not significant, it seems like it'd be better to over-allocate the
array at creation/resize time and report the requested size to the user.  That
way reallocation (and dsp-rebuilding) is only necessary if there's a
substantial size change, or if the array is used by an external that uses the
old API.
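
Just to make that concrete, here's a rough C sketch (hypothetical names,
nothing to do with the actual garray code) of the over-allocation idea:

    #include <stdlib.h>

    typedef struct _myarray {
        float *vec;      /* the sample data */
        int    size;     /* size reported to users and perform routines */
        int    alloc;    /* actually allocated size (>= size) */
    } t_myarray;

    /* the "function call" whose overhead is in question,
       versus reading a->size directly in the perform routine */
    static int myarray_getsize(const t_myarray *a)
    {
        return a->size;
    }

    /* only realloc (and, in a real implementation, rebuild the DSP graph)
       when the request doesn't fit the allocation or shrinks it a lot */
    static int myarray_resize(t_myarray *a, int newsize)
    {
        if (newsize > a->alloc || newsize * 4 < a->alloc)
        {
            int newalloc = newsize + newsize / 2 + 1;  /* grow with headroom */
            float *newvec = (float *)realloc(a->vec, newalloc * sizeof(float));
            if (!newvec)
                return -1;
            a->vec = newvec;
            a->alloc = newalloc;
            /* this is the only place a DSP graph rebuild would be needed */
        }
        a->size = newsize;   /* always report the requested size */
        return 0;
    }
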
That's certainly more difficult to do than just rebuilding the graph on every
resizing.  But to me it's preferable to telling new users, "Here's how to resize
an array, which is a central feature for using objects like [tabplay~] and 'Put'
menu arrays and [soundfiler], but in reality don't use it because [[explanation
of Pd's implementation details goes here]]."

-Jonathan

  On Tuesday, September 22, 2015 12:05 PM, Roman Haefeli  
wrote:
   

 On Sun, 2015-09-20 at 22:19 +0200, IOhannes m zmölnig wrote:
> On 09/17/2015 11:55 PM, Roman Haefeli wrote:
> 
> > Is the time it takes to recalculate the graph only dependent on the
> > number of tilde-objects running in the current instance of Pd? If so, is
> > that a linear correlation? 10 times more tilde-objects means it takes 10
> > times as long to recalculate the graph?
> 
> [skipping those]

Simple tests suggest that the relation is linear. But maybe this
depends on the kind of graph? What I tested: I created 500 audio
processing abstractions dynamically and then I measured the time it
takes to send 'dsp 0, dsp 1' to pd. I did the same test again with 1000
instances and time doubled.

> > Why is resizing tables so much slower, when tilde-objects are
> > referencing it? I noticed that even resizing very small tables can be a
> > cause for audio drop-outs. I wonder whether 'live-resizing' should be
> > avoided altogether.
> 
> because the table-accessing objects will only check whether a table
> exists (and what size it is) when the DSP graph is re-calculated.
> this is a speed optimization, so those objects don't need to check the
> table existence/size in each signal block.
> the way it is implemented is that a table is marked as "being used
> in DSP processing" by a referencing object. as soon as such a table
> changes its size (or is deleted), the DSP graph is notified - by means
> of recalculation.

Now, after knowing all these facts, it seems unwise to do table resizing
at all, especially for quite small tables. With today's amounts of RAM
available, it seems wise to allocate enough at patch-loading time and
only utilize the necessary part of it.  

> i guess the API could be changed to *unuse* a table (a simple refcounter
> should do), so that as soon as no DSP-object is referencing the object
> within the DSP-graph, any substantial change to it wouldn't trigger a
> DSP  graph recompilation.
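
A minimal sketch of that refcounting idea (hypothetical names, not Pd's actual
table API) could look something like this:

    #include <stdlib.h>

    /* stands in for whatever actually triggers the DSP graph recompile */
    void dsp_graph_rebuild(void);

    typedef struct _table {
        float *t_vec;
        int    t_size;
        int    t_dspusers;  /* how many tilde objects currently reference us */
    } t_table;

    static void table_use(t_table *t)   { t->t_dspusers++; }
    static void table_unuse(t_table *t) { if (t->t_dspusers > 0) t->t_dspusers--; }

    static void table_resize(t_table *t, int newsize)
    {
        t->t_vec  = (float *)realloc(t->t_vec, newsize * sizeof(float));
        t->t_size = newsize;
        if (t->t_dspusers > 0)   /* only rebuild while somebody is using it */
            dsp_graph_rebuild();
    }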

The ability to recompile only a partition of the graph in general would
be a huge gain, IMHO. The ability to resize arrays without recompilation
isn't that big an advantage, is it? It would allow for a little simpler
patching, though.

Roman
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


  ___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] [vd~] VS [delread~] - different delay limit!

2015-09-22 Thread Alexandre Torres Porres
I found another difference between vd~ and delread~

vd~ has that issue where you need to divide the time in ms by the
overlap number - which I think is bad, and maybe it should just work around
that. It's really annoying working with a different time range.

now, delread~ doesn't need that, you can work with the actual ms

one way or another, it seems bad that the two behave differently. My point is
that they should work the same way, and that vd~ should behave like delread~
in this case.

cheers

2015-09-22 15:34 GMT-03:00 Alexandre Torres Porres :

> funny, I found out about the same thing and just posted on the thread that
> I'm reporting it as a bug
>
> Well, my opinion is that there might be some explanation for why it happens,
> but also that both objects have bugs regarding the way they operate as they
> can't reach the delay limit when it comes to changing the block size, and
> they also have different limits... so both should be fixed to just be able
> to reach the specified maximum limit.
>
> cheers
>
> 2015-09-22 15:17 GMT-03:00 Christof Ressi :
>
>> In the course of a discussion with Alexandre I ran into something really
>> interesting: [delread~] and [vd~] have different delay limits! While the
>> limit of [delread~] is always the buffersize minus the blocksize of the
>> subpatch where it is located, the limit of [vd~] is 64 samples greater. Any
>> explanations?
>>
>> In my example patch, simply choose any blocksize, then set the delay time
>> to maximum 100 (which is actually beyond the maximum), and then toggle
>> between [vd~] and [delread~] to see the 64 samples difference...
>>
>>
>> ___
>> Pd-list@lists.iem.at mailing list
>> UNSUBSCRIBE and account-management ->
>> http://lists.puredata.info/listinfo/pd-list
>>
>>
>
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Pduino and arudino mini pro/raspi debian- Pduino or Comport?

2015-09-22 Thread IOhannes m zmölnig
On 09/22/2015 07:58 PM, Pagano, Patrick wrote:
> i am trying to apt-get that pd-pduino and it cannot locate it 
> is there a repo i need to update?

when i said "but only since recently", i really meant it: the package
has entered Debian/unstable on august 26, and has migrated to
Debian/testing (aka stretch) two weeks ago.

but you don't need it, as the package won't provide anything you don't
already have.
i was just doing some advertising of recent progress in Debian land.

fmdars
IOhannes




signature.asc
Description: OpenPGP digital signature
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Delay time limit bug (was: PVoc patch "bug"?)

2015-09-22 Thread IOhannes m zmölnig
On 09/22/2015 08:27 PM, Alexandre Torres Porres wrote:
> if you check my last patch, I can't see why it would be hard to make it
> happen. Instead of bothering about the block sizes of the vd~ objects,
> just make sure the delwrite~ is at the same block size and it

this seems to be the easy route, just as [s~]/[r~] or [throw~]/[catch~]:
simply forbid the use of [delread~]/[vd~] if the block-sizes differ.

or forbid any use of [delread~]/[vd~] if the block-size is not 64 (which
is really  the behaviour of [s~]/[r~])


are you sure you want that?

gfmasrd
IOhannes



signature.asc
Description: OpenPGP digital signature
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Delay time limit bug (was: PVoc patch "bug"?)

2015-09-22 Thread Alexandre Torres Porres
if you check my last patch, I can't see why it would be hard to make it
happen. Instead of bothering about the block sizes of the vd~ objects,
just make sure the delwrite~ is at the same block size and it should work
one way or another; if not, it's just a bug. It's bad that you have a 2048
delay size and can only use 64 samples, bad bad bad.

by the way, that is not the case with objects like z~ and delay~ - so,
again, I just think this is a serious bug that should be fixed. Any
explanation about it is just an explanation why the bug exists, not a
reason for it to exist.

by the way, testing my patch with delread~ shows that it can't delay at
all, while [vd~] at least is able to delay 64 samples.

I've made another patch to show how delread~ doesn't work at all

cheers

2015-09-22 15:08 GMT-03:00 Christof Ressi :

> > well, I still consider it to be a bug; it's not just that the sentence is
> > misleading, it's simply not happening because of a bug.
> > There's nothing that should prevent you from reading a delay line up to the
> > maximum that was specified; if it can't, then the object is buggy.
> > If it has a limitation of a block less or so, then there's a simple way to
> > fix it: just add an extra block to the delay line and make it work.
>
> Of course Pd COULD allocate 'extra' memory according to the blocksize of
> the reading object, but then it would have to keep track of ALL the objects
> reading from the buffer and the individual blocksizes they are operating
> at. Which would be highly inefficient and give no practical benefit. The
> easier way: changing that sentence in the help file ;-).
>
> But the additional 64 samples were bothering me and after some testing I
> discovered something really weird! I'll write this as a new post to the
> list.
>
>
>
> *Gesendet:* Dienstag, 22. September 2015 um 19:38 Uhr
> *Von:* "Alexandre Torres Porres" 
> *An:* "Christof Ressi" , "Miller Puckette" <
> mpuck...@imusic1.ucsd.edu>
> *Cc:* Pd-List 
> *Betreff:* Re: Delay time limit bug (was: PVoc patch "bug"?)
> here's another example, there's a delay line with a size of 2048 samples,
> in patch with a block size of 2048, and the delay line is only able to
> delay a maximum of 64 samples
>
> 2015-09-22 14:07 GMT-03:00 Alexandre Torres Porres :
>>
>>
>>
>> 2015-09-22 5:56 GMT-03:00 Christof Ressi :
>>>
>>> You're totally right that the sentence >The delay time is always at
>>> least one sample *and at most the length of the delay line (specified
>>> by the delwrite~)*< is misleading.
>>>
>>
>> well, I still consider it to be a bug; it's not just that the sentence is
>> misleading, it's simply not happening because of a bug. There's nothing that
>> should prevent you from reading a delay line up to the maximum that was
>> specified; if it can't, then the object is buggy. If it has a limitation of
>> a block less or so, then there's a simple way to fix it: just add an extra
>> block to the delay line and make it work. Anyway, I filed this as a bug
>> report yesterday; I hope it gets looked at soon, hopefully it'll be fixed
>> for the next Pd release (0.47).
>>
>>
>>
>>> BTW: There's a funny issue when the blocksize of the [delread~] is
>>> smaller than the blocksize of the [delwrite~]: In that case the
>>> [delread~] is reading more often than the delay line itself is actually
>>> updated, so you get repetitions of blocks.
>>>
>>
>> Again, I think you can always code it to work around these issues. But in
>> this case, I don't see why not to have them both in the same block.
>>
>>
>>
>>> > actually, I made some tests and it is the (buffersize - window size
>>> > + one block of 64 samples).
>>> Are you sure?
>>>
>>
>> yep, check the patch I sent, works on vanilla.
>>
>> cheers
>>
>>
>>
>>> *Gesendet:* Montag, 21. September 2015 um 23:05 Uhr
>>> *Von:* "Alexandre Torres Porres" 
>>> *An:* "Christof Ressi" , "Miller Puckette" <
>>> mpuck...@imusic1.ucsd.edu>, "pd-list@lists.iem.at" >> >
>>> *Betreff:* Delay time limit bug (was: PVoc patch "bug"?)
>>> > the actual limit of the delay line is the buffersize minus the
>>> > window size
>>>
>>> actually, I made some tests and it is the (buffersize - window size +
>>> one block of 64 samples).
>>>
>>> But anyway, this limitation is what I perceived, but I fail to see why
>>> any such limitation should happen. If the delay is "x" long, we should be
>>> able to read from "x" behind in time... if not, there's a bug in it. That's
>>> how I see it, and why I marked this issue as a potential bug.
>>>
>>> From the [vd~] help file, it says
>>>
>>> "The delay time is always at least one sample *and at most the length
>>> of the delay line (specified by the delwrite~)*"
>>>
>>> So if we can't read it at most from the specified delay line, there's a
>>> bug!
>>>
>>> > since the delay line is only written for every block and you 

Re: [PD] [vd~] VS [delread~] - different delay limit!

2015-09-22 Thread Alexandre Torres Porres
funny, I found out about the same thing and just posted on the thread that
I'm reporting it as a bug

Well, my opinion is that there might be some explanation for why it happens,
but also that both objects have bugs regarding the way they operate as they
can't reach the delay limit when it comes to changing the block size, and
they also have different limits... so both should be fixed to just be able
to reach the specified maximum limit.

cheers

2015-09-22 15:17 GMT-03:00 Christof Ressi :

> In the course of a discussion with Alexandre I ran into something really
> interesting: [delread~] and [vd~] have different delay limits! While the
> limit of [delread~] is always the buffersize minus the blocksize of the
> subpatch where it is located, the limit of [vd~] is 64 samples greater. Any
> explanations?
>
> In my example patch, simply choose any blocksize, then set the delay time
> to maximum 100 (which is actually beyond the maximum), and then toggle
> between [vd~] and [delread~] to see the 64 samples difference...
>
>
> ___
> Pd-list@lists.iem.at mailing list
> UNSUBSCRIBE and account-management ->
> http://lists.puredata.info/listinfo/pd-list
>
>
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Understanding the mechanics of rebuilding Pd's DSP graph

2015-09-22 Thread Roman Haefeli
On Sun, 2015-09-20 at 22:19 +0200, IOhannes m zmölnig wrote:
> On 09/17/2015 11:55 PM, Roman Haefeli wrote:
> 
> > Is the time it takes to recalculate the graph only dependent on the
> > number of tilde-objects running in the current instance of Pd? If so, is
> > that a linear correlation? 10 times more tilde-objects means it takes 10
> > times as long to recalculate the graph?
> 
> [skipping those]

Simple tests suggest that the relation is linear. But maybe this
depends on the kind of graph? What I tested: I created 500 audio
processing abstractions dynamically and then I measured the time it
takes to send 'dsp 0, dsp 1' to pd. I did the same test again with 1000
instances and time doubled.

> > Why is resizing tables so much slower, when tilde-objects are
> > referencing it? I noticed that even resizing very small tables can be a
> > cause for audio drop-outs. I wonder whether 'live-resizing' should be
> > avoided altogether.
> 
> because the table-accessing objects will only check whether a table
> exists (and what size it is) when the DSP graph is re-calculated.
> this is a speed optimization, so those objects don't need to check the
> table existence/size in each signal block.
> the way it is implemented is that a table is marked as "being used
> in DSP processing" by a referencing object. as soon as such a table
> changes its size (or is deleted), the DSP graph is notified - by means
> of recalculation.

Now, after knowing all these facts, it seems unwise to do table resizing
at all, especially for quite small tables. With today's amounts of RAM
available, it seems wise to allocate enough at patch-loading time and
only utilize the necessary part of it.  

> i guess the API could be changed to *unuse* a table (a simple refcounter
> should do), so that as soon as no DSP-object is referencing the object
> within the DSP-graph, any substantial change to it wouldn't trigger a
> DSP  graph recompilation.

The ability to recompile only a partition of the graph in general would
be a huge gain, IMHO. The ability to resize arrays without recompilation
isn't that big an advantage, is it? It would allow for a little simpler
patching, though.

Roman


signature.asc
Description: This is a digitally signed message part
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Pduino and arudino mini pro/raspi debian- Pduino or Comport?

2015-09-22 Thread Pagano, Patrick
i am trying to apt-get that pd-pduino and it cannot locate it 
is there a repo i need to update?


Patrick Pagano B.S, M.F.A
Audio and Projection Design Faculty
Digital Worlds Institute
University of Florida, USA
(352)294-2020


From: Pd-list  on behalf of Pagano, Patrick 

Sent: Tuesday, September 22, 2015 1:56 PM
To: IOhannes m zmoelnig; pd-l...@mail.iem.at
Subject: Re: [PD] Pduino and arudino mini pro/raspi debian- Pduino  or  
Comport?

I am saying I was using a Raspberry Pi with an Arduino Uno and the comport help
patch was working fine.
When I switched to the Arduino Pro Mini that patch was not importing the values
from the device in a readable format; when I switched to the Pduino patch, for
whatever reason the values were showing up.  So I assume it has to do with the
patches, and perhaps the Pduino patch is more suited to the newer Arduinos. I
will try the pd-pduino stuff tonight.

Patrick Pagano B.S, M.F.A
Audio and Projection Design Faculty
Digital Worlds Institute
University of Florida, USA
(352)294-2020


From: Pd-list  on behalf of IOhannes m zmoelnig 

Sent: Tuesday, September 22, 2015 3:26 AM
To: pd-l...@mail.iem.at
Subject: Re: [PD] Pduino and arudino mini pro/raspi debian- Pduino or   Comport?

On 2015-09-22 04:47, Pagano, Patrick wrote:
> I could not get the arduino pro mini to work with just comport no matter how 
> i tried, so i installed "Pduino" and [...] got it to work.

i'm not sure i understand what you are trying to say here.
"Pduino" is really just an *abstraction* built around [comport], so does
this mean that the problem was simply your patch?

>
> after installing pd-mapping, pd-pure, pd-moocow [pdstring]

btw, Debian now also has a package "pd-pduino" (but only since
recently), which will pull in all needed packages.

gfmser
IOhannes


___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list

___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Delay time limit bug (was: PVoc patch "bug"?)

2015-09-22 Thread Christof Ressi


> well, I still consider it to be a bug; it's not just that the sentence is misleading, it's simply not happening because of a bug.

> There's nothing that should prevent you from reading a delay line up to the maximum that was specified; if it can't, then the object is buggy.

> If it has a limitation of a block less or so, then there's a simple way to fix it: just add an extra block to the delay line and make it work.

 

Of course Pd COULD allocate 'extra' memory according to the blocksize of the reading object, but then it would have to keep track of ALL the objects reading from the buffer and the individual blocksizes they are operating at. Which would be highly inefficient and give no practical benefit. The easier way: changing that sentence in the help file ;-).

 

But the additional 64 samples were bothering me and after some testing I discovered something really weird! I'll write this as a new post to the list.

 

 

 

Gesendet: Dienstag, 22. September 2015 um 19:38 Uhr
Von: "Alexandre Torres Porres" 
An: "Christof Ressi" , "Miller Puckette" 
Cc: Pd-List 
Betreff: Re: Delay time limit bug (was: PVoc patch "bug"?)


here's another example, there's a delay line with a size of 2048 samples, in patch with a block size of 2048, and the delay line is only able to delay a maximum of 64 samples 

 
2015-09-22 14:07 GMT-03:00 Alexandre Torres Porres :


 
 
2015-09-22 5:56 GMT-03:00 Christof Ressi :





You're totally right that the sentence >The delay time is always at least one sample and at most the length of the delay line (specified by the delwrite~)< is misleading.





 

well, I still consider it to be a bug; it's not just that the sentence is misleading, it's simply not happening because of a bug. There's nothing that should prevent you from reading a delay line up to the maximum that was specified; if it can't, then the object is buggy. If it has a limitation of a block less or so, then there's a simple way to fix it: just add an extra block to the delay line and make it work. Anyway, I filed this as a bug report yesterday; I hope it gets looked at soon, hopefully it'll be fixed for the next Pd release (0.47).

 

  




BTW: There's a funny issue when the blocksize of the [delread~] is smaller than the blocksize of the [delwrite~]: In that case the [delread~] is reading more often than the delay line itself is actually updated, so you get repetitions of blocks.




 

Again, I think you can always code it to work around these issues. But in this case, I don't see why not to have them both in the same block.

 

 




> actually, I made some tests and it is the (buffersize - window size + one block of 64 samples).

Are you sure? 




 

yep, check the patch I sent, works on vanilla.

 

cheers



 

 





Gesendet: Montag, 21. September 2015 um 23:05 Uhr
Von: "Alexandre Torres Porres" 
An: "Christof Ressi" , "Miller Puckette" , "pd-list@lists.iem.at" 
Betreff: Delay time limit bug (was: PVoc patch "bug"?)




> the actual limit of the delay line is the buffersize minus the window size

 


actually, I made some tests and it is the (buffersize - window size + one block of 64 samples).

 

But anyway, this limitation is what I perceived, but I fail to see why any such limitation should happen. If the delay is "x" long, we should be able to read from "x" behind in time... if not, there's a bug in it. That's how I see it, and why I marked this issue as a potential bug.

 

From the [vd~] help file, it says

 

"The delay time is always at least one sample and at most the length of the delay line (specified by the delwrite~)"

 

So if we can't read it at most from the specified delay line, there's a bug!

 


> since the delay line is only written for every block and you want to read

> the last N samples from the delay line, [vd~] simply clips to the 

> maximum reading index. 

 

Again, I fail to see a reason here. If such a limitation happens, maybe the object could be coded in a way that allows some extra headroom to make a full-length read-out possible.

 

But I thought that maybe the order forcing of delay objects could be something to take into consideration. Well, I did the order forcing and many such tests, but nothing really changed! 

 

I have then the latest version attached. I'm copying miller here and also sending to the list. I'll also post this as a bug report.

 

cheers

 



 
2015-09-21 16:45 GMT-03:00 Christof Ressi :





Hey, as I suspected, you are simply hitting the limit of the delay line. You can test this on your own with the patch I've sent you. Note that the actual limit of the delay line is the buffersize minus the window size, since the delay line is only written for every block and you want to read the last N samples from the delay line. [vd~] simply 

[PD] [vd~] VS [delread~] - different delay limit!

2015-09-22 Thread Christof Ressi
In the course of a discussion with Alexandre I ran into something really interesting: [delread~] and [vd~] have different delay limits! While the limit of [delread~] is always the buffersize minus the blocksize of the subpatch where it is located, the limit of [vd~] is 64 samples greater. Any explanations?

 

In my example patch, simply choose any blocksize, then set the delay time to maximum 100 (which is actually beyond the maximum), and then toggle between [vd~] and [delread~] to see the 64 samples difference...
 #N canvas 108 235 1541 619 10;
#N canvas 643 87 725 364 subpatch 1;
#N canvas 154 251 450 300 delwrite 0;
#X obj 94 94 inlet~;
#X obj 98 196 outlet~;
#X obj 102 143 delwrite~ \$0-del 100;
#X connect 0 0 2 0;
#X restore 84 86 pd delwrite;
#N canvas 149 132 450 300 delread 0;
#X obj 99 77 inlet~;
#X obj 94 192 outlet~;
#X obj 94 118 r \$0-time;
#X obj 93 146 delread~ \$0-del;
#X connect 2 0 3 0;
#X connect 3 0 1 0;
#X restore 73 127 pd delread;
#X obj 71 -8 inlet;
#X obj 56 250 tabwrite~ \$0-plot;
#X obj 320 208 block~ 64;
#X msg 322 161 set \$1;
#X obj 336 42 r \$0-blocksize;
#X obj 383 84 t f b;
#X obj 415 109 samplerate~;
#X obj 412 181 /;
#X obj 418 133 / 1000;
#X obj 419 221 s \$0-blocksize_ms;
#X msg 129 26 0;
#X obj 84 51 osc~ 10;
#X obj 223 99 s \$0-plot;
#X obj 217 43 loadbang;
#X msg 221 72 xticks 0 64 1;
#N canvas 149 132 450 300 vd 0;
#X obj 99 77 inlet~;
#X obj 94 192 outlet~;
#X obj 94 118 r \$0-time;
#X obj 93 147 vd~ \$0-del;
#X connect 2 0 3 0;
#X connect 3 0 1 0;
#X restore 100 162 pd vd;
#X obj 67 208 *~;
#X obj 115 208 *~;
#X obj 164 146 r \$0-toggle;
#X obj 164 180 == 0;
#X msg 180 110 0;
#X connect 0 0 1 0;
#X connect 0 0 17 0;
#X connect 1 0 18 0;
#X connect 2 0 3 0;
#X connect 2 0 12 0;
#X connect 5 0 4 0;
#X connect 6 0 7 0;
#X connect 6 0 5 0;
#X connect 7 0 9 0;
#X connect 7 1 8 0;
#X connect 8 0 10 0;
#X connect 9 0 11 0;
#X connect 10 0 9 1;
#X connect 12 0 13 1;
#X connect 13 0 0 0;
#X connect 15 0 16 0;
#X connect 15 0 22 0;
#X connect 16 0 14 0;
#X connect 17 0 19 0;
#X connect 18 0 3 0;
#X connect 19 0 3 0;
#X connect 20 0 21 0;
#X connect 20 0 18 1;
#X connect 21 0 19 1;
#X connect 22 0 21 0;
#X restore 112 201 pd subpatch;
#N canvas 0 50 450 250 (subpatch) 0;
#X array \$0-plot 4410 float 2;
#X coords 0 1 4410 -1 1000 140 1 0 0;
#X restore 20 302 graph;
#X obj 25 240 s \$0-blocksize;
#X msg 24 69 64;
#X msg 26 93 128;
#X msg 30 118 256;
#X msg 32 142 512;
#X msg 35 163 1024;
#X msg 39 184 2048;
#X msg 43 206 4096;
#X text 132 268 ms;
#X obj 148 151 nbx 5 14 0 100 0 0 \$0-time empty delay_time 0 -8 0
10 -262144 -1 -1 100 256;
#X obj 112 177 bng 15 250 50 0 empty \$0-toggle empty 17 7 0 10 -262144
-1 -1;
#X text 131 176 click me;
#X floatatom 92 270 5 0 0 0 blocksize: #0-blocksize_ms -, f 5;
#X text 107 148 set:;
#X text 224 156 buffersize (100 ms) - blocksize;
#X text 222 136 maximum delay time for [delread~] =;
#X text 17 451 (ticks are every 64 samples);
#X obj 232 208 tgl 15 0 \$0-toggle empty empty 17 7 0 10 -262144 -1
-1 0 1;
#X text 251 208 toggle between [vd~] and [delread~];
#X connect 3 0 2 0;
#X connect 4 0 2 0;
#X connect 5 0 2 0;
#X connect 6 0 2 0;
#X connect 7 0 2 0;
#X connect 8 0 2 0;
#X connect 9 0 2 0;
#X connect 12 0 0 0;
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Delay time limit bug (was: PVoc patch "bug"?)

2015-09-22 Thread Alexandre Torres Porres
2015-09-22 5:56 GMT-03:00 Christof Ressi :

> You're totally right that the sentence >The delay time is always at least
> one sample *and at most the length of the delay line (specified by the
> delwrite~)*< is misleading.
>

well, I still consider it to be a bug; it's not just that the sentence is
misleading, it's simply not happening because of a bug. There's nothing that
should prevent you from reading a delay line up to the maximum that was
specified; if it can't, then the object is buggy. If it has a limitation of
a block less or so, then there's a simple way to fix it: just add an extra
block to the delay line and make it work. Anyway, I filed this as a bug
report yesterday; I hope it gets looked at soon, hopefully it'll be fixed
for the next Pd release (0.47).



> BTW: There's a funny issue when the blocksize of the [delread~] is smaller
> than the blocksize of the [delwrite~]: In that case the [delread~] is
> reading more often than the delay line itself is actually updated, so you
> get repetitions of blocks.
>

Again, I think you can always code it to work around these issues. But in
this case, I don't see why not to have them both in the same block.



> > actually, I made some tests and it is the (buffersize - window size +
> > one block of 64 samples).
> Are you sure?
>

yep, check the patch I sent, works on vanilla.

cheers



> *Gesendet:* Montag, 21. September 2015 um 23:05 Uhr
> *Von:* "Alexandre Torres Porres" 
> *An:* "Christof Ressi" , "Miller Puckette" <
> mpuck...@imusic1.ucsd.edu>, "pd-list@lists.iem.at" 
> *Betreff:* Delay time limit bug (was: PVoc patch "bug"?)
> > the actual limit of the delay line is the buffersize minus the window
> > size
>
> actually, I made some tests and it is the (buffersize - window size +
> one block of 64 samples).
>
> But anyway, this limitation is what I perceived, but I fail to see why any
> such limitation should happen. If the delay is "x" long, we should be able
> to read from "x" behind in time... if not, there's a bug in it. That's how
> I see it, and why I marked this issue as a potential bug.
>
> From the [vd~] help file, it says
>
> "The delay time is always at least one sample *and at most the length of
> the delay line (specified by the delwrite~)*"
>
> So if we can't read it at most from the specified delay line, there's a
> bug!
>
> > since the delay line is only written for every block and you want to read
> > the last N samples from the delay line, [vd~] simply clips to the
> > maximum reading index.
>
> Again, I fail to see a reason here. If such a limitation happens, maybe
> the object could be coded in a way that allows some extra headroom to
> make a full-length read-out possible.
>
> But I thought that maybe the order forcing of delay objects could be
> something to take into consideration. Well, I did the order forcing and
> many such tests, but nothing really changed!
>
> I have then the latest version attached. I'm copying miller here and also
> sending to the list. I'll also post this as a bug report.
>
> cheers
>
>
> 2015-09-21 16:45 GMT-03:00 Christof Ressi :
>>
>> Hey, as I suspected, you are simply hitting the limit of the delay line.
>> You can test this on your own with the patch I've sent you. Note that the
>> actual limit of the delay line is the buffersize minus the window size,
>> since the delay line is only written for every block and you want to read
>> the last N samples from the delay line. [vd~] simply clips to the maximum
>> reading index. Note that there isn't any phase difference anymore between
>> the two windows after both have exceeded the limit.
>>
>> Cheers
>>
>> *Gesendet:* Montag, 21. September 2015 um 19:53 Uhr
>> *Von:* "Alexandre Torres Porres" 
>> *An:* "Christof Ressi" , "pd-list@lists.iem.at" <
>> pd-list@lists.iem.at>
>> *Betreff:* Re: Re: PVoc patch "bug"?
>> I've simplified the patch a lot so many things can be discarded.
>>
>> The window size shouldn't affect anything as the reading point in the
>> delay line is fixed. Now I don't have [vline~] or anything, just a steady
>> signal fed to [vd~], when we get close to the end of the delay line it just
>> gets ruined, and that's all that there is to it. There's no flaw in the
>> patch, nothing I didn't think of. It's really something very mysterious or
>> perhaps a bug.
>>
>> The patch is now simpler and also vanilla compatible. I tried it in the
>> new Pd Vanilla 0.46-7 and I got the same weird behaviour.
>>
>> Check attachment please
>>
>> cheers
>>
>> 2015-09-21 14:12 GMT-03:00 Christof Ressi :
>>>
>>> Well, I just think you're hitting the limit of the delay line. Your
>>> window size is 2048 samples, so inside the subpatch that's 2048/(44.1*4) =
>>> 11.6 ms. But one window is one hop size (2.9 ms) behind, therefore 11.6 ms
>>> + 2.9 ms = 14.5 ms and 1000 ms - 14.5 ms = 985.5 ms --> that's 

Re: [PD] A patch to create a patch to create a patch to create a patch to close puredata...

2015-09-22 Thread Olivier Baudu
Hi list,

It's pretty much the same stuff as the last one, but not exactly, so...

https://vimeo.com/140111564

Cheers...

°1


Le 13/09/2015 02:04, Olivier Baudu a écrit :
> Masterpieces !! :-D
> 
> Are those patches the ones Benjamin was talking about a few answers before?
> (He saw them at the PdConv in Montreal)
> 
> I take this opportunity to post my last work :
> 
> https://vimeo.com/139090261
> 
> Cheers
> 
> °1
> 
> 
> Le 12/09/2015 07:13, Matt Barber a écrit :
>> Jonathan Wilkes and I made these a few years ago. Run "orthodox" first
>> to get a feel for it.
>>
>> On Mon, Sep 7, 2015 at 9:57 AM, Olivier Baudu <01iv...@labomedia.net
>> > wrote:
>>
>> Youplala...
>>
>> https://vimeo.com/138517416
>>
>> Cheers
>>
>> °1
>>
>> Le 20/08/2015 01:20, Olivier Baudu a écrit :
>> > One more useless stuff for you, list :
>> >
>> > https://vimeo.com/136762246
>> >
>> > Cheers
>> >
>> > °1
>> >
>> > Le 11/08/2015 23:48, Olivier Baudu a écrit :
>> >> Hi list,
>> >>
>> >> Did you think I'd forgotten you ? :-p
>> >>
>> >> It follows :
>> >>
>> >> https://vimeo.com/136014798
>> >>
>> >> Cheers...
>> >>
>> >> °1
>> >>
>> >> Le 17/07/2015 13:41, i go bananas a écrit :
>> >>> these are awesome.
>> >>>
>> >>> On Fri, Jul 17, 2015 at 8:23 PM, Benjamin ~ b01 > 
>> >>> >> wrote:
>> >>>
>> >>> nice piece of digital art ;)
>> >>>
>> >>> btw, does someone on the list have an old patch that was 
>> producing a
>> >>> nice gui animation inside Pd ?
>> >>> I saw it a long ago @ Pd Conv Montreal ...
>> >>>
>> >>> thanks
>> >>> ++
>> >>> benjamin
>> >>>
>> >>> Le 15/07/2015 01:30, Olivier Baudu a écrit :
>> >>> > Sorry list...
>> >>> >
>> >>> > I can't refrain myself... :-p
>> >>> >
>> >>> > The Bangarland :
>> >>> > https://vimeo.com/133499700
>> >>> >
>> >>> > Cheers...
>> >>> >
>> >>> > 01
>> >>> >
>> >>> > Le 06/07/2015 22:04, Jaime E Oliver a écrit :
>> >>> >> nice indeed!
>> >>> >> J
>> >>> >> On Jul 6, 2015, at 2:10 PM, Jack >  > >> wrote:
>> >>> >>
>> >>> >> Hello Olivier,
>> >>> >>
>> >>> >> Very nice ;)
>> >>> >> ++
>> >>> >>
>> >>> >> Jack
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> Le 06/07/2015 20:46, Olivier Baudu a écrit :
>> >>> > Thank you Julian...
>> >>> >
>> >>> > Well, I don't know if this one is funny but, for sure, 
>> it's still
>> >>> > useless... :-)
>> >>> >
>> >>> > The Carouslide: https://vimeo.com/132739686
>> >>> >
>> >>> > :-p
>> >>> >
>> >>> > 01
>> >>> >
>> >>> > Le 01/07/2015 15:02, Julian Brooks a écrit :
>> >>> >> definitely raised a smile :)
>> >>> >>
>> >>> >> 2015-06-30 17:15 GMT+01:00 Olivier Baudu 
>> <01iv...@labomedia.net 
>> >
>> >>> >> > 
>> > >>> >>
>> >>> >> Hi list...
>> >>> >>
>> >>> >> I got bored again... so...
>> >>> >>
>> >>> >> https://vimeo.com/132195870
>> >>> >>
>> >>> >> :-p
>> >>> >>
>> >>> >> Cheers...
>> >>> >>
>> >>> >> °1
>> >>> >>
>> >>> >> Le 24/06/2015 17:34, Olivier Baudu a écrit :
>> >>> >>> Hi list...
>> >>> >>>
>> >>> >>> I had time to waste so here you are :
>> >>> >>>
>> >>> >>> https://vimeo.com/131648084
>> >>> >>>
>> >>> >>> :-p
>> >>> >>>
>> >>> >>> Cheers...
>> >>> >>>
>> >>> >>> °1ivier
>> >>> >>>
>> >>> >>>
>> >>> >>>
>> >>> >>> ___
>> >>> >>> Pd-list@lists.iem.at 
>> >
>> >>> 
>> >> mailing
>> >>> >>> list UNSUBSCRIBE and account-management ->
>> >>> >>> 

Re: [PD] A patch to create a patch to create a patch to create a patch to close puredata...

2015-09-22 Thread Matt Barber
Good one. I like how minimalist these all are.

On Tue, Sep 22, 2015 at 5:39 PM, Olivier Baudu <01iv...@labomedia.net>
wrote:

> Hi list,
>
> It's pretty much the same stuff as the last one, but not exactly, so...
>
> https://vimeo.com/140111564
>
> Cheers...
>
> °1
>
>
> Le 13/09/2015 02:04, Olivier Baudu a écrit :
> > Masterpieces !! :-D
> >
> > Are those patches the ones Benjamin was talking about a few answers before?
> > (He saw them at the PdConv in Montreal)
> >
> > I take this opportunity to post my last work :
> >
> > https://vimeo.com/139090261
> >
> > Cheers
> >
> > °1
> >
> >
> > Le 12/09/2015 07:13, Matt Barber a écrit :
> >> Jonathan Wilkes and I made these a few years ago. Run "orthodox" first
> >> to get a feel for it.
> >>
> >> On Mon, Sep 7, 2015 at 9:57 AM, Olivier Baudu <01iv...@labomedia.net
> >> > wrote:
> >>
> >> Youplala...
> >>
> >> https://vimeo.com/138517416
> >>
> >> Cheers
> >>
> >> °1
> >>
> >> Le 20/08/2015 01:20, Olivier Baudu a écrit :
> >> > One more useless stuff for you, list :
> >> >
> >> > https://vimeo.com/136762246
> >> >
> >> > Cheers
> >> >
> >> > °1
> >> >
> >> > Le 11/08/2015 23:48, Olivier Baudu a écrit :
> >> >> Hi list,
> >> >>
> >> >> Did you think I'd forgotten you ? :-p
> >> >>
> >> >> It follows :
> >> >>
> >> >> https://vimeo.com/136014798
> >> >>
> >> >> Cheers...
> >> >>
> >> >> °1
> >> >>
> >> >> Le 17/07/2015 13:41, i go bananas a écrit :
> >> >>> these are awesome.
> >> >>>
> >> >>> On Fri, Jul 17, 2015 at 8:23 PM, Benjamin ~ b01  
> >> >>> >> wrote:
> >> >>>
> >> >>> nice piece of digital art ;)
> >> >>>
> >> >>> btw, does someone on the list have an old patch that was
> producing a
> >> >>> nice gui animation inside Pd ?
> >> >>> I saw it a long ago @ Pd Conv Montreal ...
> >> >>>
> >> >>> thanks
> >> >>> ++
> >> >>> benjamin
> >> >>>
> >> >>> Le 15/07/2015 01:30, Olivier Baudu a écrit :
> >> >>> > Sorry list...
> >> >>> >
> >> >>> > I can't refrain myself... :-p
> >> >>> >
> >> >>> > The Bangarland :
> >> >>> > https://vimeo.com/133499700
> >> >>> >
> >> >>> > Cheers...
> >> >>> >
> >> >>> > 01
> >> >>> >
> >> >>> > Le 06/07/2015 22:04, Jaime E Oliver a écrit :
> >> >>> >> nice indeed!
> >> >>> >> J
> >> >>> >> On Jul 6, 2015, at 2:10 PM, Jack   >> >> wrote:
> >> >>> >>
> >> >>> >> Hello Olivier,
> >> >>> >>
> >> >>> >> Very nice ;)
> >> >>> >> ++
> >> >>> >>
> >> >>> >> Jack
> >> >>> >>
> >> >>> >>
> >> >>> >>
> >> >>> >>
> >> >>> >> Le 06/07/2015 20:46, Olivier Baudu a écrit :
> >> >>> > Thank you Julian...
> >> >>> >
> >> >>> > Well, I don't know if this one is funny but, for
> sure, it's still
> >> >>> > useless... :-)
> >> >>> >
> >> >>> > The Carouslide: https://vimeo.com/132739686
> >> >>> >
> >> >>> > :-p
> >> >>> >
> >> >>> > 01
> >> >>> >
> >> >>> > Le 01/07/2015 15:02, Julian Brooks a écrit :
> >> >>> >> definitely raised a smile :)
> >> >>> >>
> >> >>> >> 2015-06-30 17:15 GMT+01:00 Olivier Baudu <
> 01iv...@labomedia.net 
> >> >
> >> >>> >> 
> >>  >> >>> >>
> >> >>> >> Hi list...
> >> >>> >>
> >> >>> >> I got bored again... so...
> >> >>> >>
> >> >>> >> https://vimeo.com/132195870
> >> >>> >>
> >> >>> >> :-p
> >> >>> >>
> >> >>> >> Cheers...
> >> >>> >>
> >> >>> >> °1
> >> >>> >>
> >> >>> >> Le 24/06/2015 17:34, Olivier Baudu a écrit :
> >> >>> >>> Hi list...
> >> >>> >>>
> >> >>> >>> I had time to waste so here you are :
> >> >>> >>>
> >> >>> >>> https://vimeo.com/131648084
> >> >>> >>>
> >> >>> >>> :-p
> >> >>> >>>
> >> >>> >>> Cheers...
> >> >>> >>>
> >> >>> >>> °1ivier
> >> >>> >>>
> >> >>> >>>
> >> >>> >>>
> >> >>> >>> ___
> >> >>> >>> 

Re: [PD] Understanding the mechanics of rebuilding Pd's DSP graph

2015-09-22 Thread Matt Barber
There's nothing wrong per se with resizing an array -- but there are good
reasons not to do it while a patch is running after a [tab*] object has
referred to it. I have myself only noticed audio dropouts when I'm resizing
a table with soundfiler; I thought it must have been a disk-access
bottleneck (soundfiler runs synchronously, yes?), but it would make sense
if, in very large patches, a resize triggering a DSP recalc could do it.
Though, then wouldn't adding any tilde object do the same?

On Tue, Sep 22, 2015 at 4:00 PM, Jonathan Wilkes via Pd-list <
pd-list@lists.iem.at> wrote:

> In C, what's the overhead of having function_call(return array->x_size)
> instead
> of array->x_size inside a perform routine?
>
> If that's not significant, it seems like it'd be better to over-allocate
> the array at creation/resize time and report the requested size to the
> user.  That way reallocation (and dsp-rebuilding) is only necessary if
> there's a substantial size change, or if the array is used by an external
> that uses the old API.
>
> That's certainly more difficult to do than just rebuilding the graph on
> every resizing.  But to me it's preferable to telling new users, "Here's how
> to resize an array, which is a central feature for using objects like
> [tabplay~] and
> 'Put' menu arrays and [soundfiler], but in reality don't use it because
> [[explanation of Pd's implementation details go here]]."
>
> -Jonathan
>
>
> On Tuesday, September 22, 2015 12:05 PM, Roman Haefeli 
> wrote:
>
>
> On Sun, 2015-09-20 at 22:19 +0200, IOhannes m zmölnig wrote:
> > On 09/17/2015 11:55 PM, Roman Haefeli wrote:
> >
> > > Is the time it takes to recalculate the graph only dependent on the
> > > number of tilde-objects running in the current instance of Pd? If so,
> is
> > > that a linear correlation? 10 times more tilde-objects means it takes
> 10
> > > times as long to recalculate the graph?
> >
> > [skipping those]
>
> Simple tests suggest that the relation is linear. But maybe this
> depends on the kind of graph? What I tested: I created 500 audio
> processing abstractions dynamically and then I measured the time it
> takes to send 'dsp 0, dsp 1' to pd. I did the same test again with 1000
> instances and time doubled.
>
> > > Why is resizing tables so much slower, when tilde-objects are
> > > referencing it? I noticed that even resizing very small tables can be a
> > > cause for audio drop-outs. I wonder whether 'live-resizing' should be
> > > avoided altogether.
> >
> > because the table-accessing objects will only check whether a table
> > exists (and what size it is) when the DSP graph is re-calculated.
> > this is a speed optimization, so those objects don't need to check the
> > table existence/size in each signal block.
> > the way it is implemented is that a table is marked as "being used
> > in DSP processing" by a referencing object. as soon as such a table
> > changes its size (or is deleted), the DSP graph is notified - by means
> > of recalculation.
>
> Now, after knowing all these facts, it seems unwise to do table resizing
> at all, especially for quite small tables. With today's amounts of RAM
> available, it seems wise to allocate enough at patch-loading time and
> only utilize the necessary part of it.
>
> > i guess the API could be changed to *unuse* a table (a simple refcounter
> > should do), so that as soon as no DSP-object is referencing the object
> > within the DSP-graph, any substantial change to it wouldn't trigger a
> > DSP  graph recompilation.
>
> The ability to recompile only a partition of the graph in general would
> be a huge gain, IMHO. The ability to resize arrays without recompilation
> isn't that big an advantage, is it? It would allow for a little simpler
> patching, though.
>
>
> Roman
>
> ___
> Pd-list@lists.iem.at mailing list
> UNSUBSCRIBE and account-management ->
> http://lists.puredata.info/listinfo/pd-list
>
>
>
> ___
> Pd-list@lists.iem.at mailing list
> UNSUBSCRIBE and account-management ->
> http://lists.puredata.info/listinfo/pd-list
>
>
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Understanding the mechanics of rebuilding Pd's DSP graph

2015-09-22 Thread Jonathan Wilkes via Pd-list
Does [soundfiler] rebuild the dsp graph on read, or only if the -resize flag is
used?  If it's the latter then you can just set the right array size ahead of
time. Then if you still get dropouts you'll know it's the blocking i/o doing it.

-Jonathan
 


 On Tuesday, September 22, 2015 10:50 PM, Matt Barber  
wrote:
   

 There's nothing wrong per se with resizing an array -- but there are good 
reasons not to do it while a patch is running after a [tab*] object has 
referred to it. I have myself only noticed audio dropouts when I'm resizing a 
table with soundfiler; I thought it must have been a disk-access bottleneck 
(soundfiler runs synchronously, yes?), but it would make sense if it in very 
large patches that a resize triggering a DSP recalc could do it. Though, then 
wouldn't adding any tilde object do the same?
On Tue, Sep 22, 2015 at 4:00 PM, Jonathan Wilkes via Pd-list 
 wrote:

In C, what's the overhead of having function_call(return array->x_size)
instead of array->x_size inside a perform routine?

If that's not significant, it seems like it'd be better to over-allocate the
array at creation/resize time and report the requested size to the user.  That
way reallocation (and dsp-rebuilding) is only necessary if there's a
substantial size change, or if the array is used by an external that uses the
old API.

That's certainly more difficult to do than just rebuilding the graph on every
resizing.  But to me it's preferable to telling new users, "Here's how to resize
an array, which is a central feature for using objects like [tabplay~] and 'Put'
menu arrays and [soundfiler], but in reality don't use it because [[explanation
of Pd's implementation details goes here]]."

-Jonathan

  On Tuesday, September 22, 2015 12:05 PM, Roman Haefeli  
wrote:
   

 On Sun, 2015-09-20 at 22:19 +0200, IOhannes m zmölnig wrote:
> On 09/17/2015 11:55 PM, Roman Haefeli wrote:
> 
> > Is the time it takes to recalculate the graph only dependent on the
> > number of tilde-objects running in the current instance of Pd? If so, is
> > that a linear correlation? 10 times more tilde-objects means it takes 10
> > times as long to recalculate the graph?
> 
> [skipping those]

Simple tests suggest that the relation is linear. But maybe this
depends on the kind of graph? What I tested: I created 500 audio
processing abstractions dynamically and then I measured the time it
takes to send 'dsp 0, dsp 1' to pd. I did the same test again with 1000
instances and time doubled.

> > Why is resizing tables so much slower, when tilde-objects are
> > referencing it? I noticed that even resizing very small tables can be a
> > cause for audio drop-outs. I wonder whether 'live-resizing' should be
> > avoided altogether.
> 
> because the table-accessing objects will only check whether a table
> exists (and what size it is) when the DSP graph is re-calculated.
> this is a speed optimization, so those objects don't need to check the
> table existence/size in each signal block.
> the way it is implemented is that a table is marked as "being used
> in DSP processing" by a referencing object. as soon as such a table
> changes its size (or is deleted), the DSP graph is notified - by means
> of recalculation.

Now, after knowing all these facts, it seems unwise to do table resizing
at all, especially for quite small tables. With today's amounts of RAM
available, it seems wise to allocate enough at patch-loading time and
only utilize the necessary part of it.  

> i guess the API could be changed to *unuse* a table (a simple refcounter
> should do), so that as soon as no DSP-object is referencing the object
> within the DSP-graph, any substantial change to it wouldn't trigger a
> DSP  graph recompilation.

The ability to recompile only a partition of the graph in general would
be a huge gain, IMHO. The ability to resize arrays without recompilation
isn't that big an advantage, is it? It would allow for a little simpler
patching, though.

Roman
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


   
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list





  ___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


[PD] more delay weirdness

2015-09-22 Thread Alexandre Torres Porres
wow, I'm still finding some weird things going on with delay lines and fft
subpatches.

Find my newest issue in the attached patch. Now I have only [z~] as the
delay line (but same happens with [delay~]).

So I have two patches: in the parent patch, [z~ 64] will act as a "back"
window, and you can check that it indeed prints a block that is 64 samples
behind.

When it comes to the subpatch with a block of 256 and overlap of 4, the
"front" window is not in the "front" anymore, but behind by 128 samples!!!

what the hell?


delay-test-again.pd
Description: Binary data
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] Delay time limit bug (was: PVoc patch "bug"?)

2015-09-22 Thread Christof Ressi

You're totally right that the sentence >The delay time is always at least one sample and at most the length of the delay line (specified by the delwrite~)< is misleading.

 

But the reason why it can't be exactly the length of the delay line is quite simple: Because in Pd, audio is computed in blocks, the [delwrite~] is actually updated every M samples, where M is the blocksize of the subpatch where the [delwrite~] is located. [delread~] and [vd~] always get the N last samples of the delay line, starting from the index you're sending to the object, where N is the blocksize of the subpatch where the reading object is located. So if the reading index were exactly at the last sample, you could only read that single sample for the whole block. 
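
As a back-of-the-envelope sketch (just the arithmetic, not the actual d_delay.c code), the usable limit then ends up being something like:

    /* rough sketch of the clipping, not the actual Pd source */
    int max_delay_samples(int delaylinesize, int reader_blocksize)
    {
        int limit = delaylinesize - reader_blocksize;
        return (limit < 1) ? 1 : limit;  /* never less than one sample */
    }

    /* e.g. a 2048-sample delwrite~ read from a subpatch reblocked to 2048:
       2048 - 2048 = 0, clipped up to 1 sample -> effectively no delay left,
       which matches what Alexandre sees with [delread~] */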

 


BTW: There's a funny issue when the blocksize of the [delread~] is smaller than the blocksize of the [delwrite~]: In that case the [delread~] is reading more often than the delay line itself is actually updated, so you get repetitions of blocks.
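
To put rough numbers on that (just as an illustration): with [delwrite~] in a subpatch reblocked to 256 and [delread~] in the parent patch at 64, the delay line only advances once every 256 samples while the reader fetches a 64-sample block four times in that period, so all four reads start from the same write position and you hear the same block repeated.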

 

> actually, I made some tests and it is the (buffersize - window size + one block of 64 samples).

 

Are you sure? Maybe it only appears like that because there's an inherent 'delay' of 64 samples between a parent patch of blocksize 64 and a subpatch with a higher blocksize. I should do some testing about that.

 

 


Gesendet: Montag, 21. September 2015 um 23:05 Uhr
Von: "Alexandre Torres Porres" 
An: "Christof Ressi" , "Miller Puckette" , "pd-list@lists.iem.at" 
Betreff: Delay time limit bug (was: PVoc patch "bug"?)


> the actual limit of the delay line is the buffersize minus the window size

 


actually, I made some tests and it is the (buffersize - window size + one block of 64 samples).

 

But anyway, this limitation is what I perceived, but I fail to see why any such limitation should happen. If the delay is "x" long, we should be able to read from "x" behind in time... if not, there's a bug in it. That's how I see it, and why I marked this issue as a potential bug.

 

From the [vd~] help file, it says

 

"The delay time is always at least one sample and at most the length of the delay line (specified by the delwrite~)"

 

So if we can't read it at most from the specified delay line, there's a bug!

 


> since the delay line is only written for every block and you want to read

> the last N samples from the delay line, [vd~] simply clips to the 

> maximum reading index. 

 

Again, I fail to see a reason here. If such a limitation happens, maybe the object could be coded in a way that allows some extra headroom to make a full-length read-out possible.

 

But I thought that maybe the order forcing of delay objects could be something to take into consideration. Well, I did the order forcing and many such tests, but nothing really changed! 

 

I have then the latest version attached. I'm copying miller here and also sending to the list. I'll also post this as a bug report.

 

cheers

 



 
2015-09-21 16:45 GMT-03:00 Christof Ressi :





Hey, as I suspected, you are simply hitting the limit of the delay line. You can test this on your own with the patch I've sent you. Note that the actual limit of the delay line is the buffersize minus the window size, since the delay line is only written for every block and you want to read the last N samples from the delay line. [vd~] simply clips to the maximum reading index. Note that there isn't any phase difference anymore between the two windows after both have exceeded the limit.

 

Cheers

 

Gesendet: Montag, 21. September 2015 um 19:53 Uhr
Von: "Alexandre Torres Porres" 
An: "Christof Ressi" , "pd-list@lists.iem.at" 
Betreff: Re: Re: PVoc patch "bug"?




I've simplified the patch a lot so many things can be discarded.
 

The window size shouldn't affect anything as the reading point in the delay line is fixed. Now I don't have [vline~] or anything, just a steady signal fed to [vd~], when we get close to the end of the delay line it just gets ruined, and that's all that there is to it. There's no flaw in the patch, nothing I didn't think of. It's really something very mysterious or perhaps a bug.

The patch is now simpler and also vanilla compatible. I tried it in the new Pd Vanilla 0.46-7 and I got the same weird behaviour.

 

Check attachment please

 

cheers


 
2015-09-21 14:12 GMT-03:00 Christof Ressi :





Well, I just think you're hitting the limit of the delay line. Your window size is 2048 samples, so inside the subpatch that's 2048/(44.1*4) = 11.6 ms. But one window is one hop size (2.9 ms) behind, therefore 11.6 ms + 2.9 ms = 14.5 ms and 1000 ms - 14.5 ms = 985.5 ms --> that's pretty much the limit you were experiencing. Hope that helps.

 

Cheers



Gesendet: Montag, 21. September 2015 um 18:27 Uhr
Von: "Alexandre Torres Porres" 

Re: [PD] Pduino and arudino mini pro/raspi debian- Pduino or Comport?

2015-09-22 Thread IOhannes m zmoelnig
On 2015-09-22 04:47, Pagano, Patrick wrote:
> I could not get the arduino pro mini to work with just comport no matter how 
> i tried, so i installed "Pduino" and [...] got it to work.

i'm not sure i understand what you are trying to say here.
"Pduino" is really just an *abstraction* built around [comport], so does
this mean that the problem was simply your patch?

> 
> after installing pd-mapping, pd-pure, pd-moocow [pdstring]

btw, Debian now also has a package "pd-pduino" (but only since
recently), which will pull in all needed packages.

gfmser
IOhannes



signature.asc
Description: OpenPGP digital signature
___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] how to expand time limit for video delay? -> ramfs

2015-09-22 Thread jamal crawford
hi list

> ssd buffer?

if I remember correctly you can capture a video stream to a file and
read it simultaneously and asynchronously (neologism?) using Linux;
perhaps have a look for examples of that,

or... use tapes :-)

if you have some spare RAM, use ramfs, which is even faster than an SSD

cheers

___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
http://lists.puredata.info/listinfo/pd-list