On Bela we have been running with a blocksize of 8 since 2016, and then we 
moved to 16 samples per block around 2018, I think (in both cases redefining 
the problematic constants). Only on a couple of occasions did this cause an 
incompatibility, with externals that relied on a hardcoded value of 64 instead 
of taking the DEFDACBLKSIZE value from `s_stuff.h`.
Another drawback is the CPU penalty of running smaller blocks (which is what 
made us switch from 8 to 16 samples per block at the time): smaller blocksizes 
generally mean higher CPU usage. And when running with a larger buffer size 
(which may be desirable when CPU performance matters more than latency), the 
CPU penalty remains if the blocksize is hardcoded.
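To illustrate the incompatibility mentioned above, here is a sketch of an 
external that hardcodes the stock block size of 64 instead of using 
DEFDACBLKSIZE. In a real external the constant comes from Pd's `s_stuff.h`; 
the fallback define below only keeps the sketch self-contained, and both 
function names are made up for the example.

```c
/* In a real external, DEFDACBLKSIZE comes from Pd's s_stuff.h;
   this fallback define just keeps the sketch self-contained. */
#ifndef DEFDACBLKSIZE
#define DEFDACBLKSIZE 64
#endif

/* Fragile: on a build where DEFDACBLKSIZE is redefined to 16 (as on
   Bela), this reads and writes 48 samples past the real block. */
static void copy_block_hardcoded(float *out, const float *in)
{
    for (int i = 0; i < 64; i++) out[i] = in[i];
}

/* Portable: tracks whatever value the Pd build was compiled with. */
static void copy_block(float *out, const float *in)
{
    for (int i = 0; i < DEFDACBLKSIZE; i++) out[i] = in[i];
}
```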

A runtime value would be a beneficial change for us.


Christof Ressi wrote on 16/01/2022 18:04:
This sounds reasonable. I have made a feature request on GitHub: 
https://github.com/pure-data/pure-data/issues/1549

When I have time, I can make a PR. Or maybe someone else wants to do it?

Christof

On 16.01.2022 17:15, Athos Bacchiocchi wrote:
I tried setting DEFSENDVS, DEFDELVS to 16 as well, and everything seems to be 
working fine now.

> (I would not recommend doing this in practice, though.)

Why wouldn't you recommend it? (I am assuming you were referring to changing 
the DEFSENDVS/DEFDELVS define values).

I agree that it would be nice to have the "internal/rendering" blocksize of Pd 
be a runtime parameter as well.

If I understand correctly, one would have to:
- Add a runtime flag, e.g. -rendering-blocksize (as opposed to the existing 
-blocksize).
- Modify all the code that depends on DEFDACBLKSIZE to use the runtime value 
instead. One way to do this would be to provide the value through a global 
function, so that it cannot be changed from anywhere but the startup code.
- Keep the default value of this now-modifiable rendering block size at 64, 
for backward compatibility.
- Make sure the io block size is always forced to be at least the rendering 
block size.
- For the ALSA settings, add the additional selectable values for the io block 
size to the menu.

Would that be enough?
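The "global function" step could be sketched roughly like this (all names here 
are hypothetical, chosen for the example, not taken from Pd's sources):

```c
#define DEFDACBLKSIZE 64  /* stock default, kept as the fallback */

/* Private to this file; set once at startup, read-only afterwards. */
static int pd_dacblocksize = DEFDACBLKSIZE;

/* Called once by the startup code when parsing -rendering-blocksize. */
void sys_set_dacblocksize(int n)
{
    /* Restrict to power-of-two sizes, which the DSP scheduler expects. */
    if (n >= 1 && (n & (n - 1)) == 0)
        pd_dacblocksize = n;
}

/* The rest of the code calls this instead of using DEFDACBLKSIZE. */
int sys_getdacblocksize(void)
{
    return pd_dacblocksize;
}
```

Since the variable is static, only the setter can change it, which gives the 
"cannot be changed from anywhere but the startup code" property.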

As for the need for it: I would personally use it. I tuned my Linux system to 
run at 16 samples, but I cannot use those settings when I want to run Pd (I 
can with Bitwig and Bespoke Synth).
In general I think it would be useful nowadays, considering that Linux and Pd 
can run on all sorts of embedded devices that are in turn part of more complex 
signal chains (e.g. guitar pedalboards) together with several other digital 
gadgets, with all the latencies adding up.

Also, if I understand correctly, this would improve the time granularity of 
control-rate objects and messages... even though, on the other hand, it might 
increase the relative overhead of control-rate processing compared to 
audio-rate processing.
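Rough arithmetic behind that granularity point (a sketch, not Pd code): 
control computations happen once per DSP block, so the block period sets the 
best-case timing resolution at a given sample rate.

```c
/* Control events are handled once per DSP block, so the block period
   is the best-case timing resolution for control-rate messages. */
double block_period_ms(int blocksize, double samplerate)
{
    return 1000.0 * (double)blocksize / samplerate;
}

/* At 44.1 kHz:
     block_period_ms(64, 44100.0)  -> ~1.45 ms
     block_period_ms(16, 44100.0)  -> ~0.36 ms  */
```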

Athos



_______________________________________________
[email protected] mailing list
UNSUBSCRIBE and account-management -> 
https://lists.puredata.info/listinfo/pd-list


