[music-dsp] Impulse response normalization

2013-11-04 Thread Alessandro Saccoia
Hi all,
I can't find any material about impulse response normalization for a convolution
reverb. Using Logic's Space Designer I notice that there's definitely some
preprocessing of the impulse response that one loads: given the same input and
impulse without preprocessing, the convolution would yield a maximum floating
point value of 4, which would cause digital clipping.
I imagine that, to avoid clipping for an arbitrary input, the normalization
has to be done on the peak of the absolute value of the frequency response,
right?
But I have trouble figuring out how to look for this peak, which theoretically
can be anywhere during the decay of the impulse response: imagining a
sliding-window FFT analysis still puzzles me, because when I look at the output
of the Matlab command FREQZ(B,A,N) with varying N, I naturally get different
peaks due to the different interpolations.
Moreover, if this is the way to go, I wonder whether the maximum partition size
of the partitioned convolution algorithm should be used to set the size of the
sliding window during the analysis of the impulse response. Thanks!
Alessandro
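Roughly what I mean, as a NumPy sketch (the function name, the zero-padding factor, and the headroom parameter are my own guesses, not anything Logic is documented to do):

```python
import numpy as np

def normalize_ir(ir, headroom_db=0.0):
    """Scale an impulse response so the peak of its frequency-response
    magnitude is 1.0 (minus optional headroom in dB).

    Caveat: this bounds the steady-state gain for any single sinusoid;
    the true worst case for arbitrary inputs is the L1 norm,
    np.sum(np.abs(ir)), which is far more conservative.
    """
    ir = np.asarray(ir, dtype=np.float64)
    # Zero-pad the FFT so the frequency grid is dense enough that the
    # true peak does not fall between bins.
    n_fft = 1 << int(np.ceil(np.log2(4 * len(ir))))
    peak = np.abs(np.fft.rfft(ir, n_fft)).max()
    if peak == 0.0:
        return ir
    return ir * (10.0 ** (-headroom_db / 20.0) / peak)
```

With no headroom, re-running the same analysis on the result gives a peak magnitude of 1.0.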
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-10 Thread Alessandro Saccoia
 
 I don't think you have been clear about what you are trying to achieve.
 
 Are you trying to compute the sum of many signals for each time point? Or are 
 you trying to compute the running sum of a single signal over many time 
 points?

Hello, thanks for helping. I want to sum prerecorded signals progressively.
Each time a new recording is added to the system, the signal is summed into the
running mix and then discarded, so the original source is lost.
At each instant it should be possible to retrieve the mix accumulated up to that moment.

 
 What are the signals? are they of nominally equal amplitude?
 

normalized (-1,1)

 Your original formula looks like you are looking for a recursive solution to 
 a normalized running sum of a single signal over many time points.

nope. I meant summing many signals, without knowing all of them beforehand, and 
needing to know all the intermediate results

 
 I could relax this requirement and force all the signals to
 be of a given size, but I can't see how a sample-by-sample summation,
 where there are M sums (M the forced length of the signals) could
 profit from a running compensation.
 
 It doesn't really matter whether the sum is across samples of a single
 signal or across signals; you can always use error compensation when
 computing the sum. It's just a way of increasing the precision of an
 accumulator.
 

I have watched the Wikipedia entry again; yeah, that makes total sense now,
yesterday night it was really late!
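Concretely, applying the compensation per sample across a whole buffer would look something like this (just a sketch; the function and variable names are mine):

```python
import numpy as np

def add_to_mix(mix, comp, signal):
    """One Kahan step per sample: add `signal` into the running `mix`,
    carrying the per-sample rounding error in `comp`.
    All three arrays must have the same length and dtype."""
    y = signal - comp      # re-inject the error lost on previous adds
    t = mix + y            # big + small: low-order bits of y are dropped
    comp = (t - mix) - y   # recover those bits algebraically
    return t, comp
```

Summing many small float32 values this way keeps the result much closer to the exact sum than a plain float32 accumulator does.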

 
 Also, with a non-linear operation, I fear introducing discontinuities
 that could sound even worse than the white noise I expect using the
 simple approach.
 
 Using floating point is a non-linear operation. Your simple approach also has 
 quite some nonlinearity (accumulated error due to recursive division and 
 re-rounding at each step).

I see. cheers

alessandro

 
 Ross



Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-09 Thread Alessandro Saccoia
Thanks Bjorn,

 
 On Dec 9, 2012, at 2:33 PM, Alessandro Saccoia wrote:
 
 Hi list, 
 given a large number of signals (N > 1000), I wonder what happens when
 adding them with a running sum Y.
 
 Y = (1/N) * X + ((N - 1)/N) * Y
 
 
 Yes, your intuition is correct: this is not a good way to go, although how 
 bad it is depends on your datatype. All I can say here is that this formula 
 will result in N roundoff errors for one of your signals, N-1 for another, 
 and so on.
 
 You might *need* to use this formula if you don't know N in advance, but 
 after processing the first sample, you will know N, (right?) so I don't see 
 how that can happen.

It's going to be a sort of cumulative process that goes on in time, so I won't
necessarily know N in advance. If I had strong evidence that I should prefer
one method over the other, I could decide to keep all the temporary X1, X2, …
and recompute everything each time. Performance and storage are not a strong
concern in this case, but the quality of the final and intermediate results is.
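In that case one option (my own note; it is algebraically the same recursion as above, just fused) is the incremental-mean update, which needs no advance knowledge of N:

```python
import numpy as np

def update_mix(mix, new_signal, n):
    """Incremental mean after the n-th signal arrives (n = 1, 2, ...).
    Algebraically identical to mix = (1/n)*x + ((n-1)/n)*mix, but the
    single fused update performs fewer rounded operations per step.
    Keeping the accumulator in float64 and converting to float32 only
    on output preserves intermediate precision."""
    mix = np.asarray(mix, dtype=np.float64)
    x = np.asarray(new_signal, dtype=np.float64)
    return mix + (x - mix) / n
```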

 
 
 When you do know N in advance, it would be better to:
 
 Y = (1/N) * (X1 + X2 + ... + XN)
 
 or
 
 Y = (1/N) X1 + (1/N) X2 + ... + (1/N) XN
 
 Exactly which is better depends on your datatype (fixed vs floating point, 
 etc). If you are concerned about overflow, the latter is better. For 
 performance, the former is better. For precision, without thinking too 
 carefully about it I would think the former is better, but, obviously, not in 
 the presence of overflow.

I think I will use floating point, and maybe spend some time trying to figure
out what the transfer function is for N -> +inf, and seeing if something (a
sort of dithering, say) could help keep the rounding error limited to a
certain region of the spectrum to avoid white noise. I am not sure I will make
it, but I could definitely give it a try. cheers,

alessandro

 
   bjorn
 
 Given the limited precision, intuitively something bad will happen for a
 large N.
 Is there a better method than the trivial scale-and-sum to minimize the
 effects of the loss of precision?
 If I reduce the bandwidth of the input signals in advance, do I have any
 chance of minimizing these (possible) artifacts?
 Thank you
 Thank you
 
 
 -
 Bjorn Roche
 http://www.xonami.com
 Audio Collaboration
 http://blog.bjornroche.com
 
 
 
 



Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-09 Thread Alessandro Saccoia
That is really interesting, but I can't see how to apply Kahan's algorithm
to a set of signals.
In my original question, I was thinking of mixing signals of arbitrary sizes.
I could relax this requirement and force all the signals to be of a given
size, but I can't see how a sample-by-sample summation, where there are M sums
(M being the forced length of the signals), could profit from a running
compensation.
Also, with a non-linear operation, I fear introducing discontinuities that
could sound even worse than the white noise I expect using the simple approach.


On Dec 10, 2012, at 1:43 AM, Brad Smith rainwarr...@gmail.com wrote:

 I would consider storing N and your sum separately, doing the division
 only to read the output (don't destroy your original sum in the
 process). I guess this is the first thing that Bjorn suggested, but
 maybe stated a little differently.
 
 There's a technique called Kahan's Algorithm that tries to compensate
 for the running errors accumulated during summation. It can help
 increase the precision a bit:
 http://en.wikipedia.org/wiki/Kahan_summation_algorithm
 
 Also, there's the simple technique of recursively dividing the sums
 into pairs, which will prevent later results from having greater error
 than earlier ones, though you'd probably need to know N in advance for
 this to be practical: http://en.wikipedia.org/wiki/Pairwise_summation
 
 -- Brad Smith
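The pairwise idea, for equal-length signals, might be sketched as follows (a hypothetical helper, not code from the thread):

```python
import numpy as np

def pairwise_sum(signals):
    """Sum a non-empty list of equal-length signals by recursively
    halving the list: each sample then passes through about log2(N)
    additions instead of N, so worst-case rounding error grows
    O(log N) rather than O(N)."""
    if len(signals) == 1:
        return signals[0].copy()
    mid = len(signals) // 2
    return pairwise_sum(signals[:mid]) + pairwise_sum(signals[mid:])
```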
 
 
 
 

Re: [music-dsp] recommendation for VST host for dev. modifications

2012-06-27 Thread Alessandro Saccoia
Mike,
Mike,
The OP asked for an open source VST host, which is what MrsWatson and the JUCE
one are.
I don't see any suggestion of avoiding the SDK in this thread, so I guess you
are referring to messages that never reached the list.
best
-a

On Jun 27, 2012, at 2:13 AM, Michael Gogins wrote:

 MrsWatson appears to presuppose the use of the Steinberg VST SDK,
 which is precisely what I am proposing to avoid.
 
 Regards,
 Mike
 
 On Tue, Jun 26, 2012 at 6:31 PM, Alessandro Saccoia
 alessandro.sacc...@gmail.com wrote:
 You could take a look at Mrs Watson from Teragon Audio
 http://teragonaudio.com/MrsWatson.html
 best
 Alessandro
 
 On Jun 26, 2012, at 11:10 PM, Michael Gogins wrote:
 
 The JUCE license (GPL) is not compatible with the Csound license (LGPL).
 
 Regards,
 Mike
 
 On Tue, Jun 26, 2012 at 4:56 PM, Rob Belcham hybridalien...@hotmail.com 
 wrote:
 JUCE has quite a good vst host. I use it a lot for testing VST plugins.
 
 Cheers
 Rob
 
 --
 From: Roberta music-...@musemagic.com
 Sent: Monday, June 25, 2012 4:40 AM
 To: music-dsp@music.columbia.edu
 Subject: [music-dsp] recommendation for VST host for dev. modifications
 
 
 Hi,
 
 I'm wondering if anyone has worked with any VST host source, open source,
 for some development modifications: which one most closely models Cubase and
 is easy to work with? Alternatively, the src. for VST Host which comes with
 the Cubase VST SDK? Right now my best candidate is LMMS. Thx.
 
 
 
 
 
 --
 Michael Gogins
 Irreducible Productions
 http://www.michael-gogins.com
 Michael dot Gogins at gmail dot com
 
 
 
 
 -- 
 Michael Gogins
 Irreducible Productions
 http://www.michael-gogins.com
 Michael dot Gogins at gmail dot com



Re: [music-dsp] Seminar: Listening and Learning Systems for Composition and Live Performance (by Nick Collins)

2012-05-31 Thread Alessandro Saccoia
Good morning Sam,
I am really interested in signing up and participating in the seminar if you
could set up a webcam, as I cannot come to Barcelona on those days. Could you
offer this option? Thank you
alessandro

On May 31, 2012, at 3:39 AM, Sam Roig wrote:

 8/9/10.06.2012
 
 LISTENING AND LEARNING SYSTEMS FOR COMPOSITION AND LIVE PERFORMANCE
 (a 3-day seminar by Nick Collins)
 
 This seminar will explore practical machine listening and machine learning 
 methods within the SuperCollider environment, alongside associated technical 
 and musical issues. Applications for such techniques range from autonomous 
 concert systems, through novel musical controllers, to sound analysis 
 projects outside of realtime informing musical composition. We will 
 investigate built-in and third party UGens and classes for listening and 
 learning, including the SCMIR library for music information retrieval in 
 SuperCollider.
 
 Level: intermediate
 
 Tutor: Nick Collins [ http://www.sussex.ac.uk/Users/nc81/ ]
 
 Nick Collins is a composer, performer and researcher in the field of computer 
 music. He lectures at the University of Sussex, running the music informatics 
 degree programmes and research group. Research interests include machine 
 listening, interactive and generative music, and audiovisual performance. He 
 co-edited the Cambridge Companion to Electronic Music (Cambridge University 
 Press 2007) and The SuperCollider Book (MIT Press, 2011) and wrote the 
 Introduction to Computer Music (Wiley 2009). iPhone apps include RISCy, 
 TOPLAPapp, Concat, BBCut and PhotoNoise for iPad.
 
 Dates:
 Friday 08.06.2012, 18:00-22.00h.
 Saturday 09.06.2012, 11:00–14:00h, 16:00-19:00h
 Sunday 10.06.2012, 11:00–14:00h, 16:00-19:00h
 
 Location: Fabra i Coats – Fàbrica de Creació. Sant Adrià, 20. Barcelona. 
 Metro Sant Andreu.
 
 Price: 90€
 
 To sign up please send an email to i...@lullcec.org.
 
 +info: [ 
 http://lullcec.org/en/2012/workshops/sistemes-daudicio-i-aprenentatge-artificial-per-a-la-composicio-i-la-interpretacio-en-viu/
  ]
 
 This activity is organized by l'ull cec with the collaboration of Consell 
 Nacional de la Cultura i les Arts, Institut de Cultura de Barcelona and Fabra 
 Coats – Fábrica de Creació.
 
 
 
 web: [ http://lullcec.org ]
 facebook: [ http://facebook.com/lullcec ]
 twitter: [ http://twitter.com/lullcec ]



Re: [music-dsp] Window presum synthesis

2012-04-20 Thread Alessandro Saccoia
Hello,
I haven't read your post in detail but 

 ps. I've seen this article
 http://archive.chipcenter.com/dsp/DSP000315F1.html often being
 mentioned as explaining it all but unfortunately the site no longer
 exists…

always check archive.org for pages that are gone...
http://web.archive.org/web/20060513150136/http://archive.chipcenter.com/dsp/DSP000315F1.html
The images haven't been archived, but you could still find it a useful 
reference.
best
alessandro


Re: [music-dsp] best board/set to do music on with minimal upstart?

2012-04-13 Thread Alessandro Saccoia
An Arduino paired with a decent ADC/DAC would be good only for lightweight DSP.
It could be used to control a workhorse DSP through I2C communication,
but I don't think there is any ready-to-go development board out there… you
would have to wire it yourself, and program both processors.
alessandro

On Apr 13, 2012, at 12:16 PM, Bram de Jong wrote:

 Maybe I should also add: dirt cheap :D
 
 On Fri, Apr 13, 2012 at 9:50 AM, STEFFAN DIEDRICHSEN
 sdiedrich...@me.com wrote:
 Line 6 has these programmable foot pedals. But the IDE is windows.
 How about an open source board project like the Arduino? A musiDuino?
 
 I found *some* references to miDuino but not musiDuino.
 MiDuino seems very focussed on Midi, not audio...
 
 - bram



Re: [music-dsp] Introducing myself

2012-02-22 Thread Alessandro Saccoia
Hello Bill,
I take your question as a chance to introduce myself.
When you sweep the input parameter you are introducing discontinuities in the
output signal, and that sounds awful.
The simplest way to see it in your code is to imagine the input variable set
at 0 (pan = 0), and then abruptly changing the input parameter to a value that
makes the pan variable jump to 1. Both channels will then generate a square
wave, which will sound bad because of the aliasing. One solution is to limit
the slew rate of the parameter by lowpass filtering it. A simple moving
average filter should do the job correctly.
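A sketch of that moving-average smoothing, one value per control sample (the window length and names are mine; 441 samples is about 10 ms at 44.1 kHz):

```python
import numpy as np

def smooth_param(targets, window=441):
    """Moving-average ('boxcar') smoothing of a per-sample control
    signal: a parameter jump is spread over `window` samples, which
    limits the slew rate and removes the audible step."""
    targets = np.asarray(targets, dtype=np.float64)
    kernel = np.ones(window) / window
    # Pre-pad with the first value so the output starts settled and
    # keeps the same length as the input.
    padded = np.concatenate([np.full(window - 1, targets[0]), targets])
    return np.convolve(padded, kernel, mode='valid')
```

A hard 0-to-1 pan jump fed through this becomes a linear ramp whose per-sample change never exceeds 1/window.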

I have been reading this list for a couple of years now, and I think it's the
best place to learn about the practical applications of musical DSP. I have
been working in the digital audio field for three years now, even though I
have been interested in computer music since my first years at university.
Now I am freelancing in this field, and I also get to play music more often.
This is really stimulating my imagination, and I hope that in the next months
I will have the time to implement some new effects or instruments.
Thank you for all the nice things that I have learnt here,

Alessandro



On Feb 22, 2012, at 10:16 PM, Bill Moorier wrote:

 Hello, just thought I would introduce myself to the list since I've
 been lurking for a little while but haven't posted anything yet.
 
 I've been programming computers since I was 9 years old, so 27 years
 now.  I got started on a ZX81 - it had a whole 1 kB of user memory!
 
 I've never really done too much music-related software though, until
 now.  In my spare time, I've started working on an engine for doing
 realtime audio DSP in javascript.  It works surprisingly well!  Not
 ready to release anything yet, but it's good fun :)
 
 The biggest thing holding me back right now is my lack of a background
 in audio.  Often it means I have no idea how to track down problems,
 like today's for example.  I built a dead-simple autopanner:
 http://abstractnonsense.com/22-feb-2012-js.html
 
 It works nicely when you keep the input parameter (from 0.0 to 1.0)
 fixed, but it sounds *awful* when you change the parameter, no matter
 how smoothly you do it.  Here's a sample output:
 http://abstractnonsense.com/22-feb-2012-swept.mp3
 
 Any pointers for figuring out things like this?  I seem to be able to
 make a lot of dead-simple versions of effects, but it all goes wrong
 when I try to beef them up!
 
 Anyway, thanks for having me on the list, I'm already learning a lot :)
 
 Cheers,
 Bill.
 --
 Bill Moorier abstractnonsense.com | @billmoorier
