[music-dsp] http://musicdsp.org/

2018-11-27 Thread Thomas Young
http://musicdsp.org/ seems to be down - does anyone know if the webmaster can be 
contacted to fix it?

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] [admin] list etiquette

2015-08-28 Thread Thomas Young
Thank you Douglas. Some of the toxic replies here have really harmed the 
friendly spirit and approachability of this mailing list.


-Original Message-
From: music-dsp [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of 
Douglas Repetto
Sent: 22 August 2015 16:22
To: A discussion list for music-related DSP
Subject: [music-dsp] [admin] list etiquette

Hi everyone, Douglas the list admin here.

I've been away and haven't really been monitoring the list recently.
It's been full of bad feelings, unpleasant interactions, and macho posturing. 
Really not much that I find interesting. I just want to reiterate a few things 
about the list.

I'm loath to make or enforce rules. But the list has been pretty much useless 
for the majority of subscribers for the last year or so. I know this because 
many of them have written to complain. It's certainly not useful to me.

I've also had several reports of people trying to unsubscribe other people and 
other childish behavior. Come on.

So:

* Please limit yourself to two well-considered posts per day. Take it off list 
if you need more than that.
* No personal attacks. I'm just going to unsub people who are insulting. Sorry.
* Please stop making macho comments about how "first year EE students know this" 
and blahblahblah. This list is for anyone with an interest in sound and dsp. No 
topic is too basic, and complete beginners are welcome.

I will happily unsubscribe people who find they can't consistently follow these 
guidelines.

The current list climate is hostile and self-aggrandizing. No beginner, gentle 
coder, or friendly hobbyist is going to post to such a list. If you can't help 
make the list friendly to everyone, please leave. This isn't the list for you.


douglas

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] [admin] another HTML test

2013-11-11 Thread Thomas Young
Thanks Doug :)



-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of douglas repetto
Sent: 11 November 2013 17:19
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] [admin] another HTML test

Okay, it seems to be working. You can now post to the list in HTML but your 
message will be converted to plain text. This seems like a good first step to 
accommodate people who need plain text and people for whom sending plain text 
is a pain. I tried to respond to some message from my phone while I was out of 
town and found that it's impossible to send plain text email that way!

best,

douglas


On 11/11/13 12:16 PM, douglas repetto wrote:

 Hi Dave,

 I have the list set to convert HTML mail to plain text. So it's a good
 sign that the email went through. I'm composing this in red text and
 changing fonts around. That should all disappear when this message
 arrives...

 Here's a list:

 1. * taco
 2. * truck

 Wheee!

--

... http://artbots.org 
.douglas.irving http://dorkbot.org 
.. http://music.columbia.edu/cmc/music-dsp

...repetto. http://music.columbia.edu/organism

... http://music.columbia.edu/~douglas





--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] family of soft clipping functions.

2013-10-29 Thread Thomas Young
This reminds me of experimenting with polynomials as an amplitude enveloping 
function for a soft synthesiser. There was something rather alluring about the 
idea of a one-line-of-code amplitude envelope - unfortunately it made creating 
the envelopes pretty tiresome, and when I thought about the performance I 
realised it was probably slower to do that maths than the few conditionals you 
would use for a standard piecewise envelope. So basically a rubbish idea unless 
you have some strange need to have your amplitude envelope be a single line of 
code :I


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 29 October 2013 01:56
To: music-dsp@music.columbia.edu
Subject: [music-dsp] family of soft clipping functions.


at the last AES in NYC, i was talking with some other folks (that likely hang 
out here, too) about this family of soft clipping curves made outa polynomials 
(so you have some idea of how high in frequency any generated images will 
appear).

these are odd-order, odd-symmetry polynomials that are monotonic from -1 < x < 
+1 and have as many continuous derivatives as possible at +/- 1 where these 
curves might be spliced to constant-valued rails.

the whole idea is to integrate the even polynomial  (1 - x^2)^N

 x
  g(x)  =  integral{ (1 - v^2)^N dv}
 0

you figger this out using binomial expansion and integrating each power term.

normalize g(x) with whatever g(1) is so that the curve is g(x)/g(1) and splice 
that to two constant functions for the rails

           { -1          x <= -1
           {
  f(x)  =  { g(x)/g(1)   -1 <= x <= +1
           {
           { +1          +1 <= x


you can hard limit (at +/- 1) before passing through this soft clipper and it 
still works fine.  but it has some gain in the linear region, which is 1/g(1).
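
As an editorial aside (a sketch added for this archive, not part of the original 
post): for N = 1 this family works out to f(x) = (3x - x^3)/2. A minimal C++ 
version with the hard limit folded in, where the function name is made up for 
the example:

#include <algorithm>

// N = 1 soft clipper: f(x) = g(x)/g(1) with g(x) = x - x^3/3 and g(1) = 2/3,
// spliced to the +/-1 rails outside -1 <= x <= +1.
float softclip_n1(float x)
{
    x = std::min(std::max(x, -1.0f), 1.0f);   // hard limit first (see above)
    return 0.5f * x * (3.0f - x * x);         // (3x - x^3)/2
}

For N = 1 the gain in the linear region is 1/g(1) = 3/2.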


if you want to add some even harmonic distortion to this, add a little bit of

 (1 - x^2)^M

to f(x)  for |x| < 1 and it's still smooth everywhere, but there is a little DC 
added (which has to be the case for even-symmetry distortion). 
  M does not have to be the same as N and i wouldn't expect it to be.



you can think of f(x) as a smooth approximation to the sign or signum 
function

   sgn(x)  =   lim   f(a*x)
             a -> +inf

or

   sgn(x)  =   lim   f(x)
             N -> +inf

which motivates using this as a smooth approximation of the sgn(x) 
function as an alternative to
   (2/pi)*arctan(a*x) or tanh(a*x).  from the sgn(x) function, you can 
create smooth versions of the unit step function and use that for 
splicing.  as many derivatives are continuous in the splice as possible. 
  and it satisfies conditions of symmetry and complementarity that are 
useful in our line of work.

  u(x)  =  1/2 * (1 + sgn(x))  =approx   1/2*(1 + f(x))

you can run a raised cosine (Hann) through this and get a more flattened 
Hann.  in some old stuff i wrote, i dubbed this window:


  w(x)  =  1/2  +  (9/16)*cos(pi*x) - (1/16)*cos(3*pi*x)


as the Flattened Hann Window but long ago Carla Scaletti called it the 
Bristow-Johnson window in some Kyma manual.  i don't think it deserves 
that label (i've seen that function in some wavelet/filterbank lit since 
for half-band filters).  you get that window by running a simple Hann 
through the biased waveshaper:

1/2 * ( 1 + f(2x-1) )

with N=1.  you will get an even more pronounced effect (of smoothly 
flattening the Hann) with higher N.
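
(Another editorial sketch, not from the original post: a small C++ version of 
that closed-form flattened Hann, with x running from -1 to +1 across the window. 
The function name is made up and it assumes length >= 2.)

#include <vector>
#include <cmath>

std::vector<double> flattened_hann(int length)
{
    const double pi = 3.14159265358979323846;
    std::vector<double> w(length);
    for (int i = 0; i < length; ++i) {
        double x = -1.0 + 2.0 * i / (length - 1);           // x in [-1, +1]
        w[i] = 0.5 + (9.0/16.0) * std::cos(pi * x)
                   - (1.0/16.0) * std::cos(3.0 * pi * x);   // w(+/-1) = 0, w(0) = 1
    }
    return w;
}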

below is a matlab file that demonstrates this as a soft clipping function.

BTW, Olli N, this can be integrated with that splicing theory thing we 
were talking about approximately a year ago.  it would define the 
odd-symmetry component.  we should do an AES paper about this.  i 
think now there is nearly enough meat to make a decent paper.  before 
i didn't think so.




  FILE:  softclip.m

line_color = ['g' 'c' 'b' 'm' 'r'];

figure;
hold on;

x = linspace(-2, 2, 2^18 + 1);

x = max(x, -1); % hard clip for |x| > 1
x = min(x, +1);

for N = 0:10

n = linspace(0, N, N+1);

a = (-1).^n ./ (factorial(N-n) .* factorial(n) .* (2*n + 1));

a = a/sum(a);

y = x .* polyval(fliplr(a), x.^2);  % stupid MATLAB puts the coefs in the wrong order

plot(x, y, line_color(mod(N,length(line_color))+1) );

end

hold off;





have fun with it, if you're so inclined.

L8r,

-- 

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp

Re: [music-dsp] Analog 24dB lowpass filter emulation perfected. (Roland, Oberheim etc)

2013-10-28 Thread Thomas Young
Interesting, thanks

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Ove Karlsen
Sent: 28 October 2013 11:57
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Analog 24dB lowpass filter emulation perfected. (Roland, 
Oberheim etc)

Peace Be With You.

I have perfected the analog 24dB lowpass filter in digital form.

It is a generalized and fast implementation without obscurities. It takes from 
analog simply what is good, and otherwise is a perfect implementation one would 
expect from a digital filter.

I wrote about it here: http://ovekarlsen.com/Blog/abdullah-filter/

Those interested should read it, and there is a sound example.

Peace Be With You.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] My latest computer DSP signal pathfor audioimprovements

2013-05-03 Thread Thomas Young
 apparently that is more emotional and personal for some people to be able to 
 neutrally communicate about

This from the man who wants to recapture the sound of records from his lost 
youth :P

I think Sampo had a lot of good points personally. I would not dismiss the 
value of simplification, your existing audio setup is bafflingly complicated.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 03 May 2013 00:12
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] My latest computer DSP signal pathfor audioimprovements

Sampo Syreeni wrote:
 On 2013-05-02, Rob Belcham wrote:

 Agreed. There is a lot more low end in the processed but the loss of 
 the high frequencies is distracting.
 ...


Well, I wanted to make clear that modern mixes on consumer materials are often 
messed up, and discuss that. If your monitoring makes all the CDs and well 
known blurays and such sound fine to your ears, fine.

I'm a heavily enough edified univ. EE with serious theoretical physics knowledge, 
and well developed practical skills, so I don't feel I need new theories, when 
the 1st and 2nd year Electrical Engineering university courses offer a complete, 
mathematically closed, and theoretically accurate theory for electrical 
network theory, signal processing, information theory and such. I mean I know 
what an FFT will do with a normal electrical signal transient, what poles and 
zeros are and do, and what the difference is between a control loop and a 
filter+delay, and that such systems have certain mathematical properties.

I don't know about you, but I had absolutely great records and radio to listen 
to as a kid and teenager, and I can't stand the garbage that nowadays is made 
of the materials of good, serious musicians (which I am too), so that's what I 
wanted to discuss, and to an extent offer solutions for, but apparently that is 
more emotional and personal for some people to be able to neutrally communicate 
about.

Your mathematically oriented language doesn't do justice to the basis of EE, 
which is much harder than you appear to think; those different signal paths can 
be followed, I think, by audio engineers with some experience (and have been, to an 
extent), and there is such a thing as loudness perception and studio processing 
that can work with it. So apart from some new boys and girls wanting to use 
music for power instead of enjoyment, there's a whole world of studio 
processing, and some of that I'm quite accurately aware of, so maybe that's 
interesting for some people to hear about.

I suppose if I'd put a schematic of a digital TV before you, you might be 
tempted to think it's a mess, but hey...

T.V.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] basic trouble with signed and unsigned types

2013-05-02 Thread Thomas Young
As someone pointed out, what you really want here is C++ templates; those 
compile out completely and can be specialised for different types - which you 
need for speed.
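
(An editorial sketch, not from the original thread, of the template approach: a 
type-generic right shift with no sign extension, done by shifting in the 
unsigned counterpart of whatever integer type comes in. The function name is 
made up, and it assumes bits is less than the width of T.)

#include <type_traits>
#include <cstdint>

template <typename T>
T logical_shift_right(T value, unsigned bits)
{
    using U = typename std::make_unsigned<T>::type;   // same width, no sign
    return static_cast<T>(static_cast<U>(value) >> bits);
}

// e.g. logical_shift_right<int16_t>(int16_t(-2), 1) gives 0x7FFF, not -1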

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Sampo Syreeni
Sent: 01 May 2013 23:08
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] basic trouble with signed and unsigned types

On 2013-05-01, Bjorn Roche wrote:

 Not sure I completely understand what you are after, but I think most 
 audio folks don't use types like int, long and short, but rather types 
 like int32_t from types.h (or sys/types.h). There are various 
 guarantees made about the sizes of those types that you can look up.

Correct, and for now I'm doing something similar by calculating all of my 
widths, maximum values and whatnot beforehand into #defines. In the end I'd 
like an implementation that is fully parametrizable, though, and with as few 
adjustable parameters as at all possible. That's because while the idea I'm 
trying to put into code is primarily useful wrt audio, the basic framework of 
dithering I'm working within admits plenty of environments beyond *just* that. 
So, ideally, I'd want to make the code just work, even within certain highly 
unusual environments.

(Say, you receive a single bit integer type from the outside which you have to 
use to process your samples; there's way to go towards that but it might 
eventually happen. And, by the way, if you've ever done anything with DSD/SACD, 
treating the 1-bit channel as a modulo-linear one actually seems to make it too 
perfectly ditherable, breaking the Lipshitz and Vanderkooy critique. Thus, 
audio relevance even here. :)

What I'm after is ways of reimplementing at least right shifts for unsigned 
quantities within standard C99, so that my new operator is fully portable and 
guaranteedly implements right shifts without sign extension, regardless of what 
integer type you use them on, and without any parametric polymorphism or 
reference to built-in constants describing your types at the meta level.

There are bound to be other problems I have to worry about in the future, like 
how to deal with the possibility of one's complement arithmetic where I'm now 
taking parities. And whatnot. But at least the right shift would help me a lot 
right now.

 Also, I assume you've given C++ templates and operator overloading 
 consideration.

Yes. Immediately after I wanted to die. ;)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] My latest computer DSP signal path for audioimprovements

2013-05-01 Thread Thomas Young
There is quite an audible loss of high end, it's especially noticeable on the 
'double processed' example which sounds very low passed. Is that intentional?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 01 May 2013 17:57
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] My latest computer DSP signal path for 
audioimprovements

Rob Belcham wrote:
 Hi Theo,

 Some wet / dry example audio would be interesting to hear.

 Cheers
 Rob
 ..

OK, that should be reasonable.

For this limited club (little over a hundred world wide ?) of people I suppose 
there's no real problem with a few very short (7 secs) examples from varied 
musical sources:

http://www.theover.org/Musicdspexas/dsptst1.mp3

BEWARE: it's a nice few examples, but it isn't produced very nicely because I 
forgot to switch off a 10 dB boost + perfect limiter (1.2 sec) in the recording...

I didn't want to start the whole startup and mic mixing again, so beware, it 
sounds a bit radio-compressed, but the idea is still ok-ish.

For people who are interested in playing/learning/??? with my actual signal 
path, I by now have a complete set of tcl scripts to start the particular chain 
settings up, completely automatically, with settings from the last example, and 
all the jack-rack and other files, which I can make downloadable per 
individual. You'll need a strong Linux machine with a 192kHz soundcard and the 
ladspa plugins, jamin and jack installed.

Greetings

  Theo V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] meassuring the difference

2013-03-07 Thread Thomas Young
Your mean square error procedure is slightly incorrect. You should take the 
final signals from both processes, say A[1..n] and B[1..n], subtract them to 
get your error signal E[1..n], then the mean square error is the sum of the 
squared error over n.

Sum( E[1..n]^2 ) / n

This (MSE) is a statistical approach though and isn't necessarily a great way 
of measuring perceived acoustical differences.

It depends on the nature of your signal but you may want to check the error in 
the frequency domain (weighted to specific frequency band if appropriate) 
rather than the time domain.
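
(An editorial sketch, not from the original reply, of the computation described 
above plus the 10*log10 dB conversion discussed in the quoted message below; the 
function names are made up:)

#include <vector>
#include <cstddef>
#include <cmath>

// mean squared error Sum( E[1..n]^2 ) / n with E = A - B;
// assumes a and b have the same, non-zero length
double mean_squared_error(const std::vector<double>& a, const std::vector<double>& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double e = a[i] - b[i];
        sum += e * e;
    }
    return sum / static_cast<double>(a.size());
}

double mse_in_dB(double mse)
{
    return 10.0 * std::log10(mse);   // 10, not 20: the error is already squared
}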

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of volker böhm
Sent: 07 March 2013 15:10
To: A discussion list for music-related DSP
Subject: [music-dsp] meassuring the difference

dear all,

i'm trying to measure the difference between two equivalent but not identical 
processes.
right now i'm feeding some test signals to both algorithms at the same time and 
subtract the output signals.

now i'm looking for something to quantify the error signal.
from statistics i know there is something like the mean squared error.
so i'm squaring the error signal and take the (running) average.

mostly i'm getting some numbers very close to zero and a gut feeling tells me i 
want to see those on a dB scale.
so i'm taking the logarithm and multiply by 10, as i have already squared the 
values before.
(as far as i can see, this is equivalent to an RMS measurement).

is there a correct/better/preferred way of doing this?

next to a listening test, in the end i want to have a simple measure of the 
difference of the two processes which is close to our perception of the 
difference. does that make sense?

thanks for any comments,
volker.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-07 Thread Thomas Young
Ah, I wondered what you meant about those parentheses!

The better efficiency comes from avoiding all the divisions (I don't normalise 
out a0 in my implementation) - it's a pretty trivial performance difference in 
reality, but the coefficient equations are a little bit neater as well - fewer 
of those confusing parentheses ;)
 
-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 04 January 2013 20:59
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

On 1/4/13 1:29 PM, Thomas Young wrote:
 Er.. yes sorry I transcribed it wrong, well spotted

 a1: ( - 2 * cos(w0) ) * A^2


ooops!  i'm embarrassed!  i was thinking it was -2 + cos(), sorry!

can't believe it.  chalk it up to being a year older and losing one year's 
portion of brain cells.

 So yea it's the same, I just rearranged it a bit for efficiency

dunno what's more efficient before normalizing out a0, but if it's after, you 
change only the numerator feedforward coefs and you would multiply them by the 
reciprocal of the linear peak gain, 1/A^2.

sorry, that parenth thing must be early onset alzheimer's.

bestest,

r b-j

 -Original Message-
 From: music-dsp-boun...@music.columbia.edu 
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
 bristow-johnson
 Sent: 04 January 2013 18:25
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
 response

 On 1/4/13 1:11 PM, Thomas Young wrote:
 someone tell me what it was about
 In a nutshell...

 Q: What is the equation for the coefficients of a peaking EQ biquad filter 
 with constant 0 dB peak gain?

 A: This (using cookbook variables):

 b0: 1 + alpha * A
 b1: -2 * cos(w0)
 b2: 1 - alpha * A
 a0: A^2 + alpha * A
 a1: - 2 * cos(w0) * A^2   <--- are you sure you don't need parenths here?
 a2: A^2 - alpha * A

 dragged over a lot of emails ;)
 yeah, i guess that's the same as

 b0: (1 + alpha * A)/A^2
 b1: (-2 * cos(w0))/A^2
 b2: (1 - alpha * A)/A^2
 a0: 1 + alpha / A
 a1: -2 * cos(w0)
 a2: 1 - alpha / A

 if you put in the parenths where you should, i think these are the same.

 r b0j

 -Original Message-
 From: music-dsp-boun...@music.columbia.edu
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
 bristow-johnson
 Sent: 04 January 2013 17:58
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
 response



 looks like i came here late.  someone tell me what it was about.
 admittedly, i didn't completely understand from a cursory reading.

 the only difference between the two BPFs in the cookbook is that of a 
 constant gain factor.  in one the peak of the BPF is always at zero dB.
 in the other, if you were to project the asymptotes of the skirt of the 
 freq response (you know, the +6 dB/oct line and the -6 dB/oct line), they 
 will intersect at the point that is 0 dB and at the resonant frequency.  
 otherwise same shape, same filter.

 the peaking EQ is a BPF with gain A^2 - 1 with the output added to a wire.  
 and, only on the peaking EQ, the definition of Q is fudged so that it 
 continues to be related to BW in the same manner and so the cut response 
 exactly undoes a boost response for the same dB, same f0, same Q.  nothing 
 more to it than that.  no Orfanidis subtlety.

 if the resonant frequency is much less than Nyquist, then there is even 
 symmetry of magnitude about f0 (on a log(f) scale) for BPF, notch, APF, and 
 peaking EQ.  for the two shelves, it's odd symmetry about f0 (if you adjust 
 for half of the dB shelf gain).  the only difference between the high shelf 
 and low shelf is a gain constant and flipping the rest of the transfer 
 function upside down.  this is the case no matter what the dB boost is or 
 the Q (or S).  if f0 approaches Fs/2, then that even or odd symmetry gets 
 warped from the BLT and ain't so symmetrical anymore.

 nothing else comes to mind.




-- 

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
Hi Nigel, which analogue prototype are you referring to when you suggest 
multiplying denominator coefficients by the gain factor, the peaking one?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 04 January 2013 09:26
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

 On 4/01/2013 4:34 AM, Thomas Young wrote:
 However I was hoping to avoid scaling the output since if I have to 
 do that then I might as well just change the wet/dry mix with the 
 original signal for essentially the same effect and less messing 
 about.

I read quickly this morning and missed this...with something like a lowpass, 
you do get irregularities, but with something like a peaking filter it stays 
pretty smooth when summing with the original signal (I guess because the phase 
change is smoother, with the second order split up between two halves). So that 
part isn't a problem, BUT...

Consider a wet/dry mix...say you have a 6 dB peak that you want to move down to 
0 dB (skirts down to -6 dB). OK, at 0% wet you have a flat line at 0 dB. At 
100% wet you have your 6 dB peak again (skirt at 0 dB). At 50% wet, you have 
about a 3 dB peak, skirt still at 0 dB. There is no setting that will give you 
anything but the skirt at 0 dB.

Again, as Ross said earlier, you could have just an output gain - set it to 0.5 
(-6 dB), and now you have your skirt at -6 dB, peak at 0 dB. But a wet/dry mix 
knob is not going to do it.

Ross said:
 There is only a difference of scale factors between your constraints and the 
 RBJ peaking filter constraints so you should be able to use them with minor 
 modifications (as Nigel suggests, although I didn't take the time to review 
 his result).
 
 Assuming that you want the gain at DC and nyquist to be equal to your 
 stopband gain then this is pretty much equivalent to the RBJ coefficient 
 formulas except that Robert computed them under the requirement of unity gain 
 at DC and Nyquist, and some specified gain at cf. You want unity gain at cf 
 and specified gain at DC and Nyquist. This seems to me to just be a direct 
 reinterpretation of the gain values. You should be able to propagate the 
 needed gain values through Robert's formulas.

Actually, it's more than a reinterpretation of the gain values (note that no 
matter what gain you give it, you won't get anything like what Thomas is 
after). The poles are peaking the filter and the zeros are holding down the 
skirt (at 0 dB in the unmodified filter); obviously the transfer function is 
arranged to keep that relationship at any gain setting. So, you need to change 
it so that the gain is controlling something else-changing the relationship of 
the motion between the poles and zeros (the mod I gave does that).


On Jan 3, 2013, at 10:34 PM, Ross Bencina rossb-li...@audiomulch.com wrote:

 Hi Thomas,
 
 Replying to both of your messages at once...
 
 On 4/01/2013 4:34 AM, Thomas Young wrote:
 However I was hoping to avoid scaling the output since if I have to 
 do that then I might as well just change the wet/dry mix with the 
 original signal for essentially the same effect and less messing 
 about.
 
 Someone else might correct me on this, but I'm not sure that will get you the 
 same effect. Your proposal seems to be based on the assumption that the 
 filter is phase linear and 0 delay (ie that the phases all line up between 
 input and filtered version). That's not the case.
 
 In reality you'd be mixing the phase-warped and delayed (filtered) signal 
 with the original-phase signal. I couldn't tell you what the frequency 
 response would look like, but probably not as good as just scaling the 
 peaking filter output.
 
 On 4/01/2013 6:03 AM, Thomas Young wrote:
  Additional optional mumblings:
 
  I think really there are two 'correct' solutions to manipulating 
  only the coefficients to my ends (that is, generation of 
  coefficients which produce filters interpolating from bandpass to flat):
 
  The first is to go from pole/zero to transfer function, basically as 
  you (Nigel) described in your first message - stick the zeros in the 
  centre, poles near the edge of the unit circle and reduce their 
  radii
  - doing the maths to convert these into the appropriate biquad 
  coefficients. This isn't really feasible for me to do in realtime 
  though. I was trying to do a sort of tricksy workaround by lerping 
  from one set of coefficients to another but on reflection I don't 
  think there is any mathematical correctness there.
 
  The second is to have an analogue prototype which somehow includes 
  skirt gain and take the bilinear transform to get the equations for 
  the coefficients. I'm not really very good with the s domain either 
  so I actually wouldn't know how to go about this, but it's what I 
  was originally thinking of.
 
 In the end you're going to have a set

Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
Aha, success! Multiplying denominator coefficients of the peaking filter by A^2 
does indeed have the desired effect.

Thank you very much for the help

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thomas Young
Sent: 04 January 2013 10:33
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

Hi Nigel, which analogue prototype are you referring to when you suggest 
multiplying denominator coefficients by the gain factor, the peaking one?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 04 January 2013 09:26
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

 On 4/01/2013 4:34 AM, Thomas Young wrote:
 However I was hoping to avoid scaling the output since if I have to 
 do that then I might as well just change the wet/dry mix with the 
 original signal for essentially the same effect and less messing 
 about.

I read quickly this morning and missed this...with something like a lowpass, 
you do get irregularities, but with something like a peaking filter it stays 
pretty smooth when summing with the original signal (I guess because the phase 
change is smoother, with the second order split up between two halves). So that 
part isn't a problem, BUT...

Consider a wet/dry mix...say you have a 6 dB peak that you want to move down to 
0 dB (skirts down to -6 dB). OK, at 0% wet you have a flat line at 0 dB. At 
100% wet you have your 6 dB peak again (skirt at 0 dB). At 50% wet, you have 
about a 3 dB peak, skirt still at 0 dB. There is no setting that will give you 
anything but the skirt at 0 dB.

Again, as Ross said earlier, you could have just an output gain - set it to 0.5 
(-6 dB), and now you have your skirt at -6 dB, peak at 0 dB. But a wet/dry mix 
knob is not going to do it.

Ross said:
 There is only a difference of scale factors between your constraints and the 
 RBJ peaking filter constraints so you should be able to use them with minor 
 modifications (as Nigel suggests, although I didn't take the time to review 
 his result).
 
 Assuming that you want the gain at DC and nyquist to be equal to your 
 stopband gain then this is pretty much equivalent to the RBJ coefficient 
 formulas except that Robert computed them under the requirement of unity gain 
 at DC and Nyquist, and some specified gain at cf. You want unity gain at cf 
 and specified gain at DC and Nyquist. This seems to me to just be a direct 
 reinterpretation of the gain values. You should be able to propagate the 
 needed gain values through Robert's formulas.

Actually, it's more than a reinterpretation of the gain values (note that no 
matter what gain you give it, you won't get anything like what Thomas is 
after). The poles are peaking the filter and the zeros are holding down the 
skirt (at 0 dB in the unmodified filter); obviously the transfer function is 
arranged to keep that relationship at any gain setting. So, you need to change 
it so that the gain is controlling something else-changing the relationship of 
the motion between the poles and zeros (the mod I gave does that).


On Jan 3, 2013, at 10:34 PM, Ross Bencina rossb-li...@audiomulch.com wrote:

 Hi Thomas,
 
 Replying to both of your messages at once...
 
 On 4/01/2013 4:34 AM, Thomas Young wrote:
 However I was hoping to avoid scaling the output since if I have to 
 do that then I might as well just change the wet/dry mix with the 
 original signal for essentially the same effect and less messing 
 about.
 
 Someone else might correct me on this, but I'm not sure that will get you the 
 same effect. Your proposal seems to be based on the assumption that the 
 filter is phase linear and 0 delay (ie that the phases all line up between 
 input and filtered version). That's not the case.
 
 In reality you'd be mixing the phase-warped and delayed (filtered) signal 
 with the original-phase signal. I couldn't tell you what the frequency 
 response would look like, but probably not as good as just scaling the 
 peaking filter output.
 
 On 4/01/2013 6:03 AM, Thomas Young wrote:
  Additional optional mumblings:
 
  I think really there are two 'correct' solutions to manipulating 
  only the coefficients to my ends (that is, generation of 
  coefficients which produce filters interpolating from bandpass to flat):
 
  The first is to go from pole/zero to transfer function, basically as 
  you (Nigel) described in your first message - stick the zeros in the 
  centre, poles near the edge of the unit circle and reduce their 
  radii
  - doing the maths to convert these into the appropriate biquad 
  coefficients. This isn't really feasible for me to do in realtime 
  though. I was trying to do a sort of tricksy workaround by lerping 
  from one set of coefficients

Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
 someone tell me what it was about

In a nutshell...

Q: What is the equation for the coefficients of a peaking EQ biquad filter with 
constant 0 dB peak gain?

A: This (using cookbook variables):

b0: 1 + alpha * A
b1: -2 * cos(w0)
b2: 1 - alpha * A
a0: A^2 + alpha * A
a1: - 2 * cos(w0) * A^2
a2: A^2 - alpha * A

dragged over a lot of emails ;)
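
(An editorial sketch, not from the original emails, filling in the cookbook 
variables behind the list above; the struct and function names are made up:)

#include <cmath>

struct BiquadCoeffs { double b0, b1, b2, a0, a1, a2; };

// peaking EQ with constant 0 dB peak gain, per the coefficients listed above;
// cookbook variables: A = 10^(dBgain/40), w0 = 2*pi*f0/Fs, alpha = sin(w0)/(2*Q)
BiquadCoeffs peaking_eq_0dB_peak(double Fs, double f0, double Q, double dBgain)
{
    const double pi    = 3.14159265358979323846;
    const double A     = std::pow(10.0, dBgain / 40.0);
    const double w0    = 2.0 * pi * f0 / Fs;
    const double alpha = std::sin(w0) / (2.0 * Q);
    const double c     = std::cos(w0);

    BiquadCoeffs k;
    k.b0 = 1.0 + alpha * A;
    k.b1 = -2.0 * c;
    k.b2 = 1.0 - alpha * A;
    k.a0 = A * A + alpha * A;
    k.a1 = -2.0 * c * A * A;
    k.a2 = A * A - alpha * A;
    return k;
}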

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 04 January 2013 17:58
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response



looks like i came here late.  someone tell me what it was about.  
admittedly, i didn't completely understand from a cursory reading.

the only difference between the two BPFs in the cookbook is that of a constant 
gain factor.  in one the peak of the BPF is always at zero dB.  
in the other, if you were to project the asymptotes of the skirt of the freq 
response (you know, the +6 dB/oct line and the -6 dB/oct line), they will 
intersect at the point that is 0 dB and at the resonant frequency.  otherwise 
same shape, same filter.

the peaking EQ is a BPF with gain A^2 - 1 with the output added to a wire.  
and, only on the peaking EQ, the definition of Q is fudged so that it continues 
to be related to BW in the same manner and so the cut response exactly undoes a 
boost response for the same dB, same f0, same Q.  nothing more to it than that. 
 no Orfanidis subtlety.

if the resonant frequency is much less than Nyquist, then there is even 
symmetry of magnitude about f0 (on a log(f) scale) for BPF, notch, APF, and 
peaking EQ.  for the two shelves, it's odd symmetry about f0 (if you adjust for 
half of the dB shelf gain).  the only difference between the high shelf and low 
shelf is a gain constant and flipping the rest of the transfer function upside 
down.  this is the case no matter what the dB boost is or the Q (or S).  if 
f0 approaches Fs/2, then that even or odd symmetry gets warped from the BLT and 
ain't so symmetrical anymore.

nothing else comes to mind.

-- 

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





On 1/4/13 11:23 AM, Nigel Redmon wrote:
 Great!

 On Jan 4, 2013, at 2:40 AM, Thomas Youngthomas.yo...@rebellion.co.uk  wrote:

 Aha, success! Multiplying denominator coefficients of the peaking filter by 
 A^2 does indeed have the desired effect.

 Thank you very much for the help

 -Original Message-
 From: music-dsp-boun...@music.columbia.edu 
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thomas 
 Young
 Sent: 04 January 2013 10:33
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
 response

 Hi Nigel, which analogue prototype are you referring to when you suggest 
 multiplying denominator coefficients by the gain factor, the peaking one?

 -Original Message-
 From: music-dsp-boun...@music.columbia.edu 
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel 
 Redmon
 Sent: 04 January 2013 09:26
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
 response

 On 4/01/2013 4:34 AM, Thomas Young wrote:
 However I was hoping to avoid scaling the output since if I have to 
 do that then I might as well just change the wet/dry mix with the 
 original signal for essentially the same effect and less messing 
 about.
 I read quickly this morning and missed this...with something like a lowpass, 
 you do get irregularities, but with something like a peaking filter it stays 
 pretty smooth when summing with the original signal (I guess because the 
 phase change is smoother, with the second order split up between two 
 halves). So that part isn't a problem, BUT...

 Consider a wet/dry mix...say you have a 6 dB peak that you want to move down 
 to 0 dB (skirts down to -6 dB). OK, at 0% wet you have a flat line at 0 dB. 
 At 100% wet you have your 6 dB peak again (skirt at 0 dB). At 50% wet, you 
 have about a 3 dB peak, skirt still at 0 dB. There is no setting that will 
 give you anything but the skirt at 0 dB. At 50% wet, you 

 Again, as Ross said earlier, you could have just an output gain-set it to 
 0.5 (-6 dB), and now you have your skirt at - 6 dB, peak at 0 dB. But a 
 wet/dry mix knob is not going to do it.

 Ross said:
 There is only a difference of scale factors between your constraints and 
 the RBJ peaking filter constraints so you should be able to use them with 
 minor modifications (as Nigel suggests, although I didn't take the time to 
 review his result).

 Assuming that you want the gain at DC and nyquist to be equal to your 
 stopband gain then this is pretty much equivalent to the RBJ coefficient 
 formulas except that Robert computed them under the requirement of unity 
 gain at DC and Nyquist, and some specified gain at cf. You want unity gain

Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
Er.. yes sorry I transcribed it wrong, well spotted

a1: ( - 2 * cos(w0) ) * A^2

So yea it's the same, I just rearranged it a bit for efficiency

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 04 January 2013 18:25
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

On 1/4/13 1:11 PM, Thomas Young wrote:
 someone tell me what it was about
 In a nutshell...

 Q: What is the equation for the coefficients of a peaking EQ biquad filter 
 with constant 0 dB peak gain?

 A: This (using cookbook variables):

 b0: 1 + alpha * A
 b1: -2 * cos(w0)
 b2: 1 - alpha * A
 a0: A^2 + alpha * A
 a1: - 2 * cos(w0) * A^2   <--- are you sure you don't need parenths here?
 a2: A^2 - alpha * A

 dragged over a lot of emails ;)

yeah, i guess that's the same as

b0: (1 + alpha * A)/A^2
b1: (-2 * cos(w0))/A^2
b2: (1 - alpha * A)/A^2
a0: 1 + alpha / A
a1: -2 * cos(w0)
a2: 1 - alpha / A

if you put in the parenths where you should, i think these are the same.

r b0j

 -Original Message-
 From: music-dsp-boun...@music.columbia.edu 
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
 bristow-johnson
 Sent: 04 January 2013 17:58
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
 response



 looks like i came here late.  someone tell me what it was about.
 admittedly, i didn't completely understand from a cursory reading.

 the only difference between the two BPFs in the cookbook is that of a 
 constant gain factor.  in one the peak of the BPF is always at zero dB.
 in the other, if you were to project the asymptotes of the skirt of the 
 freq response (you know, the +6 dB/oct line and the -6 dB/oct line), they 
 will intersect at the point that is 0 dB and at the resonant frequency.  
 otherwise same shape, same filter.

 the peaking EQ is a BPF with gain A^2 - 1 with the output added to a wire.  
 and, only on the peaking EQ, the definition of Q is fudged so that it 
 continues to be related to BW in the same manner and so the cut response 
 exactly undoes a boost response for the same dB, same f0, same Q.  nothing 
 more to it than that.  no Orfanidis subtlety.

 if the resonant frequency is much less than Nyquist, then there is even 
 symmetry of magnitude about f0 (on a log(f) scale) for BPF, notch, APF, and 
 peaking EQ.  for the two shelves, it's odd symmetry about f0 (if you adjust 
 for half of the dB shelf gain).  the only difference between the high shelf 
 and low shelf is a gain constant and flipping the rest of the transfer 
 function upside down.  this is the case no matter what the dB boost is or the 
 Q (or S).  if f0 approaches Fs/2, then that even or odd symmetry gets 
 warped from the BLT and ain't so symmetrical anymore.

 nothing else comes to mind.



-- 

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Lerping Biquad coefficients to a flat response

2013-01-03 Thread Thomas Young
One for RBJ if he's back from his hols :) or anyone kind enough to answer of 
course...

Is there a way to modify the bandpass coefficient equations in the cookbook 
(the one from the analogue prototype H(s) = s / (s^2 + s/Q + 1)) such that the 
gain of the stopband may be specified? I want to be able 

My thought was to interpolate the coefficients towards a flat response, i.e.

b0=1 b1=0 b2=0
a0=1 a1=0 a2=0

I tried some plots and this does basically work except that the characteristics 
of the filter are affected (namely the peak gain exceeds 0dB).

Thomas Young
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-03 Thread Thomas Young
Thanks Nigel - I have just been playing around with the pole/zero plotter (very 
helpful app for visualising the problem) and thinking about it. You guys are 
probably right, the simplest approach is just to scale the output and use the 
peaking filter.

Additional optional mumblings:

I think really there are two 'correct' solutions to manipulating only the 
coefficients to my ends (that is, generation of coefficients which produce 
filters interpolating from bandpass to flat):

The first is to go from pole/zero to transfer function, basically as you 
(Nigel) described in your first message - stick the zeros in the centre, poles 
near the edge of the unit circle and reduce their radii - doing the maths to 
convert these into the appropriate biquad coefficients. This isn't really 
feasible for me to do in realtime though. I was trying to do a sort of tricksy 
workaround by lerping from one set of coefficients to another but on reflection 
I don't think there is any mathematical correctness there.

The second is to have an analogue prototype which somehow includes skirt gain 
and take the bilinear transform to get the equations for the coefficients. I'm 
not really very good with the s domain either so I actually wouldn't know how 
to go about this, but it's what I was originally thinking of.

Thanks for the help

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 03 January 2013 18:48
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

Thomas - it's a matter of manipulating the A and Q relationships in the numerator 
and denominator of the peaking EQ analog prototypes. I'm not as good at 
thinking in the s domain as the z, so I'd have to plot it out and think - too 
busy right now, though it's pretty trivial. But just doing the gain adjustment 
to the existing peaking EQ, as Ross suggested, is trivial. Not much reason to 
go through the fuss unless you're concerned about adding a single multiply. (To 
add to the confusion, my peaking implementation is different for gain and 
boost, so that the EQ remains symmetrical, a la Zolzer).


On Jan 3, 2013, at 9:34 AM, Thomas Young thomas.yo...@rebellion.co.uk wrote:

 I'm pretty sure that the BLT bandpass ends up with zeros at DC and 
 nyquist
 
 Yes I think this is essentially my problem, there are no stop bands per-se 
 just zeros which I was basically trying to lerp away - which I guess isn't 
 really the correct approach.
 
 The solution you are proposing would work I believe; along the same lines 
 there is a different bandpass filter in the RBJCB which has a constant stop 
 band gain (or 'skirt gain' as he calls it) and peak gain for the passband - 
 so a similar technique would work there by scaling the output.
 
 However I was hoping to avoid scaling the output since if I have to do that 
 then I might as well just change the wet/dry mix with the original signal for 
 essentially the same effect and less messing about. I feel in my gut there 
 must be some way to do it by just manipulating coefficients.
 
 
 -Original Message-
 From: music-dsp-boun...@music.columbia.edu 
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Ross 
 Bencina
 Sent: 03 January 2013 17:16
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
 response
 
 On 4/01/2013 4:05 AM, Thomas Young wrote:
 Is there a way to modify the bandpass coefficient equations in the 
 cookbook (the one from the analogue prototype H(s) = s / (s^2 + s/Q +
 1)) such that the gain of the stopband may be specified? I want to be 
 able
 
 I'm pretty sure that the BLT bandpass ends up with zeros at DC and 
 nyquist so I'm not sure how you're going to define stopband gain in 
 this case :)
 
 Maybe start with the peaking filter and scale the output according to your 
 desired stopband gain and then set the peak gain to give 0dB at the peak.
 
 peakGain_dB = -stopbandGain_dB
 
 (assuming -ve stopbandGain_dB).
 
 Does that help?
 
 Ross.
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book 
 reviews, dsp links http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book 
 reviews, dsp links http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Ghost tone

2012-12-06 Thread Thomas Young
1) The low frequencies are audible
2) It's not speaker distortion, the low frequencies are present in the signal

I think the spectrum of the first signal can be a bit misleading, if you are a 
bit more selective about where you take the spectrum (i.e. between the 
asymptotic sections) the low frequency contribution is easier to see.

The unpleasant pressure effect is exactly that, sound pressure waves. The 
strength will be dependent on the acoustics of your environment, it will be 
particularly objectionable if your ears happen to be somewhere where a lot of 
the wavefronts collide. The proximity of headphones to your ears is no doubt 
exacerbating the effect, especially in the very low frequencies which would 
otherwise bounce all over the place and diffuse.

Mics generally won't pick up very low frequencies - or more accurately their 
sensitivity to lower frequencies is very low. 


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 06 December 2012 05:50
To: A discussion list for music-related DSP
Subject: [music-dsp] Ghost tone

Hi,

Here's something to listen to:
http://flstudio.image-line.com/help/publicfiles_gol/GhostTone.wav


It's divided into 2 parts, the same bunch of sine harmonics in the upper range; the 
only difference is the phase alignment. (Both will appear similar through a 
spectrogram.)

Disregarding the difference in sound in the upper range:
1. anyone confirms the very low tone is very audible in the first half?
2. (anyone confirms it's not speaker distortion?)
3. anyone knows about literature about the phenomenon?

While I can understand where the ghost tone is from, I don't understand why 
it's audible. I happen to have hyperacusis & can't stand the low traffic 
rumbling around here, and I was wondering why mics weren't picking it up, as I 
perceive it as very loud. I hadn't been able to resynthesize a tone as nasty until 
now, mainly because I was trying low tones alone, and I can't hear simple sines 
under 20Hz.
The question is why do we(?) hear it, why is so much pressure noticeable (can 
anyone stand it through headphones? I find the pressure effect very disturbing).
Strangely enough, I find the tone a lot more audible when (through 
headphones) it goes to both ears, not if it's only left or right.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] i need a knee

2012-08-10 Thread Thomas Young
Not getting a response on a mailing list doesn't mean people are disrespecting 
you; we can do without the self-righteousness, thank you.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bastian Schnuerle
Sent: 10 August 2012 11:24
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] i need a knee

ok, i got it by myself, took a while .. but a small hint would have been nice, 
you guys have all those books i can not afford and i am only an ee dipl.ing. and 
they wanted me to build bombs and instead i am coding musical instruments, you 
should respect that .. thanks



Am 20.07.2012 um 08:17 schrieb robert bristow-johnson:

 On 7/19/12 4:19 PM, Bastian Schnuerle wrote:
 hey everybody,

 i am trying to compute a knee value in peak limiting. my code below 
 works quite nicely for very loud signals (which i like), but if not so 
 loud values are processed this code introduces distortion. in my 
 code there are some way-off alterations, which made the signal quite 
 smooth for loud signals, but as soon as it comes to low peaks, 
 they/i are/am failing.

 i would be very happy if you guys could post me some suggestions why 
 this behaviour for low peaked signals appears, and maybe you are 
 willing to share alterations to my code to make it universally 
 working ?!

 .//...

 static const float log6dB = 6.02059991327962f;

20*log10(2.0)

 the dB equivalent to one octave.

 const float kNee = log6dB-GetKneeDBfromGUI();

GetKneeDBfromGUI() returns the knee in dB relative to what?  the 
 rails?

 mFinalEnv = mProcessSignal;

 how is mProcessSignal defined?

 float kneeGain = 1.0;
 if( mFinalEnv > 1.0f ) gain = 1.0f/mFinalEnv;

 and where is gain used?


 const float outLimit = 2.0f*1.0f;

 float curOutLimit6dBBelow = curOutLimit/2.0f; const float 
 maxUnequalAtt = -10.0f;

 if( mFinalEnv > curOutLimit6dBBelow ) { //6db below limit
 float inv = curOutLimit6dBBelow/mFinalEnv;

 // do knee computation
 float kneePos1 = 0;
 float kneePos2 = 0;

 if( mFinalEnv >= curOutLimit )
 {
 kneePos1 = 1.0f;
 kneePos2 = 1.0f;
 }
 else
 {
 // create two knee curves
 float xPos = (mFinalEnv - curOutLimit6dBBelow)/ 
 (curOutLimit-curOutLimit6dBBelow);
 kneePos1 = xPos*xPos*xPos*xPos;

 kneePos2 = xPos;
 }

 // xfade between the knees
 float kneeFactor = kNee/log6dB;

 is kNee ever anything other than 1?

 kneeFactor = ((1.0f-kneeFactor)*kneePos1 + 
 kneeFactor*kneePos2)/7;

 i see the xfade.  i don't see why you are dividing by 7.
 .


 float overallAttDb = FastLinToDb(inv);
 kneeAtt = FastDbToLin(get_max(gain*(overallAttDb*
 (kneeFactor)),maxUnequalAtt));


 okay, gain is used here.

 kneeGain = kneeAtt;

 }

 }

 for( int fc = 0; fc < channels; ++fc ) {
 float val = mFinalVectors[fc][mFinaTail[fc]] * kneeGain; }

 ..//..

 can you define mathematically you're trying to do?  i can see that you 
 have some sorta mix between linear and x^4.  i realize this is for a 
 soft-knee limiter of some sort.  but i cannot grok the intended math 
 to be accomplished with this code.  can you just state the math (with 
 if statements, where needed)?

 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.



 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book 
 reviews, dsp links http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-08 Thread Thomas Young
You haven't really explained which aspect of the timbre you want to use to 
organise the sounds, and timbre is such a catch-all word that you need to 
specify the characteristics you are looking for in more detail to have any 
chance of producing something useful.

Categorising by amplitude envelope would be pretty straightforward since 
amplitude over time is something you can easily measure and process before 
judging it against some metric or heuristic. You would need to pick similar 
aspects of the timbre, things you can quantitatively measure, in order to develop 
a scheme for categorisation.
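
For what it's worth, a minimal sketch of one such measurable descriptor -- a 
frame-by-frame RMS amplitude envelope; the 512-sample frame size and the C++ 
helper name are just illustrative assumptions, not anything from Charles' setup:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Compute a coarse amplitude envelope: one RMS value per fixed-size frame.
// 'samples' is assumed to be mono and normalised to the range -1..+1.
std::vector<float> rmsEnvelope(const std::vector<float>& samples,
                               std::size_t frameSize = 512)
{
    std::vector<float> envelope;
    for (std::size_t start = 0; start < samples.size(); start += frameSize)
    {
        const std::size_t end = std::min(samples.size(), start + frameSize);
        double sumSquares = 0.0;
        for (std::size_t i = start; i < end; ++i)
            sumSquares += samples[i] * samples[i];
        envelope.push_back(static_cast<float>(std::sqrt(sumSquares / double(end - start))));
    }
    return envelope;
}

A descriptor like this (or a spectral centroid for brightness) gives you a 
handful of numbers per file that you can reduce to a single value and sort 
along one axis.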


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Charles Turner
Sent: 08 June 2012 18:36
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Pointers for auto-classification of sounds?

Hi all-

I was initially hesitant to post to the list as I haven't explored this topic 
very deeply, but after a second thought I said what the hell, so please 
forgive if my Friday mood is more lazy than inquisitive.

Here's my project: say I have a collective of sound files, all short and the 
same length, say 1 second in length. I want to classify them according to 
timbre via a single characteristic that I can then organize along one axis of a 
visual graph.

The files have these other properties:

  . Amplitude envelope. I don't need to classify by time characteristic, but 
samples could have different characteristics,
ranging from complete silence, to a classical ADSR shape, to values pegged 
at either +-100% or 0% amplitude.

  . Timbre. Samples could range in timbre from noise to
(hypothetically) a pure sine wave.

Any ideas on how to approach this? I've looked at a few papers on the subject, 
and their aims seem somewhat different and more elaborate than mine (instrument 
classification, etc). Also, I've started to play around with Emmanuel 
Jourdain's zsa.descriptors for Max/MSP, mostly because of the eazy-peazy 
environment. But what other technology (irrespective of language) might I 
profit from looking at?

Thanks so much for any interest!

Best,

Charles
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] maintaining musicdsp.org

2012-04-04 Thread Thomas Young
Maybe submissions should be added to a moderation queue rather than added 
directly (i.e. they need to be manually whitelisted). I don't think a super 
quick turnaround on new algorithm submissions is really important for something 
like musicdsp.org.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bram de Jong
Sent: 04 April 2012 16:07
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] maintaining musicdsp.org

On Wed, Apr 4, 2012 at 3:31 PM, Bastian Schnuerle 
bastian.schnue...@silberstein.de wrote:
 what is exactly the roadmap and tasks to do ? i think i could find 
 some helping hands for you, including mine .. maybe altogether we find 
 a way to get some work away from you ?

oh there is absolutely no roadmap! it just needs some love to stop the spammers 
from submitting DSP algorithms (about well known drugs and handbags I won't 
describe here lest I end up in spam filters). and, if anyone has some new/fresh 
ideas for it and someone else feels like implementing those, always welcome!

 - bram
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] maintaining musicdsp.org

2012-04-04 Thread Thomas Young
lol wow

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bram de Jong
Sent: 04 April 2012 16:15
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] maintaining musicdsp.org

On Wed, Apr 4, 2012 at 5:11 PM, Thomas Young thomas.yo...@rebellion.co.uk 
wrote:
 Maybe submissions should be added to a moderation queue rather than added 
 directly (i.e. they need to be manually whitelisted). I don't think a super 
 quick turnaround on new algorithm submissions is really important for 
 something like musicdsp.org.

they ARE added to a queue.
the queue now contains about 500 spam submissions.
that's the whole (current) problem.

some kind of report as spam thing for the comments would be nice too as there 
are SOME (but few) spam comments.

 - bram
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-23 Thread Thomas Young
Ringing in your ears due to exposure to loud noise is the stereocilia (small 
hair cells) being damaged and falsely reporting to your brain that there is 
still sound vibration present. The frequency of the ringing is not a function 
of the sound that damaged your ears (a super loud bassy sound doesn't cause a 
bassy ringing in your ears).

Tom

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of David Olofson
Sent: 23 February 2012 16:15
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] a little about myself

On Thursday 23 February 2012, at 17.10.52, David Olofson da...@olofson.net 
wrote:
 On Thursday 23 February 2012, at 16.17.38, Adam Puckett
 
 adotsdothmu...@gmail.com wrote:
  Interesting. How would you make an ear ringing sound?
[...]
 A more realistic approach might be to FFT the offending audio snippet that
 triggers the effect, remove all bands but the too loud peaks and then
 IFFT that to create the ring waveform.

Oh, wait. Probably throw in some heavy distortion before the FFT, 
approximating the mechanical distortion in the ear...? Also, the whole thing 
needs to take in account that the too loud level is highly frequency 
dependent.

Nothing is ever nearly as simple as it may seem at first. ;-)


-- 
//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://consulting.olofson.net  http://olofsonarcade.com   |
'-'
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Introducing myself (Alessandro Saccoia)

2012-02-23 Thread Thomas Young
 float pan = sin(2 * PI * frequency * time++ / 44100);

As 'time' increases, changes to 'frequency' will result in larger and larger 
discontinuities. You should accumulate the phase by adding the per-sample change 
in time, rather than multiplying the new frequency by the absolute time.
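
In other words, keep a running phase and add the current per-sample increment to 
it each sample; a minimal sketch (illustrative only, not code from the thread):

#include <cmath>

// Variable-frequency LFO using a phase accumulator. Changing 'frequency'
// between calls only changes how fast the phase advances, never the phase
// itself, so the output stays continuous.
struct Lfo
{
    double phase = 0.0;            // in cycles, 0..1
    double sampleRate = 44100.0;   // assumed sample rate

    float tick(double frequency)
    {
        const double kTwoPi = 6.283185307179586;
        float out = static_cast<float>(std::sin(kTwoPi * phase));
        phase += frequency / sampleRate;   // add the change in time
        if (phase >= 1.0) phase -= 1.0;    // wrap so the accumulator stays small
        return out;
    }
};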

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bill Moorier
Sent: 23 February 2012 18:05
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Introducing myself (Alessandro Saccoia)

Thanks Alessandro!  Unfortunately I don't think this is the problem though.

I added a simple moving average on the parameter and it didn't make
the nasty artifacts go away.  So I rewrote the whole thing as a VST so
I can post more code without having to reveal my messy in-progress
javascript framework ;)  Here it is:
http://abstractnonsense.com/23-feb-2012-cc.html

So if I sweep the parameter smoothly with this VST loaded in Ableton,
I get nasty artifacts as the sweep is happening.  I can see from the
trace that the parameter isn't changing too quickly, and the resulting
frequency variable always changes by less than 0.01Hz between
samples.  But the pan variable ends up changing by a lot - often
nearly as much as 0.1 between samples!

Thinking about it some more, I'm getting suspicious of the code that
writes to the pan variable - it's supposed to be an LFO.  The thing
that makes me suspicious is, if we change the frequency, even very
slightly, between samples then we're gluing together two different
sine waves.  But we're not taking into account phase!  If the time
variable gets big (which it will), then there doesn't seem to be any
reason to believe that two slightly-different-frequency sine waves
would have almost the same values at the current time.  The ends of
the sine waves might not match up when we glue them together!

Am I on the right track with this line of thinking?  Is there a better
way to write a variable-frequency LFO than just:
float pan = sin(2 * PI * frequency * time++ / 44100);

Thanks again,
Bill.



 Hello Bill,
 I take your question as a chance to introduce myself.
 When you sweep the input parameter you are introducing discontinuities in the 
 output signal, and that sounds awful.
 The simplest case to figure that out in your code is imagining that you have 
 the input variable set at 0 (pan = 0), and then abruptly you change the input 
 parameter to a value that will make the value of the pan variable jump to 1. 
 Both of the channels will generate a square wave, that will sound badly 
 because of the aliasing. One solution is to control the slew rate of your 
 parameter lowpass filtering your parameter. A simple moving average filter 
 should do the job correctly.

 I have been reading this newsletter for a couple of years now, and I think 
 that it's the best place to learn about the practical applications of musical 
 dsp. I have been working in the digital audio field since 3 years now, even 
 though I have been interested in computer music since my first years at the 
 university.
 Now I am freelancing in this field, and I also get to play music more often. 
 This is really stimulating my imagination, and I hope that in the next months 
 I will have the time to implement some new effect or instruments.
 Thank you for all the nice things that I have learnt here,

 Alessandro

-- 
Bill Moorier abstractnonsense.com | @billmoorier
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-22 Thread Thomas Young
I've tried that before, basically writing a soft synth and a sequencer. I won't 
pretend I made anything that sounded very good but it was fun. In the demoscene 
there is a whole field of 'programmed music' which is just this, although 
generally they use a sequencer for the composition and just code the 
instruments.

I have to say I've never really seen the appeal of Csound since it seems as 
complicated as just writing your own code in C/C++; maybe I am missing some 
great use case for it.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Adam Puckett
Sent: 22 February 2012 13:45
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] a little about myself

It's nice to see some familiar names in Csound's defense.

Here's something I've considered since learning C: has anyone
(attempted to) compose music in straight C (or C++) just using the
audio APIs? I think that would be quite a challenge. I can see quite a
bit more algorithmic potential there than probably any of the DSLs
written in it.

On 2/21/12, Michael Gogins michael.gog...@gmail.com wrote:
 It's very easy to use Csound to solve idle mind puzzles! I think many
 of us, certainly myself, find ourselves becoming distracted by the
 technical work involved in making computer music, as opposed to the
 superficially easier but in reality far more difficult work of
 composing.

 Regards,
 Mike

 On Tue, Feb 21, 2012 at 7:53 PM, Emanuel Landeholm
 emanuel.landeh...@gmail.com wrote:
 Well. I need to start using csound. To actually do things in the real
 world instead of just solving idle mind puzzles.

 On Tue, Feb 21, 2012 at 10:02 PM, Victor victor.lazzar...@nuim.ie wrote:
 i have been running csound in realtime since about 1998, which makes it
 what? about fourteen years, however i remember seeing code for RT audio
 in the version i picked up from cecelia.media.mit.edu back in 94. So,
 strictly this capability has been there for the best part of twenty
 years.

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp



 --
 Michael Gogins
 Irreducible Productions
 http://www.michael-gogins.com
 Michael dot Gogins at gmail dot com
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Signal processing and dbFS

2012-01-18 Thread Thomas Young
Most people seem to be overthinking this; Uli has posted the important equation 
here:

Y [dBFS] = 20*Log10(X/FS)

That is all you need for converting normalised peak values to dBFS.
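
As a trivial illustration (assuming the peak is already normalised so that 
FS = 1.0; the helper name is made up):

#include <cmath>
#include <limits>

// Convert a normalised peak value (full scale == 1.0) to dBFS.
// Returns -infinity for silence and 0 dBFS for a full-scale peak.
double peakToDbfs(double normalisedPeak)
{
    if (normalisedPeak <= 0.0)
        return -std::numeric_limits<double>::infinity();
    return 20.0 * std::log10(normalisedPeak);   // Y [dBFS] = 20*log10(X/FS), FS = 1
}

peakToDbfs(0.5) gives roughly -6.02 dBFS, peakToDbfs(1.0) gives 0 dBFS.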


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Uli Brueggemann
Sent: 18 January 2012 08:20
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Signal processing and dbFS

My simple point of view about dBFS (full scale):
The full scale FS of a 16 bit soundcard is 2^15=32768, of a 24 bit
soundcard it is 2^23 = 8388608.
The dB number Y of a value X represents a relation to the full scale:
Y [dBFS] = 20*Log10(X/FS)
So with X=32768 and a 16 bit soundcard you get Y =
20*Log10(32768/32768) = 20*Log10(1) = 20*0 = 0 dBFS

The result 32768/32768 = 1.0 (or 8388608/8388608) explains, why the
DSP maths often use values in the range -1.0 .. +1.0.
Of course with floating point numbers you can also use higher values.
At least until you try to send the result to a soundcard. Converting a
float number to an integer value by multiplication with the resolution
of the soundcard will lead to clipping if you have values exceeding
the full scale.

All this consideration has nothing to do with the output voltage of
the soundcard. Thus 0 dBFS may result in a level of -10 dBV or + 4dBu
or whatever the analog reference level is.

In practical applications it is always necessary to avoid clipping in
the digital domain and in the analog domain. This means to leave some
headroom. E.g if you define a headroom of 12 dB for yourself as
optimal then you may arbitrarily define -12 dBFS as 0 dBLS (Linda S).
That's why there is all the confusion with different definitions
around. Especially if you forget to mention LS and talk about 0 dB
only instead of 0 dBLS :-)

Uli



On Wed, Jan 18, 2012 at 7:34 AM, Linda Seltzer
lselt...@alumni.caltech.edu wrote:
 Good evening from snowy Washington State,

 Those who know me personally know that my background in music DSP is via
 the AT&T Labs / Bell Labs signal processing culture.  So for the first
 time I have encountered the way recording studio engineers think about
 measurements.  They plot frequency response graphs using dBFS on the Y
 axis.

 Recording engineers are accustomed to the idea of dBFS because of the
 analog mixes where the needle looked like it was pegging at 0dB.  The
 problem is what this means in signal processing.  My understanding is that
 different manufacturers have different voltage levels that are assigned
 as 0dB.  In signal processing we are not normally keeping track of what
 the real world voltage level is out there.

 When I am looking at a signal in Matlab it is a number in linear sampling
 from the lowest negative number to the highest positive number.  Or it is
 normalized to 1.

 I don't know how to translate from my plot of a frequency response in
 Matlab to a graph using dbFS.  Visually a signal processing engineer
 wouldn't think about the peaks as much as a recording engineer sitting at
 a mixing console does.  I can see that from their point of view they are
 worrying about the peaks because of worries about distortion.  A signal
 processing engineer assumes that was already taken care of by the times
 the numbers are in a Matlab data set.  The idea of plotting things based
 on full scale is a bit out of the ordinary in signal processing because
 many of our signals never approach full scale.  Telephone calls are
 different from rock music.

 Any comments on this issue would be greatly appreciated.

 Linda Seltzer



 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Signal processing and dbFS

2012-01-18 Thread Thomas Young
I'm just highlighting Uli's post which I think contains the answer to her main 
question of how to convert a normalised Matlab plot into dBFS. I don't mean to 
denigrate what other people have said.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 18 January 2012 15:58
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Signal processing and dbFS

If we're overthinking it, it's probably because we're not sure what Linda's 
after. I'm sure she knows the formula for voltage conversion to dB.


On Jan 18, 2012, at 2:34 AM, Thomas Young wrote:
 Most people seem to be overthinking this, Uli has posted the important 
 equation here:
 
 Y [dBFS] = 20*Log10(X/FS)
 
 That is all you need for converting normalised peak values to dbFS.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] anyone care to take a look at the Additive synthesis article at Wikipedia?

2012-01-16 Thread Thomas Young
I'd like to say well done to everyone who has edited this so far, it looks 
massively better :)

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 16 January 2012 16:16
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] anyone care to take a look at the Additive synthesis 
article at Wikipedia?

On 1/16/12 1:16 AM, Nigel Redmon wrote:
 Nice improvements.

 This may seem like nitpicking, but the Timeline of additive synthesizers 
 section seems to choose keeping the instrument name as the start of the 
 sentence over proper grammar. For instance:

Hammond organ, invented in 1934[26], is an electronic organ that uses nine 
 drawbars to mix several harmonics, which are generated by a set of tonewheels.

 This should either read

The Hammond organ, invented in 1934[26], is an electronic organ that 
 uses...

 or something like

Hammond organ-invented in 1934[26], the Hammond organ is an electronic 
 organ that uses...

 or

Hammond organ: Invented in 1934[26], the Hammond organ is an electronic 
 organ that uses...

 (Note that one entry, EMS Digital Oscillator Bank (DOB) and Analysing Filter 
 Bank: According to..., does it this way already.)

 You have enough cooks working on that page right now, so I'd rather leave it 
 up to you guys what route you go. But if you use a sentence, it should read 
 like one.


well, there is evidence that Clusternote is from Japan.  dunno if these 
sentences were written by him.

i wouldn't discourage you from editing at all.

L8r,

-- 

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp]   Splitting audio signal into N frequency bands

2011-11-02 Thread Thomas Young
'Biquad' is a filter topology meaning the transfer function is expressed as a 
quadratic divided by another quadratic (biquadratic). They are pretty common in 
the world of digital filters because they are very simple to convert into an 
efficient algorithm (Direct form 1 etc...). See 
http://en.wikipedia.org/wiki/Digital_biquad_filter

A Butterworth filter has a particular transfer function, see 
http://en.wikipedia.org/wiki/Butterworth_filter. I think there are a few 
implementations in the music DSP archives.
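
To make the connection concrete, here is a minimal sketch of a second-order 
lowpass biquad with a Butterworth response (Q = 1/sqrt(2)), using cookbook-style 
coefficients and a Direct Form 1 update; it is an illustration, not the code 
from the archive entry:

#include <cmath>

// Second-order lowpass biquad, Direct Form 1. With Q = 1/sqrt(2) the
// magnitude response is Butterworth (maximally flat in the passband),
// giving the 12 dB/oct rolloff discussed in this thread.
struct BiquadLP
{
    double b0 = 0, b1 = 0, b2 = 0, a1 = 0, a2 = 0;   // normalised (a0 == 1)
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;           // filter state

    void setCutoff(double cutoffHz, double sampleRate, double q = 0.70710678)
    {
        const double pi    = 3.141592653589793;
        const double w0    = 2.0 * pi * cutoffHz / sampleRate;
        const double alpha = std::sin(w0) / (2.0 * q);
        const double cosw0 = std::cos(w0);
        const double a0    = 1.0 + alpha;
        b0 = (1.0 - cosw0) * 0.5 / a0;
        b1 = (1.0 - cosw0) / a0;
        b2 = (1.0 - cosw0) * 0.5 / a0;
        a1 = -2.0 * cosw0 / a0;
        a2 = (1.0 - alpha) / a0;
    }

    float process(float x)
    {
        const double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return static_cast<float>(y);
    }
};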

I think David made a good point that you don't really want an aggressive cut-off 
with your filter, since sounds which straddle the crossover frequencies can 
suffer quite badly. As he suggests, using simpler, less aggressive filters (e.g. 
pretty much any first or second order IIR filter) will probably be acoustically 
a lot nicer.

Thomas

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thilo Köhler
Sent: 02 November 2011 12:10
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp]   Splitting audio signal into N frequency bands

Hello Thomas, Wen!

Thank you for the quick input on this.

1. I found that in the 3-band case, splitting up 
the low and high band from the input and then 
generating the mid band by subtracting them
works much better than the salami strategy
(chopping off slices with a LP).
Thanks!

2. 
 Subtracting the LP part makes sense only if the LP filter is zero-phase.
I dont know if my filters are zero phase, I am not that deep
into the filter math to tell you straight away. It is an IIRC taken from
here:
http://www.musicdsp.org/showArchiveComment.php?ArchiveID=259

This one seems to work best for my purposes, but that is just
from subjective listening without any mathematical evidence.

Is this a Butterworth filter like Thomas suggests? (sorry if the question
sounds like a noob...) In the comment they call it a biquad; i don't know
if a biquad can be Butterworth or whether the two are mutually exclusive.

I have also tried:

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=266
Doesn't work well for low cutoff frequencies, like 150Hz.
I am using single precision.

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=117
Seems to be too flat, not steep enough.

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=237
Seems to be too flat, not steep enough.

I think in the use case of a multi-band compressor, perfect
reconstruction is important. That is why I want to create
the bands by subtracting and not with independent filters.
I assume this is a good strategy, no?

Regards,

Thilo

 I  believe the typical way is to directly construct a series of steep
 band-pass  filters to cover the whole frequency range. This is very
 flexible but  usually means the individual parts do not accurately add up
 to the original  signal. On the other hand, if perfect sum is desirable
 you may wish to take  a look at mirror filters, such as QMF. These are
 pairs of LP and HP filters  designed to guarantee perfect reconstruction.


 --
 From: Thilo K?hler koehlerth...@gmx.de
 Sent: Monday, October 31, 2011 10:47 AM
 To: music-dsp@music.columbia.edu
 Subject: [music-dsp] Splitting audio signal into N frequency bands

 Hello all!

 I have implemented a multi-band compressor (3 bands).
 However, I am not really satisfied with the splitting of the bands,
 they have quite a large overlap.

 What I do is taking the input singal, perfoming a low pass filter
 (say 250Hz) and use the result for the low band#1.
 Then I subtract the LP result from the original input and do
 a lowepass again with a higher frequency (say 4000Hz).
 The result is my mid band#2, and after subtracting again the remaining
 signal is my highest band#3.

 I assume this proceedure is appropriate, please tell me otherwise

 The question is now the choise of the filter.
 I have tried various filters from the music-dsp code archive,
 but i still havent found a satisfiying filter.

 I need a steep LP filter (12db/oct or more),
 without resonance and fewest ringing possible.
 The result subtracted from the input must works as a HP filter.

 Are there any concrete suggestions how such a LP filter should look
 like, or is there even a different, better way to split the audio signal
 into 3 bands (or N bands)?

 I know I can use FFT, but for speed reasons, I want to avoid FFT.

 Regards,

 Thilo Koehler

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Splitting audio signal into N frequency bands

2011-10-31 Thread Thomas Young
I think you will get better results doing a lowpass & highpass, then 
subtracting those from your original signal to get a middle band.

For the filter I would have thought you want something as 'clean' as possible, 
i.e. no ripple, as flat as possible in the passband and with a linear falloff (it 
shouldn't need to be particularly steep, but 12db/oct does sound about right). 
For me a Butterworth springs straight to mind; second order will give you 
12db/oct. It's pretty cheap as well if that is a concern.
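
A rough sketch of the split-and-subtract structure (one-pole filters stand in 
for the Butterworth sections purely to keep the example short, and the 
coefficients are made up):

#include <cstddef>
#include <vector>

// One-pole lowpass, used here only as a placeholder for a real crossover filter.
struct OnePoleLP
{
    float state = 0.0f;
    float coeff;                              // 0..1, higher = higher cutoff
    explicit OnePoleLP(float c) : coeff(c) {}
    float process(float x) { state += coeff * (x - state); return state; }
};

// Split a mono buffer into low / mid / high bands. By construction
// low + mid + high reproduces the input exactly, sample for sample.
void splitThreeBands(const std::vector<float>& in, std::vector<float>& low,
                     std::vector<float>& mid, std::vector<float>& high)
{
    OnePoleLP lowFilter(0.05f);   // low crossover (arbitrary coefficient)
    OnePoleLP highFilter(0.5f);   // high crossover (arbitrary coefficient)
    low.resize(in.size()); mid.resize(in.size()); high.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
    {
        low[i]  = lowFilter.process(in[i]);           // lowpass -> low band
        high[i] = in[i] - highFilter.process(in[i]);  // input minus lowpass -> high band
        mid[i]  = in[i] - low[i] - high[i];           // remainder -> mid band
    }
}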

I am interested to know how other people do the splitting into bands as well 
though, I plan on implementing a multiband compressor shortly myself. Multiple 
bandpass filters would seem logical, but there must be a bit of an issue with 
colouring the signal.

Thomas Young

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thilo Köhler
Sent: 31 October 2011 10:47
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Splitting audio signal into N frequency bands

Hello all!

I have implemented a multi-band compressor (3 bands).
However, I am not really satisfied with the splitting of the bands,
they have quite a large overlap.

What I do is taking the input signal, performing a low pass filter
(say 250Hz) and use the result for the low band#1.
Then I subtract the LP result from the original input and do
a lowpass again with a higher frequency (say 4000Hz).
The result is my mid band#2, and after subtracting again the remaining
signal is my highest band#3.

I assume this procedure is appropriate, please tell me otherwise

The question is now the choice of the filter.
I have tried various filters from the music-dsp code archive,
but i still haven't found a satisfying filter.

I need a steep LP filter (12db/oct or more),
without resonance and the least ringing possible.
The result subtracted from the input must work as a HP filter.

Are there any concrete suggestions how such a LP filter should look like,
or is there even a different, better way to split the audio signal
into 3 bands (or N bands)?

I know I can use FFT, but for speed reasons, I want to avoid FFT.

Regards,

Thilo Koehler

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FM Synthesis

2011-09-14 Thread Thomas Young
I don't think musicians mind tweaking and poking presets (what's the worst that 
can happen?), so it's really up to the makers to provide plenty of decent 
preset sounds. NI FM8 for example has a pretty good selection of presets and is 
a popular FM synth these days, I rate it pretty highly myself.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 14 September 2011 11:19
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] FM Synthesis

I believe FM sounds got back in fashion at the time everyone had forgotten 
that shitty GM FM bank in Windows, when FM meant cheesy MIDI files.
..but the problem with FM is that no one can program presets, it's very 
unpredictable when you deal with more than 3 operators, and I'm not sure 
today's musicians wanna deal with such complexity anymore. Actually, if the 
story that most DX7s never got their patches tweaked is true, maybe no 
musician ever wanted to deal with such complexity.



 On 13/09/2011 21:06, Theo Verelst wrote:
 Hi

 ..
 Remember that Frequency Modulation of only two operators already has
 theoretically (without counting sampling artifacts!) a Bessel-function
 guided spectrum which is in every case infinite, although at low
 modulation indexes the higher components are still small. Also think
 about the phase accuracy: single precision numbers are not good at
 counting more samples than a few seconds for instance.


 Not too much of a problem if you use table lookup, which is what I
 assume the DX 7 did. Phase errors are a problem in single precision if
 you compute and accumulate.


 ..
 Oh, there are Open Source FM synths maybe worth looking at: a csound
 script (or what that is called there)

  Csound has the foscil two-operator self-contained opcode, and of
 course you can roll your own operator structures ad lib. Somewhere there
 is a full DX 7 emulation complete with patches (poss in the Csound book;
 not to hand right now).

 Have we now reached the point where FM sounds are back in fashion?

 Richard Dobson
 
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FM Synthesis

2011-09-14 Thread Thomas Young
Interesting. I was referring to 'proper' synths really, which I wouldn't 
really group with romplers and black boxes personally, but I guess end users 
don't necessarily make the same distinction. I can definitely 

 Personally I'm not making black boxes, because to do this you can't just be a 
 programmer, you also have to be an artist

That's part of the fun of working with music & DSP though, you need a bit of a 
creative touch, I would have thought most people on this list would agree. 
Obviously making black boxes is taking it to the extreme though.

 preset designers [...] Probably not very well paid

Heh, well yea, that's the problem with being a programmer, you can never really 
move into a 'creative' job without suffering major pay decimation :P

Out of curiosity do you program soft or hard synths? I'd be interested to know 
if it is easy to get into the professional soft synth/effect industry and if it 
pays well. 

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 14 September 2011 13:49
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] FM Synthesis

It's totally ok to make presets yourself  with a beta team, as long as the 
full editor is given to the user (so that he or others can do presets 
himself), but when it's a rompler or a black box, you really need artists 
to be using your private tools.

Also it's pretty hard to keep backwards compatibility during the first 90% 
of the dev of a plugin, so yes you make a lot of sounds during that process, 
but they will be lost. Well that's in my case.

Personally I'm not making black boxes, because to do this you can't just be 
a programmer, you also have to be an artist because you will be making 
hard-coded choices, so you'd better know what you're doing (I mean: a 
parameter that doesn't end up on the GUI is a choice that the programmer had 
to make. If it's a bad choice, the artist using it will have to do with it). 
I prefer to give as much power as possible, even if some features end up not 
being used.  Then for the next synth/effect I can still only implement what 
has been used in the previous stuff. But what's sure is that a lot of 
musicians don't like complex synths.



I always suspected most of the presets just fell out of the development 
process, you are always going to need to test that your engine sounds good 
(synth construction is 50% science and 50% magic after all) and that surely 
generates a whole load of sounds. Polish those up and you already have a 
load of presets, let your QA/Interface designers have a bash while they are 
developing as well and you have yourself a whole suite! I could be totally 
wrong of course.

 some kind of middleware guys, in-between musicians & engineers, who 
 program presets for engines

 And where can I apply for this job? ;)

 -Original Message-
 From: music-dsp-boun...@music.columbia.edu 
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
 Sent: 14 September 2011 11:32
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] FM Synthesis

 The evolution seems to be going towards some kind of middleware guys,
 in-between musicians & engineers, who program presets for engines put in
 black box instruments, so that the musician can get access to non-sampled
 FM presets, while not having access to FM at all.
 Well, seems to be what NI is doing as well.

 It already happened with samplers, in the past the musician got a sampler,
 that he could either program himself or feed with soundbanks. These days a
 musician buys a rompler,  doesn't have access to the editor (which
 obviously has to exist for the middleware guy who made the soundbank).




I don't think musicians mind tweaking and poking presets (what's the worst
that can happen?), so it's really up to the makers to provide plenty of
decent preset sounds. NI FM8 for example has a pretty good selection of
presets and is a popular FM synth these days, I rate it pretty highly
myself.


 -Original Message-
 From: music-dsp-boun...@music.columbia.edu
 [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
 Sent: 14 September 2011 11:19
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] FM Synthesis

 I believe FM sounds got back in fashion at the time everyone had 
 forgotten
 that shitty GM FM bank in Windows, when FM meant cheesy MIDI files.
 ..but the problem with FM is that no one can program presets, it's very
 unpredictable when you deal with more than 3 operators, and I'm not sure
 today's musicians wanna deal with such complexity anymore. Actually, if
 the
 story that most DX7s never got their patches tweaked is true, maybe no
 musician ever wanted to deal with such complexity.



 On 13/09/2011 21:06, Theo Verelst wrote:
 Hi

 ..
 Remember that Frequency Modulation of only two operators already has
 theoretically (without counting 

Re: [music-dsp] FM Synthesis

2011-09-12 Thread Thomas Young
Regarding:

buffer[i] = Math.sin( ( phase * ratio * frequency + buffer[ i ] ) * PI_TWO ); 
// FM (PM whatever)
phase += 1.0 / 44100.0; // advance phase in normalized space

Your oscillator will be prone to drifting out of phase due to the way you are 
adding the reciprocal of the sample rate (a recurring decimal) to a variable 
each sample.

phase += 1.0 / 44100.0; (= 0.0000226757369...)

This error will accumulate and become significant after a large number of 
samples; it may not be your problem here, but it would still be worth using a 
different mechanism for calculating it (i.e. use (i / SampleRate)).
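
For a fixed frequency, a sketch of the two alternatives (names and layout are 
mine, not taken from the posted code):

const double kSampleRate = 44100.0;

// (a) Recompute the phase from the integer sample index each time:
//     i / kSampleRate is evaluated fresh, so no rounding error accumulates.
double phaseFromIndex(long i) { return double(i) / kSampleRate; }

// (b) If an accumulator is kept anyway, use double precision and wrap it,
//     so the stored value stays small and the increment keeps its precision.
double advancePhase(double phase, double frequency)
{
    phase += frequency / kSampleRate;
    if (phase >= 1.0) phase -= 1.0;
    return phase;
}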

Don't you also want + buffer[ i ] after the * PI_TWO? I might just be 
missing something there though.

Tom Young 

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Andre Michelle
Sent: 12 September 2011 14:59
To: music-dsp@music.columbia.edu
Subject: [music-dsp] FM Synthesis

Hi all,


I just started an implementation of a FM-Synth and I am struggling with a 
certain effect, which I am not sure, if it is caused by a bug or conceptional 
error.

The structure is straight-forward. I have N operators, which can be freely 
routed including feedback. Each Oscillator has one buffer to write its 
processing result. The next operators in the processing chain can use the 
buffer as modulation input. I compare the output with NI FM8 to check if I get 
similar output. So far very close. The issue occurs, when I choose a routing 
with feedback. A > B > A, where B's output will be sent to the sound card. If I 
choose a blocksize (samples each computation) of one, I get reasonable audio at 
my output (similar to sawtooth). However running a blocksize of e.g. 64 samples 
or even just more than 1 sample results in some nasty glitchy sound. Looks to 
me like a phase problem, but I am pretty sure, that my code covers all that. My 
question is: Is it even possible to let the operators process entire 
audio-blocks independently? I guess, later processing of LFOs, envelopes and 
everything each sample might not lead to readable code and could not be easily 
optimized.

my Java code of a single Operator:
http://files.andre-michelle.com/temp/OperatorOscillator.txt

For the routing above (A>B>A, out:B) the linear process sequence is A,B,Output 
(while B,A,Output would also be nice)

Any hints? Thanks!

--
André Michelle
http://www.audiotool.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Electrical Engineering Foundations

2011-08-30 Thread Thomas Young
If the goal here is education then I would suggest formatting your work as a 
syllabus or structured as lessons with exercises. The raw information is 
already available through books and sites like Wikipedia (where the wording and 
presentation is already carefully reviewed), so I would have thought the focus 
would be on presenting the reader with the information in what you consider to 
be the 'correct' order, from the fundamentals upward.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 27 August 2011 23:11
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Electrical Engineering Foundations

I've made a small beginning at  http://www.theover.org/Dsp .

Feel free to comment/request/fire question, etc, in fact I don't even 
mind making it a Wiki page (to let other contribute), and I don't know 
yet how many linked pages and example materials I'll make.

Theo
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Multichannel Stream Mixer

2011-08-30 Thread Thomas Young
Is there something wrong with just summing the buffers for each channel?

It's not really clear what you are trying to achieve; do you want to downmix 
your channels?
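
For reference, a bare-bones sketch of summing per-stream buffers into an output 
(the buffer layout here is an assumption, not Stephane's classes):

#include <cstddef>
#include <vector>

// Mix N source buffers into 'out' by plain summation, with an optional
// overall gain for headroom if the sources are hot.
void mixBuffers(const std::vector<std::vector<float>>& sources,
                std::vector<float>& out, float gain = 1.0f)
{
    for (float& s : out)
        s = 0.0f;
    for (const std::vector<float>& src : sources)
        for (std::size_t i = 0; i < out.size() && i < src.size(); ++i)
            out[i] += src[i] * gain;
}

Dividing by the number of players, as in the code below, does guarantee no 
clipping, but it also drops the level of every stream; a fixed headroom gain or 
a limiter on the mix bus is the more usual compromise.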

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 14:05
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

FramesRecorder(i,0) is a class pointer with access to i the sample number,
0 is the channel.
Variable vol, is just an example to assign volume scale to the samples.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Wen Xue
Sent: Tuesday, August 30, 2011 2:18 PM
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

Is FramesRecorder(...) a macro or a function call? 
And why multiply vol=1.0 when everything is floating-point already? 

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 12:13
To: 'A discussion list for music-related DSP'
Subject: [music-dsp] Multichannel Stream Mixer

Hi, I am trying to implement a multi channel mixer in c++, very basic. I
have multiple streams and their associated buffer.
I'm using rtaudio and stk to work with the samples and audio devices.
I gather the buffers in a tick method, and fetch all the samples to produce
the sum :

FramesRecorder is the final output buffer. -1.0f to 1.0f
Frames[numplayers] are the stream buffers. -1.0f to 1.0f
int numplayers, the stream index.

the sines are normalized to -1.0 to 1.0 , float32.

for ( unsigned int i=0; i < nBufferFrames ; i++ ) {
Float vol = 1.0f;
FramesRecorder(i,0) = ( FramesRecorder(i,0) + (
Frames[numplayers](i,0) * vol ))  ; 
FramesRecorder(i,1) = ( FramesRecorder(i,1) + (
Frames[numplayers](i,1) * vol ))  ; 
}
// and something like giving the sum of streams.
FramesRecorder[0] /= numplayers;
FramesRecorder[1] /= numplayers; 


what would be the best method to mix multi channel buffers together ? I know
it's a newbie question, but I would like to know about your experience with
this. I would like to implement an efficient method as well. Thx !




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Electrical Engineering Foundations

2011-08-24 Thread Thomas Young
What resources would you recommend Theo?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 24 August 2011 17:01
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Electrical Engineering Foundations

Just to maybe put some people at ease and hopefully arousing some 
discussions, I'd like to point the attention of a lot of people in the 
DSP corners of recreation and science, hobby and serious research to the 
general foundations for Sampling Theory and Digital Signal Processing 
and possibly information theory and filter knowledge.

Of course there is little against a lot of gents (and possibly gals) 
plowing around in the general subject of doing something with audio and 
a computer. It appears however that for the reasonably reasons such as 
required IQ and education (or for fun) but also for less reasonable 
reasons such as power pyramids and greed, the theoretical foundations of 
even the simplest thing of all, playing a CD or soundfile through a DA 
converter for many remain rather shaky or plain wrong or simply absent.

For analog electronics filters and such are well known subjects 
applicable from the simple to the quite complex and the electronics 
world generally tends to have a solid foundation for dealing with the 
issues and theories involved, though not necessarily all electronicists 
are on par with the required network theoretical foundations, I've 
understood MNA methods in the US are even taught at high school level.

For EEs (electrical engineers) it will usually be the case they've heard 
about sampling theory somewhere in their education, and idem ditto for 
some filter or circuits and systems type of theory. Many others, like in 
IT (informatics), even physics and other disciplines or people making 
their hobby a profession this might all just as well be abacadabra, and 
it seems to me this should change.

Regards,

Theo Verelst
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-28 Thread Thomas Young
In principle a patent protects the investment that a company makes in order to 
develop a new technology; companies are unlikely to invest large amounts on 
research if their ideas are going to be copied and their product beaten to 
market by an opportunistic competitor.

In practice, as we all know, things like software patents are just food for 
patent trolls.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Victor Lazzarini
Sent: 28 January 2011 18:07
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] New patent application on uniformly partitioned 
convolution

I would have thought that the whole point of a patent is to make  
money. A scientific paper, IMHO, is the way to move the field forward.

Victor
On 28 Jan 2011, at 18:02, Dave Hoskins wrote:

 The whole point of a Patent is to help engineers move forward, so  
 it's completely legitimate to take a previous invention and add to  
 it, to make a new Patent.
 It's a shame they are not seen like this at all.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Factorization of filter kernels

2011-01-19 Thread Thomas Young
Hi Uli

I don't know if this will be useful for your situation, but a simple method for 
decomposing your kernel is to simply chop it in two. So for a kernel:

1 2 3 4 5 6 7 8

You can decompose it into two zero padded kernels:

1 2 3 4 0 0 0 0 

0 0 0 0 5 6 7 8

And sum the results of convolving both of these kernels with your signal to 
achieve the same effect as convolving with the original kernel. You can do this 
because convolution is distributive over addition, i.e.

f1*(f2+f3) = f1*f2 + f1*f3

For signals f1, f2 & f3 (* meaning convolve rather than multiply). 

Obviously all those zeros do not need to be evaluated, meaning the problem is 
changed to one of offsetting your convolution algorithm (which may or may not 
be practical in your situation), but it does allow you to use half the number of 
coefficients.
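
A toy demonstration of the idea, using a naive O(n*m) convolution just to show 
that the two halves sum to the full result (the example numbers are arbitrary):

#include <cstddef>
#include <vector>

// Naive linear convolution: result length is signal.size() + kernel.size() - 1.
std::vector<float> convolve(const std::vector<float>& signal,
                            const std::vector<float>& kernel)
{
    std::vector<float> out(signal.size() + kernel.size() - 1, 0.0f);
    for (std::size_t i = 0; i < signal.size(); ++i)
        for (std::size_t j = 0; j < kernel.size(); ++j)
            out[i + j] += signal[i] * kernel[j];
    return out;
}

int main()
{
    std::vector<float> x     = {1.0f, 0.0f, 0.5f, -0.25f};
    std::vector<float> k     = {1, 2, 3, 4, 5, 6, 7, 8};
    std::vector<float> kLow  = {1, 2, 3, 4, 0, 0, 0, 0};
    std::vector<float> kHigh = {0, 0, 0, 0, 5, 6, 7, 8};

    std::vector<float> full = convolve(x, k);
    std::vector<float> a    = convolve(x, kLow);
    std::vector<float> b    = convolve(x, kHigh);
    // full[i] == a[i] + b[i] for every i, since convolution distributes over
    // addition; in practice the zeros are skipped and the second half is
    // simply applied with an offset of four samples.
    return 0;
}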

Thomas Young

Core Technology Programmer
Rebellion Developments LTD

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Uli Brueggemann
Sent: 19 January 2011 14:56
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Factorization of filter kernels

Hi,

thanks for the answer so far.
A polyphase filter is a nice idea but it does not answer the problem.
The signal has to be demultiplexed (decimated), the different streams
have to be filtered, the results must be added to get the final output
signal.

My question has a different target.
Imagine you have two system (e.g. some convolution  boards with DSP).
Each system can just run a 512 tap filter. Now I like to connect the
two systems in series to mimic a desired 1024 tap filter. The 1024
kernel is known and shall be generated by the two 512 tap filters.
So what's the best way to decompose the known kernel into two parts? Is
there any method described somewhere?

Uli


2011/1/19 João Felipe Santos joao@gmail.com:
 Hello,

 a technique that allows something similar to what you are suggesting
 is to use polyphase filters. The difference is that you will not
 process contiguous vectors, but (for a 2-phase decomposition example)
 process the even samples with one stage of the filter and the odd
 samples with another stage. It is generally used for multirate filter
 design, but it makes sense to use this kind of decomposition if you
 can process the stages in parallel... or at least it is what I think
 makes sense.

 You can search for references to this technique here [1] and here [2].
 A full section on how to perform the decomposition is presented on
 Digital Signal Processing: a Computer-based approach by Sanjit K.
 Mitra.

 [1] 
 http://www.ws.binghamton.edu/fowler/fowler%20personal%20page/EE521_files/IV-05%20Polyphase%20FIlters%20Revised.pdf
 [2] https://ccrma.stanford.edu/~jos/sasp/Multirate_Filter_Banks.html

 --
 João Felipe Santos



 On Tue, Jan 18, 2011 at 5:46 AM, Uli Brueggemann
 uli.brueggem...@gmail.com wrote:
 Hi,

 a convolution of two vectors of lengths n and m gives a result
 of length n+m-1.
 So e.g. two vectors of length 512 will result in a vector of length 1023.

 Now let's assume we have a vector (or signal or filter kernel) of size
 1024, the last taps is 0.
 How to decompose it to two vectors of half length? The smaller vectors
 can be of any arbitrary contents but their convolution must result
 must be equal to the original vector.

 It would be even interesting to factorize  given kernel into n
 smaller kernels. Again the smaller kernels may have any arbitrary but
 senseful contents, they can be identical but this is not a must.

 Is there a good method to carry out the kernel decomposition? (e.g.
 like calculating n identical factors x of a number y by x =
 Exp(Log(y)/n) with x^n = x*x*...*x = y)

 Uli
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp