Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-30 Thread robert bristow-johnson


On Dec 29, 2010, at 9:10 PM, Nigel Redmon wrote:


i think we skeered 'em, Robert

;-)




my driver's license photo looks pretty scary (but my facebook, linked-in, 
whatever isn't so scary).  i got accosted once by plain-clothes 
NYC cops about a year ago.  they said they stopped me because i looked 
like Ted Kaczynski.  i'm pretty sure they were on a fishing expedition.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-29 Thread Gwenhwyfaer
*emerges from cover, looks around timidly*

...have the big scary beasts finished their territory skirmish now...?

*hears noise, startles, scuttles back behind cover*




Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-29 Thread Theo Verelst

...have the big scary beasts finished their territory skirmish now...?

Did you hear the big machine gun kill the actual monster yet? I didn't.


Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-29 Thread Nigel Redmon
i think we skeered 'em, Robert

;-)


On Dec 29, 2010, at 4:41 PM, Theo Verelst wrote:
 ...have the big scary beasts finished their territory skirmish now...?
 
 Did you hear the big machine gun kill the actual monster yet? I didn't.



Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-29 Thread Theo Verelst

Hi,

I looked up the AES workshop link about past system design

http://www.aes.org/events/125/workshops/session.cfm?code=W5

I even recognize one of the speakers from the AES conference in 
Amsterdam last year (or the year before, I forget), but I don't see a 
link to a recording of the workshop or any texts from the participants 
on the AES server...


I think the whole discussion shows the same thing that has been happening 
in the Open Source scenes I am aware of, and even the university circles 
I've been in: the DSP worship phenomenon, which leads to secret (and ill) 
specifications, often coming from people who have at least dubious 
purposes (certainly not reasonable science benefiting humanity), 
implementors who have to fight not to be lifted over the horse, and 
users who are regularly more interested in beads and mirrors of a 
specific kind than great machinery, or worse.


A dreadful situation. I believe in open government (to the normal 
extent) and in the old adage that democracy often has to be the least 
of all evils, so I applaud a good discussion, and free information 
(according to Mr. Stallman and his great free software, which in Open 
Source is normally to be interpreted as "free" as in free speech rather 
than free beer. I was reminded of that in the context of a major OS 
software meeting in Europe I taped at the time, where he spoke about 
the price of a package of milk in the same congregation where Wikipedia 
was being discussed by the founder, who was more interested in proper 
server technology, I think).


Last week I was at a local (Dutch; I can't help not having been born in 
California, where I'd prefer to live) AES meeting, where I was reminded 
of how little DSP has pervaded normal life in a reasonable enough way, 
compared to, let's say, what rockers in the 50s were already able to do 
with non-digital tube technology! I mean, even deciding on loudness 
levels appears to be a nearly insurmountable mountain, requiring 
committees with so-called important work. Well, I think I could make 
sure the right software blocks for preventing the main caveats would be 
somehow available, and some information to do a neat job stimulated, but 
I recall someone some years ago warning me, for my own sake, not to 
spill course-worthy information...


I guess Yamaha, which has been mentioned in the discussion (I'm sure 
like others), is more than averagely aware of various musical aspects 
which often seem to escape the attention of digital designs: for 
instance, what happens with the result of some idea or filter when it 
is let loose on a normal listening space, when the results are supposed 
to work musically with other musical instruments (I recall a demo at the 
Frankfurt Messe of a well-known software company trying their 
instrument software out in a live setting with audience and various 
instruments, which was devastatingly ugly). Or, and I think this is an 
important issue in this discussion: suppose you resample in a certain 
way; what is going to happen if you do it again, or if your resampling 
is followed by another form of resampling later on in the audio path? 
Effects other than those that can be read easily from graphs might be 
going on in such a case. And don't forget that amplitude graphs of 
filters don't reveal whether the *phase shifts* have nice or nasty 
irregular properties...


Theo


Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-28 Thread robert bristow-johnson


On Dec 27, 2010, at 7:00 PM, Nigel Redmon wrote:


OK, you're missing my point...
i keep restating your point with a different, but equivalent,  
semantic.


mmm... I don't see that--you wouldn't be telling me some of these  
things about filter characteristics, and listing coefficients if you  
understood my point...


the purpose of listing those coefficients was to spell out explicitly  
the ONLY contribution i originally wanted to make to this thread.  to  
wit:


for upsampling using polyphase FIR (or multirate techniques,  
whatever you wanna call it), you have a different set of coefficients  
for every phase or fractional delay in your upsampler.  you get  
these coefficients from an LPF FIR design and then rearranging the  
coefficients (in MATLAB, it's the reshape() function).  the total  
number of coefficients is the number of phases (the upsample ratio)  
times the FIR length for each phase. you can design this LPF FIR using  
a variety of techniques.  in my *original* contribution, i responded  
to a point you made:


On Dec 21, 2010, at 11:22 AM, Nigel Redmon wrote:

As for things like distortion modeling of guitars, I can tell you  
that windowed sinc is involved, at least on the upsampling leg where  
you likely want to preserve phase.


this point of yours is not true (well, you can tell us a windowed-sinc  
is involved, but telling us that does not necessarily make it so).  i  
was trying to be careful to concede when a windowed-sinc *is* involved  
in an Nx upsampler, you get an advantage that 1 out of N output  
samples may just be copied over.  admittedly, that was with the  
assumption that it is


   h[n] = sinc(n/N) * w[n]

with N=integer and equal to the upsample ratio.  i considered no other  
use of the sinc() design until this exchange:


On Dec 27, 2010, at 11:03 AM, robert bristow-johnson wrote:


On Dec 27, 2010, at 2:05 AM, Nigel Redmon wrote:

...
Ideally, you would want everything from 0.50 to 1.00 to be clear  
to a reasonable degree. It's not. It's down 6 dB at .50, and hits  
the -90dB stop-band at about 0.70. (You can get a closer look at it  
on the following webpage, with settings of 0.50 factor, 31 length, and  
90 rejection: http://www.earlevel.com/main/2010/12/05/building-a-windowed-sinc-filter 
 ) To put it another way, half-band filters alias in the transition  
band. More typically, you'd move Fc down a bit (which means you are  
filtering the original points, which means you can't just copy them).


if you move Fc down a little bit (from Nyquist/2), it's no longer  
half-band and the sinc() function is not exactly sinc(n/2) so then  
for *every* n, the sinc() is non-zero.


which is when i *first* got a whiff that you didn't intend for the  
windowed-sinc function to be sinc(n/N), or in the case of 2x upsampling  
that you didn't intend it to be sinc(n/2).  that's the first time i  
had any idea that you were talking about something like


   h[n] = sinc(n/(1.9)) * w[n]

which, to me, is not a design worth considering since you don't get  
the one and only advantage of the windowed-sinc.  i had trouble  
getting that point through my thick skull.  but, to add to the  
confusion, you would say stuff like:


As far as 4x, and my comment that you may want to do 2x *2x  
instead: the reason is that you can't (general case) use a quarter- 
band filter for 4x, so you don't get to skip every fourth multiply,


for which i had to disagree, since the only 2x sinc-based upsampler i  
was considering is sinc(n/2).


all of what i was trying to say (in the original response 7 days ago)  
was that a windowed-sinc upsampler is *not* involved (if you want to  
do this in an optimal way) with the possible (even likely) exception  
of 2x upsampling and (as i had always been assuming for 2x) half-band  
symmetry.  that is why i tossed out those two designs, one for  
windowed-sinc, the other for Least Squares.


because the windowed-sinc is less optimally designed, it must have  
more taps to get the same job done.  but, because it's windowed-sinc  
(and eighth-band symmetry or sinc(n/8)), one out of eight samples need  
only be copied over.  but that savings is not worth what you have to  
pay for it in the other fractional delays because it's nearly 50%  
longer FIR summation.  but for only *two* phases (or 2x upsampling),  
it *is* worth it.  it's about a wash for 4x upsampling and just not  
worth it for any higher ratio.


windowed-sinc upsampling where one copies an input need *not* be half- 
band, it could be sinc(n/4) or sinc(n/8).
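A quick numerical sketch of that zero pattern (my addition, in Python/NumPy rather than the MATLAB mentioned elsewhere in the thread; the Kaiser beta and lengths are arbitrary choices): h[n] = sinc(n/N) * w[n] is zero at every nonzero multiple of N, so the zero-fractional-delay polyphase branch reduces to a plain copy for N = 2, 4, or 8 alike.

```python
# illustrative sketch: h[n] = sinc(n/N) * w[n] has zeros at nonzero
# multiples of N, which is what lets 1 out of N outputs be a plain copy.
import numpy as np

def windowed_sinc(N, half_len, beta=9.0):
    # Kaiser-windowed sinc for Nx upsampling (beta is an arbitrary choice)
    n = np.arange(-half_len, half_len + 1)
    return np.sinc(n / N) * np.kaiser(2 * half_len + 1, beta)

for N in (2, 4, 8):
    h = windowed_sinc(N, 4 * N)
    c = len(h) // 2                              # index of n = 0
    at_multiples = [abs(h[c + k * N]) for k in range(1, 4)]
    print(N, max(at_multiples) < 1e-12, abs(h[c] - 1.0) < 1e-12)
```

Each line prints `N True True`: the taps at n = ±N, ±2N, ... vanish and the center tap is exactly 1, for half-band, quarter-band, and eighth-band alike.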


back to the present:

Since we've both said a lot of things the past couple of posts,  
let me get back to my original assertion. You had said that when  
you upsample 2x with a sinc filter, you get to just copy the  
original points. I countered, *not necessarily*.

YES necessarily, if it's half-band.


Then it's *not necessarily*--you can't have both "YES necessarily" *and*  
the scope-limiting conditional "if"...


you can if you attach the 

Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-28 Thread Nigel Redmon
 "was" is not the same as "is". 


It was designed in. It is in products that I could go buy at guitar center 
today. Anyway, moot point--I've never maintained that it couldn't be done a 
different way, and better. Most of the keyboard samplers ever built use linear 
interpolation. Yuck.

 i don't see why they are (or would be) using windowed-sinc since 
 windowed-sinc is optimal *only* for 2x and IMO you should have more than 2x 
 for guitar distortion.

Maybe you missed that I keep talking about multistage conversion. I've 
mentioned it several times, including my last post where I discussed the 
half-band sinc stages within a larger conversion upwards (an oversampling 
ratio significantly greater than 2... last stage).

 the way i would interpret what you said is that windowed-sinc is exclusively 
 involved in the upsampling leg of distortion modeling of [guitar amps].  can 
 you guarantee that *every* product that you can buy (not necessarily at GC) 
 involves windowed

But this is wrong. The English language does not make such assumptions in the 
absence of limiting qualifiers. You've been reading too many technical 
publications, and are used to limiting qualifiers being tacked onto everything. 
If I said there are green bushes in my yard, it does not mean that every bush 
in my yard is green, or that all bushes that are in yards are green; nor does 
it deny that there might be a non-green bloom here or there on these bushes; 
nor does it mean that I tolerate only green bushes in my yard. Why tack on the adverb "exclusively" 
when it wasn't stated? (I'm sure you realize I could do the same with things 
you've said.)

 i was just trying to tell the lurking community that a windowed-sinc need not 
 be involved


Sure, that's fine. I think I've said it many times over the course of the 
conversation, but if it's not yet clear: *** If anyone thought I said windowed 
sinc was the best choice for sample rate conversion, or that it must be used 
for proper sample rate conversion--NO--I don't think that. ***

Now, my original point was that you seemed to be saying that windowed sinc 
filters let you skip computation for output samples that correspond to an 
original input sample--for half-band that means every other output sample (I 
realize this could be stated differently--no need, I'm sure we both know what I 
am saying here). I disagreed. Since then you've said that you were considering 
only 1/N, with N an integer.  Great--case closed, no longer an issue. We both 
agree that you can't copy it when N is not an integer.

But that still leaves this sub-issue: You seem to imply that a half-band (I'm 
not going to address arbitrary 1/N--we both know that half-band is where you 
get something back that makes it worthwhile using a windowed sinc in the first 
place) is a good choice when you need to convert 2x. (By good, I'm just 
talking about its filtering characteristics.) Judging from how you interpret my 
words, I'm tempted to think that means it's always a good choice (ugh--now I 
feel the need to qualify everything: I'm only talking about the half-band part 
here, not the choice of how the filter was designed)... to be continued...

 if it's a half-band, there is only one place the transition band can be.  it 
 must straddle Nyquist/2.  where it begins and ends is a matter of definition.


Right--what else can we say about it? I'm going to finish this out for the 
benefit of anyone who might not understand what I'm implying here, directed 
more to the general audience than you personally...

In a half-band filter, frequency response is down 6 dB at Fc. Let's look at a 
31-length half-band filter (Kaiser window, ~90 dB rejection), which we might 
use for oversampling by 2x:

http://www.earlevel.com/misc/halfband1.png

The dark gray area, of course, is the image mirrored about the Nyquist 
frequency of the new sample rate--we can ignore that. The red area is the 
portion we are trying to clear--it is where the image mirrored about the old 
Nyquist frequency resides. Again, we assume that we need to clear the whole 
area, within a reasonable requirement, because we don't have prior knowledge 
about what was there to begin with. (For instance, if we know that the original 
signal--before oversampling--was already significantly oversampled, then we can 
relax where the left edge of that red area is. Be forewarned that I'm going to 
stop putting in qualifiers from here on out--I want to get to the point. Yes, 
requirements can be relaxed in our favor significantly if we know something 
more about the original signal, or what's going to happen next. I've mentioned 
this a few times already, from here I'll proceed assuming the worst reasonable 
case.)
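Both properties under discussion can be checked numerically; here is a sketch (my addition) using SciPy's firwin as a stand-in for however the plot linked above was generated, with the length of 31 and ~90 dB Kaiser rejection taken from the text:

```python
# sketch: a 31-tap Kaiser-windowed half-band design, checking the two
# properties discussed: gain is down 6 dB at Fc, and every even-offset
# tap (except the center) is zero.
import numpy as np
from scipy import signal

beta = signal.kaiser_beta(90.0)                      # beta for ~90 dB rejection
h = signal.firwin(31, 0.5, window=("kaiser", beta))  # cutoff at Nyquist/2

w, H = signal.freqz(h, worN=[0.5 * np.pi])           # evaluate response at Fc
print(round(20 * np.log10(abs(H[0])), 2))            # about -6.02 dB

c = len(h) // 2
even_offsets = [abs(h[c + k]) for k in range(2, c + 1, 2)]
print(max(even_offsets) < 1e-12)                     # True: copy-over phase
```

The -6 dB figure follows directly from the half-band symmetry (the response at Fc equals the center tap, which is 0.5), independent of the window choice.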

Anything left in that red area doesn't belong there, and might be a problem. 
I've shaded that portion of that red area that's not sufficiently cleared by 
the half-band filter with a darker red (allowing that 90 dB rejection is good 
enough--an arbitrary choice, for this example 

Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-28 Thread Nigel Redmon
I'm inclined to let you get in the last words, but you did ask me a couple of 
questions...

Yes to the first question.

 so which is more faithful to that maxim...


Sigh--I can't answer that without a lot of words, mostly pointless. My point 
was that I said "is involved", and you say that you interpreted that as "is 
exclusively involved"... but why add that restriction? If I said "i take a pipe 
wrench with me when i crawl under the house to fix the plumbing", you would not 
read that as "i take a pipe wrench and nothing else with me when i crawl under 
the house to fix the plumbing" (LOL--actually I made my son do it, a couple of 
hours ago ;-) I dunno--maybe it's my choice of the word--involved--I guess 
that's often used in a more intimate way. I meant it as "connected by 
participation or association", but some of the other definitions of it connote 
a tighter connection, and maybe that's why you assumed "exclusively". Would it 
have been better if I said, "I can tell you that windowed sinc is used"? Hmm, I 
have a feeling that you might read that as "... is exclusively used", not 
sure...

BTW, another aspect of was or is (used): Consider what happens when a 
company designs something that is successful, and wants to leverage that work. 
For instance, a combo guitar amp with 2 x 12's might become a 1 x 10 practice 
amp with a simpler interface (mostly same DSP, the UI simplified, fewer amp 
models reduced in order to protect the value of its big brother amp, the 
speaker emulation changed to match the new speaker); and it might become a 
separate head, with different choices of speaker cabinets (4 x 10's, 2 12's... 
now you need to handle multiple speakers in the cabinet emulation, the product 
adds wattage, new interface, maybe even more amp models than the original). OK, 
so you get hired to port the basic code, and plug in more (or fewer) amp 
models, update the cabinet emulation to match up with more physical cabinets. 
They've decided to have you add a nice reverb including spring emulation, and 
vintage tape-echo sim. DSPs get faster, not slower, so you can still 
fit all this in, and you're only at 80% load on the DSP chip.

And you notice that the oversampling DSP code is using windowed sinc--you can 
save a lot of cycles by re-spec'ing it with PMcC. You also notice some goofy 
stuff with filters that you can replace with a noise-shaped biquad, or a 
different form of IIR and save some cycles. Great! Because these features are a 
small part of the overall processing, you now have a 75% load on the 
DSP--you've freed up 25% more unused cycles! For this the company will gladly 
pay you... squat. You made unused cycles--so what. Heck, they might not let you 
do it even on your own time. What's the point? The product works, gets rave 
reviews on the sound--you only stand to muck things up, and for no perceived 
benefit.

Look at any given manufacturer's lines of keyboards over the years--Yamaha's 
long run with FM, Kurzweil's K2000 and descendants, Alesis' old line of sampling 
keyboards, Korg...

Now I feel like I'm finger pointing, so a personal example: I wrote Amp Farm in 
what, 1995, 1996? I had to squeeze it into a 56002, running at, uh... slow, I 
forget. I probably spent two months after I wrote the components, just shaving 
things, and rearranging things so I could pre-fetch more data for free (and 
multistage oversampling *is* involved--lol). There was--literally--one or two 
DSP cycles left after allowing for 50 kHz pull-up (mandated by the Digi spec). 
Not three, not more--one or two cycles (I think it was one). Later versions 
added more amp and cabinet emulations. Digi upgraded their hardware, 56300 
family. Room to add convolution cabinets. Room to squeeze in a second instance 
of the plug-in running on a single DSP. More Digi upgrades of software, 
hardware (external chassis, different bus, Windows support, OS X, etc.)--new 
versions of the plug-in.

The original oversampling choices were constrained by the available DSP cycles 
of the 56002. But it never changed. Why should it? The sound was 
successful--any reason to change it? If the change could not be heard it would 
be a waste of time. If it made things sound different, people would be annoyed 
that when they brought up an old mix, it didn't sound the same. For the same 
reason, the simpler, less accurate original filter-based cabinets were not 
replaced by the improved convolution cabs--both remained available, because 
people used those old cabinets.

That's how things can stay the same way for a long time... regardless of 
whether they were the best choice or not, and regardless of whether the 
hardware improved and allowed better choices.



On Dec 28, 2010, at 6:19 PM, robert bristow-johnson wrote:
 
 so was (or is) it Line6 or was it someone else?  (this is my excuse for 
 responding when i said you could have the last word.)
 
 i sayed:
 
 was is not the same as is.
 
 On Dec 28, 2010, at 6:54 PM, Nigel Redmon wrote:
 
 It was 

Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-28 Thread robert bristow-johnson


On Dec 28, 2010, at 11:51 PM, Nigel Redmon wrote:

Would it have been better if I said, "I can tell you that windowed  
sinc is used"? Hmm, I have a feeling that you might read that as  
"... is exclusively used", not sure...


depends on what the meaning of "is" is.  when Bill Clinton, first  
asked by the media about his relationship with Monica Lewinsky, said  
that there is no relationship, was he


 1. telling the truth
or
 2. evading the question
or
 3. trying to deceive
or
 4. digging a deeper hole for himself   ?

turns out to be all four.

your statements (as well as mine) can be received or interpreted in  
ways not meant.


BTW, another aspect of was or is (used): Consider what happens  
when a company designs something that is successful, and wants to  
leverage that work.


one thing that i've been preaching ( http://www.aes.org/events/125/workshops/session.cfm?code=W5 
 ) is to put it into a container.  with an API.


For instance, a combo guitar amp with 2 x 12's might become a 1 x 10  
practice amp with a simpler interface (mostly same DSP, the UI  
simplified, fewer amp models reduced in order to protect the value  
of its big brother amp, the speaker emulation changed to match the  
new speaker); and it might become a separate head, with different  
choices of speaker cabinets (4 x 10's, 2 12's... now you need to  
handle multiple speakers in the cabinet emulation, the product adds  
wattage, new interface, maybe even more amp models than the  
original). OK, so you get hired to port the basic code, and plug in  
more (or fewer) amp models, update the cabinet emulation to match up  
with more physical cabinets. They've decided to have you add a nice  
reverb including spring emulation, and vintage tape-echo sim. DSPs  
get faster, not slower, so you can still fit all this in, and you're  
only at 80% load on the DSP chip.


And you notice that the oversampling DSP code is using windowed  
sinc--you can save a lot of cycles by re-spec'ing it with PMcC.


the spec is the same, but the design is different.  and the savings  
ain't really there for 2x oversampling, but it *is* there for higher  
oversampling.


You also notice some goofy stuff with filters that you can replace  
with a noise-shaped biquad, or a different form of IIR and save some  
cycles. Great! Because these features are a small part of the  
overall processing, you now have a 75% load on the DSP--you've freed  
up 25% more unused cycles! For this the company will gladly pay  
you... squat.


i understand.  it's for the designs that happen later when you might  
need every instruction cycle you can get.


You made unused cycles--so what. Heck, they might not let you do it  
even on your own time. What's the point? The product works, gets  
rave reviews on the sound--you only stand to muck things up, and for  
no perceived benefit.


like reduced aliases.  but we liked the way it sounded with the  
aliases!


Look at any given manufacturer's lines of keyboards over the years-- 
Yamaha's long run with FM, Kurzweil's K2000 and descendants,


i can illustrate something from that...


Alesis' old line of sampling keyboards, Korg...

Now I feel like I'm finger pointing, so a personal example: I wrote  
Amp Farm in what, 1995, 1996? I had to squeeze it into a 56002,  
running at, uh... slow, I forget.


27 MHz?  even though i haven't coded for it in a decade, i still have  
a place in my heart cut out for the ol' 56K.  with that and the 68K  
and later PPC, i used to call myself a Motorola partisan. (my sig used  
to say don't give into the Dark Side.  boycott Intel and Microsoft.   
i should have added TI to the list.)


The original oversampling choices were constrained by the available  
DSP cycles of the 56002. But it never changed. Why should it? The  
sound was successful--any reason to change it? If the change could  
not be heard it would be a waste of time. If it made things sound  
different, people would be annoyed that when they brought up an old  
mix, it didn't sound the same.


for me, that's why new products have different labels and model numbers.

... so here's something i can (or will, whether i may or not) say  
about a previous employer:  they were so hung up on doing things the  
same old way, but with new chips and in a new product, but it had to  
sound just like the old one.  so they never made the necessary  
decision to start afresh with the design when mistakes (or sub-optimal  
design decisions) were holding them back from really innovating  
something new.  they insisted on relying on the old tricks and  
wanted to just continue with newer, cheaper, bigger hardware and more  
tricks.  (BTW, these guys would have been happy for saving 25%  
because they would want to cram something else into it as more  
features, even if it breaks whatever structure you had.  because of  
the hardware limitations and the even worse limitations in their  
vision, 25% freed instruction cycles would be 

Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-27 Thread robert bristow-johnson


On Dec 27, 2010, at 12:35 PM, Nigel Redmon wrote:


Hi Robert,


hi Nigel,


OK, you're missing my point...


i keep restating your point with a different, but equivalent, semantic.

because the math says so.  you should suspect the case that for  
either half-band or for windowed-sinc (where it's sinc(n/2)), that  
every even sample of h[n] is zero (except for h[0]).


I'm saying... Why are you choosing a half-band filter for all cases  
of 2x conversion?


i didn't know i said all cases.  i don't do 2x upsamplers very often  
(in fact i dunno when the last time i've done one, usually for non- 
linear effects like tube emulation or other distortion, i would want  
to upsample by 4x or maybe 8x) but i might very well choose to use a  
windowed-sinc (which, in the case of 2x upsampling is the same as half- 
band) because the computational savings i would get with every other  
sample (those samples that i only need to copy over from the input to  
the output).  even if the design is less optimal than a Parks-McClellan  
or Least Squares design, that computational savings counts  
for something (50% of the output samples in a 2x upsampling context).


It's a good choice in some cases, but you have to accept significant  
aliasing potential in the transition band.


but you can afford to throw more taps at it (because you are big-time  
saving computational effort with 50% of the output samples).  that 20  
tap FIR interpolator designed as a Kaiser-windowed sinc performs about  
as well as an optimally designed 14 tap FIR:



/* Least Squares 2x interpolator */
/* on top of input sample */
h[-7] = 0.00,
-0.0090414415,
0.0171978554,
-0.0269091215,
0.0369055364,
-0.0456166112,
0.0515565722,
h[0] =  0.9463352999,
0.0515565722,
-0.0456166112,
0.0369055364,
-0.0269091215,
0.0171978554,
h[6] =  -0.0090414415,
/* in between sample */
h[-7] = -0.0030296594,
-0.0038903408,
0.0171010588,
-0.0419687338,
0.0878038470,
-0.1864744409,
0.6276836119,
h[0] =  0.6276836119,
-0.1864744409,
0.0878038470,
-0.0419687338,
0.0171010588,
-0.0038903408,
h[6] =  -0.0030296594,



/* Windowed-sinc() 2x interpolator */
/* on top of input sample */
h[-10]= 0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
h[0] =  1.00,
0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
0.00,
h[9] =  0.00,
/* in between sample */
h[-10]= -0.0011544705,
0.0033765956,
-0.0074183471,
0.0140911350,
-0.0245084375,
0.0404299195,
-0.0652868376,
0.1077782601,
-0.1999615141,
0.6324486106,
h[0] =  0.6324486106,
-0.1999615141,
0.1077782601,
-0.0652868376,
0.0404299195,
-0.0245084375,
0.0140911350,
-0.0074183471,
0.0033765956,
h[9] =  -0.0011544705,


don't worry about the acausal impulse response.  for FIRs and a finite  
delay, we can look into the future and see the future samples at  
x[n+7] or x[n+10].


the lesson here is that BOTH methods have what you call "aliasing" but  
what i would call "images".  and while the windowed-sinc (or "half-band",  
if you want to call it that) is less optimal than the LS  
design, FOR THE SAME FIR LENGTH, a longer windowed-sinc can be made to  
look about as good as the LS design.  so why use the windowed-sinc (or  
the half-band) if it takes more taps to perform AS WELL as the least  
square design?  the reason is that for one of the two output samples  
need not be computed as an FIR filter because of the nature of the FIR  
coefficients for that phase or fractional delay.  comparing the  
designs above, to compute 2 output samples, the windowed sinc needs 20  
MAC instructions and the LS needs 28 MAC instructions.
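The 20-versus-28 arithmetic can be made concrete with a toy counter (my sketch; the stand-in tap values below are placeholders, since only the counts matter):

```python
# sketch: count multiply-accumulates per pair of output samples for a 2x
# polyphase upsampler.  A copy phase (single tap of 1.0) costs no MACs;
# every other nonzero tap costs one MAC.
import numpy as np

def phase_cost(h):
    h = np.asarray(h)
    nz = np.flatnonzero(h)
    if len(nz) == 1 and h[nz[0]] == 1.0:
        return 0                          # pure copy of the input sample
    return len(nz)

# windowed-sinc: one copy phase + one 20-tap "in between" phase
copy_phase = np.zeros(20); copy_phase[10] = 1.0
between = np.full(20, 0.1)                # placeholder tap values
print(phase_cost(copy_phase) + phase_cost(between))                 # 20

# Least Squares: two full 14-tap phases
print(phase_cost(np.full(14, 0.1)) + phase_cost(np.full(14, 0.1)))  # 28
```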


now, i dunno if the windowed-sinc is good for all cases of 2x  
oversampling or not, but i suspect, especially as the FIR length gets  
longer (because of tougher stopband or transition band constraints),  
that this 50% savings afforded by the windowed-sinc design gets more  
and more significant.  but we would have to run the numbers and  
design both to see.


however, i am confident that when oversampling by 4x or more, the  
windowed-sinc design will perform less efficiently because the  
windowed-sinc is *less* optimal than an optimal design using P-McC or  
LS, and the savings you get with 1 out of 4 samples (or more) is less  
attractive because of the additional cost you get in the other 3 (or  
more) out of 

Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-26 Thread robert bristow-johnson


hi Nigel and others,

i was not able to deal with this Xmas Eve response because of the  
holidaze activities (i do some cooking).  anyway, when i saw this i  
figgered i had to fire up an old MATLAB program to make a concrete  
example of what i was trying to say.  the results of running this  
program, once using an optimized filter design (least squares) and the  
other using a (Kaiser) windowed design, are appended below.  i tried  
to spec the stopband gain to be about the same (-60 dB) and adjust the  
number of FIR taps (14 for LS and 20 for Kaiser window) *per*  
fractional delay so that the performance of the two designs were  
roughly comparable.  for both cases, i have set the upsample ratio to  
be 8.  this means there are 8 different sets of coefficients (of  
either 14 or 20 coefs in each set) for one of eight different (equally  
spaced) fractional delays.


now, here is the point.  the only point i was trying to make, and even  
after trying to resolve semantic differences, this point seems to get  
muddied.


1.  to do interpolation or SRC using the polyphase FIR method, you  
don't need to design it using a sinc() function, windowed or not.  but  
you are not prohibited from designing it with a windowed-sinc  
function.  if you *do* use a windowed-sinc, probably a Kaiser window  
is best.  if you *don't* use a windowed-sinc, there are other methods  
such as polynomial interpolation, or optimized LPF designs (i got  
Least Squares and Parks-McClellan in my toolbox).  in the operation of  
the upsampling SRC, the integer portion of your output pointer  
determines which set of adjacent samples to use in the FIR calculation  
and the fractional part determines which set of coefficients to use.   
all 8 fractional delays assume they're looking at the same set of 14  
or 20 samples of audio to calculate 8 different output samples before  
moving on to the next input sample frame.
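that polyphase bookkeeping can be sketched in a few lines of numpy.  to keep the example self-checking, the coefficient bank below is just a toy 2-tap linear-interpolation bank (the function and names are illustrative, not from the MATLAB program above), but the same indexing applies to the 14- or 20-tap LS/Kaiser banks:

```python
import numpy as np

def make_linear_bank(L):
    # toy 2-tap coefficient bank implementing linear interpolation:
    # phase p weights samples x[n], x[n+1] by (1 - p/L, p/L)
    return np.array([[1.0 - p / L, p / L] for p in range(L)])

def polyphase_interp(x, bank, t):
    # integer part of the output pointer -> which input samples to use;
    # fractional part -> which of the L coefficient sets to use
    L, ntaps = bank.shape
    n = int(np.floor(t))
    phase = int(round((t - n) * L))
    if phase == L:              # fraction rounded up to the next input sample
        n, phase = n + 1, 0
    return float(np.dot(bank[phase], x[n : n + ntaps]))

x = np.array([0.0, 2.0, 4.0, 6.0])
bank = make_linear_bank(8)
print(polyphase_interp(x, bank, 1.0))   # 2.0 (lands on an input sample)
print(polyphase_interp(x, bank, 1.5))   # 3.0 (halfway between 2 and 4)
```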


2.  a windowed-sinc (doesn't matter what the window is) or polynomial  
interpolation functions like Lagrange or Hermite all share the  
property that the interpolated signal will go directly through the  
original points or samples.  this means the implied impulse response  
for all of these goes through 1 at t=0 and goes through 0 for t=n  
where n is an integer not zero.  for this reason, you can describe  
Lagrange or Hermite polynomial interpolation as a form of windowed- 
sinc.  you take their implied impulse response, divide by the sinc()  
function and what is left is an implied window function.
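as a concrete check of that divide-by-sinc idea: linear interpolation has a triangular impulse response on |t| < 1, and dividing it by sinc() numerically recovers its implied window (the 0/0 at t = 0 is a removable limit, dodged here by the choice of grid; this is just an illustration, not from the MATLAB program):

```python
import numpy as np

# impulse response of linear interpolation: unit triangle on |t| < 1
t = np.linspace(-0.999, 0.999, 9)
tri = np.maximum(0.0, 1.0 - np.abs(t))

# np.sinc is the normalized sinc, sin(pi t)/(pi t)
implied_window = tri / np.sinc(t)

# the implied window is exactly 1 at t = 0, as it must be for any
# interpolator that passes through the original sample points
print(implied_window[4])   # 1.0
```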


3.  because any of these windowed-sinc interpolation kernels go  
through the original sample points, then for the times where your  
interpolated output is evaluated at precisely the same times as the  
original input, you only need to copy the input to the output.  you  
need not crunch an FIR summation that has all zeros for the  
coefficients, except for one coefficient that is 1.  note, at the  
bottom of this post, that for the windowed inverse-FFT design method  
(file src_coef2.c), for the first phase or fractional delay, all of  
the coefs are zero except for one coef.  it has to be this way no  
matter what the window is, because it's the sinc() function that goes  
through zero at every integer sample time except t=0.
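this is easy to verify numerically.  a sketch (the Kaiser beta and tap counts below are arbitrary stand-ins, not the coefficients from src_coef2.c):

```python
import numpy as np

L = 8                        # upsample ratio: 8 fractional delays
taps_per_phase = 20
N = L * taps_per_phase + 1   # odd length so one tap lands exactly at t = 0

k = (np.arange(N) - (N - 1) // 2) / L   # time axis in input-sample units
h = np.sinc(k) * np.kaiser(N, 8.0)      # Kaiser-windowed sinc (beta arbitrary)

# the phase whose fractional delay is exactly an integer takes every
# L-th coefficient; its taps sit on integer values of k
phase0 = h[0::L]

# every coefficient is (numerically) zero except the center one, which
# is exactly 1 -- sinc(n) = 0 at every nonzero integer n, for ANY window
print(np.count_nonzero(np.abs(phase0) > 1e-12))   # 1
```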


4.  note also that, for the least squares design at the fractional  
delay of exactly an integer, although there is a pleasing symmetry in  
the coefs, they do *not* follow what the windowed-sinc coefs do (all  
but one are zero).  but note that the least squares design doesn't  
need as many taps as the windowed-sinc to get approximately the same  
job (of obliterating the images) done.


so here is what it means.  if you were upsampling by only a factor of  
two, the windowed sinc might be better (more efficient) because, even  
though the number of taps is more (20 vs. 14 per output sample), you  
need not compute the FIR for the output sample that lands directly on  
an input sample because of the nature of the coefficients.  you need  
only compute the FIR for the sample that is precisely half-way between  
the two neighboring input samples. so for every input sample  
(resulting in two output samples), you copy one sample from input to  
output and compute the other costing 20 FIR taps.  if you used the LS  
design instead, you would need to compute it for both fractional  
delays, including the delay that is exactly an integer.  it would cost  
2*14=28 taps, which is slightly worse.


but, as you can see, it's not so good for an upsample ratio that is  
higher.  if it were 4x oversampling, then you would have to compare  
3*20=60 taps for the windowed-sinc vs. 4*14=56 taps for the LS.  for  
8x oversampling the windowed-sinc design is even more costly in  
comparison: 7*20=140 vs. 8*14=112 for least squares.
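the arithmetic above generalizes to any ratio.  a trivial helper (the tap counts are just the example figures from these two designs):

```python
def macs_per_input_sample(ratio, taps_ws=20, taps_ls=14):
    """MAC cost per input sample frame for the two designs in the text.

    windowed-sinc: one of `ratio` output samples is a plain copy (its
    coefficient set is a delta), so only ratio-1 FIRs are computed.
    least-squares: every output phase needs a full FIR.
    """
    return (ratio - 1) * taps_ws, ratio * taps_ls

print(macs_per_input_sample(2))   # (20, 28)  -> windowed-sinc wins
print(macs_per_input_sample(4))   # (60, 56)  -> least-squares wins
print(macs_per_input_sample(8))   # (140, 112)
```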


so we both agree that for 2x upsampling, windowed-sinc looks good if  
and only if you take advantage of the fact that for one of every two  

Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-23 Thread Nigel Redmon
Somehow, we are talking about different things maybe? In what I'm talking 
about, the key is that n is not an integer, for most practical filters used for 
this purpose.

Robert, when I said only for half-band, I was talking about a specific case, 
not general (in the context of the earlier discussion--or at least how I 
remembered it--I thought at some point you mentioned every-other sample, but 
maybe not). I considered stopping and rewording it for the general case, but I 
really didn't have the time. If it's quarter-band, you have a zero every four 
coefficients (and you get to skip one-fourth of the calculations); third-band, 
every three (and you get to skip one-third of the calculations), etc. If it's 
for sample rate conversion and you're using windowed-sinc, you are probably 
using a factor of two as much as possible, three if necessary, if you're trying 
to take advantage of the zeros in a multirate system. That's why I limited my 
comments to half-band.

But if it's not an integer, you don't have the zeros, and can't copy the 
original samples. And in the general case, using a windowed sinc filter to 
oversample or resample at an integer ratio, you don't use an integer ratio on 
the filter. Am I saying something wrong here?

Let's talk about 2x, to keep it simple--this is also the ratio that gives you 
the biggest efficiency gain. So, you might use this filter when resampling to 
half the current sample rate. Say we have an 88.2 kHz file we want to resample 
to 44.1 kHz.

If you set your filter to half of Nyquist--22050 Hz in this case--every other 
sample is zero. But you're only down 6 dB at that point--you *will* have 
aliasing (assuming you recorded something that had frequency components just 
above 22050 Hz--the general case). You can minimize it by using a longer 
filter, but it will alias--it just won't extend as far into the band of 
interest before hitting the stop level. OTOH, you can move the filter corner 
down fractionally, dropping the 6 dB point below the new Nyquist, and get 
aliasing rolled off to an arbitrary target (say, -90 dB, -120 dB, whatever) by 
the new Nyquist frequency (22050 Hz). But you will not have a zero every other 
sample, and you cannot just copy the original samples.

Make sense?
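the -6 dB figure is a property of the half-band symmetry itself and is easy to confirm numerically.  a sketch (the filter length and Kaiser beta below are arbitrary choices for illustration):

```python
import numpy as np

fs, fc = 88200.0, 22050.0   # input rate and its quarter (the new Nyquist)
N = 101                     # arbitrary odd filter length

n = np.arange(N) - (N - 1) // 2
h = (2 * fc / fs) * np.sinc(2 * fc / fs * n) * np.kaiser(N, 8.0)

# evaluate the frequency response exactly at fc: a half-band windowed
# sinc passes exactly half amplitude there, regardless of the window
w = 2 * np.pi * fc / fs
H = np.sum(h * np.exp(-1j * w * n))
print(20 * np.log10(abs(H)))   # about -6.02 dB
```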


On Dec 23, 2010, at 10:18 AM, robert bristow-johnson wrote:
 
 On Dec 23, 2010, at 12:46 PM, Nigel Redmon wrote:
 
 On Dec 21, 2010, at 8:36 PM, robert bristow-johnson wrote:
 and trying to point to an obvious advantage to any windowed sinc (that you 
 don't have to compute the FIR when the output sample lands squarely on top of 
 an input sample when all you need to do is copy the sample)...
 
 Not sure if you caught my earlier comment, Robert, but what you say here is 
 only true of half-band filters.
 
 no, it's true for *any* windowed-sinc polyphase FIR resampler.  and it's true 
 because sinc(n) = 0 for any integer n ≠ 0.
 
 it has nothing to do with half-band symmetry, but in the case where you're 
 upsampling by 2, you *will* get half-band symmetry and then it's every other 
 output sample that has zero FIR coefficients (except for a single coef that 
 is 1).
 
 but consider the use of a windowed-sinc kernel where you are upsampling by a 
 factor of 4, then one out of 4 of the output samples may be merely copied, 
 but the other 3 of 4 must go through the FIR calculation with non-zero (and 
 non-unity) coefficients.
 
 (If I understand what you're saying correctly?) That is, if you don't want 
 significant aliasing in the general case, your filter has to be set a bit 
 lower, and in that case it's filtering those original points, so you can't 
 just copy them.
 
 the general issue of aliasing has to do with the design of the FIR low-pass 
 impulse response or kernel.  some designs are better than others for 
 satisfying the specs.  if you do not require your interpolated signal to go 
 through the original sample points, then you need not restrict yourself to a 
 windowed-sinc kernel and can do better with Parks-McClellan or Least Squares 
 design of the FIR low-pass kernel.  but then you lose that one little 
 advantage of a windowed-sinc: that when the interpolated output lands precisely 
 on the input sample, that you just copy the input sample.  that advantage 
 counts for something when upsampling by 2 because then it is every other 
 output sample that *does* land directly on top of an input sample.  you gain 
 a 50% computational savings by just copying that input sample over.
 
 BTW, because Lagrange or Hermite polynomial interpolation also goes directly 
 through each of the input sample points (just as windowed-sinc does), you can 
 take the implied impulse response and divide that by the sinc() function 
 (dealing with the 0/0 limit as a removable singularity) and come up with an 
 implied window.  so, in a sense, Lagrange or Hermite polynomial-based 
 interpolation is a form of windowed-sinc, it's just that you don't know 
 exactly what the window is, in advance.
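for reference, a 4-point (cubic) Lagrange fractional-delay interpolator looks like this (a generic textbook form, not anyone's production code).  note that at frac = 0 the weights collapse to (0, 1, 0, 0), which is exactly the copy-the-sample property being discussed:

```python
import numpy as np

def lagrange4(x, n, frac):
    # cubic Lagrange weights for nodes at -1, 0, 1, 2, evaluated at frac
    d = frac
    c = np.array([
        -d * (d - 1) * (d - 2) / 6.0,
        (d + 1) * (d - 1) * (d - 2) / 2.0,
        -(d + 1) * d * (d - 2) / 2.0,
        (d + 1) * d * (d - 1) / 6.0,
    ])
    return float(np.dot(c, x[n - 1 : n + 3]))

x = np.arange(8, dtype=float) ** 3      # any cubic is reproduced exactly
print(lagrange4(x, 2, 0.0))             # 8.0 (copies x[2])
print(lagrange4(x, 2, 0.5))             # 15.625 = 2.5**3
```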
 
 --
 
 r b-j

Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread Peter Schoffhauzer

andy butler wrote:
One thing I'm intrigued by is the notion that when downsampling it may 
be possible to remove the aliasing created by the downsampling itself 
within the interpolation filter.
( so far, I don't see it's possible)


Normally, the signal is lowpass filtered before downsampling. This way 
it's possible to - in theory - totally remove aliasing.


- Peter
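that filter-then-discard order of operations, as a bare-bones sketch (the filter length and Kaiser beta are arbitrary, and a real resampler would fold the decimation into the filter so the discarded outputs are never computed):

```python
import numpy as np

def resample_down2(x):
    # half-band lowpass at the new Nyquist (arbitrary length and beta)
    N = 63
    n = np.arange(N) - (N - 1) // 2
    h = 0.5 * np.sinc(0.5 * n) * np.kaiser(N, 8.0)
    y = np.convolve(x, h, mode="same")   # band-limit first...
    return y[::2]                        # ...then drop every other sample
```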


Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread Nigel Redmon
Hi Andy,

Actually, aliasing shouldn't be created by the downsampling. (well, now a 
little paranoia that I just got out of bed and I'm going to say something 
dumb--lol) OK, downsampling is often considered just the portion of the rate 
reduction that discards samples, so of course that is subject to aliasing if 
the signal is not sufficiently band-limited, but I assume you're talking about 
resampling--limiting the bandwidth, then dropping the rate.

As long as you lowpass filter the signal first, then you're only liable for 
aliasing of any signal left above half of the Nyquist frequency. And how much 
is left is a function of how much computation and delay you can tolerate, and 
how good your choices are. Play with the coefficient calculator here to get an 
idea of what it takes for a straight-forward windowed-sinc filter:

http://www.earlevel.com/main/2010/12/05/building-a-windowed-sinc-filter

I just put up an example of downsampling a sine sweep, with and without 
filtering:

http://www.earlevel.com/main/2010/12/20/sample-rate-conversion-down

As for things like distortion modeling of guitars, I can tell you that windowed 
sinc is involved, at least on the upsampling leg where you likely want to 
preserve phase.

Nigel


On Dec 21, 2010, at 4:21 AM, andy butler wrote:
 I've learned a great deal from the recent discussions
 here about interpolation.
 (particularly, that my own implementation using cubic lagrange  is actually 
 no different to a low pass filter, although
 the algorithm is merely based on calculating the cubic
 that fits 4 points. )
 
 The original post that started the discussion asked about wavetable synth 
 applications, and added the disclaimer "not limited to that".
 
 Would there be a different answer for the choice of
 methods for use in real time applications?
 e.g. distortion modelling for guitars,
time domain pitch shifting.
 
 One thing I'm intrigued by is the notion that when downsampling it may be 
 possible to
 remove the aliasing created by the downsampling itself
 within the interpolation filter.
 ( so far, I don't see it's possible)
 
 
 andy butler



Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread robert bristow-johnson


On Dec 21, 2010, at 11:22 AM, Nigel Redmon wrote:

As for things like distortion modeling of guitars, I can tell you  
that windowed sinc is involved, at least on the upsampling leg where  
you likely want to preserve phase.

...
As long as you lowpass filter the signal first, then you're only  
liable for aliasing of any signal left above half of the Nyquist  
frequency. And how much is left is a function of how much  
computation and delay you can tolerate, and how good your choices  
are. Play with the coefficient calculator here to get an idea of  
what it takes for a straight-forward windowed-sinc filter:


http://www.earlevel.com/main/2010/12/05/building-a-windowed-sinc-filter




one thing i might point out is that, when comparing apples-to-apples,  
an optimal design program like Parks-McClellan (firpm() in MATLAB) or  
Least-Squares (firls()) might do better than a windowed (i presume  
Kaiser window) sinc in most cases.  this is where you are trying to  
impose the same stopband attenuation constraints and looking to  
minimize the total number of FIR taps.


now, to be fair in the case of simple upsampling, the windowed-sinc  
need not compute the FIR for the fractional delay of zero (or it's  
really a delay of precisely an integer number of samples), because all  
of the FIR tap coefs are zero except for the tap of the sample that  
your fractional delay lands squarely on (which has a coefficient of 1).   
so all you need to do is copy that sample over from the input stream  
to the output.  this savings is most effective when the upsample ratio  
is 2 (as in Nigel's example).  then you save 50% of your computational  
effort because every other output sample is simply copying the input  
sample.  that computational savings (as a percentage) is less for  
higher upsampling ratios.  like for an upsample ratio of 4, then 3 out  
of 4 output samples must be computed with the polyphase FIR and only  
one sample can be simply copied over.


now for the Parks-McC or Least Squares design, it is *not* the case  
for *any* fractional delay that all coefs are zero save 1.  that means  
that, unlike the windowed sinc, the interpolated signal will not go  
through the input points exactly (it will be filtered a little).  so  
you have to run the FIR on every phase or fractional delay, even if  
the output sample time lands exactly on the input sample time.
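to see this concretely, one can roll a small least-squares design directly (a generic LS lowpass fit standing in for firls(); the grid and band edges are arbitrary):

```python
import numpy as np

R = 8                        # upsample ratio
N = R * 14 + 1               # roughly 14 taps per phase, odd total length
M = (N - 1) // 2

# least-squares fit of a linear-phase FIR to the ideal interpolator:
# gain R in the passband, 0 in the stopband, transition band don't-care
w = np.linspace(0, np.pi, 2048)
pb, sb = w < 0.9 * np.pi / R, w > 1.1 * np.pi / R
grid = pb | sb
target = np.where(pb, float(R), 0.0)[grid]

# zero-phase amplitude of a symmetric FIR: a[0] + 2*sum_m a[m] cos(m w)
A = np.cos(np.outer(w[grid], np.arange(M + 1)))
A[:, 1:] *= 2.0
a, *_ = np.linalg.lstsq(A, target, rcond=None)
h = np.concatenate([a[::-1], a[1:]])    # full symmetric impulse response

# the integer-delay phase: for a windowed sinc it would be a delta,
# but the LS design trades that away for a better stopband
phase0 = h[M % R :: R]
print(np.count_nonzero(np.abs(phase0) > 1e-6))   # several nonzero taps
```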


but for a given constraint (how tight the transition from passband to  
stopband is, and how far down the stopband is pushed), P-McC or LS  
will do better (fewer taps needed).  i think, in my experience, it was  
only for an upsample ratio of 2 that a Kaiser-windowed sinc (assuming  
i copied, not calculated, every other output sample) was cheaper than  
P-McC or LS.  once i got to an upsample ratio of, say, 4, the  
increased number of taps i needed for the Kaiser-windowed sinc on the  
3 samples i had to compute cost more than the savings i got from not  
needing to compute the 1 sample i was copying.  i cannot remember the  
specs of the filter, but i can look it up, i s'pose.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.






Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread Bogac Topaktas
 Would there be a different answer for the choice of
 methods for use in real time applications?
 e.g. distortion modelling for guitars,
  time domain pitch shifting.

Phase compensated polyphase IIR:

http://www.wseas.us/e-library/conferences/crete2001/papers/473.pdf




Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread Ross Bencina

robert bristow-johnson wrote:
one thing i might point out is that, when comparing apples-to-apples,  an 
optimal design program like Parks-McClellan (firpm() in MATLAB) or 
Least-Squares (firls()) might do better than a windowed (i presume  Kaiser 
window) sinc in most cases.  this is where you are trying to  impose the 
same stopband attenuation constraints and looking to  minimize the total 
number of FIR taps.


It's been posted here before, but there is an extensive introduction to the 
trade-offs of different interpolator design methods including those Robert 
mentions in:


Timo I. Laakso, Vesa Välimäki, Matti Karjalainen, and Unto K. Laine,
Splitting the unit delay - tools for fractional delay filter design,
IEEE Signal Processing Magazine, vol. 13, no. 1, January 1996.
http://signal.hut.fi/spit/publications/1996j5.pdf
http://www.acoustics.hut.fi/software/fdtools/

Ross. 




Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread robert bristow-johnson


On Dec 21, 2010, at 10:45 PM, Ross Bencina wrote:


robert bristow-johnson wrote:
one thing i might point out is that, when comparing apples-to- 
apples,  an optimal design program like Parks-McClellan (firpm() in  
MATLAB) or Least-Squares (firls()) might do better than a windowed  
(i presume  Kaiser window) sinc in most cases.  this is where you  
are trying to  impose the same stopband attenuation constraints and  
looking to  minimize the total number of FIR taps.


It's been posted here before, but there is an extensive introduction  
to the trade-offs of different interpolator design methods including  
those Robert mentions in:


Timo I. Laakso, Vesa Välimäki, Matti Karjalainen, and Unto K. Laine,
Splitting the unit delay - tools for fractional delay filter design,
IEEE Signal Processing Magazine, vol. 13, no. 1, January 1996.
http://signal.hut.fi/spit/publications/1996j5.pdf
http://www.acoustics.hut.fi/software/fdtools/


yup.  i was only comparing polyphase FIR methods that are exactly the  
same, except for the coefficients they may come up with.  and trying  
to point to an obvious advantage of any windowed sinc (that you don't  
have to compute the FIR when the output sample lands squarely on top of  
an input sample, when all you need to do is copy the sample) relative to some  
optimized method that isn't a windowed sinc (which means you *do* have  
to compute the FIR even when the output sample time is the same as the  
input).


this Splitting the Unit Delay paper is to interpolation, resampling,  
and precision delay as Fred Harris' paper on windowing is to windows.   
both are timeless classics.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



