Re: [music-dsp] tracking drum partials

2017-08-06 Thread Thomas Rehaag

sorry for the late reply. Half of my clients actually need help.

@Theo: I'm not trying to rebuild natural drum sounds. I just tried to get
a bit more insight into drum sounds, since I'm not quite content with the
missing overtones in electro drum synthesis.

@Ian & Corey: thanks for pointing me to the parametric estimation method.

Hope I'll find some hours in the next weeks to have a look at all the
new information and to go on with this.

Best Regards,

Thomas

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] tracking drum partials

2017-07-28 Thread Thomas Rehaag

see below.


 Original Message 
Subject: Re: [music-dsp] tracking drum partials
From: "Thomas Rehaag" <develo...@netcologne.de>
Date: Thu, July 27, 2017 4:02 pm
To: music-dsp@music.columbia.edu
--

>
> @Robert:
> I didn't quite get "for analysis, i would recommend Gaussian windows
> because each partial will have it's own gaussian bump in the frequency
> domain ..."
> Looks like you wouldn't just pick up the peaks like I do.
>

well, i *would* sorta, but it's not always as simple as that.

...


> Next the highest peaks are taken from every FFT and then tracks in time
> are built.
>
> And it looks like I've just found the simple key to better results
> after putting the whole thing aside for 3 weeks.
> It's simply to use samples from drums that have been hit softly.
> Otherwise every bin of the first FFTs will be crossed by 2 or 3 sweeps,
> which leads to lots of artifacts.

are you using MATLAB/Octave?  you might need to think about  fftshift() .

suppose you have a sinusoid that goes on forever and you use a
**very** long FFT and transform it.  if you can imagine that very long
FFT as approximating the Fourier Transform, you will get better than a
"peak", you will get a *spike* at +/- f0, the frequency of that
sinusoid (the "negative frequency" spike will be at N-f0).  in the
F.T., it's a pair of dirac impulses at +/- f0.  then when you multiply by
a window in the time domain, that is convolving with the F.T. of the
window in the frequency domain.  i will call that F.T. of the window
the "window spectrum".  a window function is normally a low-pass kinda
signal, which means the window spectrum will peak at a frequency of 0.
convolving that window spectrum with a dirac spike at f0 simply
moves that window spectrum to f0.  so it's no longer a spike, but a
more rounded peak at f0.  i will call that "more rounded peak" the
"main lobe" of the window spectrum.  and it is what i meant by the
"gaussian bump" in the previous response.


now most (actually all, to some degree) window spectra have sidelobes 
and much of what goes on with designing a good window function is to 
deal with those sidelobe bumps.  because the sidelobe of one partial 
will add to the mainlobe of another partial and possibly skew the 
apparent peak location and peak height.  one of the design goals 
behind the Hamming window was to beat down the sidelobes a little.  a 
more systematic approach is the Kaiser window which allows you to 
trade off sidelobe height and mainlobe width.  you would like *both* a 
skinny mainlobe and small sidelobes, but you can't get both without 
increasing the window length "L".


another property that *some* windows have that is of interest to the
music-dsp crowd is that of being (or not) complementary.  that is,
adjacent windows add up to 1.  this is important in **synthesis** (say in
the phase vocoder), but is not important in **analysis**.  for
example, the Kaiser window is not complementary, but the Hann window
is.  so, if you don't need complementary, then you might wanna choose
a window with good sidelobe behavior.
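as a quick illustration of "complementary" (a sketch added here for clarity, not from the original mail): the periodic Hann window at 50% overlap sums exactly to 1, which is the property overlap-add synthesis relies on.

```cpp
#include <cmath>
#include <vector>

// Periodic Hann window of length n. Adjacent copies offset by hop = n/2
// sum to exactly 1, which is what "complementary" buys you for
// overlap-add synthesis (e.g. in a phase vocoder).
std::vector<double> hannPeriodic(long n)
{
    const double PI = 3.14159265358979323846;
    std::vector<double> w(n);
    for (long i = 0; i < n; ++i)
        w[i] = 0.5 * (1.0 - std::cos(2.0 * PI * i / n));
    return w;
}
```

(the "periodic" variant divides by n, not n-1; the symmetric variant does not sum to a constant at 50% overlap.)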


the reason i suggested Gaussian over Kaiser was just sorta knee-jerk. 
 perhaps Kaiser would be better, but one good thing about the Gaussian 
is that its F.T. is also a Gaussian (and there are other cool 
properties related to chirp functions).  a theoretical Gaussian window 
has no sidelobes.  so, if you let your Gaussian window get extremely 
close to zero before it is truncated, then it's pretty close to a 
theoretical Gaussian and you need not worry about sidelobes and the 
behavior of the window spectrum is also nearly perfectly Gaussian and 
you can sometimes take advantage of that.  like in interpolating 
around an apparent peak (at an integer FFT bin) to get the precise 
peak location.
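in C++ terms, that interpolation step might look like the sketch below (names are made up): a quadratic fit through three log-magnitude bins around the apparent peak. for a truly Gaussian window spectrum, the log magnitude is exactly a parabola, so the recovered fractional offset is exact.

```cpp
#include <cmath>

// Quadratic (parabolic) interpolation around an FFT magnitude peak.
// k : integer bin index of a local maximum
// m : magnitude spectrum; m[k-1], m[k], m[k+1] must all be > 0
// Returns a fractional offset d in (-0.5, 0.5); the refined peak
// location is k + d. Exact when the main lobe is Gaussian, because
// the log magnitude of a Gaussian is a parabola.
double peakOffset(const double* m, long k)
{
    double a = std::log(m[k - 1]);
    double b = std::log(m[k]);
    double c = std::log(m[k + 1]);
    return 0.5 * (a - c) / (a - 2.0 * b + c);
}
```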


now you probably do not need to get this anal-retentive about it, but 
if you want to, you can look at this:
https://www.researchgate.net/publication/3927319_Intraframe_time-scaling_of_nonstationary_sinusoids_within_the_phase_vocoder 

and i might have some old MATLAB code to run for it, if you want it 
(or anyone else), lemme know.



big thanks for the elaborate explanation!
Looks like you've turned my head in the right direction.
Had a look at the spectrum of my window: a 4096 Hann window in the middle
of 64k samples.

A big sidelobe party!
A 64k Gaussian window that just sets priority to an area of ~4096
samples will of course fix that.
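Such a window might be sketched as follows (a rough sketch; sigma is a free choice, here ~1024 samples so that roughly +/-2 sigma, about 4096 samples, carry most of the weight):

```cpp
#include <cmath>
#include <vector>

// Gaussian window of length n (e.g. 65536), centered, with standard
// deviation sigma in samples. With sigma << n the tails reach
// practically zero long before the buffer edges, so truncation
// sidelobes stay negligible and the spectrum is nearly a true Gaussian.
std::vector<double> gaussianWindow(long n, double sigma)
{
    std::vector<double> w(n);
    double center = 0.5 * (n - 1);
    for (long i = 0; i < n; ++i) {
        double t = (i - center) / sigma;
        w[i] = std::exp(-0.5 * t * t);
    }
    return w;
}
```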


Btw.: no Matlab here. C++ only.

Best Regards,

Thomas





[music-dsp] tracking drum partials

2017-07-26 Thread Thomas Rehaag

Dear DSP Experts,

can anybody tell me how to track drum partials? Is it even possible?
What I'd like to have are the frequency & amplitude envelopes of the 
partials so that I can rebuild the drum sounds with additive synthesis.


I've tried it with heavily overlapping FFTs and then building tracks 
from the peaks.
Feeding the results into the synthesis (~60 generators) produced halfway
acceptable sounds, of course only after playing with FFT and overlap
step sizes for a while.
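A sketch of what "building tracks from the peaks" might look like (illustration only; the names are made up and the STFT peak-picking front end is omitted). Each peak in the current frame continues the live track whose last frequency is nearest, if the jump is small enough; otherwise it starts a new track. This is the classic McAulay-Quatieri-style matching in its simplest form:

```cpp
#include <cmath>
#include <vector>

struct Peak  { double freq; double amp; };
struct Track { std::vector<Peak> points; long lastFrame; };

// Greedy frame-to-frame partial tracking. A track already extended in
// this frame has lastFrame == frameIndex and is skipped, so no track
// can absorb two peaks from the same frame.
void extendTracks(std::vector<Track>& tracks,
                  const std::vector<Peak>& framePeaks,
                  long frameIndex, double maxJump)
{
    for (const Peak& p : framePeaks) {
        Track* best = nullptr;
        double bestDist = maxJump;
        for (Track& t : tracks) {
            if (t.lastFrame != frameIndex - 1) continue;  // only live tracks
            double d = std::fabs(t.points.back().freq - p.freq);
            if (d < bestDist) { bestDist = d; best = &t; }
        }
        if (best) {
            best->points.push_back(p);
            best->lastFrame = frameIndex;
        } else {
            tracks.push_back({{p}, frameIndex});  // birth of a new track
        }
    }
}
```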


But those envelopes were strange, and it was very frustrating to see the
results when I analyzed a synthesized sound containing some simple sine
sweeps this way.
I got a good result for the loudest sweep, but the rest was scattered
into short signals with strange frequencies.


Large FFTs have the resolution to separate the partials but a bad
resolution in time, so you don't even see the higher partials, which are
gone within a short part of the buffer.
With small FFTs, every bin is crowded with several partials. And every
kind of mask adds more artifacts the smaller the FFT is.


Also tried BP filter banks. Even worse!
It's always resolution in time and resolution in frequency fighting each
other too much for this subject.


Any solution for this?

Best Regards,

Thomas




Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread Thomas Rehaag
In the time domain I'd try to mask the whole sample, e.g. with a triangle
window, then repeat it half-overlapped.

(Or was this already one of your "various forms of crossfading"?)
Maybe even add the signal to its reversed copy to get constant
power before overlapped playback.

But that's theory.
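The triangle-window idea can be sketched like this (names hypothetical): weight the grain with a triangular window and overlap-add copies at half the grain length. Adjacent triangles sum to a constant, so a steady input produces a steady, click-free loop:

```cpp
#include <cmath>
#include <vector>

// Loop a grain seamlessly: weight it with a periodic triangular window
// (0 at the ends, 1 in the middle) and overlap-add copies at
// hop = n/2. Adjacent windows sum to exactly 1.
std::vector<float> loopTriangleOLA(const std::vector<float>& grain,
                                   long numCopies)
{
    long n = (long)grain.size();   // assumed even
    long hop = n / 2;
    std::vector<float> out(hop * (numCopies + 1), 0.0f);
    for (long c = 0; c < numCopies; ++c) {
        for (long i = 0; i < n; ++i) {
            float w = 1.0f - std::fabs(2.0f * i / n - 1.0f);
            out[c * hop + i] += grain[i] * w;
        }
    }
    return out;
}
```

(the edges of the output, before the first full overlap and after the last, still carry the fade-in/fade-out of a lone triangle.)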

In the frequency domain I got quite a good result in a similar experiment
with an FFT/overlap-add, multiplying the frequency bins by
(1 + 1/10*(random noise)).
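That bin-jitter step might look like this (illustration only; the FFT/overlap-add framework around it is omitted, and the function name is made up):

```cpp
#include <complex>
#include <cstdlib>
#include <vector>

// Jitter each complex bin's magnitude by up to +/- depth (e.g. 10%).
// Successive frames then differ slightly, which breaks the exact
// repetition that makes a frozen loop sound static.
// rand() is used purely for brevity.
void jitterBins(std::vector<std::complex<float>>& bins, float depth)
{
    for (auto& b : bins) {
        float r = 2.0f * std::rand() / (float)RAND_MAX - 1.0f;  // in [-1, 1]
        b *= (1.0f + depth * r);
    }
}
```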


Best Regards,

Thomas


Am 16.09.2016 um 17:48 schrieb Spencer Jackson:

Hi all:

First post on the list. Quite some time ago I set out to create a lv2
plugin re-creation of the electroharmonix freeze guitar effect. The
idea is that when you click the button it takes a short sample and
loops it for a drone like effect, sort of a granular synthesis
sustainer thing. (e.g. https://youtu.be/bPeeJrv9wb0?t=58)

I use an autocorrelation-like function to identify the wave period but
on looping I always have artifacts rather than a smooth sound like the
original I'm trying to emulate. I've tried some compression to get a
constant rms through the sample, tried various forms of crossfading,
tried layering several periods, and many combinations of these. I
ended up releasing it using 2 layers, compression, and a 64 sample
linear crossfade, but I've never been satisfied with the results and
have been trying more combinations. It works well on simple signals
but on something not perfectly periodic like a guitar chord it always
has the rhythmic noise of a poor loop.

I'm hoping either someone can help me find a bug in the code that's
spoiling the effect or a better approach. I've considered applying
subsample loop lengths, but I don't think that will help. The next
thing I could think of is taking the loop into the frequency domain
and removing all phase data so that it becomes a pure even function
which should loop nicely and still contain the same frequencies. I
thought I'd ask here for suggestions though, before spending too much
more time on it.
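For what it's worth, the zero-phase idea from the paragraph above can be sketched in a few lines (illustration only, not the plugin's actual code): keeping each bin's magnitude and discarding its phase makes the spectrum real and nonnegative, whose inverse FFT is an even function that joins smoothly to itself when looped.

```cpp
#include <complex>
#include <vector>

// Replace every complex bin with its magnitude (phase set to zero).
// The inverse FFT of a real, nonnegative spectrum is symmetric (even),
// so the resulting loop has no discontinuity at the wrap point.
void zeroPhase(std::vector<std::complex<float>>& bins)
{
    for (auto& b : bins)
        b = std::complex<float>(std::abs(b), 0.0f);
}
```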

The GPL C code is here for review if anyone is curious:
https://github.com/ssj71/infamousPlugins/blob/master/src/stuck/stuck.c
I'm happy to offer more explanations of the code.

Thanks for your time.
_Spencer



Re: [music-dsp] Hosting playback module for samples

2014-02-26 Thread Thomas Rehaag

Hi Mark,

if you just need a simple VST2 Windows host, there's enough source code
around that will give you access to VSTis. You could use the MiniHost from
the VST2 SDK (hope you've got a copy; the download/support disappeared a
few weeks ago). And of course you can have a look at Hermann Seib's VstHost:

http://www.hermannseib.com/programs/vsthostsrc.zip

but the MiniHost will be less complicated and sufficient.

I've just written a host for a job, and it was the same situation: C# on
the other side. So we decided to use shared memory (CreateFileMapping
...) for communication. Works fine.


Cheers,

Thomas


Am 26.02.2014 17:56, schrieb Mark Garvin:

I realize that this is slightly off the beaten path for this group,
but it's a problem that I've been trying to solve for a few years:

I had written software for notation-based composition and playback of
orchestral scores. That was done via MIDI. I was working on porting
the original C++ to C#, and everything went well...except for playback.
The world has changed from MIDI-based rack-mount samplers to computer-
based samples played back via hosted VSTi's.

And unfortunately, hosting a VSTi is another world of involved software
development, even with unmanaged C++ code. Hosting with managed code
(C#) should be possible, but I don't think it has been done yet. So
I'm stuck. I've spoken to Marc Jacobi, who has a managed wrapper for
VST C++ code, but VSTi hosting is still not that simple. Marc is very
helpful and generous, and I pester him once a year, but it remains an
elusive problem.

It occurred to me that one of the resourceful people here may have
ideas for working around this. What I'm looking for, short term, is
simply a way to play back orchestral samples or even guitar/bass/drums
as a way of testing my ported C# code. Ideally send note-on, velocity,
note-off, similar to primitive MIDI. Continuous controller for volume
would be icing.

Any ideas, however abstract, would be greatly appreciated.

MG
NYC

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




[music-dsp] convolution in the frequency domain

2011-02-20 Thread Thomas Rehaag

Hi,

I'm trying to create an aliasing-free multiplication of two signals.
As far as I remember from my university days (long gone), a
multiplication in the time domain can be replaced by / is the same as a
convolution in the frequency domain.

And the convolution can help to avoid aliasing.

So for testing I created 2 band-limited signals in the frequency domain
that don't lead to aliasing when multiplied, converted them to the
time domain with an FFT, and multiplied them to get a reference.


Then my program does a convolution of the two buffers in the frequency 
domain with the very raw C++ code below. And everything works fine until 
one of the buffers contains a signal at the first frequency bin (freq=0).


Probably I've got a big misunderstanding of convolution in the
frequency domain, or even worse, there's just a bug in my convolution code.


Thanks in advance for any help,

Thomas


float * CConvTest::ConvolutionInF(float * pfA, float * pfB, long lSize)
{
    memset(m_pFConvDest, 0, lSize * sizeof(float));
    // walk both spectra over positive and (mirrored) negative bins;
    // pfX[abs(ix)] assumes a spectrum that is symmetric about bin 0
    for (long ixa = -lSize + 1; ixa < lSize; ixa++)
    {
        float fA = pfA[abs(ixa)];
        for (long ixb = -lSize + 1; ixb < lSize; ixb++)
        {
            long ixDest = ixa + ixb;
            if (ixDest >= 0 && ixDest < lSize)
                m_pFConvDest[ixDest] += fA * pfB[abs(ixb)];
        }
    }
    return m_pFConvDest;
}
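For reference, the identity in play is: if X = DFT(x) and Y = DFT(y) are full complex length-N spectra, then DFT(x*y) = (1/N) * (X circularly convolved with Y), with all indices taken mod N. A small self-contained check of that identity (illustration only, not the original code; note that the circular convolution needs the full complex spectrum, negative-frequency bins included, so mirroring magnitudes with abs() is not equivalent):

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;
const double PI = 3.14159265358979323846;

// Naive forward DFT, O(N^2); fine for a tiny sanity check.
std::vector<cd> dft(const std::vector<cd>& x)
{
    long n = (long)x.size();
    std::vector<cd> X(n);
    for (long k = 0; k < n; ++k)
        for (long t = 0; t < n; ++t)
            X[k] += x[t] * std::polar(1.0, -2.0 * PI * k * t / n);
    return X;
}

// Circular convolution of two length-n spectra, indices mod n.
std::vector<cd> circConv(const std::vector<cd>& A, const std::vector<cd>& B)
{
    long n = (long)A.size();
    std::vector<cd> C(n);
    for (long i = 0; i < n; ++i)
        for (long j = 0; j < n; ++j)
            C[(i + j) % n] += A[i] * B[j];
    return C;
}
```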

