Re: [music-dsp] Formants

2018-02-06 Thread Andy Drucker
I take it you're using this formant table:

https://www.classes.cs.uchicago.edu/archive/1999/spring/CS295/Computing_Resources/Csound/CsManual3.48b1.HTML/Appendices/table3.html

The Hz-to-Q conversion is described in the caption of the illustration here:

https://en.wikipedia.org/wiki/Q_factor

-3dB attenuation is the usual passband threshold, as discussed further in
the link below, and I expect (?) it's appropriate to use with the present
values.

https://en.wikipedia.org/wiki/Bandwidth_(signal_processing)
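
For what it's worth, the conversion is a one-liner (my sketch, using the
usual -3 dB convention, applied to the Csound soprano "a" values quoted in
this thread):

```python
# Q of a bandpass resonance = center frequency / (-3 dB bandwidth), both in Hz.
freqs = [800, 1150, 2900, 3900, 4950]  # formant center frequencies (Hz)
bws = [80, 90, 120, 130, 140]          # -3 dB bandwidths (Hz)

qs = [f / bw for f, bw in zip(freqs, bws)]
# e.g. the first formant: Q = 800 / 80 = 10.0
```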

A secondary issue is the choice of waveform to feed into a formant filter,
and how to approximate a real "glottal pulse".  There is some interesting
discussion here

http://clas.mq.edu.au/speech/acoustics/frequency/source.html

building on work in this old paper

https://pdfs.semanticscholar.org/e504/38c1e56d4ce3f7ebe3d10cea483ea38234c6.pdf





On Tue, Feb 6, 2018 at 8:56 PM, Frank Sheeran wrote:

> […]
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Formants

2018-02-06 Thread Frank Sheeran
I'm hoping to make some formant synthesis patches with my modular soft
synth Moselle. http://moselle-synth.com

I've looked around for formant tables and find tables with more vowels and
fewer formants, or fewer vowels and more formants.  Tables with amplitude
seem to have fewer vowels and only one I've found shows Q.

But the Q (as shown in CSound documentation, one example pasted below) is
specified in Hz.

The parametric (const and non-const) filters I'm using need a Q input.  Is
there a formula to convert Hz into Q?

Failing that, is there a standard amplitude at which a bandwidth
would be measured in Hz?  E.g., at -6 dB or -12 dB or something?  If so I could
just eyeball it on a graph.

Final question: does anyone know a more comprehensive set of such data?
This CSound data is great but only covers 5 vowels.

Frank Sheeran


*soprano "a"*
freq (Hz)  800  1150  2900  3900  4950
amp (dB)  0  -6  -32  -20  -50
bw (Hz)  80  90  120  130  140

Re: [music-dsp] Reading a buffer at variable speed

2018-02-06 Thread Ethan Fenn
Let's let t be the time, and s be the position in the buffer. So, for
example, playing back at double speed you'd just have s=2*t.

To make it exponential, and have s=0 at t=0, we can write:

s = C*R*(e^(t/C) - 1)

Here R is the initial playback rate (R=1 if it should start at normal
pitch), and C is the "chirp time", the time it takes for the playback rate
to increase by a factor of e.

To figure out how long it will take to play back, we set s=L (the length of
the buffer) and solve for t, giving:

t = C*ln(L/(C*R) + 1)

As for your first question, about why the thing you wrote doesn't seem to
work: it looks similar to some confusion I had when I first tried to figure
out how FM works. If you want a sine wave with an (angular) frequency of 1,
you write sin(t). If you want an angular frequency of 2, you write
sin(2*t). So it's tempting to think that if you want the frequency to be
f(t), a function of t, then you should write sin(f(t)*t). I suspect this is
where the factor of t came from in your equation. But this isn't right!
Instead what you want in my FM example is sin(F(t)), where F is an
antiderivative (i.e. integral) of f. The instantaneous frequency isn't
given by the thing that's multiplying t, it's given by the derivative of
the thing inside the sin function.
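
To make the two formulas above concrete, here's a small numerical check
(the constants are arbitrary, just for illustration):

```python
import math

C = 0.5  # "chirp time": seconds for the playback rate to grow by a factor of e
R = 1.0  # initial playback rate
L = 3.0  # buffer length, in seconds of audio at normal speed

def position(t):
    """Buffer position reached at time t: s = C*R*(e^(t/C) - 1)."""
    return C * R * (math.exp(t / C) - 1.0)

# Total playback time: t = C*ln(L/(C*R) + 1), i.e. the t where position(t) == L
t_end = C * math.log(L / (C * R) + 1.0)
# position(t_end) comes back as L (up to rounding), and the initial slope is R
```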

Hope that sheds some light on your problem.

-Ethan



On Tue, Feb 6, 2018 at 12:59 PM, Stefan Sullivan wrote:

> […]

Re: [music-dsp] Reading a buffer at variable speed

2018-02-06 Thread Stefan Sullivan
Can you explain your notation a little bit? Is x[t] the sample index into
your signal? And t is time in samples?

I might formulate it as a delta of indices where a delta of 1 is a normal
playback speed and you have some exponential rate. Would something like
this work?

delta *= rate
t += delta
y[n] = x[n - t + N]

My notation probably means something different from yours, but the idea is
the time varying index t accelerates or decelerates at a given rate. I've
kind of written a hybrid of pseudocode and DSP notation, sorry.

You would probably actually want some interpolation and for an application
like this one I would probably stick with linear interpolation (even though
most people on this list will probably disagree with me on that). Keep in
mind, though, that skipping samples might mean aliasing which will mean
low-pass filtering your signal (unless you know that there's no frequency
content in offensive ranges), and since you're essentially doing a variable
skip rate your low pass filter might either need to be aggressive or
variable.

Something about this algorithm scares me in its seemingly unbounded need
for memory. Seems like a lot of heuristic constraints...
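
For what it's worth, a minimal runnable version of that idea with linear
interpolation (the function name and the per-sample `growth` factor are my
own sketch, not anything standard):

```python
def read_varispeed(x, rate0, growth):
    """Read buffer x at a playback rate that starts at rate0 and is
    multiplied by `growth` every output sample (exponential speed change).
    Linear interpolation between samples; stops at the end of the buffer."""
    out = []
    pos = 0.0
    rate = rate0
    while pos < len(x) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighboring samples
        out.append(x[i] * (1.0 - frac) + x[i + 1] * frac)
        pos += rate
        rate *= growth  # exponential rate change, as in `delta *= rate`
    return out

# Constant rate 1.0 (growth 1.0) reproduces the input, minus the final sample
ys = read_varispeed([0.0, 1.0, 2.0, 3.0], 1.0, 1.0)
```

As noted above, a rising rate skips ever more input samples per output
sample, so in a real implementation this would need a (possibly
time-varying) anti-aliasing lowpass in front of it.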

Stefan


On Feb 6, 2018 06:45, "Maximiliano Estudies" wrote:

> […]

Re: [music-dsp] Reading a buffer at variable speed

2018-02-06 Thread Maximiliano Estudies
> The buffer must be read at a variable speed,

>> Do you mean it has to be played out at higher sample rates?

Yes, it has to be played out at higher sample rates, so I start at the
original sample rate and end at sample rate * some chosen value.

> how long will it take to play the whole buffer

>> If you can derive an average rate out of it then you can determine it.

Could you explain how?


2018-02-06 17:15 GMT+01:00 Benny Alexandar:

> […]



-- 
Maximiliano Estudies
+49 176 36784771
maxiestudies.com

Re: [music-dsp] Reading a buffer at variable speed

2018-02-06 Thread Benny Alexandar
>> The buffer must be read at a variable speed,

Do you mean it has to be played out at higher sample rates?

>> how long will it take to play the whole buffer

If you can derive an average rate out of it then you can determine it.

-ben



[music-dsp] Reading a buffer at variable speed

2018-02-06 Thread Maximiliano Estudies
I have been struggling with this concept for quite some time now; I hope
I can explain it well enough for you to understand what I mean.
I have a signal stored in a buffer of known length. The buffer must be read
at a variable speed, and the variations in speed have to be exponential, so
that the resulting glissandi are (aurally) linear. In order to do that I
came up with the following formula:

x[t] = t * sample_rate * end_speed^(x[t] / T) where T is the total
length of the buffer in samples.

This doesn’t seem to work and I can’t understand why.

And my second question is, how can I get the resulting length in
milliseconds? (how long will it take to play the whole buffer)

I hope I managed to be clear enough!

Maxi

-- 
Maximiliano Estudies
+49 176 36784771
maxiestudies.com

[music-dsp] The Gaborator, a C++ library for constant-Q spectrograms

2018-02-06 Thread Andreas Gustafsson
Hello all,

The first public release of the Gaborator library is now available.

The Gaborator is a C++ library that generates constant-Q spectrograms
for visualization and analysis of audio signals. It also supports an
accurate inverse transformation of the spectrogram coefficients back
into audio for spectral effects and editing. Both analysis and
resynthesis run at several million samples per second on a single core
of an Intel Core i5 mobile CPU.

The download link and some online demos can be found at

  https://gaborator.com/

Enjoy,
-- 
Andreas Gustafsson, g...@waxingwave.com



[music-dsp] Postdoctoral Research Assistant in Computational Sound Scene Analysis (deadline: 7 March 2018)

2018-02-06 Thread Emmanouil Benetos

Postdoctoral Research Assistant in Computational Sound Scene Analysis

Queen Mary University of London, UK
Salary: GBP 32,956 to GBP 36,677 per annum
Closing Date: 07 March 2018 (23:59 GMT)

https://webapps2.is.qmul.ac.uk/jobs/job.action?jobID=3095

===

Description: The School of Electronic Engineering and Computer Science 
at Queen Mary University of London (QMUL) has a vacancy for a 
Post-Doctoral Research Assistant (PDRA) on computational sound scene 
analysis, as part of the EPSRC-funded project "Integrating sound and 
context recognition for acoustic scene analysis".


The responsibilities of the role are to investigate, develop and 
evaluate novel digital signal processing and machine learning 
technologies for context-aware sound recognition, applied to continuous 
audio streams in urban, nature and domestic environments. The work will 
include integrating audio-based context recognition with sound 
recognition methods, developing a research software framework and 
creating real and simulated datasets for context-aware sound recognition 
in complex acoustic environments.


The ideal candidate will have a PhD in Computer Science, 
Electrical/Electronic Engineering or in a relevant field, with research 
experience in one or more of the following areas: Digital Signal 
Processing, Machine Learning, Audio/Acoustics, Music Information 
Retrieval or equivalent. The candidate will have a strong research track 
record with publications in high-quality journals and conference 
proceedings. Research experience in audio signal processing and/or 
computational sound scene analysis is desirable. The successful 
candidate will have programming proficiency in one or more of: Python, 
Matlab, C/C++, and will have demonstrated the ability to work 
collaboratively and independently.


This post is based in the Centre for Digital Music (C4DM) and Centre for 
Intelligent Sensing (CIS) of Queen Mary University of London. C4DM is a 
world-leading multidisciplinary research group in the field of Digital 
Music & Audio Technology; CIS has highly reputed research expertise in 
multi-sensor data processing, distributed signal processing, vision and 
audio analysis. Both groups are part of the School of Electronic 
Engineering and Computer Science (EECS). Details about the School can be 
found at http://www.eecs.qmul.ac.uk; details about C4DM at 
http://c4dm.eecs.qmul.ac.uk; and details about CIS at 
http://cis.eecs.qmul.ac.uk/.


This is a full time post for 14 months starting 1 April 2018 or as soon 
as feasible after this date. The starting salary will be in the range of 
£32,956 - £36,677 per annum inclusive of London allowance. Benefits 
include 30 days annual leave, defined benefit pension scheme and 
interest-free season ticket loan.


Candidates must be able to demonstrate their eligibility to work in the 
UK in accordance with the Immigration, Asylum and Nationality Act 2006. 
Where required this may include entry clearance or continued leave to 
remain under the Points Based Immigration Scheme.


Informal enquiries should be addressed to Dr Emmanouil Benetos 
(emmanouil.bene...@qmul.ac.uk) or by phone: +44 20 7882 3066.


For more information and to apply online, please visit: 
https://webapps2.is.qmul.ac.uk/jobs/job.action?jobID=3095


The closing date for applications is 7 March 2018. Interviews are 
expected to be held shortly thereafter.


--
Dr Emmanouil Benetos
RAEng Research Fellow, Lecturer
School of Electronic Engineering and Computer Science
Queen Mary University of London
Tel: +44 (0)20 7882 3066
e-mail: emmanouil.bene...@qmul.ac.uk
http://www.eecs.qmul.ac.uk/~emmanouilb/
