Re: [music-dsp] Clock drift and compensation

2018-01-23 Thread Bogac Topaktas
On Tue, January 23, 2018 7:17 pm, Benny Alexandar wrote:
> Now if the tuner xtal is drifting then the dsp audio streaming needs to
> adjust to that drift, else buffer overflow or underrun happens as the
> sample rates don't match.

Assuming you do not have the option of modifying the hardware,
you may use sample length modification techniques similar to these:

https://www.google.com/patents/WO2008006080A2?cl=en
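For a rough illustration of the simplest (non-patented) approach -- estimate the drift from the playout buffer fill level and correct it with a fractional resampler -- here is a minimal numpy sketch. Linear interpolation only, and the 100 ppm drift figure is just an assumed example; a real implementation would use a polyphase or sinc interpolator:

```python
import numpy as np

def resample_linear(x, ratio):
    """Resample x by `ratio` (output_rate / input_rate) with linear
    interpolation -- a crude fractional resampler, adequate only for
    small drift corrections (ratio very close to 1.0)."""
    n_out = int(len(x) * ratio)
    # Fractional read positions into the input buffer.
    pos = np.arange(n_out) / ratio
    i = pos.astype(int)
    frac = pos - i
    i = np.minimum(i, len(x) - 2)          # guard the last sample
    return (1.0 - frac) * x[i] + frac * x[i + 1]

# A consumer running 100 ppm fast relative to the producer slowly
# drains its buffer; shrinking each block by the same 100 ppm keeps
# the fill level steady.
block = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000.0)
corrected = resample_linear(block, 1.0 - 100e-6)
```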

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Anyone using Chebyshev polynomials to approximate trigonometric functions in FPGA DSP

2016-01-20 Thread Bogac Topaktas
Jack Crenshaw's book "Math Toolkit for Real-Time Programming"
contains all the information you need: "Chapter 5 - Getting the
Sines Right" provides theory and practice of approximating
sines & cosines with various methods including Chebyshev polynomials.

Another good resource is Jean-Michel Muller's book
"Elementary Functions: Algorithms and Implementation".
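As a quick illustration of the Chebyshev approach those books cover, here is a numpy sketch that fits a degree-7 Chebyshev series to sin(x) on [-pi/2, pi/2] by interpolating at Chebyshev nodes. The degree and interval are arbitrary choices; a fixed-point FPGA version would evaluate the economized polynomial with Horner's rule:

```python
import numpy as np

# Degree-7 Chebyshev interpolation of sin(x) on [-pi/2, pi/2].
# Odd symmetry means the even-order coefficients come out ~0.
deg = 7
k = np.arange(deg + 1)
# Chebyshev nodes on [-1, 1], then mapped to [-pi/2, pi/2]
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))
x = nodes * (np.pi / 2)
coeffs = np.polynomial.chebyshev.chebfit(nodes, np.sin(x), deg)

def cheb_sin(x):
    """Approximate sin(x) for x in [-pi/2, pi/2]."""
    return np.polynomial.chebyshev.chebval(x / (np.pi / 2), coeffs)

t = np.linspace(-np.pi / 2, np.pi / 2, 1001)
max_err = np.max(np.abs(cheb_sin(t) - np.sin(t)))  # small, roughly 1e-5 or below
```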

On Tue, January 19, 2016 8:05 pm, Theo Verelst wrote:
> Hi all,
>
>
> Maybe a bit forward, but hey, there are PhDs here, too, so here it goes:
> I've played a
> little with the latest Vivado HLx design tools for Xilinx FPGAs and the
> cheap Zynq implementation I use (a Parallella board), and I was looking
> for interesting examples to put in the C-to-chip compiler that I can connect
> over the AXI bus to a Linux program running on the ARM cores in the Zynq chip.
>
>
> In other words, computations and manipulations with additions, multiplies
> and other logical operations (say of 32 bits) that compile nicely to for
> instance the computation of y=sin(t) in such a form that the Silicon
> Compiler can have a go at it, and produce a nice
> relatively low-latency FPGA block to connect up with other blocks to do nice
> (and very low
> latency) DSP with.
>
> Regards,
>
>
> Theo V.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Bogac Topaktas
The following link provides lots of detail regarding implementation:

http://www.codeproject.com/Articles/6855/FFT-of-waveIn-audio-signals

You can find the FFT code written by Don Cross, which is used in the
above article at:

http://web.archive.org/web/20020221213551/http://www.intersrv.com/~dcross/fft.html

http://web.archive.org/web/20020221213551/http://www.intersrv.com/~dcross/fft.zip

If you don't need the high-performance optimizations of FFTW, Don Cross's
code is a much more accessible FFT implementation.
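To illustrate the structural point being asked about -- keep the transform out of the callback, and hand blocks to an analysis thread through a FIFO -- here is a Python/numpy sketch. numpy's `rfft` stands in for FFTW's real-to-complex plan; the block size, FIFO depth, and test tone are arbitrary assumptions:

```python
import numpy as np
from collections import deque

FRAMES = 1024
fifo = deque(maxlen=64)   # stands in for a lock-free ring buffer

def audio_callback(in_block):
    """Runs on the audio thread: copy the block out and return.
    No FFT, no heavy allocation, no blocking calls here."""
    fifo.append(np.array(in_block, copy=True))

def analysis_loop_once():
    """Runs on a normal thread: drain the FIFO and transform."""
    if not fifo:
        return None
    block = fifo.popleft()
    window = np.hanning(len(block))
    return np.fft.rfft(block * window)     # real-to-complex 1-D DFT

# Simulate the driver delivering one block of a 1 kHz tone at 48 kHz:
t = np.arange(FRAMES) / 48000.0
audio_callback(np.sin(2 * np.pi * 1000.0 * t))
spectrum = analysis_loop_once()
peak_bin = int(np.argmax(np.abs(spectrum)))  # near 1000 / (48000/1024) ~ bin 21
```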


On Thu, June 11, 2015 5:20 pm, Connor Gettel wrote:
> Hello Everyone,
>
>
> My name’s Connor and I’m new to this mailing list. I was hoping
> somebody might be able to help me out with some FFT code.
>
> I want to do a spectral analysis of the mic input of my sound card. So
> far in my program i’ve got my main function initialising portaudio,
> inputParameters, outputParameters etc, and a callback function above
> passing audio through. It all runs smoothly.
>
> What I don’t understand at all is how to structure the FFT code in and
> around the callback as i’m fairly new to C. I understand all the steps
> of the FFT mostly in terms of memory allocation, setting up a plan, and
> executing the plan, but I’m still really unclear as how to structure
> these pieces of code into the program. What exactly can and can’t go
> inside the callback? I know it’s a tricky place because of timing
> etc…
>
> Could anybody please explain to me how i could achieve a real to complex
> 1 dimensional DFT on my audio input using a callback?
>
>
> I cannot even begin to explain how grateful I would be if somebody could
> walk me through this process.
>
> I have attached my callback function code so far with the FFT code
> unincorporated at the very bottom below the main function (should anyone
> wish to have a look)
>
> I hope this is all clear enough, if more information is required please
> let me know.
>
> Thanks very much in advance!
>
>
> All the best,
>
>
> Connor.
>
>
>


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Filtering out unwanted square wave (Radio: DCS/DPL signal)

2014-07-30 Thread Bogac Topaktas
On Wed, July 30, 2014 11:51 pm, Sampo Syreeni wrote:
> What typically messes you up here is the synchronization, especially over
> terrestrial radio channels, so that you don't *really* want to just subtract
> anything. But even there you can do a maximum-likelihood center-of-step
> sampled detection step, and then a round of local minimization (preferably
> against the L^1 norm, but L^2 is obviously more efficient since it can be
> implemented as an FFT-based convolution) over each transition. The math
> easily yields a matched filter, which upon sampling gives you a noise
> tolerant estimate of what should be subtracted.

This is optimal linear filtering for achieving high SNR under additive
stochastic noise. We're not dealing with stationary noise (like 60 Hz
hum and its harmonics) here, so my earlier assumptions and proposed
solutions are wrong.

From the frequency domain perspective, the above matched filter will
boost spectral components that have the highest SNR. Wouldn't this
create problems if there is spectral correlation between the NRZ sequence
and the affected speech?
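For concreteness, a minimal numpy sketch of the matched-filter idea: correlate with a known transition template (here an assumed Hann-shaped pulse; the shapes, lengths, and noise level are all illustrative) and pick the correlation peak:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known pulse shape (the "template"), buried in additive noise.
template = np.hanning(32)
signal = np.zeros(512)
signal[200:232] += template
noisy = signal + 0.5 * rng.standard_normal(512)

# Matched filter = correlate with the time-reversed template;
# its output peaks where the template best aligns with the data.
mf_out = np.convolve(noisy, template[::-1], mode="valid")
est_onset = int(np.argmax(mf_out))   # should land near sample 200
```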

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Filtering out unwanted square wave (Radio: DCS/DPL signal)

2014-07-30 Thread Bogac Topaktas
The most efficient way is to use adaptive noise cancellation. See:

http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2008/rmo25_kdw24/rmo25_kdw24/index.html

http://www.dsprelated.com/showmessage/5838/5.php

http://www.cs.cmu.edu/~aarti/pubs/ANC.pdf

It works perfectly for removing 50/60Hz hum from single coil
pickups of electric instruments, where static notch filtering
is not adequate (see first reference above).

In the worst case, i.e., if you can't construct a perfect cancellation
signal, you may recover the spoken words with a robust speech recognizer
and then synthesize clean speech afterwards.
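A minimal sketch of the adaptive-canceller idea, using a normalized LMS filter. The tap count, step size, and test signals (a sine standing in for speech, a phase-shifted sine for the interference pickup) are arbitrary assumptions, not the designs from the references:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.05):
    """NLMS adaptive noise canceller: `primary` is speech plus
    interference, `reference` is a correlated pickup of the
    interference alone. Returns the error signal (cleaned output)."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]        # reference tap vector
        y = w @ x                                # interference estimate
        e = primary[n] - y                       # cleaned sample
        w += mu * e * x / (x @ x + 1e-8)         # normalized LMS update
        out[n] = e
    return out

fs = 8000
t = np.arange(2 * fs) / fs
hum = np.sin(2 * np.pi * 60 * t)                 # 60 Hz interference
speech = 0.3 * np.sin(2 * np.pi * 440 * t)       # stand-in for speech
primary = speech + hum
reference = np.sin(2 * np.pi * 60 * t + 0.7)     # phase-shifted pickup
clean = lms_cancel(primary, reference)
# after convergence the residual hum is strongly attenuated
```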

On Tue, July 29, 2014 6:32 pm, Bjorn Roche wrote:
> Hey all,
>
>
> I'm dealing with a non-music but still audio-related DSP issue. I need
> to remove a DPL/DCS signal from a recording. Roughly speaking, a DCS signal
> is a low frequency (67.15Hz) square wave sent at the same time, over the
> same carrier, as speech. Because it is a square wave, it has many strong
> harmonics that overlap with speech. Obviously, the speech must be
> preserved as well as possible and the goal is to reject the DCS as much as
> possible because it's annoying as all get-out.
>
> On the surface, this seems like a problem that might be solved the same
> way as removing 60Hz power-line noise: lots of notch filters. However,
> power-line noise tends to be weaker and comes from a source that is much
> closer to a sine wave.
>
> So, my question is: is there a better way to do this? (preferably
> something someone has experience with)
>
> This link contains more info about DCS:
> http://onfreq.com/syntorx/dcs.html It
> mentions "Since DCS creates audio harmonics well above 300 Hz (i.e. into
> the audible portion of the band from 300 to 3000 Hz), radios must have
> good filters to remove the unwanted DCS noise." Ha! I've asked this also on
> stack exchange here:
> http://dsp.stackexchange.com/questions/17462/filtering-out-unwanted-squar
> e-wave-radio-dcs-dpl-signal
>
> TIA!
>
>
> bjorn
>
> --
> -
> Bjorn Roche
> bjornroche.com  @xonamiaudio


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] English as a second language - measuring voice similarity

2014-07-17 Thread Bogac Topaktas
On Thu, July 17, 2014 2:05 pm, Rohit Agarwal wrote:
> You will likely compare in feature domain and that depends on the kind
> of features you use. You need some heuristics for determining how close a
> given feature sequence is to a reference feature sequence pegged as
> native. You would use these to give the learners feedback.

Exactly. In Danijel's case, identification of non-nativeness is not enough.
The application should also provide proper guidance towards native
pronunciation.

An agile solution for just detecting non-nativeness would be:

1) detect the subject's words with a robust speech recognizer

2) generate native speech of those words with a native speech synthesizer

3) extract features from both of them

4) compare the features with a non-parametric classifier

I think defining nativeness in a dialect free manner is challenging.
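Steps 3 and 4 might be sketched like this: frame-wise log spectra as a crude stand-in for real features such as MFCCs, and dynamic time warping as the non-parametric comparison. All parameters and test signals are illustrative only:

```python
import numpy as np

def log_spec_features(x, frame=256, hop=128):
    """Frame-wise log-magnitude spectra: a crude stand-in for the
    feature-extraction step (a real system would use MFCCs)."""
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop:i * hop + frame] * np.hanning(frame)
                       for i in range(n)])
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-6)

def dtw_distance(A, B):
    """Dynamic time warping between two feature sequences: the
    non-parametric comparison step, tolerant of tempo differences."""
    nA, nB = len(A), len(B)
    D = np.full((nA + 1, nB + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, nA + 1):
        for j in range(1, nB + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[nA, nB] / (nA + nB)

fs = 8000
t = np.arange(fs) / fs
native = np.sin(2 * np.pi * 200 * t)     # stand-ins for two utterances
learner = np.sin(2 * np.pi * 210 * t)
score = dtw_distance(log_spec_features(native), log_spec_features(learner))
# lower score = closer to the reference pronunciation
```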

>
> ____
>
> From: "Bogac Topaktas"
> Sent: "A discussion list for music-related DSP"
> Date: Thu, July 17, 2014 4:21 pm
> Subject: Re: [music-dsp] English as a second language - measuring voice
> similarity
>
>> Searching for the phrase "non-native speaker identification" gives
>> pointers to key resources. See the references section of:
>>
>> http://en.wikipedia.org/wiki/Non-native_speech_database
>>
>> It seems like what you need is a machine learning algorithm that
>> compares the subject's speech with records in a speech database.
>>
>> On Thu, July 17, 2014 10:28 am, Danijel Domazet wrote:
>>> Hi list,
>>>
>>> My project requires to verify if some ESL (English as a Second Language)
>>> people can speak and pronounce similarly to a native English speaker. They
>>> would like to practice and require a measurement system for verifying the
>>> progress. What would be the ways of implementing this?
>>>
>>> Thanks,
>>>
>>> Danijel Domazet
>>> LittleEndian.com


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] English as a second language - measuring voice similarity

2014-07-17 Thread Bogac Topaktas
Searching for the phrase "non-native speaker identification" gives
pointers to key resources. See the references section of:

http://en.wikipedia.org/wiki/Non-native_speech_database

It seems like what you need is a machine learning algorithm that
compares the subject's speech with records in a speech database.

On Thu, July 17, 2014 10:28 am, Danijel Domazet wrote:
> Hi list,
> My project requires to verify if some ESL (English as a Second Language)
> people can speak and pronounce similarly to a native English speaker. They
> would like to practice and require a measurement system for verifying the
> progress. What would be the ways of implementing this?
>
> Thanks,
>
>
> Danijel Domazet
> LittleEndian.com
>
>


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] scripted dynamics compression

2014-07-15 Thread Bogac Topaktas
I don't have the second edition, so I'm
interpreting from the matlab source code:

Attack and release times are hard coded in the script (at & rt).
x: input array
Ct: Compression threshold
Cs: Compression slope
Et: Expansion threshold
Es: Expansion slope

And the hard coded tav parameter selects rms or peak level detection.
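For readers without the book, here is a hedged Python re-sketch of what such a compressor/expander does. The parameter names follow the m-file's interface, but the details here (static curve, smoothing) are my guesses at the general scheme, not the DAFX code itself:

```python
import numpy as np

def compexp(x, CT, CS, ET, ES, tav=0.01, at=0.03, rt=0.003):
    """Sketch of a compressor/expander in the spirit of the DAFX
    `compexp` m-file (interface only; internals may differ).
    CT/CS: compression threshold (dB) / slope, ET/ES: expansion
    threshold (dB) / slope, tav: level-detector averaging,
    at/rt: per-sample attack/release smoothing coefficients."""
    xrms, g = 0.0, 1.0
    y = np.zeros_like(x)
    for n, xn in enumerate(x):
        xrms = (1 - tav) * xrms + tav * xn * xn      # running RMS power
        X = 10 * np.log10(max(xrms, 1e-12))          # level in dB
        # static curve: compress above CT, expand below ET
        G = min(0.0, CS * (CT - X), ES * (ET - X))
        f = 10 ** (G / 20)
        coeff = at if f < g else rt                  # attack vs release
        g = (1 - coeff) * g + coeff * f
        y[n] = g * xn
    return y

# a loud tone is turned down; a very quiet one would be expanded down
loud = 0.9 * np.sin(2 * np.pi * 1000 * np.arange(8000) / 8000)
out = compexp(loud, CT=-20.0, CS=0.5, ET=-50.0, ES=-0.5)
```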

On Tue, July 15, 2014 3:26 am, Reid Oda wrote:
> This is great! I don't have the book, though. Would you mind telling me
> what the arguments mean?
>
> compexp(x, CT, CS, ET, ES)
>
>
> On Mon, Jul 14, 2014 at 11:20 AM, Bogac Topaktas 
> wrote:
>
>
>> Chapter 4 of DAFX book (2nd ed.) contains matlab examples for
>> dynamic range control. Matlab files are available online:
>>
>>
>> http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page_2nd_edition/M_files_c
>> hap04.zip
>>
>> DAFX - Digital Audio Effects
>> Edited by Udo Zölzer
>> ISBN: 978-0-470-66599-2
>> John Wiley & Sons, 2011
>>
>>
>> http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page_2nd_edition/chapter4.
>> html
>>
>> http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page_2nd_edition/matlab.ht
>> ml
>>
>> If you have the first edition, then they are located in chapter 5:
>>
>>
>> http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page/M_files_chap05.zip
>>
>>
>> http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page/matlab.html
>>
>>
>> On Mon, July 14, 2014 7:44 pm, Reid Oda wrote:
>>
>>> Hi list,
>>>
>>>
>>>
>>> Does anyone know of a decent dynamics compressor that can be run as
>>> part of a script? I am partial to matlab or python, but any
>>> scriptable, non-realtime, non-gui compressor will do. Thanks!
>>>
>>> Best,
>>> Reid
>>>
>>>
>>>
>>> --
>>> Reid Oda
>>> Ph.D. Candidate
>>> Princeton University
>>> 858-349-2037
>>> http://www.cs.princeton.edu/~roda
>>
>>
>
>
>
> --
> Reid Oda
> Ph.D. Candidate
> Princeton University
> 858-349-2037
> http://www.cs.princeton.edu/~roda


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] scripted dynamics compression

2014-07-14 Thread Bogac Topaktas
Chapter 4 of DAFX book (2nd ed.) contains matlab examples for
dynamic range control. Matlab files are available online:

http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page_2nd_edition/M_files_chap04.zip

DAFX - Digital Audio Effects
Edited by Udo Zölzer
ISBN: 978-0-470-66599-2
John Wiley & Sons, 2011

http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page_2nd_edition/chapter4.html

http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page_2nd_edition/matlab.html

If you have the first edition, then they are located in chapter 5:

http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page/M_files_chap05.zip

http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page/matlab.html

On Mon, July 14, 2014 7:44 pm, Reid Oda wrote:
> Hi list,
>
>
> Does anyone know of a decent dynamics compressor that can be run as
> part of a script? I am partial to matlab or python, but any scriptable,
> non-realtime, non-gui compressor will do. Thanks!
>
> Best,
> Reid
>
>
> --
> Reid Oda
> Ph.D. Candidate
> Princeton University
> 858-349-2037
> http://www.cs.princeton.edu/~roda


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Frequency based analysis alternatives?

2014-07-09 Thread Bogac Topaktas
Gabor transform or Wigner Distribution analysis
or their combination may be used.

See the following tutorial paper on joint-domain time-frequency analysis:

"Wigner Distribution Representation and Analysis of Audio Signals: An
Illustrated Tutorial Review"
Douglas Preis and Voula Chris Georgopoulos
JAES Volume 47 Issue 12 pp. 1043-1053; December 1999

And the following resources also contain relevant information:

"Scatter: A Software Tool for Visualizing, Transforming, and Performing
Atomic Decompositions of Sound"
http://mast.mat.ucsb.edu/docs/paper_36.pdf

"Time-Frequency Analysis using Time-Order Representation and Wigner
Distribution"
http://www.iiit.net/techreports/2008_122.pdf

http://en.wikipedia.org/wiki/Gabor%E2%80%93Wigner_transform
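A bare-bones numpy sketch of the discrete (pseudo) Wigner-Ville distribution, for readers who want to experiment before digging into the papers. The analytic-signal step is the usual practical tweak; signal lengths and the test chirp are arbitrary:

```python
import numpy as np

def wigner_ville(x, n_freq=None):
    """Discrete (pseudo) Wigner-Ville distribution of a real signal.
    Uses the analytic signal to suppress aliasing/cross-terms with
    negative frequencies. Returns an (n_freq x len(x)) matrix."""
    N = len(x)
    n_freq = n_freq or N
    # analytic signal via the FFT (a minimal Hilbert transform, even N)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:N // 2] = 2
    h[N // 2] = 1
    xa = np.fft.ifft(X * h)
    W = np.zeros((n_freq, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n, n_freq // 2 - 1)
        tau = np.arange(-tau_max, tau_max + 1)
        # instantaneous autocorrelation at time n
        r = xa[n + tau] * np.conj(xa[n - tau])
        row = np.zeros(n_freq, dtype=complex)
        row[tau % n_freq] = r
        W[:, n] = np.fft.fft(row).real   # real by conjugate symmetry of r
    return W

fs = 256
t = np.arange(fs) / fs
chirp = np.cos(2 * np.pi * (20 * t + 40 * t ** 2))   # 20 -> 100 Hz chirp
W = wigner_ville(chirp)
# bin k corresponds to frequency k * fs / (2 * n_freq); a linear chirp
# shows up as a sharp line through the time-frequency plane
```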

On Wed, July 9, 2014 3:03 pm, Rohit Agarwal wrote:
> Most of our modern DSP techniques that we use for the analysis of sound
> signals are based on the FFT as a first step. This imposes limits on time
> resolution since the FFT window has to be wide. For most natural sound
> apps this is no hindrance as the rate of events is commonly slow. Speech
> recognition is such an example. Even in the music space for the most part
>  the required time res for most common apps is not that great so FFT
> suffices. What are the alternatives to the FFT? Have wavelets been
> used for real world solutions? If an app needs much higher time resolution
>  and there are limits on sampling frequency, what kind of time domain
> techniques are well known?   --

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] R: R: Simulating Valve Amps

2014-06-26 Thread Bogac Topaktas
In guitar amps, grid currents matter mostly when simulating power stages.

In modern high-gain amps like the Soldano SLO, the distortion
is generated in the preamp stages, and there are large grid
resistors in all 12AX7 stages to eliminate grid currents,
as they would ruin the "juiciness" by causing too much dynamic asymmetry.

In an old VOX AC30, any stage in the preamp/poweramp can draw grid current.
That's the reason it sounds less juicy, or more gnarly, compared to modern
amps.

On Thu, June 26, 2014 3:01 pm, STEFFAN DIEDRICHSEN wrote:
> The problem with the tube equations from Norman Koren is, that they don’t
> account for grid current. Having done some live investigation in tube
> amps, my conclusion is, that grid currents contribute largely to the
> operation of a tube amp, if you drive them into distortion. Once you saw
> this and understand how it works, it’s easy to model this with pleasing
> result.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] [admin] Re: Simulating Valve Amps

2014-06-24 Thread Bogac Topaktas
On Wed, June 25, 2014 12:21 am, robert bristow-johnson wrote:
> dunno what is meant by "elaborate tricks" but it just boils down to an
> H(z).  lotsa ways to get an H(z), and they normally come up with
> different coefficients.

It means solving the problem in a topology-preserving manner. If your
topology requires five multipliers per biquad and generates annoying noises
during coefficient updates then, yes, there are lots of ways to do it. There
would be no need to preserve the topology.

But if your topology requires just three multipliers (including gain) per
second-order peaking filter and makes virtually no noise during any
coefficient update, then you need to think really hard to come up with a
solution that preserves both the efficiency and numerical robustness of
your topology.

Bogac.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] [admin] Re: Simulating Valve Amps

2014-06-24 Thread Bogac Topaktas
On Tue, June 24, 2014 9:53 pm, Stefan Stenzel wrote:
>>> On 09.04.2014, at 19:12, robert
>>> bristow-johnson  wrote:
>>>
 if there is feedback, there must be at least one sample of delay,
 despite claims of zero-delay feedback i have read here on music-dsp
 and at other places.
>>
>> it's a technical fact in a causal environment.  y[n] is a function of
>> y[n-1], y[n-2]... and x[n], x[n-1], x[n-2] ...
>
> I think Urs might be referring to feedback in analog circuits that can be
> modelled without a delay in the digital domain. Think of an op-amp set up
> as a gain follower, although there is a feedback path it seems reasonable
> that it requires no unit delay for a digital equivalent. For the resonance
> feedback in e.g. a Moog style filter, this might be tricky.
>
> Stefan

Any system where the computation of y[n] depends on y[n] is a delay-free
feedback loop system.

If that delay-free feedback loop contains any non-linearity,
then iterative solutions are necessary. If it's fully linear,
there are efficient but elaborate tricks to handle the case.

Classic examples for the linear case are the high-end parallel
parametric equalizers used for mastering (GML 8200, Sontec MEP-250EX,
Maselec MEA-2 etc.)
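A concrete linear example: the trapezoidal-integrator ("TPT") one-pole lowpass, where the delay-free loop y = s + g*(x - y) is resolved algebraically instead of inserting a unit delay. This is a generic textbook structure (numpy sketch, arbitrary cutoff), not the topology of any of the equalizers named above:

```python
import numpy as np

def onepole_zdf(x, fc, fs):
    """Trapezoidal one-pole lowpass with the delay-free loop resolved
    in closed form -- no unit delay inserted, and no iteration needed
    because the loop is fully linear."""
    g = np.tan(np.pi * fc / fs)        # prewarped integrator gain
    G = g / (1 + g)
    s = 0.0                            # integrator state
    y = np.zeros_like(x)
    for n, xn in enumerate(x):
        # implicit equation y = s + g*(x - y), solved algebraically:
        v = (xn - s) * G
        y[n] = v + s
        s = y[n] + v                   # state update
    return y

fs = 48000
t = np.arange(fs // 10) / fs
lo = np.sin(2 * np.pi * 100 * t)       # well below cutoff: passes
hi = np.sin(2 * np.pi * 10000 * t)     # well above cutoff: attenuated
fc = 1000.0
y_lo = onepole_zdf(lo, fc, fs)         # close to unity gain
y_hi = onepole_zdf(hi, fc, fs)         # roughly -20 dB
```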

Bogac.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Bogac Topaktas
On Mon, June 23, 2014 7:37 am, robert bristow-johnson wrote:
> the other thing Urs brought up for discussion is an iterative and
> recursive process that converges on a result value, given an input.  i am
> saying that this can be rolled out into a non-recursive equivalent, if the
> number of iterations needed for convergence is finite.  this is a totally
> different issue.

For most circuits involving non-linear feedback loops, the
number of iterations needed for convergence depends on the
signal level (with the required accuracy held constant).

For instance, in the Tube Screamer overdrive circuit, the number
of iterations decreases as the signal gets closer to the rails.

Of course, it's possible to derive alternative equations to
reach a fixed-step approximation (which, as you pointed out,
is much more suitable for real-time operation).
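As an illustration of the iterative case, here is a per-sample Newton-Raphson solve of a much-simplified, memoryless diode clipper. The component values are generic assumptions, not the actual Tube Screamer circuit, and the fixed iteration count with a clamped step is one common real-time compromise:

```python
import numpy as np

def diode_clipper(vin, n_iter=16):
    """Per-sample Newton-Raphson solve of a simplified diode-clipper
    equation (vin - v)/R = 2*Is*sinh(v/Vt), warm-started from the
    previous sample's solution and run for a fixed number of steps.
    The step is clamped to tame overshoot into the steep sinh region."""
    R, Is, Vt = 2.2e3, 1e-12, 0.026      # assumed component values
    out = np.empty_like(vin)
    v = 0.0
    for n, x in enumerate(vin):
        for _ in range(n_iter):
            f = (x - v) / R - 2 * Is * np.sinh(v / Vt)
            fp = -1.0 / R - (2 * Is / Vt) * np.cosh(v / Vt)
            dv = f / fp
            v -= np.clip(dv, -0.1, 0.1)  # clamped Newton step
        out[n] = v
    return out

fs = 48000
t = np.arange(480) / fs
drive = 4.5 * np.sin(2 * np.pi * 1000 * t)
clipped = diode_clipper(drive)     # output flattens near +/- 0.55 V
```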

I totally agree with Urs. Today's market is all about authenticity,
efficiency is just a plus.

Bogac.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Bogac Topaktas
I came across one of my old posts on this subject (see below).

I still think that Eric K. Pritchard's patents contain almost every
important detail to know about this subject. Especially, his later
work concerning "Fat Emulation" (i.e., sound enhancements caused by
various inter-modulation distortion sources) is very intriguing.

Bogac.

---

http://music.columbia.edu/pipermail/music-dsp/2009-September/068051.html

The most comprehensive (and widely unknown) resource of information on
modeling and simulation of tube amplifiers is Eric K. Pritchard's Patents:

USpat# 4,809,336; 4,995,084; 5,133,014;
5,434,536; 5,636,284; 5,734,725; 5,761,316; 5,761,317; 5,802,182;
5,805,713; 5,848,165; 6,057,737; 6,411,720; and 6,631,195.
(all available online at www.uspto.gov )

In your particular case (power stage/output transformer/loudspeaker
interaction), key information is contained in #4,809,336; 4,995,084 (the
best); 5,133,014; 5,434,536; 5,636,284; 5,761,316; and 5,805,713.

In addition to the above, the following patents contain key information on
simulating the power supply sag: #5,635,872 and #5,909,145.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Bogac Topaktas
Fender once patented a relatively efficient tone stack simulation method:

"Simulated tone stack for electric guitar"
by Curtis et al.
US patent #6,222,110
www.uspto.gov

On Wed, June 18, 2014 4:01 pm, Tim Goetze wrote:
> [Andrew Simper]
>
>> On 18 June 2014 18:26, Tim Goetze  wrote:
>>
>>> ...  Thanks to
>>> the work of Yeh, I personally consider the tonestack a solved problem,
>>>  or at least one of least concern for the time being.
>>
>> A linear tonestack has been a solved problem way before Yeh wrote any
>> papers. Also I would not consider a mapping component values to direct
>> form 1 biquad coefficients a good way to simulate a tone stack when you
>> can easily preserve the time varying behaviour as well if you use
>> standard circuit simulation techniques like nodal analysis.
>
> I absolutely agree that this looks to be the most promising approach
> in terms of realism.  However, the last time I looked into this, the
> computational cost seemed a good deal too high for a realtime
> implementation sharing a CPU with other tasks.  But perhaps I'll need to
> evaluate it again?
>
> Cheers,
> Tim


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] A little diversion: what audio equipment do you use to judge DSP quality ?

2013-05-04 Thread Bogac Topaktas
Any high quality system would be suitable as long as you are aware of
its strengths and weaknesses. For instance, as indicated in my other
post, you may use Yamaha NS10s as long as you do not get misled by
their over-brightness.

I prefer working with set-ups that sound nice to my ears, and try to
mentally apply de-emphasis so that I do not get carried away during
fine-tuning.

Bogac.

>
> Ipod+earbuds, a Radio Shack HiFi system from the 90s? A nice modern
> surround system or multimedia speakers, or maybe some of the modern
> monitors like KRK, Genelecs, or those fun JBLs with self-correction?
>
> Or maybe big studio monitors like Tannoy, Quested, B&W, etc? Or
> audiophile systems, or a great headphone amp+preamp and Dr Dre's, etc etc?
>
> Theo V.
>
> P.S. Of course I've mentioned a few times I use anything from quality
> mobile phones, screen speakers and car-Pioneer to medium and large
> self-built quality monitoring, which I give the hardest tests, too:
> known mics with frequency compensated recordings of the system being
> played back on itself!



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Reviewing audio quality

2013-05-04 Thread Bogac Topaktas
> If you're serious about it then your audio quality monitoring setup should
> be like a studio's control room. A classic example is using Yamaha NS10Ms
> as near field monitors.

The NS10s are over-bright, i.e., they over-emphasize the upper mid-range
and highs. If you mix down just listening to them, you end up with
dull-sounding mixes. I've read about seasoned engineers placing thin
curtains in front of them to avoid this.

Of course, they are suitable for general-purpose DSP algorithm verification.
Though I would not use them solely for fine-tuning exciters, reverbs,
etc., as the HF over-emphasis may be misleading.

Bogac.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] OT: Sound Level Meter w/ RS-232/USB/WiFi Output

2012-08-28 Thread Bogac Topaktas
Hi Kenneth,

I just searched the phrase "Sound Level Meter usb" on eBay and
found 47 results. The ones most suitable for your requirements:

USB:

http://www.ebay.com/itm/New-Noise-Sound-Level-Digital-Decibel-dB-Meter-USB-/150471363165

http://www.ebay.com/itm/Digital-Sound-Pressure-Level-Meter-30-130-dB-Decibel-USB-Noise-Measurement-/280947659166

RS-232C port & software:

http://www.ebay.com/itm/SL5868P-Digital-Sound-Pressure-Noise-Level-Meter-30-130dB-dB-Decibel-USB-/170888901735

Bogac.

> [ Apologies for this only tangentially related email ... ]
>
> Does anyone know where I can obtain a "sound meter" (output in dB) whose
> current reading can be obtained by "polling" the device via a RS-232,
> USB, or WiFi connector/port?
>
> Thanks for any info/ideas!
>
>   -Kenneth
> --
> Prof Kenneth H Jacker   k...@cs.appstate.edu
> Computer Science Dept   www.cs.appstate.edu/~khj
> Appalachian State Univ
> Boone, NC  28608  USA


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Noise gate for guitar amplifiers and hysteresis

2012-07-04 Thread Bogac Topaktas
Hi Ivan,

Most noise gates have a single threshold level, but they differ in
their release behavior:

THAT Corporation Design Note 100  - A Fully Adjustable Noise Gate
http://www.thatcorp.com/datashts/dn100.pdf

Boss NF-1 Noise Gate pedal schematic diagram
http://www.hobby-hour.com/electronics/s/nf1-noise-gate.php

To the best of my knowledge, Rocktron's HUSH is the best noise
gate for guitar. You can find the details of its operation in
the following manuals:

http://www.rocktron.com/manuals/hush_thepedal.pdf

http://www.fullcompass.com/common/files/5068-Hush-Super-C%20Rocktron%20manual.pdf
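Regarding the G1-versus-G2 ambiguity in Ivan's question below: one common resolution (a generic sketch, not how any of the products above implement it) is to let a boolean gate state with two thresholds choose the *target* gain, and have a one-pole attack/release smoother interpolate toward it, so no instantaneous gain formula is ever needed between the thresholds. Function and parameter names here are hypothetical:

```python
def gate_gains(envelope, open_thresh, close_thresh,
               attack_coeff=0.5, release_coeff=0.99):
    """Hysteresis noise gate: a boolean state picks the target gain
    (1.0 when open, 0.0 when closed); a one-pole smoother removes
    clicks and chattering."""
    is_open = False
    gain = 0.0
    out = []
    for env in envelope:
        # Hysteresis: the state only flips at its own threshold.
        if is_open and env < close_thresh:
            is_open = False
        elif not is_open and env > open_thresh:
            is_open = True
        target = 1.0 if is_open else 0.0
        # Fast coefficient when opening (attack), slow when closing (release).
        coeff = attack_coeff if target > gain else release_coeff
        gain = coeff * gain + (1.0 - coeff) * target
        out.append(gain)
    return out
```

Between the two thresholds the gain simply keeps moving toward the last state's target, which sidesteps the question of choosing between G1 and G2 entirely.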

Bogac.

> Hello dear subscribers of Music DSP !
>
> I need your insights on a problem I have in a noise gate development
> for guitar amplifiers. To prevent the noise gate closing and opening
> repeatedly on sustained notes, creating distortion artefacts or
> "chattering", developers include a hold functionality or a hysteresis
> control. These allow the noise gate to open and to close at different
> threshold values.
>
> I have a very simple noise gate working as a VST plug-in, including
> variable threshold, ratio, attack and release controls (expander-like).
> I won't talk about knees, to simplify the problem. The VCA attenuates the
> signal if my RMS attack/release envelope follower is under the threshold.
> However, I have difficulty implementing the "anti-chattering"
> controls, mainly the hysteresis effect.
>
> Example :
>
> - I have a null signal. That means my gate is closed, and the noise gate
> output is null too.
> - Then, the amplitude of the input signal increases. The gate is still
> closed, until the envelope is higher than the "opening threshold". That
> means I have to apply a gain to my signal which is G1 = (envelope value
> / opening threshold) power (ratio - 1) after a few simplifications.
> - Next, the envelope amplitude is higher than the threshold. The output
> signal is equal to the input signal.
>
> Things become more complicated now :
>
> - The amplitude of the signal decreases, but the envelope stays higher
> than the "closing threshold", which is lower than the "opening
> threshold" of course. The VCA still applies a gain of one and the gate
> is still open.
> - Next, the envelope amplitude is below the "closing threshold". The gate
> closes, and a gain of G2 = (envelope value / closing threshold)
> power (ratio - 1) is applied to the input signal.
> - Then, the amplitude of the signal increases, but the envelope amplitude
> stays lower than the "opening threshold". What is the attenuation I must
> apply to my signal? I can't apply G2, because it depends on the
> "closing threshold", and I can't apply G1, because the gain must not
> change too fast.
>
> So my question is: what is the best way to handle the transition between
> the gain G1 and the gain G2, while preventing chattering and clicks?
>
> Thanks in advance !
> Ivan Cohen.
>




Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-28 Thread Bogac Topaktas
Dear Andreas,

You are absolutely right. Unfortunately, this is common
practice among large corporations: they use their financial
power first to pass the initial examination stage and then to
block re-examination requests.

Regarding the company in question, they have even patented
companding systems that were themselves patented 25 years
ago! They simply state that their patent implements the
earlier system in the digital domain, and voila :)

We are lucky that the European Patent Office does not operate
the same way... (at least for now).

bogac.

> Dear list,
>
> I just stumbled upon this recent patent application (publication date
> 2010-11-18):
>
> http://www.freepatentsonline.com/y2010/0293214.html
>
> It appears to be yet another try in the long and rather unpleasant
> history of patents on partitioned convolution.
>
> As far as I understand this patent application, it does not contain
> anything that is not common knowledge about uniformly partitioned
> convolution as exposed in the relevant papers or implemented in
> applications such as BruteFIR.
>
> It is notable that the author mentions the frequency domain delay line
> (FDL) approach and claims a possible performance improvement compared to
> FDL.
>
> In my opinion, the only difference to the FDL approach is that the delay
> line-like structure does not store partial results as in the FDL
> approach, but frequency-domain representations of past input blocks.
> Thus, only the point in time where the operations (complex
> multiplications and additions) are performed is changed. But the
> operations performed are identical and thus the algorithmic complexity
> is the same.
>
> In my opinion, the description in the Kulp paper [1] includes the
> claimed algorithm quite well.
>
> Regards,
> Andreas
>
> [1] Kulp, Barry D., Digital Equalization Using Fourier Transform
> Techniques AES 85th Convention, 1988, AES preprint 2694
>
>
> --
>
> Dipl.-Inf. Andreas Franck
>
> Fraunhofer Institut für
> Digitale Medientechnologie IDMT
> Ehrenbergstrasse 31
> 98693 Ilmenau
>
> mail f...@idmt.fraunhofer.de
> phone+49 (0) 3677 467 386
> fax  +49 (0) 3677 467 4386
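For readers following the algorithmic argument above: the identity underlying every uniformly partitioned scheme is that splitting the impulse response into equal blocks and summing the block convolutions, each delayed by its block offset, reproduces the full convolution exactly; the FDL variants differ only in where the frequency-domain partial results are buffered. A pure-Python sketch, with direct convolutions standing in for the FFT-based ones of a real FDL implementation:

```python
def conv(x, h):
    """Direct linear convolution, length len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def partitioned_conv(x, h, B):
    """Uniformly partition h into length-B blocks; convolve each block
    with x and add the result delayed by the block's offset k.  In an
    FDL the per-block convolutions share one FFT of each input block,
    but the arithmetic identity demonstrated here is the same."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k in range(0, len(h), B):
        part = conv(x, h[k:k + B])
        for n, v in enumerate(part):
            y[k + n] += v
    return y
```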




Re: [music-dsp] resonance

2011-01-03 Thread Bogac Topaktas
> Why don't they work well when parameters are changed more often?

See Dr. Robin J. Clark's Ph.D. Thesis (esp. Chapter 6) for more info:

http://www.tech.plym.ac.uk/spmc/pdf/audio/RobClarkPhD.pdf
http://www.tech.plym.ac.uk/spmc/rclark.html



Re: [music-dsp] Wavelet algorithm for Time-Frequency Analysis

2010-12-27 Thread Bogac Topaktas
>
>>Hi!
>>
>> I'd like to compute the continuous wavelet transform (CWT) of my input
>> signal
>> (audio file) with the morlet wavelet, to get a time frequency plane
>> which
>> corresponds to the time frequency content of the audio signal.

BTW, why did you choose the wavelet transform?

Did you consider the Gabor transform or Wigner distribution analysis,
or a combination of the two? See:

"Wigner Distribution Representation and Analysis of Audio Signals: An
Illustrated Tutorial Review"
Preis and Georgopoulos
JAES Volume 47 Issue 12 pp. 1043-1053; December 1999

http://en.wikipedia.org/wiki/Gabor%E2%80%93Wigner_transform






Re: [music-dsp] Wavelet algorithm for Time-Frequency Analysis

2010-12-27 Thread Bogac Topaktas
Hi Stefan,

Have you seen the following papers?:

"An Algorithm for the Continuous Morlet Wavelet Transform"
Richard Buessow
http://arxiv.org/abs/0706.0099

"Fast Algorithms for Discrete and Continuous Wavelet Transforms"
Rioul and Duhamel
http://perso.telecom-paristech.fr/~rioul/publis/199102rioulduhamel.pdf

"Fast Computation of the Continuous Wavelet Transform Through Oblique
Projections"
M. Vrhel, C. Lee, and M. Unser
Proc. 1996 IEEE Int. Conf. on Acoustics, Speech, and Signal Processing
(ICASSP-96), vol. 3, pp. 1459-1462, 1996

and

http://www.wavelet.org/phpBB2/viewtopic.php?t=3147
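As a baseline for what the fast algorithms above accelerate, one row of the Morlet scalogram can be computed by direct correlation with a truncated wavelet, at O(N*M) per scale (M = wavelet support). A pure-Python sketch; the w0 = 6 center frequency and the 4-sigma truncation are conventional choices, not taken from the papers:

```python
import cmath
import math

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at time t for a given scale."""
    u = t / scale
    return cmath.exp(1j * w0 * u) * math.exp(-0.5 * u * u) / math.sqrt(scale)

def cwt_row(x, scale, support=4.0, w0=6.0):
    """One scalogram row by direct correlation, O(N*M) per scale.
    The linked papers replace this (and the per-scale FFT method
    Stefan describes) with faster recursive/multirate schemes."""
    half = int(support * scale)
    taps = [morlet(k, scale, w0) for k in range(-half, half + 1)]
    out = []
    for n in range(len(x)):
        acc = 0j
        for i, w in enumerate(taps):
            m = n + i - half          # signal index under tap i
            if 0 <= m < len(x):
                acc += x[m] * w.conjugate()
        out.append(acc)
    return out
```

A matched sinusoid (frequency w0/scale rad/sample) produces a much larger coefficient magnitude than a mismatched one, which is what the scalogram visualizes.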

>Hi!
>
> I'd like to compute the continuous wavelet transform (CWT) of my input
> signal
> (audio file) with the morlet wavelet, to get a time frequency plane which
> corresponds to the time frequency content of the audio signal.
>
> Right now, I'm implementing this by computing one FFT per frequency,
> multipying
> the FFT of the morlet wavelet with the FFT of the audio signal, and then
> using
> the IFFT. This gives me the convolution between one morlet wavelet and the
> input signal. I wonder if there is a faster algorithm to achieve the same
> result, because right now, I need
>
>   O(N log N * K)
>
> steps, where N is the length of the audio input signal and K is the number
> of
> frequencies I want to have in my time-frequency plot.
>
>Cu... Stefan
> --
> Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan




Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread Bogac Topaktas
> Would there be a different answer for the choice of
> methods for use in real time applications?
> e.g. distortion modelling for guitars,
>  time domain pitch shifting.

Phase-compensated polyphase IIR:

http://www.wseas.us/e-library/conferences/crete2001/papers/473.pdf
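For context, the crudest real-time resampler reads the input at a fractional position and linearly interpolates between the two neighbouring samples; it is cheap but rolls off and aliases badly at high frequencies, which is what designs like the linked polyphase IIR improve upon. A minimal sketch (function and parameter names hypothetical):

```python
def resample_linear(x, ratio):
    """Resample x by `ratio` (out_rate / in_rate) using linear
    interpolation, the simplest fractional-delay interpolator."""
    n_out = int((len(x) - 1) * ratio) + 1
    out = []
    for n in range(n_out):
        pos = n / ratio          # fractional read position in the input
        i = int(pos)
        frac = pos - i
        if i + 1 < len(x):
            out.append((1.0 - frac) * x[i] + frac * x[i + 1])
        else:
            out.append(x[i])     # clamp at the final input sample
    return out
```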




Re: [music-dsp] who else needs a fractional delay.

2010-11-20 Thread Bogac Topaktas
>Ross Bencina wrote:
> From my point of view the more difficult thing is recovering a stable
> wordclock from a jittery packet stream -- and getting this to start up
> quickly enough to be useful. In the past I've used an Ordinary Least
> Squares
> regression on packet timestamps to estimate the incoming sample rate and
> intercept (time offset). This time I have a mechanism for time offset
> based
> on the assumption that the "most on time" packets represent the best time
> offset, but the rate estimator is still a bit of a mystery... I have a
> Kalman filter version that works about as well as the OLS rate detector
> and
> is a little cheaper -- lately I've been reading up on "robust regression"
> methods (LMS and TLS) -- they're pretty costly but I'm hoping they'll
> allow
> me to lock on to the word clock more quickly.
>
> I hope that's clear. Suggestions on techniques or methods I haven't already
> mentioned would be most appreciated...

I think what you need is a robust state estimator, for instance an
H-infinity filter. Check out the estimation theory literature for more
info; Dan Simon's book is a good resource:

"Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches"
ISBN-10: 0471708585
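For comparison with the robust estimators suggested above, the OLS scheme Ross describes amounts to a straight-line fit of cumulative sample count against packet arrival time: the slope estimates the remote sample rate, the intercept the time offset. A minimal sketch, with no windowing or outlier rejection (which a production clock-recovery loop would need, and which the robust methods improve on):

```python
def estimate_rate(arrival_times, sample_counts):
    """Ordinary least squares fit of cumulative sample count vs. packet
    arrival time.  Returns (rate in samples/second, offset in samples)."""
    n = len(arrival_times)
    mt = sum(arrival_times) / n
    ms = sum(sample_counts) / n
    num = sum((t - mt) * (s - ms)
              for t, s in zip(arrival_times, sample_counts))
    den = sum((t - mt) ** 2 for t in arrival_times)
    rate = num / den               # slope: estimated sample rate
    offset = ms - rate * mt        # intercept: estimated time offset
    return rate, offset
```

On jitter-free timestamps the fit is exact; with jittery packet arrivals the estimate converges as the regression window grows, which is precisely the slow lock-on problem Ross mentions.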



