Re: [music-dsp] TMS320 commercial synth/effects?

2020-07-20 Thread Vladimir Pantelic
TMS320 is a blanket name for a lot of different TI DSP products, from the
fixed-point C54x to the multi-core floating-point C66x variants and beyond,
some of them even having quite powerful ARM cores and extra hardware
acceleration added...

On Mon, Jul 20, 2020, 22:18 robert bristow-johnson 
wrote:

>
> The H3000 is a legendary piece of gear.  I've worked with the two main
> designers of it and they both live in the same town in Vermont that I do.
> I did not get to work on that product line when I joined Eventide in late
> 1991.
>
> From a simple and effective user-interface POV, it's also quite well
> designed for its vintage (1989, I think).
>
> But the TMS320 is not what made the H3000 great.  It became great in spite
> of the lack of greatness in the 16-bit TI DSP chip.  In fact, the H3000 has
> *three* TMS320 chips in it and to get the box to do great things, they had
> to divide up very difficult complex programming among the 3 DSPs and do
> some synchronized parallel processing where they were passing data between
> the chips and timing their instructions to do that.
>
> In today's world, I don't see it as a good idea to use the TMS320 for any
> synth/effects at all.  What do you have?  An old dev board?
>
>
> --
>
> r b-j  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
> > On July 20, 2020 7:42 AM Tristan  wrote:
> >
> >
> > Hi Peter,
> >
> > The Eventide H3000 uses TMS32010 DSP chips.
> >
> > https://reverb.com/news/tech-behind-eventide-h3000-ultra-harmonizer
> >
> > /Tristan
> >
> > On Mon, Jul 20th, 2020 at 7:52 PM, "Peter P." 
>
> > wrote:
> >
> > > Dear list,
> > >
> > > I am looking for examples of commercial synthesis and audio effects
> > > products using DSPs from the TMS320 family.
> > >
> > > Thanks in advance for any pointers!
> > > cheers, Peter
> > >
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] OT List Reply To

2018-10-24 Thread Vladimir Pantelic
1) http://www.unicom.com/pw/reply-to-harmful.html

2) http://marc.merlins.org/netrants/reply-to-useful.html

3) http://marc.merlins.org/netrants/reply-to-still-harmful.html

4) tbd :)

personally I'm in the 2) camp :)

On 24.10.2018 02:50, gm wrote:
> It's quite a nuisance that the list's reply-to is set to the person who
> wrote the mail
> and not to the list address



Re: [music-dsp] WSOLA on RealTime

2018-09-27 Thread Vladimir Pantelic
On Thu, Sep 27, 2018, 19:39 Alex Dashevski  wrote:

> Hi,
> The current code has a problem.
> If I send you the current code, can you help me with the implementation? Are
> you familiar with Android?
>

comp.dsp would read: "please do my homework for me"

Re: [music-dsp] Antialiased OSC

2018-08-22 Thread Vladimir Pantelic

On 22/08/18 17:00, Theo Verelst wrote:

There's a lot of ways to look at wave tables and their use, for instance the way
I used one quite some years ago in an Open Source hardware analog synthesizer
simulation that was fairly advanced for its time


can you name it?




Re: [music-dsp] resampling

2018-07-22 Thread Vladimir Pantelic
https://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License
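For the 48 kHz <-> 8 kHz question quoted below, here is a minimal sketch of one possible approach using SciPy's polyphase resampler (this is not the LGPL library under discussion, just an illustration):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 8000            # 6:1 ratio in each direction
t = np.arange(fs_in) / fs_in           # 1 second of audio
x = np.sin(2 * np.pi * 440.0 * t)      # 440 Hz test tone

# Input path: 48 kHz -> 8 kHz (resample_poly applies an
# anti-aliasing lowpass internally before decimating)
y = resample_poly(x, up=1, down=6)

# Output path: 8 kHz -> 48 kHz
z = resample_poly(y, up=6, down=1)

print(len(x), len(y), len(z))          # 48000 8000 48000
```

On Android/NDK the same structure applies; only the resampler implementation changes.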

On Sun, Jul 22, 2018, 21:23 Alex Dashevski  wrote:

> Hi,
> Could you explain how to use it under the LGPL? I can't understand it.
> Thanks,
> Alex
>
> 2018-07-19 21:28 GMT+03:00 Esteban Maestre :
>
>> Hi Alex,
>>
>>
>> This is a good read:
>>
>> https://ccrma.stanford.edu/~jos/resample/
>>
>>
>> Using Google, I found somebody who used the LGPL code available at
>> Julius' site:
>>
>> https://github.com/intervigilium/libresample
>>
>>
>> Good luck!
>>
>> Esteban
>>
>> On 7/19/2018 2:15 PM, Alex Dashevski wrote:
>>
>> Hi,
>>
>> I need to convert 48 kHz to 8 kHz on the input and 8 kHz back to 48 kHz
>> on the output.
>> Could you explain how to do it?
>> I need to implement this on Android (NDK).
>>
>> Thanks,
>> Alex
>>
>>
>>
>> --
>>
>> Esteban Maestre
>> Computational Acoustic Modeling Lab
>> Department of Music Research, McGill University
>> http://ccrma.stanford.edu/~esteban
>>
>>
>

Re: [music-dsp] Clock drift and compensation

2018-01-29 Thread Vladimir Pantelic
You have two unknowns: the frequency of the sampled signal and the difference in
sample rate between sender and receiver. With no further information given, you
can only assume one and determine the other based on it.
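To make the ambiguity concrete: the apparent frequency at the receiver is f_in * Fs_rx / Fs_tx, so the same observation is explained equally well by a shifted tone or a drifted clock. A quick numeric check, using the figures from the example below:

```python
f_in = 1000.0     # transmitted tone, Hz
fs_tx = 8000.0    # sender sample rate, Hz
fs_rx = 7999.0    # actual receiver sample rate, Hz

# The receiver plays its samples back believing its clock runs at fs_tx,
# so the tone gets scaled by fs_rx / fs_tx:
f_apparent = f_in * fs_rx / fs_tx
print(f_apparent)  # 999.875
```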




On 01/29/2018 06:27 PM, Benny Alexandar wrote:

Hi GM,

Thanks for the suggestion. Yes, it should work for sine tone kind of signals.

I have this doubt on sampling and drift.

  - Suppose the transmitter is sampling a sine tone, say 1 kHz (Fin), at an
8 kHz (Fs) sample rate.

This means 8 samples correspond to one cycle of the 1 kHz tone.

- The receiver is actually sampling at 7.999 kHz because of drift in its crystal,
but I'm assuming my receiver has a sample rate of 8 kHz and takes 8 samples
per cycle,

which gives 999.875 Hz and not 1 kHz.

So, how do I detect this drift and take only as many samples as correspond
to the actual receiver sample rate?

In this case 7.999 samples correspond to 1 kHz.

-ben


*From:* music-dsp-boun...@music.columbia.edu 
 on behalf of gm 

*Sent:* Monday, January 29, 2018 1:29 AM
*To:* music-dsp@music.columbia.edu
*Subject:* Re: [music-dsp] Clock drift and compensation

diff gives you the phase step per sample,
basically the frequency.

However, the phase will jump back to zero periodically when it exceeds 360°
(i.e. when it wraps around); in this case diff will give you a wrong result.

So you need to "unwrap" the phase or the phase difference, for example:


diff = phase_new - phase_old
if phase_old > Pi and phase_new < Pi then diff += 2Pi

or similar.
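The pseudocode above, written out as a runnable function (one common formulation, assuming atan2-style phase in (-pi, pi]):

```python
import math

def unwrapped_diff(phase_old, phase_new):
    """Phase step per sample, corrected for the wrap-around of atan2()."""
    diff = phase_new - phase_old
    # A forward-running phase that crosses +pi jumps down by ~2*pi
    # (and vice versa for a backward-running phase), so fold it back:
    if diff < -math.pi:
        diff += 2.0 * math.pi
    elif diff > math.pi:
        diff -= 2.0 * math.pi
    return diff
```

For a tone well below Nyquist the true per-sample step never exceeds pi in magnitude, so this threshold is safe.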


Am 28.01.2018 um 17:19 schrieb Benny Alexandar:

Hi GM,

>> HT -> Atan2 -> differentiate -> unwrap
Could you please explain how to find the drift using HT,

HT -> gives real(I) & imaginary (Q) components of real signal
Atan2 -> the phase of an I Q signal
diff-> gives what ?
unwrap ?

-ben



*From:* music-dsp-boun...@music.columbia.edu 
 
 
 on behalf of gm 
 

*Sent:* Saturday, January 27, 2018 5:20 PM
*To:* music-dsp@music.columbia.edu 
*Subject:* Re: [music-dsp] Clock drift and compensation

I don't understand your project at all so not sure if this is helpful,
probably not,
but you can calculate the drift or instantaneous frequency of a sine wave
on a per-sample basis
using a Hilbert transform:
HT -> Atan2 -> differentiate -> unwrap
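The whole chain can be sketched with SciPy's analytic-signal helper (an illustration of the idea, not gm's actual code):

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(8000) / fs
x = np.sin(2 * np.pi * 1000.0 * t)      # 1 kHz test tone

analytic = hilbert(x)                    # I + jQ
phase = np.angle(analytic)               # atan2(Q, I), in (-pi, pi]
freq = np.diff(np.unwrap(phase)) * fs / (2 * np.pi)  # instantaneous freq, Hz

# Trim the edges, where the Hilbert transform is unreliable:
print(round(float(np.median(freq[100:-100])), 3))    # ~1000.0
```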











Re: [music-dsp] [dumb question] do Eurorack audio and CV signals use the same connectors?

2017-11-14 Thread Vladimir Pantelic
for completeness' sake this applies to power connections too ;)

On Nov 14, 2017 09:17, "Ezra Buchla"  wrote:



I must add for completeness that you should also be prepared to accept
+/-12 V on the *output*. (see for example here:
https://www.whimsicalraps.com/pages/run-a-word-of-warning)

Re: [music-dsp] Can anyone figure out this simple, but apparently wrong, mixing technique?

2016-12-10 Thread Vladimir Pantelic

On 10.12.2016 21:42, Eric Brombaugh wrote:

This is what happens when you let "software architects" try to do DSP.

It seems that what he's doing is maximizing instantaneous dynamic range by
subtracting a mixing product. That achieves his goal of normalizing the sum but
adds in inharmonic components that weren't in the original signals.


we prefer to call that "colour" :)
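For what it's worth, assuming the blend being criticized is the often-circulated x + y - x*y trick (an assumption on my part; the thread doesn't quote the formula), the product term is exactly what injects sum and difference frequencies, by the identity sin(a)*sin(b) = 0.5*(cos(a-b) - cos(a+b)):

```python
import numpy as np

fs = 48000.0
t = np.arange(4800) / fs
x = np.sin(2 * np.pi * 440.0 * t)      # A4
y = np.sin(2 * np.pi * 523.25 * t)     # C5, not harmonically related to A4

mix = x + y - x * y                     # the suspect "normalizing" blend

# The x*y term lands at 83.25 Hz and 963.25 Hz -- in neither input:
ident = 0.5 * (np.cos(2 * np.pi * (523.25 - 440.0) * t)
               - np.cos(2 * np.pi * (523.25 + 440.0) * t))
print(np.allclose(x * y, ident))        # True
```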





Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-01 Thread Vladimir Pantelic
Apart from making sure your task is not interrupted on that core, you
have to take the latencies (aka buffering) on both the MIDI input and
ALSA output sides into account.

https://www.google.com/search?q=alsa+low+latency seems to yield some
good pointers to start reading

for the MIDI side, I would look into:

https://www.google.com/search?q=linux+low+latency+uart
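On the ALSA side, the floor on output latency follows directly from the buffer geometry (a back-of-the-envelope sketch; the function name and parameters are illustrative, not part of the ALSA API):

```python
def alsa_output_latency_ms(period_frames, n_periods, rate_hz):
    """Worst-case time from writing a frame until it reaches the DAC:
    the whole ring buffer (period_frames * n_periods) must drain first."""
    return 1000.0 * period_frames * n_periods / rate_hz

# Two 64-frame periods at 48 kHz -- a fairly aggressive low-latency setup:
print(alsa_output_latency_ms(64, 2, 48000))   # ~2.67 ms
```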



On 01.02.2016 16:10, Scott Gravenhorst wrote:
> I'm looking for advice regarding the design of a MIDI synthesizer.
> 
> In the past, I've worked with FPGAs and with dsPIC microprocessors to
> create successful MIDI synthesizer designs.  These designs all had one
> thing in common: the processing consisted of code to generate a single
> frame (2 samples for stereo) which was presented to the DAC at each DAC
> interrupt.  No ring buffer was used because the code was guaranteed to
> complete generation of new samples before the DAC "need more" interrupt was
> received.  This is possible because in both FPGA and dsPIC cases, there
> is no operating system (bare metal) and thus the time required to
> complete calculations was easy to know.
> 
> I've recently purchased a Raspberry Pi 2 which uses a 900 MHz 4 core
> ARMv7 processor.  I've installed a Cirrus Logic - Element 14 sound board
> (similar to Wolfson) for high quality audio.  I've also built a physical
> MIDI interface which connects MIDI to the console UART (with console
> features disabled).  I've been learning about ALSA programming and have
> used the example program pcm.c to experiment with generating a sine
> audio output stream using the sine function.  This is, from my
> viewpoint, a rudimentary synthesizer.  I've also been experimenting with
> threads and CPU affinity as well as isolcpu to isolate cores.  My
> assumption (which could be incorrect) is that isolated cores will run at
> near bare metal efficiency because the interrupts from random devices
> and other mundane kernel tasks will be handled by the core or cores left
> for the kernel's use and that the clocks of the isolated core or cores
> can be used to generate samples with more time deterministic properties
> than would be without isolated cores.
> 
> Advice regarding this endeavor would be appreciated.  Questions include
> which of the transfer methods will come closest to my goal of low
> latency (from the time of MIDI message receipt to sound coming out of the
> speakers).
> 
> -- Scott Gravenhorst
