Re: [music-dsp] TMS320 commercial synth/effects?

2020-08-04 Thread Sound of L.A. Music and Audio

Hi

Here is something with an early 320:
http://www.96khz.org/oldpages/activediffusion.htm
It later became part of a commercial product.

It was based on this work:
http://www.96khz.org/oldpages/echocancelling2.htm

It was later ported to a bare-bones music instrument:
http://www.96khz.org/htm/chameleonsynth.htm

In fact, it is all about echo production and cancellation.

regards
jürgen


On 20.07.2020 at 11:52, Peter P. wrote:

Dear list,

I am looking for examples of commercial synthesis and audio effects
products using DSPs from the TMS320 family.

Thanks in advance for any pointers!
cheers, Peter

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] CfP: Special Issue "Machine Learning Applied to Music/Audio Signal Processing"

2020-06-23 Thread Sound of L.A. Music and Audio


Hi all,

does this also cover automated speech recognition? I am working on a
system which optimizes patterns for recognition in such a way that it
adapts to different ways of speaking, to distinguish dialects and such.

Jürgen
https://www.xing.com/profile/Juergen_Schuhmacher


On 16.06.2020 at 19:56, Lerch, Alexander G wrote:

Dr. Alexander Lerch

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] An example video: C to FPGA programming

2020-01-17 Thread Sound of L.A. Music and Audio

I have been following this "C to VHDL" topic for 15 years now. Mentor
already had a tool to do it in 2005.

Up to now, the basic issue never changed:

C/C++ does not offer the constructs needed to define application-specific
hardware. But exactly that is the point of FPGA programming: defining an
intelligent solution for a specific problem. That is when FPGAs become
efficient, and they become very efficient if you manage to come up with an
intelligent hardware structure. If MATLAB HDL Coder or Xilinx HLS were to
derive such a structure from a C description, the C code would first have
to contain the solution, and MathWorks or Xilinx would have to have
anticipated that particular idea and put it into their libraries.

Neither is the case.

This way you only get standard hardware that performs your function; you
save some work in balancing the timing and you simplify vector-width
handling and the trimming of calculations.

The bulk of such a solution is in any case that known, existing cores get
instantiated automatically instead of being instantiated manually by copy
and paste.



On 10.01.2020 at 11:25, Patric Schmitz wrote:

Hi Theo,

I believe the link should be:

     https://www.youtube.com/watch?v=kfWNfjcIO2Q

Thanks for sharing,
Patric

On 1/10/20 11:18 AM, Theo Verelst wrote:

Hi all

Maybe it's not everybody's cup of tea, but I recall some here are
(like me) interested in
music applications of FPGA based signal processing. I made a video
showing a real time
"Silicon Compile" and test program run on a Zynq board using Xilinx's
Vivado HLS to create
an FPGA bit file that initializes a 64k short integer fixed point sine
lookup table
(-pi/2 .. pi/2) which can be used like a C function with argument
passing by a simple test
program running on ARM processors.

The main point is the power the C compilation can provide the FPGA
with, and to see the use of the latest 2019.2 tools at work with the
board, some may find that useful or
entertaining:

    https://youtu.be/Nel6QAvmGcs

Theo V

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sound Analysis

2019-01-05 Thread Sound of L.A. Music and Audio

Hello all

Is there true synchronisation to the given power frequency, or does it have
slip like an asynchronous motor?

Even if it is synchronous in the long term, there is certainly jitter.



On 02.01.2019 at 02:50, Ben Bradley wrote:

as the Hammond tone generation is mechanically tied to the 50Hz or
60Hz power frequency, and I don't think the line frequency has ever
been regulated as accurately as 1 cent (1/100th semitone). For
accurate recordings, you'd need to run the organ off a power inverter
with a quartz crystal derived power frequency. Even with that, you're
still not accounting for variations caused by the mechanical
geartrain. I can imagine that has an effect, but not how much.


Anyway: for this particular recording approach, you will have to identify
the base frequency and its vibrato / tremolo with an equation and curve
fitting; an FFT will most probably fail. A small fitting sketch follows below.
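
A minimal sketch of such a fit (Python with NumPy/SciPy; the signal name x,
the rate fs and the 60 Hz example are purely illustrative, not from this
thread):

    # Estimate the base frequency of a nearly periodic tone by
    # least-squares fitting a sinusoid instead of searching an FFT peak.
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_base_frequency(x, fs, f0_guess):
        t = np.arange(len(x)) / fs

        def model(t, a, f, phi, offset):
            return a * np.sin(2 * np.pi * f * t + phi) + offset

        p0 = [np.std(x) * np.sqrt(2), f0_guess, 0.0, np.mean(x)]
        params, _ = curve_fit(model, t, x, p0=p0)
        return params[1]              # fitted frequency in Hz

    # hypothetical example: a tone-wheel note drifting slightly off 60 Hz
    fs = 8000
    t = np.arange(fs) / fs
    print(fit_base_frequency(np.sin(2 * np.pi * 60.02 * t), fs, 60.0))

Extending the model with a slow frequency-modulation term would capture the
vibrato as well.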


jürgen

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] variations on exponential curves

2018-10-21 Thread Sound of L.A. Music and Audio

Hi all

On 01.10.2018 at 09:21, Frank Sheeran wrote:
> current = previous * multiplier + delta

I am using this multiplication with an offset to sequentially generate and
detune the frequencies for music channels:


Playing with the Commodore VIC-20 and C-64, I found the ratio 185/196 to be
a good approximation of the relative frequency distance between two adjacent
notes. For this kind of tuning I use a cyclic iteration to generate the
frequency values for organ-like instruments, preparing the values for all
channels in sequence and leading to a logarithmic scaling (12 notes per
ratio of 2). It is very close to equal-tempered tuning:

http://www.96khz.org/oldpages/musicfrequencygeneration.htm

Now, regarding the topic: adding an offset during the multiplication, as
mentioned here, helps to detune this logarithmic scaling in such a way that
frequencies at the lower end can be lower than the math "says", while higher
frequencies can be slightly higher. This is necessary for piano tuning, to
account for the stretch often observed in real piano tunings. A small sketch
follows below.
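
A minimal sketch of this iteration (Python; the stretch offset value is
purely illustrative and not a calibrated piano curve):

    # Generate a descending equal-tempered-like scale by cyclic
    # multiplication with 185/196 (very close to 2^(-1/12)), plus an
    # optional additive offset that pushes low notes slightly flat.
    def generate_scale(f_start=880.0, steps=48, stretch=0.0):
        freqs = [f_start]
        f = f_start
        for _ in range(steps):
            f = f * (185.0 / 196.0) - stretch   # multiplier + offset
            freqs.append(f)
        return freqs

    for f in generate_scale(stretch=0.005)[:13]:   # one octave downward
        print(round(f, 3))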


This tuning can also help to handle the psychoacoustic non-linearity
described here:

https://de.wikipedia.org/wiki/Mel

Around 2005, I implemented such a detailed, adaptively scaled logarithmic
curve in measurement systems built to drive and test industrial hearing
systems / cochlear implants for a customer.


After having dealt with these topics, I started experimenting with this in
music applications by duplicating MIDI channels and using individual
harmonics with detuned frequencies, rather than creating harmonics with
non-linear circuits, which automatically produce harmonics hard-linked to
the base frequency.


You may try out the difference yourself.

Jürgen
www.96khz.org

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Creating new sound synthesis equipment

2018-08-09 Thread Sound of L.A. Music and Audio

Hello Theo

On 08.08.2018 at 20:03, Theo Verelst wrote:

For instance, when an FPGA board, cheaper than the CPU of a PC, beats the PC
in a practical sense, there's every reason to prefer that solution,
especially if the tools are getting more advanced than C compilers on a
moderately functioning PC multi-tasking platform.


Yes, right, but do you think THIS is the case?

I can hardly see an FPGA board ever being as cost-effective as a CPU board
with the same power. The same goes for the EDA tools: C/C++ compilers,
simulation, verification and so on are much better, easier and quicker in
the world of software and CPUs. Material costs and development time are the
main aspects driving companies to use CPU systems and to replace FPGAs
wherever possible.


Jürgen


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Creating new sound synthesis equipment

2018-07-26 Thread Sound of L.A. Music and Audio

Hi Paula and others

I have written so many articles about where and when to use FPGAs for wave
synthesis that I cannot count them anymore. Just a few short words in reply:


I agree that FPGAs offer design techniques that cannot be realized with
DSPs. But I hardly see them being used in music instruments. The reason
might be that most people switch from C++ to the FPGA world and simply try
to copy their methods to VHDL, so they do not make use of all of its
abilities. Another point is inspiration about what one can actually do :-)


What I mostly see is the use of pure calculation power, and here the
advantage of FPGAs keeps decreasing. When I started with FPGAs, there were
many more reasons to use them than there are nowadays.


Starting with my system, I implemented things like S/PDIF transceivers,
PWM/PDM converters and sample rate converters in the FPGA just to overcome
the limits of the existing chips. Today a lot of that is obsolete, since
dedicated chips exist and/or functions like S/PDIF are already built into
microprocessors. There is no need to waste FPGA resources on them.


I see the same with my clients:

For instance, a wave generation application from 2005 for one of my clients,
formerly done with a Cyclone II FPGA, now runs on two ARM processors, since
they have overtaken the FPGA (and are cheaper!). A radar application from
2008, done in a Virtex with a PowerPC, is now partly performed by an Intel
i7 multi-core system - even the FFT! Same reasons.


So the range of "must be an FPGA" cases in the audio field is somehow
shrinking. This is why I wonder why music companies are starting with FPGAs
now. When I talked to companies to present my synth, there was little
interest. Maybe FPGAs were too mysterious for some of them. :-)


Well, the advantage of the high sample rate has always been there, but
people mostly do not see the necessity. At that point in time the discussion
was about increasing audio quality to 96kHz - now everybody listens to mp3,
so what do we need better quality for?


What changed?

Audio and music clients hardly require better hardware, which is also a
matter of understanding: I recently had a discussion about bandwidth in
analog systems and which sample rate we have to apply to represent the
desired pulse waves correctly. The audio/loudspeaker experts came to totally
different results than the engineers for ultrasonic wave processing, who
were closer to my proposals, although both had the same frequency range in
mind. Obviously, physics in the music business is different.


Maybe I should also post those questions here :-)

The same goes for MIDI (my favourite topic):

When talking to musicians, I often hear that MIDI processing and knob
scanning can be done with a little microprocessor because MIDI is slow.
Conversely, there is supposedly no need for fast MIDI, since people cannot
press that many knobs at the same time, "we" cannot hear MIDI jitter since
musicians do not totally stick to the beat either, and so on.


The facts are different, and in the "non-music business" they are again not
even a subject of discussion. In the industrial field, data transmission
speed, bandwidth, jitter and phase noise are calculated, and the design is
done correctly to avoid issues.


To me, MIDI appeared to be a limiter on the evolution of synthesizers as
soon as I recognized it and understood the needs. I have had a million
discussions about that too. You may know about my self-designed high-speed
MIDI. The strange thing is that the serial transmission rate of simple PC
UARTs already exceeded 900 kbit/s 15 years ago, while MIDI was still stuck
at 31.25 kbit/s.


I think THIS is also a big reason why some people moved to VST, in order to
avoid wiring and synchronisation issues. Although even with USB they might
still run into problems getting their ten-finger chord transformed into
sound quickly enough under Windows :-)




On 26.07.2018 at 12:30, pa...@synth.net wrote:

Rolf,

  My tuppence worth ;)

  I think where FPGAs score is in their ability to do lots of things at 
once, something not possible with a CPU or DSP. So going from Mono to 
poly is often quite simply a copy/paste (ok, I'm over simplifying it).


  I 100% agree about offloading things like USB and MIDI to a CPU, which 
is where the Zynq and Cyclone SoC ranges really come into their own.


  The main advantage over softsynths (like VSTs, etc) is that musicians 
prefer a "tactile" surface rather than a keyboard/mouse when "playing". 
Though I know a lot of composers (including film composers) who prefer 
scoring using VSTs.


  I also agree that MIDI is now at a stage where it's not adequate to 
meet the demands of modern 

Re: [music-dsp] resampling

2018-07-22 Thread Sound of L.A. Music and Audio


This code is also dangerous "LGPL" :-)

Seriously, I'm afraid this is also too much for him. Code is not really a
good way to explain solutions; I prefer to clarify the approach and let
people code it themselves.


Let's try it this way:

1. Apply an anti-aliasing filter with an edge frequency of about 2-3 kHz
and a stop-band frequency of no more than 4 kHz to meet the 8 kHz sampling
rate. Do not use only linear interpolation, but CIC + FIR. For simple
approaches use an IIR. A nice filter is a 6x4-tap FIR with a 3 kHz edge.

2. Pick up every 6th sample to get 8 kHz.

3. Appreciate a dark sound without any "s", "t" ... :-)


To transform back to 48 kHz:

1. Use linear interpolation with a one-stage CIC. Do not use the often
proposed zero stuffing; for simple solutions, repeat each sample 6 times.

2. Apply a short 12-tap FIR filter with an edge frequency of 16 kHz, which
has easy coefficients at 48 kHz and reduces artifacts from the linear
interpolation.

3. This should maintain the 8 kHz quality without further degradation. A
rough sketch of both directions follows below.
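
A minimal sketch of this path (Python with NumPy/SciPy; it uses a single
FIR low-pass from scipy instead of the CIC + FIR combination described
above, and the tap counts are only illustrative):

    # 48 kHz -> 8 kHz -> 48 kHz, following the steps in the post.
    import numpy as np
    from scipy.signal import firwin, lfilter

    FS_HIGH, RATIO = 48000, 6            # 48 kHz / 6 = 8 kHz

    def down_48k_to_8k(x):
        # anti-aliasing low-pass, ~3 kHz edge, then keep every 6th sample
        h = firwin(numtaps=97, cutoff=3000, fs=FS_HIGH)
        return lfilter(h, 1.0, x)[::RATIO]

    def up_8k_to_48k(x):
        # repeat each sample 6 times (the "simple" option), then a short
        # FIR with a 16 kHz edge to reduce the interpolation images
        y = np.repeat(x, RATIO)
        h = firwin(numtaps=13, cutoff=16000, fs=FS_HIGH)
        return lfilter(h, 1.0, y)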

Jürgen


On 23.07.2018 at 03:08, Henrik G. Sundt wrote:
This solution, without using any low-pass filters before and after the
decimation, will generate a lot of aliasing frequencies, Kjetil!


Here is another solution:
https://github.com/intervigilium/libresample/tree/master/jni/resample

Henrik


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] What is resonance?

2018-07-22 Thread Sound of L.A. Music and Audio
A Helmholtz resonator (HHR) performs both simple reflection and
self-oscillation. At the beginning of the triggering phase the sound runs
into it and is reflected at the inner walls. A small part of the sound comes
back out through the hole, while a larger part is reflected, creating
standing waves, known in all rooms as modes. These partly come out again. So
far this is just reflection.


But the little hole causes losses and local overpressure, so it is also an
obstacle for the air inside, which will start to pump. In this way a
continuous wave at or near the Helmholtz frequency is, so to speak,
"loading" the HHR, which becomes louder and louder. This causes a shift of
the phase difference between the incoming and outgoing waves, finally
approaching zero when the fed-in energy equals the losses.


Therefore the HHR might emphasize but also cancel certain frequencies in the
room, depending on the phase and level difference and the length of the
signal. It may happen that a triggering sound is thus limited and does not
overload the room with modes; but after the sound is over, the HHR will
continue to feed the room with sound while unloading its energy, unless
damping material is used inside to limit this:


http://96khz.org/files/2003/helmholtz-resonator-damped-silenced.jpg


A classical HHR built as a heavy iron sphere will have only one dominant
frequency and low losses, while a wooden case might have up to three. A
light wooden case might even pick up energy from the moving air inside and
start to emit sound at its own resonance frequencies.


A guitar does the same: it picks up energy from the strings and performs
both energy storage in the wood and direct reflections inside the body. In
theory a guitar even has some HHR capabilities :-)


To distinguish the two effects in signal-processing terms, one could
describe the reflections as FIR behaviour, while the energy storage is IIR.
Real-time simulation of loudspeaker cabinets can be done this way, e.g.
bass-reflex tubes. In a very simple model the reflections can also be
replaced by an IIR, since the reflection of a sine wave with a smoothed
volume envelope also adds up to something like an IIR response. A small
sketch of the distinction follows below.
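
A minimal sketch of that distinction (Python; the delays, gains and the
resonator frequency are purely illustrative):

    # Early reflections as a short FIR (delayed copies of the input);
    # stored energy as a damped two-pole IIR resonator.
    import numpy as np

    def reflections_fir(x, fs, delays_ms=(3.0, 7.0, 11.0), gain=0.5):
        y = np.copy(x)
        for d in delays_ms:
            n = int(d * 1e-3 * fs)
            y[n:] += gain * x[:-n]
        return y

    def resonator_iir(x, fs, f0=55.0, r=0.999):
        # y[n] = x[n] + 2 r cos(w0) y[n-1] - r^2 y[n-2]
        w0 = 2 * np.pi * f0 / fs
        b1, b2 = 2 * r * np.cos(w0), -r * r
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = x[n]
            if n >= 1:
                y[n] += b1 * y[n - 1]
            if n >= 2:
                y[n] += b2 * y[n - 2]
        return y

The FIR part dies out with the input, while the IIR part keeps ringing and
releases its stored energy afterwards, as described for the resonator.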


gtx

Dipl.-Ing. Jürgen Schuhmacher


On 22.07.2018 at 23:20, Stefan Sullivan wrote:
Yes. The term Helmholtz resonator should be a hint ;) Basically, when a
sound gets added to itself after a delay, you end up adding energy to the
frequency that corresponds to that delay amount. For very long echoes we
don't hear it as a resonance, but for shorter delays it will boost higher
and higher frequencies into the audible range.


Stefan

On Sun, Jul 22, 2018, 08:10 > wrote:


Hello all
Is "feedback with delay" really resonance? I notice many people describe
the effects of "room resonances" this way, but to my understanding these
are not resonances in the basic sense but reflections. A resonance is a
self-sustaining oscillating system like a guitar string or an air mass in
a Helmholtz resonator.
  Rolf

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] sound transcription knowledge

2018-07-10 Thread Sound of L.A. Music and Audio

Hello Richard and others

On 10.07.2018 at 15:25, Richard Dobson wrote:
I am very much into this topic of mapping live musical parameters to 
(generative, real-time rendered) visuals, so I'm very interested in what 
others have to add to this list.


Music-to-video is also something that has interested me all my life, and
here are some of my approaches:


Two years ago, I worked out a concept for rendering images and objects in
real time according to the occurrence of musical notes, to express the
emotions that come up when listening. This can (but does not necessarily
have to) be related to some people's ability to "see" music: this special
effect in some musicians' brains is observed fairly often and is also the
subject of scientific research, as is the way these colours occur and the
way music is related to visual impressions.


Surprisingly (or not?), many of these impressions have a lot in common: for
instance, some people reported seeing some kind of brown nutshells when
listening to long, sustained, weak bass tones. The project leader provided
me with a list of such associations.


The intention was to draw images similar to these impressions, to give the
"non-seeing" audience the chance to see sound as well.


My solution was finally based on my VRR system, which I once created in
FPGAs to test optical lenses virtually and/or emulate camera and video
sensors in real time. It was derived from a DSP algorithm set developed
earlier for testing purposes. You can get an idea here:


https://www.mikrocontroller.net/topic/403956?goto=4679470#4679470

Now, coming to a more human interpretation of sound in order to draw
emotional images, one has to "decode" the meaning of the sound, such as
harsh, weak, aggressive, calming and so on. I continued this and developed
mapping tables to analyze the harmonic constellations, the amount of energy,
and the attack and sustain of sounds, and to transform them into colours,
contrast and more or less rounded or edged objects. For copyright reasons I
cannot go into further detail here, but early versions of the hardware are
partly described here:


http://www.96khz.org/htm/graphicaudiooutc3.htm
and here:
http://www.96khz.org/htm/graphicvisualizerrt.htm

This is still in progress; currently I am building a system that will become
a museum installation.


What you can try is to identify certain dominant frequencies in the mix with
analog or virtual-analog filters and use amplitude and phase to drive a
colour model (6 main colours - R, G, B, C, M, Y - will do) and to create
interferences between the stereo channels. Some of the "interference"
images were created this way:


http://engineer.bplaced.net/index.htm


And one final thing: an interesting side effect, and maybe another kind of
music visualization, can be achieved with error processing:


I often test my algorithms (including the non-musical ones) by visualizing
the differences between two calculations at lower and higher resolution, in
order to find possible unfortunate accumulations of errors. The resulting
difference is amplified and displayed over two parameters on a video screen.
This leads to very interesting fractal images that change their colours and
structures in real time. If you then apply a music signal, this leads to
very strange images:


https://www.youtube.com/watch?v=tdmdSt6UZLM
https://www.youtube.com/watch?v=UgbjkZS36_k

These images are, so to speak, the debris left over when the correctly
processed bits of the audio signal are taken away.


Jürgen














___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] How to tune digital waveguides?

2018-06-22 Thread Sound of L.A. Music and Audio


On 22.06.2018 at 01:11, robert bristow-johnson wrote:
do you need an accurate fractional-sample-precision delay?  that's how 
you tune a comb filter to a specific note.


How would you achieve a sub-sample delay in software without a filter? And
consequently, when using such a filter, why not perform the tuning directly
with it?


Which circuit / dataflow do you see in which such a combination works
effectively?


Well, we know about the issue when working with dedicated chips that include
a fixed comb filter, where such "pre-processing" is required. A small
fractional-delay sketch follows below.
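
A minimal sketch of one common way to realize a sub-sample delay in software
(a first-order allpass interpolator; plain Python, names are mine and not
from this thread):

    # Fractional delay via a first-order allpass:
    # y[n] = a*x[n] + x[n-1] - a*y[n-1], with a = (1 - frac) / (1 + frac),
    # which approximates a delay of 'frac' samples (0 < frac < 1).
    def allpass_fractional_delay(x, frac):
        a = (1.0 - frac) / (1.0 + frac)
        y, x_prev, y_prev = [], 0.0, 0.0
        for xn in x:
            yn = a * xn + x_prev - a * y_prev
            y.append(yn)
            x_prev, y_prev = xn, yn
        return y

In a waveguide or comb filter, the integer part of the loop delay goes into
the delay line, and a stage like this supplies the fractional remainder for
fine tuning.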



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Blend two audio

2018-06-19 Thread Sound of L.A. Music and Audio

This is not surprising since sin*sin + cos*cos = 1  :-)

But the problems I mentioned remain, although people can reduce the issues
by blending in sections with low dynamics (if possible). A small crossfade
sketch follows below.
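
A minimal sketch of such an equal-power fade (Python with NumPy; the array
names are illustrative):

    # Sine/cosine (equal-power) crossfade between two equally long signals.
    # Since sin^2 + cos^2 = 1, the summed power of uncorrelated signals
    # stays roughly constant across the fade.
    import numpy as np

    def equal_power_crossfade(a, b):
        t = np.linspace(0.0, np.pi / 2.0, len(a))
        return np.cos(t) * a + np.sin(t) * b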



On 19.06.2018 at 07:49, Tom O'Hara wrote:
> On 6/18/2018 6:42 PM, gm wrote:
>>
>> I find that in practice a cosine/sine fade works very well for
>> uncorrelated signals.
>
> Likewise.
>
> Tom
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Blend two audio

2018-06-18 Thread Sound of L.A. Music and Audio

On 18.06.2018 at 08:13, Felix Eichas wrote:
> There's also a paper regarding power complementary crossfade curves.
> Maybe a bit scientific but still worth a read:
>
> http://dafx16.vutbr.cz/dafxpapers/16-DAFx-16_paper_07-PN.pdf
>
> Regards,
> Felix


Interesting paper - I did not expect this issue to have been analyzed in
such a detailed way.


Anyway, there are some issues:

The mathematical power of a signal is related to its spectrum, and if we
crossfade two signals with different spectra, then we have to decide which
frequencies we want to focus on. Mathematically - and this is done in the
paper - it is easy to measure the power of all frequencies and simply adjust
the levels so that they match, according to the definition of power, which
is related to the period, as you know.


Well, this is not the whole solution!

The reason is that, depending on the particular application, individual
frequencies have a different "importance". This is the case, e.g., with
radar sweeps, reflection triggering and similar things.


For us here, dealing with audio, we have to take the hearing curves into
account, meaning that at a specific loudness level the frequencies have a
different impact, so simple level-oriented fading leads to wrong results.
The problem here is that some loud parts of the music create masking effects
in the ear, so these frequencies do not contribute to the experienced
loudness.


As a consequence, the speed of the fade (a flat or a steeper curve) also has
a significant impact on the loudness we "feel". For short crossfades, some
frequencies hardly enter the mathematical equation at all, so the
algorithmic approach also depends strongly on the fade period and produces
different results.


I typically have that problem when putting together several takes of
orchestral recordings. The level meter is no help in this decision;
listening is the only way to do it correctly.


With piano recordings I remember situations where, due to the complexity of
the sound, it was nearly impossible to get the fade 100% right, because
either the bass was too loud or the descant would have been. So mixing is
always a compromise, because some musical notes work as accents in the flow,
and a mathematical algorithm can hardly judge this.


The result is that, for example, the level of a subsequent part may already
have to be changed just because the flatness of the fade curve is changed,
which in theory should not be necessary if only signal power is considered.


My opinion on this issue:

Signal power is not equivalent to acoustic power, which again is not the
same as experienced loudness, which again is not the same as the musical
loudness impression in the context of a track. These are four "different
pairs of shoes", as we say in Germany.


Regards

Jürgen




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Playing a Square Wave

2018-06-14 Thread Sound of L.A. Music and Audio


Hi Paula and all

On 13.06.2018 at 14:35, pa...@synth.net wrote:

Though, remember these are mass market products, they will use the 
appropriate part for a given price point.

Right - although, according to my experience, the existing DAC chips in the
higher price region that we have nowadays really do a very good job, since
they make use of dithering, oversampling and pre-distortion to drive the
analog anti-aliasing filters perfectly. Chip technology has also progressed
to the point where perfectly working analog 7-pole elliptic filters can be
integrated into the chips, leading to incredible precision.


So it is more a question of getting this quality out onto the PCB. EMI,
shielding and analog signal treatment are more important. If people are
using, e.g., unbalanced analog outputs with RCA connectors and such, you can
forget it.


But again: this only applies to the audible spectrum these chips are
designed for.



Now, if you want a GORGEOUS sounding DAC, go play with a synclavier.
These are discrete DACs and sound like NOTHING I've ever heard before.. 
just utterly amazing.


Regarding the various DAC approaches that try to improve on delta-sigma
DACs, I have mostly observed an increased brilliance caused by exactly those
unintended harmonics we get when filtering is not done correctly. We did a
lot of research on DACs in our ultrasonic projects, and usually THIS is the
big difference; as stated above, the specific way the inaudible alias
harmonics excite the subsequent analog electronics and mechanics makes the
sound. And there is much room for "design" and optimization there.


I readily admit and expect that people have designed nice-sounding
equipment, but this is nothing other than sound design and has little to do
with the authentic wave reproduction we need in recording situations. And
again: the particular loudspeaker has a strong impact when its internal
filters are fed with harmonics above 20 kHz. The part rejected at an
inductance will cause a ping-pong reflection pattern showing up elsewhere,
and the part passed through will cause audible harmonics at every non-linear
component. Then we have saturation effects and partial movement of the
loudspeaker membrane. Being involved in a project to reduce these artifacts
by actively controlling the membrane, I observed that all music became less
harsh, up to the point of "boring". Brilliance was lost!


So maybe a synth DAC should be designed differently from a hi-fi DAC, just
as a techno loudspeaker is different from one for acoustic music.


Anyway, I had a look at the mentioned Synclavier: it seems to have used a
higher sample rate of 100 kHz. This makes sense when using only 16 bits and
trying to modulate the waves as required in a synth. The higher the rate of
the incoming material, the fewer artifacts you get from the resampling that
is required when interpolating the waves and moving them to another pitch.
So it is clear to me that this instrument must have had a smoother sound
with, e.g., vibrato and such.



Jürgen
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Playing a Square Wave

2018-06-14 Thread Sound of L.A. Music and Audio

On 13.06.2018 at 15:01, Niels Dettenbach wrote:


In theory, any square wave can be constructed from an infinite number of
(sine) signals, while many of those images seem to be produced from a finite
number of such "signal parts". This means - if I think correctly - that a
really perfect square would require "infinite energy" (please correct me if
I'm wrong here).


It is nearly impossible to get a good square by overlaying sine waves, since
the series does not converge well enough. You might get an impression here,
where I show some additive synthesis:


But:

regarding the ear, with hearing going up to only 20 kHz, it is very well
possible to synthesize a "square-like" sound with harmonics only up to
20 kHz. This works even better, since the filters in our systems then do not
have to be neutral towards the very high frequencies that come with a hard
digital square wave. A small sketch follows below.
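
A minimal sketch of such a band-limited "square-like" wave (Python with
NumPy; the sample rate and harmonic limit are illustrative):

    # Additive square: odd harmonics with 1/k amplitude, but only those
    # below a chosen limit (here 18 kHz) instead of a hard digital square.
    import numpy as np

    def bandlimited_square(f0, fs=48000, duration=1.0, f_limit=18000.0):
        t = np.arange(int(fs * duration)) / fs
        y = np.zeros_like(t)
        k = 1
        while k * f0 <= f_limit:
            y += np.sin(2 * np.pi * k * f0 * t) / k
            k += 2
        return 4.0 / np.pi * y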


So what we hear when using real squares from a digital pin is mainly the
summed-up artifacts caused by the filters, the transmission and finally the
loudspeakers.


What we also have to take into account when comparing waves:

Sometimes we can even perceive ultrasonic waves, since they do have an
impact on the eardrum and the little bones in our ear. It is a kind of
masking at high levels, so the sound impression is slightly different with
and without ultrasonic content.


So in theory it would be right to generally reproduce the real-life
frequencies up to 50 kHz and above when replaying sound, but in practice
this requires different hardware.


I always recommend limiting the spectrum to 16-18 kHz.

On the other hand, feeding hard squares into the filters can be used in
sound synthesis for intended effects:


As you pointed out, more harmonics mean "more energy", and the non-linearity
and imperfection of all filters and systems will definitely have an impact
on the final sound - also in the audible region.


Jürgen

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialias question

2018-06-01 Thread Sound of L.A. Music and Audio

Hello Kevin

I am not convinced that your application fully compares to a continuously
changing sampling rate, but anyway:


The maths stays the same, so you will have to respect Nyquist and take the
artifacts of your AA filter as well as your signal processing into account.
This means you should use a sampling rate significantly higher than the
highest frequency to be represented correctly, which is the edge frequency
of the stop band of your AA filter.


For a waveform generator in an industrial device with similar demands, we
use something like DSD internally and perform continuous downsampling /
filtering. Thanks to the fully digital representation no further aliasing
occurs; there is only the aliasing from the primary sampling process, kept
low because of the high input rate.


What you can / must do is internal upsampling, since I expect you operate
with normal 192 kHz / 24-bit input (?)


Regarding your concerns: it makes a difference whether you play back the
stream at a multiple of the sampling frequency (especially at the same
frequency), performing the modulation mathematically, or whether you apply a
slight variation of the output frequency, such as with an analog PLL with
modulation taking its values from a FIFO. In the first case there is a
convolution with the filter behaviour of your processing; in the second case
there is also a spectral spreading, according to the individual ratio to the
new sampling frequency.


From the point of view of a musical application, case 2 is preferred,
because any harmonics included in the stream, such as the wavetable content,
can be preprocessed, are easier to control, and are "musical" harmonics. In
one of my synths I operate this way: all primary frequencies come from a
PLL-buffered two-stage DDS accessing the wavetable with full coverage, so
there are no gaps and jumps in the wavetable as with classical DDS.
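
For context, a minimal sketch of the classical phase-accumulator (DDS)
wavetable read that this improves on (Python with NumPy; this is the
textbook form, not the PLL-buffered two-stage variant described above):

    # Phase-accumulator wavetable oscillator with linear interpolation.
    import numpy as np

    TABLE_SIZE = 4096
    TABLE = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

    def dds_oscillator(freq, fs=96000, num_samples=1024):
        phase = 0.0
        inc = freq * TABLE_SIZE / fs        # table entries per sample
        out = np.empty(num_samples)
        for n in range(num_samples):
            i = int(phase)
            frac = phase - i
            out[n] = (1.0 - frac) * TABLE[i] + frac * TABLE[(i + 1) % TABLE_SIZE]
            phase = (phase + inc) % TABLE_SIZE
        return out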


j


On 01.06.2018 at 04:03, Kevin Chi wrote:

Dear List,

Long time lurker here, learned a lot from the random posts, so thanks 
for those.


Maybe somebody can help me out: what is the best practice for realtime
applications to minimize aliasing when scanning a waveform at changing
speed, or when constantly modulating the delay time of a delay (it's like
the resampling rate is changing at every step)?

I guess the smaller the change, the smaller the aliasing effect is, but what
if the change can be fast enough to make it audible?

If you can suggest a paper or site or keywords that I should look for, 
I'd appreciate the help!


Kevin



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] PCM audio amplitudes represent pressure or displacement?

2018-05-30 Thread Sound of L.A. Music and Audio

Hello

Apart from the mentioned Neumann book, I can recommend Sengpiel Audio. It is
still available on the internet and maintained by the son of EBS (AFAIK).


You will find many answers there. Some pages are available in English too,
such as the conversion calculations:

http://www.sengpielaudio.com/Calculations03.htm

As a quick reply to your questions: these patterns are generated by more
than one diaphragm and/or by effects caused by subtraction of the sound
arriving from different directions, possibly making use of acoustic time
delays. The cardioid mic, for instance, does this:

http://www.sengpielaudio.com/PressureGradientAndPhase.htm

But it is important to understand that these abstractions focus on ideal
cases. In practice, no microphone comes close to these simple behaviours.


As entry point, this page might be of interest:

http://www.sengpielaudio.com/HejiaE.htm

It also has visualisation of the mic systems.

GTX


On 29.05.2018 at 03:53, Rich Breen wrote:
Microphones can be either pressure or velocity; any basic microphone 
design book/paper will cover the differences.


best,
rich





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Real-time pitch shifting?

2018-05-22 Thread Sound of L.A. Music and Audio

Hello all

On 22.05.2018 at 14:11, Theo Verelst wrote:
> fundamentally limited by the length of the sinc (-like) "perfect"
> resample kernel, and the required delay for accurate re-sampling might
> be considerable!
This can be mitigated by increasing the sampling rate, which reduces the
coarseness but leads to a large number of taps. I have made some progress
with this recurring issue in digital signal processing by replacing DSPs
with FPGAs running at high speeds.


http://www.96khz.org/htm/realtimeresampler2.htm

> getting a huge transient at the end, or you do some sort of smoothing.

This will always be a hard issue to avoid. Smoothing between parallel
(pipelined) processed wavelets is essential, and the shorter the fragments
are, the fewer artifacts one gets. Doing this in the frequency domain
requires, e.g., a tight FFT with heavy overlap. A small overlap-add sketch
follows below.
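
A minimal sketch of such smoothing (Python with NumPy; grain length and hop
are illustrative, and this is generic overlap-add, not the exact pipelined
scheme described above):

    # Crossfade between processed grains with a Hann window and overlap-add,
    # so that grain boundaries do not produce audible transients.
    import numpy as np

    def overlap_add(grains, hop):
        size = len(grains[0])
        win = np.hanning(size)
        out = np.zeros(hop * (len(grains) - 1) + size)
        for k, g in enumerate(grains):
            out[k * hop:k * hop + size] += win * g
        return out

With a hop of half the grain size, the Hann windows sum to a nearly constant
envelope.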


We have similar issues when processing radar reflections or time of 
flight problems.




> Also, for programs meant for the human voice, there might be issues
> because those programs might do estimations of the voice parameters in
> order to change pitch
With the voice it is even more tricky, since the formant shaping is
different at other frequencies. One reason is that there is more than one
"equalizer" involved. Simply shifting the whole track to a higher frequency
will disregard this.


Moreover, the tuning is not even constant:

Very skilled singers sing the dark vowels "A", "O" and "U" at a slightly
higher pitch than "I" and "E". This is because they appear lower even when
sung at the right pitch. But this is not static; it is related to the music.
A very low "A" is over-pitched a bit more than a higher "A" when singing.


All this cannot be handled by a static pitch shift.

And there is much more...


engineer

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] What happened to that FPGA for DSP idea

2017-11-05 Thread Sound of L.A. Music and Audio

Hello Theo

Bandwidth is indeed an interesting point.
I have done a calculation regarding the number of operations requiring RAM
access to fetch and store the data for pipelined filtering and sound
processing across the time slices. Summing up all the bits (up to 4 reads
and writes per RAM) over all available RAMs in the Artix, it would take
something like 20-50 PCs with current DDR controllers to perform the same
number of calculations in the same time.


j



On 10.10.2017 at 16:39, Theo Verelst wrote:

What was that about? I did chip design elements back in the 90s, what
does that have to do with making a FPGA design that at high level
verifies as interesting because it even seems to  out-compute a pentium?
I've done things with both DSPs and FPGAs and it strikes me that the way
to go for intelligent algorithms can well include FPGA nowadays because
it's becoming easier to "compile to silicon".

I was reading about High Bandwidth Memory stacked on FPGA to make even
PC memory bandwidth look pale in comparison I think it's an interesting
development that can be practically experimented with already.

T.V.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] What happened to that FPGA for DSP idea

2017-10-06 Thread Sound of L.A. Music and Audio


Hello all,

something general about that:

As most of you know, I developed my first FPGA synth in 2004 (Spartan-2),
starting from DSP routines formerly running on the DSP56302 and similar
chips, as found in Sydec's Mixtreme and Soundart's Chameleon, and since then
I have been involved in many projects with FPGAs + DSPs - industrial ones as
well.


I quickly recognized that classical sound synthesis (VA or not) based on
arithmetic equations (linear or not) is better realized in hard-wired DSPs
because of cost, time to market, maintenance and power.


FPGAs can be used in very specific areas of time-domain signal processing,
using e.g. numerical integration and physical modelling, to achieve higher
sampling rates (which mostly nobody wants :-) ) and a high number of
parallel channels (which nobody needs :-) ).


So apart from some rare cases, my wave synthesis modules went into
applications like radar, lidar, video, sensor emulation and drive control;
on the audio side, into speaker control, magnetic emulation and monitor
filtering, and a number of ultrasonic applications for audio and medical
use.


DSPs are nowadays so big in performance and small in price that I really
wonder why so many people come up with this idea now and not 10 years ago,
when DSPs were still slow (and PCs too).


Anyway: parts of my synth will be free from any NDA and other legal
limitations, so I intend to offer them to those interested in creating their
own synth. My synth is equipped with several wave synthesis techniques that
are not common in audio, and it also contains DSP engines to process
multiple channels as in a digital console, adding echo, space and reverb for
each channel individually. This gives some options not available with
classical synths that have only 6 outputs followed by a DSP adding reverb
and performing stereo processing on two channels only.


The biggest progress is control with a precision of at least 1024 steps (up
to 4096) for any of the MIDI parameters, instead of the 128 steps available
with normal MIDI. This gives the chance of real-time control close to an
analog feel. You can find more about this here:

http://www.96khz.org/oldpages/enhancedmiditransmission.htm
http://www.96khz.org/htm/midiviaspdif.htm

I also have a prototype running that already uses 1023 steps for encoding:
http://www.96khz.org/htm/midicontroller31.htm
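
For comparison only - not the high-speed scheme described above - a minimal
sketch of the standard way to exceed 7-bit controller resolution within
ordinary MIDI, by pairing a controller with its LSB counterpart (CC n and
CC n+32), giving 14-bit values (Python):

    # Build the two 3-byte messages for a 14-bit controller value.
    def cc14_messages(channel, cc_msb, value14):
        msb = (value14 >> 7) & 0x7F
        lsb = value14 & 0x7F
        status = 0xB0 | (channel & 0x0F)
        return [(status, cc_msb, msb), (status, cc_msb + 32, lsb)]

    print(cc14_messages(channel=0, cc_msb=1, value14=3000))

This still runs at the ordinary MIDI byte rate, which is exactly the
bottleneck discussed above.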

Also integrated is an audio-to-MIDI converter operating at this high speed.
The conversion rate is about 10 times higher than that of commercial
products and overcomes some issues we found when guitar players want to hear
what they intended to hear. For instance, there is no forced note
granularity: the channelling can bypass the MIDI tables to find the
frequencies required for just intonation, tempered tuning, or my 196/184(b)
tuning. My MIDI operates with frequency values, so it controls the engine
the analog way, as known from voltages in a synth.

The precision is therefore finer than 0.1 cents, allowing a real vibrato.

Currently I am negotiating with a guy to be able to attach smart controller
keyboards. USB is the open issue there.


more to come

www.pyratone.de


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp





Re: [music-dsp] Generating pink noise in Python

2016-01-22 Thread Sound of L.A. Music and Audio
Just for clarification: in theory, only one inverter with feedback is
required to obtain an unstable, oscillating circuit. In practice, the
technology is too fast and the amplitude will not be high enough to define
good logic levels. Using more than one of them leads to steady voltage
levels being reached before the feedback "arrives" and causes toggling.


Moreover, the length-switching option requires ratios like e.g. 15/13 or
11/9 rather than just 3/1 to work properly. There is also a strong
temperature dependency.


Of course this does not create well-defined noise, but unpredictable results
with runs of 1s and 0s of any possible length and probability.


> CSAT

A modified version of that circuit operates in a very current technical
device. The artificial noise is measured, stored and sent via a virtual
channel to the destination in pieces, with varying speed and redundancy. The
destination also receives another stream in which (following an
unpredictable strategy) bits have been replaced by pieces of information. By
gathering, reordering and matching the packets, two time-coherent streams
become available in the destination device. By subtracting the two streams,
the information is completely restored.


Done this way, it is totally impossible for anybody to extract this
information from the noise, unlike with common AES encryption, because there
are no patterns, no regular distances between the bits, and (the main point)
no code that could be cracked.


So the destination device cannot be deterred from receiving the information.
It can neither be stopped nor misled, and thus it can find the "destination"
it is directed to straight away.




On 22.01.2016 at 10:44, STEFFAN DIEDRICHSEN wrote:



On 22.01.2016|KW3, at 02:50, robert bristow-johnson
> wrote:

i think i could code whatever into a sufficiently general-purpose DSP
 (so the Analog Devices "Sigma" series might be left out of that
class).  but i cannot understand what some of the components (inverter
chain) do in your diagram.




Those inverter chains serve as time delays for the oscillators.

See: https://www.fairchildsemi.com/application-notes/AN/AN-118.pdf

A wonderful case of true CSAT (computer stone age technology).

Steffan






___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Generating pink noise in Python

2016-01-21 Thread Sound of L.A. Music and Audio


I wonder what a "grey" version of those shades might sound like - I will
have to think about that :-)


Good paper indeed - it seems easy to implement with a low number of
resources. I found similar circuits based on LFSR structures when designing
and experimenting for radar some years ago - but I am not allowed to publish
them (as usual in my business).


What I used for my first music synth is a dynamic noise generator based on
inverter chains and twisting mechanisms. But I think this might not be
appropriate for DSPs.


http://www.96khz.org/oldpages/digitalnoisegenerator.htm

Or something in German:
http://www.mikrocontroller.net/articles/Digitaler_Rauschgenerator_im_FPGA
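
For software, a minimal sketch of the LFSR-structure idea mentioned above
(plain Python; a 31-bit maximal-length register with taps 31 and 28 - a
generic textbook generator, not the inverter-chain hardware design):

    # Fibonacci LFSR producing a pseudo-random (white noise) bit stream.
    def lfsr_bits(seed=0x1, count=16):
        state = seed & 0x7FFFFFFF
        bits = []
        for _ in range(count):
            new_bit = ((state >> 30) ^ (state >> 27)) & 1   # taps 31 and 28
            state = ((state << 1) | new_bit) & 0x7FFFFFFF
            bits.append(new_bit)
        return bits

    print(lfsr_bits())

Shaping such a white stream towards pink is then a separate step, e.g. with
the recipes discussed in this thread.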

jürgen



On 20.01.2016 at 23:59, Stefan Stenzel wrote:

Allen,

Did you consider the recipe for pink noise I published recently?
It performs better in terms of precision and performance than all others.

https://github.com/Stenzel/newshadeofpink

Regards,
Stefan




On 20 Jan 2016, at 21:41 , Allen Downey  wrote:

Hi Music-DSP,

Short version: I've got a new blog post about generating pink noise in Python:

http://www.dsprelated.com/showarticle/908.php

Longer version:

If you set the WABAC machine for 1978, you might remember Martin Gardner's 
column in Scientific American about pink noise and fractal music.  It presented 
material from a paper by Voss and Clarke that included an algorithm for 
generating pink noise.

Then in 1999 there was a discussion on this very mailing list about the Voss 
algorithm, which included a message from James McCartney that suggested an 
improvement to the algorithm.  This page explains the details: 
http://www.firstpr.com.au/dsp/pink-noise/

And in 2006 there was another discussion on this very mailing list, where Larry 
Trammell suggested a stochastic version of the algorithm.  This page has the 
details:  http://home.earthlink.net/~ltrammell/tech/pinkalg.htm

As an exercise in my book, Think DSP, I challenge the reader to implement one 
of these algorithms in Python, using NumPy for reasonable performance.  I 
present my implementation in a new blog post at

http://www.dsprelated.com/showarticle/908.php

 From there, there's a link to the IPython notebook with all the details.

Hope you find it interesting!

Allen




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp