On 28/02/2014 2:06 PM, Michael Gogins wrote:
I think the VSTHost code could be adapted. It is possible to mix managed
C++/CLI and unmanaged standard C++ code in a single binary. I think this
could be used to provide a .NET wrapper for the VSTHost classes that C#
could use.
I agree.
Maybe I misunderstood. Sorry for the misunderstanding.
Regards,
Mike
On 28/02/2014 12:16 AM, Michael Gogins wrote:
For straight sample playback there is the C library FluidSynth; you can use it via
PInvoke. FluidSynth plays SoundFonts, which are widely available, and there
are tools for making your own SoundFonts from sample recordings.
For more sophisticated synthesis,
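The FluidSynth-via-PInvoke suggestion above could look roughly like the sketch below. This is untested and hedged: the native library name (`libfluidsynth-2`) varies by platform and FluidSynth version, and the SoundFont path is a placeholder; the entry points themselves (`new_fluid_settings`, `new_fluid_synth`, `fluid_synth_sfload`, `fluid_synth_noteon`, etc.) are from the FluidSynth C API.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

// Minimal P/Invoke sketch (untested). Library name and SoundFont path
// are assumptions; adjust both for your platform.
static class FluidSynthInterop
{
    const string Lib = "libfluidsynth-2"; // e.g. "fluidsynth" on some systems

    [DllImport(Lib)] public static extern IntPtr new_fluid_settings();
    [DllImport(Lib)] public static extern IntPtr new_fluid_synth(IntPtr settings);
    [DllImport(Lib)] public static extern IntPtr new_fluid_audio_driver(IntPtr settings, IntPtr synth);
    [DllImport(Lib)] public static extern int fluid_synth_sfload(IntPtr synth, string filename, int resetPresets);
    [DllImport(Lib)] public static extern int fluid_synth_noteon(IntPtr synth, int channel, int key, int velocity);
    [DllImport(Lib)] public static extern int fluid_synth_noteoff(IntPtr synth, int channel, int key);
    [DllImport(Lib)] public static extern void delete_fluid_audio_driver(IntPtr driver);
    [DllImport(Lib)] public static extern void delete_fluid_synth(IntPtr synth);
    [DllImport(Lib)] public static extern void delete_fluid_settings(IntPtr settings);

    static void Main()
    {
        IntPtr settings = new_fluid_settings();
        IntPtr synth = new_fluid_synth(settings);
        IntPtr driver = new_fluid_audio_driver(settings, synth); // starts audio output
        fluid_synth_sfload(synth, "example.sf2", 1);             // placeholder SoundFont path
        fluid_synth_noteon(synth, 0, 60, 100);                   // middle C, velocity 100
        Thread.Sleep(1000);                                      // let the note sound for 1 s
        fluid_synth_noteoff(synth, 0, 60);
        delete_fluid_audio_driver(driver);
        delete_fluid_synth(synth);
        delete_fluid_settings(settings);
    }
}
```

The same pattern (opaque native handles passed around as `IntPtr`) covers most of the FluidSynth API surface from C#.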
On 2/27/14 6:33 PM, Theo Verelst wrote:
Frequency modulation, which is what happens when the "to be synced
with" signal changes from one frequency to another, is theoretically
not limited in bandwidth;
the issue is that, however you try to model it, the result of a
hard-sync oscillator is still
Does anyone know the literature for loudspeaker predistortion--literature
appropriate for senior-year electrical engineering students? (That's not me.) I
suppose this would rule out fancy stuff like Volterra series inversion and use
of psychoacoustic metrics.
How dependent on the signal is a no
Thinking a bit about the theoretical generalities involved in the
problem, it might be a good idea to imagine a few of the main "rules" in
the sampling domain, with the problem of limited bandwidth.
Knowing the exact phase of a sine wave in the sample domain is at
least theoretically possible
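As an illustration of that point (our own worked example, not from the thread): for a sampled sine of known frequency spanning an integer number of cycles, the phase can be recovered exactly by correlating against quadrature references.

```csharp
using System;

// Recover the phase of a sampled sine by correlating with cos/sin
// references at the known frequency. Exact when the buffer holds an
// integer number of cycles. All names here are illustrative.
static class PhaseEstimate
{
    static void Main()
    {
        const int fs = 48000;         // sample rate (Hz)
        const double f = 1000.0;      // sine frequency (Hz)
        const double phase = 0.7;     // "unknown" phase to recover (radians)
        const int n = 480;            // 10 full cycles at 48 samples/cycle

        double i = 0.0, q = 0.0;
        for (int k = 0; k < n; k++)
        {
            double theta = 2 * Math.PI * f * k / fs;
            double x = Math.Sin(theta + phase); // the sampled sine
            i += x * Math.Cos(theta);           // ~ (n/2) * sin(phase)
            q += x * Math.Sin(theta);           // ~ (n/2) * cos(phase)
        }
        Console.WriteLine(Math.Atan2(i, q));    // recovers ~0.7
    }
}
```

Since sin(θ + φ) = sin θ cos φ + cos θ sin φ, the two correlation sums isolate sin φ and cos φ over whole cycles, and atan2 returns φ.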
For more sophisticated synthesis there is the C library Csound; you can use it via
PInvoke.
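The Csound route works the same way as the FluidSynth one. A rough, untested sketch follows; the native library name (`csound64`) and the `.csd` path are assumptions, while the entry points (`csoundCreate`, `csoundCompileCsd`, `csoundStart`, `csoundPerformKsmps`, `csoundDestroy`) come from the Csound C API.

```csharp
using System;
using System.Runtime.InteropServices;

// Untested P/Invoke sketch against the Csound C API. The library name
// and .csd file are placeholders; adjust for your installation.
static class CsoundInterop
{
    const string Lib = "csound64";

    [DllImport(Lib)] public static extern IntPtr csoundCreate(IntPtr hostData);
    [DllImport(Lib)] public static extern int csoundCompileCsd(IntPtr csound, string csdPath);
    [DllImport(Lib)] public static extern int csoundStart(IntPtr csound);
    [DllImport(Lib)] public static extern int csoundPerformKsmps(IntPtr csound);
    [DllImport(Lib)] public static extern void csoundDestroy(IntPtr csound);

    static void Main()
    {
        IntPtr csound = csoundCreate(IntPtr.Zero);
        if (csoundCompileCsd(csound, "example.csd") == 0 && csoundStart(csound) == 0)
        {
            // Render one control block at a time until the score finishes
            // (nonzero return signals completion).
            while (csoundPerformKsmps(csound) == 0) { }
        }
        csoundDestroy(csound);
    }
}
```

Driving the engine one ksmps block at a time, as here, is what lets a host interleave its own control logic with rendering.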
If I understand correctly, JUCE would be the solution.
You say you already have working C++ code, so you could use that and add an
AudioProcessor from JUCE to do your playback.
> On 27 Feb 2014, at 06:36, Ross Bencina wrote:
>
> Hello Mark,
>
>> On 27/02/2014 3:52
Okay, so in a nutshell you are doing de-mastering and re-mastering on
a track (if I understand correctly).
It's still not clear: what is the conclusion from all this?
- Peter
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book
The big graphs of signal processing are for things like mid-frequency
averaging, mid-low subband tuning, sample-spoiling removal,
low-frequency decompression, reverse CD equalization, and more than a
few other mastering-effect corrections. If that doesn't mean anything to
you, fine; it has t
I checked the video again, and it seems like you have some signal (music),
then you process it through some modular graph processor (maybe
something FFT-based?), plus (?) some hardware processor(s) (reverb?),
and then the two signals differ in the 2-4 kHz range.
I'm not sure what that's supposed to mean