Re: [music-dsp] Musicdsp.org finally updated

2019-02-26 Thread Michael Gogins
Thanks for doing this.

Mike

On Wed, Feb 27, 2019, 04:40 Robert Marsanyi  wrote:

> That’s a phenomenal resource.  Thanks, Bram.
>
> --rbt
>
> On Feb 26, 2019, at 7:16 AM, Jacob Penn  wrote:
>
> Amazing!
>
> JACOB PENN.MUMUKSHU
> 612.388.5992
>
> On February 26, 2019 at 6:59:24 AM, Bram de Jong (bram.dej...@gmail.com)
> wrote:
>
> Hi all,
>
> New and improved: https://www.musicdsp.org/en/latest/
>
> I'm still in the process of going through all the comments, cleaning it
> all up. But... if you want to add or change anything:
> https://github.com/bdejong/musicdsp
>
> grts,
>
> Bram
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] IIR filter efficiency

2017-03-09 Thread Michael Gogins
Have you ensured (a) that you are avoiding "denormal" floating-point
numbers in the filter (see
http://stackoverflow.com/questions/2487653/avoiding-denormal-values-in-c)
and (b) the compiler is applying all possible optimizations such as
SIMD, inlining, etc.?

Often in DSP code, a very small but normal (i.e. non-denormal) white noise
signal is added to the input before filtering.
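
A minimal C++ sketch of both ideas, assuming an x86 target with SSE; the
one-pole filter, its coefficient, and the anti-denormal constant are
illustrative, not taken from the code being discussed:

    #include <pmmintrin.h>   // SSE3 intrinsics for FTZ/DAZ control (x86/x64 only)

    // (a) Flush denormals to zero in the calling thread.
    static void enableFlushToZero()
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    }

    // (b) Keep the filter state out of the denormal range by adding a
    // constant far below audibility (roughly -360 dBFS).
    struct OnePoleLowpass {
        float a = 0.99f;   // illustrative coefficient
        float z = 0.0f;    // filter state
        float process(float x)
        {
            const float antiDenormal = 1.0e-18f;
            z = a * z + (1.0f - a) * x + antiDenormal;
            return z;
        }
    };

Either measure keeps the recursive filter state from decaying into the
denormal range, which is where the large slowdown usually comes from.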

Hope this helps,
Mike

-----
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Fri, Mar 10, 2017 at 11:56 AM, ChordWizard Software
<corpor...@chordwizard.com> wrote:
> Greetings,
>
> I'm in the process of building an experimental wavetable synth, and I have 
> come across something I can't understand.  I'm hoping someone can shed some 
> light.
>
> I am using C++ and the audio buffer is in floats.  It's an x86 architecture 
> if that makes a difference.
>
> At present the synth is primarily a render loop with 4th order interpolation 
> from the original waveform followed by an LPF IIR filter.   The per-sample 
> workload in float arithmetic consists primarily of:
>
> - rendering:  5 multiplications, 3 additions, plus phase management
> - filtering:  4 multiplications, 2 additions, 2 subtractions
>
> which, on the face of it, look pretty similar, perhaps slightly heavier for 
> rendering.
>
> But when I profile the performance of this loop, it appears the IIR filter 
> takes up by far the majority of the time, around 8 to 10 times as long as the 
> rendering process.
>
> I've used two profiling tools, plus a custom internal profiling mechanism, and 
> all methods report similar ratios.  I've confirmed that bypassing the filter 
> does radically reduce the processing load.
>
> How can I make sense of this?
>
> Regards,
>
> Stephen Clarke
> Managing Director
> ChordWizard Software Pty Ltd
> corpor...@chordwizard.com
> http://www.chordwizard.com
> ph: (+61) 2 4960 9520
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Floating-point round-off noise and phase increments

2016-08-26 Thread Michael Gogins
Multiply, don't increment.

Not phase += increment but phase = index * increment.

Adding lets the rounding error accumulate as well; multiplying keeps the error minimal.
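
A small C++ sketch of the difference for a constant increment (the window
size of 441 is just an example, chosen so that 1.0/441.0 is not exactly
representable):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        const double phaseIncr = 1.0 / 441.0;
        const std::int64_t n = 100000000;

        double accumulated = 0.0;
        for (std::int64_t i = 0; i < n; ++i)
            accumulated += phaseIncr;              // rounding error grows with i

        double multiplied = static_cast<double>(n) * phaseIncr;  // stays within a few ulps

        std::printf("accumulated: %.17g\nmultiplied:  %.17g\n",
                    accumulated, multiplied);
        return 0;
    }

The accumulated value drifts away from the exact phase roughly in proportion
to the number of samples processed, while the multiplied value does not.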

On Aug 26, 2016 10:10 AM, "Hanns Holger Rutz"  wrote:

> Hi there,
>
> probably there is some good knowledge about this, so I'm looking for a
> pointer: I'm currently rewriting some code and I'm wondering about the
> drift of phase-increment.
>
> I.e. I have for example an oscillator or I have a resampling function,
> each of which needs to trace a fractional phase. So somewhere in some
> inner loop there is a `phase += phaseIncr` for each sample processed.
>
> Now I already encountered some problems, for instance when using an
> `Impulse` oscillator with a fixed increment of `1.0 / windowSize` for
> triggering some windowing actions. This goes fine for a while, but for
> longer sound files or sound productions, inevitably there will be an
> error compared to the exact phase `framesProcessed * phaseIncr`, so
> instead of emitting an impulse precisely every `windowSize` samples, it
> may jump to `windowSize +/- 1`.
>
> So I'm tempted to introduce a stabilisation for the case where
> `phaseIncr` remains constant and one can thus "analytically integrate".
> Something like
>
>   // definitions (pseudo code)
>
>   var inPhase0  := 0.0
>   var inPhaseCount  := 0L
>   def inPhase   = inPhase0 + inPhaseCount * phaseIncr
>
>   For every sample processed, we increment `inPhaseCount`,
>   and whenever `phaseIncr` changes, we flush first:
>
>   inPhase0  := inPhase
>   inPhaseCount  := 0L
>
> So obviously this avoids the drift, but now my thought is that this
> results in an increase of phase distortion over time, because the 64-bit
> floating point number needs to use more digits for the pre-decimal point
> positions. For example, in a sine oscillator, if this runs for an hour
> or so, will this result in phase distortions and thus a widened spectral
> line?
>
> Perhaps a compromise is to "flush" from time to time?
>
> Thanks, ..h.h..
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] automation of parametric EQ .

2015-12-21 Thread Michael Gogins
It should be possible to define mappings from one plugin's control
parameters to another plugin's. This would have to be done by the
user. At most there would be a parametric linear or logarithmic
function involved to map the values, in addition to mapping the
controller numbers or NRPNs. Are there any products like this? There
used to be universal patch librarians.
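
For illustration only (the names and ranges are hypothetical, not taken from
any product), the kind of value mapping meant here might look like this in C++:

    #include <cmath>

    // Normalize within the source range, then rescale into the target range.
    double mapLinear(double value, double srcMin, double srcMax,
                     double dstMin, double dstMax)
    {
        double t = (value - srcMin) / (srcMax - srcMin);
        return dstMin + t * (dstMax - dstMin);
    }

    // Logarithmic variant, e.g. for frequencies; requires dstMin > 0.
    double mapLogarithmic(double value, double srcMin, double srcMax,
                          double dstMin, double dstMax)
    {
        double t = (value - srcMin) / (srcMax - srcMin);
        return dstMin * std::pow(dstMax / dstMin, t);
    }

    // Example: map a 0..127 controller to a 20 Hz .. 20 kHz cutoff:
    //   double hz = mapLogarithmic(cc, 0.0, 127.0, 20.0, 20000.0);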

Regards,
Mike

-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Mon, Dec 21, 2015 at 7:44 PM, Bjorn Roche <bj...@shimmeo.com> wrote:
>
>
> On Mon, Dec 21, 2015 at 6:46 PM, robert bristow-johnson
> <r...@audioimagination.com> wrote:
>>
>>
>> regarding Pro Tools (which i do not own and haven't worked with since 2002
>> when i was at Wave Mechanics, now called SoundToys), please take a look at
>> this blog:
>>
>>  http://www.avidblogs.com/pro-tools-11-analog-console/
>>
>> evidently, for a single channel strip, there is a volume slider, but no
>> "built-in" EQ, like in an analog board.  you're s'pose to insert EQ III or
>> something like that.
>
>
> Some DAWs are like that, while others have EQs built in.
>
>>
>> now in the avid blog, words like these are written: "... which of the 20
>> EQ plug-ins should I use?... You can build an SSL, or a Neve, ..., Sonnox,
>> McDSP, iZotope, MetricHalo..."
>>
>> so then, in your session, you mix some kinda nice sound, save all of the
>> sliders in PT automation and then ask "What would this sound like if I used
>> iZ instead of McDSP?", can you or can you not apply that automation to
>> corresponding parameters of the other plugin?  i thought that you could.
>
>
> I've never seen anything like that. I wonder if the industry even wants
> this. Right now, if I build a protools (or other DAW) session and want to
> share it with you, you have to have all the plugins I used in the session.
> That's another sale for the plugin company -- unless you could substitute
> other plugins easily. There are, of course, workarounds, like "freezing" a
> track and so on.
>
>>
>> if that is the case, then, IMO, someone in some standards committee at
>> NAMM or AES or something should be pushing for standardization of some
>> *known* common parameters.
>
>
> I don't really see how that would be possible in a general case. How would
> you map company A's 4-band parametric that also has a high and low shelf to
> Company B's 5 band parametric that has no shelves, but an air-band? What if
> one company offers a greater range for Q than another company? Plugins are
> supposed to be as unique as possible. That's the point.
>
>> this, on top of the generalization that Knud Bank Christensen did last
>> decade (which sorta supersedes the Orfanidis correction to the digital
>> parametric EQ), really nails the specification problem down:  whether it's
>> analog or digital, if it's 2nd-order (and not some kinda FIR EQ), then there
>> are 5 knobs corresponding to 5 coefficients that *fully* define the
>> frequency response behavior of the EQ.  those 5 coefficient knobs can be
>> mapped to 5 parameter knobs that are meaningful to the user.
>
>
>  Can you send a reference to Christensen's work that you are referring to?
>
> --
> Bjorn Roche
> @shimmeoapp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] [admin] list etiquette

2015-08-28 Thread Michael Gogins
When the text or even the subtext of the posts is more about the
virtues of the poster or the faults of another poster than it is about
the subject matter, the discussion is less than useful.

Regards,
Mike

-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Fri, Aug 28, 2015 at 8:48 AM, Peter S peter.schoffhau...@gmail.com wrote:
 What is your problem? You can approach the mailing list and discuss
 whatever topic you want. Nothing is lost, don't make such a fuss.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] [admin] list etiquette

2015-08-22 Thread Michael Gogins
Thank you, Douglas.

Regards,
Mike

-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Sat, Aug 22, 2015 at 11:21 AM, Douglas Repetto
doug...@music.columbia.edu wrote:
 Hi everyone, Douglas the list admin here.

 I've been away and haven't really been monitoring the list recently.
 It's been full of bad feelings, unpleasant interactions, and macho
 posturing. Really not much that I find interesting. I just want to
 reiterate a few things about the list.

 I'm loath to make or enforce rules. But the list has been pretty much
 useless for the majority of subscribers for the last year or so. I
 know this because many of them have written to complain. It's
 certainly not useful to me.

 I've also had several reports of people trying to unsubscribe other
 people and other childish behavior. Come on.

 So:

 * Please limit yourself to two well-considered posts per day. Take it
 off list if you need more than that.
 * No personal attacks. I'm just going to unsub people who are insulting. 
 Sorry.
 * Please stop making macho comments like "first-year EE students know
 this" and blah blah blah. This list is for anyone with an interest in
 sound and dsp. No topic is too basic, and complete beginners are
 welcome.

 I will happily unsubscribe people who find they can't consistently
 follow these guidelines.

 The current list climate is hostile and self-aggrandizing. No
 beginner, gentle coder, or friendly hobbyist is going to post to such
 a list. If you can't help make the list friendly to everyone, please
 leave. This isn't the list for you.


 douglas

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Audio Latency Test App for iOS and Android

2015-03-02 Thread Michael Gogins
Thank you for this information, it is very useful.

Regards,
Mike


-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com

On Mon, Mar 2, 2015 at 4:29 PM, Patrick Vlaskovits vlaskov...@gmail.com
wrote:

 Hiya!

 We've released a free app for Android and iOS developers that measures
 roundtrip audio latency.

 http://superpowered.com/latency/

 Interestingly enough, the data suggest that older iOS devices have BETTER
 latency than more recent ones. Ouch!

 iPhone 6 Plus comes in at 38 ms, while iPhone 4S comes in at a healthy 8 ms.

 Apps and device latency data are here -
 http://superpowered.com/latency/

 Please don't hesitate to reach out if we can answer any questions:
 he...@superpowered.com

 Keep on keepin' on,
 Patrick
 @pv

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Michael Gogins
What I am interested in, regarding this discussion, is quite specific.
I make computer music using Csound, and usually using completely
synthesized sound, and so far only in stereo. Csound can run at any
sample rate, can output floating-point soundfiles, and can dither. My
sounds are not necessarily simple and cover the whole frequency range
and a wide dynamic range.

My only real question is, since the signal path right up to the point
where the soundfile is written is likely to be the same in all cases,
what kind of differences, if any, can I try to hear in CD audio versus,
say, 96 kHz floating-point?

These differences (if any) will be caused by the different Csound
sampling rate, the different soundfile sample word size/dynamic range,
and of course the different things that might happen to these two
kinds of soundfiles on their way out of a high-quality
DAC/amplifier/monitor speaker rig.

At times, my pieces have fortunately been presented in nice quiet
concert halls with really good amplifiers and speakers. I have also
been able to listen a few times in high-end recording studios designed
for this kind of music (this is a very different listening
experience).

Regards,
Mike

-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Tue, Feb 10, 2015 at 4:13 PM, Ethan Duni ethan.d...@gmail.com wrote:
 I'm all for releasing stuff from improved masters. There's a trend in my
 favorite genre (heavy metal) to rerelease a lot of classics in full
 dynamic range editions lately. While I'm not sure that all of these
 releases really sound much better (how much dynamic range was there in an
 underground death metal recording from 1991 anyway?) I like the trend.
 These are regular CD releases, no weird formats (demonstrating that such is
 not required to sell the improved master releases).

 But the thing is that you often *can* hear the extra sampling frequency -
 in the form of additional distortion. It sounds, if anything, *worse* than
 a release with an appropriate sample rate! Trying to sell people on better
 audio, and then giving them a bunch of additional intermodulation
 distortion is not a justified marketing ploy, it's outright deceptive and
 abusive. This is working from the assumption that your customers are
 idiots, and that you should exploit that to make money, irrespective of
 whether audio quality is harmed or not. The fact that Neil Young is himself
 one of the suckers renders this less objectionable, but only slightly.
 Anyway Pono is already a byword for audiophile snake oil so hopefully the
 damage will mostly be limited to the bank accounts of Mr. Young and his
 various financial backers in this idiocy. Sounds like the product is a real
 dog in industrial design terms anyway (no hold button, awkward shape,
 etc.). Good riddance...

 E
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Michael Gogins
This was done before John ffitch (I believe it was he) changed the
filter samples in even the single-precision version of Csound to use
double-precision. And I think this change may have been made as a
result of my report.

Regards,
Mike

-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Fri, Feb 6, 2015 at 2:04 PM, Victor Lazzarini
victor.lazzar...@nuim.ie wrote:
 Yes, but note that in the case Michael is reporting, all filters have 
 double-precision coeffs and data storage. It is only when passing samples 
 between unit generators that the difference lies (either single or
 double precision is used). Still, I believe that
 there can be audible differences.

 Victor Lazzarini
 Dean of Arts, Celtic Studies, and Philosophy
 Maynooth University
 Ireland

 On 6 Feb 2015, at 18:43, Ethan Duni ethan.d...@gmail.com wrote:

 Thanks for the reference Vicki

 What they are hearing is not noise or peaks sitting at the 24th
 bit but rather the distortion that goes with truncation at 24b, and
 it is said to have a characteristic coloration effect on sound.  I'm
 aware of an effort to show this with AB/X tests, hopefully it will be
 published.

 I'm skeptical, but definitely hope that such a test gets undertaken and
 published. Would be interesting to have some real data either way.

 The problem with failing to dither at 24b is that many such truncation
 steps would be done routinely in mastering, and thus the truncation
 distortion products continue to build up.

 Hopefully everyone agrees that the questions of what is appropriate for
 intermediate processing and what is appropriate for final distribution are
 quite different, and that substantially higher resolutions (and probably
 including dither) are indicated for intermediate processing. As Michael
 Gogins says:

 In my own work, I have verified with a double-blind ABX comparator at
 a high degree of statistical significance that I can hear the
 differences in certain selected portions of the same Csound piece
 rendered with 32 bit floating point samples versus 64 bit floating
 point samples. These are sample words used in internal calculations,
 not for output soundfiles. What I heard was differences in the sound
 of the same filter algorithm. These differences were not at all hard
 to hear, but they occurred in only one or two places in the piece.

 Indeed, it is not particularly difficult to cook up filter
 designs/algorithms that will break any given finite internal resolution. At
 some point those filter designs become pathological, but there are plenty
 of reasonable cases where 32 bit float internal precision is insufficient.
 Note that a 32-bit float only has 24 bits of mantissa, which is 8 bits less
 than is typically used in embedded fixed-point implementations (for
 sensitive components like filter guts, I mean). So even very standard stuff
 that has been around for decades in the fixed-point world will break if
 implemented naively in 32 bit float.

 E
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] 14-bit MIDI controls, how should we do Coarse and Fine?

2015-02-05 Thread Michael Gogins
I would like to see a redefinition or extension of the MIDI
specification as follows:

-- The semantics of the messages are changed as little as possible --
still note on, note off, controller, etc with the same ID numbers.

-- No fiddly 7/8 bit numbers to represent delta times, float times
from start of performance used instead.

-- Indeed, all values are float.

-- Indeed, any number of channels, controller numbers, etc.

-- Note on messages can optionally have IDs tying them unambiguously
to specific note off messages, for true polyphony.

-- The network protocol for the lower half of the driver can be
anything and plug into the same upper half of the driver. Thus
transport speed can go way up.

The virtue of this scheme is that it can extend the functionality of
existing software by plugging new drivers into it: the extended
protocol values are cut down to remain backward compatible with existing
software, while new functionality is enabled in new drivers for new
software, and the programming patterns would stay similar.
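
Purely as an illustration of the idea (every name here is hypothetical, not
from any existing specification), such a message might be declared like this
in C++:

    #include <cstdint>

    enum class MessageType : std::uint8_t { NoteOn, NoteOff, Controller };

    struct FloatEvent {
        MessageType   type;
        double        time;     // seconds from start of performance
        float         channel;  // any number of channels
        float         key;      // fractional pitch allowed
        float         value;    // velocity or controller value, 0.0 .. 1.0
        std::uint64_t noteId;   // ties a NoteOn to its NoteOff for true polyphony
    };

    // The "upper half" of a driver would consume FloatEvent; the "lower half"
    // (the transport) could be anything, and a backward-compatibility shim
    // would quantize these fields down to 7-bit MIDI where needed.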

What do you think?

Regards,
Mike



-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Thu, Feb 5, 2015 at 4:06 PM, robert bristow-johnson
r...@audioimagination.com wrote:
 On 2/5/15 8:37 AM, Theo Verelst wrote:

 robert bristow-johnson wrote:

 ...
 so this may have been settled long ago, but i cannot from the standard,
 make perfect sense of it.
 ...


 Does it really say modulation bender in the midi-org spec ?


 i dunno where i got that text file, but this guy
 http://archive.cs.uu.nl/pub/MIDI/DOC/midi-specs got the same.  it really
 does say that in this translation of the spec to someone's .txt file.

 Or is the official midi org a thing of the past.

 maybe, i dunno.

 It seems to me like it's up to the receiver of the messages to make some
 sort of communicating systems type of sense of it all. Hard technically
 thinking, that leads to a lot of protocol and error control that needs to be
 done, with timers, interpretation of the controller values, possible tempo
 match implications (does the controller come just before or after the new
 bar, and what is the function of the controller and what are it's side
 effects in a mix for instance), so by lack of a definition I suppose it
 should follow from what it is that you are designing, and what will drive
 it.

 Pretty early on in the Midi-hausse I made some new software with elements
 that didn't exist yet for a couple of synthesizers, but the advantage was
 the machines I wrote software to drive were pretty much a given entity,
 and I had them on my desk, so I could within reason verify that the
 software worked well.

 More recently *I* delved into bit-precise timing issues of MIDI messages,
 which, for instance for monophonic synthesizer modules, could lead to a pretty
 constant latency, in principle working more accurately than the 1/31250
 second.


 MIDI isn't even that.  more like a MIDI msg period of (2 or 3) * (8+2) *
 1/31250 second.  your timing precision is not better than 0.64 ms.  if you
 had 11 simultaneous Note On messages, one note would be happenin' 6 ms later
 than the first note, no getting around that.

 That's stretching the normal intention and nice use of the MIDI standard
 from the time of the popularity of serial interfaces

 i wasn't trying to stretch or change the standard.  i just wanted to know
 how to avoid the glitch (of 127/16384) you might get if someone's smooth
 14-bit precision control is adjusting a parameter (i think the only way to
 avoid it is with a couple ms delay).  perhaps, no one is doing 14-bit
 control precision.  i know one company i worked for did not.

 someone mentioned using the LSB controls (#32-63) as the MSB for some
 unrelated controls.  sounds a little application specific (or unpredictable)
 to me.

 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] I am working on a new DSP textbook using Python. Comments are welcome!

2015-01-14 Thread Michael Gogins
Cool!

Thanks,
Mike
On Jan 14, 2015 6:41 PM, Allen Downey dow...@allendowney.com wrote:

 I am developing a textbook for a computational (as opposed to mathematical)
 approach to DSP, with emphasis on applications -- especially sound/music
 processing.  People on this list might like this example from Chapter 9:


 http://nbviewer.ipython.org/github/AllenDowney/ThinkDSP/blob/master/code/chap09preview.ipynb

 I have a draft of the first 9 chapters, working on one or two more. I am
 publishing excerpts and the supporting IPython notebooks in my blog, here:

 http://thinkdsp.blogspot.com

 Or if you want to go straight to the book, it is here:

 http://think-dsp.com

 Comments (and corrections) are welcome!

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] magic formulae

2014-11-27 Thread Michael Gogins
I've experimented with this using LuaJIT, which has bitwise operations. I
used a LuaJIT binding to PortAudio for real-time audio output. I can send
you my stuff if you like.
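
For anyone curious, a minimal C++ version of the same idea (the formula is
one of the well-known one-liners from viznut's posts, not something new from
this thread): write raw unsigned 8-bit samples to stdout and pipe them into
a playback tool, e.g. ./bytebeat | aplay -r 8000 -f U8

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        for (std::uint32_t t = 0; ; ++t) {
            // Classic bytebeat one-liner; the low 8 bits are the sample.
            std::uint8_t sample = static_cast<std::uint8_t>(
                t * ((t >> 12 | t >> 8) & 63 & t >> 4));
            std::putchar(sample);
        }
    }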

Regards,
Mike
On Nov 27, 2014 8:54 AM, Victor Lazzarini victor.lazzar...@nuim.ie
wrote:

 Thanks everyone for the links. Apart from an article in arXiv written by
 viznut, I had no
 further luck finding papers on the subject (the article was from 2011, so
 I thought that by
 now there would have been something somewhere, beyond the code examples and
 overviews etc.).
 
 Dr Victor Lazzarini
 Dean of Arts, Celtic Studies and Philosophy,
 Maynooth University,
 Maynooth, Co Kildare, Ireland
 Tel: 00 353 7086936
 Fax: 00 353 1 7086952

  On 27 Nov 2014, at 13:38, Tito Latini tito.01b...@gmail.com wrote:
 
  On Thu, Nov 27, 2014 at 09:46:13AM -0200, a...@ime.usp.br wrote:
  Another post from him, with more analysis stuff.
 
 
 http://countercomplex.blogspot.com.br/2011/10/some-deep-analysis-of-one-line-music.html
 
  Cheers,
  Antonio.
 
  Quoting Ross Bencina rossb-li...@audiomulch.com:
 
  On 27/11/2014 8:35 PM, Victor Lazzarini wrote:
  Does anyone have any references for magic formulae for synthesis (I
  am not sure that this is the usual term)?
  What I mean is the type of bit manipulation that generates
  rhythmic/pitch patterns etc., built (as far as I can see)
  a little bit on an ad hoc basis, like kt*((kt>>12|kt>>8)&63&kt>>4)
  etc.
 
   If anyone has a suggestion of papers etc on the subject, I'd be
  grateful.
 
  Viznut's stuff was going on a couple of years ago:
 
 
 http://countercomplex.blogspot.com.au/2011/10/algorithmic-symphonies-from-one-line-of.html
 
  Cheers,
 
  Ross.
 
  other links here
 
  http://canonical.org/%7Ekragen/bytebeat/
 

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Michael Gogins
For straight sample playback, the C library FluidSynth, you can use it via
PInvoke. FluidSynth plays SoundFonts, which are widely available, and there
are tools for making your own SoundFonts from sample recordings.

For more sophisticated synthesis, the C library Csound, you can use it via
PInvoke. Csound is basically as powerful as it gets in sound synthesis.
Csound can use FluidSynth. Csound also has its own basic toolkit for simple
sample playback, or you can build your own more complex samplers using
Csound's orchestra language.
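
For reference, a bare-bones sketch of the underlying FluidSynth C API that
such a binding would wrap (the SoundFont file name is a placeholder; check
the FluidSynth documentation for current details):

    #include <fluidsynth.h>
    #include <chrono>
    #include <thread>

    int main()
    {
        fluid_settings_t *settings = new_fluid_settings();
        fluid_synth_t *synth = new_fluid_synth(settings);
        fluid_audio_driver_t *driver = new_fluid_audio_driver(settings, synth);

        fluid_synth_sfload(synth, "example.sf2", 1);   // hypothetical SoundFont
        fluid_synth_noteon(synth, 0, 60, 100);         // channel, key, velocity
        std::this_thread::sleep_for(std::chrono::seconds(2));
        fluid_synth_noteoff(synth, 0, 60);

        delete_fluid_audio_driver(driver);
        delete_fluid_synth(synth);
        delete_fluid_settings(settings);
        return 0;
    }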

Hope this helps,
Mike


-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Wed, Feb 26, 2014 at 11:56 AM, Mark Garvin mgar...@panix.com wrote:

 I realize that this is slightly off the beaten path for this group,
 but it's a problem that I've been trying to solve for a few years:

 I had written software for notation-based composition and playback of
 orchestral scores. That was done via MIDI. I was working on porting
 the original C++ to C#, and everything went well...except for playback.
 The world has changed from MIDI-based rack-mount samplers to computer-
 based samples played back via hosted VSTi's.

 And unfortunately, hosting a VSTi is another world of involved software
 development, even with unmanaged C++ code. Hosting with managed code
 (C#) should be possible, but I don't think it has been done yet. So
 I'm stuck. I've spoken to Marc Jacobi, who has a managed wrapper for
 VST C++ code, but VSTi hosting is still not that simple. Marc is very
 helpful and generous, and I pester him once a year, but it remains an
 elusive problem.

 It occurred to me that one of the resourceful people here may have
 ideas for working around this. What I'm looking for, short term, is
 simply a way to play back orchestral samples or even guitar/bass/drums
 as a way of testing my ported C# code. Ideally send note-on, velocity,
 note-off, similar to primitive MIDI. Continuous controller for volume
 would be icing.

 Any ideas, however abstract, would be greatly appreciated.

 MG
 NYC


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Michael Gogins
Sorry for the misunderstanding.

I think the VSTHost code could be adapted. It is possible to mix managed
C++/CLI and unmanaged standard C++ code in a single binary. I think this
could be used to provide a .NET wrapper for the VSTHost classes that C#
could use.

Regards,
Mike


-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Thu, Feb 27, 2014 at 7:02 PM, Ross Bencina rossb-li...@audiomulch.comwrote:

 On 28/02/2014 12:16 AM, Michael Gogins wrote:

 For straight sample playback, the C library FluidSynth, you can use it via
 PInvoke. FluidSynth plays SoundFonts, which are widely available, and
 there
 are tools for making your own SoundFonts from sample recordings.

 For more sophisticated synthesis, the C library Csound, you can use it via
 PInvoke. Csound is basically as powerful as it gets in sound synthesis.
 Csound can use FluidSynth. Csound also has its own basic toolkit for
 simple
 sample plaback, or you can build your own more complex samplers using
 Csound's orchestra language.


 If I understand correctly the OP wants a way to host Kontakt and other
 commercial sample players within a C# application, not to code his own
 sample player or use something open source.

 The question is the quickest path to hosting pre-existing VSTis in C# and
 sending them MIDI events.

 Ross.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] can someone precisely define for me what is meant by proportional Q?

2014-02-15 Thread Michael Gogins
Thank you, indeed, for this very helpful video. I have learned a fair
amount about sampling and digital audio by studying and doing computer
music for decades, but this definitely deepened my understanding and was
enjoyable to boot.

Regards,
Mike


-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Thu, Feb 13, 2014 at 6:39 PM, Dave Gamble davegam...@gmail.com wrote:

 This video is excellent:
 http://www.youtube.com/watch?feature=player_embeddedv=cIQ9IXSUzuM

 I recommend it unreservedly to everyone here. I think it's a phenomenally
 well considered piece of work.

 Theo, I think this will be helpful to you in terms of clarifying the
 consequences of the sampling theorem in an intuitive way.

 Dave.

 On Thursday, February 13, 2014, Theo Verelst theo...@theover.org wrote:

  Dave Gamble wrote:
 
  Hey Theo,
 
  How low a THD+N figure from a DAC would satisfy you?
 
 
  it depends on how you measure. How about 44 wave-pieces for a 1kHz tone
  from CD. What do you think ?
 
  T.
 

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Of possible interest

2012-11-20 Thread Michael Gogins
http://arxiv.org/pdf/1211.4047.pdf


--
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] recommendation for VST host for dev. modifications

2012-06-26 Thread Michael Gogins
The JUCE license (GPL) is not compatible with the Csound license (LGPL).

Regards,
Mike

On Tue, Jun 26, 2012 at 4:56 PM, Rob Belcham hybridalien...@hotmail.com wrote:
 JUCE has quite a good vst host. I use it a lot for testing VST plugins.

 Cheers
 Rob

 --
 From: Roberta music-...@musemagic.com
 Sent: Monday, June 25, 2012 4:40 AM
 To: music-dsp@music.columbia.edu
 Subject: [music-dsp] recommendation for VST host for dev. modifications


 Hi,

 I'm wondering if anyone has worked with any VST host source, open source,
 for some development modifications, which one most closely models Cubase and
 is easy to work with?  Alternatively the src. for VST Host which comes with
 the Cubase VST SDK?   Right now my best candidate is LMMS.  Thx.




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] recommendation for VST host for dev. modifications

2012-06-26 Thread Michael Gogins
MrsWatson appears to presuppose the use of the Steinberg VST SDK,
which is precisely what I am proposing to avoid.

Regards,
Mike

On Tue, Jun 26, 2012 at 6:31 PM, Alessandro Saccoia
alessandro.sacc...@gmail.com wrote:
 You could take a look at Mrs Watson from Teragon Audio
 http://teragonaudio.com/MrsWatson.html
 best
 Alessandro

 On Jun 26, 2012, at 11:10 PM, Michael Gogins wrote:

 The JUCE license (GPL) is not compatible with the Csound license (LGPL).

 Regards,
 Mike

 On Tue, Jun 26, 2012 at 4:56 PM, Rob Belcham hybridalien...@hotmail.com 
 wrote:
 JUCE has quite a good vst host. I use it a lot for testing VST plugins.

 Cheers
 Rob

 --
 From: Roberta music-...@musemagic.com
 Sent: Monday, June 25, 2012 4:40 AM
 To: music-dsp@music.columbia.edu
 Subject: [music-dsp] recommendation for VST host for dev. modifications


 Hi,

 I'm wondering if anyone has worked with any VST host source, open source,
 for some development modifications, which one most closely models Cubase 
 and
 is easy to work with?  Alternatively the src. for VST Host which comes with
 the Cubase VST SDK?   Right now my best candidate is LMMS.  Thx.




 --
 Michael Gogins
 Irreducible Productions
 http://www.michael-gogins.com
 Michael dot Gogins at gmail dot com



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] To EE or not to EE (Was: Job at Waldorf and Possible Job Opportunity)

2012-05-03 Thread Michael Gogins
I'm an algorithmic composer and contributor to Csound. I have no
academic qualification for this work. I was a music major in jazz
performance for a year, but my B.A. is in comparative religion.

Nevertheless I profoundly believe that formal education can be
enormously beneficial. It all depends on the quality of the teachers.
I have found that formal education can (a) force you to do the
homework, i.e. to get your hands dirty, in an accelerated process
where trained (teachers), semi-trained (teaching assistants) and
untrained (fellow students) are right at hand to help you out. But
more importantly (b) good teachers can convey critical thinking. In
good schools, critical thinking is what they are really teaching.

In my experience critical thinking does not come naturally, because
you have to learn that the first suspect in what is wrong is yourself,
and the second suspect is what you assume, and the third suspect is
what everyone knows. Also, you kind of need to have a good living
example of a critical thinker in front of you to show you how it's
done.

I guess what I'm really talking about is teachers, not formal
education, but for some reason teachers are most commonly found and
most easily located in schools.

Regards,
Mike

On Thu, May 3, 2012 at 5:58 PM, Nigel Redmon earle...@earlevel.com wrote:
 A couple of ideas...

 First, note that "Given identical qualification" can imply that someone 
 without a degree might have gotten to the same level as someone with a degree 
 by a lot of digging and figuring on their own. Some call this getting one's 
 hands dirty—implying that you didn't just read books on theory and listen 
 to lectures, you had to do dirty work, and create things—try things and think 
 about why they worked or didn't.

 Also, a formal education can lock you into a limiting mentality. For 
 instance, we all know the limitations of linear interpolation, but some know 
 it too well.

 ;-)


 On May 2, 2012, at 8:47 PM, Ross Bencina wrote:
 Hi All, (but especially Stefan and Al)

 I'm wondering if I can draw you on what is it about Electrical Engineering 
 qualifications that is important to these kind of jobs (I have some ideas, 
 but not the full picture, since I'm not an EE).

 I was interested to see in Stefan's recently posted job:

 ...Advantageous:
 - Some insight into electrical engineering
 [...]
 Given identical qualification, we prefer candidates without a formal 
 degree
 -- http://www.waldorfmusic.de/en/jobs.html

 What is problematic about formal degrees in this context?


 Then Al posted a job:
 ...We are considering a broad range of candidates, from recent graduates 
 (electrical engineering or convince us otherwise)


 I'm someone with a computer music and software development background who's 
 just started taking some math subjects in my spare time to fill in some 
 gaps -- so I'm guessing that mathematic modelling of electronic systems and 
 digital signal processing mathematics are a big part of what you're after.

 Can you clarify what skills you anticipate from EE graduates or people with 
 insight into EE?

 Thanks!

 Ross.





-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-28 Thread Michael Gogins
On the one hand, I completely agree with Bill, I'm only interested in
whether the music is good, and no, I don't think it's completely
subjective.

On the other hand, I do think there are many things it would be
advisable for a person who wants to write good music to know.

For someone who wants to write good computer music, what would be
useful to know is... bewildering and hard to define! But some
understanding of how good software is written, some understanding of
signal processing, some understanding of musical acoustics and
psychological acoustics, some understanding of music history and
theory, some understanding of musical form, and above all a deep,
personal immersion in music itself... deep listening.

On Tue, Feb 28, 2012 at 11:03 AM, Bill Schottstaedt
b...@ccrma.stanford.edu wrote:
 I don't think this conversation is useful.  The only question I'd
 ask is "did this person make good music?", and I don't care at all about
 his degrees or grants.  One of the best mathematicians I've known
 does not even have a high-school diploma.  If I find such a person,
 then it's interesting to ask how she did it.  But there are very few,
 and no generalizations seem to come to mind.




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-27 Thread Michael Gogins



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-24 Thread Michael Gogins
Ross is correct. I know that RTcmix supports real-time audio - I was
using it that way just last week. What I meant is that before you run
a new synthesis routine in RTcmix, you have to compile its C++ source
code.

I was trying to get LuaJIT to do this (run DSP code immediately). I
created the infrastructure for this and it works. Unfortunately, it
only works reliably for simple test cases. Complex code causes LuaJIT
to crash. Not sure why. This kind of thing works much better in
Csound.

That's one reason I'm taking a closer look at working directly in C++.
That, in turn, makes RTcmix itself more attractive in some ways.

Best,
Mike

On Fri, Feb 24, 2012 at 3:25 AM, Ross Bencina
rossb-li...@audiomulch.com wrote:
 Hi Brad,


 On 24/02/2012 3:01 PM, Brad Garton wrote:

 Joining this conversation a little late, but what the heck...


 Me too...


 On Feb 22, 2012, at 9:18 AM, Michael Gogins wrote:

 I got my start in computer music in 1986 or 1987 at the woof group at
 Columbia University using cmix on a Sun workstation.


 Michael was a stalwart back in those wild Ancient Days!

 cmix has never
 had a runtime synthesis language; even now instrument code has to be
 written in C++.


 One possible misconception -- by runtime synthesis language I'm sure
 Michael
 means a design language for instantiating synthesis/DSP algorithms *in
 real time*
 as the language/synth-engine is running.  I tend to think of languages
 like ChucK
 or Supercollider more in that sense than Csound, and even SC
 differentiates between
 the language and then sending the synth-code to the server.


 My reading would be that Michael may be implying that there is a difference
 between interpretation and compilation.

 CSound does not have a runtime synthesis language either. It's a compiler
 with a VM. There is no way to re-write the code while it's running.

 SC3 is very limited in this regard too (you can restructure the synth graph
 but there's no way to edit a synthdef except by replacing it, and there's no
 language code running sample synchronously in the server). So you have a
 kind of runtime compilation model.

 I didn't get much of a chance to play with SC1 but my understanding is that
 you could actually process samples in the synthesis loop (like you can with
 cmix). To me this is real runtime synthesis. You get this in C/C++ too --
 your program can make signal dependent runtime decisions about what
 synthesis code to execute.

 Anything else is just plugging unit generators together, which is limiting
 in many situations (one reason I abandoned these kind of environments and
 started writing my algorithms in C++).



 RTcmix (http://rtcmix.org) works quite well in real time; in fact it has
 now for almost
 two decades.  The trade-off in writing C/C++ code is that it is one of the
 most
 efficient languages currently in use.  We've also taken a route which
 allows it
 to be 'imbedded' in other environments.  rtcmix~ was the first of the
 'language
 objects' I did for max/msp.  iRTcmix (RTcmix in iOS) even passes muster at
 the
 clamped App Store, check out iLooch for fun:
  http://music.columbia.edu/~brad/ilooch/
 (almost 2 years old now).

 For me the deeper issue is how these various languages/environments shape
 creative thinking.  I tend to like the way I think about music, especially
 algorithmic composition, using the RTcmix parse language more than I do in,
 say, SC.  Each system has things 'it likes to do', and I think it important
 to be aware of these.


 Indeed.

 The problem with "plug unit generators together" languages for me is that they
 privilege the process (the network of unit generators) over the content (the
 signal). Programming in C++ makes the signal efficiently accessible. Nothing
 wrong with patchable environments of course :) just that they're not the whole
 story.

 Ross.








-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-22 Thread Michael Gogins
I have done this several times and plan to do more.

I got my start in computer music in 1986 or 1987 at the woof group at
Columbia University using cmix on a Sun workstation. cmix has never
had a runtime synthesis language; even now instrument code has to be
written in C++. Score code can be written in a variety of languages
including the built-in minc interpreter, Perl, Python, C, C++, and I
have used Lua. The pieces I made used C++ for both synthesis and score
generation. They were done using a Lindenmayer system to place
complex, recursively generated patterns of phase-aligned grains into a
soundfile.

When I created the CsoundAC class library, which is written in C++, it
was intended to be usable with C++ or with various wrapper languges
such as Java, Python, Lisp, or Lua. Nobody maintained the C++
interface, but I am making it usable directly from C++ again. In this
case, Csound is used for synthesis, but actually CsoundAC itself has
some rudimentary facilities for synthesis including a soundfile class
and some granular synthesis classes.

Right now, I am finishing some fixes that need to be done so that I
can compose in C++ using Qt as a toolkit for widgets that I will use
to tweak mastering (final EQ, reverb, and compression) as well as
sensitive instrument parameters (e.g. for STKBowed). In this case
also, Csound is used for synthesis but the orchestra code is
completely embedded in the C++ program. I will probably also embed
plugin opcodes written in C++ in the program and register them with
Csound just before compiling the orchestra. These plugins will not
actually be external DLLs; they will be routines in the main
composition program itself.

Ideally, I would like to write entire instrument definitions in C++
embedded in the program. Then Csound would serve as an engine and a
framework that would manage scheduling, voice allocation, and all
input and output. These are the hard parts of music programming. I
would manage all the score generation and sound synthesis in C++.
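
To make that concrete, here is a bare-bones sketch of such a program using
the Csound C API (Csound 6); the one-oscillator orchestra and score are
placeholders, not the actual instruments described above:

    #include <csound/csound.h>

    int main()
    {
        const char *orc =
            "sr = 48000\nksmps = 64\nnchnls = 2\n0dbfs = 1\n"
            "instr 1\n a1 oscili 0.2, p4, 1\n outs a1, a1\nendin\n";
        const char *sco = "f 1 0 16384 10 1\ni 1 0 2 440\ne\n";

        CSOUND *csound = csoundCreate(NULL);
        char rtOption[] = "-odac";             // real-time audio output
        csoundSetOption(csound, rtOption);
        csoundCompileOrc(csound, orc);
        csoundReadScore(csound, sco);
        csoundStart(csound);
        while (csoundPerformKsmps(csound) == 0)
            ;                                  // one control block per call
        csoundDestroy(csound);
        return 0;
    }

Everything else, including score generation, widgets, and plugin opcode
registration, can live in the same C++ program around this loop.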

I am writing an article about composing in C++ with the Csound API and
CsoundAC, and I will try to get it published in the Csound Journal or
elsewhere.

All these techniques work with command-line programs as well as with GUIs.

Regards,
Mike

On Wed, Feb 22, 2012 at 8:44 AM, Adam Puckett adotsdothmu...@gmail.com wrote:
 It's nice to see some familiar names in Csound's defense.

 Here's something I've considered since learning C: has anyone
 (attempted to) compose music in straight C (or C++) just using the
 audio APIs? I think that would be quite a challenge. I can see quite a
 bit more algorithmic potential there than probably any of the DSLs
 written in it.

 On 2/21/12, Michael Gogins michael.gog...@gmail.com wrote:
 It's very easy to use Csound to solve idle mind puzzles! I think many
 of us, certainly myself, find ourselves becoming distracted by the
 technical work involved in making computer music, as opposed to the
 superficially easier but in reality far more difficult work of
 composing.

 Regards,
 Mike

 On Tue, Feb 21, 2012 at 7:53 PM, Emanuel Landeholm
 emanuel.landeh...@gmail.com wrote:
 Well. I need to start using csound. To actually do things in the real
 world instead of just solving idle mind puzzles.

 On Tue, Feb 21, 2012 at 10:02 PM, Victor victor.lazzar...@nuim.ie wrote:
 i have been running csound in realtime since about 1998, which makes it
 what? about fourteen years, however i remember seeing code for RT audio
 in the version i picked up from cecelia.media.mit.edu back in 94. So,
 strictly this capability has been there for the best part of twenty
 years.




 --
 Michael Gogins
 Irreducible Productions
 http://www.michael-gogins.com
 Michael dot Gogins at gmail dot com



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-22 Thread Michael Gogins
For me as a composer working almost exclusively with algorithmic
composition and synthesis, the question of language is complex. It's
not just the power of the language, but also the ease of writing code,
plus the time for building, plus the time for maintaining any
necessary build system and other parts of the toolchain, plus the
number of steps involved in the compose/build/run/listen,
compose/build/run/listen cycle, plus any postproduction or mastering
steps... and the actual runtime speed of the code is often critical as
some of the algorithmic composition procedures are quite
compute-intensive.

I've experimented with many languages, computer music languages, etc.,
and it's come down to using Csound plus as few other parts as
possible. But runtime speed always turns out to be more important than
one might expect... That in turn means a choice between composing in
LuaJIT or composing in C++, with qtcreator taking care of the build
system and toolchain maintenance, and some additions to CsoundAC
taking care of the post-processing stuff.

I would prefer to compose using just one language, but at least this
way I only have to deal with the Csound orchestra language plus one
other language, either Lua or C++. I don't have enough experience yet
to tell which way I will end up going, but right now it looks like
each piece will be a C++ program written in qtcreator and run from
qtcreator. Each piece will have as much of a GUI as it needs for
mastering and testing instrument parameters, and automatically do any
necessary post-processing, Jack configuration, etc.

This is what I will be writing about...
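
In outline, the skeleton of such a piece-as-a-program is small. Here is a
minimal sketch using the Csound C API; the header path and the exact
function names and signatures vary between Csound versions, "piece.csd" is
just a placeholder, and the GUI, Jack configuration and post-processing
steps are all omitted:

    #include <csound/csound.h>   // header location varies by platform and version

    int main()
    {
        CSOUND *cs = csoundCreate(nullptr);        // one Csound instance per piece
        // The algorithmic-composition code would generate the score and
        // orchestra here, writing them into piece.csd (or into memory).
        if (csoundCompileCsd(cs, "piece.csd") == 0) {
            csoundStart(cs);
            while (csoundPerformKsmps(cs) == 0)    // render one control block at a time
                ;                                  // per-block mastering hooks could go here
        }
        csoundDestroy(cs);
        return 0;
    }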

Regards,
Mike

On Wed, Feb 22, 2012 at 11:17 AM, Risto Holopainen
risto.holopai...@imv.uio.no wrote:

 Yes, it's quite a challenge to compose music just using C/C++. Lately, I
 have tried to compose entire pieces written in C++. Many of them are just
 monolithic feedback systems with oscillators, filters and some low-level
 feature extractors. There are no hierarchic levels of control functions,
 the musical form emerges from the specific algorithm. In principle,
 similar programs should be possible to write in csound and other
 languages, but I have found it easier to use C++ for this. As others have
 pointed out, you easily forget your musical ideas while programming. The
 trick is: don't have any ideas to begin with!
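
 To give a flavour, here is a bare-bones sketch of such a monolithic
 feedback piece (an illustration written for this thread, not one of the
 actual pieces): a sine oscillator, a one-pole lowpass, and a crude envelope
 follower whose output is fed back into the oscillator's frequency. It
 writes raw 32-bit float mono samples at 44100 Hz to "out.raw"; the file
 name, the rate and every constant are arbitrary choices.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double sr = 44100.0, pi = 3.141592653589793;
        double phase = 0.0, freq = 110.0, lp = 0.0, env = 0.0;
        std::FILE *f = std::fopen("out.raw", "wb");
        if (!f) return 1;
        for (long n = 0; n < 60 * 44100L; ++n) {      // one minute of sound
            double s = std::sin(phase);
            lp  += 0.05   * (s - lp);                 // the "filter"
            env += 0.0005 * (std::fabs(lp) - env);    // the low-level feature extractor
            freq = 110.0 + 880.0 * env;               // the feature fed back into the oscillator
            phase += 2.0 * pi * freq / sr;
            if (phase > 2.0 * pi) phase -= 2.0 * pi;
            float out = static_cast<float>(0.5 * lp);
            std::fwrite(&out, sizeof out, 1, f);
        }
        std::fclose(f);
        return 0;
    }

 The form is whatever the loop settles into; import the file as headerless
 float data (or convert it with sox) to listen.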

 And speaking of short programs, this reminds me of the Obfuscated C
 contest and, not least, the new style of bytebeat (well, to me it seems
 like a revival of nonstandard / instruction synthesis):

 http://canonical.org/~kragen/bytebeat/
 There is also a paper at arXiv by Heikkilä that explains it.
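
 For the curious, a classic bytebeat program is just an integer formula of
 the sample counter t, producing one byte per sample, meant to be played
 back as unsigned 8-bit audio at around 8 kHz. One widely circulated
 formula, written out as a complete program:

    #include <cstdio>

    int main()
    {
        // One byte per iteration; the low 8 bits of the formula are the sample.
        for (unsigned t = 0; ; ++t)
            std::putchar(static_cast<unsigned char>(
                t * ((t >> 12 | t >> 8) & 63 & t >> 4)));
    }

 (Pipe the output into a raw unsigned-8-bit player at 8000 Hz, e.g. aplay on
 Linux; the player and the rate are conventions, not part of the formula.)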


 Risto Holopainen
 Department of Musicology
 University of Oslo




 It's nice to see some familiar names in Csound's defense.

 Here's something I've considered since learning C: has anyone
 (attempted to) compose music in straight C (or C++) just using the
 audio APIs? I think that would be quite a challenge. I can see quite a
 bit more algorithmic potential there than probably any of the DSLs
 written in it.

 On 2/21/12, Michael Gogins michael.gog...@gmail.com wrote:
 It's very easy to use Csound to solve idle mind puzzles! I think many
 of us, certainly myself, find ourselves becoming distracted by the
 technical work involved in making computer music, as opposed to the
 superficially easier but in reality far more difficult work of
 composing.

 Regards,
 Mike





-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-21 Thread Michael Gogins
It's very easy to use Csound to solve idle mind puzzles! I think many
of us, certainly myself, find ourselves becoming distracted by the
technical work involved in making computer music, as opposed to the
superficially easier but in reality far more difficult work of
composing.

Regards,
Mike

On Tue, Feb 21, 2012 at 7:53 PM, Emanuel Landeholm
emanuel.landeh...@gmail.com wrote:
 Well. I need to start using csound. To actually do things in the real
 world instead of just solving idle mind puzzles.

 On Tue, Feb 21, 2012 at 10:02 PM, Victor victor.lazzar...@nuim.ie wrote:
 I have been running Csound in realtime since about 1998, which makes it
 what, about fourteen years? However, I remember seeing code for RT audio in
 the version I picked up from cecelia.media.mit.edu back in '94. So, strictly,
 this capability has been there for the best part of twenty years.




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] music-dsp Digest, Vol 97, Issue 23

2012-01-19 Thread Michael Gogins
I am saying that the mastered dynamic range is such, not the dynamic
range of the gear: the range from the soft parts to the loud parts,
not from the noise floor to the clipping ceiling.

Regards,
Mike

On 1/19/12, Theo Verelst theo...@tiscali.nl wrote:
 music-dsp-requ...@music.columbia.edu wrote:
 from 30 to 60 or so
 for an expert EA concert presentation

 Huh? I had a moderate quality cassette recorder in the 70s which had
 better normally measured properties than that...



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Signal processing and dbFS

2012-01-18 Thread Michael Gogins
OK, I'll weigh in on this.

As noted, decibels are a relative measure of energy on a logarithmic
scale. Roughly, every time you double the amplitude of a signal, its
level increases by about 6 dB.

There is absolutely no fixed point or origin to the decibel scale. An
origin must be assigned. dB full scale (dBFS) means the origin of the
scale, 0, is located at full scale, i.e. the loudest signal that the
gear can handle. One must therefore measure down from 0 dBFS. dB
sound pressure level (dB SPL) means that the origin of the scale is
the faintest sound that a person can hear. One must therefore measure
up from 0 dB SPL.
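
To make the two origins concrete: a level in decibels is just 20 times
the base-10 logarithm of an amplitude ratio. For dBFS the reference is
full scale (1.0 for normalized floating-point samples), so readings come
out at or below zero; dB SPL works the same way but measures up from a
reference pressure of 20 micropascals. A small illustration (the snippet
is only a sketch, but the numbers it prints are exact):

    #include <cmath>
    #include <cstdio>

    // 20*log10(peak / full_scale); with full scale = 1.0 the division is a no-op.
    static double to_dbfs(double peak) { return 20.0 * std::log10(peak); }

    int main()
    {
        std::printf("peak 1.0   -> %7.2f dBFS\n", to_dbfs(1.0));    //    0.00
        std::printf("peak 0.5   -> %7.2f dBFS\n", to_dbfs(0.5));    //   -6.02  (half the amplitude)
        std::printf("peak 0.25  -> %7.2f dBFS\n", to_dbfs(0.25));   //  -12.04  (halved again)
        std::printf("peak 0.001 -> %7.2f dBFS\n", to_dbfs(0.001));  //  -60.00
        return 0;
    }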

The human ear has a dynamic range from 0 dB SPL (all but inaudible
sound) to at least 120 or 130 dB SPL (the threshold of pain). Zero is
an unbelievably small amount of energy; if we heard any better, we
would hear air molecules hitting our eardrums. This is the same
dynamic range as a fine microphone, and it is a greater dynamic range
than even the very best audio gear can handle through a whole signal
chain (mic, preamp, transducer, recording; recording, transducer,
preamp, amp, speaker). Therefore, all recordings, even the best, have
a somewhat compressed dynamic range, and sound unnatural to trained
ears.

But of course there are much smaller levels of energy that we can
never hear, down to the stillness of air frozen solid far
underground, and much larger levels of energy that would instantly
kill us to hear, like the boom of an atom bomb or even the Big Bang.

Untrained people have a just-noticeable difference of several dB;
people who are experienced with sound have a just-noticeable
difference of about 1/2 dB.

In audio work, one must use the decibel scale that is appropriate to
one's musical or engineering objective.

If one is worried about clipping and distortion, one must establish a
0 dBFS level somewhat below the distortion ceiling of the gear, and
measure down. There will be a nominal 0 dBFS on the meter at that
point, which already provides some headroom below the gear's real
full scale.

If one is worried about noise, one must establish a 0 dB level
somewhat above the noise floor of the gear, and measure up.

If one is worried about the dynamic range of a recording, one must
take into account the intended listener's environment. I'm no expert,
but I think a dynamic range of 10 to 15 dB is good for a car, 15 to
30 dB for a home, 30 to 50 dB or so for an audiophile, and 30 to
60 dB or so for an expert electroacoustic (EA) concert presentation.

One must then situate one's intended dynamic range above the noise
floor of the gear, and below the 0 dBFS ceiling of the gear. With
plain tape, this is not quite possible, so one always either hears
some hiss or a compressed dynamic range. With Dolby tape, it is just
barely possible. With high-resolution digital audio it is easy -- if
the gear is good and used as intended. But it will still not be as
good as the ear.
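
As a back-of-the-envelope check of that situating (the numbers below
are purely illustrative): give the loudest passages a few dB of headroom
below 0 dBFS, subtract the intended program dynamic range, and see
whether the softest passages still clear the medium's noise floor by a
comfortable margin.

    #include <cstdio>

    int main()
    {
        const double headroom_db      = 6.0;    // peaks kept this far below 0 dBFS
        const double program_range_db = 60.0;   // loudest-to-softest passages (concert-style)
        const double noise_floor_dbfs = -96.0;  // roughly a plain 16-bit medium

        const double peaks_dbfs   = 0.0 - headroom_db;               //  -6 dBFS
        const double softest_dbfs = peaks_dbfs - program_range_db;   // -66 dBFS
        const double margin_db    = softest_dbfs - noise_floor_dbfs; //  30 dB above the noise

        std::printf("peaks %.0f dBFS, softest %.0f dBFS, %.0f dB above the noise floor\n",
                    peaks_dbfs, softest_dbfs, margin_db);
        return 0;
    }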




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Orfanidis-style filter design

2011-12-09 Thread Michael Gogins
Theo is correct regarding the morality of media today. The situation
is an absolute disaster. It will not be remedied unless and until
every download, page view, or stream of audio sends a payment to the
copyright holder. Doesn't have to be a big payment. But it needs to be
a reliable, actual payment.

Sincerely,
Mike Gogins

On Fri, Dec 9, 2011 at 12:55 PM, Theo Verelst theo...@tiscali.nl wrote:
   I still didn't learn what Orfanidis has to say, but I'm glad to see
 serious discussions about copyright and formal and practical
 organization methods for non-open-source intellectual or art materials.

   I think that when or if I create a nice piece of DSP software or a good
 theory applicable in the field, in most western countries I automatically
 have the legal copyright. Practically, I'm not so sure whether even
 building it into a ROM or choosing a high-profile, expensive scientific
 magazine to publish in would give me more chance of making a profit than
 bringing out a non-copy-protected Windows program as a small company with
 no legal department.

  I mean when the ideas of people in a field are not bound by normal laws,
 nor by instituted or commercial rules, local mores or personal morality, or
 for all I care religious morality, we might as well be living in a jungle
 hoping for Tarzan to throw us a bone. Modern music ongoings teach me
 certain groups of people are willing to bow a lot lower than just that, so
 I'm all for some amount of personal morality about these subjects, or the
 big brother effect in all kinds of public DSP (TV, CD mastering, etc.)
  probably will prevail leaving the good, talented and nice working people
 with shitty A/V materials and little income.

 Ir. Theo Verelst
 http://www.theover.org/Prod/studiosound.html




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] looking for a flexible synthesis system technically and legally appropriate for iOS development

2011-02-17 Thread Michael Gogins
LuaJIT is being ported by its impressive author, Mike Pall, to PowerPC
architecture, for pay.

Regards,
Mike

On Thu, Feb 17, 2011 at 9:47 AM, Gwenhwyfaer gwenhwyf...@gmail.com wrote:
 On 17/02/2011, Michael Gogins michael.gog...@gmail.com wrote:
 All reports are not yet in, but there is a distinct possibility that
 with LuaJIT, dynamic languages have come into their own and can be
 considered for many high-performance applications.

 Isn't LuaJIT currently limited to x86*? If so, that would seem to rule
 it out for iOS development... although as you say, it's a hugely
 impressive achievement in and of itself.
 ___
 * ...no longer - x86, x64 and PPC/e500v2 are supported; ARM, however, is not




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Modular synthesis percussion?

2011-02-17 Thread Michael Gogins
This book is an excellent source: Andy Farnell, Designing Sound. There
are code examples for Pure Data online, some of which go beyond the
book.
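
In the meantime, the plain VCO + VCA + EG recipe in the question goes a
long way for a kick: a sine oscillator whose pitch drops quickly from
around 150 Hz to 50 Hz under one fast envelope, with a slower exponential
decay shaping the amplitude. A sketch of a single hit (my own
illustration, not taken from Farnell's book; the constants, file name and
sample rate are just assumptions):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double sr = 44100.0, pi = 3.141592653589793;
        const int    N  = static_cast<int>(0.5 * sr);             // half a second
        double phase = 0.0;
        std::FILE *f = std::fopen("kick.raw", "wb");              // raw 32-bit float mono
        if (!f) return 1;
        for (int n = 0; n < N; ++n) {
            double t    = n / sr;
            double freq = 50.0 + 100.0 * std::exp(-40.0 * t);     // pitch EG: 150 Hz -> 50 Hz
            double amp  = std::exp(-8.0 * t);                     // amplitude EG
            float  s    = static_cast<float>(amp * std::sin(phase));
            phase += 2.0 * pi * freq / sr;
            std::fwrite(&s, sizeof s, 1, f);
        }
        std::fclose(f);
        return 0;
    }

A snare or hi-hat is the same recipe with a noise source through a
band-pass or high-pass filter in place of the sine, and Farnell's book
develops such models much further.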

Regards,
Mike

On Thu, Feb 17, 2011 at 12:51 PM, Alan Wolfe alan.wo...@gmail.com wrote:
 Hey Guys,

 Does anyone know how to do percussion sounds with modular synthesis?

 I'm talking about just using VCO, VCA, EG etc, not an actual
 percussion module (:

 I've been looking around and all the info i can find shows people
 using percussion modules which isn't so helpful if you only have the
 basic tools at your disposal hehe.

 Or i guess, info about percussion synthesis in the DSP realm at all
 would be nice too if anyone can point me at any of that.

 Thank you!!
 Alan




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] looking for a flexible synthesis system technically and legally appropriate for iOS development

2011-02-17 Thread Michael Gogins
What is a whip-round?

Regards,
Mike

On Thu, Feb 17, 2011 at 12:31 PM, Gwenhwyfaer gwenhwyf...@gmail.com wrote:
 On 17/02/2011, Michael Gogins michael.gog...@gmail.com wrote:
 LuaJIT is being ported by its impressive author, Mike Pall, to PowerPC
 architecture, for pay.

 So with an iPad and a whip-round...? ;)




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Microtonal Tuning Options

2011-01-14 Thread Michael Gogins
What is your technology for? Are you recording live music, tracking in
a studio, composing with notation, composing in a sequencer, or are
you doing pure synthesis to a soundfile without any live performance
at all? In the latter case, consider using Csound, which has none of
these limitations.
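
For reference, the arithmetic behind the per-channel pitch-bend scheme in
the quoted message is straightforward. Assuming the common default bend
range of +/-2 semitones (200 cents), a detuning in cents maps onto the
14-bit bend value around the no-bend centre of 8192 (this is a sketch,
not Dylan's plugin code):

    #include <cmath>
    #include <cstdio>

    // Clamp to the legal 14-bit range 0..16383; range_cents is the receiver's
    // configured bend range (200 = the common +/-2 semitone default).
    static int bend_for_cents(double cents, double range_cents = 200.0)
    {
        int value = 8192 + static_cast<int>(std::lround(cents / range_cents * 8192.0));
        if (value < 0)     value = 0;
        if (value > 16383) value = 16383;
        return value;
    }

    int main()
    {
        std::printf("+50.0 cents -> bend %d\n", bend_for_cents(50.0));   // 10240 (quarter tone up)
        std::printf("-13.7 cents -> bend %d\n", bend_for_cents(-13.7));  //  7631 (just major third, 13.7 cents flat)
        return 0;
    }

The bend then has to travel on the same MIDI channel as the note it
detunes, which is exactly why the 12-channel split (or the 6-channel
guitar mode) described below is needed.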

Regards,
Mike

On Fri, Jan 14, 2011 at 1:40 PM, Brad Smith rainwarr...@gmail.com wrote:
 Some years ago I wrote a stand-alone windows program to do it:

 http://www.rainwarrior.thenoos.net/intun/index.html

 It still works. Unfortunately, I lost the source code to it, so I
 can't make changes anymore. (I should have made it open source.)

 -- Brad Smith



 On Fri, Jan 14, 2011 at 1:34 PM, Conley, Dylan
 dylan.con...@marquette.edu wrote:
 Greetings All,

 I've developed a VST plugin that splits MIDI data into 12 channels, applies 
 varying pitch bend amounts to each chromatic channel and sends the data on 
 to a host/VSTi.  Unfortunately, it seems that most VST instruments and hosts 
 lack support for specific channel pitch bend.

 I've heard the capability I'm looking for referred to as Guitar mode.  
 MIDI data is divided between 6 channels representing the strings and pitch 
 bends can be applied to individual strings to mimic bends.

 I wonder, have any of you run into this shortcoming of VST hosts and 
 instruments?  Is anyone aware of a product that supports what I am asking 
 about?  Perhaps VST is not the right technology.

 Best Regards,
 Dylan




-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp