On Friday 24 February 2012, at 19.05.47, Andy Farnell 
<padawa...@obiwannabe.co.uk> wrote:
> > The problem with "plug unit generators languages" for me is that they
> > privilege the process (network of unit generators) over the content
> 
> Some really interesting thoughts here Ross. At what level of
> granularity does the trade-off of control, flexibility and
> efficiency reach its sweet spot?

For ChipSound, I decided on subsample accuracy - which basically means you can 
script your own waveforms, hardsync effects, AM, FM etc without explicit 
support from the engine.

Of course, this is an extreme case, as ChipSound started out as an attempt at 
doing something seriously useful in less than 2k lines of code - but the point 
is, I don't think there is any way around subsample-accurate control if you 
want to do anything very interesting without relying on a massive set of unit 
generators. Even single-sample accuracy is totally insufficient if you want to 
do chromatic granular synthesis, hardsync and the like.
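
To illustrate what subsample accuracy buys you - this is just a minimal sketch 
of the idea, not actual ChipSound code - a phase reset (sync) event carries a 
fractional frame timestamp, so the reset can land between output samples and 
the resulting sync frequency isn't quantized to the sample rate:

#include <math.h>

typedef struct {
	double phase;	/* 0..1 */
	double dphase;	/* frequency / sample rate */
} Osc;

typedef struct {
	double when;	/* fractional frame position of the phase reset */
} SyncEvent;

/* Render 'frames' samples of a naive saw; apply any reset events
 * (assumed sorted by time) at subsample precision. */
static void render(Osc *o, float *out, int frames,
		const SyncEvent *ev, int nev)
{
	int e = 0;
	for(int i = 0; i < frames; ++i)
	{
		while(e < nev && ev[e].when <= (double)i)
		{
			/* The reset happened (i - when) frames ago, so the
			 * new period starts with that fractional head start. */
			o->phase = ((double)i - ev[e].when) * o->dphase;
			++e;
		}
		out[i] = (float)(2.0 * o->phase - 1.0);
		o->phase += o->dphase;
		o->phase -= floor(o->phase);
	}
}

Grain starts, waveform switches and the like work the same way; the script just 
posts events with fractional timestamps.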

<tired, headache-induced tech rant>

There is indeed a bit of complexity involved (timestamped events mixed with VM 
execution + buffer splitting), but the advantage is that one can pretty much 
eliminate dedicated LFOs, envelope generators, granular-ish synthesis methods 
and all that from the engine - and still have the functionality right there! 
Just script whatever you need, when you need it.
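
The buffer splitting part is roughly this (again, a sketch of the general idea 
rather than the actual implementation): the mixer renders each block in slices, 
stopping at every event timestamp to apply the scripted control change, so 
changes land exactly where the script asked without any per-sample control unit 
generators running:

#include <stddef.h>

typedef struct {
	unsigned when;	/* frame offset within the current block */
	float value;	/* e.g. new amplitude from a scripted "envelope" */
} Event;

typedef struct {
	float amp;
} Voice;

static void render_slice(Voice *v, float *out, unsigned frames)
{
	for(unsigned i = 0; i < frames; ++i)
		out[i] = v->amp;	/* stand-in for the real voice code */
}

/* Render one block, split at event boundaries. Events are assumed
 * sorted by 'when'. */
static void process_block(Voice *v, float *out, unsigned frames,
		const Event *ev, size_t nev)
{
	unsigned pos = 0;
	for(size_t e = 0; e < nev && ev[e].when < frames; ++e)
	{
		render_slice(v, out + pos, ev[e].when - pos);
		pos = ev[e].when;
		v->amp = ev[e].value;	/* apply the scripted change */
	}
	render_slice(v, out + pos, frames - pos);
}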

Of course, high-density granular synthesis, FM and the like are never going to 
run as fast this way as with dedicated, optimized unit generators, but then 
again, there is LLVM, if one really wants to push it... :-) Or just throw in a 
peephole optimizer + "multi-instructions" for starters; 50-100% speedup right 
there. (Instruction dispatch overhead is the plague of non-JIT VMs, making it 
extremely worthwhile to keep the instruction count down...)
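
By "multi-instructions" I mean fused opcodes along these lines (illustrative 
only, not the actual ChipSound instruction set): a peephole pass rewrites 
common pairs like MUL+ADD into a single opcode, so the inner loop pays for one 
dispatch instead of two:

enum {
	OP_MUL,		/* acc *= a */
	OP_ADD,		/* acc += a */
	OP_MULADD,	/* acc = acc * a + b  (fused "multi-instruction") */
	OP_END
};

typedef struct {
	int op;
	float a, b;
} Insn;

static float run(const Insn *code, float acc)
{
	while(1)
	{
		switch(code->op)	/* the expensive part: one dispatch per insn */
		{
		  case OP_MUL:		acc *= code->a; break;
		  case OP_ADD:		acc += code->a; break;
		  case OP_MULADD:	acc = acc * code->a + code->b; break;
		  case OP_END:		return acc;
		}
		++code;
	}
}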

However, I'm doing sort of ok already, with some 1k voices running "average" 
VM code on a single core of a 2.4 GHz Core 2 CPU. (90% of my target customers 
have at least dual-core CPUs, and the game already runs ok with sfx + music on 
a single-core P4, even though there's still lots of physics and graphics 
optimization left to do.)

I haven't even bothered with the render-to-waveform feature yet; that could 
cut voice count to a fraction and/or improve quality drastically. It just 
feels like... cheating! "All pure realtime synthesis using only basic 
geometric waveforms!" sounds much cooler. Not that 95% of gamers would have 
any idea what that means anyway. :-D
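
(The render-to-waveform idea is basically just running an expensive scripted 
voice offline into a buffer once, and then playing the result back as a plain 
wavetable - roughly like this sketch; not an actual ChipSound API:)

#include <stdlib.h>

typedef struct {
	float *data;
	unsigned length;
} Wavetable;

/* 'render' is whatever expensive scripted voice produces the sound.
 * Pay the synthesis cost once, up front; playback is then just a
 * (possibly resampled) table read per voice. */
static Wavetable *render_to_waveform(
		void (*render)(float *buf, unsigned frames),
		unsigned frames)
{
	Wavetable *w = malloc(sizeof(Wavetable));
	if(!w)
		return NULL;
	w->data = malloc(frames * sizeof(float));
	if(!w->data)
	{
		free(w);
		return NULL;
	}
	render(w->data, frames);
	w->length = frames;
	return w;
}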

</tech rant>


-- 
//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://consulting.olofson.net          http://olofsonarcade.com   |
'---------------------------------------------------------------------'
