On Friday 20 June 2003 05:23 pm, Andy Sy wrote:

*  increasing in power every 6 months.  The final nail in the traditional
*  soundcard's coffin would be a very responsive, fine-grained scheduling
*  kernel; this is because scheduling latency is a lot more critical
*  with sound.  Once that happens, an offboard sound chip operating
*  independently of the CPU brings nothing to the table anymore.
*  Actually, even with today's non realtime kernels, that is virtually
*  the case already!!

A fine-grained scheduling kernel is a major technological target if audio 
apps on Linux are to prosper. Who knows, it may even let us replace the 
SMPTE/MIDI time code/ADAT standards with an open-source synchronization 
standard and algorithms. Top Hollywood film composers like Mark Isham and 
John Williams, experimental composers like Wendy Carlos, and electronic 
meisters like Brian Eno may someday do synthesis techniques like realtime 
physical modeling and wavetable synthesis on a Linux platform. 

E-mu synthesizers can do Z-plane filtering (morphing between multi-pole 
filter configurations) in hardware. This technique lets you take a sound 
from any source and morph it in realtime under any modulation source. You 
can play a violin patch, then slowly alter the tone generators to model and 
morph the sound into an electric guitar patch, applying realtime effects 
like flanging or distortion at will (about the only way to replicate this 
sonically is to logarithmically crossfade between different sound samples). 
E-mu uses a lot of hardware DSPs (digital signal processors), filters, and 
oscillators to do this. 
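To make the parenthetical concrete, here is a minimal sketch of crossfading 
between two sample buffers in software. This is not E-mu's Z-plane algorithm; 
it is the common equal-power (constant-loudness) crossfade, which is what 
people usually mean by a "logarithmic" fade as opposed to a plain linear one. 
The function name and buffer layout are my own for illustration:

```python
import math

def equal_power_crossfade(src, dst):
    """Blend two equal-length sample buffers at constant perceived loudness.

    Position t runs 0..1 across the buffer; cos/sin gains satisfy
    a^2 + b^2 = 1, so total power stays constant during the morph
    (a linear fade would dip in loudness at the midpoint).
    """
    assert len(src) == len(dst)
    n = len(src)
    out = []
    for i in range(n):
        t = i / max(n - 1, 1)
        a = math.cos(t * math.pi / 2)   # gain of the fading-out source
        b = math.sin(t * math.pi / 2)   # gain of the fading-in source
        out.append(a * src[i] + b * dst[i])
    return out
```

Run this per-block on the violin and guitar sample streams and you get a 
rough software stand-in for the hardware morph, minus the filter modeling.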

Korg Trinity synthesizer workstations, on the other hand, combine 
sample playback with physical modeling. They recreate the sound environment 
using mathematical models, much the way surround-sound algorithms are 
implemented, while simultaneously playing recorded digital "samples" of 
actual instruments. They also use a lot of DSPs to do the job.
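Korg's engine is proprietary, but the flavor of physical modeling is easy to 
sketch. The classic textbook example is the Karplus-Strong plucked string: a 
burst of noise circulates through a delay line whose length sets the pitch, 
and a simple averaging filter makes the tone decay like a real string. This 
is an assumption-free illustration of the technique, not Korg's code:

```python
import random

def karplus_strong(freq, duration, sample_rate=44100, seed=0):
    """Physical model of a plucked string (Karplus-Strong).

    A noise burst (the "pluck") circulates through a delay line of
    one pitch period; averaging adjacent samples acts as a low-pass
    filter, so high frequencies die first and the note decays.
    """
    rng = random.Random(seed)
    period = int(sample_rate / freq)                       # delay length = pitch
    buf = [rng.uniform(-1.0, 1.0) for _ in range(period)]  # the pluck
    out = []
    for _ in range(int(sample_rate * duration)):
        first = buf.pop(0)
        buf.append(0.5 * (first + buf[0]))  # averaging = gentle low-pass decay
        out.append(first)
    return out
```

Nothing here is sampled; the "string" exists only as a difference equation, 
which is exactly the point of physical modeling.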

Can a fine-grained scheduling kernel replace a board full of DSPs?

There are a lot of exciting trends in electronic music that could happen on 
Linux if this kernel gets through, but I doubt it will happen within 5 
years.
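To see why scheduling latency is the crux, consider the arithmetic a 
softsynth lives by. The buffer sizes below are typical illustrative values, 
not anything from a specific driver:

```python
def buffer_deadline_ms(frames, sample_rate=44100):
    """Time the kernel has to wake the softsynth before the audio
    buffer underruns: one buffer's worth of frames at the given rate."""
    return 1000.0 * frames / sample_rate

# A softsynth running 64-frame buffers at 44.1 kHz must be rescheduled
# within about 1.45 ms, every single period, or the listener hears a click.
for frames in (64, 128, 256):
    print(frames, "frames ->", round(buffer_deadline_ms(frames), 2), "ms")
```

A dedicated DSP meets that deadline by construction; a general-purpose kernel 
has to be engineered to meet it while doing everything else.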

*  This trend is a Good Thing(tm) because you can now parcel out
*  your CPU power for whatever purpose you want.  There is no synth
*  chip sitting idle when you're not playing sound anymore.  In the
*  thick of doing other stuff, a 'near realtime' scheduling kernel
*  will do its best to ensure that, as in the days of offboard
*  sound-chips, your softsynth will never hiccup as long as you don't
*  ask it to chew on more data than it can actually handle.  And
*  of course the most wonderful aspect of 'virtual soundcards/synths'
*  is the unlimited flexibility.  Ah, the bounties that software and
*  mathematics (FFT) bring to music...

If the kernel can replace dedicated DSPs, then we won't need a sound card, and 
perhaps digital synthesis can really take off in Linux. The question is: can 
the kernel do it?


optimus
-- 
You mean you didn't *know* she was off making lots of little phone companies?


--
Philippine Linux Users' Group (PLUG) Mailing List
[EMAIL PROTECTED] (#PLUG @ irc.free.net.ph)
Official Website: http://plug.linux.org.ph
Searchable Archives: http://marc.free.net.ph
.
To leave, go to http://lists.q-linux.com/mailman/listinfo/plug
.
Are you a Linux newbie? To join the newbie list, go to
http://lists.q-linux.com/mailman/listinfo/ph-linux-newbie
