There's a /lot/ more information available in a MIDI performance, so the potential to do interesting things is greater. Flash the screen whenever the kick drum hits, represent notes on screen as 3D objects with frequency controlling position, filter cutoff controlling lighting, and so on.
This is simply not true though! If you have an audio feed that is kick only, there is far more information available by analysing the audio than from a simple MIDI note-on with velocity and duration. If the kick sound is spectrally analysed, the light can ride the amplitude and frequency content over the course of the note instead of just switching on when the drum starts.
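As a rough sketch of what "riding the amplitude and frequency content" might look like, here is a per-frame analysis loop in Python with numpy. The sample rate, frame size, the synthetic kick, and the brightness/hue mappings are all assumptions for illustration, not any particular system's API:

```python
import numpy as np

SR = 44100     # sample rate (assumed)
FRAME = 512    # analysis hop size (assumed)

def analyse_frame(frame, sr=SR):
    """Return (rms, spectral centroid in Hz) for one block of kick-only audio."""
    rms = np.sqrt(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    total = np.sum(spectrum)
    centroid = np.sum(freqs * spectrum) / total if total > 0 else 0.0
    return rms, centroid

# Synthetic kick: a sine whose pitch falls from ~170 Hz to 50 Hz while decaying.
t = np.arange(SR // 4) / SR
pitch = 120.0 * np.exp(-8.0 * t) + 50.0
kick = np.sin(2.0 * np.pi * np.cumsum(pitch) / SR) * np.exp(-12.0 * t)

envelope = []
for i in range(0, len(kick) - FRAME, FRAME):
    rms, centroid = analyse_frame(kick[i:i + FRAME])
    brightness = min(1.0, rms * 2.0)   # the light rides the amplitude...
    hue = centroid / 1000.0            # ...and the frequency content
    envelope.append((brightness, hue))
```

The point is just that `envelope` gives you a value per frame for the whole duration of the hit, where a MIDI note-on gives you one value at the start.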
However, the above does require a *lot* more CPU to break apart a composite audio channel, or a lot more hardware and CPU to work on multi-track input.
As I said earlier though, there is no reason not to enable both. Or even to generate MIDI messages from audio analysis.
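Generating MIDI from audio analysis can be as simple as an envelope threshold. A minimal sketch, assuming a kick-only feed again; the threshold, frame size, and note number are illustrative, and the raw status bytes would normally be handed to whatever MIDI output port you use:

```python
import numpy as np

THRESHOLD = 0.3   # onset level (assumed; would be tuned per source)
NOTE = 36         # General MIDI bass drum note number

def audio_to_midi(samples, frame=512):
    """Emit raw MIDI note-on bytes when the audio envelope crosses a threshold."""
    messages = []
    armed = True
    for i in range(0, len(samples) - frame, frame):
        rms = np.sqrt(np.mean(samples[i:i + frame] ** 2))
        if armed and rms > THRESHOLD:
            velocity = min(127, int(rms * 127))
            messages.append(bytes([0x90, NOTE, velocity]))  # note-on, channel 1
            armed = False
        elif rms < THRESHOLD / 2:
            armed = True  # re-arm once the level falls away
    return messages

# Two synthetic kick hits separated by silence should yield two note-ons.
sr = 44100
t = np.arange(sr // 8) / sr
burst = np.sin(2.0 * np.pi * 60.0 * t) * np.exp(-20.0 * t)
gap = np.zeros(sr // 8)
messages = audio_to_midi(np.concatenate([burst, gap, burst, gap]))
```

The hysteresis (re-arming at half the threshold) is there so one decaying hit doesn't retrigger.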
Iain
