On 7/8/06, James Courtier-Dutton <[EMAIL PROTECTED]> wrote:
Is there a standard way of converting a 24-bit sample to 16-bit? I ask
because I think that in different scenarios one would want a different
result:

1) Scale the 24-bit value to 16-bit by simple multiplication by a
fraction.

2) Bit-shift the 24-bit value so that the "most useful 16 bits" are
returned to the user.

The problem here is deciding which are the "most useful 16 bits". I have
one application where just using the lower 16 bits of the 24-bit value is
ideal, because the input signals are extremely low.
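For concreteness, a minimal C sketch (not from the original mail) of the
approaches described above, assuming each 24-bit sample is already
sign-extended into an int32_t; the function names are illustrative:

#include <stdint.h>

/* 1) Scale by a fraction: a Q16 fixed-point gain, where unity gain
 * (0x10000) reduces to the plain >>8 truncation below. No clip guard,
 * so gains above unity can overflow the 16-bit result. */
static inline int16_t scale16(int32_t s24, int32_t gain_q16)
{
    return (int16_t)(((int64_t)s24 * gain_q16) >> (16 + 8));
}

/* 2) Bit-shift: keep the top 16 of the 24 bits. Preserves full-scale
 * signals but discards the low 8 bits entirely. */
static inline int16_t top16(int32_t s24)
{
    return (int16_t)(s24 >> 8);
}

/* The low-signal case from the mail: keep the bottom 16 bits. Relative
 * to top16() this is a fixed +48 dB gain, and it wraps for anything
 * louder than 1/256 of full scale, so it only suits very quiet inputs. */
static inline int16_t low16(int32_t s24)
{
    return (int16_t)(s24 & 0xFFFF);
}

With unity gain the scaling path degenerates to the same >>8 truncation
as top16(), so the real choice is between headroom (top bits) and
sensitivity (low bits).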
Isn't this what dithering algorithms do for you?

--
Brett McCoy: Programmer by Day, Guitarist by Night
http://www.alhazred.com
http://www.cassandrasyndrome.com
http://www.revelmoon.com
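Dithering addresses a related but distinct problem: whichever 16 bits are
kept, plain truncation correlates the quantization error with the signal.
A minimal sketch of TPDF dither applied before the >>8 truncation, again
assuming sign-extended int32_t samples (rand() stands in here for a
proper noise source):

#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: add triangular (TPDF) noise spanning roughly
 * +/- 1 LSB of the 16-bit target (+/-255 in 24-bit units) before
 * truncating, so the quantization error is decorrelated from the
 * signal instead of appearing as harmonic distortion. */
static int16_t dither16(int32_t s24)
{
    /* Sum of two uniform 0..255 draws, centered: triangular PDF. */
    int32_t tpdf = (rand() & 0xFF) + (rand() & 0xFF) - 255;
    int32_t s = s24 + tpdf;

    /* Saturate so the added noise cannot wrap at full scale. */
    if (s >  0x7FFFFF) s =  0x7FFFFF;
    if (s < -0x800000) s = -0x800000;

    return (int16_t)(s >> 8);
}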
