Dear Mihail,
> My English bad, but i try answer.

I am relatively sure that my native language is the same as yours :)
and what you wrote in English is perfectly readable to me :)

> > First of all I do not understand why dithering is needed at all.
>
> All professional quality audio apps use dithering (from free
> software: jack, audacity, brutefir, etc).

You make the case that all professional apps use dithering. I do not
want to start a war, but I personally never consider a "herd" argument
a strong one. "Party lies appeared in the books and became the
truth..." :)

> > 1. Do you mean that 16-bit accuracy is not enough and a human ear
> > can hear the truncation/round-off artefacts? I have serious doubts
> > that this is true.
>
> Yes, I hear.
> 1. 16bit accuracy give 96 dB resolution. Human limit 120dB.
> 2. For mostly purpose 96 dB enough. But when we truncate audio to 16
> bit, we get new harmonics (distortion) with -60 dB level.
> 3. When i play on midi keyboard, i set gain 0.1-0.2 to prevent
> clipping. It narrows dynamic range. Without dithering i got only
> 40-46 dB dynamic range when play one note.

Points 1 and 2 are interesting and important. They tell me that 16-bit
sampling was chosen as a trade-off and is not good enough for high
quality audio. I was under the impression that audio CDs were
adequate. Apparently not; thanks for this clarification.

Your point 3, however, indicates a problem somewhere else, one which
cannot and should not be solved by dithering. My feeling is that it
should be solved by a non-linear amplification process.

> > Also, note if digital output is used, no dithering is needed. It
> > is needed only if the data goes directly to DAC!! Even in this
> > latter case, see my last comment.
>
> Dithering should by done always, when we truncate.

I disagree... My understanding of dithering (which is what is
described on the wiki page you sent) is that to transmit 4.8 you
transmit 5 about 80% of the time and 4 about 20% of the time, and on
average you get 4.8. This is correct by itself. It is useful in a
communication channel which can only transmit integer numbers while we
want to transmit a float: by repeating the message several times we
encode the fraction.

In reality this is not what happens. We want to transmit 4.8, but we
have only one shot at doing it. There is no choice: we either transmit
4 or 5. That is it, end of story; as soon as we have done it, the
original signal is destroyed, whether we dithered or not. This process
unfortunately introduces harmonics, and dithering simply changes which
harmonics are introduced. My argument is that only the endpoint audio
equipment has to decide how to handle the truncation artefacts.
Otherwise, every time a float-int-float conversion is made,
distortions are introduced.
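To make the 4.8 example concrete, here is a rough, untested sketch in
plain C (the value 4.8 and the rounding scheme are just for
illustration). Plain rounding sends 5 every single time, while rounding
after adding uniform dither sends 5 about 80% of the time and 4 about
20% of the time, so only the long-run average recovers 4.8; any one
sample is still just 4 or 5.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double x = 4.8;        /* value we would like to transmit */
    const int trials = 100000;
    long sum = 0;
    int i;

    for (i = 0; i < trials; i++) {
        /* uniform dither in roughly [-0.5, 0.5] added before rounding */
        double dither = (double) rand() / RAND_MAX - 0.5;
        sum += (long) floor(x + dither + 0.5);
    }

    printf("plain rounding:    %d (every time)\n", (int) floor(x + 0.5));
    printf("dithered average:  %f (over %d one-shot samples)\n",
           (double) sum / trials, trials);
    return 0;
}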
> > 4. I do not know how important it is, and my feeling is that
> > dithering should not be applied at all, but the code above does not
> > implement TPDF dithering. As I said, I do not think it is needed at
> > all, but if someone wants to implement TPDF then two calls to the
> > uniform random generator are needed per entry in the table. Only
> > one call is made by the code above. This leads to dithering which
> > is different from TPDF.
>
> Not sure what you mean ... See source code in other apps, audacity
> for example.

Well, let me try to clarify. TPDF needs a random number from a
triangular distribution. To make a random number from a triangular
distribution, two random numbers are generated and the difference
between them is taken (just like rolling two dice). Thus, every noise
point needs two calls to rand(). If "all apps" (the herd, in my
language; please do not think of me as a rude person) make only one
call, then each value taken alone still has a triangular distribution,
but the noise is correlated from sample to sample. I do not think that
this is the goal of dithering; the goal of dithering is to remove such
correlations. So, sorry, unless you can provide a reason why the noise
has to be correlated, then even though all the other apps do it this
way, fluidsynth has to be modified. (See the sketch in the P.S. below.)

> You don't understand why and when dithering uses.
> When we truncate without dithering we get dirty sound. Dither use
> always when we truncate. In fluidsynth case dithering uses only when
> we convert float to 16bits.

Yes, I do not understand why and when dithering is needed (not used).
I understood that fluidsynth uses it at the end of data processing,
just before pushing data to the audio hardware...

I would like to thank you for your comments. Hope to hear more from
you.

ZF
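P.S. To make the "two calls per noise point" argument concrete, here is
a rough, untested sketch in C. The names (DITHER_SIZE, dither_table,
the init functions) are only illustrative and this is not the actual
fluidsynth code; it only shows the difference between the two
approaches.

#include <stdio.h>
#include <stdlib.h>

#define DITHER_SIZE 48000

static float dither_table[DITHER_SIZE];

/* TPDF dither: every entry is the difference of two independent
   uniform random numbers (like rolling two dice), so consecutive
   entries do not share a random number and are uncorrelated. */
static void init_dither_tpdf(void)
{
    int i;
    for (i = 0; i < DITHER_SIZE; i++) {
        float a = (float) rand() / RAND_MAX;  /* first uniform call  */
        float b = (float) rand() / RAND_MAX;  /* second uniform call */
        dither_table[i] = a - b;              /* triangular in [-1, 1] */
    }
}

/* One rand() call per entry: each entry taken alone still has a
   triangular distribution, but entry i and entry i+1 are built from
   the same random number, so the noise is correlated from sample to
   sample. This is the behaviour I am objecting to. */
static void init_dither_one_call(void)
{
    float prev = (float) rand() / RAND_MAX;
    int i;
    for (i = 0; i < DITHER_SIZE; i++) {
        float cur = (float) rand() / RAND_MAX;
        dither_table[i] = cur - prev;
        prev = cur;
    }
}

int main(void)
{
    init_dither_tpdf();
    printf("TPDF entries:     %f %f %f\n",
           dither_table[0], dither_table[1], dither_table[2]);

    init_dither_one_call();
    printf("one-call entries: %f %f %f\n",
           dither_table[0], dither_table[1], dither_table[2]);
    return 0;
}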