mlsstl;581692 Wrote: 
> You seem awfully anxious to close the door on an issue you brought up. 
> 
> Science deals far more in probabilities than absolutes. If DBTs were
> worthless, let's just toss all of the medicines we have today and head
> back to the 15th century. 
> 
> That's the problem with many audiophiles. They hear the tidbit that
> confirms their belief and then immediately discount the greater bulk
> that doesn't. 
> 
> I've even heard DBTs discounted because they are "stressful" because
> the listener is in the terrible position of making a choice. Meanwhile,
> the far greater and well documented peer pressure of making a choice in
> a sighted listening test is blithely discounted as nothing. 
> 
> In other words, the one-in-a-million odds are willingly accepted but
> the world is searched high and low for the most obscure of reasons to
> ignore the 99% probability.
> 
> If that is "case closed" in your book, you're welcome to it.

The problem is not with DBT itself; the problem is that it may not be
the most accurate way to test audio perception, or any kind of
perception that isn't a simple yes-or-no situation:

1. Differences in perceived sound are NOT simple yes-or-no results like
in drug testing. Remember, in drug testing there can be quite a high
percentage of "wrong" results, but these can be ignored in the final
go/no-go decision on a drug's efficacy. But if only 1% of the
population can hear something, does that mean it doesn't exist? In drug
testing, that 1% would be dismissed as a statistically insignificant
anomaly. In audio testing the difference could really be there, and the
fact that only a minority of listeners hears it doesn't mean it doesn't
exist.

This type of testing isn't what we really need in audio, since we
aren't necessarily testing measurable differences, but perceived ones
(see the simulation after point 2, which shows how pooled results can
bury a small minority of genuine detectors).

2. Listening differences are by nature cognitive differences. The
listener's brain interprets the sound and changes it. This has nothing
to do with the "preconceptions" DBT is often said to eliminate; it has
to do with the nature of perception itself. Not all people looking at a
painting or listening to music see and hear the same thing. A trained
artist will see things in a painting I don't, and a trained musician
will hear things others won't. That doesn't mean they are imagining the
differences; it means their brains have been trained to process and
recognize existing detail that untrained brains don't notice (i.e.,
don't process during cognition).

This is why trained listeners can often hear things that untrained
listeners can't. It has nothing to do with the nature of the material
being tested or reproduced, and everything to do with the cognitive
part of the perception process. We can train our brains to perceive
certain phenomena; if our brains are untrained, we don't hear them,
even though objectively they are there.
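
To make point 1 concrete, here's a minimal simulation. The numbers are
pure assumptions picked for illustration: 1% genuine "detectors", a 90%
hit rate for them, 16 ABX trials per listener. Pooled across the whole
group the result looks like chance, even though a few listeners
individually score far above it:

import random

random.seed(0)

TRIALS = 16           # ABX trials per listener
LISTENERS = 1000      # size of the test population
DETECTOR_RATE = 0.01  # assumption: 1% can genuinely hear the difference
HIT_RATE = 0.9        # assumption: how often a detector answers correctly

def run_listener(can_hear):
    # Non-detectors are effectively guessing (50/50)
    p = HIT_RATE if can_hear else 0.5
    return sum(random.random() < p for _ in range(TRIALS))

scores = [run_listener(random.random() < DETECTOR_RATE)
          for _ in range(LISTENERS)]

# Pooled view: about 0.504 correct (expected value), which is
# indistinguishable from guessing, so the group-level verdict
# would be "no audible difference".
print("pooled correct rate:", sum(scores) / (TRIALS * LISTENERS))

# Per-listener view: 14/16 or better has a probability of roughly
# 0.002 under pure guessing, so those few listeners are individually
# significant even though they vanish in the pooled average.
print("listeners at 14/16 or better:", sum(s >= 14 for s in scores))

The exact numbers don't matter; the point is that averaging over a
population is the wrong tool if the question is whether anyone can hear
the difference.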


Another problem with DBT: if I am used to a certain type of sound
reproduction (call it "A"), it becomes the benchmark by which I judge a
sound "B". Sometimes this takes only a few listens. But if I then get
used to sound "B", it becomes my benchmark. So it is difficult to
determine by DBT which sounds better, or even different. Again, this
has to do with the nature of hearing/cognition, not with any
"prejudices" of the subject being tested.


An example: a friend of mine can't hear the difference between his mp3
files and lossless reproduction. That doesn't mean the differences
aren't there, or that he is physically unable to hear them; his brain
just hasn't been trained to pick up on the cues that would let him hear
the difference. If he listened only to lossless material for a period
of time, and the audible differences between lossy and lossless were
pointed out to him, he would start to notice them.

BTW, this has also been shown in research: random subjects were often
unable to differentiate between compressed and uncompressed files, but
"audiophiles" often could.

The supposed objectivity of DBT itself isn't suited to account for
these situations, because what we actually need to test is what is
heard, i.e. what is perceived in the brain after cognition. And I don't
know of a good method of testing that.

My personal solution when testing audio equipment is fairly simple. I
listen to setup A until I'm used to it, then switch to setup B. When
I've gotten used to B, I listen to A again. That's the only comparison
method I've found that seems to work (a rough sketch of the schedule
follows below).
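
For what it's worth, the routine is easy to write down as a schedule.
Below is just a sketch of that habit in Python; the two weeks of
acclimation per phase is my own assumption, not a measured figure:

from datetime import date, timedelta

ACCLIMATION_DAYS = 14       # assumption: how long "getting used to it" takes
PHASES = ["A", "B", "A"]    # end back on A so the switch is heard both ways

start = date.today()
for setup in PHASES:
    end = start + timedelta(days=ACCLIMATION_DAYS)
    print(f"{start} -> {end}: listen only to setup {setup}; note impressions")
    start = end

The deliberately long phases are the point: instead of fighting the
acclimation effect described above, this comparison waits it out.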


-- 
firedog

Tranquil PC fanless WHS server running SqueezeServer; SB Touch slaved to
Empirical Audio Pace Car; KRK Ergo, MF V DAC3, MF X-150 amp, Devore
Gibbon Super 8 Speakers; Mirage MS-12 sub; Dual 506 + Ortofon 20
(occasional use); sometimes use PC with M-Audio 192 as digital source.
SB Boom in second room. Arcam CD82 which I don't use anymore, even
though it's a very good player.