Disclaimer: Personal Attacks Are Unseemly But The Habitual Spreading of
Misinformation Is Unseemlier.

Some of you may recall that last week Mueller stated that no one had
challenged the classification scheme in his study.  When confronted with
chapter and verse from the critique that questioned his classification
scheme, Mueller offered no explanation for the blatantly false statement.

This week Mueller writes:

"There was no "sampling methodology." The study explicitly considered
>and rejected the possibility of doing a statistical sample, because there
is no access to a database of cases that can be sampled.  If you want to
see a more complete explanation of this issue see
http://istweb.syr.edu/~mueller/FAQ.html The attempt to discredit the study
>by applying statistical criteria to a non-statistical study has been
hashed over quite a bit and thoroughly refuted."

So - Mueller says that there was no statistical sample and that this was a
non-statistical study.


So how does Mueller describe his paper in his first paragraph?:

"The paper employs a hybrid of legal and quantitative methods"



How does he describe his method?

"The following method was developed to identify the STATISTICAL parameters
of the problem [emphasis added]"



Did he create a sample?


"The researchers identified and collected as many cases of domain name
conflict involving trademark claims as possible. Cases were identified on
an entirely unselective basis by a research assistant who was not
responsible for their classification. A case was then included in or
excluded from the database whenever sufficient information about it could
be gathered to classify it properly."


What did he do once he applied his idiosyncratic classification system to
the database? Did he project the analysis of a sample onto what he implied
was the largest relevant universe?


"If these proportions are projected into the NSI figures about the total
number of trademark disputes it has handled, one could estimate that of the
2 million or so domain names registered, there are probably only about 257
cases (2,070 x 0.124) of infringement. This amounts to only 0.0128 percent
of all domains."



Did other people understand this to be a statistical critique?  Well, the
perhaps well-meaning but duped people at DNRC said to the U.S. Congress:

"A recently conducted study by Professor Milton Mueller of Syracuse
University finds that hundreds of domain name holders are losing their
domain names despite only 12 percentage of all domain name conflicts
involving trademark infringement. The vast majority of conflicts are what
Professor Mueller calls "string conflicts," the types of conflicts over
basic words and names in the examples discussed."


So they seemed to think it was a statistical analysis of a problem.


So what did Mueller say again?

"There was no "sampling methodology." The study explicitly considered
>and rejected the possibility of doing a statistical sample, because there
is no access to a database of cases that can be sampled.  If you want to
see a more complete explanation of this issue see
http://istweb.syr.edu/~mueller/FAQ.html The attempt to discredit the study
>by applying statistical criteria to a non-statistical study has been
hashed over quite a bit and thoroughly refuted"


WHY DO I KEEP BEATING A DEAD HORSE?

Partly for the same reason the JEFF WILLIAMS FAQs are re-published.  The
newcomers may inadvertently be fooled by the claims.  This isn't really a
personal attack.  It's an attempt to quell out-and-out misinformation.

More importantly, I "protest too much" because we can't come to a consensus
when someone is a propagandist bent on demonizing the so-called TM
interests, and when all these misstatements are laid end to end, you see
that he is merely a propagandist.

All the study does is give a false hint of respectability to prejudicial
thinking.  It's like the studies conducted in the late nineteenth and early
twentieth centuries that purported to "prove" that certain ethnic groups
were pre-disposed to crime.  Bad science for bad reasons.

The irony is that no one needed a statistical analysis to predict that a
policy would lead to injustice - the day I read NSI's first dispute
resolution policy I wrote in my firm's newsletter that the policy was unjust
because it gave a party the equivalent of a summary judgment to which it
might not have been entitled.  I didn't care if such a fact pattern never
arose - the policy was unjust.  When drafting policy, sometimes you don't
need statistics.  The "Syracuse" study is only used to discredit TM
interests - no one defends the NSI dispute resolution policy.

The ICANN process is an experiment in both privatization and
self-governance.  It is not easy for so many disparate interests to reach
consensus.  One thing that doesn't help is slander.  Mr. Walsh was
slandered here a couple of weeks ago.  He rebutted it effectively and that
was that.  But how effectively could Mr. Walsh continue to present his
interests if several people repeated that slander every day on these lists,
and produced fraudulent evidence to back up their slander?

That is what's going on with the TM interests.  Mueller presented a
so-called study.  If Carl Oppedahl had taken twenty cases that had reached
final decisions and shown that a substantial percentage resulted in an
injustice because of NSI's policy, it would have been anecdotal, but it
would have been relevant to this discussion, and useful in formulating
future policy (Carl's patents.com site, with his papers, is in effect such
a work).  Reasonable minds could differ as to some of Mr. Oppedahl's legal
conclusions, but that's what makes horse racing.

But Mueller is a professor and he used statistics so what he says must be
true.

IMHO, Mueller's work was the result of working backwards.  He started with
a conclusion - that TM interests are the demons - and set out to prove it.
He cites his academic credentials (when really pressed he calls it THE
SYRACUSE STUDY) to lend the study an air of academic neutrality (and has
referred to himself as an unbiased academic in his postings - the DNRC had
cited his purported neutrality when they nominated him to be on the WIPO
Panel of Experts).

Well, the past weeks have shown that he is not neutral and not very academic.

Another academic might have gone back, learned from the critique and done
an improved study.  Mueller instead:

(1) stated on his web site that I had paid for the critique (it fit in with
his worldview - it didn't matter that he had no factual basis for this
statement);

(2) stated on his web site that I had asked the critique's authors to
distort the truth (had he tapped my phones and read my mail he would have
known that this was not true - I am well aware that research is worthless
if you tell the researcher what to find);

(3) stated on this list that no one had challenged his classification
system and that they had "conceded" its correctness.  This was rebutted by
direct quotes from the critique; 

(4) stated that because there was one new DN/TM conflict for every 1,000
domain names registered instead of one for every 600, this was evidence of
a DECREASE in DN/TM disputes.  This statement is fallacious on its face;

and now he

(5) stated on this list that criticisms of his sample and his statistical
analysis are unfounded because he did not sample and that his was not a
statistical study.  Again, this can be refuted by direct quotation.


This is on top of a biased study.

One mistake is a mistake.  Two mistakes are sloppy.  Five outright
misstatements like this and I conclude that the author of the study only
deserves credibility with INEG.  Maybe Jeff Williams and Milton Mueller are
the same person.





