Re: [wg-b] Reality checks [the grateful dead(hits)]



Kathy:
Even you seem to have understated the significance of an exclusion, especially
when it involves substrings. Consider:

KathrynKL@aol.com wrote:

> you would like the trademark owner to show
> only that someone is using "their word" and leave the burden of proof to the
> domain name holder to prove their innocence, e.g., not infringing or diluting.

In fact, with exclusions, there is not even an opportunity to prove innocence.
The domain name just isn't there.

> in commercial settings.  Needless to say, it creates tremendous problems for
> free speech by making someone the monitor upfront for what is a legitimate
> communicative message using the word "oreo" or "porsche" in it.

No one monitors anything with exclusions. Someone has to devise -- in advance --
an entirely mechanical algorithm that determines what can be registered and what
can't be. For example, [name] + s.[tld], common misspellings of [name], [name]
with the number 1, 2 or 3 after it, and so on.
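
To make this concrete, here is a rough sketch, in Python, of what such an
algorithm might look like. The rules below (pluralization, dropped or doubled
letters standing in for "common misspellings", trailing digits) are my own
illustrative assumptions, not anyone's actual proposal:

    def generate_exclusions(name, tlds=("com", "net", "org")):
        """Expand a mark into the set of domains an exclusion would block."""
        variants = {name, name + "s"}                    # [name] and [name] + s
        for i in range(len(name)):                       # crude "misspellings"
            variants.add(name[:i] + name[i + 1:])        # one letter dropped
            variants.add(name[:i] + name[i] + name[i:])  # one letter doubled
        for digit in "123":                              # [name] + 1, 2 or 3
            variants.add(name + digit)
        return {v + "." + tld for v in variants for tld in tlds}

Note that nothing in there looks at goods, services, or context. The program
sees character strings and nothing else.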

This is why the whole idea of exclusions is bad. The only feasible case that can
be made for them is when they exclude only [famousname].[anytld]. But in fact
most cybersquatting -- by about a 100-to-1 ratio -- involves [famousname] as a
substring, misspelling, and so on. Exclusions, as one person (a trademark lawyer)
pointed out on this list a long time ago, are under-inclusive and over-inclusive
at the same time. Any mechanical algorithm is bound to block *many* names that
should be legal, yet it will also fail to exclude many names that could in fact
be confusing if used in certain ways.
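
Feed the sketch above the mark "oreo" and both failure modes appear at once
(the domains are hypothetical, chosen only to illustrate):

    excluded = generate_exclusions("oreo")

    # Over-inclusive: the dropped-letter rule swallows an ordinary English
    # word that has nothing to do with cookies.
    print("ore.com" in excluded)          # True -- a mining site is blocked

    # Under-inclusive: variants a determined registrant would actually use
    # sail straight through.
    print("0reo.com" in excluded)         # False -- zero-for-o substitution
    print("oreocookies.com" in excluded)  # False -- the mark as a substring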

The basic fallacy here is the idea that trademark protection can be reduced to a
computer program that matches character strings. It can't. We learned this with
four years of experience with the NSI DRP. It was unfair to many innocent
registrants and unsatisfying to the trademark owners. Why are we trying to repeat
this failure on a much larger scale?

Confusion and dilution are subjective phenomena that pertain to the relationships
among marks, human perception, and goods and services. Character string matching
cannot capture that, under any circumstances. To enact it on a global basis
across all TLDs is nothing short of a crime -- against trademark concepts as well
as freedom of expression.

An experiment.

I may be wrong, but I think Mr. Hartman of Nabisco doesn't yet fully understand
that when we talk about exclusions, we are talking about a mechanical
string-matching algorithm, not about common-sense judgments based on how a string
"looks" to a human. Therefore, let him propose such a mechanical algorithm for
determining how the mark "oreo" should be protected. Remember, you are dealing
with computers, so it has to be based on rules, not on individual judgments.

If he does this, it will be easy to come up with hundreds of examples of
over-inclusion, and probably just as many examples of uses that could still
infringe. But that is only half of the problem. The other half is that the set of
algorithms he would come up with for protecting "oreo" would almost certainly not
work with any other mark. Try to apply the same rules to "exxon" or "ford" and
the results will be totally different.
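
A hypothetical test makes the point. Extend the rule to catch the mark as a
substring -- which, as noted above, is where most of the squatting actually
happens -- and compare marks (the strings below are ordinary words and
surnames, picked only to show the collisions):

    innocent = ["oxford", "stanford", "rutherford", "hartford",
                "fordham", "moreorless", "theodoreowens"]

    for mark in ("exxon", "ford", "oreo"):
        print(mark, "->", [s for s in innocent if mark in s])

    # exxon -> []
    # ford  -> ['oxford', 'stanford', 'rutherford', 'hartford', 'fordham']
    # oreo  -> ['moreorless', 'theodoreowens']

A substring rule that is merely aggressive for "exxon" is catastrophic for
"ford". One mark, one algorithm.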

So where does that leave us? Do we come up with a new algorithm for every famous
mark? That puts us right back where we ought to remain: with case-specific
evaluation under the UDRP.

The existence of a .fame TLD helps this process by giving WIPO an arena in which
to anoint certain marks with famosity. That's all this working group can do.