Re: [wg-c] Initial Numbers

> ....  I would 
> love to see the experimental data which show that increasing the 
> number of zones by four orders of magnitude will not result in an 
> unacceptably-great increase in DNS latency.

We have an existence proof:

The .com zone contains several million entries, and it encompasses a
significant, perhaps even dominant, portion of all the highly active
domain name targets and queries.

As such, we can look to the experience with the .com zone as
representative of what would happen with a root containing an equivalent
number of TLDs, scaled by a coefficient that approximates .com's share
of all DNS queries.

(In other words, if .com receives 25% of all queries, and if .com
contains 6 million names, then we can model the activity of .com as
representative of a root with 0.25*6,000,000 entries, i.e. a root with
about 1.5 million TLDs.)
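To make that arithmetic explicit, here it is as a few lines of Python.
Both inputs are the illustrative figures from the paragraph above, not
measured values:

    # Back-of-the-envelope model: scale the .com zone size by the
    # fraction of total DNS query traffic that .com handles, to estimate
    # the root-zone size whose load .com's performance already covers.
    com_names = 6_000_000     # entries in the .com zone (illustrative)
    com_query_share = 0.25    # .com's share of all DNS queries (illustrative)

    # A root of this size, taking the FULL query load, would see roughly
    # the same per-server work that .com already handles today.
    equivalent_root_size = int(com_query_share * com_names)
    print("equivalent root size: about %s TLDs"
          % format(equivalent_root_size, ","))   # -> 1,500,000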

As we well know, the .com zone is far from falling on its face in terms of
response time.

And Peter Deutsch recently ran an experiment in which he loaded the .com
zone as if it were the root zone to see whether standard DNS server
software would collapse.  (It didn't.)
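For anyone who wants to repeat a crude form of that measurement, here is
a minimal sketch in Python. It assumes the third-party dnspython
package, a test name server on 127.0.0.1 that has been loaded with the
large zone, and a made-up list of query names; none of these details
come from Peter's actual setup:

    import time
    import dns.exception
    import dns.resolver   # third-party "dnspython" package

    # Hypothetical test server: a name server on localhost that has
    # been loaded with the large (.com-sized) zone as its root zone.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["127.0.0.1"]

    # Made-up sample of names; a real test would draw these from the
    # zone actually loaded on the server.
    names = ["example%05d.test." % i for i in range(1, 1001)]

    start = time.monotonic()
    answered = 0
    for name in names:
        try:
            resolver.resolve(name, "A")
            answered += 1
        except dns.exception.DNSException:
            pass  # NXDOMAIN, timeouts, etc. still count toward elapsed time
    elapsed = time.monotonic() - start

    print("queries: %d  answered: %d  mean latency: %.1f ms"
          % (len(names), answered, 1000.0 * elapsed / len(names)))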

Since these two data points tend to indicate that a root with a
million-plus TLDs would be no problem, I find it hard to give much
credence to any of the current debate about a half dozen new TLDs
representing any degree of risk.

I might add that the competitive root systems that are outside of
the ICANN franchise are running quite happily and reliably with many
additional TLDs.  So far, in more than two years of use, I have not had
any availability failures due to my use of these non-ICANN root systems.

Indeed, I find that the net is far more unstable due to routing problems
and congestion at exchange points than due to DNS root issues.

		--karl--