RE: [wg-c] voting on TLDs
Actually, there is an extant limit of 64 characters for an FQDN in the Win2K DNS
resolvers. I just ran into it. If anything, this makes it more imperative to
have TLDs, so that we can have four more variable characters (there are
times that I *really* hate Microsoft incompetence).
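For comparison, the DNS specification itself (RFC 1035) allows 63 octets per label and 255 octets for a whole name, so a resolver that caps an FQDN at 64 characters is well below the protocol limit. A minimal sketch of such a length check (the 64-character cap models the resolver behavior described above; the function name is my own):

```python
# Sketch: check an FQDN against the RFC 1035 limits and a
# 64-character resolver cap like the one described above.

MAX_LABEL = 63      # RFC 1035: each label <= 63 octets
MAX_NAME = 255      # RFC 1035: whole name <= 255 octets
RESOLVER_CAP = 64   # the Win2K resolver limit described above

def check_fqdn(name: str) -> list[str]:
    """Return a list of limit violations for a dotted FQDN."""
    problems = []
    if len(name) > MAX_NAME:
        problems.append("exceeds 255-octet name limit")
    if any(len(label) > MAX_LABEL for label in name.split(".")):
        problems.append("has a label over 63 octets")
    if len(name) > RESOLVER_CAP:
        problems.append("exceeds the 64-character resolver cap")
    return problems

print(check_fqdn("www.example.com"))          # -> []
print(check_fqdn("a" * 60 + ".example.com"))  # legal per RFC 1035, but over 64 chars
```

A name like the second one is perfectly valid DNS yet fails on such a resolver, which is the point: the bottleneck is the implementation, not the protocol.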
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of Karl
Sent: Monday, March 06, 2000 1:30 PM
Subject: Re: [wg-c] voting on TLDs
> >I might note that I've been associated with an actual test in which we
> >established a root with several million TLDs. The world did not end, the
> >seas did not boil, the sun still rose in the east, and DNS still worked.
> Karl, my recollection of that test was that it was quite limited
It was a fully functional server with several million TLDs. You can call
it limited, but I call it a very pragmatic test.
As for the access paths and query rates, we can derive an existence proof
from the access paths and query rates on the .com zone. The .com servers
handle a zone that is several million entries wide and, once we remove the
impact of the ccTLDs and other TLDs, that gives us a direct correlation to
root zone behaviour. An easy formulation is to say that if the number of
queries that pass through .com represents X% of the entire number of
queries passing through all TLDs, then we can use the .com experience as
reflective of a root zone with a number of TLDs equivalent to X% of the
SLDs in .com. Given that X% is probably 50% or above, we can make an
extremely safe extrapolation (using the far more conservative number of
10%) that a root with 10% of .com's 10 million+ SLDs, i.e. a root with
1,000,000 TLDs, will behave with the same degree of success as today's .com.
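The extrapolation above can be sketched numerically. To be clear, the inputs here are the post's own assumptions (10 million+ SLDs in .com, a query share of at least 50%, a conservative 10% figure), not measured data:

```python
# Sketch of the extrapolation argument above. The inputs are the
# post's stated assumptions, not measurements.

com_slds = 10_000_000        # ".com has 10 million+ SLDs"
com_query_share = 0.50       # X%: .com's share of all TLD queries (assumed >= 50%)
conservative_share = 0.10    # the far more conservative figure used instead

# If .com handles X% of all TLD queries, a root zone with X% as many
# delegations as .com should see a comparable load.
supported_tlds = int(com_slds * conservative_share)
print(supported_tlds)        # -> 1000000

# 1000 new TLDs would sit a thousand-fold below that figure.
safety_margin = supported_tlds / 1000
print(safety_margin)         # -> 1000.0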
I note that several of the .com servers still occupy the same
computers as the root servers, hence putting those computers under a
traffic and query load equivalent to a root size far in excess of 1,000,000 TLDs.
I note further that 1,000,000 TLDs is a number vastly larger than 6 or 10.
So to build in a multi-thousand-fold safety margin, we can readily
accommodate 1000 new TLDs, not the mere 6 to 10.
I also point you to the research on DNS network loading being done by
And one more thing - if the 512 byte limit on DNS UDP packets were
modernized we could significantly increase the span of DNS servers per
zone from the current bottleneck of 13.
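That 13-server ceiling comes from fitting the full root NS set, with glue A records, into a single 512-byte UDP response. A back-of-the-envelope sketch of the arithmetic (the per-record byte counts are rough estimates assuming DNS name compression, not exact wire-format sizes):

```python
# Rough estimate of how many root name servers fit in a 512-byte
# UDP response. Per-record sizes assume DNS name compression and
# are approximations, not exact wire-format counts.

UDP_LIMIT = 512
HEADER = 12     # fixed DNS header
QUESTION = 5    # root name (1 octet) + qtype + qclass
FULL_TAIL = 16  # the "root-servers.net" suffix, carried in full once
NS_RR = 15      # compressed NS record: owner ptr + fixed fields + "x" + ptr
GLUE_A = 16     # compressed A record: name ptr + fixed fields + 4-octet address

def response_size(n_servers: int) -> int:
    """Approximate priming-response size for n servers, one A record each."""
    return HEADER + QUESTION + FULL_TAIL + n_servers * (NS_RR + GLUE_A)

for n in (13, 14, 15, 16):
    size = response_size(n)
    print(n, size, "fits" if size <= UDP_LIMIT else "truncated")
```

This crude estimate puts the ceiling in the low-to-mid teens; real responses carry a few more octets than this sketch assumes, which is consistent with the traditional choice of 13. Raising the UDP payload limit (as EDNS0 later did) moves the ceiling entirely.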
> > 2) If crashes can be caused by errors at the TLD level then it can
> > just as well happen from the same cause at deeper levels.
> Deeper levels affect fewer nodes. That's why United Airlines works so hard
> to make sure that each first flight of the day gets out on-time.
But as we see, with millions of badly run deeper zones, we do not have
failures occurring. You are just imagining a problem that, simply stated,
does not exist in real life.
I challenge you to find me a DNS name that will cause my Windows,
Macintosh, Unix, or Linux machines to crash when I resolve it. And I
note, you were the one who said that the machines would "crash".