
RE: [wg-c] Initial Numbers



> Behalf Of Kevin J. Connolly
> Sent: Wednesday, December 15, 1999 7:59 AM
>
> "Roeland M.J. Meyer" <rmeyer@mhsc.com> 12/15/99 12:21AM wrote
> (responding to my response to Milton Mueller's post):
>
> >> >Technically, adding a new TLD to the root means adding a few
> >> >lines of text with the character string and pointers to two name
> >> >servers. There are no technical issues whatsoever as long as the
> >> >number stays below one million, which it certainly will do.
> >>
> >> I know that this proposition does not enjoy universal support.  It
> >> overlooks the fact that the frequency of root queries is almost
> >> certainly proportional to the number of zones in the root.
> >
> >False.
>
> Well, Christian Huitema presented a paper at ISOC-NY last Spring
> on this very subject.  The more TLDs there are, the greater the
> likelihood that one will need to query the root in order to resolve
> a domain name.  This is (according to Dr. Huitema, who, I believe,
> knows a few things more than the average bear about the DNS) as
> fundamental as 1+1 = 2.  So, what's the flaw?

The flaw is that the local NS is queried first. If that local NS uses a local
root server, which many of them do, then the root-servers.net system never
sees the hit. Heer Huitema considered the theoretical Internet, as it was
designed, not as it was implemented. Because I run my own root servers,
none of my systems ever do lookups against the root-servers.net system.
Configuration is strictly a local matter, at the edge. I could list a
number of large ISPs that do the same, precisely because of load and
reliability issues. Consider another related issue: an organization using
the root-servers.net system exclusively would be completely non-functional
if its uplink failed; even its intranet would cease to function. Both
reliable and secure systems have very strong motivation to run local roots.
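To make the resolution-order point concrete, here is a toy Python sketch of
it (the domain names, class names, and counts are illustrative, not
measurements): a stub asks its local NS, and the local NS only escalates
cache misses to whatever root set it has been configured with, so a site
pointed at a local root contributes no traffic to the official roots.

```python
# Toy model of DNS resolution order.  A local nameserver escalates
# cache misses to the root set it is configured with -- and only that
# one.  All names and numbers here are illustrative.

class RootSet:
    """A set of root servers that counts how often it is queried."""
    def __init__(self, name):
        self.name = name
        self.queries = 0

    def resolve(self, domain):
        self.queries += 1
        # Hand back a fake referral for the domain's TLD.
        return f"referral-for-{domain.rsplit('.', 1)[-1]}"

class LocalNameserver:
    """Local NS with a cache; cache misses go to its root set."""
    def __init__(self, roots):
        self.roots = roots
        self.cache = {}

    def lookup(self, domain):
        if domain not in self.cache:
            self.cache[domain] = self.roots.resolve(domain)
        return self.cache[domain]

official_roots = RootSet("root-servers.net")
local_roots = RootSet("local-root")   # a privately run root mirror

# Site A points its NS at the official roots; site B at its local root.
site_a = LocalNameserver(official_roots)
site_b = LocalNameserver(local_roots)

for name in ["www.mhsc.com", "ftp.mhsc.com", "www.isoc.org"]:
    site_a.lookup(name)
    site_b.lookup(name)

# Only site A's misses ever reach the official roots.
print(official_roots.queries)   # -> 3
print(local_roots.queries)      # -> 3
```

Site B's three lookups land entirely on its own root mirror, which is the
whole point: the official servers never see that load.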

FYI: in 1998, the root-servers.net system suffered almost 10% downtime. Yet
many nets continued to operate with 99.99% uptime. How do you think that was
done?
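As a back-of-the-envelope check (the 10% figure is the claim made above,
taken at face value; the local-root figure is an assumption for
illustration), running a local root with fallback to the official servers
fails only when both are down at once:

```python
# Availability arithmetic for independent root sources.  The 10%
# downtime figure is the post's claim; the local figure is assumed.

official = 0.90    # root-servers.net availability (claimed, 1998)
local = 0.9999     # a well-run local root mirror (illustrative)

# Depending on root-servers.net alone caps resolution of uncached
# names at that system's availability.
alone = official

# A local root with fallback to the official roots fails only when
# BOTH are down (assuming independent failures):
combined = 1 - (1 - local) * (1 - official)

print(f"official alone:   {alone:.4f}")
print(f"local + fallback: {combined:.6f}")   # -> 0.999990
```

Under those assumed numbers the combined availability is 99.999%, which is
how a net can ride out a bad year for the official roots.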

> >>I would
> >> love to see the experimental data which show that increasing the
> >> number of zones by four orders of magnitude will not result in an
> >> unacceptably-great increase in DNS latency.  In fact, DNS latency
> >> is a subject I do not recall *anyone* on this list (other
> >> than myself)
> >> ever having raised, notwithstanding many people bandy about the
> >> absence of technical barriers to increasing the number of zones
> >> in the root.
> >
> >Latency == irrelevant.
>
> I would love to see a substantive response
> rather than a simple dismissal.

The quick cure for DNS root-server latency is to run your own root server.

> (1)  You must have me confused with someone else.
> (2)  I'm a clueless real estate lawyer from
> New York who's still learning how to interface
> serial devices to his PC.

Yeah ... right....

> Is there maybe an equivocation here between
> "sub-net" and "subordinate domain?"  And

Often a sub-domain is delegated along with the sub-net.

> >There are no practical limits on the number of zones
> >that the DNS can support.
>
> Wow.  Fascinating proposal.  Then there's no
> reason why we can't let every
> man, woman, child and fungal mycelium on the
> planet have their own TLD?

Actually, they wouldn't want it. I expect less than 10,000 TLDs, at
full-tilt boogie. This is controllable via the TLD registration process.

> The mind boggles. Or does it make a difference
> if the tree is totally flat or structured in such a
> way that the load on the root is parsed and
> spread-out?

Uh... because it's a network and not a tree?

> >The only things that affect latency are the throughput of
> >the root server
> >and the type of network it is connected to.

> This seems to suggest that latency is not
> related to the number of queries to the root
> server.

It is, but one can have many more than one root-server.

> completely incomprehensible.  If a root server
> is capable of responding to, say, 1000
> queries/sec and it receives 5000 queries during
> a one-second period, then either 80% of the
> queries don't get responded to (in which case
> the querying  process has to recognize that it's
> not going to get an answer and then re-submit
> the query) or those 4/5 queries will be enqueued.
> In either case, the actual latency of the system
> is proportional to the load on the server once
> the rated throughput of the server is exceeded.
> Thus I reiterate that latency is a function of
> load as well as throughput and network capacity.

The load can be re-directed locally and often is.