
Re: [wg-c] Initial Numbers




> The flaw is that local NS is queried first. If this local NS calls a local
> root-server, which many of them do, then the root-servers.net system never
> sees the hit. Heer Huitema considered the theoretical Internet, how it was
> designed, not how it was implemented. Because I run my own root servers,
> none of my systems ever do lookups on the root-servers.net system.
> Configuration is strictly a local factor, from the edge. I could list a
> number of large ISPs that also do this, precisely because of load and
> reliability issues. Consider another related issue, an organization, using
> the root-server.net system exclusively, would be completely non-functional
> if their uplink failed, even their intranet would cease to function. Both
> reliable and secure systems have very strong motivation to run local roots.

Why do you not also run your own server for ".com"? Theoretically you can,
and in practice too. The problem is that ".com" changes on a daily basis
(additions, changes, deletions, delegations), so running your own ".com" is
a bad idea: from the moment you do so, it is already (by definition)
non-authoritative. For your own ".com" server to be a meaningful copy of the
legacy ".com" (just as your own root is a meaningful copy of the legacy root
plus any additions you care to put in), the only adequate approach would be
daily updates to pick up whatever has changed. In fact, if everyone decided
to do daily updates, the thousands of DNS resolvers around the world would
impose a rather heavy load upon the ".com" servers through that continuous
demand. It is actually much wiser, each time a .com domain needs to be
resolved, to check the freshness of your cached data, (maybe) compare it
against an authoritative source (for example a server for ".com"), and only
then MAYBE re-load the record if needed.
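The freshness check described above is just TTL-driven caching. A minimal
sketch in Python (the `DnsCache` class and the upstream lookup callback are
hypothetical illustrations, not any real resolver's API):

```python
import time

class DnsCache:
    """Toy TTL-based cache: answer from local data while it is still
    fresh, and only go back to the authoritative source when it expires."""

    def __init__(self, authoritative_lookup):
        # authoritative_lookup(name) -> (record, ttl_seconds); stands in
        # for a query to an authoritative server (e.g. a ".com" server).
        self._lookup = authoritative_lookup
        self._store = {}  # name -> (record, expiry_timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        cached = self._store.get(name)
        if cached and cached[1] > now:
            return cached[0]               # still fresh: no upstream traffic
        record, ttl = self._lookup(name)   # stale or missing: re-fetch
        self._store[name] = (record, now + ttl)
        return record
```

The point the sketch makes: repeated resolutions inside the TTL generate no
load on the authoritative servers, which is exactly what a blanket "re-copy
the whole zone daily" scheme throws away.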
Now, your system of keeping a local copy of the root zone is fine and valid
as long as the root zone stays fairly static (as it is today), but if the
doors open, then keeping and running off a local copy becomes as pointless
and silly as running off a local copy of a ".com" file: it would be outdated
EVERY DAY.
In fact, your suggestion is exactly the same as going back to the hosts.txt
method. DNS was invented to SOLVE that. You are suggesting a step backwards.
Scalability is the keyword here.
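For reference, "running your own root" amounts to a configuration sketch
like the following in BIND (the zone file name is illustrative); the point
above is that this hand-maintained file then goes stale the moment the real
root zone changes:

```
// named.conf fragment: serve a private copy of the root zone from a
// local file instead of ever asking the root-servers.net system.
zone "." {
        type master;
        file "db.root-local";   // locally maintained copy of the root zone
};
```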

> FYI: In 1998, the root-servers.net system suffered almost 10% downtime. Yet,
> many nets continued to operate with 99.99% uptime. How do you think that was
> done?

FYI, this is FUD. As an ISP we did not have anywhere near 10% downtime with
respect to resolving domains worldwide in 1998, and we ran directly off the
legacy roots. There were loads of problems with NSI and with correct
interfacing between them and the root zone, and maybe updates were not as
timely as they should have been, but those are administrative problems, not
technical ones. There were a couple of major outages, but I *think* that
TOTAL *FULL* outage of ALL root-servers simultaneously was not even 2 days
(and that would have been pretty bad anyway), certainly nowhere near the 30
days you suggest. A month of downtime in 1998 for all of the Internet?
HAHAHAHA!

> > >Latency == irrelevant.
> >
> > I would love to see a substantive response
> > rather than a simple dismissal.
> 
> Quick cure for DNS root-server latency is to run your own root-server.

As long as the root zone is small enough that changes do not happen too
often. Even so, a major change might be missed, making the "remedy" worse
than the ailment. How long would it take you to notice if (for example)
com/net/org actually went off onto their own separate servers? Complaining
that it was done without warning anyone is just the same as complaining that
AFNIC changed name-servers without warning anyone but the maintainer of the
root zone...

> > Wow.  Fascinating proposal.  Then there's no
> > reason why we can't level every
> > man, woman, child and fungal mycelium on the
> > planet have their own TLD?
> 
> Actually, they wouldn't want it. I expect less than 10,000 TLDs, at
> full-tilt boogie. This is controllable via the TLD registration process.

As always, the question: Why do you want to have "email@yourname.something"
instead of just "email@yourname"? Answers:
It's shorter
It doesn't give anyone else publicity but me
It's easier to remember
It's much sexier
If all of the above are wrong, why does ".com" have 8 million delegations,
while "ml.org" doesn't?

(...talking about load on rootservers)
> The load can be re-directed locally and often is.

Depends how you define "often", of course, but if you mean that a large % of
the Internet runs their own root-servers, you are patently wrong (or give
your cold figures to back the assertion; then again, if you are right, why
do you care what goes on with the legacy root-servers? Just get in touch
with everyone who runs their own thing, and the legacy root-servers become
irrelevant). Just more FUD (the throwing of which you are happy to accuse
everyone else of, just so that your own sling may be missed...)

Yours, John Broomfield.