RE: [wg-c] Initial Numbers
> Behalf Of John Charles Broomfield
> Sent: Wednesday, December 15, 1999 11:25 AM
> > The flaw is that local NS is queried first. If this local NS calls a
> > local root-server, which many of them do, then the root-servers.net
> > system never sees the hit. Heer Huitema considered the theoretical
> > Internet, how it was designed, not how it was implemented. Because I
> > run my own
> Why do you not also run your own server for ".com"? Theoretically you
> can, in practice too. Problem is that ".com" changes on a daily basis
> (additions, changes, deletions, delegations), so it is not a good idea
> to run your own
You outlined precisely the reason that we don't run a local copy of the
COM zone. However, many corporations do exactly that, behind corporate
firewalls and in NAT'd space. The reason is that they can then delete
the entries that they don't want accessible from their interior (i.e.,
XXX sites and the like). This is the primary reason that the COM zone is
available for download.
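That kind of interior filtering can be sketched roughly as below. The
blocklist, file layout, and record lines are purely illustrative
assumptions, not real COM data; a real zone file has many more record
types and continuation forms than this handles.

```python
# Hypothetical sketch: strip unwanted delegations from a local copy of
# a zone file so they no longer resolve inside the firewall.
BLOCKLIST = {"badsite.com.", "xxx-example.com."}  # assumed names

def filter_zone(lines, blocklist):
    """Drop any zone-file line whose owner name is on the blocklist."""
    kept = []
    for line in lines:
        fields = line.split()
        owner = fields[0].lower() if fields else ""
        if owner not in blocklist:
            kept.append(line)
    return kept

zone = [
    "badsite.com.     IN NS ns1.example.net.",
    "goodsite.com.    IN NS ns1.example.org.",
]
print(filter_zone(zone, BLOCKLIST))
# -> only the goodsite.com. line survives
```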
That said, where your argument partially breaks down is that newly
registered names are not immediately implemented. IOW, they don't go
"live" immediately. Generally, it takes upwards of 24 hours for them to
become deployed and resolvable. I have a few domains that have been
parked, awaiting implementation, for months, until I have time to deal
with them. Ergo, a 48-hour polling interval should keep things
reasonably up to date. The only other case that is clear is the
renumbering scenario. 48 hours should be sufficient for that as well.
However, that depends on your comfort level.
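The 48-hour polling idea above amounts to a simple staleness check,
sketched below. The serial numbers and timestamps are made-up
illustrations; a real resolver would obtain the remote serial with an
SOA query and re-fetch the zone on a mismatch.

```python
POLL_INTERVAL = 48 * 3600  # the 48-hour polling window, in seconds

def needs_refresh(local_serial, remote_serial, last_poll, now):
    """Return True when the local root-zone copy should be re-fetched."""
    if now - last_poll < POLL_INTERVAL:
        return False  # still inside the polling window; do nothing
    return remote_serial != local_serial  # re-fetch only on a serial change

# Example: last polled 50 hours ago and the zone serial has moved on.
print(needs_refresh(1999121501, 1999121701, 0, 50 * 3600))  # True
```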
> Now, your system of keeping a local copy of the rootzone is fine and
> valid as long as the root-zone is fairly static (as it is today), but
> if the doors open, then keeping and running off a local copy becomes
> as pointless and silly as running off a local copy of a ".com" file.
> It would
I think that I have just outlined a sufficient argument to counter this
point. The additional points are:
1) The root zone file isn't very big, even with the new TLDs.
2) Even in the new TLD space, changes just don't occur that often. A
weekly update is more than sufficient (I actually don't even do it that
often <g>).
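As a rough sanity check on point 1, a back-of-envelope estimate; the
per-TLD record counts and line sizes here are assumptions for
illustration, not measurements of any real zone file.

```python
# Back-of-envelope estimate of root zone file size with 10,000 TLDs.
tlds = 10_000
lines_per_tld = 6 + 6   # ~6 NS records plus A-record glue (assumed)
bytes_per_line = 60     # average zone-file text line length (assumed)

size_bytes = tlds * lines_per_tld * bytes_per_line
print(f"{size_bytes / 1_000_000:.1f} MB")  # -> 7.2 MB
```

Even under these generous assumptions, the file stays in single-digit
megabytes, which supports the claim that it is cheap to mirror.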
> > > >Latency == irrelevant.
> > >
> > > I would love to see a substantive response
> > > rather than a simple dismissal.
> > Quick cure for DNS root-server latency is to run your own
> As long as the root-zone is small enough so that changes do not happen
> too often, but even so major change might be missed, making the
> "remedy" worse than the ailment. How long would it take you to realize
> it if (for example) com/net/org actually went off on to their own
> separate servers? Complaining that it was done without warning anyone
Actually, I would welcome it. I was never comfortable with
gtld-servers.net residing on the same machines:
1) It's in violation of RFC 2010.
2) It was the major source of the root-server outages in 1998.
> > > Wow. Fascinating proposal. Then there's no reason why we can't let
> > > every man, woman, child and fungal mycelium on the planet have
> > > their own TLD?
> > Actually, they wouldn't want it. I expect less than 10,000 TLDs, at
> > full-tilt boogie. This is controllable via the TLD registration
> > process.
> As always, the question: Why do you want to have
> instead of just "email@yourname"? Answers:
> If all of the above are wrong, why does ".com" have 8 million while
> "ml.org" doesn't?
I wasn't intending to get religious with you. The issue of control isn't
about limiting markets; it's about doing the job right, with the
resources available. Basically, I am advocating a very stringent set of
policies and requirements, specifically to promote reasonable stability
and trustworthiness, as well as reliability (I am sure that you've read
my paper by now), in addition to satisfying market demand. Note that
even NSI doesn't meet the requirements that I set forth. This is because
I feel that the registry business needs to be taken to the next level.
If we are going to do TLDs then let's do them properly, this time, with
proper supporting
> (...talking about load on rootservers)
> > The load can be re-directed locally and often is.
> Depends how you define "often" of course, but if you mean that a large
> % of the internet runs their own root-servers, you are patently wrong
> (or give your cold figures for your assertion; then again if you are
> right, why do you care what goes on with the legacy root-servers, just
> get in touch with everyone who runs their own thing, and the legacy
> root-servers become
The quick answer is that every large corporate zone must do this for
sanity's sake. ISPs that resolve for large business customers must also do
this. The fact that they are a dead-copy of the legacy root zone file is