From: owner-wg-c-digest@dnso.org (WG-C-DIGEST)
To: wg-c-digest@dnso.org
Subject: WG-C-DIGEST V1 #29
Reply-To:
Sender: owner-wg-c-digest@dnso.org
Errors-To: owner-wg-c-digest@dnso.org
Precedence: bulk

WG-C-DIGEST        Wednesday, March 8 2000        Volume 01 : Number 029

----------------------------------------------------------------------

Date: Mon, 06 Mar 2000 08:45:28 -0800
From: Dave Crocker
Subject: Re: [wg-c] voting on TLDs

Always interesting to see such selective memories:

At 05:29 AM 3/6/2000 -0800, William X. Walsh wrote:
>On 06-Mar-2000 Dave Crocker wrote:
> > At 09:39 PM 3/5/2000 -0800, Mark C. Langston wrote:
> >>That's odd.  Until someone went and got Paul Vixie to state that
> >>adding a large number of new TLDs would pose no technical problem, the
>
>No, it's not.  But a nice way to try and dismiss it without refuting
>it.  He is spot on, Dave.

The concern for stability has been present from the start of discussions
about gTLD expansion, roughly five years ago.  It has covered:

1.  Technical and operational impact on the root

2.  Administrative and operational capabilities of registries

3.  Disruption due to legal distraction from the trademark community.

A significant problem coming from any one of these three directions will
render the DNS unstable.  The record of listing and discussing these
three categories of concern is massive and public.

The portion of Paul Vixie's opinion that bears on the first concern,
technical issues, offers an entirely reasonable basis for believing that
the purely technical limit is quite high.  Other senior technical
commentators focus quite heavily on conservative operations practice when
scaling a service.  They conclude that one hundred, or a few hundred,
names is a reasonable near-term limit.

> > Indeed, please DO look at NSI.  Their history ain't nearly as wonderful as
> > you seem to believe.
>
>I think that is exactly what he meant, the net has not
>destabilized.  There are

Except for ignoring NSI's very long learning curve, which included messing
up individual registrations randomly and seriously, corrupting the whois
database, and corrupting the root, I suppose you are right...

>currently over 240 registries operating, and with various types of management
>models and with a lot of variance in their operating structure.  There have
>been problems that have resulted in entire TLDs not being able to be resolved
>for several hours.  But the net has not destabilized.  Indeed, they were minor

Service outages of "several hours" for end users do not constitute an
instability?  What a curious view of the term.

d/

=-=-=-=-=
Dave Crocker
Brandenburg Consulting
Tel: +1.408.246.8253, Fax: +1.408.273.6464
675 Spruce Drive, Sunnyvale, CA 94086 USA

------------------------------

Date: Mon, 06 Mar 2000 11:44:06 -0500
From: Kendall Dawson
Subject: [wg-c] CONSENSUS CALL -- selecting the gTLDs in the initial rollout

Jon,

I support the proposed compromise position for the initial rollout.  I
really would like to see gTLDs in my lifetime.

I believe that more thought needs to be given to how the system will
operate after the initial rollout - but we really need to get the ball
rolling here.  I am willing to compromise on this issue in order to
enable gTLDs to become a reality.

Kendall

------------------------------
Langston" Subject: Re: [wg-c] voting on TLDs On Mon, Mar 06, 2000 at 08:45:28AM -0800, Dave Crocker wrote: > > The concern for stability has been present from the start of discussions > about gTLD expansion, roughly five years ago. It has covered: > > 1. Technical and operational impact on the root > > 2. Administrative and operational capabilities of registries > > 3. Disruption due to legal distraction from the trademark community. > > A significant problem coming from any one of these 3 different directions > will render the DNS unstable. The record of listing and discussing these 3 > categories of concern is massive and public. > > The portion of Paul Vixie's opinion about the first concern, technical > issues, attends to an entirely reasonable basis for believing that the > purely technical limit to the right is quite high. Other senior technical > commentators focus quite heavily on conservative operations practise when > scaling a service. They conclude that one, or a few, hundred names is a > reasonable near-term limit. From: http://www.dnso.org/wgroups/wg-c/Arc01/msg00191.html for context, and http://www.dnso.org/wgroups/wg-c/Arc01/msg00192.html , I quote: "A million names under "." isn't fundamentally harder to write code or operate computers for than are a million names under "COM"." This was Paul's response to Eric Brunner's direct question on the matter of adding names and stability. That eliminates concern #1. Concern #3 is never going to go away, because the TM/IP community will always feel infringed upon. It's what they do for a living. As long as character strings exist, the boogeyman of infringement within those strings will be seen. Nothing can be done to eliminate #3, unless the lawyers themselves are eliminated. The merits of that approach are best left for a different conversation. :) This concern is a red herring. Which leaves us with concern #2. NSI has provided AMPLE evidence of this behavior, and I will continue to assert that the Internet has NOT come crashing down around our ears. They have at times been the very model of gross incompetence, and have done things many would not even think to test in a controlled scenario. Yet, somehow, the net continues to exist. I therefore insist that Concern #2 has been tested. It has been tested in the most severe case -- the case in which the registry is a single point of failure. We are proposing adding additional registries and additional TLDs. If the Internet didn't curl up and die with NSI mismanaging the only registry in existence, I have great confidence that the Internet will continue to route around problems at the technical level, and with new registries and open competition, the customer base will do the same. Yes, the SRS has had problems, and will continue to have them. However, as much as you want to insist that there may be some catostrophic problem lurking around the corner, and you want to make hand-waving proposals, I will continue to point my finger at running code. You're familiar with the "running code v. proposals" rule of thumb, I'm sure. > > > > > Indeed, please DO look at NSI. Their history ain't nearly as wonderful as > > > you seem to believe. > > > >I think that is exactly what he meant, the net has not > >destabilized. There are > > Except for ignoring NSI's very long learning curve, which included messing > up individual registrations randomly and seriously, corrupting the whois > data base, and corrupting the root, I suppose you are right... > Hm. Nope. Net's still working. 
See above for expansion on this counter to your statement.

> >currently over 240 registries operating, and with various types of management
> >models and with a lot of variance in their operating structure.  There have
> >been problems that have resulted in entire TLDs not being able to be resolved
> >for several hours.  But the net has not destabilized.  Indeed, they were minor
>
> Service outages of "several hours" for end users do not constitute an
> instability?
>

When was the period of time during which the Internet was unusable?  I
must've been logged off that day.  Perhaps they should coordinate with the
yearly cleaning effort that occurs every April 1.

If you want to go after those responsible for service outages of several
hours, then I recommend you petition ICANN to institute mandatory minimum
QoS levels for all ISPs worldwide.  End users do not rely on the roots.
Any ISP that can't notice and correct a DNS problem of such duration will
be routed around by its customer base in both the short and long term.
And this works all the way up the line to the registry level.

Mmmmmm, bottom-up processes.

--
Mark C. Langston
mark@bitshift.org
Systems & Network Admin
San Jose, CA

------------------------------

Date: Mon, 06 Mar 2000 12:51:50 -0500
From: Kendall Dawson
Subject: re: [wg-c] Re: vote?

Jon wrote:
--------------
That said, I really want to know what others think.  Not many people have
spoken to this, and those who have, have been roughly divided: Bob, Kent,
Bill and Mr. McCarthy have indicated that it would not be appropriate to
call this a "wg-c report"; Milton, Rod and William (and I) have indicated
the other view.  I'm happy to go along with the general sense of the
group on this one, whatever it is.  So let me know.
--------------

Hi,

I know that I am new to WG-C, and I have seen this topic come up a couple
of times in the short while that I have been a member.  My personal
feelings on the matter:

I believe that Jon is correct in stating that we should term this a
"wg-c report".  This is the whole point of this Working Group!  People
who joined wg-c did so on the premise of establishing a recommendation to
ICANN regarding gTLDs.  Every opinion which is made publicly to the list
is counted and becomes the "property" of the public forum known as
"wg-c".  Therefore, any commentary made to the list contributes to the
final recommendation in some way.

If someone makes a statement which is *obviously* false - the group
should respond to that post immediately.  And, if group members do not -
they cannot say that their voice was not heard.  The report should, in my
opinion, be officially labeled "wg-c report".  After all, this report is
one of the goals that the WG is working towards.

I feel that the co-chair is perfectly within his rights to present a
consensus of the posts made by the group as an "official report",
assuming that there has not been a massive outcry from the group telling
him otherwise.  In any sort of democratic forum there will be a variety
of voices and a myriad of opinions.  But the final decision is always
based on a consensus of the group as a whole.  If enough people disagree
with the decision - they are free to enlist others who have the same
opinions to become a majority and change the group consensus.

Just my $.02 worth...

Kendall

------------------------------
Date: Mon, 06 Mar 2000 10:23:08 -0800
From: Dave Crocker
Subject: Re: [wg-c] voting on TLDs

At 09:23 AM 3/6/2000 -0800, Mark C. Langston wrote:
>From: http://www.dnso.org/wgroups/wg-c/Arc01/msg00191.html for context, and
>http://www.dnso.org/wgroups/wg-c/Arc01/msg00192.html , I quote:
>
>"A million names under "." isn't fundamentally harder to write code or
>operate computers for than are a million names under "COM"."
>
>This was Paul's response to Eric Brunner's direct question on the
>matter of adding names and stability.  That eliminates concern #1.

It will greatly help discussion if careful attention is paid to
counter-points, rather than offering facile efforts to treat the
counter-points trivially and incorrectly:

1.  I did not say that Paul did not say what you quoted.

2.  I DID say that his opinion is one among several from senior DNS
technical experts, and that others focused on OTHER aspects of the
operations issue.

3.  I also pointed out the error in historical analysis, in which it was
stated that Paul's comment "resolved" the topic.

4.  Paul has very considerable experience in a number of areas, including
software development and computer operations.  However, there are some
relevant areas that he has not worked in, and large-scale customer
servicing -- fundamental to a registry -- is one of them.  Hence the
opinions of the full range of technical experts are significant.

By the way, the fact that Paul made a statement about theoretical limits
does not mean that he made a statement about the PROCESS of scaling up.
In other words, there is nothing in his statement that says that that
limit should be attempted all at once.

>Concern #3 is never going to go away, because the TM/IP community will
>always feel infringed upon.  It's what they do for a living.  As long
>as character strings exist, the boogeyman of infringement within those
>strings will be seen.  Nothing can be done to eliminate #3, unless the

"Going away" is different from "dealing with".  Apparently you want to
treat the fact that it won't go away as an excuse for ignoring it.
That's not a very good idea.

>Which leaves us with concern #2.  NSI has provided AMPLE evidence of this
>behavior, and I will continue to assert that the Internet has NOT come
>crashing down around our ears.  They have at times been the very model
>of gross incompetence, and have done things many would not even think to
>test in a controlled scenario.
>
>Yet, somehow, the net continues to exist.

This suggests a) a lack of appreciation for the impact of individual
major problems, and b) a lack of appreciation for the scaling effect when
there are many more potential sources of such problems.  That is, the
aggregate statistical probability of a major impact rises when there is a
large number of equally novice registries operating on a large scale (as
any gTLD does).

d/

ps.  This sufficiently covers this issue, so I'll refrain from responding
to further efforts, in this thread, to treat the scaling question
trivially.

=-=-=-=-=
Dave Crocker
Brandenburg Consulting
Tel: +1.408.246.8253, Fax: +1.408.273.6464
675 Spruce Drive, Sunnyvale, CA 94086 USA

------------------------------

Date: Mon, 6 Mar 2000 10:38:24 -0800
From: "Christopher Ambler"
Subject: Re: [wg-c] voting on TLDs

Translation: "did not!"

Christopher

----- Original Message -----
From: "Dave Crocker"
To: "Mark C. Langston"
Cc: "wg-c"
Sent: Sunday, March 05, 2000 10:25 PM
Subject: Re: [wg-c] voting on TLDs

> At 09:39 PM 3/5/2000 -0800, Mark C. Langston wrote:
>
> >That's odd.  Until someone went and got Paul Vixie to state that
> >adding a large number of new TLDs would pose no technical problem, the
> >arguments here were about instability rooted in technical issues.
> >Now that that argument is untenable, it's shifted to the unprovable
> >claim that adding new registries will break things.
>
> Your assessment of history is thoroughly inaccurate.
>
> d/
>
> =-=-=-=-=
> Dave Crocker
> Brandenburg Consulting
> Tel: +1.408.246.8253, Fax: +1.408.273.6464
> 675 Spruce Drive, Sunnyvale, CA 94086 USA

------------------------------

Date: Mon, 6 Mar 2000 11:20:01 -0800 (PST)
From: Patrick Greenwell
Subject: Re: [wg-c] voting on TLDs

On Mon, 6 Mar 2000, Kent Crispin wrote:

> On Sun, Mar 05, 2000 at 09:39:57PM -0800, Mark C. Langston wrote:
> [...]
> > You say that we have no evidence that a flood of new, inexperienced
> > registry admins will provide stable service.  Well, we have no
> > evidence to the contrary.
>
> Actually, we do -- so much so that it is ludicrous to claim otherwise.

Care to substantiate your assertion?

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Patrick Greenwell
                         Earth is a single point of failure.
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

------------------------------

Date: Mon, 6 Mar 2000 11:45:05 -0800 (PST)
From: Karl Auerbach
Subject: Re: [wg-c] voting on TLDs

> >I have yet to see any technical or policy basis to have any belief
> >whatsoever that additional TLDs, even thousands of them, will have any
> >impact on the "stability of the Internet".

> You might not like the analyses or concerns that have been raised, but they
> have been raised repeatedly.  You have seen them and you have responded to
> them.

There have been *NO* - read it again - *NO* technical indications that
several thousand to several hundred thousand new TLDs will have any
technical impact on the DNS.  There is nothing to read, nothing to
respond to.

I might note that I've been associated with an actual test in which we
established a root with several million TLDs.  The world did not end, the
seas did not boil, the sun still rose in the east, and DNS still worked.
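[For concreteness, a minimal Python sketch of how a bulk test of this
kind can be set up -- the zone size, names, and addresses below are
invented for illustration and are not details of the actual test:]

    # Generate a synthetic root zone with millions of TLD delegations,
    # then load it into a scratch name server (e.g. a test BIND
    # instance) and measure load time, memory, and query behavior.
    N_TLDS = 3_000_000  # assumed; "several million" in the post

    with open("big-root.zone", "w") as zone:
        zone.write(". 86400 IN SOA ns.test. admin.test. 1 7200 900 604800 86400\n")
        zone.write(". 86400 IN NS ns.test.\n")
        zone.write("ns.test. 86400 IN A 192.0.2.1\n")
        for i in range(N_TLDS):
            # Every synthetic TLD is delegated to the same server,
            # which exercises zone size rather than realistic traffic.
            zone.write(f"tld{i:07d}. 86400 IN NS ns.test.\n")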
> Administrative instability is just as bad -- actually much worse

If new TLD operators are flakes, then they won't get any business.  And
if they don't get any business they won't get any queries.  End of story.

> -- as crashing machines.

My, now DNS can crash machines?  I have to characterize that as super
FUD.

Two points:

1) If that happens, blame the poor implementors of DNS software; don't
stop new TLDs.

2) If crashes can be caused by errors at the TLD level, then they can
just as well happen from the same cause at deeper levels.  And as we
know, those deeper levels are often run by rank amateurs.  And you know
what?  DNS isn't crashing, despite lots of horrible administrators out
there at the SLD and deeper levels.

(Indeed, back in the olden days of the early 1990's, folks at one company
set up host names that pushed every DNS limit just to see which hosts and
resolvers would have trouble.  Few did.  And there's been ten years of
debugging since then.)

--karl--

------------------------------

Date: Mon, 6 Mar 2000 11:53:03 -0800 (PST)
From: Karl Auerbach
Subject: Re: [wg-c] CONSENSUS CALL -- selecting the gTLDs in the initial rollout

I'm opposed to the "consensus call" for the following reasons:

- I do not believe that this procedure would cease after the initial
6-10; rather, I fear that it would simply be ossified into being the one
and only procedure for all future times.

- I am opposed to anything that places ICANN or the DNSO in the role of a
policeman over whether a TLD adheres to a limited-use charter.  And I
perceive such a policeman role in that the proposed compromise gives
ICANN authority to accept or reject a new TLD based on its proposed use.

--karl--

------------------------------

Date: Mon, 6 Mar 2000 11:59:17 -0800
From: Kent Crispin
Subject: Re: [wg-c] voting on TLDs

On Mon, Mar 06, 2000 at 11:20:01AM -0800, Patrick Greenwell wrote:
> On Mon, 6 Mar 2000, Kent Crispin wrote:
>
> > On Sun, Mar 05, 2000 at 09:39:57PM -0800, Mark C. Langston wrote:
> > [...]
> > > You say that we have no evidence that a flood of new, inexperienced
> > > registry admins will provide stable service.  Well, we have no
> > > evidence to the contrary.
> >
> > Actually, we do -- so much so that it is ludicrous to claim otherwise.
>
> Care to substantiate your assertion?

If you would think about it for just 3 seconds, you wouldn't ask that
question.  Here's the assertion:

"A flood of new, inexperienced registry admins/operators can cause
instabilities in the registry service."

This is a special case of the more general assertion:

"A flood of new, inexperienced XXX admins/operators can cause
instabilities in the XXX service."

Clearly, the important component here is the flood of new, inexperienced
admins/operators -- there is nothing unique about the DNS that makes it
immune to screwups by new people.  Quite the contrary, in fact -- new
people frequently screw up DNS, because DNS is a lot more complicated
than it seems at first.  So any case where a flood of new people has
caused instability is evidence in support of this statement.  If you
can't think of such cases you simply aren't being honest.

In fact, we are really dealing with the more general proposition:

"Sudden changes in the XXX service can cause instabilities in the XXX
service."

This is generally and obviously true, and examples abound.  There is no
magic in the DNS or the registry services that makes them immune to such
problems -- we have ample evidence of that in the problems that the new
registrars had coming on line, and in the problems that have plagued NSI.
You seem conveniently to have forgotten your own efforts at getting
around the problems that NSI caused in the whois system.

In short, there is abundant evidence that change in the registry system
causes instabilities, evidence right before your very nose -- you have
been deeply involved in fighting those instabilities.

--
Kent Crispin                               "Do good, and you'll be
kent@songbird.com                           lonesome." -- Mark Twain

------------------------------

Date: Mon, 06 Mar 2000 12:50:28 -0800
From: Dave Crocker
Subject: Re: [wg-c] voting on TLDs

At 11:45 AM 3/6/2000 -0800, Karl Auerbach wrote:
>I might note that I've been associated with an actual test in which we
>established a root with several million TLDs.  The world did not end, the
>seas did not boil, the sun still rose in the east, and DNS still worked.
Karl, my recollection of that test was that it was quite limited, and
that it did not attempt to emulate, for example, the network access
patterns, system access patterns, system update patterns, or the like
that constitute the system-wide behavior of the global DNS root.

As such, the test provided one useful-but-constrained data point with
respect to the question of DNS ability to scale.  That is, it tested
basic functionality of some core software.

Knowing that the software in a single root server can handle a large
database is, indeed, nice.  It also does not provide a definitive basis
for asserting instant scalability of the global domain name root SERVICE.
As an experienced networking person, you know that a networking SERVICE
comprises the collection of components and their interactions, not just
the behavior of one node under limited circumstances.

> > Administrative instability is just as bad -- actually much worse
>
>If new TLD operators are flakes, then they won't get any business.  And if
>they don't get any business they won't get any queries.  End of story.

End of thinking, from some perspectives, perhaps, but much too simplistic
a view of critical infrastructure service, from other perspectives.

> > -- as crashing machines.
>
>My, now DNS can crash machines?  I have to characterize that as
>super FUD.

Nicely creative mis-reading of the text.  Thank you.

> 2) If crashes can be caused by errors at the TLD level, then they can
> just as well happen from the same cause at deeper levels.

Deeper levels affect fewer nodes.  That's why United Airlines works so
hard to make sure that each first flight of the day gets out on time.

And if my reference to UAL does not seem to make sense, then there is too
much distance to cover in the understanding of hierarchical (dependency)
systems for further discussion to be productive.

d/

=-=-=-=-=
Dave Crocker
Brandenburg Consulting
Tel: +1.408.246.8253, Fax: +1.408.273.6464
675 Spruce Drive, Sunnyvale, CA 94086 USA

------------------------------

Date: Mon, 6 Mar 2000 13:29:58 -0800 (PST)
From: Karl Auerbach
Subject: Re: [wg-c] voting on TLDs

> >I might note that I've been associated with an actual test in which we
> >established a root with several million TLDs.  The world did not end, the
> >seas did not boil, the sun still rose in the east, and DNS still worked.
>
> Karl, my recollection of that test was that it was quite limited

It was a fully functional server with several million TLDs.  You can call
it limited, but I call it a very pragmatic test.

As for the access paths and query rates, we can derive an existence proof
from the access paths and query rates on the .com zone.  The .com servers
handle a zone that is several million entries wide and, by removing the
impact of the ccTLDs and other TLDs, that gives us a direct correlation
to root zone behavior.

An easy formulation is to say that if the number of queries that pass
through .com represents X% of the entire number of queries passing
through all TLDs, then we can use the .com experience as reflective of a
root zone with a number of TLDs equivalent to X% of the SLDs in .com.

Given that X% is probably 50% or above, we can make an extremely safe
extrapolation (using the far more conservative number of 10%) that a root
with 10% of .com's 10 million+ SLDs, i.e. a root with 1,000,000 TLDs,
will behave with the same degree of success as today's .com servers.
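[A rough restatement of that arithmetic in Python -- the zone size and
query-share figures are the post's estimates, not measurements:]

    # Karl's extrapolation: if .com's servers already serve ~10M names
    # at an estimated >= 50% of all TLD-level query traffic, a root
    # zone sized at even a conservative 10% of .com should see a
    # comparable aggregate load.
    com_slds = 10_000_000        # ".com's 10 million+ SLDs"
    com_query_share = 0.50       # "X% is probably 50% or above"
    conservative_share = 0.10    # deliberately far below that estimate

    supportable_tlds = int(conservative_share * com_slds)
    print(supportable_tlds)          # 1,000,000 TLDs, vs. the proposed 6-10
    print(supportable_tlds // 1000)  # ~1000x margin over 1000 new TLDs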
I note that several of the .com servers still occupy the same computers
as the root servers, hence putting those computers under a traffic and
query load equivalent to a root size far in excess of 1 million TLDs.

I note further that 1,000,000 TLDs is a number vastly larger than 6 or
10.  So, to build in a multi-thousand-fold safety margin, we can readily
accommodate 1000 new TLDs, not the mere 6 to 10.

I also point you to the research on DNS network loading being done by
CAIDA.

And one more thing - if the 512 byte limit on DNS UDP packets were
modernized, we could significantly increase the number of DNS servers per
zone beyond the current bottleneck of 13.
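[The 13-server bottleneck comes from fitting a full priming response
into one 512-byte UDP message.  A back-of-the-envelope Python sketch of
that packet arithmetic, using RFC 1035 wire-format sizes and the
root-servers.net naming scheme; the exact operational margin kept in
reserve is a simplifying assumption:]

    # How many root servers fit in a 512-byte UDP response to ". IN NS"?
    HEADER = 12                   # fixed DNS message header
    QUESTION = 1 + 2 + 2          # root name "." + QTYPE + QCLASS
    RR_FIXED = 2 + 2 + 4 + 2      # TYPE + CLASS + TTL + RDLENGTH
    FIRST_NS = 1 + RR_FIXED + 20  # owner "." + "a.root-servers.net." in full
    NEXT_NS = 1 + RR_FIXED + 4    # later rdata compresses to "x" + pointer
    GLUE_A = 2 + RR_FIXED + 4     # compressed owner + one IPv4 address

    n = 1
    while (HEADER + QUESTION + FIRST_NS
           + (n - 1) * NEXT_NS + n * GLUE_A) <= 512:
        n += 1
    print(n - 1)  # ~15 in theory; 13 leaves working headroom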
> > 2) If crashes can be caused by errors at the TLD level, then they can
> > just as well happen from the same cause at deeper levels.
>
> Deeper levels affect fewer nodes.  That's why United Airlines works so hard
> to make sure that each first flight of the day gets out on time.

But as we see, with millions of badly run deeper zones, we do not have
failures occurring.  You are just imagining a problem that, simply
stated, does not exist in real life.

I challenge you to find me a DNS name that will cause my Windows,
Macintosh, Unix, or Linux machines to crash when I resolve it.  And I
note, you were the one who said that the machines would "crash".

--karl--

------------------------------

Date: Mon, 6 Mar 2000 13:39:52 -0800 (PST)
From: Patrick Greenwell
Subject: Re: [wg-c] voting on TLDs

On Mon, 6 Mar 2000, Kent Crispin wrote:

> On Mon, Mar 06, 2000 at 11:20:01AM -0800, Patrick Greenwell wrote:
> > On Mon, 6 Mar 2000, Kent Crispin wrote:
> >
> > > On Sun, Mar 05, 2000 at 09:39:57PM -0800, Mark C. Langston wrote:
> > > [...]
> > > > You say that we have no evidence that a flood of new, inexperienced
> > > > registry admins will provide stable service.  Well, we have no
> > > > evidence to the contrary.
> > >
> > > Actually, we do -- so much so that it is ludicrous to claim otherwise.
> >
> > Care to substantiate your assertion?
>
> If you would think about it for just 3 seconds, you wouldn't ask that
> question.

I did, and nothing in your customarily acerbic response answered my
question.

> Here's the assertion:
>
> "A flood of new, inexperienced registry admins/operators can cause
> instabilities in the registry service."

No, that was not the assertion.  Please see above as to what the original
assertion was.  The term "operator" was never used.

> This is a special case of the more general assertion:
>
> "A flood of new, inexperienced XXX admins/operators can cause
> instabilities in the XXX service."

Not at all.  The above is your attempt at extrapolation.  Still waiting
for a response that backs your original assertion, which, given the
limited number of *registry admins*, should be very interesting to see.

Bonus points if you can respond in a civil fashion.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Patrick Greenwell
                         Earth is a single point of failure.
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

------------------------------

Date: Mon, 06 Mar 2000 14:24:04 -0800
From: Dave Crocker
Subject: Re: [wg-c] voting on TLDs

At 01:29 PM 3/6/2000 -0800, Karl Auerbach wrote:
>It was a fully functional server with several million TLDs.  You can call
>it limited, but I call it a very pragmatic test.

The test was of a single machine, not a network.  The conditions did not
emulate the diversity of performance of the Internet.  That diversity is
a non-trivial factor.

>And one more thing - if the 512 byte limit on DNS UDP packets were
>modernized, we could significantly increase the number of DNS servers per
>zone beyond the current bottleneck of 13.

If everyone would just agree to world peace, we could have that, too.

>I challenge you to find me a DNS name that will cause my Windows,
>Macintosh, Unix, or Linux machines to crash when I resolve it.  And I
>note, you were the one who said that the machines would "crash".

I challenge Karl to correlate that statement with any claim that was made
about "a name" causing a machine to crash.  (Translation: please make
responses and challenges that are carefully tied to previous statements,
rather than going off into rhetorical territory.)

At 01:39 PM 3/6/2000 -0800, Patrick Greenwell wrote:
>On Mon, 6 Mar 2000, Kent Crispin wrote:
> > If you would think about it for just 3 seconds, you wouldn't ask that
> > question.
>
>I did, and nothing in your customarily acerbic response answered my
>question.

Actually, the rest of Kent's note was an extensive and complete answer.
And that remainder was not even acerbic, in spite of how difficult it
seems to be to get participants in this thread to read and respond
carefully.

In any event, this issue has now entered into the "your mother wears
combat boots" phase, so it is best left in the trenches.

d/

=-=-=-=-=
Dave Crocker
Brandenburg Consulting
Tel: +1.408.246.8253, Fax: +1.408.273.6464
675 Spruce Drive, Sunnyvale, CA 94086 USA

------------------------------

Date: Tue, 07 Mar 2000 18:52:25 -0500
From: bob broxton
Subject: [wg-c] OBJECTION TO THE RELEASE OF "REPORT (PART ONE) OF WORKING GROUP C...."

OBJECTION TO THE RELEASE OF THIS REPORT AS "REPORT (PART ONE) OF WORKING
GROUP C OF THE DOMAIN NAME SUPPORTING ORGANIZATION, INTERNET CORPORATION
FOR ASSIGNED NAMES AND NUMBERS"

As a member of Working Group C, I object to the release of this report,
"Report (Part One) of Working Group C of the Domain Name Supporting
Organization, Internet Corporation for Assigned Names and Numbers"
(hereafter referred to as "Report of Working Group C"), for the following
reasons:

1.  The members of Working Group C have never given approval of this
Report.  As such, this is not a "Report of Working Group C".

2.  The members of Working Group C have not given approval to allow the
co-chairman of Working Group C to use his absolute discretion in
determining what goes in this particular report and then releasing this
report as a "Report of Working Group C".  As such, this is not a "Report
of Working Group C".

3.  The co-chairman of Working Group C has decided to release this report
as the "Report of Working Group C" over the known objections of some
members of Working Group C to releasing the report as the "Report of
Working Group C".

4.  Some members of Working Group C probably do not know this "Report of
Working Group C" exists.  An extremely short period of time
(approximately 7 days) was allowed to review and provide any suggestions
regarding this document.

5.  The co-chairman has refused to change the name of the report to "The
Co-Chairman's Report on the Progress of Working Group C".  This would
permit the material in the report to be released but allow the members of
Working Group C to approve and release a report from Working Group C
entitled "Report of Working Group C".
6.  The issuance of this report, without the members of Working Group C
approving either the language in the report or the granting of this
authority to the co-chairman, sets a bad precedent for future reports.
Why have members?

7.  Public comments have never been requested on this "Report of Working
Group C".  Prior public comments were received on an Interim Report.  The
"Report of Working Group C" being released now has new material for which
public comments have never been received.

8.  The release of any "Report of Working Group C" without first
obtaining public comments on a draft of the report is contrary to ICANN's
stated policy of "the development of consensus based policies (such as
policies concerning new names) in an open, transparent and bottom-up
manner in which interested individuals have an opportunity to participate
and comment" (see ICANN FAQ on new generic top level domains - posted
September 13, 1999).

9.  This report was hurriedly prepared and little time allowed for review
because "Members of the Names Council" requested "WG-C file a report
before the NC's meeting in Cairo next week."  Either Working Group C
should be allowed sufficient time to study the issues, explore all the
options and complete a report in a timely manner, or the Names Council
should disband the Working Group.  To require a Working Group to hastily
prepare a report for the sake of an upcoming meeting, with insufficient
time for members to study, comment on and approve the report, does not
inspire a lot of faith in the ICANN process.

Bob Broxton
Member of Working Group C
Richmond, VA
------------------------------

End of WG-C-DIGEST V1 #29
*************************