This whole discussion should be taken to the
YWKTIEDNWWFALNORIBNLTICSADEWSIFOSTFSTNOML working group. (Yes, we know
that Internet email does not work well for a large number of reasons,
including but not limited to incorrect code, spam and, dare we say it,
failure of SMTP to fully support the needs
Sent: Monday, December 02, 2002 1:43 PM
To: Hallam-Baker, Phillip
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: namedroppers, continued
Hallam-Baker, Phillip wrote:
The only way to resolve this issue properly would be to require every submission to an IETF mailing list to be cryptographically signed
OK.. Almost plausible. However note that currently, the PGP web-of-trust covers only a small percentage of the subscribers to the IETF list, and there's no *really* good PKI for S/MIME yet (hint - we don't seem to even understand how to apply 'basicConstraints', so if you think we're
The fact that OCSP scales fine for revocation checking doesn't mean that you have a system that scales fine for the *TOTAL PROCESS*.
Stop blustering, you clearly did not know the difference between a CRL and OCSP and certainly have no real world experience of operating PKI on which to base
To: Hallam-Baker, Phillip
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: namedroppers, continued
Hallam-Baker, Phillip wrote:
The only way to resolve this issue properly would be to require every submission to an IETF mailing list to be cryptographically signed
] [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 06, 2002 3:59 PM
To: Marc Schneiders
Cc: Fred Baker; Hallam-Baker, Phillip; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: namedroppers, continued
I've been saying about the need for more radical change in the mail protocol for years now on mailing
There is a proposal to start an IRTF research group on the topic of spam. Perhaps one of the things that could be looked at is how mailing lists could apply spam defenses and still maintain openness.
Many of the problems with spam are now the second order effects of
dropped messages that should
Bernstein is a whiner who deserves no sympathy from anyone.
In order to avoid cluttering up his inbox with spam he has one of those annoying callback systems. Only thing is that the self-centered clod can't be bothered to put people who reply into a whitelist.
What this means is that as a result
in the third person.
Phill
Dr. Phillip M. Hallam-Baker, B.Sc(Soton), D.Phil(Oxon), C.Eng, FBCS
-Original Message-
From: Jim Reid [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 24, 2003 4:34 AM
To: [EMAIL PROTECTED]
Cc: Hallam-Baker, Phillip; [EMAIL PROTECTED]; [EMAIL
Harald points out some significant issues with pre-existing software.
I dispute his conclusion that a failed signature means that the message will
be thrown in the trash. Most filters (and certainly any compliant with the
criteria being discussed) would quarantine mail with a failed S/MIME
It appears that this list is in danger of going the way of the ASRG list.
That is currently down to about 6 active posters and a
The CPS states the authentication processes that the CA uses in issuing the certificate or otherwise certifying the key (amongst other things).
You can trust the CPS in the sense that the CPS of a well known CA should provide you with a reliable indication of the level of risk involved in relying
Yes, the CPS disclaims all WARRANTIES.
You do not want a CA that provides a recourse that depends on a finding of fault. WARRANTIES are a specific legal instrument that provides recourse through the courts under theories of merchantability and negligence. So you have to PROVE the CA did something
to be set at military security levels. Most people simply won't
pay for that.
Phill
-Original Message-
From: Pete Resnick [mailto:[EMAIL PROTECTED]
Sent: Friday, June 06, 2003 12:10 PM
To: Hallam-Baker, Phillip
Cc: '[EMAIL PROTECTED]'
Subject: RE: Certificate / CPS issues
IANAL, but I don't take the fact that Habeas was founded by a lawyer to indicate that their idea of copyright law is necessarily enforceable. Lawyers are notoriously bad judges of their own cases. The guy running EMarkettingAmerica thinks he can file a case on behalf of unspecified plaintiffs...
In response to the various threads on authenticated email...
Yes, there is a value to authentication, even weak authentication. The vast
majority of spam uses a forged origin address, according to our measurements
and those of the FTC. By forged origin address I mean it was sent without
any form
Stephen writes:
Does my signature on this message make you trust it more than, say, the ten ads you got this morning for Viagra?
Your signature tells me nothing, it's what I know about your private key that is significant.
If there is someone I trust that signs a statement that says that
Paul writes:
s/mime relies on the x.509 pki industry which is in turn based on the goal of enriching a small number of ca's who have to pay for relationships to browser/useragent vendors who then make the certs worthwhile
Hmm and the cost of MAPS would be?
It costs money to perform
Let's try a thought experiment. Imagine for a moment someone came to this forum in 1990 proposing, say, that lossy packet routing could never possibly work because nobody could rely on such a system, pointing out that the Internet was minute compared to the telephone system and that therefore the Internet
Yes, I'm sure those guidelines are all well and good and clearly thought out.
The problem is that what actually gets *LEGISLATED* may be a totally different story
Well why not go and find out rather than raising a theoretical
problem that probably does not exist?
Most of the digital
]
Subject:Re: Certificate / CPS issues
* From [EMAIL PROTECTED] Sun Jun 8 18:27:12 2003
* From: Hallam-Baker, Phillip [EMAIL PROTECTED]
* To: '[EMAIL PROTECTED]' [EMAIL PROTECTED]
* Subject: Re: Certificate / CPS issues
* Date: Sun, 8 Jun 2003 18:16:32 -0700
* MIME-Version: 1.0
PKIX is busy grafting the PGP model on top in the shape of cross certificates.
I dispute the lower risk claim. You have more control. More control does not mean less risk.
-Original Message-
From: Haren Visavadia
Sent: Mon Jun 09 10:35:58 2003
To: 'Hallam-Baker, Phillip
But that's not the requirement. The requirement is to disclose knowledge of the existence of IPR, not to try to secure a license to it. Given that, do you still disagree with the requirement?
I disagree with the requirement because I don't think there is an enforceable requirement. I
In response to John's RFC 942 point:
Reading the document the gist of the argument appears to be 'there is no
real technical difference between the protocols that is significant enough
to decide the issue, but OSI is what everyone else is planning to use'.
So actually I would say it was the
- DoD can't afford to have its own custom solutions that aren't
commercially available and won't interoperate with commercial or other
Government systems.
Exactly, the COTS mantra is all. I think the decision to move to require IPv6 capability is entirely consistent with this approach.
On Tuesday, June 17, 2003, at 11:51 AM, Hallam-Baker, Phillip wrote:
The key in my view is to work on the NAT vendors; instead of viewing NAT boxes as an obstacle they should be seen for what they really are, an essential and important part of the internet infrastructure.
you
:47:42 2003
To: Hallam-Baker, Phillip
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: myth of the great transition (was US Defense Department formally adopts IPv6)
I really wish that the IETF
had designed a decent NAT
problems.
End-to-end-only security dogma is like saying buildings should be fireproof and sprinkler systems are evil and unnecessary
-Original Message-
From: Putzolu, David
Sent: Wed Jun 18 13:59:43 2003
To: 'Keith Moore'; Hallam-Baker, Phillip
Cc: [EMAIL PROTECTED]; [EMAIL
I'm all for eating our own dog food, but IMO workable remote
access is more
important.
The point about eating the dog food is so that you improve it
to the point where it is acceptable.
I think it is time to accept that the MBONE technology is fatally flawed and is not going to be
Equally flawed and useless are the H.323 protocols that do not tunnel through NAT or even work with a firewall in a remotely acceptable fashion.
NAT is the big bad dog here, that is what breaks the end to end connectivity. restart NAT war /
In case you had not noticed there are now
I think it is time to accept that the MBONE technology is
fatally flawed and is not going to be deployable.
There is nothing wrong with Mbone, per se--though, as someone mentioned,
it might be nice to have better codecs. The problem is that multicast is
flawed, and not going to be globally
Sounds like a conspiracy... ISPs charging orders of magnitude more than cost for additional addresses, forcing people to use NAT.
It's called a monopoly.
There are good reasons why ISPs are encouraging their customers to use NAT, they provide a weak firewall capability and that in turn
Perimeter security is brittle, inflexible, complex security. You have to have an understanding of the semantics of an application at the perimeter to check whether the operation is allowed - which is bad in so many ways I don't feel like listing them all.
It is only useful in my view if you
Or are you referring to the issue that some client/server type interactions are broken when the client is behind NAT? This should probably be fixable in most cases (I wouldn't call updating huge installed bases trivial though), but that still leaves the applications and protocols that
As the AD who sponsored this work, I have to disagree. ...
The recent interim meeting resulted in an agreement to work on
a converged spec taking ideas from SPF and Caller-ID.
Why? These are latecomers to the field. Or is it because of this:
[mailto:[EMAIL PROTECTED] On Behalf Of Eric Rosen
Eliot Even if someone *has* implemented the telnet TACACS user
Eliot option, would a user really want to use it?
Eric I don't know. Do you?
Eliot Yes, I do. Many of us do. And that's the point.
I'm sure you think you know,
Reading through the comments on voting I am struck by a difference in
the approach people take to what the IETF is for.
* One school of thought is that the reason for starting a working group
is to arrive at a better engineering outcome than is possible
independently.
* Another school of thought
Dear Phillip,
There is a motivation you forgot. It is to take control of your particular part of the world by using the IANA to lodge your vision and/or your name. Like a micro TLD Manager.
Folk greatly overestimate the effectiveness of IANA as a control point.
Why does the IESG
Why can't we elect the WG chairs? Why can't we elect the ADs?
...
When the IETF pays for the 60% (80%, 100%, take your pick) of an AD's salary, they can elect ADs. Unfortunately, the current system is heavily biased towards keeping existing ADs - who, like career politicians, can
The problem with voting is that the IETF does not have a membership list, so there is no real basis for running a vote. The nomcom process is intended as a surrogate, randomly selecting motivated representatives.
The criteria applied for membership of NOMCOM could be applied to direct
Nomcon can use any mechanism they like to select candidates, it makes no
difference.
Nomcon is not accountable to anyone by design, its decisions are not
binding on anyone as a result.
To have authority you have to accept accountability.
-Original Message-
From: [EMAIL PROTECTED]
Behalf Of Frank Ellermann
Hallam-Baker, Phillip wrote:
The criteria applied for membership of NOMCOM could be applied to direct voting rights without any difficulty.
Selling out vs. pseudo-random? So far the worst idea I've understood on this list.
I don't think that direct
These are not mutually exclusive, and the last point is more dubious than the first two. While deployment is a necessary condition for success, so is technical soundness. Our broader purpose is not just to create new protocols - it's to keep the Internet working well.
Technical
I also believe the nomcom process does provide accountability. I think that the nomcom interview process was more comprehensive than any job interview process I've gone through.
I think you make a fundamental error here: accountability is determined by whether we can get rid of someone,
From: Keith Moore [mailto:[EMAIL PROTECTED]
It is extremely unlikely that a NOMCOM is ever going to change more than a small number of ADs because they understand that they have no mandate.
so in your mind, unless they kill off a lot of ADs, there's no accountability? what a
From: kent crispin [mailto:[EMAIL PROTECTED]
On Fri, Apr 15, 2005 at 04:03:02PM -0700, Hallam-Baker, Phillip wrote:
I also believe the nomcom process does provide
accountability. I think that the nomcom interview process
was more comprehensive than any job interview process
Why do you think a decent-sized, randomly-selected subset of the IETF (i.e. the NomCom) are taking actions that are substantially more conservative (in terms of keeping people) than the IETF as a whole would do?
Because they only get to do it once and have no expectation of
repeating.
Alas, much as pointing out the numerous errors above would interest me, it's a bit far afield for this list. (I'm particularly amused by your calling on Plato for support - he was profoundly anti-democratic.)
That is exactly the type of dishonest rhetorical tactic that has caused
so many
Throughput is only one of the issues of concern in a communication
protocol. In most cases latency is more important.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Ole Jacobsen
Sent: Wednesday, April 27, 2005 10:24 AM
To: The IETF
Subject:
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
The only way to relieve work is to distribute it, not concentrate it.
False. You can also relieve work while keeping throughput constant by reducing overhead.
Distributing work often reduces throughput by creating more
Behalf Of Keith Moore
Sent: Thursday, April 28, 2005 6:29 PM
I don't see anything wrong with that. It's the ADs' job to push back
on documents with technical flaws. They're supposed to use their
judgments as technical experts, not just be conduits of information
supplied by others.
were. It is also way down my list of priorities
which are now focused on stopping Internet crime.
-Original Message-
From: Keith Moore [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 28, 2005 7:49 PM
To: Hallam-Baker, Phillip
Cc: Joe Touch; John C Klensin; ietf@ietf.org;
[EMAIL
FWIW, this seems fairly easy to implement even now, with (1) The introduction of the tracker that records comments so that they can be accessed in a public manner. (2) The practise where DISCUSS comment resolution is brought back to the WG list (unless the comments are obvious and non-
From: Jeffrey Hutzelman [mailto:[EMAIL PROTECTED]
On Friday, April 29, 2005 09:18:08 AM -0700 Hallam-Baker, Phillip
[EMAIL PROTECTED] wrote:
You miss out (3) TELL PEOPLE ABOUT THE TRACKER THAT EXISTS.
There is actually a tracker:
https://datatracker.ietf.org/public
From: Bob Braden [mailto:[EMAIL PROTECTED]
* If the STD series is going to be useful then the tool that spits out the current status of the RFCs should spit out HTML pages with the RFCs indexed by status.
Presumably you mean:
:[EMAIL PROTECTED]
Sent: Friday, April 29, 2005 5:47 PM
To: Bob Braden; Hallam-Baker, Phillip
Cc: ietf@ietf.org
Subject: RE: text suggested by ADs
* If the STD series is going to be useful then the tool that spits out the current status of the RFCs should spit out HTML pages
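The index tool being asked for is a few lines of scripting. As a rough sketch only (the function name and the (number, title, status) tuple format are assumptions for illustration, not the format of any actual IETF tool):

```python
from collections import defaultdict
from html import escape

def rfc_status_index(rfcs):
    """Render (number, title, status) records as an HTML page with the
    RFCs grouped under one heading per standards-track status."""
    by_status = defaultdict(list)
    for number, title, status in rfcs:
        by_status[status].append((number, title))
    parts = ["<html><body>"]
    for status in sorted(by_status):          # one section per status
        parts.append("<h2>%s</h2>" % escape(status))
        parts.append("<ul>")
        for number, title in sorted(by_status[status]):
            parts.append("<li>RFC %d: %s</li>" % (number, escape(title)))
        parts.append("</ul>")
    parts.append("</body></html>")
    return "\n".join(parts)
```

The point is only that the data already exists in the RFC index; grouping it by status and emitting HTML is trivial once the records are machine-readable.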
Behalf Of Keith Moore
So, if there is indeed the IETF community expectation that the WG gets to pick at least some of its own chairs, then rfc3710 needs to be revised to reflect this.
not necessarily. a significant part of the community can expect something and there still not
for some, 'let the market decide' is a religious statement. it's generally based on an unexamined faith in market conditions as an effective way of making a good choice among competing technologies.
I don't accept the ideological case for or against free markets.
My point here is
This cacologic however might be a good way to solve the IDN homograph issue and the phishing problem.
I have been spending most of my time on the phishing problem for three
years. I have yet to see a phishing gang use the DNS IDN loophole for a
phishing attack.
This is probably because
I think that what is needed here is transparency, the problem is not the
outcome, it's the way the outcome is arrived at.
I think that it is equally important to have the same level of
transparency when WG chairs are appointed. The WG should be told when a
vacancy is coming up and there should
Something of the sort could be done. But to do so would require the HTTP WG to be re-opened and there are many other proposals that would also have to be considered.
I proposed a session-id in 1995 that was similar to Dave Raggett's 1992 proposal but with some crypto in it that prevented session
I care.
I spend an increasing amount of time giving evidence in patent disputes
which might never have arisen if the IETF did not have a policy of
deleting all IDs after 6 months.
Google have it right: storage should not be an issue.
Feel free to email the data to my google mail account ha ll
Note that this is not purely hypothetical; I asked the same
question of the IESG in a comment on draft-ietf-pkix-pkixrep-03.txt:
For draft-ietf-impp-srv-04, we required an IANA maintained registry
that allowed someone to map _im._bip to a specification of how _bip
used SRV records.
draft-iesg-discuss-criteria-00.txt talks about this. Even
within the IESG, we still have one or two points to resolve,
but we wanted to get this out before the cutoff date. This
isn't in any way intended to change any of the principles of
the standards process, but we'd welcome community
To: Hallam-Baker, Phillip
Cc: IETF Discussion
Subject: Re: When to DISCUSS?
Phill,
Just picking out the nub of your message:
There is however one area that should be made very explicit as a non-issue for DISCUSS: failure to employ a specific technology platform.
I have been
From: Scott W Brim [mailto:[EMAIL PROTECTED]
There are occasions when limiting the number of deployed solutions is very good for the future of the Internet, and in those cases, pushing for Foo even when Bar is just as good is quite legitimate.
I have no argument at all when the IESG
It seems to me that the underlying problem here is that fixed port numbers are not really the right solution to the protocol identification problem.
When this was first proposed the number of machines on the Internet was
small and there was no DNS, only the host file.
The port number allocation
I'm saying that one source system can generate more than 64K
IMAP4 sessions. These are systems running various sorts of
proxies, so they are in effect hosting many clients at once.
True, but if we are running IPv6 then surely the solution to this
problem would be to simply request some
From: Jeffrey Hutzelman [mailto:[EMAIL PROTECTED]
On Friday, July 15, 2005 11:48:28 AM -0700 Hallam-Baker, Phillip
[EMAIL PROTECTED] wrote:
Agree, for the most part. Fixed port numbers do have some operational advantages, though...
They certainly have operational advantages
warning... implementing control by denying information (such as not telling the bad guy which port the secured-by-obscurity process is ACTUALLY running on) is not terribly good security. It is certainly reasonable control over people who want to be controlled (management), but not
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Anyone with corporate board experience has been there as
well. Or school board experience. Or, for that matter,
corridor conversations at IETF meetings. There's certainly no
shortage of examples.
I think that most people understand
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of John Kristoff
On Fri, 15 Jul 2005 11:48:28 -0700
Hallam-Baker, Phillip [EMAIL PROTECTED] wrote:
There are certain limitations to the SRV prefix scheme but these are entirely fixable. All we actually need is one new RR
Host and application security are not the job of the network.
They are the job of the network interfaces. The gateway between a
network and the internetwork should be closely controlled and guarded.
Nobody is really proposing embedding security into the Internet backbone
(at least not yet).
Most people think that carriers
should not be allowing people to inject bogons.
Modern security architectures do not rely exclusively on application
security. If you want to connect up to a state of the art corporate
network the machine has to authenticate.
the notion that one has
While we are on the subject of SRV, port numbers etc:
Why not define SRV prefixes for POP3, IMAP4 and SUBMIT so that email applications can auto-configure from the email address alone?
I recognize that this would be much more satisfactory with better DNS security. But even without DNSSEC it
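The client side of this proposal is mechanical: take the domain from the address and construct one SRV owner name per service. A minimal sketch, with the caveat that the service labels below are illustrative guesses, not registered prefixes, and the actual DNS query is left out:

```python
def srv_owner_names(email_address, services=("pop3", "imap", "submission")):
    """Derive candidate SRV owner names for mail auto-configuration
    from a bare email address.

    NOTE: the service labels are assumptions for illustration; a real
    client would use whatever prefixes IANA registers for each protocol.
    """
    # Everything after the last '@' is the domain; DNS names are
    # case-insensitive, so normalize to lowercase.
    domain = email_address.rsplit("@", 1)[1].lower()
    return ["_%s._tcp.%s" % (svc, domain) for svc in services]
```

A client would then query each name for SRV records and connect to the returned host and port, falling back to the traditional fixed ports if no records exist.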
Because it raises some very interesting issues about just which server a particular client should be bound to. The network-nearest available one or the one associated with the same organization as the client are typical possibilities (along with the somewhat vague email address) and, if
layered defenses are a good notion, but mostly when the layers are
under the same administrative control. all too often people forget
that relying on the security provided by someone else is a risky
proposition, as in your example of ISPs providing ingress filtering.
I would restate your
the Internet is composed of Autonomous Systems, and they take the
first word of the name very seriously. I suspect ISP accountability
in China, for example, may be as successful as copyright enforcement
in that region.
Everyone has a common interest in keeping the Internet.
Thus far law
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
In Las Vegas I waited 8 hours in the security queue last time to return back to my home. Of course the flights had already departed and, as it was not the fault of the company, I needed to buy a new ticket, no refund and of course, I don't
Thus far law enforcement outside the US has arrested and prosecuted considerably more suspected Internet criminals than the US.
This may come as a surprise to you, but the rest of the world is
actually larger than the US. (Oh wait, there I go with that dreaded
sarcasm. Sorry.)
Saltzer, Reed and Clark's paper "End-to-End Arguments in System Design" points out the exceptions: http://mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf (starting at the heading "Performance aspects").
And if Tom bothers to actually read the only two paragraphs in the paper
on security he
: Re: Sarcasm and intimidation
From: Hallam-Baker, Phillip [EMAIL PROTECTED]
There would probably be a lot more people working in the IETF who share my views if they did not meet with sarcasm, patronising remarks and intimidation.
Just out of curiosity
So in the question of ingress filtering what I am looking at is
mechanisms to create accountability.
Just beware that accountability in an interdependence system can only be based on the threat of retaliation. Which means that you must be a little more equal than your peers for it to
From: Iljitsch van Beijnum [mailto:[EMAIL PROTECTED]
On 21-jul-2005, at 15:23, Hallam-Baker, Phillip wrote:
The intellectual successors of Plato's faction gave us the Dark Ages, fascism and communism; argument from authority trumps all else. The intellectual successors
This is basic. I am not discussing that, but motivation and
quality of the expected deliveries.
I think you misunderstand the point I am making. I do not propose that the IETF attempt to form the type of political relationships that you rightly state will be needed. Such relationships are
No need to overcompensate, though. For instance, look at Galileo's experiments: they barely support his theories, because his tools were so crude. But Popper et al. covered this ground extensively.
Actually the person who more or less originated the argument that I made was Karl Popper. Volume
From: Keith Moore [mailto:[EMAIL PROTECTED]
The reason that there is no consensus in the spam area is that most
proposed solutions are claiming to solve the whole problem (or at
least a big chunk of it) but are grossly overstating their
applicability. To some degree this is because
My biggest worry is the one piece of structure that has no wiggle room. As defined, if the Nomcom in phase 1 decides not to reappoint the incumbent, there is no way to recover if that turns out not to work. I like the idea of considering incumbents on their own. But I can not find
Behalf Of Lakshminath Dondeti
For the 3rd term, I would place the incumbent at the same level as a new candidate (assuming neutral feedback on the sitting AD, or even if he/she is doing a very good job).
For the 4th term, I would favor new candidates over the incumbent (so
From: Eliot Lear [mailto:[EMAIL PROTECTED]
Hallam-Baker, Phillip wrote:
No matter how good an incumbent, they are much less likely to ask questions of the form 'why has this WG existed for over a decade?', 'why isn't it finished?', 'why is nobody using it?'
If an AD isn't asking
Sorry, I thought you were aiming toward the age-old 'this working group has existed too long' debate. On the usefulness question, I actually think an experienced AD is going to know what works and what doesn't, more so than an inexperienced one. But the AD doesn't invent the ideas.
Behalf Of JFC (Jefsey) Morfin
This is why I suggest the real danger for the IETF is the collusion of large organisations through external consortia to get a market dominance through de facto excluding IETF standardisation and IANA registry control.
And this is why I suggest the best
From: JFC (Jefsey) Morfin [mailto:[EMAIL PROTECTED]
Except if you can grab a BCP. I am not sure you are actually
right. You certainly know a few cases.
The lack of an IETF endorsed spec from MARID did not stop Microsoft from
holding an industry gala two weeks ago in NYC. Nobody commented
There are cases where it is useful for a group to be able to take notice
of first hand experience that comes from employment.
For example I am currently reading a somewhat surreal thread in which an individual who clearly has no experience or understanding of running network operations for a large
From: John C Klensin [mailto:[EMAIL PROTECTED]
At the risk of providing an irritating counterexample or two...
Please explain this to almost every wireless carrier in the
world, especially those offering 3G or similar
Internet-based data services. Established actors, significant
Behalf Of Spencer Dawkins
Call me a dreamer, but if there's one voice (which may or may not be from another planet) in a working group, the chair's responsibility is to decide if this is one of the hopefully rare cases where one voice SHOULD derail apparent consensus, and if it's not -
Behalf Of Brian E Carpenter
These communities may not even be SDOs - they can be operator consortia, vendor consortia, industry consortia, or Lord knows what.
Ah, but those we can simply treat as individual contributions, because there is no reason to do otherwise.
For the cases
Behalf Of Samuel Weiler
Not being a caffeine user, I hadn't noticed this one. But
I've found the lack of water and the flow controls on it disturbing.
At 16:43 today, 13 minutes into the break, I could not find
water in the break area at all. They seemed to have found
some a few
This conjecture was disturbing, but calling it a feature was even more disturbing. After a bit of pondering, and wondering what different groups in the IETF might mean by 'complex', my first thought was that the IETF has never, ever solved one. For example, we do QoS in small pieces that
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Keith Moore
or for that matter _Atlas Shrugged_? (not that I agree with Rand on
everything, but she had this one pegged)
I beg to differ, the Middle Ages demonstrated amply that the vast majority of the populace are not going