Re: Multicast Last Mile BOF report

2003-07-16 Thread Randy Bush
 there is a serious contest to see how many mechanisms multicast
 and v6 can develop to overcome why they are not being deployed.
 they should get a clue.
 Look at how many mechanisms NAT required before it was deployable.

there was demand for nat.

 Ease of deployment != good

and 500 gadgets do not make a technology adopted when it has no
business model folk can understand.

randy




Re: no multicast of IETF plenaries

2003-07-16 Thread Joel Jaeggli
Basically, we were unable to use the same rooms for the plenary that we use 
for the regular sessions. For us it's just too much to move the ch1 
equipment in two hours. It is possible that the equipment I'm using will be 
able to source mpeg-1, but not the other sources from the plenary. If it 
works I'll update the webpage at least two hours before the plenary.

joelja

On Tue, 15 Jul 2003, Keith Moore wrote:

 from http://videolab.uoregon.edu/events/ietf/ietf57.html:
 
  NEWS:
  
07-15-2003 - We're not going to broadcast the two plenary
   sessions in the evening wednesday and thursday. They are in
   different rooms for our equipment, and moving it is too hard.
   Currently we plan on recording the two plenaries and they should
   be accessible later.
 
 I'm sorry to learn this.  The multicast access is actually turning out
 to be quite useful for me this time around, and the plenaries are
 important parts of the meeting.  I hate to miss them.  Watching the
 recorded meetings isn't the same because there's no way to provide
 real-time feedback.  
 
 (fwiw, in my experience Jabber isn't a very good mechanism for keeping
 track of what's going on, but it has worked quite well for making the
 occasional remotely-originated comment.)
 
 despite the inability to multicast the plenaries, I'm very appreciative
 to the multicast team for allowing me to participate in some of the WG
 meetings from Tennessee.
 
 Keith
  
 

-- 
Joel Jaeggli  Academic User Services   [EMAIL PROTECTED]
--PGP Key Fingerprint: 1DE9 8FCA 51FB 4195 B42A 9C32 A30D 121E  --
  In Dr. Johnson's famous dictionary patriotism is defined as the last
  resort of the scoundrel.  With all due respect to an enlightened but
  inferior lexicographer I beg to submit that it is the first.
-- Ambrose Bierce, The Devil's Dictionary





Re: The requirements cycle

2003-07-16 Thread Hamid Ould-Brahim
Alex,

[clipped]...

  Again, I was not on the IESG when PPVPN was started, however, I
  think that naturally well-scoped technologies with a clear direction
  in the solution space very often do not need requirements and
  frameworks. 



  It seems that the VPN problem and solution spaces are large and
  complex enough to warrant both requirements and framework documents.
  That said, these documents do not have to be long and fat, and it
  should be possible to produce an acceptable quality document within
  6 months.

  Regarding IESG feedback (where my piece was probably the biggest):
  Predicated on the assumption that reqs/fw documents are not needed,
  any feedback, whether it is from the IESG or not, will be perceived
  as a rock fetch. If we assume those are useful, IESG review is part
  of the process of ensuring high quality of these documents.

In fact, the reason the PPVPN WG was created is that there were
two solutions (VR and 2547) already defined, with some known
*deployment* already happening. That's what motivated the ADs
to scope the WG to just standardizing these two solutions;
at that time the WG was called NBVPN, for network-based VPNs.

There was in fact no need for the framework and requirements drafts:
from day one there was no objective to create the best/optimum
approach, and the WG never debated the question of which one is
better, VR or 2547. That was pretty much left to the market to
decide.

I am pretty sure that feedback on this was given to the ADs/IESG
early on, and I don't understand why it was ignored
(at least from my perception).

I think the chairs were just following the advice of the IESG
on that (that the framework and requirements are necessary before
any solution is considered). 

If you look at the PPVPN charter, it included a statement
that no new protocols would be developed.
This was added because the solutions already existed (well
before the working group was created), and there was a feeling that
if the WG allowed new protocols, etc., the delay in getting
the solutions standardized for the providers would have been
much greater.

It is ironic that the WG members initially
tried to address the potential technical reasons for
delaying the work, but couldn't predict that the
IESG request to develop the framework and requirements drafts
would be the main cause of the actual delay.

And it is unfortunate that the recent PPVPN decisions just ignored,
and didn't explicitly acknowledge, that fact.

I think the IESG decision on what to do with the PPVPN WG, given
the delay situation and its impact, should have taken into account
the actual history of the working group...

Hamid.



Re: Multicast Last Mile BOF report

2003-07-16 Thread Keith Moore

] and 500 gadgets do not make a technology adopted when it has no
] business model folk can understand.

one business model that might be understandable is: you should support
multicast if/when it saves you enough bandwidth (over the same content
being sent over separate unicast streams) to make it worth your cost.
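The arithmetic behind that criterion is simple enough to sketch. The stream rate and receiver counts below are made-up illustrative numbers; real savings depend on topology and on where the replicated copies actually travel:

```python
# Toy comparison of sender-side bandwidth for one live stream,
# delivered as N unicast copies vs. one multicast copy.
# STREAM_KBPS is a made-up example rate (hypothetical).
STREAM_KBPS = 300

def unicast_kbps(receivers: int) -> int:
    """Unicast: the sender emits one full copy per receiver."""
    return STREAM_KBPS * receivers

def multicast_kbps(receivers: int) -> int:
    """Multicast: one copy leaves the sender regardless of audience size."""
    return STREAM_KBPS if receivers > 0 else 0

for n in (1, 10, 1000):
    saved = unicast_kbps(n) - multicast_kbps(n)
    print(f"{n:>4} receivers: unicast {unicast_kbps(n)} kbit/s, "
          f"multicast {multicast_kbps(n)} kbit/s, saved {saved}")
```

The saving is (N-1) times the stream rate, so it only pays off once the audience per shared link is large enough to outweigh the capex/opex of deploying multicast at all.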



Re: Multicast Last Mile BOF report

2003-07-16 Thread Masataka Ohta
Keith;

 ] and 500 gadgets do not make a technology adopted when it has no
 ] business model folk can understand.
 
 one business model that might be understandable is: you should support
 multicast if/when it saves you enough bandwidth (over the same content
 being sent over separate unicast streams) to make it worth your cost.

It means it saves nothing for receivers, so they have no reason to
want to receive a multicast stream.

Then, senders are not motivated to send a multicast stream.

Also, the bandwidth saving is beneficial to ISPs, so ISPs supporting
multicast should charge their customers less. That's not bad if ISPs are
competitive and backbone bandwidth is expensive.

Masataka Ohta



Re: Multicast Last Mile BOF report

2003-07-16 Thread Randy Bush
 and 500 gadgets do not make a technology adopted when it has no
 business model folk can understand.
 one business model that might be understandable is: you should support
 multicast if/when it saves you enough bandwidth (over the same content
 being sent over separate unicast streams) to make it worth your cost.

if i needed to save bandwidth, which is not demonstrated as i sell it
/sarcasm, then the fact that there is no content my customers want means
i will add capex and opex to save no bandwidth.

randy




Re: Multicast Last Mile BOF report

2003-07-16 Thread Keith Moore
On Wed, 16 Jul 2003 22:48:06 +0859 ()
Masataka Ohta [EMAIL PROTECTED] wrote:

]  one business model that might be understandable is: you should support
]  multicast if/when it saves you enough bandwidth (over the same content
]  being sent over separate unicast streams) to make it worth your cost.
] 
] It means it saves nothing for receivers that they don't want to
] receive multicast stream.
] 
] Then, senders are not motvated to send multicast stream.

maybe the ISPs supporting multicast could prioritize that traffic, thus 
providing better service on multicast than unicast, thus providing an
incentive for receivers to use multicast over unicast.



Re: Multicast Last Mile BOF report

2003-07-16 Thread Masataka Ohta
Keith;

 ]  one business model that might be understandable is: you should support
 ]  multicast if/when it saves you enough bandwidth (over the same content
 ]  being sent over separate unicast streams) to make it worth your cost.
 ] 
 ] It means it saves nothing for receivers that they don't want to
 ] receive multicast stream.
 ] 
 ] Then, senders are not motvated to send multicast stream.
 
 maybe the ISPs supporting multicast could prioritize that traffic, thus 
 providing better service on multicast than unicast, thus providing an
 incentive for receivers to use multicast over unicast.

Prioritization is orthogonal to the unicast/multicast issue.

Users will favour those ISPs which prioritize unicast and pay more money.

Other users may use the prioritized multicast for 1:1 communication.

Masataka Ohta



Re: Multicast Last Mile BOF report

2003-07-16 Thread Keith Moore
] Users will favour those ISPs which prioritize unicast and pay more money.

more money will nearly always buy more bandwidth.

] Other users may use the prioritized multicast for 1:1 communication.

the trick is to only prioritize multicast for which there are enough listeners.
but this seems doable.




Re: Multicast Last Mile BOF report

2003-07-16 Thread Keith Moore
]  and 500 gadgets do not make a technology adopted when it has no
]  business model folk can understand.
]  one business model that might be understandable is: you should support
]  multicast if/when it saves you enough bandwidth (over the same content
]  being sent over separate unicast streams) to make it worth your cost.
] 
] if i needed to save bandwidth, which is not demonstrated as i sell it
] /sarcasm,

if your customers pay flat rate and you can deliver them the same or better
service at less cost to you, isn't this at least some incentive?
(perhaps not enough to overcome your costs?)

if your customers pay per megabyte and they can get better service at
less cost by using multicast, isn't this also an incentive of a different
sort?

] that there is no content my customers want means i will add
] capex and opex to save no bandwidth.

indeed, lack of content is a problem.  multicast won't fly unless/until 
there is content available through it that lots of people want to get.  
IETF meetings are great for me but they have a limited audience overall.



Re: Multicast Last Mile BOF report

2003-07-16 Thread John Stracke
Keith Moore wrote:

maybe the ISPs supporting multicast could prioritize that traffic, thus 
providing better service on multicast than unicast, thus providing an
incentive for receivers to use multicast over unicast.
 

This would provide an incentive to game the system and use multicast 
even if you had only one receiver.

Isn't the real problem the fact that the set of streams that people 
might receive is so diverse that most links won't be carrying more than one 
copy of the unicast stream anyway? Multicast would help only at the 
sender's end, in the case of flash crowds; and the sender's ISP has no 
incentive to shrink the size of the bitpipe the sender needs to buy.

--
/==\
|John Stracke  |[EMAIL PROTECTED]   |
|Principal Engineer|http://www.centive.com |
|Centive   |My opinions are my own.|
|==|
|But she calls her ship _Mercy of the Goddess_! Kali. Oh.|
\==/




Re: Multicast Last Mile BOF report

2003-07-16 Thread Randy Bush
let's get real here.  though we have been pushing it since i was
in nappies (and i have deployed it in isp(s), and i also donate
to hopeless progressive causes), there is far more email traffic
about multicast than there is actual multicast traffic on the WAN
internet (yes, it is heavily used on a few LANs).

randy




re the plenary discussion on partial checksums

2003-07-16 Thread Keith Moore
I'm not sure what the problem is here:

- UDP checksums are optional
- optional checksums probably aren't applicable to TCP
- IPv4 has IP header checksums
- IPv6 could define an option for IP header checksum
  (could be applicable to IPv4 also, if you want a stronger checksum
   for the header)
- whether L2 drops packets whose checksum fails is an L2 matter;
  the ability to turn this on or off is an L2 feature
- if the app needs a different integrity check than all-or-nothing,
  this is something that belongs in the app protocol

so it seems like what we need is a bit in the IP header to indicate that
L2 integrity checks are optional, and to specify for various kinds of
IP-over-FOO how to implement that bit in FOO.  and maybe that bit could
go in the IP option to provide a stronger checksum than normally exists
in the IP header (so that the header, at least, is protected)
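For reference, the IPv4 header checksum and the (optional) UDP checksum mentioned above both use the same RFC 1071 ones'-complement sum. A minimal sketch, with a worked example header (the byte values are just sample data):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071: ones'-complement sum of 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"                     # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Example IPv4 header with the checksum field (bytes 10-11) zeroed out:
hdr = bytes.fromhex("4500 0073 0000 4000 4011 0000 c0a8 0001 c0a8 00c7")
print(hex(internet_checksum(hdr)))          # -> 0xb861

# A receiver verifies by summing the header *including* the checksum;
# an intact header yields 0.
full = bytes.fromhex("4500 0073 0000 4000 4011 b861 c0a8 0001 c0a8 00c7")
print(internet_checksum(full))              # -> 0
```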

Keith



Re: re the plenary discussion on partial checksums

2003-07-16 Thread John Stracke
Keith Moore wrote:

so it seems like what we need is a bit in the IP header to indicate that
L2 integrity checks are optional, and to specify for various kinds of
IP-over-FOO how to implement that bit in FOO.
 

How would an app know to set this bit? The problem is that different L2s 
will have different likelihoods of corruption; you may decide that it's 
safe to set the bit on Ethernet, but not on 802.11*.  And, in general, 
the app doesn't know all of the L2s that may be involved when it sends a 
packet.

--
/==\
|John Stracke  |[EMAIL PROTECTED]   |
|Principal Engineer|http://www.centive.com |
|Centive   |My opinions are my own.|
|==|
|Linux: the Unix defragmentation tool. |
\==/




Re: re the plenary discussion on partial checksums

2003-07-16 Thread Bill Strahm
Ok, I have to ask a silly question (not like that would be a first on this list):

Why, oh WHY would I want to receive a known corrupted packet?

Are we talking about someone thinking they can eke out 1% more performance
because their phy/mac can cut over immediately rather than wait for the whole
packet and verify the checksum (or compute it on the sending side)?

I guess I don't see the benefit: rather than a hardware L2 check, you
rely on something at a layer further up (possibly an L7 protocol) to
fail a check and drop the frame there?

I wish I had been there to see the discussion

Bill


On Wed, Jul 16, 2003 at 04:21:47PM -0400, John Stracke wrote:
 Keith Moore wrote:
 
 so it seems like what we need is a bit in the IP header to indicate that
 L2 integrity checks are optional, and to specify for various kinds of
 IP-over-FOO how to implement that bit in FOO.
   
 
 How would an app know to set this bit? The problem is that different L2s 
 will have different likelihoods of corruption; you may decide that it's 
 safe to set the bit on Ethernet, but not on 802.11*.  And, in general, 
 the app doesn't know all of the L2s that may be involved when it sends a 
 packet.
 
 
 



Re: re the plenary discussion on partial checksums

2003-07-16 Thread Karl Auerbach
On Wed, 16 Jul 2003, Keith Moore wrote:

 so it seems like what we need is a bit in the IP header to indicate that
 L2 integrity checks are optional

A lot of folks seem to forget that from the point of view of IP L2
includes the busses between memory and the L2 network interface.  There
have been more than a few recorded cases where packet errors were
introduced as the packet flowed in or out of memory, unprotected by link
CRCs.

To my way of thinking we don't need a bit in the IP header, we need a bit
in the heads of implementors to remind them that relying on link-by-link
protection can be dangerous even if the links have strong CRCs.

 ... IP option to provide a stronger checksum than normally exists

The last time I saw a comparison of checksum algorithm strengths was back 
in the OSI days, when the IP checksum was compared to the OSI Fletcher 
checksum (my memory is that the IP checksum came in second).
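For the curious, a sketch of the Fletcher checksum Karl mentions. Its second running sum weights each byte by position, which is one reason it compares favorably to the plain ones'-complement IP sum (an additive sum can't detect reordered data):

```python
def fletcher16(data: bytes) -> int:
    """Fletcher's checksum: a plain byte sum plus a running
    sum-of-sums, both mod 255."""
    a = b = 0
    for byte in data:
        a = (a + byte) % 255
        b = (b + a) % 255
    return (b << 8) | a

print(hex(fletcher16(b"abcde")))                   # -> 0xc8f0
# Position sensitivity: swapping two bytes changes the result,
# which a simple additive checksum would miss.
print(fletcher16(b"ab") == fletcher16(b"ba"))      # -> False
```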

--karl--





Re: re the plenary discussion on partial checksums

2003-07-16 Thread Keith Moore
  so it seems like what we need is a bit in the IP header to indicate
  that L2 integrity checks are optional
 
 A lot of folks seem to forget that from the point of view of IP L2
 includes the busses between memory and the L2 network interface. 
 There have been more than a few recorded cases where packet errors
 were introduced as the packet flowed in or out of memory, unprotected
 by link CRCs.

For apps that tolerate lossage (or more precisely, for apps that work
better in the face of some transmission errors than they do if all
packets with transmission errors are dropped), it doesn't matter whether
the errors occur in the memory-to-interface link or somewhere else -
they'll deal with the errors no matter where they occur.  

Of course, the apps that can't tolerate lossage won't set the "lossage
is okay" bit, and they'll continue to expect that the packets that do
arrive, arrive intact.   For one particular L2 technology X this might
simply mean that packets that don't have that bit set, but do have
errors, are dropped.   For another L2 technology Y it might mean that if
that bit is not set then the IP-over-Y spec will require FEC or
link-level retry or both to make sure that those packets have a
reasonable probability of getting there intact.

 To my way of thinking we don't need a bit in the IP header, we need a
 bit in the heads of implementors to remind them that relying on
 link-by-link protection can be dangerous even if the links have strong
 CRCs.

Actually, it seems like we need a bit in the heads of people who don't
understand that 

- some kinds of links have inherently high error rates, 
- some apps are capable of dealing with less-than-perfect data,
- adding FEC and/or link-level retry to get error rates down to the
  level we're accustomed to from wire or fiber carries with it a
  substantial penalty in bandwidth and/or delay
- we'd like to be able to use those kinds of links with IP,
- we'd like to be able to run those apps over IP, and over those
  links, without paying the bandwidth or delay penalty for
  apps that don't need it.
- we'd like a stable spec for this so we can carve it in stone, 
  (er, silicon)
- since it's going to be carved in stone (silicon) we would do well
  to get it right.

Yes, this is a change to IP, and to the IP architecture.
But it's not rocket science, and it doesn't have to affect things that
don't use it explicitly.



Re: re the plenary discussion on partial checksums

2003-07-16 Thread Keith Moore
 so it seems like what we need is a bit in the IP header to indicate
 that L2 integrity checks are optional, and to specify for various
 kinds of IP-over-FOO how to implement that bit in FOO.
 
 How would an app know to set this bit? The problem is that different
 L2s will have different likelihoods of corruption; you may decide that
 it's safe to set the bit on Ethernet, but not on 802.11*.  And, in
 general, the app doesn't know all of the L2s that may be involved when
 it sends a packet.

I'm not sure that the app needs to know how much lossage to expect, or
to specify how much lossage is too much.  It just wants the bits, errors
included.  Depending on the app's needs it might dynamically adapt to
varying degrees of error by adding its own FEC, e2e retransmission,
and/or interleaving, and this probably works better than trying to have
the app either determine statically how much error it can expect or by
having the app specify to the network how much error is acceptable.

I suppose we could define a maximum error rate (say 15%) that
IP-over-FOO should be designed to provide if the "lossage okay" bit is
set.  But practically speaking I doubt it's necessary to do that;
links that are designed to support lossy traffic will already have
enough FEC or whatever to suit that kind of traffic.

The biggest questions I have are:

- where to put this bit? 

- are there unintended consequences of doing this that we can foresee?
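The "lossage okay" idea is essentially a checksum-coverage length, as UDP-Lite (RFC 3828, published the following year) later standardized: protect the headers, and let bit errors in the payload tail through to the application. A toy sketch of that receive-side policy (the names and the trivial additive sum are made up for illustration, not any real implementation):

```python
def accept(packet: bytes, coverage: int, expected: int) -> bool:
    """Hypothetical receive-side check: only the first `coverage`
    bytes are covered, so a damaged payload tail is still delivered.
    (A toy additive sum stands in for a real checksum.)"""
    return sum(packet[:coverage]) & 0xFFFF == expected

header, payload = b"HDR!", b"lossy media payload"
cov = len(header)
cksum = sum(header) & 0xFFFF            # sender covers only the header

garbled = header + b"lossy mXdia payload"    # bit error past the coverage
print(accept(garbled, cov, cksum))           # -> True: the app gets the bits
print(accept(b"HDQ!" + payload, cov, cksum)) # -> False: header damage drops
```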


Keith



Re: re the plenary discussion on partial checksums

2003-07-16 Thread Keith Moore
 Why, oh WHY would I want to receive a known corrupted packet ?

why oh why would you ever want to talk with someone over a phone that
occasionally clicked or popped?

why or why would you ever want to watch a video with snow, or an
occasional missing pixel, or even an occasional missing frame?

sometimes getting a lossy packet is better than not getting one at all,
or having to wait for a retransmission.



Response from a former IMPP Chair (Re: Last Call: A Model forPresence and Instant Messaging to Proposed Standard)

2003-07-16 Thread Harald Tveit Alvestrand
WRITTEN IN MY ROLE AS FORMER IMPP CHAIR

Dave and Marshall,

I have issues with your presentation of reality.
I am unable to see technical issues of real substance in your comments; the 
issues all seem to be procedural, and revolve around the missing update to 
the WG charter.
Since I was one of the WG chairs at the two IETF meetings immediately 
following the group of nine effort, I accept responsibility for this 
procedural error. The charter should have been updated.

However, I must take issue with some of your presentation of how the WG 
decided issues, as well as some of your specific issues.
In particular, your statement:

The problem with the group's dichotomy between end-to-end vs. gateway
goals was discussed repeatedly.  Only towards the end of the working
group's effort did this appear to become resolved -- in the direction of
an end-to-end content standard.  However, even then it was clear that the
few remaining participants in the group continued to hold very different
understandings of the goal.
is directly contradicting the minutes of the December 15, 2000 meeting of 
the WG in San Diego, which say:

ROOT: Should messages passed around be CPIM-compliant?
Q: do we think that we need to be able to pass around a pile of bits that
can be signed:
A: Rough consensus in the room (all but 1 raised hand yes)
Q: Should CPIM specify structure of this message?
A: All think it should specify the format.
The exact format was thereafter discussed in March 2001 and August 2001.
Because of (I believe) the failure of the editor to update the core CPIM 
document between February 2001 and November 2001, the group did not meet in 
December 2001; the format was discussed again in March 2002, July 2002,
and November 2002. At no time do the minutes record a 
decision of the group to revisit or reverse its decision to support a 
single format for interoperability.

For a further instance, consider this complaint:

How can duration have any useful meaning when there is no baseline
reference for the starting point or ending point of the duration and
when Internet exchange latencies are completely unpredictable? In other
words, when a participant receives a duration value from another
participant, what does it mean? Duration relative to what point of time?
We do not know how many seconds it took for the service data to reach
the receiving participant.
The decision to use intervals was a very visible one in the WG, and none of 
the minutes I have read show this decision being challenged at any working 
group meeting; indeed, instead we see long wrangles over the meaning of 
duration = 0, which is hard indeed to express in a format that has a 
baseline reference.
While I have not found this in the minutes, I believe the justification for 
using duration was the same as that used for the sysUptime in SNMP: That 
one should NOT require the elements of an IM system to have synchronized 
clocks.

A common thread running through the minutes of IMPP meetings is the 
perception that all major issues have been settled:

San Diego, December 1999:

The goal is that the group will finish its work and then go away,
hopefully before the next IETF.
London August 2001:

LD: We've reached some kind of closure on all major points; really hope
this will have been the last IMPP meeting (i.e., that we can wrap this
into revised and finished documents before the next IETF).
My conclusions:

The working group has suffered from very slow document updates, a bad error 
in judgment (mine) re charter update, and repeated re-raising of old closed 
issues (for instance, at Atlanta in November 2002, Dave Crocker could be 
heard re-raising the issue of the need for loop control, which the group 
had discussed and decided in December 2000, choosing hopcount as the 
preferred mechanism in March 2001).

However, I find the criticisms raised against the process leading to the 
forwarding of these documents to the IESG to be very much off target.

  Harald Alvestrand
  Speaking as a former chair of the IMPP group