I wasn't aware of the treatment of multicast packets as less than best effort
in wireless transmission.  That is not exactly intuitive, given that radio is
inherently broadcast.

Regarding relative loss probability, we may not be talking about the same thing.

I am referring to the increased probability of losing at least one packet
when the packets are replicated.

In the wired world, things that determine the probability of dropping one
packet (corruption, congestion, etc.) likely determine the probability of
dropping others.

What I was saying is that if there is a 0.0001% chance of dropping a unicast
packet, and (assuming the same probability for each multicast packet) a
0.0001% chance of dropping each of 3 replicated multicast packets, there is
about a 0.0003% probability of dropping at least one of the 3 multicast
packets.

This is only an approximation, because the probabilities are not precisely
additive: the exact figure is 1 - (1 - p)^3 for a per-packet loss probability
p, rather than 3 * p.  For values this small, however, it is a reasonably
close approximation.
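
To make the arithmetic concrete, here is a quick check of the two figures
(a minimal sketch in Python; the 0.0001% value is just the illustrative
number from above, not measured data):

    # Probability of losing at least one of n replicated packets,
    # assuming independent losses with per-packet probability p.
    p = 0.0001 / 100            # 0.0001% expressed as a probability
    n = 3                       # number of multicast replicas
    exact = 1 - (1 - p) ** n    # exact chance of at least one loss
    approx = n * p              # the additive approximation
    print(exact, approx)        # ~2.999997e-06 vs. 3e-06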

This might not be obvious in deployments, given the relatively small amount
of real multicast traffic and the very large amount of data that will need
to be accumulated for this difference to be seen as more than statistical
noise.

There is also a counting difficulty; in gathering the data, it would be
necessary to correlate multicast replications to determine the total loss
probability for the group.

Without this correlation, each multicast packet loss would have to be
considered independently of its corresponding replicas.
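
As an illustration of the bookkeeping this implies (a hypothetical sketch in
Python; the event format and the group/sequence identifiers are invented for
the example):

    from collections import defaultdict

    # Each event records (group, packet sequence, replica, delivered?).
    events = [
        ("G1", 42, "replica-a", True),
        ("G1", 42, "replica-b", False),  # one lost replica ...
        ("G1", 42, "replica-c", True),
    ]

    # Collapse per-replica outcomes onto the (group, sequence) pair.
    lost = defaultdict(bool)
    for group, seq, replica, delivered in events:
        lost[(group, seq)] |= not delivered

    # ... means packet 42 counts as lost for group G1 as a whole.
    print(sum(lost.values()))  # 1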

It seems possible to me that you are referring to the probability of loss
for each multicast packet considered independently of the others.  This,
too, is not intuitive, given that losing any multicast packet in a multicast
service represents a failure of the service.

--
Eric

-----Original Message-----
From: Mikael Abrahamsson [mailto:swm...@swm.pp.se] 
Sent: Thursday, August 06, 2015 10:27 AM
To: Eric Gray
Cc: Glenn Parsons; Alia Atlas; Acee Lindem (acee); Toerless Eckert (eckert); 
Homenet; Dan Romascanu (droma...@avaya.com)
Subject: RE: [homenet] Despair
Importance: High

On Thu, 6 Aug 2015, Eric Gray wrote:

> It strikes me as something of a mistake generally to assume that 
> multicast is as reliable as unicast.
>
> Unicast reliability depends on the mechanism(s) used to ensure 
> reliability.  Unicast traffic tends to get lost every now and then.

Nobody doubts that packets get lost, but the general tendency since IP
networking was invented has been that multicast delivery of packets wasn't
especially worse than unicast. Packets get lost, but generally less than 1%
get lost, and multicast and unicast are affected equally.

> All the same factors that affect unicast packet delivery also affect 
> delivery of each packet with multicast.  Hence multicast reliability 
> should be worse than unicast reliability by an amount roughly 
> proportional to the amount of packet replication necessary to support 
> it.

Hm, care to elaborate? That seems a lot worse than my experience in deploying 
networks would tell me.

> Each replicated packet is as likely to be lost as any unicast packet. 
> Loss of one or more packets should be expected to be more likely with 
> multiple packets than with a single packet.

But it's still only a single packet per link.

> Multicast reliability, even when considered at the link level and 
> assuming replication is not required in transmission of multicast 
> packets onto the link itself, is only slightly better.  As 
> full-duplex, point-to-point connectivity becomes increasingly likely 
> (fat yellow cables are relatively rare any more), data replication 
> still occurs - just not at the level where a router sending packets 
> onto the link is likely to be aware of it.

Correct; as of 20 years ago or so we no longer use 10BASE5, so L2 devices do
the L2 replication.

> Hence it is interesting in this discussion that we are talking about 
> an assumption that seems broken at the start.
>
> Have I missed something?

Well, 802.11 treats multicast (and broadcast) packets as second-rate
citizens; I am not aware of any other L1/L2 technology that does this. 3GPP
uses basically a point-to-point tunnel, so unicast and multicast are treated
in very similar fashion, without multicast being at a disadvantage.

So the IETF needs to sit down and work out a strategy for how its protocols
should work going forward: whether everybody who designs protocols in the
IETF should be told that multicast and broadcast "doesn't work properly",
and should act accordingly.

What probably needs to happen is that over time, the IETF should try to use 
less multicast, but on the other hand, 802.11 really needs to make sure that 
multicast works a lot better than it does today.

-- 
Mikael Abrahamsson    email: swm...@swm.pp.se

_______________________________________________
homenet mailing list
homenet@ietf.org
https://www.ietf.org/mailman/listinfo/homenet
