Priscilla,

Your questions and points below are all damned good ones.  You've obviously
thought this out quite a bit more than I have (and anyway, I already own all
your books, so the incentive just wasn't there!).

I guess I don't know enough about what a "typical" client-server transaction
looks like.  For a very large transfer such as the one discussed below, I
would have assumed that we were talking about TCP at L4.  Slow start, etc.
would likely avert the overflow condition in that case.  But maybe there are
large exchanges that take place unreliably, or that use a reliability
mechanism not quite as capable as TCP?  In that case, I would guess that
packet drops and retransmissions would be the alternative.  I can see cases
where enqueued segments are retransmitted by the server before the switch
ever got them out the door, so that might trigger a backoff of some sort.
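
For what it's worth, here's a rough stab at the buffer arithmetic for your
hypothetical below (back-of-the-envelope only; I'm ignoring preamble,
inter-frame gap, and whatever shared-buffer tricks a real switch might play):

  # Rough sketch: when does the hypothetical switch start dropping?
  # Assumes a steady 100,000 pps offered load of 100-byte packets and a
  # 10-Mbps output port that drains continuously; framing overhead ignored.
  PACKET_BITS  = 100 * 8                       # 100-byte packets
  IN_RATE_PPS  = 100_000                       # offered by the server
  OUT_RATE_PPS = 10_000_000 / PACKET_BITS      # 10-Mbps port drains ~12,500 pps
  BUFFERS      = 1000                          # switch holds 1000 packets

  fill_rate    = IN_RATE_PPS - OUT_RATE_PPS    # net buffer growth, pps
  time_to_fill = BUFFERS / fill_rate           # seconds until buffers are full
  packets_sent = IN_RATE_PPS * time_to_fill    # server output by that time

  print(f"buffers full after ~{time_to_fill * 1000:.1f} ms")
  print(f"server has sent ~{packets_sent:.0f} packets by then")

If I ran that right, the buffers fill in roughly 11.4 ms and drops would
start somewhere around the 1,143rd packet the server sends, but I'd trust
your math over mine.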

I agree that pause frames probably aren't widely used or even very
effective.
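
(For anyone who wants to poke at it, the 802.3x PAUSE frame itself is a tiny
MAC Control frame.  Here's a sketch of the layout as I understand it, with a
made-up source MAC and an example pause value:)

  # Sketch of an IEEE 802.3x PAUSE frame (MAC Control, opcode 0x0001).
  # The pause time is measured in quanta of 512 bit times; 0xFFFF below is
  # just an example value ("pause as long as you can").  FCS not included.
  import struct

  PAUSE_DEST = bytes.fromhex("0180c2000001")   # reserved multicast address
  SRC_MAC    = bytes.fromhex("001122334455")   # hypothetical switch port MAC

  def build_pause_frame(pause_quanta=0xFFFF):
      frame = (
          PAUSE_DEST
          + SRC_MAC
          + struct.pack("!H", 0x8808)          # EtherType: MAC Control
          + struct.pack("!H", 0x0001)          # opcode: PAUSE
          + struct.pack("!H", pause_quanta)    # pause time, 512-bit-time units
      )
      return frame.ljust(60, b"\x00")          # pad to the 60-byte minimum

  print(build_pause_frame().hex())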

Let us know what your further thoughts/experiments reveal...


Priscilla Oppenheimer wrote:
> 
> s vermill wrote:
> > 
> > Priscilla Oppenheimer wrote:
> > > 
> > > > DeVoe, Charles (PKI) wrote:
> > > > > 
> > > > > What about this: the server tries to dump data to the
> > > > > client over the 10M pipe.  The client cannot accept it as
> > > > > fast as the server can put it out.  Having a slower line
> > > > > to the client will, in effect, cause degradation at the
> > > > > server.
> > > 
> > > I have a better answer and question than my previous
> > > wisecrack. :-) I also bumped the conversation to the top of
> > > the Web site.
> > > 
> > > Answer: The problem won't be the client not keeping up. The
> > > problem will occur at a store-and-forward switch between the
> > > server and client. (To connect 100 Mbps to 10 Mbps requires a
> > > store-and-forward device. Let's say it's a switch.)
> > > 
> > > So, the engineering question becomes: at what point will this
> > > mythical store-and-forward switch start dropping packets?
> > > 
> > > Here's a hypothetical scenario:
> > > 
> > > The server has a 100-Mbps NIC. It is connected to the switch.
> > > The client has a 10-Mbps NIC. It is also connected to the
> > > switch.
> > > 
> > > The switch has 1000 buffers. Each buffer holds a 100-byte
> > > packet.
> > > 
> > > The server is sending 100,000 packets per second as fast as it
> > > can (i.e., with no significant gap between the packets). Each
> > > packet is 100 bytes.
> > > 
> > > The switch is sending the packets out the 10-Mbps port as fast
> > > as it can.
> > > 
> > > After how many packets sent by the server will the switch
> > > start dropping packets?
> > > 
> > > A free book to anyone who gets the right answer! You must show
> > > your work. :-)
> > > 
> > > Priscilla
> > > 
> > 
> > Priscilla,
> > 
> > Ideally, the switch generates a pause frame in the direction of
> > the server and no packet loss is incurred!
> 
> Does anyone really use the IEEE 802.3 pause frame?? I know that
> with many Cisco switches, it defaults to being off and isn't
> even supported on some models. For some models, it can only be
> enabled on Gigabit Ethernet interfaces.
> 
> Do you really want Ethernet doing your flow control for you? :-)
> 
> What happens at the server if it's been flow controlled? What
> does it do with the queued packets? At this point they are
> sitting on the NIC. How many buffers does the NIC have? At what
> point does the NIC start dropping packets then? Does it let the
> sending process know?
> 
> What if it were the switch port that had been flow controlled?
> Does it just drop the packets or does it queue them up? Will
> there be a head-of-the-line blocking problem if it's got stuff
> waiting to go out that can't go out? At what point does it
> start dropping packets? (In the example, this isn't relevant,
> because presumably to fix our problem the flow control is the
> other way. The switch has told the server to shut up. But it's
> still an interesting question.)
> 
> By the way, flow control is another thing that is
> auto-negotiated, and we know how well that works.
> 
> The original poster who questioned the wisdom of a server
> running 100-Mbps and a client running 10-Mbps really was on to
> something. Some sort of flow control does have to happen
> somewhere if the server is blasting out a lot to the client,
> although it may be pretty crude and involve dropped packets.
> 
> CCIE written used to have questions like this. Maybe they
> removed them because nobody could get them right. I'm pretty
> sure I missed that question when I encountered it, but as you
> can see, it sure stuck in my mind as an interesting one! (I passed
> CCIE written years ago.)
> 
> I have some thoughts on a solution to the problem, but don't
> know if they are right... More later.....
> 
> Priscilla
> 
> > 
> > Scott



