Hi Max, 

By the looks of it, the result is different, although still not perfect. That 
is, you can now see multiple packets (more than the ~14 kB seen before) 
exchanged before the window goes to zero.

How are you reading the data in vcl, i.e., how large is your read buffer? I 
hope it’s at least around 8-14 kB.
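
Roughly, the read loop I have in mind looks like this (just a sketch against 
the vppcom API, assuming the session is already connected; the buffer size 
and the drain helper are illustrative, not from your app):

  /* Sketch: drain the rx fifo with a buffer comfortably above 8-14 kB.
   * vppcom_session_read returns the number of bytes read or a negative
   * VPPCOM error code, so the loop exits on error/would-block. */
  #include <vcl/vppcom.h>

  static char buf[16 << 10];          /* 16 kB read buffer */

  static void
  drain (uint32_t session_handle)
  {
    int n;
    while ((n = vppcom_session_read (session_handle, buf, sizeof (buf))) > 0)
      {
        /* consume the n bytes here */
      }
  }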

Also, if the Linux scheduler de-schedules vcl, data will accumulate from time 
to time and the rx fifo will fill. You can work around that by raising the 
priority of your app or by trying this out with a builtin application.
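
If you want to rule out scheduling, something like this should do it (plain 
Linux API, nothing VPP-specific; needs root or CAP_SYS_NICE, and the priority 
value 50 is arbitrary), or just run the app under chrt:

  /* Sketch: give the calling process a real-time SCHED_RR priority so
   * the scheduler is less likely to de-schedule it.
   * Shell equivalent: chrt -r 50 ./app */
  #include <sched.h>
  #include <stdio.h>

  static int
  set_rt_priority (void)
  {
    struct sched_param param = { .sched_priority = 50 };
    if (sched_setscheduler (0 /* this process */, SCHED_RR, &param) != 0)
      {
        perror ("sched_setscheduler");
        return -1;
      }
    return 0;
  }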

Florin

> On Jul 26, 2019, at 2:14 AM, Max A. <max1...@mail.ru> wrote:
> 
> Hi Florin,
> 
> 
> 
>> That’s an important difference because in case of the proxy, you cannot 
>> dequeue the data from the fifo before you send it to the actual destination 
>> and it gets acknowledged. That means you need to wait at least one rtt (to 
>> the final destination) before you can make space in the fifo. If the final 
>> destination consuming the data is slower than the sender, you have an even 
>> bigger problem.
>> 
>> Try doing a simple wget client, builtin or with vcl, and you’ll note that 
>> data should be dequeued much faster than in the proxy case.
> 
> I made a simple get application and got the exact same result [1]. If 
> necessary, I can give you the source of the application, it is built under 
> vcl and under linux.
> 
> Thanks.
> 
> [1] https://drive.google.com/open?id=1pkymyLtpaiEwYstcdgb-pzHqWEuTDzCF
> -- 
> Max A.
