[vpp-dev] VCL: TLS in an open TCP session

2019-09-16 Thread Max A. via Lists.Fd.Io
Hello,

Is it possible to switch to using TLS in an already open TCP session using VCL?
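For context, VCL normally fixes the transport when the session is created, via 
the proto argument, so there is no obvious upgrade path. A minimal sketch of 
the usual TLS setup, assuming VPPCOM_PROTO_TLS is available in the build, with 
error handling omitted:

  #include <vcl/vppcom.h>

  /* Sketch: the transport (TLS vs. plain TCP) is chosen at session
   * creation time, not upgraded afterwards. */
  int open_tls_session (vppcom_endpt_t * server_ep)
  {
    int sh = vppcom_session_create (VPPCOM_PROTO_TLS, 0 /* blocking */);
    if (sh < 0)
      return sh;
    /* The TLS handshake is driven by VPP as part of connect. */
    return vppcom_session_connect (sh, server_ep);
  }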

Thanks.


Re: [vpp-dev] vppcom_session_connect blocking or non blocking

2019-09-05 Thread Max A. via Lists.Fd.Io
Hi Florin,

I'll check it out soon.

Thank you very much!


>On Thursday, September 5, 2019 at 1:04 +03:00, Florin Coras wrote:
>
>Hi Max, 
>
>Here’s the patch that allows non-blocking connects [1]. 
>
>Florin
>
>[1]  https://gerrit.fd.io/r/c/vpp/+/21610
>
>>On Aug 15, 2019, at 7:41 AM, Florin Coras via Lists.Fd.Io < 
>>fcoras.lists=gmail@lists.fd.io > wrote:
>>Hi Max,
>>
>>Not at this time. It should be possible with a few changes for nonblocking 
>>sessions. I’ll add it to my list, in case nobody else beats me to it. 
>>
>>Florin
>>
>>>On Aug 15, 2019, at 2:47 AM, Max A. via Lists.Fd.Io < 
>>>max1976=mail...@lists.fd.io > wrote:
>>>
>>>Hello,
>>>
>>>Can the vppcom_session_connect() function run in non-blocking mode? I see 
>>>that there is a wait for the connection result in the 
>>>vppcom_wait_for_session_state_change function. Is it possible to get the 
>>>result of the connection using vppcom_epoll_wait?
>>>
>>>Thanks.
>


-- 
Max A.


[vpp-dev] vppcom_session_connect blocking or non blocking

2019-08-15 Thread Max A. via Lists.Fd.Io
Hello,

Can the vppcom_session_connect() function run in non-blocking mode? I see that 
there is a wait for the connection result in the 
vppcom_wait_for_session_state_change function. Is it possible to get the result 
of the connection using vppcom_epoll_wait?
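For illustration, a minimal sketch of how a non-blocking connect could be 
polled with vppcom_epoll_wait, once non-blocking connects are supported (per 
the patch referenced earlier in this thread). Treating connect completion as 
EPOLLOUT is an assumption here, not something confirmed in the thread:

  #include <sys/epoll.h>
  #include <vcl/vppcom.h>

  /* Sketch: create the session non-blocking, issue the connect, then
   * poll the epoll session for the result. Error handling omitted. */
  int connect_nonblocking (vppcom_endpt_t * server_ep)
  {
    int sh = vppcom_session_create (VPPCOM_PROTO_TCP, 1 /* nonblocking */);
    int eh = vppcom_epoll_create ();
    struct epoll_event ev = { .events = EPOLLOUT, .data.u32 = sh };

    vppcom_epoll_ctl (eh, EPOLL_CTL_ADD, sh, &ev);
    vppcom_session_connect (sh, server_ep);  /* should return immediately */

    /* Wait up to 5 seconds for the connect result. */
    struct epoll_event out;
    int n = vppcom_epoll_wait (eh, &out, 1, 5.0 /* seconds */);
    return (n == 1 && (out.events & EPOLLOUT)) ? 0 : -1;
  }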

Thanks.


Re: [vpp-dev] TCP host stack & small size fifo

2019-07-28 Thread Max A. via Lists.Fd.Io
Hi Florin,

I simplified the application. It sends the request and reads all the data from 
the server using an 8 KB buffer. The fifo size is set to 8 KB. In the attached 
dump [1] you can see that packet number 14 overflows the TCP receive window. 
My application reports the size of each block it receives. When the TCP window 
fills up, the application receives 7240 bytes from VPP; after that, it receives 
blocks no larger than 6 KB and the problem does not recur. At what point does 
VPP decide that the buffer is full, before I have read the data out with the 
read function?
There is also a slightly different question: is the fifo allocated for the 
entire lifetime of the session?

Thanks.

[1]  https://drive.google.com/open?id=1Q__5UgnBAKoRGfaGaqIxAVNWqoSCzIPZ  
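For reference, the read loop described above is essentially the following 
sketch (a reconstruction, not the original source; sh is an already-connected 
VCL session handle):

  #include <stdio.h>
  #include <vcl/vppcom.h>

  /* Sketch: drain the session with an 8 KB buffer and report each block,
   * e.g. 7240 bytes right after the TCP window has filled. */
  void drain_session (int sh)
  {
    char buf[8192];
    int n;
    while ((n = vppcom_session_read (sh, buf, sizeof (buf))) > 0)
      printf ("received block of %d bytes\n", n);
  }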

-- 
Max A.


Re: [vpp-dev] TCP host stack & small size fifo

2019-07-26 Thread Max A. via Lists.Fd.Io
Hi Florin,


>
>That’s an important difference because in case of the proxy, you cannot 
>dequeue the data from the fifo before you send it to the actual destination 
>and it gets acknowledged. That means, you need to wait at least one rtt (to 
>the final destination) before you can make space in the fifo. If the final 
>destination consuming the data is slower than the sender, you have an even 
>bigger problem.
>
>Try doing a simple wget client, builtin or with vcl, and you’ll note that data 
>should be dequeued much faster than in the proxy case. 
I made a simple GET application and got exactly the same result [1]. If 
necessary, I can send you the source of the application; it builds both 
against VCL and against Linux sockets.

Thanks.

[1] https://drive.google.com/open?id=1pkymyLtpaiEwYstcdgb-pzHqWEuTDzCF
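For reference, such a GET client under VCL is essentially the following sketch 
(a reconstruction, not the actual source; the request string and endpoint are 
placeholders):

  #include <stdio.h>
  #include <string.h>
  #include <vcl/vppcom.h>

  /* Sketch of a wget-style client: connect, send one GET, drain the reply. */
  int http_get (vppcom_endpt_t * server_ep, const char *request)
  {
    int sh = vppcom_session_create (VPPCOM_PROTO_TCP, 0 /* blocking */);
    if (sh < 0 || vppcom_session_connect (sh, server_ep) < 0)
      return -1;
    vppcom_session_write (sh, (void *) request, strlen (request));

    char buf[8192];
    int n, total = 0;
    while ((n = vppcom_session_read (sh, buf, sizeof (buf))) > 0)
      total += n;
    printf ("read %d bytes total\n", total);
    return vppcom_session_close (sh);
  }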
-- 
Max A.


Re: [vpp-dev] TCP host stack & small size fifo

2019-07-25 Thread Max A. via Lists.Fd.Io
Hi Florin,

I tried to increase the buffer size to 128k. The problem still arises, only 
less often [1].  The smaller the buffer, the more often the problem occurs.

Thanks.

[1]  https://drive.google.com/open?id=1KVSzHhPscpSNkdLN0k2gddPJwccpguoo  

-- 
Max A.


Re: [vpp-dev] TCP host stack & small size fifo

2019-07-25 Thread Max A. via Lists.Fd.Io
Hi Florin,


>As explained above, as long as the sender is faster, this will happen. Still, 
>out of curiosity, can you try this [1] to see if it changes linux’s behavior 
>in any way? Although, I suspect the linux’s window probe timer, after a zero 
>window, is not smaller than min rto (which is the 200 ms you’re seeing). 
>
>[1]  https://gerrit.fd.io/r/c/20830/

Unfortunately, nothing has changed [1].

>Therefore, is the data read by the application much faster or is the 
>advertised rcv window unrelated to the amount of data buffered? Obviously, if 
>the latter, the actual buffer is larger than what you’ve configured. Also, 
>does mtcp act as proxy in this case as well? 

The mtcp application works as a simple wget client.

The problem is not that we send data slowly, but that we drain data from the 
Linux stack slowly, which causes it to pause for 0.2 seconds. At the same 
time, VPP reports errors like “Segment not in the receive window”. 


[1] https://drive.google.com/open?id=1pC8yeQyldyysuloSc8rulhVC9Y0xs-Qr

-- 
Max A.


Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Max A. via Lists.Fd.Io
Hi Florin,


>
>Well, the question there is how large are the rx buffers. If you never see a 
>zero rcv window advertised to the sender, I suspect the rx buffer is large 
>enough to sustain the throughput. 
At the link [1] you can view a dump of downloading the same file from the same 
Linux server using the mtcp host stack (the buffer size was set to 8192 
bytes). As you can see from the dump, there is not a single buffer overflow, 
and in this case, even with an 8 KB buffer, we get good throughput.

Thanks.

[1] https://drive.google.com/open?id=15-sEAQ5BVCVgi36pqXD3cRX_mUxRk9r1

-- 
Max A.


Re: [vpp-dev] TCP host stack & small size fifo

2019-07-24 Thread Max A. via Lists.Fd.Io

Hi Florin,

I made a simple epoll TCP proxy (using VCL) and saw the same behavior.

I increased the fifo size to 16k, but got exactly the same effect. A full dump 
for a session with a 16k buffer can be obtained at [1] (192.168.0.1 is the 
interface on VPP, 192.168.0.200 is the Linux host running nginx).

Maybe we should not allow the buffer to fill completely?

P.S. I tested several host stacks that work with DPDK, and only VPP shows this 
behavior.

Thanks.

[1]   https://drive.google.com/open?id=1JV1zSpggwEoWdeddDR8lcY3yeQRY6-B3  
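For reference, the core relay loop of such a proxy might look like this sketch 
(a reconstruction, not the actual source; peer_of() is a hypothetical helper 
that maps a session to its paired session, and accept/pairing logic is 
omitted):

  #include <sys/epoll.h>
  #include <vcl/vppcom.h>

  extern int peer_of (int sh);  /* hypothetical: map a session to its pair */

  /* Sketch of the relay loop: wait for readable sessions and copy the
   * bytes to the paired session. */
  void proxy_loop (int eh)
  {
    struct epoll_event evts[16];
    char buf[8192];

    for (;;)
      {
        int n = vppcom_epoll_wait (eh, evts, 16, -1 /* assumed: wait forever */);
        for (int i = 0; i < n; i++)
          if (evts[i].events & EPOLLIN)
            {
              int sh = evts[i].data.u32;
              int rv = vppcom_session_read (sh, buf, sizeof (buf));
              if (rv > 0)
                vppcom_session_write (peer_of (sh), buf, rv);
            }
      }
  }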

>On Wednesday, July 24, 2019 at 17:45 +03:00, Florin Coras wrote:
>
>Hi, 
>
>It seems that linux is reluctant to send a segment smaller than the mss, so it 
>probably delays sending it. Since there’s little fifo space, that’s pretty 
>much unavoidable. 
>
>Still, note that as you increase the number of sessions, if all send traffic 
>at the same rate, then their fair share will be considerably lower than the 
>maximum you can achieve on your interfaces. If you expect some sessions to be 
>“elephant flows”, you could solve the issue by growing their fifos (see 
>segment_manager_grow_fifo) from the app. The builtin tcp proxy does not 
>support this at this time, so you’ll have to do it yourself. 
>
>Florin
>
>> On Jul 24, 2019, at 1:34 AM, max1976 via Lists.Fd.Io < 
>> max1976=mail...@lists.fd.io > wrote:
>> 
>> Hello,
>> 
>> Experimenting with the fifo size, I noticed a problem: the smaller the 
>> fifo, the more often TCP window overflow errors occur (“Segment not in 
>> receive window” in VPP terminology). The dump [1] shows the data exchange 
>> between the VPP TCP proxy (192.168.0.1) and the nginx server under Linux 
>> (192.168.0.200); the rx fifo size in VPP is set to 8192 bytes. The red 
>> arrow indicates that VPP is waiting for the latest data to fill the 
>> buffer. The green arrow indicates that the Linux host stack is sending 
>> data with a significant delay.
>> This behavior significantly reduces throughput. I plan to use a large 
>> number of simultaneous sessions, so I cannot make the fifos too large. 
>> How can I solve this problem?
>> 
>> Thanks.
>> [1]  https://monosnap.com/file/XfDjcqvpofIR7fJ6lEXgoyCB17LdfY 
>


-- 
Max A.