Hanlin,

I’ve been working on Envoy/VCL integration for a while now, and Ping Yu has 
joined me in this effort. VCL works well with a large, multi-threaded 
application like Envoy, although the integration has had its challenges. We are 
getting very close to upstreaming the code.

-stephen

From: Florin Coras <[email protected]>
Date: Thursday, October 31, 2019 at 10:26 PM
To: wanghanlin <[email protected]>, "Stephen Belair (sbelair)" 
<[email protected]>, "Yu, Ping" <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: [vpp-dev] Can "use mq eventfd" solve epoll wait high cpu usage 
problem?

Hi Hanlin,

Stephen and Ping have made a lot of progress with Envoy and VCL, but I’ll let 
them comment on that.

Regards,
Florin


On Oct 31, 2019, at 9:44 PM, wanghanlin <[email protected]> wrote:

OK, I got it. Thanks a lot.
By the way, can VCL be adapted to Envoy? Is there any progress on that?

Regards,
Hanlin
On 11/1/2019 12:07, Florin Coras <[email protected]> wrote:
Hi Hanlin,

If a worker’s mq uses eventfds for notifications, we could nest it in 
libc_epfd, i.e., the epoll fd we create for the linux fds. So, if an app's 
worker calls epoll_wait, in ldp we can epoll_wait on libc_epfd, and if we get 
an event on the mq’s eventfd, we can call vls_epoll_wait with a 0 timeout to 
drain the events from vcl.
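
Roughly, the idea would look something like the sketch below (illustrative 
only: mq_efd, its registration, and nested_epoll_wait are made-up names; 
vls_epoll_wait and libc_epoll_pwait are used as in the ldp snippet further 
down; vls_handle_t is assumed to come from the vcl headers):

  #include <signal.h>
  #include <stdint.h>
  #include <unistd.h>
  #include <sys/epoll.h>

  /* Assumes mq_efd, the worker mq's eventfd, was registered once in
   * libc_epfd:
   *   struct epoll_event e = { .events = EPOLLIN, .data.fd = mq_efd };
   *   epoll_ctl (libc_epfd, EPOLL_CTL_ADD, mq_efd, &e);
   */
  static int
  nested_epoll_wait (int libc_epfd, int mq_efd, vls_handle_t ep_vlsh,
                     struct epoll_event *events, int maxevents,
                     int timeout, const sigset_t *sigmask)
  {
    int i, n;

    /* One blocking wait covers both the linux fds and the mq
     * notification, so no busy polling is needed. */
    n = libc_epoll_pwait (libc_epfd, events, maxevents, timeout, sigmask);
    for (i = 0; i < n; i++)
      if (events[i].data.fd == mq_efd)
        {
          uint64_t val;
          read (mq_efd, &val, sizeof (val)); /* clear the notification */
          /* Drain vcl events with a 0 timeout.  A real implementation
           * would merge these with the linux events; skipped here. */
          return vls_epoll_wait (ep_vlsh, events, maxevents, 0);
        }
    return n;
  }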

Having said that, keep in mind that we typically recommend that people use vcl 
because ldp, through vls, enforces a rather strict locking policy. That is 
needed in order to avoid invalidating vcl’s assumption that sessions are owned 
by only one vcl worker. Moreover, we’ve tested ldp only against a limited set 
of applications.

Regards,
Florin


On Oct 31, 2019, at 7:58 PM, wanghanlin <[email protected]> wrote:

Do you mean that if eventfds are used, I needn't set the timeout to 0 in 
ldp_epoll_pwait?
If so, how would unhandled_evts_vector in vppcom_epoll_wait be processed in a 
timely manner? What I mean is: another thread may add events to 
unhandled_evts_vector during epoll_wait, or unhandled_evts_vector may not be 
processed completely because maxevents is reached.

Regards,
Hanlin

On 10/31/2019 23:34, Florin Coras <[email protected]> wrote:
Hi,

use_mq_eventfd will help with vcl, but as you’ve noticed it won’t help for ldp, 
because there we need to poll both vcl and linux fds. Since mutex-condvar 
notifications can’t be epolled, we have to constantly switch between the linux 
and vcl epolled fds. One option going forward would be to change ldp to detect 
whether vcl is using mutex-condvars or eventfds and, in case of the latter, 
poll the linux fds and the mq’s eventfd in a single linux epoll.
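
For reference, what makes the single-epoll option possible is that an eventfd 
is a regular kernel fd, so epoll can block on it, while a mutex-condvar pair 
has no fd at all. A minimal standalone demo (illustrative only, not vpp code):

  #include <stdio.h>
  #include <stdint.h>
  #include <unistd.h>
  #include <sys/epoll.h>
  #include <sys/eventfd.h>

  int
  main (void)
  {
    int efd = eventfd (0, 0);
    int epfd = epoll_create1 (0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
    epoll_ctl (epfd, EPOLL_CTL_ADD, efd, &ev);

    uint64_t one = 1;
    write (efd, &one, sizeof (one)); /* e.g., posted by an mq producer */

    struct epoll_event out;
    int n = epoll_wait (epfd, &out, 1, -1); /* wakes immediately */
    printf ("got %d event(s) on the eventfd\n", n);
    return 0;
  }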

Regards,
Florin


On Oct 31, 2019, at 5:54 AM, wanghanlin <[email protected]> wrote:

Hi all,
I found that an app using VCL's epoll_wait still occupies 70% CPU with the 
"use_mq_eventfd" configuration, even with very little traffic.
I then investigated the code in ldp_epoll_pwait, where vls_epoll_wait is 
called with a timeout equal to 0.
So I have two questions:
1. What problems does "use_mq_eventfd" solve?
2. Is there any other way to decrease CPU usage?
Thanks!

The code in ldp_epoll_pwait:

  /* Neither wait may block, since both vcl sessions and linux fds must
   * be serviced: each side is polled with a 0 timeout, so the loop
   * spins until an event arrives or max_time is reached. */
  do
    {
      if (!ldpw->epoll_wait_vcl)
        {
          /* Poll vcl sessions without blocking. */
          rv = vls_epoll_wait (ep_vlsh, events, maxevents, 0);
          if (rv > 0)
            {
              /* Give linux fds a turn on the next call. */
              ldpw->epoll_wait_vcl = 1;
              goto done;
            }
          else if (rv < 0)
            {
              errno = -rv;
              rv = -1;
              goto done;
            }
        }
      else
        ldpw->epoll_wait_vcl = 0;

      if (libc_epfd > 0)
        {
          /* Poll linux fds, also without blocking. */
          rv = libc_epoll_pwait (libc_epfd, events, maxevents, 0, sigmask);
          if (rv != 0)
            goto done;
        }
    }
  while ((timeout == -1) || (clib_time_now (&ldpw->clib_time) < max_time));

