OK, thanks.

Regards,

Hanlin

On 11/5/2019 00:12, Stephen Belair (sbelair) <[email protected]> wrote:

Hi Hanlin,

 

We are working on getting perf numbers and should have some comparisons in a few weeks.

Thanks,

 

-stephen

 

From: wanghanlin <[email protected]>
Date: Friday, November 1, 2019 at 6:50 PM
To: "Stephen Belair (sbelair)" <[email protected]>
Cc: Florin Coras <[email protected]>, "Yu, Ping" <[email protected]>, "[email protected]" <[email protected]>
Subject: Re: [vpp-dev] Can "use mq eventfd" solve epoll wait high cpu usage problem?

 

Hi Stephen,

That's great. Is there any performance data comparing it with the kernel path yet?

 

Regards,

Hanlin

 

Sent from Netease Mail Master

Hanlin,

 

I’ve been working on Envoy/Vcl integration for a while now and Ping Yu has joined me in this effort. Vcl works fine with a large, multi-threaded application like Envoy, although the integration has had its challenges. We are getting very close to trying to upstream the code.

 

-stephen

 

From: Florin Coras <[email protected]>
Date: Thursday, October 31, 2019 at 10:26 PM
To: wanghanlin <[email protected]>, "Stephen Belair (sbelair)" <[email protected]>, "Yu, Ping" <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: [vpp-dev] Can "use mq eventfd" solve epoll wait high cpu usage problem?

 

Hi Hanlin, 

 

Stephen and Ping have made a lot of progress with Envoy and VCL, but I’ll let them comment on that. 

 

Regards, 

Florin




On Oct 31, 2019, at 9:44 PM, wanghanlin <[email protected]> wrote:

 

OK, I got it. Thanks a lot.

By the way, can VCL work with Envoy? Has there been any progress on this?

 

Regards,

Hanlin


On 11/1/2019 12:07, Florin Coras <[email protected]> wrote:

Hi Hanlin, 

 

If a worker's mq uses eventfds for notifications, we could nest it in libc_epfd, i.e., the epoll fd we create for the linux fds. So, when an app's worker calls epoll_wait, in ldp we can epoll_wait on libc_epfd, and if we get an event on the mq's eventfd, we can call vls_epoll_wait with a 0 timeout to drain the events from vcl. 
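
For illustration, a rough sketch of what that nesting could look like (this is not the actual ldp code: mq_evfd, MQ_EVFD_MARKER, nest_mq_eventfd and wait_nested are made-up names, and the vls_epoll_wait prototype is only assumed from the snippet quoted later in this thread):

  #include <sys/epoll.h>

  /* assumed prototype; the real declaration lives in the vcl/vls headers */
  extern int vls_epoll_wait (long ep_vlsh, struct epoll_event *events,
                             int maxevents, double wait_for_time);

  /* hypothetical cookie used to recognize the mq eventfd among the linux fds */
  #define MQ_EVFD_MARKER 0xffffffffu

  /* one-time setup: nest the worker mq's eventfd inside libc_epfd */
  static void
  nest_mq_eventfd (int libc_epfd, int mq_evfd)
  {
    struct epoll_event ev = { .events = EPOLLIN, .data.u32 = MQ_EVFD_MARKER };
    epoll_ctl (libc_epfd, EPOLL_CTL_ADD, mq_evfd, &ev);
  }

  /* per call: block on libc_epfd with the caller's timeout; if the mq
   * eventfd fires, drain vcl events with a 0 timeout. A real implementation
   * would also read the eventfd to clear it and merge the linux and vcl
   * events rather than returning only one set. */
  static int
  wait_nested (int libc_epfd, long ep_vlsh, struct epoll_event *events,
               int maxevents, int timeout)
  {
    int i, n = epoll_wait (libc_epfd, events, maxevents, timeout);
    for (i = 0; i < n; i++)
      if (events[i].data.u32 == MQ_EVFD_MARKER)
        return vls_epoll_wait (ep_vlsh, events, maxevents, 0);
    return n;
  }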

 

Having said that, keep in mind that we typically recommend that people use vcl because ldp, through vls, enforces a rather strict locking policy. That is needed in order to avoid invalidating vcl’s assumption that sessions are owned by only one vcl worker. Moreover, we’ve tested ldp only against a limited set of applications. 

 

Regards, 

Florin




On Oct 31, 2019, at 7:58 PM, wanghanlin <[email protected]> wrote:

 

Do you mean that if we use eventfds only, then I needn't set the timeout to 0 in ldp_epoll_pwait?

If so, how can unhandled_evts_vector in vppcom_epoll_wait be processed in a timely manner? What I mean is: another thread may add events to unhandled_evts_vector during epoll_wait, or unhandled_evts_vector may not be processed completely because maxevents is reached.

 

Regards,

Hanlin

 


On 10/31/2019 23:34, Florin Coras <[email protected]> wrote:

Hi, 

 

use_mq_eventfd will help with vcl, but as you've noticed it won't help with ldp, because there we need to poll both vcl and linux fds. Since mutex-condvar notifications can't be epolled, we have to constantly switch between polling the linux and the vcl fds. One option going forward would be to change ldp to detect whether vcl is using mutex-condvars or eventfds and, in the latter case, poll the linux fds and the mq's eventfd in a single linux epoll. 
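
As a side note, here is a minimal, self-contained illustration of the eventfd property this relies on, using plain Linux APIs only (no VCL code): an eventfd is an ordinary pollable file descriptor, so it can sit in the same epoll set as the linux fds, whereas a mutex/condvar wakeup has no fd that epoll could watch.

  #include <stdint.h>
  #include <sys/epoll.h>
  #include <sys/eventfd.h>
  #include <unistd.h>

  int
  main (void)
  {
    int efd = eventfd (0, EFD_NONBLOCK);        /* notification fd */
    int epfd = epoll_create1 (0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
    epoll_ctl (epfd, EPOLL_CTL_ADD, efd, &ev);  /* epoll it like any other fd */

    uint64_t one = 1;
    write (efd, &one, sizeof (one));            /* producer side: signal */

    struct epoll_event out;
    int n = epoll_wait (epfd, &out, 1, -1);     /* consumer side: wake up */

    uint64_t val;
    read (efd, &val, sizeof (val));             /* clear the notification */

    close (epfd);
    close (efd);
    return n == 1 ? 0 : 1;
  }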

 

Regards,

Florin




On Oct 31, 2019, at 5:54 AM, wanghanlin <[email protected]> wrote:

 

Hi all,

I found that an app using VCL's epoll_wait still occupies 70% CPU with the "use_mq_eventfd" configuration, even with very little traffic.

Then I investigated the code in ldp_epoll_pwait; vls_epoll_wait is called with a timeout of 0.

So I have two questions:

1. What problems can "use_mq_eventfd" solve?

2. Is there any other way to decrease CPU usage?

Thanks!

 

The code in ldp_epoll_pwait:

  /* alternate between vcl and linux fds, each polled with a 0 timeout,
   * until the caller's timeout expires */
  do
    {
      if (!ldpw->epoll_wait_vcl)
        {
          /* poll vcl session events with a 0 timeout */
          rv = vls_epoll_wait (ep_vlsh, events, maxevents, 0);
          if (rv > 0)
            {
              ldpw->epoll_wait_vcl = 1;
              goto done;
            }
          else if (rv < 0)
            {
              errno = -rv;
              rv = -1;
              goto done;
            }
        }
      else
        ldpw->epoll_wait_vcl = 0;

      if (libc_epfd > 0)
        {
          /* poll linux fds, also with a 0 timeout */
          rv = libc_epoll_pwait (libc_epfd, events, maxevents, 0, sigmask);
          if (rv != 0)
            goto done;
        }
    }
  while ((timeout == -1) || (clib_time_now (&ldpw->clib_time) < max_time));



