Who runs in interrupt mode? :)

OK. Well, it can sit in Gerrit as a starting point for future pickup, I guess.

Thanks,
Chris.

> On May 20, 2020, at 4:11 PM, Damjan Marion via lists.fd.io 
> <dmarion=me....@lists.fd.io> wrote:
> 
> 
> No, as another important function of that node is to fall asleep when there 
> are no interfaces in polling mode. Interface queues can be dynamically 
> assigned to different workers, so there would be a lot of messing around to 
> make this work.
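
For context, a minimal sketch, assuming nothing beyond standard epoll, of the
fall-asleep behavior described above (the real code lives in
src/vlib/unix/input.c and chooses its sleep bound dynamically): block in
epoll_wait() only when no input node on the thread is in polling mode,
otherwise poll with a zero timeout:

    #include <sys/epoll.h>

    #define MAX_EVENTS 64

    /* Sketch: pick the epoll timeout from the polling state.  A fixed
       10 ms stands in for the dynamically computed bound the real node
       uses. */
    static int
    epoll_input_sketch (int epoll_fd, int n_polling_input_nodes)
    {
      struct epoll_event events[MAX_EVENTS];
      int timeout_ms = (n_polling_input_nodes > 0) ? 0 : 10;
      return epoll_wait (epoll_fd, events, MAX_EVENTS, timeout_ms);
    }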
> 
>> On 20 May 2020, at 21:51, Christian Hopps <cho...@chopps.org> wrote:
>> 
>> Would this work?
>> 
>> https://gerrit.fd.io/r/c/vpp/+/27186
>> 
>> Thanks,
>> Chris.
>> 
>>> On May 20, 2020, at 1:44 PM, Christian Hopps <cho...@chopps.org> wrote:
>>> 
>>> 
>>> 
>>>> On May 20, 2020, at 11:39 AM, Damjan Marion via lists.fd.io 
>>>> <dmarion=me....@lists.fd.io> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 20 May 2020, at 16:29, Christian Hopps <cho...@chopps.org> wrote:
>>>>> 
>>>>> 
>>>>>> On May 20, 2020, at 9:42 AM, Damjan Marion via lists.fd.io 
>>>>>> <dmarion=me....@lists.fd.io> wrote:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On 20 May 2020, at 14:38, Christian Hopps <cho...@chopps.org> wrote:
>>>>>>> 
>>>>>>> I'm wondering why I have unix-epoll-input in my worker threads' "show 
>>>>>>> runtime" results. Couldn't it selectively disable/enable itself based 
>>>>>>> on whether it actually has any work to do (things to poll)? I'm aware 
>>>>>>> it modifies its behavior when there are other polling nodes running, 
>>>>>>> but it still takes time to run occasionally, and I'm not sure why.
>>>>>> 
>>>>>> It is needed there for handling interrupts, and probably nobody has 
>>>>>> bothered to spend time adding logic to turn it off when there is no 
>>>>>> work for it, as its impact on performance is negligible.
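
A minimal, self-contained Linux illustration (not VPP code) of the interrupt
role: a thread sleeping in epoll_wait() is woken by a write to an eventfd,
the basic mechanism for delivering a software interrupt to an otherwise idle
thread:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int
    main (void)
    {
      int efd = eventfd (0, 0);
      int epfd = epoll_create1 (0);
      struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
      epoll_ctl (epfd, EPOLL_CTL_ADD, efd, &ev);

      uint64_t one = 1;
      write (efd, &one, sizeof (one)); /* "raise" the interrupt */

      struct epoll_event out;
      int n = epoll_wait (epfd, &out, 1, -1); /* wakes immediately */
      printf ("%d event(s)\n", n);
      return 0;
    }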
>>>>> 
>>>>> Ok, thanks. For me it's running about 1550 ns per call which, if I did 
>>>>> my math right, represents ~92 packets (minimum 46-octet payload == 84 
>>>>> octets on the wire) at 40GE and ~23 packets at 10GE.
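
The arithmetic behind those numbers, as a standalone check (a minimum
64-byte Ethernet frame occupies 84 octets on the wire once the 7-octet
preamble, 1-octet SFD, and 12-octet inter-frame gap are counted):

    #include <stdio.h>

    int
    main (void)
    {
      double ns_per_call = 1550.0;
      double wire_bits = 84 * 8; /* 672 bits per minimum-size frame */
      /* bits / Gbps == ns on the wire per frame */
      printf ("40GE: ~%.0f pkts/call\n", ns_per_call / (wire_bits / 40)); /* 92 */
      printf ("10GE: ~%.0f pkts/call\n", ns_per_call / (wire_bits / 10)); /* 23 */
      return 0;
    }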
>>>> 
>>>> So your core does 40 Gbps line rate with 64-byte packets :)
>>> 
>>> No. :)
>>> 
>>> I'm trying to get the best I can from IMIX on the user side, and 
>>> 1500-octet (full-MTU) fixed-interval sending on the secure-tunnel side. I 
>>> wouldn't expect current processors to be able to handle small packets at 
>>> line rate on 100G interfaces, in any case.
>>> 
>>>> My math is:
>>>> 
>>>> - Best case, we do around 100 clock cycles per packet.
>>>> - If the CPU is running at 2.5 GHz, that means 40 ns per packet.
>>>> - In 1550 ns we can process ~39 packets.
>>>> - 100 clock cycles per packet means 25 Mpps @ 2.5 GHz.
>>>> 
>>>> Processing 39 fewer packets out of 25M means the performance impact is 
>>>> 0.000156%.
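
The same numbers as a quick standalone check, under exactly the stated
assumptions (100 cycles/packet at 2.5 GHz):

    #include <stdio.h>

    int
    main (void)
    {
      double ns_per_pkt = 100 / 2.5;              /* 40 ns per packet */
      double pkts_per_call = 1550.0 / ns_per_pkt; /* ~39 packets */
      double pps = 2.5e9 / 100;                   /* 25 Mpps */
      printf ("impact: ~%.6f%%\n", 100.0 * pkts_per_call / pps); /* 0.000155 */
      return 0;
    }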
>>> 
>>> 1550 ns is a per-call stat, I believe. It seems to average ~252 calls per 
>>> second for me, so that's 9828 packets. If those are 9000-byte packets, 
>>> that's 707 Mbps of bandwidth (9828*9000*8 = 707,616,000). Even at 1500 it 
>>> represents 118 Mbps (117,936,000). Looking at the code, though, it's 
>>> basically being called every 1024 (1025?) loops.
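
That kind of gating is typically a power-of-two mask on the main-loop
counter; a hypothetical sketch (the actual unix-epoll-input logic is in
src/vlib/unix/input.c and may differ):

    #include <stdint.h>

    /* Run the expensive syscall once every 1024 main-loop iterations;
       the mask makes the check a single AND plus compare. */
    static inline int
    should_run_epoll (uint64_t main_loop_count)
    {
      return (main_loop_count & 1023) == 0;
    }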
>>> 
>>> My main issue, though, is that people are going to be paying attention to 
>>> timing with my application (IPTFS), so odd blips in packet arrival will be 
>>> noticed and then have to be explained and justified.
>>> 
>>> I'll try disabling it with a hack for now. :)
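
One form such a hack might take; vlib_get_node_by_name() and
vlib_node_set_state() are real vlib APIs, but the wrapper and call site are
illustrative only:

    #include <vlib/vlib.h>

    /* Sketch: disable unix-epoll-input on this thread's vlib_main.  Note
       this also gives up the interrupt handling and fall-asleep behavior
       discussed earlier in the thread. */
    static void
    disable_epoll_input (vlib_main_t *vm)
    {
      vlib_node_t *n = vlib_get_node_by_name (vm, (u8 *) "unix-epoll-input");
      if (n)
        vlib_node_set_state (vm, n->index, VLIB_NODE_STATE_DISABLED);
    }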
>>> 
>>> Thanks,
>>> Chris.
>>> 
>>>> Makes sense?
>>>> 
>>>> — 
>>>> Damjan
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
> 
> 
