I've already seen this document and have used these tricks many times. But 
this time I'm sending data locally over localhost. There aren't even any 
NICs bound to Linux on my machine, so there are no NIC interrupts I could 
pin to a CPU. So what do you propose?
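
One thing I might try: since loopback receive processing runs as a NET_RX 
softirq on whatever CPU issued the send, pinning the socket-writing thread 
itself should keep that work off the forwarding lcore. A minimal sketch 
(the core number is only an example, not tied to my real layout):

    /* build: cc -pthread pin_writer.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    /* Pin the calling thread (the socket writer) to one core so the
     * loopback softirq work it triggers stays off the forwarding lcore. */
    static int pin_self_to_core(int core)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(core, &set);
            return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    int main(void)
    {
            int err = pin_self_to_core(3); /* example: a core no lcore uses */

            if (err != 0)
                    fprintf(stderr, "setaffinity: %s\n", strerror(err));
            /* ... open the socket and send the test data from here ... */
            return 0;
    }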

> On 14 Apr 2016, at 20:06, Shawn Lewis <smlsr at tencara.com> wrote:
> 
> You have to work with IRQBalancer as well
> 
> http://www.intel.com/content/dam/doc/application-note/82575-82576-82598-82599-ethernet-controllers-interrupts-appl-note.pdf
> 
> It's just an example document that discusses this (not so much DPDK 
> related)...  But the OS will attempt to balance the interrupts when you 
> actually want to remove or pin them down...
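
For reference, pinning an IRQ by hand (once irqbalance is stopped) is just 
a write of a CPU mask to /proc/irq/<N>/smp_affinity. A rough C equivalent 
of "echo 2 > /proc/irq/24/smp_affinity" (the IRQ number and mask here are 
made up for illustration):

    #include <stdio.h>

    /* Pin IRQ 24 (example) to CPU 1: mask 0x2 = binary 10.
     * irqbalance must be stopped first, or it may rewrite the mask. */
    int main(void)
    {
            FILE *f = fopen("/proc/irq/24/smp_affinity", "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fprintf(f, "2\n");
            fclose(f);
            return 0;
    }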
> 
>> On Thu, Apr 14, 2016 at 1:02 PM, Alexander Kiselev <kiselev99 at gmail.com> 
>> wrote:
>> 
>> 
>>> On 14 Apr 2016, at 19:35, Shawn Lewis <smlsr at tencara.com> wrote:
>>> 
>>> Lots of things...
>>> 
>>> One: just because you have a process running on an lcore does not mean 
>>> that's all that runs on it.  Unless you have told the kernel at boot NOT 
>>> to use those specific cores, those cores will be used for many OS-related 
>>> things.
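
That boot-time exclusion is the isolcpus kernel parameter; something like 
the following on the kernel command line keeps the listed cores away from 
the general scheduler (core numbers are only illustrative):

    isolcpus=2,3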
>> 
>> Generally yes, but unless I start sending data to the socket there is no 
>> packet loss.  I did about 10 test runs in a row and everything was OK. And 
>> there is no other application running on that test machine that uses CPU 
>> cores.
>> 
>> So the question is: why do these socket operations influence the other lcore?
>> 
>>> 
>>> IRQBalance
>>> System OS operations.
>>> Other Applications.
>>> 
>>> So by doing file I/O you are generating interrupts, and where those 
>>> interrupts get serviced is up to IRQBalancer.  So it could be any one of 
>>> your cores.
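
One way to see that concretely is to snapshot /proc/interrupts before and 
after a test run and diff the per-CPU columns; whichever column grows is 
the core that serviced the interrupts. A trivial dumper (effectively just 
cat, shown in C for completeness):

    #include <stdio.h>

    /* Dump /proc/interrupts; run it before and after a test and diff
     * the per-CPU counter columns. */
    int main(void)
    {
            char line[512];
            FILE *f = fopen("/proc/interrupts", "r");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            while (fgets(line, sizeof(line), f))
                    fputs(line, stdout);
            fclose(f);
            return 0;
    }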
>> 
>> That is a good point. I could use the CPU affinity feature to bind the 
>> interrupt handler to a core not used in my test. But I'm sending data 
>> locally over localhost. Is it possible to use CPU affinity in that case?
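
As far as I understand, for loopback there is no hardware IRQ to re-route 
at all: the receive path runs as a NET_RX softirq on whatever CPU issued 
the send. Watching the per-CPU softirq counters during a test should 
confirm that; a quick sketch that filters /proc/softirqs:

    #include <stdio.h>
    #include <string.h>

    /* Print the header plus the NET_RX/NET_TX rows of /proc/softirqs;
     * for loopback traffic the counters grow on the CPU doing the send. */
    int main(void)
    {
            char line[512];
            int first = 1;
            FILE *f = fopen("/proc/softirqs", "r");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            while (fgets(line, sizeof(line), f)) {
                    if (first || strstr(line, "NET_"))
                            fputs(line, stdout);
                    first = 0;
            }
            fclose(f);
            return 0;
    }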
>> 
>>> 
>>> 
>>> 
>>>> On Thu, Apr 14, 2016 at 12:31 PM, Alexander Kiselev <kiselev99 at 
>>>> gmail.com> wrote:
>>>> Could someone give me any hints about what could cause performance 
>>>> issues in a situation where one lcore doing a lot of Linux system calls 
>>>> (read/write on a socket) slows down the other lcore doing packet 
>>>> forwarding? In my test the forwarding lcore doesn't share any memory 
>>>> structures with the other lcore that sends test data to the socket. Both 
>>>> lcores are pinned to different processor cores, so theoretically they 
>>>> shouldn't have any impact on each other. But they do: once one lcore 
>>>> starts sending data to the socket, the other lcore starts dropping 
>>>> packets. Why?
>>> 
> 
