Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2018-05-21 Thread Amir Kaduri
Same feature, new implementation in PR:
https://github.com/ntop/PF_RING/pull/343


On Sun, May 6, 2018 at 7:18 PM, Amir Kaduri  wrote:

> Good.
> There is a pull-request waiting. I hope you'll find it beneficial:
> https://github.com/ntop/PF_RING/pull/340
>
> Thanks,
> Amir
>
> On Tue, May 1, 2018 at 1:19 AM, Alfredo Cardigliano 
> wrote:
>
>>
>>
>> On 30 Apr 2018, at 17:52, Amir Kaduri  wrote:
>>
>> Thanks for the answers.
>>
>> So the only way to make handlep->timeout >= 0 is by setting the
>> file descriptor to "blocking" (nonblock=0), per the logic in
>> function pcap_setnonblock_mmap(), and this is something that we would
>> like to avoid.
>> Therefore, we do the (non-blocking) polling in the application that uses
>> pcap/pf_ring.
>> The problem we have is with low-traffic networks. According to the logic
>> in function copy_data_to_ring(), as long as the queue hasn't reached the
>> "poll_num_pkts_watermark" threshold (in our case 128 packets),
>> the userspace poll() is never woken up (since wake_up_interruptible(..)
>> is not called), which means packets are stuck in the ring
>> until the queue reaches the watermark.
>>
>> I wonder whether you see a rationale for improving the pf_ring kernel
>> module to call wake_up_interruptible() (in order to flush the queue)
>> once some "timeout" has passed and the queue is non-empty (but still
>> below the watermark).
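[Editor's note] Amir's proposed policy (wake on watermark OR on timeout) can be sketched in userspace C. This is an illustrative model only, not the PF_RING kernel code; all names here (`should_wake`, `last_wake_ms`, and so on) are invented:

```c
#include <stdbool.h>

/* Illustrative wake-up policy: wake pollers when the queue reaches the
 * watermark, or when it is non-empty and the flush timeout has expired.
 * All identifiers are hypothetical, not PF_RING's actual names. */
static bool should_wake(unsigned int queued_pkts,
                        unsigned int watermark,
                        unsigned long now_ms,
                        unsigned long last_wake_ms,
                        unsigned long timeout_ms)
{
    if (queued_pkts >= watermark)
        return true;                                 /* current behaviour  */
    if (queued_pkts > 0 && now_ms - last_wake_ms >= timeout_ms)
        return true;                                 /* proposed timeout flush */
    return false;
}
```

With watermark=128 and timeout=1000 ms, a queue of 5 packets would be flushed once a second instead of waiting indefinitely for the watermark.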
>>
>>
>> I think that using the watermark in combination with a timeout is a good
>> idea.
>>
>> Alfredo
>>
>> Amir
>>
>>
>> On Thu, Apr 26, 2018 at 6:00 PM, Alfredo Cardigliano <
>> cardigli...@ntop.org> wrote:
>>
>>>
>>>
>>> On 26 Apr 2018, at 15:34, Amir Kaduri  wrote:
>>>
>>> Hi Alfredo,
>>>
>>> My code is based on libpcap, while pfring's userland examples use pfring
>>> APIs directly, therefore things are a bit harder for me.
>>>
>>> Short clarification about a related code-line:
>> Please look at the following line:
>> https://github.com/ntop/PF_RING/blob/dev/userland/libpcap-1.8.1/pcap-linux.c#L1875
>>>
>>> (1)  If I understand it correctly, if wait_for_incoming_packet is true,
>>> then pfring_poll() should be called.
>>>   Don't you want wait_for_incoming_packet to be true in case
>>> pf_ring_active_poll is true?
>>>
>>>
>>> “active” means spinning, thus poll should not be used in that case.
>>>
>>>   Currently, it's the opposite (i.e. if pf_ring_active_poll is true,
>>> wait_for_incoming_packet will be false thus pfring_poll() won't be
>>> called).
>>>
>>>
>>> This seems to be correct
>>>
>>>
>>> (2) If the code is ok, then the only way for me to make
>>> wait_for_incoming_packet true (for pfring_poll() to be called) is by
>>> making handlep->timeout >= 0.
>>>  Correct?
>>>
>>>
>>> Correct
>>>
>>> Alfredo
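[Editor's note] The behaviour Alfredo confirms here can be condensed into a tiny decision helper. This is a hypothetical paraphrase of the logic around pcap-linux.c#L1875, not the actual libpcap source:

```c
#include <stdbool.h>

/* Hypothetical condensation of the receive-wait decision discussed above:
 * with active polling we spin and never call pfring_poll(); otherwise we
 * block in poll() only when a non-negative timeout is configured. */
static bool wait_for_incoming_packet(bool pf_ring_active_poll, int timeout_ms)
{
    if (pf_ring_active_poll)
        return false;          /* "active" means spinning: no poll()       */
    return timeout_ms >= 0;    /* blocking mode: poll() with this timeout  */
}
```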
>>>
>>>
>>> Thanks,
>>> Amir
>>>
>>> On Mon, Apr 9, 2018 at 10:51 AM, Alfredo Cardigliano <
>>> cardigli...@ntop.org> wrote:
>>>
 Hi Amir
 if I understand correctly, pfcount_multichannel is working, while in
 your application it seems that poll does not honor the timeout. If this
 is the case, the problem is not in the kernel module; I think you should
 look for differences between the two applications.

 Alfredo

 On 9 Apr 2018, at 07:20, Amir Kaduri  wrote:

 Hi Alfredo,

 I'm back to investigate/debug this issue in my environment, and maybe
 you'll manage to save me some time:

 When I use the example program "pfcount_multichannel", poll-duration
 works for me as expected:
 For watermark=128, poll-duration=1000, even if less than 128 packets
 received, I get them in pfcount_multichannel.

 On the other hand, in my other program (which is a complex one), the
 userspace application gets the packets only after 128 packets
 aggregated by the ring, regardless of the polling rate (which is
 always done with a 50 ms timeout).

 Maybe you can figure out what can "hold" the packets in the ring and
 forward them to userspace only once the watermark threshold is passed?
 Maybe something is missing during initialization?
 (for simplicity I'm not using rehash, and not using any filters).

 Thanks
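[Editor's note] For reference, the application-side 50 ms polling cycle described above reduces to a plain poll(2) call with a 50 ms timeout. The standalone sketch below uses a pipe in place of the PF_RING socket fd (an assumption for self-containment):

```c
#include <poll.h>

/* Standalone illustration of one 50 ms poll() cycle on an arbitrary fd
 * (in the application it would be the PF_RING socket fd).
 * Returns 1 if the fd became readable, 0 on timeout, -1 on error. */
static int wait_readable_50ms(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int rc = poll(&pfd, 1, 50 /* timeout in ms */);
    if (rc < 0)
        return -1;                          /* poll() error            */
    if (rc == 0)
        return 0;                           /* timed out: no packet    */
    return (pfd.revents & POLLIN) ? 1 : 0;  /* readable (or e.g. HUP)  */
}
```

Note that poll() returning is exactly what the kernel-side wake_up_interruptible() controls: if the wake-up never fires, this call simply times out every 50 ms even though packets sit in the ring.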

 On Tue, Oct 31, 2017 at 6:32 PM, Alfredo Cardigliano <
 cardigli...@ntop.org> wrote:

> Hi Amir
> that's correct, however for some reason it seems it is not the case in
> your tests.
>
> Alfredo
>
> On 31 Oct 2017, at 12:08, Amir Kaduri  wrote:
>
> Thanks. tot_insert apparently works ok.
>
> Regarding function copy_data_to_ring():
> At the end of it there is the statement:
>  if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
 >    wake_up_interruptible(&pfr->ring_slots_waitqueue);
>
 > Since watermark is set to 128, and I send <128 packets, this causes
 > them to wait in the kernel queue.
 > But since poll_duration is set to 1 (1 millisecond, I assume), I expect
 > the condition to also check this (meaning, if there are packets in the
 > queue but 1 millisecond has passed and they weren't read),
 > wake_up_interruptible should also be called. No?
 >
 > Thanks,
 > Amir

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2018-04-09 Thread Alfredo Cardigliano
Hi Amir
if I understand correctly, pfcount_multichannel is working, while in your
application it seems that poll does not honor the timeout. If this is the
case, the problem is not in the kernel module; I think you should look for
differences between the two applications.

Alfredo

> On 9 Apr 2018, at 07:20, Amir Kaduri  wrote:
> 
> Hi Alfredo,
> 
> I'm back to investigate/debug this issue in my environment, and maybe you'll 
> manage to save me some time:
> 
> When I use the example program "pfcount_multichannel", poll-duration works 
> for me as expected:
> For watermark=128, poll-duration=1000, even if less than 128 packets 
> received, I get them in pfcount_multichannel.
> 
> On the other hand, in my other program (which is a complex one), the 
> userspace application gets the packets only after 128 packets
> aggregated by the ring, regardless of the polling rate (which is always
> done with a 50 ms timeout).
> 
> Maybe you can figure out what can "hold" the packets in the ring and
> forward them to userspace only once the watermark threshold is passed?
> Maybe something is missing during initialization?
> (for simplicity I'm not using rehash, and not using any filters).
> 
> Thanks
> 
> On Tue, Oct 31, 2017 at 6:32 PM, Alfredo Cardigliano  > wrote:
> Hi Amir
> that's correct, however for some reason it seems it is not the case in your 
> tests.
> 
> Alfredo
> 
> On 31 Oct 2017, at 12:08, Amir Kaduri  > wrote:
> 
>> Thanks. tot_insert apparently works ok.
>> 
>> Regarding function copy_data_to_ring():
>> At the end of it there is the statement:
>>  if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
>>    wake_up_interruptible(&pfr->ring_slots_waitqueue);
>> 
>> Since watermark is set to 128, and I send <128 packets, this causes them to 
>> wait in kernel queue.
>> But since poll_duration is set to 1 (1 millisecond, I assume), I expect
>> the condition to also check this (meaning, if there are packets in the
>> queue but 1 millisecond has passed and they weren't read),
>> wake_up_interruptible should also be called. No?
>> 
>> Thanks,
>> Amir
>> 
>> 
>> On Tue, Oct 31, 2017 at 10:20 AM, Alfredo Cardigliano > > wrote:
>> 
>> 
>>> On 31 Oct 2017, at 08:42, Amir Kaduri >> > wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> I'm trying to debug the issue, and I have a question about the code, to 
>>> make sure that there is no problem there:
>>> Specifically, I'm referring to the function "pfring_mod_recv":
>>> For the line that refers to poll_duration ("pfring_poll(ring,
>>> ring->poll_duration)") to be reached, there are two conditions that
>>> must hold:
>>> 1. pfring_there_is_pkt_available(ring) should return false (otherwise, the 
>>> function returns at the end of the condition).
>>> 2. wait_for_incoming_packet should be set to true.
>>> Currently, I'm referring to the first one:
>>> For the macro pfring_there_is_pkt_available(ring) to return false,
>>> ring->slots_info->tot_insert must equal ring->slots_info->tot_read.
>>> What I see in my tests is that they don't get equal. I always see that
>>> tot_insert>tot_read, and sometimes they become equal when tot_read++ is
>>> called, but that happens inside the condition, so "pfring_mod_recv"
>>> returns with 1.
>> 
>> It seems to be correct. The kernel module inserts packets into the ring 
>> increasing tot_insert, the userspace library reads packets from the ring 
>> increasing tot_read. This means that if tot_insert == tot_read there is no 
>> packet to read. If there is a bug, it should be in the kernel module that is 
>> somehow not adding packets to the ring (thus not updating tot_insert).
>> 
>> Alfredo
>> 
>>> As a reminder, I set the watermark high in order to see 
>>> poll_duration take effect.
>>> 
>>> Could you please confirm that you don't see any problem in the code?
>>> 
>>> Thanks,
>>> Amir
>>> 
>>> 
>>> On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano >> > wrote:
>>> Hi Amir
>>> yes, that’s the way it should work, if this is not the case, some debugging 
>>> is needed to identify the problem
>>> 
>>> Alfredo
>>> 
 On 26 Oct 2017, at 10:14, Amir Kaduri > wrote:
 
 Basically, the functionality that I would like to have is: even if fewer 
 than poll-watermark-threshold (default: 128) packets arrive at the socket, 
 they will be forwarded to userland if 1 millisecond has passed.
 How can I achieve this? Isn't it by using pfring_set_poll_duration()?
 
 Alfredo, could you please clarify?
 
 Thanks,
 Amir
 
 On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri > wrote:

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2018-04-08 Thread Amir Kaduri
Hi Alfredo,

I'm back to investigate/debug this issue in my environment, and maybe
you'll manage to save me some time:

When I use the example program "pfcount_multichannel", poll-duration works
for me as expected:
for watermark=128 and poll-duration=1000, even if fewer than 128 packets are
received, I get them in pfcount_multichannel.

On the other hand, in my other program (which is a complex one), the
userspace application gets the packets only after 128 packets have been
aggregated by the ring, regardless of the polling rate (polling is always
done with a 50ms timeout).

Maybe you can figure out what could "hold" the packets in the ring and
release them to userspace only when the watermark threshold is passed?
Maybe something is missing during initialization?
(for simplicity I'm not using rehash, and not using any filters).

Thanks

On Tue, Oct 31, 2017 at 6:32 PM, Alfredo Cardigliano 
wrote:

> Hi Amir
> that's correct, however for some reason it seems it is not the case in
> your tests.
>
> Alfredo
>
> On 31 Oct 2017, at 12:08, Amir Kaduri  wrote:
>
> Thanks. tot_insert apparently works ok.
>
> Regarding function copy_data_to_ring():
> At the end of it there is the statement:
>  if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
>  wake_up_interruptible(&pfr->ring_slots_waitqueue);
>
> Since the watermark is set to 128 and I send fewer than 128 packets, they
> wait in the kernel queue.
> But since poll_duration is set to 1 (1 millisecond, I assume), I expect the
> condition to also check whether packets have been sitting in the queue for
> 1 millisecond without being read, in which case wake_up_interruptible
> should be called as well. No?
>
> Thanks,
> Amir
>
>
> On Tue, Oct 31, 2017 at 10:20 AM, Alfredo Cardigliano <
> cardigli...@ntop.org> wrote:
>
>>
>>
>> On 31 Oct 2017, at 08:42, Amir Kaduri  wrote:
>>
>> Hi Alfredo,
>>
>> I'm trying to debug the issue, and I have a question about the code, to
>> make sure that there is no problem there:
>> Specifically, I'm referring to the function "pfring_mod_recv":
>> For the line that refers to poll_duration ("pfring_poll(ring,
>> ring->poll_duration)") to be reached, two conditions must hold:
>> 1. pfring_there_is_pkt_available(ring) should return false (otherwise,
>> the function returns at the end of the condition).
>> 2. wait_for_incoming_packet should be set to true.
>> Currently, I'm referring to the first one:
>> For the macro pfring_there_is_pkt_available(ring) to return
>> false, ring->slots_info->tot_insert should be equal to
>> ring->slots_info->tot_read.
>> What I see in my tests is that they don't get equal. I always see that
>> tot_insert>tot_read, and sometimes they get equal when tot_read++ is called,
>> but it happens inside the condition, so "pfring_mod_recv" returns with
>> 1.
>>
>>
>> It seems to be correct. The kernel module inserts packets into the ring
>> increasing tot_insert, the userspace library reads packets from the ring
>> increasing tot_read. This means that if tot_insert == tot_read there is no
>> packet to read. If there is a bug, it should be in the kernel module that
>> is somehow not adding packets to the ring (thus not updating tot_insert).
>>
>> Alfredo
>>
>> As a reminder, I set the watermark high in order to see
>> poll_duration take effect.
>>
>> Could you please confirm that you don't see any problem in the code?
>>
>> Thanks,
>> Amir
>>
>>
>> On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano <
>> cardigli...@ntop.org> wrote:
>>
>>> Hi Amir
>>> yes, that’s the way it should work, if this is not the case, some
>>> debugging is needed to identify the problem
>>>
>>> Alfredo
>>>
>>> On 26 Oct 2017, at 10:14, Amir Kaduri  wrote:
>>>
>>> Basically, the functionality that I would like to have is: even if fewer
>>> than poll-watermark-threshold (default: 128) packets arrive at the socket,
>>> they will be forwarded to userland if 1 millisecond has passed.
>>> How can I achieve this? Isn't it by using pfring_set_poll_duration()?
>>>
>>> Alfredo, could you please clarify?
>>>
>>> Thanks,
>>> Amir
>>>
>>> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri 
>>> wrote:
>>>
 Hi,

 I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe
 drivers).
 As far as I understand the relation between poll-watermark and
 poll-duration, packets will be queued until one of two things happens
 first: either the poll-watermark packet threshold is passed, or
 poll-duration milliseconds have passed.
 I set poll-watermark to the maximum (4096)
 (using pfring_set_poll_watermark()) and set poll-duration to the
 minimum (1) (using pfring_set_poll_duration()).
 I've sent 400 packets to the socket. I see that they are received by
 the NIC, but they were not passed to userland. Only after about 500 packets
 had been sent was a chunk of them passed to userland.
 I don't quite understand 

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-31 Thread Alfredo Cardigliano
Hi Amir
that's correct; however, for some reason it seems this is not the case in your 
tests.

Alfredo

> On 31 Oct 2017, at 12:08, Amir Kaduri  wrote:
> 
> Thanks. tot_insert apparently works ok.
> 
> Regarding function copy_data_to_ring():
> At the end of it there is the statement:
>  if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
>  wake_up_interruptible(&pfr->ring_slots_waitqueue);
> 
> Since the watermark is set to 128 and I send fewer than 128 packets, they 
> wait in the kernel queue.
> But since poll_duration is set to 1 (1 millisecond, I assume), I expect the 
> condition to also check whether packets have been sitting in the queue for 
> 1 millisecond without being read, in which case wake_up_interruptible 
> should be called as well. No?
> 
> Thanks,
> Amir
> 
> 
>> On Tue, Oct 31, 2017 at 10:20 AM, Alfredo Cardigliano  
>> wrote:
>> 
>> 
>>> On 31 Oct 2017, at 08:42, Amir Kaduri  wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> I'm trying to debug the issue, and I have a question about the code, to 
>>> make sure that there is no problem there:
>>> Specifically, I'm referring to the function "pfring_mod_recv":
>>> For the line that refers to poll_duration ("pfring_poll(ring, 
>>> ring->poll_duration)") to be reached, two conditions must hold:
>>> 1. pfring_there_is_pkt_available(ring) should return false (otherwise, the 
>>> function returns at the end of the condition).
>>> 2. wait_for_incoming_packet should be set to true.
>>> Currently, I'm referring to the first one:
>>> For the macro pfring_there_is_pkt_available(ring) to return 
>>> false, ring->slots_info->tot_insert should be equal to 
>>> ring->slots_info->tot_read.
>>> What I see in my tests is that they don't get equal. I always see that 
>>> tot_insert>tot_read, and sometimes they get equal when tot_read++ is called, 
>>> but it happens inside the condition, so "pfring_mod_recv" returns with 
>>> 1.
>> 
>> It seems to be correct. The kernel module inserts packets into the ring 
>> increasing tot_insert, the userspace library reads packets from the ring 
>> increasing tot_read. This means that if tot_insert == tot_read there is no 
>> packet to read. If there is a bug, it should be in the kernel module that is 
>> somehow not adding packets to the ring (thus not updating tot_insert).
>> 
>> Alfredo
>> 
>>> As a reminder, I set the watermark high in order to see 
>>> poll_duration take effect.
>>> 
>>> Could you please confirm that you don't see any problem in the code?
>>> 
>>> Thanks,
>>> Amir 
>>> 
>>> 
 On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano 
  wrote:
 Hi Amir
 yes, that’s the way it should work, if this is not the case, some 
 debugging is needed to identify the problem
 
 Alfredo
 
> On 26 Oct 2017, at 10:14, Amir Kaduri  wrote:
> 
> Basically, the functionality that I would like to have is: even if fewer 
> than poll-watermark-threshold (default: 128) packets arrive at the socket, 
> they will be forwarded to userland if 1 millisecond has passed.
> How can I achieve this? Isn't it by using pfring_set_poll_duration()?
> 
> Alfredo, could you please clarify?
> 
> Thanks,
> Amir
> 
>> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri  wrote:
>> Hi,
>> 
>> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe 
>> drivers).
>> As far as I understand the relation between poll-watermark and 
>> poll-duration, packets will be queued until one of two things happens 
>> first: either the poll-watermark packet threshold is passed, or 
>> poll-duration milliseconds have passed.
>> I set poll-watermark to the maximum (4096) (using 
>> pfring_set_poll_watermark()) and set poll-duration to the minimum (1) 
>> (using pfring_set_poll_duration()).
>> I've sent 400 packets to the socket. I see that they are received by the 
>> NIC, but they were not passed to userland. Only after about 500 packets 
>> had been sent was a chunk of them passed to userland.
>> I don't quite understand the behavior: since poll-duration is 1 
>> (millisecond I assume), I would have expected all the packets to pass to 
>> userland immediately, even though poll-watermark is much higher.
>> 
>> Can anyone shed some light on the above?
>> 
>> Thanks,
>> Amir
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
 
 
 ___
 Ntop-misc mailing list
 Ntop-misc@listgateway.unipi.it
 http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>> 
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> 

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-31 Thread Amir Kaduri
Thanks. tot_insert apparently works ok.

Regarding function copy_data_to_ring():
At the end of it there is the statement:
 if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
 wake_up_interruptible(&pfr->ring_slots_waitqueue);

Since the watermark is set to 128 and I send fewer than 128 packets, they
wait in the kernel queue.
But since poll_duration is set to 1 (1 millisecond, I assume), I expect the
condition to also check whether packets have been sitting in the queue for
1 millisecond without being read, in which case wake_up_interruptible
should be called as well. No?

Thanks,
Amir


On Tue, Oct 31, 2017 at 10:20 AM, Alfredo Cardigliano 
wrote:

>
>
> On 31 Oct 2017, at 08:42, Amir Kaduri  wrote:
>
> Hi Alfredo,
>
> I'm trying to debug the issue, and I have a question about the code, to
> make sure that there is no problem there:
> Specifically, I'm referring to the function "pfring_mod_recv":
> For the line that refers to poll_duration ("pfring_poll(ring,
> ring->poll_duration)") to be reached, two conditions must hold:
> 1. pfring_there_is_pkt_available(ring) should return false (otherwise,
> the function returns at the end of the condition).
> 2. wait_for_incoming_packet should be set to true.
> Currently, I'm referring to the first one:
> For the macro pfring_there_is_pkt_available(ring) to return
> false, ring->slots_info->tot_insert should be equal to
> ring->slots_info->tot_read.
> What I see in my tests is that they don't get equal. I always see that
> tot_insert>tot_read, and sometimes they get equal when tot_read++ is called,
> but it happens inside the condition, so "pfring_mod_recv" returns with
> 1.
>
>
> It seems to be correct. The kernel module inserts packets into the ring
> increasing tot_insert, the userspace library reads packets from the ring
> increasing tot_read. This means that if tot_insert == tot_read there is no
> packet to read. If there is a bug, it should be in the kernel module that
> is somehow not adding packets to the ring (thus not updating tot_insert).
>
> Alfredo
>
> As a reminder, I set the watermark high in order to see
> poll_duration take effect.
>
> Could you please confirm that you don't see any problem in the code?
>
> Thanks,
> Amir
>
>
> On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano <
> cardigli...@ntop.org> wrote:
>
>> Hi Amir
>> yes, that’s the way it should work, if this is not the case, some
>> debugging is needed to identify the problem
>>
>> Alfredo
>>
>> On 26 Oct 2017, at 10:14, Amir Kaduri  wrote:
>>
>> Basically, the functionality that I would like to have is: even if fewer
>> than poll-watermark-threshold (default: 128) packets arrive at the socket,
>> they will be forwarded to userland if 1 millisecond has passed.
>> How can I achieve this? Isn't it by using pfring_set_poll_duration()?
>>
>> Alfredo, could you please clarify?
>>
>> Thanks,
>> Amir
>>
>> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri  wrote:
>>
>>> Hi,
>>>
>>> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe
>>> drivers).
>>> As far as I understand the relation between poll-watermark and
>>> poll-duration, packets will be queued until one of two things happens
>>> first: either the poll-watermark packet threshold is passed, or
>>> poll-duration milliseconds have passed.
>>> I set poll-watermark to the maximum (4096) (using
>>> pfring_set_poll_watermark()) and set poll-duration to the minimum (1)
>>> (using pfring_set_poll_duration()).
>>> I've sent 400 packets to the socket. I see that they are received by the
>>> NIC, but they were not passed to userland. Only after about 500 packets
>>> had been sent was a chunk of them passed to userland.
>>> I don't quite understand the behavior: since poll-duration is 1
>>> (millisecond I assume), I would have expected all the packets to pass to
>>> userland immediately, even though poll-watermark is much higher.
>>>
>>> Can anyone shed some light on the above?
>>>
>>> Thanks,
>>> Amir
>>>
>>
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>
>>
>>
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>
>
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
>
>
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-31 Thread Alfredo Cardigliano


> On 31 Oct 2017, at 08:42, Amir Kaduri  wrote:
> 
> Hi Alfredo,
> 
> I'm trying to debug the issue, and I have a question about the code, to make 
> sure that there is no problem there:
> Specifically, I'm referring to the function "pfring_mod_recv":
> For the line that refers to poll_duration ("pfring_poll(ring, 
> ring->poll_duration)") to be reached, two conditions must hold:
> 1. pfring_there_is_pkt_available(ring) should return false (otherwise, the 
> function returns at the end of the condition).
> 2. wait_for_incoming_packet should be set to true.
> Currently, I'm referring to the first one:
> For the macro pfring_there_is_pkt_available(ring) to return 
> false, ring->slots_info->tot_insert should be equal to 
> ring->slots_info->tot_read.
> What I see in my tests is that they don't get equal. I always see that 
> tot_insert>tot_read, and sometimes they get equal when tot_read++ is called, 
> but it happens inside the condition, so "pfring_mod_recv" returns with 1.

It seems to be correct. The kernel module inserts packets into the ring 
increasing tot_insert, the userspace library reads packets from the ring 
increasing tot_read. This means that if tot_insert == tot_read there is no 
packet to read. If there is a bug, it should be in the kernel module that is 
somehow not adding packets to the ring (thus not updating tot_insert).

Alfredo

> As a reminder, I set the watermark high in order to see 
> poll_duration take effect.
> 
> Could you please confirm that you don't see any problem in the code?
> 
> Thanks,
> Amir
> 
> 
> On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano  > wrote:
> Hi Amir
> yes, that’s the way it should work, if this is not the case, some debugging 
> is needed to identify the problem
> 
> Alfredo
> 
>> On 26 Oct 2017, at 10:14, Amir Kaduri > > wrote:
>> 
>> Basically, the functionality that I would like to have is: even if fewer than 
>> poll-watermark-threshold (default: 128) packets arrive at the socket, they 
>> will be forwarded to userland if 1 millisecond has passed.
>> How can I achieve this? Isn't it by using pfring_set_poll_duration()?
>> 
>> Alfredo, could you please clarify?
>> 
>> Thanks,
>> Amir
>> 
>> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri > > wrote:
>> Hi,
>> 
>> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe 
>> drivers).
>> As far as I understand the relation between poll-watermark and 
>> poll-duration, packets will be queued until one of two things happens 
>> first: either the poll-watermark packet threshold is passed, or 
>> poll-duration milliseconds have passed.
>> I set poll-watermark to the maximum (4096) (using 
>> pfring_set_poll_watermark()) and set poll-duration to the minimum (1) (using 
>> pfring_set_poll_duration()).
>> I've sent 400 packets to the socket. I see that they are received by the 
>> NIC, but they were not passed to userland. Only after about 500 packets had 
>> been sent was a chunk of them passed to userland.
>> I don't quite understand the behavior: since poll-duration is 1 (millisecond 
>> I assume), I would have expected all the packets to pass to userland 
>> immediately, even though poll-watermark is much higher.
>> 
>> Can anyone shed some light on the above?
>> 
>> Thanks,
>> Amir
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it 
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it 
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc




Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-31 Thread Amir Kaduri
Hi Alfredo,

I'm trying to debug the issue, and I have a question about the code, to
make sure that there is no problem there:
Specifically, I'm referring to the function "pfring_mod_recv":
For the line that refers to poll_duration ("pfring_poll(ring,
ring->poll_duration)") to be reached, two conditions must hold:
1. pfring_there_is_pkt_available(ring) should return false (otherwise, the
function returns at the end of the condition).
2. wait_for_incoming_packet should be set to true.
Currently, I'm referring to the first one:
For the macro pfring_there_is_pkt_available(ring) to return
false, ring->slots_info->tot_insert should be equal to
ring->slots_info->tot_read.
What I see in my tests is that they don't get equal. I always see that
tot_insert>tot_read, and sometimes they get equal when tot_read++ is called,
but it happens inside the condition, so "pfring_mod_recv" returns with
1.
As a reminder, I set the watermark high in order to see
poll_duration take effect.

Could you please confirm that you don't see any problem in the code?

Thanks,
Amir


On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano 
wrote:

> Hi Amir
> yes, that’s the way it should work, if this is not the case, some
> debugging is needed to identify the problem
>
> Alfredo
>
> On 26 Oct 2017, at 10:14, Amir Kaduri  wrote:
>
> Basically, the functionality that I would like to have is: even if fewer
> than poll-watermark-threshold (default: 128) packets arrive at the socket,
> they will be forwarded to userland if 1 millisecond has passed.
> How can I achieve this? Isn't it by using pfring_set_poll_duration()?
>
> Alfredo, could you please clarify?
>
> Thanks,
> Amir
>
> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri  wrote:
>
>> Hi,
>>
>> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe
>> drivers).
>> As far as I understand the relation between poll-watermark and
>> poll-duration, packets will be queued until one of two things happens
>> first: either the poll-watermark packet threshold is passed, or
>> poll-duration milliseconds have passed.
>> I set poll-watermark to the maximum (4096) (using
>> pfring_set_poll_watermark()) and set poll-duration to the minimum (1)
>> (using pfring_set_poll_duration()).
>> I've sent 400 packets to the socket. I see that they are received by the
>> NIC, but they were not passed to userland. Only after about 500 packets
>> had been sent was a chunk of them passed to userland.
>> I don't quite understand the behavior: since poll-duration is 1
>> (millisecond I assume), I would have expected all the packets to pass to
>> userland immediately, even though poll-watermark is much higher.
>>
>> Can anyone shed some light on the above?
>>
>> Thanks,
>> Amir
>>
>
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
>
>
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-26 Thread Alfredo Cardigliano
Hi Amir
yes, that’s the way it should work. If this is not the case, some debugging is 
needed to identify the problem.

Alfredo

> On 26 Oct 2017, at 10:14, Amir Kaduri  wrote:
> 
> Basically, the functionality that I would like to have is: even if fewer than 
> poll-watermark-threshold (default: 128) packets arrive at the socket, they will 
> be forwarded to userland if 1 millisecond has passed.
> How can I achieve this? Isn't it by using pfring_set_poll_duration()?
> 
> Alfredo, could you please clarify?
> 
> Thanks,
> Amir
> 
> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri  > wrote:
> Hi,
> 
> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe 
> drivers).
> As far as I understand the relation between poll-watermark and poll-duration, 
> packets will be queued until one of two things happens first: either the 
> poll-watermark packet threshold is passed, or poll-duration milliseconds 
> have passed.
> I set poll-watermark to the maximum (4096) (using 
> pfring_set_poll_watermark()) and set poll-duration to the minimum (1) (using 
> pfring_set_poll_duration()).
> I've sent 400 packets to the socket. I see that they are received by the NIC, 
> but they were not passed to userland. Only after about 500 packets had been 
> sent was a chunk of them passed to userland.
> I don't quite understand the behavior: since poll-duration is 1 (millisecond 
> I assume), I would have expected all the packets to pass to userland 
> immediately, even though poll-watermark is much higher.
> 
> Can anyone shed some light on the above?
> 
> Thanks,
> Amir
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc




Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-26 Thread Amir Kaduri
Basically, the functionality that I would like to have is: even if fewer than
poll-watermark-threshold (default: 128) packets arrive at the socket, they
will be forwarded to userland if 1 millisecond has passed.
How can I achieve this? Isn't it by using pfring_set_poll_duration()?

Alfredo, could you please clarify?

Thanks,
Amir

On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri  wrote:

> Hi,
>
> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe
> drivers).
> As far as I understand the relation between poll-watermark and
> poll-duration, packets will be queued until one of two things happens
> first: either the poll-watermark packet threshold is passed, or
> poll-duration milliseconds have passed.
> I set poll-watermark to the maximum (4096) (using pfring_set_poll_watermark())
> and set poll-duration to the minimum (1) (using pfring_set_poll_duration()).
> I've sent 400 packets to the socket. I see that they are received by the
> NIC, but they were not passed to userland. Only after about 500 packets had
> been sent was a chunk of them passed to userland.
> I don't quite understand the behavior: since poll-duration is 1
> (millisecond I assume), I would have expected all the packets to pass to
> userland immediately, even though poll-watermark is much higher.
>
> Can anyone shed some light on the above?
>
> Thanks,
> Amir
>
___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

[Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-18 Thread Amir Kaduri
Hi,

I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe
drivers).
As far as I understand the relation between poll-watermark and
poll-duration, packets will be queued until one of two things happens
first: either the poll-watermark packet threshold is passed, or
poll-duration milliseconds have passed.
I set poll-watermark to the maximum (4096)
(using pfring_set_poll_watermark()) and set poll-duration to the minimum
(1) (using pfring_set_poll_duration()).
I've sent 400 packets to the socket. I see that they are received by the
NIC, but they were not passed to userland. Only after about 500 packets had
been sent was a chunk of them passed to userland.
I don't quite understand the behavior: since poll-duration is 1
(millisecond I assume), I would have expected all the packets to pass to
userland immediately, even though poll-watermark is much higher.

Can anyone shed some light on the above?

Thanks,
Amir
___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc