Re: [dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst

2018-03-25 Thread Filip Janiszewski
Thanks Marco,

I'm running DPDK 18.02. I can understand that the counter might not be
implemented yet, but why does rte_eth_rx_burst never return nb_pkts?
According to:
http://dpdk.org/doc/api/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102

"The rte_eth_rx_burst() function returns the number of packets actually
retrieved, which is the number of rte_mbuf data structures effectively
supplied into the rx_pkts array. A return value equal to nb_pkts
indicates that the RX queue contained at least rx_pkts packets, and this
is likely to signify that other received packets remain in the input queue."

So in case of drops I would expect the RX queue to be full and
rte_eth_rx_burst to return nb_pkts, but this never happens and there
seems to be plenty of space in the ring. Is that correct?
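
For reference, a way to double-check for HW-level drops independently of
the burst size would be the generic stats API (a minimal sketch; port_id
is a placeholder, and if the PMD doesn't implement the counter it will
just read 0, as you describe):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal sketch: print the generic HW drop counters of a port.
 * imissed counts packets dropped because no RX descriptor was
 * available; rx_nombuf counts mbuf allocation failures. Both are
 * filled only if the PMD implements them. */
static void print_hw_drops(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) == 0)
        printf("imissed=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               stats.imissed, stats.rx_nombuf);
}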

Thanks

On 03/25/2018 01:30 PM, MAC Lee wrote:
> Hi Filip,
> Which DPDK version are you using? You can take a look at the DPDK
> source code; the rxdrop counter may not be implemented, so you always
> get 0 in rxdrop.
> 
> Thanks,
> Marco
> 
> On 18/3/25 (Sun), Filip Janiszewski wrote:
> 
>  Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
>  To: users@dpdk.org
>  Date: March 25, 2018 (Sun), 6:33 PM
>  
>  Hi Everybody,
>  
>  I have a weird drop problem; to understand my question, the best
>  way is to have a look at this simple snippet (cleaned of all the
>  irrelevant stuff):
>  
>  while( 1 )
>  {
>      if( config->running == false ) {
>          break;
>      }
>      num_of_pkt = rte_eth_rx_burst( config->port_id,
>                                     config->queue_idx,
>                                     buffers,
>                                     MAX_BURST_DEQ_SIZE);
>      if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
>          rx_ring_full = true; //probably not the best name
>      }
>  
>      if( likely( num_of_pkt > 0 ) )
>      {
>          pk_captured += num_of_pkt;
>  
>          num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
>                                                    (void*)buffers,
>                                                    num_of_pkt,
>                                                    &ring_free_space);
>          //if num_of_enq_pkt == 0 free the mbufs..
>      }
>  }
>  
>  This loop is retrieving packets from the device and pushing them into
>  a queue for further processing by another lcore.
>  
>  When I run a test with a Mellanox card sending 20M (20878300) packets
>  at 2.5M p/s, the loop seems to miss some packets and pk_captured
>  always ends up around 19M or so.
>  
>  rx_ring_full is never true, which means that num_of_pkt is always <
>  MAX_BURST_DEQ_SIZE, so according to the documentation I should not
>  have drops at the HW level. Also, num_of_enq_pkt is never 0, which
>  means that all the packets are enqueued.
>  
>  Now, if from that snippet I remove the rte_ring_sp_enqueue_bulk call
>  (and make sure to release all the mbufs), then pk_captured is always
>  exactly equal to the amount of packets I've sent to the NIC.
>  
>  So it seems (though I can't quite accept this idea) that
>  rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to
>  rte_eth_rx_burst and another some packets are dropped due to a full
>  ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst)
>  always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there
>  was always sufficient room for the packets?
>  
>  Is anybody able to help me understand what's happening here?
>  
>  Note, MAX_BURST_DEQ_SIZE is 512.
>  
>  Thanks
>  
> 

-- 
BR, Filip
+48 666 369 823


[dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst

2018-03-25 Thread MAC Lee
Hi Filip,
Which DPDK version are you using? You can take a look at the DPDK source
code; the rxdrop counter may not be implemented, so you always get 0 in
rxdrop.

Thanks,
Marco

On 18/3/25 (Sun), Filip Janiszewski wrote:

 Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
 To: users@dpdk.org
 Date: March 25, 2018 (Sun), 6:33 PM
 
 Hi Everybody,
 
 I have a weird drop problem; to understand my question, the best way
 is to have a look at this simple snippet (cleaned of all the
 irrelevant stuff):
 
 while( 1 )
 {
     if( config->running == false ) {
         break;
     }
     num_of_pkt = rte_eth_rx_burst( config->port_id,
                                    config->queue_idx,
                                    buffers,
                                    MAX_BURST_DEQ_SIZE);
     if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
         rx_ring_full = true; //probably not the best name
     }
 
     if( likely( num_of_pkt > 0 ) )
     {
         pk_captured += num_of_pkt;
 
         num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
                                                   (void*)buffers,
                                                   num_of_pkt,
                                                   &ring_free_space);
         //if num_of_enq_pkt == 0 free the mbufs..
     }
 }
 
 This loop is retrieving packets from the device and pushing them into
 a queue for further processing by another lcore.
 
 When I run a test with a Mellanox card sending 20M (20878300) packets
 at 2.5M p/s, the loop seems to miss some packets and pk_captured
 always ends up around 19M or so.
 
 rx_ring_full is never true, which means that num_of_pkt is always <
 MAX_BURST_DEQ_SIZE, so according to the documentation I should not
 have drops at the HW level. Also, num_of_enq_pkt is never 0, which
 means that all the packets are enqueued.
 
 Now, if from that snippet I remove the rte_ring_sp_enqueue_bulk call
 (and make sure to release all the mbufs), then pk_captured is always
 exactly equal to the amount of packets I've sent to the NIC.
 
 So it seems (though I can't quite accept this idea) that
 rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to
 rte_eth_rx_burst and another some packets are dropped due to a full
 ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst)
 always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there
 was always sufficient room for the packets?
 
 Is anybody able to help me understand what's happening here?
 
 Note, MAX_BURST_DEQ_SIZE is 512.
 
 Thanks
 


Re: [dpdk-users] Apply patches from the mailing list

2018-03-25 Thread Shreyansh Jain
> -Original Message-
> From: users [mailto:users-boun...@dpdk.org] On Behalf Of
> long...@viettel.com.vn
> Sent: Sunday, March 25, 2018 11:59 AM
> To: users@dpdk.org
> Subject: [dpdk-users] Apply patches from the mailing list
> 
> A very basic question, but how do I apply some of the patches that
> were put on the dev mailing list in order to try them out? I already
> looked at the next- subtrees, but apparently even major patch sets
> such as the new packet framework/ip_pipeline are not in there (yet).

This is what I do:

1. Access http://dpdk.org/dev/patchwork/project/dpdk/list/ and search for
patches from the author. This lists all the patches posted to the mailing
list, along with their state (for example, "superseded" if a series has
been replaced by a newer version).

2. You have three options:
 a) Select all patches in a series (you will need to register/login), add
them to a "bundle", and download that bundle as an mbox.
 b) Select an individual patch, look for the "download patch" or
"download mbox" link, and download it manually.
 c) Or, the one I use most frequently: copy the link to the patch (for
example, http://dpdk.org/dev/patchwork/patch/36473/) and append "mbox" to
it (http://dpdk.org/dev/patchwork/patch/36473/mbox).

Then,

$ wget <mbox URL> -O - | git am

One can easily make a script which does steps (1)-(2c) above based on a
given patch ID (the last integer in the link to the patch).

Maybe there is a better, more efficient way; this is just what I do. :)

> 
> The contributor guidelines only have sections on submitting patches to
> the mailing list, not on pulling and applying patches for local testing.
> I know of DPDK Patchwork, but no instructions are provided there.

Maybe you can go ahead and send a patch documenting whatever method you
find best and most efficient.
Others can add their own suggestions, and I am confident Thomas would be
happy to accept a documentation improvement patch.

-
Shreyansh


[dpdk-users] Packets drop while fetching with rte_eth_rx_burst

2018-03-25 Thread Filip Janiszewski
Hi Everybody,

I have a weird drop problem; to understand my question, the best way is
to have a look at this simple snippet (cleaned of all the irrelevant
stuff):

while( 1 )
{
    if( config->running == false ) {
        break;
    }
    num_of_pkt = rte_eth_rx_burst( config->port_id,
                                   config->queue_idx,
                                   buffers,
                                   MAX_BURST_DEQ_SIZE);
    if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
        rx_ring_full = true; //probably not the best name
    }

    if( likely( num_of_pkt > 0 ) )
    {
        pk_captured += num_of_pkt;

        num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
                                                  (void*)buffers,
                                                  num_of_pkt,
                                                  &ring_free_space);
        //if num_of_enq_pkt == 0 free the mbufs..
    }
}

This loop is retrieving packets from the device and pushing them into a
queue for further processing by another lcore.

When I run a test with a Mellanox card sending 20M (20878300) packets at
2.5M p/s, the loop seems to miss some packets and pk_captured always ends
up around 19M or so.

rx_ring_full is never true, which means that num_of_pkt is always <
MAX_BURST_DEQ_SIZE, so according to the documentation I should not have
drops at the HW level. Also, num_of_enq_pkt is never 0, which means that
all the packets are enqueued.

Now, if from that snippet I remove the rte_ring_sp_enqueue_bulk call
(and make sure to release all the mbufs), then pk_captured is always
exactly equal to the amount of packets I've sent to the NIC.

So it seems (though I can't quite accept this idea) that
rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to
rte_eth_rx_burst and another some packets are dropped due to a full ring
on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst) always
smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there was always
sufficient room for the packets?
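
One thing worth ruling out: the PMD-specific extended stats may expose
drop counters that the generic ones do not (a minimal sketch that dumps
all xstats of a port; on mlx5 the interesting name should be something
like rx_out_of_buffer, but treat that name as an assumption):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Minimal sketch: dump every extended statistic of a port, so that
 * PMD-specific drop counters show up even when stats.imissed reads 0. */
static void dump_xstats(uint16_t port_id)
{
    int i, n = rte_eth_xstats_get(port_id, NULL, 0); /* query the count */

    if (n <= 0)
        return;
    struct rte_eth_xstat *values = calloc(n, sizeof(*values));
    struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
    if (values != NULL && names != NULL &&
        rte_eth_xstats_get(port_id, values, n) == n &&
        rte_eth_xstats_get_names(port_id, names, n) == n) {
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);
    }
    free(values);
    free(names);
}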

Is anybody able to help me understand what's happening here?

Note, MAX_BURST_DEQ_SIZE is 512.

Thanks


[dpdk-users] How to get the timestamp of the packets

2018-03-25 Thread Sungho Hong
Hello DPDK users,
In the DPDK document
http://dpdk.readthedocs.io/en/v17.11/nics/features.html

there is a mention of the MACsec and timestamp offload features, but I
have no clue how to use them.

Would it be possible to know where I can find at least how these features
are enabled? There is nothing about this in the docs or the references.
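
In case it helps, a minimal sketch of what enabling the RX timestamp
offload looks like against the 17.11 API (assuming a PMD that actually
advertises DEV_RX_OFFLOAD_TIMESTAMP, such as mlx5; the single-queue
configure call is a placeholder, and I can't speak to the MACsec side):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Minimal sketch: request HW RX timestamping when configuring the port. */
static int enable_rx_timestamp(uint16_t port_id)
{
    struct rte_eth_conf port_conf = { 0 };

    port_conf.rxmode.offloads = DEV_RX_OFFLOAD_TIMESTAMP;
    port_conf.rxmode.ignore_offload_bitfield = 1; /* use the offloads field */
    return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}

/* Then, for each received mbuf, check the flag before reading: */
static uint64_t read_rx_timestamp(const struct rte_mbuf *m)
{
    if (m->ol_flags & PKT_RX_TIMESTAMP)
        return m->timestamp; /* clock source and unit are PMD-specific */
    return 0; /* no HW timestamp attached to this packet */
}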



Sungho Hong


[dpdk-users] Apply patches from the mailing list

2018-03-25 Thread longtb5
A very basic question, but how do I apply some of the patches that were
put on the dev mailing list in order to try them out? I already looked at
the next- subtrees, but apparently even major patch sets such as the new
packet framework/ip_pipeline are not in there (yet).

The contributor guidelines only have sections on submitting patches to
the mailing list, not on pulling and applying patches for local testing.
I know of DPDK Patchwork, but no instructions are provided there.

Regards, 
-BL