Re: [dpdk-users] Query on handling packets

2019-02-05 Thread Harsh Patel
Cool. Thanks a lot. We'll do that.

On Tue, Feb 5, 2019, 19:57 Wiles, Keith  wrote:

>
>
> > On Feb 5, 2019, at 8:22 AM, Harsh Patel 
> wrote:
> >
> > Can you help us with those questions we asked you? We need them as
> parameters for our testing.
>
> I would love to, but I do not know much about what you are asking, sorry.
>
> I hope someone else steps in; maybe the PMD maintainer could help. Look in
> the MAINTAINERS file and message them directly.
> >
> > Thanks,
> > Harsh & Hrishikesh
> >
> > On Tue, Feb 5, 2019, 19:42 Wiles, Keith  wrote:
> >
> >
> > > On Feb 5, 2019, at 8:00 AM, Harsh Patel 
> wrote:
> > >
> > > Hi,
> > > One of the mistakes was the following: ns-3 frees the packet buffer as
> soon as it writes it to the socket, so we thought we should do the same.
> But DPDK, on transmit, places the packet buffer on the Tx descriptor ring
> and performs the transmission on its own afterwards. We were freeing too
> early, so packets were sometimes lost, i.e. freed before transmission.
> > >
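
A minimal sketch of the ownership rule described above, assuming port 0 and
queue 0: rte_eth_tx_burst() takes ownership of the mbufs it accepts, and the
PMD frees them once the hardware has actually sent them, so the caller must
free only the packets that were not enqueued.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hand a burst to the Tx ring. The PMD frees accepted mbufs after
     * transmission, so freeing them here would release buffers still
     * queued on the descriptor ring. Free only what the driver rejected. */
    static void
    send_burst(struct rte_mbuf **pkts, uint16_t n)
    {
        uint16_t sent = rte_eth_tx_burst(0 /* port */, 0 /* queue */, pkts, n);

        for (uint16_t i = sent; i < n; i++)
            rte_pktmbuf_free(pkts[i]);
    }
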
> > > Another thing was that, as you suggested earlier, we compiled the whole
> ns-3 in optimized mode. That improved the performance.
> > >
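
For reference, the optimized build referred to above is selected at
configure time in ns-3's waf build system (assuming the ns-3.28 workflow
seen later in this thread):

    ./waf configure --build-profile=optimized
    ./waf build
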
> > > These two changes combined gave us the desired results.
> >
> > Excellent, thanks.
> > >
> > > Regards,
> > > Harsh & Hrishikesh
> > >
> > > On Tue, Feb 5, 2019, 18:33 Wiles, Keith  wrote:
> > >
> > >
> > > > On Feb 5, 2019, at 12:37 AM, Harsh Patel 
> wrote:
> > > >
> > > > Hi,
> > > >
> > > > We would like to inform you that our code is working as expected and
> we are able to obtain a 95-98 Mbps data rate for a 100 Mbps application
> rate. We are now working on testing the code. Thanks a lot, especially to
> Keith, for all the help you provided.
> > > >
> > > > We have two main queries:
> > > > 1) We wanted to calculate the backlog at the NIC Tx descriptors but
> could not find anything in the documentation. Can you tell us how to
> calculate the backlog?
> > > > 2) We searched for how to use Byte Queue Limits (BQL) on the NIC queue
> but could not find anything like that in DPDK. Does DPDK support BQL? If so,
> can you tell us how to use it for our project?
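
Neither question has a ready-made answer in DPDK: there is no BQL
equivalent, and the Tx backlog is not exposed directly. A rough sketch of
estimating the backlog by probing descriptor status, assuming a PMD that
implements rte_eth_tx_descriptor_status() (available since DPDK 17.05):

    #include <rte_ethdev.h>

    /* Count descriptors still waiting to be transmitted by walking the
     * ring; RTE_ETH_TX_DESC_FULL means the slot holds a packet the
     * hardware has not sent yet. ring_size is the Tx ring size passed
     * to rte_eth_tx_queue_setup(). */
    static uint16_t
    tx_backlog(uint16_t port, uint16_t queue, uint16_t ring_size)
    {
        uint16_t backlog = 0;

        for (uint16_t off = 0; off < ring_size; off++) {
            if (rte_eth_tx_descriptor_status(port, queue, off) ==
                RTE_ETH_TX_DESC_FULL)
                backlog++;
        }
        return backlog;
    }

A BQL-like limit would have to be built on top of this by hand, e.g. by
refusing to enqueue further packets once the backlog crosses a threshold.
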
> > >
> > > What was the last set of problems, if I may ask?
> > > >
> > > > Thanks & Regards
> > > > Harsh & Hrishikesh
> > > >
> > > > On Thu, 31 Jan 2019 at 22:28, Wiles, Keith 
> wrote:
> > > >
> > > >
> > > > Sent from my iPhone
> > > >
> > > > On Jan 30, 2019, at 5:36 PM, Harsh Patel 
> wrote:
> > > >
> > > >> Hello,
> > > >>
> > > >> This mail is to inform you that the integration of DPDK with ns-3 is
> working at a basic level. The model is running.
> > > >> For UDP traffic we are getting throughput the same as or better than
> the raw socket version (around 100 Mbps).
> > > >> But unfortunately for TCP there are burst packet losses, due to which
> the throughput drops drastically after some point in time. The bandwidth of
> the link used was 100 Mbps.
> > > >> We have obtained cwnd and ssthresh graphs, which show that once the
> flow leaves Slow Start, there are so many packet losses that the congestion
> window and the slow start threshold cannot go above 4-5 packets.
> > > >
> > > > Can you determine where the packets are being dropped?
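
A cheap first probe for this question, as a sketch assuming port 0: the
generic port counters separate NIC-level drops from application-level ones.

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* imissed counts packets the NIC dropped because the Rx ring was
     * full, rx_nombuf counts mbuf-allocation failures, and oerrors
     * counts failed transmissions -- the usual suspects for burst loss. */
    static void
    print_drop_counters(uint16_t port)
    {
        struct rte_eth_stats st;

        if (rte_eth_stats_get(port, &st) == 0)
            printf("imissed=%" PRIu64 " ierrors=%" PRIu64 " oerrors=%" PRIu64
                   " rx_nombuf=%" PRIu64 "\n",
                   st.imissed, st.ierrors, st.oerrors, st.rx_nombuf);
    }
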
> > > >> We have attached the graphs to this mail.
> > > >>
> > > >
> > > > I do not see the graphs attached but that’s OK.
> > > >> We would like to know if there is a reason for this, or how we can
> fix it.
> > > >
> > > > I think we have to find out where the packets are being dropped; that
> is the only way to explain the case you are referring to.
> > > >>
> > > >> Thanks & Regards
> > > >> Harsh & Hrishikesh
> > > >>
> > > >> On Wed, 16 Jan 2019 at 19:25, Harsh Patel 
> wrote:
> > > >> Hi
> > > >>
> > > >> We were able to optimise the DPDK version. There were a couple of
> things we needed to do.
> > > >>
> > > >> We were using a Tx timeout of 1s/2048 (roughly 488 us), which we
> found to be too short. Then we increased the timeout, but we were getting a
> lot of retransmissions.
> > > >>
> > > >> So we removed the timeout and now send each packet as soon as we
> get it. This increased the throughput.
> > > >>
> > > >> Then we used the DPDK facility to launch a function on a given
> core, and gave Rx a dedicated core. This increased the throughput further.
> > > >>
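
A minimal sketch of that arrangement, assuming an initialized EAL, port 0,
and a free worker lcore 1; rte_eal_remote_launch() is the DPDK facility
mentioned above.

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static volatile int rx_running = 1;

    /* Poll Rx queue 0 of the given port in a tight loop on its own core. */
    static int
    rx_loop(void *arg)
    {
        uint16_t port = *(const uint16_t *)arg;
        struct rte_mbuf *pkts[32];

        while (rx_running) {
            uint16_t n = rte_eth_rx_burst(port, 0, pkts, 32);
            for (uint16_t i = 0; i < n; i++) {
                /* hand pkts[i] to the application here, then release it */
                rte_pktmbuf_free(pkts[i]);
            }
        }
        return 0;
    }

    /* In main, after rte_eal_init() and port setup:
     *   static uint16_t port_id = 0;
     *   rte_eal_remote_launch(rx_loop, &port_id, 1);
     *   ...
     *   rx_running = 0;
     *   rte_eal_wait_lcore(1);
     */
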
> > > >> The code is working really well at low bandwidth (<~50 Mbps) and
> outperforms the raw socket version.
> > > >> But at high bandwidth we are getting packet length mismatches for
> some reason. We are investigating it.
> > > >>
> > > >> We really thank you for your suggestions and for your patience
> over the last couple of months.
> > > >>
> > > >> Thank you
> > > >>
> > > >> Regards,
> > > >> Harsh & Hrishikesh
> > > >>
> > > >> On Fri, Jan 4, 2019, 11:27 Harsh Patel 
> wrote:
> > >> Yes, that would be helpful.
> > >> It'd be OK for now to use the same DPDK version to overcome the
> build issues.
> > >> We will look into updating the code for the latest versions once we
> get past this problem.
> > >>
> > >> Thank you very much.
> > >>
> > >> Regards,
> > >> Harsh & Hrishikesh
> > >>
> > >> On Fri, Jan 4, 2019, 04:13 Wiles, Keith wrote:
> > >>
> > >> > On Jan 3, 2019, at 12:12 PM, Harsh Patel wrote:
> > >> >
> > >> > Hi
> > >> >
> > >> > We applied your suggestion of removing the `IsLinkUp()` call, but the
> > >> > performance is even worse. We could only get around 340 kbit/s.
> > >> >
> > >> > The top hotspots are:
> > >> >
> > >> > Function                          Module                              CPU Time
> > >> > eth_em_recv_pkts                  librte_pmd_e1000.so                 15.106s
> > >> > rte_delay_us_block                librte_eal.so.6.1                   7.372s
> > >> > ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so  5.080s
> > >> > rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so  3.558s
> > >> > ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so  3.364s
> > >> > [Others]                                                              4.760s
> > >>
> > >> Performance reduced by removing that link status check; that is weird.
> > >> >
> > >> > Upon checking the callers of `rte_delay_us_block`, we learned that
> > >> > most of the time spent in this function (92%) is during
> > >> > initialization, so it does not cost us processing time during
> > >> > communication. That is a good start to our optimization.
> > >> >
> > >> > Callers                                 CPU Time: Total  CPU Time: Self
> > >> > rte_delay_us_block                      100.0%           7.372s
> > >> >   e1000_enable_ulp_lpt_lp               92.3%            6.804s
> > >> >   e1000_write_phy_reg_mdic              1.8%             0.136s
> > >> >   e1000_reset_hw_ich8lan                1.7%             0.128s
> > >> >   e1000_read_phy_reg_mdic               1.4%             0.104s
> > >> >   eth_em_link_update                    1.4%             0.100s
> > >> >   e1000_get_cfg_done_generic            0.7%             0.052s
> > >> >   e1000_post_phy_reset_ich8lan.part.18  0.7%             0.048s
> > >>
> > >> I guess you are having vTune start your application and that is
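
On the link-status point raised at the end of the thread: a sketch of a
cheap check that keeps the functionality without the cost, assuming port 0
and DPDK 18.11 (where rte_eth_link_get_nowait() returns void). The
non-blocking variant reads the last known status instead of waiting for PHY
negotiation, so it is safer to call on the datapath than a per-packet
IsLinkUp()-style query.

    #include <rte_ethdev.h>

    /* Non-blocking link check: rte_eth_link_get() can stall while the
     * link renegotiates; the _nowait variant returns immediately with
     * the cached status. */
    static int
    link_is_up(uint16_t port)
    {
        struct rte_eth_link link;

        rte_eth_link_get_nowait(port, &link);
        return link.link_status == ETH_LINK_UP;
    }
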
