> -----Original Message-----
> From: Maxime Coquelin <maxime.coque...@redhat.com>
> Sent: Monday, April 27, 2020 4:45 PM
> To: Liu, Yong <yong....@intel.com>; Ye, Xiaolong <xiaolong...@intel.com>;
> Wang, Zhihong <zhihong.w...@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v2 2/2] vhost: cache gpa to hpa translation
>
> Hi Marvin,
>
> On 4/1/20 4:50 PM, Marvin Liu wrote:
> > If Tx zero copy is enabled, the gpa to hpa mapping table is updated
> > entry by entry. This harms performance when the guest memory backend
> > uses 2M hugepages. Add a cached mapping table sorted by usage
> > sequence. Address translation first checks the cached mapping table,
> > then falls back to the unsorted mapping table if no match is found.
> >
> > Signed-off-by: Marvin Liu <yong....@intel.com>
> >
>
> I don't like the approach, as I think it could have nasty effects.
> For example, the system is loaded normally and let's say 25% of the
> pages are used. Then we have a small spike, and buffers that were
> never used start to be used; this causes new entries to be written
> into the cache in the hot path when it is already overloaded.
> Wouldn't it increase the number of packets dropped?
>
> At set_mem_table time, instead of adding the guest pages unsorted, it
> may be better to add them sorted there. Then you can use a better
> algorithm than linear search (O(n)), such as binary search (O(log n)).
>
Maxime,
Thanks for the input. The previous approach sorted entries by usage
sequence, which could cause more packet drops if the page access
pattern varies a lot. Based on the current DPDK and virtio-net
implementations, that is unlikely to happen, but it is still not the
best choice.
I will replace the current cache solution with a binary search over a
table sorted at set_mem_table time.
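
For reference, below is a rough sketch of the direction I have in
mind. The structure and helper names (struct guest_page,
sort_guest_pages, gpa_to_hpa_sorted) are illustrative only and merely
approximate the existing vhost guest-page bookkeeping, not the exact
definitions in lib/librte_vhost:

#include <stdint.h>
#include <stdlib.h>

/* Approximates the per-page mapping vhost keeps for guest memory
 * regions; field names are illustrative. */
struct guest_page {
        uint64_t guest_phys_addr;
        uint64_t host_phys_addr;
        uint64_t size;
};

/* Compare pages by guest physical address so the table can be
 * sorted once, at set_mem_table time. */
static int
guest_page_addrcmp(const void *p1, const void *p2)
{
        const struct guest_page *page1 = p1;
        const struct guest_page *page2 = p2;

        if (page1->guest_phys_addr > page2->guest_phys_addr)
                return 1;
        if (page1->guest_phys_addr < page2->guest_phys_addr)
                return -1;
        return 0;
}

static void
sort_guest_pages(struct guest_page *pages, uint32_t nr_pages)
{
        qsort(pages, nr_pages, sizeof(*pages), guest_page_addrcmp);
}

/* O(log n) gpa to hpa translation on the sorted table, replacing
 * the linear scan. Returns 0 if no page fully covers the range. */
static uint64_t
gpa_to_hpa_sorted(const struct guest_page *pages, uint32_t nr_pages,
                  uint64_t gpa, uint64_t size)
{
        uint32_t lo = 0, hi = nr_pages;

        while (lo < hi) {
                uint32_t mid = lo + (hi - lo) / 2;
                const struct guest_page *page = &pages[mid];

                if (gpa < page->guest_phys_addr) {
                        hi = mid;
                } else if (gpa + size >
                           page->guest_phys_addr + page->size) {
                        lo = mid + 1;
                } else {
                        return gpa - page->guest_phys_addr +
                               page->host_phys_addr;
                }
        }

        return 0;
}

The sort would happen once per set_mem_table call, so the hot path
only pays the lookup cost and no cache needs to be maintained there.
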
Regards,
Marvin
> Thanks,
> Maxime
>