Hi Pablo, sorry for the delay in response.
>-----Original Message-----
>From: De Lara Guarch, Pablo <pablo.de.lara.gua...@intel.com>
>Sent: 11 January 2019 00:17
>To: Trahe, Fiona <fiona.tr...@intel.com>; Verma, Shally <shally.ve...@cavium.com>; Stephen Hemminger <step...@networkplumber.org>
>Cc: firstname.lastname@example.org; akhil.go...@nxp.com; Jozwiak, TomaszX <tomaszx.jozw...@intel.com>; Gupta, Ashish <ashish.gu...@cavium.com>; Daly, Lee <lee.d...@intel.com>; Luse, Paul E <paul.e.l...@intel.com>; Trahe, Fiona <fiona.tr...@intel.com>
>Subject: RE: [dpdk-dev] [PATCH] compressdev: add feature flag to specify where processing is done
>
>....
>> > >> I just did a survey of DPDK and 1/3 of it is never used by any open
>> > >> source project. Hate to see more dead code and special cases created.
>> > >>
>> > >> At least, some example code in examples would help. Something like
>> > >> a simple in-memory compressed storage server using a network API
>> > >> (SMB?/SSH?/FTP?)
>> > >[Fiona] There is no compressdev sample app yet.
>> > >However, I've double-checked with the SPDK team; they're currently
>> > >integrating compressdev and intend to push a patch to SPDK - a storage
>> > >open-source project - using this flag.
>> > [Shally] I am seeing some of our HW-based PMDs also leveraging this
>> > choice. So I would say to make it a generic feature flag instead of
>> > SW-specific.
>> [Fiona] I can do, but would like to understand this better first.
>> My understanding of HW offload is that the enqueue is just packaging up
>> the op and sending it to the HW.
>> And the dequeue is just collecting the result from the HW and passing it
>> back to the op.
>> The work is done by the HW accelerator, in between those 2 API calls, not
>> using any CPU cycles.
>> So what would it mean for HW to set OP_DONE_IN_DEQUEUE?
>
>Any comments on this? I agree with Fiona that this flag makes sense on SW only,
>but it seems that you have another use case.

I am waiting for feedback on this internally.
Will revert on this as I get more information.

Thanks
Shally

>
>Thanks,
>Pablo
>
>>
>> > Thanks
>> > Shally