Hi Adam,

> -----Original Message-----
> From: Dybkowski, AdamX
> Sent: Friday, September 27, 2019 4:48 PM
> To: dev@dpdk.org; Trahe, Fiona <fiona.tr...@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusz...@intel.com>; akhil.go...@nxp.com
> Cc: Dybkowski, AdamX <adamx.dybkow...@intel.com>
> Subject: [PATCH v2 3/3] crypto/qat: handle Single Pass Crypto Requests on
> GEN3 QAT
>
> This patch improves the performance of AES GCM by using
> the Single Pass Crypto Request functionality when running
> on GEN3 QAT. Falls back to classic chained mode on older
> hardware.
>
> Signed-off-by: Adam Dybkowski <adamx.dybkow...@intel.com>
> ---
>  doc/guides/rel_notes/release_19_11.rst |  7 +++
>  drivers/crypto/qat/qat_sym.c           | 13 +++-
>  drivers/crypto/qat/qat_sym_session.c   | 86 ++++++++++++++++++++++++--
>  drivers/crypto/qat/qat_sym_session.h   |  9 ++-
>  4 files changed, 107 insertions(+), 8 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_19_11.rst
> b/doc/guides/rel_notes/release_19_11.rst
> index 573683da4..4817b7f23 100644
> --- a/doc/guides/rel_notes/release_19_11.rst
> +++ b/doc/guides/rel_notes/release_19_11.rst
> @@ -61,6 +61,13 @@ New Features
>    Added stateful decompression support in the Intel QuickAssist Technology
> PMD.
>    Please note that stateful compression is not supported.
>
> +* **Enabled Single Pass GCM acceleration on QAT GEN3.**
> +
> +  Added support for Single Pass GCM, available on QAT GEN3 only (Intel
> +  QuickAssist Technology C4xxx). It is automatically chosen instead of the
> +  classic chained mode when running on QAT GEN3, significantly improving
> +  the performance of AES GCM operations.
> +
>  Removed Items
>  -------------
>
> diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
> index 46ef27a6d..5ff4aa1e5 100644
> --- a/drivers/crypto/qat/qat_sym.c
> +++ b/drivers/crypto/qat/qat_sym.c
> @@ -1,5 +1,5 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2015-2018 Intel Corporation
> + * Copyright(c) 2015-2019 Intel Corporation
>   */
>
>  #include <openssl/evp.h>
> @@ -12,6 +12,7 @@
>
>  #include "qat_sym.h"
>
> +
>  /** Decrypt a single partial block
>   * Depends on openssl libcrypto
>   * Uses ECB+XOR to do CFB encryption, same result, more performant
> @@ -195,7 +196,8 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg,
> 	rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req));
> 	qat_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op;
> 	cipher_param = (void *)&qat_req->serv_specif_rqpars;
> -	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
> +	auth_param = (void *)((uint8_t *)cipher_param +
> +			ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
>
> 	if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER ||
> 			ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
> @@ -593,6 +595,13 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg,
> 		qat_req->comn_mid.dest_data_addr = dst_buf_start;
> 	}
>
> +	/* Handle Single-Pass GCM */
> +	if (ctx->is_single_pass) {
> +		cipher_param->spc_aad_addr = op->sym->aead.aad.phys_addr;
> +		cipher_param->spc_auth_res_addr =
> +				op->sym->aead.digest.phys_addr;
> +	}
> +
>  #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> 	QAT_DP_HEXDUMP_LOG(DEBUG, "qat_req:", qat_req,
> 			sizeof(struct icp_qat_fw_la_bulk_req));
> diff --git a/drivers/crypto/qat/qat_sym_session.c
> b/drivers/crypto/qat/qat_sym_session.c
> index e5167b3fa..7d0f4a69d 100644
> --- a/drivers/crypto/qat/qat_sym_session.c
> +++ b/drivers/crypto/qat/qat_sym_session.c
> @@ -450,7 +450,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
> 		break;
> 	case ICP_QAT_FW_LA_CMD_CIPHER_HASH:
> 		if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> -			ret = qat_sym_session_configure_aead(xform,
> +			ret = qat_sym_session_configure_aead(dev, xform,
> 					session);
> 			if (ret < 0)
> 				return ret;
> @@ -467,7 +467,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
> 		break;
> 	case ICP_QAT_FW_LA_CMD_HASH_CIPHER:
> 		if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> -			ret = qat_sym_session_configure_aead(xform,
> +			ret = qat_sym_session_configure_aead(dev, xform,
> 					session);
> 			if (ret < 0)
> 				return ret;
> @@ -503,6 +503,72 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
> 	return 0;
> }
>
> +static int
> +qat_sym_session_handle_single_pass(struct qat_sym_dev_private *internals,
> +		struct qat_sym_session *session,
> +		struct rte_crypto_aead_xform *aead_xform)
> +{
> +	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
> +
> +	if (qat_dev_gen == QAT_GEN3 &&
> +			aead_xform->iv.length == QAT_AES_GCM_SPC_IV_SIZE) {
> +		/* Use faster Single-Pass GCM */

[Fiona] Need to set min_qat_dev_gen in the session here. Crypto sessions can
be built independently of the device, so this catches a very unlikely corner
case: if e.g. a platform had a gen1 and a gen3 device, did the session init
on the gen3, then attached the session to an op sent to the gen1, this
min_qat_dev_gen check would catch it. The same situation is possible with
ZUC, so we added that check there too.