Re-sending with updated comments.

Regards
Shally

On Thu, Jun 1, 2017 at 7:58 PM, Verma, Shally <[email protected]>
wrote:

> Regards
> Shally
>
> -----Original Message-----
> From: Savolainen, Petri (Nokia - FI/Espoo) [mailto:petri.savolainen@
> nokia.com]
> Sent: 01 June 2017 18:20
> To: Shally Verma <[email protected]>; [email protected]
> Cc: Challa, Mahipal <[email protected]>; Narayana, Prasad Athreya
> <[email protected]>; Verma, Shally <
> [email protected]>
> Subject: RE: [lng-odp] [RFC, API-NEXT v3 1/1] comp: compression interface
>
>
>
> > -----Original Message-----
> > From: lng-odp [mailto:[email protected]] On Behalf Of
> > Shally Verma
> > Sent: Monday, May 22, 2017 9:55 AM
> > To: [email protected]
> > Cc: Mahipal Challa <[email protected]>; [email protected]; Shally
> > Verma <[email protected]>
> > Subject: [lng-odp] [RFC, API-NEXT v3 1/1] comp: compression interface
> >
>
> A short description (5-10 lines) about the new API should be added here.
>
> For example: what kind of compression this API offers, how it's used, and
> which kinds of applications typically use it.
>
> The current ODP Comp interface is designed to support widely used lossless
> data compression schemes: deflate, zlib and lzs.
> Typical applications include (but are not limited to) IPComp and other
> application and transport layer protocols such as HTTP compression.
>
> Changes from V2:
> Added separate APIs for sync and async operation,
> added a separate API to load a dictionary,
> removed SHA-only support from the compression interface,
> renamed SHA operations as hash operations,
> added a reference to odp_packet_data_range_t for accessing packet buffers.
>
>
> > Signed-off-by: Shally Verma <[email protected]>
> > Signed-off-by: Mahipal Challa <[email protected]>
> > ---
> >  include/odp/api/spec/comp.h | 740
> > ++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 740 insertions(+)
> >
> > diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h
> > new file mode 100644 index 0000000..6c13ad4
> > --- /dev/null
> > +++ b/include/odp/api/spec/comp.h
> > @@ -0,0 +1,740 @@
> > +/*
> > + */
> > +
> > +/**
> > + * @file
> > + *
> > + * ODP Compression
> > + */
> > +
> > +#ifndef ODP_API_COMP_H_
> > +#define ODP_API_COMP_H_
> > +
> > +#include <odp/visibility_begin.h>
> > +#include <odp/api/spec/support.h>
> > +#include <odp/api/packet.h>
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +/** @defgroup odp_compression ODP COMP
> > + *  ODP Compression defines interface to compress/decompress along
> > +with
> > hash
> > + *  operations.
> > + *
> > + *  Compression here implicitly refer to both compression and
> > decompression.
>
> Typo: implicitly.
>
> Actually, you could just say: "Compression API supports both compression
> and decompression operations."
>
>
> > + *  Example, compression algo 'deflate' mean both 'deflate' and
> > 'inflate'.
> > + *
> > + *  if opcode = ODP_COMP_COMPRESS, then it will Compress and/or apply
> > hash,
> > + *  if opcode = ODP_COMP_DECOMPRESS, then it will Decompress and/or
> > + apply
> > + *  hash.
>
> I think it would be better to have separate operations for compress and
> decompress. Application would typically call one on ingress side and the
> other on egress side, or maybe only one of those. Dual functionality would
> make sense if every second packet on the same code path would be compressed
> and every second decompressed, but I think that's not the case.
>
> The application will create either a compression session or a
> decompression session, so if both operations are needed simultaneously,
> two sessions would be created, each with its own operation context. In
> this way, compress and decompress are already two separate operations.
>
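For illustration, the dual-session model described in the reply above could look like this in application code. The types and the create function below are simplified stand-ins for the proposed odp_comp_* definitions, not the real header:

```c
#include <string.h>

/* Illustrative stand-ins for the proposed odp_comp_* types */
typedef enum {
	COMP_OP_COMPRESS,
	COMP_OP_DECOMPRESS
} comp_op_t;

typedef struct {
	comp_op_t op; /* one direction per session */
} comp_session_param_t;

typedef int comp_session_t;

/* Placeholder for odp_comp_session_create(): each handle carries its own
 * operation context, so compress and decompress never share state. */
static int session_create(const comp_session_param_t *param,
			  comp_session_t *session)
{
	static int next_handle = 1;

	(void)param;
	*session = next_handle++;
	return 0;
}
```

An application doing both directions simply creates one session with op = COMP_OP_COMPRESS for egress and a second with op = COMP_OP_DECOMPRESS for ingress.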


> > + *
> > + *  Current version of interface allow Compression ONLY,and
> > + *  both Compression + hash ONLY sessions.
>
> What is this comment trying to say? Decompression is not supported at all? Or
>
- Decompression and decompression+hash are supported.

"hashing only" is not supported yet.
>
- Yes, "hashing only" is not supported.

"Hashing only" should not be a target for this API.
> - Yes, it is not.
>


>
>
> > + *
> > + *  Macros, enums, types and operations to utilise compression.
> > + *  @{
> > + */
> > +
> > +/**
> > + * @def ODP_COMP_SESSION_INVALID
> > + * Invalid session handle
> > + */
> > +
> > +/**
> > + * @typedef odp_comp_session_t (platform dependent)
> > + * Comp API opaque session handle
>
> "Compression session handle"
>
> No need to mention opaque, as all ODP handles are opaque.
> Ack.
>
> > + */
> > +
> > +/**
> > + * @typedef odp_comp_compl_t
> > +* Compression API completion event (platform dependent)
>
> Remove "platform dependent", all opaque types are like that.
> Ack.
>
> > +*/
> > +
> > +/**
> > + * Compression API operation mode
> > + */
> > +typedef enum {
> > +     /** Synchronous, return results immediately */
> > +     ODP_COMP_SYNC,
> > +     /** Asynchronous, return results via queue event */
> > +     ODP_COMP_ASYNC,
> > +} odp_comp_op_mode_t;
> > +
> > +/**
> > + * Comp API operation type.
> > + *
> > + */
> > +typedef enum {
> > +     /** Compress and/or Compute digest  */
> > +     ODP_COMP_OP_COMPRESS,
> > +     /** Decompress and/or Compute digest */
> > +     ODP_COMP_OP_DECOMPRESS,
> > +} odp_comp_op_t;
>
> Maybe we should get rid of this. In any case, *or* compute digest is wrong;
> digest doesn't need to be mentioned here at all.
> Ack.
>


> > +
> > +/**
> > + * Comp API hash algorithm
> > + *
> > + */
> > +typedef enum {
> > +     /** ODP_COMP_HASH_ALG_NONE*/
> > +     ODP_COMP_HASH_ALG_NONE,
> > +     /** ODP_COMP_HASH_ALG_SHA1*/
> > +     ODP_COMP_HASH_ALG_SHA1,
> > +     /**  ODP_COMP_HASH_ALG_SHA256*/
> > +     ODP_COMP_HASH_ALG_SHA256
> > +} odp_comp_hash_alg_t;
> > +
> > +/**
> > + * Comp API compression algorithm
> > + *
> > + */
> > +typedef enum {
> > +     /** No algorithm specified.
> > +     * Means no compression, no output provided.
> > +     */
> > +     ODP_COMP_ALG_NULL,
>
> Isn't output of null == input data. So, there's *output* but no
> *transformation*. Also it could be mentioned that this is only for testing
> purposes (not for generic hash offload).
>
> Ack.
>


> > +     /** DEFLATE -
> > +     * implicit Inflate in case of decode operation.
> > +     */
>
>
> You just need to specify the algorithm accurately: e.g. the Deflate
> algorithm as specified in RFC 1951, etc.
>
> No need to mention inflate if it's not a specific algorithm but just the
> decompress operation of Deflate.
>
> Ack.
>


> > +     ODP_COMP_ALG_DEFLATE,
> > +     /** ZLIB-RFC1950 */
> > +     ODP_COMP_ALG_ZLIB,
> > +     /** LZS*/
> > +     ODP_COMP_ALG_LZS
> > +} odp_comp_alg_t;
> > +
> > +/**
> > + * Comp API session creation return code
> > + *
> > + */
> > +typedef enum {
> > +     /** Session created */
> > +     ODP_COMP_SES_CREATE_ERR_NONE,
> > +     /** Creation failed, no resources */
> > +     ODP_COMP_SES_CREATE_ERR_ENOMEM,
> > +     /** Creation failed, bad compression params */
> > +     ODP_COMP_SES_CREATE_ERR_INV_COMP,
> > +     /** Creation failed, bad hash params */
> > +     ODP_COMP_SES_CREATE_ERR_INV_HASH,
> > +     /** Creation failed,requested configuration not supported*/
> > +     ODP_COMP_SES_CREATE_ERR_NOT_SUPPORTED
> > +} odp_comp_ses_create_err_t;
> > +
> > +/**
> > + * Comp API operation return codes
> > + *
> > + */
> > +typedef enum {
> > +     /** Operation completed successfully*/
> > +     ODP_COMP_ERR_NONE,
> > +     /** Invalid user data pointers*/
> > +     ODP_COMP_ERR_DATA_PTR,
> > +     /** Invalid input data size*/
> > +     ODP_COMP_ERR_DATA_SIZE,
> > +     /**  Compression and/or hash Algo failure*/
> > +     ODP_COMP_ERR_ALGO_FAIL,
> > +     /** Operation paused due to insufficient output buffer.
> > +     *
> > +     * This is not an error condition. On seeing this situation,
> > +     * Implementation should maintain context of in-progress
> > operation and
> > +     * application should call packet processing API again with
> > valid
> > +     * output buffer but no other alteration to operation params
> > +     * (odp_comp_op_param_t).
> > +     *
> > +     * if using async mode, application should either make sure to
> > +     * provide sufficient output buffer size OR maintain relevant
> > +     * context (or ordering) information with respect to each input
> > packet
> > +     * enqueued for processing
> > +     *
> > +     */
> > +     ODP_COMP_ERR_OUT_OF_SPACE,
>
> Seems that this is part of normal operation. If application cannot predict
> maximum output size, then e.g. return value/output parameter should be used
> to indicate that more output space is needed. Error code should be just
> details, what went wrong. Many applications would just care if operation
> was success/failure or partial. The failure reason (this enum) is merely
> for debugging.
>
> These status codes are reported in the odp_comp_op_result_t structure at
> output.
>
> If the _ERR naming is an issue, I can rename the whole status enum to
> something like ODP_COMP_OP_STATUS_xxx. Please suggest if that looks more
> suitable.
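To make the resume semantics concrete, here is a toy illustration of the OUT_OF_SPACE flow: a stateful "compressor" (identity transform here, purely for demonstration) that pauses when the output buffer fills and resumes from its saved context on the next call. All names are illustrative, not the proposed API:

```c
#include <stddef.h>
#include <string.h>

enum comp_status { STATUS_NONE, STATUS_OUT_OF_SPACE };

struct op_ctx {
	size_t consumed; /* in-progress position kept by the implementation */
};

/* Copy as much input as the output buffer allows, remembering progress */
static enum comp_status toy_op(struct op_ctx *ctx,
			       const unsigned char *in, size_t in_len,
			       unsigned char *out, size_t out_cap,
			       size_t *produced)
{
	size_t left = in_len - ctx->consumed;
	size_t n = left < out_cap ? left : out_cap;

	memcpy(out, in + ctx->consumed, n);
	ctx->consumed += n;
	*produced = n;

	/* Not an error: pause and keep context; the caller retries with a
	 * fresh output buffer and no other parameter changes. */
	return (ctx->consumed < in_len) ? STATUS_OUT_OF_SPACE : STATUS_NONE;
}
```

A caller loops while the status is STATUS_OUT_OF_SPACE, supplying a new output buffer each time, exactly as the quoted comment block prescribes for the real API.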
>



> > +     /** Error if operation has been requested in an invalid state
> > */
> > +     ODP_COMP_ERR_INV_STATE,
> > +     /** Error if API call does not support input params or mode. */
> > +     ODP_COMP_ERR_NOT_SUPPORTED
> > +} odp_comp_err_t;
> > +
> > +/**
> > +* Comp API enumeration for preferred compression level/speed.
> > +*
> > +* trade-off between speed and compression ratio.
>
> This is the body of Doxygen documentation. So, it should contain full
> sentences and describe the "thing" a more thoroughly than the heading line
> above (or remove body if there's nothing more to say).
>


> Ack.
>


>  > +*
> > +*/
> > +typedef enum {
> > +     /** Use implementation default */
> > +     ODP_COMP_LEVEL_DEFAULT = 0,
> > +     /** Minimum compression (fastest in speed) */
> > +     ODP_COMP_LEVEL_MIN,
> > +     /** Maximum compression (slowest in speed) */
> > +     ODP_COMP_LEVEL_MAX,
> > +} odp_comp_level_t;
> > +
> > +/**
> > + * Comp API enumeration for huffman encoding
> > + *
> > + */
> > +typedef enum {
> > +     /** use implementation default to choose between compression
> > codes  */
> > +     ODP_COMP_HUFFMAN_CODE_DEFAULT = 0,
> > +     /** use fixed huffman codes */
> > +     ODP_COMP_HUFFMAN_CODE_FIXED,
> > +     /** use dynamic huffman coding */
> > +     ODP_COMP_HUFFMAN_CODE_DYNAMIC,
> > +} odp_comp_huffman_code_t;
> > +
> > +/**
> > + * Hash algorithms in a bit field structure
> > + *
> > + */
> > +typedef union odp_comp_hash_algos_t {
> > +     /** hash algorithms*/
> > +     struct {
> > +             /** SHA-1*/
> > +             uint32_t sha1  : 1;
> > +
> > +             /** SHA-2 with 256 bits of Message Digest*/
> > +             uint32_t sha256 : 1;
> > +
> > +     } bit;
> > +
> > +     /** All bits of the bit field structure
> > +     *
> > +     * This field can be used to set/clear all flags, or bitwise
> > +     * operations over the entire structure.
> > +     */
> > +     uint32_t all_bits;
> > +} odp_comp_hash_algos_t;
> > +
> > +/**
> > + * Comp algorithms in a bit field structure
> > + *
> > + */
> > +typedef union odp_comp_algos_t {
> > +     /** Compression algorithms */
> > +     struct {
> > +             /** ODP_COMP_ALG_NULL */
> > +             uint32_t null       : 1;
> > +
> > +             /** ODP_COMP_ALG_DEFLATE */
> > +             uint32_t deflate        : 1;
> > +
> > +             /** ODP_COMP_ALG_ZLIB */
> > +             uint32_t zlib        : 1;
> > +
> > +             /** ODP_COMP_ALG_LZS */
> > +             uint32_t lzs         :1;
> > +     } bit;
> > +
> > +     /** All bits of the bit field structure
> > +     *
> > +     * This field can be used to set/clear all flags, or bitwise
> > +     * operations over the entire structure. */
> > +     uint32_t all_bits;
> > +} odp_comp_algos_t;
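The bit-field-plus-all_bits pattern above follows the usual ODP capability convention; a minimal sketch of how an application would use it (the union below mirrors, but is not, the proposed odp_comp_algos_t):

```c
#include <stdint.h>

/* Mirror of the capability bit-field pattern in the patch */
typedef union {
	struct {
		uint32_t null    : 1;
		uint32_t deflate : 1;
		uint32_t zlib    : 1;
		uint32_t lzs     : 1;
	} bit;
	uint32_t all_bits; /* set/clear every flag in one write */
} comp_algos_t;

/* Test an individual algorithm bit */
static int deflate_supported(comp_algos_t capa)
{
	return capa.bit.deflate != 0;
}
```

Writing all_bits = 0 clears every flag at once, while individual bits can be set and queried through the bit struct, which is exactly how the other ODP capability unions behave.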
> > +
> > +/**
> > + * Compression Interface Capabilities
> > + *
> > + */
> > +typedef struct odp_comp_capability_t {
> > +     /** Maximum number of  sessions*/
> > +     uint32_t max_sessions;
> > +
> > +     /** Supported compression algorithms*/
> > +     odp_comp_algos_t comps;
> > +
> > +     /** Supported hash algorithms*/
> > +     odp_comp_hash_algos_t   hash;
> > +
> > +     /** Support level for synchrnous operation mode (ODP_COMP_SYNC)
> > +     *   User should set odp_comp_session_param_t:mode based on
> > +     *   support level as indicated by this param.
> > +     *   true - mode supported,
> > +     *   false - mode not supported
>
> Same comment as Dmitry. True/false are not odp_support_t values.
> Ack.
>


> > +     */
> > +     odp_support_t  sync;
> > +
> > +     /** Support level for asynchronous operation mode
> > (ODP_COMP_ASYNC)
>
> Typo: asynchronous
>
> Ack.
>


> > +     *   User should set odp_comp_session_param_t:mode param based
> > on
> > +     *   support level as indicated by this param.
> > +     *   true - mode supported,
> > +     *   false - mode not supported
> > +     *
> > +     */
> > +     odp_support_t async;
> > +} odp_comp_capability_t;
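Given the sync/async odp_support_t fields above, mode selection in the application could look like the sketch below. The enums are simplified stand-ins for odp_support_t and odp_comp_op_mode_t, and the preference for async is just one possible policy:

```c
/* Stand-ins for odp_support_t and odp_comp_op_mode_t */
typedef enum { SUPPORT_NO, SUPPORT_YES, SUPPORT_PREFERRED } support_t;
typedef enum { MODE_SYNC, MODE_ASYNC } op_mode_t;

/* Prefer async when offered, fall back to sync; an implementation is
 * expected to support at least one of the two modes. */
static int pick_mode(support_t sync_capa, support_t async_capa,
		     op_mode_t *mode)
{
	if (async_capa != SUPPORT_NO) {
		*mode = MODE_ASYNC;
		return 0;
	}
	if (sync_capa != SUPPORT_NO) {
		*mode = MODE_SYNC;
		return 0;
	}
	return -1; /* no usable mode reported */
}
```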
> > +
> > +/**
> > + * Hash algorithm capabilities
> > + *
> > + */
> > +typedef struct odp_comp_hash_alg_capability_t {
> > +     /** Digest length in bytes */
> > +     uint32_t digest_len;
> > +} odp_comp_hash_alg_capability_t;
> > +
> > +/**
> > + * Compression algorithm capabilities
> > + * structure for each algorithm.
> > + *
> > + */
> > +typedef struct odp_comp_alg_capability_t {
> > +     /** Boolean indicating alg support dictionary load
> > +     *
> > +     * true: yes
> > +     * false : no
> > +     *
> > +     * dictionary , if supported, consists of a pointer to character
> > +     * array
> > +     */
> > +     odp_bool_t support_dict;
>
> odp_support_t ?
>
> Already agreed to it in feedback to Dmitry.
>


> > +
> > +     /** Optional Maximum length of dictionary supported
> > +     *   by implementation of the algorithm.
> > +     *
> > +     *   Invalid if support_dict == false.
> > +     *
> > +     *   Use either the user input or the maximum allowed, whichever
> > +     *   is less. In that case, the implementation is responsible for
> > +     *   updating the dictionary length actually used.
> > +     *
> > +     */
> > +     uint32_t dict_len;
> > +
> > +     /* Maximum compression level supported by algo.
> > +     *  Indicates number of compression levels supported by
> > algorithm
> > +     *
> > +     * where,
> > +     *
> > +     * 1 means fastest compression i.e. output produced at
> > +     * best possible speed at the expense of compression quality,
> > and
> > +     *
> > +     * max_level means best compression i.e.output produced is best
> > possible
> > +     * compressed content at the expense of speed.
> > +     *
> > +     * Example, if max_level = 4 , it means algorithm supports four
> > levels
> > +     * compression. User can set this value from 1 up to 4 to select
> > between
> > +     * fastest compression to best compression.
> > +     * See RFC1950 for an example explanation to level.
> > +     *
> > +     * value 0 and 1, carry same interpretation and mean algorithm
> > works on
> > +     * fixed strategy to compress content.
> > +     *
> > +     */
> > +     uint32_t max_level;
> > +
> > +     /* hash algorithms supported along with compression operation
> > */
> > +     odp_comp_hash_algos_t hash_alg;
> > +} odp_comp_alg_capability_t;
> > +
> > +/**
> > + * Comp API generic buffer structure
> > + *
> > + */
> > +typedef struct odp_comp_buf_t {
> > +     /** pointer to character array */
> > +     uint8_t *buf;
> > +     /** length of the buf */
> > +     uint32_t len;
> > +} odp_comp_buf_t;
> > +
> > +/**
> > + * Comp API dictionary type
> > + *
> > + */
> > +typedef odp_comp_buf_t odp_comp_dict_t;
>
>
> Comp_buf is not used for anything else than this. It's better to typedef
> dict directly. If buf is defined later it may very well have different
> specification than dict.
>
> Ack.
>
> > +
> > +/**
> > + * Comp API algorithm specific parameters
> > + *
> > + */
> > +typedef struct odp_comp_alg_param_t {
> > +     struct comp_alg_def_param {
> > +             /** compression level where
> > +                 ODP_COMP_LEVEL_MIN <= level <=
> > ODP_COMP_LEVEL_MAX
> > +             */
> > +             odp_comp_level_t level;
> > +             /** huffman code to use */
> > +             odp_comp_huffman_code_t comp_code;
> > +     } deflate;
> > +     struct comp_alg_zlib_param {
> > +                     /** deflate algo params */
> > +                     struct comp_alg_def_param def;
> > +     } zlib;
> > +} odp_comp_alg_param_t;
> > +
> > +/**
> > + * Comp API data range specifier
> > + *
> > + */
> > +typedef union odp_comp_data_t {
> > +     struct odp_comp_pkt {
> > +             /** Offset from beginning of input */
> > +             odp_packet_t packet;
>
> This is packet, not offset.
> Ack.
>


> > +
> > +             /** Length of data to operate on */
> > +             odp_packet_data_range_t data_range;
>
> This is packet data range, not length.
> Ack.
>


> > +     } packet;
> > +} odp_comp_data_t;
> > +
> > + /**
> > + * Comp API session creation parameters
> > + *
> > + */
> > +typedef struct odp_comp_session_param_t {
> > +     /** Compress vs. Decompress operation */
> > +     odp_comp_op_t op;
> > +
> > +     /** Sync vs Async
> > +     *
> > +     * When mode = ODP_COMP_SYNC, odp_comp_operation() should be
> > called.
> > +     *
> > +     * When mode = ODP_COMP_ASYNC, odp_comp_operation_enq() should
> > be called.
> > +     *
> > +     * Use odp_comp_capability() for supported mode.
> > +     * Implementation should support at least one of the modes.
> > +     *
> > +     */
> > +     odp_comp_op_mode_t mode;
> > +
> > +     /** Compression algorithm
> > +     *
> > +     *  Use odp_comp_capability() for supported algorithms.
> > +     */
> > +     odp_comp_alg_t comp_alg;
> > +
> > +     /** Hash algorithm
> > +     *
> > +     *  Use odp_comp_alg_capability() for supported algorithms.
> > +     */
> > +     odp_comp_hash_alg_t hash_alg;
> > +
> > +     /** parameters specific to compression */
> > +     odp_comp_alg_param_t alg_param;
> > +
> > +     /** Async mode completion event queue
> > +     *
> > +     * When mode = ODP_COMP_ASYNC, User should wait on
> > ODP_COMP_COMPL_EVENT
> > +     * on this queue.
> > +     *
> > +     * By default, the implementation enqueues completion events in
> > +     * order of request submission and thus the queue is considered
> > +     * ordered.
> > +     *
> > +     * Please note, behavior could be changed or enhanced
> > +     * to queue event in-order-of their completion to enable
> > +     * performance-oriented application to leverage hw offered
> > parallelism.
> > +     * However, This will be subject to application requirement and
> > more
> > +     * explicit defined use-case.
> > +     *
> > +     */
> > +     odp_queue_t compl_queue;
> > +} odp_comp_session_param_t;
> > +
> > +/**
> > + * Comp API operation parameters.
> > + * Called to process each data unit.
> > + *
> > + */
> > +typedef struct odp_comp_op_param_t {
> > +     /** Session handle from creation */
> > +     odp_comp_session_t session;
> > +
> > +     /** User context */
> > +     void *ctx;
> > +
> > +     /** Boolean indicating End of data, where
> > +     *
> > +     *   true : last chunk
> > +     *
> > +     *   false: more to follow
> > +     *
> > +     * If set to true, indicates this is the last chunk of
> > +     * data. After the last chunk of data has been processed, the
> > +     * implementation should move the algorithm to stateless mode
> > +     * for the next batch of operations, i.e. reset history,
> > +     * insert 'End of Block' marker into compressed data stream(if
> > +     * supported by algo). Processing is considered complete
> > +     * when API returns with any status except
> > ODP_COMP_ERR_OUT_OF_SPACE.
> > +     * (See deflate/zlib for interpretation of stateless/stateful).
> > +     *
> > +     * For stateless compressions (ex ipcomp), last should be set to
> > 'true'
> > +     * for every input packet processing call.
> > +     *
> > +     * For compression+hash, digest will be available after
> > +     * last chunk is processed completely.
> > +     */
> > +     odp_bool_t last;
> > +
> > +     /** Input data */
> > +     odp_comp_data_t input;
> > +
> > +     /** placeholder for output data.
> > +     *
> > +     * For Compression/Decompression+hash session,
> > +     * output  will store both data and digest(with digest appended
> > at
> > +     * end-of-data). User should pass packet of sufficiently large
> > size
> > +     * to store digest.
> > +     *
> > +     */
> > +     odp_comp_data_t output;
> > +} odp_comp_op_param_t;
> > +
> > +/**
> > + * Comp API per operation result
> > + *
> > + */
> > +typedef struct odp_comp_op_result_t {
> > +     /** User context from request */
> > +     void *ctx;
> > +
> > +     /** Operation Return Code */
> > +     odp_comp_err_t err;
> > +
> > +     /** pointer to output.Valid when odp_comp_err_t is
> > +     * ODP_COMP_ERR_NONE
> > +     *
> > +     * Contain processed output for compression/decompression
> > operations, and
> > +     * both data and digest for Compression+hash Operations.
> > +     *
> > +     */
> > +     odp_comp_data_t output;
> > +} odp_comp_op_result_t;
> > +
> > +/**
> > + * Query comp capabilities
> > + *
> > + * Outputs comp capabilities on success.
> > + *
> > + * @param[out] capa   Pointer to capability structure for output
> > + *
> > + * @retval 0 on success
> > + * @retval <0 on failure
> > + */
> > +int odp_comp_capability(odp_comp_capability_t *capa);
> > +
> > +/**
> > + * Query supported compression algorithm capabilities
> > + *
> > + * Outputs all supported configuration options for the algorithm.
> > + *
> > + * @param      comp         Compression algorithm
> > + * @param[out] capa         Array of capability structures for output
> > + * @param      num          Maximum number of capability structures to
> > output
> > + *
> > + * @return Number of capability structures for the algorithm. If this
> > + is
> > larger
> > + *         than 'num', only 'num' first structures were output and
> > application
> > + *         may call the function again with a larger value of 'num'.
> > + * @retval <0 on failure
> > + */
> > +int odp_comp_alg_capability(odp_comp_alg_t comp,
> > +                         odp_comp_alg_capability_t capa[], int
> > num);
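The "returns the total count, fills at most num" convention above can be exercised with a mock like this; mock_alg_capability() is a hypothetical stand-in, not the proposed function:

```c
/* Hypothetical capability table with 5 records */
#define TOTAL_CAPS 5

typedef struct {
	int level; /* stand-in for a capability field */
} mock_capability_t;

/* Always return the total record count, filling at most 'num' entries */
static int mock_alg_capability(mock_capability_t capa[], int num)
{
	int fill = num < TOTAL_CAPS ? num : TOTAL_CAPS;
	int i;

	for (i = 0; i < fill; i++)
		capa[i].level = i + 1;
	return TOTAL_CAPS;
}
```

If the first call returns a value larger than num, the application allocates a larger array and calls again with that value, as the quoted doc text describes.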
> > +
> > + /**
> > +  * Query supported hash algorithm capabilities
> > +  *
> > +  * Outputs all supported configuration options for the algorithm.
> > +  *
> > +  * @param   hash         hash algorithm
> > +  * @param[out] capa      Array of capability structures for output
> > +  * @param   num          Maximum number of capability
> > structures to output
> > +  *
> > +  * @return Number of capability structures for the algorithm. If
> > + this is
> > larger
> > +  *      than 'num', only 'num' first structures were output and
> > application
> > +  *      may call the function again with a larger value of 'num'.
> > +  * @retval <0 on failure
> > +  */
> > +int odp_comp_hash_alg_capability(odp_comp_hash_alg_t hash,
> > +                              odp_comp_hash_alg_capability_t
> > capa[],
> > +                              int num);
> > +
> > + /**
> > + * Initialize comp session parameters
> > + *
> > + * Initialize an odp_comp_session_param_t to its default values for
> > + * all fields.
> > + *
> > + * @param param   Pointer to odp_comp_session_param_t to be initialized
> > + */
> > +void odp_comp_session_param_init(odp_comp_session_param_t *param);
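The _param_init convention referenced above typically zeroes the whole struct and then sets any non-zero defaults explicitly, so that newly added fields default safely. A sketch with stand-in fields (not the real odp_comp_session_param_t):

```c
#include <string.h>

typedef struct {
	int op;    /* stand-in for odp_comp_op_t */
	int mode;  /* stand-in for odp_comp_op_mode_t */
	int level; /* stand-in for odp_comp_level_t */
} mock_session_param_t;

enum { MOCK_MODE_SYNC = 0, MOCK_LEVEL_DEFAULT = 0 };

/* Zero everything, then set defaults explicitly */
static void mock_session_param_init(mock_session_param_t *param)
{
	memset(param, 0, sizeof(*param));
	param->mode = MOCK_MODE_SYNC;       /* default mode */
	param->level = MOCK_LEVEL_DEFAULT;  /* implementation-default level */
}
```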
> > +
> > + /**
> > + * Compression session creation
> > + *
> > + * Create a comp session according to the session parameters. Use
> > + * odp_comp_session_param_init() to initialize parameters into their
> > + * default values.
> > + *
> > + * @param param             Session parameters
> > + * @param session           Created session else
> ODP_COMP_SESSION_INVALID
> > + * @param status            Failure code if unsuccessful
> > + *
> > + * @retval 0 on success
> > + * @retval <0 on failure
> > + */
> > +int odp_comp_session_create(odp_comp_session_param_t *param,
> > +                         odp_comp_session_t *session,
> > +                         odp_comp_ses_create_err_t *status);
> > +
> > + /**
> > + * Comp session destroy
> > + *
> > + * Destroy an unused session. Result is undefined if session is being
> > used
> > + * (i.e. asynchronous operation is in progress).
> > + *
> > + * @param session           Session handle
> > + *
> > + * @retval 0 on success
> > + * @retval <0 on failure
> > + */
> > +int odp_comp_session_destroy(odp_comp_session_t session);
> > +
> > +/**
> > +* Comp set dictionary
> > +*
> > +* Should be called when there is no operation in progress i.e.
> > +* before initiating processing of first chunk of data and
> > +* after processing of last chunk of data is complete.
> > +*
> > +* @param session        Session handle
> > +* @param dict[in,out]           Pointer to dictionary.
> > +*                       implementation should update length of
> > dictionary
> > +*                       used at output.
> > +* @retval 0 on success
> > +* @retval <0 on failure
> > +*
> > +* @note:
> > +* Application should call odp_comp_alg_capability() to query
> > 'support_dict'
> > +* before making call to this API.
> > +*/
> > +int odp_comp_set_dict(odp_comp_session_t session,
> > +                   odp_comp_dict_t *dict);
> > +
> > + /**
> > + * Comp synchronous operation
> > + *
> > + * If the session is created in ODP_COMP_SYNC mode, this call waits
> > + * for the operation to complete and updates the result at output.
> > + *
> > + * If the session is created in ODP_COMP_ASYNC mode, this call fails
> > + * and updates the status code to ODP_COMP_ERR_NOT_SUPPORTED.
> > + *
> > + * If operation returns ODP_COMP_ERR_OUT_OF_SPACE, then application
> > should call
> > + * API again with valid output buffer (and no-more input) until call
> > completes
> > + * with status code except ODP_COMP_ERR_OUT_OF_SPACE.
> > + *
> > + * for compression+hash, call returns with hash appended to the end
> > + of
> > + * processed data.
> > + * User should compute processed data len =
> > + result.output.len-digest_len,
> > where
> > + * digest_len queried through odp_comp_hash_alg_capability()
> > + *
> > + * @param param[in]         Operation parameters.
> > + * @param result[out]       Result of operation.
> > + *
> > + * @retval 0 on success
> > + * @retval <0 on failure
> > + */
> > +int odp_comp_operation(odp_comp_op_param_t   *param,
> > +                    odp_comp_op_result_t  *result);
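The digest-appended-at-end rule quoted above (processed data len = result.output.len - digest_len, with digest_len from odp_comp_hash_alg_capability(), e.g. 20 bytes for SHA-1) amounts to the following small helpers; these are illustrative, not part of the proposed API:

```c
#include <stddef.h>

/* Length of the compressed payload, excluding the trailing digest;
 * caller must ensure output_len >= digest_len. */
static size_t payload_len(size_t output_len, size_t digest_len)
{
	return output_len - digest_len;
}

/* Pointer to the digest, which sits at the end of the output data */
static const unsigned char *digest_ptr(const unsigned char *out,
				       size_t output_len, size_t digest_len)
{
	return out + (output_len - digest_len);
}
```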
>
>
> I think two operations are better design as explained above - e.g.
> odp_comp_compress() and odp_comp_decompress(). Those may have difference
> also in behavior, e.g. application may always prepare enough space for
> compress() as resulting data length is always less than what was input.
>
> As I explained above, a session is operation specific. If we make these
two separate APIs, then it becomes a set of 4 APIs: odp_comp_xx() and
odp_comp_xx_enq() for each direction. Is it really necessary to separate
these out further?

>
>
> > +
> > +/**
> > + * Comp asynchronous operation.
> > + *
> > + * If session is created in ODP_COMP_ASYNC mode, result will be
> > +queued
> > + * to completion queue. application should monitor
> > +ODP_COMP_COMPL_EVENT
> > + * on queue.
> > + *
> > + * If session is created in ODP_COMP_SYNC mode, call fails with
> > +status
> > + * code ODP_COMP_ERR_NOT_SUPPORTED.
> > + *
> > + * If the operation updates the result structure with status
> > + * ODP_COMP_ERR_OUT_OF_SPACE, then the application
> > + * should call the API again with a valid output buffer (and no more
> > + * input) until the call completes with any other error code.
> > + * The application should either maintain context to associate events
> > + * with the input requests enqueued, OR provide a sufficiently large
> > + * buffer size to avoid ODP_COMP_ERR_OUT_OF_SPACE.
> > + *
> > + * @param param[in]     Operation parameters.
> > + *
> > + * @retval 0 on success
> > + * @retval <0 on failure
> > + */
> > +int odp_comp_operation_enq(odp_comp_op_param_t *param);
>
> Same here. Compress() vs. decompress()
> Answered above.
>


> > +
> > + /**
> > +  * Return Compression completion handle that is associated with event.
> > Valid
> > +  * for an async mode of operation.
> > +  *
> > +  * @param ev  ODP_EVENT_COMP_COMPL Event Handle
> > +  *
> > +  * @return    Compression Completion Handle
> > +  *
> > +  */
> > +odp_comp_compl_t odp_comp_compl_from_event(odp_event_t ev);
> > +
> > + /**
> > +  * Query result from completion handle. Valid for async mode.
> > +  *
> > +  * @param completion_event  Completion event handle as
> > +  *                       returned by
> > odp_comp_compl_from_event()
> > +  * @param result[out]       Pointer to result structure
> > +  *
> > +  *
> > +  */
> > +void odp_comp_compl_result(odp_comp_compl_t event,
> > +                        odp_comp_op_result_t *result);
> > +
> > + /**
> > +  * Convert comp completion handle to event handle. Valid for async
> > +mode
> > +  *
> > +  * @param completion_event  Completion event handle as returned by
> > +  *                       odp_comp_compl_from_event()
> > +  *
> > +  * @return Event handle
> > +  */
> > +odp_event_t odp_comp_compl_to_event(odp_comp_compl_t
> > +completion_event);
>
>
> Not really needed. Application can only read completion data and free the
> event.
>
> I took it from the ODP design principle of providing two-way type
> conversion, and I can envision its use in a multi-threaded design where
> one thread gets the completion handle and passes it to another thread,
> which then extracts the event from it. Such cases are not implemented
> yet, but it makes sense to provide conversion in both directions.
>


> -Petri
>
>
>
>
