Just a Resend.

On Tue, Aug 8, 2017 at 10:21 PM, shally verma
<shallyvermacav...@gmail.com> wrote:
> Petri/Berry
>
> As per discussion in today's call, here is my summary:
>
> Two new requirements added:
>
> 1. Support compression/decompression of multiple ranges within one
> single packet, and
> 2. Operating on multiple packets within one call, where each packet
> may further carry multiple ranges.
>
> To meet the 1st requirement, the current proposal calls
> odp_packet_compress/decomp() once per packet per range, as this
> allows easy and clean management of output buffers and more
> flexibility across possible application use cases:
> - the call can be made in stateful or stateless mode for each range,
> - headers can be modified according to each range's output,
> - applications manage output buffers at their end,
> - async notification of an out-of-space condition is easily maintained.
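The per-range flow above can be sketched as follows. This is a minimal stand-in in plain C, not the proposed ODP API: compress_range(), data_range_t and comp_err_t are illustrative placeholders, and the "compression" is a null copy (output == input) just to show that each range gets its own output buffer and its own out-of-space status.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative placeholder types, not the proposed ODP API. */
typedef struct {
	uint32_t offset;   /* start of the range within the packet data */
	uint32_t len;      /* number of bytes in the range */
} data_range_t;

typedef enum {
	COMP_ERR_NONE = 0,
	COMP_ERR_OUT_OF_SPACE
} comp_err_t;

/* Null "compression" stand-in: copies one range into its own output
 * buffer, or reports out-of-space for that range alone, leaving the
 * other ranges of the packet unaffected. */
static comp_err_t compress_range(const uint8_t *in, data_range_t r,
				 uint8_t *out, uint32_t out_cap,
				 uint32_t *out_len)
{
	if (r.len > out_cap)
		return COMP_ERR_OUT_OF_SPACE;
	memcpy(out, in + r.offset, r.len);
	*out_len = r.len;
	return COMP_ERR_NONE;
}
```

Calling this once per range is what makes the per-range error handling clean: a failed range does not invalidate the results already produced for earlier ranges.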
>
> Currently we do support file-based compression using ODP APIs, but on
> a stream of data bytes where 1 chunk occupies 1 packet (which can be
> segmented or unsegmented), not on a stream of packets.
> Is there a use case for a file which consists of a stream of packets
> (where a packet is of type HTTP packet)?
>
> The new proposal says "allow multiple ranges within one single call".
> A few concerns were raised about this; the key ones include:
>
> - How do we know whether each range should be compressed in stateful
> or stateless mode, i.e. is each range independent or dependent?
> - How do we handle an out_of_space error while operating on an
> individual range, especially when the implementation tries to use HW
> parallelization for better throughput?
> - How do we support the same design in asynchronous mode?
> - How do we see it: as a performance gain? As ease of use? Or might
> it end up introducing more overhead to the implementation?
>
> Please feed back your inputs on the design issues outlined above.
>
> For now we can focus on requirement #1, as the design for requirement
> #2 will be based on the outcome of #1.
>
> Thanks
> Shally
>
> On Tue, Aug 8, 2017 at 6:38 PM, Savolainen, Petri (Nokia - FI/Espoo)
> <petri.savolai...@nokia.com> wrote:
>>
>>
>>> -----Original Message-----
>>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>>> Github ODP bot
>>> Sent: Friday, August 04, 2017 4:00 PM
>>> To: lng-odp@lists.linaro.org
>>> Subject: [lng-odp] [PATCH API-NEXT v8 1/1] comp: compression spec
>>>
>>> From: Shally Verma <shally.ve...@cavium.com>
>>>
>>> Signed-off-by: Shally Verma <shally.ve...@cavium.com>
>>> Signed-off-by: Mahipal Challa <mahipal.cha...@cavium.com> Cc
>>> prasadathreya.naray...@cavium.com
>>> ---
>>> /** Email created from pull request 102 (1234sv:api-next)
>>>  ** https://github.com/Linaro/odp/pull/102
>>>  ** Patch: https://github.com/Linaro/odp/pull/102.patch
>>>  ** Base sha: 8390f890d4bd2babb63a24f7b15d2f4763e44050
>>>  ** Merge commit sha: fbdff8c82a19f5b640ae299204b3bb1bbbefdccb
>>>  **/
>>>  include/odp/api/spec/comp.h | 815
>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 815 insertions(+)
>>>  create mode 100644 include/odp/api/spec/comp.h
>>>
>>> diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h
>>> new file mode 100644
>>> index 00000000..2956094c
>>> --- /dev/null
>>> +++ b/include/odp/api/spec/comp.h
>>> @@ -0,0 +1,815 @@
>>> +/* Copyright (c) 2013, Linaro Limited
>>
>> Year 2017
>>
>>> +
>>> +/**
>>> + * Comp API hash algorithm
>>> + *
>>> + */
>>> +typedef enum {
>>> +     /** ODP_COMP_HASH_ALG_NONE*/
>>
>> This kind of comment is not very helpful. Each enumeration needs an
>> explanation - like odp_comp_alg_t below.
>>
>>> +     ODP_COMP_HASH_ALG_NONE,
>>> +     /** ODP_COMP_HASH_ALG_SHA1*/
>>> +     ODP_COMP_HASH_ALG_SHA1,
>>> +     /**  ODP_COMP_HASH_ALG_SHA256*/
>>> +     ODP_COMP_HASH_ALG_SHA256
>>> +} odp_comp_hash_alg_t;
>>> +
>>> +/**
>>> + * Comp API compression algorithm
>>> + *
>>> + */
>>> +typedef enum {
>>> +     /** No algorithm specified.
>>> +      * Means no compression, output == input.
>>> +      * if provided, no operation (compression/decompression or
>>> hash)
>>> +      * applied on input. Added for testing purpose.
>>> +      */
>>> +     ODP_COMP_ALG_NULL,
>>> +     /** DEFLATE - RFC1951 */
>>> +     ODP_COMP_ALG_DEFLATE,
>>> +     /** ZLIB - RFC1950 */
>>> +     ODP_COMP_ALG_ZLIB,
>>> +     /** LZS */
>>> +     ODP_COMP_ALG_LZS
>>> +} odp_comp_alg_t;
>>> +
>>> +
>>> +/**
>>> + * Hash algorithms in a bit field structure
>>> + *
>>> + */
>>> +typedef union odp_comp_hash_algos_t {
>>> +     /** hash algorithms */
>>> +     struct {
>>> +             /** SHA-1 */
>>> +             uint32_t sha1  : 1;
>>> +
>>> +             /** SHA with 256 bits of Message Digest */
>>> +             uint32_t sha256 : 1;
>>
>>
>> The algorithm definitions need to be more explicit: SHA-1, SHA-256, ...
>> algorithm (SHA-2 would also do, but we use SHA-256 in the crypto API
>> since it seems to be used by standards).
>>
>> Actually, these explanations should go under the enum definitions and
>> then just refer to those enums here - like odp_comp_algos_t below.
>>
>>
>>> +
>>> +     } bit;
>>> +
>>> +     /** All bits of the bit field structure
>>> +      *
>>> +      * This field can be used to set/clear all flags, or bitwise
>>> +      * operations over the entire structure.
>>> +      */
>>> +     uint32_t all_bits;
>>> +} odp_comp_hash_algos_t;
>>> +
>>> +/**
>>> + * Comp algorithms in a bit field structure
>>> + *
>>> + */
>>> +typedef union odp_comp_algos_t {
>>> +     /** Compression algorithms */
>>> +     struct {
>>> +             /** ODP_COMP_ALG_NULL */
>>> +             uint32_t null       : 1;
>>> +
>>> +             /** ODP_COMP_ALG_DEFLATE */
>>> +             uint32_t deflate    : 1;
>>> +
>>> +             /** ODP_COMP_ALG_ZLIB */
>>> +             uint32_t zlib       : 1;
>>> +
>>> +             /** ODP_COMP_ALG_LZS */
>>> +             uint32_t lzs        :1;
>>> +     } bit;
>>> +
>>> +     /** All bits of the bit field structure
>>> +      * This field can be used to set/clear all flags, or bitwise
>>> +      * operations over the entire structure.
>>> +      */
>>> +     uint32_t all_bits;
>>> +} odp_comp_algos_t;
>>> +
>>> +/**
>>> + * Compression Interface Capabilities
>>> + *
>>> + */
>>> +typedef struct odp_comp_capability_t {
>>> +     /** Maximum number of  sessions */
>>> +     uint32_t max_sessions;
>>> +
>>> +     /** Supported compression algorithms */
>>> +     odp_comp_algos_t comp_algs;
>>
>> No need to save one character => comp_algos
>>
>>> +
>>> +     /** Supported hash algorithms. */
>>> +     odp_comp_hash_algos_t hash_algs;
>>
>> hash_algos
>>
>>> +
>>> +     /* sync/async mode of operation support.
>>> +      * Implementation should support atleast one of the mode.
>>> +      */
>>
>>
>> "mode" field definition missing on this line?
>>
>>
>>> +
>>> +     /** Support type for synchronous operation mode
>>> (ODP_COMP_SYNC).
>>> +      *  User should set odp_comp_session_param_t:mode based on
>>> +      *  support level as indicated by this param.
>>> +      */
>>> +     odp_support_t sync;
>>> +
>>> +     /** Support type for asynchronous operation mode
>>> (ODP_COMP_ASYNC).
>>> +      *  User should set odp_comp_session_param_t:mode param based
>>> on
>>> +      *  support level as indicated by this param.
>>> +      */
>>> +     odp_support_t async;
>>> +} odp_comp_capability_t;
>>> +
>>> +/**
>>> + * Hash algorithm capabilities
>>> + *
>>> + */
>>> +typedef struct odp_comp_hash_alg_capability_t {
>>> +     /** Digest length in bytes */
>>> +     uint32_t digest_len;
>>> +} odp_comp_hash_alg_capability_t;
>>> +
>>> +/**
>>> + * Compression algorithm capabilities structure for each algorithm.
>>> + *
>>> + */
>>> +typedef struct odp_comp_alg_capability_t {
>>> +     /** Enumeration indicating alg support for dictionary load */
>>> +     odp_support_t support_dict;
>>> +
>>> +     /** Optional Maximum length of dictionary supported
>>> +      *   by implementation of the algorithm.
>>> +      *
>>> +      *   Invalid if support_dict == ODP_SUPPORT_NO.
>>> +      *
>>> +      *   Implementation use dictionary of length less than or equal
>>> to value
>>> +      *   indicated by dict_len. if set to 0 and if support_dict ==
>>> +      *   ODP_SUPPORT_YES, then implementation will use dictionary
>>> length
>>> +      *   less than or equal to user input length in
>>> odp_comp_set_dict()
>>> +      *   and update used dictionary length at output of the call.
>>> +      *
>>> +      */
>>> +     uint32_t dict_len;
>>> +
>>> +     /* Maximum compression level supported by implementation of
>>> this algo.
>>> +      *  Indicates number of compression levels supported by
>>> implementation,
>>> +      *
>>> +      * where,
>>> +      *
>>> +      * 1 means fastest compression i.e. output produced at
>>> +      * best possible speed at the expense of compression quality,
>>> and
>>> +      *
>>> +      * max_level means best compression i.e.output produced is best
>>> possible
>>> +      * compressed content at the expense of speed.
>>> +      *
>>> +      * Example, if max_level = 4 , it means algorithm supports four
>>> levels
>>> +      * of compression from value 1 up to 4. User can set this value
>>> from
>>> +      * 1 (fastest compression) to 4 (best compression).
>>> +      * See RFC1950 for an example explanation to level.
>>> +      *
>>> +      * Value 0 mean implementation use its default value.
>>> +      *
>>> +      */
>>> +     uint32_t max_level;
>>> +
>>> +     /* Supported hash algorithms */
>>> +     odp_comp_hash_algos_t hash_alg;
>>> +} odp_comp_alg_capability_t;
>>> +
>>> +/**
>>> + * Comp API dictionary type
>>> + *
>>> + */
>>> +typedef struct odp_comp_dict_t {
>>> +     /** pointer to character array */
>>> +     uint8_t *buf;
>>
>> Is the data input only? If it is, then this should be a const data
>> pointer.
>>
>> What is expected about the format of the data? Null-terminated
>> strings? If it's strings, then e.g. const char * would be better.
>>
>>
>>> +     /** length of the dictionary. */
>>> +     uint32_t len;
>>> +} odp_comp_dict_t;
>>> +
>>> +/**
>>> + * Comp API algorithm specific parameters
>>> + *
>>> + */
>>> +typedef struct odp_comp_alg_param_t {
>>> +     struct comp_alg_def_param {
>>> +             /** compression level where
>>> +              * ODP_COMP_LEVEL_MIN <= level <= ODP_COMP_LEVEL_MAX
>>> +              */
>>> +             odp_comp_level_t level;
>>> +             /** huffman code to use */
>>> +             odp_comp_huffman_code_t comp_code;
>>> +     } deflate;
>>
>> Doxygen comment missing? Is this filled in only if the algorithm is
>> ALG_DEFLATE?
>>
>>
>>> +     struct comp_alg_zlib_param {
>>> +                     /** deflate algo params */
>>> +                     struct comp_alg_def_param def;
>>> +     } zlib;
>>
>> Same here.
>>
>>> +} odp_comp_alg_param_t;
>>> +
>>> +/**
>>> + * Comp API data range specifier
>>> + *
>>> + */
>>> +typedef union odp_comp_data_t {
>>> +     struct {
>>> +             /** packet */
>>> +             odp_packet_t packet;
>>> +
>>> +             /** packet data range to operate on  */
>>> +             odp_packet_data_range_t data_range;
>>
>>
>> Barry has indicated a use case for defining multiple ranges per
>> packet. How will that be handled?
>>
>>
>>> +     } pkt;
>>> +} odp_comp_data_t;
>>
>>
>> Packet is quite deep in parameter structures. E.g. in compress call
>>
>> param.input.pkt.packet            = pkt;
>> param.input.pkt.data_range.offset = 0;
>> param.input.pkt.data_range.len    = len;
>>
>> odp_comp_compress(&param, &result)
>>
>>
>>
>> I'd pull up the packet handle(s) from param, and maybe have separate calls 
>> for full packets and ranges:
>>
>> int odp_comp_packet(const odp_packet_t pkt_in[],
>>                     odp_packet_t pkt_out[],
>>                     const odp_comp_op_param_t param[],
>>                     int num_pkt);
>>
>> typedef struct odp_comp_packet_range_t {
>>         int num_range;
>>
>>         struct {
>>                 uint32_t offset;
>>                 uint32_t len;
>>         } range[];
>>
>> } odp_comp_packet_range_t;
>>
>> int odp_comp_packet_range(const odp_packet_t pkt_in[],
>>                           const odp_comp_packet_range_t range_in[],
>>                           odp_packet_t pkt_out[],
>>                           // also range_out needed ??
>>                           const odp_comp_op_param_t param[],
>>                           int num_pkt);
>>
>>
>> Then later on, direct memory address input/output would look something like 
>> this:
>>
>> int odp_comp_mem(uint8_t *ptr_in[],
>>                  uint32_t len_in[],
>>                  uint8_t *ptr_out[],
>>                  uint32_t *len_out[],
>>                  const odp_comp_op_param_t param[],
>>                  int num_ptr);
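If the range structure suggested above were adopted, filling it might look like this. A hedged usage sketch: the struct is copied from the suggestion (it is not part of any released ODP header), and range_alloc() is a hypothetical helper, needed because the flexible array member makes the caller responsible for sizing the allocation.

```c
#include <stdlib.h>
#include <stdint.h>

/* The range descriptor as suggested in the review; not part of any
 * released ODP header. */
typedef struct odp_comp_packet_range_t {
	int num_range;

	struct {
		uint32_t offset;
		uint32_t len;
	} range[];

} odp_comp_packet_range_t;

/* Hypothetical helper: because of the flexible array member, the
 * caller must size the allocation for 'num' ranges explicitly. */
static odp_comp_packet_range_t *range_alloc(int num)
{
	odp_comp_packet_range_t *r =
		malloc(sizeof(*r) + (size_t)num * sizeof(r->range[0]));

	if (r)
		r->num_range = num;
	return r;
}
```

One descriptor per packet in pkt_in[] would then be passed as range_in[] to the proposed odp_comp_packet_range() call.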
>>
>>
>>> +
>>> + /**
>>> + * Comp API session creation parameters
>>> + *
>>> + */
>>> +typedef struct odp_comp_session_param_t {
>>> +     /** Compress vs. Decompress operation */
>>> +     odp_comp_op_t op;
>>> +
>>> +     /** Sync vs Async
>>> +      *
>>> +      * When mode = ODP_COMP_SYNC,
>>> odp_comp_compress()/odp_comp_decomp()
>>> +      * should be called.
>>> +      *
>>> +      * When mode = ODP_COMP_ASYNC, odp_comp_compress_enq()/
>>> +      * odp_comp_decomp_enq() should be called.
>>> +      *
>>> +      * Use odp_comp_capability() for supported mode.
>>> +      *
>>> +      */
>>> +     odp_comp_op_mode_t mode;
>>> +
>>> +     /** Compression algorithm
>>> +      *
>>> +      *  Use odp_comp_capability() for supported algorithms.
>>> +      */
>>> +     odp_comp_alg_t comp_alg;
>>> +
>>> +     /** Hash algorithm
>>> +      *
>>> +      *  Use odp_comp_alg_capability() for supported hash algo for
>>> +      *  compression algo given as comp_alg. Implementation should
>>> not
>>> +      *  support hash only operation on data. output should always
>>> contain
>>> +      *  data + hash.
>>> +      *
>>> +      */
>>> +     odp_comp_hash_alg_t hash_alg;
>>> +
>>> +     /** parameters specific to compression */
>>> +     odp_comp_alg_param_t alg_param;
>>> +
>>> +     /** Async mode completion event queue
>>> +      *
>>> +      * When mode = ODP_COMP_ASYNC, user should wait on
>>> ODP_EVENT_PACKET
>>> +      * with subtype ODP_EVENT_PACKET_COMP on this queue.
>>> +      *
>>> +      * By default, implementation enques completion events in-
>>> order-of
>>> +      * request submission and thus queue is considered ordered.
>>> +      *
>>> +      * Please note, behavior could be changed or enhanced
>>> +      * to queue event in-order-of their completion to enable
>>> +      * performance-oriented application to leverage hw offered
>>> parallelism.
>>> +      * However, this will be subject to application requirement and
>>> more
>>> +      * explicit defined use-case.
>>> +      *
>>> +      */
>>> +     odp_queue_t compl_queue;
>>> +} odp_comp_session_param_t;
>>> +
>>> +/**
>>> + * Comp API operation parameters.
>>> + * Called to process each data unit.
>>> + *
>>> + */
>>> +typedef struct odp_comp_op_param_t {
>>> +     /** Session handle from creation */
>>> +     odp_comp_session_t session;
>>> +
>>> +     /** User context */
>>> +     void *ctx;
>>> +
>>> +     /** Boolean indicating End of data, where
>>> +      *
>>> +      *   true : last chunk
>>> +      *
>>> +      *   false: more to follow
>>> +      *
>>> +      * If set to true, indicates this is the last chunk of
>>> +      * data. After processing of last chunk of data is complete
>>> i.e.
>>> +      * call returned with any error code except
>>> ODP_COMP_ERR_OUT_OF_SPACE,
>>> +      * implementation should move algorithm to stateless mode
>>> +      * for next of batch of operation i.e. reset history,
>>> +      * insert 'End of Block' marker into compressed data stream(if
>>> +      * supported by algo).See deflate/zlib for interpretation of
>>> +      * stateless/stateful.
>>> +      *
>>> +      * For stateless compressions (ex ipcomp), last should be set
>>> to 'true'
>>> +      * for every input packet processing call.
>>> +      *
>>> +      * For compression + hash, digest will be available after
>>> +      * last chunk is processed completely. In case of
>>> +      * ODP_COMP_ERR_OUT_OF_SPACE, application should keep on
>>> calling
>>> +      * odp_comp_xxx() API with more output buffer unless call
>>> returns
>>> +      * with ODP_COMP_ERR_NONE or other failure code except
>>> +      *  ODP_COMP_ERR_OUT_OF_SPACE.
>>> +      */
>>> +     odp_bool_t last;
>>> +
>>> +     /** Input data */
>>> +     odp_comp_data_t input;
>>
>> I'd move this out of the struct, since it's not an operation
>> parameter but the target of the operation. From call to call,
>> everything else may remain constant, but the target is not.
>>
>>> +
>>> +     /** placeholder for output data.
>>> +      *
>>> +      * For Compression/Decompression+hash session,
>>> +      * output  will store both data and digest(with digest appended
>>> at
>>> +      * end-of-data). User should pass packet of sufficiently large
>>> size
>>> +      * to store digest.
>>> +      *
>>> +      */
>>> +     odp_comp_data_t output;
>>
>> Same here.
>>
>>> +} odp_comp_op_param_t;
>>> +
>>> +/**
>>> + * Comp API per operation result
>>> + *
>>> + */
>>> +typedef struct odp_comp_op_result_t {
>>> +     /** User context from request */
>>> +     void *ctx;
>>> +
>>> +     /** Operation Return Code */
>>> +     odp_comp_err_t err;
>>> +
>>> +     /** Pointer to output.Valid when odp_comp_err_t is
>>> +      * ODP_COMP_ERR_NONE or ODP_COMP_ERR_OUT_OF_SPACE
>>> +      *
>>> +      * Contain data after compression/decompression operation,
>>> +      * or data + digest for compression/decompression + hash
>>> operation.
>>> +      *
>>> +      */
>>> +     odp_comp_data_t output;
>>
>>
>> This would also go out of the struct.
>>
>>
>>> +} odp_comp_op_result_t;
>>> +
>>
>> -Petri
>>
>>
