Hi,

I am unfamiliar with the guts of existing commercial ASN.1 tool
implementations, since most of my work has been with limited free
software implementations of ASN.1.  I am curious how various tool
vendors have implemented BER encoders and decoders for certain nested
types.  For example, given:

        Foo ::= SEQUENCE {
                x       INTEGER
        }
        SignedFoo ::= SEQUENCE {
                foo     Foo,
                sig     OCTET STRING -- signature of BER encoding of foo
        }

Is a tool implementor likely to provide an option allowing for the
encoder for "SignedFoo" to be passed the encoding of a "Foo"?
Likewise, and perhaps more importantly, is a tool implementor likely
to provide an option allowing for the decoder for "SignedFoo" to
return the contained encoding of "Foo" without modification?  This
latter case is useful if the sender is not necessarily sending a DER
encoding, for example.

On a somewhat related note, given:

        Foo ::= SEQUENCE {
                a       INTEGER,
                b       OCTET STRING
        }
        FooBasic ::= Foo
        FooExtended ::= SEQUENCE {
                COMPONENTS OF Foo,
                c       INTEGER
        }

would a C language tool implementor be likely to generate
encoders/decoders for "FooBasic" and "FooExtended" that can use a
common structure for storing the C language representations of the
types?

Now for a somewhat more complex example (taken from tentative
in-progress work to revise the Kerberos protocol):

        RFC1510String   ::= GeneralString (IA5String)
        KerberosString  ::= CHOICE {
                rfc1510         RFC1510String,
                utf8            UTF8String,
                ...
        }

        Realm{StrType}          ::= StrType
        PrincipalName{StrType}  ::= SEQUENCE {
                name-type       [0] Int32,
                name-string     [1] SEQUENCE OF StrType
        }

        Signed{InnerType}       ::= SEQUENCE {
                cksum   [0] Checksum OPTIONAL,
                inner   [1] InnerType,
                ...
        }

        Ticket          ::= CHOICE {
                ticket1         [APPLICATION 1] Ticket1,
                ticket2         [APPLICATION 4] Signed{Ticket2}
        }

        TicketCommon{StrType}   ::= SEQUENCE {
                tkt-vno         [0] INTEGER,
                realm           [1] Realm{StrType},
                sname           [2] PrincipalName{StrType},
                enc-part        [3] EncryptedData -- EncTicketPart --
        }

        Ticket1         ::= TicketCommon{RFC1510String}
        Ticket2         ::= SEQUENCE {
                COMPONENTS OF TicketCommon{KerberosString},
                extensions      [4] TicketExtensions OPTIONAL,
                ...
        }

Would the C language encoders/decoders for the "Ticket" type be likely
to be able to use a common structure (perhaps with union members for
the parameterized types) for storing the C language representation of
the type?  Or would it be more likely that the tool would use
completely different C structures for storing "Ticket1" and "Ticket2"?

Basically, I know that I can make certain optimizations when
hand-coding encoders and decoders, and I was wondering whether
commercial tools would generate encoders and decoders that would make
similar optimizations.  Certainly I could see an off-the-shelf tool
forsaking such optimizations for the sake of simplicity.

What I'm pondering is whether I should forsake some precision in
specifying the differences between "Ticket1" and "Ticket2" (for
example) in order to make it more likely that an off-the-shelf tool
will generate encoders/decoders that use substantially common code
paths and data structures for the two types.  I would welcome any
opinions on this matter.

Thanks,
---Tom
