Hi Dave,

XML signature, as realized by the W3C XML Signature Recommendation, is quite
inefficient -- not because of specific design decisions in the standard
(with the exception of XPath Filter 1.0), but because of the requirements
for canonicalization, as well as the power of XML Signature to authenticate
many references.

The Apache APIs are particularly bad at handling large XML documents (25MB
and up), but this is also the fault of Java-based parsing and DOM
manipulation.

Canonicalization is performed in both directions on the <SignedInfo> at the
very least (i.e., for Core Validation as well as Core Generation). The real
inefficiencies come into play when large node-sets are signed as siblings
of the <ds:Signature>, because these must be canonicalized per the W3C
standard no matter what. If you leave out canonicalization you can realize
significant performance gains, but then you will be non-interoperable.
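
For illustration, the mandatory canonicalization step looks roughly like
this with the Apache XML Security library (a sketch against the 1.x-era
API, not production code):

    import org.apache.xml.security.Init;
    import org.apache.xml.security.c14n.Canonicalizer;
    import org.w3c.dom.Element;

    public class C14NCost {
        // Canonical XML must be produced for <SignedInfo> during both
        // generation and validation, and for every signed node-set; for
        // a large sibling node-set this tree walk dominates the cost.
        static byte[] canonicalize(Element signedContent) throws Exception {
            Init.init();
            Canonicalizer c14n = Canonicalizer.getInstance(
                    Canonicalizer.ALGO_ID_C14N_OMIT_COMMENTS);
            return c14n.canonicalizeSubtree(signedContent);
        }
    }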

In the general case, XML Signature is usually combined with other XML-based
processing, such that when all is said and done, processing an "XML
Signature" consists mostly of node-set-centric operations, plus a sliver of
digesting and one or two "expensive" private key operations.
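
To make that breakdown concrete, the lone "expensive" private key operation
amounts to a single JCA call over the canonicalized <SignedInfo> octets,
along these lines (a sketch; SignStep is just an illustrative name):

    import java.security.PrivateKey;
    import java.security.Signature;

    public class SignStep {
        // The SignatureValue is computed over the canonicalized
        // <SignedInfo>; everything before this point is node-set work.
        static byte[] sign(byte[] canonicalSignedInfo, PrivateKey key)
                throws Exception {
            Signature rsa = Signature.getInstance("SHA1withRSA");
            rsa.initSign(key);
            rsa.update(canonicalSignedInfo);
            return rsa.sign(); // the one costly private key operation
        }
    }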

Hope this helps,

Blake Dournaee
Sarvega, Inc.
Senior Security Architect



-----Original Message-----
From: David Wall @ Yozons, Inc. [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 23, 2004 11:24 AM
To: Blake Dournaee; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; 'Girish Juneja'
Subject: Re: ThreeSignerContract example -- detached signatures examples?

Thanks, Blake.  I certainly have a much better understanding after your
clarifications, examples and tips.  You "da man!"  Many thanks.

Regarding this one comment of yours:

> XML Signatures are slow because XML processing is slow. There are just no
> two ways about it. 90 percent of the processing in an XML signature is XML
> parsing, canonicalization, and transformation. The private key and hashing
> operations are usually orders of magnitude faster, depending on the size of
> the documents.

My impression is that this may be the fault of the Apache APIs and not XML
DSigs in particular, though you can certainly tell I'm not (yet?) an expert
on either technology.  In fact, our digital signature technology operates on
XML data, but not in such a generalized way, and it's extremely fast because
the XML data and digital signature code are customized to work together.

However, it seems to me that if the XML DSig world allowed me to do the
canonicalization, transforms, and even the Reference digesting once and then
save the resulting data, I could do the actual signature parts very quickly.
I presume that verification already makes such assumptions and doesn't force
re-canonicalization or re-running of transforms, relying instead on the
Reference DigestValues.  After all, once I've done most of those things,
nothing should change, and only if I changed the document at a later date
would I need to redo any of them.  If I had a "volatile XML doc" I would
naturally not take those steps, but when my XML data is "in final form," it
would be easy to process the data then and save only the "ready to sign"
version.

The SignatureValue and KeyInfo are about the only things particular to a
true signing, as all the digesting and canonicalization really only needs to
occur once if we could save it off.  There's generally no need to keep
non-canonicalized data, so if we took those steps when creating the XML, we
could "assume" it was ready for the signature process.  And if it was not,
sure, you'd get invalid signatures, but then you'd deserve invalid
signatures -- after all, you could always redo those steps right before
signing just to be safe.  Still, I think in many situations processing could
be improved simply by having the basic steps done once and saved as the new
original XML doc.

Is that even possible with XML DSigs, or is it more a limitation of the
Apache APIs that precludes separating out all of the transforms and
canonicalizations?

David
