(Continuing top posting to keep thread consistent).

First of all, if the client itself is compromised, the file content can be tampered with just before you do whatever it is you do to assure the server that this is what the user of the client wanted. There is no way to fix that other than keeping the client free of compromise.

Now, the common solution used by driver and program signing schemes such as Symbian Signed (discontinued), Windows driver package signing for compatibility, Windows 10 driver binary signing, and the older Windows Mobile 5.x/6.x signing is this:

1. Client has a unique X.509 certificate with a private key known only
   to that client.  It doesn't matter who issued the certificate, but
   the server needs to have its own known genuine copy of that
   certificate.  That certificate should be such that no one except
   that client and that server trusts things signed by it.
2. Client signs the file/message with that certificate's private key
   exactly as if it were the final signature (even though almost no
   one will trust it); a rough OpenSSL CMS sketch of steps 2 to 4
   follows the list.
3. Client sends signed file/message to server.
4. Server verifies the signed file/message using its private list of
   trusted client certificates.
5. Server maps the verified signature to the identity to be used for
   the final signature.
6. Server talks to its closely related CA to get a brand new
   single-use-only certificate for the identity (with a serial number
   added as an extra subject name element).
7. Server removes the client signature and signs the file/message with
   the brand new certificate for the final identity.
8. Server throws away the private key, so that no more files can be
   signed with that certificate, ever.
9. If the server ever signs anything by mistake, it (or an admin if the
   server was permanently compromised) asks its closely related CA to
   revoke the affected single-use certificate.
10. If the client certificate needs to be revoked because the certificate
   or its user was compromised at some current or past date/time, the
   related CA revokes all the single-use certificates issued for that
   identity since that date/time.  Other/replacement client certificates
   for the same (visible) identity remain valid, because the single-use
   certificates for their requests were never revoked.
11. Note that in this setup there is no need for a time-stamping
   service; simply give the single-use certificate the long lifetime
   you want, and rely on the uncompromised server diligently deleting
   the private key within a few seconds of creating it.
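
Purely as an illustration of steps 2 to 4 (a sketch, not production code), the signing and verification could be done with the OpenSSL CMS API roughly like this. The file names (payload.bin, payload.sig, client.pem, client.key) are invented for the example, the client key is assumed to be an unencrypted PEM file, and error handling is kept minimal:

    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/cms.h>
    #include <openssl/err.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    /* Step 2: client signs the payload with its own certificate and key,
     * producing a detached, DER-encoded CMS signature. */
    static int client_sign(const char *payload, const char *sigfile,
                           const char *certfile, const char *keyfile)
    {
        BIO *cbio = BIO_new_file(certfile, "r");
        BIO *kbio = BIO_new_file(keyfile, "r");
        BIO *data = BIO_new_file(payload, "rb");
        BIO *out  = BIO_new_file(sigfile, "wb");
        X509 *cert = PEM_read_bio_X509(cbio, NULL, NULL, NULL);
        EVP_PKEY *key = PEM_read_bio_PrivateKey(kbio, NULL, NULL, NULL);
        CMS_ContentInfo *cms =
            CMS_sign(cert, key, NULL, data, CMS_DETACHED | CMS_BINARY);
        int ok = cms != NULL && i2d_CMS_bio(out, cms) == 1;
        CMS_ContentInfo_free(cms);
        EVP_PKEY_free(key); X509_free(cert);
        BIO_free(cbio); BIO_free(kbio); BIO_free(data); BIO_free(out);
        return ok;
    }

    /* Steps 3-4: server verifies the detached signature against its own
     * known-genuine copy of the client certificate only: no chain
     * building, no certificates taken from the message itself. */
    static int server_verify(const char *payload, const char *sigfile,
                             const char *trusted_client_cert)
    {
        BIO *cbio = BIO_new_file(trusted_client_cert, "r");
        BIO *sig  = BIO_new_file(sigfile, "rb");
        BIO *data = BIO_new_file(payload, "rb");
        X509 *client = PEM_read_bio_X509(cbio, NULL, NULL, NULL);
        STACK_OF(X509) *trusted = sk_X509_new_null();
        sk_X509_push(trusted, client);
        CMS_ContentInfo *cms = d2i_CMS_bio(sig, NULL);
        int ok = cms != NULL &&
            CMS_verify(cms, trusted, NULL, data, NULL,
                       CMS_BINARY | CMS_NOINTERN | CMS_NO_SIGNER_CERT_VERIFY);
        CMS_ContentInfo_free(cms);
        sk_X509_free(trusted); X509_free(client);
        BIO_free(cbio); BIO_free(sig); BIO_free(data);
        return ok;
    }

    int main(void)
    {
        if (!client_sign("payload.bin", "payload.sig",
                         "client.pem", "client.key")) {
            ERR_print_errors_fp(stderr);
            return 1;
        }
        if (!server_verify("payload.bin", "payload.sig", "client.pem")) {
            ERR_print_errors_fp(stderr);
            return 2;
        }
        puts("client signature verified against the server's own copy");
        return 0;
    }

In step 7 the server would build the final signature the same way, only with the single-use certificate and key instead of the client's.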

Of course, you don't need to do this entire "temporary certificate" dance (steps 6 and 8 to 11).  For instance, if the server will be using certificates issued by one of the pay-per-certificate public CAs, it will instead need to keep around a certificate/private-key pair for each public identity, and may need to use a public time-stamping service just as if it were a normal direct end-signer.
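
If you do keep the single-use approach, the core of steps 6 to 8 on the server might look roughly like the sketch below. The call ca_issue_single_use_cert() is a made-up placeholder for whatever interface your closely related CA offers; the point of the sketch is only that the throw-away key lives in memory, is used for exactly one signature, and is then freed without ever being written out.

    #include <openssl/bio.h>
    #include <openssl/cms.h>
    #include <openssl/evp.h>
    #include <openssl/rsa.h>
    #include <openssl/x509.h>

    /* Hypothetical placeholder: hand a CSR to the closely related CA and
     * get back the freshly issued single-use certificate (step 6). */
    extern X509 *ca_issue_single_use_cert(X509_REQ *req);

    /* Steps 6-8: generate a throw-away key, obtain a single-use
     * certificate for it, sign the verified content, discard the key. */
    CMS_ContentInfo *resign_once(BIO *verified_content, const char *identity)
    {
        EVP_PKEY *key = NULL;
        EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_RSA, NULL);
        EVP_PKEY_keygen_init(kctx);
        EVP_PKEY_CTX_set_rsa_keygen_bits(kctx, 3072);
        EVP_PKEY_keygen(kctx, &key);      /* key exists only in memory */
        EVP_PKEY_CTX_free(kctx);

        /* Build a CSR naming the final (visible) identity. */
        X509_REQ *req = X509_REQ_new();
        X509_NAME_add_entry_by_txt(X509_REQ_get_subject_name(req), "CN",
                                   MBSTRING_ASC,
                                   (const unsigned char *)identity,
                                   -1, -1, 0);
        X509_REQ_set_pubkey(req, key);
        X509_REQ_sign(req, key, EVP_sha256());

        X509 *single_use = ca_issue_single_use_cert(req);    /* step 6 */

        /* Step 7: sign the content with the brand new certificate. */
        CMS_ContentInfo *cms =
            CMS_sign(single_use, key, NULL, verified_content,
                     CMS_DETACHED | CMS_BINARY);

        /* Step 8: throw away the private key; it was never stored. */
        EVP_PKEY_free(key);
        X509_REQ_free(req);
        X509_free(single_use);
        return cms;     /* serialize with i2d_CMS_bio() as before */
    }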

As for the format, putting S/MIME (aka CMS, aka PKCS#7) signatures inside a file format: look at how Microsoft did this with its Authenticode concept and how Sun/Oracle did it with its jar format. It's almost the same procedure, with five minor differences:

 * Authenticode sets the PKCS#7 content-type to an Authenticode-specific
   OID; jar files use the generic "data" OID.  At least some members of
   the OpenSSL team suggest that using any OID other than "data" is not
   allowed by more recent versions of the CMS standard.
 * Authenticode signs a custom DER-encoded ASN.1 structure containing
   the relevant attributes and secure hashes of the parts of the signed
   document that don't depend on the value/size of the signature blob
   itself.  Jar signing signs a formatted text file with that same data.
 * (Classic) Authenticode uses the original countersignature format and
   attribute for timestamping, requiring a timestamping service that
   creates "bare" signatures of the small blob sent for timestamping;
   jar files may or may not use a slightly different format allowing
   use of the newer RFC-based timestamping protocol.
 * Some implementations of jar signature validation (most notably the
   one used by Mozilla/Firefox/Thunderbird) require the certificates
   to have the attributes of the historic "object signing" protocol;
   Authenticode and most other jar implementations simply require the
   relevant extended key usage attribute in the end certificate.
   Commercial code/object-signing certificates tend to include the
   extra attributes so they work for both.
 * For some file formats, the two signing schemes each forget to check
   different obscure parts of the file being signed.  I won't elaborate
   on which ones.

From this list of differences and similarities, it should be reasonably easy to make sane choices for how to design your own use of X.509 certificates. Note that if the items being signed are documents rather than program-like objects, you should probably check for the e-mail-signing or document-signing extended key usage attribute, not the object-signing one.
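
As a small, hedged illustration of that last point, a check of the end certificate's extended key usage could look roughly like this with OpenSSL; NID_code_sign and NID_email_protect are the stock OpenSSL NIDs for the code-signing and e-mail-protection usages, and you would substitute your own OID check for a document-signing profile:

    #include <openssl/objects.h>
    #include <openssl/x509.h>
    #include <openssl/x509v3.h>

    /* Return 1 if the end-entity certificate carries the wanted extended
     * key usage (e.g. NID_code_sign for program-like objects,
     * NID_email_protect for e-mail/document-style signing), else 0. */
    int cert_has_eku(X509 *cert, int wanted_nid)
    {
        int i, found = 0;
        EXTENDED_KEY_USAGE *eku =
            X509_get_ext_d2i(cert, NID_ext_key_usage, NULL, NULL);
        if (eku == NULL)
            return 0;                /* no EKU extension at all */
        for (i = 0; i < sk_ASN1_OBJECT_num(eku); i++)
            if (OBJ_obj2nid(sk_ASN1_OBJECT_value(eku, i)) == wanted_nid)
                found = 1;
        sk_ASN1_OBJECT_pop_free(eku, ASN1_OBJECT_free);
        return found;
    }

Whether a certificate with no EKU extension at all should be rejected is a policy decision; the sketch treats it as a rejection.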


On 23/06/2015 19:55, Marco Warga wrote:
Many thanks for the answer.

I should have been more specific on the requirements right away. The "file" was really just an example to keep it simple. Reading my own writing, I would probably have suggested what you did :-)

So here are the facts:
- client/server are not connected to the internet
- the network protocol is existing and proprietary
- the file structure is existing and proprietary, but can be extended to allow for additional signature information to be embedded that will be sent to the server
- the data actually transferred (and to be signed) is part of that file
- the data has to be signed with an X.509 certificate's public key that already exists

S/MIME does pretty much what I want. However, neither the network protocol nor the data to be signed can be changed, for compatibility reasons. Under these circumstances, I don't really see how I could achieve my goal more easily than by using OpenSSL directly.

Considering the "very common requirement": I was thinking of i.e. windows driver signatures, android/ios app signatures and similar mechanisms to ensure that files are from a trusted source.

On 22.06.2015 at 14:44, Michael Wojcik wrote:

Response inline below, prefixed with "MW". (Unfortunately Outlook is incapable of replying to HTML messages properly, so you'll have to excuse the formatting.)

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of Marco Warga
Sent: Saturday, June 20, 2015 04:48
To: openssl-users@openssl.org
Subject: [openssl-users] beginner needs advice on data signature/verification

Hi,

I hope some of you could give me advice on my project using openssl.

MW: Why are you using OpenSSL for this application? You want to create a file on a trusted system, pass it through an untrusted intermediary, and process it on another trusted system. Why not simply use an existing mechanism like secure email? (GPG is the obvious choice, unless there are licensing issues.) If you are determined to create your own protocol from primitives, then really all you appear to need here is an HMAC. Don't involve the horrific mess that is X.509 PKI unless it actually provides some benefit.


Let's say I have a server/service on a machine processing a file a corresponding client sends. That file is usually created by me on a clean third machine. The server side is assumed to be uncompromised (no hacker). The client side may be compromised. Now I need to make sure that the service only accepts those files that are created by me. I believe that is a very common requirement and has been done a lot of times - I just can't find tutorials on how to implement it. Know any?

MW: No, but that's probably because what you've described isn't "a very common requirement". It's too vague. We don't know what problem you're actually trying to solve. It may be that you just need to send a file with a verifier, which - as I noted above - /is/ commonly done, generally using something like GPG or (for roll-your-own protocols where both ends are controlled by the same party) an HMAC.



Let's assume I have an X.509 cert together with its private key, signed by a CA owned by me. The trusted CA cert will be present on the server side. This is what I plan to do:

1.) Create the data files/blobs and sign them using the private key of the cert. Distribute the cert and the signature along with (or inside) the data file.
2.) Have the client send that data file to the server (cert/sig first)
3.) Service receives the cert, builds a cert store with the local CA cert in it and verifies the client's cert with X509_verify_cert()
4.) If the cert verifies OK, the service compares the signature against the one calculated from the incoming data using the public key that came inside the cert just verified


Would this be the right approach, considering that anything the client sends may be forged (cert, sig, data...)?

MW: It's safe from malicious behavior by the client, under a threat model where an attacker is not able to forge client certificates or client signatures. In other words, it's safe as long as the private keys are neither leaked nor forced.


Or would it be safer to have the cert used for signing stored on the server side and not sent with the data (instead just its subject, protected by the signature)?

MW: Irrelevant to the security of the scheme. Simpler from a development and operations standpoint. But using something like PGP/GPG or S/MIME would be simpler yet. There are any number of examples online for signing a file and verifying its signature.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

_______________________________________________
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
