Is there a way to programmatically obtain a list of available ciphers,
digests and algorithms? I looked at the header files, but may have
overlooked something.
Bear
Dr. Stephen Henson wrote:
On Sun, Jul 30, 2006, Girish Venkatachalam wrote:
--- Bear Giles [EMAIL PROTECTED] wrote:
Is there a way to programmatically obtain a list of available ciphers,
digests and algorithms? I looked at the header files, but may have
overlooked something.
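One way is to walk the OBJ_NAME registry, which holds every cipher and
digest name that has been registered. A minimal sketch, assuming the
tables have first been populated with OpenSSL_add_all_algorithms():

    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/objects.h>

    /* Callback invoked once per registered name. */
    static void print_name(const OBJ_NAME *obj, void *arg)
    {
        printf("%s\n", obj->name);
    }

    int main(void)
    {
        OpenSSL_add_all_algorithms();   /* populate the lookup tables */

        printf("Ciphers:\n");
        OBJ_NAME_do_all_sorted(OBJ_NAME_TYPE_CIPHER_METH, print_name, NULL);

        printf("Digests:\n");
        OBJ_NAME_do_all_sorted(OBJ_NAME_TYPE_MD_METH, print_name, NULL);
        return 0;
    }

(Alias names are registered in the same tables, so they show up in the
output alongside the canonical names.)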
Le Saux, Eric wrote:
I am trying to understand why ZLIB is being used that way. Here is what
gives better results on a continuous reliable stream of data:
1) You create a z_stream for sending, and another z_stream for
receiving.
2) You call deflateInit() on the sending stream and inflateInit() on the
receiving stream once, at session setup, roughly as sketched below.
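A minimal sketch of that arrangement (function names and buffer handling
here are illustrative only): one z_stream per direction, initialized once,
with deflate() called with Z_SYNC_FLUSH so the peer can decompress each
record as soon as it arrives:

    #include <string.h>
    #include <zlib.h>

    static z_stream zout;   /* sending direction   */
    static z_stream zin;    /* receiving direction */

    /* Call once, at session setup. */
    static int compress_init(void)
    {
        memset(&zout, 0, sizeof(zout));
        memset(&zin, 0, sizeof(zin));
        if (deflateInit(&zout, Z_DEFAULT_COMPRESSION) != Z_OK)
            return -1;
        if (inflateInit(&zin) != Z_OK)
            return -1;
        return 0;
    }

    /* Compress one outgoing record.  Z_SYNC_FLUSH keeps the stream
     * continuous but flushes enough state for the peer to decode the
     * record.  Assumes 'out' is large enough for one compressed record. */
    static int compress_record(const unsigned char *in, unsigned int inlen,
                               unsigned char *out, unsigned int outsz,
                               unsigned int *outlen)
    {
        zout.next_in   = (Bytef *)in;
        zout.avail_in  = inlen;
        zout.next_out  = out;
        zout.avail_out = outsz;
        if (deflate(&zout, Z_SYNC_FLUSH) != Z_OK)
            return -1;
        *outlen = outsz - zout.avail_out;
        return 0;
    }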
Has any thought been given to splitting off the OpenSSL non-SSL BIO (and
possibly ASN.1) code into a separate library?
The reason for doing this, obviously, is to make it easy to write
applications that can use the BIO routines without requiring the
Scott Harris wrote:
I need some help converting a certificate I generated with Microsoft
Certificate Server from .DER format to .DB format, for use with the
Netscape API. Does anybody know how to convert a .DER certificate
to .DB? Any tools that can do that...
There's not a
I'm developing an OpenSSL-based SSL sniffer that monitors decrypted
SSL traffic using the webserver's private keys on real site traffic
(similar to ssldump). For some reason, part of the SSL traffic is
not being decrypted.
I'm looking for possible reasons for this. The ones I am
I remember mentioning this a while back, but don't think anything
ever came from it.
Are there any plans to add convenience functions for the hashes
specified in draft-ietf-pkix-certstore-http? (This proposed
document provides some implementation details for RFC2585, and
basically maps a URL of
Are you in the US, BTW? If so, can you resend your patch with a CC: to
[EMAIL PROTECTED]?
Is that the preferred address now, instead of [EMAIL PROTECTED]?
I've tried checking the bxa.doc.gov website, but it's aimed at
commercial exporters instead of OSS exporters.
In the full-blown PKI, you will also have things
like "fetch me the cert corresponding to this name" and "fetch me the
key (or a handle to the key) with this fingerprint".
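Concretely, those lookups could boil down to a small table of callbacks
that each backend fills in. A rough sketch, with entirely hypothetical
names (this is not an existing OpenSSL API):

    #include <openssl/evp.h>
    #include <openssl/x509.h>

    /* Hypothetical per-backend vtable, sketched for discussion only. */
    typedef struct keystore_backend_st {
        int version;    /* interface version the plugin was built against */

        /* "Fetch me the cert corresponding to this name." */
        X509 *(*find_cert_by_subject)(void *handle, const X509_NAME *subject);

        /* "Fetch me the key (or a handle to it) with this fingerprint." */
        EVP_PKEY *(*find_key_by_fingerprint)(void *handle,
                                             const unsigned char *fp,
                                             unsigned int fplen);
    } KEYSTORE_BACKEND;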
Remember that there are actually two independent pieces of code here -
a tab A independent shared library and a slot B
About the certificate and the chain: the chain will add a lot of
redundant information, at least as I see it, since each component of
the chain will most obviously be in separate certificate entries.
I tend to agree, but wanted to start close to any model API first.
Makes it easier to explain
Here's a quick prototype implementation of a typical backend.
The wrapper isn't shown - that's the code common to all plug-ins
that's located in the static part of the library, e.g., breaking a
cert chain into multiple certs, or reconstructing that cert chain.
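Breaking a chain into its individual certs is the sort of thing the static
wrapper can do with stock OpenSSL calls. A minimal sketch, assuming the
chain is handed over as concatenated PEM certificates:

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    /* Read a file of concatenated PEM certificates into a STACK_OF(X509).
     * The caller owns the returned stack and its contents. */
    static STACK_OF(X509) *read_chain(const char *path)
    {
        FILE *fp = fopen(path, "r");
        STACK_OF(X509) *chain;
        X509 *cert;

        if (fp == NULL)
            return NULL;
        chain = sk_X509_new_null();
        if (chain == NULL) {
            fclose(fp);
            return NULL;
        }
        while ((cert = PEM_read_X509(fp, NULL, NULL, NULL)) != NULL)
            sk_X509_push(chain, cert);
        fclose(fp);
        return chain;
    }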
The dynamic part is much more
That, or have the plug-ins keep a very sharp eye on the version of the
loading OpenSSL to avoid version clashes (something we do with engines
today, sadly).
Other way around - the framework has to pay close attention to the
version of the plugin. I think the only dynamic library system that
bear In contrast, with a plug-in the framework itself acts as the loader,
bear and it must detect missing symbols in older shared libraries. Depending
bear on the way it's hooked, it either has to resolve each symbol individually
bear or resolve a single structure and then check the version number.
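Resolving a single exported structure and checking its version field might
look roughly like this (dlopen-based; the symbol name and struct layout are
made up for illustration):

    #include <dlfcn.h>
    #include <stddef.h>

    #define FRAMEWORK_API_VERSION 1

    /* Hypothetical descriptor every plugin is expected to export. */
    typedef struct plugin_info_st {
        int version;          /* interface version the plugin was built for */
        const char *name;
        /* ... function pointers would follow ... */
    } PLUGIN_INFO;

    static PLUGIN_INFO *load_plugin(const char *path)
    {
        void *dso = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        PLUGIN_INFO *info;

        if (dso == NULL)
            return NULL;
        /* Resolve one well-known symbol rather than every entry point. */
        info = (PLUGIN_INFO *)dlsym(dso, "openssl_keystore_plugin");
        if (info == NULL || info->version != FRAMEWORK_API_VERSION) {
            dlclose(dso);
            return NULL;
        }
        return info;
    }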
On Unix as well as on VMS and others, it's
often possible to upgrade a shareable library to a newer version
without breaking the programs using it.
Of course.
This requires the API to stay
the same except for added functions, and for types and structures to
never change (except for adding
So how would protection against such bad data be done at the database
level?
If you know it's a CA cert repository, you can require that the issuer
already be in the database.
If this is a deferrable constraint, you can still add root certs (e.g., to
populate the database.) But if you remove
I've been looking at the Java KeyStore API (which reflects the work of
other bright people looking at the same problem) and have a revised
schema and proposed API.
SECOND PROPOSAL
---
The primary key is an opaque string henceforth known as the alias.
The plugin may treat this as a
As primary keys go, I'm certain that whole-cert hashes would come in
quite handy.
It's a mindset thing. A key should lead you to more information.
Give me your name, I'll tell you your phone number and home address.
Read off the tag on the back of the car and I'll tell you the make
and
Bear Giles wrote:
Of course, this opens the whole can-o-worms of what constitutes
a duplicate cert? Is it an exact match, or matching I+SN, or
some other criteria?
There are some cases where only an exact match is acceptable. An example
is how OpenSSL performs a verify operation
Richard Levitte - VMS Whacker wrote:
From: Bear Giles [EMAIL PROTECTED]
bear Of course, this opens the whole can-o-worms of what constitutes
bear a duplicate cert? Is it an exact match, or matching I+SN, or
bear some other criteria?
Depending on who you listen to, one could say
To avoid duplication of code I'd say such concerns should be addressed
either at the application level or on top of whatever OpenSSL plugin API
is adopted.
I think that would be a serious mistake. I'm specifically thinking
of something like the CA cert repository/JSP code in my postgresql
I can imagine that one might get the same certificate
from several sources, but I'm pretty sure it could be resolved by
applying a little bit of automagic intelligence and tossing all
duplicates except for the copy that has the highest trust attached to
it.
I was assuming this would be done
Not true. I've searched on the hash of the certificate when we are
producing certificates that must maintain privacy and therefore have
garbage in the Issuer and Subject fields.
Maybe I'm just dense, but I don't get this. If you simply mask the
issuer and subject fields, e.g., using a
Issuer and serial number should also be unique, and it's a common
search pattern. I don't think anyone searches on the hash of the
entire certificate.
It should be unique but it might not be, either by accident or malicious
intent.
This indirectly raises a question we've been
One classic approach is to have all lookup functions return a
list of unique keys. The caller then requests each object individually
via a lookup that guarantees uniqueness. Uniqueness is easy to guarantee
on any hashed or relational store - make it the primary key!
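A rough sketch of that two-step pattern, again with made-up names rather
than an existing API: the search returns only opaque primary keys, and each
object is then fetched individually by its key:

    #include <stdio.h>
    #include <openssl/x509.h>

    /* Hypothetical two-step lookup: searches return only primary keys. */
    typedef struct keylist_st {
        size_t count;
        char **keys;          /* opaque primary keys, e.g. database aliases */
    } KEYLIST;

    KEYLIST *store_find_by_subject(void *store, const X509_NAME *subject);
    X509    *store_get_by_key(void *store, const char *key);
    void     keylist_free(KEYLIST *list);

    /* Usage: enumerate every match, one guaranteed-unique fetch at a time. */
    static void dump_matches(void *store, const X509_NAME *subject)
    {
        KEYLIST *list = store_find_by_subject(store, subject);
        size_t i;

        if (list == NULL)
            return;
        for (i = 0; i < list->count; i++) {
            X509 *cert = store_get_by_key(store, list->keys[i]);
            if (cert != NULL) {
                X509_print_fp(stdout, cert);
                X509_free(cert);
            }
        }
        keylist_free(list);
    }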
An earlier
I'll dig out the code. It was largely based around the PKCS#11
functionality but with an OpenSSL flavour. That is, you have a load of
objects each of which is a set of attributes. You can then lookup based
on exact matches of each attribute.
This is query by example. It has some benefits,
A request for some additional hashes: I would submit some
patches myself, but this stuff is so simple it would probably
take longer to verify my patches than to code them directly. :-)
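For reference, these hashes can be produced with stock calls; as I read the
draft, its lookup keys are SHA-1 hashes of the whole certificate and of the
issuer/subject names. A minimal sketch:

    #include <openssl/evp.h>
    #include <openssl/x509.h>

    /* SHA-1 hash of the whole DER-encoded certificate. */
    static int cert_hash(X509 *cert, unsigned char md[EVP_MAX_MD_SIZE],
                         unsigned int *mdlen)
    {
        return X509_digest(cert, EVP_sha1(), md, mdlen) ? 0 : -1;
    }

    /* SHA-1 hash of the DER-encoded subject name (the issuer name works
     * the same way via X509_get_issuer_name()). */
    static int subject_hash(X509 *cert, unsigned char md[EVP_MAX_MD_SIZE],
                            unsigned int *mdlen)
    {
        return X509_NAME_digest(X509_get_subject_name(cert), EVP_sha1(),
                                md, mdlen) ? 0 : -1;
    }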
The hashes are mentioned in draft-ietf-pkix-certstore-http-00.txt,
available at
First, the serious stuff. Version 0.2 of my libpkixpq library is up at
http://www.dimensional.com/~bgiles. It mostly renames asn1_integer to
hugeint and x509_name to principal, and adds a slew of operators to each
type. This should make it possible to create indices on either type,
, not datetime.
Future enhancements:
1) Make it possible to compare x509_name and asn1_integer objects
directly.
2) Add all arithmetic functions for asn1_integer.
Export stuff:
1) A copy of this notice has been sent to [EMAIL PROTECTED]
--
Bear Giles
bgiles (at) coyotesong (dot) com
I came across a minor nit in the EVP_DecodeBlock() logic.
As I understand the RFC for base64 encoding, characters outside
of the specified character range (and whitespace characters in
particular) should be ignored. Unfortunately, after stripping
off the leading and trailing whitespace this
bear NID_domainComponent. So I'm still not sure that these tables
bear can be used to validate the input to these routines.
Do I get it right, you're after having the string length limits and
possibly the allowed string types for DC and more in that table?
What I'm ultimately trying to
Oops. The information *is* in obj_mac.h, even if it's unused.
But again, shouldn't this be in crypto/asn1/a_strnid.c (and elsewhere)
so it's recognized by default?
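For reference, the same limits can also be registered at run time without
patching a_strnid.c; a rough sketch (the mask and flag values are my guess
at what domainComponent needs, since it is defined as an IA5String):

    #include <openssl/asn1.h>
    #include <openssl/objects.h>

    /* Register min/max lengths and the allowed string type for DC entries.
     * The 2..4 limits match the _min/_max config entries quoted below;
     * STABLE_NO_MASK keeps the global string mask from overriding the
     * IA5String requirement. */
    static int register_dc_limits(void)
    {
        return ASN1_STRING_TABLE_add(NID_domainComponent, 2, 4,
                                     B_ASN1_IA5STRING, STABLE_NO_MASK);
    }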
2.domainComponent_min = 2
2.domainComponent_max = 4 # no longer 3 due to .info
This produces a DN that looks like
/CN=Bear Giles/OU=Persons/DC=example/DC=com
Before someone asks the obvious question :-), I didn't try
this first since I'm working on a database binding that stores
private key or password kept elsewhere on the system.
Access to these functions would be restricted using the normal
database access control mechanism.
--
Bear Giles
bgiles (at) coyotesong (dot) com