Re: ssl handshake with multiple tcp connect?

2011-08-26 Thread David Schwartz

On 8/25/2011 6:04 AM, Arjan Filius wrote:


Hello,

Today I ran into a situation where I noticed Firefox/Chrome and
gnutls-cli use 3 TCP sessions to set up a single SSL session, whereas openssl
s_client takes only one.

One TCP session is what I expect, and I hope someone may have an
explanation.

I compared gnutls-cli with openssl s_client since they do no HTTP
interpretation; the results are easily reproduced from the command line:

gnutls-cli --insecure -V -r www.xs4all.nl < /dev/null
uses 3 tcp sessions to complete
openssl s_client -connect www.xs4all.nl:443 < /dev/null
uses 1 tcp session to complete


Any idea how that may come about? Until now, I was under the impression an SSL
session setup should only use one TCP session (apart from OCSP/CRL checks).


Why are you passing '-r' to gnutls-cli? You are asking it to try to 
resume the session on a new TCP connection. (I count two connections.)


DS





Re: Simple question: Maximum length of PEM file?

2011-07-27 Thread David Schwartz

On 7/26/2011 10:16 PM, Katif wrote:


Can you tell me what the application-dependent factors are here, so we'll be
able to chase a limit?

It is used as an RSA key-exchange certificate/private-key pairing.

Thanks...


Maximum RSA key size supported.
Extensions supported.

DS




Re: Simple question: Maximum length of PEM file?

2011-07-26 Thread David Schwartz

On 7/26/2011 4:38 AM, Katif wrote:


I need to know in advance the maximum length of the following three PEM
formatted files (excluding the -BEGIN/END lines):


It's application-dependent. There is no answer in general.

DS



Re: Query Regarding usage of SSL_Connect()

2011-07-17 Thread David Schwartz

On 7/14/2011 6:17 AM, Amit Kumar wrote:

Hi team,
I am using SSL_connect() in one of my projects and this SSL_connect
is returning a value of -1.

With SSL_get_error() I can see it is SSL_ERROR_WANT_READ.

Now I am not understanding why this can come, and if it does, should
I call SSL_connect again.

I am really new to the OpenSSL APIs and learning them. Please consider me
a beginner while replying.

   Any help will be greatly appreciated.


It means SSL_connect has made as much forward progress as it can right 
now and will be able to make further forward progress when it reads some 
data from the server. Since you asked it not to block, it is not blocking.
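
A rough sketch of that retry pattern on a non-blocking socket (the fd and ssl
variables are assumed to already exist; error handling is abbreviated):

    /* Hedged sketch: drive SSL_connect() to completion on a non-blocking
     * socket, waiting in select() only when OpenSSL asks for it. */
    #include <openssl/ssl.h>
    #include <sys/select.h>

    int connect_loop(SSL *ssl, int fd)
    {
        for (;;) {
            int rc = SSL_connect(ssl);
            if (rc == 1)
                return 1;                      /* handshake finished */

            int err = SSL_get_error(ssl, rc);
            fd_set rfds, wfds;
            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            if (err == SSL_ERROR_WANT_READ)
                FD_SET(fd, &rfds);             /* wait until readable, retry */
            else if (err == SSL_ERROR_WANT_WRITE)
                FD_SET(fd, &wfds);             /* wait until writable, retry */
            else
                return -1;                     /* real failure */

            if (select(fd + 1, &rfds, &wfds, NULL, NULL) < 0)
                return -1;
        }
    }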


DS




Re: SSL_read returns SSL_ERROR_WANT_READ

2011-07-11 Thread David Schwartz

On 7/11/2011 3:18 PM, Carla Strembicke wrote:


The server receives the encrypted data and sends it to the lower level,
where it is pumped into the SSL structure (which is using these
memory buffers) using the BIO_write call (I actually see that bytes are
written into it) and the buffer looks good. I then go and do an
SSL_read() and I get nothing except SSL_ERROR_WANT_READ. I do see that a
session has been established and that the packet member actually
contains the data I want access to, but the member state=8576 and
rstate=240.
What am I missing?


Nothing; that all seems normal.


Is it something to do with the handshake that I am missing, or the reading
of the data?
I have been working on this for a while and am at a
stalemate... please help!!!


What's the problem exactly? If you get SSL_ERROR_WANT_READ it means that 
there is no application data yet. The data you passed was likely 
negotiation data.


DS



Re: Replacement of functions that operate with sockets

2011-06-22 Thread David Schwartz

On 6/21/2011 2:40 AM, ml.vladimbe...@gmail.com wrote:


The fourth function is SSL_EncryptUserData, which encrypts our own
application data before we send it to the secure channel:

int SSL_EncryptApplicationData(char *buf_in, int buf_in_len, char
*buf_out, int buf_out_len, int *need_buf_out_len);

The result of this function is the number of bytes written to the buf_out
buffer, on success.
[need_buf_out_len] - the necessary size of the output buffer if
buf_out_len is not enough to contain all the data

When I (the programmer) need to send any data to the secure socket, I
call SSL_EncryptUserData and after this I send the encrypted data from
buf_out to the socket.


No, that can't possibly work. Any mechanism involving trying to look 
through the SSL state machine is doomed to fail. Completely erase from 
your mind any notion that you can map particular bits of encrypted data 
to particular bits of decrypted data or vice versa.


The SSL engine is a black box with four hooks. What goes on inside it 
is, as far as your application should be concerned, unimportant.


DS



Re: Replacement of functions that operate with sockets

2011-06-22 Thread David Schwartz

On 6/21/2011 2:53 AM, ml.vladimbe...@gmail.com wrote:


Jim, for me the main goal of replacing the functions that operate on sockets
is performance. I want to use OpenSSL with Windows I/O completion ports.
The method that you suggest is very interesting but the main goal is not
achieved - OpenSSL is still writing to the socket. Besides, we get
so-called double buffering and also more memory usage because of 2
sockets.


I do exactly this using BIO pairs. I manage all four data streams. When 
the application wants to send data to the other side, I hand it to 
OpenSSL. When I receive data on the socket, I hand it to OpenSSL. When I 
can send data on the socket, I get it from OpenSSL and send it to the 
socket. When OpenSSL has decrypted data, I get it from OpenSSL and send 
it to the upper application layers.


Just remember that you have four I/O streams you have to handle -- 
encrypted in, encrypted out, plaintext in, plaintext out. Make no 
attempt to 'associate' these streams. Treat them as completely logically 
independent.
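
A rough sketch of one pass over those four streams with a BIO pair (the SSL
object is assumed to already be attached to the internal half of the pair, the
socket is assumed non-blocking, and deliver_to_application() is an assumed
application callback; WANT_* bookkeeping and partial sends are omitted):

    #include <openssl/ssl.h>
    #include <openssl/bio.h>
    #include <sys/socket.h>

    void deliver_to_application(const char *data, int len);   /* assumed */

    void pump_once(SSL *ssl, BIO *network_bio, int fd,
                   const char *outgoing, int outgoing_len)
    {
        char buf[4096];
        int n;

        /* plaintext out: application data -> OpenSSL */
        if (outgoing_len > 0)
            SSL_write(ssl, outgoing, outgoing_len);    /* may report WANT_* */

        /* encrypted in: socket -> OpenSSL */
        n = recv(fd, buf, sizeof(buf), 0);             /* non-blocking socket */
        if (n > 0)
            BIO_write(network_bio, buf, n);

        /* encrypted out: OpenSSL -> socket */
        while ((n = BIO_read(network_bio, buf, sizeof(buf))) > 0)
            send(fd, buf, n, 0);

        /* plaintext in: OpenSSL -> application */
        while ((n = SSL_read(ssl, buf, sizeof(buf))) > 0)
            deliver_to_application(buf, n);
    }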


DS



Re: Replacement of functions that operate with sockets

2011-06-22 Thread David Schwartz

On 6/22/2011 3:20 AM, ml.vladimbe...@gmail.com wrote:


Where can I find this example with BIO pairs? I can't understand only
with openssl's documentation how to work with BIO pairs.

I will be grateful for the help.


Look in ssl/ssltest.c, in the doit_biopair function.

DS



Re: Replacement of functions that operate with sockets

2011-06-20 Thread David Schwartz

On 6/15/2011 11:57 AM, ml.vladimbe...@gmail.com wrote:

Hello.
By default OpenSSL itself works with sockets. I want to handle the sockets
myself without letting OpenSSL touch them. I.e., for
example, when OpenSSL wants to write something to a socket, it
should call my function and I will transfer the data to the socket. Likewise
for obtaining data from a socket: I call a function of
OpenSSL, passing it the data received from the socket.
I.e., I implement a function WriteSocket. When OpenSSL wants to write
something to a socket, it calls WriteSocket and nothing else should
concern it.


Well, that wouldn't work as stated. How would OpenSSL know when it was 
time to call WriteSocket? You will have to call into OpenSSL when you 
want to see if it has any data it needs to write to the socket.


In fact, you will have to manage *four* I/O streams to and from OpenSSL. 
When you receive encrypted data from the socket, you will have to hand 
it to OpenSSL. When you know it is safe to write to the socket, you will 
need to check if OpenSSL has any encrypted data to send and if so, read 
it from OpenSSL and send it to the other side. When anything changes, 
you will also need to check if OpenSSL has any decrypted plaintext to 
deliver to your application. And you will have to pass any plaintext 
your application wishes to send to OpenSSL.


When data has come from the socket I send a notification

to OpenSSL that the data has arrived; actually I call a function
of OpenSSL, ReadSocket, passing it the buffer with the received data.
I.e., OpenSSL does not need to do any waiting or reading itself; it
has handed off its data and can sit back and relax until I
call ReadSocket.
Is it possible to implement this? I have read the documentation
about the BIO functions, and could not understand whether it is possible to
implement or not.


Look at the example code that uses BIO pairs.

DS



Re: Why my SSL_Connect() hangs at times?

2011-06-13 Thread David Schwartz

On 6/11/2011 8:52 AM, kali muthu wrote:


I have Linux Server which has been connected with a Windows XP client
using SSL Sockets. I am able to read and write through those sockets.


Good.


Recently my calls to SSL_connect() wait for a long time. And yes, I am
using blocking mode. My search on that issue ended with: I have
to use non-blocking mode and timeouts as well. But I want
the connection to be successful so I can proceed further. Only when I am
done with those little transfers between the server and the client will I
be able to move to the next step. Hence I used blocking mode here.


Sounds good.


When I started SSL socket programming, I let socket
connections close abruptly without releasing them (through exceptions
and beginner's ignorance). Might that be the reason my
client cannot connect to the server? I mean that those
connections may still not be cleared, which makes my current
SSL_connect() call hang? If so, can I clean those up through some
command or something?


It's not clear what you're talking about. What did you not do? Your 
SSL_connect isn't hanging, it's blocking, because you asked it to.




Or what might be the reasons that make SSL_connect hang/wait for long?


In blocking mode, SSL_connect will block until the connection is 
established or until it fails definitively. This can take arbitrarily 
long, depending on what the other side does.



And how can I establish a connection in such case when I had to use
blocking mode?


You are establishing a connection, right? It's just taking a while. But 
you said you wanted to wait. So what's the problem exactly?


DS



Re: SSL Communication using BIO

2011-05-23 Thread David Schwartz

On 5/23/2011 1:59 AM, Harshvir Sidhu wrote:

David,
So are you suggesting that i change the approach in my Code.


Hard for me to give you a useful answer without seeing your code. If 
your code tries to treat OpenSSL as a filter, expecting input and output 
to correlate, then yes. If your code handled OpenSSL as a black box with 
four separate I/O paths (encrypted data in, encryped data out, plaintext 
in, plaintext out) without assuming any relationship between them, then 
it's fine.


My application is for Windows and in Managed C++. In it I am using a
callback function for receive; when the callback function is called and
I call SSL_read in it, it hangs at the recv call in the OpenSSL code.
My assumption is that the data was already read from the socket when the
callback was called. Another thing I would like to mention is that I am
using the managed Sockets class, not native sockets.


When your callback function is called, that means encrypted data is 
available on the socket. The SSL_read function is for reading 
unencrypted data from the SSL engine. It is only appropriate to call 
SSL_read in response to a data-available callback on the socket in one 
case -- if your last SSL operation was an SSL_read and it returned a 
WANT_READ indication. In any other case, this is broken behavior 
reflecting an erroneous attempt to look through the SSL engine.


Your code must treat the SSL engine as a black box. Yes, we happen to 
know that *IN* *GENERAL* we're reading encrypted data from the socket, 
decrypting it, and then passing the plaintext to the application, but your 
code should treat this as an OpenSSL internal detail and should not 
pretend it knows that this will happen.


DS



Re: SSL Communication using BIO

2011-05-22 Thread David Schwartz

On 5/22/2011 5:10 PM, Harshvir Sidhu wrote:


Previously I have used the SSL_XXX functions for performing SSL
operations. Now I am working on an application which is written in
Managed C++ using callback functions (BeginReceive and EndReceive), and
the SSL_read function is not working for that. So I tried using the BIO_
functions to create a BIO pair for an internal and a network BIO and then
using them to encrypt/decrypt data before sending over a normal socket,
but when I try to use that my handshake does not complete. I do
not see any error on s_server, but it doesn't seem to work when I try to
enter something on the server side; my callback doesn't get called.
Can someone point me to some example code in which a BIO is
used to encrypt and decrypt data and normal sockets are used for
send/receive? I am not able to find anything in the OpenSSL source examples or
on Google.


You are thinking about the problem wrong. You are thinking "I need to 
send some data. So I send it to OpenSSL. OpenSSL encrypts it, so then I 
need to get that encrypted data from OpenSSL and write it to the socket. 
Then, the other end will reply, so I need to read some encrypted data 
from the socket, give it to OpenSSL, and then OpenSSL will decrypt it 
and give it to me." This attempt to "look through" the OpenSSL engine 
will produce broken code and pain.


Instead, treat the OpenSSL engine as a black box whose internals are 
wholly unknown to you. If you receive some data from the socket, give it 
to OpenSSL. If OpenSSL wants to send some data on the socket, send it. 
If you want to send some data to the other side, give it to OpenSSL. If 
OpenSSL has some plaintext for you, take it and process it. But make no 
assumptions about the sequence or interactions between these things.


For example, a typical mistake is to wait for data to be received on the 
socket before calling SSL_Read. This is completely broken behavior. Data 
received on the socket is encrypted. Data received from SSL_Read is 
decrypted. These are two distinct streams that, as far as your 
application should be concerned, are totally unrelated. (Except when 
SSL_Read specifically returns a WANT_READ, of course, and then only 
until some other event invalidates the WANT_READ indication.)


DS



Re: How do calculate the

2011-05-20 Thread David Schwartz

On 5/18/2011 3:27 AM, G S wrote:

I'm probably being obtuse here, but I don't see how encrypting your
request with a public key would help you with your original problem.

What stops a rogue app from doing the same encryption?


They can't see what the parameters are.  So what are they going to encrypt?


I think you're missing the point, or I'm misunderstanding what you mean 
by parameters. Your stated problem was to detect whether a request was 
originating from your app or not. No solution to this will work unless 
somehow your app can do something that a fake/spam app cannot do.


Your solution was:

 1. Generate a random key and initialization vector to encrypt
 the block of text.

So a rogue app can generate its own block of text, presumably containing 
the spam or what not, and it can also certainly generate a random key 
and IV.


 2. Encrypt that random key with the RSA public key.

A rogue app can do this unless you can somehow keep the public key 
private. This may be possible, but most likely an attacker could extract 
the key from your application.


 3. Encrypt the data payload with the random key and IV,
 using Blowfish or other encryption.

Surely an attacker can do this.

 4. Send the encrypted data payload, encrypted random key, and IV to
 the server for decryption.

Again, no reason an attacker can't do this.

So either I'm misunderstanding you, or your method won't actually do 
anything. Or is the thinking that an attacker won't be able to extract 
the public key? Or is it that an attacker wouldn't be able to figure out 
how to format the parameters?


DS



Re: Multiple connection from 1 client

2011-05-10 Thread David Schwartz

On 5/9/2011 1:45 PM, Eric S. Eberhard wrote:


 int setblock(fd, mode)
 int fd;
 int mode; /* True - blocking, False - non blocking */
 {
 int flags;
 int prevmode;

 flags = fcntl(fd, F_GETFL, 0);
 prevmode = !(flags & O_NDELAY);
 if (mode)
 flags &= ~O_NDELAY; /* turn blocking on */
 else
 flags |= O_NDELAY; /* turn blocking off */
 fcntl(fd, F_SETFL, flags);

 return prevmode;
 }


This code is ancient and is in desperate need of being made to conform 
to modern standards before being used.
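
For comparison, a sketch of the same helper written against current POSIX
(this is a sketch, not taken from the original post): a prototype, O_NONBLOCK
instead of O_NDELAY, and error checks on fcntl():

    #include <fcntl.h>

    /* mode: 1 = blocking, 0 = non-blocking; returns previous mode or -1 */
    int setblock(int fd, int mode)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1)
            return -1;

        int prevmode = !(flags & O_NONBLOCK);
        if (mode)
            flags &= ~O_NONBLOCK;      /* turn blocking on */
        else
            flags |= O_NONBLOCK;       /* turn blocking off */

        if (fcntl(fd, F_SETFL, flags) == -1)
            return -1;
        return prevmode;
    }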


DS



Re: Clients glomming onto a listener

2011-05-10 Thread David Schwartz

On 5/10/2011 2:10 AM, John Hollingum wrote:


I have a service written in Perl, running on Linux that presents a very
simple SSL listener. When this service is hit, it identifies the
connecting node from its certificate/peer address and just sends some
xml to them containing data from some files in the queue directory that
contains their data.

All the client does is to open a socket and start reading.

This works, but it is susceptible to problems which I believe are caused
by clients with bad internet connections (the pathology suggests this).
It seems that something unpleasant occurs in the SSL handshake process
which causes the socket to hang indefinitely. Nobody else can connect
when this has happened.


Well, that's what happens if you take a single-threaded listener and 
tell it to complete the SSL handshake.



But that's all a distraction. The fact is that the service shouldn't be
susceptible to the vagaries of what stupid clients or bad networks do.


Right, that's why a single-threaded listener that *ever* waits for a 
client is a terrible design.



Pretty much immediately after the accept the program forks a handler,
but the rogue clients must be glomming onto the main process before the
SSL negotiation is complete.


Calling 'fork' with an accepted SSL connection has all kinds of known 
issues. The fundamental problem is that there are many operations that 
must occur both before and after the 'fork', for different reasons, and 
obviously they can't do both.



I can't help thinking that I should be able to tell SSL to have some
sensible (fairly aggressive) timeouts on connections that fail to
complete an SSL handshake. Is this possible? Does it sound like I'm even
identifying the problem correctly?


It wouldn't help, since it's not SSL that needs to time out but the 
socket itself. If you want to timeout the socket operations, you have to 
do it. OpenSSL just reads and writes to and from the BIOs and thus the 
socket.



I'm wondering if I could get more control by using accept in
non-blocking mode. Is this worth looking into?


That would help if your issue was blocking in 'accept'. Since you are a 
single-threaded listener that has to handle multiple clients, why are 
you doing *anything* with blocking socket calls? That's an obvious red 
flag.


Blocking socket operations only exist to allow sockets to behave like 
terminals or files for applications that don't know anything specific 
about sockets. Your application is not only fundamentally socket-specific 
but also has to do something very unusual that requires non-blocking 
operations.


Timeouts are a possible fix, so long as you only accept connections from 
trusted clients in the first place. Otherwise, you create a trivial DoS 
attack.
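
A rough sketch of doing that timeout yourself at the socket level, assuming
the accepted socket has already been made non-blocking (the per-step timeout
is a simplification; a real server would track an absolute deadline):

    #include <openssl/ssl.h>
    #include <sys/select.h>

    int accept_with_timeout(SSL *ssl, int fd, int timeout_sec)
    {
        for (;;) {
            int rc = SSL_accept(ssl);
            if (rc == 1)
                return 1;                              /* handshake done */

            int err = SSL_get_error(ssl, rc);
            if (err != SSL_ERROR_WANT_READ && err != SSL_ERROR_WANT_WRITE)
                return -1;                             /* hard failure */

            fd_set rfds, wfds;
            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            if (err == SSL_ERROR_WANT_READ)
                FD_SET(fd, &rfds);
            else
                FD_SET(fd, &wfds);

            struct timeval tv = { timeout_sec, 0 };
            if (select(fd + 1, &rfds, &wfds, NULL, &tv) <= 0)
                return 0;                              /* timed out: drop it */
        }
    }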


DS



Re: Multiple connection from 1 client

2011-05-09 Thread David Schwartz

On 5/9/2011 6:27 AM, Harshvir Sidhu wrote:


 Also I suspect that if I change the socket to non-blocking, then
my current read/write code will not work -- I mean the one in which I use
FD_SET and select to perform operations.
Thanks.


It's very easy to get things wrong and it won't work unless you get 
everything right.


The most common mistake is refusing to call one of the SSL_* functions 
until you get a 'select' hit. You should only do that if OpenSSL 
specifically tells you to do that.


The second most common mistake is assuming that an SSL connection has 
separate read and write readiness, like a TCP connection does. An SSL 
connection is a single state machine and so has only a single state. (So 
if SSL_read returns WANT_READ and then you call SSL_write, regardless of 
what return value you get, the WANT_READ from SSL_read is invalidated 
because SSL_write can change the state of the SSL connection.)


DS



Re: OpenSSL and multithreaded programs

2011-05-05 Thread David Schwartz

On 5/5/2011 10:01 AM, Chris Dodd wrote:



Is the OpenSSL library supposed to be at all reentrant? I've had odd
problems (intermittent errors) when trying to use OpenSSL in a
multithreaded
program (multiple threads each dealing with independent SSL connections),
and have apparently solved them by creating a single global mutex and
wrapping a mutex acquire around every call into the library. Is
this kind of locking expected to be needed?


This should not be needed so long as you follow two rules:

1) You must properly set the multi-threaded locking callback.

2) You must not attempt to access the same object directly from two 
threads at the same time. For example, you cannot call SSL_read and 
SSL_write concurrently on the same SSL object.
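
A sketch of rule 1 using the callback API of that era (0.9.8/1.0.x; these
callbacks became unnecessary in 1.1.0), with pthreads assumed:

    #include <openssl/crypto.h>
    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t *locks;

    static void locking_cb(int mode, int n, const char *file, int line)
    {
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&locks[n]);
        else
            pthread_mutex_unlock(&locks[n]);
    }

    static unsigned long id_cb(void)
    {
        return (unsigned long)pthread_self();
    }

    void init_openssl_locking(void)
    {
        int i, count = CRYPTO_num_locks();

        locks = malloc(count * sizeof(pthread_mutex_t));
        for (i = 0; i < count; i++)
            pthread_mutex_init(&locks[i], NULL);
        CRYPTO_set_id_callback(id_cb);
        CRYPTO_set_locking_callback(locking_cb);
    }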


DS



Re: RSA_private_decrypt across processes

2011-05-04 Thread David Schwartz

On 5/4/2011 9:14 AM, Ashwin Chandra wrote:


Okay I read the complete bug report and it looks like there is a fix in
the latest openssl. However I checked it out and it limits the maximum
time RAND_poll will take to a second. 1000ms. Is there any other way to
speed this up?


Populate the OpenSSL entropy pool yourself with entropy obtained from 
the system's entropy pool. See the documentation for 'CryptGenRandom'.
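
A rough sketch of that on Windows (a one-shot seeding helper; the buffer size
is an arbitrary choice):

    #include <windows.h>
    #include <wincrypt.h>
    #include <openssl/rand.h>

    int seed_from_os(void)
    {
        HCRYPTPROV prov;
        unsigned char buf[64];

        if (!CryptAcquireContext(&prov, NULL, NULL, PROV_RSA_FULL,
                                 CRYPT_VERIFYCONTEXT))
            return 0;
        if (!CryptGenRandom(prov, sizeof(buf), buf)) {
            CryptReleaseContext(prov, 0);
            return 0;
        }
        CryptReleaseContext(prov, 0);

        /* Credit the buffer with its full size in entropy. */
        RAND_add(buf, sizeof(buf), sizeof(buf));
        return 1;
    }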


DS



Re: How to create threaded pool with OpenSSL

2011-05-03 Thread David Schwartz

On 5/3/2011 11:31 AM, derleader mail wrote:

Hi,
I found OpenSSL server code which uses threads in order to process
clients. Is it possible to create a connection pool with OpenSSL? There is
no information about this on openssl.org.

How can I add a thread pool to this code?

http://pastebin.com/pkDB7fHm

Regards


Threading support in OpenSSL itself is pretty much limited to using 
locks to permit, for example, more than one SSL connection to share the 
same SSL context structure. If you want a connection pool, just write 
one or use one that's already done. But OpenSSL won't really help you 
with that -- it's just the SSL part.


DS




Re: Cannot encrypt text - need help

2011-05-01 Thread David Schwartz

On 5/1/2011 1:34 AM, derleader mail wrote:


I'm going to use stream protocol - TCP/IP. Here is the template source
code of the server without the encryption part


We mean application protocol.


while (1) {
sock = accept(listensock, NULL, NULL);
printf("client connected to child thread %i with pid %i.\n",
pthread_self(), getpid());
nread = recv(sock, buffer, 25, 0);
buffer[nread] = '\0';
printf("%s\n", buffer);
send(sock, buffer, nread, 0);
close(sock);
printf("client disconnected from child thread %i with pid %i.\n",
pthread_self(), getpid());
}
}


This code isn't very helpful. It just reads and writes the very same 
data. Nothing in this code tells us, for example, how to identify a 
complete message.


You could interpose an encryption protocol that also imposed no such 
requirements. You would need to work out your own padding though. 
Blowfish is a block encryption algorithm and cannot encrypt just a 
single byte. So if you only read one byte, you'd need to pad it before 
encryption and then you'd need some way to remove the padding on the 
other end.


I would strongly urge you to just use SSL. It is designed for *exactly* 
this purpose.


DS



Re: Cannot encrypt text - need help

2011-05-01 Thread David Schwartz

On 5/1/2011 3:31 AM, derleader mail wrote:


So I need a high-performance solution that can handle many connections
with little server load.

1. SSL is a good solution but is not high performance - it's more
suitable for encryption of a web page. When establishing a connection more
than 100 connections are used to perform the SSL handshake, and it is not
suitable for big binary data.


I don't know where you're getting that from, but it's totally incorrect. 
The SSL handshake, if repeated between the same two endpoints multiple 
times, is quite high performance because the sessions can be cached. As 
for big binary data, why do you think SSL is unsuitable?



2. Symmetric encryption is more suitable because it is high performance
and will scale very well.


SSL is symmetric encryption. PK is used for session setup and key 
negotiation, but the encryption of bulk data is symmetric.



I need a high-performance, optimized solution.

What is your opinion?
What will be the best approach?


SSL. It's already well-maintained and heavily optimized. It can easily 
be proxied without understanding the underlying application protocol. 
Padding, message integrity, session caching, authentication and the like 
are already done.


As a plus, SSL permits easily adjusting the encryption and 
authentication schemes to provide the desired balance between 
performance and security. And SSL accelerators are widely available -- 
for example, newer Intel processors have AES acceleration, so if you use 
SSL, those who have them can choose AES as the bulk encryption protocol. 
Had you decided on blowfish and locked it in the way you seem to be 
planning, it would take significant changes to get the benefit of AES-NI.


Also, you will have a much harder time getting your project accepted if 
you just made up the security scheme yourself. The effort required to 
ensure the scheme was properly designed and implemented (especially 
given all the false starts and misunderstandings so far) would almost 
certainly drastically outweigh any hypothetical performance benefit you 
might get.


DS



Re: Cannot encrypt text - need help

2011-04-30 Thread David Schwartz

On 4/30/2011 10:48 AM, derleader mail wrote:


Thank you very much for the reply. The problem is that the encryption
and decryption must be on separate machines. I need a way to take the
size of the encrypted message using a language function like strlen(). Is
there another solution?


Are you designing the protocol that one machine uses to send the 
encrypted data to the other or has someone else designed that protocol? 
If that protocol requires that the encrypted data be a string, has the 
mechanism by which that will be done been determined yet or is it up to you?


It sounds like you are trying to implement a mechanism before the 
mechanism has been decided on. Before you attempt to send even a single 
byte over a network, it should be decided which bytes will go where and 
that decision should be reflected in a written specification. This may 
require an hour or two of pain, but trust me, it will eliminate days of 
pain. And, as a free bonus, anyone else who needs to interoperate with 
you can look to that specification to know what to do.


DS



Re: Combining MD5 and SHA-1 to reduce collision probability

2011-04-20 Thread David Schwartz

On 4/20/2011 1:18 AM, Luc Perthuis wrote:

Hi all,

I'm especially interested in finding a way to uniquely identify rather
small data chunks (less than or equal to 128*1024 bytes in size) without
using a byte-by-byte compare.

Is there any theoretical proof for a good selection of 2 hashes
(computing the results of two different algorithms on the same data)
that would annihilate the collision risk?


Simply use a hash for which the probability of a collision (either 
accidental or malicious) is at least one order of magnitude lower than 
the probability of your most probable failure mode. IMO, SHA-512 has 
this characteristic, unless you plan on shielding your hardware from 
cosmic rays.


There is no advantage to using two different algorithms and two huge 
disadvantages. First, the computation time will be greater. Second, the 
vulnerability risk is likely greater. It is expected, for example, to be 
harder to break a single 512-bit hash than to break two 256-bit hashes 
concurrently. The probability of an accidental collision should be the 
same -- so why do more work?



NB: I've mentioned MD5 and SHA-1 in the subject, as they are the most
used nowadays, but if this association doesn't fit the need whereas
another does, that would be interesting anyway ;-)


If you're willing to go to 288-bits, why not just use 512-bit SHA and 
truncate to 288 bits? That way, you don't have the known weaknesses of 
MD5 dragging you down and each bit 'pays its way'.
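
A sketch of that idea (288 bits = 36 bytes, taken from the front of a SHA-512
digest):

    #include <openssl/sha.h>
    #include <string.h>

    #define ID_LEN 36   /* 288 bits */

    void chunk_id(const unsigned char *data, size_t len,
                  unsigned char id[ID_LEN])
    {
        unsigned char full[SHA512_DIGEST_LENGTH];

        SHA512(data, len, full);      /* 64-byte digest */
        memcpy(id, full, ID_LEN);     /* keep the first 288 bits */
    }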


DS



Re: RSA key

2011-04-13 Thread David Schwartz

On 4/13/2011 2:35 AM, pattabi raman wrote:


1. If I can't use sprintf then how can I copy the encrypted message to a
character buffer? Because so far I am sending the request to middleware in
a char buffer using a TCP/IP socket. How can I achieve this now?


If you don't know how to copy bytes of data, you don't know how to code 
in C. You can copy it yourself, using a 'for' loop. You can use 'memcpy'.



2. Actually I am using a 2048-bit public key. So what is the right size I
can use? I tried to use RSA_size(rsa), which gives a core dump error.
Any idea on the above points will help me a lot. Thanks.


I'd have to see the code to be sure, but likely your core dump comes 
from misusing the result of this call. For example, there is no 
guarantee that you can *en*crypt a value just because it is RSA_size or 
fewer bytes.
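
To illustrate both points, a rough sketch of sizing and copying the ciphertext
as binary data (PKCS#1 v1.5 padding assumed; explicit length bookkeeping
instead of sprintf/strlen):

    #include <openssl/rsa.h>
    #include <stdlib.h>

    int encrypt_for_server(RSA *rsa, const unsigned char *msg, int msg_len,
                           unsigned char **out, int *out_len)
    {
        int klen = RSA_size(rsa);            /* 256 bytes for a 2048-bit key */

        if (msg_len > klen - 11)             /* PKCS#1 v1.5 padding overhead */
            return 0;

        unsigned char *buf = malloc(klen);
        if (buf == NULL)
            return 0;

        int n = RSA_public_encrypt(msg_len, msg, buf, rsa, RSA_PKCS1_PADDING);
        if (n < 0) {
            free(buf);
            return 0;
        }
        *out = buf;      /* binary: send exactly *out_len bytes, never strlen() */
        *out_len = n;
        return 1;
    }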


DS




Re: RSA key

2011-04-12 Thread David Schwartz

On 4/11/2011 6:36 PM, Adrian D. Sacrez wrote:

 I'm fairly new to OpenSSL. How do I convert the RSA key generated
 by rsa_keygen_ex() into a public and private key?
 Is there a way to do that?

I assume you mean RSA_generate_key_ex. It already is. The purpose of 
this function is to generate a new RSA private and public key.


If you need the private key or public key in some particular format, you 
should tell us which key and what format.


DS



Re: does OpenSSL call locking-callback/thread-id-callback from any internal threads?

2011-04-10 Thread David Schwartz

On 4/10/2011 3:03 PM, Anton Vodonosov wrote:


The question: if I provide locking_callback, will it be called only from the 
threads where I invoke OpenSSL functions,
or may OpenSSL call it from some private/internal threads not created by me?


Since there's no callback to create a thread, OpenSSL has no way to 
create a thread.



I assume OpenSSL does not invoke the callback from any threads that are not 
mine, because the documentation
says I should only provide the callback if *my* program is multi-threaded, from 
which I deduce there is no
concurrency inside OpenSSL unless I invoke it from several threads.

But I would like to hear a confirmation.


If OpenSSL created its own threads, that would defeat the entire logic 
of the threading callbacks. The idea is to be able to compile OpenSSL 
once and use it with various different threading models.


DS



Re: BIO_do_accept() + fork() is leaking 64B?

2011-03-25 Thread David Schwartz

On 3/25/2011 3:50 AM, Michał Stawiński wrote:


//freeing popped client BIO in parent would disconnect client in child,
//so I can not free it, which will cause 64B memory leak
//parent:   BIO_free ( b=client_bio ) : 1   //???


	I don't know of any elegant solution. But there's a way that works. 
Open a file descriptor or socket you don't care about (for example, open 
/dev/null). Then 'dup2' that file descriptor over the file descriptor 
for this connection. That will implicitly disassociate the descriptor 
from the client's connection, so you can free the client BIO without 
affecting the child.
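
A rough sketch of that workaround in the parent after fork() (error handling
abbreviated):

    #include <fcntl.h>
    #include <unistd.h>
    #include <openssl/bio.h>

    void parent_release(BIO *client_bio)
    {
        int fd = (int)BIO_get_fd(client_bio, NULL);  /* fd shared with child */
        int devnull = open("/dev/null", O_RDWR);

        if (fd >= 0 && devnull >= 0)
            dup2(devnull, fd);   /* fd now refers to /dev/null, not the socket */
        if (devnull >= 0)
            close(devnull);

        BIO_free(client_bio);    /* closes fd; the child's connection survives */
    }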


DS



Re: BIO_do_accept() + fork() is leaking 64B?

2011-03-25 Thread David Schwartz

On 3/25/2011 2:33 PM, Michal Stawinski wrote:


2011/3/25 David Schwartz dav...@webmaster.com:



I don't know of any elegant solution. But there's a way that works.
Open a file descriptor or socket you don't care about (for example, open
/dev/null). Then 'dup2' that file descriptor over the file descriptor for
this connection. That will implicitly disassociate the descriptor from the
client's connection, so you can free the client BIO without affecting the
child.



This might be the way to go, though in fact it lacks the elegance I
thought BIOs have. On the other hand, I'll probably have to go for it
anyway, so thank you very much anyway :)


Yes, it lacks elegance.


I would be grateful if anyone can give me a rationale for such a BIO
design, or (even better) tell me I am a stupid bastard, and all I want
can be done using some other, clean and neat solution.


BIOs were never designed to handle an application that 'fork's 
and then expects both halves to be able to continue operation. Many 
libraries have this 'defect'.


DS



Re: Examples to encrypt/decrypt

2011-03-25 Thread David Schwartz

On 3/25/2011 4:17 PM, Jeremy Farrell wrote:


From: Jeffrey Walton
Sent: Friday, March 25, 2011 8:45 PM
On Fri, Mar 25, 2011 at 3:56 PM, Anthony Gabrielson agabriels...@comcast.net  
wrote:



This will do what you want:

http://agabrielson.wordpress.com/2010/07/15/openssl-an-example-from-the-command-line/


 memset(plaintext,0,sizeof(plaintext));

The optimizer might remove your zeroization.

Jeff



But only if it has a bug, in which case it might do anything.


It can remove it even without a bug. It's a common optimization to 
remove an assignment that the optimizer can prove has no effects. Since 
the 'memset' is the last reference to 'plaintext', the optimizer can 
legally remove it.
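
One way around it, sketched: use OPENSSL_cleanse(), which is written so the
compiler cannot prove the store is dead:

    #include <openssl/crypto.h>

    void wipe(char *plaintext, size_t len)
    {
        OPENSSL_cleanse(plaintext, len);   /* survives dead-store elimination */
    }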


DS



Re: How to handle Expired or not yet valid X.509 certificates - or simply is the system date wrong?

2011-03-22 Thread David Schwartz

On 3/22/2011 9:07 AM, Steffen DETTMER wrote:


When some entity verifies a certificate, finds a valid signature,
etc., but the current date is not between "Valid From" and "Valid
To", meaning the certificate seems not yet valid or expired,
what is recommended?


It depends what you're doing.


I think, essentially, this should be application-specific, but
are there guidelines or common sense?


The basic idea is this: If the thing you're checking is from a past 
date, you can verify that date, and the certificate was valid on that 
date, then continue. If the operation is based on the current date, reject.
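
A rough sketch of verifying a chain against a past (trusted) date rather than
the current clock, using X509_STORE_CTX_set_time() (the store, certificate,
and untrusted chain are assumed to exist already):

    #include <time.h>
    #include <openssl/x509.h>
    #include <openssl/x509_vfy.h>

    int verify_at(X509_STORE *store, X509 *cert, STACK_OF(X509) *untrusted,
                  time_t when)
    {
        X509_STORE_CTX *ctx = X509_STORE_CTX_new();
        int ok = -1;

        if (ctx == NULL)
            return -1;
        if (X509_STORE_CTX_init(ctx, store, cert, untrusted)) {
            /* Check validity as of 'when' instead of time(NULL). */
            X509_STORE_CTX_set_time(ctx, 0, when);
            ok = X509_verify_cert(ctx);     /* 1 = chain was valid at 'when' */
        }
        X509_STORE_CTX_free(ctx);
        return ok;
    }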



In practice there could be issues with wrong sytem date / system
clocks / time stamps, which could lead to bad situations,
especially when users are not allowed to change the system date
(for security reasons) and then failing to remotely administrate
(because the peer rejects the actually valid certificate as
expired or not yet valid).
It cannot be assumed all entities are connected to the internet or
any other external trusted time (except maybe an SSL protected one).


If a system does not have a reliable source of time, then it cannot 
reliably perform security operations other than verifying timestamped 
signatures. That should have been addressed when the system was designed.


DS



Re: data size issue with SSL_read( ) / SSL_write

2011-03-17 Thread David Schwartz

On 3/17/2011 5:00 AM, ikuzar wrote:


The problem :

when I print data, I have got :
HELLO��y0�y
0�y��y
i`�0�y
������L���L��-M
etc...
instead of
HELLO.

in MYrecv, when I make L = 5, it works

what should I do to read just the right size so that when I print I get
HELLO, GOODBYE, etc ... and not HELLO��y0�y,  GOODBYE��y0�y etc ...
thanks for your help


You made two common rookie mistakes:

1) Your MY_recv function is totally broken. It ignores the return value 
of SSL_read, so you have no idea how many bytes you received. So even 
though you received five bytes, you are printing god only knows how many 
bytes.


2) You forgot to implement a protocol. Who or what said that those five 
bytes you received should be printed? You need to specify and implement 
an application protocol on top of SSL. Otherwise, you will continue to 
make mistakes like '1' above. With a protocol, you'd know how to 
determine when you had a complete application-level message. Without 
one, it is impossible to do it right because there is no such thing as 
'right'.


DS



Re: data size issue with SSL_read( ) / SSL_write

2011-03-17 Thread David Schwartz

On 3/17/2011 6:40 AM, ikuzar wrote:


Why do we expect \r\n ? why not \0 ?


That's why you need to implement a protocol.

DS



Re: data size issue with SSL_read( ) / SSL_write

2011-03-17 Thread David Schwartz

On 3/17/2011 7:43 AM, ikuzar wrote:


I am confused.
When I used a simple C++ program that uses SSL functions for the first
time, I did not need to implement a protocol. When I tell SSL_write() to send
5 bytes and tell SSL_read() to read 10 bytes, the latter reads 5 bytes!
(Doesn't it? Am I wrong? I assume SSL_read expects \0 and then stops
reading.)


No, that's not what it does. When you call SSL_read, it gives you 
however many bytes it has available at that time, up to a maximum of the 
number of bytes you asked for. If no data is available and the socket is 
blocking, it blocks until it has some data to give you and gives you 
that much.


It has no way to know when to stop reading. That's *your* job when you 
implement the protocol.


TCP and SSL are byte stream protocols that do not preserve message 
boundaries. If you call SSL_write and send 10 bytes, you should 
completely expect that you might call SSL_read 10 times and get 1 byte 
each time or you might get all 10 bytes in a one read. Or you might get 
5 bytes and then 5 more bytes. It's a byte stream -- nothing 'glues' the 
bytes together.


If you want to end a 'message' with a \0 and read until you read a \0, 
then write code to do that. YOU MUST IMPLEMENT A PROTOCOL ON TOP OF SSL.
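
A rough sketch of exactly that: a tiny protocol where every message ends with
'\0' (bytes arriving after the terminator would belong to the next message and
would need buffering; that is omitted here):

    #include <openssl/ssl.h>
    #include <string.h>

    /* Returns message length (excluding '\0'), or -1 on error/overflow. */
    int read_message(SSL *ssl, char *msg, int cap)
    {
        int used = 0;

        while (used < cap) {
            int n = SSL_read(ssl, msg + used, cap - used);
            if (n <= 0)
                return -1;                   /* handle WANT_READ etc. here */
            used += n;

            char *end = memchr(msg, '\0', used);
            if (end != NULL)
                return (int)(end - msg);     /* complete message received */
            /* no terminator yet: loop and read more */
        }
        return -1;                           /* message larger than buffer */
    }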


DS



Re: SSL_ERROR_WANT_READ and SSL_ERROR_WANT-WRITE question

2011-03-07 Thread David Schwartz

On 3/7/2011 2:45 PM, Yan, Bob wrote:


My question is that if my Reader thread gets a SSL_ERROR_WANT_WRITE
error from SSL_read function call, can my Writer thread do the SSL_write
operation right after the Reader’s SSL_read operation?


Yes.

 Or, if my Writer

thread gets a SSL_ERROR_WANT_READ error from SSL_write call, can my
Reader thread do the SSL_read just following the Writer’s SSL_write
operation?


Yes.

 Basically is that ok to mix the SSL_read and SSL_write

function by two different threads regardless the returning error code?


Yes, there is one very important caveat though -- an SSL connection has 
one and only one state. So the following sequence of operations will get 
you in big trouble:


1) Reader thread calls SSL_write, gets WANT_READ.

2) Writer thread calls SSL_read, gets WANT_READ.

3) Reader thread (not knowing what happened in step 2) calls 'select' or 
similar function in response to the WANT_READ it got in step 1 and does 
not call SSL_write again until the socket is readable.


After step 2, the state of the SSL connection is 'data must be read from 
the socket in order to read from the SSL connection'. It is an error to 
assume that the WANT_READ returned in step 1 is still valid since step 2 
may have invalidated it.


This can cause your code to deadlock in real-world situations. For 
example, suppose the SSL connection is in the process of renegotiating:


At step 1 above suppose it has sent the last thing it needed to send to 
complete the renegotiation and now it just must read the last bit of 
renegotiation data before it can continue to make further forward 
progress. So it returns WANT_READ.


At step 2 above, the engine knows it needs to read from the socket to 
make further progress, so it does. Suppose the renegotiation data has 
all arrived and it reads all of it, but there's no application data to 
read, so it returns WANT_READ.


At step 3 above, the reader thread is now blocking waiting for the 
renegotiation data to arrive on the socket. But that renegotiation data 
has already been received and read by the SSL engine. So the thread will 
block indefinitely waiting for something that has already happened.
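
One way to avoid acting on a stale WANT_*, sketched with illustrative names
(this is not an OpenSSL API, just a shape): keep a single shared "what does
the connection want" value, updated under the same lock that serializes
SSL_read/SSL_write:

    #include <openssl/ssl.h>
    #include <pthread.h>

    struct conn {
        SSL *ssl;
        pthread_mutex_t lock;
        int last_want;    /* SSL_ERROR_NONE, _WANT_READ or _WANT_WRITE */
    };

    int conn_write(struct conn *c, const void *buf, int len)
    {
        pthread_mutex_lock(&c->lock);
        int n = SSL_write(c->ssl, buf, len);
        /* Whatever this call reported supersedes anything the reader
         * recorded earlier: the SSL connection has only one state. */
        c->last_want = (n > 0) ? SSL_ERROR_NONE : SSL_get_error(c->ssl, n);
        pthread_mutex_unlock(&c->lock);
        return n;
    }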


DS



Re: SSL_ERROR_WANT_READ and SSL_ERROR_WANT-WRITE question

2011-03-07 Thread David Schwartz

On 3/7/2011 4:19 PM, Yan, Bob wrote:

Thank you very much, David,



In general, if the application uses the select/poll system functions to
check the readability of the underlying BIO and invokes SSL_read/SSL_write
only if there is data available on the socket, can the deadlock still
happen?

Not only can the deadlock I explained still happen, but many other 
deadlocks can happen. The design you are talking about is the complete 
opposite of how you correctly use non-blocking sockets with OpenSSL.


You should call SSL_read/SSL_write any time you want to read or write to 
or from the SSL connection. You should call a 'select' or 'poll' 
function only when OpenSSL tells you to.




Specifically, regarding your last statement ("At step 3 above, the reader thread
is now blocking waiting for the renegotiation data to arrive on the
socket. But that renegotiation data has already been received and read
by the SSL engine. So the thread will block indefinitely waiting for
something that has already happened."): the question is, if the
underlying socket is non-blocking and the application is using select/poll
to check the readability of the SSL connection and then invokes the
SSL_write/SSL_read call, can this deadlock still happen?


Yes, that's precisely how the deadlock happens. It is easy to assume the 
SSL connection has both a 'read state' and a 'write state' because TCP 
connections do. But SSL does not. If 'SSL_write' returns WANT_READ, that 
invalidates any prior WANT_READ condition you may have gotten from 
'SSL_read' -- the SSL connection has one and only one state.


DS



Re: BN_mod_mul_montgomery() causing cpu spike

2011-03-05 Thread David Schwartz

On 3/2/2011 10:23 AM, prakgen wrote:


I've enabled fips in sshd (OpenSSH 5.5p1)


Why?

 and linked it against

openssl-fips-1.2. Everytime time sshd is spawned, the cpu utilization
shoots up and remains high (40% to 90%) for around 5 seconds.


Doctor, it hurts when I do that.
Then don't do that.

DS



Re: BN_mod_mul_montgomery() causing cpu spike

2011-03-05 Thread David Schwartz

On 3/5/2011 6:23 AM, prakgen wrote:


 and linked it against

openssl-fips-1.2. Everytime time sshd is spawned, the cpu utilization
shoots up and remains high (40% to 90%) for around 5 seconds.


Doctor, it hurts when I do that.
Then don't do that.


Well Doctor, I need to do that.


Then it's going to keep hurting.

DS



Re: SSL_connect( ) want read

2011-03-04 Thread David Schwartz

On 3/3/2011 6:50 AM, ikuzar wrote:

Hello,
I have got an SSL_ERROR_WANT_READ after a call to SSL_connect. I'd like
to know what I should do exactly.
Thanks


Retry the connect operation later, ideally after confirming that the 
underlying socket is readable.


DS




Re: SSL_write( ) fails

2011-03-02 Thread David Schwartz

On 3/2/2011 9:55 AM, ikuzar wrote:


3) I come back to SSL_write(). It wants to read.
The doc says:
"Caveat: Any TLS/SSL I/O function can lead to either of
SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE. In particular,
SSL_read() or SSL_peek() may want to write data and SSL_write()
may want to read data. This is mainly because TLS/SSL handshakes may
occur at any time during the protocol (initiated by either the client or
the server); SSL_read(), SSL_peek(), and SSL_write() will handle
any pending handshakes."
3.1) When the doc says SSL_write() may want to read data, what does
it mean exactly? Does it mean that a function is blocked somewhere
because it wants to read? (In my case, is this function accept()??)


It means that for the SSL_write operation to make further forward 
progress, the SSL engine must read some data from the connection. Since 
the connection is non-blocking, it is not blocking. It is somewhat 
analogous to EAGAIN.


The difference is that you know specifically that it must *read* from 
the connection. You may retry the SSL_write operation at any time. You 
could, for example, wait half a second and then call SSL_write again if 
you wanted to. The ideal response would be to wait until you know data 
can be read from the other side, for example, by using 'select' or 
'poll' to detect readability of the socket.



3.2) Do the client and server share the same SSL object ... ?


I think that question is too vague to answer. Each side has its own 
software running and tracks the state of the shared SSL connection 
however it wants. However, if you had trusted shared memory to store a 
shared object in, what would you need SSL for?


DS



Re: Registration

2011-02-25 Thread David Schwartz

On 2/25/2011 11:59 AM, Michael S. Zick wrote:

On Fri February 25 2011, Ricardo Custodio wrote:

See www.icp.edu.br



Interesting, I get a "server certificate fails authentication"
from the above address.


You haven't chosen to trust the CA that issued it.


Keep in mind that when the person offering advice can't get it right. . . .


How is your decision not to trust the CA he chose to use a mistake on 
his part?


DS



Re: Registration

2011-02-25 Thread David Schwartz

On 2/25/2011 5:03 PM, John R Pierce wrote:


the root certificate in question is not in either Google Chrome's list
of CAs, or in Mozilla Firefox's list.

AC-SSL da ICPEDU is the Root CA, issuing a certificate to www.icp.edu.br

The Root Certificate appears to be one locally generated...

CN=AC-SSL da ICPEDU
S=Distrito Federal
C=BR
E=go...@icp.edu.br
O=ICPEDU
O=RNP
L=Brasilia

with an issuer statement...

Os certificados da ICPEDU sao para uso exclusivo por instituicoes
brasileiras de ensino e pesquisa, e nao tem eficacia probante.

which iGoogle roughly translates as...

Certificates of ICPEDU are for exclusive use by institutions of
higher education and research, and has no probative efficacy.

So basically, this is pretty close to self-signed.


So it's working as designed. He's decided that encryption that can't be 
broken passively is better than nothing. It's not clear to me that this 
is a mistake on his part. Perhaps if he didn't realize the implications 
of his decision, it might be an error. But not knowing his requirements, 
I don't see how we can say that.


DS



Re: Expiration date of a STARTTLS certificate

2011-02-21 Thread David Schwartz

On 2/20/2011 6:42 PM, Bharani Dharan wrote:

Hi,

I want to find the following details but am getting an error. Errors are
highlighted in red. Kindly advise.

# echo | openssl s_client -connect server:25 -starttls smtp > certificate

gethostbyname failure

connect:errno=0


Presumably the name of the server is not "server".

DS





Re: problem in ssl connection with server

2011-02-03 Thread David Schwartz

On 2/2/2011 9:13 PM, praveen kumar wrote:


  I got this error. They configured port 8000 for SSL, but I still can't
figure out where the problem is.

  Can anyone help me find where the exact problem is?


Their server doesn't correctly support SSL negotiation. You can make it 
work by disabling TLS1 negotiation. With s_client, use the '-no_tls1' flag.
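
For example (host is a placeholder for the poster's server):

    openssl s_client -connect host:8000 -no_tls1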


DS



Re: [FWD] problem in privete key

2011-01-31 Thread David Schwartz

On 1/31/2011 12:25 AM, Lutz Jaenicke wrote:


  Dear friend

  This is Praveenkumar, working as an app developer at Linkwell
Telesystems, Hyderabad, India.

  I have a problem with SSL while hitting the server with the certificate
provided by the server. I am using the openssl tool on Linux.

  When I try to execute the client with the certificate on the command line, I
am getting an error like this:

   openssl s_client -connect ip:port -cert certfile.crt

   ERROR:
  unable to load client certificate private key file
3077682908:error:0906D06C:PEM routines:PEM_read_bio:no start 
line:pem_lib.c:698:Expecting: ANY PRIVATE KEY
error in s_client


This is the sample certificate file.

File name: certfile.crt

The data inside the file is like this:

-BEGIN CERTIFICATE-

[snip]

-END CERTIFICATE-

This is the file sent by the server. Please, can anyone help me connect to the server?



If the file is sent by the server, why are you passing it to s_client? 
The '-cert' option, when passed to 's_client', is used to specify a 
*client* certificate. Without a corresponding private key, it won't work.
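
For example, with placeholder file names: a real client certificate is
supplied together with its key, whereas a CA certificate received from the
server side belongs with -CAfile instead:

    openssl s_client -connect ip:port -cert client.crt -key client.key
    openssl s_client -connect ip:port -CAfile certfile.crt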


DS



Re: RSA_generate_key function

2011-01-31 Thread David Schwartz

On 1/31/2011 5:37 PM, Ashwin Chandra wrote:

I would like to call this function to generate the same public/private
key everytime.

I thought all I had to do was create the same seed using RAND_seed each
time, however I still keep getting different key pairs.

Is there any way to have RSA_generate_key generate the same
public/private key each time? (I know this doesn’t make sense security
wise, but the work I have to do requires it).



Replace RSA_generate_key with your own function that returns the desired 
key. Using the same seed each time won't work because intervening 
operations can leave the PRNG in a different state. You could use your 
own PRNG to replace OpenSSL's and then put it into a particular state 
prior to calling RSA_generate_key.
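
A sketch of the first suggestion -- skip generation entirely and return a
fixed key loaded from a PEM file (the file name is a placeholder):

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/rsa.h>

    RSA *get_fixed_rsa_key(void)
    {
        FILE *fp = fopen("fixed_key.pem", "r");
        if (fp == NULL)
            return NULL;

        RSA *rsa = PEM_read_RSAPrivateKey(fp, NULL, NULL, NULL);
        fclose(fp);
        return rsa;       /* holds both the private and public halves */
    }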


DS




Re: Intermediate CA

2011-01-13 Thread David Schwartz

On 1/12/2011 3:19 PM, Jijo wrote:

Hi All,

I hope this is a basic question for you guys.

I'm trying to set up a TLS connection between a client and a server.

On the server I did the following things:
1. Created a self-signed rootCA.
2. Created an intermediateCA and signed it with the rootCA.
3. Created a server certificate and signed it with the intermediateCA.
4. Appended the rootCA to the server certificate.


In the client:
1. Created a server certificate and signed it with the rootCA.
2. Stored the CA as rootCA.

Now I made a TLS connection from the client to the server and the client returns
an error: 20 Unable to get Local Issuer Certificate.


If the client doesn't have the intermediate certificate, how can it know 
the server's certificate is valid?



I don't see this error if I use the intermediateCA as the CA in the client.

Am I supposed to use the intermediateCA as the CA in the client?


You have to get the intermediate certificate to the client somehow. The usual 
method is to have the server send it. Does the server software provide a way to 
supply a certificate chain?
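
If it does, a common arrangement (sketched; "chain.pem" is assumed to contain
the server certificate first, then the intermediate) looks like this:

    #include <openssl/ssl.h>

    int load_chain(SSL_CTX *ctx)
    {
        /* Every certificate after the first in the file is sent as the chain. */
        return SSL_CTX_use_certificate_chain_file(ctx, "chain.pem") == 1;
    }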


DS



Re: [openssl-users] Re: How to disable index and serial?

2011-01-12 Thread David Schwartz

On 1/12/2011 6:48 AM, Mark H. Wood wrote:


Oh, now I'm curious.  How do they test the randomness of a single
sample?  1 is every bit as random (or nonrandom) as
0xdcb4a459f014617692d112f0942c89cb.


They don't validate the number itself; they validate that the method by 
which the number was claimed to be generated meets the requirements for 
randomness and that the number was in fact generated by the method by 
which it was claimed to be generated.


One way is to have an auditor present during an ISO 21188 root key 
ceremony. Typically, the auditor examines the videotape of the root key 
ceremony, the notarized log book, the signed statements of the signatory 
and lawyer witnesses, and if necessary, questions the signatory witnesses.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: How to disable index and serial?

2011-01-11 Thread David Schwartz

On 1/11/2011 7:02 AM, Fredrik Strömberg wrote:


(For the curious: I don´t need serial because I only identify with CN,
and I don´t need a database because I will never revoke any
certificates.)


The problem is, everybody else identifies by serial. So unless you don't 
plan to interoperate with anyone else's software, you also need to 
assign each certificate a unique serial number.
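
With the command-line tools, the simplest ways to get unique serials without
maintaining the ca database yourself are to let 'openssl x509 -req' manage a
serial file, or to set one explicitly. For example (hypothetical file names):

openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -set_serial 0x1001 -out server.crt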


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: openssl socket

2010-12-29 Thread David Schwartz

On 12/29/2010 1:11 AM, Esimorp E wrote:

Hi all,
I tried changing the one-to-one socket type in OpenSSL to one-to-many by
changing SOCK_STREAM to SOCK_SEQPACKET and it compiled fine but while
trying to run other program on it I had the following error:
bss_dgram.c(236): OpenSSL internal error, assertion failed: ret = 0

Please, can anyone tell me how to solve this problem.


Change the socket type back to SOCK_STREAM or implement all the code 
necessary to handle the semantic differences between these socket types.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL cert chain validation timestamp issues

2010-12-21 Thread David Schwartz

On 12/20/2010 10:49 AM, travis+ml-open...@subspacefield.org wrote:


So a friend ran into this lately;

libnss, at least on Linux, checks that the signing cert (chain) is valid
at the time of signature - as opposed to present time.  (It may check
present time as well - not sure on that)


This is correct behavior. Certificates don't expire even if the 
credentials used to sign them do. The whole point of a signature is that 
it cannot be repudiated.



This makes for problems if you renew the cert, since the new cert will
have a creation (start) date of the current time, after the object was
signed.


The new cert didn't make the signature and has nothing to do with the 
signature. The phrase "renew the cert" is code for "issue a new 
certificate to the same recipient with a later expiration date." It has 
no effect on the existing certificate and it certainly has no 
retroactive effect on things the previous certificate has already done.



Can anyone think of why this would be a good thing?


It's vital. What good would an expiring signature be? The whole point of 
a signature is that it cannot be repudiated, revoked, expired, or 
otherwise invalidated.



If one actually trusted the signature date, someone could lie by
backdating the object.


Sure, those we trust can always lie. But we're not stupid. We pick the 
entites we trust by making sure they are entities we do not expect to 
lie. If you can get Verisign to issue a forged timestamp, then you can 
make us think a signature was made in the past. (The timestamp is 
normally itself signed by an entity we have chosen to trust for that 
purpose.)



Also, we're unsure how to create a new cert that's still valid for
the range - I think we're gonna have the person set their system
clock back, since I don't think openssl command line actually prompts
for a creation date.


Why would you want to do that and what good would that do? They wouldn't 
be able to get a past timestamp unless they bribed a timestamping 
authority. And if they did that, why would you want to help them create 
a certificate with a bogus date?! So what exactly would the point be?


I think you are expecting a new certificate to somehow go back and time 
and modify or affect previous operations that have already taken place. 
It can do no such thing. Operations that have taken place in the past 
are beyond our ability to affect in the future.


Again, the whole point of a signature is that nothing done after the 
signature is made can affect it. It stands forever as it is as 
conclusive proof that the entity named certified the information signed.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: How to find the other end can support SSL or not

2010-12-17 Thread David Schwartz

On 12/17/2010 1:41 AM, Kingston Smiler wrote:


Is there any way to identify whether the other end supports TLS or not.


There is no way we could know the answer to this question. We have no 
idea what your other end is, who designed it, or how.



My requirement is like this.
If the other end supports TLS i should send the packet over TLS
connection otherwise i should send traffic over TCP.


Are you designing the other end? Do you have a requirement that a MITM 
must not be able to make you think the other end doesn't support TLS if 
it actually does?



If the other end doesn't support TLS, my initial TCP connection will be
successful. When i do SSL_Connect
over this TCP socket, the other end considers this  as TCP payload and
doesn't respond to me. I don't get an
failure related to SSL_connect. So is there a way to identify whether
the other end supports TLS or not.


Did you design one? If so, then use it. If not, then it comes down to 
whether the person who did design the other end provided a way to do 
this. You would have to ask them or, if they provided a specification, 
check it.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL shutdown

2010-12-02 Thread David Schwartz

On 12/2/2010 2:36 AM, Aarno Syvänen wrote:

Hi List,

I have problem with SSL_shutdown. Advice seems to be to call it again, if the 
return value is 0.
However, this means that shutdown can hang forever. Can I just call 
SSL_shutdown and go on ?


You can go do other things and try to shut the connection down again later.

Here is the relevant documentation (assuming a non-blocking socket BIO):

   If the underlying BIO is non-blocking, SSL_shutdown() will also 
return when the underlying BIO could not satisfy the needs of 
SSL_shutdown() to continue the handshake. In this case a call to 
SSL_get_error() with the return value of SSL_shutdown() will yield 
SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE. The calling process then 
must repeat the call after taking appropriate action to satisfy the 
needs of SSL_shutdown().
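
A minimal sketch of that pattern, assuming a non-blocking BIO and an event loop
that will call the (hypothetical) function again once the socket becomes ready:

#include <openssl/ssl.h>

/* Returns 1 when the shutdown is complete, 0 when it should be retried
   later (after the socket becomes readable/writable), -1 on a real error. */
int try_shutdown(SSL *ssl)
{
    int ret = SSL_shutdown(ssl);
    if (ret == 1)
        return 1;              /* close_notify sent and received */
    if (ret == 0)
        return 0;              /* our close_notify sent; call again later */
    switch (SSL_get_error(ssl, ret)) {
    case SSL_ERROR_WANT_READ:
    case SSL_ERROR_WANT_WRITE:
        return 0;              /* underlying BIO couldn't make progress yet */
    default:
        return -1;             /* give up; close and free the connection */
    }
}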


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Handshake split across multiple TCP connections

2010-11-29 Thread David Schwartz

On 11/29/2010 2:34 AM, A. N. Alias wrote:


I've been using IE, Chrome and Firefox as clients for a test SSL/TLS server.
This works fine with Firefox, which uses a single TCP connection for the TLS
handshake and subsequent communication.  However, IE and Chrome seem often to
send different parts of the handshake on different TCP connections.  When this
happens, the server fails.


It is certainly possible for a browser to have more than one connection 
to the same server, each in a different state.



As an example, IE may connect and send a ClientHello.  The server responds with
a ServerHello on the same socket.  IE then replies with
ClientExchange/ChangeCipherSpec/Finished, but not necessarily on the same
socket.  Instead, IE may establish a brand new TCP connection and send the
messages over that -- all while the server is expecting the messages on the
original socket.  The server remains blocked on a recv() for that socket,
waiting for data that will never come through it.


If so, then the server is stupid. Why would a server that had two 
connections ever remain blocked in a 'recv' on one of them? Clients 
generally make no timeliness guarantees, but servers generally do. So it 
is almost never acceptable for a server to say I won't do job X for 
client Y until I can do job Z for client T.



How does one deal with a handshake sent over multiple TCP connections?  It's not
clear how to determine which connections are sending data for which handshakes,
especially when many clients hit the server hard.


Each connection is sending data for that connection's handshake.

 The server doesn't know

whether to call recv() to receive handshake messages over the original
connection, or to use accept()/recv() to take a new connection.  If a new
connection does come, can we simply terminate the original one?


Any sensible server can handle multiple connections in parallel. There 
are many architectures you can use including multi-threaded, all 
non-blocking network I/O, process per connection, and so on.



I might well be missing something here, or perhaps not correctly interpreting
what I'm seeing in the debug spew.  Any help much appreciated.  Thanks,


If you're developing a server that ever gets into a state where it will 
not do any work for any client until a particular client does some 
particular thing, you're probably not going to ever be happy with the 
results.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: problem with pem file, no start line. centos.

2010-11-18 Thread David Schwartz

On 11/18/2010 12:50 AM, Steve yongjin Shin wrote:


-BEGIN RSA PRIVATE KEY-
...omitted..
-END RSA PRIVATE KEY-
-BEGIN CERTIFICATE-
...omitted...
-END CERTIFICATE-
=
so I started my vnc reflect server
but, it shows error message
=
openssl_init: SSL_CTX_use_certificate_chain_file() failed.
ssl error: error:0906D06C:PEM routines:PEM_read_bio:no start line
=
my test.pem file itself definitely has a start line.
but, it shows that kind of error message.


The program wants a private key file, not an RSA private key file. You 
can convert one to the other with 'openssl pkey'.


openssl pkey -in my_rsa_private_key.pem -out my_private_key.pem

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Question regarding OpenSSL Security Advisory

2010-11-18 Thread David Schwartz

On 11/18/2010 7:26 AM, Pandit Panburana wrote:


I am not clear about the condition that vulnerability when using
internal session caching mechanism. Is it the same thing as TLS session
caching or this is some thing different?


The internal session caching mechanism caches TSL session information. 
If that doesn't answer your question, then I'm not sure what you're asking.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Question regarding OpenSSL Security Advisory

2010-11-17 Thread David Schwartz

On 11/16/2010 11:06 PM, Nivedita Melinkeri wrote:


Hi,
I had some questions about the latest security advisory. I understand
that this applies to multi-threaded application while using ssl sessions.


Correct.


If the application is written thread safe using
CRYPTO_set_locking_callback functions will the vulnerability still apply ?


If it didn't, it wouldn't be a vulnerability at all.


If the ssl code calls the locking callback function before accessing the
internal session cache then the vulnerability should not
apply to above mentioned applications.


Right, it shouldn't, but it does. That's what makes it a vulnerability. 
Code not working under conditions where it cannot be expected to work is 
not a vulnerability, it's simply misuse. This is a vulnerability because 
it affects applications that use the code correctly.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Force ASN.1 encoding routines to keep existing encoding

2010-11-08 Thread David Schwartz

On 11/6/2010 7:44 AM, Martin Boßlet wrote:


I just tested, whether the BER-encoding is preserved if I do not alter
any of the contents. Unfortunately, it seems as if the encoding is not
preserved. I did the following:

d2i_PKCS7_bio(file,p7);

and then directly

i2d_PKCS7_bio(file2, p7);

again. file was BER-encoded using e.g. an Octet String in
constructed form with inifinite length, which was DER-encoded in
primitive form using definite length in the output.
Is there a way how I can circumvent the reencoding?

Best regards,
Martin


Really, the best solution is just not to do that then. If it wants the 
signature done on the byte-for-byte form supplied, then do the signature 
on the byte-for-byte form supplied. Don't convert it into any other form 
and then convert it back because absent DER, it's unreasonable to expect 
that to produce the same output.


Keep both the PKCS7 object and a raw byte version. Compute and check 
signatures on the raw byte version. Do other checks on the PKCS7 object.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL_connect and SSL_accept deadlock!

2010-11-07 Thread David Schwartz


	This may be a stretch, but did you confirm the socket is within the 
range of sockets your platform allows you to 'select' on? For example, 
Linux by default doesn't permit you to 'select' on descriptor numbers 1,024 
and up (FD_SETSIZE defaults to 1,024), though you can have more than 1,024 file descriptors in use 
without a problem.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL_connect and SSL_accept deadlock!

2010-11-03 Thread David Schwartz

On 11/2/2010 6:25 PM, Md Lazreg wrote:


 r=select(m_sock_fd + 1, fds, 0, 0, ptv);
     if (r <= 0 && (Errno == EAGAIN || Errno == EINTR))/*if we timed
out with EAGAIN try again*/
 {
 r = 1;
 }


This code is broken. If 'select' returns zero, checking errno is a 
mistake. (What is 'Errno' anyway?)



   r = SSL_connect(m_ssl);
   if (r > 0)
   {
  break;
   }
   r = ssl_retry(r);
   if (r <= 0)
   {
  break;
   }
   t = time(NULL) - time0;
}


Err, what? Is an ssl_retry return of zero supposed to indicate a fatal 
error? The code in ssl_retry doesn't seem to follow this rule. (For 
example, consider if 'select' returns zero and errno is zero. That would 
indicate a timeout, not a fatal error.)



int time0 = time(NULL);
timeout=10 seconds;
while (t < timeout)
{
   r = SSL_accept(m_ssl);
   if (r > 0)
   {
  break;
   }
   r = ssl_retry(r);
   if (r <= 0)
   {
  break;
   }
   t = time(NULL) - time0;
}
if (t >= timeout)


There is no code to initially set 't'.

Also, an overall comment: Maybe it's just my taste, but your code seems 
to have a 'worst of both worlds' quality to it. It uses non-blocking 
sockets, but then finds clever ways to make the non-blocking operations 
act like blocking ones.


Is the server multithreaded? If so, I could see this as mere laziness 
(or, efficient use of coding resources to be more charitable) rather 
than actual poor design.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: FIPS mode - fails to read the RSA key

2010-10-06 Thread David Schwartz

On 10/6/2010 5:01 AM, john.mattapi...@wipro.com wrote:

Thanks Steve,

I used the following commands to create the certificate using the
openssl built with FIPS support

openssl genrsa -des3 -out wv-key.pem 1024
openssl req -new -x509 -key wv-key.pem -out wv-cert.pem -days 365

Do I miss any option to make it FIPS supported

John


You need to defined the environment variable 'OPENSSL_FIPS'. Otherwise, 
the 'openssl' executable will never call FIPS_mode_set(1) as required by 
the security policy.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Set Time out for SSL read

2010-10-02 Thread David Schwartz

On 9/30/2010 11:39 PM, Raj wrote:


Can you please let me know how can I set time out as a whole. I think
you are mentioning about SSL_CTX_Set_timeout function. If it is so then
I have set the time out using this function, and sadly I didn't get the
expected result.


There are a lot of ways. The most common is 'alarm'. Your platform may 
also have a particular way of timing out TCP connections such as through 
a 'setsockopt'. This is an ugly method and, IMO, is only appropriate in 
a program that needs to terminate if the connection is lost.
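
If you do go the setsockopt route, a minimal sketch (POSIX, SO_RCVTIMEO,
hypothetical helper name) looks like this; when the timeout fires, the blocked
read inside SSL_read fails with an error instead of hanging forever:

#include <sys/socket.h>
#include <sys/time.h>

int set_recv_timeout(int fd, int seconds)
{
    struct timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;
    /* after this, a blocking recv() on fd gives up after 'seconds' */
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}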


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Set Time out for SSL read

2010-09-30 Thread David Schwartz

On 9/29/2010 11:41 PM, Raj wrote:


Hi All
Is there any method to set time our for SSL _read function.
As from the Open SSL document SSL_read will not return if there is no
data to read from the socket


You really shouldn't need this. If you know for sure that it's the other 
side's turn to transmit, you should be timing out the connection (or 
even application) as a whole, not just the read. If you don't know for 
sure that it's the other side's turn to transmit, you should not be 
making a blocking call to SSL_read.


In any event, I recommend that you basically never use blocking functions.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: where is the memory being held

2010-09-28 Thread David Schwartz

On 9/27/2010 4:13 PM, Scott Neugroschl wrote:

As David said, yes.
On the other hand, you could re-implement malloc() and free() for your
platform.


There's really no way to make that help very much. It might help a 
little, but the fundamental problem is this:


If you want to implement each 'malloc' so that a later 'free' can return 
the memory to the operating system, you can. But that requires rounding 
up even small allocations to at least a page, which increases your 
memory footprint.


If you don't implement each 'malloc' that way, you still wind up with 
the problem that one small allocation that has not been freed in the 
middle of a bunch of larger allocations that have been freed prevents 
you from returning any of the memory used by the larger allocations to 
the operating system.


Generally, what you need are algorithms designed for low memory 
footprint and a way to 'group' allocations that will tend to be freed as 
a unit (such as those related to a single SSL session) such that when 
they are all freed, the memory can be returned to the OS. OpenSSL simply 
is not designed this way.


You could probably hack OpenSSL to pass a pointer to a session object to 
calls to malloc/free (perhaps using TSD) and use that TSD pointer as an 
allocation context. That might increase the chances that the whole 
allocation context is freed. It may even be sufficient (or at least 
helpful) just to hook all OpenSSL calls to malloc/free and process them 
from their own arena.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Does OpenSSL have any plans of supporting SSL_read / SSL_write on the same SSL_S from multiple threads?

2010-09-27 Thread David Schwartz

On 9/25/2010 9:31 AM, Jayaraghavendran k wrote:


(a) Does OpenSSL plan to support this feature in any of it's future
releases? (Or does any of the releases already support it? I went
through the Change Logs, but couldn't find anything), If no, why not?


I can't answer whether there are any plans, but I doubt it. The reason 
not to is that the library is not the right place to implement that kind 
of logic.



(b) As far as I understand, the main problem with the parallel
SSL_read / SSL_write is renegotiation, i.e. a call to SSL_read can
lead to a send call and vice-versa, so, if I ensure I don't do
renegotiation at all (both sides use my application) then will the
code work fine?


No, it will still break. The SSL connection has one and only one state, 
and you are trying to manipulate it from two places at the same time.



(c) I would also like to know the reason behind such a design
considering the fact that TCP supports parallel send / recv. Is it
enforced by the protocol design or any other design parameters forced
such a design?


This is how every other library works. TCP is an exception.

Take, for example, a typical string library. You can perform 'read' 
operations (those that do not change state) from multiple threads to the 
same string at the same time. But you would never expect the string 
library to support two 'write' operations (those that do change state) 
to be supported to the same string at the same time. If you did, say 
'a+=A;' and 'a+=B;' at the same time in two different threads, you 
wouldn't expect a sensible result.


Another problem is that there's basically no way OpenSSL could provide 
this capability without a service thread. Consider if a blocking 
SSL_read is terminated from another thread that calls a shutdown 
function -- what thread is left to complete the SSL protocol shutdown? 
TCP handles lingering data in the kernel with the kernel's own threads, 
but OpenSSL can't do that. And unless you use a service thread per 
connection in flux, you wind up in the very platform-specific world of 
I/O multiplexing.


All of this can be done, but not sensibly inside OpenSSL.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: where is the memory being held

2010-09-27 Thread David Schwartz

On 9/26/2010 11:14 PM, zhu qun-ying wrote:


Does it mean that it is hard to change the behavior?


Yes, because it's not implemented in any one particular place. It's a 
fundamental design assumption throughout OpenSSL that it's aimed at 
general-purpose computers with virtual memory subsystems.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: where is the memory being held

2010-09-24 Thread David Schwartz

On 9/24/2010 11:05 AM, zhu qun-ying wrote:


I think I should clarify something here.  The app is running

 in a small device that does not have virtual memory

(no swap space) and the memory is limited (256/512 M).

 At peak connection load, it may use up to 90% of the system memory,
 and when connection goes down, memory usage is not coming down.
 This leave very little memory for other part of the system,
 as this app is only a small part of a bigger system. The memory
 usage is a big concern as it is always running with the box.


So far periodically restart the app is not a good solution.


Sounds like OpenSSL wasn't what you wanted. OpenSSL is intended for use 
on general-purpose computers with virtual memory. It is not designed to 
return virtual memory to the system, which in your case means it won't 
return physical memory to the system. Ouch.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Creating Extended Validation SSL Certificates

2010-09-23 Thread David Schwartz

On 9/23/2010 7:16 AM, Gumbie wrote:

   Can someone explain what is needed to create an EV (Extended
Validation) Certificate? I have been trying to research this and have
found limited information on this. Only one document that was of any
help: http://www.cabforum.org/EV_Certificate_Guidelines.pdf.

   My issue is with OpenSSL and adding the needed additional OIDs to the
certificate.

Thanks in advance,

Gumbie



Either request them from any CA that offers them or yourself make a CA 
that follows the EV guidelines. The whole point of EV certificates is 
that you cannot create them without going through extended validation. 
By design, there is no way to bypass this requirement.


DS


__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: where is the memory being held

2010-09-23 Thread David Schwartz

On 9/23/2010 11:42 AM, zhu qun-ying wrote:

Hi,

I have an SSL apllication, that it suppose to run for a long time. After some 
time of running, I found the usage of the memory is growing.  I stop all SSL 
connections and checked all SSL * has been freed  but it could not release the 
memory back to the system.

After some investigation, I found there is no memory leak, but seems lot of 
memory are unable to release back to system.  mtrace found out there are quite 
a lot of fragmented memory being held by the SSL library.  I would like to know 
what could I do to reduce the memory held by SSL library after all connections 
have been dropped?

I am handling the SSL session through share memory myself and that part of the 
memory is allocated from the start.

mallinfo() reports after some test and no connection for a while:

system bytes = 28271952
in use bytes =  1809184
non-inuse bytes  = 26462768
non-inuse chunks =   81
mmap regions =4
mmap bytes   =  1773568
Total (incl. mmap):
system bytes = 30045520
in use bytes =  3582752
releasable bytes =   462496

--
qun-ying


This all seems normal. Virtual memory is not normally considered a 
scarce resource and unless the consumption is really absurd, it's not 
worth worrying about.


Unless your virtual memory use grows linearly with constant load, it's 
generally not worth worrying about. If it grows in an exponentially 
decreasing way with constant load or grows linearly with increasing peak 
load, I wouldn't worry about it at all.


DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: SHA-1 Hash Problem with i2d_Pubkey()

2010-09-13 Thread David Schwartz

On 9/12/2010 11:38 PM, Raj Singh wrote:


issuer_pubkey_len = i2d_PUBKEY(pubKey, NULL);
issuer_pubkey = malloc(issuer_pubkey_len);
i2d_PUBKEY(pubKey, &issuer_pubkey);
memory_dump(issuer_pubkey, issuer_pubkey, issuer_pubkey_len);

The problem, is issuer_pubkey buffer is different each time, I run the
my application using same code.


Umm, you forgot to save the original issuer_pubkey. After the call to 
i2d_PUBKEY, issuer_pubkey points elsewhere. Try:


issuer_pubkey_len = i2d_PUBKEY(pubKey, NULL);
issuer_pubkey = malloc(issuer_pubkey_len);
foo=issuer_pubkey;
i2d_PUBKEY(pubKey, &foo);
memory_dump(issuer_pubkey, issuer_pubkey, issuer_pubkey_len);

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Connection Resetting

2010-09-01 Thread David Schwartz

Sam Jantz wrote:

 It's multi threaded with non-blocking I/O.  I'm not sure exactly what
 you mean by socket discovery, but I think you are asking how my program
 determines when something is ready?  If that's the case then my program
 uses a select statement to watch the file descriptor to see if it's ready
 for read or write.  It uses a call back system to perform the correct
 action based on which fd_set was ready.

Okay, just make sure to only call 'select' when OpenSSL tells you to.
Otherwise, you may be waiting for something that has already happened.

 
 void ProxySSLConnection::handle_ssl_error(int ret,
 handler_function handler, const char * caller)
 {
     int error = SSL_get_error(_ssl, ret);
     switch (error)
     {
         case SSL_ERROR_WANT_READ:
             schedule_read(handler);
            break;

Your code has a subtle race condition because it assumes the two directions
of an SSL connection have independent states. Consider the following case:

1) SSL_read on connection A returns SSL_ERROR_WANT_READ.

2) In another thread, SSL_read on connection B returns with some data.

3) Some data arrives on connection A. SSL_read on connection A now would
return data immediately.

4) You call SSL_write on connection A to send the data you received in step
2. It reads from the socket the data that arrived in step 3. (SSL_read would
now return data without having to read from the socket; the socket itself is
no longer readable.)

5) You now act on the SSL_ERROR_WANT_READ you got in step 1, but it was
invalidated by the actions in step 4. You call 'select' to wait for data
that has already been received and never see the data received in step 3 and
read in step 4.

Before you call 'select' to wait for readability or writability, you must
make sure that data movement in the other direction did not make the
WANT_READ/WANT_WRITE indication invalid.

This bug tends to rear its ugly head only on renegotiations though. So I
don't think it's causing your actual problem.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Connection Resetting

2010-08-31 Thread David Schwartz

 I'm writing a SSL proxy (which is working great except for this issue)
 and every time I got to attach a file in an email the connection resets,
 and it gets caught in an infinite retransmit loop.

There are two totally different ways you can make an SSL proxy, and to figure 
out your issue, we really need to know which type.

1) An SSL proxy can understand the underlying protocol, know which side is 
supposed to transmit when, and only try to read from that side. In this case, 
it's vital that the proxy correctly track the protocol and not be reading from 
one side when it's the other side's turn to send.

2) An SSL proxy can ignore the underlying protocol and not know which side is 
supposed to transmit when. In this case, the proxy must always be ready to read 
from either side. It must never block indefinitely trying to read from one side.

You can also have a hybrid. For example, you can read only from the client side 
until you get the full request, and then once you process the request, you 
switch to bidirectional proxying.

It is very common for people to naively assume that their code will magically 
know which side to read from. I assure, this is not the case. Unless you 
carefully track the protocol, all you know is that the client has to send some 
data first. But once it does, all bets are off -- again, unless you carefully 
track the protocol.

Also, you don't mention whether your I/O is blocking or non-blocking, and if 
non-blocking, how your socket discovery works. This can be subtle with OpenSSL 
and your mistake might lie there. For example, if you using blocking I/O, you 
can't just block one thread in SSL_read in each direction, because if you do, 
there's nothing you can do when SSL_read returns (since the connection you need 
to send on is in use, potentially indefinitely, by the other thread).

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Man in the middle proxy - Not working

2010-08-18 Thread David Schwartz

Raj wrote:

 I have tried one more method to read the data from the socket,
 which was
 partially successful  it is defined as follows
  do
   {
    dwReadDataLen = SSL_read(Serverssl, pBuff, iBufferSize); // Gets the data from the server side
    SSL_write(SourceSsl, pBuff, dwReadDataLen);              // Writes the data back to the SSL
   } while (dwReadDataLen > 0);

This is the basic idea of how you proxy, but it can't work for a
general HTTP proxy. For one thing, it assumes the end of a reply is marked
by the close of a connection. This is true for some HTTP requests, but it's
not true in general.

You can write a proxy two different ways:

1) You can understand the protocol you are parsing and know when it
changes directions. Based on this understanding, you can switch from
proxying in one direction to proxying in the other.

2) You can avoid having to understand the protocol you are parsing.
But in this case, you will not know which side is supposed to send data
next, so you must always be ready to proxy in either direction.

It seems you do neither of these two things. You try to proxy in
only one direction at a time but you don't track the protocol. How do you
even know when you've sent the entire request and can even enter this loop?
How do you know when you've read the entire reply and can begin reading the
next request?

Your test condition, 'dwReadDataLen > 0', will be true so long as the
connection is healthy. It will typically remain healthy even when the reply
has been fully sent.

DS
 

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: SSL/TLS with server names picked from DNS

2010-08-12 Thread David Schwartz
Sandeep Kiran P wrote:

 We dont have any control on how the server generates its certificates.
 As said earlier, we only control the client portion of SSL/TLS.
 Sites where our client application runs, is handed over the location
 where trusted CA certs are stored and thats all we have.
 
 Secondly, as you pointed out, if we were to maintain a list of
 legitimate server certs, we could have as well maintained a list of
 server names at the client. The advantage with using DNS SRV RR is,
 a domain admin can add or remove servers without having to make any
 changes to the affected client applications.

There are a few fairly obvious solutions to this problem. Just pick
whichever one of them is the least awful for your application.

You could, for example, reserve a particular domain name known to the client
just for securely retrieving the list of authorized common names for
servers. The client can securely retrieve something like:
'https://serverlist.mydomain.com/server.list.txt'. Then it can still use SRV
records to find servers but ignore the servers if the list doesn't appear in
the server.list file.

This adds only a slight administrative burden in running a secure web server
that serves the server list file and in adding a new server's name to that
file.

You could also run a validation service on each server. The client, when
told to use a particular server, would simply confirm the validation service
is present on that server. Just make sure the validation service can't be
MITMed. (Easily done by ensuring the validation process validates the
server's common name.)

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Man in the middle proxy - Not working

2010-08-04 Thread David Schwartz

Raj wrote:

 Thanks for all the response
 1. I was able to do the handshaking successfully with the
 browser.
 On receiving the request from the browser I will send HTTP OK 
 response
 back to the browser, I was able to do the handshaking and read the
 actual
 GET request.
 2. Then I create a new socket to establish the connection with
 server. The connection was successful.
 Sends the request to the server
 Reads the request from the server
 
 When I read the response from the server it always return empty.

What does that mean? Are you doing a blocking read or a non-blocking read?
If 'read' returns zero, then the connection was closed by the server. If
'read' returns a number less than zero, there is an error -- tell us what
error you are getting. If 'read' returns a number greater than zero, then
that is the first part of the response.

 I
 don't
 know what went wrong here. I am reading the data from the socket using
 'recv' function. Can anybody tell me what went wrong

So, what return value do you get from 'recv'?

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Man in the middle proxy - Not working

2010-07-27 Thread David Schwartz
Alexey Drozdov wrote:

 Hi!
 
 When you configure proxy settings in a browser, it uses the HTTP CONNECT
 method to establish a raw TCP connection via the proxy (except for local
 resources).
 It looks like this:
 
 The client sends an HTTP request to the proxy:
   CONNECT remotehost:port HTTP/1.1
   Host: remotehost:port
 
 and then waits for an HTTP response like:
   HTTP/1.1 200 Connection established
 
 Only then does the browser start the SSL handshake over this raw TCP channel.
 
 Your proxy gets the HTTP request instead of an SSL handshake and fails:
 2572:error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy
 request:.ssls23_srvr.c:391
 
 ---
 / Alexey Drozdov

In other words, you switched to SSL too early. The way you did it, how would
you know what host and port you were supposed to proxy a connection to?! You
have to wait and get the CONNECT request from the client to know what host
and port they want a connection to. Then send an HTTP 200 reply, and then
begin proxying.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Man in the middle proxy - Not working

2010-07-27 Thread David Schwartz

Rene Hollan:

 Oh! I totally misunderstood this.
 I thought OP wanted to MITM SSL sessions (which is possible, if
 (a) the traffic is decrypted, (b) certs are reissued and resigned,
 and (c) the client TRUSTS the modified cert chain (typically its
 root cert)).

 This is just HTTPS Proxy. In which case other answers about
 terminating the HTTP connection first are correct.

No, you were correct. He does want to MITM SSL sessions.

A MITM and a normal proxy operate precisely the same way up until the actual
proxying part starts. His problem is earlier, when he establishes the
connection to the client, determines what host and port the client wants to
talk to, and then switches to his SSL proxy/MITM capability.

All those steps are the same.

1) Accept plaintext connection.

2) Wait for client to send request.

3) Confirm CONNECT request, host and port valid.

4) Send 200 reply.

5) Make connection to host and port requested by client.

6) If normal proxying, begin proxying (copy ciphertext between client and
server). If MITMing, begin MITMing (do SSL negotiation with both client and
plaintext, copy plaintext between client and server).

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Why does my browser give a warning about a mismatched hostname

2010-07-24 Thread David Schwartz
 I generated the SSL request, signed it with my CA (openssl), and uploaded
 the signed certificate back to the device.
 I also generated ca.der and imported it into my Internet browser. When I
 try to open iLO, my browser gives a warning about a mismatched hostname.
 
 I'm accessing this device via IP address.
 I don't want add this addresses to my DNS.

You told your browser you wanted a secure connection to 1.2.3.4 (or
whatever) and instead it got a secure connection to
some-iLO-2-Subsystem-Name. It has no reason to think you want to send your
secrets to some-iLO-2-Subsystem-Name -- hence the warning.

Simply put, you did not get a secure connection to the thing you requested a
secure connection to. You got a secure connection to something else.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: handling SSL_ERROR_ZERO_RETURN from SSL_read

2010-07-13 Thread David Schwartz

Amit Ben Shahar wrote:

 Hi,
 
 The documentation specifies that SSL_ERROR_ZERO_RETURN is returned if
 the transport layer is closed normally.
 My question is, how should i handle this return code?
 specifically should i call SSL_free normally to free resources, or are
 resources already freed?

Handle it the same way you would handle 'read' returning zero. Resources
cannot already be freed because if they were, a subsequent call to, say,
SSL_write would cause the program to crash.
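
A minimal sketch of that handling (hypothetical wrapper; it only distinguishes
the clean-close case from everything else):

#include <openssl/ssl.h>

/* returns 1 if data was read, 0 if the peer closed cleanly, -1 otherwise */
int read_or_close(SSL **sslp, unsigned char *buf, int buflen, int *nread)
{
    int n = SSL_read(*sslp, buf, buflen);
    if (n > 0) {
        *nread = n;
        return 1;
    }
    if (SSL_get_error(*sslp, n) == SSL_ERROR_ZERO_RETURN) {
        SSL_shutdown(*sslp);   /* answer the peer's close_notify */
        SSL_free(*sslp);       /* resources are NOT freed for us */
        *sslp = NULL;
        return 0;
    }
    return -1;                 /* WANT_READ/WANT_WRITE or a real error */
}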

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: question about max length string to encrypt with rsa 2048

2010-07-11 Thread David Schwartz
Chuck Pareto wrote:

 My group is using RSA with a key thats 2048 in size.
 We want to encrypt strings that are longer than this
 key size gives.
 If we switch to a key that is 4096 what is the max
 string length we can encrypt? is it double? 

No, no! You are doing this all wrong!

RSA is an algorithm that defines a mathematical operation sometimes called
encryption. It bears only a superficial resemblance to actual encryption, and
you should not use it directly to encrypt data you want to keep secret. The
RSA primitive operations have to be assembled by cryptographic experts into
complete recipes that create an actual encryption scheme that can meet
real security requirements.

There are numerous known defects in raw RSA encryption that prevent it from
being used directly as an encryption algorithm. As a trivial example,
consider messages ordering an attack on a target. One day you send the message
"attack tomorrow". This message is intercepted, but the enemy cannot make sense of
it. Two days later, the enemy intercepts the same ciphertext (because the
algorithm is deterministic). He now knows that you will attack tomorrow.

There's also Johan Håstad's attack based on the Chinese remainder theorem.
And many others, including a devastating chosen-ciphertext attack.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Smime decrypting passin argument with windows shell

2010-06-12 Thread David Schwartz

fatalfr fatalfr wrote:

 Thank you for your reply. Actually I use
 -passin (email editing problem ?)
 Complete command line working fine in cmd is
 the following one :
 
 openSSL smime -decrypt -in OUT\TEST_OK.TXT -out OUT\OK.TXT
 -inkey SBE\sbe-test.key.pem -passin pass:tn!;bg+xy:tABrP1YZK
 
 But I cannot have it working under windows shell =
 no way to give the pass argument. The other commands of my
 workflow (encrypting, signing, verifying) are ok.

It sounds like you're not using appropriate armor. Your password contains
characters that are special to the shell.

 RetourRun = WshShell.Run("C:\OpenSSL\bin\openssl.exe smime
 -decrypt -in OUT\TEST_OK.TXT -out OUT\OK.TXT -inkey
 SBE\sbe-test.key.pem -passin pass:" & MotDpass$, 1, True)

I don't see any code in there to make the password safe to pass to the
shell. What if the password is
"moose & rd /s c:\program files"? (DO NOT TEST THAT! It will destroy files.)

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: blowfish failing after around 1k input data...

2010-06-11 Thread David Schwartz

Charlie wrote:

 His algorithm has one part that doesn't seem right to me, but changing
 it made things even worse.  It seems weird that the Final function is
 inside the main for loop.  It seems like final should mean... final.
 (ie: after the looping is done).

It's quite common that fixing one bug exposes others. The solution is to fix
the other bugs, not to put the one bug back and wonder why it doesn't work
*with* an obvious bug.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Segfault when encrypting

2010-06-10 Thread David Schwartz

Hannes Schuller wrote:

  I'm very puzzled here. Why do you sign the reply and then sign a hash
  of the signature? You say Message encryption successful, but that's
  a signature you're doing, not an encryption.
 
 I was under the impression that RSA_private_encrypt and
 RSA_public_encrypt do nothing but encrypt the given payload. The
 (non-quoted) code before this ensures the reply is shorter than the
 regular message digest length.

RSA is a low-level algorithm. It provides primitives called encryption and
decryption. But whether they actually encrypt (in the sense of concealing
contents so that only a desired party can access them) depends on the system
as a whole.

So yes, RSA_private_encrypt performs the RSA primitive operation known as
encryption. But it doesn't actually encrypt anything in the sense of
concealing contents to that only a desired party can access them. (Because
anyone with the public key can reverse the operation.)

 What would be the prefered way for encryption?

See the various EVP_Encrypt functions that provide encryption in the
conventional sense rather than the RSA_* functions that provide RSA
primitive operations that may or may not meet security objectives.
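
As an illustration only, a minimal EVP sketch (AES-256-CBC, 1.0.x-style context
handling; the function name encrypt_buf is hypothetical, the caller supplies a
32-byte key, a 16-byte IV, and an output buffer with room for one extra block):

#include <openssl/evp.h>

int encrypt_buf(const unsigned char *key, const unsigned char *iv,
                const unsigned char *in, int in_len,
                unsigned char *out, int *out_len)
{
    EVP_CIPHER_CTX ctx;
    int len = 0, total = 0;

    EVP_CIPHER_CTX_init(&ctx);
    if (!EVP_EncryptInit_ex(&ctx, EVP_aes_256_cbc(), NULL, key, iv))
        goto err;
    if (!EVP_EncryptUpdate(&ctx, out, &len, in, in_len))
        goto err;
    total = len;
    if (!EVP_EncryptFinal_ex(&ctx, out + total, &len))  /* adds the padding block */
        goto err;
    total += len;
    *out_len = total;
    EVP_CIPHER_CTX_cleanup(&ctx);
    return 1;
err:
    EVP_CIPHER_CTX_cleanup(&ctx);
    return 0;
}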

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: Segfault when encrypting

2010-06-09 Thread David Schwartz

Hannes Schuller wrote:

 hash = (unsigned char *)malloc(RSA_size(rsa) * sizeof(unsigned char));
 ciphertext = (char *)malloc(RSA_size(rsa) * sizeof(char));
 signature = (char *)malloc(RSA_size(rsa) * sizeof(char));
 if (ciphertext != NULL && signature != NULL && hash != NULL) {
   memset(ciphertext, 0, RSA_size(rsa));
   ok = RSA_private_encrypt(strlen(reply), (unsigned char *)reply,
   (unsigned char *)ciphertext, rsa, RSA_PKCS1_PADDING);
   if (ok < 0) {
   derror1("Message encryption error: %s",
   ERR_error_string(ERR_get_error(), (char *)NULL));
   return (true);
   } else {
   dtrace1("Message encryption successful; return value: %d", ok);
   }
   len = base64Encode(ciphertext, ok);
   memset(hash, 0, RSA_size(rsa));
   RIPEMD160((unsigned char *)ciphertext, len, hash);
   memset(signature, 0, RSA_size(rsa));
   ok = RSA_private_encrypt(RIPEMD160_DIGEST_LENGTH, hash,
   (unsigned char *)signature, rsa, RSA_PKCS1_PADDING);

I'm very puzzled here. Why do you sign the reply and then sign a hash of the
signature? You say Message encryption successful, but that's a signature
you're doing, not an encryption.

 That final line causes a segmentation fault. Here is a backtrace:
 
 -
 #0  0x774d0ba6 in ?? () from /lib/libc.so.6
 #1  0x774d2aa0 in malloc () from /lib/libc.so.6
 #2  0x77abc962 in CRYPTO_malloc (num=-142946720,

Since the fault is in 'malloc', that indicates something is trashing the
heap. Tools like 'valgrind' can find this for you. My guess would be that
the problem lies in 'base64Encode'. It doesn't seem to have any place to put
its output (and if it operates in place, it may overflow the memory
allocated for 'ciphertext'), but for all I know it calls 'realloc'
internally.

I have to also give you a generic warning -- there are some subtle clues in
your code that suggest that you do not know what you're doing. If you, or
anyone else, is going to rely on this code to meet any security
requirements, I *strongly* urge you to have the code evaluated by a security
expert sooner rather than later. It appears that you have designed this code
such that only the public key is needed to perform an operation that you
think of as decryption, and that's usually a sign of a serious design flaw.

Why are you doing things with these low-level functions anyway? OpenSSL
provides high-level functions with well-defined security properties.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: max length to encrypt

2010-06-02 Thread David Schwartz

Chuck Pareto wrote:


 I'm not sure what you mean by shouldn't be using public-key
 encryption, why?

Because you don't understand its properties, so there's no way you can know
whether or not it meets your security requirements.

 It seems like .Net sets up a nice class that is easily
 implemented, all I need is the key and the exponent and I can
 encrypt and decrypt when needed.

Right, except you don't get any security.

 I don't think I really have a choice about what to use, I recently
 started in a group that has a public and private key they are using
 to encrypt and then decrypt strings of data.

Which is fine if, for example, those strings of data are randomly-chosen
keys for a symmetric cipher. It is, however, not fine if those strings are
messages.

 I don't think I can change that. What would be the advantages of doing
 what you suggest and using symmetric encryption to encrypt and PK
 encryption for encrypting the key?

The advantage would be that if you have reasonable security objectives,
there's a good chance the algorithm would meet them. Numerous attacks
against RSA are known -- RSA is just an algorithm, it is not a scheme -- and
you need a well-designed cryptographic scheme to meet actual security
requirements.

http://crypto.stanford.edu/~dabo/abstracts/RSAattack-survey.html


 I don't think we have a symmetric key because we are using RSA with
 a public and private key.

That's a non sequitur. The public and private key could be being used to
encipher and decipher the symmetric key. This is the normal approach.

 If you think your approach is better please let me know and I will
 discuss it with my group and see if we can make a change.

If your group includes a security expert, this kind of stuff would already
be done. If it doesn't, the likelihood of this making things any better
isn't really all that great.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: OpenSSL Error Handling

2010-05-29 Thread David Schwartz

Pankaj Aggarwal wrote:

 I am able to think about the following approaches :
 
 1. Keep a record a threads which are spawned.
 
 2. Expose a function from our library for cleanup when the thread exits 

 Is there any other way to avoid the memory leak caused by error queues ?

There are several:

3. Only call OpenSSL functions from threads whose lifetimes are managed by
your library. Dispatch requests that require calls into the library to your
handler threads. So the functions called from the outside look like this:
Allocate and fill out a request object, put it on a processing queue,
unblock/signal an event to wake a worker thread, wait for the object to
complete, and extract the results.

4. Call ERR_remove_state before any function that puts things on the OpenSSL
error stack is permitted to return.

5. Hook the system's thread shutdown logic (in a platform specific way) so
that you can run ERR_remove_state when a thread terminates. On POSIX
platforms, for example, you can create some thread-specific data whose
destructor calls ERR_remove_state.
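
A minimal sketch of option 5 on POSIX (the init function names are
hypothetical):

#include <pthread.h>
#include <openssl/err.h>

static pthread_key_t err_cleanup_key;

static void err_cleanup(void *unused)
{
    (void)unused;
    ERR_remove_state(0);   /* free the calling thread's error queue */
}

/* call once, at library initialization */
void my_lib_init(void)
{
    pthread_key_create(&err_cleanup_key, err_cleanup);
}

/* call once in each thread that will use OpenSSL; the dummy non-NULL
   value only exists so the destructor runs at thread exit */
void my_lib_thread_init(void)
{
    pthread_setspecific(err_cleanup_key, (void *)1);
}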

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: compilation problem for xscale.

2010-05-26 Thread David Schwartz
Rusty Carruth wrote:

 I would have thought that OPENssl, for which I have the source, would
 have met the requirements to use the _GPL symbols in the kernel.

The requirement is that the module claim that it is available under the GPL
by containing a specific license declaration. You can fix this two ways:

1) Modify the Linux kernel so that this requirement is removed. The GPL
explicitly give you the right to do this if you wish to.

2) Modify your module so that it claims it is available under the GPL even
though it is not. The functional exception to copyright gives you the right
to do this.

Note that making sure you do not violate copyright law is your
responsibility. While you are permitted to bypass technical restrictions
(GPLv2 grants you that right), if you choose do so, it is still entirely
your responsibility to comply with the license. Specifically, GPLv2 does not
permit you to create a derivative work and distribute that entire work under
any terms not compatible with the GPLv2. Since you cannot make OpenSSL
available under compatible terms, you must not distribute a work that is
derivative both of OpenSSL and the Linux kernel. (If you are unclear on what
this means, I'd strongly urge you to consult a lawyer.)

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: max length to encrypt

2010-05-26 Thread David Schwartz

Chuck Pareto wrote:

 if my public key is 256 bytes long, what is the max length
 of the string I can use to encrypt? Is it 256?

If the output is exactly 256 bytes, there are (in theory) 2^(256*8) possible
outputs. That means there can be at most 2^(256*8) possible inputs.  There
are more than 2^(256*8) possible input strings of 256 bytes or less (since
there are that many strings just of exactly 256 bytes). So there's no way it
can possibly take all input strings of 256 bytes or less.

In any event, unless you know exactly what you are doing, you should not be
using PK algorithms directly. There are *way* too many gotchas. Use a complete
cryptographic scheme that incorporates the PK algorithm rather than calling it directly.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: AES-256 CBC encrypt/decrypt usage problem

2010-05-25 Thread David Schwartz

Kunal Sharma wrote:

What I see happening is this:

ENCRYPT - size of /etc/rgconf on disk is 157043 bytes
ENCRYPT - size of /etc/rgconf_encrypted on disk is 157044 bytes.
BROWSER saves the file to disk - size is 136 bytes (How ???)

You called 'strlen' on something that was not a string, so it gave you junk.

 Would be nice if you could provide pointers to this problem
 or if I need to do something extra here.

You supplied the answer yourself:

 If your data can include NULs, you should not use strlen to
 calculate the length of the buffer, you need to provide the
 length in some other way - in your example presumably as an
 additional parameter.

Carter

And there you have it. Either convert your encrypted data to strings or pass
the length along with the data through your code paths.

You can only use 'strlen' on something that you know is in fact a string.
The encrypted data is *not* a string, it's a chunk of arbitrary bytes. The
result of calling 'strlen' on it is effectively random. There is no way to
know how many bytes it holds just from looking at the data -- you need to
store that separately.

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: AES-256 CBC encrypt/decrypt usage problem

2010-05-20 Thread David Schwartz

Kunal Sharma wrote:


void encode2(char *inbuf, char *outbuf)
{
unsigned char key32[] = "As different as chalk and cheese";
unsigned char iv[] = "As dark as pitch";

AES_KEY aeskey;

memset(outbuf, 0, sizeof(outbuf));

AES_set_encrypt_key(key32, 32*8, &aeskey);

AES_cbc_encrypt(inbuf, outbuf, strlen(inbuf), &aeskey, iv,
AES_ENCRYPT);

return;
}

You can't mean 'sizeof(outbuf)' -- 'outbuf' is a *pointer* to the output
buffer. What does the size of that pointer have to do with anything?

void decode2(char *inbuf, char *outbuf, int len)
{
unsigned char key32[] = "As different as chalk and cheese";
unsigned char iv[] = "As dark as pitch";

AES_KEY aeskey;

memset(outbuf, 0, sizeof(outbuf));

AES_set_decrypt_key(key32, 32*8, &aeskey);

AES_cbc_encrypt(inbuf, outbuf, len, &aeskey, iv, AES_DECRYPT);

return;
}

Same use of 'sizeof(outbuf)' where that makes no sense (what does the size
of the pointer to the output buffer have to do with anything?). Also, what
happens if the plaintext is not a precise multiple of the cipher block size?

It seems like you have picked a low-level encryption/decryption function
where you wanted a high-level one.

Also, you have one amusing boner. Your 'decode2' function tries to zero the
output buffer, but actually only zeroes part of it. But you call it with the
output buffer and input buffer the same! So you are actually erasing part of
your input buffer before you use it!

DS

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


RE: openssl enc block size

2010-05-08 Thread David Schwartz
Johannes Baeuer wrote:

 Why would a 16 byte block need to be padded by one byte to 17 bytes?

Is it really not immediately obvious?

No encrypted output for one or more bytes of input can be less than 16
bytes. Thus the smallest possible output sequence is 16-bytes. The number of
possible encrypted outputs of 16-bytes or fewer is therefore 2^(16*8).

The number of possible 15-byte plaintext inputs is 2^(15*8) and the number
of possible 16-byte plaintext inputs is 2^(16*8). Thus the number of
possible plaintext inputs of 16 bytes or fewer is greater than
2^(15*8)+2^(16*8) and thus greater than 2^(16*8).

So the number of plaintext inputs of 16 bytes or fewer is greater than the
number of ciphertext outputs of 16 bytes or fewer. Therefore, some inputs of
16 bytes or fewer must have outputs of more than 16 bytes.
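
Concretely, the padding scheme the enc command uses (PKCS#5/#7 style) always
adds at least one byte, so a plaintext that is already a multiple of the block
size grows by a whole extra block. A sketch of the arithmetic, with the
function name purely illustrative:

#include <stddef.h>

/* Padded length under PKCS#7-style padding: always at least one padding byte. */
size_t padded_len(size_t plaintext_len, size_t block_size)
{
    return (plaintext_len / block_size + 1) * block_size;
    /* e.g. with 16-byte blocks: 15 -> 16, 16 -> 32, 17 -> 32 */
}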

DS



RE: Is it not possible to decrypt partial AES messages?

2010-05-05 Thread David Schwartz

Christina Penn wrote:


 Hello David,
 
 Can you show me exactly how to break up my example code to make my example work?

It's really simple. When you want to decrypt a message, call
EVP_DecryptInit_ex. For each chunk of data you want to decrypt that is part
of the message, call EVP_DecryptUpdate. For the last block (or after it),
call EVP_DecryptFinal_ex.
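
A rough sketch of that call pattern for data arriving in pieces; the two helper
functions are only illustrative, and key/IV setup and error handling are left
out:

#include <openssl/evp.h>

/* ctx was initialized once with EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(),
   NULL, key, iv). Feed ciphertext in whatever chunks arrive. */
int feed_chunk(EVP_CIPHER_CTX *ctx, const unsigned char *in, int inlen,
               unsigned char *out, int *outlen)
{
    /* may produce anywhere from 0 to inlen + block_size - 1 bytes */
    return EVP_DecryptUpdate(ctx, out, outlen, in, inlen);
}

int finish_message(EVP_CIPHER_CTX *ctx, unsigned char *out, int *outlen)
{
    /* only after the last chunk: checks and strips the padding */
    return EVP_DecryptFinal_ex(ctx, out, outlen);
}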

 I tried removing the EVP_DecryptFinal_ex from my DecryptMessage function and
 just seeing if the first part would just decrypt the first 7 bytes, but it got
 thrown into my catch statement. I am really confused.

I'm not sure what you mean. That should have worked. (Note that zero bytes
coming out *is* working. You are not guaranteed that any particular number
of input bytes will produce any particular number of output bytes except
that all of the input will, of course, produce all of the output. If you
want a stream cipher, you know where to find them.)

By the way, I strongly advise you not to use the C++ 'string' class for
arbitrary chunks of bytes. It's really not suitable.

DS



RE: Is it not possible to decrypt partial AES messages?

2010-05-04 Thread David Schwartz

Christina Penn wrote:

 Here is some example code of me trying to decrypt a partial AES message.
 It doesn't work.. is there a way I can do something like this? It only works
 if I call DecryptMessage() with the entire encrypted string. Why?

Your DecryptMessage function is specifically designed to require the entire
encrypted string:

if(!EVP_DecryptFinal_ex(deCTX, plaintext+p_len, &f_len))
cerr << "ERROR in EVP_DecryptFinal_ex" << endl;

See how it calls EVP_DecryptFinal_ex?

As EVP_DecryptInit should only be called at the very start to initialize a
message, so EVP_DecryptFinal_ex should only be called at the very end to
finish a complete message. In the middle, you should only be using
EVP_DecryptUpdate.

DS



RE: Verisign client requirements

2010-04-20 Thread David Schwartz

Piper Guy1 wrote:

  This is precisely what a browser does. Again, using the
  "https://www.amazon.com" example, OpenSSL takes care of getting the
  certificate from the server, making sure the certificate is valid, checking
  that the server owns the certificate, and making sure the certificate's
  trust chain has a root CA that is on your trusted list. However,
  www.badguys.com could also pass all those tests.
 
 ...how does OpenSSL (or can our implementation of OpenSSL):
 - make sure the certs are valid,

It's not a particularly simple process to describe, but it checks the
integrity of all the certificates, checks their signatures, and makes sure
the chain's links are correct.

 - the server owns the certificate

The certificate is public. Nobody really owns it. What the certificate says,
using the www.amazon.com example again, is that the owner of
www.amazon.com is the only person who knows the private key that corresponds
with the public key in the certificate. OpenSSL confirms that the entity
presenting the certificate knows the private key by challenging them to
perform an operation that requires that private key.

 - make sure the certificate trust chain has a root CA.

It follows the chain and makes sure it lands on a root CA that the client
trusts.

However, all of that does nothing if the certificate is the *wrong*
certificate. If I type "https://www.amazon.com" in my browser, and all the
above checks pass, but the certificate is for
www.creditcardstealingscammers.com, I sure as heck don't want to trust the
server!

 
  So that leaves you to check
  the common name on the certificate and make sure it's the *right* name --
  that is, the server you wanted to reach.
 
 I also noticed when i sniffed the wire that the common name field is
 in clear text.
 What's the point of verifying that?

As I said, to make sure that you're connected to the server you wanted to be
connected to and not the server of a malicious third party. The field is
sent in plain text, but so what? Anyone could get it anyway just by
connecting to the server. So it's not a secret.
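
A sketch of that check on the client side after the handshake; X509_check_host()
assumes OpenSSL 1.0.2 or later (older versions have to pull out and compare the
common name by hand):

#include <openssl/ssl.h>
#include <openssl/x509v3.h>

/* Returns 1 if the chain verified and the certificate matches hostname. */
int peer_matches(SSL *ssl, const char *hostname)
{
    X509 *cert = SSL_get_peer_certificate(ssl);
    int ok = 0;

    if (cert == NULL)                                      /* no certificate */
        return 0;
    if (SSL_get_verify_result(ssl) == X509_V_OK &&         /* chain checks   */
        X509_check_host(cert, hostname, 0, 0, NULL) == 1)  /* name check     */
        ok = 1;
    X509_free(cert);
    return ok;
}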

 Based on the 4 points above, would you say that our implementation is not very
 strong? I don't have the security expertise to challenge the server guys so it's
 basically status quo, which my gut tells me is not very strong.

Presuming they check the validity of the common name, it sounds like they're
doing things the same way everyone else is. If they are not checking the
common name, then an active interceptor could hijack the connection if he
had the private key corresponding to any certificate whose chain landed on a
root CA the client trusts.

DS



RE: Multi Threaded questions

2010-04-18 Thread David Schwartz

Sad Clouds wrote:

  1)  According to the FAQ, an SSL connection may not concurrently be
  used by multiple threads. Does this mean that an SSL connection can
  be used by different threads provided access is limited to one at a
  time?


 I assume that having a mutex for each SSL object would prevent it from
 being concurrently used by multiple threads. So this should be OK.

Yes, that works. However, you can't use blocking operations in that case.
Otherwise, a thread trying to write to the connection could be blocked
indefinitely while some other thread, itself blocked trying to read from the
connection, held the connection lock.
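
A bare-bones sketch of the mutex-per-SSL idea with non-blocking I/O; the wrapper
type and function are made up for illustration, not an OpenSSL facility:

#include <pthread.h>
#include <openssl/ssl.h>

typedef struct {
    SSL *ssl;                  /* underlying socket set non-blocking */
    pthread_mutex_t lock;
} locked_ssl;

/* Never blocks while holding the lock: with a non-blocking socket,
   SSL_write() returns at once with SSL_ERROR_WANT_READ/WANT_WRITE if it
   cannot make progress. */
int locked_write(locked_ssl *c, const void *buf, int len)
{
    int n;

    pthread_mutex_lock(&c->lock);
    n = SSL_write(c->ssl, buf, len);
    pthread_mutex_unlock(&c->lock);
    return n;
}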

 However do you really need to use multiple concurrent threads with the
 same SSL object? Think of it as a TCP socket, each thread has a list of
 open sockets, or SSL objects, there is no need to share it with other
 threads.

Actually, it's pretty common to do that with TCP connections. You may have
one thread that's blocked trying to read from the connection all the time
while another thread tries to write to the connection as it discovers data that
needs to be sent. You can't do this with OpenSSL. (At least, not precisely
the same way.)

DS



RE: Verisign client requirements

2010-04-05 Thread David Schwartz

Piper.guy1 wrote:

 Hi,
 
 Please understand I'm a newbie to security if my question sounds
 rather elementary.
 
 The embedded product I'm working on requires a secure connection to
 our server that uses a Verisign certificate to authenticate. I've been
 porting the OpenSSL examples from the O'Reilly publication so far and
 I have been successfully able to make a secure encrypted connection
 without authentication. (example client1.c). Our next step it to
 implement authentication using a Verisign cert.

Usually, when people talk about authentication, they mean the server
authenticating the client. The client always makes sure it has reached the
correct server. I presume, what you are talking about is the same check that
every browser does. When you punch https://www.amazon.com; into your
browser, your browser makes sure the server you reach presents a certificate
for www.amazon.com that is signed by a CA your browser trusts -- and then
it makes the server prove it knows the private key corresponding to the
public key in that certificate.

Is that all you're trying to do?
 
 3rd party CA's are talked about in the book very nicely but the focus
 is on the server, and very little is discussed regarding what the
 client needs to implement, unless I'm not reading in the right place,
 or there's very little else for the client to do.
 
 It would seem that I would have to implement much of example
 client2.c; or essentially call:
 
 1. SSL_CTX_load_verify_locations() with the trusted certificates file

Correct. And the trusted files should include the root certificates your
client trusts. It must include the Verisign root certificate that your
server's trust chain starts with (or ends with, depending on how you look at
it).

 2. SSL_CTX_set_verify() with the SSL_VERIFY_PEER flag set
 
 Do I have to add anything else to the trusted certificates file or
 will OpenSSL magically know to authenticate with Verisign?
 
 Is this all I need to do?

The problem with this is it will wind up accepting any certificate whose
trust chain's root is one of your trusted certificates, yours or not. The
best solution to this problem is to confirm that the certificate presented
has as its common name the name your client is trying to reach.

This is precisely what a browser does. Again, using the
"https://www.amazon.com" example, OpenSSL takes care of getting the
certificate from the server, making sure the certificate is valid, checking
that the server owns the certificate, and making sure the certificate's
trust chain has a root CA that is on your trusted list. However,
www.badguys.com could also pass all those tests. So that leaves you to check
the common name on the certificate and make sure it's the *right* name --
that is, the server you wanted to reach.
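
Putting the two setup steps together, a minimal client-context sketch; the file
name is a placeholder, error checks are omitted, and the common-name check
above still has to be done separately once the handshake completes:

#include <openssl/ssl.h>

SSL_CTX *make_client_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());

    /* 1. trusted roots, including the Verisign root the server chains to */
    SSL_CTX_load_verify_locations(ctx, "trusted-roots.pem", NULL);

    /* 2. make the handshake fail if the server's chain does not verify */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    return ctx;
}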

DS



RE: Pre Master Secret Regarding

2010-04-03 Thread David Schwartz

Aravinda babu wrote:

 During SSL/TLS handshake, a pre master secret is sent from client to the
 server by encrypting pre master secret with server's public key.
 From that both client and server derive master secret and finally one
 symmetric key. My doubt is, why both cannot use pre master secret itself
 as a symmetric key ?

The minor reasons:

1) The scheme used to identify the server may not support encrypting data
large enough to be used as the symmetric key.

2) The client's random number generation may not be sufficiently secure, so
having the server participate in generating the symmetric key provides
greater protection from passive attacks.

3) Using this approach, you would need a phase where the server proves it
can decrypt the symmetric key anyway.

The major reason:

If you did that, you would have no protection against replay attacks.
Nothing would stop an attacker from intercepting the SSL session and playing
it back to the server. Consider a secure web application that receives
commands from a command center to disarm the safe alarm every business
morning and then one to arm it every day at close of business. If an
attacker intercepts the "disarm the safe" session, he could play it back any
time he wanted and disarm the safe alarm at 2AM on a Sunday morning.
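
For reference on the second minor reason above, this is roughly how TLS
(RFC 5246) folds both sides' random values into the key material, so neither
party's randomness alone determines the keys:

master_secret = PRF(pre_master_secret, "master secret",
                    ClientHello.random + ServerHello.random)[0..47];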

DS



RE: Random Numbers

2010-03-31 Thread David Schwartz

P Kamath wrote:

 I said it is an RNG, not cryptographic RNG.  By adding current time source,
 however crude, and doing a sha1/md5, why should it not be cryptoPRNG?  What
 properties should I look for?

You should look for a cryptographically-secure random number generator.
Seriously, you shouldn't be hacking random bits of junk together and then
relying on it to be secure.
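
OpenSSL already ships one; a minimal sketch, with error handling kept to the
bare minimum:

#include <openssl/rand.h>

/* Fills key with 32 cryptographically strong random bytes.
   RAND_bytes() returns 1 on success; anything else means the generator
   could not produce strong output and the key must not be used. */
int make_key(unsigned char key[32])
{
    return RAND_bytes(key, 32) == 1;
}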

DS



RE: Shorten the timeout for openssl s_client?

2010-03-12 Thread David Schwartz
Todd Thatcher wrote:

 Using the command "openssl s_client -connect gmail.google.com:443",
 openssl gets the certificate information and stays connected until I enter QUIT,
 or the timeout is hit -- about 2 minutes later.  I want to script certificate
 expiration date checks for our servers. Is there a command-line switch or some
 other advice that I can use to change this behavior?

Two ideas:

1) echo QUIT | openssl s_client -connect gmail.google.com:443

2) openssl s_client -connect gmail.google.com:443 < /dev/null
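
If the end goal is scripting expiration checks, a common follow-on is to pipe
the certificate that s_client prints straight into the x509 command, e.g.:

echo QUIT | openssl s_client -connect gmail.google.com:443 2>/dev/null |
openssl x509 -noout -enddate

which prints the server certificate's notAfter date.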

DS





RE: Sign an SSL certificate with mutile trusted roots?

2010-02-25 Thread David Schwartz

Rene Hollan wrote:

 I guess I'm just dense and stupid. Won't that fail since the CA
 IC cert won't be signed by the CA cert identified as its issuer?

Yeah, I think you're right. I made the same mistake I was trying to convince
the OP not to make -- thinking that CAs sign certificates. The public IC
will be signed by the wrong key and there's no way to produce a signature
signed by the right key.

DS





RE: Sign an SSL certificate with mutile trusted roots?

2010-02-24 Thread David Schwartz
Shaun Crampton wrote:

 Is there any way to accomplish this while using only one domain?

Can you be very precise about what you mean by "only one domain"? For
example, you can do it by pointing www.example.com and www-x.example.com at
the same IP and having the server issue a different certificate based on
which name the client connected to. This assumes the embedded device supports
the TLS server name extension. You can also use only one domain by using two
different ports, say 80 for normal clients and 81 for embedded device
clients.
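
A rough sketch of the SNI side of that on the server; the second SSL_CTX and
its setup are placeholders, and this assumes an OpenSSL built with TLS
extensions enabled:

#include <string.h>
#include <openssl/ssl.h>

static SSL_CTX *embedded_ctx;   /* chain ending at the manufacturer's root */

/* Runs during the handshake: switch contexts when an embedded device
   connects to www-x.example.com instead of www.example.com. */
static int sni_cb(SSL *ssl, int *al, void *arg)
{
    const char *name = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);

    (void)al; (void)arg;  /* unused */
    if (name != NULL && strcmp(name, "www-x.example.com") == 0)
        SSL_set_SSL_CTX(ssl, embedded_ctx);
    return SSL_TLSEXT_ERR_OK;
}

/* registered with: SSL_CTX_set_tlsext_servername_callback(default_ctx, sni_cb); */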

 E.g. is it possible for me to send a CSR to Thawte, get back the
 certificate and then send it on to the embedded device manufacturer
 for an additional signature?  Will browsers support it?

You can't add a second signature to a certificate because CAs don't just
sign certificates, they issue them. Just one of the many reasons this won't
work is this -- suppose the certificate has serial number 7, but the other
CA has already issued a certificate with that serial number.

When a CA issues a certificate, it binds a private key to an identity the CA
vouches for. You can have two certificates that bind the same key (just use
the same key to sign both CSRs) but that won't help you very much. You might
as well use different keys for extra security. You still have to serve the
right certificate to the right client.

There are three things that could have been built in that would have made
life simpler for you, but alas, none of them exists. X.509 could have supported
'multiple identity, serial number, authority key, signature' sets to go in a
certificate. Or TLS could have allowed more than one certificate to be sent.
Or TLS could have allowed the client to upload its list of trusted roots.
Sadly, none of these things exist.

 Sorry, the client will only trust a server cert that is signed by the
 manufacturers root cert.  The server's cert must be issued by the
 manufacturer's CA.

If you mean this literally, then you have to have two different certs. If
you don't mean this literally (and a trust chain that ends in the
manufacturer's root cert is sufficient), you can try having the
manufacturer's CA sign the public key in the public CA's intermediate
certificate with its own intermediate certificate with the same AKID (even
though it won't be the hash of the key, it doesn't need to be). This is some
very ugly hackery and probably won't work anyway. The idea is to build a
cert chain that can end on either the public CA's root cert or go past it
to the manufacturer's CA, like this:

Server Cert -> Public IC -> Manufacturer's cert of public IC ->
Manufacturer's root

So here's what I think will happen:

When normal Internet clients connect and get this cert chain, they will stop
at the public IC. They already know the public CA's root key and they have a
complete chain at that point.

When embedded clients connect and get this cert chain, they will stop at the
manufacturer's cert signing the public IC key. (That cert will have to be
specially constructed with no AKID.) At that point, they will have a
complete chain.

It would take more time than I have to try to work out the details and see
if it would work. Also, this has some security considerations -- for
example, anyone else with a key signed by that IC now also has a key signed
by the manufacturer's CA. I cannot assure you that it will work or that it
won't compromise security. Just that it may be worth investigating if the
more rational solutions aren't possible.

DS





  1   2   3   4   5   6   7   8   9   10   >