Re: Accessing Manual Pages in openssl
Marek Marcola wrote: Hello. And for the great unwashed using Windows, Marek :-) Is it just the online versions? On Unix the pod2man.pl script is used; I think that the pod2chm Perl module from CPAN may help :-) Best regards. Another option may be using http://www.openssl.org/docs/crypto/, http://www.openssl.org/docs/ssl and http://www.openssl.org/docs/apps Hope it helps, Ted ;)
RE: SSL_peek() ?
> Nothing, but how do I know when I can start my SSL_write() again, > because the WANT_READ condition that is stopping SSL_write() from taking > any more data has cleared ? I recommend a very conservative approach. Any time you make any forward progress or might have made any forward progress, you try to do anything that that forward progress might have made possible. If you make further forward progress in that process, you again retry. You only stop when everything you might be able to do is determined to be impossible. > After every SSL_peek()/SSL_read() do I try to issue a new SSL_write() > and if there is movement I consider it cleared ? I would try afterwards. Alternatively, there may be a status function that will tell you definitively. This isn't the I/O model I'm most familiar with. > The problem here is how to detect when I can drive my SSL_write() by the > writable event select() indicator and when I have to disregard it (even > though my application has Mb's of data still to send) until I know the > WANT_READ for SSL_write() condition has cleared itself. Then I can go > back to select() driven SSL_write()'s. The start disregard part I > understand. The switch back part I do not. Again, the conservative rule is that any time you make forward progress, you try everything that that forward progress might have made possible. This is totally safe and even if there are some bugs, it should give you maximal resistance to them. It may be sub-optimal, but I would save optimization for proven need. (And it will then be easy to test.) > Ideally I don't want to use SSL_read() to take application data from > OpenSSL at that moment (because the application doesn't want it at that > time), I ideally want to be using SSL_peek() to give OpenSSL program > control to service read() and clear condition. You're going to start me on another religious point. TCP does not guarantee that you can write if you don't read. 
And clearly, if you insist that the other side do something first before you will do something that you should always be doing, and the other side makes the reverse assumption, you can deadlock. You may not say, "I will not read until you let me write" because if the other side says "I will not read until *you* let me *write* (by reading)", you (can, in theory) deadlock. This is another "I can't imagine how it could ever happen" problem that has happened and broken programs. > But having thought about that situation some more, what if I have too > much data in my inbound buffer waiting for the application? Then OpenSSL > may not call read() because its buffers are full enough already to > service my SSL_peek(). But I don't know exactly how its internal > buffering works to be sure; maybe it will just eat more memory ? You have to drain the buffer, that's your job. You may not refuse to drain the buffer until someone else drains some other buffer, because if you could make that assumption, so could they. Obviously, you both can't. > So I guess I must service it with SSL_read() enough to allow the other > end and the respective flow controls to pull enough data through to get > to the packet that's currently holding up my SSL_write() ? Correct. In theory, you must read every available byte. :( > What if I am not reading any data at all ? i.e. I am not even looking > at the readable select() event and never calling SSL_read() during the > time I am using SSL_write() to send a lot of data. Would it be fair to > say that I will never get a WANT_READ back from SSL_write()? Because > unless my application calls SSL_renegotiate() itself or receives an > inbound alert/re-negotiate request, could I say there will be no > situation that can stall my SSL_write() with WANT_READ ? I believe the other side could trigger a renegotiation. I'm not really sure. Perhaps someone else can give you a better answer on this. > Is it possible to stop this situation from occurring altogether.
If > RENEGOTIATION is the only situation it can occur in. I just read > somewhere that gave me the impression there is a TLS protocol packet to > indicate one party is not capable of/willing to RENEGOTIATE. Can this be > done at session setup, so the other end knows not to even ask? I can't > see anything from a grep of OpenSSL source to confirm that is true. That would be another "it has always worked for me" kind of solution. Your code would work until and unless some new feature causes this to happen. I think it would be better to solve it with proper code, even if that code is ugly and inefficient. Who cares how efficient code is that handles an "I can't imagine this will ever happen" condition? That said, if your security model allows it, disabling renegotiations closes a DoS hole. > I'm just trying to work through all the eventualities with my > application and OpenSSL and get a better understanding of what's going on > under the
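The conservative retry rule advocated earlier in this message ("any time you make forward progress, try everything that progress might have made possible") can be sketched as a service loop. This is a hypothetical skeleton, not OpenSSL API: `step_fn` stands in for whatever handlers (read, write, flush) a connection has, each returning true when it moved at least one byte.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-connection handler: returns true if it made any
 * forward progress (moved at least one byte somewhere). */
typedef bool (*step_fn)(void *conn);

/* Retry every step until a complete pass makes no progress at all.
 * Any progress may have unblocked any other step, so try them all
 * again; only when nothing moves is it safe to go back to select(). */
static void service_connection(void *conn, step_fn *steps, size_t nsteps)
{
    bool progressed;
    do {
        progressed = false;
        for (size_t i = 0; i < nsteps; i++)
            if (steps[i](conn))
                progressed = true;
    } while (progressed);
}

/* Demo: a mock step that makes progress three times, then stops. */
static int demo_budget = 3;
static bool demo_step(void *conn)
{
    (void)conn;
    if (demo_budget > 0) { demo_budget--; return true; }
    return false;
}
```

This is sub-optimal (a step may be retried needlessly), but it cannot deadlock on a missed wakeup, which matches the advice above to save optimization for proven need.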
Re: SSL_peek() ?
Mikhail Kruk wrote: After every SSL_peek()/SSL_read() do I try to issue a new SSL_write() and if there is movement I consider it cleared ? You don't need SSL_peek/SSL_read. SSL_write will do the reading that it wants to do. Hmm.. I believe that any of SSL_peek()/SSL_read() or SSL_write() are capable of clearing the SSL_write() WANT_READ condition, as they will cause an underlying read() to be performed. I am completely happy with stopping waiting for the writable event indication with select(). That side I was understanding. The exact problem I have in mind is to do with that inbound data backlog, which may be so big that the sender won't even send the packet to me, as the TCP window is 0: the inbound OpenSSL buffers are full, the kernel-to-OpenSSL buffers are full and the kernel inbound TCP buffers are full. I'm not sure if exactly that will happen, but it's a possibility I am running through my mind, to understand how I get out of that situation. At the time I am bulk writing masses of data, I am currently not even looking at the readability situation of the file descriptor, because the application doesn't want or need that data right now. Think of it like a PIPELINED request/response application. The application can't do anything with all the requests in the queue right now (they may be backlogged through all the available inbound buffering), since it's working on the current one and the response data is, say, 4Gb. Do I have a problem using SSL ? I'm not 100% sure that there is a way to handle the situation where you keep ignoring the incoming data and leave it in OpenSSL's buffer. You might end up with growing memory utilization or in fact get stuck because SSL_write can't read the data it needs because the buffer is full. It contradicts the documentation though: You are understanding the scenario; it's the "get stuck" I'm trying to avoid.
Thanks for your response, Darryl __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
Re: SSL_peek() ?
> > I'm probably missing something, but what's wrong with select()'ing for > > read when your SSL_write returns WANT_READ? > > See relatively elegant read_write() implementation from > > http://www.rtfm.com/openssl-examples/ > > Nothing, but how do I know when I can start my SSL_write() again, > because the WANT_READ condition that is stopping SSL_write() from taking > any more data has cleared ? Did you look at the code in openssl-examples? You call SSL_write. It returns WANT_READ. You stop select'ing for write on that descriptor and only select for read. Every time select tells you that the socket is readable you try calling SSL_write again. It will either succeed or it will return WANT_READ again, in which case you keep waiting for more data... and then call SSL_write again. > After every SSL_peek()/SSL_read() do I try to issue a new SSL_write() > and if there is movement I consider it cleared ? You don't need SSL_peek/SSL_read. SSL_write will do the reading that it wants to do. > The problem here is how to detect when I can drive my SSL_write() by the > writable event select() indicator and when I have to disregard it (even > though my application has MBs of data still to send) until I know the > WANT_READ for SSL_write() condition has cleared itself. The way to clear WANT_READ for SSL_write() is to call SSL_write again, not SSL_read. Don't call SSL_read() if you want to write. > Then I can go > back to select() driven SSL_write()'s. The start disregard part I > understand. The switch back part I do not. It might take a couple of attempts I guess if not all data arrives at once, but it's not a problem. You'll be blocked in select and servicing other fds in the meantime (if there are other fds of course).
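The switching logic described here (select for read while SSL_write is stalled on WANT_READ, for write while it is stalled on WANT_WRITE) condenses into one decision function. A standalone sketch: the enum values mirror `SSL_ERROR_WANT_READ`/`SSL_ERROR_WANT_WRITE` from `<openssl/ssl.h>`, but are redefined here so the fragment compiles without OpenSSL.

```c
/* Which select() events to wait for before retrying a stalled SSL_write().
 * Constants mirror <openssl/ssl.h>; redefined to keep the sketch standalone. */
enum ssl_error { SSLERR_NONE = 0, SSLERR_WANT_READ = 2, SSLERR_WANT_WRITE = 3 };

#define WAIT_READABLE 0x1
#define WAIT_WRITABLE 0x2

static int events_for_write_retry(enum ssl_error last)
{
    switch (last) {
    case SSLERR_WANT_READ:
        /* e.g. a renegotiation: selecting for write would spin; wait for
         * inbound protocol data, then call SSL_write() again (not SSL_read). */
        return WAIT_READABLE;
    case SSLERR_WANT_WRITE:
        /* kernel send buffer was full; wait until it drains */
        return WAIT_WRITABLE;
    default:
        /* no stall recorded: drive writes off writability as usual */
        return WAIT_WRITABLE;
    }
}
```

The key point from the reply above is encoded in the first case: the retry after the wakeup is always SSL_write(), whichever event you waited for.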
> Ideally I don't want to use SSL_read() to take application data from > OpenSSL at that moment (because the application doesn't want it at that > time), I ideally want to be using SSL_peek() to give OpenSSL program > control to service read() and clear condition. You don't need either. All you need is calls to SSL_write. > But having thought about that situation some more what if I have too > much data in my inbound buffer waiting for the application then OpenSSL > may not call read() because its buffers are full enough already to > service my SSL_peek(). But I don't know exactly how its internal > buffering works to be sure, maybe it will just eat more memory ? > > So I guess I must service it with SSL_read() enough to allow the other > end and the respective flow controls to pull enough data through to get > to the packet that's currently holding up my SSL_write() ? I guess it will keep growing the buffer until it gets the stuff that it wants... Are you saying that you are receiving a lot of data and not looking at it? That can in fact cause problems, I guess. But why are you not reading it? I'm not 100% sure that there is a way to handle the situation where you keep ignoring the incoming data and leave it in OpenSSL's buffer. You might end up with growing memory utilization or in fact get stuck because SSL_write can't read the data it needs because the buffer is full. It contradicts the documentation though: * As at any time a re-negotiation is possible, a call to SSL_write() can also cause read operations! The calling process then must repeat the call after taking appropriate action to satisfy the needs of SSL_write(). The action depends on the underlying BIO. When using a non-blocking socket, nothing is to be done, but select() can be used to check for the required condition. * I do believe that the only time SSL_write will return WANT_READ is if a renegotiation is happening.
Re: SSL_peek() ?
David Schwartz wrote: Having thought about the issue some more, what scenarios can cause a WANT_READ for SSL_write() call ? If the SSL protocol requires the server to get some information before it can send any information. For example, if it's still doing the original negotiation, it can't send any application data because it doesn't know the key yet! My initial negotiation has taken place completely, I am using SSL_accept() as many times as it needs to complete that phase before moving on to do any SSL_read() or SSL_write(). Sorry, I should have been clearer: I was asking which specific scenarios can cause it. Maybe there is only the "re-negotiate" one ? If my application is not reading from the socket (even when data may be there) because it's busy doing SSL_write(), then is it possible for a WANT_READ to occur ? Does the other end have to send something (renegotiation request) in order to set up that condition? Or do I have to instigate something to trigger the possibility of a WANT_READ for SSL_write() call ? I'm not sure whether you need to call SSL_read before you select, but it couldn't hurt and could help, though it may be unnecessary cost. I think the right thing to do is to stop selecting for writability and select for readability. When you make some forward progress on the reading front, you may try the writing again. But I think you miss the point I am making in this paragraph. I am asking: if I never call SSL_read(), is it possible for my SSL_write() to return WANT_READ ? i.e. if I am no longer taking new SSL data from my peer, I can no longer be receiving requests to re-negotiate. And if re-negotiation is the only cause of WANT_READ from SSL_write(), then I think my problem scenario is pretty much solved. What concerned me was that I don't actually want to read any data while I am bulk writing data, hence SSL_peek(). But if I'm not reading data at all, I don't have a problem.
And if I am reading data, that's okay, because my application can deal with new data (or re-negotiation) requests at that time, because I read new data in between the bulk writes. I don't like the idea of a BIO pair, as the data from some places is already triple buffered in my app. I'm sure SSL adds double buffering too (to transform the encryption). Actually, it has a very cool way to minimize the copies, though I'm not sure if it works out to be cheaper or more expensive when everything is considered. When you do, say, a read, rather than giving a buffer for OpenSSL to copy into, you give it a pointer and it puts a pointer to where the data already was in there. Look at BIO_nread. I don't mind that so much. That is, I wouldn't have a big problem setting up the input buffer for SSL_write() to be static and a unique allocation per (SSL *), since I am already copying the data onto the stack before I issue an SSL_write(). So using a BIO for input is fine. When you talk of a BIO *pair*, is the pair for read/write, or the pair for input (plain text) and output (cipher)? From what you've said I think I shall investigate that some more, after I am happy with the basic design. If I can do away with copying data to the stack when I use SSL_read()/SSL_write() and get SSL to put it direct, that will save 2 copies per round trip. Thanks for your response, Darryl
Re: SSL_peek() ?
Mikhail Kruk wrote: I'm probably missing something, but what's wrong with select()'ing for read when your SSL_write returns WANT_READ? See relatively elegant read_write() implementation from http://www.rtfm.com/openssl-examples/ Nothing, but how do I know when I can start my SSL_write() again, because the WANT_READ condition that is stopping SSL_write() from taking any more data has cleared ? After every SSL_peek()/SSL_read() do I try to issue a new SSL_write() and if there is movement I consider it cleared ? The problem here is how to detect when I can drive my SSL_write() by the writable event select() indicator and when I have to disregard it (even though my application has Mb's of data still to send) until I know the WANT_READ for SSL_write() condition has cleared itself. Then I can go back to select() driven SSL_write()'s. The start disregard part I understand. The switch back part I do not. Ideally I don't want to use SSL_read() to take application data from OpenSSL at that moment (because the application doesn't want it at that time), I ideally want to be using SSL_peek() to give OpenSSL program control to service read() and clear condition. But having thought about that situation some more what if I have too much data in my inbound buffer waiting for the application then OpenSSL may not call read() because its buffers are full enough already to service my SSL_peek(). But I dont know exactly how its internal buffering works to be sure, maybe it will just eat more memory ? So I guess I must service it with SSL_read() enough to allow the other end and the respective flow controls to pull enough data through to get to the packet thats currently holding up my SSL_write() ? What if I am not reading any data at all ? i.e. I am not even looking at the readable select() event and never calling SSL_read() during the time I am using SSL_write() to send a lot of data. 
Would it be fair to say that I will never get a WANT_READ back from SSL_write()? Because unless my application calls SSL_renegotiate() itself or receives an inbound alert/re-negotiate request, could I say there will be no situation that can stall my SSL_write() with WANT_READ ? Is it possible to stop this situation from occurring altogether, if RENEGOTIATION is the only situation it can occur in? I just read somewhere that gave me the impression there is a TLS protocol packet to indicate one party is not capable of/willing to RENEGOTIATE. Can this be done at session setup, so the other end knows not to even ask? I can't see anything from a grep of OpenSSL source to confirm that is true. I'm just trying to work through all the eventualities with my application and OpenSSL and get a better understanding of what's going on under the bonnet. Darryl
Re: SSL_peek() ?
On 6/22/06, David Schwartz <[EMAIL PROTECTED]> wrote: > I would like OpenSSL to be an efficient approach and elegant solution > but the more I look the less happy I'm becoming. I find bio pairs elegant, but your point is well taken. DS BIO pairs are very nice if you use them exclusively -- as they can handle memory-to-memory copies, unencrypted point-to-point read/write, and encrypted read/write. (The latter can also be done as a write to a shared memory segment, for another app to read from.) However, if you're used to handles and now need to learn about BIO pointers (which are handled approximately the same as FILE pointers but with many extensions), you're screwed. Especially if you have an app that uses handles that you're trying to backport OpenSSL into. -Kyle H
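The memory-to-memory behaviour of a BIO pair can be illustrated with a toy version: bytes pushed into one end come out of the other, no socket involved. This is a simplification for illustration only; the real API is `BIO_new_bio_pair()`, `BIO_read()` and `BIO_write()` from `<openssl/bio.h>`, which also handle wrap-around and back-pressure properly.

```c
#include <string.h>

/* Toy one-direction "pair" buffer: one side writes, the other reads. */
struct toy_bio {
    unsigned char buf[4096];
    size_t head, tail;   /* read position / write position */
};

static size_t toy_write(struct toy_bio *b, const void *data, size_t len)
{
    size_t room = sizeof(b->buf) - b->tail;
    if (len > room) len = room;          /* short write, like a real BIO */
    memcpy(b->buf + b->tail, data, len);
    b->tail += len;
    return len;
}

static size_t toy_read(struct toy_bio *b, void *data, size_t len)
{
    size_t avail = b->tail - b->head;
    if (len > avail) len = avail;
    memcpy(data, b->buf + b->head, len);
    b->head += len;
    return len;
}
```

A real pair is two such channels, one per direction: SSL reads and writes ciphertext at one end, while the application shuttles those bytes between the other end and the socket, so select() only ever deals with plain file descriptors.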
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER is likely for those OSes (*glares at Windows in particular*) which require locking allocated memory in place, performing an operation on it, and then unlocking it (to allow the OS to manage the placement of the memory block in physical memory). It's not the default because on other systems that don't require it, it could be quite disastrous. A 'nonblocking write' will not halt the execution of the program in any circumstance (anything that would normally block, due to kernel buffer issues or what have you, would set errno=EWOULDBLOCK and do nothing else); however, a nonblocking SSL_write doesn't necessarily mean that nothing has been sent to the TLS layer for encryption. This is my understanding of the situation. If I'm incorrect, would someone please correct me? -Kyle H On 6/22/06, Darryl Miles <[EMAIL PROTECTED]> wrote: SSL_CTX_set_mode(3) SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER Make it possible to retry SSL_write() with changed buffer location (the buffer contents must stay the same). This is not the default to avoid the misconception that non-blocking SSL_write() behaves like non-blocking write(). What is that all about ? My application makes no guarantee what the exact address given to SSL_write() is, it only guarantees the first so many bytes are my valid data. Why do I need to give it such guarantees ? What if I am also using SSL_MODE_ENABLE_PARTIAL_WRITE, but not using SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER? Do I have to make sure that the address I pass to SSL_write() for the 2nd call has exactly the same memory address offset as the original call ? i.e. if I wrote 32 bytes out of 64 from my static buffer, would I need to call "SSL_write(ssl, &static_buffer[32], 32);" ? What are the implementation reasons for these unusual requirements ?
Where can I find the full information about those unusual requirements and the "This is not the default to avoid the misconception that non-blocking SSL_write() behaves like non-blocking write()." part ? i.e. What else is going to bite me ? Maybe I can wrap OpenSSL with my own library that brings back a more friendly interface. But not today. My application (appears to) work with OpenSSL; it's only because I am now auditing all the possible what-if scenarios that I am turning up things that might bite me. Help, I'm sinking :) Darryl
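One way to sidestep the buffer-address question entirely (a hypothetical wrapper, not part of OpenSSL) is to park pending data in a per-connection staging buffer whose address never changes, and always retry SSL_write() from there with the same pointer and length:

```c
#include <string.h>

/* Per-connection staging area: once a write stalls, the pending bytes
 * live here at a fixed address, so every retry of SSL_write() can pass
 * the identical pointer/length pair the first attempt used. */
struct pending_write {
    unsigned char buf[16384];
    size_t len;              /* 0 means nothing is pending */
};

/* Queue data for (re)transmission; returns 0 if the staging buffer is
 * still occupied by an earlier stalled write, or the data is too big. */
static int stage_write(struct pending_write *p, const void *data, size_t len)
{
    if (p->len != 0 || len > sizeof(p->buf))
        return 0;
    memcpy(p->buf, data, len);
    p->len = len;
    return 1;
}
```

Usage: after SSL_write(ssl, p->buf, p->len) succeeds, set p->len = 0; on WANT_READ/WANT_WRITE leave it untouched and retry later with exactly p->buf and p->len, which satisfies the non-moving-buffer rule without SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER.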
RE: SSL_peek() ?
> Yes, what I meant by "My application's main task at the moment is > sending bulk data, a lot of data (enough to cause flow control > bottleneck)." meant exactly what you are asking. A previous SSL_write() > returns WANT_WRITE, which I presume is because the OpenSSL BIO_s_socket > did a write() which returned EAGAIN, so I put the fd into a select() > fd_set for write, waiting for the O/S to have room for more data. That sounds completely right so far. > My application is then woken because the O/S says I can now write. So I > call SSL_write() again, but this time I get back WANT_READ. Okay, so now OpenSSL has to read some data before it can write. > So now I can't drive my next SSL_write() from the O/S writable > indication, because I'd end up in a tight loop, until the read data came > in and I did a SSL_peek() to clear the WANT_READ; the next iteration of > the tight loop would clear it. Correct. So you need to 'select' just for readability. You are now blocked until some protocol data is received. This is a funny version of the same mistake as before. You want to send application data, so you are selecting for writability. But the SSL protocol requires you to receive protocol data before any application data can be sent. So again, you get incorrect results from 'select' that don't tell you what you really wanted to know. > Having thought about the issue some more, what scenarios can cause a > WANT_READ for SSL_write() call ? If the SSL protocol requires the server to get some information before it can send any information. For example, if it's still doing the original negotiation, it can't send any application data because it doesn't know the key yet! > If my application is not reading from > the socket (even when data may be there) because it's busy doing > SSL_write(), then is it possible for a WANT_READ to occur ? Does the > other end have to send something (renegotiation request) in order to > set up that condition.
Or do I have to instigate something to trigger > the possibility of a WANT_READ for SSL_write() call ? I'm not sure whether you need to call SSL_read before you select, but it couldn't hurt and could help, though it may be unnecessary cost. I think the right thing to do is to stop selecting for writability and select for readability. When you make some forward progress on the reading front, you may try the writing again. > I don't like the idea of BIO pair, as the data from some places is > already triple buffered in my app. I'm sure SSL adds double buffering > too (to transform the encryption). Actually, it has a very cool way to minimize the copies, though I'm not sure if it works out to be cheaper or more expensive when everything is considered. When you do, say, a read, rather than giving a buffer for OpenSSL to copy into, you give it a pointer and it puts a pointer to where the data already was in there. Look at BIO_nread. > I would like OpenSSL to be an efficient approach and elegant solution > but the more I look the less happy I'm becoming. I find bio pairs elegant, but your point is well taken. DS
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Darryl Miles wrote: SSL_CTX_set_mode(3) SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER Make it possible to retry SSL_write() with changed buffer location (the buffer contents must stay the same). This is not the default to avoid the misconception that non-blocking SSL_write() behaves like non-blocking write().

$ grep SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER *
s2_pkt.c:		!(s->mode & SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER)))
s3_pkt.c:		!(s->mode & SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER))
ssl.h:#define SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER 0x0002L

--- s3_pkt.c ---
/* if s->s3->wbuf.left != 0, we need to call this */
int ssl3_write_pending(SSL *s, int type, const unsigned char *buf,
	unsigned int len)
	{
	int i;

	if ((s->s3->wpend_tot > (int)len) ||
		((s->s3->wpend_buf != buf) &&
		 !(s->mode & SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER)) ||
		(s->s3->wpend_type != type))
		{
		SSLerr(SSL_F_SSL3_WRITE_PENDING,SSL_R_BAD_WRITE_RETRY);
		return(-1);
		}
---

This seems to be a sanity check for its own sake. i.e. if I use: SSL_set_mode(ssl, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER); this sanity check can never return an error. There appears to be no other impact on the code other than disabling this check. Duh ? There must be some history behind this feature. Can I safely ignore the notion of worrying about the exact buffer location I reuse for the next SSL_write() ? When looking at the impact of SSL_MODE_ENABLE_PARTIAL_WRITE (another confusing option) I see a vague security/efficiency warning in the comment:

--- s3_pkt.c ---
	if ((i == (int)n) ||
		(type == SSL3_RT_APPLICATION_DATA &&
		 (s->mode & SSL_MODE_ENABLE_PARTIAL_WRITE)))
		{
		/* next chunk of data should get another prepended empty fragment
		 * in ciphersuites with known-IV weakness: */
		s->s3->empty_fragment_done = 0;
		return tot+i;
		}
---

When might someone want to use SSL_MODE_ENABLE_PARTIAL_WRITE ? What are its implications? Does it cause an extra few bytes to be emitted in the TLS protocol for the "prepended empty fragment" ? So if I want efficiency am I better off not using it ?
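The effect of the pointer check quoted from ssl3_write_pending() condenses into one predicate (the mode value is copied from the quoted ssl.h; the function itself is a mock for illustration, not OpenSSL code):

```c
/* Mirrors ssl.h's SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER bit. */
#define MODE_ACCEPT_MOVING_WRITE_BUFFER 0x0002L

/* Condensed form of the pointer test in ssl3_write_pending(): a retry
 * with a different buffer address is rejected (SSL_R_BAD_WRITE_RETRY)
 * unless the moving-write-buffer mode bit is set. */
static int write_retry_allowed(const void *pending_buf,
                               const void *retry_buf, long mode)
{
    return pending_buf == retry_buf
        || (mode & MODE_ACCEPT_MOVING_WRITE_BUFFER) != 0;
}
```

So the answer to "can I safely ignore the buffer location?" is: only if the mode bit is set; otherwise a retry from a different address fails outright rather than corrupting anything.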
My application is 100% happy to re-present the data to SSL_write(). If SSL_write() returns -1 WANT_WRITE, it will re-present all the data again next time; if SSL_write() returns 42 (out of the 4096 I presented) my next call will have cut off the first 42 bytes and present the remaining 4054 bytes, since I copy to a stack buffer to give to SSL_write(). Now that copy to a stack buffer might change the memory offset I am using for the next write (getting back to understanding the SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER option, as byte 42 is now at &buffer[0], not &buffer[42]). Would general advice be not to use this feature ? I presume the TLS protocol packetizes the data; how does the SSL_write() length affect that packetization, and what sort of lengths would be optimal ? Is it due to that packetization that SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER was thought about at some time in the past ? How does this affect my future writes after SSL_write() returns -1 WANT_WRITE ? Would it be true that the receiving end of the SSL stream would be able to pull off chunks of data which have boundaries set by the sending end's SSL_write() lengths ? This would be to ensure correct flushing of data occurred at the receive side ? What sized writes result in the least amount of block padding ? Sorry, so many questions. Trying to lift the mist. Darryl
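The bookkeeping described here (cut off what was accepted, re-present the rest) looks roughly like this. `write_fn` is a stand-in so the sketch runs without OpenSSL, but the arithmetic is the same for SSL_write() with SSL_MODE_ENABLE_PARTIAL_WRITE set:

```c
/* Stand-in for SSL_write(): returns bytes accepted, or <= 0 on a stall. */
typedef int (*write_fn)(void *ctx, const unsigned char *buf, int len);

/* Present the unsent tail of buf until everything is accepted or the
 * writer stalls; returns how many bytes were accepted in total. On a
 * stall the caller select()s and calls again with buf + returned. */
static int send_all(void *ctx, write_fn w, const unsigned char *buf, int len)
{
    int off = 0;
    while (off < len) {
        int n = w(ctx, buf + off, len - off);
        if (n <= 0)
            break;
        off += n;
    }
    return off;
}

/* Demo writer: accepts at most 42 bytes per call, never stalls. */
static int demo_writer(void *ctx, const unsigned char *buf, int len)
{
    (void)ctx; (void)buf;
    return len > 42 ? 42 : len;
}
```

Note that retrying from `buf + off` is exactly the "moving" retry that the default mode forbids; with this pattern either the enclosing buffer must stay put between calls, or SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER must be set.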
Re: FIPS Security Policy question
All cryptography used by the US Federal Government must be done in compliance with FIPS 140-2. (Other entities may choose to require FIPS compliance for their cryptographic functions as well.) Thus, if you are selling to an entity that requires FIPS, all OpenSSL (and other encryption) libraries must be put into FIPS mode, or FIPS is not satisfied and thus the application is not FIPS compliant. (In order to understand what FIPS compliance is, you first need to understand what FIPS is, and what it requires. I'd suggest that you download and read the FIPS 140-2 specification, from http://csrc.nist.gov/publications/fips/index.html , to understand why it was specified and what its purpose is.) Cheers, -Kyle H On 6/22/06, Tinnerello, Richard <[EMAIL PROTECTED]> wrote: Our application consists of multiple Unix processes, each of which creates its own OpenSSL instance. Does it violate the Security Policy if some of those processes set OpenSSL into FIPS mode while others do not? In other words, does the existence of non-FIPS mode toolkit instances invalidate the FIPS mode of the other instances where FIPS mode is desired and has been set? Thanks, Richard
Re: SSL_peek() ?
> David Schwartz wrote: > >> My program is being told by the operating system I can write(), the > >> operating system socket send buffers are empty, but OpenSSL is returning > >> WANT_READ to SSL_write(), so I need to stop calling SSL_write() and wait > >> for data to arrive. This means temporarily ignoring the operating > >> system's indication I may write data (which was previously driving my > >> SSL_write() operations). So I'm guessing I can call SSL_peek() to clear > >> that situation, that function doesn't appear documented but exists and > >> I'm guessing it will allow the read side to be serviced but not remove > >> any data (destructively) from buffering. Which is what I need at this > >> point ? > > > > Interesting issue. What's kind of puzzling me is why you are 'select'ing > > for 'write'. Is this because a previous 'write' returned a 'WOULDBLOCK' > > indication or just because you have application-level data to write? > > Generally one only puts a socket in the 'select' set for writability if one > > has already gotten a 'WOULDBLOCK' indication, in which case a 'select' write > > hit would be a definite indicator of forward progress. > > > > Maybe I'm missing something about your setup. > > > > What you're doing, of course, should work. But you still may find it > > easier > > just to use bio pairs. It saves you all these weird issues with 'select' and > > 'read'/'write' and the semantic differences between TCP and SSL. I agree you > > should not have to do things this way, but once you get over the SSL > > complexities of actually setting it up, it really simplifies your > > interactions with the TCP stack. > > > > I have switched to only using bio pairs and have never had a weird issue > > like these.
Just remember the following rule: if you make any forward > > progress, try to do everything at least once before you assume you can make > > no more forward progress (a successful write of encrypted data from the > > network to OpenSSL might let you write more plaintext from the application > > to OpenSSL, you never know). Repeat until no forward progress is made, then > > 'select' or whatever. > > Yes, what I meant by "My application's main task at the moment is > sending bulk data, a lot of data (enough to cause flow control > bottleneck)." meant exactly what you are asking. A previous SSL_write() > returns WANT_WRITE, which I presume is because the OpenSSL BIO_s_socket > did a write() which returned EAGAIN, so I put the fd into a select() > fd_set for write, waiting for the O/S to have room for more data. > > My application is then woken because the O/S says I can now write. So I > call SSL_write() again, but this time I get back WANT_READ. > > So now I can't drive my next SSL_write() from the O/S writable > indication, because I'd end up in a tight loop, until the read data came > in and I did a SSL_peek() to clear the WANT_READ; the next iteration of > the tight loop would clear it.
> > Here is a theoretical syscall of the situation I forsee, this is just a > diagram of the concept, if I don't do something about the situation and > leave my code as-is: > > > select(4, [4], [4], [], {1, 0}) = 0 (Timeout) > select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {0, 999000}) > write(4, ..., 4096) = 4096 > write(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable) > select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {0, 999000}) > write(4, ..., 4096) = 4096 > write(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable) > select(4, [4], [4], [], {1, 0}) = 1 (in [4], out [4], left {0, 999000}) > read(4, ..., 4096) = 37 > [ THIS IS WHERE OUR PEER SENT SOMETHING CAUSING A FUTURE WANT_READ TO > OUR SSL_write()] > write(4, ..., 4096) = 4096 > [ THIS IS THE LAST BLOCK OF DATA BEING WRITTEN BY SSL ] > write(4, ..., 37) = -1 EAGAIN (Resource temporarily unavailable) > select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {0, 999000}) > write(4, ..., 37) = 37 > read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable) > [ THIS IS WHERE WANT_READ FIRST OCCURS TO SSL_write(), THE read() I > HAVE STUCK INTO THE DIAGRAM, AS I HAVE WITNESSES OPENSSL DOING > OPTIMISTIC read()s WHEN NECESSARY ] > select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0}) > read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable) > [ THE read() ABOVE WAS CAUSED BY THE SSL_write(), THIS IS WHERE I > START MY FIRST ITERATION OF A TIGHT LOOP BECAUSE I'M DRIVEN BY SELECT > WRITABLE EVENT ON fd=4, I HAVE ITERATED JUST 4 TIMES BUT IN PRACTICE IT > MAYBE FOREVER EATING MY CPU TIME] > select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0}) > read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable) > select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0}) > read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable) > select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0}) > read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavail
FIPS Security Policy question
Our application consists of multiple Unix processes, each of which creates its own OpenSSL instance. Does it violate the Security Policy if some of those processes set OpenSSL into FIPS mode while others do not? In other words, does the existence of non-FIPS mode toolkit instances invalidate the FIPS mode of the other instances where FIPS mode is desired and has been set? Thanks, Richard
Re: SSL_peek() ?
Comments below David Schwartz wrote: My program is being told by the operating system I can write(), the operating system socket send buffers are empty, but OpenSSL is returning WANT_READ to SSL_write(), so I need to stop calling SSL_write() and wait for data to arrive. This means temporarily ignoring the operating system's indication I may write data (which was previously driving my SSL_write() operations). So I'm guessing I can call SSL_peek() to clear that situation; that function doesn't appear documented but exists, and I'm guessing it will allow the read side to be serviced but not remove any data (destructively) from buffering. Which is what I need at this point ? Interesting issue. What's kind of puzzling me is why you are 'select'ing for 'write'. Is this because a previous 'write' returned a 'WOULDBLOCK' indication or just because you have application-level data to write? Generally one only puts a socket in the 'select' set for writability if one has already gotten a 'WOULDBLOCK' indication, in which case a 'select' write hit would be a definite indicator of forward progress. Maybe I'm missing something about your setup. What you're doing, of course, should work. But you still may find it easier just to use bio pairs. It saves you all these weird issues with 'select' and 'read'/'write' and the semantic differences between TCP and SSL. I agree you should not have to do things this way, but once you get over the SSL complexities of actually setting it up, it really simplifies your interactions with the TCP stack. I have switched to only using bio pairs and have never had a weird issue like these. Just remember the following rule: if you make any forward progress, try to do everything at least once before you assume you can make no more forward progress (a successful write of encrypted data from the network to OpenSSL might let you write more plaintext from the application to OpenSSL, you never know). 
Repeat until no forward progress is made, then 'select' or whatever.

Yes, what I meant by "My application's main task in the moment is sending bulk data, a lot of data (enough to cause flow control bottleneck)." was exactly what you are asking. A previous SSL_write() returned WANT_WRITE, which I presume is because the OpenSSL BIO_s_socket did a write() which returned EAGAIN, so I put the fd into a select() fd_set for write, waiting for the O/S to have room for more data.

My application is then woken because the O/S says I can now write. So I call SSL_write() again, but this time I get back WANT_READ.

So now I can't drive my next SSL_write() from the O/S writable indication, because I'd end up in a tight loop until the read data came in and I did an SSL_peek() to clear the WANT_READ condition; only then would the next iteration of the tight loop clear.

Here is a theoretical syscall trace of the situation I foresee (this is just a diagram of the concept) if I don't do something about the situation and leave my code as-is:

select(4, [4], [4], [], {1, 0}) = 0 (Timeout)
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {0, 999000})
write(4, ..., 4096) = 4096
write(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {0, 999000})
write(4, ..., 4096) = 4096
write(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [4], [4], [], {1, 0}) = 1 (in [4], out [4], left {0, 999000})
read(4, ..., 4096) = 37
[ THIS IS WHERE OUR PEER SENT SOMETHING CAUSING A FUTURE WANT_READ TO OUR SSL_write() ]
write(4, ..., 4096) = 4096
[ THIS IS THE LAST BLOCK OF DATA BEING WRITTEN BY SSL ]
write(4, ..., 37) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {0, 999000})
write(4, ..., 37) = 37
read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
[ THIS IS WHERE WANT_READ FIRST OCCURS TO SSL_write(); THE read() IS ONE I HAVE STUCK INTO THE DIAGRAM, AS I HAVE WITNESSED OPENSSL DOING OPTIMISTIC read()s WHEN NECESSARY ]
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0})
read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
[ THE read() ABOVE WAS CAUSED BY THE SSL_write(); THIS IS WHERE I START THE FIRST ITERATION OF A TIGHT LOOP BECAUSE I'M DRIVEN BY THE SELECT WRITABLE EVENT ON fd=4; I HAVE ITERATED JUST 4 TIMES HERE BUT IN PRACTICE IT MAY BE FOREVER, EATING MY CPU TIME ]
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0})
read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0})
read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0})
read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [4], [4], [], {1, 0}) = 1 (out [4], left {1, 0})
read(4, ..., 4096) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [4], [4], [], {1, 0}) = 1 (in [4], out [4], left {1, 0})
read(4,
RE: SSL_peek() ?
> My program is being told by the operating system I can write(), the > operating system socket send buffers are empty, but OpenSSL is returning > WANT_READ to SSL_write(), so I need to stop calling SSL_write() and wait > for data to arrive. This means temporarily ignoring the operating > system's indication I may write data (which was previously driving my > SSL_write() operations). So I'm guessing I can call SSL_peek() to clear > that situation; that function doesn't appear documented but exists, and > I'm guessing it will allow the read side to be serviced but not remove > any data (destructively) from buffering. Which is what I need at this > point ? Interesting issue. What's kind of puzzling me is why you are 'select'ing for 'write'. Is this because a previous 'write' returned a 'WOULDBLOCK' indication or just because you have application-level data to write? Generally one only puts a socket in the 'select' set for writability if one has already gotten a 'WOULDBLOCK' indication, in which case a 'select' write hit would be a definite indicator of forward progress. Maybe I'm missing something about your setup. What you're doing, of course, should work. But you still may find it easier just to use bio pairs. It saves you all these weird issues with 'select' and 'read'/'write' and the semantic differences between TCP and SSL. I agree you should not have to do things this way, but once you get over the SSL complexities of actually setting it up, it really simplifies your interactions with the TCP stack. I have switched to only using bio pairs and have never had a weird issue like these. Just remember the following rule: if you make any forward progress, try to do everything at least once before you assume you can make no more forward progress (a successful write of encrypted data from the network to OpenSSL might let you write more plaintext from the application to OpenSSL, you never know). Repeat until no forward progress is made, then 'select' or whatever. 
DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
SSL_CTX_set_mode(3) SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER Make it possible to retry SSL_write() with changed buffer location (the buffer contents must stay the same). This is not the default to avoid the misconception that non-blocking SSL_write() behaves like non-blocking write(). What is that all about ? My application makes no guarantee what the exact address given to SSL_write() is, it only guarantees the first so many bytes are my valid data. Why do I need to give it such guarantees ? What if I am also using SSL_MODE_ENABLE_PARTIAL_WRITE, but not using SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER. Do I have to make sure that the address I pass to SSL_write() for the 2nd call has exactly the same memory address offset as the original call ? i.e. if I wrote 32 bytes out of 64 from my static buffer, would I need to call "SSL_write(ssl, &static_buffer[32], 32);" ? What are the implementation reasons for these unusual requirements ? Where can I find the full information about those unusual requirements and the "This is not the default to avoid the misconception that non-blocking SSL_write() behaves like non-blocking write()." part ? i.e. What else is going to bite me ? Maybe I can wrap OpenSSL with my own library that brings back a friendlier interface. Not my today. My application appears to work with OpenSSL; it's only because I am now auditing all the possible what-if scenarios that I am turning up things that might bite me. Help I'm sinking :) Darryl
RE: SMIME subcommand
Wow. I guess I should play the lottery. Thanks for the info. I will upgrade to the FIPS level. regards, TT -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Marco Roeland Sent: Thursday, June 22, 2006 11:34 AM To: openssl-users@openssl.org Subject: Re: SMIME subcommand On Wednesday June 21st 2006 TIM TAYLOR wrote: > I am running OpenSSL 0.9.7d 17 Mar 2004 on SunOS 5.9 Generic_118558-14 > sun4u sparc SUNW,Ultra-Enterprise. > > The smime subcommand of openssl coredumps. Is there a known bug or is > this supposed to work? > > $ openssl smime -encrypt -in msgtext -to [EMAIL PROTECTED] \ -from > [EMAIL PROTECTED] -out mail.msg -aes128 mycert.pem Segmentation > Fault(coredump) Yes, this is a specific known bug in the 0.9.7d release. If possible use a later version (or even earlier!); otherwise there are simple patches for this issue. It occurs on "smime -encrypt" in general in that version. -- Marco Roeland
SSL_peek() ?
SSL_peek() there isn't much documentation about this one. My situation is this: * I'm using full non-blocking IO. * My application's main task in the moment is sending bulk data, a lot of data (enough to cause flow control bottleneck). * My application is not interested in reading any more data (yet) so it doesn't want to call SSL_read() as that might return some data (destructively) from the buffering. * But I want to allow OpenSSL to service the connection for the WANT_READ condition I got back from SSL_write(). So what do I do ? My program is being told by the operating system I can write(), the operating system socket send buffers are empty, but OpenSSL is returning WANT_READ to SSL_write(), so I need to stop calling SSL_write() and wait for data to arrive. This means temporarily ignoring the operating system's indication I may write data (which was previously driving my SSL_write() operations). So I'm guessing I can call SSL_peek() to clear that situation; that function doesn't appear documented but exists, and I'm guessing it will allow the read side to be serviced but not remove any data (destructively) from buffering. Which is what I need at this point ? But I then need to know from SSL_peek() that the problem is now cleared, so I can try SSL_write() again, but surely it will always return WANT_READ. Maybe there is another indicator I can use to ask OpenSSL "Did the read operation I just did clear the WANT_READ condition I got from my last SSL_write() call ?". If I get a Yes back, I can then re-enable my SSL_write() operations driven from the operating system's socket-is-writable event indication. Your thoughts appreciated, Darryl
Re: Possible to given SSL initial connection negotiation data from static buffer ?
Hello, > This is because my application has already read it from the socket > (which was in plain text), but then I'm happy for OpenSSL to use the > socket directly after that. > > Would creating a temporary BIO containing the over read data work, allow > OpenSSL to read it in. Then once empty switch the BIO for one with the > socket assigned ? Can you switch BIOs like that ? For me this should work. BIO read/write calls are made at the SSL record layer, and the function which sets a BIO on the SSL object takes care of freeing an already-allocated BIO in the SSL (which may even be treated as support for this functionality :-) so switching, for example, from a mem to a socket BIO should work. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Possible to given SSL initial connection negotiation data from static buffer ?
I would like to give OpenSSL the first part of the initial SSL connect negotiation. This is because my application has already read it from the socket (which was in plain text), but then I'm happy for OpenSSL to use the socket directly after that. Would creating a temporary BIO containing the over-read data work, allowing OpenSSL to read it in? Then once empty, switch the BIO for one with the socket assigned ? Can you switch BIOs like that ? Thanks Darryl -- Darryl L. Miles
Re: Accessing Manual Pages in openssl
Hello > And for the great unwashed using Windows Marek :-) > Is it just the online versions? On Unix pod2man.pl script is used. I think that pod2chm from CPAN perl module may help :-) Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Re: Accessing Manual Pages in openssl
On 22/06/06, Marek Marcola <[EMAIL PROTECTED]> wrote: If you have installed OpenSSL (for example in /usr/local/openssl) you may: $ nroff -man /usr/local/openssl/ssl/man/man1/rsa.1 | more or: $ export MANPATH=$MANPATH:/usr/local/openssl/ssl/man $ man rsa If you have untarred source: $ pwd /some/path/openssl-0.9.8b $ perldoc doc/crypto/rsa.pod Best regards, -- Marek Marcola And for the great unwashed using Windows Marek :-) Is it just the online versions? regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Re: On select and blocking
David Schwartz wrote: God I hope so.. I'm right in the middle of trying to get this non-blocking stuff to work consistently (with a timeline fast approaching... arg!) and I can't tell if it's something I am doing wrong and what exactly that is. Too many variables to be easy. If you are trying to use blocking socket operations and be sure you will not block, you can never be certain that your code will work. If you do not ask for non-blocking behavior, there is no way for the implementation to know that you want it. DS Thanks David. This helps me A LOT. Joe
Re: On select and blocking
On Thu, Jun 22, 2006 at 10:24:02AM -0700, David Schwartz wrote: > (combined responses) > > > What is true for two stacked layers may be false for one. > > (and the other way round). > > No standard guarantees that you are only dealing with one layer. In > fact, in the Windows world, multiple layers (hidden from the programmer) are > common thanks to LSPs. (And yes, a lot of broken Windows code only shows its > breakage with particular LSPs thanks to broken assumptions exactly like this > one.) Enough already. Please stop. -- Viktor.
RE: On select and blocking
(combined responses) > What is true for two stacked layers may be false for one. > (and the other way round). No standard guarantees that you are only dealing with one layer. In fact, in the Windows world, multiple layers (hidden from the programmer) are common thanks to LSPs. (And yes, a lot of broken Windows code only shows its breakage with particular LSPs thanks to broken assumptions exactly like this one.) > God I hope so.. I'm right in the middle of trying to get this > non-blocking stuff to work consistently (with a timeline fast > approaching... arg!) and I can't tell if it's something I am doing wrong > and what exactly that is. Too many variables to be easy. If you are trying to use blocking socket operations and be sure you will not block, you can never be certain that your code will work. If you do not ask for non-blocking behavior, there is no way for the implementation to know that you want it. DS
Re: On select and blocking
Darryl Miles wrote: David Schwartz wrote: I don't get it. DS Ah, finally something concrete. Hey that's ok; sit back and relax. I'm sure a patch is on its way. God I hope so.. I'm right in the middle of trying to get this non-blocking stuff to work consistently (with a timeline fast approaching... arg!) and I can't tell if it's something I am doing wrong and what exactly that is. Too many variables to be easy. Joe Darryl
Re: Accessing Manual Pages in openssl
Hello, > > > Folks, how do you access the manual pages for openssl? > I know that there are man pages downloaded with the software > but I am unable to find how to open and display them using > openssl. If you have installed OpenSSL (for example in /usr/local/openssl) you may: $ nroff -man /usr/local/openssl/ssl/man/man1/rsa.1 | more or: $ export MANPATH=$MANPATH:/usr/local/openssl/ssl/man $ man rsa If you have untarred source: $ pwd /some/path/openssl-0.9.8b $ perldoc doc/crypto/rsa.pod Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Accessing Manual Pages in openssl
Greetings, Folks, how do you access the manual pages for openssl? I know that there are man pages downloaded with the software but I am unable to find how to open and display them using openssl. Thanks in advance. Bob Richardson Allina Hospitals and Clinics Minneapolis, MN
RE: On select and blocking
Hello, > > No, of course not. In this context we are talking of kernel/system > > implementation of select()/read() and you mix this with SSL. > > Because it demonstrates precisely the problem. The 'select' function > has no > way to know what type of read function will follow, and there are several > with different blocking semantics. A plain 'read' does not block the same > way as 'recvmsg(MSG_WAITALL)'. No, you simply combine misc layers for a temporary response. What is true for two stacked layers may be false for one (and the other way round). But I agree with Viktor that this kind of discussion makes no sense and this is my last post in this thread. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Re: renegotiating problem - connection hanging?
Darryl Miles wrote: All you have to do inside OpenSSL is know if your underlying IO mode is blocking or non-blocking and set a flag every time you enter a high-level call from the application context. Then _BEFORE_ you issue any low-level I/O you test to see if your low-level IO is in blocking mode and the flag is reset. If that condition is true, return -1 and WANT_READ/WANT_WRITE depending on what you were just about to try and do. Then _AFTER_ every time you issue a low-level I/O (read or write), you reset that flag. You now get one I/O per high-level call. An omission. The test _BEFORE_ low-level IO needs to also check that SSL_MODE_AUTO_RETRY is not set (along with the other things listed above). Darryl
Re: On select and blocking
David Schwartz wrote: I don't get it. DS Ah, finally something concrete. Hey that's ok; sit back and relax. I'm sure a patch is on its way. Darryl
Re: SMIME subcommand
On Wednesday June 21st 2006 TIM TAYLOR wrote: > I am running OpenSSL 0.9.7d 17 Mar 2004 on SunOS 5.9 Generic_118558-14 > sun4u sparc SUNW,Ultra-Enterprise. > > The smime subcommand of openssl coredumps. Is there a known bug or is > this supposed to work? > > $ openssl smime -encrypt -in msgtext -to [EMAIL PROTECTED] \ > -from [EMAIL PROTECTED] -out mail.msg -aes128 mycert.pem > Segmentation Fault(coredump) Yes, this is a specific known bug in the 0.9.7d release. If possible use a later version (or even earlier!); otherwise there are simple patches for this issue. It occurs on "smime -encrypt" in general in that version. -- Marco Roeland
RE: On select and blocking
(combined responses) > No, of course no. In this context we are talking of kernel/system > implementation of select()/read() and you mix this with SSL. Because it demonstrates precisely the problem. The 'select' function has no way to know what type of read function will follow, and there are several with different blocking semantics. A plain 'read' does not block the same way as 'recvmsg(MSG_WAITALL)'. > Can you focus and give precision answer to this question > without involving "hypothetical operation/situation", without mixing > with some "protocol data", unnamed sophisticated systems and ZONE51 ? When you code to standards, you do not make assumptions based on what you see in current implementations or what you think will continue to happen. You code based on the actual guarantees that you in fact have. The standard provided a perfect way to get the behavior needed. The current case shows the problem. The 'select' function has no way to know what type of read function you have in mind. This has bitten real code many times now. There was the Linux UDP recvmsg issue. There was the Solaris listening socket accept issue. Now there is the got protocol data not application data issue. This faulty assumption has bitten real code in at least three different ways now. It's time to put it to rest. -- >> Do you know of any implementation where this is the case? For example, I >> call 'read'. There is no data, but a TCP window advertisement is sent. Work >> has been done, should 'read' return? If so, *what* should it return? >> EWOULDBLOCK?! >Euh. Now you are confusing Kernel level background processing with the >application processing. You do not need to call read() to make the >kernel process an ACK packet which reduces TCP window advertisement. >This is done concurrently inside the kernel without any application >assistance. 
What standard says an implementation cannot check to see if it was about to send an ACK when you called 'read' and decide to send it a few microseconds early since it's looking at the connection anyway? You keep stating things that are not guaranteed as if you can synthesize a guarantee from them. No combination of unguaranteed assertions that are true of some particular platform will give a guarantee. >In short you have yet to find a case where the application layer could >not forsee or incited the select/poll/read/write event model to stop >working in the way you claim it can. Then how did this conversation start in the first place? How did the Linux recvmsg UDP problems happens? Real code breaks because people make assumptions based on how one kernel works or that they've never seen a system do otherwise. That's really fine when you have no other choice, say due to missing standards or inadequate documentation. It sucks, but if there's no other choice, you live with it. But here, you have pearls. The standard gives you a simple, guaranteed way to get precisely the behavior you want. You spit on it and instead synthesize a method that will work if and only if a large combination of platform-specific things remain true. You persist even though this exact same type of assumption has broken applications before and is breaking one now. I don't get it. DS
Re: On select and blocking
Alain Damiral wrote: I'm wondering if it would not be highly appropriate to have an 'SSL_select' call defined by OpenSSL to have all operations on sockets fully encapsulated and allow to reach the desired behaviour without short-circuiting the layer approach... (it would provide similar behaviour as classical select on plain sockets with regard to application data). Now I apologize if this thought is trivially appropriate or trivially inappropriate - I missed the beginning of this thread to be honest. Nice idea, but... It doesn't play well with existing programs. In the application we need one point of control (the select() in the main event loop) where the program can wait to do more work or timeout. The only way to work an SSL_select() is either: * Make SSL_select() aware of all the other fd's in an application so it can watch them all for you. This means you hand over your select() model to OpenSSL; I'd say this is unnecessary and not OpenSSL's job. * Use the timeout value of the application's select() and poll. But what do you set the timeout to ? This results in unusable delays in data processing and extra CPU usage having to poll when no events occur, a problem if you have 100 processes doing that loop. Neither approach would be as simple a solution as a transparent blocking mode. Looking at the OpenSSL code, maybe I can prepare a patch to provide that behaviour (based on how I explained the implementation would work in another thread), which can then be activated through a new option. Later it could seek approval for use by default, with the current spongey-blocking mode dropped (or retained via an option in reverse). This is all possible and presumes the current non-blocking mode state machine inside OpenSSL is flawless by design, but as I say, if this exposes non-blocking bugs they should be fixed too. I think the bulk patch will be less than 60 lines of code changes. 
I am currently looking at the code to fix SSL_accept() in non-blocking mode as that's not consistent either. Darryl
Re: On select and blocking
David Schwartz wrote: On Thu, 22 Jun 2006, David Schwartz wrote: Bingo! And work may or may not translate into application data. I thought that a recv on a blocking socket returns immediately after it was able to do some work, no matter whether it resulted in receiving any actual data (e.g. socket closed). If a blocking call managed to do something it should return. If a blocking SSL_read call managed to finish the negotiation but didn't get any app data it should return. Do you know of any implementation where this is the case? For example, I call 'read'. There is no data, but a TCP window advertisement is sent. Work has been done, should 'read' return? If so, *what* should it return? EWOULDBLOCK?! Euh. Now you are confusing kernel-level background processing with the application processing. You do not need to call read() to make the kernel process an ACK packet which reduces TCP window advertisement. This is done concurrently inside the kernel without any application assistance. TCP window advertisement is handled in the background of the application by the kernel and, as far as I know, the kernel network programming is _independent_ of the memory buffers used to store data committed by the application. On Linux investigate what /proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default are all about. They are the application's socket buffer allocation in the kernel; some of this data may or may not have been sent over the TCP connection. The TCP connection control block controls the current allowed TCP window and therefore controls what can be sent over the wire to the peer. It can create back pressure to the application but it never does this in a way that jeopardizes the select/poll/read/write event model. The flow control of the rmem/wmem is independent of the TCP window flow control. One concerns application<>kernel; the other concerns kernel<>kernel (over TCP). 
I would agree that if you muck around with setsockopt() between select/poll and read/write to alter the high water marks and/or the application hint for window sizes you might have a problem. But this again is _OUTSIDE_ the scope of the original problem. If the application layer incited a problem, IT IS A GIVEN: IF YOU MESS WITH YOUR FD IN OTHER WAYS YOU MAY HIT PROBLEMS. In short you have yet to find a case where the application layer could not foresee, or did not incite, the select/poll/read/write event model to stop working in the way you claim it can. recv() with flags=0 is the same as read(). It is just a read with a few extra modes, and those extra modes again fall _OUTSIDE_ the scope of the original problem. David, please find your next argument. Maybe we can get to something solid. Darryl
RE: On select and blocking
Hello, > >* A readability event can disappear (after it has been first indicated > >by poll/select and no read() family of functions have been called, > >recvmsg()/recv() etc... > > >* A writability event can disappear (after it has been first indicated > >by poll/select and no write() family of functions have been called, > >sendmsg()/send() etc... > > Yep, and that's exactly what happened IN THIS VERY CASE WE ARE ALL > TALKING > ABOUT. You got a 'select' hit because there was data, but then 'read' > blocked because the data was not application data. I'm not just claiming > it can happen; it happened. That's why we're having this thread. No, of course not. In this context we are talking about the kernel/system implementation of select()/read() and you mix this with SSL. Can you focus and give a precise answer to this question without involving a "hypothetical operation/situation", without mixing in some "protocol data", unnamed sophisticated systems and ZONE51? Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Re: On select and blocking
On Thu, Jun 22, 2006 at 07:35:15AM -0700, David Schwartz wrote: > Since 'select' does not guarantee that a subsequent read operation won't > block (since it can't even know what operation that's going to be), the > subsequent read operation (which was 'SSL_read') blocked. That's because > SSL_read blocks for *application* *data* while 'select' checks for *any* > *data*. This is no longer a productive use of the list's time. Please redirect to a suitable system programming list. From a pragmatic viewpoint (doing non-blocking I/O on blocking sockets right is subtle, requires care, and some not always applicable pre-conditions) I agree with you, but your zeal is excessive. Just because it is difficult does not mean that OpenSSL should flagrantly make it more difficult. Ideally behind the scenes, while we argue philosophy, someone on the OpenSSL team is looking at the code... Perhaps they will make more progress if we now desist from this discussion. -- Viktor.
Re: On select and blocking
Hello everybody ! David Schwartz wrote: Since 'select' does not guarantee that a subsequent read operation won't block (since it can't even know what operation that's going to be), the subsequent read operation (which was 'SSL_read') blocked. That's because SSL_read blocks for *application* *data* while 'select' checks for *any* *data*. Can I play ? :) I'm wondering if it would not be highly appropriate to have an 'SSL_select' call defined by OpenSSL so that all operations on sockets are fully encapsulated, allowing one to reach the desired behaviour without short-circuiting the layered approach... (it would provide behaviour similar to a classical select on plain sockets with regard to application data). Now I apologize if this thought is trivially appropriate or trivially inappropriate - I missed the beginning of this thread to be honest. Goodbye everybody ! -- Alain Damiral, I hope this message makes me look like a very intelligent person Université Catholique de Louvain - student
RE: On select and blocking
> >Same thing, no guarantee about what an actual future operation will > > do. By > > "would not block", they mean a hypothetical operation taking place at > > the > > time the indication is given to you. > No. That's stupid. It's useless. Not at all. It's the same as every other status reporting function. > By 'would not block' they mean 'if > nobody else messes with the descriptor, the operation would not block.' The operation you didn't perform, because you were performing a 'select' at the time. Why would they say "would not block" instead of "will not block" if they meant an actual future read call? Tell me, if this is correct, why does no implementation assure that a subsequent recvmsg(MSG_WAITALL) doesn't block? And consider, if you decided you wanted to do that, how you would figure out which 'recvmsg' calls were subsequent to a 'select' and which weren't in a general way. > Your meaning means that select is absolutely *useless* to a programmer > unless the socket is set to non-blocking mode; No. Only if their intention is to never, ever block. > there is no mention in the > select manpage that the socket must be in non-blocking mode. It doesn't have to be. You can set it non-blocking later, or maybe you really do want to block until application-level data is ready. The implementation cannot know what you want unless you tell it. > Further, > since a select'd non-blocking socket can return EWOULDBLOCK for any operation, > select on non-blocking becomes nothing more than an optimization hint to > avoid a read system call. That's a good, defensive way to think about it, since things really can change in between when 'select' returns and when you call a read function. If you really believe what you're saying, you would have to say that any implementation that allows an 'accept' to block after a read 'select' hit on a listening socket is broken. Otherwise, select is absolutely *useless* to a programmer unless his listening socket is set to non-blocking mode. 
Just as things can change on a listening socket, things can change on a stream socket. Packets can be received. Timeouts can occur. Data that might have been application data may turn out to be protocol data. At one time, people thought nothing could change on a listening socket and they wrote code that operated on precisely the assumption you continue to recommend. Their code broke because of a situation they didn't anticipate but that was never guaranteed not to happen. You now go out of your way to repeat that error. Either a future operation is guaranteed not to block or it's not. The assumption that it is guaranteed is falsifiable with many examples. The examples are not all precisely on point, but they suffice to show you don't have the guarantee. And, of course, this very case is precisely on point. This very same assumption broke in this very same code we are talking about. It broke because 'select' was thinking "application data or protocol data" and his read function was for application data. DS
RE: On select and blocking
>Same thing, no guarantee about what an actual future operation will do. By > "would not block", they mean a hypothetical operation taking place at the > time the indication is given to you. No. That's stupid. It's useless. By 'would not block' they mean 'if nobody else messes with the descriptor, the operation would not block.' Your reading means that select is absolutely *useless* to a programmer unless the socket is set to non-blocking mode; there is no mention in the select manpage that the socket must be in non-blocking mode. Further, since a select'd non-blocking socket can return EWOULDBLOCK for any operation, select on non-blocking becomes nothing more than an optimization hint to avoid a read system call. /r$ -- SOA Appliances Application Integration Middleware
RE: On select and blocking
> No, not "they mean", you have no authority to tell what "they mean". > You have only authority to tell what is your interpretation > of this text. > Only authors may tell what "they mean" (are you one of them ?) What? I presented an argument to show that they must mean this. You snipped it. For example, when they say that 'stat' provides the "size of a file", when must the file be that size? It can only mean at some undefined point between when you called 'stat' and when it returned. You notice they carefully chose a counterfactual "would" rather than a guarantee "will". That's because the operation that would not have blocked is one that never takes place. You get a read hit from 'select' if a hypothetical 'read' operation would not block at the instant the read indication is generated. It is not always clear what 'read' operation is being talked about, since there are many. > > Just like 'access' or 'stat', the information is no longer > > guaranteed valid > > when you receive it. Notice the absence of phrases like 'guarantee' or > > 'future operation will not block'. > > Notice the absence of phrase like "hypothetical operation taking > > place ..." > > The instant case again shows this as well as anything. At > > 'select' time, we > > believe a 'read' call will not block because there's data. Then at read > > time, we discover that things have changed, the data was not application > > data but protocol data. So the 'read' blocks because of information > > available at 'read' time but not at 'select' time. > What protocol data ? In the case that caused this whole problem, 'select' gave him a hit on read. He assumed this meant that a subsequent read operation would not block. The OS was thinking of 'read' as the subsequent operation but it was wrong (how could it know?). He was thinking the subsequent operation was 'SSL_read'. 
Since 'select' does not guarantee that a subsequent read operation won't block (since it can't even know what operation that's going to be), the subsequent read operation (which was 'SSL_read') blocked. That's because SSL_read blocks for *application* *data* while 'select' checks for *any* *data*. You can create many similar instances. All that's needed is for the thing that 'select' checks for to not be *exactly* the same as the conditions that make the subsequent operation not block. This was the case with the example of Linux UDP messages, which bit lots of real, actual code that assumed there was no way exactly this could happen. > We are talking of pure sockets and select() description (from standard). > This standard clearly tells of application data, you will not receive > "TCP flags" from this read(). > You mix information. Because the 'select' function has no way to know what kind of 'read' function will follow it, it has no way to assure that that function will not block. The only way to tell the 'read' function that you are expecting it not to block is to tell it by setting the socket non-blocking. DS
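The non-blocking remedy David describes is a one-line fcntl(). A minimal demonstration (mine; POSIX, using a socketpair) of what it buys you: once O_NONBLOCK is set, a read() that would otherwise block instead fails with EAGAIN/EWOULDBLOCK, so a stale 'select' hint costs you an error return rather than a hang.

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 0 if a read on an empty non-blocking socket fails with
 * EAGAIN/EWOULDBLOCK (the desired behavior), -1 otherwise. */
int nonblocking_read_demo(void)
{
    int sv[2];
    char buf[8];
    ssize_t n;
    int ok;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;
    /* Tell the kernel this descriptor must never block us. */
    fcntl(sv[0], F_SETFL, fcntl(sv[0], F_GETFL, 0) | O_NONBLOCK);

    n = read(sv[0], buf, sizeof(buf));   /* nothing queued: must not hang */
    ok = (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK));

    close(sv[0]);
    close(sv[1]);
    return ok ? 0 : -1;
}
```

The same flag is what makes SSL_read/SSL_write return -1 with SSL_ERROR_WANT_READ/WANT_WRITE instead of blocking inside the library.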
RE: On select and blocking
> On Thu, 22 Jun 2006, David Schwartz wrote: > > Bingo! And work may or may not translate into application data. > I thought that a recv on a blocking socket returns immediately after it > was able to do some work, no matter whether it resulted in receiving any > actual data (e.g. socket closed). If a blocking call managed to do > something it should return. If a blocking SSL_read call managed to finish > the negotiation but didn't get any app data it should return. Do you know of any implementation where this is the case? For example, I call 'read'. There is no data, but a TCP window advertisement is sent. Work has been done, should 'read' return? If so, *what* should it return? EWOULDBLOCK?! I really wish that were the way things were defined from the beginning. That would make a lot of things much simpler. It would mean that all blocking applications always had to check that work of the particular kind they wanted was done, but that simplifies things enormously. (And they pretty much have to do that anyway. Short writes are possible. Empty reads are not, as far as I know.) However, even if this is true, it still doesn't change the basic issue here. You don't have the guarantee you think you have. Even if you construe 'select' as guaranteeing at the time the hit is generated that work can be done, it does not follow that work can be done later. So you could still block. This really is an error akin to checking 'access' or 'stat' and assuming that whatever information you got will still be valid by the time you come back to do something else. It's wrong even when you can't think of anything that could change it in the meantime, and it's right only when you've assured that nothing could possibly change it in the meantime. It would be nice if they had defined 'select' as a hint that you should retry. It would be nice if they had defined 'read' as returning when work has been done and made 'EWOULDBLOCK' a legal return on a blocking socket. 
It would be nice if people wrote defensive code that didn't break in corner cases because they relied on guarantees they didn't have, guarantees that break in strange and subtle ways when their code is used outside its design environment. Reality is what it is. What bothers me here is that people are specifically advocating this type of coding on no stronger grounds than that they can't think of any way it can break on any current implementation, when this whole thread started because it broke. (The application did not anticipate a transport that can carry both application and non-application data.) DS
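The defensive style being argued for matches the "retry everything after any forward progress" rule given earlier in the thread. A toy model of that loop (mine; op_a() and op_b() are stand-ins, not real I/O): op_b() can only advance after op_a() has done some work, which is the shape of an SSL_write() being unblocked by servicing a read.

```c
/* Toy model of the conservative retry loop: keep re-attempting every
 * operation until a full pass makes no forward progress.  op_a()/op_b()
 * stand in for SSL_read()/SSL_write()-style calls that return 1 on
 * progress and 0 when they cannot advance. */
static int a_steps = 2;   /* op_a can progress twice */
static int b_gate  = 0;   /* op_b is unblocked by op_a's last step */

static int op_a(void)
{
    if (a_steps == 0)
        return 0;
    if (--a_steps == 0)
        b_gate = 1;       /* a's final step enables b */
    return 1;
}

static int op_b(void)
{
    if (!b_gate)
        return 0;
    b_gate = 0;
    return 1;
}

/* Returns the total number of progress steps made before quiescing. */
int drain_all(void)
{
    int total = 0, progressed;

    do {
        progressed = 0;
        if (op_a())           /* retry everything on each pass */
            progressed++;
        if (op_b())
            progressed++;
        total += progressed;
    } while (progressed);     /* stop only when nothing can advance */
    return total;
}
```

Sub-optimal, as conceded above, but it cannot deadlock on a missed wakeup: nothing is ever assumed impossible until a whole pass proves it.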
Why SSE2 are not enabled and at library init check if CPU support it ?
Hi This is my suggestion for the developers: why is SSE2 not enabled at compilation, with a check at library init (using CPUID) whether the CPU supports it? Best Regards Leandro Gustavo Biss Becker Engenheiro Eletrônico / Electronic Engineer eSysTech - Embedded Systems Technologies Travessa da Lapa, 96 conjunto 73, Curitiba - Paraná - Brasil http://www.esystech.com.br Telefone: +55 (41) 3029-2960
RE: On select and blocking
Hello, > > For short: > > > > A descriptor shall be considered ready for reading when a call to an > > input function with O_NONBLOCK clear would not block, whether or not > > the function would transfer data successfully. > > (The function might return data, an end-of-file indication, > > or an error other than one indicating that it is blocked, > > and in each of these cases the descriptor shall be considered > > ready for reading.) > > > > How hard is that to read new releases ? > > Same thing, no guarantee about what an actual future operation will do. > By > "would not block", they mean a hypothetical operation taking place at the > time the indication is given to you. No, not "they mean", you have no authority to tell what "they mean". You have only the authority to tell what your interpretation of this text is. Only the authors may tell what "they mean" (are you one of them ?) > Just like 'access' or 'stat', the information is no longer guaranteed > valid > when you receive it. Notice the absence of phrases like 'guarantee' or > 'future operation will not block'. Notice the absence of phrases like "hypothetical operation taking place ..." > The instant case again shows this as well as anything. At 'select' > time, we > believe a 'read' call will not block because there's data. Then at read > time, we discover that things have changed, the data was not application > data but protocol data. So the 'read' blocks because of information > available at 'read' time but not at 'select' time. What protocol data ? We are talking of pure sockets and the select() description (from the standard). This standard clearly speaks of application data; you will not receive "TCP flags" from this read(). You mix information. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
RE: On select and blocking
On Thu, 22 Jun 2006, David Schwartz wrote: > > Bingo! And work may or may not translate into application data. I thought that a recv on a blocking socket returns immediately after it was able to do some work, no matter whether it resulted in receiving any actual data (e.g. socket closed). If a blocking call managed to do something it should return. If a blocking SSL_read call managed to finish the negotiation but didn't get any app data it should return.
RE: On select and blocking
> If we are talking about standards, maybe you should read new releases > of documents which you are citing as an authority. From 1997 to 2004 > many things changed: > http://www.opengroup.org/onlinepubs/009695399/functions/select.html Nothing that bears on this issue. > For short: > > A descriptor shall be considered ready for reading when a call to an > input function with O_NONBLOCK clear would not block, whether or not > the function would transfer data successfully. > (The function might return data, an end-of-file indication, > or an error other than one indicating that it is blocked, > and in each of these cases the descriptor shall be considered > ready for reading.) > > How hard is that to read new releases ? Same thing, no guarantee about what an actual future operation will do. By "would not block", they mean a hypothetical operation taking place at the time the indication is given to you. That is, at the instant the determination is made to give you, say, a 'read' indication from 'select', the kernel must believe that had there been an "appropriate read operation" at the time, it would not block. Just like 'access' or 'stat', the information is no longer guaranteed valid when you receive it. Notice the absence of phrases like 'guarantee' or 'future operation will not block'. Also, there's no way to tell *what* input operation it's talking about. For example, clearly a 'recvmsg(MSG_WAITALL)' *will* block. The instant case again shows this as well as anything. At 'select' time, we believe a 'read' call will not block because there's data. Then at read time, we discover that things have changed, the data was not application data but protocol data. So the 'read' blocks because of information available at 'read' time but not at 'select' time. Now, had we called 'read' instead of 'SSL_read', we would not have blocked. But how can the kernel know at 'select' time what "input function" we plan to call? 
Certainly when 'recv' will block is not the same as when 'recvmsg(MSG_WAITALL)' will block. Again, you don't have the guarantee you need. The standard provides you a simple way to get it. You don't do that. That type of reasoning has resulted in more code breakage than I can shake a stick at. DS
Reading/Writing to disk files on Windows...
Hi All, First, before I get blasted for asking the same old question that has been asked at least 100 times this year: yes, I read the FAQ, and the very large number of posts on this subject (PEM_write_PrivateKey and /MD). So everyone knows. Now then, I have successfully compiled and run the complete 0.9.8b release of OpenSSL. Many thanks to the team that brought it to the world. Now, I have a Unix project that runs wonderfully on Linux/AIX/Solaris. I am porting it to Windows. Yes, I know, WHY? Like I said, OpenSSL built great the first time; the INSTALL.W32 doc is very clear and easy to follow. Way to go {Insert Author's Name}. Yes, it was built with /MD. Onward, I have my project to port. It has a library component, call it LIB_. I created a makefile with /MD as one of the COPTS. It builds cleanly and 'seems' to be OK. Next I build one of the programs that needs the LIB_, libeay32.lib, and ssleay32.lib libraries. The makefile for this program has /MD in the COPTS. When I build it, it compiles and links great. NOTE: I am NOT USING VISUAL STUDIO, I am using nmake on the command line (in case this makes a difference). Now, when I run this program, it simply creates a collection of PEM files with a private key and X509 cert. There are two lines of code that cause a Windows exception. They are: PEM_write_PrivateKey(fp, NewKeyReq, Cipher, GetCode(0), strlen(GetCode(0)), NULL, NULL); PEM_write_X509(fp, x509_Cert); If I comment out these lines, the program runs perfectly except the PEM file is empty. As I indicated above, I read many messages on the mailing list but nothing there seems to help. As I said, I built everything with /MD. So, any help would be great. Thanks. Jerry
RE: On select and blocking
Hello > This would be a valid argument if the standard didn't specifically > provide > us a way to get the exact guarantee you want. It's this simple: > > 1) You need a particular guarantee, specifically, that a 'read' won't > block. > > 2) The standard provides you a clear way to get that guarantee, by > setting > the socket non-blocking. > > 3) You can't think of any way for the 'read' to block, but you are not > guaranteed it. > > How hard is that to follow? If we are talking about standards, maybe you should read new releases of documents which you are citing as an authority. From 1997 to 2004 many things changed: http://www.opengroup.org/onlinepubs/009695399/functions/select.html For short: A descriptor shall be considered ready for reading when a call to an input function with O_NONBLOCK clear would not block, whether or not the function would transfer data successfully. (The function might return data, an end-of-file indication, or an error other than one indicating that it is blocked, and in each of these cases the descriptor shall be considered ready for reading.) How hard is that to read new releases ? Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
RE: On select and blocking
Combined replies. > Maybe we should look on other papers, for example part of select(2) > man page from hpux 11.23: > > Ttys and sockets are ready for reading or writing, respectively, if a > read() or write() would not block for one or more of the following > reasons: > + input data is available. > + output data can be accepted. > + an error condition exists, such as a broken pipe, >no carrier, or a lost connection. > > More, this statement already exists in hpux 10.0 (documentation > from 1995). Yes, I agree with this. As I said, 'select' should indicate a 'read' hit if at some point in between when you called 'select' and when it returned, a 'read' would not have blocked. Notice this says nothing about any *future* operations. > And of course this may change if two threads will use in parallel > one socket in the Bermuda Triangle, but this is not that case. This does not provide any future guarantees. It says a socket is ready for reading if a read will not block, but it doesn't say that once a socket is ready for reading it must remain so until any particular time. Note also that this says it wouldn't block for specific reasons. This strongly suggests that it *could* block for other reasons not listed. So actually it's hard to imagine what the list is for if not to caution you that a subsequent operation *can* block. For example, one of the cases where code making this assumption actually broke was Linux code that used 'select' followed by 'read' on UDP sockets. Surprise, the 'read' call blocked for a reason not listed that nobody at the time realized wasn't listed -- the checksum on the packet was bad. Input data was available, but it didn't satisfy the 'read' request (because 'read' implicitly asks for *valid* data only). The guarantee you want does not exist. You cannot produce it. You do not have it. Really, it's as simple as that. 
> And of course this may change if two threads will use in parallel > one socket in the Bermuda Triangle, but this is not that case. This would be a valid argument if the standard didn't specifically provide us a way to get the exact guarantee you want. It's this simple: 1) You need a particular guarantee, specifically, that a 'read' won't block. 2) The standard provides you a clear way to get that guarantee, by setting the socket non-blocking. 3) You can't think of any way for the 'read' to block, but you are not guaranteed it. How hard is that to follow? --- From Windows documentation: > For other sockets, readability means that queued data is available > for reading such that a call to recv, WSARecv, WSARecvFrom, > or recvfrom is guaranteed not to block. That's not a standard, so it applies to Windows only. Also, it does not specify *what* call to WSARecv is guaranteed not to block or how you can tell which of those functions the guarantee applies to. Unlike POSIX, at least the intention was there to provide this guarantee on Windows. --- >I call again for David to prove an existing implementation of >poll/select which does not conform to the above guarantees. David is >claiming that: That is not how you code to standards. You don't give a fig what existing implementations do, you care about what guarantees you have. In any event, a platform that spliced 'select'/'read'/'write' to OpenSSL is an existing implementation that does not provide these guarantees. >* A readability event can disappear (after it has been first indicated >by poll/select and no read() family of functions have been called, >recvmsg()/recv() etc... >* A writability event can disappear (after it has been first indicated >by poll/select and no write() family of functions have been called, >sendmsg()/send() etc... Yep, and that's exactly what happened IN THIS VERY CASE WE ARE ALL TALKING ABOUT. 
You got a 'select' hit because there was data, but then 'read' blocked because the data was not application data. I'm not just claiming it can happen; it happened. That's why we're having this thread. >The specifications for poll/select don't talk in terms of "nonblocking" >or "blocking" of other system calls because that does not concern the >select system call. What does concern select is the "readability" >and "writability" of the file descriptor. >What is meant by those terms is that the file descriptor "can do more >work" during the next syscall related to it. This can also mean >partial writes are possible, or an error return would be indicated, or >an end-of-stream would be indicated. This can be thought of as: there is >more information the kernel has to convey to the application about that >file descriptor and the kernel is ready to tell the application. Bingo! And work may or may not translate into application data. >Then by virtue of that fact the next read() or write() related call does >not block because we know there
Re: renegotiating problem - connection hanging?
David Schwartz wrote: David Schwartz wrote: I hate to be rude, but do you understand *anything* about programming to standards? The 'select' and 'read' functions are standardized, as is blocking and non-blocking I/O. You have the guarantees specifically enumerated in the standard and cannot assume things not assured because future implementations might violate those assumptions. I've explained why the standards you cite are written the way they are in another thread. The standards have to speak in terms of the entire scope of the subject. The subject is the select() system call. So the select() standards have to take into account other file descriptor contexts that are not socket related and still be factually correct. This is why the terms readable and writable are used. You then have to relate that to a socket file descriptor. Find me some coded proof or write a program to demonstrate the behavior you believe in. For the love of god, this whole thread started because of a program that demonstrated the behavior I "believe in". Actual programs have broken because of corner cases. Again this is an incorrect belief of yours. But I can understand your point of view; building another incorrect belief on top of another one is just human nature, so we'll let you off. Your misunderstanding there is that you think the SSL layer's API blocking concept is interchangeable with the socket layer's blocking concept. They are not, and by virtue of that neither is the SSL layer. Your incorrect beliefs are: * How application/kernel interacts with socket file descriptors when using select/poll/read/write. * That OpenSSL already implements a compatible/interchangeable event model in relation to the above. An SSL_read() may in fact issue a write() system call. Trying to tar the whole situation with the same brush is an incorrect belief. The SSL is another layer with its own wants and wishes; it does not conform to a simple data transform that can be driven by IO in one direction. 
Blocking mode of the higher-level API calls is not interchangeable with the blocking mode of the lower-level system calls. But if you execute one system call per SSL-level call they _DO_ become interchangeable and transparent. At the moment they are not transparent. Darryl
Re: renegotiating problem - connection hanging?
Marek Marcola wrote: Your next argument will be: "if you do select() and space shuttle is flying ..." I am in complete agreement here. The crux of the situation is that I'm (we're) saying it's possible to have a working OpenSSL blocking mode that uses a blocking socket which conforms to the host platform's select/poll/read/write event notification characteristics. This is what I'm calling transparency. Any quirks are also transparent; any application for that host would already need to take those quirks into account, so quirks become a non-issue too. I'm saying the only reason this is a problem inside OpenSSL is that one high-level API call, SSL_read()/SSL_write(), may end up calling two low-level socket calls, read()/write(), each of which may block. In my wisdom of having written IO layers before: in order to keep event notification characteristics, the first low-level call you make (per high-level invocation) you treat as one that may block. Any further low-level calls you want to make you must make non-blocking, or return -1 WANT_READ/WANT_WRITE. Since OpenSSL can almost do full non-blocking mode, as every part is restartable, it is 95% of the way to a transparent blocking mode, since the hard bit is already done. If this is not the case I would vote to fix that too :). All you have to do inside OpenSSL is know whether your underlying IO mode is blocking or non-blocking, and set a flag every time you enter a high-level call from the application context. Then _BEFORE_ you issue any low-level I/O you test whether your low-level IO is in blocking mode and the flag has been reset. If that condition is true, you return -1 and WANT_READ/WANT_WRITE depending on what you were just about to try to do. Then _AFTER_ every time you issue a low-level I/O (read or write) you reset that flag. You now get one I/O per high-level call. 
This may also expose/fix bugs within OpenSSL where non-blocking mode is not correctly implemented but never showed up before, because 99% of the time the other end of the connection sent a packet with all the data we needed in it, so the successive read() never returned EAGAIN. So maybe OpenSSL should have 4 I/O modes: * Non-blocking, * Fully-blocking (with SSL_MODE_AUTO_RETRY, based on application-data throughput), * Spongey-blocking (as now, without SSL_MODE_AUTO_RETRY), * Transparent-blocking (one I/O per call, based on low-level I/O throughput). IMHO spongey-blocking mode is of little practical purpose to anyone. It blocks when you don't want it to and may sometimes return -1 WANT_READ/WANT_WRITE anyway. I would vote to replace spongey-blocking mode with transparent-blocking, if my vote ever meant anything. Darryl
Re: renegotiating problem - connection hanging?
David Schwartz wrote: David, you are bringing completely unrelated issues into the situation. No, you are failing to understand my argument. A kernel does its job of arbitration like this on a shared/duped file descriptor that both processes have as fd=4.

We enter the kernel from the application context via select(4, [4], NULL, NULL, {10,0}) and proceed to test fd 4 for readability; we do that test atomically, with the following pseudo-code:

    mutex_lock(flip->mutex);
    if ((flip->events & FLIP_BIT_READ_EVENT))
        select_readability_fd_four = 1;
    else
        select_readability_fd_four = 0;
    mutex_unlock(flip->mutex);

When data arrives on TCP, the kernel marks the read event:

    mutex_lock(flip->mutex);
    flip->events |= FLIP_BIT_READ_EVENT;
    mutex_unlock(flip->mutex);

In practice the reporting mechanism may not work exactly like this, but if it did, it wouldn't alter the characteristics of it. In practice there may not even be any event bitmask in use, since most applications don't use select/poll for I/O. This makes select/poll the minority, so implementors chose the other approach: do all the work needed to calculate readability/writability inside the select/poll system call itself. Many target CPUs won't need mutexes, since it is possible to use machine-code instructions to atomically set and reset bit patterns in a (naturally aligned) memory location. In i386 GNU assembler syntax this might be "movl $0x1,%eax; lock orl %eax,0x012300": this sets EAX to 0x0001 and then logically ORs it into memory location 0x012300, thus setting bit 0. For a better understanding, check out atomic_set_mask() from the Linux kernel's include/asm-i386/atomic.h.

So there is a set ordering of events, a well-defined order. The whole point is that the event-trigger mechanism and the event-test mechanism (which together constitute the interaction between select/poll/read/write/etc.): * do not cause events to be revoked once they are first reported to an application and not yet cleared by further read()/write(); * do not lose the posting of events during the event-setting process — the classic transactional "lost write" scenario, or race.

The kernel I/O layer: only sets readability or writability events (new data arrives for the application; the output buffer drops below its water mark to guarantee some form of write again). select/poll: only looks at events in a read-only fashion; it does not have the ability to set or reset events. The read() family of functions: the only things that can clear readability events. The write() family of functions: can reset writability events (buffer-full is driven by application writes; it is reset by kernel low-level I/O).

It does not matter how many processes or threads have access to the same file descriptor. It does not matter that the select() call pre-dates what we now call threading on Unix, because that is irrelevant too. A process is the original form of parallel execution; a file descriptor is inherited across fork(), so two processes have access to the same file descriptor, and multi-CPU machines have existed under Unix for a long time. The kernel was still doing its job of arbitration then, as it does now.

I call again for David to prove an existing implementation of poll/select which does not conform to the above guarantees. David is claiming that: * a readability event can disappear (after it has been first indicated by poll/select and no read() family of functions has been called — recvmsg()/recv(), etc.); * a writability event can disappear (after it has been first indicated by poll/select and no write() family of functions has been called — sendmsg()/send(), etc.). We are also only interested in conditions concerning file descriptors being used for bulk read/write. We don't care a donkey about accept() or any other system calls and the quirks of a particular platform. That is off-topic.
We don't care a donkey about unrelated theoretical situations like "what if I call close(fd)"; they are irrelevant to the original discussion. The specifications for poll/select don't talk in terms of "non-blocking" or "blocking" of other system calls, because that does not concern the select system call. What does concern select is the "readability" and "writability" of the file descriptor. What is meant by those terms is that the file descriptor "can do more work" during the next system call related to it. This can also mean partial writes are possible, or that an error return would be indicated, or an indicated end-of-stream. This can be thought of as ther
RE: On select and blocking
Hello, > > David, > > Please post a link to a manpage or other documentation that > > justifies your > > description of select. > > I posted a link to the SuSv2 description of 'select'. There is no > guarantee > there that a future operation will not block. > http://www.opengroup.org/onlinepubs/007908799/xsh/select.html Part of the select() description from the Visual C++ documentation: The parameter readfds identifies the sockets that are to be checked for readability. If the socket is currently in the listen state, it will be marked as readable if an incoming connection request has been received such that an accept is guaranteed to complete without blocking. For other sockets, readability means that queued data is available for reading such that a call to recv, WSARecv, WSARecvFrom, or recvfrom is guaranteed not to block. So now we have three "broken" systems: Linux, HP-UX, and Windows. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
RE: On select and blocking
Hello, > > David, > > Please post a link to a manpage or other documentation that > > justifies your > > description of select. > > I posted a link to the SuSv2 description of 'select'. There is no > guarantee > there that a future operation will not block. > http://www.opengroup.org/onlinepubs/007908799/xsh/select.html You posted a link to a short document from 1997. This document does not discuss many select() facts, but that does not mean they do not exist. Maybe we should look at other papers, for example this part of the select(2) man page from HP-UX 11.23: Ttys and sockets are ready for reading or writing, respectively, if a read() or write() would not block for one or more of the following reasons: + input data is available. + output data can be accepted. + an error condition exists, such as a broken pipe, no carrier, or a lost connection. Moreover, this statement already exists in HP-UX 10.0 (documentation from 1995). And of course this may change if two threads use one socket in parallel in the Bermuda Triangle, but that is not the case here. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
RE: renegotiating problem - connection hanging?
> Hello, > > And, I'd like to point out one more time, we know of cases > > where a blocking > > read after a select will block. For example, if someone > > interposes OpenSSL > > between select/read/write and the OS. Someone *can* do this and > > people *do* > > do this. > I'd like to point out one more time that most of your arguments are > not related to this discussion. Should I just take your word for it? > We have two file descriptors and we > relay data between them. You mean that's what you *think* you're doing. You have no way of knowing what's going on under the hood. You have no way of knowing whether, for example, encryption has been invisibly added, and the fact that you don't know that is not grounds for breaking because of it. > If you build a system that interposes OpenSSL > between select/read/write and the OS - fine; if you build a system with > "sophisticated timeout handling" - fine; but this is unrelated to this > discussion. I'm sorry, how is it unrelated? > Your next argument will be: > "if you do select() and space shuttle is flying ..." I don't get your attitude. You have a set of specific guarantees. The guarantees are adequate to make your code guaranteed to work. You insist on writing code that relies on a guarantee you don't have. It breaks, because the thing that wasn't guaranteed didn't happen. I am saying that's not surprising. Lots of code built on guarantees you don't have breaks. I can cite example after example where people built code based on guarantees they didn't have, unable at the time to think of any possible way their code could break, but lo and behold, later it broke. Those who stuck to the guarantees they actually had don't have this problem: when things change, the guarantees you actually have are the ones that are kept. I can come up with example after example where, at the time the broken code was written, *nobody* could imagine any way to break it.
Everyone knew it wasn't guaranteed to work, but some people foolishly argued that it was guaranteed to work because nobody could think of a way to break it. The future *proved* them wrong. Can I say for sure it will happen in this case? No. But I have seen it happen too many times, causing too much pain, not to feel obligated to point it out. The code is broken. It happens to work. Tomorrow it might not happen to work. DS
SMIME subcommand
I am running OpenSSL 0.9.7d 17 Mar 2004 on SunOS 5.9 Generic_118558-14 sun4u sparc SUNW,Ultra-Enterprise. The smime subcommand of openssl core dumps. Is there a known bug, or is this supposed to work? $ openssl smime -encrypt -in msgtext -to [EMAIL PROTECTED] \ -from [EMAIL PROTECTED] -out mail.msg -aes128 mycert.pem Segmentation Fault(coredump) $ regards, TT 317-510-7346
RE: renegotiating problem - connection hanging?
Hello, > And, I'd like to point out one more time, we know of cases where a > blocking > read after a select will block. For example, if someone interposes OpenSSL > between select/read/write and the OS. Someone *can* do this and people *do* > do this. I'd like to point out one more time that most of your arguments are not related to this discussion. We have two file descriptors and we relay data between them. If you build a system that interposes OpenSSL between select/read/write and the OS - fine; if you build a system with "sophisticated timeout handling" - fine; but this is unrelated to this discussion. Your next argument will be: "if you do select() and space shuttle is flying ..." Best regards, -- Marek Marcola <[EMAIL PROTECTED]>