Re: leaking ?
Nope, still didn't solve the memory issue. For now it looks like OpenSSL isn't able to share connections between threads.

On 8/31/07, Marek Marcola [EMAIL PROTECTED] wrote:
> Hello,
>
> > StartupThreads get an incoming connection, create a SSL_new(ctx), create a
> > BIO_new(BIO_s_socket()), BIO_set_fd, and SSL_set_bio. Then they
> > SSL_accept(ssl) and SSL_read what is coming in. So far so good.
> > When the input is something this httpsd should react to, I put a user
> > class in an array, with bio, ssl and socket as class variables, to pass
> > them on to a worker thread, which loops over this array and SSL_writes to
> > the ssl (once every second). When SSL_write returns < 1, the socket gets
> > closed and set to -1, and the destructor for that user gets called by the
> > cleanup thread. This is what is in there:
> >
> >     if (bio)
> >         BIO_ssl_shutdown(bio);
> >     if (ssl) {
> >         ERR_clear_error();   /* added and removed these, no difference */
> >         ERR_remove_state(0); /* idem */
> >         SSL_shutdown(ssl);
> >         SSL_free(ssl);
> >     }
> >
> > So what is the problem here: it works fine, it just leaves a lot of used
> > memory behind (memory leak?)
>
> I'm not sure, but maybe you should call ERR_remove_state(0) last (after
> SSL_free()), because the error stack is allocated automatically, and after
> these SSL_* calls you may allocate this stack again.
>
> Best regards,
> --
> Marek Marcola [EMAIL PROTECTED]

__
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           [EMAIL PROTECTED]
Re: leaking ?
Well, not like I'm doing it now anyway: Initthread - array of connections - push thread - array of connections - cleanup thread. I guess I must be missing something :)

On 9/3/07, kris vandercapellen [EMAIL PROTECTED] wrote:
> Nope, still didn't solve the memory issue. For now it looks like OpenSSL
> isn't able to share connections between threads.
Re: leaking ?
kris vandercapellen wrote:
> Well, not like I'm doing it now anyway: Initthread - array of connections -
> push thread - array of connections - cleanup thread. I guess I must be
> missing something :)

I think the best bet would be to run it under valgrind with some minimal connections and identify the source of the leak. Then someone here would be able to help you out. OR you could post the code so that people could look for obvious mistakes. Option 1 seems better to me ;)

-jb
--
No snowflake in an avalanche ever feels responsible.
Re: How to use RSA?
Thanks a lot. :-)))

-------- Original Message --------
Date: Sat, 1 Sep 2007 20:13:50 +0300
From: Stefan Vatev [EMAIL PROTECTED]
To: openssl-users@openssl.org
Subject: Re: How to use RSA?

> You might find this useful:
> http://e-doc.svn.sourceforge.net/viewvc/e-doc/trunk/ossl/
>
> Regards,
> Stefan
>
> On 8/30/07, Martin Salo [EMAIL PROTECTED] wrote:
> > Hello Mailinglist,
> > I want to use RSA for encryption, so I need to know how to create an RSA
> > key pair and how to encrypt and decrypt. Both must be done within RAM. I
> > want to use the OpenSSL API, but all I could find was this page:
> > http://www.openssl.org/docs/crypto/rsa.html#
> > 1. Is it explained anywhere in more detail how these functions work?
> > 2. Is a simple C/C++ code snippet available that shows how I can use RSA
> > from OpenSSL?
> > Regards
> > Martin

--
GMX FreeMail: 1 GB mailbox, 5 e-mail addresses, 10 free SMS. All info and free sign-up: http://www.gmx.net/de/go/freemail
Re: usage of DES_encrypt1 in 0.9.7j
Really no help with that? I spent so much time on it and you are my last hope...

Thanks,
Koza

Koza wrote:
> I'd like to use the function DES_encrypt1 and I have the following code:
>
>     const_DES_cblock key = {0x11,0x11,0x11,0x11,0x11,0x11,0x11,0x11};
>     DES_key_schedule k;
>     DES_set_key_unchecked(&key, &k);
>     DES_LONG dataenc1[2];
>     dataenc1[0] = 0x01234567;
>     dataenc1[1] = 0x89abcdef;
>     DES_encrypt1(dataenc1, &k, 1);
>
> but it doesn't work... instead of 8a5ae1f81ab8f2dd I receive
> eb0b38b6de16165. DES_ecb_encrypt works fine with that data
> (8a5ae1f81ab8f2dd is the valid test vector for the key/plaintext set
> above). Any idea how to use that function?

--
View this message in context: http://www.nabble.com/usage-of-DES_encrypt1-in-0.9.7j-tf4246728.html#a12460262
Sent from the OpenSSL - User mailing list archive at Nabble.com.
Use Rand_Seed on windows?
Hello Mailinglist,

in the OpenSSL documentation it is written that I should initialize with RAND_seed() before using RSA_public_encrypt() and RSA_generate_key_ex(). But I haven't found any good examples that show how to do this (for Windows). I found this example in the test folder over and over again, but it seems that this is only a dummy:

    static const char rnd_seed[] =
        "string to make the random number generator think it has entropy";
    RAND_seed(rnd_seed, sizeof rnd_seed); /* or OAEP may fail */

What do you think if I use rand() from cstdlib to create a char string and put it into RAND_seed():

    const int SeedLen = 100;
    char RandSeed[SeedLen];
    srand((unsigned)time(NULL));
    for (int i = 0; i < SeedLen; i++)
        RandSeed[i] = rand() % 256;

Or is there a better way to set the random seed? I cannot bother a user with key input or anything like that. In some test applications I haven't seen any RAND_seed calls, so maybe it is not so important?

Regards
Martin
Re: Use Rand_Seed on windows?
On Mon, Sep 03, 2007, Martin Salo wrote:
> in the OpenSSL documentation it is written that I should initialize with
> RAND_seed() before using RSA_public_encrypt() and RSA_generate_key_ex().
> But I haven't found any good examples that show how to do this (for
> Windows).

OpenSSL now uses several sources of entropy automatically on Windows, so you shouldn't need to do this yourself.

Steve.
--
Dr Stephen N. Henson. Email, S/MIME and PGP keys: see homepage
OpenSSL project core developer and freelance consultant.
Funding needed! Details on homepage.
Homepage: http://www.drh-consultancy.demon.co.uk
Re: Use Rand_Seed on windows?
Ok, thanks a lot :)

-------- Original Message --------
Date: Mon, 3 Sep 2007 15:02:49 +0200
From: Dr. Stephen Henson [EMAIL PROTECTED]
To: openssl-users@openssl.org
Subject: Re: Use Rand_Seed on windows?

> OpenSSL now uses several sources of entropy automatically on Windows, so
> you shouldn't need to do this yourself.
>
> Steve.

--
The GMX SmartSurfer helps you save up to 70% of your online costs! Ideal for modem and ISDN: http://www.gmx.net/de/go/smartsurfer
Re: usage of DES_encrypt1 in 0.9.7j
Hello,

> Really no help with that? I spent so much time on it and you are my last
> hope...
>
> > I'd like to use the function DES_encrypt1 ... instead of 8a5ae1f81ab8f2dd
> > I receive eb0b38b6de16165 ... Any idea how to use that function?

This is a data conversion problem, look at the example.

Best regards,
--
Marek Marcola [EMAIL PROTECTED]

#include <openssl/des.h>
#include <stdio.h>

#define c2l(c,l)  (l =((DES_LONG)(*((c)++))),        \
                   l|=((DES_LONG)(*((c)++))) <<  8L, \
                   l|=((DES_LONG)(*((c)++))) << 16L, \
                   l|=((DES_LONG)(*((c)++))) << 24L)
#define l2c(l,c)  (*((c)++)=(unsigned char)(((l)       ) & 0xff), \
                   *((c)++)=(unsigned char)(((l) >>  8L) & 0xff), \
                   *((c)++)=(unsigned char)(((l) >> 16L) & 0xff), \
                   *((c)++)=(unsigned char)(((l) >> 24L) & 0xff))

int main()
{
    char *in_data = "\x01\x23\x45\x67\x89\xab\xcd\xef";
    unsigned char *in = (unsigned char *)in_data;
    char out_data[8];
    unsigned char *out = (unsigned char *)out_data;
    const_DES_cblock key = {0x11,0x11,0x11,0x11,0x11,0x11,0x11,0x11};
    DES_key_schedule k;
    DES_set_key_unchecked(&key, &k);
    DES_LONG ll[2];
    DES_LONG l;
    int i;

    printf(" in       = ");
    for (i = 0; i < 8; i++)
        printf("%02x ", in_data[i] & 0xFF);
    printf("\n");

    c2l(in, l); ll[0] = l;
    c2l(in, l); ll[1] = l;
    printf(" in ll[0] = %08lx\n", ll[0]);
    printf(" in ll[1] = %08lx\n", ll[1]);

    DES_encrypt1(ll, &k, DES_ENCRYPT);

    printf("out ll[0] = %08lx\n", ll[0]);
    printf("out ll[1] = %08lx\n", ll[1]);

    l = ll[0]; l2c(l, out);
    l = ll[1]; l2c(l, out);

    printf("out       = ");
    for (i = 0; i < 8; i++)
        printf("%02x ", out_data[i] & 0xFF);
    printf("\n");
    return 0;
}
Re: BIO_set_nbio_accept functionality
Jim Marshall wrote:
> I'm looking at using non-blocking I/O in some places in my code, and I have
> a question. The man page for 'BIO_set_nbio_accept' says it will set the
> underlying socket to blocking/non-blocking mode, but all the examples and
> stuff I see say to use 'BIO_socket_ioctl(SSL_get_fd(ssl), FIONBIO, &sl)'.
> Can 'BIO_set_nbio_accept' be used to change the state of an SSL socket?
>
> Thank you
> Jim

No one has a comment on this? Did I miss something in a FAQ or something?
Re: SSL_peek vs. SSL_pending...
* David Schwartz wrote on Thu, Aug 30, 2007 at 13:44 -0700:

> > If the first byte (or any part of the buffer) could be written instantly,
> > or (e.g. if no select returned ready before :)) after some amount of time
> > waited, write should return to give the calling application the control.
>
> I can think of no situation where you'd want to wait forever for the first
> byte to be sent but only for a certain amount of time for the second byte
> to be sent. That's one of the strangest suggestions I've ever heard.

I cannot imagine any situation where you'd wait forever either, right, but for some small command-line tools (interruptable by ^C or so) it sometimes makes sense to program that way. Some small helper tool may wait for a request to arrive (forever or until ^C, whatever happens first :)), but `during' communication, i.e. after the first write worked, different error handling after some reasonable timeout is needed (however, this may not match exactly here, because it sits on top of some protocol on top of a serial link). Maybe infinite timeouts or blocking I/O make sense only in more or less interactive things.

I looked up write(2) on opengroup.org and found a page that surprised me :) The information on opengroup.org says `The write() function shall attempt to write nbyte bytes from the buffer...'. My man page says `write writes up to count bytes to the file...'. My man page claims to be conforming to `SVr4, SVID, POSIX, X/OPEN, 4.3BSD'. The opengroup.org page distinguishes write semantics based on what kind of thing the fd refers to (file, FIFO, STREAM, ...), which IMHO cannot be correct because it destroys the abstraction.

> You don't really have a choice. If they published only the semantics for
> 'write' that applied to every possible thing you could ever want to write
> to, they wouldn't be enough to allow you to write sane programs to deal
> with sockets, files, or anything for that matter.

mmm... this would be a pity... I expect that write does the same `something' reasonable on a socket as on a serial line. I mean, I don't want a WaitForMultipleObjects in case the object accidentally is something selectable :) but of course there are always specifics.

> For example, suppose they only documented the semantics for 'select' that
> applied to everything you could ever 'select' on. That would mean they
> couldn't tell you that a listening TCP socket that had a new connection
> would be marked ready for reading, because that applies only to sockets.
> So then how would you know how to use 'select' for that?

yeah, but using select before accept has a little taste of a `workaround' in the absence of another call, hasn't it? In this case probably someone uses select just because its description is closest to what is desired; according to my man page, technically select should not be influenced by accept situations, but that would be a pity. So yes, how would I know how to use 'select' for that? Now that you raise it, nice question.

> /dev/urandom isn't a file, it's a device.

For me, it is a file. Wikipedia mentions /dev/null as a file (clarifying that names are something associated with the file itself). When the file is /dev/urandom (a random number generator device, at least on Linux), it shall NOT return the data previously written. For devices (and sockets :)), I think this is obvious. anyway.

> Again, /dev/urandom is not a file. If it helps, where you see the word
> "file" replace the definition of "file" from the standard. (Or the words
> "regular file".)

Would be horrible I think; such a limitation would reduce the flexibility a lot. I think device files and many other files (including sockets) are a great idea. It's something you can read from and/or write to; why should it matter whether it is on a local hard disk? A correct implementation needs to guarantee that.

> > If queue discarding is possible, a flag must be stored (or so) to make
> > read return EAGAIN or whatever (probably causing most code to break
> > anyway, because no one expects EAGAIN after select > 0 :-)).
>
> That's simply impossible to do. The problem is that there is no unambiguous
> way to figure out whether an operation is the one that's not supposed to
> block. Consider:
>
> 1) A thread calls 'select'.
> 2) That thread later calls 'read'.
>
> If the 'select' changes the semantics of the 'read', then if the thread
> didn't know that some other code called 'select' earlier, the later
> read-calling code can break.

If some other thread called read (or another function), of course before the next read a select must be called again. Guarantees may be made only for the next read, not for the 103rd read called after the second reboot :)

> No, you missed my entire point. Please read it again. I was talking about
> *THAT* read, not another read after that.

sorry, seems I'm unable to get it (I read it several times :)). I think the select could (if needed) store some flag (associated with some fd) to
Re: BIO_set_nbio_accept functionality
Doesn't need a FAQ. The man page says the purpose of the BIO_set_nbio_accept macro is to set blocking or non-blocking mode. Seems like that's what it will do.

Jim

On Sep 3, 2007, at 11:31 AM, Jim Marshall wrote:
> > I'm looking at using non-blocking I/O in some places in my code ... Can
> > 'BIO_set_nbio_accept' be used to change the state of an SSL socket?
>
> No one has a comment on this? Did I miss something in a FAQ or something?
RE: SSL_peek vs. SSL_pending...
> > sorry, seems I'm unable to get it (I read it several times :)). I think
> > the select could (if needed) store some flag (associated with some fd) to
> > remember that it returned that read must not block by guarantee. Maybe
> > some list including all fds where select returned this. Any OS function
> > (or, if possible, any OS function that may influence this fd) resets the
> > flag (no guarantee anymore). But if read is called and would block
> > because of some changed situation, it could decide to return right
> > before resetting the flag, maybe setting errno to EAGAIN. So I think the
> > guarantee itself could be given (not claiming that this would be a good
> > idea).
>
> As the examples show, there is no way to figure out *which* 'read' must
> not block. There is no unambiguous way to figure out which 'read' the
> application thinks of as being the one that should not block. It's hard to
> show with 'read', but I can show a simple example with 'write'. Imagine an
> implementation that tried to ensure that a 'write' after 'select' did not
> block. Consider:
>
> 1) An application calls 'select'. But this comes from denial-of-service
> attack detection code that's checking to see if the kernel buffers stay
> full for too long. It has nothing to do with the I/O code.
>
> 2) The application calls 'write', expecting it to block until all the data
> can be written.
>
> Your change would break this application, as the 'select' would change the
> semantics of the 'write'. Now consider this: there's a 'select' from one
> thread followed by a 'write' from another thread. Are these two events
> unrelated, and the application expects blocking semantics? Or is this the
> subsequent write that the application expects not to block? There's no way
> to tell.
>
> > > In other words, 'select' must predict the future. Sorry, that's not
> > > possible. There is no way for 'select' to know what integrity checks
> > > will be performed at read time.
> >
> > Why predict the future? If data was put into the read buffer (whether
> > verified or not), select and read won't block. If data is in the buffer
> > and by contract can only be removed by read (or close maybe, doesn't
> > matter), read won't block. Wouldn't this work? I mean, at least
> > theoretically?
>
> No, because 'select' has to work on protocols with all different kinds of
> semantics. It is not theoretically possible to ensure that these semantics
> will make sense with every protocol 'select' might be used with. Consider
> the following:
>
> 1) An application disables UDP checksums.
>
> 2) An application calls 'select' and gets a 'read' hit on a packet with a
> bad checksum.
>
> > So this would mean the bad checksum would not be detected/evaluated and
> > the data would be stored in the buffer, right?
>
> Not likely. That would mean the kernel has to verify the checksum in a
> separate operation, which is a waste of memory bandwidth.
>
> 3) An application performs a socket option call asking for checksum
> checking to be enabled.
>
> > ok, so from now on newly arriving data would not be stored in the input
> > buffer unless the checksum was proven (not applied retroactively of
> > course - wouldn't be possible, because the checksums were not even
> > stored).
>
> Suppose the input buffer *is* the packet buffer.
>
> 4) An application calls 'recvmsg'. Should it get the packet with the bad
> checksum? In other words, are you really sure you want 'select' to
> *change* the semantics of the socket?
>
> > It gets the data that arrived in the packet where the checksum was not
> > evaluated at all, because this was configured; yes, that would be what I
> > expect. select should not influence the mechanism (checksum
> > verification) at all.
>
> But you are demanding that 'select' influence the mechanism, because you
> are saying that because there was a 'select' hit, the packet must be
> returned, even if a subsequent change asks for it to be discarded. If not
> for the 'select' hit, it would be perfectly reasonable to discard the
> packet later.
>
> > yes, of course, the device may not even know about accept at all. But I
> > mean, for read non-blocking is guaranteed according to some
> > understandings, but for accept no one claims this (but people expect it
> > because the working examples show it :)).
>
> This is the problem. At one time, people expected it for 'accept', and
> their code broke. Why tell people to repeat that mistake?
>
> I bring them up to address the fundamental point -- you don't want
> 'select' to change the semantics of future socket operations. As a
> result, you can't ask for 'select' to make future guarantees. You really
> can't have one without the other.

Ahh, this one I understand! My `flag setting' example surely would change the semantics of future socket operations. But in a case the application cannot distinguish (because it has no way to decide whether this was changed or original behavior). So you say that it is not only that select does not give this non-block guarantee, but also