[openssl-users] DTLS fragmentation and mem BIO
Hi all, first of all, apologies if this has been asked before. I've searched the archives pretty much everywhere, and only found partial indications as to how this should be dealt with.

The problem I'm facing deals with using DTLS with mem BIOs, as I have to take care of the transport myself. Specifically, I've implemented a WebRTC gateway called Janus, which means all the connectivity-related stuff is delegated to another library (libnice, in this case). This mostly works great (kudos to you guys!), but I have problems as soon as packets exceed the MTU, which can easily happen whenever, for instance, you try to handshake with certificates larger than 1024 bits.

I read around that the DTLS stack in OpenSSL automatically deals with this, and in fact this seems to be happening: what isn't working is the mem BIO part of it. More specifically, OpenSSL does indeed take care of fragmenting the packets according to what is assumed to be the MTU (1472 by default, or the value set in s->d1->mtu). The problem is that the mem BIO ignores that fragmentation info completely, and so, when you do a BIO_read, it makes the whole packet available to the application anyway. This results in the whole buffer being passed to nice_agent_send (the method libnice exposes to send packets), which means it's just as if nothing had been fragmented: the packet is too large and the network drops it. You can verify this by using, e.g., a 4096-bit certificate and capturing the DTLS traffic with Wireshark: you'll see that the message is recognized as being composed not only of multiple messages, but also of fragments.

Is there any way I can force the BIO to return the individual fragments/messages when I do a BIO_read, so that I can send properly sized packets? I've tried looking around but found no insight on how to do that. The only approach that came to my mind was to manually inspect the buffer that is returned and split messages/fragments myself, but I'd rather avoid delving into the specifics of the protocol if possible.

Thanks in advance for any help you may provide me with!

Lorenzo

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
Re: [openssl-users] DTLS fragmentation and mem BIO
On 05/06/15 08:09, Lorenzo Miniero wrote:
> [...]
> Is there any way I can force the BIO to return the individual
> fragments/messages when I do a BIO_read, so that I can send properly
> sized packets?

Hmm. An interesting problem. The issue is that a mem BIO has no knowledge of datagram semantics (perhaps we need to add something for OpenSSL 1.1.0). In a dgram BIO each BIO_write translates into a single datagram being produced. In a mem BIO you just have a big bucket of memory, and every time you get a BIO_write you just add the data onto the end of everything that we've got so far, and so the packet boundaries are not respected.

How about you create a custom filter BIO? All it would need to do is proxy all calls down to the underlying mem BIO. Along the way, though, it could take note of where the packet boundaries are, so when you call BIO_read it only gives you a datagram at a time.

Matt
Re: [openssl-users] DTLS fragmentation and mem BIO
2015-06-05 10:31 GMT+02:00 Matt Caswell m...@openssl.org:
> How about you create a custom filter BIO? All it would need to do is
> proxy all calls down to the underlying mem BIO.

Thanks for the very quick answer! Your suggestion does indeed make much more sense than manually inspecting the buffer as I had thought of doing, as you don't need to know anything about the protocol, but only to be ready to index the packets you see passing by. I never tried writing a BIO filter, but there's a first time for everything :-)

Just one quick question about this: are messages/packets passed to the BIO actually split, and then just queued by the mem BIO in the buffer, or can there be cases where a larger-than-normal buffer is passed to the BIO anyway, meaning a manual split could be needed nevertheless from time to time?

Thanks,
Lorenzo
Re: [openssl-users] DTLS fragmentation and mem BIO
On 05/06/15 10:20, Lorenzo Miniero wrote:
> Just one quick question about this: are messages/packets passed to the
> BIO actually split [...] or can there be cases where a larger-than-normal
> buffer is passed to the BIO anyway?

No, there should be no need for the BIO to do any splitting. Everything that gets written to the BIO should be a datagram.

One issue that does spring to mind is that in your filter BIO you may want to implement some of the dgram ctrls that DTLS uses. This depends on how you want to manage setting your MTU. Do you set an MTU size explicitly using SSL_set_mtu(ssl, mtu) or DTLS_set_link_mtu(ssl, mtu)? Also, do you set the option SSL_OP_NO_QUERY_MTU? If you use that option then you should set an MTU size explicitly. If you don't set the SSL_OP_NO_QUERY_MTU option then the DTLS code will attempt to query the underlying BIO for information about the MTU size. That would mean you would have to implement the following additional ctrls:

- BIO_CTRL_DGRAM_GET_FALLBACK_MTU - returns a default MTU size if querying fails for some reason
- BIO_CTRL_DGRAM_QUERY_MTU - queries the transport for the MTU size to be used
- BIO_CTRL_DGRAM_SET_MTU - sets the MTU size on the underlying transport
- BIO_CTRL_DGRAM_MTU_EXCEEDED - returns true if the datagram we just tried to send failed because we exceeded the max MTU size

Matt
Re: [openssl-users] Building OpenSSL with FIPS crypto Module Linker forking too many processes
REPOSTING TO PUSH TO OFFICIAL GROUP

I was wondering if someone has seen this issue before. I am guessing the problem is on my side, because I can replicate it on Debian 8 and Ubuntu 14.4. I am using OpenSSL 1.0.2a and the crypto module from OpenSSL ecp 2.0.9.

env settings:
CC=/home/myssluser/workspace/libs/openssl-fips-ecp-2.0.9/fips/fipsld
FIPSLD_CC=/usr/bin/gcc
FIPSDIR=/usr/local/ssl/fips-2.0

for building the fips canister:
./config fipscanisterbuild no-asm
make
make install

for OpenSSL:
./config fips no-asm
make
make install

This seemed to be pretty straightforward. I think I created the fipscanister.o correctly: everything compiled and linked for the canister, and I linked it to a small test app that worked. I then tried to build OpenSSL; it built fine, but on the last linking step the linker just kept forking processes out of control on both OSs, until I got a message that the linker cannot fork any new processes. Any pointers would be appreciated.

/home/myssluser/workspace/libs/openssl-fips-ecp-2.0.9/fips/fipsld: 174: /home/myssluser/workspace/libs/openssl-fips-ecp-2.0.9/fips/fipsld: Cannot fork
../Makefile.shared:164: recipe for target 'link_app.' failed
make[2]: *** [link_app.] Error 2
make[2]: Leaving directory '/home/myssluser/workspace/libs/openssl-1.0.2a/apps'
Makefile:153: recipe for target 'openssl' failed
make[1]: *** [openssl] Error 2
make[1]: Leaving directory '/home/myssluser/workspace/libs/openssl-1.0.2a/apps'
Makefile:285: recipe for target 'build_apps' failed
make: *** [build_apps] Error 1
mssluser@debian8:~/workspace/libs/openssl-1.0.2a$

--
View this message in context: http://openssl.6102.n7.nabble.com/Building-OpenSSL-with-FIPS-crypto-Module-Linker-forking-too-many-processes-tp58444p58471.html
Sent from the OpenSSL - User mailing list archive at Nabble.com.
Re: [openssl-users] DTLS fragmentation and mem BIO
I see you got it working! Just some comments below.

On 05/06/15 12:34, Lorenzo Miniero wrote:
> As far as I've understood, what I should do is change the current
> pattern I use for outgoing packets:
>     application <-- memBIO <-- ssl
> to something like this:
>     application <-- memBIO <-- filter <-- ssl
> or this:
>     application <-- filter <-- memBIO <-- ssl

I took a very brief look at your code and I see you went with the first option. That's fine, although I would have done it slightly differently:

    application --     -- ssl
                 |     |
                 |     V
                 filter
                  ^  |
                  |  V
                 memBIO

i.e. the filter does all the reading and writing to the memBIO. libssl calls BIO_write(), the filter takes note of the packet sizes, and then writes to the memBIO. When the application wants to read data it calls BIO_read on the filter, and the filter figures out how big the packet needs to be and reads that amount out of the memBIO. Your way works too, though.

Matt
Re: [openssl-users] DTLS fragmentation and mem BIO
2015-06-05 20:18 GMT+02:00 Matt Caswell m...@openssl.org:
> i.e. the filter does all the reading and writing to the memBIO.

Ah, I didn't know that was an option: I'm quite unfamiliar with how BIO filters work, so I just went with what made sense to me while experimenting with them. I'll try doing something along the lines you suggested as soon as I have some time, thanks!

Lorenzo
Re: [openssl-users] Building OpenSSL with FIPS crypto Module Linker forking too many processes
Well, since you're using the fips-ecp tarball, you'll need to include no-ec2m when configuring OpenSSL 1.0.2a. But this isn't why you're seeing a fork error from fipsld.

I'm using Ubuntu 14.04 (Is there a 14.4?) and don't see any issue. However, I'm not setting CC, FIPSLD_CC and FIPSDIR; you shouldn't have to set these. Also, you're not doing a make depend after the config for OpenSSL 1.0.2a. Here's a summary of the procedure that worked for me:

wget --no-check-certificate https://www.openssl.org/source/openssl-1.0.2a.tar.gz
wget --no-check-certificate https://www.openssl.org/source/openssl-fips-ecp-2.0.9.tar.gz
tar -xzvf openssl-fips-ecp-2.0.9.tar.gz
cd openssl-fips-ecp-2.0.9/
./config fipscanisteronly no-asm --prefix=/nobackup/tmp/x88/fips
make
make install
cd ..
tar -xzvf openssl-1.0.2a.tar.gz
cd openssl-1.0.2a/
./config fips no-ec2m no-asm --with-fipsdir=/nobackup/tmp/x88/fips
make depend
make clean
make

On 06/05/2015 09:23 AM, OpenSSL Curious wrote:
> [...]
Re: [openssl-users] DTLS fragmentation and mem BIO
Eureka, I got it working! Thanks to the feedback from Matt and Alfred, I managed to create a filter that does what I need. To those who may be interested, it's available here: https://github.com/meetecho/janus-gateway/pull/254

Thanks for your great support!
Lorenzo

2015-06-05 13:34 GMT+02:00 Lorenzo Miniero lmini...@gmail.com:
> [...]
Re: [openssl-users] DTLS fragmentation and mem BIO
2015-06-05 12:30 GMT+02:00 Matt Caswell m...@openssl.org:
> One issue that does spring to mind is that in your filter BIO you may
> want to implement some of the dgram ctrls that DTLS uses.

Hi Matt, thanks for the clarification and for the hints.

I've started looking into filters and I have some doubts, though, also taking into account what you suggested, and I apologize again if this turns out to be silly. As far as I've understood, what I should do is change the current pattern I use for outgoing packets:

    application <-- memBIO <-- ssl

to something like this:

    application <-- memBIO <-- filter <-- ssl

or this:

    application <-- filter <-- memBIO <-- ssl

that is, a new BIO filter that enforces the fragmentation I talked about. Not exactly sure which one should be the way to go, but I've given this some thought.

The first one seems conceptually correct, in the sense that the filter receives the properly sized packets from the stack; I see issues in how to progressively make them available to the memBIO, though, especially considering that we cannot relay a BIO_read call (that is, have the mem BIO ask the next BIO in the chain for data when data is requested). My guess, looking at the BIOs code, is that this is all asynchronous: the DTLS stack issues a BIO_write that reaches the filter, and then it's the filter that forwards the written data (modified or not) to the next BIO, in this case the mem one, using another BIO_write. My concern here is how to figure out when to issue such a write: if we want to make sure that the mem BIO never returns too much data when a BIO_read is issued, we should never issue a new BIO_write to feed it with new data from the filter until the previous one has been read, something we cannot do if we don't know when the data has been read in the first place.

The second one, as a consequence, may actually be more suited for the purpose, as we can always return only what we want to in a BIO_read. The issue there is what I mentioned in my previous post, that is, my fear that the memBIO could feed too much data to my filter in a BIO_write, which would force my filter to inspect the payload and manually split packets as I'd do in my application. But according to what you said at the beginning of your reply, this shouldn't be the case, right? That is, the DTLS stack will issue different BIO_writes towards the memBIO for each packet/fragment, and this would automatically be forwarded to my filter, am I correct? Since in this case there wouldn't be any explicit BIO_read done by the application that might return what's been buffered so far.

Apologies if I'm adding confusion with these questions, I'm just trying to figure out the best approach to the new filter. As an alternative, I guess I could just extend the existing mem BIO into a new, custom BIO and handle it all there, but my feeling is that a filter would be a cleaner way to do that.

Thanks again!
Lorenzo
Re: [openssl-users] DTLS fragmentation and mem BIO
On 05/06/15 11:20, Lorenzo Miniero wrote: 2015-06-05 10:31 GMT+02:00 Matt Caswell m...@openssl.org mailto:m...@openssl.org: On 05/06/15 08:09, Lorenzo Miniero wrote: Hi all, first of all, apologies if this has been asked before. I've searched archives pretty much everywhere, and only came to partial indications as to how this should be dealt with. The problem I'm facing deals with using DTLS with mem BIOs, as I have to take care of transport myself. Specifically, I've implemented a WebRTC gateway called Janus, which means all the connectivity related stuff is delegated to another library (libnice in this case). This mostly works great (kudos to you guys!), but I have problems as soon as packets exceed the MTU, which can easily happen whenever, for instance, you try to handshake with certificates larger than 1024 bits. I read around that the DTLS stack in OpenSSL automatically deals with this, and in fact this seems to be happening: what isn't working is the BIO mem part of this. More specifically, OpenSSL does indeed take care of fragmenting the packets according to what is assumed to be the MTU (1472 by default, or the value as set in s-d1-mtu). The problem is that the mem BIO ignores that fragmentation info completely, and so, when you do an BIO_read, makes available at the application the whole packet anyway. This results in the whole buffer being passed to nice_agent_send (the method libnice exposes to send packets), which means it's just as not fragmenting anything: the packet is too large and the network drops it. You can verify this by using, e.g., a 4096 bits certificate, and capture the DTLS traffic with Wireshark: you'll see that the message is recognized as composed of not only multiple messages, but also fragments. Is there any way I can force the BIO to return the invididual fragments/messages when I do a BIO_read, so that I can send properly sized packets? I've tried looking around but found no insight on how to do that. 
The only approach that came to my mind was to manually inspect the buffer that is returned, and split messages/fragments myself, but I'd rather avoid delving within the specifics of the protocol if possible. Thanks in advance for any help you may provide me with! H. An interesting problem. The issue is that a mem BIO has no knowledge of datagram semantics (perhaps we need to add something for OpenSSL 1.1.0). In a dgram BIO each BIO_write translates to a single datagram being produced. In a mem BIO you just have a big bucket of memory, and every time you get a BIO_write you just add the data onto the end of everything that we've go so far, and so the packet boundaries are not respected. How about you create a custom filter BIO? All it would need to do is proxy all calls down to the underlying mem BIO. Along the way though it could take note of where the packet boundaries are, so when you call BIO_read it only gives it to you a datagram at a time. Matt Thanks for the very quick answer! Your suggestion does indeed make much more sense that manually inspecting the buffer as I thought of, as you don't need to know anything about the protocol but only to be ready to index the packets you see passing by. I never tried writing a BIO filter but there's a first time for everything :-) Just one quick question about this: are messages/packets passed to the BIO actually splitted, and then just queued by the mem BIO in the buffer, or can there be cases where a larger than normal buffer is passed to the BIO anyway, meaning a manual splitting could be needed nevertheless from time to time? hey, I just want to point out that we have been using OpenSSL in the libre stack for a long time, with successful deployment. 
The DTLS code is here:

http://www.creytiv.com/doxygen/re-dox/html/tls__udp_8c_source.html

We are using 2 different BIOs; one for outgoing, one for incoming:

	tc->sbio_in = BIO_new(BIO_s_mem());
	if (!tc->sbio_in) {
		ERR_clear_error();
		err = ENOMEM;
		goto out;
	}

	tc->sbio_out = BIO_new(bio_udp_send);
	if (!tc->sbio_out) {
		ERR_clear_error();
		BIO_free(tc->sbio_in);
		err = ENOMEM;
		goto out;
	}

/alfred
Re: [openssl-users] DTLS fragmentation and mem BIO
2015-06-05 12:34 GMT+02:00 Alfred E. Heggestad <a...@db.org>:
> [earlier quotes snipped]
>
> Hey, I just want to point out that we have been using OpenSSL in the
> libre stack for a long time, with successful deployments.
> The DTLS code is here:
>
> http://www.creytiv.com/doxygen/re-dox/html/tls__udp_8c_source.html
>
> We are using 2 different BIOs; one for outgoing, one for incoming:
>
> [code snipped]

Hi Alfred,

thanks for sharing this. So you've basically created a new BIO
source/sink type, and you use that one instead of a mem BIO for
sending, right? That might be an interesting approach (e.g., creating a
custom BIO based on BIO mem), especially if it turns out that
Re: [openssl-users] OpenSSL.cnf File path
On 6/4/2015 1:17 PM, Cathy Fauntleroy wrote:
> Hello,
>
> I have OpenSSL 1.0.2a installed on my Windows 7 box. I am attempting
> to generate a CSR so new security certificates can be issued, and am
> running into the following error when the command to generate the .csr
> file is issued from the C:\OpenSSL-Win64\bin directory:
>
> 	WARNING: can't open config file: /usr/local/ssl/openssl.cnf
> 	Unable to load config info from /usr/local/ssl/openssl.cnf
>
> This is not a valid path on my Windows box. openssl.cnf resides in
> C:\OpenSSL-Win64\bin. I verified the system PATH is correct also. Any
> ideas? Thanks.
>
> Cathy

Reboot your computer. The installer failed to notify all windows of the
declaration of OPENSSL_CONF. A reboot corrects the issue.

-- 
Thomas Hruska
Shining Light Productions
Home of BMP2AVI and Win32 OpenSSL.
http://www.slproweb.com/