I'm running into the same error that was reported on the forum at: http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004385.html

> SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac

I've got an nginx server doing front-end SSL, with the upstream also over SSL and also nginx (fronting Apache). They're all running 1.5.13 (all Precise 64-bit), so I can play with options like ssl_buffer_size. These servers run SSL-enabled web sites for my customers.

I'm curious whether there is any workaround for this besides patching OpenSSL, as mentioned a couple of weeks ago in http://trac.nginx.org/nginx/ticket/215. In the wake of Heartbleed, I'm not super excited about rolling my own openssl/libssl packages (and straying from easy updates), but I also need to put a lid on these SSL errors. I also haven't yet verified that the OpenSSL patch fixes my issue (I wanted to check here first).

As the forum thread notes, the errors seem to happen only with larger files (I haven't dug extensively, but every one I've seen has been at least a 500k file). I've also noticed that if I request *just* the file, it seems to succeed every time; the error only occurs when a number of other files are downloading at the same time. On a lark, I tried turning off front-end keepalives, but that didn't make any difference.

I've been playing with ssl_buffer_size on both the front-end (which is where the errors show up) and the upstream servers to see if there was a magic combination, but no combo makes things happy. Am I doomed to patch OpenSSL? Thanks!
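For reference, here's a minimal sketch of the kind of setup I'm describing: front-end nginx terminating SSL and proxying to an SSL upstream. All server names, certificate paths, and the ssl_buffer_size value below are placeholders, not my actual config:

```nginx
http {
    upstream backend_ssl {
        server 10.0.0.2:443;   # placeholder; upstream nginx (fronting Apache), also SSL
    }

    server {
        listen 443 ssl;
        server_name example.com;                             # placeholder

        ssl_certificate     /etc/nginx/ssl/example.com.crt;  # placeholder
        ssl_certificate_key /etc/nginx/ssl/example.com.key;  # placeholder

        # One of the knobs I've been tuning while chasing the
        # "decryption failed or bad record mac" errors
        # (nginx's default is 16k; ssl_buffer_size appeared in 1.5.9):
        ssl_buffer_size 8k;

        location / {
            # Upstream connection is also over SSL:
            proxy_pass https://backend_ssl;
        }
    }
}
```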
_______________________________________________
nginx mailing list
[email protected]
http://mailman.nginx.org/mailman/listinfo/nginx
