On Wed, Jun 30, 2010 at 9:12 PM, Brian Makin <ma...@vivisimo.com> wrote:

>
> Thank you... this is mostly what I expected.
> In our case we having a problem with a CGI program so the response time
> is important and initialization happens many times.
>
> We may just have to hope no other boxes display this behavior :)
>
>
I hadn't thought about the Win7 screen issue mentioned by Dr. Henson, as you
said you have several machines (and, I assume, of similar make, OS-wise)
with only one playing up, which is a symptom I have learned to associate
with the RAND_poll() heap-walking issue. That doesn't mean I'm right, of
course, which is why checking makes sure here.

Since, in your case, the timeout cost is significant (as it's happening in a
CGI app), it's worth checking which one it is on your box. I'd be
interested to hear whether it's the heap-walk loop or the (IIRC)
read_screen() call itself causing the delay. (I'd bet on the former, as
Win7 ~~ Server2008 in this regard and you mentioned you're running on 2003.)



[OT here] Just a thought, but since you're init-ing and exit-ing OpenSSL
repeatedly due to starting and exiting the CGI exe, you may consider
'improving your random pool over time' by adding code to the CGI that dumps
the random pool to a file at the end of the run (open it for write with
exclusive access; when the OS says another instance is already /writing/,
just shrug and don't write at all, as this isn't about keeping everything,
best effort is what we're after), while RAND_add()ing the file content at
CGI start. Note that I say RAND_add(): loading some 'unknown' amount of
previously gathered entropy from a file doesn't replace the 'regular'
entropy gathering that happens with each start!

The thought here is that you can alleviate any suspected reduction in
entropy-gathering 'quality' - as sources are removed from the gathering - by
accumulating the gathered entropy over an extended time, 'persisting' what
has been gathered so far across the run-time lifetime boundaries of
individual CGI instances.

The whole file I/O thing is best-effort based, so when multiple CGI
instances run in parallel only one gets to 'win' and the collection in the
other instances is 'lost'. That's okay, in a way, since we're thinking
longer term here: a CGI instance I(t+n) now gets a /chance/ at obtaining
more entropy than it would on its own, thanks to the successful persisting
action of a previous instance I(t) (instance 'I' at time 't').

[An alternative to classic fopen/fwrite/fclose might be memory-mapped I/O
with shared write access; no interprocess locking is needed, as we don't
care who gets to write his stuff in there, just as long as it happens fast
and we don't run the risk of completely zeroed file content or some such
nastiness.]

It's not ideal, but it at least helps cover your tracks when you cut out the
offending source in OpenSSL itself.


-- 
Met vriendelijke groeten / Best regards,

Ger Hobbelt

--------------------------------------------------
web:    http://www.hobbelt.com/
       http://www.hebbut.net/
mail:   g...@hobbelt.com
mobile: +31-6-11 120 978
--------------------------------------------------
