On 12/05/2012 06:22 PM, [email protected] wrote:
>
> Zitat von Martijn Brinkers <[email protected]>:
>
>> On 12/05/2012 06:03 PM, [email protected] wrote:
>>>
>>> Zitat von Martijn Brinkers <[email protected]>:
>>>
>>>> On 12/05/2012 05:44 PM, [email protected] wrote:
>>>>>
>>>>>
>>>>> Is it possible that this is related to the CRL downloading? Most of
>>>>> the time we have something like this nearby:
>>>>>
>>>>> 05 Dez 2012 17:34:23 | WARN  IO Exception downloading CRL. URI:
>>>>> ldap://ldap.sbca.telesec.de/CN=Deutsche%20Telekom%20Root%20CA%202,OU=T-TeleSec%20Trust%20Center,O=Deutsche%20Telekom%20AG,C=DE?AuthorityRevocationList.
>>>>>
>>>>>
>>>>> Message: javax.naming.NamingException: LDAP response read timed out,
>>>>> timeout used:120000ms.
>>>>> (mitm.common.security.crl.CRLStoreUpdaterImpl)
>>>>> [CRL Updater thread]
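
[Editor's note: the "timeout used: 120000ms" in the log matches the JNDI
LDAP read timeout. A minimal sketch of how such a timeout is typically
configured before opening the LDAP connection; the class name and the
choice of a 30 s connect timeout are illustrative assumptions, while the
two `com.sun.jndi.ldap.*` property keys are standard Sun/Oracle JNDI
properties.]

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapCrlTimeout {
    // Build the JNDI environment for an LDAP CRL download with explicit
    // timeouts, so a dead LDAP server cannot block the updater forever.
    public static Hashtable<String, String> ldapEnv(String url) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        // Read timeout: give up if no LDAP response arrives within 120 s
        // (this is the 120000 ms value that appears in the log above)
        env.put("com.sun.jndi.ldap.read.timeout", "120000");
        // Connect timeout: fail fast if the server is unreachable
        env.put("com.sun.jndi.ldap.connect.timeout", "30000");
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env =
                ldapEnv("ldap://ldap.sbca.telesec.de/");
        System.out.println(env.get("com.sun.jndi.ldap.read.timeout"));
    }
}
```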
>>>>
>>>> Downloading of CRLs is done on a separate thread. The only thing I can
>>>> think of is that downloading the CRL somehow uses all the CPU for some
>>>> time, although that should not happen since all threads have the same
>>>> priority. Do you have the certificate that contains that CRL
>>>> distribution point, so I can test whether I can download the CRL?
>>>>
>>>> Kind regards,
>>>>
>>>> Martijn
>>>
>>> There is more than one CA with a crappy CRL download URL. I will try to
>>> pick them out and send them to you in a private mail. There is obviously
>>> something wrong with the CRLs anyway:
>>>
>>> /var/log/old/james.wrapper.log.1:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: GC overhead limit exceeded
>>> /var/log/old/james.wrapper.log.1:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.1:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.1:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.1:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.1:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.1:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.2.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: GC overhead limit exceeded
>>> /var/log/old/james.wrapper.log.2.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.2.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: GC overhead limit exceeded
>>> /var/log/old/james.wrapper.log.3.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.3.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.3.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.4.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>> /var/log/old/james.wrapper.log.4.gz:Exception in thread "CRL Updater
>>> thread" java.lang.OutOfMemoryError: Java heap space
>>>
>>> I have now raised the memory limit in wrapper.conf from 512 MB to 700 MB
>>> to see if this at least fixes the OutOfMemoryError :-(
>>> And yes, one CPU is at 100% load for the duration of the CRL updates, but
>>> at least we have two of them :-)
>>
>> That might explain it. Unfortunately, CRLs can be large and
>> memory-consuming. I think 512 MB is too low for some large CRLs. For the
>> Virtual Appliance I have enabled "dynamic memory allocation", which uses
>> more memory if the system is configured with more memory. It's really
>> simple but effective (see /etc/default/djigzo). This is not enabled by
>> default, since djigzo might be installed on systems that run other things
>> as well.
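
[Editor's note: a minimal sketch of the "dynamic memory allocation" idea
described above, i.e. sizing the JVM heap as a fraction of physical RAM
instead of a fixed -Xmx value. The variable names, the 50% ratio, and the
512 MB floor are illustrative assumptions, not the actual contents of
/etc/default/djigzo.]

```shell
#!/bin/sh
# Size the JVM heap relative to installed RAM (illustrative sketch).

# Total physical memory in MB, from /proc/meminfo (Linux)
TOTAL_MB=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)

# Give the JVM roughly half of it, but never less than 512 MB
HEAP_MB=$(( TOTAL_MB / 2 ))
[ "$HEAP_MB" -lt 512 ] && HEAP_MB=512

JVM_HEAP="-Xmx${HEAP_MB}m"
echo "$JVM_HEAP"
```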
>>
>>  >java.lang.OutOfMemoryError: GC overhead limit exceeded
>>
>> This is a warning from the JVM that the system spends more time on
>> garbage collection than some maximum value. I think that the system is
>> then restarted. Somehow you have a CA with a very large CRL. The system
>> sets a maximum size for a downloaded CRL (see the djigzo.properties
>> parameter system.cRLDownloadParameters.maxBytes). You can lower this
>> value, but then some CRLs might no longer be downloaded because they
>> exceed the limit.
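
[Editor's note: a sketch of what a cap like
system.cRLDownloadParameters.maxBytes typically does; the class below is
illustrative, not Djigzo's actual code. The download stream is wrapped so
that reading aborts once the byte limit is exceeded, preventing a single
oversized CRL from exhausting the heap.]

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Wraps a download stream and fails once more than maxBytes are read.
public class MaxBytesInputStream extends FilterInputStream {
    private final long maxBytes;
    private long count;

    public MaxBytesInputStream(InputStream in, long maxBytes) {
        super(in);
        this.maxBytes = maxBytes;
    }

    // Track bytes read so far; abort the download when the cap is hit
    private void check(long n) throws IOException {
        count += n;
        if (count > maxBytes) {
            throw new IOException(
                "CRL exceeds maxBytes limit of " + maxBytes);
        }
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1) check(1);
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) check(n);
        return n;
    }
}
```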
>>
>> I'll need to spend time refactoring the CRL part. The problem is that
>> Java tries to load the whole CRL into memory, which can take a lot of
>> memory. What I have been thinking of for some time is to make a
>> database-backed CRL store, but this is not trivial to do.
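
[Editor's note: a sketch of the database-backed idea mentioned above:
instead of keeping the whole parsed X509CRL object on the heap, store only
the revoked serial numbers in an external lookup table and stream CRL
entries into it during parsing. The in-memory Set below is a stand-in for
a database table; the class and method names are illustrative, not Djigzo
code.]

```java
import java.math.BigInteger;
import java.util.HashSet;
import java.util.Set;

// Revocation lookup keyed by certificate serial number. A real
// implementation would back this with a database table per issuer and
// insert entries while streaming the CRL, never holding the full CRL.
public class RevocationIndex {
    private final Set<BigInteger> revokedSerials = new HashSet<>();

    // Called once per CRL entry during (streaming) parsing
    public void addRevoked(BigInteger serial) {
        revokedSerials.add(serial);
    }

    // Constant-memory revocation check, independent of CRL size
    public boolean isRevoked(BigInteger serial) {
        return revokedSerials.contains(serial);
    }
}
```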
>>
>> Can you see which CRL is giving you grief?
>>
>> Kind regards,
>>
>> Martijn
>
> I don't know how to quickly find out the size of the CRLs. I can provide
> you with a sample list of all CAs for which CRL trouble is logged. What
> about CRLs from non-root CAs?

Yes, that would help. You could also select all CRLs and download them as 
a .p7b file (but that might be a large file).

Regards,

Martijn

-- 
DJIGZO email encryption
_______________________________________________
Users mailing list
[email protected]
http://lists.djigzo.com/lists/listinfo/users
