configuration question

2003-08-19 Thread Henrik Bentel
Hi

I have a web app which serves both static and non-static content, both
secure and insecure (https and http).
Now, all my SSL configuration is under my secure virtual host, such that it
applies to everything. However, I have quite a bit of static content (images,
css, javascript, ...) which doesn't need to be very secure. I really only
want to secure my dynamic content.
But I don't want to generate absolute URLs on the fly to link to
non-secure static content. What I want is to make requests to certain URLs
less secure, such that processing is faster. For example, I have a
directory called art, which is just a defined alias for a directory. Is
there a way to make SSL processing for this directory less restrictive than
for generic requests to the virtual host, so that processing is faster?

Hope someone can help

Henrik Bentel

__
Apache Interface to OpenSSL (mod_ssl)   www.modssl.org
User Support Mailing List  [EMAIL PROTECTED]
Automated List Manager[EMAIL PROTECTED]


Re: configuration question

2003-08-19 Thread Cliff Woolley
On Wed, 20 Aug 2003, Henrik Bentel wrote:

> Now, all my SSL configuration is under my secure virtual host, such that it
> applies to everything. However, I have quite a bit of static content (images,
> css, javascript, ...) which doesn't need to be very secure. I really only
> want to secure my dynamic content.

If I understand your question correctly, what you're wanting is to have
some web page that's served up with https, but to have the images on that
page be served by regular http.  You could do that, but every browser I
know of will throw a security warning in that case.  You can't mix secure
and non-secure content in the same document.

Does that answer your question?

--Cliff


RE: configuration question

2003-08-19 Thread Boyle Owen
> -----Original Message-----
> From: Henrik Bentel [mailto:[EMAIL PROTECTED]]

> I have a web app which serves both static and non-static content, both
> secure and insecure (https and http).
> Now, all my SSL configuration is under my secure virtual host,
> such that it applies to everything. However, I have quite a bit of static
> content (images, css, javascript, ...) which doesn't need to be very
> secure. I really only want to secure my dynamic content.

To add to Cliff's comment about browsers complaining about the mix of
secure and insecure content, there is a genuine security reason for *not*
doing what you propose.

Put yourself in the position of a crook who has gained access to the
datastream flowing into your SSL server. As you are probably aware, all
encryption ciphers can be cracked by a brute-force attack (making
repeated attempts at guessing the key). Hopefully, the time-to-crack
will be long, but you don't know how fast the crook's computer is. If
he works for the NSA, it might be very fast indeed. If you serve all
content via SSL, he has no idea which packets are important and which
are just images etc., so he has to crack everything. If you decide to
save a teeny bit of processing on the server by encrypting only the
important things, he then sees lots of en clair packets (containing
image data etc.) which he can safely ignore, and only the occasional
nugget of encrypted data which he can be sure is worth cracking. Thus
he can focus his efforts on these: you make life easy for the cracker
by highlighting exactly the packets that are worth cracking! In other
words, the best place to hide a leaf is in the forest.

You shouldn't need to worry about the processing load of the SSL
encryption. If it is slowing your server, then, frankly, your server is
not powerful enough to serve the traffic you have - get more memory,
upgrade the chipset, do whatever is necessary to get up to speed.

Rgds,
Owen Boyle
Disclaimer: Any disclaimer attached to this message may be ignored. 



Re: configuration question

2003-08-19 Thread Arthur Chan
Hi Boyle,
I've been debating with myself over whether to encrypt everything; that's a
cogent argument you have offered. I have a few questions myself:
(1) assuming an OpenSSL-encrypted packet is bigger than a plain-text one,
would mod_gzip shrink it significantly enough to warrant the effort?
(2) and would that slow down the client browser's display of content?
On the other hand, with these new 1GHz+ P4 desk- and lap-tops around, maybe
not.


RE: configuration question

2003-08-19 Thread Boyle Owen


> -----Original Message-----
> From: Arthur Chan [mailto:[EMAIL PROTECTED]]
>
> Hi Boyle,
> I've been debating with myself over whether to encrypt everything; that's a
> cogent argument you have offered. I have a few questions myself:
> (1) assuming an OpenSSL-encrypted packet is bigger than a plain-text one,

Why would you assume this? Essentially:

  encrypted_text = f(plain_text, key)

where f() is a mathematical function. I guess the 2nd law of thermodynamics (entropy
increases) would tend to cause the output to grow, but not necessarily by much. In
the simple case of a substitution cipher, the encrypted text is precisely the
same size as the plain text.
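To make that last point concrete, here is a toy substitution cipher - utterly
insecure, purely to show that the output is exactly as long as the input:

import string

# map a..z onto a rotated alphabet; anything else passes through unchanged
ROT3 = str.maketrans(string.ascii_lowercase,
                     string.ascii_lowercase[3:] + string.ascii_lowercase[:3])

plain = "attack at dawn"
cipher = plain.translate(ROT3)
print(cipher, len(plain) == len(cipher))   # dwwdfn dw gdzq True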

> would mod_gzip shrink it significantly enough to warrant the effort?

Zipping algorithms work by replacing repetitive sequences in the input with shorter
instructions to regenerate them (e.g. 1000 blue pixels -> 1 blue pixel x 1000).
Compression works best with highly structured input data (bitmaps, WAV files, human
language etc.). With random data, it can't make much difference and may even cause
the file to grow! (Try repeatedly zipping a file to see this happening.)
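This is easy to check directly - a quick sketch using Python's zlib:

import os, zlib

structured = b"blue pixel " * 1000         # highly repetitive input
random_ish = os.urandom(len(structured))   # effectively incompressible

print(len(structured), "->", len(zlib.compress(structured)))  # shrinks to a few dozen bytes
print(len(random_ish), "->", len(zlib.compress(random_ish)))  # stays about the same, or grows slightly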

> (2) and would that slow down the client browser's display of content?

Unzipping requires the client to have WinZip - not a default on a Windows client!
Probably this would slow the whole thing down.

Remember that SSL is well-defined on the web and all recent browsers contain fast and
effective SSL software - I would trust it to do its job and not try to re-invent the
wheel.

Rgds,
Owen Boyle
Disclaimer: Any disclaimer attached to this message may be ignored. 


RE: configuration question

2003-08-19 Thread Dave Paris
In addition to Owen's salient points about compression working efficiently
on repetitive strings in plaintext/binary data (e.g. whitespace in a Word
document) and not on random data (e.g. encrypted data), some encryption
algorithms can actually be weakened by compressing the resulting data,
giving a cryptanalyzer clues to the inner workings of the algorithm.

The bottom line here is that SSL works at the socket/transport layer and not
at the application layer.  If you're generating a .gz file on-the-fly within
Apache (mod_gzip, etc.), the result will still be encrypted *after*
compression.  The output chain of Apache applies SSL as the last stage, so
something like mod_gzip will operate *before* SSL.  Most modern browsers
produced in the last four or five years will decompress a .gz file (not
.zip!) for the user - even on Windows (just tested IE6 on XP .. works fine).
If you've ever experimented with VRML, one best practice is to send files
as .wrl.gz and not straight .wrl.

As for SSL packets being larger - they're not to any appreciable degree -
for the exact reason Owen pointed out below.  Even symmetric cipher
algorithms don't produce appreciably larger amounts of data.  For example,
using Cipher Block Chaining (CBC) mode will only increase the amount of data
by 8 bytes from adding an Initialization Vector (IV) to the beginning of the
ciphertext, plus padding at the end of the ciphertext to get a complete final
block (with an 8-byte block cipher like Blowfish, the largest amount of
padding will only be 7 bytes).  So, at most, you've added 15 bytes to even
the largest amount of plaintext data using Blowfish in CBC mode.  There are
a few exotic exceptions here, like interleaved chaining block ciphers, which
will add an IV (of the same size as above) per parallel operation (so if
you've got four parallel encryption operations using interleaved CBC, you're
adding 24 bytes at the beginning of the ciphertext).  However, these are
exceptionally rare and typically limited to proprietary
implementations/applications.
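As a back-of-the-envelope check of those numbers (a sketch only; real
protocols differ in their padding conventions):

BLOCK = 8  # block size in bytes, e.g. Blowfish

def cbc_overhead(plain_len):
    padding = (-plain_len) % BLOCK   # 0..7 bytes to fill the final block
    return BLOCK + padding           # plus one block for the IV

# worst case over a range of message sizes: 8-byte IV + 7 bytes padding = 15
print(max(cbc_overhead(n) for n in range(1, 1025)))   # 15, as claimed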

Addressing one other misconception here: a packet can contain up to 1500
bytes - including headers - assuming your network handles MTUs of 1500. Some
are less (like ATM @ 53 bytes [48 bytes of payload w/ 5 bytes of header]),
some are more (like Frame Relay @ up to 4500 bytes), but hey, not many
desktops are connected with ATM or Frame, so we'll call the connection
standard Ethernet with an MTU of 1500.  The way networks operate and packets
are forwarded, smaller packets actually transmit *less* data in any given
amount of time than larger packets.  Switches and routers (OSI layer 2 and 3
devices) operate on packet forwarding rates, regardless of the amount of
data in the packet.  The more data in each packet, the more data you're going
to get for X period of time - this is one factor that introduces latency
into a network.  Lots of small packets going through a network simply
transmit less data than lots of large packets .. and since the only
appreciable metric is the number of packets and the packet forwarding rate
of the network device, the larger the packet, the happier the network and
the more data getting to the end user.  The *only* place this is going to
make a difference is if you've got an -inline- intrusion
detection/prevention system (IDS/IPS), in which case you've got what most
network engineers would consider to be a design flaw anyway.  In that case,
each packet needs to be inspected, and the more data there is, the more there
is to be inspected.  Most IDS sensors will simply discard packets being
inspected rather than slow the network down (Snort does this when it's
either misconfigured or overloaded).

So.. go for it.  Use mod_gzip (or similar) to generate .gz docs on the fly
.. let Apache handle your SSL.  If anything, your win comes from SSL having
to encrypt *less* data.  This won't speed up the handshake phase, but will
speed up the rest of the transaction since there's simply less data to
encrypt and transmit.  How much speed improvement you get is completely
dependent on how much compression you're getting.  If you can take a 100K
document and compress it to 25K, that's a 75% reduction in the amount of
data SSL needs to encrypt and reduces the number of packets from about 66 to
around 16 - again, not including the SSL handshake/setup and general TCP
setup/teardown.
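The packet arithmetic there, for anyone who wants to plug in their own
numbers (very rough - it assumes 1500-byte frames and, as noted, ignores the
handshake and TCP setup):

MTU = 1500  # bytes per Ethernet frame, headers included

for size in (100_000, 25_000):
    print(size, "bytes is roughly", size // MTU, "packets")
# 100000 bytes is roughly 66 packets
# 25000 bytes is roughly 16 packets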

If you're bogging down your server with all the SSL transactions, look at
investing in an SSL accelerator.  If your business model depends on both
security *and* performance, then the cost (starting around USD 20K) should
be easily justified.  But that's the subject of another mail and I've got
some coffee getting cold over here. ;-)

Hope this didn't glaze your eyes over. :-)
Best~
-dsp



Re: configuration question

2003-08-19 Thread Henrik Bentel
At 02:22 AM 8/19/2003 -0400, you wrote:
> On Wed, 20 Aug 2003, Henrik Bentel wrote:
>
> > Now, all my SSL configuration is under my secure virtual host, such that it
> > applies to everything. However, I have quite a bit of static content (images,
> > css, javascript, ...) which doesn't need to be very secure. I really only
> > want to secure my dynamic content.
>
> If I understand your question correctly, what you're wanting is to have
> some web page that's served up with https, but to have the images on that
> page be served by regular http.  You could do that, but every browser I
> know of will throw a security warning in that case.  You can't mix secure
> and non-secure content in the same document.
>
> Does that answer your question?
Hi,

not quite.
I still want everything under https, but I was wondering if there is a way
to speed up processing per-directory, by directive, while still using https -
for example for my image directory.
Currently I have everything for SSL configured in the virtual host and
server config. SSL configuration included below.
The certificate is self-signed, from a 1024-bit RSA key.

Listen 443
AddType application/x-x509-ca-cert .crt
AddType application/x-pkcs7-crl .crl
SSLPassPhraseDialog builtin
SSLSessionCache dbm:/var/opt/apache/run/ssl_scache
SSLSessionCacheTimeout 300
SSLMutex sem
#SSLMutex file:/var/opt/apache/run/ssl_mutex
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
ErrorLog /var/log/httpd/secure_error_log
CustomLog /var/log/httpd/secure_access_log common
LogLevel warn

<VirtualHost 192.168.1.1:443>
ServerName 192.168.1.1
DocumentRoot /opt/mydocRoot
ErrorLog /var/log/httpd/secure_error_log
TransferLog /var/log/httpd/secure_access_log
LogLevel warn
SSLEngine on
SSLCipherSuite ALL:!ADH:!EXP56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile /opt/app/conf/mycert.crt
SSLCertificateKeyFile /opt/app/conf/mycert.key
SetEnvIf User-Agent ".*MSIE.*" \
    nokeepalive ssl-unclean-shutdown \
    downgrade-1.0 force-response-1.0
#CustomLog /var/log/httpd/ssl_request_log \
#    "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>
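For what it's worth, the kind of per-directory relaxation being asked about
would look something like the sketch below (cipher names are illustrative).
Note, though, that mod_ssl honours a per-directory SSLCipherSuite only by
forcing an SSL renegotiation for requests into that directory, which usually
costs more time than the cheaper cipher saves:

Alias /art/ "/opt/mydocRoot/art/"
<Location /art>
    # weaker but faster ciphers for static art only - triggers renegotiation
    SSLCipherSuite RC4-MD5:RC4-SHA
</Location>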



-Henrik Bentel



Re: configuration question

2003-08-19 Thread Eric Rescorla
Boyle Owen [EMAIL PROTECTED] writes:

> -----Original Message-----
> From: Arthur Chan [mailto:[EMAIL PROTECTED]]
>
> > Hi Boyle,
> > I've been debating with myself over whether to encrypt everything; that's a
> > cogent argument you have offered. I have a few questions myself:
> > (1) assuming an OpenSSL-encrypted packet is bigger than a plain-text one,
>
> Why would you assume this? Essentially:
>
>   encrypted_text = f(plain_text, key)
>
> where f() is a mathematical function. I guess the 2nd law of
> thermodynamics (entropy increases) would tend to cause the output
> to grow, but not necessarily by much. In the simple case of a
> substitution cipher, the encrypted text is precisely the same size
> as the plain text.
SSL-enciphered data is always somewhat larger than the plaintext.
The overhead comes from three sources:

(1) the record header (5 bytes);
(2) the MAC (16-20 bytes);
(3) block cipher padding (if applicable).

Note that all of this overhead is roughly fixed with respect to
the record size (block cipher padding depends on the record
size mod the block size). So, small records have enormous
amounts of overhead - as high as 20 or more times for single-byte
records - while for large records the overhead is largely
irrelevant (e.g. 20/15000). If you're doing bulk data transfer you
should always use large records.
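In rough numbers - a sketch only, since the exact MAC size and padding depend
on the negotiated cipher suite:

HEADER = 5   # SSL record header, bytes
MAC = 20     # e.g. HMAC-SHA1
BLOCK = 8    # block size, if a block cipher is in use

def record_overhead(plain_len):
    padding = (-(plain_len + MAC)) % BLOCK   # pad plaintext+MAC to a block boundary
    return HEADER + MAC + padding

for n in (1, 15000):
    print(n, "byte record ->", record_overhead(n), "bytes of overhead")
# 1 byte record -> 28 bytes of overhead (28x the payload)
# 15000 byte record -> 29 bytes of overhead (negligible)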

> > would mod_gzip shrink it significantly enough to warrant the effort?
>
> Zipping algorithms work by replacing repetitive sequences in the
> input with shorter instructions to regenerate them (e.g. 1000 blue
> pixels -> 1 blue pixel x 1000). Compression works best with highly
> structured input data (bitmaps, WAV files, human language etc.). With
> random data, it can't make much difference and may even cause the
> file to grow! (Try repeatedly zipping a file to see this happening.)
One would apply mod_gzip PRIOR to encryption, so it will work
unless the data is already pre-compressed (e.g. a GIF or a JPG).

-Ekr


Re: configuration question

2003-08-19 Thread Eric Rescorla
Dave Paris [EMAIL PROTECTED] writes:
> In addition to Owen's salient points about compression working efficiently
> on repetitive strings in plaintext/binary data (e.g. whitespace in a Word
> document) and not on random data (e.g. encrypted data), some encryption
> algorithms can actually be weakened by compressing the resulting data,
> giving a cryptanalyzer clues to the inner workings of the algorithm.
No reasonable encryption algorithm will be weakened this way.

> As for SSL packets being larger - they're not to any appreciable degree -
> for the exact reason Owen pointed out below.  Even symmetric cipher
> algorithms don't produce appreciably larger amounts of data.  For example,
> using Cipher Block Chaining (CBC) mode will only increase the amount of data
> by 8 bytes from adding an Initialization Vector (IV) to the beginning of the
> ciphertext, plus padding at the end of the ciphertext to get a complete final
> block (with an 8-byte block cipher like Blowfish, the largest amount of
> padding will only be 7 bytes).  So, at most, you've added 15 bytes to even
> the largest amount of plaintext data using Blowfish in CBC mode.  There are
> a few exotic exceptions here, like interleaved chaining block ciphers, which
> will add an IV (of the same size as above) per parallel operation (so if
> you've got four parallel encryption operations using interleaved CBC, you're
> adding 24 bytes at the beginning of the ciphertext).  However, these are
> exceptionally rare and typically limited to proprietary
> implementations/applications.
You're forgetting the MAC.

> Addressing one other misconception here: a packet can contain up to 1500
> bytes - including headers - assuming your network handles MTUs of 1500. Some
> are less (like ATM @ 53 bytes [48 bytes of payload w/ 5 bytes of header]),
> some are more (like Frame Relay @ up to 4500 bytes), but hey, not many
> desktops are connected with ATM or Frame, so we'll call the connection
> standard Ethernet with an MTU of 1500.
The PMTU is largely irrelevant here, since SSL records can be
much larger than the MTU. What's relevant is the size of the SSL
stream vis-à-vis the plaintext stream.

-Ekr


CRL updating with mod_ssl

2003-08-19 Thread Roberto Hoyle
I'm trying to understand when a CRL gets read by Apache.  I have
cases of it being read when a new CRL is placed in the directory and
the make is run, and cases when it does not get read under identical
circumstances.

The only reliable way that I have to make sure that the CRL gets
updated is by restarting the server.

Is this supposed to be the case?  I'm confused that it works sometimes
and doesn't work other times.

Right now, I'm running 1.3.19 with mod_ssl 2.8.1 (yes, I know that they 
are old, but I am not able to update them for support reasons...).  We 
have the SSLCARevocationPath directive set to the proper location, and 
a script that downloads a new CRL every evening and runs the make.  The 
script does not kick the server.  Our CRLs expire in seven days, but 
get published every evening.

Should I just stop worrying and learn to love restarting Apache?
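If it turns out a restart really is required, the nightly job can simply do a
graceful one after installing the new CRL - a sketch, with a hypothetical URL
and paths:

import subprocess, urllib.request

CRL_URL = "https://ca.example.edu/ca.crl"     # hypothetical download location
CRL_DIR = "/usr/local/apache/conf/ssl.crl"    # whatever SSLCARevocationPath points at

urllib.request.urlretrieve(CRL_URL, CRL_DIR + "/ca.crl")
subprocess.run(["make", "-C", CRL_DIR], check=True)    # rebuild the hash symlinks
subprocess.run(["apachectl", "graceful"], check=True)  # pick up the new CRL without dropping connections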

Thanks,

r.
--
Roberto Hoyle
PKI Lab Programmer
Dartmouth College


File Acknowledgement

2003-08-19 Thread Nauman, Ahmed [IT]
Hi All,

How can we know on the server side in Apache that a GET or PUT request has
been received, and whether it failed or succeeded? Can we somehow get the
response code, so that some script and/or tool on the server side can
delete/archive the files which have been retrieved by the client into some
specific folders? Is there any industry standard for such file
acknowledgement?


Regards,
Nauman


RE: CRL updating with mod_ssl

2003-08-19 Thread Dave Paris
Your actual question notwithstanding, the versions you're running are
not just old - they've got security flaws and vulnerabilities well documented
at CERT, apache.org, and openssl.org.

http://www.cert.org/advisories/CA-2002-27.html  (Linux, Apache, OpenSSL,
mod_ssl)
http://www.cert.org/advisories/CA-2002-23.html  (OpenSSL)
http://www.cert.org/advisories/CA-2002-17.html  (Apache)


If you've got a support arrangement preventing *you* from upgrading, *DEMAND*
that the packages be updated to reduce your security risks, vulnerability,
and liability.  If your support contract won't allow that, you don't have
support and you should upgrade to current anyway.

Respectfully,
-dsp



Re: File Acknowledgement

2003-08-19 Thread Cliff Woolley
On Tue, 19 Aug 2003, Nauman, Ahmed [IT] wrote:

> How can we know on the server side in Apache that a GET or PUT request has
> been received, and whether it failed or succeeded? Can we somehow get the
> response code, so that some script and/or tool on the server side can
> delete/archive the files which have been retrieved by the client into some
> specific folders? Is there any industry standard for such file
> acknowledgement?

If it were me, I'd just write a CGI script to do this... as for how you
know for certain that the client received the entire response, that's a
bit tricky.  The HTTP response code (even if it's 200 OK) doesn't tell you
what happened on the client end, and the client never sends an
acknowledgement response code.  Apache internally knows whether it
finished sending or not, but it's hard to get at that information except
by directly accessing the internal structures from a module.  Perhaps the
easiest way is to have the client request some other URL after it gets the
full document (a JavaScript redirect?), and have that second URL be your
acknowledgement and the trigger to delete the file.
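A minimal sketch of that second-URL idea as a CGI script (all names and paths
are made up; the client would fetch /cgi-bin/ack.py?f=report.csv once it has
the whole of /files/report.csv):

#!/usr/bin/env python3
# ack.py - archive a delivered file once the client confirms receipt
import cgi, os, shutil

FILES = "/var/www/files"      # where downloadable files live (hypothetical)
ARCHIVE = "/var/www/archive"  # where acknowledged files get moved to

form = cgi.FieldStorage()
name = os.path.basename(form.getfirst("f", ""))   # basename() blocks ../ tricks

src = os.path.join(FILES, name)
if name and os.path.exists(src):
    shutil.move(src, os.path.join(ARCHIVE, name))

print("Content-Type: text/plain")
print()
print("acknowledged")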

--Cliff


Re: configuration question

2003-08-19 Thread Cliff Woolley
On Tue, 19 Aug 2003, Eric Rescorla wrote:

> Dave Paris [EMAIL PROTECTED] writes:
> > In addition to Owen's salient points about compression working efficiently
> > on repetitive strings in plaintext/binary data (e.g. whitespace in a Word
> > document) and not on random data (e.g. encrypted data), some encryption
> > algorithms can actually be weakened by compressing the resulting data,
> > giving a cryptanalyzer clues to the inner workings of the algorithm.
>
> No reasonable encryption algorithm will be weakened this way.

I agree.  I'm guessing what he meant is that some encryption algorithms
are weakened if their /input/ is pre-compressed by some known algorithm.
If the cleartext is in some known format, it might possibly be easier to
recover it from the ciphertext.

--Cliff


Re: configuration question

2003-08-19 Thread Arthur Chan
Well, my eyes did glaze over somewhere between thermodynamics and perpetuum
mobile ;-)
So does this mean that if I work in a less sophisticated infrastructure,
where only 56 kbps PPP dialup is available, I can get some incremental gain
by zipping it up before encrypting it? [yes/no]
Caveats?
And here is where I really get stuck, with this next question:
I've got all this OpenSSL key stuff working, and signed my own cert using
openssl.
On starting Netscape 6.2 I got the little lock to close. I got Netscape to
register my own site as a trusted site in WebSites.
But I want Netscape to load my certificate as an Authority for our testing
purposes.
How does one go about doing that, both in Netscape and MSIE5?
TIA :-)



Re: configuration question

2003-08-19 Thread Cliff Woolley
On Wed, 20 Aug 2003, Arthur Chan wrote:

> But I want Netscape to load my certificate as an Authority for our
> testing purposes. How does one go about doing that, both in Netscape and
> MSIE5?

Google knows everything... an "I'm Feeling Lucky" search for "installing CA
certificate" yields:

http://www.pseudonym.org/ssl/ssl_ca.html

which explains how to do just that.

--Cliff
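One related trick: Henrik's config earlier in the thread already carries the
server-side half of this. If the CA certificate is served with the right MIME
type, Netscape-era browsers will offer to install it as an authority when the
user follows an ordinary link to the file (IE's import dialog differs, but it
also accepts a downloaded .crt):

AddType application/x-x509-ca-cert .crt

Then just publish a link to the certificate file, e.g. /certs/myca.crt (a
hypothetical path).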