Re: ld: double free or corruption

2011-04-11 Thread Geoff Thorpe
Looks like a bug in the compiler tool-chain. Consider rolling back to 
something stable. If you're willing, you might want to scan the gcc bug 
database in case this is a known issue, and perhaps report it if it 
isn't? It might also be some system library the tool-chain is linked 
against, who knows. In any case, source code shouldn't be able to crash 
a compiler (nor linker), so the problem isn't with openssl.


However, you may find that if you reduce the level of optimisation or 
change some other flags, the compiler is able to cope; YMMV. But of 
course that's just an exercise in avoiding bugs rather than eliminating 
them. And as for how to change the flags/optimisation, well, changing 
the flags in the Configure script (or in the Makefile after configuring) 
is left as an exercise for the reader... :-)
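For what it's worth, one quick way to experiment is to knock the optimisation level down in the generated Makefile. This is only a sketch: the Makefile path, and the assumption that it contains a literal -O3/-O2 flag (as the build log below happens to show), are mine, not anything prescribed by OpenSSL.

```python
# Sketch: lower the optimisation level in a generated Makefile from
# -O3/-O2 to -O1, to test whether the tool-chain then copes.
# The path and the presence of a literal "-O3" flag are assumptions.
import re
from pathlib import Path

def lower_optimisation(makefile="Makefile"):
    p = Path(makefile)
    text = p.read_text()
    # Replace -O3 or -O2 with -O1 wherever it appears as a flag.
    new = re.sub(r"-O[23]\b", "-O1", text)
    p.write_text(new)
    return text != new  # True if anything was changed

```

Re-running make after such an edit (or simply re-running Configure with different flags) tells you whether the crash is optimisation-sensitive, which is also useful information for a gcc bug report.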


Cheers,
Geoff

On 11-04-06 05:15 AM, Kyle wrote:

Hi, when trying to compile openssl 1.0.0d with this configure:

./Configure mingw64 no-shared
--openssldir=/home/kyle/software/ffmpeg/external-libraries/win64

and then this make:

make CC=x86_64-w64-mingw32-gcc RANLIB=x86_64-w64-mingw32-ranlib

I get this error:

make[1]: Entering directory
`/home/kyle/software/ffmpeg/external-libraries/rtmpdump/openssl/source/openssl-1.0.0d/apps'

rm -f openssl.exe
shlib_target=; if [ -n "" ]; then \
shlib_target=cygwin-shared; \
fi; \
LIBRARIES="-L.. -lssl -L.. -lcrypto" ; \
make -f ../Makefile.shared -e \
APPNAME=openssl.exe OBJECTS=openssl.o verify.o asn1pars.o req.o dgst.o
dh.o dhparam.o enc.o passwd.o gendh.o errstr.o ca.o pkcs7.o crl2p7.o
crl.o rsa.o rsautl.o dsa.o dsaparam.o ec.o ecparam.o x509.o genrsa.o
gendsa.o genpkey.o s_server.o s_client.o speed.o s_time.o apps.o s_cb.o
s_socket.o app_rand.o version.o sess_id.o ciphers.o nseq.o pkcs12.o
pkcs8.o pkey.o pkeyparam.o pkeyutl.o spkac.o smime.o cms.o rand.o
engine.o ocsp.o prime.o ts.o \
LIBDEPS="$LIBRARIES -lws2_32 -lgdi32 -lcrypt32" \
link_app.${shlib_target}
make[2]: Entering directory
`/home/kyle/software/ffmpeg/external-libraries/rtmpdump/openssl/source/openssl-1.0.0d/apps'

( :; LIBDEPS=${LIBDEPS:--L.. -lssl -L.. -lcrypto -lws2_32 -lgdi32
-lcrypt32}; LDCMD=${LDCMD:-x86_64-w64-mingw32-gcc};
LDFLAGS=${LDFLAGS:--DOPENSSL_THREADS -D_MT -DDSO_WIN32 -DL_ENDIAN -O3
-Wall -DWIN32_LEAN_AND_MEAN -DUNICODE -D_UNICODE -DOPENSSL_IA32_SSE2
-DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM
-DAES_ASM -DWHIRLPOOL_ASM}; LIBPATH=`for x in $LIBDEPS; do echo $x;
done | sed -e 's/^ *-L//;t' -e d | uniq`; LIBPATH=`echo $LIBPATH | sed
-e 's/ /:/g'`; LD_LIBRARY_PATH=$LIBPATH:$LD_LIBRARY_PATH ${LDCMD}
${LDFLAGS} -o ${APPNAME:=openssl.exe} openssl.o verify.o asn1pars.o
req.o dgst.o dh.o dhparam.o enc.o passwd.o gendh.o errstr.o ca.o pkcs7.o
crl2p7.o crl.o rsa.o rsautl.o dsa.o dsaparam.o ec.o ecparam.o x509.o
genrsa.o gendsa.o genpkey.o s_server.o s_client.o speed.o s_time.o
apps.o s_cb.o s_socket.o app_rand.o version.o sess_id.o ciphers.o nseq.o
pkcs12.o pkcs8.o pkey.o pkeyparam.o pkeyutl.o spkac.o smime.o cms.o
rand.o engine.o ocsp.o prime.o ts.o ${LIBDEPS} )
*** glibc detected ***
/home/kyle/software/mingw-w64/mingw-w64-x86_64/lib/gcc/x86_64-w64-mingw32/4.6.0/../../../../x86_64-w64-mingw32/bin/ld:
double free or corruption (!prev): 0x02a8aaa0 ***
======= Backtrace: =========
/lib/libc.so.6(+0x774b6)[0x2b65b045a4b6]
/lib/libc.so.6(cfree+0x73)[0x2b65b0460c83]
/home/kyle/software/mingw-w64/mingw-w64-x86_64/lib/gcc/x86_64-w64-mingw32/4.6.0/../../../../x86_64-w64-mingw32/bin/ld[0x4592bc]

/home/kyle/software/mingw-w64/mingw-w64-x86_64/lib/gcc/x86_64-w64-mingw32/4.6.0/../../../../x86_64-w64-mingw32/bin/ld[0x417a53]

/home/kyle/software/mingw-w64/mingw-w64-x86_64/lib/gcc/x86_64-w64-mingw32/4.6.0/../../../../x86_64-w64-mingw32/bin/ld[0x416953]

/lib/libc.so.6(__libc_start_main+0xfe)[0x2b65b0401d8e]
/home/kyle/software/mingw-w64/mingw-w64-x86_64/lib/gcc/x86_64-w64-mingw32/4.6.0/../../../../x86_64-w64-mingw32/bin/ld[0x4023a9]

======= Memory map: ========
0040-00513000 r-xp  08:01 440336
/home/kyle/software/mingw-w64/mingw-w64-x86_64/x86_64-w64-mingw32/bin/ld
00712000-00713000 r--p 00112000 08:01 440336
/home/kyle/software/mingw-w64/mingw-w64-x86_64/x86_64-w64-mingw32/bin/ld
00713000-00717000 rw-p 00113000 08:01 440336
/home/kyle/software/mingw-w64/mingw-w64-x86_64/x86_64-w64-mingw32/bin/ld
00717000-0071d000 rw-p  00:00 0
01b5f000-02b5d000 rw-p  00:00 0 [heap]
2b65affbc000-2b65affdc000 r-xp  08:01 786679 /lib/ld-2.12.1.so
2b65affdc000-2b65b002e000 rw-p  00:00 0
2b65b006e000-2b65b00ae000 rw-p  00:00 0
2b65b01dc000-2b65b01dd000 r--p 0002 08:01 786679 /lib/ld-2.12.1.so
2b65b01dd000-2b65b01de000 rw-p 00021000 08:01 786679 /lib/ld-2.12.1.so
2b65b01de000-2b65b01df000 rw-p  00:00 0
2b65b01df000-2b65b01e1000 r-xp  08:01 786687 /lib/libdl-2.12.1.so
2b65b01e1000-2b65b03e1000 ---p 2000 08:01 786687 /lib/libdl-2.12.1.so
2b65b03e1000-2b65b03e2000 r--p 2000 08:01 786687 /lib/libdl-2.12.1.so

Re: OpenSSL and kernel __read_nocancel() blocking under heavy network congestion

2009-05-26 Thread Geoff Thorpe
Hi Mark,

Mark Laubach wrote:
 Hi David,
 
 Thanks and yes, these are the conundrums I'm curious about:
 1) why does the process get hung on __read_nocancel (), when the
 connection is set to non-blocking, and only under heavy congestion?,
 and 2) if the connection did turn blocking, why aren't the added
 timeouts working?
 
 I'll keep looking and any 'what to look at' pointers or confirmation
 would be appreciated.

If the sockets aren't being set up correctly, it's likely to be the
layer above openssl - ie. gSoap. Perhaps run this issue by them?

Regards,
Geoff


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org


Re: OpenSSL 1.0.0 beta 1 released

2009-04-02 Thread Geoff Thorpe
On Wednesday 01 April 2009 16:34:35 Rene Hollan wrote:
 This is an April Fools' joke, right?

It's April 2, so I can reply now.

Z80. Java. Casiotone. Doesn't the question sort of answer itself?

Cheers,
Geoff

 -Original Message-
 From: owner-openssl-us...@openssl.org on behalf of Geoff Thorpe
 Sent: Wed 4/1/2009 12:11 PM
 To: openssl-users@openssl.org
 Subject: Re: OpenSSL 1.0.0 beta 1 released

 On Wednesday 01 April 2009 09:05:05 Thomas J. Hruska wrote:
  The problem is that I was under the distinct impression 0.9.9 was
  the next release and 1.0.0 was a pipe dream a few years down the
  road (at least).

 The choice of a 1.0 release is to clearly mark the fact that openssl
 is shifting to a common base platform, namely Java. Platform
 independence is going to make future development much easier, but
 represents a significant enough change to warrant the new major
 version. This decision has been driven by increasing demands to
 support as-yet unsupported platforms; primarily Amiga, Z80, Casiotone,
 and Windows.

 Regards,
 Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: OpenSSL 1.0.0 beta 1 released

2009-04-02 Thread Geoff Thorpe
On Thursday 02 April 2009 11:24:56 Dr. Stephen Henson wrote:
 On Thu, Apr 02, 2009, Geoff Thorpe wrote:
  On Wednesday 01 April 2009 16:34:35 Rene Hollan wrote:
   This is an April Fools' joke, right?
 
  It's April 2, so I can reply now.
 
  Z80. Java. Casiotone. Doesn't the question sort of answer itself?

 Personally I think mentioning Windows gave it away...

Exactly. I mean porting to Z80 is far-fetched, but porting to Windows?!?! 
I mean, OpenSSL is about *security* - porting it to windows is like 
fetching a saddle after the horse has bolted...

Cheers,
Geoff

PS: :-), I was responding to a post by someone involved in win32 
openssl, so there was no shortage of clues...

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: OpenSSL 1.0.0 beta 1 released

2009-04-01 Thread Geoff Thorpe
On Wednesday 01 April 2009 09:05:05 Thomas J. Hruska wrote:
 The problem is that I was under the distinct impression 0.9.9 was the
 next release and 1.0.0 was a pipe dream a few years down the road (at
 least).

The choice of a 1.0 release is to clearly mark the fact that openssl is 
shifting to a common base platform, namely Java. Platform independence 
is going to make future development much easier, but represents a 
significant enough change to warrant the new major version. This 
decision has been driven by increasing demands to support as-yet 
unsupported platforms; primarily Amiga, Z80, Casiotone, and Windows.

Regards,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: Openssl Engine Performance Benchmarks

2009-03-31 Thread Geoff Thorpe
On Tuesday 31 March 2009 23:16:10 Shasi Thati wrote:
 Hi,

 I have a question regarding the openssl speed command. When I use this
 command to test the crypto offload engine performance  what is the
 right command to use?

 Is it

 openssl speed -evp aes-128-cbc -engine xx -elapsed

 or

 openssl speed -evp aes-128-cbc -engine xx

 I have seen examples with both of them on the internet and I get
 different results with each of them. What exactly does elapsed
 option  add here?

It means elapsed. :-) Ie. how much time elapsed during the benchmark. 
The normal measurement is cpu usage, which is something less than or 
equal to the elapsed time - if the benchmark used half the available cpu 
cycles during the elapsed period (according to scheduler stats, accurate 
or otherwise), the time given would be half the elapsed time.

The usefulness of using cpu-time (instead of -elapsed) is to eliminate:
(a) skewed statistics due to the system running other tasks while the 
benchmark was in progress (ie. you're only billed for what you use), and
(b) time the s/w (and driver) spent waiting for the crypto accelerator 
to respond to crypto operations.
The value of (b) is to interpolate certain theoretical limits. Ie. if 80% 
of the time is spent waiting on the accelerator, the cpu-time for the 
benchmark run would be 1/5 of the elapsed time and so the calculated 
number of crypto ops per second would be 5 times what actually happened 
in real/elapsed time. If the latency of the accelerator is roughly 
constant but it can process multiple things at once due to having 
multiple execution units, then this inflated number is a 
useful estimate of how much you could theoretically process if you had 
multiple threads/processes keeping the cpu busy rather than waiting. In 
this example you'd need at least 5 threads to achieve such a performance 
level. (Which also assumes the accelerator performance would continue to 
scale up that far.)
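The interpolation described above can be sketched numerically. The figures here are made up for illustration, not real benchmark output:

```python
# If a benchmark spends most of its elapsed time waiting on a crypto
# accelerator, the cpu-time-based rate is an interpolated ceiling,
# not what actually happened in real time.

def rates(ops, elapsed_secs, cpu_secs):
    """Return (real ops/sec, cpu-time ops/sec, threads to saturate)."""
    real = ops / elapsed_secs          # what actually happened
    cpu_based = ops / cpu_secs         # the interpolated ceiling
    # Threads needed to keep the cpu busy instead of waiting, assuming
    # the accelerator keeps scaling (the caveat given above).
    threads = elapsed_secs / cpu_secs
    return real, cpu_based, threads

# 1000 ops in 10s elapsed but only 2s of cpu time (80% spent waiting):
real, ceiling, threads = rates(1000, 10.0, 2.0)
print(real, ceiling, threads)  # 100.0 500.0 5.0
```

So in this made-up case the cpu-time figure is 5x the real rate, and you'd need at least 5 busy threads to approach it.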

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: openSSL : digest command (md5) to crypto driver

2008-12-12 Thread Geoff Thorpe
On Friday 12 December 2008 01:07:04 Madhusudan Bhat wrote:
 Hi Geoff,

 I appreciate your reply. Currently, I dont have any engine supported
 at the openssl side. I have crypto driver at the kernel side, which
 registered with the kernel for the hashing and encryption algos.

 From the openssl, when I issue enc or dgst commands, I dont give
 engine parameter. Basically, I dont set any engine.  With my
 understanding, openssl will pass the command to kernel, kernel will
 search the first available registered crypto driver which is capable
 of handling requested operation and submit the request to that crypto
 driver.

If no engine is set up, then openssl will use its own software 
implementations to perform all crypto operations. If openssl is passing 
anything to hardware via the kernel, that's because an engine has been 
setup. You are probably using the cryptodev engine without realising it. 
What is your platform, and what is your application? In particular, does 
it call ENGINE_load_builtin_engines() at all?

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: openSSL : digest command (md5) to crypto driver

2008-12-11 Thread Geoff Thorpe
On Thursday 11 December 2008 12:44:24 Madhusudan Bhat wrote:
 Hi All,

 I am having a issue when using digest command from openssl. When I
 issue digest command md5 from openssl, kernel side it will never
 receive IOCTL - CIOCGSESSION with sop->mac getting set, also it wont
 receive IOCTL - CIOCCRYPT with mac operation set. Tho, crypto driver
 which I have written registered new session,  free session, process
 functions for CRYPTO_MD5, CRYPTO_MD5_HMAC.

 But when I issue des/3des/aes enc commands from openssl, open crypto
 device at the kernel side receives proper IOCTL and calls my crypto
 driver with new session and process functions with sop->cipher and
 other fields related to cipher get set.

 Is there anything I might be missing in my driver or is there anything
 which I have to enable to receive any digest commands?
 BTW, I dont have any engine supported, so I dont use engine params
 while issueing command from openssl.

My guess is that you're initialising your engine too late - your engine 
will only become the default for crypto algorithms/modes that it 
supports and that *haven't been used yet*. When something tries to use 
md5 for the first time, a default md5 implementation will be chosen and 
cached. You probably loaded your engine early enough to be there before 
anyone needed des/3des/aes, but after someone had already started using 
md5.
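A toy model of that first-use caching behaviour may make it clearer. The names here are purely illustrative, not the real ENGINE API:

```python
# Toy model: the first lookup of an algorithm picks and caches a
# default implementation, so an engine registered *after* that first
# use is ignored for that algorithm.

class EngineTable:
    def __init__(self):
        self.registered = {}   # algo -> implementation name
        self.cached = {}       # algo -> default chosen at first use

    def register(self, algo, impl):
        self.registered[algo] = impl

    def use(self, algo):
        if algo not in self.cached:   # first use picks and caches
            self.cached[algo] = self.registered.get(algo, "software")
        return self.cached[algo]

t = EngineTable()
t.use("md5")                   # e.g. randomness code hashes at startup
t.register("md5", "hw-engine") # engine loaded too late for md5
t.register("des", "hw-engine") # des not yet used, so this one sticks
print(t.use("md5"), t.use("des"))  # software hw-engine
```

That is the pattern the -DENGINE_TABLE_DEBUG logging should expose: an md5 selection logged before your engine loads means the software default was already cached.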

Specifically, I'm guessing that randomness gathering is your problem. The 
random code uses hashes extensively, and if that kicks in before you 
register your engine's md5 implementation, then the default s/w 
implementation has already become the live default. Try building your 
openssl libraries with -DENGINE_TABLE_DEBUG and add a big printf() just 
before you load your engine. If there is engine logging related to md5 
that occurs before you load your engine, that's the problem. Another 
thing to try is to call ENGINE_set_default() on your engine once it's 
loaded - your MD5 code after that should use your engine, even if the 
randomness stuff won't.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: FIXED - CRYPTO_set_dynlock_* mystery ... (was: Engine Issue: nShield 500)

2008-11-28 Thread Geoff Thorpe
On Friday 21 November 2008 14:50:41 Sander Temme wrote:
[snip]
 I would suggest a
 documentation fix, like so:

 Index: engines/e_chil.c
 ===
 RCS file: /home/openssl/cvs/openssl/engines/e_chil.c,v
 retrieving revision 1.9
 diff -u -r1.9 e_chil.c
 --- engines/e_chil.c  19 Nov 2008 14:21:26 -  1.9
 +++ engines/e_chil.c  21 Nov 2008 19:24:37 -
 @@ -164,11 +164,11 @@
   ENGINE_CMD_FLAG_STRING},
   {HWCRHK_CMD_FORK_CHECK,
   FORK_CHECK,
 - Turns fork() checking on or off (boolean),
 + Turns fork() checking on (non-zero) or off (0),
   ENGINE_CMD_FLAG_NUMERIC},
   {HWCRHK_CMD_THREAD_LOCKING,
   THREAD_LOCKING,
 - Turns thread-safe locking on or off (boolean),
 + Turns thread-safe locking on (0) or off (non-zero),
   ENGINE_CMD_FLAG_NUMERIC},
   {HWCRHK_CMD_SET_USER_INTERFACE,
   SET_USER_INTERFACE,
[snip]

Applied, thanks.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: CRYPTO_set_dynlock_* mystery ... (was: Engine Issue: nShield 500)

2008-11-21 Thread Geoff Thorpe
On Friday 21 November 2008 03:01:33 Massimiliano Pala wrote:
 Hi David,

 that is really nice.. although.. after I gave it a try... it does not
 really work :(

 Actually, it seems that the dynamic functions are never called... :(

 Investigating...

The attached example seems to work. I put it in the top-level directory 
of the (built) openssl tree and compiled with;
   gcc -Wall -Iinclude -o foobar foobar.c -L. -lcrypto

The output shows that dynamic lockids are negative;
About to test
Created lock 'dyn_lck.c:257'
Created lock 'dyn_lck.c:257'
Got new locks -1, -2
Doing mode 9 lock on 'dyn_lck.c:257' from foobar.c:47
Locked the lock
Doing mode 10 lock on 'dyn_lck.c:257' from foobar.c:49
Unlocked the lock
Destroying lock 'dyn_lck.c:257' from dyn_lck.c:329
Destroying lock 'dyn_lck.c:257' from dyn_lck.c:329
Destroyed the locks, DONE

Perhaps that'll help distinguish what's going on in your code?

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <openssl/crypto.h>

struct CRYPTO_dynlock_value;

static struct CRYPTO_dynlock_value *l_create(const char *f, int l)
{
	char *foo = malloc(256);
	if (!foo)
		return NULL;
	snprintf(foo, 255, "%s:%d", f, l);
	printf("Created lock '%s'\n", foo);
	return (void *)foo;
}

static void l_lock(int mode, struct CRYPTO_dynlock_value *v,
			const char *f, int l)
{
	char *foo = (char *)v;
	printf("Doing mode %d lock on '%s' from %s:%d\n",
		mode, foo, f, l);
}

static void l_destroy(struct CRYPTO_dynlock_value *v,
			const char *f, int l)
{
	char *foo = (char *)v;
	printf("Destroying lock '%s' from %s:%d\n",
		foo, f, l);
	free(foo);
}

int main(int argc, char *argv[])
{
	int lock, lock2;

	CRYPTO_set_dynlock_create_callback(l_create);
	CRYPTO_set_dynlock_lock_callback(l_lock);
	CRYPTO_set_dynlock_destroy_callback(l_destroy);

	printf("About to test\n");
	lock = CRYPTO_get_new_dynlockid();
	lock2 = CRYPTO_get_new_dynlockid();
	printf("Got new locks %d, %d\n", lock, lock2);
	CRYPTO_w_lock(lock);
	printf("Locked the lock\n");
	CRYPTO_w_unlock(lock);
	printf("Unlocked the lock\n");
	CRYPTO_destroy_dynlockid(lock);
	CRYPTO_destroy_dynlockid(lock2);
	printf("Destroyed the locks, DONE\n");
	return 0;
}



Re: CRYPTO_set_dynlock_* mystery ... (was: Engine Issue: nShield 500)

2008-11-21 Thread Geoff Thorpe
On Friday 21 November 2008 11:07:19 Max Pala wrote:
 P.S.: As this code is basically the same for every application, what
 about integrating a nice OPENSSL_init_pthread() function that will
 initiate all the static locks and the dynamic functions ? That would
 save *a lot of time* to many people.. :D If you do not use pthreads,
 then you might want to provide your own.. but at least 90% of apps can
 be safely ported to be multi threaded...

This is part of some overhauls I'm hoping to put together for the next 
release, if I can achieve that before 0.9.9 is tagged and released. It 
won't be possible to do this for the 0.9.8-stable branch though.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: FIXED - CRYPTO_set_dynlock_* mystery ... (was: Engine Issue: nShield 500)

2008-11-21 Thread Geoff Thorpe
On Friday 21 November 2008 14:41:08 Max Pala wrote:
 Hi Sander,

 I debugged the init process and it seems that you were right. The
 disable_mutex_callbacks is set to 1 at e_chil.c:578. Definitely it
 is due to initialization, at this point...

 ... looked into that, and... et voilas! Found the problem! The PRE
 commands were wrong. Indeed the following:

   5.engine_pre = THREAD_LOCKING:1

 caused the disable_mutex_callbacks to be set to 1, therefore no
 callbacks were used! A... what a nightmare! If you want to be
 sure, you can set it to 0:

   5.engine_pre = THREAD_LOCKING:0

 Przemek, this should solve also your problem - so you can enable
 multiple threads and get rid of your 'lock' around the signing
 function.

 I think that the config variable should have been called:

   DISABLE_THREAD_LOCKING

 because if THREAD_LOCKING is set to 1 - then the
 disable_mutex_callbacks is set to 1.. which should be the contrary
 (developer's error ?).

Yeah, that's unfortunate. Glad you found the problem.


 Very confusing... and besides, it should give out some warning!!!
 Anyhow, now the callbacks are called, and the server seems to run
 pretty ok with a relatively large amount of threads (150). But I still
 have to stress-test it...

 Thanks to all of you who helped me - now I have a single file with
 the code for OpenSSL and pthreads, both static and dynamic locks..

 Shall we include it into OpenSSL ?

   void OpenSSL_pthread_init( void );

As I stated in another post, I'm looking to overhaul the way 
certain infrastructure is setup - ie. right now; static locks, dynamic 
locks, memory allocation, thread IDs, ex_data, [...] are all specified 
independently - despite the fact they often need to be 
mutually-compatible. So I'm looking at combining these into platforms. 
My motivation is async-crypto, which requires additional infrastructure 
that adds to the mutual-compatibility requirements. In doing so, it'll 
be easier to provide pre-packaged platforms that include plug-in 
implementations for these things (eg. a pthreads platform, win32 
platform, whatever). It should also be possible to specify default 
platforms from the Configure target, without limiting application 
ability to override.

If only there were more hours in the day ...

Do you agree with Sander's patch suggestion? If so, I'll put that into 
0.9.8-stable and HEAD.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: CRYPTO_set_dynlock_* mystery ... (was: Engine Issue: nShield 500)

2008-11-20 Thread Geoff Thorpe
On Thursday 20 November 2008 20:57:10 Max Pala wrote:
 it seems that I am missing the usage of the set of obscure functions:

   CRYPTO_set_dynlock_create_callback()
   CRYPTO_set_dynlock_lock_callback()
   CRYPTO_set_dynlock_destroy_callback()

 but I have no idea how to initialize those functions - is there any
 example on how to do that by using pthreads ?

You might find the following of some help;

http://www.openssl.org/docs/crypto/threads.html

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: problems with VIA Eden sha1 HW acceleration in ssl

2008-11-19 Thread Geoff Thorpe
On Wednesday 19 November 2008 14:09:06 Jan Klod wrote:
 On Wednesday 19 November 2008 19:40:06 Michael S. Zick wrote:
  On Wed November 19 2008, Jan Klod wrote:
   On Wednesday 19 November 2008 19:28:51 Michael S. Zick wrote:
That simplifies things, try 0.9.8i
http://gentoo-portage.com/dev-libs/openssl
  
   Why? It worked for you?
 
  Because it is the current release version and
  takes next to no effort at all for you to try it.
 
  Mike

 Well, the result is: same .
 Maybe I should try to find some openssl dev and ask directly? You
 tried?

You're on the right list, the devs who can answer probably just aren't 
poised waiting to jump on list mail. Eg. real jobs and what-not. For 
turnarounds of half-a-day, you'll need a chequebook ... :-)

If neither Michael (Ludvig) nor Andy (Polyakov) respond in the next day 
or so, I'll try to take a look at (and understand) the state of the 
padlock engine code.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: problems with VIA Eden sha1 HW acceleration in ssl

2008-11-19 Thread Geoff Thorpe
On Wednesday 19 November 2008 15:14:21 Jan Klod wrote:
 On Wednesday 19 November 2008 21:02:06 Geoff Thorpe wrote:
  If neither Michael (Ludvig) nor Andy (Polyakov) respond in the next
  day or so, I'll try to take a look at (and understand) the state of
  the padlock engine code.
 
  Cheers,
  Geoff

 Well, thank you. I am currently looking at
 www.logix.cz/michal/devel/padlock and it looks like there is a
 ready-to-use solution! I don't know about the quality, though...
 One thing to ask right now: is it safe to patch 0.9.8i with patch,
 that has been intended (at least initially - it has been updated) for
 0.9.8b?

Please try for yourself if you're waiting on this. Eg. there are nightly 
snapshots downloadable and you can browse the source online 
too. patch --dry-run should also come in handy.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: openssl on Sun solaris failed

2008-10-16 Thread Geoff Thorpe
Responding to openssl-users which is the place for this sort of 
discussion, the openssl-dev list is for development of openssl itself 
(rather than using openssl or developing external code that uses it).

It appears your system (or your PATH) doesn't include the make binary. 
Compiling source code invariably requires a functioning 'make'. This is 
not an openssl problem, it is unlikely you could compile *any* source 
code with that environment. If you have a sysadmin or other support 
channel, you probably want them to fix the environment or install the 
missing package(s).
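A minimal pre-flight check along those lines (a sketch; the particular tool list is my assumption, not anything OpenSSL's build requires you to script):

```python
# Verify that build prerequisites such as 'make' are actually on the
# PATH before running Configure/make. The default tool list is an
# illustrative assumption.
import shutil

def missing_tools(tools=("make", "perl")):
    """Return the subset of tools that cannot be found on the PATH."""
    return [t for t in tools if shutil.which(t) is None]

gaps = missing_tools()
if gaps:
    print("install or add to PATH:", ", ".join(gaps))
else:
    print("build prerequisites found")
```

On Solaris in particular, the missing binary may simply live outside the default PATH (e.g. under a tools directory the sysadmin can point you at) rather than being uninstalled.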

Cheers,
Geoff

On Thursday 16 October 2008 07:23:05 Mustafa Ayoub wrote:
 Dear Sir,

 i am working on sun Solaris 10 environment i tried to install open SSL
 (openssl-0.9.8h)

 Find my steps below :

 # pwd
 /usr/src/openssl-0.9.8h
 # . ./config -t
 Operating system: sun4v-whatever-solaris2
 Configuring for solaris-sparcv9-cc
 /usr/bin/perl ./Configure solaris-sparcv9-cc
 # make
 bash: make: command not found

 Is this a right patch for openSSL? If not, please advise the right one;
 your support is highly appreciated.


 Best Regards
 Mustafa Ayoub (I.T.S.) (www.its.ws) Kuwait +965-97256940 Jordan
 +962-795266356
 _
 Explore the seven wonders of the world
 http://search.msn.com/results.aspx?q=7+wonders+world&mkt=en-US&form=QBRE



-- 
Un terrien, c'est un singe avec des clefs de char...


Re: how to fix bugs in openssl?

2008-10-16 Thread Geoff Thorpe
On Thursday 16 October 2008 12:32:01 Евгений wrote:
 Could I commit my patch to openssl source code to fix bug that I
 found?

No, but you're welcome to post details of the bug plus any fixes you have 
to propose. There is also a request tracker where you could describe the 
bug and your patch (which automatically forwards details to the 
mail-list).

http://www.openssl.org/support/rt.html

Cheers,
Geoff


-- 
Un terrien, c'est un singe avec des clefs de char...


Re: Year 2038 problem

2008-10-06 Thread Geoff Thorpe
On Monday 06 October 2008 11:19:08 Michael S. Zick wrote:
 A more likely possibility -
 All of the crypto-locks on the physical facilities will not work,
 nor any of the access cards - nobody will be able to get in.
 Meaning the world will be effectively, totally disarmed.

Or even better: effectively disarmed, totally.

If only it was an intentional DoS. I for one would gladly take the credit 
for such a bug.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: Cannot create keystore using Purify instrumented binaries.

2008-10-02 Thread Geoff Thorpe
On Thursday 02 October 2008 06:40:53 Sanjith Chungath wrote:
 I am getting thousands of UMRs and finally one segmentation error and
 a core dump while trying to create a keystore. Am using 0.9.8g.
 Everything works fine without purify. I also tried rebuilding openssl
 with PURIFY compiler option. But that also didnt help me.

Out of curiosity, did you configure openssl with no-asm before building 
it? I don't know if purify's instrumentation conflicts with assumptions 
in the assembly-optimised code, but disabling assembly optimisations 
would be one way of finding out.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: network connection encrypted/secure using ssl and sockets ?!

2008-09-03 Thread Geoff Thorpe
On Wednesday 03 September 2008 11:46:29 Ger Hobbelt wrote:
 On Wed, Sep 3, 2008 at 5:03 PM, Manuel Sahm [EMAIL PROTECTED] wrote:
 I want to make my network connection encrypted/secure using ssh.

 Please note that SSH is not SSL: SSH is a protocol on top of SSL.
 Since you're talking about sockets there, I take it you mean SSL.

Um, SSH is not a protocol on top of SSL. I haven't read anything else in this 
thread, but that one sort of stuck out ...

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: Compiling static vs. dynamic and building a universal binary

2008-07-17 Thread Geoff Thorpe
On Wednesday 16 July 2008 14:56:26 Kenneth Goldman wrote:
 [EMAIL PROTECTED] wrote on 07/16/2008 10:08:31 AM:
  2) using static builds has a benefit: you know exactly what your
  application is going to get SSL-wise: you will be sure it is installed
  on the target system because you brought it along. The drawback is
  that you have to provide your own update path to track security fixes
  -- that is compared to an OS/platform where others do the tracking and
  updating for you (e.g. active Linux distros  with dynamic libraries).

 Is this really a drawback?  Since OpenSSL updates break backward
 compatibility, there is a problem as well with dynamic libraries.
 Someone installs an update, possibly automated, possibly the install
 of another program, and suddenly your application fails in strange
 ways.

 [... my quixotic plea for NEVER breaking backward compatibilty]

Has this ever been (in recent history) an issue within a given release branch? 
Ie. has 0.9.8(n+1) ever broken apps that were running ok against 0.9.8n? 
0.9.8x is of course not backwards compatible with 0.9.7y, and 0.9.9 will not 
be backwards compatible with 0.9.8 either. But that's why (reputable) distros 
allow these branches to coexist and be upgraded independently.

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: Compiling static vs. dynamic and building a universal binary

2008-07-17 Thread Geoff Thorpe
On Thursday 17 July 2008 12:26:33 Bruce Stephens wrote:
 Geoff Thorpe [EMAIL PROTECTED] writes:

 [...]

  Has this ever been (in recent history) an issue within a given
  release branch?  Ie. has 0.9.8(n+1) ever broken apps that were
  running ok against 0.9.8n?  0.9.8x is of course not backwards
  compatible with 0.9.7y, and 0.9.9 will not be backwards compatible
  with 0.9.8 either. But that's why (reputable) distros allow these
  branches to coexist and be upgraded independently.

 I suspect an application using PKCS12_create and passing a non-NULL
 name will segfault on 0.9.8h.  (I confess I've not actually tried
 that---I've only tried with an application built against 0.9.8h.)

 However, I guess (presuming the problem really exists, and I didn't
 mess up somehow) that's more a bug than a binary incompatibility.

yep, bugs are why 0.9.8x gets followed by 0.9.8y :-)

 0.9.8g (IIRC) broke source compatibility in the sense that at least
 some C++ compilers don't accept some of the headers.

Right, which would also be a bug. But in fact, the original question was about 
binary compatibility - ie. the analogous question would not be whether a C++ 
app could be built against 0.9.8g headers, but whether the pre-built app can 
link and run against 0.9.8g shared-libs *if* it had been built against 0.9.8f 
headers (and had been running ok with 0.9.8f shared-libs, aside from the bugs 
that 0.9.8g is supposed to fix).

Cheers,
Geoff

-- 
Un terrien, c'est un singe avec des clefs de char...


Re: Errors while building OpenSSL in Windows

2008-07-11 Thread Geoff Thorpe
Did you try building with an up-to-date CVS snapshot?
  ftp://ftp.openssl.org/snapshot/
I don't know if you were using some already-released package version, but if 
so, then you would miss any fixes since then. (Ie. we don't rerelease 0.9.8x 
when bugs are found, we release 0.9.8y instead...)

And FWIW, there's a number of windows issues[1] a contributor is helping me to 
fix right now, I hope that we'll be done with that soon. So it may be that 
things will get a little smoother at that point.

[1] And when I say "windows issues", I of course mean issues with OpenSSL 
compilation ... Vista is beyond anyone's handyman mojo ... grumble

Cheers,
Geoff

On Thursday 10 July 2008 16:50:20 Panthers Rock wrote:
  I am trying to do a default build of OpenSSL on Windows.  The compiler
 does not like building with ASM files and complains the following:

   ml /Cp /coff /c /Cx /Focrypto\sha\asm\s1_win32.obj
 .\crypto\sha\asm\s1_win32.asm

Assembling: .\crypto\sha\asm\s1_win32.asm

   Microsoft (R) Macro Assembler Version 8.00.50727.762

   Copyright (C) Microsoft Corporation.  All rights reserved.

   .\crypto\sha\asm\s1_win32.asm(13) : error A2008: syntax error :
 integer



   NMAKE : fatal error U1077: 'C:\Program Files (x86)\Microsoft Visual
 Studio 8\VC\bin\ml.EXE' : return code '0x1'

   Stop.


 This problem seems to be a known issue.
 http://marc.info/?l=openssl-dev&m=121204499318732&w=1

 I tried both the solutions mentioned but to no avail.

 Any other suggestions?

 Cheers,
 Simon M


-- 
Un terrien, c'est un singe avec des clefs de char...


Re: Wider fallout from Debian issue?

2008-05-30 Thread Geoff Thorpe
On Friday 30 May 2008 07:39:08 [EMAIL PROTECTED] wrote:
 I personally don't like the idea of generating keys that people will
 try, or using a weak/known key with small probability, but in this
 case I think it's so small that simply scanning for and banning such
 keys is good enough.

 I was hoping someone would release a tool to search for them in the
 authorized_keys files on any OS (e.g. my OpenBSD box), but AFAIK,
 nobody has.

In the debian (+ubuntu+...) case, their package updating machinery now brings 
in a black-listing package that essentially blocks the use of host keys or 
in a black-listing package that essentially blocks the use of host keys or 
user keys that match the "Debian Weak Key Space"(tm). I believe there's also 
a tool in there to scan - ah, found it. This is from the blacklist README;

   To check all keys on your system:
 sudo ssh-vulnkey -a
   To check a key in a non-standard location:
 ssh-vulnkey /path/to/key

Though there is some note in their README to the effect that this isn't 100% 
bullet-proof, ie. a weak key might not be detected as such. I'm not sure why 
though, if they were really only hashing the PID then you have to figure that 
the affected keys are a distinctly finite set. Perhaps the issue is 
64bit-vs-32bit and endianness platform variations. In any case, if in doubt, 
regenerate, grumble, and move on.

 I certainly don't want a kluge to the RNG...

Indeed, I've seen one RNG kludge so far this year, and that was one too 
many ...

Cheers,
Geoff




Re: CRYPTO_add_lock() segmentation fault (core dump included)

2008-04-08 Thread Geoff Thorpe
On Tue, 2008-04-08 at 03:35 -0500, Ion Scerbatiuc wrote:
 Hello!
 I wrote a multithreaded server using OpenSSL v 0.9.7a (running on a RH
 Enterprise Linux 2.6.9-55.0.2.ELsmp).
 The problem is my server is crashing at random times (it could stay
 alive for 24 hours or can crash within 4 hours). Inspecting the cores
 file I found that it crashes in the same location every time
 
 #0  0x00ba503f in CRYPTO_add_lock () from /lib/libcrypto.so.4
 
 I defined the two needed callbacks (according to crypto man page) like
 this:
 
 struct CRYPTO_dynlock_value
 {
 pthread_mutex_t mutex;
 };
 
 static pthread_mutex_t *mutex_buf = NULL;
 
 static void locking_function(int mode, int n, const char *file, int
 line)
 {
 if (mode & CRYPTO_LOCK) {
 pthread_mutex_lock(&mutex_buf[n]);
 } else {
 pthread_mutex_unlock(&mutex_buf[n]);
 }
 }
 
 static unsigned long id_function(void)
 {
 return ((unsigned long) pthread_self());
 }

Did you call CRYPTO_set_add_lock_callback() as well? You probably want
to set that and use the callback to do pthread_mutex_init().

Cheers,
Geoff





Re: Re: CRYPTO_add_lock() segmentation fault (core dump included)

2008-04-08 Thread Geoff Thorpe
On Tue, 2008-04-08 at 10:04 -0500, Ion Scerbatiuc wrote:
 Thank you for your reply!

You're welcome :-)

 I didn't find any references to CRYPTO_set_add_lock_callback() in the
 openssl man pages, nor the meaning of these functions/callbacks.

Ahh, well once you start to understand this stuff better, consider
yourself invited to submit patches to the documentation (look for
the .pod files in ./doc/crypto/).

 I didn't understand what CRYPTO_add_lock() does.

Nor did I until I saw your mail and took a quick look in the relevant
headers and code (crypto/crypto.h and crypto/cryptlib.c, respectively).

 Can you provide some information on these functions and maybe some code
 examples?

Nope, but I would if I could. May the source be with you. :-)

Cheers,
Geoff





Re: Nagios plugin installation for check_http ssl

2008-03-27 Thread Geoff Thorpe
Hello again,

I replied to this already on the openssl-dev list, although
openssl-users is the more appropriate of the two lists. Please don't
cross-post though. Thanks.

Cheers,
Geoff

On Wed, 2008-03-26 at 17:07 -0400, Azam Syed wrote:
 I loaded openssl 0.9.8g and when I compile the Nagios plugin it says yes
 next to openssl, but when I do the make I get the following. I
 compiled the Nagios plugin with [EMAIL PROTECTED]
 nagios-plugins-1.4.11]# ./configure --prefix=/usr/local/nagios/libexec
 --with-ssl-dir=/usr/local/ssl --with-libs="-ldl"
[snip]




Re: Question regarding use of SSL_get_ex_new_index

2008-03-25 Thread Geoff Thorpe
On Mon, 2008-03-24 at 17:38 -0400, Amit Sharma wrote:
 I have an application that creates a bunch of SSL connections during
 its life. For each of these connections, I have to store “application
 data” in an SSL object (in my case this is SSL_client object).  The
 trouble is that the memory allocated in the SSL_get_ex_new_index is
 never freed until the end of the application. I am tracking this
 through valgrind and can create a simple test case if that would help,
 but I think my problem is simply misusing the API.
[snip]
 My question is how can I use SSL_get_ex_new_index such that I can free
 memory once the SSL connection closes? Should I be re-using the index
 returned instead of calling the function multiple times – after all I
 have a new SSL_client object each time? 
[snip]
 I have tried setting the function pointers in the
 SSL_get_ex_new_index, but for some reason the callbacks are never
 called. Moreover the memory leaked is not an allocation that I have
 made and thus am unable to free it even if they were called. I have
 made sure that I am calling all the SSL freeing routines .. SSL_close,
 SSL_free etc.

I think maybe you're misunderstanding the API. SSL_get_ex_new_index() is
to register a new *type* of per-SSL application data. This is why that
API has no SSL parameter! :-) Once you've registered your new type of
SSL-association data (which includes arbitrary long and pointer values,
and callbacks for new, dup, and free), then every time an SSL object is
created, duplicated, or destroyed, your callbacks will get invoked.
Moreover, your callbacks are passed the long and ptr values associated
with this data type, as well as the type's index that was returned
from SSL_get_ex_new_index() (so the same callbacks and state could
implement multiple types of associated data via different index values).
To actually set a value in a new SSL object, you use SSL_set_ex_data()
(eg. called from within your new callback). In your free callback,
you probably want to use SSL_get_ex_data() for cleanup.

Hope that helps,
Geoff





Re: Clarification questions on OpenSSL thread-safe support

2008-03-11 Thread Geoff Thorpe
On Mon, 2008-03-10 at 17:23 -0600, Bryan Sutula wrote:
 My questions:
  1. What I understand from this is that OpenSSL can be thread safe.
 In order for it to be safely used in multi-threaded
 applications, it needs:
  A. to be built with multi-threaded versions of the standard
 libraries,
  B. to have the application provide the two callback
 functions, and
  C. the application must avoid using the same SSL connection
 by two different threads.
 All of the above are necessary.  In other words, it isn't
 sufficient that OpenSSL was built with the multi-threaded
 versions of the standard libraries.  The application must also
 set up the callbacks. (True or False, please?)

True. OpenSSL's role is not to be a threads implementation, nor a
wrapper around all known or conceivable threads implementations; its
role is to allow you to glue it to the thread implementation you intend
to use. Hence the callbacks.

  2. Related to question 1, the thread-safe requirements (A and B
 above) are needed even if the different threads are not sharing
 an SSL connection.  (My understanding is that connections can't
 ever be shared, and that the library still needs A and B in
 order to be thread-safe.)  (True or false?)

True. The different SSL contexts operating in different threads will
still be using shared structures that need to be synchronised. Eg. the
SSL_CTX that the SSL objects were derived from, certificates and
certificate-stores, public-keys, some global data specific to some other
corners of the libcrypto and/or libssl implementations, etc.

  3. Instead of B (implementing the two callback functions), is it
 sufficient for the application to provide it's own locking
 around all SSL library calls?  In other words, if the
 application guarantees that only one thread will be in the
 library at a time, is that sufficient?

Sure, but you'd have to mutex all accesses to *any* openssl interface.
Eg. see my answer to 2. If time spent in openssl is a minuscule
percentage of your run-time profile, then this may be tolerable. But
otherwise, you really don't want to be serialising all threads for any
use of openssl interfaces. The locking callbacks allow you to implement
the minimum required locking at the appropriate granularities (there are
CRYPTO_num_locks() different locks). The contention is likely to be a
lot less than if you lock your own code before it calls into openssl ...

  4. I'm guessing from the semantics of CRYPTO_set_locking_callback()
 and CRYPTO_set_id_callback(), that they are not to be called
 more than once from an application.  It seems like they have to
 be called only at the beginning of the program, and not ever
 again.  (True or False?)  Is there a way to know if they have
 already been called later on?

True, I think. I guess if you have some guarantee that the process is
quiesced (ie. "everyone please shut up for a moment while I change the
locking"), then you could conceivably change these. But not on the
fly, if you get my point. There may also be some "fail if they've
already been set" checking in openssl, in which case this would fail. As
always, you have access to the source code - feel free to dig.

Finally, yes, you can call CRYPTO_get_locking_callback() and
CRYPTO_get_id_callback() to see if they've already been called.

  5. There are some other dynlock functions described in the
 threads(3) man page.  The wording on that page implies that they
 are only needed for performance, or maybe in a future version.
 In my current application, they don't seem to be called.  Is it
 necessary to implement these?  Will they only be for
 performance?  If I don't implement them, will my application
 break in some future version of OpenSSL, or will it just run
 slower?  (The confusion results because the current man page has
 wording: "Multi-threaded applications might crash at random if
 it is not set", but also says dynamic locks are "currently not
 used internally by OpenSSL, but may do so in the future" and that
 "some parts of OpenSSL need it for better performance".)  What's
 the real situation here?

I agree that this is fuzzy. It should be fine not to provide dynlock
functionality for now, but of course be aware that if you maintain your
code going forward there may come a time where functionality within
openssl (or more likely, run-time/loadable extensions to openssl, such
as hardware/enhancement support) will fail due to its inability to
create new locks on the fly. Only you can judge whether this is or will
be an issue.

  6. Question 4 applies to the dynlock setup functions as well.  Same
 answer about calling them multiple times?  Any 

Re: Licenses...

2006-04-17 Thread Geoff Thorpe
On April 17, 2006 06:48 pm, Ted Mittelstaedt wrote:
   Since SSLeay is part of OpenSSL, Eric Young is by definition an
 OpenSSL author.

Egads man, would you please stop twatting on like this?! This is truly 
truly painful to watch. As if it weren't annoying enough to see the 
license getting (re)debated, despite the fact there's fsck all that can 
be done about it as things stand, we have to sift through your steaming 
piles of histrionics. Laxatives should be a strictly private matter, 
surely?!

grumble

 Therefore my statement is valid, as much as you may not like it.
 That is, the OpenSSL authors (you included) DON'T want to change
 the license.

Richard has generously attempted to discuss this with you in a reasonable 
manner. Allow me to try another tack; blow it out your ear, doofus.

If your point is that the openssl authors don't want to change the 
license because Eric (presumably) doesn't want to change the license, 
then there is nowhere left to take this discussion. Well, except perhaps 
to raise the issue with Richard Dawkins, who will no doubt be worried to 
discover that you, busily posting away to openssl-users, are clear evidence 
of a significant anomaly that Darwinism can't explain. Yes, that's an 
insult, but I can assure you it's public domain - feel free to make lots 
of copies for yourself.

 I am sure you are going to squawk and claim that you want to change
 it.  But, Richard, you appointed yourself to talk for the rest of the 
 OpenSSL authors when you started arguing with me

?! What the hell have you been smoking?? Richard was speaking for himself, 
as were you. Unlike yourself however, Richard clearly was taking very 
few, if any, hallucinogens.

If you want to redistribute openssl with any of the license clauses 
removed (and/or replaced) - go ahead and try it. Good luck. In the mean 
time, please, for the love of all that is holy, stop buzzing like a 
detuned radio. This may be a difficult idea to swallow, but many people 
have thought about this issue, some have even discussed it with lawyers 
well-versed in the subject-matter, and bursting onto the forum as though 
you've had some mythic vision of a license nirvana for openssl is a sad 
spectacle to have to endure. Ragging on Richard because he apparently 
just can't understand your brilliance or worse, refuses to be enlightened 
by it, just makes this fscking aggravating to boot.

Discuss, question, reflect - by all means. But deranged evangalism should 
stay confined to the privacy of your own home (or nearest foreign policy 
think-tank).

Sincerely,
An author other than Richard

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: steps to use a dynamic engine from an application

2005-12-03 Thread Geoff Thorpe
On December 1, 2005 01:20 pm, Anil Gunturu wrote:
 Thank you for your response. I have couple of more questions:
   - If I use ENGINE_by_id("athena"), what should be the name and path
 of the engine implementation?

It depends on how the source was configured/built. Typically it will be 
within an 'engines' sub-directory of the installation path. The source is 
your friend for things like this. If you're using a prebuilt package, 
that is just as difficult to predict from here, but running 'strace' on 
an openssl binary when it tries to load dynamic engines would be a quick 
trick to figure it out (grep the output for libathena, for example).

   - I understand that ENGINE_cleanup() should
 be called before shutting down the application, but can I call
 ENGINE_finish() and ENGINE_free() before the application has finished
 using the ENGINE?

If you have your own references, yes you have to release them. Likewise, 
if the library maintains its own internal references (eg. when you 
register an engine into the internal list(s) - whether as a default 
implementation or not) you have to tell the library to release its own 
references too, using ENGINE_cleanup(). Failing to do either will result 
in the ENGINE not being unloaded (although if your app exits, the kernel 
will of course clean up anything stray). Again, if you're not sure what's 
going on here, take a look at the source (in ./crypto/engine/) and it may 
become clearer. The engine structure, internally, maintains two reference 
counts; 'struct_ref' and 'funct_ref'. The latter is like a specialised 
form of the former - if you increment funct_ref, you should also 
increment struct_ref - so struct_ref >= funct_ref at all times. 
'struct_ref' represents references to the structure itself, whether it's 
enabled or not. 'funct_ref' represents 'enabled' references - so the 
engine is initialised if and only if funct_ref >= 1.

Hope that helps,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: steps to use a dynamic engine from an application

2005-11-29 Thread Geoff Thorpe
Hi there,

On November 29, 2005 03:05 pm, Anil Gunturu wrote:
 I am just wondering about the steps to use a dynamic engine. Can
 somebody verify this:

   e = ENGINE_by_id("dynamic");
   if (!e) {
 return RC_ERROR;
 }
   if ((!ENGINE_ctrl_cmd_string(e, "SO_PATH", so_path, 0)) ||
 (!ENGINE_ctrl_cmd_string(e, "ID", "ATHENA", 0)) ||
 (!ENGINE_ctrl_cmd_string(e, "LOAD", NULL, 0)))
 {
 ENGINE_free(e);
 return RC_ERROR;
 }

All of that should be equivalent to ENGINE_by_id("athena") if the engine 
has the appropriate name/path and you're using a recent version of 
openssl. But if that works for you, cool.

   if (!ENGINE_init(e)) {
 ENGINE_free(e);
 return RC_ERROR;
 }

   ENGINE_set_default_RSA(e);

   Also, when do I need to call ENGINE_finish() and ENGINE_free()?

Up until you call ENGINE_init() all you have is a *structural* reference, 
the engine may not be able to do anything (eg. if it's for hardware you 
don't have) but it lets you manipulate it. This reference should be 
released by ENGINE_free(). If ENGINE_init() succeeds, you have a 
*functional* reference as well, which is released by ENGINE_finish(). In 
your case, you've got one of each kind of reference so you'd need to 
release both.

However, ENGINE_set_default_RSA() will attempt to initialise the engine if 
it's not already initialised anyway (it can't be a default unless it's 
*working*). So don't bother trying to initialise it, then you only need 
to call ENGINE_free() once you're done. You need to check the return 
value of ENGINE_set_default_RSA() though if you want to know if it 
succeeded.

BTW, your application needs to call ENGINE_cleanup() when closing down, as 
this releases any/all internal references. Eg. ENGINE_set_default_RSA() 
causes an internal functional reference to be kept internally to prevent 
the engine from deinitialising/unloading.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: Problem with OpenSSL on Solaris x86 *

2005-10-04 Thread Geoff Thorpe
On October 4, 2005 08:00 am, Ted Mittelstaedt wrote:
   OpenSSL builds but fails tests.  Here's the particulars:

[snip]

 (cd ..; \
   OPENSSL=`pwd`/util/opensslwrap.sh; export OPENSSL; \
   /usr/bin/perl tools/c_rehash certs)
 Doing certs
 Segmentation Fault - core dumped
 argena.pem => .0
 Segmentation Fault - core dumped
 WARNING: Skipping duplicate certificate argeng.pem
 Segmentation Fault - core dumped
 WARNING: Skipping duplicate certificate eng1.pem
 Segmentation Fault - core dumped
 WARNING: Skipping duplicate certificate eng2.pem

[snip]

As you're getting core-dumps, it would be instructive to use those to get 
a backtrace. However I'd also mention that it would be more useful to 
rerun this with more debugging compiled-in. Eg. edit Makefile to remove 
the -O3 and add -g -ggdb3 or something like that, then make clean && 
make. Then if you still get the problem, the core-dump will provide a 
more useful backtrace.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

Même ceux qui se sentent pas des nôtres, ne nous voyant plus à genoux,
seront, plus que jamais, chez eux chez nous.
  -- Loco Locass


Re: Using OpenSSL with 'ubsec' hardware on FreeBSD

2005-04-19 Thread Geoff Thorpe
On April 19, 2005 06:40 am, [EMAIL PROTECTED] wrote:
 the issue with LD_LIBRARY_PATH appears to be void as there is NO
 libubsec.so on the filesystem. its simply not made. where can
 I get it from??? (on Redhat and Fedora Core  3  this file
 appears in the mystical 'hwcrypto' package)

The ENGINE in openssl is little more than a shim to the user-space 
libraries that support the hardware - this is not part of openssl or any 
openssl distribution (that I'm aware of), it is provided by the vendor 
just as the kernel-drivers and associated bits-n-bobs are provided by the 
vendor. Openssl's engine was originally compiled internally to openssl, 
but more recently it has been possible to build them as external 
libraries - this is probably what you see in the fedora package. In this 
way, the openssl shim is *also* external and so can be shipped by vendors 
(or distributions) at the same time as their proprietary user-space 
libraries and APIs.

This doesn't change the fact that the openssl engine knows nothing about 
the syscall interface or software environment of the hardware. It merely 
converts the API language openssl speaks into whatever interface the 
hardware's libraries use. So the library the ubsec engine is trying to 
load is the *vendor* library, the one that actually causes the real 
actions to happen. The library shipped in fedora was probably just a 
shared-library version of the ubsec engine, but it should *also* have 
needed to load the vendor library to work.

Whether that vendor library would work ok with the engine shim at a 
version-compatibility level is another thing - it probably should but no 
promises. However you need to find that library, and then convince 
openssl of how to find it too. If you got fedora running with the card at 
some point, then it must have had the vendor libraries installed and in 
some location where it could find them. Or it ships with the hardware 
support packaged-in somehow. Or have I misunderstood something.

BTW, someone mentioned in another post that the /dev/crypto engine might 
work on Free/OpenBSD if the kernel has a built-in driver, but that might 
only provide access to cipher/hash functionality - I doubt public-key 
crypto stuff goes through /dev/crypto. I should check, but I don't recall 
seeing this get added.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

Greedy Genghis George, Guru of God and Guns.



Re: Using OpenSSL with 'ubsec' hardware on FreeBSD

2005-04-18 Thread Geoff Thorpe
On April 18, 2005 02:09 pm, [EMAIL PROTECTED] wrote:
 # /usr/local/bin/openssl engine ubsec -vvv
 (ubsec) UBSEC hardware engine support
  SO_PATH: Specifies the path to the 'ubsec' shared library
   (input flags): STRING
 # /usr/local/bin/openssl speed rsa -engine ubsec
 can't use that engine
 34349:error:25066067:DSO support routines:DLFCN_LOAD:could not load the
 shared library:dso_dlfcn.c:153:filename(libubsec.so): Shared object
 libubsec.so not found, required by openssl 34349:error:25070067:DSO
 support routines:DSO_load:could not load the shared
 library:dso_lib.c:244: 34349:error:84069067:ubsec engine:UBSEC_INIT:dso
 failure:hw_ubsec.c:390: 34349:error:260B806D:engine
 routines:ENGINE_TABLE_REGISTER:init failed:eng_table.c:182:


 so, both OpenSSL versions are compiled with ubsec support...however,
 the world version doesnt support DSO's at all. the ports version
 DOES, but it cannot find libubsec.so - this, under RedHat, was supplied
 by 'hwcrypto' package - is there an official source for the libubsec
 software for FreeBSD for OpenSSL folk to use?

Which version of openssl is the ports tree based on? I don't know about 
the world version, but the problem with the ports one seems to be (so 
far) just a matter of paths. I don't do bsd, but I assume that tweaking 
with LD_LIBRARY_PATH or some such thing ought to be able to convince 
openssl to find libubsec.so. Whether the result will be 
version-compatible is another issue, but you might be lucky.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

Greedy Genghis George, Guru of God and Guns.



Re: 22 NOv 2004 SNAPSHOTS

2004-11-22 Thread Geoff Thorpe
On November 22, 2004 08:44 am, The Doctor wrote:
 Seem to be unavailable.

I know there was a problem with a full disk partition, this could be a 
consequence of that. I can't follow up directly, but please check the 
next snapshot, as that should be ok if this was just a disk-space 
issue.

Salut,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

Greedy Genghis George, Guru of God and Guns.



Re: Hardware Acclerator for Mod exp calculations

2004-11-12 Thread Geoff Thorpe
On November 11, 2004 05:32 pm, fiero b wrote:
 I have an API provided by hardware crypto for
 public key mod exponent calculations.
 Please let me know what is the best way to hook up
 this Mod exp routine into the openssl public key
 operations so that DH,RSA will make use of the
 Hardware Mod exp rather than software Mod exp.

Take a look at the atalla engine implementation as an example. In CVS 
snapshots, it's in engines/e_atalla.c, and in 0.9.7 it's in 
crypto/engine/hw_atalla.c.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

Greedy Genghis George, Guru of God and Guns.


Re: DOD Root Certificates and OpenSSL

2004-10-22 Thread Geoff Thorpe
On October 22, 2004 10:54 am, Golub Heath wrote:
 Sorry in advance but I am fairly new to OpenSSL and though I have read
 a lot .. .I just can't seem to get it right. Any help, even direction
 pointing (eg. a URL) would be greatly appreciative.

You needn't worry, what you're asking is far from the easiest thing people 
have problems with :-)

 I have also tried the certificates with just the DOD Class 3 CA-3 in
 the DoDSub-ca and all the rest in the DoDRoot-ca files. Any advice?

As a first step, I'd recommend testing this out with openssl at both ends, 
ie. verify your understanding of which certs are which (and what effect 
they have). Use "openssl s_client" on the client side, "openssl s_server" 
on the server side. The -CAfile, -cert, -key, and -[Vv]erify arguments 
are what you need to control the cert behaviour, and "-showcerts" 
wouldn't be a bad idea either.

The second step would be to run IE on the client side but stick with 
s_server on the server - this will tell you whether IE is causing your 
problems. It won't respond with a web-page of course, but the browser 
should pause waiting for a response while you sift through the s_server 
output.

Good luck,
Geoff
-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: In custom RSA_METHOD, rsa_priv_enc() is enough?

2004-10-10 Thread Geoff Thorpe
Hi Peter,

On October 8, 2004 07:42 am, Peter 'Luna' Runestig wrote:
 I'm sorry, I was a bit unclear in my original post. I'm using this on a
 SSL/TLS *client*, to connect to a (telnet) server. So I am using client
 authentication. And I have tried numerous different cipher setups, but
 I can only trigger rsa_priv_enc() to be called. Do you have any tip for
 me, how I might trigger e.g. rsa_priv_dec() to be called?

Ah, ok. What about public-key ops, are you getting called for those? The 
client should certainly be trying to authenticate the server which would 
require at least one public key operation, probably rsa_pub_dec() in 
RSA_METHOD-speak. But I'm not sure you need any rsa_priv_dec() operation. 
Certainly, both sides need to sign for authentication, but only one side 
should need to decrypt for the key-exchange. Because the server always 
authenticates (client-authentication is quite rare), it's a pretty safe 
bet with non-ephemeral RSA key-exchange that the client encrypts and the 
server decrypts. That would certainly explain what you're seeing.

 Yes :-) I post the code here, in case someone is interested/have
 feedback/finds errors. It's all wrapped in a function:

 int SSL_CTX_use_CryptoAPI_certificate(SSL_CTX *ssl_ctx, const char
 *cert_prop);

Ah, I think you'd probably find it generally easier to spread your code if you 
used something other than a new API to stitch the application up to your 
CryptoAPI stuff - I can say from personal experience that you'll have a 
difficult time getting applications to support one-off APIs. Eg. if you 
exported the certificate details so they can be used directly by openssl 
apps as PEM files, then you only need to make sure the application uses 
your ENGINE for it to be able to hook all the private key work to the 
appropriate CryptoAPI token. (You could put in a placebo key-file to 
satisfy any applications that don't support the ENGINE_load_private_key() 
API.)

Cheers,
Geoff
-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: In custom RSA_METHOD, rsa_priv_enc() is enough?

2004-10-06 Thread Geoff Thorpe
Hi Peter,

On October 5, 2004 12:38 pm, Peter 'Luna' Runestig wrote:
 I've managed to hack together a custom RSA_METHOD, based on Microsoft
 CryptoAPI (on Windows XP in my test case), to use a smart card for
 authentication. And it actually works, as far as I have managed to test
 it anyway. But I'm a little puzzled: When I'm running it, the only
 (crypto-related) RSA_METHOD callback that gets called, is
 rsa_priv_enc(), once. Even with a negotiated crypto like AES256-SHA,
 that, AFAICS, uses RSA for key exchange. Is this as expected, or is
 there other test cases that might trigger other callbacks (that needs
 to be implemented then)?

This makes sense if you're using SSL/TLS with simple cipher-suites and 
configs (ie. no client authentication, no ephemeral key-exchange, etc.) 
Essentially, the server needs to be able to sign (for RSA, this is 
priv_enc) some data using the private key corresponding to the public key 
that's already been sent to the client via the certificate. Glad you got 
this working with CryptoAPI BTW ;-)

 BTW, RSA_new_method() isn't called with a RSA_METHOD*, but with an
 ENGINE*. Confusing?

Well a lot of things are confusing when you're trailing around legacy APIs 
and trying to stick to the nomenclature. :-) RSA_METHOD used to be the 
be-all-and-end-all wrapper for plug-in RSA implementations. The problem 
is that they're assumed to be statically linked and always-on (ie. no 
initialisation required, etc). ENGINE is a modular wrapper around 
RSA_METHOD and various other interfaces that takes care of these extra 
admin details. If you provide a NULL engine to RSA_new_method(), it'll 
pick the default (as is/was the case with RSA_METHOD). If you provide a 
non-NULL engine, it'll try to use that ENGINE's RSA_METHOD implementation 
- however it'll also make sure to verify the implementation is 
initialised, bump the reference count for use by the newly generated RSA 
object, etc etc etc.

Cheers,
Geoff
-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: traffic sniff

2004-09-28 Thread Geoff Thorpe
On September 28, 2004 03:15 pm, Shen, Lei wrote:
 Anyone knows how I can know how many negotiations is actually going on
 by just sniff the raw tcp ip data?

 Is there any sniff package can roughly understand the ssl protocol.
 I do not need to decode the actual message, I just want to know stuff
 at the protocol level, like when a negotiation happened, when an error
 happened due to one party is not working as protocol defined.

Ethereal can categorise SSL/TLS packets at the record-layer if that's good 
enough. If you're wanting to get down into the protocol more deeply, try 
Eric's 'ssldump' tool.

Cheers,
Geoff
-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: Trouble using PKCS5_pbe2_set()

2004-09-13 Thread Geoff Thorpe
On September 13, 2004 10:04 am, Steve Hay wrote:
 The above seems to work OK after a quick test, but why does one need

 EVP_CIPHER cipher = *EVP_bf_cbc();
 cipher.key_len = ...

 rather than

 EVP_CIPHER *cipher = (EVP_CIPHER *)EVP_bf_cbc();
 cipher->key_len = ...

 ?

Because with the latter you are trying to alter the original structure, 
which is probably compiled const in the library and your binary loader 
is probably putting it in read-only-mapped memory.

 typedef struct fred_st FRED;
 struct fred_st { int foo; };

 const FRED *new_fred(void) {
 static FRED fred;
 fred.foo = 16;
 return &fred;
 }

 void main(void) {
 FRED *fred = (FRED *)new_fred();
 EVP_CIPHER *cipher = (EVP_CIPHER *)EVP_bf_cbc();

 printf("FRED foo = %d\n", fred->foo);
 fred->foo = 24;
 printf("FRED foo = %d\n", fred->foo);

 printf("BF-CBC key len = %d\n", EVP_CIPHER_key_length(cipher));
 // The next line causes an Access Violation:
 cipher->key_len = 24;
 printf("BF-CBC key len = %d\n", EVP_CIPHER_key_length(cipher));
 }

Try defining your FRED structure as const and see if that doesn't help it 
crash. Anyway, the fact remains that you are better to copy the original 
implementation and then manipulate your copy.

Cheers,
Geoff
-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: max sessions

2004-07-29 Thread Geoff Thorpe
On July 29, 2004 02:20 pm, Joseph Bruni wrote:
 The other thing I noticed was that (according to the man page for
 select()) the results of the FD_ macros are undefined if the descriptor
 value is greater than FD_SETSIZE, which is 1024 on my system. I find
 this odd since the hard limit of the number of files any given process
 can have open is kern.maxfilesperproc = 10240. Is this a limitation of
 the POSIX API or could the man page for select() be wrong? Does anyone
 have any insight into the proper use of select() if the descriptor
 values are larger than FD_SETSIZE? Or maybe some other function that
 replaces select() for programs with LOTS of descriptors?

I don't know which system you're running, but perhaps you might have more 
luck with poll(2)?

Cheers,
Geoff
-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: Use of Engine

2004-07-07 Thread Geoff Thorpe
On July 7, 2004 06:39 pm, Joe smith wrote:
 I am new to openssl and am still exploring its use. Can someone tell me
 what is the use of the various Engines in openssl.

Well that depends on who you listen to, there are some who would tell you 
that the sole use of those engines is to grow your libraries by a few 
Kb :-) The other explanation that's been floated from time to time 
suggests that those engines provide alternative implementations of 
various cryptographic algorithms/services in a plug-in form. Most of the 
current ones are there to support cryptographic hardware of various 
sorts, but there's also one that provides an RSA implementation based on 
the GMP library (http://swox.com/gmp/), and [Open|Free|?]BSD systems also 
get an engine that exposes their /dev/crypto kernel crypto interface.

 And what happens if 
 I disable the engine?

Well, if you disable engine functionality in the openssl libs, it means 
that your application won't be able to use engines. Of course, if your 
application doesn't make any engine API calls (or it does, but you're 
already using recent openssl CVS snapshots, where engines can usually be 
built as separate load-on-demand libraries), then there won't be any 
significant engine footprint for your application unless it actually 
loads and uses an engine at run-time. The ability to build openssl libs 
with engine support completely disabled was more relevant in older 
releases where footprint bloat was a problem, though it may still be 
relevant for some restricted (eg. embedded) environments where disk space 
(or flash memory) is limited.

Hope that helps.

Cheers,
Geoff
-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: Using 2 or more engines

2004-03-02 Thread Geoff Thorpe
On March 2, 2004 10:58 am, Giovanni Calzuola wrote:
 I'd like to use 2 or more engines without using the functions
 ENGINE_set_default, due to problems of concurrency. I want to sign with
 a hardware key, while using software keys for SSL.
 How can I do this?

That depends rather heavily on what "hardware key" means. If the 
corresponding ENGINE supports it, you should use 
ENGINE_load_private_key().

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: Using 2 or more engines

2004-03-02 Thread Geoff Thorpe
On March 2, 2004 11:40 am, Giovanni Calzuola wrote:
  That depends rather heavily on what hardware key means. If the
  corresponding ENGINE supports it, you should use
  ENGINE_load_private_key().

 I'd like to use a software engine by default and occasionally get a key
 from a pkcs#11 engine.
 Such a pkcs#11 engine, in order to retrieve the private key, makes a
 call to PEM_read_PUBKEY(fp, .. .. ..), which, through several calls,
 calls RSA_new_method(NULL), and consequently returns the method of the
 default engine, which is the software one.
 If I could pass an engine structure to the PEM_read_PUBKEY, and tell it
 how to get the corresponding RSA, I think that I'll find the solution
 to my problem.
 Any idea about it?

Why is a pkcs11 engine calling PEM_read_***? The ENGINE_load_private_key() 
functionality was created to do precisely what you're asking for, and 
this hooks off a callback provided by the engine implementation that 
should allow it to provide hardware-specific key-loading support. If it 
only calls PEM functions, then it is not written to handle HSM keys.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



openssl-based win32 programs

2003-10-15 Thread Geoff Thorpe
This is a note for any developers of openssl-based products on win32 
platforms. Due to some mild curiosity recently, I tried running a win32 
build of openssl libs and utils under Wine[1] to see what problems there 
might be.

As it turns out things are pretty smooth with the libraries, the one 
exception was that some of the PRNG-seeding techniques were crashing on 
some unimplemented win32 API functions so I've sent in some stubs for 
those that have since been applied to wine CVS. With respect to the 
openssl binary utilities (openssl.exe and friends), one of the wine 
developers managed to fix a couple of bugs openssl highlighted in its 
console behaviour, so this is now working fine too. So as far as I can 
tell, there should be no remaining problems using openssl win32 libs or 
executables under Wine. The current release should already have these 
fixes incorporated, though otherwise it's certain the next release will.

If any of you have considered or tried running your own win32 
applications/products under Wine rather than porting them to 
glibc/gtk/etc, you may have already abandoned this possibility due to 
OpenSSL's use of these missing APIs. If it's not too late to do so, you 
should consider trying this out again, it may just work. :-) If you 
haven't tried this before and your product only supports the windows 
platforms, it could be very helpful to you, the Wine project, and open 
source causes in general to check out how well your product works under 
Wine. If there are any issues, they can usually be diagnosed quickly when 
the authors of the programs concerned are cooperating with the debugging 
efforts, and all such efforts go to improve the overall win32 
compatibility of Wine and its ability to give users of win32 programs a 
meaningful alternative to MS windows. I will provide all the help I can 
to anyone trying to Q/A their win32 crypto/SSL applications under Wine, 
and there are many active Wine developers who are also likely to welcome 
and help anyone who's willing to accompany some new win32 test cases 
through the required testing.

So, anyone out there with win32 programs they want to try on Wine?

Cheers,
Geoff

[1] FYI: wine (from www.winehq.org) is an implementation of win32 
binary-loading and the system DLLs that allow win32 programs to execute 
natively on *nix platforms. Roughly speaking, the wine binary loader does 
to win32 PE executables and libraries what the linux binary loader 
(ld.so) does for native ELF executables and libraries. The Wine DLLs 
are equivalents for their MS windows counterparts, and are usually built 
as native shared-libraries on the host platform. In essence, the main 
difference between running win32 applications under Wine/*nix instead of 
windows, apart from any unfinished work in Wine itself or unreproduced 
bugs from windows, is what the underlying kernel is. On both platforms 
the loader, linker, and API shared libraries do equivalent jobs, and 
performance of applications should be more or less comparable in most 
cases (with only a few exceptions heavily favouring one platform or the 
other). The main thing to remember w.r.t. any performance fears is the 
acronym; WINE, Wine Is Not an Emulator. :-)

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: openssl-based win32 programs

2003-10-15 Thread Geoff Thorpe
On October 15, 2003 11:47 pm, Brian Hatch wrote:
  As it turns out things are pretty smooth with the libraries, the one
  exception was that some of the PRNG-seeding techniques were crashing
  on some unimplemented win32 API functions so I've sent in some stubs
  for those that have since been applied to wine CVS. With respect to
[snip]
 Do these stubs actually provide randomness as if it's in a
 Windows environment, is it just seeding with predictable
 data, or is it snagging true entropy from the underlying
 unix-like system (/dev/*random, etc) ?

No, the standard mechanism for this sort of thing is to stub the API 
implementation with a place-holder function that returns whatever is the 
appropriate equivalent of failure. This way, linking works, and 
error-tolerant applications also work without anyone actually having to 
devise an appropriate posix-based implementation of the corresponding 
win32 concept. MS windows does this sort of thing itself too in cases 
where the presence of features in an API function depend on the system, 
version, drivers, etc. In this particular case, it was obscure APIs for 
walking stacks/heaps/etc that are still missing. This stub in Wine isn't 
a problem for PRNG security because the PRNG scores entropy sources as 
it adds them, so the failure of these APIs means that the PRNG doesn't 
consider them as contributing entropy either. The win32 PRNG 
implementation must be getting plenty of entropy successfully from the 
other sources it uses.

OTOH what you're describing is a form of porting or simulation which isn't 
really appropriate here, as the desire is to have existing executables 
and libraries work as-is within Wine - even to the point that (in theory) 
you should be able to switch between dynamic linking with any mixture of 
Wine and/or MS versions of DLLs (except ntdll and kernel, for what should 
be fairly obvious reasons).

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: OpenSSL + ECC

2003-10-07 Thread Geoff Thorpe
On October 7, 2003 01:49 pm, Jeroen wrote:
 Does OpenSSL still have Sun's non-free ECC cryptography?

The copyright note in the ECC header file, ./crypto/ec/ec.h, should put 
your mind at ease. I won't get into the what is non-free debate, but 
would rather simply make the comment that I'd be distinctly more 
concerned about dancing around in patent minefields than having conflicts 
over source code licensing when it comes to running elliptic curve 
technology. Then again IANAL, and I don't use ECC knowingly anyway. 
However, for your peace of mind, here's the clipping from that header 
file.

/* 
 * Copyright 2002 Sun Microsystems, Inc. ALL RIGHTS RESERVED.
 *
 * Portions of the attached software (Contribution) are developed by 
 * SUN MICROSYSTEMS, INC., and are contributed to the OpenSSL project.
 *
 * The Contribution is licensed pursuant to the OpenSSL open source
 * license provided above.
 *
 * The elliptic curve binary polynomial software is originally written by 
 * Sheueling Chang Shantz and Douglas Stebila of Sun Microsystems 
Laboratories.
 *
 */

 I've contacted the maintainer. He didn't find references in the code
 nor heard about it.

A grep on "Sun" or "SUN" would have turned this up easily, or are you 
dealing with an older version?

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: reversing md5, sha

2003-09-24 Thread Geoff Thorpe
On September 24, 2003 07:53 pm, Michael Sierchio wrote:
 David Schwartz wrote:
  Nonsense. See US patent 5,533,051 (Compression of Random Data) and
  5,488,364 (Recursive Data Compression). Basically, you separate the
  ones from the zeroes and compress them independently.

 Does it compress to one bit, or two?

It compresses to zero bits, as you can easily demonstrate using an 
inductive proof.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: Dodgy Microsoft fix emails

2003-09-22 Thread Geoff Thorpe
On September 22, 2003 02:44 pm, Frank wrote:
 Finally somebody with a clue!!!

Whatever the quality of the entries in this philosophical discussion, it 
is totally off-topic for this list. It seems that very few viral emails, 
if any, have turned up through the list server. 

OTOH: what *is* flooding this mail list is a tiresome discussion about 
mail scanning philosophy coupled with a rather onanistic battle about 
who's been on the 'net longer than who. Or whom, if I might beg 
pardons, whatever.

Innumerable IRC groups, message boards, newsgroups, blog rings and other 
non-openssl mail lists exist for this sort of merriment, so please take 
it there.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: Official way to increment SSL_SESSION reference count

2003-09-08 Thread Geoff Thorpe
Hi Brian,

On September 8, 2003 11:36 am, Brian Hatch wrote:
 Is there a function that I can use to increment the refernce
 count of an SSL_SESSION* ?  Obviously I can just
 ssl_session->references += 1 -- the program is single threaded,
 so that's not even a problem.  I'd like to avoid using an
 additional SSL_get1_session call because it's not clean for the
 purpose in this case.  (One section of code has an SSL_SESSION*
 and another needs to get a copy without having access to the
 underlying SSL*, but the code comes from two products, so I wouldn't
 want to introduce a memory leak in the one that actually does the
 SSL stuff.)

Your best bet is to increment the reference count directly, there's no 
existing SSL_SESSION API function for doing this. If you want to make 
your code thread-safe in case it gets reused later under threading 
circumstances, then wrap it with the appropriate locking;

CRYPTO_w_lock(CRYPTO_LOCK_SSL_SESSION);
sess->references++;
CRYPTO_w_unlock(CRYPTO_LOCK_SSL_SESSION);

Oh, and thanks for making me look at this - I've just realised the locking 
in ssl/ssl_sess.c is wrong ... committing a fix shortly. :-)

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: Official way to increment SSL_SESSION reference count

2003-09-08 Thread Geoff Thorpe
Hi,

On September 8, 2003 12:38 pm, Dr. Stephen Henson wrote:
 On Mon, Sep 08, 2003, Geoff Thorpe wrote:
[snip]
  CRYPTO_w_lock(CRYPTO_LOCK_SSL_SESSION);
  sess->references++;
  CRYPTO_w_unlock(CRYPTO_LOCK_SSL_SESSION);
 
  Oh, and thanks for making me look at this - I've just realised the
  locking in ssl/ssl_sess.c is wrong ... commiting a fix shortly. :-)

 Any reason for not using CRYPTO_add()?

In Brian's case, no - you're probably right that it's simpler. I was 
following the logic in ssl_sess.c which (I think) can not use 
CRYPTO_add() (otherwise there could be a race between checking if 
ssl->session exists and calling CRYPTO_add() on its reference counter).

Either way, Brian's quibble about violating encapsulation remains valid - 
you have to manipulate structure elements directly in some form or other.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: Foundational questions

2003-09-05 Thread Geoff Thorpe
Hi Dann,

On September 5, 2003 08:08 pm, Dann Daggett wrote:
 But your answer brings up yet another question :) Most people do not
 have their own certificate, yet are able to do https transactions with
 secure web servers. Does each browser have a default certificate it
 presents in this case? And does that need to be verified? If so, how
 would I know which root certs need to be available for such cases?

In typical situations (I'm ignoring weird stuff to make things simpler), 
SSL/TLS will be server authenticated, which is to say that the client and 
server establish communications that are secure between the two 
end-points, however it is only the client that has any confidence in the 
identity of the server. In particular configurations, the server can ask 
(as part of the handshake) that the client supply a certificate as well, 
so that both sides are authenticating the identity of the other. Or more 
accurately, they can authenticate the identity that the certificate (and 
the CA who signed it) claims the certificate owner to be.

Anyway, even if the server does ask for the client to provide a 
certificate (most secure web-servers don't do this, for example), the 
client may elect not to. If this happens, it is up to the server to 
decide whether to continue or not. The client might choose not to 
authenticate itself because he/she doesn't want to, he/she has no 
certificate to use, or the certificate(s) that he/she has available are 
not signed by any of the CA certificates that the server mentioned in 
its CertificateRequest message. (When the server asks the client to 
authenticate itself, it specifies the ids of those CA certificates it is 
prepared to trust for authenticating the client).

In short, don't worry about it. There are not many situations where 
SSL/TLS servers (particularly web-servers) ask for client authentication.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: Need of client session reuse advice

2003-08-04 Thread Geoff Thorpe
Hi Henrik,

On August 4, 2003 11:16 am, Henrik Nordstrom wrote:
 I am looking into how to best add client session reuse to Squid when
 acting as a SSL client. (yes, Squid does SSL these days)

As an avid user of Squid, I'd certainly be chuffed if I can help.

 I think I have got the SSL_get1_sess and SSL_set_session picture, and
 have some kind of idea of how to use this using your own cache
 structure. Still needs to look into how to correctly manage
 time-to-live etc.

 But what confuses me is the fact that there is a SSL_SESS_CACHE_CLIENT
 session cache mode (SSL_set_session_cache_mode). Can this cache mode be
 used to make life easier somehow? I understand it will cache client
 sessions, but how to access the cached sessions? And how to find the
 correct set of cached sessions among all different sessions used in
 this SSL_CTX if the same SSL_CTX is used for connecting to different
 SSL servers?

If I were you, I would attempt to disable SSL_CTX-internal session caching 
completely. Your later suggestion (providing cache callbacks) is a 
better/more-reliable way to go, and I'm thinking that with Squid you want 
to very carefully managing your own storage (you are pretty 
leak-intolerant after all :-).

   * Set up application data fields to identify which server connection
 the session belongs to (the keys needed to later look up the session
 etc, i.e. ip:port).

   * Register a SSL_CTX_sess_set_new_cb to index the cached sessions
 using the data set in 1.

Yes. There's normally no reason to cache more than one (client) session 
for any given server, and usually the best strategy is to cache the most 
recent one (or more correctly, the one that expires last). Question: how 
are you handling client-authentication if it arises? There is a can of 
worms waiting for you here, perhaps you're already aware of it? If squid 
is proxying on behalf of multiple clients, then you risk introducing some 
weird security issues. For better or worse, HTTPS is often used in a 
rather layer-violating way. Eg. if web-server logic attaches/indexes 
state based on SSL/TLS session details, then if you (as squid) reuse that 
session for a different client, you risk having that client be 
interpreted by the server as the client who happened to be proxying when 
the session was negotiated. So my offhand comments about how to cache the 
sessions should be taken with a rock of salt - how you match 
squid-accepts up to squid-connects influences greatly how you should 
handle this. Also, https-https proxying is a different kettle of fish to 
http-https proxying. How are you doing this anyway? Do you specify a new 
CA cert to the clients that, once accepted, allows you to do dynamic MITM 
SSL/TLS proxying by faking server certificates according to the CONNECT 
string? Or something else?

* Should the SSL session be reused for multiple concurrent
 connections to the same server where possible, or only one connection
 at a time?

There's no harm reusing it for multiple concurrent connections - this is 
the most common application (a browser negotiating a session for an html 
page will then typically fire off various concurrent session resumes to 
go back and pick up all the image files too). As I say, the question is 
more how you identify/index SSL sessions in a satisfactory way (and with 
suitable granularity) so that you get the maximum performance pay-off 
from resumes, but without creating mistaken identities for any server 
that matches up browser-clients to corresponding SSL/TLS state.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: FQDN

2003-07-26 Thread Geoff Thorpe
Hi David,

I'm going to excuse myself from this discussion at the conclusion of this 
mail - it's consumed enough list bandwidth without further eating into my 
own limited resources.

Clipping;

On July 25, 2003 11:48 pm, David Schwartz wrote:
[snip]
   Not at all. SSL with comparison of the certificate name to name of the
 intended recipient is invulnerable to MITM attacks. All a MITM can do
 is prevent data from being exchanged. (And, of course, a MITM can
 always do that.)
[snip]
   How will you know something's wrong if you don't compare the name in
 the certificate to the name of the server you intended to speak to?
 That's not part of SSL/TLS. SSL/TLS itself is vulnerable to plaintext
 compromise from a MITM attack unless you confirm the certificate in
 some way.
[snip]

You are confusing SSL/TLS with use-cases for SSL/TLS. SSL/TLS gives you 
protection against MITM attacks modulo the identity assurances you can 
determine (and define) from X509. What you do with X509 is your business 
and if you foul that up (or ignore it, which amounts to the same thing) 
then *you* are vulnerable to MITM. Saying "SSL/TLS itself is vulnerable to 
MITM unless you do something extra that's not defined in SSL/TLS" is 
like saying a home alarm system provides no protection against 
intrusion until you install it in a house. NB: OpenSSL's *implementation* 
will attempt to provide a *correct* handling of X509, but will not 
magically conjur up context for you. Ie. it should apply expiry checking, 
constraints, etc. It will not check certificate fields for "intended 
peer" information, because it doesn't know the application, the 
requirements, or the transport (you know, BIOs exist for more than just 
hiding TCP/IPv4 ...).

   A MITM can deny service. A MITM can make himself known. A MITM can
 fail to obtain any plaintext. All of these things are within the scope
 of what a MITM can do, they just might fail to make successful attacks.
[snip]

You are trying to bamboozle us, I think. A MITM in your world is any 
programmable location between the end-points who does anything except 
ignore your traffic. Fine, whatever. This is not even relevant for 
discussing MITM attacks on poorly implemented HTTPS applications, and is 
ludicrous in trying to wiggle the MITM term into the SSL/TLS protocol 
itself by games of definition. I'm sorry you've spent so much time on 
google over this - that can't have been much fun. Trying to make sense of 
this thread, and feeling obliged to wade in when it risked staining the 
archives and misleading impressionable users, has not been much fun for 
me either. Please think more carefully about what SSL/TLS is defined as - 
you won't find host names, or even transport details. There is a reason 
for this, and the fact Netscape (and subsequent parties) have so 
carefully made sure to keep use-case specifics out of the specs is 
further evidence that the layering is no accident. Indeed these parties 
have almost exclusively shared a common intended use, yet they kept the 
protocol definition free of those details. Please share their 
enlightenment.

I'm not going to waste space disagreeing with you about HTTPS, but that's 
because (a) I don't necessarily disagree, and (b) as far as this 
mail-list is concerned, I couldn't care less about HTTPS. Really.

   I'm amazed that you would disagree with every published definition of
 a MITM I could find.

An explanation of an acronym does not give it the context that it assumes 
in the field over time. "Man In The Middle" is a pretty symbolic phrase, 
appropriate for discussing diplomacy, customer support, and would 
probably make a pretty naff (and successful) pop song. MITM attack is 
already something more refined. Moreover, the layer you are discussing it 
at, as well as which particular aspect of that layer, matters. You can 
discuss MITM attacks against IPv4 and/or HTTPS all you like. Likewise you 
can discuss MITM attacks against X509 until you are blue in the face. If 
you want to discuss MITM attacks against the *SSL/TLS protocol* then you 
have been wildly overreaching. And to shuffle your feet around what you 
actually mean by attack, to the extent that any disruption or 
nuisance-making at the wire-level automatically counts as a MITM attack 
on SSL/TLS, can only hold water if you define it to. But that takes you 
outside any reasonable definition that matters to anyone else.

Anyway like Brian, that's all I have to say on this, for whatever it's 
worth.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

__
OpenSSL Project http://www.openssl.org
User Support Mailing List[EMAIL PROTECTED]
Automated List Manager   [EMAIL PROTECTED]


Re: FQDN

2003-07-25 Thread Geoff Thorpe
Hi,

On July 25, 2003 01:45 pm, David Schwartz wrote:
   Hijacks and redirects are all within the scope of what a MITM can do.

No, they are only within the scope of what an attacker can do. The attacker 
becomes a MITM if they can do it without you knowing anything's wrong. 
Note that doing it without you knowing anything's wrong means one of two 
things; one is to manipulate data in such a way that the end parties do 
not know that data has been changed (or created) in transit 
(authenticity), and the other is to be able to read the encapsulated data 
(secrecy).

   You want a simple definition of a MITM? Here it is -- you think you
 have:

   server - network - client

   But under a MITM attack, you really have:

   server - MITM - client

   The MITM can do anything he wants from his position, including pass
 the data unmolested, drop bytes, or change them in both directions.
 Hijacking and redirection all occur on the wire between the server and
 the client, so they're all within the scope of a MITM attack.

   To put it simply, a MITM attack is any attack that can be performed by
 someone who has complete control over the network between the server
 and the client, that is, he is in the middle instead of a trusted
 network.

   If you think MITM means something else, please present your
 definition. I have a feeling you'll find it becomes incoherent.

Your definition is a waste of time, I'm sorry to say. What you're saying 
leads logically to the trivial extreme that any network protocol passing 
through the internet is vulnerable to MITM attacks. If you're happy with 
that definition then this email thread is without point.

SSL/TLS never claims that it can prevent active traffic manipulation by 
undesirable parties, it just claims you'll know something's wrong when 
and if it happens and that all data passing through the SSL/TLS streams 
until that point will be both tamper-free and secret. Our definition of 
MITM is any attack that could passively or actively attack the 
communications such that you are none the wiser (or that you may have 
lost confidentiality or authenticity of data prior to knowing something 
was wrong).

FWIW: there are limited MITM possibilities in SSLv2 that fit your 
definition *and* ours, but that's a different issue. It seems that you 
are defining your statement to be correct and working backwards from 
there. The one true MITM attack seems to be this enormous email thread - 
consisting of one side working from a sensible definition of MITM towards 
conclusions, and another working from a tautological conclusion 
backwards towards an unreasonable definition of MITM.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: FQDN

2003-07-25 Thread Geoff Thorpe
 
interpretation of certificate contents and identities (which is cruel and 
unusual punishment to abstraction).

 For that matter, what about running 'stunnel -c -r www.example.com:443'
 ? That specifies no certificate sources and doesn't say to verify. It
 uses SSL - does that mean MITM attack is possible?

Yes because (i) you're running over TCP/IPv4, (ii) you're running with the 
presumption that you are serving HTTPS, and (iii) the consequent 
presumption is that correct (HTTPS-specific) behaviour is that the CN 
of the server cert should match the requested host-name *and* it should 
have a certificate(-chain) that successfully validates to a trusted root 
cert from a list of CAs you consider valid for authorising HTTPS 
certificates. phew

SSL/TLS provides the abstract means (X509) to give meaning to identity 
validation and thus, a meaning to the term MITM. HTTPS gives you a more 
concrete environment in which to *define* identity validation, so you 
should take any and all complaints you might have to that level. Your 
stunnel invocation is a case in point of a totally incomplete and 
unsatisfactory application of HTTPS, at least if you want to have some 
assurance that you are communicating with something that has a reasonable 
claim to being www.example.com.

 What you've been saying above is that if something uses SSL, it
 is secure.  I still say it must have SSL done *right* for it to
 be secure.

No, this is confusing abuse of a protocol with a flaw in the protocol. 
Anyone can abuse a protocol, and you just showed a perfect example of 
that.

SSL/TLS will help you establish comms that are confidential and 
authenticated to be between you and the owner of the X509 certificate's 
private key, such that no widely known attack is possible for a MITM to 
decrypt communications mid-stream or to tamper with communications 
undetected. All the MITM talk I've heard about so far is just people 
communicating (securely) with the wrong party, and to define whether a 
party is wrong or right depends on the transport, application, and 
mapping of identity to X509 certs (and authorised CAs). This is all 
outside the scope of SSL/TLS.

 All over the world, applications are being written by people with
 no crypto background, using third party libraries, who blindly
 piece together sample code until an SSL handshake completes
 successfully, and then they ship it.

Right. Their applications may be vulnerable to MITM attacks vis-a-vis the 
application itself, the transport used, and the differences between what 
it *should* consider trusted compared to what it *accepts* as 
trustworthy. At the SSL/TLS level, this is not MITM, it is simply 
communicating (and authenticating) with the wrong peer. I'm afraid that's 
not a vulnerability of SSL/TLS to MITM attacks, it's a misunderstanding 
and misapplication of the protocol to a particular use-case. Take it up 
with the browser writers and other HTTPS-interested parties.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: purify errors in openssl crypto

2003-07-18 Thread Geoff Thorpe
Hi there,

On July 18, 2003 08:29 am, Andrew Marlow wrote:
 I have found that the ssltest program raises several purify errors
 which I am looking at. They are UMRs, uninitialised memory reads,
 i.e memory is read and acted on even though it was never
 given a value. Here is the first one (based on v0.9.7b):

Actually the problem is in crypto/rand/md_rand.c, search for PURIFY. 
Our randomness routines are hashing in various sources of entropy, and 
before manipulating a buffer on the stack it simply adds in the data 
already in that buffer. This is obviously a source of uninitialised 
data, but that can only help us, right? No assumptions are made about the 
level of entropy in that buffer (indeed it may well be all zeroes in most 
environments), but we're certainly not worried about getting 
unpredictable behaviour because of it - we *are* trying to generate 
randomness after all.

If you compile with -DPURIFY, it will deliberately avoid this behaviour 
so as not to throw up false positives from purify, valgrind, or anything 
similar.
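To illustrate, a minimal sketch of the usual build flow with the flag added (assuming a standard source tree; adjust config arguments to taste):

```shell
# Rebuild with the PURIFY guard defined, so md_rand.c skips the
# deliberate read of uninitialised stack data:
./config -DPURIFY
make clean
make
make test
```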

 routine SHA1_Update, line 460 of md32_common.h
 It examines the first byte of a block allocated
 at line 304 of mem.c (CRYPTO_malloc).

This is where the uninitialised data is first touched, if you look a level 
or two up the call stack, you'll see that you're in md_rand.c at the 
point I mentioned.

 A little further down there is a statement that
 does initialise this byte but only if the block size
 is greater than 2048. A block size of 64 was allocated.

This is another issue entirely, unrelated to the UMR. It's actually a way 
of foiling compiler optimisations that would otherwise get rid of 
memset(,0,) statements when the compiler knows the memory is about to be 
freed - we want those memsets to happen precisely *because* the memory is 
about to be freed and contains stuff we want sanitised first. To make the 
dependencies foolproof against smart (I use the word reluctantly) 
compilers, there's a bit of tomfoolery involved and you stumbled onto 
part of it. This has nothing to do with the purify issue.

 The comment says (and I quote) NB: We only do this
 for 2Kb so the overhead doesn't bother us.
 The UMR says that the initialise should always be done,
 IMHO.

?? As I say, define PURIFY and openssl will build in a way that doesn't 
(or shouldn't) do anything purify doesn't like. Search for purify 
and/or valgrind in the mail lists and you'll see this has been 
discussed before. Likewise, the memory sanitisation discussions are 
probably easy to pick up if you hit the archives.

 I also notice that SHA1_Update is called from
 ssleay_rand_bytes (md_rand.c, line 468) where
 an ifdef for PURIFY has been added, indicating
 that this area has received purify attention before.

 Any thoughts?

I hope that clarifies the situation.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/



Re: SSL_CTX_free messes with external session cache

2003-03-26 Thread Geoff Thorpe
Hi,

* Nadav Har'El ([EMAIL PROTECTED]) wrote:
 Hi,
 
 I noticed that SSL_CTX_free() takes all the sessions in the given CTX's
 internal session cache, and also removes them from the external session cache
 (i.e., calls the delete-session callback).
 
 Why was this done? I can't think of a security or a logical explanation to
 this, because these sessions in the external cache are still valid, and other
 contexts or processes might still want to reuse them!

Yeah this is dumb, but I think probably you are overreading the design
of the relationship between the SSL_CTX-internal cache and external
caching callbacks. For the good oil on this, you'll probably have to get
in contact with Eric Young and convince him to discuss it with you.
Short of that, my impression has been that it's no more than a horrible
hack that was jammed in to solve a particular need at the time, and it
has since installed itself in ssl/ and would now be difficult to
substantially re-engineer without pissing off lots of people.

One of the problems is the relationship between the internal cache
operations and the cache callbacks. Ie. is the external cache supposed
to replace the internal cache, or to allow it to bridge multiple SSL_CTX
contexts (or copies of the same one inside forked child processes) using
an external parent cache? Things aren't terribly natural in either
case; eg. the SSL_SESS_CACHE_NO_INTERNAL_LOOKUP flag is pretty much
obligatory if you want sane behaviour in the second case but is not the
default for historical reasons. The alternative to that flag would be to
add a new external callback similar to get_session that merely checks
the continued existence of a session in the external cache, something
like has_session. This way, the internal cache could resume sessions
it has locally cached by first checking that the external cache has not
(through some other SSL_CTX's activity) explicitly invalidated or
destroyed the corresponding session. BTW, this is the approach used in
the www.distcache.org model.

What you're noticing with the expiry of all sessions upon SSL_CTX_free()
is perhaps evidence that the original intention of the external
callbacks was to replace the internal cache rather than provide an
umbrella for multiple internal caches. And then again, the fact the
internal cache is maintained in parallel with the external one (rather
than being ignored) suggests that perhaps someone was trying to have
their cake and eat it too. Initially, it probably looked like that
default behaviour killed two birds with one stone, but I think now the
picture is a little less satisfying. I'm not sure about the real reasons
to be honest though.

 Looking at the SSL_CTX_free() code (ssl/ssl_lib.c), I see that
 SSL_CTX_flush_sessions(a,0) is called - and from the manual page of
 that function I understand that what this means is to mark sessions older
 than time 0 (i.e., all sessions) as *expired*, and all these sessions
 are also deleted from the external session cache. I don't understand why
 this kind of behavior should be part of SSL_CTX_free().

I feel your pain.

 By the way, it's relatively easy for me to overcome this behavior by
 cancelling the delete-session callback before calling SSL_CTX_free() - but
 I was wondering why I have to do that...

IMHO, you're probably better off in the mean time disabling the internal
caching altogether and implementing a coherent model entirely from the
external callbacks - this way the SSL_CTX_free() behaviour won't matter
because the internal cache is empty so it won't be deleting anything in
the external cache. Again, a shameless plug in the direction of
www.distcache.org and the apache-1.3/mod_ssl and apache-2 patches show
an illustration of this approach. For my own activities with caching, I
felt quite early on that to bother implementing an external cache pretty
much obligated me to forget about the internal caching.

The ideal thing for openssl would be to wait until we have a good
opportunity to well and truly ignore backwards compatibility and then
just uproot the entire caching interface and replace it with something
cleaner. This is not meant to be me bitching about Eric's SSLeay work -
it's obvious we benefit from a certain retrospective 20/20 vision that
Eric never had at the time. However, we're not yet at a point where we
can go breaking large blocks of application code in non-trivial ways so
we're sort of obligated to make gentle modifications, add extra flags,
and make do. However, when the revolution comes ...

I don't know if that helps with your problem though? Are you able to do
away with the internal cache, or are you committed to having sane
interaction between internal and external caching? Note also that this
is all IMHO, there may be others who consider the internal/external
caching semantics to be fine as they are.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org

Re: SSL_CTX_free messes with external session cache

2003-03-26 Thread Geoff Thorpe
* Nadav Har'El ([EMAIL PROTECTED]) wrote:
  The ideal thing for openssl would be to wait until we have a good
  opportunity to well and truly ignore backwards compatibility and then
  just uproot the entire caching interface and replace it with something
 
 I understand that backward compatibility is important, if people rely on
 the current behaviour. In this case, I suggest that the manual pages (in this
 case, of SSL_CTX_free()) be updated to explain what actually happens, and 
 perhaps how to get the other behaviour. Nobody can complain about this if
 it is explained in the manual :)

As someone who now has an excellent working familiarity with the API
behaviour, I am sure any patches (diff -u format) you were to
contribute in this direction would be most warmly welcomed :-)

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: -fomit-frame-pointer ?

2003-02-14 Thread Geoff Thorpe
* Jasper Spit ([EMAIL PROTECTED]) wrote:
 Hi,
 
 I have developed some c++ wrapper classes around openssl. Now I need to
 be able to throw exceptions from within the different openssl callbacks
 like e.g. the password  verification callback. This is not a problem on
 windows, but it is when using a gcc platform like e.g. linux. For this
 to work on these platforms I understand I have to compile openssl with
 the -fexceptions flag, to avoid that my application will SIGABRT because
 my exception being thrown is 'unexpected'. For this to work, I figured
 out I also have to remove the omit-frame-pointer flag when building
 openssl, otherwise my application will crash. My question is : is this
 flag used for optimilization only or can I expect runtime problems when
 not using this flag ? The gcc manpage also says that using this flag
 will make debugging impossible on some platforms. Is this the case ?

More or less, yes. -fomit-frame-pointer allows gcc to do more creative
things when optimising and this hurts nothing really except, as you
noted, debugging. There should be very few optional gcc flags that
*don't* work with openssl - if they break anything, it indicates a bug
in the code or a bug in gcc. A variety of -W*** warning flags probably
won't work if you also specify -Werror because the headers may not be up
to pedantic standards, however that's about it really.

In other words, you should be ok. If you're in any doubt, please do the
following;

   # ./config -f... -W... [etc - whatever flags you want]
   # make
   # make tests

If you want to see the consequences of your actions in terms of
performance of, for example, RSA, then run some before-and-after
benchmarks using;

   # ./apps/openssl speed whatever... use -? to see your options

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: Problems with DSA and engine ubsec

2003-02-14 Thread Geoff Thorpe
Hi Jonathan,

Nice diagnosis, I'm just checking a few things in the source before I
comment, but in the mean-time could you please (for now) just split out
item (1) and create a ticket in the RT tracker system with it?
http://www.aet.tu-cottbus.de/rt2/

We'll deal with the other items in good time (and perhaps in separate
tickets), depending what comes out of (1).

* Jonathan Hersch ([EMAIL PROTECTED]) wrote:
 Hi,
 
 I'm signing and verifying documents using DSA and have run into a couple of
 problems.
 
 I'm working with OpenSSL 0.9.7 on Linux with a Broadcom crypto card based on
 the 5821 (so OpenSSL engine type is ubsec).  I have version 1.81 of the
 Broadcom driver.
 
 (1) While testing I found that verification of certain signed documents crashed
 OpenSSL.  The problem appears to be that hw_ubsec.c:ubsec_dsa_verify() calls
 p_UBSEC_dsa_verify_ioctl() and if this call fails then the code tries using
 software crypto, indirectly calling dsa_ossl.c:dsa_do_verify().  However,
 dsa_do_verify() tries to do:
 
 if (!ENGINE_get_DSA(dsa->engine)->dsa_mod_exp(dsa, &t1, dsa->g, &u1,
                                               dsa->pub_key, &u2,
                                               dsa->p, ctx, mont))
    goto err;
[snip]

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: libcrypto key data structures reentrant?

2003-02-12 Thread Geoff Thorpe
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
 Hello,
 Quick question.
 I have a multi-threaded app in which a master thread creates a RSA key-pair
 and subsequently spawns a number of worker threads.
 I am assuming that routines like RSA_public_encrypt() are reentrant because
 an RSA * pointer is passed (please correct me if this assumption is wrong).
 Multiple threads could potentially be in one of the RSA_...en/decrypt()
 routine at any point in time.
 My question is: can the worker threads all use the RSA * created by the
 master thread or do they need to have a private copy of it?
 Any help greatly appreciated.

Use of RSA objects is thread-safe provided you implement locking
callbacks for use by OpenSSL;

   http://www.openssl.org/docs/crypto/threads.html#

If OpenSSL was configured to not support threading then all bets are off
- this is covered in the NOTE section of the man page I've referred
you to.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: engine's performance (what's wrong?)

2003-01-30 Thread Geoff Thorpe
* Aleix Conchillo Flaque ([EMAIL PROTECTED]) wrote:
 
 i was thinking the same thing yesterday morning: if you need your CPU to
 do other things, the cryptographic hardware can help you. the problem is
 when you only need cryptographic results in a real-time large process
 (let's say talling votes from an election, which is what i'm doing). in
 this kind of applications you really need speed. obviously everything is
 not cryptographic calculations, there is access to disks, network... but
 crypto is a major one.

Yes, but perhaps most important is to ask the question what your machine
would do in the mean time if you do elect to send the crypto off to
hardware? If the answer is oodles of stuff, the app is very parallel
and we have much more than just crypto to worry about then that's some
support for the idea of crypto acceleration. If the answer is probably
sleep waiting for the response, then your best bet is almost certainly
to not bother. Take a look at commodity PC hardware side-by-side with
your realistic hardware options in terms of rsa1024-signs per second per
dollar. Ie. if you add $1000 to your budget, how much speed-up can it
buy you? FWIW: Mark and I did some investigation of this sort of thing
quite a while ago and the paper is online, though the numbers (and some
of the material) are quite likely a little out of date by now. Still, if 
you want some food for thought...
  http://www.geoffthorpe.net/apcon2000/

(note also that the distributed session caching stuff described in there
has since been coded commercially and then released as open source, and
is sitting in sourceforge at http://www.distcache.org).

 regarding to speed again, GMP is a really cool kick ass (sorry for the
 expression) library, we've used it for some mathematicals calculations,
 instead of using OpenSSL BN. if you've done a wrapper with GMP... let me
 say that we'll have to spend lots of money in hardware to be as fast as
 in software.

That depends on your hardware - I no longer see any speed up on athlon.
In fact, I see a slight slow down which is probably due to the bignum
conversions and no GMP support (that I'm aware of) for caching
montgomery forms. Pentium IV, I don't know - perhaps you'll see some
improvement if you make sure you have a PentiumIV-optimised build of
GMP. On other chipsets however, I'd probably give GMP the head-start.

I went ahead and dredged up the GMP-ENGINE source and banged it into
commitable shape - it's now in CVS and should appear in the next
snapshot if you want to take a look (check the CHANGES entry as a guide
for how to configure it - and the engines/e_gmp.c code has some other
info near the top if you're interested).

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: engine's performance (what's wrong?)

2003-01-29 Thread Geoff Thorpe
Hi,

* Aleix Conchillo Flaque ([EMAIL PROTECTED]) wrote:
 
 as you can see, hardware is slower. my box is an Intel P4 at 1,4 GHz and
 is a bit faster than the processors in the nShiled (i think the model
 i'm trying is one of the worstest). i've tried the hardware version of
 my program with a multiprocess and i've gain more performace (uses two
 processors).

Yup, makes sense.

 but, what if i use a dual pentium box with P4 at 2GHz or a fastest
 machine? this will be faster and cheaper than the cryptographic
 hardware. eventhough, the cryptographic hardware has more fetures than
 just do operations (at least the nShield), which may be is the good
 thing.

That's the rub, essentially. I'll back off from making any grotesque
personal commentaries on the merits of cryptographic hardware for this
sort of thing - it's a can of worms and I've no particular desire to go
fishing.

The least grotesque/personal version I can manage summarises something
like this; many people would have you believe there is a security gain
from using hardware that can generate and operate private keys and never
expose the raw key material outside a dedicated hardware perimeter.
There are others who would have you believe the actual security gain from
said devices is negligible, because they address exposures that can be
as easily addressed by good software design and not doing stupid
things. There is also an argument that states that if you can get
into the memory-space (and user-privileges) containing the crypto
implementation, you may not be able to run key-scanning attacks when
using hardware devices, but you can quite likely still perform
key-operations *using* the hardware device just as the application can
(eg. you could sign a certificate revocation request and revoke the
server's certificate, etc).

Who you listen to and how the arguments pan out depend very much on your
application, architecture, threat-model, political leaning, and
astrological charts.

However, getting past all that - yes, conventional PC hardware gives you
more PKC bang for your buck, but you shouldn't carelessly disregard how
your machine will handle the load and how that could affect other
things. Eg. if you run secure and non-secure services, you could find
that the performance curve of any non-secure services suffers greatly
when the secure services get overloaded. PKC crypto doesn't really need
disk, I/O, nor swap, so it's hellishly CPU bound. OTOH: if you are
offloading the crypto operations to hardware (or any remote machine for
that matter), the secure services could still become unresponsive but
perhaps with a less dramatic impact on non-secure services. Ie. the
secure services will perhaps result in giant queues of incoming
connections and/or start rejecting new connection attempts, and you may
also have loads of threads/processes sleeping whilst waiting for the
crypto services to catch up, but that will allow other things to get
on with life more easily than if the CPU was busily thrashing away at
(too many) RSA calculations. Try running a few looped copies of openssl
speed rsa1024 at the same time on your host machine, you'll probably
find that anything else you try to do will *crawl* (eg. switch X
desktops).

YMWV.

Also this reminds me I should tidy my GMP wrapper ENGINE back up and
look at committing it - even on OpenSSL's most commonly used
platform (x86) together with the overheads of conversion between GMP and
OpenSSL bignum formats, the GMP wrapper ENGINE resulted in significant
speed ups in RSA private key operations. I would suspect that on other
chipsets where GMP has been actively working the speed up would be more
significant still (I had reports of 3x speed ups on some PPC system or
other, though I can't confirm/deny this myself).

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: engine's performance (what's wrong?)

2003-01-28 Thread Geoff Thorpe
* Aleix Conchillo Flaque ([EMAIL PROTECTED]) wrote:
 hi again,
 
 as i said yesterday i'm doing some tests with cryptographic hardware (in
 my case nCipher's).
 
 now that i have loaded the engine, i'm getting real strange results. the
 same test with hardware enabled is much slower than the software version.
 
 it is really weird, because the openssl speed -engine chil command
 seems to be as fast as desired.
 
 do i have to set something else? is there any documentation on the net?
 am i getting more dummy everyday?

I've written oodles about this sort of thing in the past - so perhaps a
trawl of the archives might turn up enough documentation to keep you
happy. Or, if not happy, at least busy. :-)

The short answer(s) are probably as follows, but not having seen your
numbers I'm working blind and so you should take this with a pinch of
salt.

When using hardware, you probably want to use the -elapsed flag to
openssl speed. The reason is that by default it will measure CPU usage
of the openssl speed command so that the benchmarks are more accurate
and resistant to external tasks that might be happening on the host
system. However, if the crypto is taking place in hardware, the library
representing the hardware is probably spending the majority of time in
blocking calls (ioctl() if it talks directly to kernel drivers, though I
think ncipher/chil uses select/poll to talk to a privileged daemon
process instead?). Anyway, this will seriously mess up the numbers and
give wildly overstated estimates. If the program spends most of the time
sleeping waiting for responses from hardware, it'll appear that the
program used very little CPU time to achieve the crypto operations, and
so the benchmark will deduce that if it could have all the CPU time to
itself it would be very, very fast indeed! Which is nonsense of course.
-elapsed will
simply measure the running time, rather than CPU usage, which (provided
you try to keep the system from doing other tasks at the same time)
should prove more accurate.

If the above is true, that should merely make openssl speed say your
hardware is as slow as you thought it was from your own test program. As
for the reason why that might be the case, here are some possibilities;

  (1) your crypto hardware could actually be slow,
  (2) your host system could actually be quite fast in software,
  (3) your crypto hardware could be highly parallel and your application
  could be linear (as is certainly the case with openssl speed).

W.r.t. (3), it may happen that the crypto hardware has a number of
internal cryptographic units that it can distribute workload to, such
that if you keep providing it with enough crypto operations to do *AT
THE SAME TIME*, then the total throughput of operations could be what
you were expecting. What this means however is that each individual unit
on its own would be perhaps slower than you were expecting.

Or put another way, if you have a device claiming to do 1000 RSA
operations a second that is internally built out of 10 parallel
processing units, then each processing unit itself is probably capable
of doing 100 RSA operations a second, ie. 10 milliseconds each. If you
only ever give the whole device one operation to do at a time, only one
processing unit will be in use at a time, and so your total performance
will be that of one unit and not that of all ten.

openssl speed only does one crypto operation at a time unless you use
the -multi n switch (and it is supported on your version of openssl
and host system). Looking briefly at your sample source code, that has
the same problem. This is probably what is limiting the performance you
are seeing - try executing a few copies of your program at the same time
and see what the total performance between them looks like.

Regards,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: ssltest and on-the-fly ZLIB compression

2003-01-15 Thread Geoff Thorpe
* Andrew Marlow ([EMAIL PROTECTED]) wrote:
 
 I have done some more investigation and have found that ssltest
 will compress when the TLS1 protocol is explicitly selected.

I also took a look - it seems the problem is the v23 SSL/TLS method,
it's there to provide a handshake that can negotiate any protocol level,
but it also seems to preclude any negotiation of compression. Eg. if
you've built with zlib, you can change into the apps/ directory and in
one shell run;

./openssl s_server

you'll find that (in another shell) both of the following result in
compression;

./openssl s_client -ssl3
./openssl s_client -tls1

but the following does not;

./openssl s_client -no_ssl2 -no_ssl3

As for why - this could be impossible to get around because of the
implicit constraints of SSLv2 compatibility, I'm not sure. Certainly if
you use the SSLv3 or TLSv1 client methods (and thus give up on talking
with any SSLv2 servers), then you'll probably be OK w.r.t. compression
unless you hit an SSLv2 server. The crap way to address this (something
Lutz mentioned in another thread) is to try connecting with an
SSLv3/TLSv1 method first and if that fails on protocol troubles, retry
with SSLv2. Yes I know, bleurgh.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: Slapper denial-of-service problem - why isn't this fixed?

2002-12-21 Thread Geoff Thorpe
Hi there,

* David Schwartz ([EMAIL PROTECTED]) wrote:
 
   Let's go back to how we got into this. The position I was
   refuting was that this is a fundamental problem that can't be
   solved at the application level.  But this is utterly false --
   there are any number of ways, at the application level, that
   resistance to this type of denial of service attack could be
   provided.

I've been making this point about Apache for some time - it needs to
scale better than one request-per-X for X=process, thread, etc. It
seems that the only way to get any asynchronous processing is to use
user-threading to manufacture the asynchronous behaviour explicitly,
as Apache itself does not appear to have this capability. Eg. Gnu Pth
perhaps.

However, w.r.t. slapper and what-not, there is *no* theoretically
acceptable approach to tackle DDoS attacks even if you could rewrite the
web-server from scratch (or put something more scalable in front of it,
like squid). HTTPS in this respect is no more or less immune than HTTP,
though it is unfortunately slightly worse from a quantitative point of
view. Ie. a design oversight that has persisted in SSL/TLS from the
outset is that the server is the first side to have to do any
computationally-expensive crypto operations - this enhances a DDoS
client's ability to force the server into heavy work without having to
do much itself. However, even were that reversed - it just bumps up the
limit at which your server will start to fall over, it won't eliminate
it completely. The higher your server's DoS resistance is, the higher
the number of exploited machines the DDoS will require, but sooner or
later it'll probably exploit enough machines. In theory you have to
*assume* it will find enough machines, and so neither the application
logic nor the protocol can be expected to provide robustness against
DDoS. The frustration in this case is that the server spends most of
the DDoS attack sleeping rather than working too hard! :-)

There must be people out there providing analytic routing logic tools of
some form? Anyone know of anything recommendable? Ie. something to
identify DoS source addresses on-the-fly and start blocking/unblocking
them according to some statistical rules. This should at least adapt
itself to DDoS attacks enough to put the breaking point back on your
network capacity rather than your web-server's parallelism? (And in
Apache, that's the target you need to protect most). Perhaps there's
some contrib thing with Apache to hook this logic using its logs??

But before this gets way off-topic for the list ... are we agreed then
that all this discussion *is* about network I/O timeouts in Apache and
*not* about any SSL/TLS vulnerabilities in OpenSSL?? If not, someone say
so please.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: Once again

2002-12-21 Thread Geoff Thorpe
* Marcin Giedz ([EMAIL PROTECTED]) wrote:
 Hi,
 
 I think my recent mail disappeared, so I repeat my question:
 
 Is it possible to check how the zlib compression works? - 
 openssl-0.9.7-SNAP20021216 (any values)

Configure with zlib before building openssl (you'll notice a
corresponding -D... flag being used during compilation). If you then use
s_server and/or s_client, you should be able to determine from the
handshake information they spit out whether compression is happening.
The other thing would be to use ssldump to see what the handshake is
agreeing on (or not agreeing on as the case may be), or of course just
plonk some debugging/logging junk into the compression code inside
crypto/comp/ and see if it lights up like a Christmas tree at run-time
(I just had to squeeze a seasonal metaphor in there at some point).

Note, you won't get any compression unless both sides support it.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: Slapper denial-of-service problem - why isn't this fixed?

2002-12-17 Thread Geoff Thorpe
Hi there,

* Eric Rescorla ([EMAIL PROTECTED]) wrote:
 I've long suspected that you could connect to Apache and consume all
 the processes until a timeout. It's very hard to defend against this
 attack since it's hard to distinguish attackers from slow clients.  I

This is what I was wondering about too because as far as I am aware,
there are no (known) outstanding issues with OpenSSL in this respect.
Apache (1.3)'s one-request-per-process model has always been a bit
vulnerable to this issue of idle DoS attacks, especially given how
responsive and parallel http is generally required to be (eg. this same
possibility with fork()'d mail servers is generally not a problem at
all). But unless mod_ssl itself is very lax, I can't even see how this
problem is specific to https, let alone openssl. Ie. if the problem is
read/write timeouts in apache's network code then surely you can DoS it
by opening a connection to port 80 and trickling through an http request
one-byte-at-a-time (so each delay is less than apache's read timeout).
In theory, this would allow an ancient 386 sitting on a slow
modem to take down an apache server. (And means setting the timeout to
300 seconds in an apache configuration is an open invitation for pain).

Or is there more to this problem than that? I don't see any details to
explain where this problem lies, but if it is the network timeouts as
Eric suggested, then this an Apache/mod_ssl issue at best. If the TCP
connects to the server were OK but it looked like Apache's accept() was
jammed, this would certainly suggest that the child-processes were all
tied up in read() sleeps. Typically, apache child processes all sleep
inside accept() when they're idle and the kernel will get to choose
which ones get woken up by any incoming connections.
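To illustrate the trickle scenario (a hypothetical toy sketch in Python - a one-connection server with a per-read timeout, nothing Apache-specific), note that each successfully received byte resets the read timer, so a client sending one byte every 50ms keeps a server with a 200ms read timeout occupied for as long as it likes:

```python
import socket
import threading
import time

READ_TIMEOUT = 0.2  # server's per-read timeout (stand-in for Apache's Timeout)

def serve_once(sock, result):
    conn, _ = sock.accept()
    conn.settimeout(READ_TIMEOUT)
    data = b""
    try:
        while b"\r\n\r\n" not in data:
            chunk = conn.recv(1)   # each successful read resets the clock
            if not chunk:
                break
            data += chunk
    except socket.timeout:
        result.append(None)
        return
    finally:
        conn.close()
    result.append(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
result = []
t = threading.Thread(target=serve_once, args=(srv, result))
t.start()

cli = socket.create_connection(srv.getsockname())
for b in b"GET / HTTP/1.0\r\n\r\n":
    cli.sendall(bytes([b]))
    time.sleep(0.05)   # well under READ_TIMEOUT, so the server never times out
cli.close()
t.join()
srv.close()
print(result[0])
```

The request takes roughly a second to dribble through, and the server thread is tied up for all of it even though no single read ever times out - multiply by one connection per worker process and the whole pool is idle but unavailable.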

 just didn't understand why Slapper was doing it since it only tries to
 probe your machine once AFAIK. But if you have a lot of IPs

But along the lines of what the original poster mentioned, this courtesy
from Slapper can hardly be relied upon - someone could easily modify it
to DoS any apache servers that it can't otherwise exploit. Ie. make the
virus tie up all the child-processes (doing the DoS connections from any
previously exploited/controlled servers). The question however is; what
*exactly* is the problem?

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/




Re: OpenSSL Project Environment Migration on 10-Dec-2002 11:00 am CET

2002-12-12 Thread Geoff Thorpe
* Richard Levitte - VMS Whacker ([EMAIL PROTECTED]) wrote:
 In message [EMAIL PROTECTED] on Thu, 
12 Dec 2002 12:38:26 -, [EMAIL PROTECTED] said:
 
 John.Airey Can you give us more details about the move, like where,
 John.Airey who, and whether it has bigger bandwidth please Ralf?
 
 I dunno about bandwidth, but the two user-visible benefits are:
 
 - faster machine
 - stable networking hardware (on the previous machine, the networking
   hardware was failing, lately)

The most useful information can be gained from surfing the site,
especially CVSweb (um, or whatever that replacement is called). Or
rsync'ing against the CVS repository. Things are ... *quicker* ... :-)

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/



Re: using an on-disk session caching framework

2002-10-31 Thread Geoff Thorpe
On Thursday 31 Oct 2002 8:56 pm, Bear Giles wrote:
 Edward Chan wrote:
  The default behavior of server-side session caching is
  to cache session in memory.  This is probably not
  gonna work very well if there are a lot of connections
  to the server
 
  It says to open file named according to session id.
  However, session_id contains non-ascii chars, chars
  that are illegal in a filename.  So how can I name my
  file according to the session_id?

 If you have enough sessions that you need to cache them on disk, you
 probably don't want to write them one-to-a-file either.  Don't be so
 literal about the open file comment.

 Instead, open a single database instance (e.g., a Berkeley DB in hash
 mode, since you don't care about ordering) and use the session ID as
 your key ID.  The non-ASCII characters aren't an issue since you
 specify a pointer and length, not a null-terminated string, as your
 key.

 In practice, I believe apache's mod_ssl uses sdb instead of traditional
 db files for some reason, and you should definitely investigate why.
 But definitely go with a single, very efficient container object
 instead of using the filesystem as one.  Even if you're guaranteed to
 be running on a new FS that uses btrees for the directory info, it's
 still much faster to do a hash lookup than a btree search, O(1) vs O(lg
 N).

I'd actually contradict you here, one of the main problems with the
performance of the disk-based ((s)dbm) cache implementation is precisely
the fact that it uses a hash-table! It's often misunderstood as being
slower but more stable because it's a file. In reality it's not
disk-access that's going to *really* slow things down (the db file
usually ends up cached in the kernel anyway), and neither is it more
stable because of disk-access - for precisely the same reason! :- The
actual performance problem is how to algorithmically expire old sessions
and flush the database of old data so it doesn't grow without limit - in the
case of mod_ssl's dbm-based cache design, these two problems are
actually the same problem. The hash-database means the only way to
remove expired sessions is to iterate across the entire database! This
is the same problem as one of mod_ssl's other cache modes, 'shmht' -
though shmht is implemented using shared-memory instead of dbm. The
result is that genuine expiry operations are only done every once in a
while; you lose storage (and memory-caching) efficiency, and you
periodically do a very high overhead O(n) search where n is the number
of cached sessions.

So, if you save each session to a different file I guess it would be
possible to use the path to make the expiry logic easier. Eg. each
minute in the future has its own theoretical directory (it is only
created if it's ever needed). When saving a session, you could put it in
the directory corresponding to the minute it will be expired. The
current directory you look at (the current minute) will contain a
mixture of sessions that are just about to expire or have just expired -
but any directories representing minutes in the past contain only old
sessions (you can delete/unlink them whenever you like) and all
directories representing minutes in the future contain healthy unexpired
sessions. This makes 'expiry' and 'flush' operations O(1), which is hard
to beat. Inserts are O(1) too. And if you name the session files
according to the sessions' ID, 'lookup' operations (and non-expiry
'delete's) become O(n), where n is the length of the session timeout in
minutes (so it's a constant anyway) rather than 'n' growing with the number
of sessions in the cache. Of course, if you don't want to thrash the
disk to hell with this example technique (because this wouldn't benefit
from kernel-caching like a single dbm file would), I'd suggest doing it
inside a loopback file-system so it's all virtualised in memory anyway.
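As a rough illustration of that layout, here is a hypothetical Python sketch (the class name and on-disk layout are invented for the example, not mod_ssl's actual code): sessions are filed under a directory named for the minute they expire, with IDs hex-encoded to dodge the non-ASCII-filename problem raised earlier in the thread:

```python
import os
import shutil
import tempfile
import time

class BucketedSessionStore:
    """Each session file lives in a directory named for its expiry minute."""

    def __init__(self, root):
        self.root = root

    def _bucket(self, expiry_ts):
        return os.path.join(self.root, str(int(expiry_ts // 60)))

    def save(self, session_id, data, timeout_s, now=None):
        now = time.time() if now is None else now
        bucket = self._bucket(now + timeout_s)
        os.makedirs(bucket, exist_ok=True)
        # Hex-encode the ID so non-ASCII bytes are filename-safe.
        with open(os.path.join(bucket, session_id.hex()), "wb") as f:
            f.write(data)

    def lookup(self, session_id, now=None):
        now = time.time() if now is None else now
        name = session_id.hex()
        # O(timeout-in-minutes) buckets to scan, independent of cache size.
        for bucket in os.listdir(self.root):
            if int(bucket) >= now // 60:  # skip buckets already expired
                path = os.path.join(self.root, bucket, name)
                if os.path.exists(path):
                    with open(path, "rb") as f:
                        return f.read()
        return None

    def flush_expired(self, now=None):
        # O(1) per stale bucket: the whole directory is known to be old.
        now = time.time() if now is None else now
        for bucket in os.listdir(self.root):
            if int(bucket) < now // 60:
                shutil.rmtree(os.path.join(self.root, bucket))

store = BucketedSessionStore(tempfile.mkdtemp())
store.save(b"\x00\xffsession", b"session-data", timeout_s=300, now=1000.0)
print(store.lookup(b"\x00\xffsession", now=1000.0))   # b'session-data'
store.flush_expired(now=2000.0)                        # stale bucket: one rmtree
print(store.lookup(b"\x00\xffsession", now=2000.0))   # None
```

The fixed `now=` arguments in the demo just make the expiry deterministic; in real use you would let them default to the clock.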

Or you could push session caching out of the server and on to the
network;
   http://www.distcache.org/

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/





Re: SUN Crypto Accelerator + OpenSSL

2002-10-07 Thread Geoff Thorpe

 Unfortunately, there is no support for that card built in to OpenSSL,
 as far as I know.  If I had the developpers manual for it, I could
 possibly write something and test it with you.  If I had access to a
 machine with that accelerator, even better.  Do you have the
 possibility to provide that?

IIRC the Sun card is a rebadging of something else that *is* supported 
in OpenSSL - but I can't remember which one of the engines supports it 
off-hand (and anyway, again, IIRC there are subtle issues about making 
sure certain libraries are in the LD_LIBRARY_PATH etc). Search the 
archives for openssl-users and openssl-dev and I've no doubt you'll 
drag up what you're looking for.

Cheers,
Geoff




Re: Help with nasty app bug

2002-08-13 Thread Geoff Thorpe

Hi Jonathan,

On Thu, 8 Aug 2002, Jonathan Hersch wrote:

 I'm working on an SSL proxying device using OpenSSL
 0.9.6e on Linux with engine support and Broadcom
 accelerator cards.  I'm testing the box by putting
 about 250 connections/sec through it, so for each test
 connection it has to establish both SSL client side
 and SSL server side connections.  After 10-20 minutes
 of this the device crashes.  The backtraces I'm
 getting (which I'm not positive I trust since the
 stack looks a bit whacked) typically look something
 like:

 SSL_read
 ssl3_read
 ssl3_write_bytes
 ssl3_read_bytes
 ssl3_accept
 ssl_update_cache
 SSL_CTX_add_session
 remove_session_lock
 SSL_SESSION_free
 X509_free
 ssl_sess_cert_free
 sk_pop_free
 sk_free

 and then a segmentation fault. I've combed through
[snip]

Could this be SIGPIPE rather than a segfault? Eg.
  struct sigaction sig;
  sig.sa_handler = SIG_IGN;
  sigemptyset(&sig.sa_mask);
  sig.sa_flags = 0;
  sigaction(SIGPIPE, &sig, NULL);

Particularly if you're using non-blocking sockets, and you're getting
occasional premature-disconnects from the peer - which would be a
reasonable assumption from the kind of SSL3_GET_RECORD:decryption failed
or bad record mac errors you were seeing in the log. Other than that, I
would need to know more. Threads? Platform? How did you configure? etc.

Oh yes, and I won't be reading mail for a week, so don't be offended by a
slow response ... :-)

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]





Re: RE : openssl engine use

2002-07-25 Thread Geoff Thorpe

Hey there,

On Thu, 25 Jul 2002, Frederic DONNAT wrote:

 A sample of programming with the engine is mod_ssl (it initializes the
 ENGINE before everything else). You can also look at the apps directory of
 OpenSSL: the s_client, s_server ... files

 Also be careful: between openssl-engine-0.9.6x and openssl-0.9.7 there
 are some differences in engine use.
[snip]
 -Message d'origine-
 De : Rob McMonigal [mailto:[EMAIL PROTECTED]]
[snip]
 I'd like to know how difficult it would be to convert an existing application that
 uses openssl over to the engine version of openssl and the
 hardware accelerator functions.  I cannot find any information on programming
 openssl with hardware accelerators.  Any help would be appreciated.
[snip]

I'm also in the process of rejigging mistakes in the 0.9.7-dev
documentation (it wasn't adjusted to constification and ENGINEification
changes for RSA/DSA/DH/etc...) and at the same time have a monster
engine.pod in progress that I intend to include before the next 0.9.7
beta. That man page may sound terrifying (and no, I haven't split it out
to provide API documentation per-function), but at least it'll be better
than zero documentation. Hopefully.

Failing that - take a read of engine.h (it's relatively well
self-documented) and check out the source that Frederic suggested.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]





RE: log shows connection from server, but then can't connect from internet client

2002-07-18 Thread Geoff Thorpe

On Thu, 18 Jul 2002, du Breuil, Bernard L ERDC-ITL-NH wrote:

 It was fun.  What are ipchains?

Easy: patent lawyers ...

or Linux firewalling/filtering/NAT/etc is another response I suppose -
please take a browse at the innumerable Linux HOWTOs and web-pages, a
simple google search should dredge up more than you can possibly use. I
think we should probably avoid continuing a discussion on this list now
that the openssl aspect of the issue has been resolved.

Cheers,
Geoff





Re: Re:an advise

2002-07-17 Thread Geoff Thorpe

Agreeing with Lutz's points, but querying a particular item ...

On Wed, 17 Jul 2002, Lutz Jaenicke wrote:

 * An AMD K6/500MHz will do approx 30 1024bit private RSAs per second.

um, perhaps if you configure no-asm -ggdb3?? Otherwise I can only assume
you're getting 30 ops/sec because someone is running timing attacks on the
same machine you're running speed :-)

   Thus a top notch 2GHz machine without hardware accelerator could do
   around 120 RSA per second (of course, one cannot simply scale like I
   have just done, but it gives an idea about the range we are talking about).

I have a 1GHz AMD that is in the ballpark of the speeds you mention.

[Not that anything I've just said affects the points you were making.]

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]






Re: Anyone using OpenSSL for a CA or PKI Deployment?

2002-07-16 Thread Geoff Thorpe

Hi,

On Tue, 16 Jul 2002, Wienckowski, Justin wrote:

 My company is using some Windows software to run a Certificate Authority
 to generate certs for corporate employees and resources.  However, this
 software has proven to be extremely buggy and support is horrible, so
 we're looking at alternatives.

not surprisingly

 I'd love to re-implement our CA and directory in Unix using OpenSSL.
 Anyone know of companies or organizations who may have already done
 this?  I'm finding very little publicized on the web, and dropping some
 names would help immensely.

I haven't had a chance to play with it - but you might want to try OpenCA
and see how it pans out. http://www.openca.org

Good luck,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]

Pop quiz:
(1) guess the nations of the following three elected leaders;
(i) a war-mongering oil millionaire,
   (ii) a hard-liner found responsible for massacres of civilians,
  (iii) a nobel peace-prize winner.
(2) guess which countries do and do not need a change of leadership.




Re: RSA public and private key lengths (newbie question)

2002-07-13 Thread Geoff Thorpe

Hi,

On Sat, 13 Jul 2002, Manish Ramesh Chablani wrote:

   Here is the snippet of my code which generates an RSA key pair and then
 saves the public and private keys in a character buffer. However, the output
 shows that the public and private keys are of different sizes. I was under
 the impression that pub and priv keys are the same size - is my
 understanding wrong, or is there some problem with my code?

[snip]

 The output generated is:
 Length of public key is 140
 Length of private key is 609

The private key contains all the RSA key data whereas the public key
contains just the public components. So yes, this is normal.
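For a feel of why, here is a toy sketch (tiny textbook primes, illustration only - nothing like real key sizes) of what each half actually contains: the public key is just (n, e), while the private key carries d, both primes and the CRT components as well, hence the much larger encoding:

```python
# Toy RSA key with textbook primes -- illustration only, not secure.
p, q = 61, 53
n = p * q
e = 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)   # modular inverse (Python 3.8+)

public = {"n": n, "e": e}
private = {"n": n, "e": e, "d": d, "p": p, "q": q,
           "dmp1": d % (p - 1), "dmq1": d % (q - 1),
           "iqmp": pow(q, -1, p)}

# The private key serializes every CRT component as well, which is why the
# poster's DER encodings came out at 609 vs 140 bytes.
print(len(public), len(private))  # 2 8
```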

Cheers,
Geoff






Re: OpenSSH 3.4 and OpenSSL 0.9.6d

2002-06-26 Thread Geoff Thorpe

Hi,

On Wed, 26 Jun 2002, Silvex Security Team wrote:

 I am trying to configure OpenSSH 3.4P1 using OpenSSL 0.9.6d without success

 # LIBS=-ldl CPPFLAGS=-I/usr/include/openssh ./configure
 --with-tcp-wrappers --with-ssl-dir=/usr/share/ssl/lib

 bunch_of stuff

 checking for getpagesize... yes
 checking whether snprintf correctly terminates long strings... yes
 checking whether getpgrp requires zero arguments... yes
 checking whether OpenSSL's headers match the library... no
 configure: error: Your OpenSSL headers do not match your library

[snip]

 I also recompiled and installed OpenSSL:

 # ls -l /usr/share/ssl/lib
 total 8388

[snip]

Is it possible that pre-existing openssl headers are being #included but
you're linking against the newer libs you installed in /usr/share/ssl/lib?
If so, try rerunning configure with an include pointing to the openssl
headers you installed with the libs. Ie;

# LIBS=-ldl CPPFLAGS=-I/usr/include/openssh -I/usr/share/ssl/include \
./configure --with-tcp-wrappers --with-ssl-dir=/usr/share/ssl/lib

Cheers,
Geoff






Re: Compression Doubt in Specifications

2002-06-24 Thread Geoff Thorpe

Hi,

On 24 Jun 2002, Shalendra Chhabra wrote:

 HI
 I fail to understand the following:

 In SSL 3.0, the Plaintext blocks are blocks of 2^14
 But when they are compressed it is written:

 Compression must be lossless and may not increase the content
 length by more than 1024 bytes.
 I just wanted to know how can compression increase length?

Take data, compress it, then try to compress it again, and again, ...
sooner or later (probably after the first iteration) successive
compressions won't reduce the data size at all - and in fact may well grow
it because it will still need to include metadata in the output to
describe how to decompress the compressed content. If you want another
easy example, try compressing a 1-byte file using *any* well-defined
format (ie. that is unambiguously defined for any input) and I bet you'll
get expansion.

The SSL3 spec is saying that even if the payload doesn't really shrink
under compression, the worst-case that is permitted by the standard is
that the compressed data, including all metadata required by the
compression format, may not exceed the original (uncompressed) data size
by more than 1K. Of course, it would be a pretty lame compression
algorithm/format that didn't work well within those limits.
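The point is easy to check with zlib itself (Python used here purely for brevity):

```python
import os
import zlib

# A single byte cannot shrink: the output still carries zlib's header,
# the deflate block framing and the Adler-32 trailer.
tiny = zlib.compress(b"x")
print(len(tiny))  # a handful of bytes, nearly all metadata

# Random data (a stand-in for already-compressed content) also grows:
# deflate falls back to "stored" blocks plus the fixed overhead.
blob = os.urandom(1 << 14)  # 2^14 bytes, one SSL3 plaintext block
packed = zlib.compress(blob)
print(len(packed) - len(blob))  # typically a few bytes of overhead
```

The SSL3 rule is just capping that worst-case growth at 1024 bytes over the 2^14-byte input.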

Cheers,
Geoff





Re: Problem with cryptoswift card

2002-02-07 Thread Geoff Thorpe

Hi there,

 I use openssl-engine-0.9.6.c in conjunction with a cryptoswift card.
 To test it, I did a openssl speed -engine cswift.
 First everything seemed to work fine - astonishingly most operations
 were performed in exactly 2.99 secs - but then errors occurred.
 I include the relevant lines of output below.
 Maybe someone could explain to me what happened?

The first error line tells you;

 5272:error:2606607A:engine routines:CSWIFT_MOD_EXP:size too large or too
 small:hw_cswift.c:391:

The cryptoswift engine implementation doesn't support the use of keys that 
large. Try specifying rsa1024/rsa512 etc when using speed.

Also for timing hardware crypto, you may want to use the -elapsed switch. 
The default timing method measures CPU usage to calculate the rate of 
operations - which is more accurate for software benchmarks, eg. in case the 
system does something else while you're running them. The problem with 
hardware is that the process calling the 
crypto operation is often sleeping waiting for the device to respond and 
that time isn't counted, so the rate appears much higher than the actual 
rate of crypto operations per second.
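The effect is easy to reproduce without any hardware (a Python sketch, with sleep() standing in for the wait on the card): wall-clock time passes while the process sleeps, but almost no CPU time is charged to it, so dividing an operation count by CPU time inflates the apparent rate:

```python
import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.2)  # stand-in for blocking on the accelerator

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# A CPU-time-based benchmark divides the operation count by `cpu` and
# reports a wildly optimistic rate; -elapsed divides by `wall` instead.
print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")
```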

Cheers,
Geoff




Re: OpenSSL and POP3 Integration Question

2002-01-21 Thread Geoff Thorpe

Hi there,

 * The OpenSSL API does not offer a call to remove the private key
   information from memory as long as any TLS functionality is still
   set up.
   (- reminder: check whether the memory is overwritten when performing
   SSL_free()/SSL_CTX_free()..)
   To be compliant with RFC2246 we do not the private key, as a
   renegotiation may take place.

I assume you mean "we do need" rather than "we do not"? If so - it could be 
skipped if was known that the request in question wouldn't involve any 
renegotiations. Renegotiations are quite unlikely if the connection carries 
a single transaction and the fork()d process exit()s after that is dealt 
with. In that case, you don't even need to bother with SSL_free() or any 
such SSL API thing - just BN_clear_free(rsa->[d|p|q|dmp1|...]); 
rsa->[d|p|...] = NULL; will do. Ie. just zero out the key entries as they 
won't be called upon anyway - you're just protecting against scanning 
attacks on the memory (or core dump as the case may be). If you really want 
to be ruthless - edit every RSA_new() and RSA_free() function to 
(de)register each RSA pointer in a global list of some kind, then after the 
fork() and SSL handshake has taken place, just iterate through every living 
RSA structure and wipe out any private key components you find in them (not 
all will have them, eg. not those keys in the certificate objects!). 

If you need persistence of the key because child processes handle multiple 
requests or renegotiations may take place in one request, then you'll need 
some other mechanism for doing the key operation outside the child process 
so the child-process doesn't need the key components, eg. dedicated 
hardware/smart-card, distinct crypto process, etc.

 * On the other hand, if your process started with root permissions
   and later dropped privileges using setuid(), your kernel should protect
   you from the user being able to attach to it.
   (At least on HP-UX: I just tried to attach to an imapd process
   and the kernel did not allow it.)

This is also true for Linux and Solaris IIRC. I suspect it's probably 
required by correct implementation of various aspects of POSIX etc, but 
whether various platforms implement them that accurately is another 
question of course.

Anyway, I doubt assuming protection against attaching to processes 
necessarily implies protection against key-scans ... core files, memory 
paging, etc can all be ways to get read access to the process memory (or 
record/copy of it). Better to ensure that the key data is simply not 
accessible in any sense at all before the process is running as that user 
*or* at least get rid of that data as early as possible (ie. lowering the 
window of time the process is running as that user with the key-data in 
memory). There's also the risk of third-party code binding in that could 
launch key-scans, eg. mail-filters, auto-responders, whatever. Some could 
be written deliberately by users to compromise the server keys of course 
(the threat model includes your users, I assume?). If you scrub the key 
data before any of those are loaded - that at least cuts out another avenue 
of attack even if you can't have the keys removed before the process is 
running as a non-root user in the first place.

Cheers,
Geoff




Re: conflicts in openssh/openssl with smartcards

2002-01-20 Thread Geoff Thorpe

Hi there,

 there are two projects supporting smartcard use in openssh (that i'm
 aware of): muscle (www.linuxnet.com) and citi
 (www.citi.umich.edu/projects/smartcard). the citi code is included
 in openssh 3.0.2p1 (didn't check older versions).

 the muscle code uses the RSA meth attribute.
 take a look at openssl/rsa.h:
 ...

[snip]

 however the citi code included in openssh requires the engine version
 of openssl with such an openssl/rsa.h:
 ...

[snip]

 so, if anyone has a nice idea how I can try both at the same time,
 without having two openssl versions, this would be nice.

The 0.9.7 development tree has merged in the 'engine' functionality and has 
improved it in a number of ways - so whatever either of these projects is 
doing will presumably unify on the next openssl release. (touch wood).

In the mean time - the code that is based simply on providing an RSA_METHOD 
could be encapsulated in an ENGINE. If you take a look at any of the engine 
implementations in the 0.9.6-engine (including the 'citi' code I assume) 
you'll notice that it implements an RSA_METHOD that is encapsulated in a 
wrapper ENGINE. If you feel bold enough to do so, try wrapping the 'muscle' 
RSA_METHOD in an ENGINE too and you should be able to get it up and running 
in the 'engine' version of openssl - ie. side-by-side with the 'citi' 
implementation.

Otherwise you could ask the muscle project if they are moving their 
implementation to the 'engine' API. One of us might also take a peek at 
some point (thanks for the URLs) but I can't guarantee when.

Cheers,
Geoff




Re: picking the right cipher

2001-12-27 Thread Geoff Thorpe

Hi there,

On Friday 28 December 2001 12:49, Patrick Li wrote:
 Hi,

 I have implemented the SSL client and server applications and I will be
 using them to conduct SSL sessions.  Since I have control on the client
 and the server, I want to find a cipher which offers strong encryption
 but does not require a lot of CPU cycles.  I think using TLSv1 protocol
 is the logical choice over SSL v3 or SSL v2 right?  How about the
 individual cipher?  How
 about

 EDH-RSA-DES-CBC3-SHASSLv3 Kx=DH   Au=RSA  Enc=3DES(168) Mac=SHA1
 DES-CBC3-SHASSLv3 Kx=RSA  Au=RSA  Enc=3DES(168) Mac=SHA1

 Any advice or pointer will be greatly appreciated.

Well I'm assuming you're not using any hardware acceleration - in which 
case you would want to be using EDH *only* in circumstances where Perfect 
Forward Secrecy is worth the cost of massive CPU overheads. If you conduct 
any benchmarks, you'll soon notice that the EDH cipher suites take a *much* 
bigger CPU hit during (re)negotiations than others.

Try playing around with ssltest in the openssl self-test suite ... you 
should be able to get it to do handshakes with itself using whatever cipher 
suites you like - that'll give you some ideas about comparative speeds. You 
didn't mention whether you'd be using server *and* client authentication or 
just server authentication. If you don't use client certs - the CPU 
overhead at the client end is going to be much lower than that at the 
server end anyhow, so your problem can be discussed simply in terms of the 
server. In which case, you could try using swamp (a shameless plug, I might 
add, http://www.geoffthorpe.net/crypto/swamp/) to hit your server side and 
use the -cipher switch to see how its performance holds up given 
different cipher suites.

A cipher suite involving RC4 will probably be lower overhead than one 
involving DES, though the scale of that difference depends on your traffic, 
the number of (re)negotiations per unit of traffic, your CPU, etc etc etc. 
You should also probably decide what size certificate/keys to use - if 
512-bit RSA keys are sufficient, your (re)negotiation speed will be way 
faster than 1024-bit RSA keys - but less "secure". "secure" is quoted 
because we all know breaking your RSA key based on traffic snooping is 
likely to be the *last* actual target of any determined attack.

Good luck,
Geoff

__
OpenSSL Project                 http://www.openssl.org
User Support Mailing List       [EMAIL PROTECTED]
Automated List Manager          [EMAIL PROTECTED]



Re: Importing Self Signed Cert in Oracle 8i

2001-11-20 Thread Geoff Thorpe

Hi there,

I have no idea what it is that is bothering Oracle 8i about your cert(s) so 
I can simply make guesses here ...

On Tuesday 20 November 2001 02:32, viswanath wrote:
 Here are the differences found

MY CERT|VERISIGN

 1) 1024-bit   1) 512-bit

 2) serial no. 02) serial no.
 52:a9:f4:24:da:67:4c:9d:af:4f:53:78:52:ab:ef:6e

 3) has C,L,ST,O,OU,CN  3) has O,OU,OU only.

 4)has the x509 v3 extension 4) does not have any x509 v3 extensions

 Wat i did was the last differences were removed? but still it did not
 work

You removed all the differences? In particular, did you generate a non-v3 
cert?

A quick search on google turned up this;
  http://www-rohan.sdsu.edu/doc/oracle/network803/A54088_01/conc1.htm

which mentions in passing that it doesn't support v3 certs (for now). 
There may be other things it doesn't support, but that's one they come 
clean about. :-)


Another difference I noted with a quick scan was that your cert contained 
email addresses - in particular these are encoded as IA5STRING whereas the 
Verisign one has nothing but PRINTABLESTRINGs. I'd have hoped that wouldn't 
make a difference but you never know - are you able to play around with 
generating a few varieties of certs and importing each in turn to see if 
you can find the difference between acceptable and unacceptable?

Cheers,
Geoff




Re: Importing Self Signed Cert in Oracle 8i

2001-11-19 Thread Geoff Thorpe

On Tuesday 20 November 2001 00:20, viswanath wrote:
 But the self signed certificate that has been generated contains the
 following

 X509v3 Basic Constraints:
 CA:TRUE
 X509v3 Key Usage:
 Certificate Sign, CRL Sign
  Netscape Cert Type:
 SSL CA, S/MIME CA, Object Signing CA

 which means that it is a CA certificate.
 So what else could be the problem.

Can you give us a side-by-side of the differences between the CA cert that 
was imported OK and the CA cert you can't get imported? Logic (or a 
first-order approximation thereof) tells me that's where you should find 
your answer ... though of course it could be something like the way the 
strings are encoded rather than the nature of the attributes.

Perhaps "openssl asn1parse -i" the two and take a look at what kind of 
differences you find?

Cheers,
Geoff





Re: openssl performance

2001-11-02 Thread Geoff Thorpe

On Monday 09 July 2001 13:52, Steven A. Bade wrote:
 OK Stupid question Where can one find SWAMP???

There's a downloadable tarball at;
   http://www.geoffthorpe.net/crypto/

However, expect a heavily revamped version soon ...

Cheers,
Geoff




Re: another oddball question

2001-09-20 Thread Geoff Thorpe

Hi there,

On Wed, 19 Sep 2001, Tom Biggs wrote:

 I've got a nearly rhetorical question, but I thought I'd toss
 it into the ring anyway.
 
 I'm wondering how much overlap there might be between _all_ of
 the modulus values used across all OpenSSL modular exponentiation
 calls.  If there is a good probability that some reasonably-sized
 set of moduli (plural sp?) are used again and again, I can get
 increased performance from our hardware by caching values derived
 from the moduli.

For general BN_mod_exp[_crt|_mont] implementations, not really. You
could if you want and yes, you will often get matching moduli (and
exponents for that matter), but you'd have to use a hash-table as there may
be a few commonly used keys rather than one.

Better still is to simply do it at the RSA, DSA, and DH levels (which is
after all where most of the action will be happening). Consult the
ex_data functions - you can cache any per-key stuff you like using the
ex_data storage inside each one of those objects. This is also useful for
converted BIGNUM forms (eg. not recomputing and reallocating BN_bn2bin()
output every time, etc). If your cache is attached to each key, you know
extremely well that the modulus, exponent, or any other key details won't
be changing much, if at all. :-)
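As a minimal sketch of that per-key caching idea (written in Python for brevity; the real ex_data API is C, and the class and field names here are hypothetical analogues, not OpenSSL's):

```python
# Per-key cache sketch: derived values (here, a byte form of the modulus)
# are computed once and stashed on the key object, mirroring the way
# OpenSSL's ex_data store lets you attach arbitrary data to an RSA/DSA/DH key.

class RSAKey:
    """Hypothetical stand-in for an RSA key structure."""
    def __init__(self, n, e, d=None):
        self.n, self.e, self.d = n, e, d
        self.ex_data = {}  # analogue of the per-object ex_data store

def cached_modulus_bytes(key):
    # Compute once, reuse thereafter -- the same idea as caching
    # montgomery forms or BN_bn2bin() output per key.
    if "n_bytes" not in key.ex_data:
        key.ex_data["n_bytes"] = key.n.to_bytes(
            (key.n.bit_length() + 7) // 8, "big")
    return key.ex_data["n_bytes"]

key = RSAKey(n=3233, e=17)
first = cached_modulus_bytes(key)
second = cached_modulus_bytes(key)
assert first is second  # the second call hit the cache, no recomputation
```

Because the cache lives on the key itself, it stays valid for exactly as long as the key's modulus and exponent do, which is the property argued for above.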

 My gut feeling is that even if I maintained a global cache,
 I wouldn't get too many second hits on the same modulus.

You certainly would. Ditto on the exponents. It's just that it may not be
just *one* commonly occurring value (or, if you're running a single SSL
server, you'll probably have one very common modulus and exponent from the
server's key, but lots of other always-changing values when verifying
client certificates and/or using EDH cipher-suites). For this sort of
BIGNUM level cache trickery, you're better off simply building the support into
a hardware driver or on the hardware itself. However, at the RSA, DSA (etc)
structure levels ... you know a *lot* more about what will and won't
change at run-time.

 My boss is betting that I will see the same moduli re-used
 quite often [1].  I'm guessing not - but mathematically challenged
 as I am,  I have no way to test this theory except by metering
 a production OpenSSL system...
 
 [1] He may be thinking of exponents...

You're right on both counts, but if you abstractly hook just the mod_exp
(without regard to keys) you will never get any guarantees.

Cheers,
Geoff





Re: odds of getting an all-zero result from a modexp

2001-09-19 Thread Geoff Thorpe

Hi,

On Tue, 18 Sep 2001, Tom Biggs wrote:

 OK, so I'm not very maths-literate...
 
 I was just wondering what the odds are of a modular exponentiation
 returning a result of zero in any OpenSSL usage of the modexp.
 
 It seems like odds are very much against it, but is it still
 possible?  Or is it ruled out by some property of the inputs
 that are used by OpenSSL? (large primes or whatever, I don't know.)
 
 It would be nice to know, because I'm dealing with some hardware
 that does modexp.  Currently I flag a zero result as an error because
 it aided debugging the hardware interface.  But if zero is legal I
 need to remove that test.

It's possible for modular exponentiation to return zero, but I doubt it's
possible in the uses of modular exponentiation you care about. RSA for
example, would not do this - the simple reason being that RSA
exponentiations (encrypts, decrypts, signs, or verifies) are invertible by
other exponentiations. As zero will always yield zero after an
exponentiation of any kind, you are therefore confronted with the fact that
no (invertible) RSA operation can yield a zero result except from a zero input.
Presumably if zero represents an error marker, then your code wouldn't
let zero input in to begin with anyway? (needless to say, RSA is never used
in the real world with zero input, padding takes care of that).

However, I can't see why you would need to use zero as an error indicator,
and if you are hooking all *arbitrary* mod_exp operations, you would
theoretically have to consider the possibility of a zero result (even from
non-zero input).
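To make the invertibility argument concrete, here is a small numeric check (a Python sketch with a textbook toy key of my own choosing, not anything from OpenSSL):

```python
# Toy RSA key: p=61, q=53 => n=3233; e=17, d=2753 (e*d == 1 mod lcm(60, 52)).
p, q = 61, 53
n = p * q
e, d = 17, 2753

for m in range(1, 50):
    c = pow(m, e, n)            # RSA-style exponentiation
    assert c != 0               # non-zero input never gives a zero result...
    assert pow(c, d, n) == m    # ...because the operation is invertible

assert pow(0, e, n) == 0        # zero in, zero out, under any exponent

# An *arbitrary* mod_exp, by contrast, can return 0 from non-zero input:
assert pow(10, 2, 4) == 0       # 10^2 = 100, and 100 mod 4 == 0
```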

Cheers,
Geoff





Re: Does Open SSL work on win2000???

2001-07-12 Thread Geoff Thorpe

On Thu, 12 Jul 2001, Gary Fletcher wrote:

 Does Open SSL work on win2000 running Apache???

"work" is a relative word, and relative to win2000, yes OpenSSL works.  Whether
anything works in win2000 relative to "proper" systems is anyone's guess.

Cheers,
Geoff

PS: OK, :-), just in case you weren't sure.




RE: openssl-0.9.6a

2001-04-25 Thread Geoff Thorpe

On Wed, 25 Apr 2001 [EMAIL PROTECTED] wrote:

  And we've stated that binary compatibility doesn't exist 
  either. Given those two
  things, you'd think that OS distributions wouldn't build 
  everything based on
  OpenSSL as shared libraries wouldn't you ... funny what 
  people will do with
  experimental support.
  
  Cheers,
  Geoff
 It's probably worth pointing out also that the engine code is also
 experimental, and at least two companies already advertise their SSL
 accelerators as working with openssl. If I was being really pedantic, I
 would say that neither of them officially work with openssl. However, I
 would like stay friends with these companies.

umm ... engine code is experimental???

Support for it within certain popular applications may be marked as
experimental, but that is not the case with the code itself. The fact it has
been released side-by-side with a non-ENGINE version of OpenSSL is more about
introducing it gradually and not forcing it down application programmers' throats
in one go than it is about any experimental status.

Here of course I assume we mean "experimental" in the (semi-)officially stated
sense, not in the "quality" sense. As you go on to say in the next paragraph,
trying to grapple with the "quality" form of "experimental" opens up a can of
worms far bigger than OpenSSL, or indeed the whole unix community. *grin*

 In my experience though, experimental code for openssl (and mod_ssl) is
 more stable than the finished code that comes from that well known place
 in Washington State.

Indeed.

However heading back to the original subject - binary compatibility has been
something we've stated *will* not exist from version to version, except by luck.
As Richard pointed out, shared library support itself (ie. putting OpenSSL in
shared library form) is experimental. That is as stated by developers.
Distribution packagers have decided to play Russian roulette with both of those
*stated* warnings, by including the shared library forms rather than building
any OpenSSL-dependent applications statically. Remember too that these shared
library forms are system-wide and intended to be used by all openssl-based
applications you install and upgrade over time. When you think about the
implications of trying to do a package upgrade on the openssl libs, and how that
could affect older openssl-based software ... well, the potential for problems
is clear, whether or not it has actually hurt anyone yet.

Previously unknown bugs/quirks in supported software is one thing, using
features that are *stated* to be experimental by their maintainers is another.
(Translation: all things are relative - if you're scared by release quality
software from that place in Washington State, how terrified are you when even
*they* state something is experimental??)

Cheers,
Geoff





Re: RSA Private Encrypt

2001-03-25 Thread Geoff Thorpe

Hey there,

I know the original poster already has his code working, but well ... I had
already begun this reply so I'll just press on anyway! This may be of use to
others now (or in the future) if they're trying to implement custom RSA_METHODs
and/or ENGINEs.

On Sun, 25 Mar 2001, Dr S N Henson wrote:

 "Kenneth R. Robinette" wrote:
  
  I was hoping that this was the case.  Now if I set the
  RSA_FLAG_EXT_PKEY flag, how do I specify the function that will
  be called by OpenSSL to do the private encrypt?  Is this available to
  a client program?  I tried following the logic but quite frankly got lost
  at the rsa_eay_private_encrypt function.  Is there any
  documentation on what the "private" function is passed and how the
  results should be returned?
  
 
 There's some documentation in the relevant rsa manual pages.
 
 What you do effectively is to create an RSA_METHOD structure, copy any
 relevant default methods and then replace whichever ones you want. Then
 create an RSA structure and set its method to the custom method just
 created and of course set RSA_FLAG_EXT_PKEY.

Basically (with hand-waving aplenty), the 4 rsa_[pub|priv]_[enc|dec] method
functions are the high-level RSA functions that get used by "external" code (ie.
anything outside RSA internals). So it's a reasonably safe bet that the code for
SSL, S/MIME, X509, etc etc etc are coming in through those four functions (via
method-independent RSA functions in rsa_lib.c). Those 4 functions, at least as
implemented in the default method in rsa_eay.c, handle such things as padding,
setting up BN_CTX structures for the lower-level computations, pre-calculating
montgomery forms if they're desired and not already there, and other optional
things (RSA blinding, etc etc). As implemented in rsa_eay.c, they then delegate
computation to either the rsa_mod_exp and/or bn_mod_exp method functions. If
RSA_FLAG_EXT_PKEY is set, all private key operations go through the rsa_mod_exp
handler. If not, private key operations go through rsa_mod_exp only if all the
private key components exist (for CRT) otherwise they fall back to bn_mod_exp.
Public key operations go through bn_mod_exp no matter what (as there is no use
for CRT with public exponents). Finally, those four functions, as implemented in
rsa_eay.c, defer to the bn_mod_exp or rsa_mod_exp in the RSA key's current
RSA_METHOD and not just to the bn_mod_exp and rsa_mod_exp used in rsa_eay.c.
The point there is that you can create a new RSA_METHOD, copy the four function
pointers from the default software method, and provide alternative rsa_mod_exp
and bn_mod_exp implementations in your new RSA_METHOD - the four higher level
functions will then use your rsa_mod_exp and bn_mod_exp replacements.

You can go and replace the 4 higher-level functions too if you wish (as Steve
was suggesting may be necessary in the case of CryptoAPI and such), but if
you're just trying to hook the maths and key operations, but not do anything
"weird" with padding and such, it can be easier to use the "steal and hook"
approach.
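The "steal and hook" pattern can be illustrated like this (a Python sketch of the idea only; the real RSA_METHOD is a C struct of function pointers, and every name below is a hypothetical analogue):

```python
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class RSAMethod:
    """Stand-in for a method table of function pointers."""
    priv_enc: Callable    # high-level handler: padding, setup, delegation...
    bn_mod_exp: Callable  # ...down to this low-level exponentiation hook

def default_bn_mod_exp(a, p, m):
    return pow(a, p, m)   # stand-in for the software BIGNUM implementation

def default_priv_enc(method, msg, key_d, key_n):
    # The real handler also does padding, BN_CTX setup, etc.; the point
    # is that it defers the maths to the method's *own* low-level hook.
    return method.bn_mod_exp(msg, key_d, key_n)

default_method = RSAMethod(priv_enc=default_priv_enc,
                           bn_mod_exp=default_bn_mod_exp)

calls = []
def hw_bn_mod_exp(a, p, m):
    calls.append((a, p, m))   # pretend this was handed to an accelerator
    return pow(a, p, m)

# "Steal" the default method wholesale, "hook" only the low-level function:
hw_method = replace(default_method, bn_mod_exp=hw_bn_mod_exp)

out = hw_method.priv_enc(hw_method, 42, 2753, 3233)
assert out == pow(42, 2753, 3233)   # same result as pure software...
assert calls == [(42, 2753, 3233)]  # ...but routed through the replacement
```

The high-level handler is untouched, so padding and bookkeeping behave exactly as before; only the maths is redirected, which is the attraction of the approach described above.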

You probably want to avoid using the init()/finish() pair from rsa_eay.c (the
default software RSA_METHOD) because they explicitly set the flags,
RSA_FLAG_CACHE_PUBLIC and RSA_FLAG_CACHE_PRIVATE, that cause montgomery forms
to be calculated and cached. It's highly unlikely you're using these :-)

By explicitly setting your required flags in the RSA_METHOD structure, you only
need an init() (and finish()) handler if you want to "prepare" method-specific
data in the ex_data store at that point (I'll mention shortly why even this may
be the wrong place to do such initialisation).

 Well that's what you do in non ENGINE builds. In the ENGINE stuff the
 method would be in an ENGINE structure and you'd set the RSA structures
 ENGINE... or something like that.

The engine stuff doesn't affect this - it just provides a higher-level way to
bolt things in. An ENGINE can provide an RSA_METHOD, DSA_METHOD, DH_METHOD,
and/or RAND_METHOD at the implementor's choice, a sort of self-contained shopping
basket of algorithm implementations if you will - a quick flick through the
crypto/engine/ source should show how this is set up and the "openssl engine"
sub-command (particularly with the "-vct" switch) can help to get a feel for
what's going on. As far as how to write the RSA_METHOD, that is exactly as
before.

 rsa_mod_exp() is a low level function that does the actual mathematical
 private key operation:
 
 int (*rsa_mod_exp)(BIGNUM *r0,const BIGNUM *I,RSA *rsa);
 
 it expects an RSA private key operation to be performed on I and the
 result placed in r0.

And if RSA_FLAG_EXT_PKEY is set, that handler gets called for all private key
operations whether or not the RSA structure has all the private key (and CRT)
values populated. So by implementing this handler - you can even perform private
key operations when you only have public key elements available in memory. No
other code in OpenSSL (or dependent applications) 

Re: virtual memory exhausted

2001-02-23 Thread Geoff Thorpe

Hi there,

On Fri, 23 Feb 2001, Richard Levitte - VMS Whacker wrote:

 (this is to be compared to talking with your car dealer.  You won't
 get an answer by just calling them and say "there's something wrong
 with my car, tell me how I get it right")

Richard, you're comparing us to car dealers? Gee ... thanks ...

:-)

Cheers,
Geoff





Re: echoping 4.1 released : a tool to test SSL servers

2001-02-23 Thread Geoff Thorpe

Hi there,

Before I reply - why the cross-posting? There's been a lot of cross-posting
between mod_ssl-users and openssl-users - are there good reasons for it? I can
only assume that subjects fit for both lists at the same time probably involve
people who are on both lists anyway ...

On Wed, 14 Feb 2001, Ben Laurie wrote:

 [EMAIL PROTECTED] wrote:
  
  Thanks Ben for cheering me up. Perhaps If I have a machine that can change
  it's IP number constantly I could get round it. Or perhaps not. Maybe I
  could disable session caching altogether. This is only a development machine
  anyway (and has been trashed many times).
 
 That wasn't exactly what I meant: in a live server you do less RSA and
 more symmetric because of session caching.

Yup, but if I might go so far as to cheer John up even less (grin), you also
need to take a close look at your expected usage. Eg. Let's imagine you run a
banking site, with session caching and an expectation that a large percentage of
your hits will come from logged-in, session-resuming users - ie. you don't have
millions of users hitting your SSL site randomly one time each, but typically
have thousands of users hitting your SSL site and staying "involved". You might
then calculate an expected load, an expected profile, calculate a little
contingency into your capacity, and then away you go.

And probably every Monday morning at 9am your site will *drown*.

This is an example of one way to look at a problem, think you've nailed it, and
then get embarrassingly thumped for your troubles. But there are more if you look
for them. Especially on something as fickle and unpredictable as the internet.
Ever heard of the "slashdot effect"?

The only safe way is to test all kinds of extreme cases - the ones designers
call "hypothetical" which of course translates to "shouldn't happen, but of
course *will* happen the moment we flick the switch". Eg. yes - turn off session
caching, and slam your server with a vast number of brand new session attempts
all at the same time, and try to simulate the numbers and usage you'd expect if,
miraculously, tonnes of your users decide to conspire to do this sort of thing.
Remember, such a "spike" can come from anywhere - the stock market may go
spiralling dow... oh wait, it has ... well, anyway - there could be any number
of reasons why your "expected profile" may just get turfed out the window -
reality has a strange sense of humour when it comes to forecasts and
expectancies. You can't rely, in cases like this, on users organising themselves
to ensure they don't all ask for the same thing at the same time. Whatever it
might be that causes a user to want to hit your site at any given time may be a
perfectly good reason for the rest of them to do the same. Especially at 9am
Monday morning. :-)

Similarly, switch on session caching, get a few "users" (ie. simulated test
clients if need be) connected (ie. sessions negotiated), then see how well your
session caching and application concurrency holds up if those "few" (a relative
term of course, it could equal 2 depending on who you are) users then slam
away at the server resuming sessions all the time. It's another kind of load -
one that's heavy on any system that has contention (eg. locking for shared
resources, such as a session cache, or more likely, something in the
web-server's application logic - if every request to your application servers
adds a log to the same shared table in an external SQL database then you may
find the whole system grinds to a crawl on the locking for that table).

Then there's "user friendly" testing you might consider - ie. in a "normal"
situation (where the system *is* finally operating at the loads and profiles you
predicted), what individual latencies do the clients experience? Many server
architectures use distribution to scale up (here, distribution covers some
things you may not normally call "distributed" - eg. using external databases,
nfs-mounted file systems, CGIs that do anything network related, LDAP lookups,
remote authentication, etc). This is usually to spread the work around more than
one system to increase scalability and redundancy - but it usually takes a
penalty of some kind in the latency of each individual request. A server
architecture that can maintain your largest possible throughput and have cycles
left to burn is completely useless if each client's request takes 10 seconds to
return and 2 seconds is the longest you (or your client) is prepared to accept.

Just trying to help. :-)

Cheers,
Geoff






Re: What does the e-value do?

2001-02-16 Thread Geoff Thorpe

Hi there,

On Fri, 16 Feb 2001, Deng Lor wrote:

 I'm eager to know why 65537 is selected as the e, and are there
 any fact proofing it is better than other primes seleted out
 randomly?

"e" itself doesn't have to be prime but it does have to satisfy certain
conditions relative to "d" and the underlying RSA primes. However, usually
"e" is chosen in advance, and then the rest of the keygen process is
rigged to try and work out the rest to fit. The common choices for "e" are
3, and 65537 (the latter otherwise known as F4, F=Fermat).

The reasons; 3 is the smallest and fastest exponent to do public-key
operations with. 65537 is also quite fast and small, but "feels" more
secure than 3 (for various reasons already mentioned and more). You'll
note that the binary expansions of the two numbers are;

3 = 11
65537 = 10000000000000001

(ie. they're a little sparse on "1"s which is a good thing for speed ...)

The part mostly responsible for the speed of public key RSA operations are
"mod_exp" calculations - they are of the form "a^e mod n" where (n,e) is
the public key and a is the input data. A technique often used to
calculate this for a given "a" involves calculating a series of squares;

  a0 = a mod n
  a1 = a^2 mod n = a0^2 mod n
  a2 = a^4 mod n = a1^2 mod n
  a3 = a^8 mod n = a2^2 mod n
  ...

and then you can cross-multiply according to the binary expansion of the
desired exponent "e". In other words,

a^3 mod n = (a^2 * a^1) mod n = (a1 * a0) mod n

or
a^65537 mod n = (a^65536 * a^1) mod n = (a16 * a0) mod n

(ie. the multiply only has two operands because the above binary
expansions only have two "1"s in each - if they had lots of "1"s, you'd
have lots of multiplies to do).

So with e=65537, you need to calculate the series up to a16 whereas e=3
only requires the series up to a1. This is still way faster than using
some arbitrary "e" (which would on average require calculating the series
up to a1023 for 1024-bit keys, and the cross-multiplication would on
average use 512 of the series' values). However, for an arbitrarily chosen
"e", CRT or montgomery forms would probably be used anyway rather than
this clunky technique (which is not optimal at all for "noisy" exponents
as is used in private key operations).
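The squaring series and cross-multiplication described above can be written out as follows (a Python sketch, with a toy modulus of my own choosing):

```python
def mod_exp(a, e, n):
    """Square-and-multiply: walk the binary expansion of e."""
    result = 1
    square = a % n                       # a0 = a mod n
    while e:
        if e & 1:                        # each "1" bit costs a cross-multiply
            result = (result * square) % n
        square = (square * square) % n   # next term of the series: a1, a2, ...
        e >>= 1
    return result

n = 3233  # toy modulus
assert mod_exp(7, 3, n) == pow(7, 3, n)
assert mod_exp(7, 65537, n) == pow(7, 65537, n)

# e=3 and e=65537 each have only two "1" bits, hence only two operands in
# the final cross-multiply -- the sparseness the text points out:
assert bin(3).count("1") == 2
assert bin(65537).count("1") == 2
```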

That's a highly rough, corner-cutting, hand-wavey, and in some instances,
quite possibly wrong, explanation of the "e". That's all you're going to
get from me however, and with the speed it was banged out you'll probably
have to wait for the more sage lurkers on this list to pick it apart
carefully and give you the *real* truth. :-)

Cheers,
Geoff





Re: Distributed session caching

2001-01-24 Thread Geoff Thorpe

Hi there,

On Mon, 22 Jan 2001, Shridhar Bhat wrote:

 Hi,
 
 We are trying to deploy multiple SSL-based servers
 in a cluster. We want to share the session cache of each
 of these servers so that connections from same client
 (with session id reuse) can be handled by any server in
 the same cluster. The scheme is simple: 

[snip]

Various solutions to the problem exist. Various problems with various
solutions exist also. As you mentioned, this can also have some impact on
the scalability, and fault-tolerance of your architecture (and what this
does to load-balancing configurations adds another ladle of spaghetti to
the confusion).

I co-wrote a paper on this subject quite a while ago and it will probably
require an update in the not-too-distant future - but it does at least
discuss some details about this stuff and related issues;

   http://www.geoffthorpe.net/apcon2000/

or

   http://www.awe.com/mark/apcon2000/

Cheers,
Geoff





Re: rsaref/crypto in openssl

2000-12-13 Thread Geoff Thorpe

Hi there,

On Wed, 13 Dec 2000, Richard Levitte - VMS Whacker wrote:

 For user questions (like users of OpenSSL or application developers
 that use OpenSSL as a library), openssl-users is exactly right.  For
 talking about development of OpenSSL itself, openssl-dev is often a
 better channel to use.
 
 Patience is a virue.

"virue"?? Perhaps you meant "virus"? But it's certainly not a terribly
contagious one. :-)

Cheers,
Geoff

PS: Yes, guilty as charged of wasting bandwidth ... gulp





Re: CANT BE DONE!

2000-12-08 Thread Geoff Thorpe

On Fri, 8 Dec 2000, Jackie Chan wrote:

 From what I can see it is impossible to create a client and server
 interaction that allows the following behavior using Net::SSLeay
 
 client sends data to the server
 
 server handles data
 
 server waits for more data from the same client over the same connection
 
 Pay close attention to that last line, it seems the only way to get
 write() to hangup is to send a shutdown $socket, 1;

I don't do Perl and this is a Perl issue. That's not a great start for a
response is it ... oh well. This sort of issue is the same no matter what
the application or protocol (or language). What is to stop someone from
initiating attacks on telnet, ftp, ssh, pop3, smtp, http, https, or any
other server protocol in the same way? Such attacks could be by just
opening loads of connections over time but never "completing" any of them,
or just by continuing to hold them open through whatever mechanisms
required (eg. keeping FTP connections from timing out by periodically
trickling a "dir" command through). This has to be handled in the code
that deals with the "real-time" ... namely the application, or network
framework - or in short, wherever it is that chooses when and how to do
reads and writes.

That still doesn't, in itself, solve your problem - but I'm getting there
hopefully. The techniques are many, but the standard ones are non-blocking
IO, timers, connection limits, and so on. non-blocking IO is, of course,
non blocking. I have no idea how that fits into the Perl scheme of things
(let alone Net::SSLeay itself) but I would be highly surprised if there
isn't transparent support for such things. OpenSSL itself (specifically
the socket BIOs and the SSL state machine) have implicit support for
non-blocking IO so you should be able to utilise that somehow from the
Perl wrapper. Another technique of course is to use some kind of timer
whereby you interrupt yourself (or get another process to do it perhaps)
with a signal or whatever your platform/language's support for such a
concept is called (life in Windows might or can be "different"). All the
standard POSIXy (or BSD perhaps) IO functions return from any blocking
calls when a non-blocked signal is received. I'd assume that is (or at
least *can be*) observed from inside the Perl world too so that's one way
to go. Connection limits are probably not what you're after but are the
perfect solution for some other use-cases, eg. telnet, ftp, etc -
protocols where having connections open indefinitely is sometimes
*desirable*, so you control the number of connections rather than
controlling how long they live.
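For what it's worth, the non-blocking IO idea looks like this at the raw socket level (a Python sketch, since I can't speak for Net::SSLeay's wrappers; in Perl you'd reach for the equivalent fcntl/IO::Socket facilities):

```python
import errno
import socket

# A connected pair standing in for server<->client.
server, client = socket.socketpair()
server.setblocking(False)  # the crucial bit: reads no longer block forever

try:
    server.recv(1024)               # client has sent nothing yet...
    raise AssertionError("unreachable: recv should not have succeeded")
except BlockingIOError as exc:      # ...so the call returns immediately
    assert exc.errno in (errno.EAGAIN, errno.EWOULDBLOCK)
    # Here a real server would consult its own timers/connection limits
    # and decide whether to keep waiting or drop the slow client.

client.sendall(b"dir\r\n")          # once data does arrive...
assert server.recv(1024) == b"dir\r\n"  # ...the same recv succeeds
```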

 I have tried to use write_CRLF() and read_CRLF() but they do not act as
 described.  If anyone actually HAS done this, I would be most pleased to
 see an example, or even what the respective calls are on either side of
 the connection.

I don't know if your problem here will be with standard DoS logic or the
fact that the Net::SSLeay API itself is being unyielding. If it's the
former - *any* server written in Perl that claims to not be trivial to
attack, stall, or bury would be a good place to look first (eg.
interrupting blocking IO functions, setting connections to non-blocking
mode, doing signal stuff, etc). If it's the latter - I hope you find what
you're looking for.

Cheers,
Geoff





