Re: [openssl-dev] evp cipher/digest - add alternative to init-update-final interface

2018-01-17 Thread Peter Waltenberg
Or just add another EVP_CIPHER_CTX_ctrl() option (EVP_CTRL_CIPHER_ONE_SHOT
or similar) and handle it the way CCM does now: finish the operation
on the first data update.

That doesn't require a new API and would probably simplify some existing
code.

Peter




From:   Patrick Steuer 
To: openssl-dev 
Date:   18/01/2018 04:10
Subject:[openssl-dev] evp cipher/digest - add alternative to 
init-update-final interface
Sent by:"openssl-dev" 



libcrypto's interface for ciphers and digests implements a flexible
init-update(s)-final calling sequence that supports streaming of
arbitrarily sized message chunks.

Said flexibility comes at a price in the "non-streaming" case: the
operation must be "artificially" split between update/final. This
leads to more functions than necessary needing to be called to
process a single packet (inviting user errors). It is also a small-packet
performance problem for (possibly engine-provided) hardware
implementations, for which it forces a superfluous call to a
coprocessor or adapter.

libssl currently solves the problem, e.g. for TLS 1.2 AES-GCM record
layer encryption, by passing additional context information via the
control interface and calling EVP_Cipher (undocumented, no engine
support; the analogously named, undocumented EVP_Digest is just an
init-update-final wrapper). The same would be possible for TLS 1.3
packets (currently implemented using init-update-final, which
performs worse than TLS 1.2 record encryption on some s390 hardware).

I would suggest adding (engine-supported) interfaces that can process a
packet with 2 calls (i.e. init + enc/dec/hash), at least for crypto
primitives that are often used in a non-streaming context, like AEAD
constructions in modern TLS. (This would also make it possible to move
TLS-specific code like nonce setup to libssl. Such interfaces already
exist in BoringSSL [1] and LibreSSL [2].)

What do you think?

Best,
Patrick

[1] https://commondatastorage.googleapis.com/chromium-boringssl-docs/aead.h.html

[2] http://man.openbsd.org/EVP_AEAD_CTX_init

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-08-23 Thread Peter Waltenberg
The bad case I'm aware of is the fork() one as it's critical that the RNG 
state diverge on fork(). Without that you can get some very nasty 
behaviour in things like TLS servers. Some of which have a thread pool + 
fork() model to handle increasing load.

While ideally you'd do a complete reseed, just different state in each RNG
is a LOT better than nothing, and even PID + whatever else you can
scrounge up will help a lot. Even the high-res counters available on most
current CPUs would help there, because forking multiple processes isn't
quite synchronous.

I don't think 'telling the user to fix it' is a particularly good option 
in these cases as in general the user will be even less capable of dealing 
with this than OpenSSL. By all means warn the users that behaviour may not 
be ideal, but do your best first.

Peter





From:   Paul Kehrer 
To: "openssl-dev@openssl.org" 
Date:   24/08/2017 07:13
Subject:Re: [openssl-dev] Work on a new RNG for OpenSSL
Sent by:"openssl-dev" 



On August 19, 2017 at 2:48:19 AM, Salz, Rich via openssl-dev (
openssl-dev@openssl.org) wrote:

I think the safest thing is for us to not change the default. Programs 
that know they are going to fork can do the right/safe thing. It would be 
nicer if we could automatically always do the right thing, but I don’t 
think it’s possible. 


It appears the current position is that, since there will be edge cases
where a reseed would fail (thus either halting the RNG or silently not
reseeding it), we should not attempt to reseed? I would argue it is
better to attempt to reseed and document that edge cases may need to
reseed themselves. This dramatically narrows the window from "everybody
needs to do it" to "users in certain scenarios that are becoming rarer by
the day need to do it". Given that backwards compatibility is a concern,
maybe failure to reseed on fork should only drop an error on the child
process's error queue? That behavior could potentially be a
separate flag that OpenSSL uses by default (OPENSSL_TRY_TO_INIT_ATFORK),
and then OPENSSL_INIT_ATFORK can be more strict about reseed failures if
desired.

-Paul
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-28 Thread Peter Waltenberg
Debian also screwed up here at one point and the SSH keys for Debian 
installs came from a very small subset of keys. This CLASS of problem is 
common and it's something you need to make efforts to avoid. And again, it 
is something you need to address as far as you can because you simply 
can't rely on the users of your software to be able to do better.

Seeding is a hard problem as is using the seed material correctly.

The overall objective is security; security requires instance-unique keys,
keys that aren't trivially guessed. Quite a few of the suggestions made so
far would compromise that. It's a very different problem from generating
good pseudo-random sequences and by its nature doesn't lend itself well
to clean and elegant solutions.

Peter 






From:   Cory Benfield <c...@lukasa.co.uk>
To: openssl-dev@openssl.org
Date:   28/06/2017 17:15
Subject:Re: [openssl-dev] Work on a new RNG for OpenSSL
Sent by:"openssl-dev" <openssl-dev-boun...@openssl.org>




> On 28 Jun 2017, at 04:00, Paul Dale <paul.d...@oracle.com> wrote:
>
> Peter Waltenberg wrote:
>> The next question you should be asking is: does our proposed design
>> mitigate known issues? For example this:
>> http://www.pcworld.com/article/2886432/tens-of-thousands-of-home-routers-at-risk-with-duplicate-ssh-keys.html
>
> Using the OS RNG won't fix the lack of boot time randomness unless there
> is a HRNG present.
>
> For VMs, John's suggestion that /dev/hwrng should be installed is
> reasonable.
>
> For embedded devices, a HRNG is often not possible.  Here getrandom()
> (or /dev/random since old kernels are common) should be used.  Often
> /dev/urandom is used instead and the linked article is the result.  There
> are possible mitigations that some manufacturers include (usually with
> downsides).

When you say “the linked article”, do you mean the PCWorld one? Because 
that article doesn’t provide any suggestion that /dev/urandom has anything 
to do with it. It is at least as likely that the SSH key is hard-coded 
into the machine image. The flaw here is not “using /dev/urandom”, it’s 
“exposing your router’s SSH access on the external side of the router”, 
plus the standard level of poor configuration done by shovelware router 
manufacturers.

Cory

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev






Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Peter Waltenberg
If the desired outcome is security, you must generate instance-unique
keys, and elegant software design alone is simply not enough to achieve
that.

And I didn't say "solve" below, I said "mitigate".
You can't solve the problem of someone using already created keys in
multiple VMs.
But you can and should reduce the chances that someone will create them
from a fresh keygen, because that simply can't be mitigated anywhere else
but in your code.

Similar issues exist with fork(), and again, you should make efforts to
mitigate that risk because the user can't.

Magic fairy dust (like /dev/hwrng) undoubtedly helps where it exists, but
you still have to apply it correctly to achieve the desired outcome.

Peter



From:   John Denker via openssl-dev <openssl-dev@openssl.org>
To: "openssl-dev@openssl.org" <openssl-dev@openssl.org>
Date:   28/06/2017 12:19
Subject:Re: [openssl-dev] Work on a new RNG for OpenSSL
Sent by:"openssl-dev" <openssl-dev-boun...@openssl.org>



On 06/27/2017 06:41 PM, Peter Waltenberg wrote:

> Consider that most of the world's compute is now done on VMs where
> images are cloned, duplicated and restarted as a matter of course. Not
> vastly different from an embedded system where the clock powers up as
> 00:00 1-Jan-1970 on each image. If you can trust the OS to come up with
> unique state each time you can rely solely on the OS RNG - well,
> provided you reseed often enough anyway, i.e. before key generation.
> That's also why seeding a chain of PRNGs once at startup is probably
> not sufficient here.

That is approximately the last thing openssl should be
fussing over.  There is a set of problems there, with a
set of solutions, none of which openssl has any say over.

===>  The VM setup should provide a virtual /dev/hwrng  <===

Trying to secure a virtual machine without a virtual hwrng
(or the equivalent) is next to impossible.  There may be
workarounds, but they tend to be exceedingly locale-specific,
and teaching openssl to try to discover them would be a
tremendous waste of resources.

So stop trying to operate without /dev/hwrng already.

It reminds me of the old Smith & Dale shtick:
  -- Doctor, doctor, it hurts when I do *this*.
  -- So don't do that.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev







Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Peter Waltenberg
The next question you should be asking is: does our proposed design
mitigate known issues? For example this:

http://www.pcworld.com/article/2886432/tens-of-thousands-of-home-routers-at-risk-with-duplicate-ssh-keys.html

Consider that most of the world's compute is now done on VMs where images
are cloned, duplicated and restarted as a matter of course. Not vastly
different from an embedded system where the clock powers up as 00:00
1-Jan-1970 on each image. If you can trust the OS to come up with unique
state each time, you can rely solely on the OS RNG - well, provided you
reseed often enough anyway, i.e. before key generation. That's also why
seeding a chain of PRNGs once at startup is probably not sufficient here.

And FYI: on systems not backed with hardware RNGs, /dev/random is
extremely slow. 1-2 bytes/second is a DoS attack on its own, without any
other effort required.

This isn't solely a matter of good software design. And yes, I know, hard 
problem. If it wasn't a hard problem you probably wouldn't be dealing with 
it now.


Peter




From:   Benjamin Kaduk via openssl-dev 
To: openssl-dev@openssl.org, Kurt Roeckx , John Denker 

Date:   28/06/2017 09:38
Subject:Re: [openssl-dev] Work on a new RNG for OpenSSL
Sent by:"openssl-dev" 



On 06/27/2017 04:51 PM, Kurt Roeckx wrote:
> On Tue, Jun 27, 2017 at 11:56:04AM -0700, John Denker via openssl-dev
> wrote:
>> On 06/27/2017 11:50 AM, Benjamin Kaduk via openssl-dev wrote:
>>> Do you mean having openssl just pass through to
>>> getrandom()/read()-from-'/dev/random'/etc. or just using those to seed
>>> our own thing?
>>>
>>> The former seems simpler and preferable to me (perhaps modulo linux's
>>> broken idea about "running out of entropy")
>>
>> That's a pretty big modulus.  As I wrote over on the crypto list:
>>
>> The xenial 16.04 LTS manpage for getrandom(2) says quite explicitly:
>>
>>     Unnecessarily reading large quantities of data will have a
>>     negative impact on other users of the /dev/random and /dev/urandom
>>     devices.
>>
>> And that's an understatement.  Whether unnecessary or not, reading
>> not-particularly-large quantities of data is tantamount to a
>> denial of service attack against /dev/random and against its
>> upstream sources of randomness.
>>
>> No later LTS is available.  Reference:
>>   http://manpages.ubuntu.com/manpages/xenial/man2/getrandom.2.html
>>
>> Recently there has been some progress on this, as reflected in
>> the zesty 17.04 manpage:
>>   http://manpages.ubuntu.com/manpages/zesty/man2/getrandom.2.html
>>
>> However, in the meantime openssl needs to run on the platforms that
>> are out there, which includes a very wide range of platforms.
>
> And I think it's actually because of changes in the Linux RNG that
> the manpage has been changed, but they did not document the
> different behavior of the kernel versions.
>
> In case it wasn't clear, I think we should use the OS provided
> source as a seed. By default that should be the only source of
> randomness.

I think we can get away with using OS-provided randomness directly in many 
common cases.  /dev/urandom suffices once we know that the kernel RNG has 
been properly seeded.  On FreeBSD, /dev/urandom blocks until the kernel 
RNG is seeded; on other systems maybe we have to make one read from 
/dev/random to get the blocking behavior we want before switching to 
/dev/urandom for bulk reads.

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] License change agreement

2017-03-23 Thread Peter Waltenberg
OpenSSL has a LOT of commercial users and contributors. Apache2 they can
live with, GPL not so much. There's also the point that many of the big
consumers (like Apache :)) are also under Apache2. Least possible
breakage, and I think it's a reasonable compromise. Of course I am biased
because I work for one of the commercial users.

Peter

-----"openssl-dev" wrote: -----
To: openssl-dev@openssl.org
From: Richard Moore
Sent by: "openssl-dev"
Date: 03/24/2017 07:34AM
Subject: Re: [openssl-dev] License change agreement

On 23 March 2017 at 18:04, Salz, Rich via openssl-dev wrote:
>> The new license also conflicts with the GPLv2.  This was immediately
>> brought up as a serious problem when this discussion began in July of
>> 2015.  It appears that the feedback that the APL does not solve these
>> serious problems with how OpenSSL was licensed was ignored.  Sad to
>> see that.
>
> No it was not ignored.  (Just because we disagree doesn't mean we ignore
> the feedback.) The team felt that the Apache license better met our
> needs.

It's a fairly large elephant in the room that the press release does not
address at all though. I think it's reasonable to expect some kind of
reasoning.

Cheers
Rich.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] please make clear on website that 1.1.0e is Development release, not GA / Production release

2017-03-21 Thread Peter Waltenberg
Just commenting on this: I had very few problems moving from 1.0.2 to
1.1.0. We'd already cleaned up most of the issues OpenSSL fixed between
1.0.2 and 1.1.0; those fixups were well isolated, so migrating was just a
matter of ifdef'ing out accessors/allocators/deallocators we'd created to
civilize the API and replacing those with the equivalents native to
1.1.0.

Things like that you can't fix without breaking someone, and without
fixing that you can't provide stable ABIs going forward; as Richard says,
someone will break at some point when you do that anyway. I'll concede we
realized ABI stability would be an issue well in advance of 1.1.0, but it
was just good defensive programming practice that achieved that, not
inside information.

Mind you, some of the problems in 1.1.0x are awesome: older HP-UX PA-RISC
compilers turn some of the macros deep in OpenSSL into local functions,
embedded in every object file. Our footprint there went from 2M to 20M.
Solaris had similar issues, but not quite as bad in practice.

Peter

-----"openssl-dev" wrote: -----
To: openssl-dev@openssl.org
From: Richard Levitte
Sent by: "openssl-dev"
Date: 03/21/2017 06:56PM
Subject: Re: [openssl-dev] please make clear on website that 1.1.0e is Development release, not GA / Production release

In message on Tue, 21 Mar 2017 00:13:57 +, Jason Vas Dias said:

jason.vas.dias> On 20/03/2017, Kurt Roeckx wrote:
jason.vas.dias> > The ed25519 support in openssh doesn't even come from openssl.
jason.vas.dias> >
jason.vas.dias> What happens is OpenSSH's cipher.c calls
jason.vas.dias>     if (EVP_CipherInit(cc->evp, type, NULL, (u_char *)iv,
jason.vas.dias>         (do_encrypt == CIPHER_ENCRYPT)) == 0) {
jason.vas.dias>             ret = SSH_ERR_LIBCRYPTO_ERROR;
jason.vas.dias>             goto out;
jason.vas.dias>     }
jason.vas.dias> which always does 'goto out' for any ED25519 file.

That would happen if ssh_host_ed25519_key is password protected and
the cipher used to encrypt the key isn't recognised in OpenSSL 1.1.0
(and considering the current master of openssh-portable doesn't build
cleanly against OpenSSL 1.1.0e and I therefore suppose you've hacked
around, I can't even begin to say where the fault came in).  It also
depends on your OpenSSL configuration, since you can disable most
algorithms it carries...

jason.vas.dias> >> which mainly
jason.vas.dias> >> involved including the '*_lo?cl.h' & '*_int.h' headers
jason.vas.dias> >
jason.vas.dias> > Including the internal headers is not a good patch. This will
jason.vas.dias> > break.
jason.vas.dias> >
jason.vas.dias> It doesn't break at all - the code remains 100% unchanged - just different
jason.vas.dias> headers need including - and seems to work fine including the API
jason.vas.dias> hiding headers.

The structures you find in there are made private for a reason, we
need the liberty to make changes in them in future developments
without disturbing the ABI (not just the API).  So some time in the
future, it will break.

jason.vas.dias> And my point is really not to criticize your effort, it is just a plea to make
jason.vas.dias> clear on the web-page that the 1.1.0 branch is a development branch and
jason.vas.dias> does not work yet with most OpenSSL using applications.

It isn't a development branch.  We see it as a stable release, i.e. no
further development apart from bug fixes.  "master" is the development
branch.

jason.vas.dias> OpenSSL in its 1.0.2 incarnation has been hardened by over (10,15,20)? years
jason.vas.dias> of testing, and its API is usable by all OpenSSL using applications,
jason.vas.dias> unlike 1.1.0.

Just to put things in perspective, OpenSSL 1.0.0 was released
2010-Mar-29.  That was the start of the 1.0.x series.  OpenSSL 1.0.2
was released 2015-Jan-22.

OpenSSL 1.1.0 marks the start of the 1.1.x series, which isn't source
compatible with the 1.0.x series.  We have talked about this in
different ways even before the first Alpha release was made (over a
year ago).

Either way, the 1.0.2 branch is supported until the end of 2019.
One could say that's how long other application authors have to rework
their source, although that's not really true since anyone can keep
the 1.0.2 source around as long as they want (hey, even we do).

Maybe you expected all applications to have converted the moment we
declared our 1.1.0 release stable?  That will not happen...  as far as
we've observed, most are hardly even looking before we've made a
stable release (which I agree is unfortunate).

Cheers,
Richard

-- 
Richard Levitte         levi...@openssl.org
OpenSSL Project         http://www.openssl.org/~levitte/

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] MD5 speed

2017-01-29 Thread Peter Waltenberg
No one cares? I'd suggest you check for alignment issues first, as that
tends to dominate at small block sizes.

The "no one cares" is only partly in jest, as MD5 is dead but not yet
buried. And in the grand scheme of things, even a 2:1 performance hit on
16 byte blocks is unlikely to change the world.

Peter

-----"openssl-dev" wrote: -----
To: openssl-dev@openssl.org
From: Kurt Roeckx
Sent by: "openssl-dev"
Date: 01/30/2017 08:35AM
Subject: [openssl-dev] MD5 speed

I had some surprising results of the speed command when testing the
md5 speed on the 1.1.0-stable branch (for both a shared and a static
build):

openssl speed md5 returns:
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
md5             115869.46k   268237.29k   473617.41k   589905.92k   636772.35k   639429.29k

openssl speed -evp md5 returns:
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
md5              53991.08k   160454.36k   364985.86k   537559.38k   624238.59k   633066.84k

On the other hand, with 1.0.1 and 1.0.2-stable using a static build I get:
md5              38045.25k   123423.76k   310729.30k   505120.09k   620333.74k
md5              43182.80k   135651.38k   331369.48k   518042.97k   622193.32k

Using a shared build I get:
md5              57529.01k   169850.56k   376685.74k   545938.09k   626952.87k
md5              65634.19k   186133.65k   397608.96k   558070.78k   629697.19k

So what surprised me is that speed for small packets seems to be a
lot better in 1.1.0 when not using the EVP interface, but worse when
using it compared to 1.0.2. Is this expected behaviour?

Sha1 doesn't seem to have this difference for instance.

Kurt

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [EXTERNAL] Re: use SIPhash for OPENSSL_LH_strhash?

2017-01-11 Thread Peter Waltenberg
Yes, but: LHash hashes internal object names, not externally presented
input. Certainly if it's used on externally presented data it's a
worthwhile change, but AFAIK that isn't the case.

Peter

-----"openssl-dev" <openssl-dev-boun...@openssl.org> wrote: -----
To: openssl-dev@openssl.org
From: Jeremy Farrell <jeremy.farr...@oracle.com>
Sent by: "openssl-dev" <openssl-dev-boun...@openssl.org>
Date: 01/12/2017 11:17AM
Subject: Re: [openssl-dev] [EXTERNAL] Re: use SIPhash for OPENSSL_LH_strhash?

For something like SipHash, knowing "whichever algo the server uses"
effectively implies knowing the 128-bit random key currently being used
for the hash table in question.

Regards,
    jjf

On 12/01/2017 00:39, Sands, Daniel wrote:
> With a small number of buckets, it seems to me that no hash algo will
> make you safe from a flooding attack.  You can simply generate your
> hashes locally using whichever algo the server uses, and only send those
> that fit into your attack scheme.  The data could even be pre-generated.
> The only way to guard against a flood that makes sense to me is to limit
> the number of items that can be accepted before deciding you're being
> trolled.
>
> On Wed, 2017-01-11 at 23:29 +0000, J. J. Farrell wrote:
>> Are the issues you raise true of SipHash, given that a prime motivator
>> for its design was generating hash tables for short inputs while being
>> secure against hash flooding attacks? It achieves this with the
>> performance of a portable C implementation the order of four times
>> faster than MD5, and not much slower than other modern hash
>> algorithms.
>>
>> I'd have thought the main thing to consider is whether or not there is
>> any practical way a hash flooding attack could be used against
>> OpenSSL's hash tables, and it sounds like there isn't. In that case,
>> the fastest algorithm for the usage patterns would be best.
>>
>> On 11/01/2017 22:25, Peter Waltenberg wrote:
>>> And the reason I said you certainly don't need a keyed hash?
>>> Behaviour of the hash function will change with key and in some
>>> cases performance would degenerate to that of a linked list. (Ouch.)
>>> And since the obvious thing to do is use a random key, OpenSSL's
>>> performance would get *very* erratic.
>>> Simpler functions than cryptographic hashes will almost certainly
>>> yield better results here.

-- 
J. J. Farrell
Not speaking for Oracle

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] use SIPhash for OPENSSL_LH_strhash?

2017-01-11 Thread Peter Waltenberg
It pretty much has to be true of any keyed hash if you think about it. If
it didn't distribute the hashes differently each time it wouldn't be
working; if it distributes the hashes differently, performance has to be
key dependent. And with a hash size the same as the key, at least one of
the possible combinations has to be the pathological case.

I can't currently see any possible vector for a flooding attack - well,
O.K., I certainly can if you use SipHash with random keys :), and even
that would be hard to exploit - but otherwise no. If it's significantly
faster, using it with a pre-tested fixed key is probably fine, but that
gives up the security characteristic you were after. My suspicion is also
that simply compressing the string with XOR will work at least as well.

Peter

-----"openssl-dev" <openssl-dev-boun...@openssl.org> wrote: -----
To: openssl-dev@openssl.org
From: "J. J. Farrell" <jeremy.farr...@oracle.com>
Sent by: "openssl-dev" <openssl-dev-boun...@openssl.org>
Date: 01/12/2017 10:05AM
Subject: Re: [openssl-dev] use SIPhash for OPENSSL_LH_strhash?

Are the issues you raise true of SipHash, given that a prime motivator
for its design was generating hash tables for short inputs while being
secure against hash flooding attacks? It achieves this with the
performance of a portable C implementation the order of four times
faster than MD5, and not much slower than other modern hash algorithms.

I'd have thought the main thing to consider is whether or not there is
any practical way a hash flooding attack could be used against OpenSSL's
hash tables, and it sounds like there isn't. In that case, the fastest
algorithm for the usage patterns would be best.

Regards,
        jjf

On 11/01/2017 22:25, Peter Waltenberg wrote:
> And the reason I said you certainly don't need a keyed hash?
> Behaviour of the hash function will change with key and in some cases
> performance would degenerate to that of a linked list. (Ouch.) And
> since the obvious thing to do is use a random key, OpenSSL's
> performance would get *very* erratic.
>
> Simpler functions than cryptographic hashes will almost certainly
> yield better results here. I note someone further up the thread has
> already pointed that out.
>
> Peter

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] use SIPhash for OPENSSL_LH_strhash?

2017-01-11 Thread Peter Waltenberg
And the reason I said you certainly don't need a keyed hash?

Behaviour of the hash function will change with key, and in some cases
performance would degenerate to that of a linked list. (Ouch.) And since
the obvious thing to do is use a random key, OpenSSL's performance would
get *very* erratic.

Simpler functions than cryptographic hashes will almost certainly yield
better results here. I note someone further up the thread has already
pointed that out.

Peter




From:   "Salz, Rich" 
To: "openssl-dev@openssl.org" 
Date:   11/01/2017 13:14
Subject:Re: [openssl-dev] use SIPhash for OPENSSL_LH_strhash?
Sent by:"openssl-dev" 



The needs for OpenSSL's LHASH are exactly what SipHash was designed for: 
fast on short strings.
OpenSSL's hash currently *does not* call MD5 or SHA1; the MD5 code is 
commented out.
Yes, performance tests would greatly inform the decision.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev







Re: [openssl-dev] use SIPhash for OPENSSL_LH_strhash?

2017-01-10 Thread Peter Waltenberg
Reality check

Others have pointed this out but I don't think it's making it through.
LHash doesn't need a cryptographic hash and it doesn't have security
implications. It certainly doesn't need a keyed hash.

LHash does need to be something that's good at distinguishing short text
strings; that's not necessarily the same thing as a good cryptographic
hash, and possibly it's exactly the opposite thing due to the limited
incoming symbol space (ASCII text).

About the only thing LHash needs is high performance in its use area. I'd
suspect that switching MD5 to SHA-1 in the existing algorithm would get
you that, simply because SHA-1 is asm optimized on most platforms now and
MD5 typically isn't.

I'd suggest that anyone wishing to change this should at least have to
demonstrate improved performance in the OpenSSL use case before it's
accepted.

Peter



From:   "Short, Todd" 
To: "openssl-dev@openssl.org" 
Date:   11/01/2017 08:42
Subject:Re: [openssl-dev] use SIPhash for OPENSSL_LH_strhash?
Sent by:"openssl-dev" 



I think I might have an init/update/final version of siphash24 lying 
around somewhere that would be compatible with OpenSSL’s EVP_PKEY 
mechanism (similar to Poly1305, in that it needs a key).
--
-Todd Short
// tsh...@akamai.com
// "One if by land, two if by sea, three if by the Internet."

On Jan 10, 2017, at 4:55 PM, Richard Levitte  wrote:



Benjamin Kaduk skrev: (10 januari 2017 20:19:21 CET)
> On 01/10/2017 12:31 PM, Richard Levitte wrote:
>> Benjamin Kaduk skrev: (10 januari 2017 18:48:32 CET)
>>> On 01/09/2017 10:05 PM, Salz, Rich wrote:
>>>> Should we move to using SIPHash for the default string hashing
>>>> function in OpenSSL?  It’s now in the kernel
>>>> https://lkml.org/lkml/2017/1/9/619
>>>
>>> Heck, yes!
>>> -Ben
>>
>> I fail to see what that would give us. OPENSSL_LH_strhash() is used
>> to get a reasonable index for LHASH entries. Also SIPhash gives at
>> least 64 bits results, do we really expect to see large enough hash
>> tables to warrant that?
>
> We don't need to use the full output width of a good hash function.
>
> My main point is, "why would we want to ignore the last 20 years of
> advancement in hash function research?"  Section 7 of the siphash paper
> (https://131002.net/siphash/siphash.pdf) explicitly talks about using it
> for hash tables, including using hash table indices H(m) mod l.

I agree with the advice when one can expect huge tables. The tables we 
handle are pretty small (I think, please correct me if I'm wrong) and 
would in all likelihood not benefit very much if at all from SIPhash's 
relative safety. 

Of course, one can ask whether someone uses LHASH as a general 
purpose hash table implementation rather than just for OpenSSL's own 
stuff. Frankly, I would probably look at a dedicated hash table library 
first... 

Cheers 
Richard 
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev




-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Add a new algorithm in "crypto" dir, how to add the source code into the build system

2016-12-22 Thread Peter Waltenberg
It's changed in recent OpenSSL.

In 1.1.0c the directories are listed in Configure:

# Top level directories to build
$config{dirs} = [ "crypto", "ssl", "engines", "apps", "test", "util",
                  "tools", "fuzz" ];
# crypto/ subdirectories to build
$config{sdirs} = [
    "objects",
    "md2", "md4", "md5", "sha", "mdc2", "hmac", "ripemd", "whrlpool",
    "poly1305", "blake2",
    "des", "aes", "rc2", "rc4", "rc5", "idea", "bf", "cast", "camellia",
    "seed", "chacha", "modes",
    "bn", "ec", "rsa", "dsa", "dh", "dso", "engine",
    "buffer", "bio", "stack", "lhash", "rand", "err",
    "evp", "asn1", "pem", "x509", "x509v3", "conf", "txt_db", "pkcs7",
    "pkcs12", "comp", "ocsp", "ui",
    "cms", "ts", "srp", "cmac", "ct", "async", "kdf", "sha3"  # <-- added "sha3" to the list
];
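In addition to listing the subdirectory in Configure as above, the new directory needs its own build.info file so its sources are picked up. A minimal sketch for a hypothetical crypto/abc directory (the directory name and abc.c are placeholders, modelled on the build.info files shipped in 1.1.0's crypto/ subdirectories):

```text
LIBS=../../libcrypto
SOURCE[../../libcrypto]=abc.c
```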

Persist, it can be done but there was quite a bit of trial and error 
before I got it working.

Peter



From:   "Wei, Changzheng" 
To: "openssl-dev@openssl.org" 
Date:   23/12/2016 10:41
Subject:Re: [openssl-dev] Add a new algorithm in "crypto" dir, how 
to add the source code into the build system
Sent by:"openssl-dev" 



Hi
Thanks for your reply. 
My question is that I added a new subdir (named abc) as openssl/crypto/abc, 
and implemented the code, Makefile and build.info in the crypto/abc directory, 
but when I re-build OpenSSL, I find that this newly added subdir is not 
involved in the build system; no source file in this subdir is compiled. So I 
want to know how to get these newly added files compiled by the OpenSSL 
build system.
 
Thanks
 
From: openssl-dev [mailto:openssl-dev-boun...@openssl.org] On Behalf Of 
Short, Todd
Sent: Friday, December 23, 2016 5:14 AM
To: openssl-dev@openssl.org
Subject: Re: [openssl-dev] Add a new algorithm in "crypto" dir, how to add 
the source code into the build system
 
Easiest way is to fork the OpenSSL Github repo and then clone it down to 
your local machine where you can do the work locally. Once you are happy, 
push it back up to your forked Github repo, and then make a pull request 
back to the OpenSSL repo. 
 
There are lots of places you can get information on git and Github; but 
this list isn’t one of them.
--
-Todd Short
// tsh...@akamai.com
// "One if by land, two if by sea, three if by the Internet."
 
On Dec 22, 2016, at 8:12 AM, Wei, Changzheng  
wrote:
 
Hi, 
I want to implement some new algorithms. To make my future work go smoothly, I 
want to add a new algorithm method like “RSA_METHOD” to the OpenSSL framework 
so that I can use an “engine” to support such an algorithm.
So I added a new subdir in “crypto” and implemented the code and build.info 
with reference to “crypto/rsa”.
My question is: how do I add my new source code to the build system?
 
Thanks in advance!
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev




-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Making assembly language optimizations working onCortex-M3

2016-06-07 Thread Peter Waltenberg

That may not be a good idea.

The vast majority of OpenSSL in use isn't targeted at a specific processor
variant. It's compiled by an OS vendor and then installed on whatever.
If you are in the situation where you are compiling for a space-constrained
embedded processor, then hopefully your engineers also have enough smarts to
fix the code. I'd also point out that a lot of dev setups for embedded targets
don't actually compile on the target machine either, so auto-detection at
build time isn't that sensible anyway.

The problem here is you can't have both, and having the capability switch at
runtime depending on hardware quirks is the better option for the majority
of users. You certainly don't want to mess with the runtime
OPENSSL_armcap_P, as that likely breaks 'the rest of the world' (tm).

Peter



From:   Brian Smith 
To: openssl-dev@openssl.org
Date:   08/06/2016 11:49
Subject:Re: [openssl-dev] Making assembly language optimizations
working on  Cortex-M3
Sent by:"openssl-dev" 



Andy Polyakov  wrote:
>> > Cortex-M platforms are so limited that every bit of performance
and
>> > space savings matters. So, I think it is definitely worthwhile to
>> > support the non-NEON ARMv7-M configuration. One easy way to do
this
>> > would be to avoid building NEON code when __TARGET_PROFILE_M is
defined.
>>
>> I don't see no __TARGET_PROFILE_M defined by gcc
>>
>>
>> I see. I didn't realize that GCC didn't emulate this ARM compiler
>> feature. Never mind.
>
> But gcc defines __ARM_ARCH_7M__, which can be used to e.g.

Thanks. That's useful to know.

>> I can try to make a patch to bring BoringSSL's OPENSSL_STATIC_ARMCAP
>> mechanism to OpenSSL, if you think that is an OK approach.
>
> I don't understand. Original question was about conditional *omission*
> of NEON code (which incidentally means even omission of run-time
> switches), while BoringSSL's OPENSSL_STATIC_ARMCAP is about *keeping*
> NEON as well as run-time switch *code*, just setting OPENSSL_armcap_P to
> a chosen value at compile time... I mean it looks like we somehow
> started to talk about different things... When I wrote "care to make
> suggestion" I was thinking about going through all #if __ARM_ARCH__>=7
> and complementing some of them with !defined(something_M)...

> Compiler might remove dead code it would generate itself, but it still
> won't omit anything from assembly module. Linker takes them in as
> monolithic blocks.

If the target is Cortex-M4, there is no NEON. So then, with the
OPENSSL_STATIC_ARMCAP mechanism, we won't define OPENSSL_STATIC_ARMCAP_NEON,
and so that bit of the armcap variable won't be set.

I think what you're trying to say is that, if we just stop there, then
all the NEON code will still get linked in. That's true. But, what I
mean is that we should then also change all the tests of the NEON bit
of OPENSSL_armcap_P (and, more generally, all tests of
OPENSSL_armcap_P) to use code that the C compiler can do constant
propagation and dead code elimination on. We can do this, for example,
by defining `OPENSSL_armcap_P` to be a macro that can be seen to have
a constant compile-time value, when using the OPENSSL_STATIC_ARMCAP
mechanism. And/or, we can surround the relevant code with `#if
!defined(OPENSSL_STATIC_ARMCAP) ||
defined(OPENSSL_STATIC_ARMCAP_NEON)`, etc. This latter technique would
(IIUC) work even in the assembly language files.

In this way, if we know at build time that NEON will be available, we
can avoid compiling/linking the non-NEON code. Conversely, if we know
that NEON will NOT be available, we can avoid compiling/linking the
NEON code.

I hope this clarifies my suggestion.

Cheers,
Brian
--
https://briansmith.org/
--
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev



-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #4401] [PATCH] plug potential memory leak(s) in OpenSSL 1.1 pre 4 in 'ec_lib.c'

2016-03-08 Thread Peter Waltenberg via RT
 No, you got that right, NULL being 'safe' to free varies with OS.

But - you aren't calling free() directly, THIS makes it safe. That's one of the
other benefits of having objects allocated and released by internal functions
rather than doing it directly.

void BN_MONT_CTX_free(BN_MONT_CTX *mont)
{
    if (mont == NULL)
        return;

    BN_clear_free(&(mont->RR));
    BN_clear_free(&(mont->N));
    BN_clear_free(&(mont->Ni));
    if (mont->flags & BN_FLG_MALLOCED)
        OPENSSL_free(mont);
}


-----"openssl-dev" wrote: -----
From: Bill Parker via RT
Sent by: "openssl-dev"
Date: 03/09/2016 07:53AM
Cc: openssl-dev@openssl.org
Subject: Re: [openssl-dev] [openssl.org #4401] [PATCH] plug potential memory
leak(s) in OpenSSL 1.1 pre 4 in 'ec_lib.c'

I must be brain dead today, since free'ing something that is already NULL
is not a problem (geez)...

Heh

On Tue, Mar 8, 2016 at 12:01 PM, Salz, Rich via RT  wrote:

>
> > + if (dest->mont_data != NULL)
> > + BN_MONT_CTX_free(dest->mont_data);
>
> Free routines don't need to check for non-NULL.
>
>
> --
> Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4401
> Please log in as guest with password guest if prompted
>
>

--
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4401
Please log in as guest with password guest if prompted

--
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4401
Please log in as guest with password guest if prompted

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev




Re: [openssl-dev] Question about dynamically loadable engines on Cygwin / Mingw

2016-02-15 Thread Peter Waltenberg
Possibly the best fix is to simply not specify the library prefix or suffix,
i.e. -engine capi, and let OS/build specific code sort out the rest. You
still have .so and .sl on different variants of HP/UX, for example. Next
best, specify the complete library name in all cases - and I'll concede,
best and second best are pretty equal here.

Pete

-----"openssl-dev" wrote: -----
To: openssl-dev@openssl.org
From: Richard Levitte
Sent by: "openssl-dev"
Date: 02/15/2016 10:13AM
Subject: [openssl-dev] Question about dynamically loadable engines on Cygwin / Mingw

Hi,

I've got a question to the Cygwin / Mingw community, regarding the
naming of dynamic engines.

From looking at Makefile.shared et al, the engines get the same kind
of prefixes as a standard shared library (but without the accompanying
import library, of course).  So the capi engine gets named like this:

  Cygwin: cygcapi.dll
  Mingw:  capieay32.dll

Does that mean that using engines with the openssl commands looks
strangely different depending on the platform you happen to be on?
Like, would a run of openssl s_server with the capi engine look
something like this?

  Cygwin: openssl s_server -engine cygcapi.dll ...
  Mingw:  openssl s_server -engine capieay32.dll ...
  Unix:   openssl s_server -engine capi ...

(note that on Unix, it's assumed that the engine *may* be prefixed
with "lib", which might be a reason for discussion as well, as it's
not really meant to be used as a shared library)

Apart from the fact that the current ENGINE framework has no support
for the ".dll" suffix internally (that's an easy fix), is there any
reason to name the dynamic engines anything but, in this example,
capi.dll or libcapi.dll?

This is assuming, btw, that no one mixes the different Windows POSIX
layers on top of each other.  If such mixes are commonplace, it's
worth considering, of course...

Cheers,
Richard
--
Richard Levitte         levi...@openssl.org
OpenSSL Project         http://www.openssl.org/~levitte/

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev




Re: [openssl-dev] [openssl.org #4229] Bug - OpenSSL 1.0.2e on AIX has sha256p8-ppc.s assembler build issue...

2016-02-12 Thread Peter Waltenberg via RT
 You can also add some more macros to the perlasm which already translates a
LOT of opcodes into something older assemblers won't choke on.

Pete

-----"openssl-dev" wrote: -----
To: robert.go...@igt.com
From: Jeremy Farrell via RT
Sent by: "openssl-dev"
Date: 02/13/2016 03:46AM
Cc: openssl-dev@openssl.org
Subject: Re: [openssl-dev] [openssl.org #4229] Bug - OpenSSL 1.0.2e on AIX has
sha256p8-ppc.s assembler build issue...

On 11/02/2016 22:36, Andy Polyakov via RT wrote:
>> I am attempting to build OpenSSL 1.0.2e on AIX and I'm seeing an issue with
the "stvx" assembler instruction in the sha256p8-ppc.s module. I have built
prior version OpenSSL packages on AIX without issue until now (prior was
1.0.1c), and I haven't varied the steps I typically use. Specifics are:
>>
>> AIX: 5200-08
> I'm not quite familiar with AIX lingo. What does 5200-08 mean? Is it 5.2?

Yes, AIX 5.2 TL 8. I believe that IBM stopped providing fixes for that
particular technology level in February 2007, stopped standard support
for AIX 5.2 final technology level (TL 10) in April 2009, stopped
admitting to ever having heard of 5.2 sometime in 2012. no-asm seems to
be the appropriate way to deal with this.

--
J. J. Farrell


--
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4229
Please log in as guest with password guest if prompted

--
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4229
Please log in as guest with password guest if prompted

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #4301] [BUG] OpenSSL 1.1.0-pre2 fails to parse x509 certificate in DER format

2016-02-11 Thread Peter Waltenberg
The problem with making those little "oh, we'll allow it for
interoperability" choices is that they may end up as security
vulnerabilities elsewhere, particularly when multiple of them are
made.

So - it is quite reasonable to reject a change like that because it's near
impossible to check all the little corner cases that it might expose.

Peter





From:   "Blumenthal, Uri - 0553 - MITLL via RT" <r...@openssl.org>
To: bcri...@gmail.com
Cc: openssl-dev@openssl.org
Date:   12/02/2016 10:13
Subject:Re: [openssl-dev] [openssl.org #4301] [BUG] OpenSSL 1.1.0-pre2
fails to parse x509 certificate in DER format
Sent by:"openssl-dev" <openssl-dev-boun...@openssl.org>



Again, you are right, but what's the lesser evil - being unable to use the
new OpenSSL because it refuses to deal with the cert that some dim-witted
TPM maker screwed up, or accepting a certificate with a (minor) violation of
DER (but not of BER)? What bad thing, in your opinion, could happen if
OpenSSL allowed parsing an integer with a leading zero byte (when it
shouldn't be there by DER)?

Even in crypto (and that's the area I've been working in for quite a while)
there are some shades of gray, not only black and white.

P.S. My platform of choice is Mac, and Apple does not put TPM there - so I
won't gain from this decision, whichever way it turns. ;-)

Sent from my BlackBerry 10 smartphone on the
Verizon Wireless 4G LTE network.
  Original Message
From: Kurt Roeckx
Sent: Thursday, February 11, 2016 18:03
To: openssl-dev@openssl.org
Reply To: openssl-dev@openssl.org
Cc: Stephen Henson via RT; bcri...@gmail.com
Subject: Re: [openssl-dev] [openssl.org #4301] [BUG] OpenSSL 1.1.0-pre2
fails to parse x509 certificate in DER format

On Thu, Feb 11, 2016 at 10:53:25PM +, Blumenthal, Uri - 0553 - MITLL
wrote:
> Might I suggest that the right thing in this case would be to keep
generation strict, but relax the rules on parsing? "Be conservative in what
you send, and liberal with what you receive"?

This might be good advice for some things, but usually not when it
comes to crypto.


Kurt

--
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


--
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4301
Please log in as guest with password guest if prompted

[attachment "smime.p7s" deleted by Peter Waltenberg/Australia/IBM] --
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev




Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-11-30 Thread Peter Waltenberg
I'd suggest checking where the bottlenecks are before making major
structural changes. I'll admit we have made a few changes to the basic
OpenSSL sources, but I don't see unacceptable amounts of locking even on
large machines (100's of processing units) with thousands of threads.
Blinding and the RNGs were the hot spots and relatively easy to address.
Also, you use TRNGs for things like blinding where a PRNG will do; fixing
that also helps performance.

Peter

-----"openssl-dev" wrote: -----
To: paul.d...@oracle.com, openssl-dev@openssl.org
From: Nico Williams
Sent by: "openssl-dev"
Date: 12/01/2015 10:16AM
Subject: Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

On Tue, Dec 01, 2015 at 09:21:34AM +1000, Paul Dale wrote:
> However, the obstacle preventing 100% CPU utilisation for both stacks
> is lock contention.  The NSS folks apparently spent a lot of effort
> addressing this and they have a far more scalable locking model than
> OpenSSL: one lock per context for all the different kinds of context
> versus a small number of global locks.

I prefer APIs which state that they are "thread-safe provided the
application accesses each XYZ context from only one thread at a time".
Leave it to the application to do locking, as much as possible.  Many
threaded applications won't need locking here because they may naturally
have only one thread using a given context.

Also, for something like a TLS context, ideally it should be naturally
possible to have two threads active, as long as one thread only reads
and the other thread only writes.  There can be some dragons here with
respect to fatal events and deletion of a context, but the simplest
thing to do is to use atomics for manipulating state like "had a fatal
alert", and use reference counts to defer deletion (then, if the
application developer wants it this way, each of the reader and writer
threads can have a reference and the last one to stop using the context
deletes it).

> There is definitely scope for improvement here.  My atomic operation
> suggestion is one approach which was quick and easy to validate,
> better might be more locks since it doesn't introduce a new paradigm
> and is more widely supported (C11 notwithstanding).

A platform compatibility atomics library would be simple enough (plenty
exist, I believe).  For platforms where no suitable implementation
exists you can use a single global lock, and if there's not even that,
then you can use non-atomic implementations and pretend it's all OK, or
fail to build (users of such platforms will quickly provide real
implementations).

(Most compilers have pre-C11 atomics intrinsics and many OSes have
atomics libraries.)

Nico
--

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-11-23 Thread Peter Waltenberg

"
Please do.  It will make this much safer.  Also, you might want to run
some experiments to find the best stack size on each platform.  The
smaller the stack you can get away with, the better.
"

It does, but it also requires code changes in a few places; probable_prime()
in bn_prime.c is far and away the worst offender. We instrumented our
test code so we could find out what the stack usage was; for libcrypto you
can get it under 4k for 32-bit and under 8k for 64-bit code on x86 Linux.

FYI, nothing elegant there: just have your code allocate and fill a large
stack array, then add check points further down to see how far you've eaten
into it.

"
> > A guard page
> > would allow one to safely tune down fiber stack size to the whatever
> > OpenSSL actually needs for a given use.
"

Unless someone allocates a stack array larger than the size of the guard
page and scribbles over another threads stack. This is another reason to
never use large arrays on the stack.

"
Is there something wrong with that that I should know?  I suppose the
test could use threads to make real sure that it's getting thread-
locals, in case the compiler is simply ignoring __thread.  Are there
compilers that ignore __thread??
"

Only that it's a compile-time choice and OpenSSL is currently 'thread
neutral' at runtime, not at compile time.
Compile time is easy; making this work at runtime is hard and occasionally
really valuable - i.e. way back in the dim distant past when Linux had
multiple thread packages available.

Peter






From:   Nico Williams 
To: openssl-dev@openssl.org
Date:   24/11/2015 06:49
Subject:Re: [openssl-dev] [openssl-team] Discussion: design issue:
async and -lpthread
Sent by:"openssl-dev" 



On Mon, Nov 23, 2015 at 08:34:29PM +, Matt Caswell wrote:
> On 23/11/15 17:49, Nico Williams wrote:
> > On a slightly related note, I asked and Viktor tells me that fiber
> > stacks are allocated with malloc().  I would prefer that they were
> > allocated with mmap(), because then you get a guard page.  A guard page
> > would allow one to safely tune down fiber stack size to the whatever
> > OpenSSL actually needs for a given use.
>
> Interesting. I'll take a look at that.

Please do.  It will make this much safer.  Also, you might want to run
some experiments to find the best stack size on each platform.  The
smaller the stack you can get away with, the better.

> > Still, if -lpthread avoidance were still desired, you'd have to find an
> > alternative to pthread_key_create(), pthread_getspecific(), and
friends.
>
> Just a point to note about this. The async code that introduced this has
> 3 different implementations:
>
> - posix
> - windows
> - null
>
> The detection code will check if you have a suitable posix or windows
> implementation and use that. Otherwise the fallback position is to use
> the null implementation. With "null" everything will compile and run but
> you won't be able to use any of the new async functionality.
>
> Only the posix implementation uses the pthread* functions (and only for
> thread local storage). Part of the requirement of the posix detection
> code is that you have "Configured" with "threads" enabled. This is the
> default. However it is possible to explicitly configure with
> "no-threads". This suppresses stuff like the "-D_REENTRANT" flag. It
> now will also force the use of the null implementation for async and
> hence will not use any of the pthread functions.

Ah, I see.  I think that's fine.  Maybe Viktor misunderstood this?

> One other option we could pursue is to use the "__thread" syntax for
> thread local variables and avoid the need for libpthread altogether. An
> earlier version of the code did this. I have not found a way to reliably
> detect at compile time the capability to do this and my understanding is
> that this is a lot less portable.

I use this in an autoconf project (I know, OpenSSL doesn't use autoconf):

  dnl Thread local storage
  have___thread=no
  AC_MSG_CHECKING(for thread-local storage)
  AC_LINK_IFELSE([AC_LANG_SOURCE([
  static __thread int x ;
  int main () { x = 123; return x; }
  ])], have___thread=yes)
  if test $have___thread = yes; then
 AC_DEFINE([HAVE___THREAD],1,[Define to 1 if the system supports
__thread])
  fi
  AC_MSG_RESULT($have___thread)

Is there something wrong with that that I should know?  I suppose the
test could use threads to make real sure that it's getting thread-
locals, in case the compiler is simply ignoring __thread.  Are there
compilers that ignore __thread??

Nico
--
___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev



___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-11-23 Thread Peter Waltenberg

I wasn't saying there was anything wrong with mmap(), just that guard pages
only work if you can guarantee your overrun hits the guard page (and
doesn't just step over it). Large stack allocations increase the odds of
'stepping over' the guard pages. It's still better than not having guard
pages, but they aren't a hard guarantee that you won't have mysterious bugs
still.

You obviously realize that, but bn_prime() is the classic example of
allocating very large chunks of memory on the stack.


As for fibres, I doubt it'll work in general; the issue there is simply
the range of OSes OpenSSL supports. If you wire it in you still have to run
with man+dog+world in the process, and that's a hard ask. One of the good
points about OpenSSL up until now is that it tends not to break those big
messy apps where a whole lot of independently developed code ends up in the
same process.


Peter




From:   Nico Williams <n...@cryptonector.com>
To: openssl-dev@openssl.org
Date:   24/11/2015 10:42
Subject:Re: [openssl-dev] [openssl-team] Discussion: design issue:
async and -lpthread
Sent by:"openssl-dev" <openssl-dev-boun...@openssl.org>



On Mon, Nov 23, 2015 at 09:53:15PM +1000, Peter Waltenberg wrote:
>
> "
> Please do.  It will make this much safer.  Also, you might want to run
> some experiments to find the best stack size on each platform.  The
> smaller the stack you can get away with, the better.
> "
>
> It does, but it also requires code changes in a few places.
> probable_prime() in bn_prime.c being far and away the worst offender. We
> instrumented our test code so we could find out what the stack usage was;
> for libcrypto you can get it under 4k for 32-bit and under 8k for 64-bit
> code on x86 Linux.

Are you saying that using mmap() would be onerous?  Something else?

> FYI, nothing elegant there, just have your code allocate and fill a large
> stack array, then add check points further down to see how far you've
> eaten into it.

Sure.

> "
> > > A guard page
> > > would allow one to safely tune down fiber stack size to the whatever
> > > OpenSSL actually needs for a given use.
> "
>
> Unless someone allocates a stack array larger than the size of the guard
> page and scribbles over another thread's stack. This is another reason to
> never use large arrays on the stack.

alloca() and VLAs aren't safe for allocating more bytes than fit in a
guard page.  One should not use alloca()/VLAs for anything larger than
that.

This is no reason not to have a guard page!

This is a reason to have coding standards that address alloca()/VLAs.

> "
> Is there something wrong with that that I should know?  I suppose the
> test could use threads to make real sure that it's getting thread-
> locals, in case the compiler is simply ignoring __thread.  Are there
> compilers that ignore __thread??
> "
>
> Only that it's a compile time choice and OpenSSL is currently 'thread
> neutral' at runtime, not at compile time?

OpenSSL is "thread-neutral" at run-time as to locks and thread IDs
because of the lock/threadid callbacks.  But here we're talking about a
new feature (fibers) that uses thread-locals, and here using pthread
thread locals (pthread_getspecific()) clearly means no longer being
"thread-neutral" -- if I understand your definition of that term
anyways.

It's perfectly fine to use __thread in compiled code regardless of what
threading library is used, provided -of course- that __thread was
supported to begin with and that the compiler isn't lying.

> Compile time is easy, making this work at runtime is hard and
> occasionally is really valuable - i.e. way back in the dim distant past
> when Linux had multiple thread packages available.

If the compiler accepts __thread but allows it to break at run-time
depending on the available threading libraries, then the compiler is
broken and should not have allowed __thread to begin with.  I can't find
anything describing such brokenness.  If you assert such brokenness
exists then please post links or instructions for how to reproduce it.

BTW, https://en.wikipedia.org/wiki/Thread-local_storage#C_and_C.2B.2B
says that even Visual Studio supports thread-locals.  Though there's a
caveat that requires some care at configuration time:

  On Windows versions before Vista and Server 2008, __declspec(thread)
  works in DLLs only when those DLLs are bound to the executable, and
  will not work for those loaded with LoadLibrary() (a protection fault
  or data corruption may occur).[9]

There must, of course, be compilers that don't support thread locals
(pcc?).  Wouldn't it be fair to say that OpenSSL simply doesn't support
fibers on those compilers?  I think so.

Nico
--
___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev





Re: [openssl-dev] [openssl-users] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback

2015-11-20 Thread Peter Waltenberg
Quite reasonable, except I'm not sure you have majority and minority the right way around. My guess would be that the majority of OpenSSL users are libcrypto consumers rather than SSL/TLS consumers. A point several of us have been trying to get through for some time.

Peter

-"openssl-dev" wrote: -
To: "openssl-dev@openssl.org"
From: "Short, Todd"
Sent by: "openssl-dev"
Date: 11/21/2015 08:28AM
Cc: "openssl-us...@openssl.org"
Subject: Re: [openssl-dev] [openssl-users] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback



While I am all for simplicity, I also think that removing functionality is a “bad idea”.
To reduce the support burden, deprecate the ciphers:
1. Under support, indicate that these ciphers will no longer receive fixes.
2. Remove any assembly implementations.
3. Disable them by default.

I suggest following the 80/20 rule (sometimes the 95/5 rule):
Those “who care” (the minority) about the ciphers can re-enable them and rebuild the library.
Those “who don’t care” (the majority) about the ciphers should get the functionality that most people care about, basically SSL/TLS connectivity.

--
Todd Short // tsh...@akamai.com // "One if by land, two if by sea, three if by the Internet."

On Nov 18, 2015, at 1:52 PM, Blumenthal, Uri - 0553 - MITLL wrote:

On 11/18/15, 12:12 , "openssl-dev on behalf of Benjamin Kaduk" wrote:

On 11/18/2015 07:05 AM, Hubert Kario wrote:

So, a full CAdES-A, XAdES-A or PAdES-A implementation _needs_ to
support both relatively modern TLS with user certificates, preferably the
newest cryptosystems and hashes, as well as the oldest ones that were
standardised and used.

That means that old algorithms MUST remain in OpenSSL as supported
functionality. It may require linking to a specific library to make the
EVP* with old ciphers, MACs, etc. work, but they MUST NOT be removed
from it completely, definitely not before at least 50 years _after_
they became obsolete and broken.

There seems to be a logical leap between these two paragraphs.  Why is
it necessary that OpenSSL be the only cryptographic library used by
CAdES-A/etc. implementations?

Because it used to be the only real game in town, and *people learned to
rely upon it*.

Is it in fact even necessary that only a single version of a single
cryptographic library be used for such software? No, of course not.

But after letting people depend on this “single cryptographic library” for
many years, telling them “too bad” isn’t very nice.

While OpenSSL may try to be a general-purpose crypto library, when a
software has stringent or unusual crypto requirements, it seems
reasonable that such a software may need to involve unusual
implementations.

The requirements did not change. What changed was the maintainers
expressing their desire to stop supporting some of them.

I do not believe that OpenSSL has promised anywhere that it will support
this sort of use case.

Implicitly, by providing that kind of service for so long. And explicitly,
as pointed out by Hubert. From the main web page of the project:

The OpenSSL Project is a collaborative effort to develop a robust,
commercial-grade, *full-featured*, and Open Source toolkit
implementing the Transport Layer Security (TLS) and Secure Sockets
Layer (SSL) protocols as well as a full-strength *general purpose*
*cryptography library*.

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Fwd: Re: [openssl-users] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback

2015-11-17 Thread Peter Waltenberg

> This is an interesting idea. For completeness, it has failed in other
> contexts

Well yes, but it's a different context - policy level rather than capability.
That's why I'm not in favour of removing algorithms: even changing policy
higher up the stack can cause problems, but removing basic capabilities
tends to have even more unwanted side effects. I obviously have a personal
interest in this; in my case it's because I work for a company that does
provide insane support lifetimes for products.

For libcrypto itself the attack surface is near zero: it doesn't open
sockets, connect to networks, or accept input. It's simply a toolbox and
there's always something else between libcrypto and an attack. If SSL
doesn't want to use MD5, well, don't use MD5 - but there are other users of
the toolbox. As an analogy, throwing out all those 3/8th spanners just
because you've officially gone metric doesn't always work that well in
practice either.

Peter








Phone: 61-7-5552-4016
E-mail: pwal...@au1.ibm.com
L11 & L7 Seabank, Southport, QLD 4215, Australia








From:   Jeffrey Walton <noloa...@gmail.com>
To: OpenSSL Developer ML <openssl-dev@openssl.org>
Date:   17/11/2015 20:23
Subject:Re: [openssl-dev] Fwd: Re: [openssl-users] Removing obsolete
crypto from OpenSSL 1.1 - seeking feedback
Sent by:"openssl-dev" <openssl-dev-boun...@openssl.org>





On Mon, Nov 16, 2015 at 9:06 PM, Peter Waltenberg <pwal...@au1.ibm.com>
wrote:
  Why not offer another set of get_XYZ_byname() which restricts the caller
  to socially acceptable algorithms. Or allows the opposite, it really
  doesn't matter, but restricted being the newer API breaks less code by
  default.


This is an interesting idea. For completeness, it has failed in other
contexts. For example, the IETF's TLS Working Group refuses to provide such
an abstraction. See, for example,
https://www.ietf.org/mail-archive/web/tls/current/msg17611.html.
___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev








Re: [openssl-dev] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback

2015-11-16 Thread Peter Waltenberg
The reason for keeping the old crypto algorithms around is the obvious
one, that's been stated over and over: OpenSSL's SSL isn't the only
consumer of the algorithms. Remove the low level algorithms and you risk
breaking more than OpenSSL: SSH, IKE, IPSec, Kerberos - and I'm sure there
are more - plus the scripting languages like Perl that use OpenSSL to provide
algorithm support.

There are a lot of ecosystems built on top of OpenSSL's crypto, it's not
just SSL, and for someone like a distro maintainer it's between a rock and
a hard place: stick with the old code and patch the security
vulnerabilities, or break stuff. Which is why the algorithms still being
available in the old code isn't a good enough answer to the problems this
would create.

And in this case 'breaking stuff' is unnecessary. Do what you like with TLS
in terms of pruning algorithms in use, but removing the algorithms is a lot
like burning books in a library for being irrelevant. They may be
irrelevant to you, but they aren't necessarily irrelevant to everyone.

Peter





From:   Richard Moore 
To: openssl-dev@openssl.org
Cc: openssl-us...@openssl.org
Date:   17/11/2015 06:29
Subject:Re: [openssl-dev] Removing obsolete crypto from OpenSSL 1.1 -
seeking feedback
Sent by:"openssl-dev" 




On 16 November 2015 at 19:05, Hubert Kario  wrote:
  Example: CAdES V1.2.2 was published in late 2000, the first serious
  attacks on MD2 were not published until 2004. I think it is not
  unreasonable for CAdES-A documents to exist today which were originally
  signed with MD2 while it was still considered secure and that are still
  relevant today, just 15 years later.


This doesn't explain why the code needs to exist in future versions of
openssl. The previous ones aren't going to vanish and can be compiled and
used to rescue data in theoretical edge cases like this. You're making it
sound like this is making the data totally inaccessible, which is not the
case.

Cheers

Rich.

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev




Re: [openssl-dev] Fwd: Re: [openssl-users] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback

2015-11-16 Thread Peter Waltenberg


Why not offer another set of get_XYZ_byname() which restricts the caller to
socially acceptable algorithms. Or allows the opposite, it really doesn't
matter, but restricted being the newer API breaks less code by default.

Give it the same call syntax and it's simply an #ifdef in the OpenSSL
headers for anyone who wants to spend hours working out why their code
doesn't work any more.

i.e. EVP_get_digestbyname() becomes EVP_get_digestbyname_r(), and if anyone
actually wants only the restricted set from say a Linux distro. they can
#define  EVP_get_digestbyname(a) EVP_get_digestbyname_r(a)

At the crypto library level this is just maths and it really doesn't make
any sense to try and enforce policy at this point. I can understand the
maintenance issues, but C code really isn't a problem and dropping
algorithms from the sources here simply makes more work for other people
elsewhere.

Peter




From:   Viktor Dukhovni 
To: openssl-dev@openssl.org
Date:   17/11/2015 10:02
Subject:Re: [openssl-dev] Fwd: Re: [openssl-users] Removing obsolete
crypto from OpenSSL 1.1 - seeking feedback
Sent by:"openssl-dev" 



On Mon, Nov 16, 2015 at 11:23:52PM +, Matt Caswell wrote:

> Disabling algorithms isn't the right answer IMO. I do like the idea of a
> "liblegacycrypto". That way people that only have need of current
> up-to-date crypto can stick with the main library. Others who need the
> older crypto can still get at it. Yes, that means we still have to
> maintain this code - but I don't see it as that big a burden.

What becomes a bit tricky is having an EVP interface that can find
the algorithms in liblegacycrypto.  This I think means two different
builds of the crypto library: one that depends on liblegacycrypto
and provides its algorithms, and another that does not.

Systems might then ship with:

 libcrypto-legacy.so - just the legacy algorithms
 libcrypto-compat.so - libcrypto that supports the above
 libcrypto-secure.so - libcrypto with just the strong algos
 libcrypto.so        - symlink to one of the two above

Some applications might be linked directly to "-secure" or "-compat"
to make sure they get one or the other.  This is a bunch of work.

At this time, with the resources at our disposal, I think it makes
more sense to take a more gradual approach and just drop the assembly
support.


> Being the "swiss army knife" is no bad thing (even where that includes
> old crypto). We just have to find a way to separate the two concerns:
> current crypto (and only current crypto) for most (and probably most
> importantly for libssl users); broader crypto support for those that
> want it (which is why I like the liblegacycrypto idea because it enables
> us to do that).

I like the idea, but don't see a manageable implementation...

> Whether this is the right thing to do in the 1.1.0 timeframe is another
> consideration though. Viktor's arguments are quite convincing.

The timeline is a concern.  We're fairly far into the 1.1.0
development cycle (alphas and betas soon), and this is a major
change.  I think major changes like removing the ciphers or a whole
new optional library should wait for a better opportunity.

--
 Viktor.
___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev





Re: [openssl-dev] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback

2015-11-13 Thread Peter Waltenberg
I also can't see any point expunging old algorithms from the sources; making them not build by default should be enough. They are known to work and there's always the issue of 'legacy' support. With the number and variety of consumers OpenSSL has, that's likely to be a problem for years to come.

The only thing I would suggest is dropping assembler support for anything that's been retired, just to cut the maintenance effort / risk of breakage. If it's legacy only, performance shouldn't be an issue.

Peter

-"openssl-dev" wrote: -
To: openssl-dev@openssl.org
From: Viktor Dukhovni
Sent by: "openssl-dev"
Date: 11/14/2015 11:55AM
Subject: Re: [openssl-dev] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback

On Fri, Nov 13, 2015 at 10:02:02PM +, Salz, Rich wrote:

> > So I'm trying to help move forward, without creating artificial barriers.
> > Let's fix TLS (libssl) first, and we can tackle libcrypto in a later
> > release.
>
> I disagree.
>
> I think the main driver will be OpenSSL 1.1-next, which will have TLS 1.3
> support.

For me, the main driver will be all the internal code quality
improvements in OpenSSL 1.1.0, that I hope will make the code
substantially more resilient, and subject to a lower CVE rate than
its predecessors.  Hardening of the existing TLS 1.2 implementation
will also be a win.

Sexy new features like TLS 1.3 will not be compelling for quite
some time.

I do not want to dissuade downstream distributions from adopting
OpenSSL 1.1.0 because porting is difficult to impossible.

> So the purpose of this release will be to flush out bad code
> and bad crypto, completely refresh and overhaul many things.  And if some
> folks wait because they need to still use old, bad or unsupported, crypto
> algorithms, so be it.  Can't please everyone.  And they've got time to
> fix it before they decide they really really want TLS 1.3 :)

This may not take into account the complexity of the ecosystem; the
folks with "bad code and bad crypto" are not necessarily the
distribution maintainers who ship prebuilt packages, libraries,
and scripting languages.  Which OpenSSL should Perl's Net::SSLeay
link against?  Or some CPAN module that provides libcrypto algorithms?
The distribution maintainers face immense backwards compatibility
challenges, especially with late binding software.  We cannot be
cavalier about their problems.

> So I don't view this as an artificial barrier.  I view it as a preview
> for the real thing people will want, which is the *next* release.

I expect, on the contrary, that a more solid 1.1.0 is more compelling
than a shiny 1.2.0.  Just adjusting to the API changes will be
enough work, and we want that work to start.  At least those show
up at link time.  Delaying the API adjustment by removing functionality
is likely too radical.

Yes, we'd prefer to not maintain the old ciphers and digests forever,
but as soon as we're doing something other than TLS, we're supporting
data at rest, not just data in motion, and data at rest has a rather
long shelf-life.

Sadly, we have to tread very carefully with algorithm removal from
libcrypto, but we have a lot more flexibility in libssl.

--
Viktor.

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Improving OpenSSL default RNG

2015-10-23 Thread Peter Waltenberg
If you are going to make all that effort you may as well go for FIPS compliance as the default. SP800-90 A/B/C do cover the areas of concern; the algorithms are simple and clear, as is the overall flow of processing to start from 'noise' and produce safe and reliable TRNG/PRNGs. More importantly, you already have most of the necessary code in OpenSSL-FIPS.

And you can always swap out AES/SHA in the core for other algorithms to cater for the very paranoid and those who don't trust US algorithms, or just leave the RNG code 'pluggable' as it is now.

Peter

-"openssl-dev" wrote: -
To: openssl-dev@openssl.org
From: Benjamin Kaduk
Sent by: "openssl-dev"
Date: 10/24/2015 08:46AM
Subject: Re: [openssl-dev] Improving OpenSSL default RNG

On 10/23/2015 08:22 AM, Alessandro Ghedini wrote:
> Hello everyone,
>
> (sorry for the wall of text...)
>
> one of the things that both BoringSSL and LibreSSL have in common is the
> replacement of OpenSSL's default RNG RAND_SSLeay() with a simpler and saner
> alternative. Given RAND_SSLeay() complexity I think it'd be worth to at least
> consider possible alternatives for OpenSSL.

I heartily support this; the existing RAND_SSLeay() is a bit frightening
(though I take some solace in the existence of ENGINE_rdrand()).

> BoringSSL started using the system RNG (e.g. /dev/urandom) for every call to
> RAND_bytes(). Additionally, if the RDRAND instruction is available, the output
> of RDRAND is mixed with the output of the system RNG using ChaCha20. This uses
> thread-local storage to keep the global RNG state.

/dev/urandom is simple and safe absent the chroot case.  (Note that
capsicum-using applications will frequently open a fd for /dev/urandom
before entering capability mode and leave it open; the same might be
worth considering.)  Concerns about "running out of entropy" are
unfounded; the kernel uses a CS-PRNG and if we trust its output to seed
our own scheme, we can trust its output indefinitely.

Intel recommends calling RDRAND in a loop since it does not always
return successfully, and IIRC best practice is to mix it in with other
inputs (i.e., not use it directly).

> Incidentally, BoringSSL added a whole new API for thread-local storage which
> OpenSSL could adopt given that e.g. the ASYNC support could benefit from it
> (there are other interesting bits in BoringSSL, like the new threading API that
> could also be adopted by OpenSSL).
>
> The BoringSSL method is very simple but it needs a read from /dev/urandom for
> every call to RAND_bytes() which can be slow (though, BoringSSL's RAND_bytes()
> seems to implement some sort of buffering for /dev/urandom so the cost may be
> lower).

Keeping the default method simple, slow, and reliable could be a
reasonable approach, given that there is always the option of inserting
an alternate implementation if performance is a concern.  ("Simple"
probably means "rely on the kernel for everything, do not use
thread-local storage, etc.")

It might also be worth having a more complicated scheme that does use
thread-local storage (on systems where we know how to implement it) and
runs fortuna or something similar, but that does not necessarily need to
be the default implementation, in my opinion.

> On the other hand, LibreSSL replaced the whole RAND_* API with calls to
> OpenBSD's arc4random(). This is a nice and simple scheme that uses ChaCha20 to
> mix the internal RNG state, which is regularly reseeded from the system RNG.
> The core logic of this (excluding ChaCha20 and platform-specific bits) is
> implemented in less than 200 lines of code and, at least in theory, it's the
> one that provides the best performance/simplicity trade-off (ChaCha20 can be
> pretty fast even for non-asm platform-generic implementations).

A single syscall to get entropy is nice, whether it's a sysctl node,
getentropy(), getrandom(), or some other spelling; a library call like
arc4random() is almost as good.  But I don't think we're in a position
to rip out the RAND_* API layer as LibreSSL did.

> Both of these methods are robust and mostly platform-independent (e.g. none of
> them uses the system time, PID or uninitialized buffers to seed the RNG state)
> and have simple implementations, so I think OpenSSL can benefit a lot from
> adopting one of them. The astute readers may point out that OpenSSL doesn't
> support ChaCha20 yet, but that's hopefully coming soon.
>
> I think there's also room for improvement in the platform-specific RAND_poll()
> implementations, e.g.:
>
> - on Linux getrandom() should be used if available
> - on OpenBSD getentropy() should be used instead of arc4random()
> - the /dev/urandom code IMO can be simplified
> - the non-CryptGenRandom() code on Windows is just crazy. Do we even support
>   Windows versions before XP?
> - is EGD actually used anywhere today?

"I really hope not."

> - what about Netware, OS/2 and VMS, do we have any users on them? IIRC support
>   for other platforms has already been removed, what are the
Re: [openssl-dev] [openssl.org #4045] RSA_generate_key()

2015-09-16 Thread Peter Waltenberg

Depends on the CPU; if you have a slow CPU, RSA key generation will be slow.

It seems to take ~1/10th of a second here with current x86_64 hardware.

Something less capable (ARM7): ~5 seconds.

Your MIPS hardware is slow, but in the ballpark.



Peter



From:   BeomGeun Bae via RT 
To:
Cc: openssl-dev@openssl.org
Date:   16/09/2015 08:24 PM
Subject:[openssl-dev] [openssl.org #4045] RSA_generate_key()
Sent by:"openssl-dev" 



I don't know where I need to ask, but I have a question about
RSA_generate_key().
Is there a minimum CPU performance needed to run RSA_generate_key() for 2048 bits?
When I tested it on our system (4,000 MIPS), it takes more than 10 seconds.
Is this expected?

___
openssl-bugs-mod mailing list
openssl-bugs-...@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-bugs-mod
___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev




Re: [openssl-dev] [openssl.org #3955] [PATCH] Reduce stack usage in PKCS7_verify()

2015-07-23 Thread Peter Waltenberg

bn/bn_prime.c

static int probable_prime(BIGNUM *rnd, int bits)
{
    int i;
    prime_t mods[NUMPRIMES];   /* <== large on-stack array */
    BN_ULONG delta, maxdelta;


This one is also excessive.

The problem is that even on OSes with dynamic thread stacks, if you do cause
a stack overrun, the entire process gets frozen, a new stack for that
thread is allocated, the stack is copied, and the process is restarted.
Sounds O.K., but if you have 1000 threads and they all sequentially hit
their guard pages, performance suffers rather badly, with the entire process
being stalled for each thread.
OSes without dynamic thread stacks just crash.

And yes, 256 bytes is usually O.K., but it's the overall thread stack use for
the component that really needs to be audited and kept within some fixed
budget.
Any single stack allocation > 4k is generally bad news, as that's large
enough to reach past the (typical) 4k guard pages.

Peter



From:   Salz, Rich via RT r...@openssl.org
To: dw...@infradead.org
Cc: openssl-dev@openssl.org
Date:   24/07/2015 06:35 AM
Subject:Re: [openssl-dev] [openssl.org #3955] [PATCH] Reduce stack
usage in PKCS7_verify()
Sent by:openssl-dev openssl-dev-boun...@openssl.org



How about 256 on the stack?


___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev







Re: [openssl-dev] sizeof (HMAC_CTX) changes with update, breaks binary compatibility

2015-06-11 Thread Peter Waltenberg

Which is exactly why our hacked version of OpenSSL has
allocators/deallocators for all these private structs.

It'd be really nice if OpenSSL would fix this; adding them won't break
backwards compatibility (i.e. API breakage isn't an excuse for not fixing
these) and going forwards problems like this would stop occurring.

Peter

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: Error _armv7_tick openssl

2014-10-10 Thread Peter Waltenberg

ARM is one of those awkward processors: the event counter isn't always directly readable from userspace, and if it's not directly readable you get an illegal instruction trap. A syscall to access the event counters is only present in recent kernels. And even more fun, the event counter data is only readable from the thread that enabled the event counters.

The bad news is that without the event counters there are no good entropy sources - and dying with a segv is likely the best possible outcome.

Peter

This code works (for a certain value of 'work') on recent kernels.
#elif defined(__ARMEL__) || defined(__ARMEB__)   /* ARM */
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>
#include <asm/unistd.h>

static inline long long armtick(void)
{
    int fddev = -1;
    long long result = 0;
    static struct perf_event_attr attr;

    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
    fddev = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    if (read(fddev, &result, sizeof(result)) < (ssize_t)sizeof(result))
        return 0;
    close(fddev);
    return result;
}

Not exactly efficient.

Peter

-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: Andy Polyakov
Sent by: owner-openssl-...@openssl.org
Date: 10/11/2014 01:17AM
Subject: Re: Error _armv7_tick openssl
> If I press continue, then also it gives a segmentation fault. It is not
> working normally, it exits with seg fault: saying illegal instruction.

??? Segmentation fault != illegal instruction. What does "exits with seg
fault saying illegal instruction" mean? Where is the segmentation fault?

> Could you suggest any other solution? The assembly instruction which I
> mentioned in my log was identified as an illegal instruction.

https://www.openssl.org/support/faq.html#PROG17. There are two options
listed.
__
OpenSSL Project                         http://www.openssl.org
Development Mailing List                openssl-dev@openssl.org
Automated List Manager                  majord...@openssl.org




Re: Single-Makefile Build Experiment report

2014-08-14 Thread Peter Waltenberg
Just a comment: the OpenSSL build already depends on Perl, and Perl already
has a Make of its own.
That would at least relieve some of the problems with being dependent on
lowest common denominator features common to the various platform makes.

I'll admit, I have no idea whether the Perl variant of make is an
improvement or not, but at least it would remove one dependency and provide
the same features across platforms.

Peter




From:   Nathan Typanski ntypan...@gmail.com
To: openssl-dev@openssl.org
Date:   15/08/2014 09:40 AM
Subject:Re: Single-Makefile Build Experiment report
Sent by:owner-openssl-...@openssl.org



On 08/14, Tim Hollebeek wrote:
 Have you considered moving to CMake?  It makes lots of the issues
 you discuss in the document just go away.  cmake should work on the
 vast majority of supported operating systems, if not all of them ...

Cmake has disadvantages. I haven't actually used it enough to comment
on what it's like to use, but I can link to a project that has:

https://wiki.openoffice.org/wiki/Build_System_Analysis#CMake

OpenOffice was trying to solve the recursive make problem in their
project, much like OpenSSL is attempting. They ultimately decided
against CMake and gave a good writeup of their reasoning.

There's also a nice debate at LWN.net about GNU Make/CMake and the
related tradeoffs.

http://lwn.net/Articles/466137/

Also, consider the scenario:

- I'm an embedded developer and I want to compile OpenSSL on my
  embedded system (or any platform that isn't my workstation). It
  doesn't have CMake, I can't get CMake on it or don't have the
  resources (or desire) to get CMake installed on the target platform.
- To solve this, I download OpenSSL on my workstation and tell CMake
  to generate a GNU Makefile for me. I copy the source over to the
  platform I want to build OpenSSL on.
- I do `./configure && make && make install` and pray.
- The build fails and dumps an unhelpful error message. I go digging
  into the generated Makefile looking for the build error and realize
  CMake has built absolute paths into everything.
- I go on the CMake wiki and read this:
  
http://www.cmake.org/Wiki/CMake_FAQ#Why_does_CMake_use_full_paths.2C_or_can_I_copy_my_build_tree.3F

  where the answer is basically no, you can't use CMake like that. Go
  install CMake on your embedded device, or figure out how the hell to
  cross-compile a CMake build.
- Researching cross-compiling CMake turns up this:
  http://blog.beuc.net/posts/Cross-compiling_with_CMake/
- I come complain on this mailing list because OpenSSL has rejected
  both GNU and UNIX-like standards in favor of this stupid more
  advanced build system.

Maybe I'm biased, but from what I've seen with projects using CMake,
CMake is only portable in the sense that KDE is portable: yes, if
you're willing to enforce complete buy-in from the
users/packagers/maintainers, people can build OpenSSL easily on more
than one system.

But from my eyes it doesn't look like a low-level, relatively tiny C
library has any good reason to switch to CMake.

Nathan





Re: [openssl.org #782] IBM patches to OpenSSL-0.9.7c

2014-08-14 Thread Peter Waltenberg
That's essentially correct.
Any IBM contributions from me have been dealt with already - just to save
time if you hit more.

Thanks
Peter





From:   Rich Salz via RT r...@openssl.org
To: Peter Waltenberg/Australia/IBM@IBMAU
Cc: openssl-dev@openssl.org
Date:   15/08/2014 12:27 PM
Subject: [openssl.org #782] IBM patches to OpenSSL-0.9.7c



The assembly code seems to have been included already.
The platforms we want are included already.
I think we've got the 'good bits' from this; if not, please
open a new ticket to cover it. Thanks.
--
Rich Salz, OpenSSL dev team; rs...@openssl.org





Re: Re : Re: [openssl.org #3442] [patch] AES XTS: supporting custom iv from openssl enc command

2014-07-12 Thread Peter Waltenberg

Doh. Thanks

Pete
-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: "Dr. Stephen Henson" <st...@openssl.org>
Sent by: owner-openssl-...@openssl.org
Date: 07/12/2014 10:16PM
Subject: Re: Re: Re: [openssl.org #3442] [patch] AES XTS: supporting custom iv from openssl enc command
On Sat, Jul 12, 2014, Peter Waltenberg wrote:
 Or extend EVP_CIPHER_CTX_ctrl() to handle things like changing IV's ? Modes
 like XTS may gain a lot from that, you could use EVP_CIPHER_CTX_copy() to
 avoid repeated key expansion costs, change the IV with EVP_CIPHER_CTX_ctrl()
 and do the next block.

There is already a method to change IVs without expanding the key again,
which should work for XTS (looking at code, not tried it explicitly). You set
all parameters to EVP_EncryptInit_ex et al to NULL apart from the context and
IV. Subsequent calls to EVP_EncryptUpdate etc should then use the new IV.
Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org




Re : Re: [openssl.org #3442] [patch] AES XTS: supporting custom iv from openssl enc command

2014-07-11 Thread Peter Waltenberg

Or extend EVP_CIPHER_CTX_ctrl() to handle things like changing IV's ?
Modes like XTS may gain a lot from that, you could use EVP_CIPHER_CTX_copy() to avoid repeated key expansion costs, change the IV with EVP_CIPHER_CTX_ctrl() and do the next block.
Peter
-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: nicolas@free.fr
Sent by: owner-openssl-...@openssl.org
Date: 07/11/2014 11:46PM
Subject: Re: Re: [openssl.org #3442] [patch] AES XTS: supporting custom iv from openssl enc command
Hi and sorry to interfere, 
I had to review all the ciphers available in OpenSSL, and what seemed weird to me is that algorithms are always given as a combination of a symmetric cipher and a mode of operation.
This approach was convenient since stream ciphers and block ciphers with a mode of operation didn't need to be distinguished, and it worked fine as long as there were not so many chaining modes and none requiring more than an IV.
However, with many modes available now, and with these modes being quite independent of the block cipher used, I wonder if it wouldn't be nice to add something like EVP_MODE and EVP_MODE_CTX structures to manage modes independently. Practically, a mode only needs to know a block cipher's ECB encryption and decryption routines and its block size. Thus treating block ciphers and modes separately could improve modularity (e.g. Blowfish in GCM mode doesn't seem to exist, while both BF and GCM are implemented).
EVP_MODE would provide init, update and final routines, depending on the encrypt/decrypt routines from the block cipher, and EVP_MODE_CTX would store IVs, AADs, buffers and any other mode-specific data.
If I'm not mistaken, this is compliant with the API philosophy.
Obviously, it can be tricky to make it compatible with the current API, but it does not appear to be impossible.
For example, a function could create an EVP_CIPHER from a block cipher and a mode (basically by using the encrypt/decrypt routines from the block cipher).
Or, given all block ciphers and modes, one could also declare all possible combinations, without too big an effort since it would mainly consist in setting some pointers.
With this approach, some EVP_MODE_CTX_set_xxx functions could also be added to manage mode-specific operations (setting the IV length in XTS, or the AAD in GCM) while maintaining backward compatibility.
I hope this is a more constructive proposition. I think this feature could be nice to have from a user point of view, and that it would also make the library easier to maintain.
Best regards,
Nicolas

- Original message -
From: Andy Polyakov ap...@openssl.org
To: openssl-dev@openssl.org
Cc: lull...@yahoo.com
Sent: Fri, 11 Jul 2014 12:56:50 +0200 (CEST)
Subject: Re: [openssl.org #3442] [patch] AES XTS: supporting custom iv from openssl enc command
 Bottom line [still] is that enc is not the place to perform XTS,
 *unless* it's treated specially. In other words the question should not be
 about setting the IV, but about *if* XTS should be supported by enc, and if
 so, how exactly.

 It seems to me this is why jamming modes like XTS into standard EVP as
 if they were like other modes is a less than great idea.

But providing its own interface for every specific mode is also hardly fun.
I mean there ought to be balance. Now we have EVP that implies different
semantics in different modes. In other words an application might have to
perform extra calls depending on mode (and in this particular case the
problem is that enc doesn't do those calls). What would be the alternative?
A distinct interface for every class of modes? Can we define what makes up
such a class? What do we expect from the interface? Also note that either
way, the fact that it needs to be treated in enc in a special way doesn't
change. It's not like I'm rejecting alternatives to EVP, but the discussion
has to be more constructive.




Re: [openssl.org #3424] Misaligned pointers for buffers cast to a size_t*

2014-07-07 Thread Peter Waltenberg

Personally, I have test coverage of the ciphers and hashes we use which deliberately misaligns input and output buffers.
I think we picked up one problem, several years ago.
O.K., someone COULD use a compiler the OpenSSL team doesn't, but frankly test coverage seems the best option: no risk, no performance loss by default.
It's a lot like the "the compiler could remove the code we use to scrub keys" argument - if you suspect that, test for it. Because, seriously, no amount of argument and discussion will resolve it.
Pete
-owner-openssl-...@openssl.org wrote: -

To: David Jacobson dmjacob...@sbcglobal.net
From: Jeffrey Walton
Sent by: owner-openssl-...@openssl.org
Date: 07/07/2014 06:27PM
Cc: OpenSSL Developer ML openssl-dev@openssl.org
Subject: Re: [openssl.org #3424] Misaligned pointers for buffers cast to a size_t*
On Sun, Jul 6, 2014 at 6:06 PM, David Jacobson dmjacob...@sbcglobal.net wrote:
 On 7/6/14 1:44 PM, Andy Polyakov via RT wrote: ...
 As for warning. I personally would argue that we are looking at
 platform-specific i.e. implementation-defined behaviour, not undefined.
 Once again, this applies to all three tickets. One is effectively
 identical to this one, second is about variable shift in CAST. As
 mentioned they all are conscious choices and are proven to work. BTW,
 specification gives following example of undefined behaviour:
 "EXAMPLE: An example of undefined behavior is the behavior on integer
 overflow." ...

 According to C99, overflow/wrapping of unsigned integer values is defined
 to be modulo the range of the type. Here are the quotes:

 For conversions, Section 6.3.1.3 paragraph 2:

 2 Otherwise, if the new type is unsigned, the value is converted by
 repeatedly adding or subtracting one more than the maximum value that can
 be represented in the new type until the value is in the range of the new
 type.49)

 For binary operators, Section 6.2.5 paragraph 9:

 A computation involving unsigned operands can never overflow, because a
 result that cannot be represented by the resulting unsigned integer type is
 reduced modulo the number that is one greater than the largest value that
 can be represented by the resulting type.

 The "EXAMPLE: ..." quote above is incorrect since only overflow of _signed_
 integers results in undefined behavior.

I think the paragraph of interest for the Clang finding is 6.3.2.3,
paragraph 7 [ISO/IEC 9899:2011]:

 A pointer to an object or incomplete type may be converted to a pointer to
 a different object or incomplete type. If the resulting pointer is not
 correctly aligned for the referenced type, the behavior is undefined.

I could be wrong because I am not a C language expert. I generally use
a tool to help me spot transgressions and then fix them (once pointed
out, I usually understand them).

I think Andy is right with respect to processor behavior. But I'm not
certain it's the best strategy given the C language rules.

Jeff




Re: EVP_CIPHER_CTX_copy() segv with XTS

2014-06-30 Thread Peter Waltenberg

Test code suggests it segv's.

XTS128_CONTEXT contains a couple of pointers to expanded AES keys; the expanded keys and the pointers inside the XTS128_CONTEXT are copied, but if the original context has gone away by the time the copy is used, the pointers point to freed data. Game over.

Something like this is probably the fix.
static int aes_xts_ctrl(EVP_CIPHER_CTX *c, int type, int arg, void *ptr)
{
    EVP_AES_XTS_CTX *xctx = c->cipher_data;
    switch (type) {
    case EVP_CTRL_INIT:
        /* key1 and key2 are used as an indicator both key and IV are set */
        xctx->xts.key1 = NULL;
        xctx->xts.key2 = NULL;
        return 1;
    default:
        return -1;
    case EVP_CTRL_COPY:
        {
            EVP_CIPHER_CTX *out = ptr;
            EVP_AES_XTS_CTX *xctx_out = out->cipher_data;
            /* Repoint the copy at its own expanded keys, not the
             * original's. */
            xctx_out->xts.key1 = &xctx_out->ks1;
            xctx_out->xts.key2 = &xctx_out->ks2;
        }
        return 1;
    }
}
...
#define XTS_FLAGS (EVP_CIPH_FLAG_DEFAULT_ASN1 | EVP_CIPH_CUSTOM_IV \
    | EVP_CIPH_ALWAYS_CALL_INIT | EVP_CIPH_CTRL_INIT \
    | EVP_CIPH_CUSTOM_COPY)
Pete
-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: Huzaifa Sidhpurwala <sidhpurwala.huza...@gmail.com>
Sent by: owner-openssl-...@openssl.org
Date: 06/30/2014 07:19PM
Subject: Re: EVP_CIPHER_CTX_copy() segv with XTS

Hi Peter,

Are you facing any issues similar to
http://rt.openssl.org/Ticket/Display.html?user=guest&pass=guest&id=3272
or are you just commenting on the previous GCM fix?
A quick look at the EVP_AES_XTS_CTX suggests that the only pointer in there is (*stream) which points to the function which is responsible for doing encryption/decryption and should be safe to copy to the new CTX
On Mon, Jun 30, 2014 at 9:42 AM, Peter Waltenberg 
pwal...@au1.ibm.com
 wrote:
This appears to be the same 'pattern' error as GCM. For XTS,
ctx->cipher_data contains pointers and the contents aren't being fully
duplicated by the copy.

Peter




EVP_CIPHER_CTX_copy() segv with XTS

2014-06-29 Thread Peter Waltenberg
This appears to be the same 'pattern' error as GCM. For XTS,
ctx->cipher_data contains pointers and the contents aren't being fully
duplicated by the copy.


Peter





RE: [openssl.org #3373] [BUG] [WIN] DLL copyright message not synchronize for quite a while

2014-06-17 Thread Peter Waltenberg

On the other hand, we try to keep the advertised (c) on binaries up to date.
About the only way to do that is to make updating the (c) date part of the build scripts, that's relatively easy on Windows as the resource file is text and gets compiled.
Which reminds me ... :{

Peter
-owner-openssl-...@openssl.org wrote: -

To: "openssl-dev@openssl.org" openssl-dev@openssl.org
From: "Salz, Rich"
Sent by: owner-openssl-...@openssl.org
Date: 06/17/2014 08:46AM
Subject: RE: [openssl.org #3373] [BUG] [WIN] DLL copyright message not synchronize for quite a while
For what it's worth, the policy at IBM (where I used to work, and where they know quite a few things about software intellectual property), is that you only update the copyright on an individual file *when you modify it.*
/r$
--
Principal Security Engineer
Akamai Technologies, Cambridge, MA
IM: rs...@jabber.me; Twitter: RichSalz




Re: Locking inefficiency

2014-06-12 Thread Peter Waltenberg

Please correct me if I'm wrong, but the ERR/OID structures only need locking because they are loaded dynamically?
Preload them all at startup with a global lock held, delete them at shutdown with a global lock held. If all the other access is 'read' the structures don't need a lock between times.
Might be something to consider putting on the "to do" list. I can understand things being done like that when memory was in short supply, but now, probably not so important.
Peter
-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: Florian Weimer
Sent by: owner-openssl-...@openssl.org
Date: 06/12/2014 06:23PM
Subject: Re: Locking inefficiency
On 06/11/2014 02:26 PM, Salz, Rich wrote:
 What kinds of operations are protected by read locks?
 Looking at almost any of the global data structures, such as error tables, OID tables, and so on.
 Often, RW locks aren't a win because maintaining just the read locks (without any writers) introduces contention at the hardware level, and parallelism does not increase all that much as a result. Paul McKenney's dissertation on RCU has some examples.
 We've monitored one of our applications, an SSL-terminating HTTP proxy server, under load. Of all the mutexes (futex, actually) in the system, the "error" lock is the most highly contended one. I'll see about posting some statistics.
Is this CRYPTO_LOCK_ERR? It would be interesting which locking path 
actually triggers the contention. If it's the thread-local storage 
re-implementation, it should be possible to use an ERR implementation 
which uses native thread-local storage, which should be mostly contention-free.
--
Florian Weimer / Red Hat Product Security Team




Re: Locking inefficiency

2014-06-11 Thread Peter Waltenberg

It's a thread from a few months ago. OpenSSL needs to establish a default thread model in many cases.
The one we (IBM) hit is composite apps. Multiple independently developed lumps of code thrown together - someone has to set the locks, but deciding who is a problem. We deal with it easily as we put a wrapper around OpenSSL and do it there, but that's not always an option.
The OS installed OpenSSL's should probably also use the OS default thread model by default for similar reasons.
There were also a number of cases of 'user error' with people just forgetting to set up the locking around that time.
That's the background.

I wouldn't consider efficiency a major problem, since if it's a concern you still have the option of rolling your own fix. There's nothing wrong with improving it, though - I'd suggest making sure that by default it builds properly on all supported platforms, otherwise it's a step backwards.
Peter
-owner-openssl-...@openssl.org wrote: -

To: "Levin, Igor" ile...@akamai.com
From: Geoffrey Thorpe
Sent by: owner-openssl-...@openssl.org
Date: 06/11/2014 05:54PM
Cc: "openssl-dev@openssl.org" openssl-dev@openssl.org, "Salz, Rich" rs...@akamai.com
Subject: Re: Locking inefficiency
On Tue, Jun 10, 2014 at 3:27 PM, Levin, Igor 
ile...@akamai.com
 wrote:


 Geoff,

 we did not seem to be able to figure out what openssl Makefile actually builds crypto/threads/th-lock.c



In our particular case we explicitly included that file when building our server, but for pure OpenSSL, what make includes th-lock.c ?
Apparently, nothing. I believe the intention is to do precisely what you do, i.e. incorporate that code into your app, so that it registers the pthread locking callbacks through the OpenSSL API.

Cheers,
Geoff




RE: patch for make depend, chacha

2014-06-04 Thread Peter Waltenberg

IMHO, that's a good call. If a 'broken' algorithm gets in, it tends to stay there for a very long time.
DES_OLD, SHA0 are examples already in the OpenSSL code base.
Something else that could easily be killed now.
Pete
-owner-openssl-...@openssl.org wrote: -

To: "openssl-dev@openssl.org" openssl-dev@openssl.org
From: "Salz, Rich"
Sent by: owner-openssl-...@openssl.org
Date: 06/04/2014 02:31AM
Subject: RE: patch for make depend, chacha
 Is there somebody working on it to get Chacha/Poly cipher suites production ready?
It's expected that the way the ciphers are used will change as it goes through the IETF TLS group. Therefore, Google has not been encouraging folks to pick up and use these patches other than on an "on your own" basis until after they're done. (They == IETF and GOOG, I suppose :)
/r$
--
Principal Security Engineer
Akamai Technologies, Cambridge, MA
IM: rs...@jabber.me; Twitter: RichSalz




Re: AW: Which platforms will be supported in the future on which platforms will be removed?

2014-06-03 Thread Peter Waltenberg
It's a simple, obvious solution, and I don't think it'll work.

This is NOT the Linux kernel, the Linux kernel is directly funded by
several of the larger companies, they have employees contributing directly
on the kernel, with access to internal hardware resources.

OpenSSL doesn't. Yes, it has people funded by the larger companies USING
OpenSSL with access to hardware resources, but they don't usually
contribute directly to OpenSSL - consumers, not producers. They may
contribute the occasional patch, but that's about it. There's a problem of
scale here between the kernel and OpenSSL.

Donating server scale hardware would be a punishment, not a benefit. You
have to power it, you have to find space for it and it's noisy, and on the
other side of the equation, there's no way those companies are going to let
outsiders into their corporate networks.

I think the best you'd manage is insisting that larger companies wanting
support run some sort of continuous build system internally and feed
results back to the OpenSSL team.

Alternately, the OpenSSL team could give people from those companies
checkin access - but that has more fishhooks than the obvious, export
compliance is the obvious problem, but there are other issues, trust for
example.

Peter





From:   Theodore Ts'o ty...@mit.edu
To: openssl-dev@openssl.org
Date:   04/06/2014 12:18 AM
Subject: Re: AW: Which platforms will be supported in the future on
which platforms will be removed?
Sent by: owner-openssl-...@openssl.org



On Tue, Jun 03, 2014 at 02:22:07PM +1000, Peter Waltenberg wrote:

 One of the uglier problems is that unless you can build/test on all the
 platforms on each change you'll almost certainly break platforms
 unexpectedly - that lack of hardware has been one of the long term
problems
 and it's likely one of the inhibitors to cleanup as well.

There's a very simple solution to that problem, especially since we
now have the support and attention of many hardware companies.  The
rule should be very simple.  If a company doesn't contribute either
(a) exclusive, dedicated hardware, or (b) reliable, continuous access
to hardware, it doesn't get supported by the OpenSSL developers.
Period.

If it's not important for a company to provide access to hardware,
then they can take on the support burdens of providing OpenSSL support
to their platform, or clearly *they* don't care about the security of
their users.  And if they don't care, again, it's not fair to impose a
security tax on the rest of the Internet.

(And especially in the case of embedded products, it's not enough that
OpenSSL provide a new release with a security fix; the company needs
to be willing to create a firmware load and get it to all of its 10
year old customers.  And if they aren't willing to provide hardware to
critical infrastructure provider such as OpenSSL, it seems unlikely
they will be creating a new firmware load anyway, so what's the
point?)

The Linux kernel doesn't tie itself in knots wringing its hands about
how it can't make forward progress because it might break, say, the
m68k or alpha port.  They continue to exist only because a
number of m68k and alpha maintainers are sufficiently motivated to
keep them alive, *and* the impact on the core code is largely nil.  If
a largely dead architecture or CPU started getting in the way of
everyone else, it would either have to get fixed so it wasn't getting
in the way, or it would be removed.  (Which, for example, was the
decision of the x86 maintainers over the fate of 80386 support.)

Cheers,


 - Ted





Re: AW: Which platforms will be supported in the future on which platforms will be removed?

2014-06-02 Thread Peter Waltenberg

The other thing to consider is that perhaps OpenBSD really has the
right approach, which is that portability should be done via support
libraries, and not part of the core code.  That might impact
performance on some legacy piece of cr*p, but presumably, impacted
performance is better than no support at all, or some massive security
hole that resulted from having to support legacy code hiding some
horrible security bug


Disagree there.

OpenSSL sits at the bottom of the stack. It either builds on a platform and
provides the function or the function doesn't exist on that platform
anymore.

The platform support stuff doesn't typically cause security problems, just
go through the list of OpenSSL CVEs. More typical are all-platform bugs, or
someone who had this great extension or code change they wanted and got it
into the code base.

I won't argue that sometimes legacy support makes the code hard to read,
but in itself I don't think it's causing bugs.

I'd also point out that legacy platforms are pretty common in the embedded
space and may even make up the majority of instances of OpenSSL in the
wild.

Peter




From:   Theodore Ts'o ty...@mit.edu
To: openssl-dev@openssl.org
Date:   03/06/2014 02:30 AM
Subject: Re: AW: Which platforms will be supported in the future on
which platforms will be removed?
Sent by: owner-openssl-...@openssl.org



On Mon, Jun 02, 2014 at 03:38:22PM +0200, stefan.n...@t-online.de wrote:
 * How much do you gain by removing support for the platform?

 Is there any relevant amount of code, that is really NT/2000/XP specific
 and unneeded for newer Windows releases? Breaking the support for
 the ancient platform by removing just a dozen lines of code seems like
 an unnecessary annoyance to (admittedly few) users.
 If on the other hand you can throw away hundreds of lines of code that
 nobody understands or even looks at, then go for it ...

What I'd suggest is as people create lists of legacy OS's that might
be removed, along with a deprecation schedule, that there also be an
explanation about why support for an ancient OS is causing pain.  Even
if the decision is to support some legacy system for some period of
time, an explanation of what code could be removed when it can finally
be dropped would be good to have archived, so that people don't have
to rediscover and reargue the case for why VMS deserves live over and
over again.   :-)

The other thing to consider is that perhaps OpenBSD really has the
right approach, which is that portability should be done via support
libraries, and not part of the core code.  That might impact
performance on some legacy piece of cr*p, but presumably, impacted
performance is better than no support at all, or some massive security
hole that resulted from having to support legacy code hiding some
horrible security bug


 - Ted





Re: AW: Which platforms will be supported in the future on which platforms will be removed?

2014-06-02 Thread Peter Waltenberg
 (c) EBCDIC.

z/OS is still alive. I'll concede that one is weird and hard to get hold
of, but it has a lot of users still.

This ISN'T the Linux kernel. It's userspace code and longer lived and wider
spread than Linux and pretty fundamental to security.
Even with the 'dead' platforms crossed out, it has far more variants to
support than Linux, and typically longer support lifetimes.

So, some device like a router has been out in the field ten years but still
works just fine, are you going to block security updates for it ?

You won't get major cleanups without purging platforms like Windows, OS/X,
AIX, HP/UX.

Windows, I'd suggest most of the cruft there could be removed by insisting
that it builds with gnu make/cygwin installed but using the native MS
compiler. That's probably the biggest single cleanup possible and it's very
much a 'live' platform.

Peter



From:   Theodore Ts'o ty...@mit.edu
To: openssl-dev@openssl.org
Date:   03/06/2014 12:01 PM
Subject: Re: AW: Which platforms will be supported in the future on
which platforms will be removed?
Sent by: owner-openssl-...@openssl.org



On Tue, Jun 03, 2014 at 11:22:58AM +1000, Peter Waltenberg wrote:

 I won't argue that sometimes legacy support makes the code hard to read,
 but in itself I don't think it's causing bugs.

The OpenBSD people are right here.  If it's hard to read, then we
don't have many eyeballs on the code.  And while that isn't the only
way to curtail an active development community (Sun Microsytems came
up with many more), it's certainly one of the more effective ones.

It's not like someone wakes up and says, I know!  I'll screw over the
entire internet by introducing a security bug!  It happens by
accident, and the messier your code is, the more likely it is to
happen.  Code needs to be easy to read; or else you get bugs.  There's
a reason why the Linux kernel coding style strongly discourages
in-line #ifdef's in code.

 I'd also point out that legacy platforms are pretty common in the
embedded
 space and may even make up the majority of instances of OpenSSL in the
 wild.

I don't think there are a lot of embedded systems using (a) VMS, (b)
Windows 3.1, or (c) EBCDIC.

Cheers,


  - Ted





Re: AW: Which platforms will be supported in the future on which platforms will be removed?

2014-06-02 Thread Peter Waltenberg
Look at the sources.

The build related mess is mainly Windows support.

#ifdef hell is mainly around external engine support, asm to get
performance, or object sizes/endianess which intrinsically varies platform
to platform. The code was written over a lot of years with a lot of
different styles and that shows, but again, it really hasn't had a single
full-time person coordinating commits and enforcing style.

Some of the mess is going to be hard to fix. OpenSSL isn't one size fits
all, some end users need small footprint, some need the backend engines,
some don't need the backend engines because they play merry hell with
exportability, some algorithms had to be disabled for some users for patent
reasons - it all adds up.

I'm not saying that it wouldn't be nice to have a lot of this cleaned up,
but 'dead platforms' isn't the biggest problem in the source tree. I'm also
sure some of the clutter could be cleaned up, but equally really glad that
it isn't yet me having to do it.

It's a serious comitted effort that's required to fix the real issues not
something easy like dropping a few platforms.

One of the uglier problems is that unless you can build/test on all the
platforms on each change you'll almost certainly break platforms
unexpectedly - that lack of hardware has been one of the long term problems
and it's likely one of the inhibitors to cleanup as well.

Peter



From:   Theodore Ts'o ty...@mit.edu
To: openssl-dev@openssl.org
Date:   03/06/2014 12:55 PM
Subject:Re: AW: Which platforms will be supported in the future on
which platforms will be removed?
Sent by:owner-openssl-...@openssl.org



On Tue, Jun 03, 2014 at 12:20:17PM +1000, Peter Waltenberg wrote:
  (c) EBCDIC.

 z/OS is still alive. I'll concede that one is weird and hard to get hold
 of, but it has a lot of users still.

z/OS supports ASCII, and UTF-8, and has its own conversion routines
built into the system.  So it's not clear OpenSSL needs to have any
EBCDIC built into its core code.  If there are z/OS support functions
that needed to decrypt and encrypt EBCDIC, that's fine, but it
shouldn't be a tax on all the support for all other operating systems
out there.

 This ISN'T the Linux kernel. It's userspace code, longer lived and more
 widely spread than Linux, and pretty fundamental to security.
 Even with the 'dead' platforms crossed out, it has far more variants to
 support than Linux, and typically longer support lifetimes.

I've maintained userspace code before, including krb5 and e2fsprogs,
which works on a very large number of platforms.  Yes, I never had to
support VMS, but who cares about VMS?  (Hint: No one, including HP, by
2020...)

 You won't get major cleanups without purging platforms like Windows,
 OS/X, AIX, HP/UX.

OS/X, AIX, and HP/UX are all POSIX platforms, with support for BSD
sockets.  Supporting them with common code and without tons and tons
of in-line #ifdef's isn't hard.  In fact, e2fsprogs does compile on a
wide variety of legacy Unix platforms, without looking nearly as
horrible as OpenSSL's source code.

 Windows, I'd suggest most of the cruft there could be removed by
 insisting that it builds with gnu make/cygwin installed but using the
 native MS compiler. That's probably the biggest single cleanup possible
 and it's very much a 'live' platform.

I said Windows 3.1.  Win16 and Win32 are quite different, and I'd
suggest Win16 is pretty dead.  (As is MacOS pre-OSX.  Again, quite
different from OSX, and equally, just as dead.)

Cheers,


  - Ted
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org




Re: Prime generation

2014-05-27 Thread Peter Waltenberg

Not quite correct: the prime rands shouldn't come from a DRBG, they
should come from an NRBG (NIST terminology). There's a considerable
difference between the performance of an entropy source and a DRBG;
the important point is that the output of a DRBG is deterministic.
Compare /dev/random vs. /dev/urandom performance (1-2 bytes/second vs.
100k+ bytes/second).

I did change the RNG sources for some of the OpenSSL code in our hacked
version to help with the performance problems using the wrong source
causes; for example, RSA blinding data can safely come from a DRBG
(pseudo_rand_bytes()).

Peter

-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: David Jacobson
Sent by: owner-openssl-...@openssl.org
Date: 05/27/2014 05:16PM
Subject: Re: Prime generation
On 5/26/14 2:01 PM, mancha wrote:
> On Mon, May 26, 2014 at 08:49:03PM +, Viktor Dukhovni wrote:
>> On Mon, May 26, 2014 at 08:20:43PM +, mancha wrote:
>>> For our purposes, the operative question is whether the distribution
>>> bias created can be leveraged in any way to attack factoring (RSA)
>>> or dlog (DH).
>>
>> The maximum gap between primes of size $n$ is conjectured to be around
>> $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$, with an
>> average value of $k$. Thus the most probable primes are at most $k$
>> times more probable than is typical, and we lose at most $log(k)$ bits
>> of entropy. This is not a problem.
>
> One consequence of the k-tuple conjecture (generally believed to be
> true) is that the size of gaps between primes is Poisson distributed.
> You're right when you say the entropy loss between a uniform
> distribution and OpenSSL's biased one is small. In that sense there is
> not much to be gained entropy-wise from using a process that gives
> uniformly distributed primes over what OpenSSL does. However, if a way
> exists to exploit the OpenSSL distribution bias, it can be modified to
> be used against uniformly distributed primes with only minimal
> algorithmic complexity increases. In other words, the gold standard
> here isn't a uniform distribution. --mancha
I doubt the claim (in one of the messages of this thread, but not above)
that generating a fresh random value for each prime check adds
considerable expense. If we include a CTR-DRBG generator (from NIST SP
800-90A) in the implementation, then the cost for 2048-bit RSA or DH is
18 AES block encryption operations (and it could be lowered to very
close to 16). Years ago, AES was said to take 18 cycles per byte on a
Pentium Pro (according to the Wikipedia article). That comes to 2304
cycles. That's got to be peanuts relative to prime testing. On modern
processors the case is even stronger. According to the white paper at

https://software.intel.com/sites/default/files/m/d/4/1/d/8/10TB24_Breakthrough_AES_Performance_with_Intel_AES_New_Instructions.final.secure.pdf

an Intel Core i7 Processor Extreme Edition, i7-980X can achieve 1.3
cycles per byte on AES, which would be 375 cycles. (There are a lot of
assumptions here. For one thing the paper was reporting CBC mode
decryption. If the hardware is specific to CBC mode, so it can't
get parallelism with encryption, then it would be quite a bit slower.)

If it doesn't cost much to generate a new random value for each trial,
why not just do it?

  --David


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Prime generation

2014-05-27 Thread Peter Waltenberg

It may have been unreliable, but our version isn't. We hook the RNG
callbacks and direct them into our own code. That makes some sense of
why OpenSSL hasn't fixed those problems, but it probably should be done
now that you have decent DRBGs.

As for prime generation, I'll try to dig up a reference, but I'd put
relying solely on a NIST DRBG for RSA key generation on the list of
things to avoid. A few months back, assuming everything was working,
you'd have picked the strongest DRBG (Dual-EC) and generated server
keys from that.

As sequence generators those DRBGs are very good, and I don't think
anything but Dual-EC has real problems. But seriously, you have to ask
why you'd want real entropy for generating long-lived keys rather than
a sequence generator, particularly a NIST-specified one.

FIPS 186-4 (page 23) states that an approved (pseudo) random generator
shall be used; the wording implies either. Running a DRBG in prediction
resistance mode (continual reseed) satisfies the criteria (mainly the
'approved' bit) AND calms my paranoia, but doesn't help you with the
entropy rate issues.
Peter
-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: Joseph Birr-Pixton <jpix...@gmail.com>
Sent by: owner-openssl-...@openssl.org
Date: 05/27/2014 07:14PM
Subject: Re: Prime generation
On 27 May 2014 08:45, Peter Waltenberg pwal...@au1.ibm.com wrote:
 ... I did change the RNG sources for some of the OpenSSL code in our hacked
 version to help with the performance problems using the wrong source causes,
 for example RSA blinding data can safely come from a DRBG (pseudo_rand_bytes()).
I assume you mean RAND_pseudo_bytes. In which case you should know
that RAND_pseudo_bytes has a broken interface and cannot ever be used
safely in a way which makes it different from RAND_bytes.

To restate: callers of RAND_pseudo_bytes are either unreliable, or
equivalent to RAND_bytes. Do not use it.

Cheers,

Joe


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Upgrading OpenSSL on RHEL5

2014-04-23 Thread Peter Waltenberg
I stumbled across this a few days ago. Which will at least tell you if the
OS openssl package was patched on RedHat based systems.

rpm -q --changelog openssl

or to save time

rpm -q --changelog openssl | grep CVE


Peter
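A variant of the same idea, shown as a sketch: extract just the CVE identifiers rather than whole changelog lines. The sample changelog text below is invented for illustration; on a real RPM-based system it would come from `rpm -q --changelog openssl`.

```shell
# Pull unique CVE identifiers out of changelog text. The sample input is
# made up; on a real system, replace the printf with:
#   rpm -q --changelog openssl
printf '%s\n' \
  '- fix CVE-2014-0160 - information disclosure in heartbeat' \
  '- rebase to latest upstream release' \
  '- fix CVE-2013-0169 - SSL/TLS CBC timing attack' \
  | grep -o 'CVE-[0-9]*-[0-9]*' | sort -u
# prints CVE-2013-0169 and CVE-2014-0160, one per line
```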



From:   Paul Vander Griend paul.vandergri...@gmail.com
To: openssl-dev@openssl.org
Date:   24/04/2014 06:37 AM
Subject:Re: Upgrading OpenSSL on RHEL5
Sent by:owner-openssl-...@openssl.org



Shruti,

  No worries. The command should be yum update all. Again, this does
not guarantee that there are not packages that depend on an older
version of openssl. For more questions related to this topic you
should try an RHEL or Fedora forum.

Good luck.

-Paul

On Wed, Apr 23, 2014 at 3:18 PM, Shruti Palshikar shr...@buysidefx.com
wrote:
 Hi Paul,

 I misunderstood the community for being a discussion thread for common
 issues faced.
 Thank you for the help. The yum command does not run as expected


 On Wed, Apr 23, 2014 at 4:02 PM, Paul Vander Griend
 paul.vandergri...@gmail.com wrote:

 Shruti,

  This is probably not the right list to ask that question but i'm
 going to help you anyways.

   OpenSSL is a library and you can't simply upgrade it across your
 entire RHEL installation. What you need is for the packages that you
 have installed who have dependencies on OpenSSL to update their
 packages to have a dependency on the newer version. I believe there is
 a yum update or yum upgrade command which will attempt to update any
 packages that are out of date. You are at the mercy of the package
 owners and the RHEL repository folk.

 -Paul


 On Wed, Apr 23, 2014 at 10:50 AM, Shruti Palshikar
shr...@buysidefx.com
 wrote:
  Hello,
 
  I am trying to upgrade my openSSL version on RHEL5. WHen I tried to
  update
  it using yum command (it kept pausing with the messages - No packages
  marked
  for update) I found out that this was not installed from the source
but
  was
  present along with RHEL in the /usr directory. Following are some
  helpful
  commands to give you an idea of the machine and openSSL I am using
 
  1. yum search openSSL
 
  Loaded plugins: downloadonly, replace, rhnplugin, security
   This system is receiving updates from RHN Classic or RHN Satellite.
   drivesrvr
  |951 B 00:00
   rhel-raxmon
  |951 B 00:00
   Excluding Packages from Red Hat Enterprise Linux (v. 5 for 64-bit
  x86_64)
   Finished
 
 
==

  Matched: openssl
 
 
==

  easy-rsa.noarch : Simple shell based CA utility
  globus-gsi-openssl-error.i386 : Globus Toolkit - Globus OpenSSL Error
  Handling
  globus-gsi-openssl-error.x86_64 : Globus Toolkit - Globus OpenSSL
Error
  Handling
  globus-gsi-openssl-error-devel.i386 : Globus Toolkit - Globus OpenSSL
  Error
  HandlingDevelopment Files
  globus-gsi-openssl-error-devel.x86_64 : Globus Toolkit - Globus
OpenSSL
  Error Handling Development Files
  globus-gsi-openssl-error-doc.x86_64 : Globus Toolkit - Globus OpenSSL
  Error
  Handling Documentation Files
  globus-openssl-module.i386 : Globus Toolkit - Globus OpenSSL Module
  Wrapper
  globus-openssl-module.x86_64 : Globus Toolkit - Globus OpenSSL Module
  Wrapper
  globus-openssl-module-devel.i386 : Globus Toolkit - Globus OpenSSL
  Module
  Wrapper Development Files
  globus-openssl-module-devel.x86_64 : Globus Toolkit - Globus OpenSSL
  Module
  Wrapper Development Files
  globus-openssl-module-doc.x86_64 : Globus Toolkit - Globus OpenSSL
  Module
  Wrapper Documentation Files
  globus-openssl-module-progs.x86_64 : Globus Toolkit - Globus OpenSSL
  Module
  Wrapper Programs
  libssh.i386 : A library implementing the SSH2 protocol (0xbadc0de
  version)
   libssh.x86_64 : A library implementing the SSH2 protocol (0xbadc0de
  version)
   lua-sec.x86_64 : Lua binding for OpenSSL library
   m2crypto.x86_64 : Support for using OpenSSL in python scripts
   mingw32-openssl.noarch : MinGW port of the OpenSSL toolkit
   openscada-Transport-SSL.x86_64 : Open SCADA transports
   openssl.i686 : The OpenSSL toolkit
   openssl.x86_64 : The OpenSSL toolkit
   openssl-devel.i386 : Files for development of applications which will
  use
  OpenSSL
   openssl-devel.x86_64 : Files for development of applications which
will
  use
  OpenSSL
   openssl-perl.x86_64 : Perl scripts provided with OpenSSL
   openssl097a.i386 : The OpenSSL toolkit
   openssl097a.x86_64 : The OpenSSL toolkit
   openvpn.x86_64 : A full-featured SSL VPN solution
   perl-Crypt-OpenSSL-AES.x86_64 : Perl interface to OpenSSL for AES
   perl-Crypt-OpenSSL-Bignum.x86_64 : Perl interface to OpenSSL for
Bignum
   perl-Crypt-OpenSSL-DSA.x86_64 : Perl interface to OpenSSL for DSA
   perl-Crypt-OpenSSL-RSA.x86_64 : Perl interface to OpenSSL for RSA
   perl-Crypt-OpenSSL-Random.x86_64 : Perl interface to OpenSSL for
Random
   

Re: AW: [openssl.org #3312] OpenSSL :: crypto/mem.c without memset() calls?

2014-04-16 Thread Peter Waltenberg
In fact, it doesn't. The memset() function called has to be unknown to
the compiler (i.e. not builtin) and in another module, but even there,
the linker could optimize it out. And yes, there have been linkers
'capable' of optimizing that call out. Personally, I blame OS/2 for
most of these problems.

I dealt with the tinfoilhattery in our usage by explicitly testing that
all sensitive objects freed were in fact cleaned up before release.
Since OpenSSL allows you to hook malloc/free calls, those tests aren't
as difficult to write as it seems.

If a compiler we use does ever misbehave, I'll deal with it if and when
the tests for 'erasure' fail - and be able to be sure I've 'fixed' the
feature.

Peter

-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: Vladimir Zatsepin
Sent by: owner-openssl-...@openssl.org
Date: 04/16/2014 06:06PM
Subject: Re: AW: [openssl.org #3312] OpenSSL :: crypto/mem.c without "memset()" calls?

Hi,

Personally I use this function:

void* secure_memset(void *ptr, unsigned char c, size_t size)
{
    unsigned char *tmp = (unsigned char *) ptr;
    if(!tmp)
        return NULL;
    while(size > 0)
    {
        *tmp++ = c;
        size--;
    }
    return ptr;
}

It is not as fast as memset(), but gives some guarantees that memory
will be filled correctly.

2014-04-16 8:31 GMT+04:00 David Jacobson dmjacob...@sbcglobal.net:
On 4/15/14 10:33 AM, stefan.n...@t-online.de wrote:


  Hi,


I have "checked" the current source code of 'crypto/mem.c' and I'm a
little bit surprised that no memset() calls are made before the free_*()
functions are entered. I think "zeroing" the previously used memory
is a good solution to prevent access to old memory content.

Leaving aside the problem that just zeroing the memory simply
doesn't work (for a start into that discussion see e.g.
http://bytes.com/topic/c/answers/660296-memset-free), there is
OPENSSL_cleanse which does something similar (actually, it
overwrites the memory with "garbage", not just with zeros) in a
way that works. Attempting to be faster at run time, this needs to be
called explicitly, though (and it's called in a lot of places if you look
into the source code).
But it might in fact be a good idea to put that call simply in the
free function and be done with it. With modern processors, the
slowdown is probably hardly noticeable anyway.

Regards,
  Stefan



Here is a means of using memset so that it can't be optimized out.


#include <stdint.h>
#include <string.h>

void *
safe_memset(void *s, int c, size_t n)
{
    if (n > 0) {
        volatile unsigned volatile_zero = 0;
        volatile uint8_t *vs = (volatile uint8_t *)s;

        do {
            memset(s, c, n);
        } while (vs[volatile_zero] != (uint8_t)c);
    }

    return s;
}


Since vs points to a volatile, the load in the while clause actually has to be done. That forces the compiler to actually store c into at least the byte that is tested, in practice byte zero. But because the index is volatile_zero, and since it is volatile it could spontaneously change to anything, the compiler has to store c into all bytes.


The key observation is that while you can't pass a volatile to memset (you get a warning and the volatile gets stripped away), you can use a volatile in a test that could go the wrong way if the memset were elided.


Could you C language lawyers please check this out and make sure I've not made a mistake.

Thank you,

  --David


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: heartbeat RFC 6520 and silently drop behaviour

2014-04-14 Thread Peter Waltenberg
Not a good idea, particularly with DTLS, as it'd be an instant DoS attack.

Peter

-owner-openssl-...@openssl.org wrote: -

To: openssl-dev@openssl.org
From: David Jacobson
Sent by: owner-openssl-...@openssl.org
Date: 04/14/2014 07:55PM
Subject: Re: heartbeat RFC 6520 and silently drop behaviour

On 4/13/14 3:54 AM, Michael Tuexen wrote:
> On 13 Apr 2014, at 01:54, tolga ceylan tolga.cey...@gmail.com wrote:
>> The RFC has a lot of statements about silently dropping packets in
>> case of various anomalies. But the correct action should be to drop
>> the connection. This would uncover faulty implementations and other
>> bugs that may slide due to 'silently drop' behavior. It'll also make
>> malicious activity a bit more difficult and exposed due to the
>> necessity to reestablish connections for any brute force attempts.
>> What is your opinion on this?
>
> There are two MUST discards. One is the payload being reflected
> doesn't match, the other is the payload_length being too large. The
> second one is the critical one for the heartbleed attack. Let us
> consider this case. It is clear that you don't respond. You could
> keep the connection or drop it. When dropping it, you give the
> attacker an immediate indication that you are not vulnerable. So the
> attacker can move on. If you don't drop the connection, the attacker
> has to wait until he decides that the stack is not vulnerable. So it
> takes more resources on his side. However, the crucial point is to
> follow the MUST and not send the heartbeat response...
>
> Best regards
> Michael

First, dropping the connection does not comply with the RFC, which says
that the heartbeat request MUST be silently discarded.

Second, it is debatable as to whether dropping the connection is a good
idea. First, it is contrary to Postel's Law: "an implementation should
be conservative in its sending behavior, and liberal in its receiving
behavior". There may be a coding error in some client and the length is
1 byte too large. Now that client can't communicate with the server.
The client user can't do whatever he wants to do. The server user may
be losing business. Neither of these parties is responsible for the
problem, nor can they do anything about it.

Second, yes, it makes the attacker do a bit more work. But it is very
little, and the attacker can run attacks in parallel, so it doesn't
make much difference in throughput.

  --David Jacobson
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: SHA-3 availability

2014-02-12 Thread Peter Waltenberg
Personally I'd advise against this until NIST publishes test vectors
for the finalist. We already have:

  DES(_OLD) and DES
  SHA-0 and SHA-1
  Rijndael and AES

NIST has a long track record of changing algorithms before going
'final', and the problem is that once people start using the 'bad'
version of the algorithm, it has to be supported forever, and that
causes interop AND security issues.

If you really want to speed the adoption of SHA-3 :), ask NIST to
finalize the standard; until that happens it's a poison pill.

Peter

-owner-openssl-...@openssl.org wrote: -

To: openssl DEV openssl-dev@openssl.org
From: Francis GASCHET
Sent by: owner-openssl-...@openssl.org
Date: 02/12/2014 08:45PM
Subject: SHA-3 availability

Dear all,

The OFTP2 support group is going to start upgrading the cipher suite
supported in OFTP2. The proposal includes SHA-3, which is supported by
Java implementations (BouncyCastle at least). Is there some plan to
support it in OpenSSL?

Thanks and best regards,

--


Re: Define a method to rename dynamic libraries [patch]

2014-01-28 Thread Peter Waltenberg
This solves the problem of applications not being able to uniquely select a
specific instance of OpenSSL libraries.

That isn't sufficient for anything except possibly windows.

On Unix you'll also need to change SONAME (or the equivalent), prefix all
the public entry points to the libraries and massage the OpenSSL headers to
match to avoid crashes with a mix of OpenSSL binaries in the same process.
On the Unixes, symbol resolution is something of a lottery and varies even
between releases of the same OS. Since the OpenSSL data objects change in
size between releases of OpenSSL calling the wrong entry point with data
structure that isn't exactly what was expected causes some interesting
problems.

It's not an impossible problem to solve, but it does require a lot more
than a simple rename of the libs.

Peter




From:   Eichenberger, John john.eichenber...@intermec.com
To: openssl-dev@openssl.org openssl-dev@openssl.org,
Date:   29/01/2014 07:55
Subject:Define a method to rename dynamic libraries [patch]
Sent by:owner-openssl-...@openssl.org



This patch was developed for use with Windows Mobile Dlls, but I think it
either works or is close to working for any OS build.
The patch itself only enables the ability to rename dynamic libraries using
an environment variable named CRYPTO_PREFIX.
Unless that environment variable is defined, nothing is really different.

When it is defined it is prepended to the names of the libraries,
effectively creating uniquely named libraries.
This solves the problem of applications not being able to uniquely select a
specific instance of OpenSSL libraries.

-Ike-
John Eichenberger
Principal Engineer: Sustaining Engineering: Intermec by Honeywell
425.265.2108  john.eichenber...@intermec.com



This message is intended only for the named recipient. If you are not the
intended recipient, you are notified that disclosing, copying, distributing
or taking any action based on the contents of this information is strictly
prohibited.

[attachment RenameDLLs.patch deleted by Peter Waltenberg/Australia/IBM]



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [PATCH] Reseed PRNG on PID change

2014-01-15 Thread Peter Waltenberg
The necessary code for MOST platforms is already in OpenSSL

Look for OPENSSL_rdtsc in the crypto. directory.
O.K., not all platforms have this, but SPARC, x86/x86_64, s390, ppc,
Itanium, Alpha, PA-RISC and ARM have some variant - admittedly, quite a
few hardware variants don't have a TSC, but that's determinable at
runtime and you could drop back to gettimeofday() to cover those.

It's at least better than gettimeofday() for this purpose - i.e. most of
the bits can't be determined from outside the box, it moves faster, and
it's cheaper to read.

Peter




From:   Stephan Mueller smuel...@chronox.de
To: openssl-dev@openssl.org,
Date:   16/01/2014 07:45
Subject:Re: [PATCH] Reseed PRNG on PID change
Sent by:owner-openssl-...@openssl.org



On Thursday, 16 January 2014, at 07:41:21, Peter Waltenberg wrote:

Hi Peter,

You have access to high speed event counters on most platforms now.
Where those are available, use them for reseed data instead of
gettimeofday(). Far higher resolution, far less performance impact.

That implies, however, either hardware-specific code (i.e. special CPU
instruction) or per operating system specific code (special system call,
library call).

Ciao
Stephan

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org





Re: [openssl.org #3202] Request to remove _sparcv9_random

2013-12-24 Thread Peter Waltenberg
FWIW, we have a similar problem on AIX with the capability probes
there. The debugger has an 'ignore' option - which allows us to bypass
the SIGILL traps.

I can understand the logic of not probing for an instruction that'll
never exist, but on some architectures you WILL hit this problem, as
there's no other way to do capability probes in user space code. All
you can do there is hope the debugger has some way of coping.

Peter

-owner-openssl-...@openssl.org wrote: -

To: David Miller da...@davemloft.net
From: Dan Anderson
Sent by: owner-openssl-...@openssl.org
Date: 12/22/2013 03:37PM
Cc: openssl-dev@openssl.org
Subject: Re: [openssl.org #3202] Request to remove _sparcv9_random

On 12/21/2013 7:07 PM, David Miller via RT wrote:
> From: Dan Anderson dan.ander...@oracle.com
> Date: Sat, 21 Dec 2013 17:54:52 -0800
>> I think we need to clarify why this should be done. The SPARC
>> "random" instruction was designed at Sun Microsystems (now Oracle
>> Corporation) for a never-released processor several years ago. For
>> SPARC, randomness is obtained by reading a special control register.
>> The SPARC "random" instruction was never implemented and never will
>> be implemented. Please remove code to detect this instruction.
>> Thanks!
> The patch was presented as a way to get rid of SIGILL dropping the
> application into the debugger.

True, but forget this for the sake of argument.

> The same problem is going to exist if people run this library on
> chips without the crypto instructions, or other ones we check for.

You are checking for a SPARC instruction that was never implemented, is
not on any SPARC processor, and never will exist. All I'm suggesting is
to not check for this instruction.

Dan

--
dan.ander...@oracle.com, Oracle Solaris, San Diego, +1 858-526-9418
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Self-initialization of locking/threadid callbacks and auto-detection of features

2013-10-23 Thread Peter Waltenberg
No, multiple independently developed libraries in the same process space
calling the same crypto. code was the problem.

Multiple thread models can't work if they call common code, agreed
there :).

The problem we hit early on was that as a library the only way we could
ensure the stack above us was stable was to use the OS default locking
scheme internally, nothing else would work once the complexity started
climbing. We did it as you suggest, hooked the shared library 'init' entry
points and created the locks when the library was loaded.



Peter




From:   Kurt Roeckx k...@roeckx.be
To: openssl-dev@openssl.org,
Date:   24/10/2013 06:44
Subject:Re: Self-initialization of locking/threadid callbacks and
auto-detection of features
Sent by:owner-openssl-...@openssl.org



On Wed, Oct 23, 2013 at 12:59:53AM -0500, Nico Williams wrote:
 On Wed, Oct 23, 2013 at 08:32:35AM +1000, Peter Waltenberg wrote:
  There is no 'safe' way to do this other than hardwired. Admitted, we
have a
  fairly ugly stack on which to find that out, multiple independently
  developed lumps of code jammed into the same process, quite a few using
  dlopen()/dlclose() on other libraries - multiples of them calling the
  crypto. code.

 Oh, good point.

 I think what I'll do is add targets that denote always use the OS
 thread library; disallow setting these callbacks, and a corresponding
 command-line option to ./config.  This should be the best option in
 general because of the possibility of the text for callbacks being
 unmapped when the provider gets dlclose()ed.

 Then maybe there's no need to bother with the pthread_once()/
 InitOnceExecuteOnce() business.  I had assumed, going in, that I needed
 to preserve existing semantics as much as possible, but because that
 might still be the case (even if you're right as to what the ideal
 should be) I will do *both*.  (Who knows, maybe there's a program out
 there that insists on using the gnu pth library and not the OS' native
 threading library.  Or maybe there's no need to support such oddities.)

You're concerned that you might have 2 libraries in your address
space implementing pthreads?  That might of course happen, but
unless they're using symbol versioning it's going to fail.  So I
suggest you forget about it and let whoever wants to do that fix
things.


Kurt

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org





Re: Self-initialization of locking/threadid callbacks and auto-detection of features

2013-10-22 Thread Peter Waltenberg
  The simplest fix involves setting the default only once, as with the
callbacks, but here I feel that's a shaky idea, that I should allow RAND
method changes at any time, in a thread-safe manner -- more work for
me, but less surprising.


There is no 'safe' way to do this other than hardwired. Admitted, we have a
fairly ugly stack on which to find that out, multiple independently
developed lumps of code jammed into the same process, quite a few using
dlopen()/dlclose() on other libraries - multiples of them calling the
crypto. code.

All those lumps of code think they 'own' the crypto. stack - worst case
scenario was a dlopen()'d library setting the callbacks, then being
unloaded while other parts of the stack were still using crypto.
Surprisingly - that still worked on some OS's - but some (like AIX/HPUX)
unmap program text immediately on dlclose().

Personally, I'd suggest making it a build option to turn off default locking
and use whatever the OS provides by default. That'll allow the few corner
cases to continue doing whatever wierd things they were doing before, but
remove the big risk factor for the vast majority of users. And it is
becoming a big risk factor now.

It certainly shouldn't be an issue for the OS installed OpenSSL which
probably covers most of your users, the only sane choice there is the OS
default locking scheme anyway.

Peter



From:   Ben Laurie b...@links.org
To: openssl-dev@openssl.org,
Date:   23/10/2013 00:33
Subject:Re: Self-initialization of locking/threadid callbacks and
auto-detection of features
Sent by:owner-openssl-...@openssl.org






On 22 October 2013 06:47, Nico Williams n...@cryptonector.com wrote:
  On Monday, October 21, 2013, Salz, Rich wrote:
   I like your proposal, but I'd prefer to see an already initialized
   error code returned. Or a flag to the (new?) init api that says ignore
   if already set

  Thanks for your reply!

  I can add an error, but note that the caller can set then get the
  callbacks and compare to check whether the caller's callbacks were taken.
  I could also add a new set of callback setters with ignore-if-set flags.
  As long as the existing ones behave reliably in the already-set case.

  In the already-set case I think it may well be best to ignore without
  failing on the theory that the caller that first set the callbacks must
  have set sufficiently useful ones anyways... and that where the OS has a
  good enough default threading library, that's the one that will be used
  by all DSOs calling OpenSSL in the same process, as otherwise all hell
  would already be breaking loose anyways!  (I can imagine twisted cases
  where this would not be true, but they seem exceedingly unlikely.)

  If you want to see the half-baked bits I have (which build on Linux, but
  which aren't tested) to see what I'm up to, see
  https://github.com/nicowilliams/openssl, specifically the thread_safety
  branch.  See the XXX comments in rand_lib.c in particular.  The outline:
  add a thread-safe one-time initialization function, built on whatever the
  OS provides, then use that to make callback init thread-safe.
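The outline above (a thread-safe one-time initialization primitive, used to make callback setup safe under concurrent callers) can be sketched as follows. This is an illustration of the pattern in Python, not OpenSSL code, and all names here (`Once`, `init_callbacks`) are invented for the example:

```python
import threading

class Once:
    """Run an initializer exactly once, even under concurrent callers."""
    def __init__(self):
        self._done = False
        self._lock = threading.Lock()

    def run(self, fn):
        # Fast path: skip the lock entirely once initialization is done.
        if self._done:
            return False
        with self._lock:
            # Re-check under the lock: another thread may have won the race.
            if self._done:
                return False
            fn()
            self._done = True
            return True

calls = []
once = Once()

def init_callbacks():
    # Stand-in for installing locking/threadid callbacks.
    calls.append("installed")

threads = [threading.Thread(target=once.run, args=(init_callbacks,))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(calls))  # the callbacks were installed exactly once
```

The double-checked pattern mirrors what `pthread_once` (or `InitOnceExecuteOnce` on newer Windows) provides natively; the point of the sketch is only that the already-set case degenerates to a cheap read.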

  What I need to know:

   - should I add new targets to ./Configure?  for now I modified the
  linux-elf target, but this feels wrong to me.

   - what about Windows?  I either need to have different targets for
  pre-Vista/2008, or I have to write a once-initialization function for
  older Windows (which I can and know how to do, it's just more work, and
  in particular I couldn't test it, so I'm not inclined to do it).

   - if so, should ./config automatically pick the new targets where there
  is appropriate threading support?

I've been musing about a more autoconf-like approach for some time now
(but, for the love of all that is fluffy, not using autoconf itself, which
sucks) - it seems this is a good reason to go down that path.

An interesting question is: what to do if no appropriate locking mechanism is
discovered?


   - how to allocate error codes for already initialized errors that you
  suggest?

   - should I work to make sure that it's possible to change the default
  RAND method after it's been set once?

     The code in rand_lib.c is currently fundamentally thread-unsafe,
  though it could be accidentally thread-safe if, e.g., ENGINE_finish()
  doesn't actually tear down state at all.  The simplest fix involves
  setting the default only once, as with the callbacks, but here I feel
  that's a shaky idea, that I should allow RAND method changes at any time,
  in a thread-safe manner -- more work for me, but less surprising.

  Nico
  --

  (sent from a mobile device with lousy typing options, and no plain text
  button)
  (my patches need rebasing to squash and split up, need tests, need
  finishing, but if you have comments I would love them sooner than
  later! :)


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org

Re: [PATCH] libssl: Hide library private symbols

2013-07-25 Thread Peter Waltenberg
Doing this at link time is far easier and can cover all the OS's.
Static doesn't work for symbols that are called inter-module but which
shouldn't be in the public API and GCC specific constructs only work for -
well, GCC.

libeay.num and ssleay.num already list all the public symbols. Parse those
with Perl and generate the necessary linker files - there are only minor
formatting differences between OS's to deal with and some minor differences
in how the files are specified.

Windows -def:
;
; Definition file for the DLL version of the LIBEAY library from OpenSSL
;

LIBRARY LIBEAY32

EXPORTS
SSLeay  @1
   ...

AIX   -bexport:

#!
*DESCRIPTION 'LIBSSL EXPORT FILE'
SSLeay
...

HP/UX   -c

#DESCRIPTION 'LIBSSL EXPORT FILE'

+e SSLeay
...

Linux  -Wl,--version-script,

#DESCRIPTION 'LIBSSL EXPORT FILE'

LIBSSL {
  global:
SSLeay;
...
  local:
*;
};

OSX   -exported_symbols_list

SSLeay

Solaris  -Wl,-M

#DESCRIPTION 'LIBSSL EXPORT FILE'

LIBSSL {
  global:
SSLeay;
...
  local:
*;
};
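The generator pass Peter describes - parse the public-symbol lists and emit per-OS linker files - can be sketched briefly. This version is in Python rather than Perl and is an illustration only; it assumes nothing about the `.num` format beyond the exported symbol name being the first whitespace-separated field on each line:

```python
def version_script(num_lines, libname="LIBSSL"):
    """Emit a GNU ld --version-script from .num-style lines.

    Each non-empty line of a .num file starts with the exported symbol
    name; the remaining fields (ordinal, existence flags) are ignored.
    """
    symbols = []
    for line in num_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        symbols.append(line.split()[0])
    body = "\n".join("    %s;" % s for s in symbols)
    return "%s {\n  global:\n%s\n  local:\n    *;\n};\n" % (libname, body)

# Hypothetical .num excerpt, shaped like libeay.num/ssleay.num entries.
sample = [
    "SSLeay                           1\tEXIST::FUNCTION:",
    "SSLeay_version                   2\tEXIST::FUNCTION:",
]
print(version_script(sample))
```

Emitting the Windows `.def`, AIX `-bexport`, or OSX `-exported_symbols_list` variants is then only a matter of swapping the output template around the same symbol list.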



Peter





From:   Kurt Roeckx k...@roeckx.be
To: openssl-dev@openssl.org,
Cc: Cristian Rodríguez crrodrig...@opensuse.org
Date:   26/07/2013 03:57
Subject:Re: [PATCH] libssl: Hide library private symbols
Sent by:owner-openssl-...@openssl.org



I've submitted a patch in 2007 to make as much as possible static,
but it never got applied, so I never bothered writing a patch to
make the rest hidden.  I think making things static is even better
than hiding them, and should work on all platforms.  It's just
that you can't make everything that isn't public static.

But I do have a patch that only tells the linker which symbols
to export that's used in Debian, and so only those that are
public are exported.  It would of course be better to hide the
rest like your patch so that more things can be optimised.


Kurt

On Wed, Jul 24, 2013 at 11:33:33PM -0400, Cristian Rodríguez wrote:
 This patch only contains the libssl part (the easy one)
 patch to libcrypto will follow after it is complete and good enough.

 It hides all the library symbols that are not part of the public
 API/ABI when GCC 4 or later is used.
 ---
  ssl/kssl_lcl.h | 9 +
  ssl/ssl_locl.h | 8 
  2 files changed, 17 insertions(+)

 diff --git a/ssl/kssl_lcl.h b/ssl/kssl_lcl.h
 index c039c91..69972b1 100644
 --- a/ssl/kssl_lcl.h
 +++ b/ssl/kssl_lcl.h
 @@ -61,6 +61,10 @@

  #include <openssl/kssl.h>

 +#if defined(__GNUC__) && __GNUC__ >= 4
 +#pragma GCC visibility push(hidden)
 +#endif
 +
  #ifndef OPENSSL_NO_KRB5

  #ifdef  __cplusplus
 @@ -84,4 +88,9 @@ int kssl_tgt_is_available(KSSL_CTX *kssl_ctx);
  }
  #endif
  #endif/* OPENSSL_NO_KRB5  */
 +
 +#if defined(__GNUC__) && __GNUC__ >= 4
 +#pragma GCC visibility pop
 +#endif
 +
  #endif/* KSSL_LCL_H   */
 diff --git a/ssl/ssl_locl.h b/ssl/ssl_locl.h
 index 56f9b4b..dde4e3e 100644
 --- a/ssl/ssl_locl.h
 +++ b/ssl/ssl_locl.h
 @@ -165,6 +165,10 @@
  #include <openssl/ssl.h>
  #include <openssl/symhacks.h>

 +#if defined(__GNUC__) && __GNUC__ >= 4
 +#pragma GCC visibility push(hidden)
 +#endif
 +
  #ifdef OPENSSL_BUILD_SHLIBSSL
  # undef OPENSSL_EXTERN
  # define OPENSSL_EXTERN OPENSSL_EXPORT
 @@ -1357,4 +1361,8 @@ void tls_fips_digest_extra(
const EVP_CIPHER_CTX *cipher_ctx, EVP_MD_CTX *mac_ctx,
const unsigned char *data, size_t data_len, size_t orig_len);

 +#if defined(__GNUC__) && __GNUC__ >= 4
 +#pragma GCC visibility pop
 +#endif
 +
  #endif
 --
 1.8.3.1






Re: [PATCH] libssl: Hide library private symbols

2013-07-25 Thread Peter Waltenberg
The compiler can't optimize if the symbols are called inter-module either.
And seriously, do you REALLY think that any changes the compiler makes at
that level will have measurable performance impacts?
There are good reasons to hide parts of the API that you don't want used by
external code - hiding symbols to improve performance is a big stretch.

And there have been linkers which did do a final optimization pass. (OS/2
for example).

Peter





From:   Cristian Rodríguez crrodrig...@opensuse.org
To: openssl-dev@openssl.org,
Date:   26/07/2013 11:55
Subject:Re: [PATCH] libssl: Hide library private symbols
Sent by:owner-openssl-...@openssl.org



On 25/07/13 21:46, Peter Waltenberg wrote:
 Doing this at link time is far easier and can cover all the OS's.

Yes, but this is the worst possible way, as the compiler cannot perform
optimizations as it does not know that the symbols are hidden.





Re: AES-XTS mode doesn't chain between successive calls to EVP_CipherUpdate?

2013-04-27 Thread Peter Waltenberg
The OpenSSL implementation passes the NIST XTS compliance tests?

XTS was designed to do in-place encryption of blocks of data (disk
encryption etc.). Feature rather than bug?

Pete

-owner-openssl-...@openssl.org wrote: -
To: "openssl-dev@openssl.org" openssl-dev@openssl.org
From: "Greg Bryant (grbryant)"
Sent by: owner-openssl-...@openssl.org
Date: 04/26/2013 11:32PM
Subject: AES-XTS mode doesn't chain between successive calls to EVP_CipherUpdate?

I sent this to openssl-users a couple of days ago, but haven't gotten a
response. Perhaps it's more of a dev question:

Looking at the xts128.c code, it looks like the tweak is recalculated from scratch every time CRYPTO_xts128_encrypt() is called:

memcpy(tweak.c, iv, 16);

 (*ctx->block2)(tweak.c,tweak.c,ctx->key2);

It seems like this would break the chaining between successive calls to
EVP_CipherUpdate, requiring that the plaintext be encrypted in its entirety
with one call to EVP_CipherUpdate. Other chaining modes preserve the chaining
state in the context (CTR mode, for example, saves the ctr in IVEC). There's
nothing in the XTS context structure that would preserve the tweak, though.

Am I missing where this chaining occurs? Or is this a bug? Or is it a requirement that XTS mode only use a single call to EVP_CipherUpdate per data stream? (which seems to violate the definition of EVP_CipherUpdate.)

I saw this in openssl-1.0.1, but I've checked that the relevant code in openssl-1.0.1e is no different.

thanks,

Greg Bryant
Technical Leader
Cisco Systems, Inc.
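For reference, the state that would have to be carried across calls is the running tweak: each block's tweak is the previous one doubled in GF(2^128). A sketch of that doubling, assuming the little-endian tweak byte layout xts128.c uses (illustration only, not OpenSSL code):

```python
def xts_next_tweak(tweak):
    """Multiply a 16-byte XTS tweak by alpha (x) in GF(2^128).

    Little-endian byte order; reduction polynomial
    x^128 + x^7 + x^2 + x + 1, i.e. feedback constant 0x87.
    """
    out = bytearray(16)
    carry = 0
    for i in range(16):
        # Shift each byte left by one, propagating the carry upward.
        out[i] = ((tweak[i] << 1) | carry) & 0xFF
        carry = tweak[i] >> 7
    if carry:
        # Overflow past bit 127: reduce by the polynomial.
        out[0] ^= 0x87
    return bytes(out)

t = bytes([0] * 15 + [0x80])    # top bit set, so doubling wraps around
print(xts_next_tweak(t).hex())  # 87000000000000000000000000000000
```

A context that stored this running tweak (rather than re-deriving it from the IV on every call) is what chaining across EVP_CipherUpdate calls would require.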


Re: MD5 in openSSL internals

2013-04-25 Thread Peter Waltenberg
Your answers lie here: http://tools.ietf.org/html/rfc2246 - the RFC for TLS 1.0.

OpenSSL implements that, as per specification. And incidentally, as RFC 2246
pre-dates (Jan 1999) SHA-256 (2001), the answers aren't the ones you want to
hear. NOT an OpenSSL problem that, simply the fact that time has passed and
the security landscape has changed. If you want secure, TLS 1.2 (published
March 2011) is it now, and OpenSSL 0.9.8d (released September 2006) doesn't
support TLS 1.2.

Peter

-owner-openssl-...@openssl.org wrote: -
To: openssl-us...@openssl.org, "openssl-dev@openssl.org" openssl-dev@openssl.org
From: "Nikola Vassilev"
Sent by: owner-openssl-...@openssl.org
Date: 04/25/2013 02:21AM
Subject: Re: MD5 in openSSL internals

From: Venkataragavan Narayanaswamy v...@brocade.com
Sender: owner-openssl-us...@openssl.org
Date: Tue, 23 Apr 2013 00:29:17 -0600
To: openssl-dev@openssl.org; openssl-us...@openssl.org
ReplyTo: openssl-us...@openssl.org
Subject: MD5 in openSSL internals

Hi,

We are currently analyzing and understanding the security strength of the
openSSL internal implementation to certify the products. In version 0.9.8d,
TLSv1.0 alone is supported. Can you please answer the following or provide
me with the documentation reference:

1. Does the openSSL library use MD5 internally for any operation?
2. Can we have SHA256 in the ciphersuite with TLSv1.0?

Thanks,
Venkat


Re: RC4 - asm or C?

2012-11-14 Thread Peter Waltenberg
Quite a simple answer.
The maximum TLS record size is 16k - overhead. Optimize for that (16k).

Yes but ...

The other cases don't matter: as the packet size decreases, other factors,
like TCP/IP stack and network latency, dominate performance - so if you send
lots of small packets your net throughput is going to be limited by things
other than encryption speed anyway.

For other uses of encryption, it might matter, but for SSL, it's an easy
answer.

Peter




From:   Timur I. Bakeyev ti...@com.bat.ru
To: openssl-dev openssl-dev@openssl.org
Date:   14/11/2012 23:58
Subject:RC4 - asm or C?
Sent by:owner-openssl-...@openssl.org



Hi all!

I know, it's an old topic, been discussed several times in the past, but
I've decided to check in my own environment the difference between asm and
C implementations of RC4 in OpenSSL 1.0.1c on
 Intel(R) Xeon(R) CPU X5679 @ 3.20GHz.

http://zombe.es/post/405783/openssl-outmoded-asm

Well, results are quite interesting.

# ./openssl speed -evp rc4

ASM
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
rc4            287633.90k   573238.77k   735101.34k   777062.91k   794848.66k
rc4            286393.18k   572485.03k   731541.58k   795963.08k   817934.21k

vs.

NO ASM
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
rc4            462543.94k   530657.76k   539455.79k   547207.11k   548447.55k
rc4            472625.58k   531457.61k   541795.39k   547749.59k   548894.14k

For small blocks the C implementation still rocks (a performance gain of
almost 200%), but as the block size grows, the assembler code outperforms
the C one.

I guess from now on the asm implementation of RC4 should be preferred.

But I'm curious: why is there such a drop in performance of the asm code,
and what can be done to address that issue? Also, what is the common size
of the RC4 block in SSL traffic - which test is more realistic?

With best regards,
Timur Bakeyev.
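The block-size dependence being discussed can be reproduced in miniature. The RC4 below is a plain-Python reference implementation, checked against the well-known "Key"/"Plaintext" test vector; it is orders of magnitude slower than either the C or asm builds, so only the shape of the size/throughput curve is meaningful, not the absolute numbers:

```python
import time

def rc4(key, data):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

# Known RC4 test vector: key "Key", plaintext "Plaintext"
assert rc4(b"Key", b"Plaintext").hex() == "bbf316e8d940af0ad3"

# Crude per-block-size throughput, mirroring the sizes openssl speed uses.
for size in (16, 64, 256, 1024, 8192):
    buf = bytes(size)
    t0 = time.perf_counter()
    rc4(b"Key", buf)
    elapsed = max(time.perf_counter() - t0, 1e-9)
    print("%5d bytes: %10.1f kB/s" % (size, size / elapsed / 1000))
```

Note that this toy includes the key schedule in every measurement, whereas `openssl speed` sets up the key once; that is one reason small-block numbers always look disproportionately bad.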



Encrypt/Decrypt in place ?

2012-11-05 Thread Peter Waltenberg
Can the same pointer safely be used for the input and output buffers in
encrypt and decrypt operations?
i.e. is something like AES_encrypt(out, out, key) guaranteed not to rewrite
the input before it's been processed?

The following IMPLIES this is safe but lingering doubts remain.

(from crypto/aes/aes_core.c)

 /*
  * Encrypt a single block
  * in and out can overlap
  */
 void AES_encrypt(const unsigned char *in, unsigned char *out,
                  const AES_KEY *key) {


Note: I'm interested in the general case. AES was just used as an example
of the type of operation and it's the example I found which implied this
works.

Alternatively, do any test cases exist that'd fail if someone provided asm
which broke this behaviour?

Checking the source code only goes so far, it'd be really hard to verify
all the asm modules.

We could write our own tests for this, but it'd be preferable that the
OpenSSL behaviour was known to preserve this feature - patching some random
asm module to 'fix' a break of this in the future wouldn't be trivial.
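The property in question - output may alias input because each block is read in full before anything is written - can be checked mechanically. Below is a toy sketch of such a test using a stand-in transform, not AES (a real test would run each implementation twice and compare the in-place result against a separate-buffer reference):

```python
def toy_block_encrypt(inp, out, offset=0):
    """Stand-in for a block cipher: copies the whole 16-byte block into
    a local before writing, so in and out may safely alias."""
    block = bytes(inp[offset:offset + 16])   # read the full block first
    for k, b in enumerate(block):
        out[offset + k] = (b + 1) & 0xFF     # only then write the output

msg = bytearray(b"0123456789abcdef")

separate = bytearray(16)
toy_block_encrypt(msg, separate)             # out-of-place reference run

inplace = bytearray(msg)
toy_block_encrypt(inplace, inplace)          # same buffer for in and out

print("in-place == out-of-place:", inplace == separate)
```

The test only proves the property for the buffers and alignments it exercises, which is exactly the caveat raised above: a new asm module could pass for aligned whole blocks and still break for some overlap pattern nobody tested.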

Thanks
Peter



DLL naming

2012-09-09 Thread Peter Waltenberg
The "easy" fix is to relink the objects yourself, i.e. use the OpenSSL build
process to generate everything, ignore its generated libraries and simply
create what you want from the objects that now exist. That way you have
almost complete control and it doesn't require changes to the OpenSSL build
scripts.

Peter

-owner-openssl-...@openssl.org wrote: -
To: "openssl-dev@openssl.org" openssl-dev@openssl.org
From: Erik Tkal
Sent by: owner-openssl-...@openssl.org
Date: 09/07/2012 05:58AM
Subject: DLL naming

Is it easy/possible to tell the OpenSSL build process to name the generated
DLLs (and corresponding LIBs) differently than ssleay32 and libeay32? I
don't see anything obvious other than by manually renaming them after the
fact (and not sure how that would affect DLLs/EXEs linking with the LIBs
that then need to locate the DLL).

Erik Tkal
Juniper OAC/UAC/Pulse Development


Re: ppc32_asm for BSD targets

2012-08-03 Thread Peter Waltenberg
Not a definitive answer, but I know we (IBM) never tested the PPC asm on
BSD. It's possible that because no one had a PPC machine running BSD to test
the asm paths they were left disabled. There may be other reasons, but "make
tests" should at least show any gross problems.

The only subtle problem I can think of that might be there in recent code is
use of 64 bit registers in 32 bit code; if the kernel doesn't preserve the
upper halves of registers you can get clobbered during signal handling. I
don't know enough about current BSD to know if that's a problem or not.

Peter

-owner-openssl-...@openssl.org wrote: -
To: openssl-dev openssl-dev@openssl.org
From: Kevin Fowler
Sent by: owner-openssl-...@openssl.org
Date: 08/04/2012 01:04AM
Subject: ppc32_asm for BSD targets

For the BSD-generic32 target, which gets used for *bsd on ppc cpu, the
Configure script uses ${no_asm}. Other OS's (linux, darwin, AIX) on ppc cpu
use ${ppc32_asm}. Are the ppc asm routines not valid for *bsd OS? If so,
what about BSD invalidates them?

Thanks,
Kevin


Re: How encrypt long string RSA

2012-03-27 Thread Peter Waltenberg
Traditionally, you handle this by encrypting a fixed-length symmetric key
(i.e. an AES key) using RSA and using that key to encrypt any serious
amounts of data.

Peter



From:   Frater fr...@poczta.fm
To: openssl-dev@openssl.org
Date:   27/03/2012 19:53
Subject:How encrypt long string RSA
Sent by:owner-openssl-...@openssl.org



Where is there any working example to encrypt a file or long string using
an RSA public or private key?
In demos/maurice there is example 1, but it uses a certificate, not a
private key.



Re: OS-independent entropy source?

2012-01-22 Thread Peter Waltenberg
HT processors are a nightmare for security, yes :). You are assuming the
target software is collecting data continuously as fast as it can - which,
I agree, simply turns it into the designated victim :). Don't do that - the
data rate is high enough that you can sample on demand and you can afford
some delay between samples. And make sure your sample collection code is
branch free - you can still attack it via the cache, but it's a lot harder
to know exactly where the victim is and your attack code has to be able to
get that exactly right. The usual assumptions - the attacker doesn't have
root privileges.

Pete

-owner-openssl-...@openssl.org wrote: -
To: openssl-dev@openssl.org
From: Andy Polyakov
Sent by: owner-openssl-...@openssl.org
Date: 01/21/2012 12:53AM
Subject: Re: OS-independent entropy source?

 My comments were to clarify why this works 'quite well' on multi-user
 systems even though the underlying source may not be truly random - and
 why it may not be as usable on single user ones.

Attached is a circular cross-correlation vector for two synchronized
threads running on a multi-core non-hyperthreading x86 processor.
"Synchronized" means that one thread blocks on a semaphore and then
collects data, while another thread unlocks the semaphore and then collects
data. "Multi-core" means that both threads exercise the same external
memory interface. As mentioned earlier, high spikes are a manifestation of
the system timer interrupt, nothing to worry about. But what do we make of
the fact that there are areas with effectively "guaranteed" correlation of
0.02? How does the value translate into "tangible" terms? Is it acceptable?

Naturally a single-CPU system can't exhibit such behavior...

Re: OS-independent entropy source?

2012-01-22 Thread Peter Waltenberg
Well, if you had say a single thread collecting data to feed an entropy
pool, once an attacker synchronized on that, they'd win. Not sure that's
possible, but it's probably better for security if this is done inline by
each thread as needed. (Particularly when you consider the real OpenSSL
usage scenarios - web servers with a lot of running threads - good luck
making a timing attack work in that use case).

There's one more point. The upper bits of those registers are easier to
guess than the lower, again the 'fix' is obvious, what's more difficult is
knowing which of the lower bits are actually changing.

i.e. P4 the lower 4 bits are effectively 'stuck' as every instruction is a
multiple of 16 clocks long, quite a few processors have quirks here.

Pete




From:   Andy Polyakov ap...@openssl.org
To: openssl-dev@openssl.org
Date:   23/01/2012 03:38
Subject:Re: OS-independent entropy source?
Sent by:owner-openssl-...@openssl.org



 HT processors are a nightmare for security yes :).

I've attempted the experiment even on hyper-threading P4. No anomalies
in sense that it looks pretty much like another P4. Well, one thread
appears to get more interrupts, while spikes tend to be higher on the
other thread. But when it comes to fine print, i.e. variations between
interrupts, there is no essential difference and cross-correlation looks
essentially the same as on real multi-core. No maximum at zero lag
though... On the second thought why would there be difference, when
every sample takes several *hundred* clock cycles to complete?
Hyper-threading operates at single clock cycle resolution, not hundreds,
right?

 You are assuming the target software is collecting data continuously as
 fast as it can - which I agree, simply turns it into the designated
 victim :). Don't do that - the data rate is high enough you can sample
 on demand and you can afford some delay between samples.

But data will have to be collected in bursts and not exactly short
ones, e.g. ~700 samples or 300 microseconds are suggested on the page,
initial calibration can be tens milliseconds... Would it be appropriate
to say that these are not long enough to detect and synchronize on?
[Naturally provided that detection and synchronization can give
adversary the edge.] Assuming that that collection is continuous is
simply first approximation on the problem...

 And make sure your sample collection code is branch free - you can still
 attack it via the cache, but it's a lot harder to know exactly where the
 victim is and your attack code has to be able to get that exactly right.

Loop bodies are branch-free on all platforms. Though I don't think it
matters a lot, because, once again, sample is several *hundred* cycles,
much higher than [mis-]branch penalties.


Re: OS-independent entropy source?

2012-01-18 Thread Peter Waltenberg


No. For following reason. Originally idea was to attempt to gather OS
noise. I mean entropy would come from interrupts, interaction with say
DMA, etc. Therefore no explicit attempts to perform the experiment
outside OS were made. Besides it would be impossible for me to set it
up in most cases (because normally access is remote non-privileged). But
having observed it in OS-free environment made me wonder...

I think you underestimate the contribution that OS noise makes to security.

Even if the 'entropy' we are seeing really is just a hardware PRNG, at
least it's a free running one. i.e. it's generating new samples
continuously and we get a sample from a continuous stream when we read that
PRNG. (We don't just get the next sample in a sequence as we would reading
a software PRNG).

So those minor disturbances have a significant effect on the data we
actually collect because they perturb the timing of the sampling. Someone
trying to predict the output of our RNG has to be able to either sample at
exactly the same times as us (down to timer resolution) or know the
hardware PRNG sequence and predict when we sampled it (again, down to timer
resolution) - quite a hard thing to do even from another process on
the same machine. It's not like attacking a software PRNG where once you
know where you are now, you can predict the future output.

Again, that's why I'm happier about the security of this approach on a
multi-user system than on a single user embedded system where there's no
additional timing perturbation. But provided you can show you don't get
repeated sequences at boot, it's at least as good as anything else you have
even in those scenarios.
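The on-demand sampling discussed above boils down to reading a fine-grained timer repeatedly and keeping the deltas as raw material. A minimal sketch of such a collector (illustration only: raw deltas like these are biased and correlated, and would need conditioning and health testing before any cryptographic use):

```python
import time

def collect_jitter(n):
    """Collect n consecutive deltas from a nanosecond-resolution clock.

    The variation between deltas - interrupts, cache and bus contention,
    scheduler noise - is the raw material being discussed, not the
    absolute values.
    """
    samples = []
    prev = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        samples.append(now - prev)
        prev = now
    return samples

deltas = collect_jitter(1000)
print("samples:", len(deltas), "distinct delta values:", len(set(deltas)))
```

On an idle single-user machine the distribution of these deltas can collapse to a few values, which is exactly the concern raised for embedded systems earlier in the thread.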

Peter



From:   Andy Polyakov ap...@openssl.org
To: openssl-dev@openssl.org
Date:   18/01/2012 18:19
Subject:Re: OS-independent entropy source?
Sent by:owner-openssl-...@openssl.org



 Come on, having me preparing bootable CF card image for a gizmo I'm not
 familiar with is unrealistic. Don't you have anything you can compile
 10-lines C code and some assembler to add to?

 Well you mentioned tests on x86 in your paper, I thought you
 do have some minimal test setup ready for it.

No. For following reason. Originally idea was to attempt to gather OS
noise. I mean entropy would come from interrupts, interaction with say
DMA, etc. Therefore no explicit attempts to perform the experiment
outside OS were made. Besides it would be impossible for me to set it
up in most cases (because normally access is remote non-privileged). But
having observed it in OS-free environment made me wonder...

 No problem to compile
 something here, I just do not want to run it under an operating
 system that sets the hardware who knows how (disabling
 ints is not enough if something is trying to do a DMA
 or something).

Arguably OS only adds entropy, and it appears to be rather little on
idle system. So just running it under low load is good enough.

 but ARM might be too weak requirement. OPENSSL_instrument_bus is
 dependent on presence of clflush instruction which is normally available
 with SSE2. Does your Geode support it? It's exposed in /proc/cpuinfo
 under Linux. And of course rdtsc.

 Yes, the processor does have tsc and clflush.

 FWIW on the ARMs I have I am able to manipulate/disable cache
 (on some there is no cache) and to read a counter ticking
 synchronously with the processor clock.

I know that newer ARM even allows you to make tick counter accessible to
user-land and there is even a way to flush cache line, but it's all
privileged operations and I work under assumption that code runs in
user-land. Therefore OPENSSL_instrument_bus is reduced to return 0; on ARM.

 What data do you need? OPENSSL_instrument_bus with 128k probes
 taken?

Yes, as simple as

#include <stdio.h>
#include <string.h>

#define N (128*1024)

int main()
{ int i,n;
  static int arr[N];

memset(arr,0,sizeof(arr));
n=OPENSSL_instrument_bus(arr,N);
for (i=0;i<n;i++)
printf("%d\n",arr[i]);
}

would do. I also collect for n=OPENSSL_instrument_bus2(arr,N,0);...



Re: OS-independent entropy source?

2012-01-17 Thread Peter Waltenberg
Depends on the PLL design - which we don't know. But yes, generally they
are notoriously sensitive to thermal effects.

I think my point is valid though - even if it is a PRNG, provided it's a
good one (and distribution will tell you that) if an attacker can't tell
exactly when you are sampling the PRNG effectively it's a usable entropy
source.
There are use cases where it may not be a good source - as in my previous
comments, a smart card for example, where the owner has physical access and
*can* dunk it into a thermos full of liquid nitrogen ;) but in most of the
OpenSSL use cases it's reasonable to exclude those scenarios.

The same is true of events we consider to be really random - i.e.
radioactive material, thermal shot noise - the real situation may simply be
that we don't yet know enough at present to be able to predict when an
individual nucleus will decay - that doesn't mean that'll always be true or
that someone with physical access to the hardware can't fake the 'random'
events anyway.

Peter



From:   Andy Polyakov ap...@openssl.org
To: openssl-dev@openssl.org
Date:   18/01/2012 01:53
Subject:Re: OS-independent entropy source?
Sent by:owner-openssl-...@openssl.org



 In praxis the feedback loop will exhibit both deterministic
 (e.g. quantization) and random (thermal) noise. For example
 if the common input clock changes, feedback loops in both
 PLLs go through their transfer functions until they stabilize
 on the new frequency. The resulting jitter will probably
 appear quite random, but is not.

Maybe relevant question is not how [in]predictable is PLL's reaction on
input frequency variation, but that there is one. I mean even if PLL
reaction is predictable, *when* [thermal] variation and consequent
reaction occurs is not, right?


Re: OS-independent entropy source?

2012-01-17 Thread Peter Waltenberg
One of the problems is for example to get a suitably random number
soon after booting an embedded device, without external activity.
A PRNG is no good here - the sampling occurs at quite predictable
time since the power was applied.

Yes, that's why Andy needs to check multiple samples gathered after a reset
or power on :), not just an auto-correlation function, the hardware PRNG
could just have a long period.

And I certainly have used processors where these tricks won't work, but
again, those were so basic that running OpenSSL wouldn't be an option.

Well if this assumption breaks the RNGs will be probably the least
thing to worry about ;)

There have been attacks demonstrated on quantum communications systems
which rely on blinding the detectors - so even without threatening the
stability of the universe :), attacks on what we currently consider to be
really 'random' events have already been demonstrated; that's why I don't
consider this to be intrinsically much worse than using a 'real' hardware
source. With access to the hardware you can probably mess up devices
relying on shot noise or similar anyway.

He just needs to be sure that the initial state isn't predictable, the
distribution is reasonable and that he can detect failures of the source.


Peter




From:   Stanislav Meduna st...@meduna.org
To: openssl-dev@openssl.org
Date:   18/01/2012 11:21
Subject:Re: OS-independent entropy source?
Sent by:owner-openssl-...@openssl.org



On 17.01.2012 23:55, Peter Waltenberg wrote:

 I think my point is valid though - even if it is a PRNG, provided it's a
 good one (and distribution will tell you that) if an attacker can't tell
 exactly when you are sampling the PRNG effectively it's a usable entropy
 source.

One of the problems is for example to get a suitably random number
soon after booting an embedded device, without external activity.
A PRNG is no good here - the sampling occurs at quite predictable
time since the power was applied.

For a typical OpenSSL usage you are probably right, at least if you
are able to save the gathered entropy across reboots.

 The same is true of events we consider to be really random - i.e.
 radioactive material, thermal shot noise - the real situation may simply
 be that we don't yet know enough at present to be able to predict when an
 individual nucleus will decay - that doesn't mean that'll always be true

Well, if this assumption breaks, the RNGs will probably be the least of
our worries ;)

--
Stano
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org





Re: OS-independent entropy source?

2012-01-16 Thread Peter Waltenberg
We've been using this general design on multi-user CPUs for a few years.

I'm happier with it there because even if the entropy source is just a
hardware PRNG there's enough other noise (bus stalls from other running
processes, interrupts, etc.) to ensure that it's going to be very, very
difficult for someone to determine the point at which you sample that PRNG
with sufficient accuracy to compromise you anyway.

All you care about is that you have a stream of numbers which can't be
guessed by another running process or from outside the box - whether it's a
'real' entropy source or a PRNG doesn't make a difference. (Unless there's
a flaw of course :) - which is Andy's concern here).

You can exclude obvious attacks from the kernel, or from the same process,
or from someone with physical access to the machine - anyone with that
level of access would find it simpler to read things like generated RSA
keys straight from memory than to try and guess your bitstream. Which is
useful because there's a clear point at which you can cut off the paranoia -
beyond this point we don't have to care, because we'd have no protection
even if we had a proven good entropy source rather than something that
just looks like it could be an entropy source. There are some gray areas
there - devices which are supposedly tamper-proof - but generally, with
physical access, many such devices have been compromised anyway.

In the single-user case I'm a lot less confident. Yes, PLLs tend to be
noisy devices and may well be behaving like a real entropy source here;
however, my EE background is 20 years old and things may have changed -
plus, you don't really know what the source of the noise is. That'd require
access to the underlying hardware design, and probably quite a lot of time.
I suspect that simply checking the distribution is going to be as effective
in practice as having more knowledge in any case. I think I'd also be
resetting the machine, grabbing a sample (repeat), and running multiple
samples through cross-correlation functions, just to ensure it isn't
effectively just a sequence generator initialized at boot.

However, with the caveats that you don't get repeated sequences at boot,
that you can determine when the 'entropy' source fails, and that an
attacker can't get physical access to the device, it's probably an adequate
entropy source. I.e. in something like a smart card - or a PS3 :) -
possibly not good enough; in a box sitting in a telecom rack in a locked
room, probably yes. Certainly mixing this in is better than *just* using
date/time at boot to seed a PRNG, for example.

Peter




From:   Andy Polyakov ap...@openssl.org
To: openssl-dev@openssl.org
Date:   16/01/2012 19:32
Subject:Re: OS-independent entropy source?
Sent by:owner-openssl-...@openssl.org



 Comments on http://www.openssl.org/~appro/OPENSSL_instrument_bus/ are
 welcomed.

 I did not analyse the architecture of tested processors to check
 how many frequencies they are using and how they are generating
 them, but isn't this just manifestation of the PLL characteristics?

Are you aware of any quantitative PLL characteristics that can be
relevant in the context? Can *you* argue in favor of the hypothesis?
It's essential that somebody else but me argues to confirm or dismiss it.





Re: [openssl.org #2627] SPARC T4 support for OpenSSL

2011-11-07 Thread Peter Waltenberg
I'm more used to dealing with PKCS#11, where the call overhead is usually
measurable, but even so, doing just AES is probably not a problem.
Doing something like AES-GCM with the AES in the engine and the GCM hash in
OpenSSL, though, I'd expect to see an impact - you are basically doing the
AES a block at a time in that scenario.
That's where I'm claiming that you'll be sacrificing performance long term.
And for instructions that are wired into the CPU and unprivileged there's
no real gain in using an engine.

The other issue, FIPS, you already covered. Yes, I care for that reason as
well; FIPS certifying with code in an engine will be more difficult, but
it really only impacts people who do their own FIPS certifications. Pretty
much our problem to deal with.

Like I said though, your call.

Peter






From:   Andy Polyakov ap...@openssl.org
To: openssl-dev@openssl.org
Date:   08/11/2011 05:00
Subject:Re: [openssl.org #2627] SPARC T4 support for OpenSSL
Sent by:owner-openssl-...@openssl.org



Peter Waltenberg wrote:
 There are some fairly severe performance hits in engine support unless
 the engine includes all the submodes as well. That includes things you
 are just starting to play with now, like the combined AES+SHA1 on x86.

??? Here is output for 'speed -engine intel-accel -evp
aes-128-cbc-hmac-sha1' for 1.0.0d, i.e. through engine.

type                    16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
aes-128-cbc-hmac-sha1   202516.18k   322609.98k   432125.60k   480232.03k   496191.36k

And here is output for 'speed -evp aes-128-cbc-hmac-sha1' for HEAD, i.e.
without engine.

aes-128-cbc-hmac-sha1   237351.62k   326968.34k   432138.62k   482383.80k   497401.86k

Engine overhead is significant at 16-byte chunks *only* and hardly
noticeable otherwise. What severe performance hits are we talking about?
EVP has overhead, but I can't see that it's engine specific. Combined
cipher+hash implementations do minimize EVP overhead (you don't have to
make two EVP calls), but that was not the reason for implementing the
above-mentioned stitched modes; higher instruction-level parallelism was.

 For features that are part of CPUs - rather than plug-in cards - my
 preference would be that the implementation is inline so that every
 last drop of performance can eventually be wrung out of it.

As mentioned, there are other factors in play, such as maintenance,
adoption time...





Re: [openssl.org #2627] SPARC T4 support for OpenSSL

2011-11-05 Thread Peter Waltenberg
There are some fairly severe performance hits in engine support unless the
engine includes all the submodes as well. That includes things you are just
starting to play with now, like the combined AES+SHA1 on x86.

For features that are part of CPUs - rather than plug-in cards - my
preference would be that the implementation is inline so that every last
drop of performance can eventually be wrung out of it. With card drivers,
the internal call stack in OpenSSL is generally noise compared with the
driver stack and card I/O overhead, so there's no significant gain in
inlining; plus the potential pain of cards/drivers changing under you makes
inlining a net loss anyway.

Of course, that's only an opinion, and it's up to you to decide what to
implement.

Pete

-owner-openssl-...@openssl.org wrote: -
To: darren.mof...@oracle.com
From: "Andy Polyakov via RT" r...@openssl.org
Sent by: owner-openssl-...@openssl.org
Date: 11/05/2011 09:44PM
Cc: openssl-dev@openssl.org
Subject: Re: [openssl.org #2627] SPARC T4 support for OpenSSL

 As some of you may be aware, the new Oracle SPARC T4 processor has
 hardware crypto support just like its predecessors SPARC T1, T2, T3.
 However, unlike the prior SPARC T series processors, the hardware crypto
 is not hyper-privileged but is instead new instructions accessible from
 unprivileged userland code - basically a very similar model to what is
 available in Intel processors with AES-NI.

Cool! Or should I say "finally"? :-)

BTW, https://blogs.oracle.com/BestPerf/entry/20110928_sparc_t4_openssl
says that 3.46GHz Westmere delivers 660MBps on the AES-256-CBC benchmark.
Given the result, we are talking about encrypt. I wonder about decrypt.
Westmere delivers 3x on CBC decrypt [Sandy Bridge more than 5x] and the
question is how does it, a parallelizable mode, look on T4?

 but it is much more than just AES. The hardware supports instructions
 for:
 	AES, DES, Camellia
 	MD5, SHA1, SHA256, SHA512
 	MONTMUL, MPUL

Is there publicly available documentation?
If not, is there non-publicly available documentation and under which
terms?

 We currently have a new "t4" engine implemented that provides support
 for AES, MD5, SHA1, SHA256/384/512 using the hardware instructions on
 the SPARC T4 processor. We implemented this as a new engine because at
 the time we made the choices this is how Intel AES-NI support was done
 in OpenSSL CVS head. We have noticed that the Intel AES-NI support has
 changed and it is now directly integrated rather than being an engine.
 We would like to contribute patches for SPARC T4 support to OpenSSL
 with the intention of them being part of the core release. We can
 contribute the engine as we currently have it if that is of interest.
 However, we would like to know if the OpenSSL community believes that
 SPARC T4 should be done similar to Intel AES-NI instead and integrated
 "inline" into the main implementation.

I can't speak for the whole community, but I'd argue that an "inline"
implementation can become of interest only if it targets FIPS. Otherwise
an engine is just as appropriate, especially if it's a patch-free engine
in http://www.openssl.org/contrib/intel-accel-1.4.tar.gz style. This
would allow for easier and faster adoption (that's what matters, right?).
As for the possibility of integrating it in the core release: by taking
code into core we also implicitly undertake its maintenance. And the
latter is problematic, because we don't have access to appropriate
hardware or documentation. The question whether somebody [like Oracle]
wants to do something about it is implied.



Re: Reseed testing in the FIPS DRBG implementation

2011-08-20 Thread Peter Waltenberg
That interpretation seems - brain dead - to be polite.

The problem is that running the health check trashes the state of the DRBG
you are using, so running it on every reseed means that the DRBG is
re-initialized each time - and you may as well be in PR mode anyway.

O.K., you could save and restore the state before reseeding - but it's
excessive and pointless - and if you restore the state, running the health
check proves nothing. It's really, really unlikely that the DRBG *code* is
corrupted even in a general purpose OS (and even more unlikely if it's a
hardware implementation) and far more likely that its internal state *data*
is messed up - which the health check won't find.

I think your contact at the lab needs to check the meaning of this with
NIST.

Peter

-owner-openssl-...@openssl.org wrote: -
To: openssl-dev@openssl.org
From: Henrik Grindal Bakken h...@ifi.uio.no
Sent by: owner-openssl-...@openssl.org
Date: 08/16/2011 05:50PM
Subject: Re: Reseed testing in the FIPS DRBG implementation

"Dr. Stephen Henson" st...@openssl.org writes:

 The OpenSSL DRBG implementation tests all variants during the POST and
 also tests specific versions on instantiation. That includes an
 extensive health check and a KAT. So in that sense there will be two
 KATs before a reseed takes place but no KAT immediately before a reseed
 takes place. According to my reading of the standard you don't need a
 KAT before reseed if you support PR. However different labs will have
 different opinions and should we require one it can be added easily
 enough.

I've now asked our contact at the lab, and he says that you're only
exempted from the reseed test if you actually do prediction resistance.
From what I can see in the code, prediction resistance isn't used when
using the FIPS_drbg_method(), since fips_drbg_bytes() calls
FIPS_drbg_generate() with 0 as the prediction_resistance argument; hence
the test is lacking.

--
Henrik Grindal Bakken h...@ifi.uio.no
PGP ID: 8D436E52
Fingerprint: 131D 9590 F0CF 47EF 7963 02AF 9236 D25A 8D43 6E52



Re: [CVS] OpenSSL: openssl/ CHANGES openssl/crypto/ecdsa/ ecs_ossl.c

2011-05-27 Thread Peter Waltenberg
FWIW: This isn't like RSA blinding where the impact was significant. The
performance impact of this is negligible; it may as well be unconditional.

Peter

-owner-openssl-...@openssl.org wrote: -
To: openssl-dev@openssl.org
From: Mounir IDRASSI mounir.idra...@idrix.net
Sent by: owner-openssl-...@openssl.org
Date: 05/28/2011 12:49AM
Subject: Re: [CVS] OpenSSL: openssl/ CHANGES openssl/crypto/ecdsa/ ecs_ossl.c

Hi,

I agree with Bruce: we should default to a constant time behavior, so
definitely the code must use #ifndef instead of #ifdef, since the patch
makes the scalar a fixed bit-length value. I think the paper authors got
confused when they wrote the code.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 5/27/2011 4:10 PM, Bruce Stephens wrote:
 "Dr. Stephen Henson" st...@openssl.org writes:
 [...]
  +#ifdef ECDSA_POINT_MUL_NO_CONSTTIME
  +		/* We do not want timing information to leak the length of k,
  +		 * so we compute G*k using an equivalent scalar of fixed
  +		 * bit-length. */
  +
  +		if (!BN_add(k, k, order)) goto err;
  +		if (BN_num_bits(k) <= BN_num_bits(order))
  +			if (!BN_add(k, k, order)) goto err;
  +#endif /* def(ECDSA_POINT_MUL_NO_CONSTTIME) */
  +
 Almost certainly my misunderstanding, but isn't the sense of this wrong?
 That is, surely the new code should be added if we want the CONSTTIME
 behaviour (i.e., if NO_CONSTTIME is not defined), and we'd want that by
 default, so it should be #ifndef rather than #ifdef? (I agree it's
 #ifdef in the eprint too, which increases the likelihood that I'm just
 misunderstanding something.)
 [...]



Re: EC curve names

2011-03-21 Thread Peter Waltenberg
The only good way I found was to use the defined OIDs - something like
this. No guarantees this table is correct; you should check it.

const char *NIST_by_OID[] = {
  "1.2.840.10045.3.1.1", /* P-192 */
  "1.3.132.0.33",        /* P-224 */
  "1.2.840.10045.3.1.7", /* P-256 */
  "1.3.132.0.34",        /* P-384 */
  "1.3.132.0.35",        /* P-521 */
  "1.3.132.0.1",         /* K-163 */
  "1.3.132.0.26",        /* K-233 */
  "1.3.132.0.16",        /* K-283 */
  "1.3.132.0.36",        /* K-409 */
  "1.3.132.0.38",        /* K-571 */
  "1.3.132.0.15",        /* B-163 */
  "1.3.132.0.27",        /* B-233 */
  "1.3.132.0.17",        /* B-283 */
  "1.3.132.0.37",        /* B-409 */
  "1.3.132.0.39",        /* B-571 */
  NULL
};

OBJ_txt2nid() will handle these as well as the names you are more familiar
with.

Peter





From:   Massimiliano Pala
To: OpenSSL Devel openssl-dev@openssl.org
Date:   22/03/2011 10:08 AM
Subject:EC curve names
Sent by:owner-openssl-...@openssl.org

Hi all,

I was wondering: how do I verify whether a pkey used in an ECDSA
certificate is on one specific curve? Or, better, how do I easily print
out the text identifier of the curve used in a certificate? That would be
a useful addition to the output of an ECDSA certificate. Something like:

...
Curve Name: secp384r1
...

Or better, is there an easy way to know if a curve is one of the
NIST-approved (Suite B) ones?

Cheers,
Max


--

Best Regards,

 Massimiliano Pala

--o
Massimiliano Pala [OpenCA Project Manager]   ope...@acm.org

project.mana...@openca.org

Dartmouth Computer Science Dept   Home Phone: +1 (603) 369-9332
PKI/Trust Laboratory  Work Phone: +1 (603) 646-8734
--o
People who think they know everything are a great annoyance to those of us
who do.

-- Isaac Asimov





Re: NIST SP 800-90 recommended RNGs

2010-07-26 Thread Peter Waltenberg
The OpenSSL team already has FIPS-compliant SP 800-90 PRNG code.

The SP 800-90 PRNGs are fairly greedy, however, so a rewrite of the seed
source is probably needed as well - and that's a tough problem.

Peter






  
From:   Kriloff kril...@gmail.com
To: openssl-dev@openssl.org
Date:   27/07/2010 12:06 AM
Subject:NIST SP 800-90 recommended RNGs
Sent by:owner-openssl-...@openssl.org





Are there any plans on implementing any of NIST SP 800-90
(
http://csrc.nist.gov/publications/nistpubs/800-90/SP800-90revised_March2007.pdf
)
recommended RNGs, for example CTR_DRBG, in OpenSSL?




RE: [openssl.org #2293] OpenSSL dependence on external threading functions is a critical design flaw

2010-06-28 Thread Peter Waltenberg

...
For vendors it could mean that OpenSSL is shipped with
a default set of locking callbacks (e.g. Solaris could use
pthreads/Solaris threads) but that is obviously not a generic solution
and will not be suitable for all platforms/distributions.
I seem to remember having this conversation a few months ago... :)


I disagree - at least for the OS vendors, defaulting to the standard
threading model *is* the sane thing to do. I'd go further and say that
using the default OS thread model is the only reasonable choice for an
OS-supplied library.

All OpenSSL needs to do is provide that as a config time option and most
of these problems go away.

Anyone with a really unusual configuration can build and ship their own
OpenSSL libraries without the default thread support built in and provide
their own callbacks - as they presumably do now.

Peter

Peter Waltenberg
Architect
IBM Crypto for C Team
Tivoli Software
Australia Development Laboratory
Gold Coast

Phone: +61 7 5552 4016
Fax: +61 7 5571 0420

ICC Home page:
https://cs.opensource.ibm.com/projects/icc




 
From:   Mark Phalan mark.pha...@sun.com
To: openssl-dev@openssl.org
Date:   29/06/2010 02:00 AM
Subject:RE: [openssl.org #2293] OpenSSL dependence on external threading functions is a critical design flaw
Sent by:owner-openssl-...@openssl.org





On Mon, 2010-06-28 at 07:06 -0700, David Schwartz wrote:
  Guess I replied too quickly... I see why you thought I was spreading
  misinformation. Of course I agree that every library could be modified
  to use atomic instructions available on their CPU to synchronize. It's
  just a lot of modifications to be made considering the vast amount of
  code out there that uses OpenSSL. I'd like to see OpenSSL take care of
  this (which is what I though you were arguing for too).
 
  -M

 Actually, that wouldn't work. What if you're using a threading library
 that permits threads to run in different SMP domains? In that case, the
 atomic instructions would only synchronize between threads running in
 the same SMP domain.

 If one thread in one SMP domain attempts to set the thread
 synchronization primitives at the same time as a thread in another SMP
 domain does so, the CPU synchronization instructions would be
 insufficient. Such a platform might require a specific "synchronize
 memory view across domains" instruction when a lock was acquired or
 released, for example.

 I think you're missing the point that OpenSSL doesn't just support the
 typical platform-provided threading libraries. It supports *any*
 threading library the compiler can support. There is no guarantee that
 CPU-atomic operations are atomic in the same scope as any possible
 threading library the compiler can support requires.

 Trying to supply the library with atomic operations known safe for the
 required atomicity of the threading library in use creates a
 chicken-and-egg problem. You would need atomic operations to specify
 the atomic operations.

Point taken. I suppose what I'm really after is a way out for libraries
to be able to do the right thing with a minimum of code change. From
what I've seen in the wider opensource world is that either libraries
don't set the callbacks at all or set them without even checking to see
if they're already set. The way OpenSSL currently works I don't see a
simple solution. For vendors it could mean that OpenSSL is shipped with
a default set of locking callbacks (e.g. Solaris could use
pthreads/Solaris threads) but that is obviously not a generic solution
and will not be suitable for all platforms/distributions.
I seem to remember having this conversation a few months ago... :)

-M


Re: [openssl.org #2232] OpenSSL 1.0.0 - Mac OS X Universal Binary Build Link errors

2010-04-11 Thread Peter Waltenberg
Either build with no-asm (and throw away a lot of performance), or build it
multiple times and glue the results together with lipo.

Peter





   
From:   Yvan BARTHÉLEMY via RT r...@openssl.org
To:
Cc: openssl-dev@openssl.org
Date:   11/04/2010 10:19 PM
Subject:[openssl.org #2232] OpenSSL 1.0.0 - Mac OS X Universal Binary Build Link errors
Sent by:owner-openssl-...@openssl.org





Hello,

I'm trying to build an Universal Binary (with 4 darwin architectures)
version of OpenSSL libraries for Leopard.

When linking, ld returns the following error message:

Undefined symbols:
  K0, referenced from:
  _sha256_block_data_order in libcrypto.a(sha256-x86_64.o)
  _sha512_block_data_order in libcrypto.a(sha512-x86_64.o)
  _OPENSSL_ia0cap_P, referenced from:
  _AES_cbc_encrypt in libcrypto.a(aes-x86_64.o)
  _RC4_set_key in libcrypto.a(rc4-x86_64.o)
  _RC4_options in libcrypto.a(rc4-x86_64.o)
ld: symbol(s) not found
collect2: ld returned 1 exit status
make[4]: *** [link_a.darwin] Error 1
make[3]: *** [do_darwin-shared] Error 2
make[2]: *** [libcrypto.1.0.0.dylib] Error 2
make[1]: *** [shared] Error 2


I hacked the generated .s to replace symbols references this way, then
deleted the .o to force the build:
K0 = K256 (sha256-x86_64.s)
K0 = K512 (sha512-x86_64.s)
_OPENSSL_ia0cap_P = _OPENSSL_ia32cap_P (aes-x86_64.s, rc4-x86_64[x2])

This allows me to build, but I assume it's not the right way to do it, and
the build might be broken as there might be other side effects that I was
unable to find (assembly is definitely not my mother tongue...).

I'd like to know the right way to do this, and to know if I need to do the
build again in case it has broken the binaries.

PS: Re-sent, as I had no feedback from the ticket system after 2 hours
(message lost or in queue ?)

Thanks,
Yvan





Re: Symmetric algorithms with Cell architecture

2010-04-06 Thread Peter Waltenberg
http://www.ibm.com/developerworks/power/library/pa-cellperf/

AES has been done before, unfortunately most of the links from that page
don't work.
Google also shows a few hits.

Peter




 
From:   Eduardo Ruiz tooran...@gmail.com
To: openssl-dev@openssl.org
Cc: openssl-us...@openssl.org
Date:   04/06/2010 05:03 AM
Subject:Symmetric algorithms with Cell architecture
Sent by:owner-openssl-...@openssl.org





Is there anyone working with symmetric algorithms on the Cell platform? I
want suggestions for working with AES, taking advantage of the IBM Cell
SPUs.

Thanks in advance

[attachment smime.p7s deleted by Peter Waltenberg/Australia/IBM]
[attachment PGP.sig deleted by Peter Waltenberg/Australia/IBM]



RE: libcrypto safe for library use?

2010-03-31 Thread Peter Waltenberg

Which is essentially what we did at IBM to resolve this - but - closed
ecosystem. It was a lot easier.

What you could do:

Provide system default callbacks, allow them to be overridden at most once
ONLY if it's done before OpenSSL is usable - i.e. before any
OpenSSL_add_all_algorithms() type calls are made.

Document that this can only be done from the top level executable NOT from
a shared library - and the top level app can switch the lock model if it
wants. Changing the locking model is something that really can only be done
by whatever owns main() anyway - it's not something that can ever be safe
in a shared library.

Peter



  
From:   David Schwartz dav...@webmaster.com
To: mark.pha...@sun.com, openssl-dev@openssl.org
Date:   31/03/2010 09:21 PM
Subject:RE: libcrypto safe for library use?
Sent by:owner-openssl-...@openssl.org






Mark Phalan wrote:

 Imagine the above case happening in one thread while another thread
 makes a similar seemingly innocuous call with a similar effect (dlopen
 a
 library which uses OpenSSL). What should pkinit and the second library
 which uses OpenSSL do? If they set callbacks they'll be racing against
 each other. If they don't they will not be MT safe.
 The application never sets the callbacks because as far as it's aware
 it's only calling POSIX APIs.

If we're talking about existing code, they *must* already set callbacks,
otherwise they're hopelessly broken. Since the setting of callbacks will
unsafely override the set defaults, the suggested fix (to default to
callbacks suitable for the platform's default threading model) actually
will
*not* fix this case. If this is the case we care about, why implement a fix
that won't fix this case?

The purported advantage of this fix is that it solves the "horse has
already left the stable" case, where we aren't willing to change the
libraries that call OpenSSL. But it doesn't fix that case.

The only way to fix that case that I can think of is for OpenSSL to start
out using callbacks that are safe for the platform's default threading API
and to ignore, but report success, on all attempts to change the locking
callbacks. That may actually be the right behavior for the existing API.
(And, of course, OpenSSL would implement a newer, better API that new
applications and libraries should use that would include reference counting
and being informed of the threading model in use.)

DS





RE: libcrypto safe for library use?

2010-03-28 Thread Peter Waltenberg
Historically, I suspect the reason there were no default callbacks is that
a sizeable proportion of OpenSSL users didn't use threading at all, and
the baggage that hauling in the thread libraries imposed was significant.

I don't think that's an issue anymore - threading is the common case now.

But - there's another issue you've all missed. You can have multiple
independently developed libraries in the same process all using OpenSSL -
who gets to set the thread callbacks ?. They have to be set to ensure
thread safe operation, but no individual library can assume that someone
else has done it now.

Even better - a library does set the callbacks - and gets unloaded while
other libs are still using OpenSSL. (Not just a what if - that one I've
seen in the wild).

So - yes, you probably do need to set the callbacks by default now, and you
probably need to make that API a no-op as well. By all means have a compile
time option to restore the old behaviour for the set of users who need the
legacy behaviour - but that's likely a very small set now.


Peter




  
From:   David Schwartz dav...@webmaster.com
To: mark.pha...@sun.com, openssl-dev@openssl.org
Date:   27/03/2010 07:57 AM
Subject:RE: libcrypto safe for library use?
Sent by:owner-openssl-...@openssl.org






Mark Phalan wrote:

 Unfortunately that's not really practical. To take an example I'm
 familiar with - libgss. libgss can end up calling into OpenSSL in the
 following way:

 libgss - kerberos - pkinit plugin - openssl

 It's simply not practical to change libkrb5 and libgss and all
 applications using those libraries.

In this case, I presume 'pkinit' only supports one threading model (or one
set of compatible threading models). So it can set the callbacks. Any
application that uses 'pkinit' must be okay with those callbacks.

  It can't do that, it has no idea what threading model the application
  is
  using. It would have no way to know whether the locks it provided
  were
  suitable or sensible.


 Well on Solaris it's most likely going to be using either POSIX threads
 or Solaris threads which are interoperable and can be used in the same
 application. If an application wants to do something unusual it can set
 the callbacks. I'm not suggesting that applications should lose the
 power to set locking callbacks.
 Having default callbacks will simply mean that applications which don't
 use OpenSSL or don't set callbacks will be more likely to work.

Then set default callbacks in your code that calls OpenSSL. OpenSSL can't
do
it, because it has no idea what threading models your code uses.

  I agree. Your library should impose a requirement on any application
  that
  uses it that it inform you of the threading model it's using so that
  you can
  use appropriate locking as well. Then you can set the OpenSSL locking
  callbacks (just pass them through) and there's no chance of a race or
  problem.

 See above. That's simply not practical (the horse has left the stable).

If the horse has left the stable and the code supports more than one
threading model, then the problem is provably unsolvable. There is simply
no way for OpenSSL to know what kind of locks are adequate. If your code
supports only one threading model, then you can tell OpenSSL this by
setting the callbacks.

Multi-threading issues, as a general rule, have to be resolved at
application level. It cannot be done by libraries because they don't have
sufficient knowledge. Things like signal handlers are process-level
resources. The same is true of what kind of mutexes are needed to protect
structures from concurrent accesses that come into a library from outside
it.

 I should also point out that libraries are setting the callbacks already.
 libldap_r (openldap) for example. I haven't done an extensive survey of
 common 

Re: libcrypto safe for library use?

2010-03-28 Thread Peter Waltenberg
You can't push and pop the callbacks.

The software is running in multiple threads and being used by multiple
independently developed libraries at the same time.
Do you really plan to swap the thread locking mechanism (which is
protecting you while you swap it around) while threads are running?

I hit this with IBM's bastard son of OpenSSL a few years back - the only
viable fix I could come up with was to internalize the callbacks and set
them to the OS default.

I agree there will be some users who want to use their own threading model
and that should be catered for but I don't think it should be the default
now.
Making the old behaviour a compile time option still works for closed
ecosystem users - but for most end users - i.e. on the Linux's or BSD's
having OpenSSL defaulting to sane and safe system locking is the best
solution I can come up with.

The memory callbacks have the same issue in that only one caller in the
process can set them sanely - though those at least default to system
default malloc()/free() so everyone leaving them alone works.
We don't have that option with the thread callbacks - they must be set, but
there's no safe way for multiple users in the same process to do that at
present - all I'm suggesting is that you fix this in the same way the
memory callback use is made safe.
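What Peter is proposing can be sketched in miniature: locking defaults to the OS primitive unless a caller explicitly overrides it. This is a hypothetical illustration (the names `lib_set_locking_callback`, `lib_lock` and the pthreads choice are mine, not OpenSSL's API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Sketch: locking defaults to the system primitive (pthreads here), and an
 * application with its own threading model can still install a callback.
 * Names are illustrative, not OpenSSL's. */

typedef void (*lock_cb_t)(int mode, int n);  /* mode!=0: lock, mode==0: unlock */

#define NUM_LOCKS 8
static pthread_mutex_t default_locks[NUM_LOCKS];
static pthread_once_t locks_once = PTHREAD_ONCE_INIT;
static lock_cb_t user_cb = NULL;

static void init_default_locks(void) {
    for (int i = 0; i < NUM_LOCKS; i++)
        pthread_mutex_init(&default_locks[i], NULL);
}

static void default_lock_cb(int mode, int n) {
    if (mode)
        pthread_mutex_lock(&default_locks[n]);
    else
        pthread_mutex_unlock(&default_locks[n]);
}

/* Override hook for closed-ecosystem users; everyone else gets the system
 * default and is safe without doing anything. */
void lib_set_locking_callback(lock_cb_t cb) { user_cb = cb; }

int lib_using_default(void) { return user_cb == NULL; }

void lib_lock(int mode, int n) {
    pthread_once(&locks_once, init_default_locks);
    (user_cb ? user_cb : default_lock_cb)(mode, n);
}
```

With this arrangement, a process full of independently developed libraries that never touch the callbacks still gets real locking, which is exactly the "sane and safe by default" behaviour argued for above.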



Peter







On Mon, Mar 29, 2010 at 12:03 AM, Peter Waltenberg pwal...@au1.ibm.com
wrote:
  I don't think that's an issue anymore - threading is the common case now.

common case != everybody

  But - there's another issue you've all missed. You can have multiple
  independently developed libraries in the same process all using OpenSSL -
  who gets to set the thread callbacks ?. They have to be set to ensure
  thread safe operation, but no individual library can assume that someone
  else has done it now.

  Even better - a library does set the callbacks - and gets unloaded while
  other libs are still using OpenSSL. (Not just a what if - that one I've
  seen in the wild).

Even if OpenSSL would submit to your wish, then that would /not/ fix your
failure scenario above. At least not for everybody who needs to replace
that 'default implementation', whatever it will be.

The proper way to fix that kind of conundrum (at least one of the ways and
IMO the most feasible for OpenSSL) is to allow callers (= other libs and
app using OpenSSL) a way to replace-and-restore those callbacks. push and
pop if you will, but I'd rather see that bit of management done by the
outside world (from the perspective of OpenSSL as a highly portable lib).

So that would be an API where you either

a) have an extra API function which delivers the references to the
currently installed callbacks (NULL if none are set up), or

b) a kinda signal() style API function set up where the function which is
used to register those callbacks with OpenSSL returns a reference to the
previously installed callback.

Both ways allow for multiple independent setup and termination
implementations like this (for style (b)):

init:  /* global */ f *old_callback_ref = CRYPTO_set_lock_callback(my_lock_func);

... // do thy thing, lib/app

exit:  /* restore original */ CRYPTO_set_lock_callback(old_callback_ref);

and the only thing that needs to be changed for style (b) is the return
type of CRYPTO_set_lock_callback and friends (i.e. the dynlocks):

void CRYPTO_set_locking_callback(void (*locking_function)(int mode,
   int n, const char *file, int line));

-- e.g.

typedef void CRYPTO_userdef_locking_function(int mode,

   int n, const char *file, int line);

CRYPTO_userdef_locking_function *
CRYPTO_set_locking_callback(CRYPTO_userdef_locking_function *new);


which is at least compile-time backwards 'compatible' as current code
expects a 'void' return type for this API.
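Style (b) is small enough to sketch end to end. Assuming the modified setter signature proposed above (the setter name is shortened and the module code around it is hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of style (b): the setter returns the previously installed callback,
 * signal()-style, so every lib can save and restore around its own lifetime. */

typedef void locking_fn(int mode, int n, const char *file, int line);

static locking_fn *installed_cb = NULL;

locking_fn *set_locking_callback(locking_fn *newcb) {
    locking_fn *old = installed_cb;
    installed_cb = newcb;
    return old;
}

/* A module then brackets its usage like this: */
static void my_lock_func(int mode, int n, const char *file, int line) {
    (void)mode; (void)n; (void)file; (void)line;  /* real locking goes here */
}

static locking_fn *saved_cb;

void module_init(void) { saved_cb = set_locking_callback(my_lock_func); }
void module_exit(void) { set_locking_callback(saved_cb); }
```

The save/restore pair is the whole point: a library that unloads while others still use the crypto layer puts back whatever was there before, instead of leaving a dangling pointer behind.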



  So - yes, you probably do need to set the callbacks by default now, and
  you

Nope. Doesn't solve anything. (Maybe 'solves' -- on /some/ platforms --
'weird issues' happening to those, em, programmers who don't check sample
code or read man pages and forget about those locks altogether, but that's
a whole 'nother subject matter.)



My 2 cents, donated to the cause.

--
Met vriendelijke groeten / Best regards,

Ger Hobbelt

--
web:    http://www.hobbelt.com/
       http://www.hebbut.net/
mail:   g...@hobbelt.com
mobile: +31-6-11 120 978
--


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: libcrypto safe for library use?

2010-03-28 Thread Peter Waltenberg

The right way to do it is have the app set it up at init time, either
through calling the OpenSSL functions directly or through a module/lib's
'init/setup' API if that's the one 'responsible' for OpenSSL activity in
your application.
Just in case some lib developer takes matters in his own hands a bit too
much, a 'only first invocation matters' style of setup might be used for
these and other bits of OpenSSL setup.


Sure - it works if you have a simple application, main -> OpenSSL;
even main -> lib doing SSL -> OpenSSL still works.

What's giving us grief (and I suspect the person who first raised this
grief) is:
main -> lib that needs SSL to do client server comms -> OpenSSL
     -> Another lib that does client server comms -> OpenSSL
     -> Another lib that does crypto -> OpenSSL

All the libraries of the big fat composite application expect to be able to
access OpenSSL's functions, all were created independently - the top level
app doesn't do SSL or crypto at all - it just uses libraries that need to
do SSL or crypto to function.

Yes, it's ugly - alas it's also what happens as time goes by and functions
that were regarded as standalone applications are now library functions -
you only have to look at any of the Unix desktops to see the sort of chaos
that results.

So - even if you don't want to change the defaults  - at least add an
option to allow OpenSSL to be built with the system thread model by default
so all those modern  (but oh so ugly) apps can still run safely.


Pete


   
From: Ger Hobbelt g...@hobbelt.com
To: openssl-dev@openssl.org
Date: 03/29/2010 11:47 AM
Subject: Re: libcrypto safe for library use?
Sent by: owner-openssl-...@openssl.org
   





Hrgh. No, you don't init or terminate anything when you're already trying
to execute it from several threads; such init and termination should be
done before and after that.

When a lib (instead of the app itself) is using OpenSSL on its own and that
lib requires a certain approach it is probably coded to set it up. With an
OpenSSL API augmented a la style (a) it can do this safely, i.e.:

init:
if (get_callbacks() != NULL)
{  // flag this situation for termination time and leave those callbacks
the hell alone
}


When the app is using several such libraries and/or doing OpenSSL work of
its own, it should call the OpenSSL init and shutdown/termination APIs
itself to ensure it is set up during the entire lifetime of the app, no way
around that.

What OpenSSL /could/ do  is provide a few bits and pieces so that
libraries/modules who think they should be responsible for setting up and
shutting down OpenSSL themselves can check whether someone has done so
already beforehand and change their own actions accordingly; preferably by
leaving the setup alone. Indeed, wrong words by me 'push and pop'; it ain't
a stack.

The 'was OpenSSL set up already? And in a way we expect/require?' check has
to be performed in such [third party?] modules/libraries, at least in
modules/libs who would attempt to init/shutdown OpenSSL on their own.
What OpenSSL init/shutdown code /could/ do to help is maybe 'count' the
number of init and shutdown invocations and only really act on the first
one.
And setting up the callbacks counts as individual pieces of init code, so
they might be 'counted' individually (lock, dynlock, threadid).
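The counting scheme can be sketched in a few lines (names are hypothetical; a real version would guard the counter with a lock or an atomic):

```c
#include <assert.h>

/* Sketch of 'only the first init and the last shutdown really act':
 * every lib calls lib_init()/lib_shutdown() freely, and the underlying
 * setup/teardown runs exactly once. */

static int init_count = 0;
static int is_set_up = 0;

void lib_init(void) {
    if (init_count++ == 0)
        is_set_up = 1;        /* real one-time setup (callbacks etc.) here */
}

void lib_shutdown(void) {
    if (init_count > 0 && --init_count == 0)
        is_set_up = 0;        /* real teardown here */
}

int lib_is_set_up(void) { return is_set_up; }
```

This is exactly what keeps the "library unloads while others are still using OpenSSL" case safe: the teardown only fires when the last user has gone away.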

That would fix this scenario:

On Mon, Mar 29, 2010 at 2:02 AM, Peter Waltenberg pwal...@au1.ibm.com
wrote:
  You can't push and pop the callbacks.

  The software is running in multiple threads and being used by multiple
  independently developed libraries at the same time.

--


  Do you really plan to swap the thread locking mechanism - (which is
  protecting you while you swap the locking mechanism around) while threads
  are running ?.

Definitely not.
  We don't have that option with the thread callbacks - they must be set,
  but
  there's no safe way for multiple users in the same process to do that
  at
  present - all I'm suggesting is that you fix this in the same way the
  memory callback use is made safe.

When the case is init from multiple threads simultaneously, then the memory
callback setup isn't 'safe' for that kind of thing either (none would).
It's just that that particular init is almost never invoked by any 'users'
as malloc/free is a /very/ common answer there. pthreads or what have you
aren't so common so

Re: [openssl.org #2177] New CFB block length breaks old encrypted data

2010-03-01 Thread Peter Waltenberg
I'm not sure the old code was wrong either.
It's unintuitive, but it is at least possible to pass the NIST compliance
tests with the old code - are you sure that's going to be possible with the
new code ?

Yes, I'm aware that there have been a lot of complaints about CFB in the
past - but it was at least functional for all the awkwardness.

Peter Waltenberg




  
From: Kurt Roeckx via RT r...@openssl.org
To:
Cc: openssl-dev@openssl.org
Date: 03/01/2010 06:44 PM
Subject: [openssl.org #2177] New CFB block length breaks old encrypted data
Sent by: owner-openssl-...@openssl.org
  

  





Hi,

With version 0.9.8m we're unable to read encrypted data written by
older versions.  The commit that breaks it has this changelog:
  The block length for CFB mode was incorrectly coded as 1 all the time. It
  should be the number of feedback bits expressed in bytes. For CFB1 mode
  set this to 1 by rounding up to the nearest multiple of 8.

And this diff:
--- crypto/evp/evp_locl.h
+++ crypto/evp/evp_locl.h
@@ -127,9 +127,9 @@ BLOCK_CIPHER_def1(cname, cbc, cbc, CBC, kstruct, nid, block_size, key_len, \
 #define BLOCK_CIPHER_def_cfb(cname, kstruct, nid, key_len, \
  iv_len, cbits, flags, init_key, cleanup, \
  set_asn1, get_asn1, ctrl) \
-BLOCK_CIPHER_def1(cname, cfb##cbits, cfb##cbits, CFB, kstruct, nid, 1, \
-  key_len, iv_len, flags, init_key, cleanup, set_asn1, \
-  get_asn1, ctrl)
+BLOCK_CIPHER_def1(cname, cfb##cbits, cfb##cbits, CFB, kstruct, nid, \
+  (cbits + 7)/8, key_len, iv_len, \
+  flags, init_key, cleanup, set_asn1, get_asn1, ctrl)

 #define BLOCK_CIPHER_def_ofb(cname, kstruct, nid, key_len, \
  iv_len, cbits, flags, init_key, cleanup, \
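For reference, the changed expression just rounds the feedback width in bits up to whole bytes; a minimal standalone check (not OpenSSL code):

```c
#include <assert.h>

/* The new block-size argument: CFB1 and CFB8 still report a block size of 1,
 * while CFB64 reports 8 and CFB128 reports 16. */
static int cfb_block_size(int cbits) { return (cbits + 7) / 8; }
```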

I'm not really sure what to do with this, but I will probably revert
that change for the Debian package.


Kurt



[openssl.org #2162] Updated CMAC, CCM, GCM code

2010-02-04 Thread Peter Waltenberg via RT

(See attached file: ibmupdate1.tgz)

This is an update to the sources (only) for the CMAC, CCM and GCM code we
donated previously.
It rolls up various bug fixes for those who need them collected in one
place, but isn't a full patch to OpenSSL.

Current status.
GCM appears solid now with a 96 bit IV. There may be problems with variable
length IV's.
CCM we have a test failure on one platform - I don't know what's causing
that but it's as likely to be the test code as the implementation.
CMAC we have test failures on several platforms - looks like a real bug but
I haven't had time to investigate in detail yet; again it could be the test
code or the implementation.

Thanks to all those who've sent in bug reports.

Peter


ibmupdate1.tgz
Description: Binary data


Re: bn_mul_add_words() hangs on my linux-x86_64 built

2010-01-08 Thread Peter Waltenberg
run make tests in the OpenSSL build tree, or even openssl speed rsa.
That'll test the code paths with known good code.

If it doesn't hang it's a problem in your code somewhere (try running under
valgrind at that point) - if it does hang, you should get better
diagnostics from make tests.

Peter




 
From: Brendan Plougonven bplougon...@infovista.com
To: openssl-dev@openssl.org
Date: 01/09/2010 01:42 AM
Subject: bn_mul_add_words() hangs on my linux-x86_64 built
Sent by: owner-openssl-...@openssl.org
 

 





I built an application that includes omniORB which statically links to
openssl-0.9.8k and it hangs with the following stack:

#0  0x002aa7774419 in bn_mul_add_words ()
#1  0x002aa77bdd5c in BN_from_montgomery ()
#2  0x002aa77bdb7b in BN_mod_mul_montgomery ()
#3  0x002aa77b63c0 in BN_mod_exp_mont ()
#4  0x002aa77bc93e in witness ()
#5  0x002aa77bc880 in BN_is_prime_fasttest_ex ()
#6  0x002aa77bc324 in BN_generate_prime_ex ()
#7  0x002aa78ca089 in rsa_builtin_keygen ()
#8  0x002aa78c9e20 in RSA_generate_key_ex ()
#9  0x002aa78c05fa in RSA_generate_key ()
#10 0x002aa78b9944 in sslContext::set_ephemeralRSA ()
#11 0x002aa78ba06f in sslContext::internal_initialise ()
#12 0x002aa78b92af in omni::omni_sslTransport_initialiser::attach ()
#13 0x002aa780542c in omni::omni_hooked_initialiser::attach ()
#14 0x002aa7802adb in CORBA::ORB_init ()
#15 0x002aa76f4926 in CORBAorb::initialize ()

That looks like issue 11 in the [BUILD] section of the FAQ.
./config -t and openssl version -p give the same value, linux-x86_64.

I built openssl with:

./Configure shared no-rc5 no-idea enable-fips no-asm no-sse2 linux-x86_64
--prefix=`pwd`

I tried ./config shared, but that led to the same results.

It was built on Red Hat Enterprise Linux AS release 4 (Nahant Update 8).
Linux 2.6.9-67.0.7.EL #1 Wed Feb 27 04:37:13 EST 2008 x86_64 x86_64 x86_64
GNU/Linux
with gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-11)

Would anyone have a clue as to what is wrong with my build ?

Brendan


Bug in IBM contributed AES-CCM code (large AAD)

2009-12-20 Thread Peter Waltenberg
I'll post a full patch at some point - but in the interim.
This isn't so much a bug as something I forgot to go back and fix when I
coded it originally.
CCM will fail with AAD >= 0xff00 bytes as I forgot to add the formatting
bytes for the larger AADs.
Note that it still hasn't been tested with AADs >= 2^32 bytes.

With normal use of CCM this was probably harmless, as it's typically used
with small packets.

--- openssl-0.9.8e.orig/crypto/aes/aes_ccm.c    2009-12-18 08:38:39.0 +1000
+++ openssl-0.9.8e/crypto/aes/aes_ccm.c 2009-12-18 10:29:51.0 +1000
@@ -180,7 +180,8 @@
 unsigned int aadbytes = 0;
 unsigned int offset = 0;
 int outl = 0;
-unsigned int i,j;
+unsigned int i,j,k;
+int aadenc = 2;
 #if defined(AES_CCM_DEBUG)
 int b = 0; /* Index counters to aid formatting during debug */
 int s = 0;
@@ -283,15 +284,22 @@
    if(aad != NULL && aadlen > 0) {
 if(aadlen < (0x10000L - 0x100L)) {
   aadbytes = 2;
+ aadenc = 2;
 } else if(aadlen <= 0xffffffffL) {
   aadbytes = 6;
+ aadenc = 4;
+ A0[0] = 0xff;
+ A0[1] = 0xfe;
 } else {
   aadbytes = 10;
+ aadenc = 8;
+ A0[0] = 0xff;
+ A0[1] = 0xff;
 }
 j = aadlen;
-   for(i = aadbytes-1; i > 0; i--) {
- A0[i] = j & 0xff;
- j >>= 8;
+   for(i = 0, k = aadbytes-1; i < aadenc; i++,k--) {
+ A0[k] = j & 0xff;
+ j = j / 256;
 }
 /* Now roll through the aad ? */
    }
@@ -364,7 +372,7 @@
/* AES_encrypt(CTR,A0,akey); */
EVP_EncryptUpdate(ctx,A0,outl,CTR,AES_BLOCK_SIZE);
printbinCTR(S,s,A0,AES_BLOCK_SIZE);
-   /* Increment the ounter */
+   /* Increment the counter */
AES_CCM_inc(CTR,q);

/* XOR the encrypted counter with the incoming data */
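The AAD length encoding the patch adds follows RFC 3610: lengths below 0xff00 use 2 bytes, lengths up to 2^32-1 get the 0xff 0xfe marker plus 4 bytes, and anything larger gets 0xff 0xff plus 8 bytes. A standalone sketch of just that encoding (function name and buffer layout are illustrative, not the patch's code):

```c
#include <assert.h>
#include <stddef.h>

/* Encode a CCM associated-data length per RFC 3610 into out (>= 10 bytes);
 * returns the number of header bytes written. */
static size_t ccm_encode_aad_len(unsigned long long alen, unsigned char *out) {
    size_t enc, total, i;
    if (alen < 0xff00ULL) {
        enc = 2; total = 2;                    /* plain 2-byte big-endian */
    } else if (alen <= 0xffffffffULL) {
        out[0] = 0xff; out[1] = 0xfe;          /* marker + 4-byte length */
        enc = 4; total = 6;
    } else {
        out[0] = 0xff; out[1] = 0xff;          /* marker + 8-byte length */
        enc = 8; total = 10;
    }
    for (i = 0; i < enc; i++) {                /* big-endian fill from the end */
        out[total - 1 - i] = (unsigned char)(alen & 0xff);
        alen >>= 8;
    }
    return total;
}
```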



Re: [openssl.org #2046] OpenSSL 1.0.0 beta 3 ASM fails on z/Linux 64-bit

2009-09-17 Thread Peter Waltenberg
Doesn't help much, but we do use that code, and at least slightly older
versions do compile and run on 64 bit Z.

(I had to backport it from a later OpenSSL to 0.9.8, so I can't be sure I'm
using exactly the same asm at this point).

I'll double check tomorrow and see if there are differences between the
beta 3 asm and the version we use here.

Peter





   
From: Tim Hudson via RT r...@openssl.org
To:
Cc: openssl-dev@openssl.org
Date: 09/17/2009 06:08 PM
Subject: [openssl.org #2046] OpenSSL 1.0.0 beta 3 ASM fails on z/Linux 64-bit
Sent by: owner-openssl-...@openssl.org
   

   





 I kicked off some builds last night as I was curious as to the answer to
 the question - 0.9.8d fails in make test, 0.9.8k passes in make test.

The 1.0.0 beta 3 fails with the SHA1 asm code and in the AES asm code.
I haven't had a chance to look into this in any detail - just noting that the
out-of-the-box build isn't working. ./config -no-asm works so the issues are
all in the asm code.

0.9.8k passes make test, 0.9.8d fails make test in BN code.

./config
make
make test

tjh:~/work/openssl-1.0.0-beta3/test gdb sha1test
GNU gdb 6.4
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "s390x-suse-linux"...
Using host libthread_db library "/lib64/libthread_db.so.1".

(gdb) run
Starting program: /home/tjh/work/openssl-1.0.0-beta3/test/sha1test

Program received signal SIGILL, Illegal instruction.
sha1_block_data_order () at sha1-s390x.s:13
13  lg  %r0,16(%r15)
Current language:  auto; currently asm
(gdb)


Linux somewhere 2.6.16.21-0.8-default #1 SMP Mon Jul 3 18:25:39 UTC 2006
s390x
s390x s390x GNU/Linux

tjh:~/work/openssl-1.0.0-beta3 gcc -v
Using built-in specs.
Target: s390x-suse-linux
Configured with: ../configure --enable-threads=posix --prefix=/usr
--with-local-prefix=/usr/local --infodir=/usr/share/info
--mandir=/usr/share/man
--libdir=/usr/lib64 --libexecdir=/usr/lib64
--enable-languages=c,c++,objc,fortran,java --enable-checking=release
--with-gxx-include-dir=/usr/include/c++/4.1.0 --enable-ssp --disable-libssp

--enable-java-awt=gtk --enable-gtk-cairo --disable-libjava-multilib
--with-slibdir=/lib64 --with-system-zlib --enable-shared
--enable-__cxa_atexit
--enable-libstdcxx-allocator=new --without-system-libunwind
--with-tune=z9-109
--with-arch=z900 --with-long-double-128 --host=s390x-suse-linux
Thread model: posix
gcc version 4.1.0 (SUSE Linux)

tjh:~/work/openssl-1.0.0-beta3 cat /proc/cpuinfo
vendor_id   : IBM/S390
# processors: 1
bogomips per cpu: 888.01
processor 0: version = FF,  identification = 0117C9,  machine = 2064


[attachment PGP.sig deleted by Peter Waltenberg/Australia/IBM]



Re: interface stability

2009-09-11 Thread Peter Waltenberg

Currently the ABI changes depending on compile time options.
New functionality usually means that some struct needs to get
new members, and all those structs are public, and applications
make direct use of them.  And compile time options will
add those members.

The API for those functions on the other hand might be more
stable, but I'm afraid you won't be able to use them
without knowing the members of the structs.

Which is pretty easy to fix in most cases. Just make sure you have
allocators/deallocators for all internal objects, add functions to access
any internal object members that really need direct access (there are very
few cases where this is needed) and treat the internal objects as blobs.
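The opaque-object pattern being described looks like this in miniature (WIDGET is a made-up example type, not an OpenSSL one; in real code the struct body would live only in the library's .c file):

```c
#include <assert.h>
#include <stdlib.h>

/* Callers see only a forward declaration plus allocator, deallocator and
 * accessors; adding struct members no longer breaks the ABI. */

typedef struct widget_st WIDGET;   /* all callers need to know */

struct widget_st {                 /* hidden inside the library */
    int value;
};

WIDGET *WIDGET_new(void) { return calloc(1, sizeof(WIDGET)); }
void WIDGET_free(WIDGET *w) { free(w); }
void WIDGET_set_value(WIDGET *w, int v) { w->value = v; }
int  WIDGET_get_value(const WIDGET *w) { return w->value; }
```

This is the design OpenSSL itself later adopted wholesale when it made its structs opaque.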

I did this for the bastard son of OpenSSL IBM uses.
About the only thing that gives us some grief is access to some object
members to handle the data conversions needed for the formal NIST
compliance tests - which is internal functional testing anyway; it's not
something end users of the API would normally need to do.

Peter




  
From: Kurt Roeckx k...@roeckx.be
To: openssl-dev@openssl.org
Date: 09/11/2009 08:07 AM
Subject: Re: interface stability
Sent by: owner-openssl-...@openssl.org
  

  





On Tue, Sep 01, 2009 at 02:23:38PM +0200, Mark Phalan wrote:

 In OpenSolaris we follow an interface stability classification system
 which marks interfaces according to how stable they are believed to be.
 You can see more information here if interested:
 http://opensolaris.org/os/community/arc/policies/interface-taxonomy/

 Currently OpenSSL APIs are classified as External which basically
 means that no stability guarantees are made and ABI and API
 compatibility may break at any time. In order to use these interfaces
 within OpenSolaris a contract is required. The interfaces covered by the
 contract believed to be fairly stable are:

 Interface
 -
 ASN1_
 BN_
 BIO_
 CRYPTO_
 EVP_
 HMAC
 OpenSSL_
 OBJ_
 PEM_
 PKCS7
 PKCS12_
 RAND_
 SMIME_
 SSL_
 X509_

 We'd like to promote the above interfaces to a slightly higher level
 of stability so that contracts for use are no longer required in
 OpenSolaris. Is the above list of APIs fairly stable? within lettered
 releases only? any missing APIs or APIs which probably shouldn't be
 there? will 1.0 change things a lot?

Going from 0.9.8 to 1.0.0 they change the soname, so they clearly
want to indicate that it's not binary compatible.  If it was compatible
there would be no need to change the soname.

This issue has been brought up a few times already, and it seems
to me that they want to go to a stable API/ABI but on the other
hand don't want to make any changes to get there.

Currently the ABI changes depending on compile time options.
New functionality usually means that some struct needs to get
new members, and all those structs are public, and applications
make direct use of them.  And compile time options will
add those members.

The API for those functions on the other hand might be more
stable, but I'm afraid you won't be able to use them
without knowing the members of the structs.


Kurt



Re: [openssl.org #1935] AES-GCM, AES-CCM, CMAC updated for OpenSSL1.0 beta 2

2009-05-25 Thread Peter Waltenberg
Sorry about the C++ comments, I just found them in the GCM acceleration
code. I'll fix those.

There's no EVP layer for an encrypt + hash EVP API which would be needed
for GCM and CCM to be usable via an EVP type interface.
AES-CCM also has its own quirks which would bite if you ever wanted to
have it FIPS certified and it was used via a generic upper layer API.
By specification it's not supposed to produce decrypted output if the hash
fails which breaks the normal Init/Update/Update/.../Final pattern.
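A toy illustration of that constraint (the XOR "cipher" and byte-sum "tag" are stand-ins; only the verify-before-release control flow is the point):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* CCM's rule: no plaintext may be released unless the tag verifies over the
 * whole message -- which is why a streaming Init/Update/Final decrypt can't
 * work.  XOR and a byte sum stand in for the real cipher and MAC. */
static unsigned char toy_tag(const unsigned char *p, size_t n) {
    unsigned char t = 0;
    while (n--) t = (unsigned char)(t + *p++);
    return t;
}

int toy_open(unsigned char *out, const unsigned char *in, size_t n,
             unsigned char key, unsigned char tag) {
    unsigned char scratch[64];
    size_t i;
    if (n > sizeof(scratch)) return 0;
    for (i = 0; i < n; i++)
        scratch[i] = in[i] ^ key;          /* decrypt into private scratch */
    if (toy_tag(scratch, n) != tag) {      /* verify BEFORE any output */
        memset(scratch, 0, sizeof(scratch));
        return 0;                          /* tag failure: nothing released */
    }
    memcpy(out, scratch, n);               /* only now hand back plaintext */
    return 1;
}
```

The whole message has to be buffered (or two-passed) before the first byte of plaintext can legally leave the implementation, so a one-shot entry point fits CCM far more naturally than Update/Final does.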

If you mean the use of AES_set_encrypt_key() and AES_encrypt() - you'd have
to get CCM and GCM explicitly tested to get them FIPS certified anyway;
they won't gain FIPS certification just because the underlying AES is FIPS
certified, so there's no real loss there.
However I take the point about not using AES hardware where it exists. The
code IBM uses is software only, and the catch was that there was a fairly
decent performance penalty using EVP and it didn't seem worth the hit when
CCM and GCM are only specified for use with AES.
I'll change GCM and CCM to call the EVP functions.

As a comment, there are a number of hardware cores offering AES-GCM now, so
re-vectoring the whole of AES-GCM is possibly a more desirable option than
using just AES hardware when available.
Creating an encrypt + hash EVP API though - at least one that would also
cope with CCM - just gave me a headache, and there doesn't seem enough gain
to do it for only one algorithm.

As for CMAC, I just copied HMAC - which also lacks upper level EVP style
entry points.  Again, both algorithm families (CMAC/HMAC) probably should
have a single generic EVP wrapper.
CMAC is cleaner in that it does use the underlying EVP calls - the
performance trade off there was against the ability to support multiple
ciphers - no contest.

Peter





   
From: Dr. Stephen Henson st...@openssl.org
To: openssl-dev@openssl.org
Date: 25/05/2009 09:58 PM
Subject: Re: [openssl.org #1935] AES-GCM, AES-CCM, CMAC updated for OpenSSL1.0 beta 2
Sent by: owner-openssl-...@openssl.org
   

   





On Mon, May 25, 2009, Peter Waltenberg wrote:

 Up to the OpenSSL team. I'm happy to do any maintenance required, but
it's
 up to them to merge it - or not.
 Given that there are a number of people using the patch now and AES-GCM
is
 needed for new TLS modes, I'd hope it gets merged.


I had a brief look at the patch. There are quite a few C++ style comments in
there which cause issues on some compilers. If you up the gcc warning levels
these will be obvious (see $gcc_devteam_warn in the Configure script).

This can't go into 1.0.0 because that's in a feature freeze. It could go
into
HEAD (which will be 1.1.0) and 1.0.1 (no branch exists for this yet).

This really needs EVP support though. Applications should avoid use of low
level APIs because they prohibit the use of ENGINEs and such things as FIPS
require the use of EVP.

Steve.
--
Dr Stephen N. Henson. Email, S/MIME and PGP keys: see homepage
OpenSSL project core developer and freelance consultant.
Homepage: http://www.drh-consultancy.demon.co.uk


Re: AES-CCM and -GCM in Release 1.0.0?

2009-05-24 Thread Peter Waltenberg
I'm working on updating the patch to apply cleanly to the Beta now.

Peter




 
From: Paul Suhler paul.suh...@quantum.com
To: openssl-dev@openssl.org
Date: 05/22/2009 02:39 AM
Subject: AES-CCM and -GCM in Release 1.0.0?
Sent by: owner-openssl-...@openssl.org
 

 





Hi, everyone.


Is there a particular reason that AES-CCM and AES-GCM are not included in
1.0.0?


Thanks,


Paul
___
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.com
___
Disregard the Quantum Corporation confidentiality notice below.  The
information contained in this transmission is not confidential.  Permission
is hereby explicitly granted to disclose, copy, and further distribute to
any individuals or organizations, without restriction.





RE: [openssl.org #1935] AES-GCM, AES-CCM, CMAC updated for OpenSSL 1.0 beta 2

2009-05-24 Thread Peter Waltenberg
Up to the OpenSSL team. I'm happy to do any maintenance required, but it's
up to them to merge it - or not.
Given that there are a number of people using the patch now and AES-GCM is
needed for new TLS modes, I'd hope it gets merged.

Peter




  
From: Paul Suhler paul.suh...@quantum.com
To: openssl-dev@openssl.org
Date: 05/25/2009 10:48 AM
Subject: RE: [openssl.org #1935] AES-GCM, AES-CCM, CMAC updated for OpenSSL 1.0 beta 2
Sent by: owner-openssl-...@openssl.org
  

  





Thanks very much, Peter.

Will this be made a part of the 1.0.0 distribution, or will it only be
distributed as a patch?

Thanks,

Paul
___
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.com
___
Disregard the Quantum Corporation confidentiality notice below.  The
information contained in this transmission is not confidential.
Permission is hereby explicitly granted to disclose, copy, and further
distribute to any individuals or organizations, without restriction.


-Original Message-
From: owner-openssl-...@openssl.org
[mailto:owner-openssl-...@openssl.org] On Behalf Of Peter Waltenberg via
RT
Sent: Sunday, May 24, 2009 11:55 AM
Cc: openssl-dev@openssl.org
Subject: [openssl.org #1935] AES-GCM, AES-CCM, CMAC updated for OpenSSL
1.0 beta 2


See attached file:  ibm2.patch
(See attached file: ibm2.patch)

This version fixes all known bugs, a few build and test case problems,
(c) notices in a couple of support files I'd missed and updates the
patch so it'll apply cleanly to
openssl-1.0_beta2

The export notifications from the previous patches are still valid.
As noted previously, IBM is donating this code under the terms of the
OpenSSL license.

Note that CMAC won't work with DES/3DES without mods. New keys are
generated on the fly, but the current code doesn't fix the parity of the
new DES keys.
That could be done, but will make the code uglier and slower - if anyone
really needs DES CMAC let me know and I'll fix that.

Peter


Bug in IBM contributed AES-CCM code

2009-03-18 Thread Peter Waltenberg
The routine AES_CCM_inc()  keeps propagating the carry from an overflow,
which means our implementation fails for large blocks of data.

The following code fragment should address that bug.

/*! @brief
  Increment the CCM CTR, which is variable length, big endian
  @param counter the counter to increment
  @param q the number of bytes in the counter
*/
static void AES_CCM_inc(unsigned char *counter,unsigned q) {
  int i;
  for(i = 15; q > 0 ; i--,q--) {
counter[i]++;
if(0 != counter[i] ) break;
  }
}

I'll post a full update to the request tracker when things ease up in my
day job.

Peter



Re: AES-GCM and AES-CCM

2009-02-16 Thread Peter Waltenberg
IBM submitted a patch for AES-GCM and AES-CCM some months ago.
It's sitting in the request tracker, the later version with Aaron
Christensen's acceleration patches and NIST known answer tests is probably
the one you want.

Merging that into the OpenSSL code base (or not) is up to the OpenSSL team.

Peter




From: Roger No-Spam roger_no_s...@hotmail.com
To: openssl-dev@openssl.org
Date: 17/02/2009 00:20
Subject: AES-GCM and AES-CCM








Hi,

Are there any plans to add support for AES-GCM and AES-CCM in openssl in
general and in the openssl-0.9.8 branch in particular?

--
R






