[openssl-dev] Is Intel ADX instruction extension supported by the latest OpenSSL?

2016-08-29 Thread wang xiuyan
Dear OpenSSL developers,

I'm looking at how to adopt the Intel ADX instruction extension in OpenSSL.
The man page below mentions the ADCX/ADOX instructions:

https://www.openssl.org/docs/manmaster/crypto/OPENSSL_ia32cap.html

but I cannot find any ADCX/ADOX-related code in the OpenSSL-1.0.2h
source. I wonder whether OpenSSL already supports ADX. If so, where is
the relevant code?

Thanks,
Xiuyan
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] [openssl.org #4665] Bug with OpenSSL 1.1.0 when path to Perl contains spaces

2016-08-29 Thread John L Veazey via RT
I have a Windows machine where Perl is installed in "C:\Program
Files\Perl64\bin\perl.exe". 


When executing "perl Configure VC-WIN32", I get the following error:


'C:\Program' is not recognized as an internal or external command,
operable program or batch file.


I've fixed the problem by modifying line #2394 in Configure, adding
double quotes around $config{perl}:


my $cmd = "\"$config{perl}\" \"-I.\" \"-Mconfigdata\" \"$dofile\" -o\"Configure\" \"".join("\" \"",@templates)."\" > \"$out.new\"";


-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4665
Please log in as guest with password guest if prompted





Re: [openssl-dev] [PATCH] Support broken PKCS#12 key generation.

2016-08-29 Thread Andy Polyakov
> So let's try a better example. The password is "ĂŻ" (U+0102 U+017b),
> and the locale (not that it *should* matter) is ISO8859-2.

When it comes to locale in *this* case, you should rather ask what your
terminal emulator does and how it interacts with your shell, because
those two are responsible for composing the string that
OPENSSL_[asc|utf8]2uni gets.

> The correct rendition is 01 02 01 7b.

Yes. And now note that it's passed as 'c4 82 c5 bb' to openssl pkcs12 as
an argument or console input under a UTF-8 locale. Otherwise it would
have been passed as 'c3 af'.
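For the record, the two byte sequences can be checked with a quick
Python 3 sketch (Python is used here purely for illustration; the point
is how the locale's encoding determines the bytes OpenSSL receives):

```python
# The same two-character password produces different bytes depending on
# the locale's character encoding, before OpenSSL ever sees it.
pw = "\u0102\u017b"  # U+0102 U+017B

print(pw.encode("utf-8").hex(" "))      # c4 82 c5 bb  (UTF-8 locale)
print(pw.encode("iso8859-2").hex(" "))  # c3 af        (ISO8859-2 locale)
```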

> The "old buggy OpenSSL" rendition, in the ISO8859-2 locale, would be
> to *convert* to ISO8859-2 (giving c3 af).

No, no conversion from UTF-8 to ISO8859-x, or any other conversion, was
*ever* performed. Nor is it performed now. It was and still is all about
how the string is composed by *somebody else*. That's why I said that "we
assume that you don't change locale when you upgrade OpenSSL".

> Then to interpret those bytes
> as ISO8859-1 (in which they would mean "ï") and convert *that* to
> UTF16LE to give 00 c3 00 af

Previously OpenSSL would convert 'c4 82 c5 bb' to '00 c4 00 82 00 c5 00
bb'. Now it converts it to '01 02 01 7b', but also attempts '00 c4 00 82
00 c5 00 bb' for old times' sake. As for 'c3 af', the question is whether
it's a valid UTF-8 encoding. If it is (and it is in this example), then
it would attempt '00 ef' (in this example) and then '00 c3 00 af'; if
not, it would go straight to '00 c3 00 af'.
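The old and new conversions described here can be modelled in a few
lines of Python 3 (a simplified sketch; asc2uni/utf82uni below are
stand-ins for the OpenSSL functions, not their actual implementations):

```python
# Model of the two conversions applied to the UTF-8 input 'c4 82 c5 bb'.

def asc2uni(b: bytes) -> bytes:
    # old behaviour: widen each input byte to a 16-bit big-endian unit
    return b"".join(bytes([0, x]) for x in b)

def utf82uni(b: bytes) -> bytes:
    # new behaviour: decode as UTF-8, then encode as UTF-16BE
    return b.decode("utf-8").encode("utf-16-be")

inp = bytes.fromhex("c482c5bb")
print(asc2uni(inp).hex(" "))   # 00 c4 00 82 00 c5 00 bb  (old, kept as fallback)
print(utf82uni(inp).hex(" "))  # 01 02 01 7b              (new, spec-compliant)
```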




Re: [openssl-dev] [PATCH] Support broken PKCS#12 key generation.

2016-08-29 Thread David Woodhouse
On Mon, 2016-08-29 at 18:40 +0100, David Woodhouse wrote:
> 
> So... let's have a password 'nałve', in a ISO8859-2 environment where
> that is represented by the bytes 6e 61 b3 76 65.
> 
> First I should try the correct version according to the spec:
>  006e 0061 0142 0076 0065
> 
> Then we try the "OpenSSL assumed it was ISO8859-1" version:
>  006e 0061 00b3 0076 0065
> 
> And finally we try the "OpenSSL assumed it was UTF-8" version:
>  006e 0061 ... actually you *can't* convert that byte sequence "from"
> UTF-8 since it isn't valid UTF-8. So what will new OpenSSL do here,
> again?

In this specific example (the byte stream is not valid UTF-8) it looks
like OPENSSL_utf82uni() will assume it's actually ISO8859-1, and thus
the final case will be identical to the previous one.
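That is easy to verify: the ISO8859-2 bytes for 'nałve' are rejected by
a strict UTF-8 decoder, and the ISO8859-1 interpretation yields the code
points quoted above (a Python 3 check, for illustration only):

```python
# ISO8859-2 bytes for 'nałve'; 0xb3 is a continuation byte and cannot
# start a UTF-8 sequence, so a strict UTF-8 decode fails.
raw = bytes.fromhex("6e 61 b3 76 65")
try:
    raw.decode("utf-8")
    valid_utf8 = True
except UnicodeDecodeError:
    valid_utf8 = False
print(valid_utf8)  # False

# Interpreting the bytes as ISO8859-1 instead gives 006e 0061 00b3 0076 0065:
print(raw.decode("iso8859-1").encode("utf-16-be").hex(" "))
# 00 6e 00 61 00 b3 00 76 00 65
```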

So let's try a better example. The password is "ĂŻ" (U+0102 U+017b),
and the locale (not that it *should* matter) is ISO8859-2.

The correct rendition is 01 02 01 7b.

The "old buggy OpenSSL" rendition, in the ISO8859-2 locale, would be
to *convert* to ISO8859-2 (giving c3 af), then to interpret those bytes
as ISO8859-1 (in which they would mean "ï") and convert *that* to
UTF16LE, giving 00 c3 00 af.

The "new buggy OpenSSL" rendition, again in the ISO8859-2 locale, would
again be to convert to ISO8859-2 (c3 af). Then to interpret those bytes
as UTF-8 (in which they would mean "ï") and convert *that* to UTF16LE
to give 00 ef.

Right?
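The three renditions above can be reproduced with Python 3 (illustration
only; the assumption is that the terminal hands OpenSSL the ISO8859-2
bytes c3 af for the password):

```python
# Password U+0102 U+017B, typed in an ISO8859-2 environment.
pw_bytes = "\u0102\u017b".encode("iso8859-2")                # c3 af

correct = "\u0102\u017b".encode("utf-16-be")                 # spec-compliant
old_bug = pw_bytes.decode("iso8859-1").encode("utf-16-be")   # treat bytes as ISO8859-1
new_bug = pw_bytes.decode("utf-8").encode("utf-16-be")       # treat bytes as UTF-8

print(correct.hex(" "))  # 01 02 01 7b
print(old_bug.hex(" "))  # 00 c3 00 af
print(new_bug.hex(" "))  # 00 ef
```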

-- 
dwmw2






Re: [openssl-dev] [PATCH] Support broken PKCS#12 key generation.

2016-08-29 Thread David Woodhouse
On Mon, 2016-08-29 at 15:25 +0200, Andy Polyakov wrote:
> First of all. *Everything* that is said below (and what modifications in
> question are about) applies to non-ASCII passwords. ASCII-only passwords
> are not affected by this and keep working as they were.
> 
> > 
> > > 
> > > commit 647ac8d3d7143e3721d55e1f57730b6f26e72fc9
> > > 
> > > OpenSSL versions before 1.1.0 didn't convert non-ASCII
> > > UTF8 PKCS#12 passwords to Unicode correctly.
> > > 
> > > To correctly decrypt older files, if MAC verification fails
> > > with the supplied password attempt to use the broken format
> > > which is compatible with earlier versions of OpenSSL.
> > > 
> > > Reviewed-by: Richard Levitte 
> > 
> > Hm, this sounds like something that other crypto libraries also ought
> > to try to work around. 
> > 
> > Please could you elaborate on the specific problem, and/or show a test
> > case?
> 
> Note that there is 80-test_pkcs12.t that refers to shibboleth.pfx.

Thanks.

> > I'm not quite sure how to interpret the patch itself. You pass the
> > password through OPENSSL_asc2uni() and then OPENSSL_uni2utf8() — which
> > is essentially converting ISO8859-1 to UTF-8.
> 
> You make it sound like these two are called one after another. But
> that's not what happens. It *tries* to access data with passwords
> converted with OPENSSL_asc2uni *or* OPENSSL_utf82uni, effectively
> independently of each other, in order to see which one is right. So that
> you can *transparently* access old broken data, as well as
> specification-compliant one.

Hm... at line 541 of pkcs12.c we call PKCS12_verify_mac(…mpass…) with
the original provided password. Which is in the locale-native character
set, is it not? No attempt to convert to anything known.

Then if that *fails* we do indeed convert it via OPENSSL_asc2uni() and
OPENSSL_uni2utf8() to make 'badpass' and try again:

} else if (!PKCS12_verify_mac(p12, mpass, -1)) {
    /*
     * May be UTF8 from previous version of OpenSSL:
     * convert to a UTF8 form which will translate
     * to the same Unicode password.
     */
    unsigned char *utmp;
    int utmplen;
    utmp = OPENSSL_asc2uni(mpass, -1, NULL, &utmplen);
    if (utmp == NULL)
        goto end;
    badpass = OPENSSL_uni2utf8(utmp, utmplen);
    OPENSSL_free(utmp);
    if (!PKCS12_verify_mac(p12, badpass, -1)) {
        BIO_printf(bio_err, "Mac verify error: invalid password?\n");
        ERR_print_errors(bio_err);
        goto end;
    } else {
        BIO_printf(bio_err, "Warning: using broken algorithm\n");
        if (!twopass)
            cpass = badpass;
    }
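The asc2uni()/uni2utf8() round trip in that snippet amounts to
re-encoding the password bytes from ISO8859-1 to UTF-8, which a one-line
Python 3 sketch makes visible (a simplified model, not the OpenSSL
functions themselves):

```python
# OPENSSL_asc2uni widens each byte to a UTF-16BE code unit;
# OPENSSL_uni2utf8 then re-encodes those code units as UTF-8.
# Composed, that is a Latin-1 -> UTF-8 transcode.
def asc2uni_then_uni2utf8(pass_bytes: bytes) -> bytes:
    return pass_bytes.decode("iso8859-1").encode("utf-8")

# UTF-8 input "naïve" (6e 61 c3 af 76 65) becomes the double-encoded form:
print(asc2uni_then_uni2utf8("naïve".encode("utf-8")).hex(" "))
# 6e 61 c3 83 c2 af 76 65
```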

The shibboleth.pfx test seems to pass on the *first* call to
PKCS12_verify_mac() — it *isn't* testing this fallback. If I add a
space to the end of the correct password and stick a breakpoint on
PKCS12_verify_mac(), I see the following calls:

#0  PKCS12_verify_mac (p12=0x956e40, 
pass=0x956a30 "σύνθημα γνώρισμα ", passlen=-1)
at crypto/pkcs12/p12_mutl.c:152
#1  0x00425567 in pkcs12_main (argc=0, argv=0x7fffdd90)
at apps/pkcs12.c:541


And then it converts that string from ISO8859-1 (which it wasn't) to
UTF-8, and calls PKCS12_verify_mac() again:

#0  PKCS12_verify_mac (p12=0x956e40, 
pass=0x9597e0 "Ï\302\203Ï\302\215νθημα
γνÏ\302\216Ï\302\201ιÏ\302\203μα ", passlen=-1) at
crypto/pkcs12/p12_mutl.c:152
#1  0x004255fc in pkcs12_main (argc=0, argv=0x7fffdd90)
at apps/pkcs12.c:554

> > 
> > So, if my password is "naïve". In UTF-8 that's 6e 61 c3 af 76 65, which
> > is the correct sequence of bytes to use for the password?
> 
> 00 6e 00 61 00 ef 00 76 00 65, big-endian UTF-16.

Is that conversion done inside PKCS12_verify_mac()? Because we
definitely aren't passing UTF-16-BE into PKCS12_verify_mac().

> > 
> > And you now convert that sequence of bytes to 6e 61 c3 83 c2 af 76 65
> > by assuming it's ISO8859-1 (which would be 'naïve') and converting to 
> > UTF-8?
> 
> I don't follow. Why would it have to be converted to this sequence?

That's what it's doing. Changing the example above and showing the same
breakpoints as they get hit again...

Starting program: /home/dwmw2/git/openssl/apps/openssl pkcs12 -noout -password 
pass:naïve -in test/shibboleth.pfx
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".

Breakpoint 1, PKCS12_verify_mac (p12=0x956e30, pass=0x956a30 "naïve", 
passlen=-1) at crypto/pkcs12/p12_mutl.c:152
152 if (p12->mac == NULL) {
(gdb) x/7bx pass
0x956a30:	0x6e	0x61	0xc3	0xaf	0x76	0x65	0x00
(gdb) c
Continuing.

Breakpoint 1, PKCS12_verify_mac (p12=0x956e30, pass=0x959960 "naïve", 
passlen=-1) at crypto/pkcs12/p12_mutl.c:152
152 if (p12->mac == NULL) {
(gdb) x/8bx pass
0x959960:	0x6e	0x61	0xc3	0x83	0xc2	0xaf	0x76	0x65

Re: [openssl-dev] [openssl.org #4664] Enhancement: better handling of CFLAGS and LDFLAGS

2016-08-29 Thread Tomas Mraz via RT
On Po, 2016-08-29 at 14:27 +, Richard Levitte via RT wrote:
> On Mon Aug 29 12:27:59 2016, appro wrote:
> > 
> > Or maybe ("maybe" is reference to "I don't quite grasp" above) what
> > we
> > are talking about is Configure reading CFLAGS and LDFLAGS and
> > *adding*
> > them to generated Makefile. I mean we are not talking about passing
> > them
> > to 'make', but "freezing" them to their values at configure time.
> > Could
> > you clarify?
> I assume, and please correct me if I'm wrong, that the request is to
> treat the
> environment variables CFLAGS and LDFLAGS the same way we treat CC,
> i.e. as an
> initial value to be used instead of what we get from the
> configuration target
> information.
> 
> This should be quite easy to implement, and we can also continue to
> use
> whatever additional Configure arguments as compiler or linker flags
> to be used
> *in addition* to the initial value (that comes from the config target
> information, or if we decide to implement it, CFLAGS)

Ideally, the optimization/debugging flags that don't directly affect the
code being compiled would be replaced with what is placed in
CFLAGS/LDFLAGS, but things like -D would be kept from the config target
information.

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb
(You'll never know whether the road is wrong though.)




-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4664
Please log in as guest with password guest if prompted



[openssl-dev] [openssl.org #4664] Enhancement: better handling of CFLAGS and LDFLAGS

2016-08-29 Thread Richard Levitte via RT
On Mon Aug 29 12:27:59 2016, appro wrote:
> > With the current build system, user-defined CFLAGS are accepted as
> > any
> > unrecognized command line argument passed to Configure. This seems a
> > little dangerous to me since it means a simple typo could end up
> > getting passed to the C compiler.
>
> Is it more dangerous than interactive access? But on a serious note, I
> don't quite grasp the problem. How do you pass CFLAGS? Or rather, how
> is it specific to OpenSSL? I mean, you either set the CFLAGS
> environment variable and call 'make -e' or call 'make CFLAGS=something',
> right? But this is the way *make* is. Or is the suggestion that we
> should implement our own internal .ext1.ext2 rules independent of
> CFLAGS?
>
> > As well, the current build system forcibly enables optimization (-O3)
> > or debug symbols (-g) depending on the build type (--release or
> > --debug).
>
> Could you elaborate on the problem? If you need release build with
> debug
> symbols you can simply add -g when configuring...
>
> > There does not appear to be any method for the user to pass
> > additional
> > LDFLAGS to the build system.
>
> You can pass them as arguments to ./Configure script. It recognizes
> -L,
> -l and -Wl,.
>
> > I would like to propose the following changes:
> >
> > 1. Read CFLAGS and LDFLAGS from the environment within the Configure
> > script.
> > 2. Allow the user to opt-out of the default release or debug cflags,
> > perhaps by adding a third build type (neither release nor debug).
> >
> > These changes would make my job as a distro maintainer on Gentoo
> > Linux just a little bit easier.
>
> Or maybe ("maybe" is reference to "I don't quite grasp" above) what we
> are talking about is Configure reading CFLAGS and LDFLAGS and *adding*
> them to generated Makefile. I mean we are not talking about passing
> them
> to 'make', but "freezing" them to their values at configure time.
> Could
> you clarify?

I assume, and please correct me if I'm wrong, that the request is to treat the
environment variables CFLAGS and LDFLAGS the same way we treat CC, i.e. as an
initial value to be used instead of what we get from the configuration target
information.

This should be quite easy to implement, and we can also continue to use
whatever additional Configure arguments as compiler or linker flags to be used
*in addition* to the initial value (that comes from the config target
information, or if we decide to implement it, CFLAGS)

Cheers,
Richard

--
Richard Levitte
levi...@openssl.org



Re: [openssl-dev] [PATCH] Support broken PKCS#12 key generation.

2016-08-29 Thread Andy Polyakov
First of all. *Everything* that is said below (and what modifications in
question are about) applies to non-ASCII passwords. ASCII-only passwords
are not affected by this and keep working as they were.

>> commit 647ac8d3d7143e3721d55e1f57730b6f26e72fc9
>>
>> OpenSSL versions before 1.1.0 didn't convert non-ASCII
>> UTF8 PKCS#12 passwords to Unicode correctly.
>>
>> To correctly decrypt older files, if MAC verification fails
>> with the supplied password attempt to use the broken format
>> which is compatible with earlier versions of OpenSSL.
>>
>> Reviewed-by: Richard Levitte 
> 
> Hm, this sounds like something that other crypto libraries also ought
> to try to work around. 
> 
> Please could you elaborate on the specific problem, and/or show a test
> case?

Note that there is 80-test_pkcs12.t that refers to shibboleth.pfx.

> I'm not quite sure how to interpret the patch itself. You pass the
> password through OPENSSL_asc2uni() and then OPENSSL_uni2utf8() — which
> is essentially converting ISO8859-1 to UTF-8.

You make it sound like these two are called one after another. But
that's not what happens. It *tries* to access data with passwords
converted with OPENSSL_asc2uni *or* OPENSSL_utf82uni, effectively
independently of each other, in order to see which one is right. So that
you can *transparently* access old broken data, as well as
specification-compliant one.

> So, if my password is "naïve". In UTF-8 that's 6e 61 c3 af 76 65, which
> is the correct sequence of bytes to use for the password?

00 6e 00 61 00 ef 00 76 00 65, big-endian UTF-16.

> And you now convert that sequence of bytes to 6e 61 c3 83 c2 af 76 65
> by assuming it's ISO8859-1 (which would be 'naïve') and converting to UTF-8?

I don't follow. Why would it have to be converted to this sequence?

> So... what was the bug that was actually being worked around?

6e 61 c3 af 76 65 was expanded to 00 6e 00 61 00 c3 00 af 00 76 00 65,
in violation of specification.
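That expansion, versus the spec-compliant one, can be checked in Python
3 (an illustration of the byte sequences only, not OpenSSL's code):

```python
# UTF-8 password "naïve": 6e 61 c3 af 76 65
raw = "naïve".encode("utf-8")

buggy = b"".join(bytes([0, x]) for x in raw)        # widen every byte
correct = raw.decode("utf-8").encode("utf-16-be")   # decode, then UTF-16BE

print(buggy.hex(" "))    # 00 6e 00 61 00 c3 00 af 00 76 00 65
print(correct.hex(" "))  # 00 6e 00 61 00 ef 00 76 00 65
```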

> That
> older versions were converting from the local charset to UTF-8 twice in
> a row? So you've implemented a "convert ISO8859-1 to UTF-8" fallback
> which will cope with that in *some* locales but not all...?

Yeah, something like that. Note that if you have created a PKCS#12 file
with a non-UTF8 locale, it's not a given that you can read it with
another locale, UTF-8 or not. This was *always* the case, and that's
*not* what we try to address here. We assume that you don't change
locale when you upgrade your OpenSSL version. The ultimate goal is to
make it compliant with the specification and therefore interoperable
with other compliant implementations. But we can't just switch, because
then it stops interoperating with older OpenSSL versions. This is why it
looks messy: it's about having it both ways... Though there is one
ambiguity in this. Said interoperability assumes a UTF-8 locale (on *ix,
or the OPENSSL_WIN32_UTF8 opt-in environment variable on Windows). I.e.,
it's not a given that users with a non-UTF8 locale can actually
interoperate with other implementations. It's assumed that such users
are not actually interested in doing so, and if they express interest,
they will be advised to switch to a UTF-8 locale and convert their data
to an interoperable format.

Is this helpful?



Re: [openssl-dev] [openssl.org #4664] Enhancement: better handling of CFLAGS and LDFLAGS

2016-08-29 Thread Andy Polyakov via RT
> With the current build system, user-defined CFLAGS are accepted as any
> unrecognized command line argument passed to Configure. This seems a
> little dangerous to me since it means a simple typo could end up
> getting passed to the C compiler.

Is it more dangerous than interactive access? But on a serious note, I
don't quite grasp the problem. How do you pass CFLAGS? Or rather, how is
it specific to OpenSSL? I mean, you either set the CFLAGS environment
variable and call 'make -e' or call 'make CFLAGS=something', right? But
this is the way *make* is. Or is the suggestion that we should implement
our own internal .ext1.ext2 rules independent of CFLAGS?

> As well, the current build system forcibly enables optimization (-O3)
> or debug symbols (-g) depending on the build type (--release or
> --debug).

Could you elaborate on the problem? If you need release build with debug
symbols you can simply add -g when configuring...

> There does not appear to be any method for the user to pass additional
> LDFLAGS to the build system.

You can pass them as arguments to ./Configure script. It recognizes -L,
-l and -Wl,.

> I would like to propose the following changes:
> 
> 1. Read CFLAGS and LDFLAGS from the environment within the Configure script.
> 2. Allow the user to opt-out of the default release or debug cflags,
> perhaps by adding a third build type (neither release nor debug).
> 
> These changes would make my job as a distro maintainer on Gentoo Linux
> just a little bit easier.

Or maybe ("maybe" is reference to "I don't quite grasp" above) what we
are talking about is Configure reading CFLAGS and LDFLAGS and *adding*
them to generated Makefile. I mean we are not talking about passing them
to 'make', but "freezing" them to their values at configure time. Could
you clarify?





Re: [openssl-dev] [openssl.org #4664] Enhancement: better handling of CFLAGS and LDFLAGS

2016-08-29 Thread Tomas Mraz via RT
I would like to join this request as maintainer of OpenSSL for Fedora
and Red Hat Enterprise Linux. It would clean up things for us as well.

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb
(You'll never know whether the road is wrong though.)



