Re: [openssl-project] Style guide updates

2018-01-26 Thread Andy Polyakov
> Multiline conditionals, such as in an if, should be broken before the
> logical connector and indented an extra tabstop.  For example:
> 
>     while (this_is_true
>             && that_is_not_false) {
>         frobit();
>         more_stuff();
>     }

One has to recognize that this is an if-specific problem. I mean, a
while is *perfectly* readable without extra indentation. And given that
it's not unthinkable that you exceed the line width, enforcing this on
cases other than if can feel excessive. In other words, I'd suggest
limiting the extra indentation specifically to if cases.
___
openssl-project mailing list
openssl-project@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-project

Re: [openssl-project] Style guide updates

2018-01-26 Thread Andy Polyakov
> - Don't use else after return?

I'd argue against making it an absolute requirement. To give an example:
you don't see a problem with returns in a switch, so why should it be a
problem with returns from a switch-like if/else-if chain?


Re: [openssl-project] Style guide updates

2018-01-26 Thread Andy Polyakov
> What else needs to be updated?

Relax the requirement that argument names in a function definition have
to match the function declaration, so as to permit adding an 'unused_'
prefix to unused arguments.


Re: [openssl-project] Style guide updates

2018-01-27 Thread Andy Polyakov
>> - Use size_t for sizes of things
> 
> How do you feel about ssize_t?

One has to keep in mind that ssize_t is not part of the C language
specification, but a POSIX thing. The C specification defines ptrdiff_t
with [presumably] the desired properties. However, there is a natural
ambiguity originating from the fact that size_t customarily "covers"
twice as much space. So if you are to rely on the positivity of a
signed value, the object has to be small enough. In other words, you
would have to perform sanity checks before you do so. So it's not
exactly a walk on roses, if one accepts the premise that signed is
"easier" to handle. Well, one can make all kinds of arguments about the
practicality of such a situation, i.e. what it takes to run into the
ptrdiff_t vs. size_t ambiguity, and argue that it never happens. While
that would be the case on most systems, there are two cases, arguably
not that impractical: 64-bit VMS, where we have sizeof(size_t)


Re: [openssl-project] Style guide updates

2018-01-28 Thread Andy Polyakov
> Multiline conditionals, such as in an if, should be broken before the
> logical connector and indented an extra tabstop.  For example:

One can wonder if it would be appropriate to explicitly say that the
preferred way to organize multi-line conditionals is with one chained
condition per line, even if a pair of them would fit on a line. I mean

if (this-is-long
  && that
  && even-that)

vs.

if (this-is-long
  && that && even-that)

with the first example being preferred. And this even if 'this-is-long'
is actually short.

[Or should one tolerate the following, provided that 'this' is short enough?

if (this && that
  && even-that)
]

Either way, the suggestion is that *if* you find yourself breaking a
condition across multiple lines, you should take it a step further, if
not all the way.
But it shouldn't preclude things like

if (this-is-long
  && (that || (even-that && not-the-one))
  && don't-forget-this)

This actually means that one should be able to write the second example as

if (this-is-long
  && (that && even-that))

but it would imply that the logical relation between 'that' and
'even-that' is strong enough to justify the parentheses.


[Also note that in the above examples the additional indentation is
just two spaces. That's not a coincidence; it's also a kind of
suggestion for if-specific indentation.]

Re: [openssl-project] chatty make output

2018-02-01 Thread Andy Polyakov
> I think this belongs in CHANGES or NEWS, not with every single
> reconfigure build
> 
> 
> Creating Makefile
> 
> 
> NOTE: Starting with OpenSSL 1.1.1, 'Configure' doesn't display all the
> disabled
> options or the "make variables" with their values.  Instead, you must use
> 'configdata.pm' as a script to get a display of the configuration data.  For
> help, please do this:
> 
> 
>         perl configdata.pm --help

BTW, I didn't appreciate this change. The thing about the original
output was that it told you things about the user's environment. The
idea was that one didn't have to ask for additional information when a
user reported "ran config, it printed this, help"... I would even say
that the new message is meaningless to the user. "Instead, you must use
'configdata.pm'"? The user will simply wonder "instead of what?" Then
execute --help and wonder what it is that needs to be included in the
problem report... In other words, if config doesn't produce output that
appears worth including in a problem report, then it should tell you
how to produce the information that we would appreciate in a problem
report.

Re: [openssl-project] crypto/bn/asm/x86/ gone, but still used by crypto/bn/asm/x86.pl?

2018-02-06 Thread Andy Polyakov
> In master and OpenSSL_1_1_0-stable we have:
> 
> commit df8c39d52256c2e5327a636928b6d1ed05f695a2
> Author: Rich Salz 
> Date:   Tue Sep 30 17:30:19 2014 -0400
> 
> RT3549: Remove obsolete files in crypto
> 
> Reviewed-by: Andy Polyakov 
> 
>  crypto/bn/asm/x86/add.pl |  76 --
>  crypto/bn/asm/x86/comba.pl   | 277 
>  crypto/bn/asm/x86/div.pl |  15 --
>  crypto/bn/asm/x86/f  |   3 -
>  crypto/bn/asm/x86/mul.pl |  77 --
>  crypto/bn/asm/x86/mul_add.pl |  87 ---
>  crypto/bn/asm/x86/sqr.pl |  60 -
>  crypto/bn/asm/x86/sub.pl |  76 --
>  36 files changed, 5815 deletions(-)
> 
> All the crypto/bn/asm/x86/*.pl are deleted, but they are still
> required by:
> 
> $ head -n18 crypto/bn/asm/x86.pl 
> #! /usr/bin/env perl
> # ... copyright ...
> 
> push(@INC,"perlasm","../../perlasm");
> require "x86asm.pl";
> 
> require("x86/mul_add.pl");
> require("x86/mul.pl");
> require("x86/sqr.pl");
> require("x86/div.pl");
> require("x86/add.pl");
> require("x86/sub.pl");
> require("x86/comba.pl");
> 
> Is that file obsolete also?

Looks like it.

> Is there no need for asm support for "bn"
> on x86?

There are bn-586, co-586 and x86-mont modules. Those are currently used.



Re: [openssl-project] OS/X builds failing

2018-02-09 Thread Andy Polyakov
> apps/enc.c:568:54: error: format specifies type 'uintmax_t' (aka
> 'unsigned long') but the argument has type 'uint64_t' (aka 'unsigned
> long long') [-Werror,-Wformat]
> BIO_printf(bio_err, "bytes written: %8ju\n",
> BIO_number_written(out));
>  ^~~
> %8llu
> 2 errors generated.
> 
> 
> The thing is though this is *our* BIO_printf, and we do seem to expect a
> uint64_t for a "%ju". So I'm not sure how to fix that.

I'd suggest -Wno-format, with the rationale that a) Xcode is arguably
somewhat unreasonable, and b) we can rely on the Linux builds to catch
warnings that actually matter.


Re: [openssl-project] OS/X builds failing

2018-02-09 Thread Andy Polyakov
>> apps/enc.c:568:54: error: format specifies type 'uintmax_t' (aka
>> 'unsigned long') but the argument has type 'uint64_t' (aka 'unsigned
>> long long') [-Werror,-Wformat]
>> BIO_printf(bio_err, "bytes written: %8ju\n",
>> BIO_number_written(out));
>>  ^~~
>> %8llu
>> 2 errors generated.
>>
>>
>> The thing is though this is *our* BIO_printf, and we do seem to expect a
>> uint64_t for a "%ju". So I'm not sure how to fix that.
> 
> I'd suggest -Wno-format, with the rationale that a) Xcode is arguably
> somewhat unreasonable, and b) we can rely on the Linux builds to catch
> warnings that actually matter.

Another possibility *could* be suppressing the corresponding
__attribute__ in bio.h, but it's a public header, so the question would
be whether such a change can be justified in a minor release. Just in
case: suppress it specifically for Xcode [which would cover even iOS].


Re: [openssl-project] OS/X builds failing

2018-02-09 Thread Andy Polyakov
For the record: I don't generally appreciate fast commits, but I feel
about 100 times more strongly when it comes to public headers
[naturally in a supported or minor release], and I can't qualify a
19-minute turnaround for a merge request as appropriate.


Re: [openssl-project] Removing assembler for outdated algorithms

2018-02-11 Thread Andy Polyakov
> (side note: I've just started compiling the ia64 code on VMS. It currently 
> bombs for reasons I haven't fathomed yet, but am thinking it's a pointer size 
> thing...

You ought to generate an assembly listing for 'int foo(int *p) { return
*p; }' with different flags and send it to me. I told you that a long
time ago :-)

> It's about the quirkiest assembly I've seen, but I'm hell bent to get through 
> it) 

Well...


[openssl-project] x25519-x86_64

2018-02-21 Thread Andy Polyakov
Hi,

A word of clarification about the recently introduced x25519-x86_64
module, specifically in the context of an apparently common shift
toward auto-generated C such as fiat-crypto or hacl64. As far as I'm
concerned, the problem with the just-mentioned auto-generated C
implementations is that while being verified by themselves, they rely
on an effectively unverified compiler and its [unverifiable] ability to
perform advanced optimizations. Or in other words, it's problematic on
two levels: a) can output from an unvalidated compiler count as
validated? b) will compilers get performance right across a range of
platforms? And as one can imagine, I imply that the answer to both
questions is "not really". As for b): current [x86_64] compilers were
observed to generate decent code that performs virtually on par with
assembly on contemporary high-end Intel processors, at least for the
2^51 radix customarily chosen for C. Indeed, if you consider the
improvement coefficients table in x25519-x86_64.pl, you'll notice
coefficients as low as 11%, 13%, 9% in comparison to code generated by
gcc-5.4. Normally that's not a large enough improvement [to subject
yourself to the misery of assembly programming], right? But assembly
does shine on non-Sandy Bridge and non-Haswell processors, most notably
in the ADCX/ADOX cases. As for a): the intention is to work with the
group at Academia Sinica to formally verify the implementation. On a
related note, they did verify the amd64-51(*) implementation that
x25519-x86_64 refers to, so it wouldn't be unprecedented.

Cheers.

(*) Just in case, for reference: besides being hardly readable,
amd64-51 is hard to integrate, because it specifically targets the Unix
ABI and is not position-independent. It also performs sub-optimally on
non-"Intel Core family" processors, nor does it solve the ADCX/ADOX
problem. These are the arguments against taking in amd64-51, that is.



Re: [openssl-project] Freezing the repo soon

2018-02-26 Thread Andy Polyakov
> Just a reminder to everyone that we are doing the alpha2 release
> tomorrow, so we will be freezing the repo soon (in about an hour or so).

master is red at https://travis-ci.org/openssl/openssl/branches since
https://github.com/openssl/openssl/commit/b38ede8043439d99a3c6c174f17b91875cce66ac.
It appears as a ubsan failure...


Re: [openssl-project] Freezing the repo soon

2018-02-26 Thread Andy Polyakov
> ssh openssl-...@git.openssl.org freeze openssl matt

done.


[openssl-project] to fully overlap or not to

2018-02-28 Thread Andy Polyakov
Hi,

I'd like to request more opinions on
https://github.com/openssl/openssl/pull/5427. The key dispute question
is whether or not the following fragment should work:

unsigned char *inp = buf, *out = buf;

for (i = 0; i < sizeof(buf); i++) {
EVP_EncryptUpdate(ctx, out, &outl, inp++, 1);
out += outl;
}

[Just in case, the corresponding corner case is effectively exercised
in evp_test.] Bernd argues that this is unreasonable, which I counter
with the assertion that it doesn't matter how unreasonable this snippet
is: since we support in-place processing, it's reasonable to expect
stream-cipher-like semantics as above to work even with block ciphers.
As it stands now, the suggested modified code would accept in-place
calls only on block boundaries. A collateral question is also whether
or not it would be appropriate to make this kind of change in a minor
release.

Just in case, to give a bit more general background: Bernd has shown
that the current checks are not sufficient to catch all corner cases of
partially overlapping buffers. It was discussed that partially
overlapping is not the same as fully overlapping, a.k.a. in-place
processing, with the latter being in fact supported. And even though
the above snippet can formally be viewed as a stretch, it's argued that
it does exhibit a "legitimate intention" that deserves to be recognized
and supported. At least it was so far, in the sense that it's not
exactly a coincidence that it currently works. [The fact that other
corner cases are not recognized as invalid is of course a bug, but
there is no contradiction, as fixing the bug doesn't have to mean that
this specific corner case is no longer recognized.]

Thanks in advance.


Re: [openssl-project] to fully overlap or not to

2018-02-28 Thread Andy Polyakov
> Collateral question also is whether or not it would
> be appropriate to make this kind of change in minor release.

One can wonder if this is actually the more burning question. But keep
in mind that ...

> ... there is no
> contradiction, as fixing the bug doesn't have to mean that specific
> corner case is no longer recognized.]

In the sense that fixing the bug, i.e. rejecting cases *other* than the
one in question as invalid, doesn't [necessarily] conflict with
compatibility goals.


Re: [openssl-project] to fully overlap or not to

2018-02-28 Thread Andy Polyakov
>>> I'd like to request more opinions on
>>> https://github.com/openssl/openssl/pull/5427. Key dispute question is
>>> whether or not following fragment should work
>>>
>>>   unsigned char *inp = buf, *out = buf;
>>>
>>>   for (i = 0; i < sizeof(buf); i++) {
>>>   EVP_EncryptUpdate(ctx, out, &outl, inp++, 1);
>>> out += outl;
>>>   }
>>
>> This should work.
> 
> On second thought, perhaps not.

It seems that a double clarification is appropriate. As it stands now,
it *does* work. So the question is rather whether it should keep
working [and if not, whether it is appropriate that it stops working in
a minor release].



Re: [openssl-project] to fully overlap or not to

2018-03-01 Thread Andy Polyakov
>>> I'd like to request more opinions on
>>> https://github.com/openssl/openssl/pull/5427. Key dispute question is
>>> whether or not following fragment should work
>>>
>>> unsigned char *inp = buf, *out = buf;
>>>
>>> for (i = 0; i < sizeof(buf); i++) {
>>> EVP_EncryptUpdate(ctx, out, &outl, inp++, 1);
>>> out += outl;
>>> }
>>
>> This should work.
>>
> 
> It does only work, if you know that ctx->buf_len == 0
> before the loop is entered.

It's naturally implied that the ctx is used *consistently*. I mean,
it's not like it's suggested that you can just use the above snippet in
a random place. The application would have to adhere to the above
pattern all along, for the lifetime of the in-place processing
"session". [I write "session" in quotes because there might be a better
term.]

> It does not work with CFB1, CCM, XTS and WRAP modes.

Yes, some modes are, so to say, "one-shot", i.e. you have to pass all
the data there is at once, no increments allowed. But what does that
have to do with the question at hand, the question about in-place
processing that is? These two things are independent. So the question
is rather whether the snippet should work in cases when incremental
processing is an option. A related thing to recognize in this context
is that the *disputable* condition, the one that triggered this
discussion, is exercised only when ctx->block_size is larger than 1,
because then ctx->buf_len remains 0. Note that ctx->block_size for
CFB1, CCM and XTS is 1. As for WRAP, yeah, it's special...

> There is no access function to get the internal state of the cipher
> context.

But there is the notion of an in-place processing "session".

> I think the documentation does not cover this at all, and most of
> all I really wonder what the wording could look like.

That the in-place processing "session" would customarily be tied up
with a call to EVP_EncryptFinal[_ex]?


Re: [openssl-project] to fully overlap or not to

2018-03-01 Thread Andy Polyakov
> Related
> thing to recognize in the context is that *disputable* condition, the
> one that triggered this discussion, is exercised only when
> ctx->block_size is larger than 1, because then ctx->buf_len remains 0.

I naturally meant "because *otherwise* ctx->buf_len remains 0". In
other words, ctx->buf_len remains 0 when ctx->block_size is 1.


Re: [openssl-project] to fully overlap or not to

2018-03-08 Thread Andy Polyakov

> Andy pointed out that my last e-mail was probably not clear enough.
> 
> I want to drop the current partially overlapping checks
> on the WRAP mode ciphers (which were ineffective anyways).
> 
> And allow the following additional use case for any cipher
> 
> unsigned char *in = buf + sizeof(buf) - X, *out = buf;
> EVP_EncryptUpdate(ctx, out, &outl, &head, sizeof(head));
> out += outl;
> EVP_EncryptUpdate(ctx, out, &outl, in, X);
> out += outl;
> EVP_EncryptUpdate(ctx, out, &outl, &tail, sizeof(tail));
> out += outl;
> 
> with X > 75% sizeof(buf)
> sizeof(head), sizeof(tail) not necessary multiple of block size
> 
> And make the definition of allowed in-place partially overlapping
> effectively be driven by the implementation requirements.

Nobody? OK. The *formal* objection to this is that you are making the
assumption that the underlying implementation processes data in a
specific order. Note that it's not actually an unreasonable
assumption(*), yet in the most general sense it's an assumption, and
the question rather is whether you are in a position to **formally**
make it(**). One can argue that all our underlying implementations do
that, process data in the expected order that is, but it doesn't lift
the question. One can even argue that the suggested code worked in
1.0.1, yet it doesn't lift the question. So instead one simply aims for
not making the assumption [or making the least amount of assumptions].
Yes, you can sense a contradiction, because in-place processing also
rests on an assumption. Yeah, the world is not ideal and life is full
of compromises. I mean, there are pros and cons, and in-place is
considered to bring more pros.

(*) And in some cases it's even 100% justified, most notably in the CBC
encrypt case, where each block depends on the previous one.

(**) Note that you can avoid the question by processing data strictly
on a per-block basis, so that you, not the underlying implementation,
are in control of accessing data in a specific order. But that's not
what is suggested, right?


Re: [openssl-project] cppflags, cflags and ldflags

2018-03-25 Thread Andy Polyakov
>> Note: this mail follows the discussion on github started here:
>> https://github.com/openssl/openssl/pull/5742#discussion_r176919954

In the light of the new evidence presented in
https://github.com/openssl/openssl/pull/5745 one can argue in favour of
splitting the -pthread flag into cppflags and ldflags. Yes, but I'd
still be reluctant to formulate it like that. As implied, I'm kind of
arguing against too many options/variables/joints, and calling for
simplification rather than increased complexity. I feel that there are
more efficient ways to cover corner/odd-ball places...


Re: [openssl-project] cppflags, cflags and ldflags

2018-03-25 Thread Andy Polyakov
> appro> We already touched this in Windows context, one option (more suitable 
> in
> appro> my mind) is toolchain-specific *rules*. I.e. instead of trying to fit
> appro> cube into circular hole, just make square hole on the side. For example
> appro> *if* target has ldonly attribute, just emit link rule without CFLAGS,
> appro> but with those listed in ldonly.
> 
> So you want some kind of flag attributes in the config targets?
> That's what 'ldonly' sounds like.  I don't understand how that goes
> together with not wanting fragmentation, it mostly seems that you're
> arguing for another kind, which results in more hackery in the
> Makefile template (another type of fragmentation).

Well, I didn't say I liked it, but I'd say it would be more
appropriate, because we are comparing complex relationships between
multiple variables against a limited number of "override" rules. Or in
other words: the complexity of working "override" rules is "linear" in
the sense that it depends on a limited number of "override" rules, each
being isolated[!] so that it doesn't affect other platforms. While the
complexity of working with a one-size-supposedly-fits-all thing is
"superlinear" and depends on the number of variables times the number
of platforms involved.

> appro> Another option is wrapper. Instead of struggling with ever-increasing
> appro> amount of options to accommodate one[!!!] odd-ball platform, why can't
> appro> *they* provide a wrapper script that would omit offending options prior
> appro> calling the linker? [BTW, we do have an odd-ball platform, VMS, so why
> appro> can't this odd ball handled similarly, i.e. with own template? Too
> appro> expensive you'd say. Well, why not make *them* bare the cost? But even
> appro> if not, formally speaking ever-increasing amount of
> appro> variables/options/joints and complex relationship among them all has 
> its
> appro> cost too. And by suggesting to solve it in this specific way you kind 
> of
> appro> imply that cost is not an issue here. How come?]
> 
> Actually, I do imply that cost is an issue.  For example, I think it's
> quite costly for *them*,

But my question was about *our* costs.

> who do have enough of a sh like environment
> for unix-Makefile.tmpl to be useful with the exception of a few lines
> of changes, to have to make a copy of that template and maintain it
> against whatever changes we throw in for the sake of those few lines,
> and for this to be repeated by everyone who has a different enough
> setup, when changing those few lines wouldn't be that hard on our end.

The keyword is "would" here. Since it's obviously not possible
(otherwise we wouldn't be having this discussion), you can only say
that it *would* cost us a little. But that happens in the parallel
would-universe, not here. Of course one can argue that it's appropriate
to make this "investment" because it would solve other problems in the
future. But the thing is that I do feel the build system complexity is
growing beyond reasonable proportion, and it's because there is a
tendency to consider our costs as negligible to none. Well, one can
also say that letting somebody else do things never worked, as you end
up fixing it anyway. But is that because nobody else can actually solve
it, or because you're not forcing them to work harder for *their*
cause? [BTW, wouldn't the fact that nobody else can solve it be a sign
of excessive complexity?]

And once again, [in this odd-ball case] why not a wrapper script that
omits the offending options?

> And yeah, I get that the config targets need to be reshaped for this.
> That's your main argument, isn't it?  Is that so actually so costly?

If we have to do so upon every introduction of an odd-ball community
platform, then yes.

> Another side thing that I've been thinking of for quite a while, and I
> think you may have argued for even though I feel a bit unsure, and
> that's to support command line attributes as an alternative to that
> increasing amount of specialised attributes, so something like this:
> 
>   link => '$(CCLD) $(LDFLAGS) -o $@ ...',

You've mentioned once that it would take a special language to do
something like that, and I recall myself thinking "why not just use
make syntax". I'm unsure if I actually said this, but I did think the
thought. So I for one wouldn't actually be opposed to something like
this. It's just a way to specify an "override" rule (and on a
per-toolchain basis :-). However! It all depends on whether or not
straight make syntax would actually be expressive enough in this
context. If it is, then it would be totally appropriate. But if not,
then ... I'm not so sure. Maybe[!] an if-else in the makefile template
could turn out to be more appropriate. (Once again, each clause in an
if-else is isolated, and the complexity is linearly proportional to the
number of clauses in the if-else, which is viewed as an advantage.)

> Note that I haven't really thought it through, this is really a draft
> of a possible idea, and it might end up having a very different form.
> But still, that might be better th

Re: [openssl-project] cppflags, cflags and ldflags

2018-03-25 Thread Andy Polyakov
> Note: this mail follows the discussion on github started here:
> https://github.com/openssl/openssl/pull/5742#discussion_r176919954
> That discussion started as a simple side note, but I should probably
> have known better...  it ends up derailing a PR that shouldn't really
> be affected, so I'm bringing the discussion here.
> 
> The discussion is about the division of flags for the different parts
> of building programs, shared libraries and DSOs.
> 
> So we have one side that argues for putting the majority of the flags
> in the cflags variable / config attribute.  To quote from the github
> discussion:
> 
>> Or to put a little bit more practical spin on it. Since we *do* pass
>> cflags at *all* stages, it's only appropriate to assign multi-stage
>> flags [such as -pthread] to cflags. Viewing it from another angle,
>> only pure single-stage flags [such as -L ,-l, -I, -D] are
>> appropriate to separate as make variables. And to variables specific
>> to corresponding stage, not some other grouping. Last remark is
>> reference to the fact and if one follows suggested logic [that
>> presented classification imposes specific separation as make
>> variables], you'd have to pass -L at pre-processor stage and -I at
>> linker, because they both are listed in "Directory Options".
> 
> My spin on it goes in a different direction.  It's true that we
> *currently* pass cflags to all stages.  I would argue, though, that
> this is mostly because we assume that the cc command is an appropriate
> frontend for everything involving building the stuff.

There is glue that holds it together, and it's the makefile template.
Besides, the question is also whether there is a set of flags that we
*have to* pass at *all* stages. I.e. not just "do", but "have to". Yes,
there is; one example is -m64. One can as well say that it just so
happens that it's now called cflags. A very natural and intuitive
choice. *If* we choose to change that, I'd still argue that collecting
multi-stage flags in one variable would be totally appropriate, at
least for the platforms where it works. Like Linux, Solaris, AIX,
HP-UX, BSD, Mac OS X: effectively *all* the platforms we actually
directly support. So in a sense, this question and the reference to the
odd-ball problem are not actually as tightly interwoven as suggested. I
mean, the odd-ball problem is used as an argument *for* excessive
variable fragmentation on *all* platforms, but collecting any specific
flag in one variable on the vast majority of platforms doesn't actually
contradict the suggestion to introduce a variable that would be
meaningful to that one [and fragmenting on that one].

> Enters the odd
> ball that needs to use the linker (som form of ld) directly [1],
> because that's what makes practical sense, or simply because that's
> how things work on that platform, and suddenly, it's obviously
> *inappropriate* to pass cflags on that command.  In assuming that we
> can always pass cflags, we're making live difficult for those who want
> to make things work on platforms that don't quite meet that assumption.

Well, it's no news that life is not fair. Two points.

We already touched on this in the Windows context; one option (more
suitable in my mind) is toolchain-specific *rules*. I.e. instead of
trying to fit a cube into a circular hole, just make a square hole on
the side. For example, *if* the target has an ldonly attribute, just
emit the link rule without CFLAGS, but with those listed in ldonly.

Another option is a wrapper. Instead of struggling with an
ever-increasing amount of options to accommodate one[!!!] odd-ball
platform, why can't *they* provide a wrapper script that would omit the
offending options prior to calling the linker? [BTW, we do have an
odd-ball platform, VMS, so why can't this odd ball be handled
similarly, i.e. with its own template? Too expensive, you'd say. Well,
why not make *them* bear the cost? But even if not, formally speaking,
the ever-increasing amount of variables/options/joints and the complex
relationships among them all have their cost too. And by suggesting to
solve it in this specific way you kind of imply that cost is not an
issue here. How come?]

> So following my spin, I would rather say that any flag that affects
> preprocessing should go into cppflags, any flag that affects linking
> should go into ldflags (the attribute lflags), the rest can go to
> cflags.  And for compartmentalization, I'd like to see that:
> 
> - for preprocessing, only cppflags are used.  (note: this is more of a
>   wishlist, not really terribly important for now...)

It has already been established that this doesn't work, because there
are flags that are customarily viewed as compile-time, but that affect
pre-defined macros that we rely upon. In other words, pre-processing
has to be performed with cppflags *and* cflags, just like compiling.

> - for compiling (i.e. source code to object code), cppflags and cflags
>   are used (I cannot see how one would make that differently).
> - for linking, only ldflags should be used.  (this I think is
>   importa

Re: [openssl-project] rule-based building attributes (Re: cppflags, cflags and ldflags)

2018-03-26 Thread Andy Polyakov
> appro> > Another side thing that I've been thinking of for quite a while, and 
> I
> appro> > think you may have argued for even though I feel a bit unsure, and
> appro> > that's to support command line attributes as an alternative to that
> appro> > increasing amount of specialised attributes, so something like this:
> appro> > 
> appro> >  link => '$(CCLD) $(LDFLAGS) -o $@ ...',
> appro> 
> appro> You've mentioned once that it would take a special language to do that
> appro> something, and I recall myself thinking "why not just use make syntax".
> appro> I'm unsure if I actually said this, but I did think the thought. So that
> appro> I for one wouldn't actually be opposed to something like this. It's just
> appro> a way to specify "override" rule (and on per-toolchain basis:-).
> appro> However! It all depends on whether or not straight make syntax would
> appro> actually be expressive enough in the context. If it is, then it would be
> appro> totally appropriate. But if not, then ... I'm not so sure. Maybe[!]
> appro> if-else in makefile template could turn out to be more appropriate.
> appro> (Once again, each clause in if-else is isolated and complexity is
> appro> linearly proportional to number of clauses in if-else, which is viewed
> appro> as advantage.)
> 
> Straight make syntax will probably not cut it.  How would you pass a
> list of object file names to something that might have a limited
> command line length, for example?

These are actually two different problems, aren't they? I mean a) how do
you pass it, and b) what if there is some limitation. As for the former: even
though it says "use make syntax", it would still be rather
"make-*like* syntax". In other words, it doesn't mean that you would be
able to just copy the referred line into a makefile; it would still have to be
processed somehow first. Well, in the case at hand it might be possible to
drop in a rule with $+ if you know you'll use GNU make, but others don't
understand it, so $+ would have to be processed. (And even in the GNU make
case, to keep things unified.) So you end up using mostly-make-like
syntax to formulate an override rule, but you do some replacements
before emitting the actual rule into the makefile. Naturally one might have to
make up more internally recognized make-looking idioms, in addition to
$+ that is...

> With nmake, we do have a feature we
> can use, the inline file text thing, but others may not be so lucky
> (the nonstop case I mentioned is such a case), and one might need to
> do a bit of pre-processing with perl-level variables (an idea could be
> to have some well defined hooks that users can define values for, such
> as function names...  again, this is a draft of an idea).  For
> example, imagine that we have something that dumps object file names
> into a text file and then expects that file name to appear in the
> linking command line (for example, and typically in our case), how do
> we pass the text file name to the command line, and where in the
> command line?

Do you mean that you have something like 'libcrypto.so: all-the-object-files'
and want a rule that would put all-the-object-files into a text file and
invoke the linker with that file name as argument? And putting them into the
text file must not exceed a length limit either, so you can't say,
for example, 'echo $+ > o-files.$$' either...

> So no, a straight makefile rule for a config attribute value isn't
> going to be good enough.

How about this. We touched on this when discussing windows-makefile. I
mean when I called it VC-specific, you disagreed, and I said that
embedding the manifest effectively makes it VC-specific. In that context I
also suggested a post-link stage, a command one could *add* to the rule. Or
let's rephrase this and say that you can simply specify multiple
commands in an override rule. So with this in mind:

link => [ 'script-that-composes-list-of-objects $@.$$',
  'link -o $@ -list $@.$$' ]

Well, in this particular case you'd probably be more interested in
specifying a multi-line command instead of multiple commands, as '(trap "rm
-f $@.*" INT 0; script ... && link ...)', because you might want to clean
up the temporary file even if make is interrupted. But multi-line can be
useful if different lines reside in different templates. I mean, the base
template can say "link" and a target can add a post-link command.

Of course I'm not suggesting we take this for immediate implementation;
these are just ideas to ponder. They might turn out unusable (or spark
some other idea :-)
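For illustration, the multi-command override and the trap-protected multi-line
variant discussed above might expand into makefile rules roughly like this.
This is only a sketch: 'script-that-composes-list-of-objects', 'link' and the
'-list' flag are placeholders from the discussion, not actual Configure output.

```make
# Variant 1: two commands emitted from a multi-command override rule.
libfoo.so: $(OBJS)
	script-that-composes-list-of-objects $@.tmp
	link -o $@ -list $@.tmp

# Variant 2: one multi-line shell command, with trap-based cleanup so the
# temporary list file is removed even if make is interrupted.
libbar.so: $(OBJS)
	( trap 'rm -f $@.tmp' INT 0; \
	  script-that-composes-list-of-objects $@.tmp && \
	  link -o $@ -list $@.tmp )
```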


Re: [openssl-project] rule-based building attributes

2018-03-27 Thread Andy Polyakov
> appro> How about this. We have touched this when discussing windows-makefile. I
> appro> mean when I called it VC-specific, you disagreed, and I said that
> appro> embedding manifest effectively makes it VC-specific. In the context I
> appro> also suggested post-link stage, a command one could *add* to rule. Or
> appro> let's rephrase this and say that you can simply specify multiple
> appro> commands in an override rule. So with this in mind.
> appro> 
> appro> link => [ 'script-that-composes-list-of-objects $@.$$',
> appro>   'link -o $@ -list $@.$$' ]
> 
> So zooming in on this particular idea, I was concerned about how those
> object file names would be passed to the script...  but then you
> mention manifests, and it dawns on me that there's *nothing* stopping
> us from generating a file with the list of object files when building
> the Makefile.  That does make some kind of sense to me.
> 
> Or were you thinking something different?

Well, when I said "manifest embedding" I meant specifically the $(MT) step
of the link rule. But if you want to call the $@.$$ output from
'script-that-composes-list-of-objects' a manifest, sure, you can do that.

> It would of course be
> possible to have the script you suggest pull data from configdata.pm,
> right?

Or from the Makefile, whichever is simplest.

> But considering we're talking about third parties building
> their own, that raises another concern, that we suddenly give the
> whole %unified_info structure public exposure that I'm not entirely
> sure were ready for.

You mean not ready to commit to not changing it at our discretion? Well,
then maybe grep-ing the Makefile for 'libcrypto.a:' would be more appropriate...


Re: [openssl-project] to fully overlap or not to

2018-03-29 Thread Andy Polyakov
On 03/09/18 17:13, Bernd Edlinger wrote:
> On 03/08/18 12:06, Andy Polyakov wrote:
>>
>>> Andy pointed out that my last e-mail was probably not clear enough.
>>>
>>> I want to drop the current partially overlapping checks
>>> on the WRAP mode ciphers (which were ineffective anyways).
>>>
>>> And allow the following additional use case for any cipher
>>>
>>> unsigned char *in = buf + sizeof(buf) - X, *out = buf;
>>> EVP_EncryptUpdate(ctx, out, &outl, &head, sizeof(head));
>>> out += outl;
>>> EVP_EncryptUpdate(ctx, out, &outl, in, X);
>>> out += outl;
>>> EVP_EncryptUpdate(ctx, out, &outl, &tail, sizeof(tail));
>>> out += outl;
>>>
>>> with X > 75% sizeof(buf)
>>> sizeof(head), sizeof(tail) not necessary multiple of block size
>>>
>>> And make the definition of allowed in-place partially overlapping
>>> effectively be driven by the implementation requirements.
>>
>> Nobody? OK. *Formal* objection to this is that you are making assumption
>> that underlying implementation processes data in specific order. Note
>> that it's not actually unreasonable assumption(*), yet in most general
>> sense it's an assumption, and question rather is if you are in position
>> to **formally** make it(**). One can argue that all our underlying
>> implementations do that, process data in expected order that is, but it
>> doesn't lift the question. One can even argue that suggested code worked
>> in 1.0.1, yet it doesn't lift the question. So instead one simply aims
>> for not making the assumption [or making least amount of assumptions].
>> Yes, you can sense contradiction, because in-place processing also rests
>> on an assumption. Yeah, world is not ideal and life is full of
>> compromises. I mean there are pros and cons, and in-place is considered
>> to bring more pros.
>>
>> (*) And in some cases it's even 100% justified, most notably in CBC
>> encrypt case, when each block is dependent on previous.
>>
>> (**) Note that you can avoid the question by processing data strictly on
>> per-block basis, so that you'll be in control of accessing data in
>> specific order, not underlying implementation. But that's not what is
>> suggested, right?
> 
> No.  The intention is to make the partially overlapping check follow
> the underlying cipher's implementation needs, not the other way round.

I don't follow. What does "underlying cipher's implementation needs"
mean? Aren't you making a quite specific assumption about the
implementation the moment you recognize its "needs"? The assumption that it
does in fact process data in a specific order? Does it need to do that?
As a *hypothetical* extreme example, consider ECB mode. What prevents an
implementation from processing blocks in arbitrary order? And if nothing,
what effect would reverse order have on the suggested approach? Once again,
I'm not saying that any of our ECB implementations do that, only that
it's *formally* inappropriate to make that kind of assumption. To
summarize, if one wants to permit partially overlapping buffers, then
the only appropriate thing to do would be to perform the operations in
chunks on non-overlapping sections. However, I'm not saying that we
should do that; I'd say it would be an unjustified complication of the
code. https://github.com/openssl/openssl/pull/5427#discussion_r172441266
is sufficient.
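To make the "chunks on non-overlapping sections" idea concrete, here is a
minimal sketch of my own (an illustration, not OpenSSL code). A toy per-block
transform stands in for a real cipher primitive; each input block is detached
into a bounce buffer before anything is written, so the result cannot depend
on the order in which the primitive reads its input. With blocks processed in
ascending order this is safe whenever out <= in, which covers the quoted
example (out = buf, in = buf + sizeof(buf) - X).

```c
#include <stddef.h>
#include <string.h>

#define BLK 16

/* Stand-in for a real block-cipher primitive: any per-block transform
 * will do for the illustration. */
static void toy_block(unsigned char *out, const unsigned char *in)
{
    for (size_t i = 0; i < BLK; i++)
        out[i] = in[i] ^ 0x5a;
}

/* Process len bytes (assumed a multiple of BLK) block by block through a
 * bounce buffer: the block is fully read into tmp before any output byte
 * is written, so partial in-block overlap cannot corrupt the input. */
static void process_overlapping(unsigned char *out,
                                const unsigned char *in, size_t len)
{
    unsigned char tmp[BLK];

    for (size_t off = 0; off < len; off += BLK) {
        memcpy(tmp, in + off, BLK);   /* read the block before any write */
        toy_block(out + off, tmp);
    }
}
```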


Re: [openssl-project] to fully overlap or not to

2018-03-30 Thread Andy Polyakov
> Once again:
> - This PR *recognizes* the fact that special cases do in fact *exist*.

??? Is there an application that heavily relies on some specific overlap
percentage?

> - The EVP layer has to be more aware of special cases anyhow.

Where does that come from? Why does it *have to* be more aware of special
cases? There is one special case that we support, the fully overlapping
buffer, and that's it; it doesn't *have to* be more than that. Besides,
EVP is a *generalized* layer and is not supposed to make overly specific
assumptions. Whereas in order to support the suggested cases in the suggested
way, you'll be forced to make an assumption about the underlying cipher.
Specifically, that it processes blocks in ascending memory order, i.e.
from lower to higher addresses. Once again, all our implementations do
that, but the *generalized* EVP layer is not the right spot to assume that
they actually do.

> - The test coverage of error cases is not perfect, but I try to
>improve that.
> - I don't see why we can't relax a check in cases that are known to
>work, especially when test cases are added at the same time.

Because we don't have to. Because the goal is not to provide maximum
flexibility, but maximum *robustness*. Well, that's not entirely true,
because robustness is not the only goal, but it's not inappropriate to
say that complexity has to be justified by something besides sheer why-not.


Re: [openssl-project] FW: April Crypto Bulletin from Cryptosense

2018-04-06 Thread Andy Polyakov
> This is one reason why keeping around old assembly code can have a cost. :(
> 
> https://github.com/openssl/openssl/pull/5320

There is nothing I can add to what I've already said. To quote myself:
"None of what I say means that everything *has to* be kept, but as
already said, some of them serve meaningful purpose..."

Well, I also said that "I'm *not* saying that bit-rot is not a concern,
only that it's not really assembly-specific." And I can probably add
something here, in addition to the already mentioned example of legacy code
relying on formally undefined or implementation-specific behaviour. It's
actually not that uncommon that *new* C code is committed[!!!]
"bit-rotten". So one can *just as well* say that supporting another
operating system has a cost, and so does using another compiler... Why
not get "angry" about that? What's the difference really? The relevant
question is what's more expensive: supporting multiple compilers?
multiple OSes? multiple assembly? To give a "tangible" example in the
context of the forwarded message [that mentions PA-RISC assembly code]: how
long did it take me to figure out what was wrong and verify that the
problem was resolved? A couple of hours (mostly because old systems are
slow and it takes time to compile our stuff). How long did it
take me to resolve the HP-UX problems triggered by the report on the 20th of
March? I'm presumably[!] done by about now... To summarize, one can make the
same argument about multiple things, yet we do them. Why? Because it works to
our advantage [directly or indirectly]...


[openssl-project] SM2

2018-04-09 Thread Andy Polyakov
Hi,

I'd like to hear *more* opinions in light of recent comments to
https://github.com/openssl/openssl/pull/4793. (Strangely enough I get
"This page is taking way too long to load" if I attempt to access it while
logged on[!?]. But I have no problem opening long-closed requests from the
middle of the list.) As for my opinion, I found myself objecting to the way
SM2 is hammered into ec_pmeth.c, and that is regardless of the concerns raised
in 4793, which merely reinforced the opinion. The concern is that we risk
being stuck maintaining quirky behaviour for a long time. It should really
be up to the application to choose the scheme (as suggested in 4793), or the
SM2 methods should be parameterizable. This means that I'd like to raise a
voice in favour of removing the hacks. If it means missing the release, I'd
argue that it's better to miss it than to get stuck with something that
would be problematic to support.


Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-15 Thread Andy Polyakov
> 2. Make TLSv1.2 the absolutely maximum TLS version available for
>programs linked with libssl 1.1.0.  This is what's done in this PR:
>https://github.com/openssl/openssl/pull/5945
>This makes sense insofar that it's safe, it works within the known
>parameters for the library these programs were built for.

Here is the thing. If we assume that 110/test/sslapitest.c is a
*representative* example of a 1.1.0 application, then it's not at all
given that #5945 would actually solve anything. Indeed, observing the
first failing test, a recompiled 110/test/sslapitest.c would still fail,
because it makes the assumption that the session id is available right after
negotiation, something that doesn't hold true for TLS 1.3. This gives
you either a) no 1.1.0 application is safe, recompiled[!] or not; or b)
110/test/sslapitest.c is not a representative example. Well, as far as the
first failing test goes, I'd say it's b), which means that it should be
adjusted in one way or another, or the failing check omitted. [It's b),
because it's unnatural to put a session id you've just got back into the id
cache; the test is ultimately synthetic.] This naturally doesn't answer
all the questions, but it does show that the above-mentioned premise is
somewhat flawed. I mean, "programs were built for [1.1.0]" would still
work if recompiled with #5945.

>It also makes sense if we view TLSv1.3 as new functionality, and
>new functionality is usually only available to those who
>explicitely build their programs for the new library version.
>TLSv1.3 is unusual in this sense because it's at least it great
>part "under the hood", just no 100% transparently so.

In such a case it should rather be #ifdef OPENSSL_ENABLE_TLS1_3 instead of
#ifndef OPENSSL_DISABLE_TLS1_3. And we don't want that.

To summarize, the failing tests in 110 should be revisited to see if they
are actually representative before one can consider measures as drastic
as #5945.


Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-18 Thread Andy Polyakov
> When I link posttls-finger with the OpenSSL 1.1.1 library, I see:

Just in case, for reference, one can reproduce all this with 1.1.1
s_client. The case below is -starttls smtp -noservername. The "fun" part is
that the OU for the self-signed certificate reads "No SNI provided; please fix
your client." Another "fun" part is that they don't seem to be concerned
with the SNI value; it can be literally anything.

>   posttls-finger: gmail-smtp-in.l.google.com[173.194.66.26]:25 
> CommonName invalid2.invalid
>   posttls-finger: certificate verification failed for
> gmail-smtp-in.l.google.com[173.194.66.26]:25:
> self-signed certificate
>   posttls-finger: gmail-smtp-in.l.google.com[173.194.66.26]:25:
> subject_CN=invalid2.invalid, issuer_CN=invalid2.invalid
>   posttls-finger: Untrusted TLS connection established to
> gmail-smtp-in.l.google.com[173.194.66.26]:25:
> TLSv1.2 with cipher ECDHE-RSA-CHACHA20-POLY1305 (256/256 bits)

The case below is -starttls smtp -tls1_2 [with or without servername].

> The same command with OpenSSL 1.1.0 yields (no CAfile/CApath
> so authentication fails where it would typically succeed):
> 
>   posttls-finger: certificate verification failed for
> gmail-smtp-in.l.google.com[173.194.66.27]:25:
> untrusted issuer /OU=GlobalSign Root CA - R2/O=GlobalSign/CN=GlobalSign
>   posttls-finger: gmail-smtp-in.l.google.com[173.194.66.27]:25:
> subject_CN=gmail-smtp-in.l.google.com,
> issuer_CN=Google Internet Authority G3,
>   posttls-finger: Untrusted TLS connection established to
> gmail-smtp-in.l.google.com[173.194.66.27]:25:
> TLSv1.2 with cipher ECDHE-RSA-CHACHA20-POLY1305 (256/256 bits)
> 
> This is a substantial behaviour change from TLS 1.2, and a rather
> poor decision on Google's part IMHO, though I'm eager to learn why
> this might have been a good idea.
> 
> In the mean-time, Richard's objection to automatic TLS 1.3 use
> after shared-library upgrade is starting to look more compelling?

The question is what users' expectation would be. If we arrange it so that an
application compiled with 1.1.0 simply won't negotiate 1.3, would
it be reasonable of them to expect that it will start working if they
simply recompile the application? I.e. without changing the application code!
But in that case it wouldn't actually help, for example, this case, but
only make things more confusing. I mean, you end up with "all I wanted was
1.3 and now it doesn't work at all". I'd say it would be more
appropriate to make the user ask "I want 1.3 but don't get it, why? It keeps
working with 1.2 though." With this in mind, wouldn't it be more
appropriate to simply not offer the 1.3 capability if the application didn't
provide input for SNI?


Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-18 Thread Andy Polyakov
> ... what does not make any sense to me is what Google is doing.  Snatching 
> defeat from the jaws of victory by needlessly forcing clients to downgrade to 
> TLS 1.2.  Is there a justification for this?

It can either be a probe just to see if it's reasonable to demand it, or
establish a precedent that they can refer to, saying "it was always like
that; *your* application is broken, not ours." Also note that, formally
speaking, you can't blame them for demanding it. But you can blame them
for demanding it wrong. I mean, they shouldn't try to communicate through the
OU of a self-signed certificate, but by terminating the connection with a
missing_extension alert, should they?


Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)

2018-06-01 Thread Andy Polyakov
> I think that the gist of the difference of opinion is whether it's OK
> to use locale dependent functions such as mbstowcs in libcrypto or
> not.
> 
> The main arguments against allowing such functions in libcrypto is
> that we should push applications to run with an UTF-8 input method
> (whether that's provided by the terminal driver, or the process
> holding the terminal, or the application itself...) rather than have
> them call setlocale() with appropriate arguments (which is needed for
> mbstowcs to work right).

The assertion is rather that we can't/shouldn't rely on the application to
call setlocale, and with an argument suitable for the specific purpose [of
accessing PKCS#12 in this case]. And since we can't do that, we would be
better off not calling mbstowcs, because it adds a variable we have no control
over. Given some specific input, it would be more honest/appropriate to
return an error to the application than to make the outcome conditional on
whether or not the application called setlocale, and with which argument. One
can also view it as follows: all applications get exactly the same treatment.
Pushing applications and users toward a UTF-8 input method is merely a
consequence of the suggestion to give all applications the same treatment;
it's not an alternative by itself.
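As an illustration of "same treatment for all applications", a pass phrase can
be checked for UTF-8 well-formedness directly, with no setlocale/mbstowcs
involved at all. This is my own minimal sketch, not the code under review:

```c
#include <stddef.h>

/* Minimal UTF-8 well-formedness check: rejects truncated sequences,
 * stray continuation bytes, overlong encodings, surrogates and code
 * points above U+10FFFF.  Returns 1 if valid, 0 otherwise.  The result
 * depends only on the input bytes, never on the process locale. */
static int is_valid_utf8(const unsigned char *s, size_t len)
{
    size_t i = 0;

    while (i < len) {
        unsigned char c = s[i];
        size_t extra;
        unsigned long cp;

        if (c < 0x80) {
            i++;
            continue;
        }
        if ((c & 0xe0) == 0xc0) {
            extra = 1; cp = c & 0x1f;
        } else if ((c & 0xf0) == 0xe0) {
            extra = 2; cp = c & 0x0f;
        } else if ((c & 0xf8) == 0xf0) {
            extra = 3; cp = c & 0x07;
        } else {
            return 0;               /* 0x80..0xBF or 0xF8..0xFF lead byte */
        }
        if (len - i - 1 < extra)
            return 0;               /* truncated sequence */
        for (size_t j = 1; j <= extra; j++) {
            if ((s[i + j] & 0xc0) != 0x80)
                return 0;           /* not a continuation byte */
            cp = (cp << 6) | (s[i + j] & 0x3fUL);
        }
        if ((extra == 1 && cp < 0x80) || (extra == 2 && cp < 0x800)
                || (extra == 3 && cp < 0x10000))
            return 0;               /* overlong encoding */
        if ((cp >= 0xd800 && cp <= 0xdfff) || cp > 0x10ffff)
            return 0;               /* surrogate or out of range */
        i += extra + 1;
    }
    return 1;
}
```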


Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries

2018-06-07 Thread Andy Polyakov
> This PR has been blocked, forcing a vote:
> 
> https://github.com/openssl/openssl/pull/6392
> 
> Background: we have been sloppy when producing PKCS#12 files, creating
> objects that aren't interoperable.  This can only happen with non-UTF8
> input methods, so this PR adds a higher level of control in the
> openssl application, so that it will do the best it can to make sure a
> pass phrase encoded with something other than UTF-8 gets correctly
> re-encoded, and failing that, try and make the user aware that they
> are about to create a non-interoperable object.  This triggered the
> use of the iconv API, and in the case of Mac OS/X, the use of the
> separate libiconv library.

I find the reference to Mac OS X a bit misleading, because it suggests
that the assessment was made on a limited number of data points; basically on
how it looks on *contemporary* Linux/Unix platforms and Mac OS X.
But the question runs deeper than that and should cover all platforms that we
consider supporting. That even covers ranges of older versions, in the
sense that judging by the latest version alone is hardly sufficient. For
example, do we know *when* libiconv was introduced to Mac OS X? One can
naturally say that we are not obliged to care about versions *that* old,
but is that an excuse for not making a more thorough assessment? I mean, it's
only appropriate if we can answer the question of how old a system has
to be for us to say "we don't care". And the same question applies even to
other platforms: OpenBSD, FreeBSD, Android, Cygwin, Solaris, AIX, HP-UX,
DJGPP, Tru64, IRIX, ... One can argue that iconv was actually
standardized, and in that case it would be appropriate to make it
conditional on _POSIX_VERSION. [Though that doesn't seem to be part of the
pull request in question. Why not?] But as far as _POSIX_VERSION goes,
we kind of know that some systems by *default* offer a lower version,
presumably in order to facilitate backward portability. That would
mean we would have to explicitly raise the bar in some cases. Which
ones? And how high? This brings us to the following question. Is *this*
actually the right moment to introduce that kind of *multi-variable*
problem? In other words, the problem has two sides: a) principal,
to do or not to do; b) *when* would it be appropriate to start; is a minor
release the right moment? Is b) part of the vote?


Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries

2018-06-07 Thread Andy Polyakov
> Regarding general use of other libraries, please think carefully before 
> voting, 'cause this *is* tricky. If you have a look, you will see that we 
> *currently* depend on certain standard libraries, such as, for example, libdl.

One has to recognize that each dependency has to be justified. Sometimes
you don't have a choice. For example, if you want to talk network on
Solaris, you have to link with -lsocket -lnsl. It's just part of the
game. But in cases where one has a choice, well, one has a choice to *make*.
And the key question is how you anchor it. It's only natural to have as
few dependencies as possible, so the question is what would justify an extra
dependency.

> And perhaps we should also mention the pile of libraries used with windows.

It's not about the amount, but about ubiquity and stability. Windows is a bad
example in this context, because it's a rather "mono-cultural" environment.
But *otherwise* the thing is that we already *know* that those extra
libraries work. Or at least we know what to expect and how to deal with
them on different platforms. They were effectively proven to work by
lasting through several releases and years-long bug ironing. This *is*
a factor too. And that's what made me pose "is b) part of the vote" in my last
post.


Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries

2018-06-10 Thread Andy Polyakov
> Regarding general use of other libraries, please think carefully before 
> voting, 'cause this *is* tricky. If you have a look, you will see that we 
> *currently* depend on certain standard libraries, such as, for example, 
> libdl. And perhaps we should also mention the pile of libraries used with 
> windows.
> 
> In my mind, this makes that more general vote ridiculous,

It certainly is; the vote should not be about such a general principle.
It's way too general and, most importantly, *too natural* to vote about.
In the sense that there are things that can't be a voting matter. For example,
"humans need air to breathe" is not, and neither is the assertion that OpenSSL
needs libraries. And it's utterly natural to *minimize*
dependencies and make *minimum* assumptions. This means that one can
only vote on specifics, i.e. do we want a specific dependency (at a specific
level) or make a specific assumption (at a specific level). That's why the
ballot should not be formulated "such as iconv", but *be* about iconv.

As for iconv per se. One has to recognize that it was standardized by
POSIX and one can expect it to be available on a compliant system. The trick
is to reliably identify the said systems, possibly by version, not
just by name. So it's just as important to have a fall-back to handle the
exclusions. And there will be exclusions; I can even name one already,
Android. And once again, I argue that there is even a timing issue. It's
not the right moment to make too broad assumptions about iconv availability,
arguing that one is prepared to deal with issues. One can only argue
that it might be appropriate to enable it in cases one can actually
verify and confirm.
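For concreteness, the POSIX API in question is iconv_open/iconv/iconv_close.
A minimal hedged sketch of re-encoding to UTF-8 follows; note that, exactly as
argued above, whether iconv_open() even succeeds for a given encoding name
depends on the platform, which is why the error path matters as much as the
conversion itself. This is my own illustration, not the code in #6392.

```c
#include <iconv.h>
#include <stddef.h>

/* Convert len bytes of `in` (encoded as `fromcode`) to UTF-8, writing at
 * most outcap bytes to `out`.  Returns the number of bytes written, or
 * (size_t)-1 on any failure -- including iconv_open() failing because
 * the platform does not know the requested encoding, which is the
 * availability problem discussed above. */
static size_t to_utf8(const char *fromcode,
                      const char *in, size_t len,
                      char *out, size_t outcap)
{
    iconv_t cd = iconv_open("UTF-8", fromcode);
    char *inp = (char *)in, *outp = out;
    size_t inleft = len, outleft = outcap, ret;

    if (cd == (iconv_t)-1)
        return (size_t)-1;          /* encoding unknown on this platform */
    ret = iconv(cd, &inp, &inleft, &outp, &outleft);
    iconv_close(cd);
    if (ret == (size_t)-1 || inleft != 0)
        return (size_t)-1;          /* invalid input or output too small */
    return outcap - outleft;
}
```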


Re: [openssl-project] Forthcoming OpenSSL releases

2018-08-07 Thread Andy Polyakov
> Forthcoming OpenSSL releases
> 

I have some RSA hardening fixes in pipeline...


Re: [openssl-project] Forthcoming OpenSSL releases

2018-08-07 Thread Andy Polyakov
>>> Forthcoming OpenSSL releases
>>> 
>>
>> I have some RSA hardening fixes in pipeline...
> 
> Do you have PR numbers for them?

"in pipeline" kind of means "not yet [but I'll intensify the work to put
them out]". In other words it's a pre-heads-up thing...



Re: [openssl-project] Forthcoming OpenSSL releases

2018-08-07 Thread Andy Polyakov
>>> Forthcoming OpenSSL releases
>>> 
>>
>> I have some RSA hardening fixes in pipeline...
> 
> Do you suggest we wait with a release on that, or can we just put
> it in the next release?

I should be able to pull it off before the release. What I'm saying is
that it would probably be appropriate to review them as they appear.


Re: [openssl-project] Removal of NULL checks

2018-08-09 Thread Andy Polyakov
> Should not be changed period.  Even across major release boundaries.
> This is not an ABI compatibility issue, it is a source compatibility
> issue, and should avoided all the time.  If we want to write a *new*
> function that skips the NULL checks it gets a new name.

While I admittedly crossed a line, I would actually argue against
drawing the line as categorically as above. In the sense that NULL checks
*can* be excessive, and the thing is that such checks can jeopardize
application security. In such cases it wouldn't be totally
inappropriate to remove them, the excessive checks, *without* renaming the
function as suggested above. It would be the kind of equivalent of trading
one undefined-behaviour pattern for another. Or rather for
a more "honest" undefined-behaviour pattern :-)

As for jeopardizing application security, taking this case as an
example. What worked so far? Create a stack without paying attention to the
result[!], dump things into the stack even if it's NULL, pull nothing if
it's NULL. Now imagine that this stack was some kind of deny list. If the
entry producer didn't pay attention to errors when creating the stack and
putting things into it, the consumer would be led to believe that there is
nothing to deny, and would let through the requests that are supposed to be
denied. This is kind of a not-quite-right example in the context, because
it implies that *all* NULL checks should have been removed, thus making
*every* caller pay attention, including the consumer. Whereas I left checks
in some of the consume operations, reckoning that at least those who put
things into a NULL stack will have to pay attention and will
cancel the whole operation, hopefully without getting to the pull-nothing
step. So those entry producers who are not *currently* paying attention
would actually be caught. Which would actually be a positive thing!

Oh! The triggering factor was quite a number of unchecked pushes in apps.
Yes, you can sense a contradiction, in the sense that one can't "fix" those
unchecked pushes with the removal of the NULL checks in question. But they
were just what made me look at stack.c and ponder the above questions.
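The deny-list failure mode described above can be modelled in a few lines.
This is a toy of my own, not OpenSSL's stack API: pushing to a NULL stack
reports an error that callers may ignore, and counting the entries of a NULL
stack answers zero, so a deny list whose creation failed is indistinguishable
from an empty deny list.

```c
#include <stdlib.h>
#include <string.h>

struct toy_stack {
    const char **data;
    int num;
};

/* Returns the new number of entries, or 0 on error. */
static int toy_push(struct toy_stack *sk, const char *p)
{
    const char **d;

    if (sk == NULL)
        return 0;                     /* error -- but does anyone check? */
    d = realloc(sk->data, (sk->num + 1) * sizeof(*d));
    if (d == NULL)
        return 0;
    sk->data = d;
    sk->data[sk->num++] = p;
    return sk->num;
}

static int toy_num(const struct toy_stack *sk)
{
    return sk == NULL ? 0 : sk->num;  /* NULL reads as "nothing to deny" */
}

/* Consumer: is `name` on the deny list? */
static int toy_denied(const struct toy_stack *deny, const char *name)
{
    for (int i = 0; i < toy_num(deny); i++)
        if (strcmp(deny->data[i], name) == 0)
            return 1;
    return 0;
}
```

With an inattentive producer, the denied name sails through; only a producer
that checks the push result ever finds out the list was never there.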


Re: [openssl-project] Forthcoming OpenSSL releases

2018-08-13 Thread Andy Polyakov
> Forthcoming OpenSSL releases
> 

 I have some RSA hardening fixes in pipeline...
>>>
>>> Do you suggest we wait with a release on that, or can we just put
>>> it in the next release?
>>
>> I should be able to pull it off in before release. What I'm saying is
>> that it would probably be appropriate to review them as they appear.
> 
> Is it #6915 you're talking about?

Updates to blinding are coming shortly.


Re: [openssl-project] Please freeze the repo

2018-08-13 Thread Andy Polyakov
It would be appropriate to merge
https://github.com/openssl/openssl/pull/6916 (1.0.2; the commit message
would need adjustment for the merged-from reference) and
https://github.com/openssl/openssl/pull/6596 (1.1.0, was marked "pending
for 2nd" but it's apparently ready).


[openssl-project] dropping out

2018-10-12 Thread Andy Polyakov
I'm dropping out in a week's time. It's not some single recent thing, but
a combination. I don't see that my perception of the project's spirit aligns
with the current state of affairs. This goes for both technical aspects and
policies (the lack of them, and/or their application). One can probably say
that these are misguided expectations, but that's where I draw the line for
myself. Another contributing factor is the lack of opportunities to pursue
so-to-say "fundamental" goals, formal validation of assembly code being
one example. Just in case: I don't know what the future holds exactly, but
it's not the end of "crypto programming" for me; it's just that I won't be
lending my expertise directly/solely to this project on a general basis.

As for my e-mail address, I'd like to argue that it would be appropriate
if it remained working for a while afterwards, at the very least to see
through some of the ongoing threads. I promise not to use it in any context
other than a purely technical one.


[openssl-project] speaking of versioning schemes

2018-10-12 Thread Andy Polyakov
As was once said, what matters at the end of the day is whether or not a
versioning scheme is *accepted* by "downstream". And by "accepted" I mean
to the degree that ROBOT wouldn't have happened to Facebook. In other
words, the first question should not be how to arrange digits and letters,
but rather how to mitigate the apparent tendency to deviate from "upstream".


Re: [openssl-project] dropping out

2018-10-12 Thread Andy Polyakov
>> Another contributing factor is the lack of opportunities to pursue
>> so-to-say "fundamental" goals, formal validation of assembly code being
>> one example.
> 
> Formal validation of the assembly code is something I would
> actually like to see, and even talked with some people about it.
> But I don't have the time to actually get this done.

To clarify. As far as *this* specific remark goes, it's not that I feel
constrained, by policies or anything of the sort. It's rather about a
"chronic" failure to find an opportunity to put things aside and devote a
concentrated effort. The remark belongs in the "what the future holds"
part. (As most should know by now, I'm not on good terms with words.)