Re: Encode UTF-8 optimizations

2016-08-20 Thread Karl Williamson

On 08/20/2016 08:33 PM, Aristotle Pagaltzis wrote:

* Karl Williamson  [2016-08-21 03:12]:

That should be done anyway to make sure we've got less buggy Unicode
handling code available to older modules.


I think you meant “available to older perls”?



Yes, thanks



Re: Encode UTF-8 optimizations

2016-08-20 Thread Aristotle Pagaltzis
* Karl Williamson  [2016-08-21 03:12]:
> That should be done anyway to make sure we've got less buggy Unicode
> handling code available to older modules.

I think you meant “available to older perls”?


Re: Encode UTF-8 optimizations

2016-08-20 Thread Karl Williamson

Top posting.

Attached is my alternative patch.  It effectively uses a different 
algorithm to avoid decoding the input into code points, and to copy whole 
spans of valid input at once, instead of one character at a time.


And it uses only currently available functions.  Any of these that are 
missing or buggy in previous perls can and will be dealt with by the 
Devel::PPPort mechanism.
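
As an illustration only (this is a sketch of the span-copying idea, not 
the attached patch), using the public is_utf8_string_loclen() and 
sv_catpvn() calls; handle_malformed() is a hypothetical helper standing 
in for whatever warning/substitution/croak behaviour the caller wants:

    /* hypothetical helper, defined elsewhere: report/repair the malformed
     * sequence starting at p and return the position just past it */
    static const U8 *handle_malformed(SV *dst, const U8 *p, const U8 *pend);

    /* Sketch only -- appends the well-formed UTF-8 spans of s..send to
     * dst in whole chunks, handing each malformed sequence to the helper. */
    static void
    append_valid_spans(pTHX_ SV *dst, const U8 *s, const U8 *send)
    {
        while (s < send) {
            const U8 *ev;   /* set to where the leading valid span ends */
            STRLEN chars;   /* characters in that span (not used here) */

            if (is_utf8_string_loclen(s, send - s, &ev, &chars)) {
                sv_catpvn(dst, (const char *)s, send - s);  /* rest is valid */
                return;
            }
            sv_catpvn(dst, (const char *)s, ev - s);   /* copy the valid span */
            s = handle_malformed(dst, ev, send);       /* skip past the bad part */
        }
    }

So well-formed input is appended in whole chunks, and nothing on the valid 
path is decoded into code points.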


On 08/19/2016 02:42 AM, p...@cpan.org wrote:

On Thursday 18 August 2016 23:06:27 Karl Williamson wrote:

On 08/12/2016 09:31 AM, p...@cpan.org wrote:

On Thursday 11 August 2016 17:41:23 Karl Williamson wrote:

On 07/09/2016 05:12 PM, p...@cpan.org wrote:

Hi! As we know, utf8::encode() does not provide correct UTF-8 encoding,
and Encode::encode("UTF-8", ...) should be used instead. Also, opening a
file should be done with the :encoding(UTF-8) layer instead of :utf8.

But the strict UTF-8 implementation in the Encode module is horribly slow
compared to utf8::encode(). It is implemented in the Encode.xs file, and for
benchmarking this XS implementation can be called directly by:

use Encode;
my $output = Encode::utf8::encode_xs({strict_utf8 => 1}, $input)

(without the overhead of the Encode module...)

Here are my results for a 160-byte-long input string:

Encode::utf8::encode_xs({strict_utf8 => 1}, ...):  8 wallclock secs ( 8.56 usr +  0.00 sys =  8.56 CPU) @ 467289.72/s (n=4000000)

Encode::utf8::encode_xs({strict_utf8 => 0}, ...):  1 wallclock secs ( 1.66 usr +  0.00 sys =  1.66 CPU) @ 2409638.55/s (n=4000000)

utf8::encode:  1 wallclock secs ( 0.39 usr +  0.00 sys =  0.39 CPU) @ 10256410.26/s (n=4000000)


I found two bottlenecks (the slow sv_catpv* and utf8n_to_uvuni functions)
and did some optimizations. The final results are:

Encode::utf8::encode_xs({strict_utf8 => 1}, ...):  2 wallclock secs ( 3.27 usr +  0.00 sys =  3.27 CPU) @ 1223241.59/s (n=4000000)

Encode::utf8::encode_xs({strict_utf8 => 0}, ...):  1 wallclock secs ( 1.68 usr +  0.00 sys =  1.68 CPU) @ 2380952.38/s (n=4000000)

utf8::encode:  1 wallclock secs ( 0.40 usr +  0.00 sys =  0.40 CPU) @ 10000000.00/s (n=4000000)


The patches are on GitHub at this pull request:
https://github.com/dankogai/p5-encode/pull/56

I would appreciate it if somebody could review my patches and tell me
whether this is the right way to do these optimizations...



I'm sorry that this slipped off my radar until I saw it in the new Encode
release.

There are a couple of things I see wrong with your patch.

1) It does not catch the malformation of an overlong sequence.  This is a
serious malformation which has been used for attacks.  Basically, after you
get the result, you need to check that it is the expected length for that
result.  For example, \xC2\x80 will have an input length of 2, and evaluates
to \x00, whose expected length is 1, and so the input is overlong.  In
modern perls, you can just do an OFFUNISKIP(uv) and compare that with the
passed-in length.  This can be rewritten for perls back to 5.8 using
UNI_SKIP and UNI_TO_NATIVE.
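
As a concrete sketch of that length comparison (on a modern perl; this is
not the Encode.xs code), where len is the number of bytes consumed for the
character and uv is the code point it decoded to:

    /* Overlong check: the bytes consumed must equal the shortest possible
     * encoding of the resulting code point.  "\xC2\x80" consumes 2 bytes
     * but decodes to U+0000, whose shortest encoding is 1 byte. */
    if ((STRLEN) OFFUNISKIP(uv) != len)
        croak("Malformed UTF-8: overlong sequence");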


I do not see where the problem can be. At least, I think my patches should
be compatible with the previous implementation of Encode.xs...

First, UTF8_IS_INVARIANT is checked and one character is processed.

Otherwise, UTF8_IS_START is checked and UTF8SKIP is used to get the length
of the sequence. Then the following len-1 bytes are checked to see whether
they pass the UTF8_IS_CONTINUATION test.

If there are fewer bytes, then the following byte does not pass
UTF8_IS_CONTINUATION and an error is reported. If there are more, then the
next iteration of the loop starts and that byte fails both the
UTF8_IS_CONTINUATION and UTF8_IS_START checks.
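
For readers following along, the checking scheme described above looks
roughly like this (an illustrative sketch, not the patch code itself):

    /* Returns true if every byte of s..send belongs to a structurally
     * well-formed sequence: an ASCII-range byte on its own, or a start
     * byte followed by the expected number of continuation bytes. */
    static int
    looks_structurally_valid(const U8 *s, const U8 *send)
    {
        while (s < send) {
            if (UTF8_IS_INVARIANT(*s)) {        /* ASCII-range byte */
                s++;
            }
            else if (UTF8_IS_START(*s)) {
                STRLEN len = UTF8SKIP(s);       /* expected sequence length */
                STRLEN i;
                if ((STRLEN)(send - s) < len)
                    return 0;                   /* truncated sequence */
                for (i = 1; i < len; i++)
                    if (!UTF8_IS_CONTINUATION(s[i]))
                        return 0;               /* missing continuation byte */
                s += len;
            }
            else {
                return 0;      /* stray continuation or invalid start byte */
            }
        }
        return 1;
    }

Note that, as Karl's point 1 above argues, a purely structural check like
this does not by itself reject overlong encodings.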

Can you describe in detail what you think is wrong and how that attack
could be done?


This discussion has been active at
https://github.com/dankogai/p5-encode/issues/64

For the curious out there, please refer to that discussion.  My bottom line
is that I have come to believe the security risks are too high to have
modules do their own security checking for UTF-8 correctness.



2) It does not work on EBCDIC platforms.  The NATIVE_TO_UTF() call is a good
start, but the result uv needs to be transformed back to native, using
UNI_TO_NATIVE(uv).
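
In code, that transformation is just (sketch; the variable name is
illustrative):

    /* On EBCDIC, the value computed from the UTF-8 bytes is a Unicode code
     * point; convert it back before treating it as a native character. */
    uv = UNI_TO_NATIVE(uv);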


uv is used just to check whether it is a valid Unicode code point. The real
value is used only for the error/warning message. The previous
implementation used utf8n_to_uvuni, which converts the return value with
NATIVE_TO_UNI.


As I said on that other thread, if this is really true, then it's faster to
use a boolean function to verify the inputs.


The value of uv is used only in the warning/error message.


Also, performance should not
be a consideration for errors or warnings.  One can check validity fast; and
then spend the time getting the message right in the rare cases where a
message is generated.
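
A minimal sketch of that split, assuming the public is_utf8_string_loc()
API and a buffer s of len bytes (the real patch may use a different entry
point): validate cheaply first, and build the detailed message only on the
rare failure path.

    const U8 *failed_at;   /* set to where the first malformation starts */

    if (!is_utf8_string_loc(s, len, &failed_at)) {
        /* Rare path: only now spend time on a precise diagnostic;
         * performance is irrelevant here. */
        croak("Malformed UTF-8 at byte offset %ld (first byte 0x%02X)",
              (long)(failed_at - s), (unsigned)*failed_at);
    }
    /* Fast path: the whole buffer is well-formed; proceed without decoding
     * individual code points. */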


Yes, I fully agree.


3) The assumptions the subroutine runs under need to be documented for
future maintainers and code readers.  For example, it assumes that there is
enough space in the input to hold all the bytes.


The function process_utf8 does not assume that. It calls SvGROW to increase
the buffer size when needed.
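
For readers unfamiliar with the macro, this is the usual pattern
(illustrative; need is whatever upper bound the encoder computes for the
bytes about to be appended):

    /* Ensure dst can take 'need' more bytes plus the trailing NUL;
     * SvGROW() only reallocates when the current buffer is too small. */
    U8 *d = (U8 *) SvGROW(dst, SvCUR(dst) + need + 1);
    d += SvCUR(dst);            /* continue writing at the current end */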


You misunderstand what I meant.