I just took a look at the source file in hex mode, and the problem is
definitely inherent in the source file.  For the one remaining test
failure, there is a character that looks like the degree symbol in both the
expected literal string and the test data.  Viewed in hex mode, the first
occurrence is encoded as 'b0'x, but in the test data, it is encoded as
'a7'x.  Looking at the same file on Windows shows both occurrences as
'b0'x.  Is it possible that this file is somehow getting munged as part of
the SVN checkout?
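
For what it's worth, the same check can be made from inside the interpreter
rather than a hex editor.  This is just a quick sketch of mine, assuming the
file is meant to be Latin-1:

    /* quick sketch, not part of the suite: dump the bytes of the literal so
       the encoding can be checked without an external hex viewer */
    suspect = '°'          -- the character as it is stored in this source file
    say suspect~c2x        -- should print B0 on a Latin-1 file; A7 would mean it got munged
    say 'B0'x              -- prints a degree sign if the console encoding agrees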

Adding to the mystery is the fact that the string test group uses the same
data but somehow manages to avoid getting munged; there the problem
character is encoded as 'b0'x in both cases.

These particular tests don't really add much to the testing, other than the
fact that they use these characters.  They should probably be done with data
specified as hex literals rather than relying on the file encoding, which
seems to be problematic.  I think we should just ignore these failures for
now and look at fixing the base tests to avoid these problems in the future.
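
Something along these lines is what I have in mind; the hex values are just
my Latin-1 reading of the intended data (including the lengths of the blank
runs), so they would need to be checked against the real test before use:

    ::method 'test0174'
      -- same idea as the test quoted below, but with the data written as a
      -- hex literal so the result no longer depends on the source file
      -- encoding (byte values assume Latin-1 and are illustrative only)
      data = '20 A7E4E8 2020 B0FCE9 20202020 DFA2AC'x '0909'x
      test = .MutableBuffer~new(data)
      self~assertSame('A7E4E8'x || '+' || 'B0FCE9'x || '+' || 'DFA2AC'x, test~space(1, '+'))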

Rick





On Fri, Jan 31, 2014 at 1:43 PM, Rick McGuire <object.r...@gmail.com> wrote:

> The lexical parser does not parse extended strings; it treats all data as
> plain 8-bit characters.  If the parser were the problem, the failures would
> also show up on Windows, which is not happening.  There's definitely
> something strange going on with the file encoding, but I have no idea where
> the problem is.  I've been able to recreate this on my Fedora virtual
> machine (sort of); however, I only got 3 errors, not the entire set you
> got.  In addition, I tried modifying one of the failing tests to this:
>
> ::method 'test0174'
>
>     data = ' §äè  °üé    ߢ¬' '0909'x
>     say data
>     say data~length
>     test = .MutableBuffer~new(data)
>     say test
>     say test~length
>     self~assertSame('§äè+°üé+ߢ¬', test~space(1,'+'))
>
> And the failure went away, along with one of the other failures that I did
> not change, leaving me with just a single failure.  All of this tells me
> that the problem is in the actual encoding of the source files, but I have
> no clue what might be causing it to behave so inconsistently.
>
> Rick
>
>
> On Fri, Jan 31, 2014 at 1:17 PM, David Ashley <w.david.ash...@gmail.com>wrote:
>
>> When you create the string using .MutableBuffer~new('20c2a720'x),
>> everything is OK; all results are as expected.
>>
>> This makes me think that there is something wrong at a lower level,
>> perhaps in the character string parsing (the lex part of the
>> interpreter). Perhaps it is not parsing extended strings correctly? That
>> is my only guess at the moment.
>>
>> David Ashley
>>
>>
>>