On 2016-09-23, Kornel Benko wrote:
> On Friday, 23 September 2016 at 07:29:14, Guenter Milde
> <mi...@users.sf.net> wrote:
>> On 2016-09-22, Kornel Benko wrote:
>> > On Thursday, 22 September 2016 at 21:25:07, Guenter Milde
>> > <mi...@users.sf.net> wrote:
>> >> On 2016-09-22, Kornel Benko wrote:

>> >> > Now, export/doc/es/Additional_pdf4_texF is under label lyxbugs and
>> >> > under label ert in invertedTests. At the same time it is in
>> >> > unreliableTests (mask = export/doc/es/.*_(pdf[45]|dvi3)_texF) under
>> >> > label wrong_output.

...

>> > You could use for instance
>> >    Sublabel: wrong_output ert lyxbugs

>> This would mean 3 regexp patterns instead of 2 simpler ones
>> to single out the test case that currently matches both:

> Yes, you have your points ...

> In the ideal case, however, the (inverted|suspended|unreliable)Tests should
> be empty.

Don't strive for this "ideal".
Rather, the (inverted|suspended|unreliable)Tests files are comparable to trac
issues: having no open trac issues would usually signal that the project is
dead...

As long as we are working on LyX, both trac and the *Tests files are a
valuable buffer that allows us to prioritise which problems to solve first.


>> Currently, we have two "orthogonal" clauses:

>> 1. for a problem common to all "Additional.lyx" exports in
>>    invertedTests:

>>      #9871 LyX sends invalid Unicode to iconv when converting to ASCII
>>      # most probably due to BabelPreamble code (language-specific headings
>>      # for theorems, problems, ... are written in the language's default
>>      # encoding if they contain non-ASCII characters)
>>      export/doc/(es|fr)/Additional_pdf4_texF

>> 2. for a problem common to all Spanish documents with 8-bit fonts and
>>    Xe/LuaTeX:

>>      # Babel-Spanish uses Babel's "strings" feature to define
>>      # separate auto-strings using UTF-8 literals.
>>      # Babel uses the "unicode" strings if it detects XeTeX or LuaTeX.
>>      # This is wrong for Xe/Lua with 8-bit TeX-fonts.
>>      # set inputenc to utf8?
>>      # (Changing the default in lib/languages requires more tests for utf8 first.)
>>      export/examples/es/linguistics_pdf4_texF
>>      export/doc/es/.*_(pdf[45]|dvi3)_texF
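
To make the overlap concrete: the test name from the first clause also matches the second clause's mask. A rough Python sketch (the matching helper and the Customization test name are illustrative; the patterns are copied verbatim from the two clauses above):

```python
import re

# Patterns copied from the two "orthogonal" clauses above.
INVERTED = re.compile(r"export/doc/(es|fr)/Additional_pdf4_texF")
UNRELIABLE = re.compile(r"export/doc/es/.*_(pdf[45]|dvi3)_texF")

tests = [
    "export/doc/es/Additional_pdf4_texF",     # matches both clauses
    "export/doc/fr/Additional_pdf4_texF",     # only the invertedTests clause
    "export/doc/es/Customization_dvi3_texF",  # only the unreliableTests clause
]

for name in tests:
    labels = []
    if INVERTED.fullmatch(name):
        labels.append("invertedTests")
    if UNRELIABLE.fullmatch(name):
        labels.append("unreliableTests")
    print(name, "->", ", ".join(labels))
```

The Spanish Additional export is the only name caught by both patterns, which is exactly the double-listing problem discussed here.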


>> -1 Making an exception for Additional.lyx in the second pattern complicates
>>    the regexp considerably.

> If we could considerably reduce the number of failings, we can also
> omit regexes and instead use full test names.

If we could solve all trac issues within two weeks, we would not need
keywords, importance levels, and the like...


Considerably reducing the number of failings would require one of the
following ugly options:

  * spend most developing time on minor cases instead of moving forward
  
  * use ugly hacks (either in documentation or in the LyX code) just to get
    export working
    
  * reduce the number of tests (don't test corner cases like *.texF)
  
  * don't support "tricky" or rarely used packages, combinations, ...

I strongly prefer to adapt the test system to work well with large,
sorted, and well-commented lists of "known failures".



>> -1 When one of the two problems is solved, there must be edits at two
>> places.

> True :(

>> -1 The two independent problems belong to different "*Tests" labeling files.
>>    Where should the third clause for wrong_output ert lyxbugs go?


>> It is quite a common case that documents have more than one "orthogonal"
>> problem.

> Yes, unfortunately we cannot make tests selective to specific failures.

>> It is also quite a common case that a document currently fails to export,
>> but we know that even if it compiled fine the output would be wrong.

>> I think we need to change the test system to care for this.

>> One possibility would be to run the "invertedTests" filter also for
>> test cases matching "unreliableTests" and give them two labels.

> This contradicts our intent for unreliableTests. To be clear, I have no
> better idea.

It is exactly what I have in mind with unreliableTests:

  # Regular expressions for tests that do not work as expected
  # (either unreliable or invalid).

When handling "unreliable" independently of inversion, we can still:

-> Skip these tests with `-LE unreliable`

-> Run with e.g. `-L nonstandard` if you have the extra requirements,
   or with `-L wrong_output` if you want to check...

-> If running the complete suite, ignore failures of unreliable tests
   unless you know what you are doing.
   
   A report to the list would say something like
   
        There are 13 failures: 1 regression, 2 inverted tests that now work,
        and 10 unreliable tests.
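
The `-L`/`-LE` behaviour assumed above can be mimicked in a few lines. A sketch with made-up test names and label sets (ctest actually matches labels against a regex; this simplifies to exact label names):

```python
# Made-up test names with hypothetical label sets, mimicking
# ctest's -L (include by label) and -LE (exclude by label) options.
tests = {
    "export/doc/es/Additional_pdf4_texF": {"inverted", "unreliable", "wrong_output"},
    "export/doc/es/Customization_pdf5_texF": {"unreliable", "wrong_output"},
    "export/doc/de/UserGuide_pdf2": {"export"},
}

def select(tests, include=None, exclude=None):
    """Keep tests whose label set contains `include` (if given)
    and does not contain `exclude` (if given)."""
    result = []
    for name, labels in tests.items():
        if include and include not in labels:
            continue
        if exclude and exclude in labels:
            continue
        result.append(name)
    return sorted(result)

print(select(tests, exclude="unreliable"))    # like `ctest -LE unreliable`
print(select(tests, include="wrong_output"))  # like `ctest -L wrong_output`
```

With both labels attached, a test can be skipped in a routine run yet still be selectable when one wants to inspect the wrong output, which is the point of labeling it twice.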
        

Günter  
