Re: [lldb-dev] Minimum required swig version?

2020-04-16 Thread Davidino Italiano via lldb-dev


> On Apr 16, 2020, at 3:08 PM, Jonas Devlieghere wrote:
> 
> 
> 
> On Thu, Apr 16, 2020 at 2:42 PM Davidino Italiano via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> 
> 
>> On Apr 16, 2020, at 2:28 PM, Ted Woodward via lldb-dev 
>> <lldb-dev@lists.llvm.org> wrote:
>> 
>> http://lldb.llvm.org/resources/build.html says we need SWIG 2 or later:
>> If you want to run the test suite, you’ll need to build LLDB with Python 
>> scripting support.
>> 
>> · Python <http://www.python.org/>
>> · SWIG <http://swig.org/> 2 or later.
>>  
>> I don’t think this is correct anymore.
>>  
>> test/API/python_api/sbenvironment/TestSBEnvironment.py has this line:
>> env.Set("FOO", "bar", overwrite=True)
>>  
>> lldb built with swig 2.0.11 fails this test with the error:
>> env.Set("FOO", "bar", overwrite=True)
>> TypeError: Set() got an unexpected keyword argument 'overwrite'
>>  
>> It works when lldb is built with swig 3.0.8.
>>  
> 
> Yes, we bumped the SWIG requirements.
> SWIG 2, among other things, doesn’t support Python 3 correctly.
> 
> I think you're confusing SWIG 1.x and SWIG 2.x. We bumped the requirements to 
> 2, because that's the first version that correctly supported Python 3. 
> Personally I don't mind bumping the version again, but this seems more like a 
> bug that we should be able to fix with SWIG 2. 
>  


While SWIG 2 has support for Python 3, it doesn’t work in all cases (there are 
bugs). Hence the choice of the word “correctly”, rather than “at all”.
If you get past this, you’ll probably find other problems, as I did when I 
originally made the transition. Some of them are trivial; some of them cause 
the generated Python code to be incorrect and tests to fail.
If you want to fix them, be my guest. But realistically, everybody I’ve seen 
builds using SWIG 3 [or SWIG 4]. Pick your poison.
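For what it’s worth, the failure mode is easy to model in plain Python. This is a hypothetical sketch, not real SWIG output (all class names are made up): older SWIG tends to dispatch overloaded methods through *args-only wrappers, which cannot accept keyword arguments, while a wrapper with named parameters can.

```python
# Rough illustration of the TypeError above. These classes only mimic the
# *shape* of generated wrappers; they are not actual SWIG output.

class Swig2StyleEnv:
    """Overloads dispatched via *args: no keyword names are available."""
    def Set(self, *args):
        name, value, overwrite = args
        return True

class Swig3StyleEnv:
    """Wrapper with named parameters: keyword calls work."""
    def Set(self, name, value, overwrite):
        return True

env2, env3 = Swig2StyleEnv(), Swig3StyleEnv()

assert env3.Set("FOO", "bar", overwrite=True)  # keyword call succeeds
assert env2.Set("FOO", "bar", True)            # positional call still works

try:
    env2.Set("FOO", "bar", overwrite=True)     # mirrors the reported failure
except TypeError as e:
    print("TypeError:", e)
```

The same mismatch is why the test passes only when LLDB is built with a newer SWIG: the keyword name simply does not exist in the older wrapper.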

—
Davide
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Minimum required swig version?

2020-04-16 Thread Davidino Italiano via lldb-dev


> On Apr 16, 2020, at 2:28 PM, Ted Woodward via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> 
> http://lldb.llvm.org/resources/build.html says we need SWIG 2 or later:
> If you want to run the test suite, you’ll need to build LLDB with Python 
> scripting support.
> 
> · Python
> · SWIG 2 or later.
>  
> I don’t think this is correct anymore.
>  
> test/API/python_api/sbenvironment/TestSBEnvironment.py has this line:
> env.Set("FOO", "bar", overwrite=True)
>  
> lldb built with swig 2.0.11 fails this test with the error:
> env.Set("FOO", "bar", overwrite=True)
> TypeError: Set() got an unexpected keyword argument 'overwrite'
>  
> It works when lldb is built with swig 3.0.8.
>  

Yes, we bumped the SWIG requirements.
SWIG 2, among other things, doesn’t support Python 3 correctly.

Feel free to submit a patch.

—
D


Re: [lldb-dev] [RFC] Upstreaming Reproducer Capture/Replay for the API Test Suite

2020-04-06 Thread Davidino Italiano via lldb-dev


> On Apr 6, 2020, at 2:24 PM, Jonas Devlieghere via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> 
> Hi everyone,
> 
> Reproducers in LLDB are currently tested through (1) unit tests, (2) 
> dedicated end-to-end shell tests, and (3) the `lldb-check-repro` suite, which 
> runs all the shell tests against a replayed reproducer. While this already 
> provides great coverage, we're still missing out on about 800 API tests. 
> These tests are particularly interesting to the reproducers because, as 
> opposed to the shell tests, which only exercise a subset of the SB API calls 
> used to implement the driver, they cover the majority of the API surface.
> 
> To further qualify reproducer and to improve test coverage, I want to capture 
> and replay the API test suite as well. Conceptually, this can be split up in 
> two stages: 
> 
>  1. Capture a reproducer and replay it with the driver. This exercises the 
> reproducer instrumentation (serialization and deserialization) for all the 
> APIs used in our test suite. While a bunch of issues with the reproducer 
> instrumentation can be detected at compile time, a large subset only triggers 
> through assertions at runtime. However, this approach by itself only verifies 
> that we can (de)serialize API calls and their arguments. It has no knowledge 
> of the expected results and therefore cannot verify the results of the API 
> calls.
> 
>  2. Capture a reproducer and replay it with dotest.py. Rather than having the 
> command line driver execute every API call one after another, we can have 
> dotest.py call the Python API as it normally would, intercept the call, 
> replay it from the reproducer, and return the replayed result. The 
> interception can be hidden behind the existing LLDB_RECORD_* macros, which 
> contain sufficient type info to drive replay. It then simply re-invokes 
> itself with the arguments deserialized from the reproducer and returns that 
> result. Just as with the shell tests, this approach allows us to reuse the 
> existing API tests, completely transparently, to check the reproducer output.
> 
> I have worked on this over the past month and have shown that it is possible 
> to achieve both stages. I have a downstream fork that contains the necessary 
> changes.
> 
> All the runtime issues found in stage 1 have been fixed upstream. With the 
> exception of about 30 tests that fail because the GDB packets diverge during 
> replay, all the tests can be replayed with the driver.
> 
> About 120 tests, which include the 30 mentioned earlier, fail to replay for 
> stage 2. This isn't entirely unexpected: just like with the shell tests, 
> there are tests that simply are not expected to work. The reproducers don't 
> currently capture the output of the inferior, and synchronization through 
> external files won't work either, as those paths will get remapped by the 
> VFS. This requires manual triage.
> 
> I would like to start upstreaming this work so we can start running this in 
> CI. The majority of the changes are limited to the reproducer 
> instrumentation, but some changes are needed in the test suite as well, and 
> there would be a new decorator to skip the unsupported tests. I'm splitting 
> up the changes in self-contained patches, but wanted to send out this RFC 
> with the bigger picture first.

I personally believe this is a required step to make sure:
a) Reproducers can jump from being a prototype idea to something that can 
actually run in production.
b) Whenever we add a new test [or presumably a new API] we get coverage 
for free.
c) We have a verification mechanism to make sure we don’t regress across the 
large API surface, and not only what the unit tests & shell tests cover.

I personally would be really glad to see this being upstreamed. I also would 
like to thank you for doing the work in a downstream branch until you proved 
this was achievable.
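To make the stage-2 idea concrete, here is a very rough, hypothetical sketch of the interception scheme described in the proposal. The real mechanism lives behind the C++ LLDB_RECORD_* macros inside LLDB; every name below is made up for illustration only.

```python
# Hypothetical model of replay interception: a reproducer maps recorded
# (api, args) calls to their captured results; during replay the wrapper
# returns the captured result instead of executing the call live.

recorded = {
    ("SBEnvironment.Set", ("FOO", "bar", True)): True,
}

def make_replayed(api_name, live_func):
    """Wrap an API entry point so recorded calls are served from the log."""
    def wrapper(*args):
        key = (api_name, args)
        if key in recorded:            # replay path: use the captured result
            return recorded[key]
        return live_func(*args)        # fall back to live execution
    return wrapper

def live_set(name, value, overwrite):  # stand-in for the real API call
    raise RuntimeError("should not execute live during replay")

replayed_set = make_replayed("SBEnvironment.Set", live_set)
print(replayed_set("FOO", "bar", True))  # served from the reproducer
```

In this model the test (dotest.py) still drives the Python API as usual; only the result comes from the reproducer, which is what lets the existing assertions check replayed output transparently.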

—
D
