Okay, glad you found a command line that worked. I'll get an FC VM up and work on a fix for that environment.

Yes. Regarding the lib/lib64 thing, I googled "fedora python lib lib64" and came across http://serverfault.com/questions/60619/fedora-usr-lib-vs-usr-lib64, amongst many other stories. So it seems that Fedora applies a patch to vanilla Python, and other distros do similar things (ArchLinux, apparently; I think I read that Ubuntu symlinks lib and lib64 together). The nub of it is that pure-Python packages (no C extensions) go in lib, whereas anything that depends on one's bitness/platform architecture should go in lib64.

So as far as Fedora's rationale is concerned, my initial quick fix is not great (though it works):

Index: scripts/Python/finish-swig-Python-LLDB.sh
===================================================================
--- scripts/Python/finish-swig-Python-LLDB.sh    (revision 213650)
+++ scripts/Python/finish-swig-Python-LLDB.sh    (working copy)
@@ -101,9 +101,9 @@

     if [ -n "${PYTHON_INSTALL_DIR}" ]
     then
-        framework_python_dir=`${PYTHON} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(True, False, \"${PYTHON_INSTALL_DIR}\");"`/lldb
+        framework_python_dir=`${PYTHON} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(False, False, \"${PYTHON_INSTALL_DIR}\");"`/lldb
     else
-        framework_python_dir=`${PYTHON} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(True, False);"`/lldb
+        framework_python_dir=`${PYTHON} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(False, False);"`/lldb
     fi
 fi


since the first parameter to *get_python_lib* is actually plat_specific; when set to True, it declares that the site-package is platform-specific because it relies on C/C++ extensions. I.e.

def get_python_lib(plat_specific=0, standard_lib=0, prefix=None):

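For what it's worth, the same pure-vs-platform split is visible through the modern *sysconfig* module, which is a rough sketch of what the distutils call above resolves to (the exact paths depend on the distro and Python build):

```python
import sysconfig

# "purelib" is where pure-Python packages go (lib/...);
# "platlib" is where platform-specific packages with C extensions go
# (lib64/... on Fedora-style layouts; the two coincide on many distros).
purelib = sysconfig.get_path("purelib")
platlib = sysconfig.get_path("platlib")

print(purelib)
print(platlib)
```

On a stock Fedora install these differ only in lib vs. lib64, which is exactly the mix-up the patch above trips over.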
Another possible fix is to redirect the symlink, which merely assumes that we know the binaries' lib directory will be called lib. Which seems plausible ;-)

Index: scripts/Python/finish-swig-Python-LLDB.sh
===================================================================
--- scripts/Python/finish-swig-Python-LLDB.sh    (revision 213650)
+++ scripts/Python/finish-swig-Python-LLDB.sh    (working copy)
@@ -158,7 +158,7 @@
     then
         ln -s "../../../LLDB" _lldb.so
     else
-        ln -s "../../../liblldb${SOEXT}" _lldb.so
+        ln -s "../../../../lib/liblldb${SOEXT}" _lldb.so
     fi
 else
     if [ $Debug -eq 1 ]


Actually the above patch, whilst redirecting the symlink correctly, does not solve my problem: even with suitably adjusted LD_LIBRARY_PATH and PYTHONPATH I see:

~/src/staging/llvm/tools/lldb/test
$ LD_LIBRARY_PATH=/home/mg11/src/staging/build/lib/ PYTHONPATH=/home/mg11/src/staging/build/lib64/python2.7/site-packages/ python dotest.py --executable=/home/mg11/src/staging/build/bin/lldb -v --compiler=gcc -q .

This script requires lldb.py to be in either /home/mg11/src/staging/llvm/tools/lldb/build/Debug/LLDB.framework/Resources/Python, /home/mg11/src/staging/llvm/tools/lldb/build/Release/LLDB.framework/Resources/Python, or /home/mg11/src/staging/llvm/tools/lldb/build/BuildAndIntegration/LLDB.framework/Resources/Python

I can pursue the "redirect the symlink" solution over here, if you like, i.e. if you think it's preferable to modifying the *get_python_lib* invocation.

Or perhaps we just hardcode the framework_python_dir for linux to be /your/path/lib/python2.7/site-packages/lldb ?

    I note that the results say:

    Ran 1083 tests in 633.125s


Reasonable - how many cores are you using?  (This was a VM, right?)

No, it's not a VM; it's an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz (4 hyperthreaded cores).

Yes, that's reasonable for Linux. The skipped tests are generally Darwin/MacOSX tests: there are nearly 2 tests total for every one that can run on Linux. The others are generally a variant of debug-info packaging that is only available on MacOSX.

The expected failures represent the tests that we don't have working on Linux (often paired with FreeBSD): code and/or test bugs that need to be addressed. (If you're ever feeling like doing some LLDB spelunking, these are great learning opportunities to pick up!)

Indeed! I'm sure I'll be traversing crevices enough trying to get stack unwind to work on kalimba with lldb!

The unexpected successes represent one of two things:

1. Tests marked XFAIL that are intermittent, and so sometimes pass, falling into this bucket. This is the best we can do with these for now, until we get rid of the intermittent nature of the test. Note that the multi-core test running done by the build systems stresses the tests more heavily than running them individually.

2. Tests marked XFAIL that now always pass, which should no longer be marked XFAIL. The majority do not fall into this category, but it does represent a state that can occur once we fix the underlying race and/or timing issue that made the test intermittent in the first place.

    The only actual failure I saw was:

    FAIL: test_stdcxx_disasm
    (TestStdCXXDisassembly.StdCXXDisassembleTestCase)
          Do 'disassemble' on each and every 'Code' symbol entry from
    the std c++ lib.


This is really the nugget of result your test run is showing. I'm not entirely sure why that one is failing. It could be a legitimate failure caused by changes in your code, or it could be something that surfaces in FC 20 but not elsewhere.

The test run should have made a directory called "lldb-test-traces". It goes in different places depending on ninja vs. make builds: in ninja builds it will be in your root build dir; in make builds it will be in the {my-build-dir}/tools/lldb/test dir. In that directory, you get a trace log file for every test run that did not succeed, either because it was skipped, it failed (a test assertion failed), it had an error (it failed, but not because of an assert; something entirely unexpected happened, like an I/O issue or a seg fault), or it unexpectedly passed (marked XFAIL but succeeded).

So you should have a file called something like "Failed*test_stdcxx_disasm*.log" in that directory. You could look at its contents and see what failed.
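As a sketch, the log lookup might go like this (using a throwaway directory to stand in for lldb-test-traces, since the real path depends on your make vs. ninja build):

```shell
# Stand-in for the lldb-test-traces directory produced by a test run.
TRACE_DIR=$(mktemp -d)
touch "${TRACE_DIR}/Failed-TestStdCXXDisassembly-test_stdcxx_disasm.log"

# Find the trace for the failing test; in real use you would then
# open it to see which assertion fired.
find "${TRACE_DIR}" -name 'Failed*test_stdcxx_disasm*.log'

rm -rf "${TRACE_DIR}"
```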
Yeah, will do. I'll look at lldb-test-traces later on. Thanks for this (and other) tips.

Generally the tests are in a state where a failure represents an issue. I've spent quite a bit of time getting the test suite into that state, so that an issue represents a real problem. In your case, it could be an FC environment issue, where the test, for that environment, is just never going to pass; in which case we need to either fix it or annotate it as a known issue and file a bug for it.

For your particular case, the way to figure that out is to do a build and a test run against a clean-slate top-of-tree sync (essentially, shelve any changes you have locally) and see what a clean-slate test run produces. If you always see that error, it's a tip-off that the test is broken in your environment.
Yes, I'm going to rerun the tests and see what I get.

Matt


