Hi,
I am facing an issue (an LLVM assertion failure) when evaluating expressions for MIPS on Linux.
(lldb) p fooptr(a,b)
lldb: /home/battarde/git/llvm/lib/MC/ELFObjectWriter.cpp:791: void
{anonymous}::ELFObjectWriter::computeSymbolTable(llvm::MCAssembler&, const
llvm::MCAsmLayout&, const SectionIndexMapTy&, co
Yea, I definitely agree with you there.
Is this going to end up with an @expectedFlakeyWindows,
@expectedFlakeyLinux, @expectedFlakeyDarwin, @expectedFlakeyAndroid,
@expectedFlakeyFreeBSD?
It's starting to get a little crazy. At some point I think we just need
something that we can use like this:
My initial proposal was an attempt not to skip running them entirely on
our end, while still getting them to generate actionable signals without
conflating them with unexpected successes (which, semantically, they
absolutely are not).
On Mon, Oct 19, 2015 at 4:33 PM, Todd Fiala wrote:
> Nope, I have
Nope, I have no issue with what you said. We don't want to run them over
here at all because we don't see enough useful info come out of them. You
need time series data for that to be somewhat useful, and even then it is
only useful if you see a sharp change in it after a specific change.
So I r
Don't get me wrong, I like the idea of running flakey tests a couple of
times and seeing if one passes (Chromium does this as well, so it's not
without precedent). If I sounded harsh, it's because I *want* to be harsh
on flaky tests. Flaky tests indicate literally the *worst* kind of bugs
bec
Okay, so I'm not a fan of the flaky tests myself, nor of test suites taking
longer to run than needed.
Enrico is going to add a new 'flakey' category to the test categorization.
Scratch all the other complexity I offered up. What we're going to ask is
if a test is flakey, please add it to the 'f
> On Oct 19, 2015, at 2:54 PM, Jason Molenda via lldb-dev
> wrote:
> Greg's original statement isn't correct -- about a year ago Tong Shen changed
> lldb to use eh_frame for the currently-executing frame. While it is true
> that eh_frame is not guaranteed to describe the prologue/epilogue,
Hi all, sorry I missed this discussion last week, I was a little busy.
Greg's original statement isn't correct -- about a year ago Tong Shen changed
lldb to use eh_frame for the currently-executing frame. While it is true
that eh_frame is not guaranteed to describe the prologue/epilogue, in p
I have figured out how to get both synthetic and summary formatters
attached to a given datatype.
I call GetChildAtIndex from the summary which returns the synthetic child.
(and GetNonSyntheticValue has no effect - to which I must ask - why bother
having it then?)
Given these 2 bugs:
http://review
(NetBSD) Python 2.6 was retired with pkgsrc-2015Q2
http://mail-index.netbsd.org/pkgsrc-users/2015/07/06/msg021778.html
On 19.10.2015 21:43, Zachary Turner via lldb-dev wrote:
> AKA: Is Python 2.6 a supported configuration? I found this
> `argpars
Ubuntu 10.04 uses 2.6 by default; Ubuntu 12.04 uses 2.7.
We have a bunch of Ubuntu 10 machines here, but anything that runs lldb has 2.7
installed. I’m OK with dropping 2.6 support.
--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
On Mon, Oct 19, 2015 at 12:50 PM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:
> Hi all,
>
> I'd like unexpected successes (i.e. tests marked as unexpected failure
> that in fact pass) to retain the actionable meaning that something is
> wrong. The wrong part is that either (1) the te
I think the older Ubuntus and the RHEL 7 line both still have a 2.7-based
python. I am not aware of any system on the Linux/OS X side where we are
seeing Python 2.6 systems anymore.
Can't speak to the BSDs.
My guess would be we don't need to worry about python < 2.7.
-Todd
On Mon, Oct 19, 2015
> I'd like unexpected successes (i.e. tests marked as unexpected failure
that in fact pass)
argh, that should have been "(i.e. tests marked as *expected* failure that
in fact pass)"
On Mon, Oct 19, 2015 at 12:50 PM, Todd Fiala wrote:
> Hi all,
>
> I'd like unexpected successes (i.e. tests marke
Hi all,
I'd like unexpected successes (i.e. tests marked as unexpected failure that
in fact pass) to retain the actionable meaning that something is wrong.
The wrong part is that either (1) the test now passes consistently and the
author of the fix just missed updating the test definition (or perh
AKA: Is Python 2.6 a supported configuration? I found this
`argparse_compat.py` file in tests, and it opens with this:
"""
Compatibility module to use the lldb test-suite with Python 2.6.
Warning: This may be buggy. It has not been extensively tested and should
only
be used when it is impossible
Okay. I think for the time being, the XFAIL makes sense. Per my previous
email, though, I think we should move away from unexpected success (XPASS)
being a "sometimes meaningful, sometimes meaningless" signal. For almost
all cases, an unexpected success is an actionable signal. I don't want it
Thanks, Tamas.
On Mon, Oct 19, 2015 at 4:30 AM, Tamas Berghammer
wrote:
> The expected flakey handling works a bit differently than you described:
> * Run the test
> * If it passes, it goes as a successful test and we are done
> * Run the test again
> * If it passes the 2nd time then record it as
https://llvm.org/bugs/show_bug.cgi?id=25253
Bug ID: 25253
Summary: Expression evaluation crashes when base and derived
classes are the same
Product: lldb
Version: unspecified
Hardware: PC
OS: Linux
FYI, I just started a discussion on llvm-dev about the license & patents
situation in the project, it also affects LLDB, so if you’re interested, please
check it out there.
-Chris
https://llvm.org/bugs/show_bug.cgi?id=25251
ravithejaw...@gmail.com changed:
What|Removed |Added
CC||ravithejaw...@gmail.com
Assi
https://llvm.org/bugs/show_bug.cgi?id=25251
Bug ID: 25251
Summary: Infinite recursion in LLDB stack unwinding
Product: lldb
Version: unspecified
Hardware: PC
OS: Linux
Status: NEW
Severity: normal
I have created this test to reproduce a race condition in
ProcessGDBRemote. Given that it tests a race condition, it will not
fail 100% of the time, but I agree with Tamas that we should keep
it as XFAIL to avoid noise in the buildbots.
pl
On 19 October 2015 at 12:30, Tamas Berghammer via lld
The expected flakey handling works a bit differently than you described:
* Run the test
* If it passes, it goes as a successful test and we are done
* Run the test again
* If it passes the 2nd time then record it as expected failure (IMO
expected flakey would be a better result, but we don't have th