While adding support for 512-bit integers in `Scalar`, I figured I
could add some coverage.

TEST(ScalarTest, Signedness) {
  auto s1 = Scalar(APInt(32, 12, false /* isSigned */));
  auto s2 = Scalar(APInt(32, 12, true /* isSigned */));
  ASSERT_EQ(s1.GetType(), Scalar::e_uint); // fails
  ASSERT_EQ(s2.GetType(), Scalar::e_sint); // passes
}
The result of `s1.GetType()` is `Scalar::e_sint`.
This is because an `APInt` can't distinguish between "int patatino = 12"
and "uint patatino = 12" -- it only stores the bit pattern.
The class in `llvm` that does carry that distinction is `APSInt`.

I think there are at least a couple of possibilities to fix this:
1) Change the constructor in Scalar to always take an APSInt. This
would be fairly invasive, but we could always query isSigned() to make
sure we instantiate the correct scalar type.
2) Change the implementation of Scalar(APInt v) to look at the sign
bit and, if it's set, treat the value as signed (and unsigned
otherwise). This would definitely be more correct than the current
implementation, which always constructs signed types from APInt(s), but
I'm not entirely sure of all the implications.

What do you think?

lldb-dev mailing list