Hi everyone,
I want to study the implementation details of V8’s CodeStubAssembler (CSA),
so I started from a minimal code sample:
``` js
function add(a, b) {
  return a + b;
}
%PrepareFunctionForOptimization(add);
add(0x4141, 0x4000);
add(0x4242, 0x4001);
```
By reading the source code, I traced this down to Generate_AddWithFeedback,
which gets embedded into AddHandler. Since both operands here are Smis, it
eventually calls the TrySmiAdd function:
``` c++
Comment("perform smi operation");
// If rhs is known to be an Smi we want to fast path Smi operation. This
// is for AddSmi operation. For the normal Add operation, we want to fast
// path both Smi and Number operations, so this path should not be marked
// as Deferred.
TNode<Smi> rhs_smi = CAST(rhs);
Label if_overflow(this,
                  rhs_known_smi ? Label::kDeferred : Label::kNonDeferred);
TNode<Smi> smi_result = TrySmiAdd(lhs_smi, rhs_smi, &if_overflow);  // [+] @a
```
But the implementation of TrySmiAdd really confuses me:
``` c++
if (SmiValuesAre32Bits()) {  // [+] hit here since
                             //     v8_enable_pointer_compression = false
  return BitcastWordToTaggedSigned(
      TryIntPtrAdd(BitcastTaggedToWordForTagAndSmiBits(lhs),
                   BitcastTaggedToWordForTagAndSmiBits(rhs), if_overflow));
}
```
For easier debugging and a clearer memory layout, I disabled pointer
compression (v8_enable_pointer_compression = false). This means TrySmiAdd
ends up calling the TryIntPtrAdd function, and my current confusion is
mostly around that function.
## TryIntPtrAdd
``` c++
TNode<IntPtrT> CodeStubAssembler::TryIntPtrAdd(TNode<IntPtrT> a,
                                               TNode<IntPtrT> b,
                                               Label* if_overflow) {
  TNode<PairT<IntPtrT, BoolT>> pair = IntPtrAddWithOverflow(a, b);  // [+] @b
  [...]
}
```
My main question is about `@b`: how exactly is IntPtrAddWithOverflow
implemented?
By reading the source, I found two pieces of code related to its definition:
``` c++
V(IntPtrAddWithOverflow, PAIR_TYPE(IntPtrT, BoolT), IntPtrT, IntPtrT) \

// Basic arithmetic operations.
#define DECLARE_CODE_ASSEMBLER_BINARY_OP(name, ResType, Arg1Type, Arg2Type) \
  TNode<ResType> name(TNode<Arg1Type> a, TNode<Arg2Type> b);
CODE_ASSEMBLER_BINARY_OP_LIST(DECLARE_CODE_ASSEMBLER_BINARY_OP)
#undef DECLARE_CODE_ASSEMBLER_BINARY_OP

#define DEFINE_CODE_ASSEMBLER_BINARY_OP(name, ResType, Arg1Type, Arg2Type)   \
  TNode<ResType> CodeAssembler::name(TNode<Arg1Type> a, TNode<Arg2Type> b) { \
    return UncheckedCast<ResType>(raw_assembler()->name(a, b));              \
  }
CODE_ASSEMBLER_BINARY_OP_LIST(DEFINE_CODE_ASSEMBLER_BINARY_OP)
#undef DEFINE_CODE_ASSEMBLER_BINARY_OP
```
After macro expansion (and expanding PAIR_TYPE as well), this should give
us something like:
``` c++
TNode<PairT<IntPtrT, BoolT>> CodeAssembler::IntPtrAddWithOverflow(
    TNode<IntPtrT> a, TNode<IntPtrT> b) {
  return UncheckedCast<PairT<IntPtrT, BoolT>>(
      raw_assembler()->IntPtrAddWithOverflow(a, b));
}
```
At this point I have two questions I cannot figure out:
1. What exactly does raw_assembler() return? I suspect it is
platform-specific; where is its code defined?
2. Where is the implementation of raw_assembler()->IntPtrAddWithOverflow(a,
b) located? I searched the V8 codebase but could not find it.
If anyone could clarify this, I’d really appreciate it. Thanks!
--
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev