First round of comments. Please rebase; the CL doesn't merge cleanly anymore.
Furthermore (just to be safe), run "git cl format".


https://codereview.chromium.org/817143002/diff/1/build/toolchain.gypi
File build/toolchain.gypi (right):

https://codereview.chromium.org/817143002/diff/1/build/toolchain.gypi#newcode814
build/toolchain.gypi:814: (v8_target_arch=="arm" or v8_target_arch=="ia32" or \
*eyes hurting* This condition is a strong hint that we should probably
switch to

   foo in ["bar", "baz", "boo"]

style instead of the endless 'or' chains. Feel free to change this;
otherwise we should probably do this in a big followup change all over
the place.
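
For the quoted fragment that would read roughly as follows (just a
sketch: I'm assuming gyp's condition evaluator accepts 'in' here, and
the remaining arch comparisons from the original condition fold into
the list in the same way):

  v8_target_arch in ["arm", "ia32"]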

https://codereview.chromium.org/817143002/diff/1/src/bailout-reason.h
File src/bailout-reason.h (right):

https://codereview.chromium.org/817143002/diff/1/src/bailout-reason.h#newcode229
src/bailout-reason.h:229: V(kTheInstructionShouldBeALis, "The instruction should be a lis") \
Not caused by this CL, but nevertheless: The initial semantics of
BailoutReason was to describe why we are unable to optimize JavaScript
code (i.e. the reason why Crankshaft gives up). Nowadays it's a chaotic
mixture of this and various reasons to abort the whole process.

Please add a "TODO(svenpanne)" to clean this up by introducing an
AbortReason and partitioning this mess.
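
Just to sketch the kind of partition I have in mind (the macro and enum
names below are invented for illustration, not a concrete proposal; the
Crankshaft entry is only an example):

  // Reasons why Crankshaft gives up on optimizing a function.
  #define BAILOUT_MESSAGES_LIST(V) \
    V(kFunctionCallsEval, "Function calls eval")

  // Reasons for aborting the whole process.
  #define ABORT_MESSAGES_LIST(V) \
    V(kTheInstructionShouldBeALis, "The instruction should be a lis")

  #define DECLARE_ENUM(Name, message) Name,
  enum BailoutReason { BAILOUT_MESSAGES_LIST(DECLARE_ENUM) kLastBailoutReason };
  enum AbortReason { ABORT_MESSAGES_LIST(DECLARE_ENUM) kLastAbortReason };
  #undef DECLARE_ENUM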

https://codereview.chromium.org/817143002/diff/1/src/base/cpu.h
File src/base/cpu.h (right):

https://codereview.chromium.org/817143002/diff/1/src/base/cpu.h#newcode51
src/base/cpu.h:51: int cache_line_size() const { return cache_line_size_; }
We need to be more specific here about *which* kind of cache we refer to
(I or D); see cpu-arm64.cc. You basically have 2 options:

  * Hide the cache line size handling in cpu-ppc.cc, just like
    cpu-arm64.cc does. We can then later handle this in a unified way in
    the CPU class and remember it in CpuFeatures.

  * Do the unification already now.

As it is, the CL sits half-way between those options, which makes things
more confusing.
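
For the eventual unified version, I'd expect something roughly like this
in base::CPU (names are only a sketch, mirroring the I/D split that
cpu-arm64.cc already makes; 0 means "unknown"):

  class CPU {
   public:
    CPU() : icache_line_size_(0), dcache_line_size_(0) {}
    int icache_line_size() const { return icache_line_size_; }
    int dcache_line_size() const { return dcache_line_size_; }

   private:
    int icache_line_size_;
    int dcache_line_size_;
  };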

https://codereview.chromium.org/817143002/diff/1/src/base/platform/platform-posix.cc
File src/base/platform/platform-posix.cc (right):

https://codereview.chromium.org/817143002/diff/1/src/base/platform/platform-posix.cc#newcode89
src/base/platform/platform-posix.cc:89: //    be 16 byte-aligned;  see "Mac OS X ABI Function Call Guide"
Micro-nit: Indentation.

https://codereview.chromium.org/817143002/diff/1/src/globals.h
File src/globals.h (right):

https://codereview.chromium.org/817143002/diff/1/src/globals.h#newcode89
src/globals.h:89: #if (V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_ARM64 || V8_TARGET_ARCH_PPC64)
Simpler and more future-proof: #if V8_TARGET_ARCH_64_BIT
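
I.e. roughly:

  // instead of enumerating every 64-bit target:
  //   #if (V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_ARM64 || V8_TARGET_ARCH_PPC64)
  #if V8_TARGET_ARCH_64_BIT
  // ... 64-bit-only definitions, unchanged ...
  #endif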

https://codereview.chromium.org/817143002/diff/1/src/utils.h
File src/utils.h (right):

https://codereview.chromium.org/817143002/diff/1/src/utils.h#newcode1494
src/utils.h:1494: void CopyCharsUnsigned(uint8_t* dest, const uint8_t* src, int chars) {
Do we really need this huge switch (and below)? Do you have any
performance numbers which justify it? In the long run we want to get rid
of CopyCharsUnsigned, anyway.

https://codereview.chromium.org/817143002/diff/1/test/cctest/test-hashing.cc
File test/cctest/test-hashing.cc (right):

https://codereview.chromium.org/817143002/diff/1/test/cctest/test-hashing.cc#newcode94
test/cctest/test-hashing.cc:94: #if ABI_USES_FUNCTION_DESCRIPTORS
Move the #if into function_descriptor() itself and de-conditionalize all
call sites; it's cleaner.
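
Schematically, the shape I'm after (argument list and body are
placeholders, not the CL's actual code):

  static void function_descriptor(/* whatever it takes today */) {
  #if ABI_USES_FUNCTION_DESCRIPTORS
    // current descriptor handling goes here
  #else
    // intentionally a no-op on ABIs without function descriptors
  #endif
  }

Then every call site simply calls function_descriptor() unconditionally,
without its own #if.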

https://codereview.chromium.org/817143002/diff/1/test/cctest/test-heap.cc
File test/cctest/test-heap.cc (right):

https://codereview.chromium.org/817143002/diff/1/test/cctest/test-heap.cc#newcode164
test/cctest/test-heap.cc:164: !defined(V8_TARGET_ARCH_MIPS64) && !defined(V8_TARGET_ARCH_PPC64)
Again: Simpler and more future-proof: #if V8_TARGET_ARCH_64_BIT.

https://codereview.chromium.org/817143002/diff/1/test/mjsunit/mjsunit.status
File test/mjsunit/mjsunit.status (right):

https://codereview.chromium.org/817143002/diff/1/test/mjsunit/mjsunit.status#newcode576
test/mjsunit/mjsunit.status:576: # Test does not handle increased precision of multiply-add optimization
Huh? What is the exact problem here? No result should depend on the
computation being optimized or not; otherwise it's a bug.
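
For reference, this is the kind of observable difference a fused
multiply-add introduces (standalone C++ illustration with made-up
values, not taken from the failing test), and exactly the kind of thing
JS code must not be able to observe:

  #include <cmath>
  #include <cstdio>

  int main() {
    double a = 1.0 + std::ldexp(1.0, -52);  // 1 + 2^-52
    double b = 1.0 + std::ldexp(1.0, -51);  // == a * a after IEEE rounding
    // Separate multiply + subtract: a * a rounds to exactly b, so this
    // prints 0 (unless the compiler itself contracts the expression into
    // an fma, which is precisely the issue).
    std::printf("%g\n", a * a - b);
    // Fused multiply-add: no intermediate rounding, so the 2^-104 term
    // survives and this prints roughly 4.9e-32.
    std::printf("%g\n", std::fma(a, a, -b));
    return 0;
  }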

https://codereview.chromium.org/817143002/
