On Sat, Oct 15, 2016 at 3:07 AM, kugan wrote:
> Hi Bin,
> On 15/10/16 00:15, Bin Cheng wrote:
>> +/* Test for likely overcommitment of vector hardware resources. If a
>> + loop iteration is relatively large, and too large a percentage of
>> + instructions in the loop are vectorized, the cost model may not
>> + adequately reflect delays from unavailable vector resources.
>> + Penalize the loop body cost for this case. */
>> +static void
>> +aarch64_density_test (struct aarch64_vect_loop_cost_data *data)
>> +{
>> + const int DENSITY_PCT_THRESHOLD = 85;
>> + const int DENSITY_SIZE_THRESHOLD = 128;
>> + const int DENSITY_PENALTY = 10;
>> + struct loop *loop = data->loop_info;
>> + basic_block *bbs = get_loop_body (loop);
> Is this worth making part of the cost model, so that it can have different
> defaults for different micro-architectures?
I don't know. From my runs, this penalizing function looks like quite
benchmark-specific tuning. If that's the case, tuning it per
micro-architecture may not give meaningfully different results, at the
cost of three extra parameters per target.
Hi Bill, I guess you are the original author? Do you recall the
motivation for this code, or have any comments? Thanks very much.
Meanwhile, I can do some experiments on different AArch64 processors.