On Sat, Oct 15, 2016 at 01:07:12PM +1100, kugan wrote:
> Hi Bin,
> 
> On 15/10/16 00:15, Bin Cheng wrote:
> >+/* Test for likely overcommitment of vector hardware resources.  If a
> >+   loop iteration is relatively large, and too large a percentage of
> >+   instructions in the loop are vectorized, the cost model may not
> >+   adequately reflect delays from unavailable vector resources.
> >+   Penalize the loop body cost for this case.  */
> >+
> >+static void
> >+aarch64_density_test (struct aarch64_vect_loop_cost_data *data)
> >+{
> >+  const int DENSITY_PCT_THRESHOLD = 85;
> >+  const int DENSITY_SIZE_THRESHOLD = 128;
> >+  const int DENSITY_PENALTY = 10;
> >+  struct loop *loop = data->loop_info;
> >+  basic_block *bbs = get_loop_body (loop);
> 
> Is this worth being part of the cost model such that it can have
> different defaults for different micro-architecture?

I think this is a relevant point; even if we choose these values for the
generic compilation model, we may want to tune them on a per-core basis.

So, pulling these magic numbers out into a new field in the CPU tuning
structures (tune_params) is probably the right approach.
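As a rough sketch of what I have in mind (struct and field names here are
illustrative only, not a proposal for the actual tune_params layout), the
three constants could be grouped so each micro-architecture can supply its
own values:

  /* Hypothetical per-core density parameters; names are illustrative.  */
  struct vec_density_params
  {
    int pct_threshold;   /* Vectorized-insn percentage above which we penalize.  */
    int size_threshold;  /* Minimum loop body size before the test applies.  */
    int penalty_pct;     /* Extra loop body cost applied, in percent.  */
  };

  /* Generic defaults, matching the constants in the patch.  */
  static const struct vec_density_params generic_density_params =
  {
    85,   /* pct_threshold.  */
    128,  /* size_threshold.  */
    10    /* penalty_pct.  */
  };

aarch64_density_test could then pick these up from the active tuning
structure rather than from local constants, and individual cores could
override the generic defaults where benchmarking justifies it.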

Thanks,
James
