On 9/17/19 2:37 AM, Richard Biener wrote:
On Mon, 16 Sep 2019, Nicholas Krause wrote:

Greetings Richard,

I don't know if it's currently possible, but what's the best way to either go
about exposing, or use a tool to expose, shared state at both the GIMPLE and
RTL levels?  This would let us figure out much better which algorithms or
data structures to choose so that this scales much better than the current
prototype.
You are mixing independent issues.  Shared state needs to be identified
and protected for correctness reasons.  In some cases changing the
data structure to be protected can make it cheaper to do so.  The
scaling of the current prototype is limited by the fraction of the
compilation we parallelize as well as the granularity.

Going forward the most useful things are a) reducing the amount of
state that ends up being shared when we parallelize, and b) increasing
the fraction of the compilation we parallelize by tackling
RTL optimizations and the early GIMPLE pipeline.

The prototype showed that parallelization is beneficial and that it
can be done with a reasonable amount of work.

Richard.

Richard,

Sorry, I think you're misunderstanding me.  I was asking what's the best way
to write a tool to expose where and how the shared state is being used.
From experience, it seems the best way forward is to figure out what we have
in terms of shared state and then write a core set of classes or an API for
scaling it.  A tool for collecting that data would make this much easier.

My reasoning for this is threefold:

1. It removes the issue of each pass needing to be scaled separately.

2. It lets passes added in the future be parallelized as well.

3. It lets us gather scaling data alongside other jobs such as make -j
without issues, behaving the way a user would assume it works.  This was
discussed at Cauldron, and others seem to agree that cooperating well with
make is a good idea.
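For what it's worth, here is a minimal sketch of the kind of wrapper such a
data-collection tool could be built on: wrap a piece of global compiler state
so every access is counted under a label (say, the name of the pass touching
it).  All names below are made up for illustration; none of this exists in GCC.

```cpp
#include <cstddef>
#include <cstdio>
#include <map>
#include <mutex>
#include <string>

// Hypothetical instrumentation: counts accesses to one piece of
// shared state, keyed by which pass touched it.
template <typename T>
class tracked_state {
public:
    explicit tracked_state(T init) : value_(init) {}

    T read(const std::string &who) {
        record(who);
        return value_;
    }

    void write(const std::string &who, T v) {
        record(who);
        value_ = v;
    }

    // How many times 'who' touched this state.
    std::size_t count(const std::string &who) const {
        std::lock_guard<std::mutex> g(lock_);
        auto it = hits_.find(who);
        return it == hits_.end() ? 0 : it->second;
    }

    // Dump all accessors, e.g. at end of compilation.
    void report() const {
        std::lock_guard<std::mutex> g(lock_);
        for (const auto &kv : hits_)
            std::printf("%s: %zu accesses\n", kv.first.c_str(), kv.second);
    }

private:
    void record(const std::string &who) {
        std::lock_guard<std::mutex> g(lock_);
        ++hits_[who];
    }

    T value_;
    mutable std::mutex lock_;
    std::map<std::string, std::size_t> hits_;
};
```

A hypothetical `tracked_state<int> n_edges(0)` would then let
`n_edges.report()` show which passes hit that state hardest, which is exactly
the data you'd want in hand before designing a core set of classes around it.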

Hopefully that explains it better,

Nick
