Ian Lance Taylor wrote:
> "Doug Gregor" <[EMAIL PROTECTED]> writes:
> 
>> Of course, one could use TREE_CODE to see through the difference
>> between these two, e.g.,
>>
>>   #define TREE_CODE(NODE)                                          \
>>     ((enum tree_code) (NODE)->base.code == LANG_TYPE               \
>>      ? (enum tree_code) (TYPE_LANG_SPECIFIC (NODE)->base.subcode   \
>>                          + LAST_AND_UNUSED_TREE_CODE)              \
>>      : (enum tree_code) (NODE)->base.code)
>>
>> Then, the opposite for TREE_SET_CODE:
>>   #define TREE_SET_CODE(NODE, VALUE)                               \
>>     ((VALUE) >= LAST_AND_UNUSED_TREE_CODE                          \
>>      ? ((NODE)->base.code = LANG_TYPE,                             \
>>         get_type_lang_specific (NODE)->base.subcode                \
>>           = (VALUE) - LAST_AND_UNUSED_TREE_CODE)                   \
>>      : ((NODE)->base.code = (VALUE)))
> 
> Somehow I didn't quite see that you were proposing a change to
> TREE_CODE itself.  It doesn't make sense to change TREE_CODE for
> something which is language specific: that would affect the whole
> compiler.  But I think it would be reasonable to introduce
> LANG_TREE_CODE along the lines of what you wrote above, and use that
> only in the frontend code.

I think that this approach (language-specific subcodes) is the right way
to go.  I disagree with Andrew's sentiments about not using trees at all
for C++.  But, in any case, that's an academic disagreement; nobody's
about to sign up for that project.  So, the only real solutions are (a)
expand the width of the code field and (b) use subcodes.  And, I agree
that we should do all we can to avoid expanding the size of tree nodes.
So, I think the subcode approach is the best choice.
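
To put the size point concretely: the subcode needs no new bits in the
shared tree header at all; it can live in the lang-specific block the
frontend already hangs off the node.  Roughly like this (a sketch only;
the struct and field names below are made up for illustration, not the
actual cp-tree.h layout):

  /* Sketch: the common tree header keeps its 16-bit code field as-is.
     The extra code lives in storage the frontend already allocates,
     so ordinary tree nodes do not grow.  */
  struct lang_type_base_sketch
  {
    unsigned short subcode;  /* frontend code, biased by
                                LAST_AND_UNUSED_TREE_CODE */
  };

  struct lang_type_sketch
  {
    struct lang_type_base_sketch base;
    /* ...the existing lang-specific fields... */
  };

Nodes that never need a frontend-specific code pay nothing beyond what
they already pay for TYPE_LANG_SPECIFIC.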

Like Ian, I think the macros above are fine.
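
For the LANG_TREE_CODE spelling Ian suggests, I would expect something
roughly like the following in the C++ frontend.  This is untested and
purely illustrative: get_type_lang_specific is the allocating accessor
from Doug's sketch, and none of these names exist in the tree today.

  #define LANG_TREE_CODE(NODE)                                        \
    (TREE_CODE (NODE) == LANG_TYPE                                    \
     ? (enum tree_code) (TYPE_LANG_SPECIFIC (NODE)->base.subcode      \
                         + LAST_AND_UNUSED_TREE_CODE)                 \
     : TREE_CODE (NODE))

  #define LANG_TREE_SET_CODE(NODE, VALUE)                             \
    ((VALUE) >= LAST_AND_UNUSED_TREE_CODE                             \
     ? (TREE_SET_CODE (NODE, LANG_TYPE),                              \
        get_type_lang_specific (NODE)->base.subcode                   \
          = (VALUE) - LAST_AND_UNUSED_TREE_CODE)                      \
     : TREE_SET_CODE (NODE, (VALUE)))

Frontend code would then ask LANG_TREE_CODE (t) == CP_SOME_NEW_CODE
(again, a placeholder name), while the middle end keeps seeing a plain
LANG_TYPE node, so nothing outside the frontend has to change.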

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
