Status of trunk freeze
Mark,

What's the status of the trunk freeze for going from stage 1 to stage 2?
AFAICT, the number of regressions on trunk has increased since you sent
http://gcc.gnu.org/ml/gcc/2007-06/msg00411.html There have been a number
of commits to trunk that do not address regressions. I've been holding
the ISO C Binding patch (which only affects Fortran and has 0
regressions) while trunk was supposedly undergoing a transition from
stage 1 to stage 2.

Is it time to freeze all commits to all branches and trunk unless it is
a patch that addresses a regression?

-- 
Steve
Re: PTR-PLUS merge into the mainline
Roman Zippel <[EMAIL PROTECTED]> wrote on 06/28/2007 07:54:43 PM:

> Hi,
>
> Notice that it generates the (i + 1) * 4 instead of (i * 4) + 4 as with
> the other cases. While I tried to debug this I narrowed it down to the
> changes in fold_binary(), but I don't really know how to fix this, so
> I could use some help here.

The main thing is that this is really PR 32120. The problem is only
related to the merge because of the way fold_binary works.

Thanks,
Andrew Pinski
Re: LTO reader support for MEMORY_PARTITION_TAG
Mark Mitchell wrote:
> Kenneth Zadeck wrote:
>> now that we can compile, i will start looking inside today.
>
> At present, lto_read_function_body doesn't set DECL_SAVED_TREE for the
> function body, so there's nothing for the back end to even try to
> output. It doesn't look to me like we've got code to try to read that
> back in at this point. Would you care to take a try at that?
>
> Thanks,

There is a lot of stuff missing. i will start on this tomorrow. honza
owes me a patch to get the back end started. We have been chatting back
and forth about this.

kenny
Re: PTR-PLUS merge into the mainline
Hi,

On Fri, 15 Jun 2007, [EMAIL PROTECTED] wrote:

> This patch merges in the pointer_plus branch. Hopefully I did not mess
> anything up.

I found a small regression caused by this, e.g.:

int g(void);

void f(int *p, int i)
{
	p[i] = g();
	p[i + 2] = g();
	p[i + 1] = g();
	p[i + 100] = g();
}

If I compile it with -fdump-tree-original I get this in the dump:

  *(p + (unsigned int) ((unsigned int) i * 4)) = g ();
  *(p + ((unsigned int) ((unsigned int) i * 4) + 8)) = g ();
  *(p + ((unsigned int) i + 1) * 4) = g ();
  *(p + ((unsigned int) ((unsigned int) i * 4) + 400)) = g ();

Notice that it generates the (i + 1) * 4 instead of (i * 4) + 4 as with
the other cases. While I tried to debug this I narrowed it down to the
changes in fold_binary(), but I don't really know how to fix this, so I
could use some help here. Thanks.

bye, Roman
Re: LTO reader support for MEMORY_PARTITION_TAG
Kenneth Zadeck wrote:
> now that we can compile, i will start looking inside today.

At present, lto_read_function_body doesn't set DECL_SAVED_TREE for the
function body, so there's nothing for the back end to even try to
output. It doesn't look to me like we've got code to try to read that
back in at this point. Would you care to take a try at that?

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
Re: [ARM] Cirrus EP93xx Maverick Crunch Support - CC modes / condexec / CC_REGNUM
On Thu, 28 Jun 2007 14:55:17 +0200, "Rask Ingemann Lambertsen"
<[EMAIL PROTECTED]> said:

> On Wed, Jun 27, 2007 at 11:26:41AM +1000, Hasjim Williams wrote:
>> G'day all,
>>
>> As I wrote previously on gcc-patches (
>> http://gcc.gnu.org/ml/gcc-patches/2007-06/msg00244.html ), I'm working
>> on code to get the MaverickCrunch Floating-Point Co-processor supported
>> on ARM. I mentioned previously that you can't use the same opcodes for
>> testing GE on the MaverickCrunch, as you use on ARM. See the below
>> table for NZCV values from MaverickCrunch.
>>
>> MaverickCrunch - (cfcmp*):
>>          N Z C V
>> A == B   0 1 0 0
>> A <  B   1 0 0 0
>> A >  B   1 0 0 1
>> unord    0 0 0 0
>>
>> ARM/FPA/VFP - (cmp*):
>>          N Z C V
>> A == B   0 1 1 0
>> A <  B   1 0 0 0
>> A >  B   0 0 1 0
>> unord    0 0 1 1
>
> The key to getting this right is to use a special comparison mode for
> MaverickCrunch comparisons. You have to look at arm_gen_compare_reg(),
> which calls arm_select_cc_mode() to do all the work. I think there's
> probably a mess in there: CCFPmode is used for some non-MaverickCrunch
> floating point compares as well as some MaverickCrunch ones. You'll
> have to sort that out. The safe way would be to define a new mode, say
> CCMAVmode, and use that.

I think a new CCMAV mode is needed if I re-enable the cfcmp64 code.
Otherwise, the other CCFP modes aren't used, since an ARM processor can
only have one floating-point co-processor (either FPA, VFP or MAVERICK,
selected by the -mfpu arg). The current patches replace
get_arm_condition_code for CCFPmode if TARGET_MAVERICK is set.

Conditional execution is tricky, since you don't check for unordered
when doing an integer comparison, and you might want to check for a
signed or an unsigned comparison. For this reason, I think two
Maverick-specific CC modes are needed for the comparison code if 64-bit
comparisons are enabled. It's probably better to leave 64-bit comparison
disabled, I think, and let the soft 64-bit int comparison handle it, and
hence not worry about conditional execution for 64-bit int at all.

> (define_insn "*cirrus_cmpdi"
>   [(set (reg:CC CC_REGNUM)
>         (compare:CC (match_operand:DI 0 "cirrus_fp_register" "v")
>                     (match_operand:DI 1 "cirrus_fp_register" "v")))]
>   "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
>   "cfcmp64%?\\tr15, %V0, %V1"
>   [(set_attr "type" "mav_farith")
>    (set_attr "cirrus" "compare")]
> )
>
> Does this insn also set the flags according to the MaverickCrunch NZCV
> table above? If so, it needs to use a MaverickCrunch CCmode too, in
> which case, with the CANONICALIZE_COMPARISON macro, you can change

Yes, this instruction compares two 64-bit values in the Maverick
co-processor and stores the result in the NZCV flags in PC/R15. At the
moment, I have disabled 64-bit support in my MaverickCrunch patches,
simply because the 64-bit support is signed only or unsigned only
depending on the value of UI in DSPSC in the Maverick co-processor, and
most code doesn't make use of 64-bit, so the performance hit is minimal.
I think there are some EABI functions to wrap unsigned 64-bit functions
to signed 64-bit functions, but I haven't really checked this in
detail...

> Once you know that all MaverickCrunch comparisons (and only those)
> have (reg:CCMAV CC_REGNUM) in them, then it is easy to write all the
> corresponding comparison and branch instructions.

Agreed. I have had the correct conditional execution for a while, and
have run the ieee754 tests with dejagnu and they all passed. It was only
a couple of days ago that I realised that I was replacing all bge
instructions. The other comparisons that can't be implemented (on the
MaverickCrunch) shouldn't matter, since they aren't used in the other
comparison modes. But just to be safe, I have put the CCFP arg on them,
since this is the only mode that they should be used in. The other
floating-point ARM co-processors make use of a CCFPE mode, but the
MaverickCrunch doesn't need it.

In any case, I have a working gcc compiler for the MaverickCrunch that
lets me compile everything and seems to execute everything correctly
now. However, I have disabled conditional execution and 64-bit int mode
for the time being... I think I will have to add a maverick_cc_register
though, like you suggest, if I want to enable conditional execution;
otherwise the cc_register will have Maverick and ARM comparisons in
there, and they won't combine correctly, etc. Incidentally, it is
mentioned in the arm.c comments that conditional execution can decrease
execution time and code size by deleting branch instructions, but only
if the `target` is not an unconditional branch.

The other "co-processor offset out of range" seems to be related to the
length attr for "sibcall_epilogue" and "epilogue_insns", since saving
each MaverickCrunch register to the stack takes two instructions (since
Re: I'm sorry, but this is unacceptable (union members and ctors)
mark-28 wrote:
>
>> Mark Mielke wrote "Why not This?":
>> > class Rectangle {
>> >     Vector2d position;
>> >     Vector2d size;
>> > };
>> > ... rectangle.position.x = ... ...
>
> On Thu, Jun 28, 2007 at 03:00:07AM -0700, michael.a wrote:
>> My foremost personal requirement is that no code need change outside
>> the object definition files. And besides it is ridiculous to jump
>> through two hoops, when one would suffice.
>> ...
>> Have you read the thread? (not that you should -- but before making
>> such an assessment)
>
> You find it unacceptable that GCC implements the C++ spec, and fails
> to compile your software that was not implemented according to the
> C++ spec.
>
> You find it unacceptable that you would need to change your code to
> match the spec, but would instead rather modify GCC and have your
> patch to GCC rushed out so that you can release your product on Linux.
>
> You find it unacceptable that union members not be allowed to contain
> structs with constructors, because you believe that the practice is
> safe and valid, because the designer knows best. You believe the spec
> should be changed and that Microsoft has led the way in this regard.
>
> Do I have this right?
>
> Is there a reason that this discovery did not occur until late in
> your development cycle? It seems to me that the first mistake on your
> part was not testing on Linux/GCC when writing the original code, if
> you knew that this was an intended port.
>
> Cheers,
> mark
>
> P.S. I apologize for getting confused on my last post. I was tired +
> ill and stupidly posting after midnight. Perhaps this one will be
> more relevant and effective?

This is a correct summarization in effect. Though I would differ that it
was not so much my mistake to not conform to gcc from the outset. I'm
planning on releasing binaries, so it's not a major hiccup to hack gcc.
It would be nice to see an official solution evolve in due course,
however.

I see no reason why C++ should be neutered to this effect. I would cite
laziness on behalf of the gcc development effort (with the ever-present
excuse of adhering to spec). C++ should've removed unions from the spec
altogether if they won't support the fundamental basis of the need for
an object-oriented C language (and there is no reason why they can not
be supported -- albeit with some caveats which really any even remotely
competent person could predict).

There really is no call for such platitudes; you've simply chosen the
losing side of the argument if you choose to believe C++ will never
accommodate functional objects to be unioned. This is all I have to say.
I will let everyone know how construction/etc is handled by the compiler
as soon as the porting process permits (a testbed application would not
likely reflect real-world scenarios so easily, I suspect).

sincerely,

michael
-- 
View this message in context: http://www.nabble.com/I%27m-sorry%2C-but-this-is-unacceptable-%28union-members-and-ctors%29-tf3930964.html#a11353323
Sent from the gcc - Dev mailing list archive at Nabble.com.
Re: I'm sorry, but this is unacceptable (union members and ctors)
> Mark Mielke wrote "Why not This?":
> > class Rectangle {
> >     Vector2d position;
> >     Vector2d size;
> > };
> > ... rectangle.position.x = ... ...

On Thu, Jun 28, 2007 at 03:00:07AM -0700, michael.a wrote:
> My foremost personal requirement is that no code need change outside
> the object definition files. And besides it is ridiculous to jump
> through two hoops, when one would suffice.
> ...
> Have you read the thread? (not that you should -- but before making
> such an assessment)

You find it unacceptable that GCC implements the C++ spec, and fails to
compile your software that was not implemented according to the C++
spec.

You find it unacceptable that you would need to change your code to
match the spec, but would instead rather modify GCC and have your patch
to GCC rushed out so that you can release your product on Linux.

You find it unacceptable that union members not be allowed to contain
structs with constructors, because you believe that the practice is safe
and valid, because the designer knows best. You believe the spec should
be changed and that Microsoft has led the way in this regard.

Do I have this right?

Is there a reason that this discovery did not occur until late in your
development cycle? It seems to me that the first mistake on your part
was not testing on Linux/GCC when writing the original code, if you knew
that this was an intended port.

Cheers,
mark

P.S. I apologize for getting confused on my last post. I was tired + ill
and stupidly posting after midnight. Perhaps this one will be more
relevant and effective?

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED]
[ASCII-art signature] | Neighbourhood Coder | Ottawa, Ontario, Canada
One ring to rule them all, one ring to find them, one ring to bring them
all and in the darkness bind them...
http://mark.mielke.cc/
Re: combine corrupts insns + dumps with insn cost problems
[One-liner alert!]

On Wed, Jun 27, 2007 at 07:54:36PM +0200, Eric Botcazou wrote:
> > Combine knows how to add clobbers to make insns recognizable. I'm
> > guessing it accidentally clobbers the original insn in doing so.
> > Where would I look?
>
> Anywhere in combine. :-) This is by design, see the SUBST macro and
> the undo buffer machinery. You need to put a watchpoint on your insn.

recog_for_combine (rtx *pnewpat, ...) may allocate (and return) a new
pattern instead of reusing the old one:

  /* If we had any clobbers to add, make a new pattern than contains them.
     Then check to make sure that all of them are dead.  */
  if (num_clobbers_to_add)
    {
      rtx newpat = gen_rtx_PARALLEL (VOIDmode,
				     rtvec_alloc (GET_CODE (pat) == PARALLEL
						  ? (XVECLEN (pat, 0)
						     + num_clobbers_to_add)
						  : num_clobbers_to_add + 1));
      ...
      pat = newpat;
    }

  *pnewpat = pat;

But try_combine() doesn't take that into account:

  /* If we had to change another insn, make sure it is valid also.  */
  if (undobuf.other_insn)
    {
      rtx other_pat = PATTERN (undobuf.other_insn);
      ...
      other_code_number = recog_for_combine (&other_pat,
					     undobuf.other_insn,
					     &new_other_notes);
      ...
      PATTERN (undobuf.other_insn) = other_pat;

This is not (necessarily) the same other_pat that we started with. I set
a watchpoint on the element count of the parallel:

Continuing.
Hardware watchpoint 6: *(int *) 3086592448

Old value = 3
New value = 1
do_SUBST_INT (into=0xb7f9a9c0, newval=1)
    at ../../../cvssrc/gcc/gcc/combine.c:709
709	  buf->next = undobuf.undos, undobuf.undos = buf;
(gdb) print undobuf.other_insn->u.fld[5].rt_rtx
$25 = (rtx) 0xb7f9b008

This is the original PATTERN (undobuf.other_insn).

(gdb) c
Continuing.
Hardware watchpoint 6: *(int *) 3086592448

Old value = 1
New value = 3
undo_all () at ../../../cvssrc/gcc/gcc/combine.c:3755
3755	      break;

Good, we restore the parallel's vector to 3 elements. But:

(gdb) print undobuf.other_insn->u.fld[5].rt_rtx
$26 = (rtx) 0xb7f9b060

We keep the new PATTERN (undobuf.other_insn). :-(

With that, the fix seems simple enough:

Index: gcc/combine.c
===================================================================
--- gcc/combine.c	(revision 125984)
+++ gcc/combine.c	(working copy)
@@ -3298,7 +3298,7 @@ try_combine (rtx i3, rtx i2, rtx i1, int
 	  return 0;
 	}
 
-      PATTERN (undobuf.other_insn) = other_pat;
+      SUBST (PATTERN (undobuf.other_insn), other_pat);
 
       /* If any of the notes in OTHER_INSN were REG_UNUSED, ensure that they
 	 are still valid.  Then add any non-duplicate notes added by

(gdb) print undobuf.other_insn->u.fld[5].rt_rtx
$27 = (rtx) 0xb7f73008
(gdb) c
Continuing.
Hardware watchpoint 7: *(int *) 3086428608

Old value = 3
New value = 1
do_SUBST_INT (into=0xb7f729c0, newval=1)
    at ../../../cvssrc/gcc/gcc/combine.c:709
709	  buf->next = undobuf.undos, undobuf.undos = buf;
(gdb) c
Continuing.
Hardware watchpoint 7: *(int *) 3086428608

Old value = 1
New value = 3
undo_all () at ../../../cvssrc/gcc/gcc/combine.c:3755
3755	      break;
(gdb) print undobuf.other_insn->u.fld[5].rt_rtx
$29 = (rtx) 0xb7f73008

Now the original insn pattern has been restored and the compiler works
again. SVN blame points at revision 357 dated 1992-02-22, log message
"Initial revision". It's amazing that this bug hasn't been discovered
before. I'll test and submit the patch as usual.

Of course, the replacement that combine rejected was supposed to be
cheaper, so I'm off to see if I can mung^Wcorrect the costs.

-- 
Rask Ingemann Lambertsen
Re: Static const int as array bound inside class
On 6/27/07, Torquil Macdonald Sørensen <[EMAIL PROTECTED]> wrote:
> Example code (from the book):
>
> class X {
>     static const int size;
>     int array[size];
> };
>
> const int X::size = 100;
>
> It gives the error message "array bound is not an integer constant"
>
> Best regards, Torquil Sørensen

That is the correct error message. You want:

class X {
    static const int size = 100;
    int array[size];
};

const int X::size;

Thanks,
Andrew Pinski
Re: Proposal: adding two zeros to the integer cost to calibrate better.
On Wed, Jun 27, 2007 at 09:41:08PM +0200, J.C. Pizarro wrote:
> I recommend to add 2 zeros to the integer costs as if those are 2
> decimal zeros, for example,
>
> insn_cost  5: 1200  // it's 12.00
> insn_cost  6:  800  // it's  8.00
> insn_cost  7:  400  // it's  4.00
> insn_cost  8:  433  // it's  4.33, a little more costly than the 7th:
>                     // +x.xx%, better calibrating.
> insn_cost  9:  466  // it's  4.66, a little more costly than the 8th:
>                     // +x.xx%, better calibrating.
> insn_cost 10:  500  // it's  5.00

We already multiply the costs by four (COSTS_N_INSNS()), so you can
already add or subtract one to fine tune costs. A much worse problem is
that e.g. combine only passes small fragments of an insn to rtx_cost(),
which means that the back end has to guess what the insn looked like to
begin with, rather than simply looking at the rtx passed to it. It is a
garbage in, garbage out system.

-- 
Rask Ingemann Lambertsen
Re: [ARM] Cirrus EP93xx Maverick Crunch Support - condexec / bugfixing / "co-processor offset out of range"
On Thu, Jun 28, 2007 at 01:55:24PM +1000, Hasjim Williams wrote:
>
> On Wed, 27 Jun 2007 12:31:42 +0200, "Rask Ingemann Lambertsen"
> <[EMAIL PROTECTED]> said:
> > Additionally, look at SELECT_CC_MODE and TARGET_CC_MODE_COMPATIBLE.

That should be TARGET_CC_MODES_COMPATIBLE.

> > The significance of defining a CCmode is that it says that
> > comparisons done in that mode set the flags in a specific way.
>
> Thanks. This really clears things up for me. For the moment, I will
> leave conditional execution disabled for EVERYTHING when compiling
> for MaverickCrunch.

I think so too. Once you get the hang of using CCmodes, you can try
something like

;; General predication pattern
(define_cond_exec
  [(match_operator 0 "maverick_comparison_operator"
    [(match_operand:CCMAV 1 "cc_register" "")
     (const_int 0)])]
  "TARGET_32BIT"
  ""
)

(define_cond_exec
  [(match_operator 0 "arm_comparison_operator"
    [(match_operand 1 "non_maverick_cc_register" "")
     (const_int 0)])]
  "TARGET_32BIT"
  ""
)

where "maverick_comparison_operator" is a predicate matching those
operators, get_arm_condition_code() is modified to understand CCMAVmode
so that print_operand() will print the correct mnemonics, and
"non_maverick_cc_register" is a new predicate which doesn't match
MaverickCrunch-specific CCmodes.

-- 
Rask Ingemann Lambertsen
Re: [ARM] Cirrus EP93xx Maverick Crunch Support - "bge" pattern
On Wed, Jun 27, 2007 at 11:26:41AM +1000, Hasjim Williams wrote:
> G'day all,
>
> As I wrote previously on gcc-patches (
> http://gcc.gnu.org/ml/gcc-patches/2007-06/msg00244.html ), I'm working
> on code to get the MaverickCrunch Floating-Point Co-processor supported
> on ARM. I mentioned previously that you can't use the same opcodes for
> testing GE on the MaverickCrunch, as you use on ARM. See the below
> table for NZCV values from MaverickCrunch.
>
> MaverickCrunch - (cfcmp*):
>          N Z C V
> A == B   0 1 0 0
> A <  B   1 0 0 0
> A >  B   1 0 0 1
> unord    0 0 0 0
>
> ARM/FPA/VFP - (cmp*):
>          N Z C V
> A == B   0 1 1 0
> A <  B   1 0 0 0
> A >  B   0 0 1 0
> unord    0 0 1 1

The key to getting this right is to use a special comparison mode for
MaverickCrunch comparisons. You have to look at arm_gen_compare_reg(),
which calls arm_select_cc_mode() to do all the work. I think there's
probably a mess in there: CCFPmode is used for some non-MaverickCrunch
floating point compares as well as some MaverickCrunch ones. You'll have
to sort that out. The safe way would be to define a new mode, say
CCMAVmode, and use that.

Just wondering (since I'm no ARM expert):

(define_insn "*cirrus_cmpdi"
  [(set (reg:CC CC_REGNUM)
        (compare:CC (match_operand:DI 0 "cirrus_fp_register" "v")
                    (match_operand:DI 1 "cirrus_fp_register" "v")))]
  "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
  "cfcmp64%?\\tr15, %V0, %V1"
  [(set_attr "type" "mav_farith")
   (set_attr "cirrus" "compare")]
)

Does this insn also set the flags according to the MaverickCrunch NZCV
table above? If so, it needs to use a MaverickCrunch CCmode too, in
which case, with the CANONICALIZE_COMPARISON macro, you can change

(set (reg:CCMAV CC_REGNUM)
     (compare:CCMAV (match_operand:DI ...) (const_int x)))
(... (ge (reg:CCMAV CC_REGNUM) ...))

into

(set (reg:CCMAV CC_REGNUM)
     (compare:CCMAV (match_operand:DI ...) (const_int x-1)))
(... (gt (reg:CCMAV CC_REGNUM) ...))

which is better if I understand correctly. Once you know that all
MaverickCrunch comparisons (and only those) have (reg:CCMAV CC_REGNUM)
in them, then it is easy to write all the corresponding comparison and
branch instructions.

-- 
Rask Ingemann Lambertsen
Re: I'm sorry, but this is unacceptable (union members and ctors)
On Wed, Jun 27, 2007 at 11:36:23PM -0700, michael.a wrote:
> mark-28 wrote:

[quote]
I agree with the sentiment, but not with the relevance. I don't see how;
having a four field structure automatically appear as a completely
different two field structure, based only upon a match-up between field
types and names, seems more complicated and magical to me.

This would disagree with much of the modern programming world. Black box
programming implies that you do not need to know whether a field is a
real field, or whether it is derived from other fields. Syntactically,
it sounds like you are asking for operator overloading on fields, which
maps to properties in some other languages. This is not "simpler", as it
only conceals that a function may be performed underneath. It may look
prettier, or use fewer symbol characters.
[/quote]

I really don't think it serves the thread to mince words over these
matters. I don't think having functional aliases for essentially the
same data member is magical thinking. And I made the case for the black
box scenario, but I think it's also useful (especially for very
low-level objects) ...to know when you are dealing with a black box
(interfaced) or a very simple direct object of the inlineable variety.

[quote]
Even so - I don't see why the following simplification of the example
provided would not suffice for your listed requirement:

class Rectangle {
    Vector2d position;
    Vector2d size;
};

... rectangle.position.x = ... ...
[/quote]

My foremost personal requirement is that no code need change outside the
object definition files. And besides it is ridiculous to jump through
two hoops, when one would suffice.

[quote]
To have these automatically be treated as compatible types seems very
wrong:

class Rectangle { int x, y, w, h; };
class Vector2d { int dx, dy; };

Imagine the fields were re-arranged:

class Rectangle { int y, x, h, w; };
class Vector2d { int dx, dy; };

Now, what should it do?
[/quote]

All non-toy compilers have means of ensuring members are ordered in
memory as they are declared. I have a problem with people retreating to
really petty, divisive arguments in order to make regressive points.

[quote]
My best interpretation of this thread is that you are substituting
intelligibility with DWIM, where DWIM for you is not the same as DWIM
for me, and I don't believe you could write out what your DWIM
expectation is in a way that would not break.
[/quote]

Have you read the thread? (not that you should -- but before making such
an assessment)

sincerely, (and cordially)

michael
-- 
View this message in context: http://www.nabble.com/I%27m-sorry%2C-but-this-is-unacceptable-%28union-members-and-ctors%29-tf3930964.html#a11340286
Sent from the gcc - Dev mailing list archive at Nabble.com.
Re: I'm sorry, but this is unacceptable (union members and ctors)
On Wed, Jun 27, 2007 at 11:36:23PM -0700, michael.a wrote:
> mark-28 wrote:
> > I don't understand what is being requested. Have one structure with
> > four fields, and another with two, and allow them to be used
> > automatically interchangeably? How is this a good thing? How will
> > this prevent the implementor from making a stupid mistake?
>
> Its less a question of making a stupid mistake, as code being
> intelligible for whoever must work with it / pick it up quickly out of
> the blue. The more intelligible (less convoluted) it is, the easier it
> is to quickly grasp what is going on at the macroscopic as well as the
> microscopic levels. The way I personally program, typically a comment
> generally only adds obfuscation to code which already more efficiently
> betrays its own function.

I agree with the sentiment, but not with the relevance. I don't see how;
having a four field structure automatically appear as a completely
different two field structure, based only upon a match-up between field
types and names, seems more complicated and magical to me.

> Also syntactically, I think its bad form for a function to simply
> access a data member directly / fudge its type. A function should
> imply that something functional is happening (or could be happening --
> in the case of protected data / functionality)

This would disagree with much of the modern programming world. Black box
programming implies that you do not need to know whether a field is a
real field, or whether it is derived from other fields. Syntactically,
it sounds like you are asking for operator overloading on fields, which
maps to properties in some other languages. This is not "simpler", as it
only conceals that a function may be performed underneath. It may look
prettier, or use fewer symbol characters.

Even so - I don't see why the following simplification of the example
provided would not suffice for your listed requirement:

class Rectangle {
    Vector2d position;
    Vector2d size;
};

... rectangle.position.x = ... ...

To have these automatically be treated as compatible types seems very
wrong:

class Rectangle { int x, y, w, h; };
class Vector2d { int dx, dy; };

Imagine the fields were re-arranged:

class Rectangle { int y, x, h, w; };
class Vector2d { int dx, dy; };

Now, what should it do?

> Granted, often intelligibility can demand too much semantic overhead
> from strict languages like c, but just as often perhaps, in just such
> a case, a simple accommodation is plainly obvious.

My best interpretation of this thread is that you are substituting
intelligibility with DWIM, where DWIM for you is not the same as DWIM
for me, and I don't believe you could write out what your DWIM
expectation is in a way that would not break.

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED]
http://mark.mielke.cc/