Re: Opcodes (was Re: The external interface for the parser piece)
Dan Sugalski [EMAIL PROTECTED] wrote:
> At 06:05 PM 12/12/00 +, David Mitchell wrote:
>> Also, some of the standard permutations would also need to do some
>> re-invoking, eg ($int - $num) would invoke Int->sub[NUM](sv1,sv2,0),
>> which itself would just fall through to Num->sub[INT](sv2,sv1,1) - the
>> 0/1 parameter indicating whether args have been swapped.
>
> Nope. int->sub[num](sv1, sv2) would have a function body that would look
> something like:
>
>     return(Num->new((NV)sv1->int - get_num(sv2)));
>
> Basically the integer subtraction routine knows that when passed a
> numeric arg it needs to return a number rather than an integer. The
> function will probably be a little more complex than that, but that's
> the gist of it.

Hmm, the problem with this is that all built-in numeric types need to be able to handle all other built-in numeric types. Consider for example if we decide Perl has a builtin arbitrary-precision bigint type. Then the sub[bigint] function within the int vtable needs to know how to perform multiple-precision arithmetic. In this case surely it is better to forward the request to the module that knows about it, and so better preserve encapsulation?

>> 1. Does the Perl 6 language require some explicit syntax and/or
>> semantics to handle multiple and user-defined numeric types? Eg
>> "my type $scalar", "$i + integer($r1+$r2)" and so on.
>
> Gack. No, not unless we're forced to. This sort of thing should be
> invisible. (This isn't C, after all...)

I wasn't suggesting that such extra syntax should be part of day-to-day usage (by all means allow perl to automagically 'do the right thing'), but rather I was wondering whether explicit language features should be available to give users fine-grained control when they need it; eg perl5 provides 'use integer' and 'int()', which provide crude control over IVs vs NVs, and if we suddenly have lots of *Vs, do we also need similar but more generalised syntax?
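Dan's per-type-pair entry can be made concrete. Here's a minimal C sketch, assuming an invented SV layout, type tags, and helper names (none of this is real Perl 6 source): the int vtable's sub-for-num entry knows a num argument forces promotion, so it computes in NV space and returns a num.

```c
#include <assert.h>

/* Hypothetical type tags and scalar layout -- purely illustrative. */
typedef enum { T_INT, T_NUM } Type;

typedef struct {
    Type   type;
    long   iv;   /* valid when type == T_INT */
    double nv;   /* valid when type == T_NUM */
} SV;

/* Extract a numeric value regardless of underlying representation. */
static double get_num(const SV *sv) {
    return sv->type == T_INT ? (double)sv->iv : sv->nv;
}

/* int's sub[NUM] entry: a num arg means the result must be a num. */
static SV int_sub_num(const SV *sv1, const SV *sv2) {
    SV r = { T_NUM, 0, (double)sv1->iv - get_num(sv2) };
    return r;
}
```

The key point of Dan's design is visible here: the decision "a num operand forces a num result" lives inside the int type's own vtable entry, which is exactly what David objects to for types (like bigint) the int code cannot know about.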
Re: Opcodes (was Re: The external interface for the parser piece)
On Thu, Nov 30, 2000 at 06:43:35PM +, David Mitchell wrote:
> * do values ever get demoted - eg an expression involving bigints that
>   evaluates to 0: should this be returned as an int or a bigint?

[I may have mailed this already]

Experimentation on perl5 says yes. Making sv_setuv actually set an iv (ie signed, rather than unsigned) whenever the unsigned integer was actually small enough to be a signed integer got a measurable speedup. I'm guessing that this was because many more of the "IV op IV" code paths got hit, which were usually simpler than the mixed "IV op UV" code. So I'd suggest using int rather than bigint, on the assumption that returning the simplest (accurate) thing (if it's cheap for you to determine that something simpler is accurate) is more likely to allow the next operator to go simple. Likewise, adding two complex numbers and finding that the imaginary part is 0 would return a number of some real type, not some complex type.

> * for the code '$c = $a + $b' - is the current SV type of $c thrown away
>   and replaced with whatever type ($a + $b) evaluates to?

Not sure - I'd presume yes unless $c has an overloaded =. This is probably a language issue, and waiting on the language spec.

Nicholas Clark
Re: Opcodes (was Re: The external interface for the parser piece)
Nick Ing-Simmons [EMAIL PROTECTED] wrote:
> That is a Language and not an internals issue - Larry will tell us. But
> I suspect the answer is that it should "work" without any special stuff
> for simple perl5-ish types - because you need to be able to translate
> 98% of 98% of perl5 programs.

Sorry, perhaps I didn't make myself clear - I was assuming that simple types *would* continue to work as before (on the grounds that there's really only one numeric type in perl5), but that because multiple core and user-defined numeric types are a novel feature of perl6, perl6 *might* need some extra syntax for type coercion etc.

Okay, how about this as a summary of what's been discussed so far (provisional, subject to any language features or semantics TBA by Larry):

For numeric scalar types, there is a concept of 'bigness'. For binary numeric operators, the op fn associated with the vtable of the 'biggest' operand should be invoked. (For efficiency or ease of coding, the vtable fn of the smaller operand *might* get called instead, but in this case that fn just forwards the request to the fn associated with the big operand, possibly with arg swapping.)

A binop for a particular scalar type normally returns an SV of its own type. At its discretion, it may return an SV of a 'smaller' type, if it is efficient to do so, and if it results in no loss of accuracy.

The net effect is that binops are executed in such a way as to minimise the risk of overflow or loss of precision.

Dave M.
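The "biggest operand's vtable wins" rule summarised above can be sketched in a few lines of C. The rank numbers, vtable shape, and the swapped flag are all assumptions for illustration, not a proposed API:

```c
#include <assert.h>

/* Sketch of 'bigness'-driven dispatch with a swapped-args flag. */
typedef struct SV SV;
typedef struct {
    int rank;                                /* 'bigness': int < num < ... */
    double (*sub)(SV *self, SV *other, int swapped);
} VTable;
struct SV { const VTable *vt; double val; };

/* The op fn of the subtraction body honours the swapped flag so that
 * the subtraction is always computed in the original operand order. */
static double num_sub(SV *self, SV *other, int swapped) {
    return swapped ? other->val - self->val : self->val - other->val;
}

static const VTable int_vt = { 1, num_sub }; /* body shared for brevity */
static const VTable num_vt = { 2, num_sub };

/* core pp_sub: invoke the method of the 'biggest' operand, swapping
 * args (and saying so) when the bigger operand is on the right. */
static double pp_sub(SV *a, SV *b) {
    if (a->vt->rank >= b->vt->rank)
        return a->vt->sub(a, b, 0);
    return b->vt->sub(b, a, 1);
}
```

Note how non-commutativity is handled: the bigger type's routine still runs, but it is told the arguments arrived reversed.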
Re: Opcodes (was Re: The external interface for the parser piece)
On Thu, 07 Dec 2000, Dan Sugalski [EMAIL PROTECTED] mused:
>> My original suggestion was that scalar types provide a method that says
>> how 'big' it is (so complex > bigreal > real > int etc), and pp_add(),
>> pp_sub() etc use these values to call the method associated with the
>> biggest operand, swapping args if necessary (and passing a flag
>> indicating that arg swapping has taken place).
>
> Right, but that only works when the two scalar types are in the same
> line. If, for example:
>
>     my Complex $s1 = 4 + 4i;
>     my Image $s2 : filename(foo.gif);
>     $s3 = $s1 + $s2;
>
> how would you handle the addition if perl doesn't know about complex
> types? You've got two entirely different scalars that really have no
> basis for comparison to judge which is really 'bigger' than the other.

Well, in this particular case I would expect $s1 + $s2 to cause a run-time error, since an image scalar type has no natural numeric value. If the implementer of the image type (probably unwisely) chose to give a numeric meaning to images (eg overall brightness of the image), then I would expect $s1 + $s2 to return a complex value equal to $s1 + brightness($s2). On the other hand, I would expect that $s2 * 1.1 would return an image with brightness 10% greater than that of $s2.

My scheme doesn't cover all eventualities, but I think it covers more cases before it's necessary to punt. Also, it reduces the number of functions that need to be implemented per scalar type to O(N) rather than O(N^2): ie rather than add[INT](), add[NUM](), ..., sub[INT](), sub[NUM](), ..., div... there is just get[INT], get[NUM], ... (or get_int, get_num, ...) plus a bit of code in pp_add(), pp_sub() etc which 'does the right thing'.
If we assume that ints and nums are perl builtins, and that some people have implemented the following external types: byte (eg as implemented as a specialised array type), bigreal, complex, bigcomplex, bigrat, quaternion; then the following table shows how well my system copes:

    num - int               gives accurate num
    int - num               gives accurate num
    int - byte              gives accurate int
    byte - int              gives accurate int
    byte - bigreal          gives accurate bigreal
    num - complex           gives accurate complex
    complex - complex       gives accurate complex
    bigreal - complex       gives complex, with potential loss of precision
                            from the bigreal
    bigreal - bigcomplex    gives bigcomplex, probably with no loss of
                            precision
    complex - bigcomplex    gives bigcomplex, but (depending on the
                            implementation of complex) probably evaluates as
                            modulus(complex) - bigcomplex or
                            real(complex) - bigcomplex
    complex - quaternion    gives quaternion, with similar coercion to num
                            of the LHS
    complex - bigrat        gives bigrat; ditto coercion to num of the LHS
    bigcomplex - bigrat     may give bigcomplex or bigrat depending on the
                            precise definition of 'bigness'. Ditto about
                            coercion of 1 arg to num (or bignum).
    quaternion - quaternion gives accurate quaternion

I think in practice people would be reasonably happy where complex types (in the English rather than the mathematical sense) operate fine with others of the same type, and interoperate with all other types with the proviso that only a (possibly big) numeric value can be extracted from them.

With the sv1->sub[typeof(sv2)](sv2) scheme, even something as simple as byte - bigreal is problematic, as this would cause byte->sub[GENERIC] to be called, which has very little chance of 'doing the right thing'.
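The O(N) get-based scheme David describes might look roughly like this in C. The vtable layout, rank values, and helper names are invented for illustration: each type supplies only extraction methods, and the core op function does the cross-type work.

```c
#include <assert.h>

/* Sketch of the O(N) scheme: types provide get_int/get_num only;
 * pp_sub in the core picks the result type and coerces. Illustrative. */
typedef struct SV SV;
typedef struct {
    int    rank;                      /* 'bigness': int=1 < num=2 < ... */
    long   (*get_int)(const SV *);
    double (*get_num)(const SV *);
} VTable;
struct SV { const VTable *vt; long iv; double nv; };

static long   int_get_int(const SV *s) { return s->iv; }
static double int_get_num(const SV *s) { return (double)s->iv; }
static long   num_get_int(const SV *s) { return (long)s->nv; }
static double num_get_num(const SV *s) { return s->nv; }

static const VTable int_vt = { 1, int_get_int, int_get_num };
static const VTable num_vt = { 2, num_get_int, num_get_num };

/* core pp_sub: result type comes from the bigger operand; each operand
 * is coerced through its own get method, preserving encapsulation. */
static SV pp_sub(const SV *a, const SV *b) {
    SV r = { 0, 0, 0.0 };
    if (a->vt->rank >= 2 || b->vt->rank >= 2) {
        r.vt = &num_vt;
        r.nv = a->vt->get_num(a) - b->vt->get_num(b);
    } else {
        r.vt = &int_vt;
        r.iv = a->vt->get_int(a) - b->vt->get_int(b);
    }
    return r;
}
```

Adding a new type to this scheme means writing N get methods for it, not N subtraction (and addition, and division, ...) methods for every existing type - which is the O(N) vs O(N^2) point.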
Re: Opcodes (was Re: The external interface for the parser piece)
On Tue, Dec 12, 2000 at 02:20:44PM +, David Mitchell wrote:
> If we assume that ints and nums are perl builtins, and that some people
> have implemented the following external types: byte (eg as implemented
> as a specialised array type), bigreal, complex, bigcomplex, bigrat,
> quaternion; then the following table shows how well my system copes:
>
>     num - int gives accurate num
>     int - num gives accurate num

what happens if the size of int is such that the maximum int is larger than the value at which nums can no longer maintain integer accuracy? for example, 8 byte doubles as num, 8 byte longs as int? does one promote a num in the range (min_int, max_int) to an int, and do an int calculation? for example (2**62) - 1

> With the sv1->sub[typeof(sv2)](sv2) scheme, even something as simple as
> byte - bigreal is problematic, as this would cause byte->sub[GENERIC] to
> be called, which has very little chance of 'doing the right thing'.

unless it uses your scheme at this point. [this might be the correct speed tradeoff - common types know how to interact with common types directly, and know how to call a slower but maximally accurate routine if they are beyond their competency]

It's surprising how easy it is to slow things down with decision making code in your arithmetic ops. I'm trying to coax perl5 into doing better 64 bit integer arithmetic:

http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2000-12/msg00499.html

and the simple hack of trying to make scalars IV (signed) rather than UV (unsigned) whenever possible, and consequently going to the first code in each op (IV OP IV), gives about a 2% speed up (timing for the perl5.7 regression tests). [note, this doesn't make it faster, just claws back the slowdown other parts of my provisional changes have made]

hang on, there was a point that was supposed to back up. Accuracy is needed, but I fear that a single general scheme to deliver this will slow down the common cases.

Nicholas Clark
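Nicholas's fast-path-with-fallback idea can be shown in miniature. This is a hedged sketch, not perl5's actual pp_add: do the common IV+IV case inline, and drop to a wider (here lossy double) representation only when the integer result would overflow. Unlike the 2s-complement trick he mentions, this version uses the portable pre-check against LONG_MAX/LONG_MIN.

```c
#include <assert.h>
#include <limits.h>

/* Result of an add: either an exact long or a fallback double. */
typedef struct { int is_iv; long iv; double nv; } Val;

static Val iv_add(long a, long b) {
    Val v = { 0, 0, 0.0 };
    /* Portable overflow pre-check (no signed-overflow UB). */
    if ((b > 0 && a > LONG_MAX - b) || (b < 0 && a < LONG_MIN - b)) {
        v.nv = (double)a + (double)b;   /* slow path: range over precision */
    } else {
        v.is_iv = 1;
        v.iv = a + b;                   /* common fast path */
    }
    return v;
}
```

The decision-making cost Nicholas warns about is exactly the two comparisons on the fast path here; his point is that even this much per-op branching is measurable when the op is as cheap as an integer add.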
Re: Opcodes (was Re: The external interface for the parser piece)
Nicholas Clark [EMAIL PROTECTED] wrote:
> On Tue, Dec 12, 2000 at 02:20:44PM +, David Mitchell wrote:
>> If we assume that ints and nums are perl builtins, and that some people
>> have implemented the following external types: byte (eg as implemented
>> as a specialised array type), bigreal, complex, bigcomplex, bigrat,
>> quaternion; then the following table shows how well my system copes:
>>
>>     num - int gives accurate num
>>     int - num gives accurate num
>
> what happens if the size of int is such that the maximum int is larger
> than the value at which nums can no longer maintain integer accuracy?

Then some precision is lost. This seems reasonably natural to me, in the sense that nums have a wider range, and so when mixing the two, returning a num may result in loss of precision but not an overflow, which is the lesser of two evils. A really smart implementation might choose whether to return a num or an int depending on the sizes of its operands, but personally I think that's asking for trouble.

>> With the sv1->sub[typeof(sv2)](sv2) scheme, even something as simple as
>> byte - bigreal is problematic, as this would cause byte->sub[GENERIC] to
>> be called, which has very little chance of 'doing the right thing'.
>
> unless it uses your scheme at this point. [this might be the correct
> speed tradeoff - common types know how to interact with common types
> directly, and know how to call a slower but maximally accurate routine
> if they are beyond their competency]

So are you suggesting a hybrid, where for the standard type permutations the right sub is immediately called, while the arg1->op[GENERIC] functions check the 'size' of the operands, and if necessary, swap them and invoke some other method from someone's vtable? I suppose that might work.

Also, some of the standard permutations would also need to do some re-invoking, eg ($int - $num) would invoke Int->sub[NUM](sv1,sv2,0), which itself would just fall through to Num->sub[INT](sv2,sv1,1) - the 0/1 parameter indicating whether args have been swapped.
> It's surprising how easy it is to slow things down with decision making
> code in your arithmetic ops. I'm trying to coax perl5 into doing better
> 64 bit integer arithmetic:
> http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2000-12/msg00499.html

Rather you than me!

I think the work you're doing with 64 bits shows up a problem in perl5 which hasn't really been addressed in perl6: in Perl 5 there is really only one numeric type (NV). Okay, so there's an IV and a UV too, but these are just variations with ill-defined semantics (eg on my 32-bit system, 2^31 + 2^31 gives a zero int, while pow (eg 2**3) just returns an NV, no questions asked). And because the semantics are so ill-defined, they have mostly broken for 64-bit architectures, hence your big patch.

Thus in perl5, with only one numeric type, there aren't any well-defined language semantics for upgrading or downgrading between different numeric types, or specific language features for declaring a variable of a specific type, or requesting that an expression return a particular type.

The semantics that I have suggested so far (and I'm not by any stretch suggesting that this is optimum) basically imply that expressions return a value of the type of the 'biggest' of their operands. There is no automatic upgrading (eg 2 large ints when added don't return a num), or downgrading (eg the difference of 2 similar nums will never return an int), and "my integer $i = expr" doesn't cause expr to be evaluated in a fancy integer context.

I think (though I may be wrong here - if so, sorry!) that Dan's stuff similarly ignores or sidesteps any implications of extra language semantics. (ie Dan's semantics are: return a type equal to the LH operand, roughly speaking.)

I think this boils down to 2 important questions, and I'd be interested in hearing people's opinions of them:

1. Does the Perl 6 language require some explicit syntax and/or semantics to handle multiple and user-defined numeric types?
Eg "my type $scalar", "$i + integer($r1+$r2)" and so on.

2. If the answer to (1) is yes, is it possible to decide what the numeric part of the vtable API should be until the details of (1) have been agreed on?

I suspect the answers are yes and no.

Dave.
Re: Opcodes (was Re: The external interface for the parser piece)
David Mitchell [EMAIL PROTECTED] writes:
> I think this boils down to 2 important questions, and I'd be interested
> in hearing people's opinions of them.
>
> 1. Does the Perl 6 language require some explicit syntax and/or semantics
> to handle multiple and user-defined numeric types? Eg "my type $scalar",
> "$i + integer($r1+$r2)" and so on.

That is a Language and not an internals issue - Larry will tell us. But I suspect the answer is that it should "work" without any special stuff for simple perl5-ish types - because you need to be able to translate 98% of 98% of perl5 programs. So we should start from the premise "no" and see where we get ...

> 2. If the answer to (1) is yes, is it possible to decide what the numeric
> part of the vtable API should be until the details of (1) have been
> agreed on?
>
> I suspect the answers are yes and no.
>
> Dave.

I suspect the answers are "no", and (2) is eliminated as "dead code" ;-)

-- 
Nick Ing-Simmons
Re: Opcodes (was Re: The external interface for the parser piece)
On Tue, Dec 12, 2000 at 06:05:30PM +, David Mitchell wrote:
> Nicholas Clark [EMAIL PROTECTED] wrote:
>> On Tue, Dec 12, 2000 at 02:20:44PM +, David Mitchell wrote:
>>> If we assume that ints and nums are perl builtins, and that some people
>>> have implemented the following external types: byte (eg as implemented
>>> as a specialised array type), bigreal, complex, bigcomplex, bigrat,
>>> quaternion; then the following table shows how well my system copes:
>>>
>>>     num - int gives accurate num
>>>     int - num gives accurate num
>>
>> what happens if the size of int is such that the maximum int is larger
>> than the value at which nums can no longer maintain integer accuracy?
>
> Then some precision is lost. This seems reasonably natural to me, in the
> sense that nums have a wider range, and so when mixing the two, returning
> a num may result in loss of precision but not an overflow, which is the
> lesser of two evils. A really smart implementation might choose whether
> to return a num or an int depending on the sizes of its operands, but
> personally I think that's asking for trouble.

I've got integer overflow detection to work on p5 (for all the platforms I can test on). But it has to make the assumption of 2s complement for usable speed. It would be so much easier in assembler.

>> unless it uses your scheme at this point. [this might be the correct
>> speed tradeoff - common types know how to interact with common types
>> directly, and know how to call a slower but maximally accurate routine
>> if they are beyond their competency]
>
> So are you suggesting a hybrid, where for the standard type permutations
> the right sub is immediately called, while the arg1->op[GENERIC]
> functions check the 'size' of the operands, and if necessary, swap them
> and invoke some other method from someone's vtable? I suppose that might
> work.

I was suggesting that operations should (could?) behave *as if* your scheme were followed all the time. As far as anything outside the operators was concerned there would be no detectable difference.
But actually the operators would take short cuts if they knew what both sides were (and as Dan seems to be thinking in terms of vtables, shortcuts would appear to be vtable methods). I suspect that this hybrid cheating approach may well be faster when most operations use a few types of builtin numbers, but still preserves the full flexibility and accuracy (well, minimal lossyness) of your scheme.

> Also, some of the standard permutations would also need to do some
> re-invoking, eg ($int - $num) would invoke Int->sub[NUM](sv1,sv2,0),
> which itself would just fall through to Num->sub[INT](sv2,sv1,1) - the
> 0/1 parameter indicating whether args have been swapped.

Hadn't thought that far. Seems like a workable suggestion. It's better than assuming that you can negate an operand.

> I think the work you're doing with 64-bits shows up a problem in perl5
> which hasn't really been addressed in perl6: In Perl 5 there is really
> only one numeric type (NV). Okay, so there's an IV and a UV too, but
> these are just variations with ill-defined semantics (eg on my 32-bit
> system, 2^31 + 2^31 gives a zero int, while pow (eg 2**3) just returns
> an NV, no questions asked). And because the semantics are so
> ill-defined, they have mostly broken for 64-bit architectures, hence
> your big patch.

It's also because perl assumes that IV == (IV)(NV)IV for all IV. Which is no longer true if you want 64 bit IVs when your NVs are 64 bits, but have mantissa, sign and exponent to fit in that space.

> The semantics that I have suggested so far (and I'm not by any stretch
> suggesting that this is optimum) basically imply that expressions return
> a value of the type of the 'biggest' of their operands.

They seem to be pretty close to optimal for accuracy, even when new types appear after core code is written.

> I think (though I may be wrong here - if so, sorry!) that Dan's stuff
> similarly ignores or sidesteps any implications of extra language
> semantics.

Quite possibly.
But until we have the language semantics, I guess working out how to do perl5 language semantics better isn't a complete waste of time. We've got a working perl5 to benchmark against. (and I found it damn hard to make code that runs as fast as it, even for the "obvious" case of doing 32 bit IV maths as integers rather than as 64 bit NVs)

Nicholas Clark
Re: Opcodes (was Re: The external interface for the parser piece)
On Thu, Dec 07, 2000 at 01:14:40PM +, David Mitchell wrote:
> Dan Sugalski [EMAIL PROTECTED] wrote:
>> All the math is easy if the scalars are of known types. Addition and
>> multiplication are easy if only one of the scalars involved is of known
>> type. Math with both of unknown type, or subtraction and division with
>> the right-hand operand of unknown type, is rather more difficult. :(
>
> I'm not clear with your scheme how addition works if one of the scalars
> (the adder) is of unknown type. ie given sv1 of type NUM, sv2 of type
> UNKNOWN; $sv1 + $sv2 would invoke:
>
>     sv1->add[UNKNOWN](sv2)
>
> which somewhere will cause a function in the vtable for NUMs to be
> called, eg
>
>     NUM_add_UNKNOWN(sv1,sv2) { }
>
> Now, how does this function perform its calculation?

I'm guessing that Dan is planning to take advantage of addition and multiplication being commutative:

    sv1->add[UNKNOWN](sv2)  swaps to  sv2->add[NUM](sv1)

(It's "obvious" in the usual way - not obvious until you see it. I've been prodding at pp_add in perl5, so I've been thinking about these sorts of things)

Nicholas Clark
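The commutativity trick Nicholas describes fits in a few lines of C. The `knows_partner` flag below is a stand-in for "the other operand's type is one this type recognises", and all the names are invented for illustration:

```c
#include <assert.h>

/* Sketch: if the left operand can't handle the right's type, bounce
 * the add to the right operand's vtable with the args swapped --
 * safe only because addition (and multiplication) commute. */
typedef struct SV SV;
typedef struct {
    int knows_partner;                 /* can this type add the other? */
    double (*add)(SV *self, SV *other);
} VTable;
struct SV { const VTable *vt; double val; };

static double num_add(SV *self, SV *other) { return self->val + other->val; }

static const VTable num_vt     = { 1, num_add };
static const VTable unknown_vt = { 0, 0 };   /* no add of its own */

static double dispatch_add(SV *a, SV *b) {
    if (a->vt->knows_partner)
        return a->vt->add(a, b);
    return b->vt->add(b, a);           /* swap: a + b == b + a */
}
```

Subtraction and division can't use this swap unadorned, which is exactly why the thread keeps coming back to a swapped-args flag for the non-commutative ops.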
Re: Opcodes (was Re: The external interface for the parser piece)
Nicholas Clark [EMAIL PROTECTED] wrote:
> On Thu, Dec 07, 2000 at 01:14:40PM +, David Mitchell wrote:
>> Dan Sugalski [EMAIL PROTECTED] wrote:
>>> All the math is easy if the scalars are of known types. Addition and
>>> multiplication are easy if only one of the scalars involved is of known
>>> type. Math with both of unknown type, or subtraction and division with
>>> the right-hand operand of unknown type, is rather more difficult. :(
>>
>> I'm not clear with your scheme how addition works if one of the scalars
>> (the adder) is of unknown type. ie given sv1 of type NUM, sv2 of type
>> UNKNOWN; $sv1 + $sv2 would invoke: sv1->add[UNKNOWN](sv2), which
>> somewhere will cause a function in the vtable for NUMs to be called, eg
>> NUM_add_UNKNOWN(sv1,sv2) { }. Now, how does this function perform its
>> calculation?
>
> I'm guessing that Dan is planning to take advantage of addition and
> multiplication being commutative: sv1->add[UNKNOWN](sv2) swaps to
> sv2->add[NUM](sv1). (It's "obvious" in the usual way - not obvious until
> you see it. I've been prodding at pp_add in perl5, so I've been thinking
> about these sorts of things)

My original suggestion was that scalar types provide a method that says how 'big' it is (so complex > bigreal > real > int etc), and pp_add(), pp_sub() etc use these values to call the method associated with the biggest operand, swapping args if necessary (and passing a flag indicating that arg swapping has taken place).
Re: Opcodes (was Re: The external interface for the parser piece)
At 02:01 PM 12/7/00 +, David Mitchell wrote:
> Nicholas Clark [EMAIL PROTECTED] wrote:
>> On Thu, Dec 07, 2000 at 01:14:40PM +, David Mitchell wrote:
>>> Dan Sugalski [EMAIL PROTECTED] wrote:
>>>> All the math is easy if the scalars are of known types. Addition and
>>>> multiplication are easy if only one of the scalars involved is of
>>>> known type. Math with both of unknown type, or subtraction and
>>>> division with the right-hand operand of unknown type, is rather more
>>>> difficult. :(
>>>
>>> I'm not clear with your scheme how addition works if one of the scalars
>>> (the adder) is of unknown type. ie given sv1 of type NUM, sv2 of type
>>> UNKNOWN; $sv1 + $sv2 would invoke: sv1->add[UNKNOWN](sv2), which
>>> somewhere will cause a function in the vtable for NUMs to be called, eg
>>> NUM_add_UNKNOWN(sv1,sv2) { }. Now, how does this function perform its
>>> calculation?
>>
>> I'm guessing that Dan is planning to take advantage of addition and
>> multiplication being commutative: sv1->add[UNKNOWN](sv2) swaps to
>> sv2->add[NUM](sv1). (It's "obvious" in the usual way - not obvious until
>> you see it. I've been prodding at pp_add in perl5, so I've been thinking
>> about these sorts of things)

And all that was what I was thinking.

> My original suggestion was that scalar types provide a method that says
> how 'big' it is (so complex > bigreal > real > int etc), and pp_add(),
> pp_sub() etc use these values to call the method associated with the
> biggest operand, swapping args if necessary (and passing a flag
> indicating that arg swapping has taken place).

Right, but that only works when the two scalar types are in the same line. If, for example:

    my Complex $s1 = 4 + 4i;
    my Image $s2 : filename(foo.gif);
    $s3 = $s1 + $s2;

how would you handle the addition if perl doesn't know about complex types? You've got two entirely different scalars that really have no basis for comparison to judge which is really 'bigger' than the other.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: Opcodes (was Re: The external interface for the parser piece)
At 11:24 AM 12/1/00 +, David Mitchell wrote:
> and Buddha Buck [EMAIL PROTECTED] wrote:
>> I seem to remember a suggestion made a long time ago that would have the
>> vtable include methods to convert to the "standard types", so that if
>> the calls were b->vtable->add(b,a) (and both operands had to be passed
>> in; this is C we're talking about, not C++ or perl. OO has to be done
>> manually), then the add routine would do a->vtable->fetchint(a) to get
>> the appropriate value. Or something like that. Have I confused
>> something?
>
> That was probably me. (Which means it was probably a daft proposal, and
> everyone rightly ignored it ;-) The basic idea was that all numeric SV
> types must provide methods that extract their value as an integer or
> float of a size (arbitrarily large) specified by the caller, the format
> of which is a Perl standard. For example, one might say:

While nifty, I don't know that perl's going to support numerics with that much control over them. (That one's up to Larry.) Most likely any non-CPU-native numeric support will get tossed into the generic bigint/bigfloat/bigcomplex bin and be done with it. (Complex and imaginary numbers are part of the C99 standard, FWIW - I just installed a new version of DEC C that has preliminary support for it too. Looks nifty...)

How to handle math on objects that aren't on the mainline (int/bigint or float/bigfloat) path is a rather dicey thing, and a part of me is really tempted to just punt. That's something that'll have to wait for Larry, though, since it revolves around what the defined behavior there is.

All the math is easy if the scalars are of known types. Addition and multiplication are easy if only one of the scalars involved is of known type. Math with both of unknown type, or subtraction and division with the right-hand operand of unknown type, is rather more difficult. :(

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: Opcodes (was Re: The external interface for the parser piece)
At 05:59 PM 11-30-2000 +, Nicholas Clark wrote:
> On Thu, Nov 30, 2000 at 12:46:26PM -0500, Dan Sugalski wrote:
> (Note, Dan was writing about "$a=1.2; $b=3; $c = $a + $b")
>> $a=1; $b=3; $c = $a + $b
>>
>> If they don't exist already, then something like:
>>
>>     newscalar a, num, 1.2
>>     newscalar b, int, 3
>>     newscalar c, num, 0
>>     add t3, a, b
>
> and $c ends up a num? why that line "newscalar c, num, 0"? It looks to
> me like add needs to be polymorphic and work out the best compromise for
> the type of scalar to create based on the integer/num/complex/oddball
> types of its two operands.

I think the "add t3, a, b" was a typo, and should be "add c, a, b".

Another way of looking at it, assuming that the Perl6 interpreter is stack-based, not register-based, is that the sequence would get converted into something like this:

    push num 1.2    ;; literal can be precomputed at compile time
    dup
    newscalar a     ;; get value from top of stack
    push int 3      ;; literal can be precomputed at compile time
    dup
    newscalar b
    push a
    push b
    add
    newscalar c

The "add" op would, in C code, do something like:

    void add() {
        P6Scalar *addend;
        P6Scalar *adder;

        addend = pop();
        adder  = pop();
        push(addend->vtable->add(addend, adder));
    }

It would be up to addend->vtable->add() to figure out how to do the actual addition, and what type to return.

> But that probably doesn't help much. Let me throw together something
> more detailed and we'll see where we go from there. Hopefully it will
> cover the above case too.

Nicholas Clark
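The stack-machine fragment above can be fleshed out into something that actually runs. The stack, the P6Scalar layout, and the vtable shape are all guesses made for illustration, not a real Perl 6 design:

```c
#include <assert.h>

/* Minimal stack machine around a vtable-dispatched add op. */
typedef struct P6Scalar P6Scalar;
typedef struct {
    P6Scalar (*add)(const P6Scalar *, const P6Scalar *);
} VTable;
struct P6Scalar { const VTable *vtable; double nv; };

static P6Scalar stack[16];
static int sp = 0;
static void push(P6Scalar v)  { stack[sp++] = v; }
static P6Scalar pop_(void)    { return stack[--sp]; }

/* A num's add entry: result type chosen by the addend's vtable. */
static P6Scalar num_add(const P6Scalar *a, const P6Scalar *b) {
    P6Scalar r = { a->vtable, a->nv + b->nv };
    return r;
}
static const VTable num_vt = { num_add };

/* The "add" op: pop both operands, dispatch through the addend. */
static void op_add(void) {
    P6Scalar adder  = pop_();
    P6Scalar addend = pop_();
    push(addend.vtable->add(&addend, &adder));
}
```

As the message says, everything interesting happens inside the vtable's add: the op itself only shuffles the stack.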
Re: Opcodes (was Re: The external interface for the parser piece)
At 05:59 PM 11/30/00 +, Nicholas Clark wrote:
> On Thu, Nov 30, 2000 at 12:46:26PM -0500, Dan Sugalski wrote:
>> (Moved over to -internals, since it's not really a parser API thing)
>>
>> At 11:06 AM 11/30/00 -0600, Jarkko Hietaniemi wrote:
>>> Presumably. But why are you then still talking about "the IV slot in a
>>> scalar"...? I'm slow today. Show me how
>>>
>>>     $a = 1.2; $b = 3; $c = $a + $b;
>>>
>>> is going to work, what kind of opcodes do you see being used? (for the
>>> purposes of this exercise, you may not assume the optimizer doing
>>> $c = (1.2+3) behind the curtains :-)
>
> $a=1; $b=3; $c = $a + $b

No, that's naughty - it's much more interesting if the scalars are different types.

> If they don't exist already, then something like:
>
>     newscalar a, num, 1.2
>     newscalar b, int, 3
>     newscalar c, num, 0
>     add t3, a, b
>
> and $c ends up a num?

When the add line is fixed, yup. :) This is assuming the optimizer can spend enough time and see enough of the code to know that we're adding an int and a num, so that $c must be a num. If not, the newscalar line would be:

    newscalar c, generic, NULL

to set it to be a generic empty scalar.

> why that line "newscalar c, num, 0"? It looks to me like add needs to be
> polymorphic and work out the best compromise for the type of scalar to
> create based on the integer/num/complex/oddball types of its two
> operands.

Yup. What add does is based on the types of the two operands. In the more odd cases, I assume its type stuff will be based on the left-hand operand, but I wouldn't bet the farm on that yet, as that's a Larry call.

> [Oh, but I'm blinkered in this because I'm attempting to make pp_add in
> perl5 do this sort of thing, so I may be missing a better way of doing
> it]

vtables make it a lot nicer. Whether they make it faster is still up in the air... :)

> But that probably doesn't help much. Let me throw together something
> more detailed and we'll see where we go from there. Hopefully it will
> cover the above case too.

What, the "what if one of the operands is really bizarre" case?
					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: Opcodes (was Re: The external interface for the parser piece)
"DS" == Dan Sugalski [EMAIL PROTECTED] writes:

>> The "add" op would, in C code, do something like:
>>
>>     void add() {
>>         P6Scalar *addend;
>>         P6Scalar *adder;
>>
>>         addend = pop();
>>         adder  = pop();
>>         push(addend->vtable->add(addend, adder));
>>     }
>>
>> It would be up to addend->vtable->add() to figure out how to do the
>> actual addition, and what type to return.

DS> Yup. I think it'll be a little more complex than that in the call,
DS> something like:
DS>
DS>     addend->vtable->(add[typeof adder])(adder);
DS>
DS> The extra level of indirection may hurt in the general case, but I
DS> think it's a win to call the "add an int scalar to me" function rather
DS> than have a generic "add this scalar to me" function that figures out
DS> the type of the scalar passed and then Does The Right Thing. I hope.
DS> (Yeah, I'm betting that the extra indirect will be cheaper than the
DS> extra code. But I'm not writing that in stone until we can do some
DS> benchmarking)

Is all that really necessary? Why not a non-vtbl function that knows how to add numeric types? I would have wanted to limit the vtbl to self-manipulation functions: set, get, convert, etc. Cross-object operations would/should be outside the realm of the object. (It seems like trying to lift yourself by the bootstraps.)

chaim
-- 
Chaim Frenkel                                      Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]                                  +1-718-236-0183
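Chaim's alternative - one central, non-vtbl add that only asks each operand for its value - might be sketched like this. The type tags and helpers are invented for illustration; the point is that cross-type policy lives in the core function, while the per-type methods stay limited to self-manipulation (get/convert):

```c
#include <assert.h>

/* Hypothetical tags and layout, as in the other sketches. */
typedef enum { T_INT, T_NUM } Type;
typedef struct { Type type; long iv; double nv; } SV;

/* The only per-type-ish knowledge: extract a numeric value. */
static double as_num(const SV *s) {
    return s->type == T_INT ? (double)s->iv : s->nv;
}

/* Central, non-vtbl add: switches on both operand types itself. */
static SV core_add(const SV *a, const SV *b) {
    SV r = { T_NUM, 0, 0.0 };
    if (a->type == T_INT && b->type == T_INT) {  /* both int: stay int */
        r.type = T_INT;
        r.iv = a->iv + b->iv;
    } else {
        r.nv = as_num(a) + as_num(b);            /* anything else: num */
    }
    return r;
}
```

The trade-off against Dan's scheme is visible: this function is simple and central, but it must enumerate the type combinations itself, so user-defined types can't extend it without hooks back into the vtables.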