On Sat, May 9, 2009 at 11:06 AM, Thomas E Enebo tom.en...@gmail.com wrote:
On Fri, May 8, 2009 at 10:47 PM, Charles Oliver Nutter
charles.nut...@sun.com wrote:
Subramanya Sastry wrote:
Thomas E Enebo wrote:
This makes me wonder whether we can actually measure the cost of the
various locking/volatile/active/inactive scenarios. Of course a pet
microbenchmark of these may give an unrealistic answer, but it would
still be cool to get some understanding of the cost.
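A minimal sketch of the kind of microbenchmark Tom is suggesting, using
Ruby's stdlib Benchmark module. The two scenarios here (plain increment
vs. Mutex-synchronized increment) are illustrative stand-ins for the
locking/volatile variants under discussion, and as he notes, the numbers
from such a toy loop should be taken with a grain of salt:

```ruby
require 'benchmark'

N = 1_000_000
mutex  = Mutex.new
plain  = 0
locked = 0

Benchmark.bm(8) do |bm|
  # Baseline: unsynchronized counter increments.
  bm.report('plain')  { N.times { plain += 1 } }
  # Same work, but paying for lock acquisition on every iteration.
  bm.report('mutex')  { N.times { mutex.synchronize { locked += 1 } } }
end
```

Comparing the two reported times gives a rough per-iteration cost for
the lock; more scenarios can be added as further `bm.report` lines.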
Yeah, we need
Subramanya Sastry wrote:
I may have been wrong. I had a chance to think through this a little
bit more.
Consider this ruby code:
i = 5
v1 = i + 1
some_random_method_call()
v2 = i + 1
In this code snippet, the second '+' might not get optimized, because
'some_random_method_call' could redefine '+' (or otherwise change the
program's bindings) behind the compiler's back.
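The hazard above can be made concrete. The sketch below uses an
illustrative `Counter` class rather than patching Fixnum#+ directly
(which would destabilize the interpreter itself), but the same legal
Ruby move applies to the built-in '+': a method call in between the two
additions can redefine '+', so the compiler cannot treat them as the
same value.

```ruby
# Illustrative stand-in for an object whose '+' may change at runtime.
class Counter
  attr_reader :n
  def initialize(n)
    @n = n
  end

  def +(other)
    n + other
  end
end

def some_random_method_call
  # Redefine Counter#+ at runtime -- perfectly legal Ruby, and it
  # invalidates any assumption made about '+' before this call.
  Counter.class_eval do
    def +(other)
      n + other + 100
    end
  end
end

i  = Counter.new(5)
v1 = i + 1                # 5 + 1       => 6
some_random_method_call
v2 = i + 1                # 5 + 1 + 100 => 106, not equal to v1
```

So common-subexpression elimination across the call is unsound unless
the compiler can guard against, or prove the absence of, redefinition.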
Here 'call' could be a regular C call for all you care. So, what would
happen in the compiler is that you would first transform the code into
a higher-level IR (with virtual calls as above), and at some point in
the optimization you would ratchet the IR down one level, which might
expose further optimization opportunities.
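To sketch the "ratchet down one level" idea in Ruby (this is not
JRuby's actual IR, just a hypothetical two-level representation): a
generic high-level call node is rewritten into a guarded primitive add
that lower-level passes can then see and optimize, while everything
else stays a generic call.

```ruby
# Hypothetical IR node types, for illustration only.
HighLevelCall    = Struct.new(:receiver, :name, :args)
GuardedFixnumAdd = Struct.new(:receiver, :arg)  # guard + fast-path add

# One lowering pass: expose binary '+' as a primitive; anything we
# cannot prove is a binary '+' remains a generic (slow-path) call.
def lower(node)
  if node.is_a?(HighLevelCall) && node.name == :+ && node.args.size == 1
    GuardedFixnumAdd.new(node.receiver, node.args.first)
  else
    node
  end
end

hl = HighLevelCall.new(:i, :+, [1])   # the virtual call for `i + 1`
ll = lower(hl)                        # now a guarded primitive add
```

The guard in the lowered node is what would catch a runtime
redefinition of '+' and fall back to the generic call path.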