>>
>> Why? For me Smalltalk is a syntax and everything is an object. The
>> rest is optional.
>
> Aren't instance variables part of the syntax? Or is Self Smalltalk?
? What if you use the same syntax and, behind the scenes, the system makes sure
that you get an optimized message send? From a method-reuse point of view we
would not have offset-based bytecodes anymore.

>>> Btw, without instance variables you don't need mixins, because you have traits.
>>>
>>> If you only want mixins (instead of stateful traits), then there's at least
>>> one mixin implementation for Squeak out there.
>>>
>>>>
>>>> Now I have a question: does the JIT or the shortcut (not sure if this is in
>>>> the StackVM) blur the cost of accessors
>>>> vs. direct accesses?
>>>
>>> Bytecodes are still 10-12x faster than sends with Cog.
>>>
>> Even those which are optimized by the JIT?
>> I mean, consider:
>>
>> | pt |
>> pt := 1@2.
>> [ pt x ] bench
>>
>> '2.789668866226755e6 per second.'
>>
>>
>> | pt |
>> pt := 1@2.
>> [ pt xx ] bench
>> '2.642108378324335e6 per second.'
>>
>> where Point>>xx is:
>> xx
>>     ^ self x
>>
>> So, what do you mean by 10-12 times faster?
>>
>
> Your benchmark has several flaws. It uses bench, which is a message send by
> itself and does several other sends, block activations and so on. Just
> evaluate
> [] bench.
> to see the problem.
>
> Here is the benchmark on which I based my claim of a 10-12x performance
> difference:
>
> 0 tinyBenchmarks.
> '540940306 bytecodes/sec; 50274171 sends/sec'
>
> It shows a 10.76x difference. You may say that it's inaccurate, so I wrote
> another one myself: http://leves.web.elte.hu/squeak/SendBenchmark.st
>
> To run it, evaluate the following:
> SendBenchmark run.
>
> My result is:
> #(#(109 16) #(105 17) #(105 18) #(108 18) #(106 19)).
> To get the difference (may not work in Pharo):
> #(#(109 16) #(105 17) #(105 18) #(108 18) #(106 19)) sum in: [ :sum |
> sum first / sum second roundTo: 0.01 ].
> 6.06
>
> So it's 6x faster to use instance variables than accessors.
>
>
> Levente
>
>>>
>>> Levente
>>>
>>> P.S.: IIRC one of V8's optimizations is to use a common representation
>>> (class) for objects that have the same slots (instance variables).
>>>
>>>
>>>>
>>>> Has anybody run a benchmark of
>>>> self x vs x in Cog recently
>>>> on a real app?
>>>>
>>>> Stef
>>>
>>
>>
>>
>> --
>> Best regards,
>> Igor Stasenko AKA sig.
>>
>>
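
A minimal sketch to make the quoted comparison concrete. The class name
BenchPoint and its selectors are made up for illustration; they are not from
the thread or from SendBenchmark.st. It defines one method that reads its
instance variable directly and one that goes through an accessor. Pasting it
into a workspace should work in Squeak of that era (recent Pharo spells the
class-definition message differently, using package: instead of category:):

    Object subclass: #BenchPoint
        instanceVariableNames: 'x'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'SendBench-Sketch'.

    "accessor: reached through a real message send when called as self x"
    BenchPoint compile: 'x
        ^ x'.

    BenchPoint compile: 'setX: aNumber
        x := aNumber'.

    "direct access: the instance variable is read with plain bytecodes;
     the only send here is #+"
    BenchPoint compile: 'directSum
        ^ x + x'.

    "same computation, but each self x adds a message send"
    BenchPoint compile: 'accessorSum
        ^ self x + self x'.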
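
And a sketch of how one might time it without going through bench, in the
spirit of Levente's objection: run a fixed number of iterations and subtract
an empty-loop baseline, so the loop overhead cancels out. BenchPoint,
directSum and accessorSum come from the sketch above, and the ratio you get
will depend on the VM (interpreter, StackVM and Cog should all differ):

    | p n baseline direct viaAccessor |
    p := BenchPoint new setX: 3; yourself.
    n := 10000000.
    "to:do: with a literal block is inlined by the compiler, so each loop
     adds only increment/compare bytecodes, identical in all three runs"
    baseline := [ 1 to: n do: [ :i | p ] ] timeToRun.
    direct := [ 1 to: n do: [ :i | p directSum ] ] timeToRun.
    viaAccessor := [ 1 to: n do: [ :i | p accessorSum ] ] timeToRun.
    "elapsed time with the baseline removed; milliseconds in Squeak"
    { direct - baseline. viaAccessor - baseline }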
