On Monday, 18 July 2016 19:37:46 UTC+3, ondrej...@gmail.com wrote:
>
> I think this pretty much covers it
>
> _, err := VirtualProtect(fn.body, 0x40)
> if err != nil {
>     panic(err)
> }
>
> // OH GOD WHAT HAVE I DONE???
> type callstub struct{ fn func(*Memory) }    
>
> Excellent and worrying work :-) A 2x difference between interpreted and 
> native is pretty rad. It strongly reminds me of the JIT endeavours I 
> researched earlier today - none of those seemed to have survived. I'd wait 
> until there's native support before pushing things like these into 
> production. But it would be nice to reopen the discussion about runtime 
> (portable) assembly; it could make performance-critical code in the 
> text/numerical analysis sphere quite a bit easier.
>

One of the main issues with runtime code generation is that it requires 
special permissions from the machine, which means there must be some sort 
of fallback available. However, those fallbacks are very likely to be ~2x 
slower. But, yes, there are niches where it can be used very effectively.
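
For the curious, the non-Windows side of that hack looks roughly like this - 
a minimal sketch for linux/amd64 only, leaning on undocumented details of 
how Go represents func values, so treat it as an experiment rather than 
something to ship:

package main

import (
    "syscall"
    "unsafe"
)

func main() {
    // The smallest possible "JIT output": a single RET instruction (amd64).
    code := []byte{0xc3}

    // Ask the OS for memory that is both writable and executable. This is
    // the special permission that can be refused (W^X policies and so on),
    // which is why a plain interpreter fallback has to exist.
    mem, err := syscall.Mmap(-1, 0, len(code),
        syscall.PROT_READ|syscall.PROT_WRITE|syscall.PROT_EXEC,
        syscall.MAP_ANON|syscall.MAP_PRIVATE)
    if err != nil {
        panic(err) // in real code: fall back to the interpreter
    }
    copy(mem, code)

    // A Go func value is a pointer to a word holding the code address,
    // so fake one on the stack. OH GOD WHAT HAVE I DONE, part two.
    codeAddr := uintptr(unsafe.Pointer(&mem[0]))
    fn := *(*func())(unsafe.Pointer(&codeAddr))
    fn()
}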

I would much rather see some language extension that generates 
highly optimized and vectorized asm code from regular Go code. *e.g. 
long compilation times would be fine here.* However, the compiler gets 
faster and faster, so I'm not sure how useful it would be in the long run.

Of course there's no reason there cannot be both.

The discussion isn't so much "how to do it" or "what exactly to do", but 
rather "somebody needs to build a few first prototypes".

+ Egon


> As for all the other developments in this thread - I'll go through them 
> one by one - there are subtle things (operators from int to byte, which 
> makes for faster comparisons; caching of value lookups) and less subtle 
> things (VM), where I'll have to consider the ups and downs. After all, my 
> example is quite cut down, and the actual production version has a lot 
> more complexity bolted on, with a lot more data and fast turnaround (so 
> caching and small VMs might not be worthwhile). In any case, a lot of 
> excellent advice all around, thank you, sir.
>
> On Monday, 18 July 2016 17:07:27 UTC+1, Egon wrote:
>>
>> On Monday, 18 July 2016 13:27:09 UTC+3, Michael Jones wrote:
>>>
>>> Anything much faster than this needs vector operations in the 
>>> interpreter so the “get to the OP function” overhead is once per time 
>>> series rather than once per element in the series.
>>>
>>
>> I managed to make it a little faster (https://play.golang.org/p/Tr2PRKI23w), 
>> but it runs only on Windows (kind of).
>>
>> *What have I done...*
>>
>> + Egon
>>
>>  
>>
>>>  
>>>
>>> *From: *<golan...@googlegroups.com> on behalf of Egon <egon...@gmail.com
>>> >
>>> *Date: *Monday, July 18, 2016 at 1:32 AM
>>> *To: *golang-nuts <golan...@googlegroups.com>
>>> *Cc: *<ondrej...@gmail.com>
>>> *Subject: *[go-nuts] Re: An efficient runtime expression evaluation
>>>
>>>  
>>>
>>>
>>>
>>> On Monday, 18 July 2016 11:13:08 UTC+3, Egon wrote:
>>>
>>>
>>>
>>> On Monday, 18 July 2016 10:30:14 UTC+3, Egon wrote:
>>>
>>> On Monday, 18 July 2016 03:11:29 UTC+3, ondrej...@gmail.com wrote:
>>>
>>> Cheers, I tried replicating my endeavours (
>>> https://play.golang.org/p/Qxoo2ASac6), sorry if it's still too verbose. 
>>> It's essentially rewriting the inbuilt ast.Node into a simpler nested 
>>> struct and then walking it.
>>>
>>>  
>>>
>>> In testing the performance, I started adding algebraic expressions, 
>>> which make my walking more expensive, but don't change the 'native' 
>>> expression evaluation (I guess due to constant folding).
>>>
>>>  
>>>
>>> As to your suggestion three - I do the variable lookup in the parsing 
>>> stage, but I still need to retain the pointer, not the value itself, 
>>> because I'm accessing an element of that given variable (time series), and 
>>> this element (time period) changes at runtime.
>>>
>>>  
>>>
>>> https://play.golang.org/p/dd4hTpMKrp
>>>
>>>  
>>>
>>> Of course you can additionally add constant folding and similar... Also, 
>>> instead of working on a single float at a time, make each variable an 
>>> array of 8 floats that are computed in parallel.
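>>>
>>> Roughly what I mean by the 8-float blocks, as an untested sketch (in the 
>>> real evaluator this would be folded into the existing node types):
>>>
>>> // evaluate on blocks of 8 values at a time, so the per-node dispatch
>>> // cost is paid once per 8 elements instead of once per element
>>> type block [8]float64
>>>
>>> type node interface{ eval(dst *block) }
>>>
>>> type constant struct{ v float64 }
>>>
>>> func (n *constant) eval(dst *block) {
>>>     for i := range dst {
>>>         dst[i] = n.v
>>>     }
>>> }
>>>
>>> type variable struct{ data *block } // points into the time-series storage
>>>
>>> func (n *variable) eval(dst *block) { *dst = *n.data }
>>>
>>> type add struct{ x, y node }
>>>
>>> func (n *add) eval(dst *block) {
>>>     var a, b block
>>>     n.x.eval(&a)
>>>     n.y.eval(&b)
>>>     for i := range dst {
>>>         dst[i] = a[i] + b[i]
>>>     }
>>> }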
>>>
>>>  
>>>
>>>  
>>>
>>> Just realized that it's trivial to write basic constant folding: 
>>> https://play.golang.org/p/iqWX5_Mweb
>>>
>>>  
>>>
>>> Although it probably will be easier to maintain and extend as a separate 
>>> pass: https://play.golang.org/p/xcz5mXoaOG
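>>>
>>> Stripped to its bones, the folding pass is just this shape (untested 
>>> sketch; the playground version differs in the details):
>>>
>>> // a node is either a binary op (op != 0) or a leaf: a constant
>>> // (ref == nil) or a variable reference (ref != nil)
>>> type expr struct {
>>>     op    byte // '+', '-', '*', '/' or 0 for a leaf
>>>     l, r  *expr
>>>     value float64
>>>     ref   *float64
>>> }
>>>
>>> // fold collapses every subtree whose leaves are all constants.
>>> func fold(e *expr) *expr {
>>>     if e == nil || e.op == 0 {
>>>         return e
>>>     }
>>>     e.l, e.r = fold(e.l), fold(e.r)
>>>     if e.l.op != 0 || e.l.ref != nil || e.r.op != 0 || e.r.ref != nil {
>>>         return e // at least one side is not a constant
>>>     }
>>>     v := 0.0
>>>     switch e.op {
>>>     case '+':
>>>         v = e.l.value + e.r.value
>>>     case '-':
>>>         v = e.l.value - e.r.value
>>>     case '*':
>>>         v = e.l.value * e.r.value
>>>     case '/':
>>>         v = e.l.value / e.r.value
>>>     }
>>>     return &expr{value: v}
>>> }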
>>>
>>>  
>>>
>>>  
>>>
>>> This brings the result to:
>>>
>>>  
>>>
>>> interpreter: 17.001ms
>>>
>>> native: 7.0004ms
>>>
>>>  
>>>
>>> Which is approximately the best I would expect from an interpreter 
>>> without JIT (and not computing multiple time-points at a time).
>>>
>>>  
>>>
>>> + Egon
>>>
>>>  
>>>
>>> One performance gain I can think of is to implement some pruning through 
>>> the above-mentioned constant folding and other optimisations, but I'd 
>>> rather leave that as a last resort. Another thing that comes to mind is 
>>> that I could return nested closures in some way - meaning that '1+3*x' 
>>> would be, in Go-like pseudocode, add(func() { return one }, mul(func() { 
>>> return three }, func() { return model[x] })), where one/three are values 
>>> bound into the closures when parsing the equation; but that's just off 
>>> the top of my head.
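>>>
>>> In actual Go, that shape would be roughly this (untested, just to 
>>> illustrate the idea):
>>>
>>> type thunk func() float64
>>>
>>> func lit(v float64) thunk { return func() float64 { return v } }
>>>
>>> // series is the variable's time series; t points at the current period
>>> func varAt(series []float64, t *int) thunk {
>>>     return func() float64 { return series[*t] }
>>> }
>>>
>>> func add(x, y thunk) thunk { return func() float64 { return x() + y() } }
>>> func mul(x, y thunk) thunk { return func() float64 { return x() * y() } }
>>>
>>> // '1 + 3*x' would compile once into
>>> //     expr := add(lit(1), mul(lit(3), varAt(xSeries, &period)))
>>> // and each period the model just bumps period and calls expr().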
>>>
>>>  
>>>
>>> I attached a pprof result in the header.
>>>
>>>  
>>>
>>> Thanks again.
>>>
>>>
>>> On Friday, 8 July 2016 15:46:32 UTC+1, Egon wrote:
>>>
>>> On Friday, 8 July 2016 16:25:40 UTC+3, Ondrej wrote:
>>>
>>> Hi all,
>>>
>>> I have a model with variables, let's call them a, b, c, ..., z. These 
>>> are numerical values (time series loaded from a database) and I let the 
>>> user specify their relationships in a JSON, say 'z = 12; x = a + 2/3 + 3*c; 
>>> y = log(12*f) + exp(g)' etc. The syntax is trivial - it's basically just 
>>> algebraic relationships + a few functions (log, log2, log10, exp, 
>>> trigonometrics, ...; all 1:1 mappings to their math package equivalents).
>>>
>>>  
>>>
>>> *Tip: include a working piece of code that you want to make faster; it 
>>> makes it easier for people to see the problems and common issues.*
>>>
>>>  
>>>
>>>  
>>>
>>> Now, I get these relationships in JSON and parse them using go/parser. 
>>> Then I walk the tree once and process it a bit - replacing keywords with 
>>> pointers to my variable stores, replacing all the log/exp/sin calls with 
>>> function pointers, leaving literals as literals, etc. Each node is then a 
>>> struct with a type and the actual contents (sadly a generic interface, 
>>> because the value can be almost anything). The prep stage is now over.
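>>>
>>> To give a flavour, the prep stage has roughly this shape (heavily 
>>> simplified - the real version stores the contents in a generic 
>>> interface, and these names are made up for the example):
>>>
>>> package model
>>>
>>> import (
>>>     "go/ast"
>>>     "go/parser"
>>>     "go/token"
>>>     "math"
>>>     "strconv"
>>> )
>>>
>>> type series []float64 // one model variable, indexed by time period
>>>
>>> var funcs = map[string]func(float64) float64{"log": math.Log, "exp": math.Exp}
>>>
>>> type node struct {
>>>     kind byte                  // 'l'iteral, 'v'ariable, 'f'unction, 'o'perator
>>>     lit  float64               // literal value
>>>     ref  *series               // pointer into the variable store
>>>     fn   func(float64) float64 // log/exp/...
>>>     op   token.Token           // +, -, *, /
>>>     args []*node               // operands / call arguments
>>> }
>>>
>>> func compile(src string, vars map[string]*series) (*node, error) {
>>>     e, err := parser.ParseExpr(src)
>>>     if err != nil {
>>>         return nil, err
>>>     }
>>>     return rewrite(e, vars), nil
>>> }
>>>
>>> func rewrite(e ast.Expr, vars map[string]*series) *node {
>>>     switch e := e.(type) {
>>>     case *ast.BasicLit:
>>>         v, _ := strconv.ParseFloat(e.Value, 64)
>>>         return &node{kind: 'l', lit: v}
>>>     case *ast.Ident:
>>>         return &node{kind: 'v', ref: vars[e.Name]}
>>>     case *ast.ParenExpr:
>>>         return rewrite(e.X, vars)
>>>     case *ast.BinaryExpr:
>>>         return &node{kind: 'o', op: e.Op,
>>>             args: []*node{rewrite(e.X, vars), rewrite(e.Y, vars)}}
>>>     case *ast.CallExpr:
>>>         return &node{kind: 'f', fn: funcs[e.Fun.(*ast.Ident).Name],
>>>             args: []*node{rewrite(e.Args[0], vars)}}
>>>     }
>>>     return nil
>>> }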
>>>
>>>  
>>>
>>> When actually running the model, I loop through years and within each 
>>> year I solve each variable - I walk the tree and evaluate it where needed. 
>>> The only non-trivial action is when I get to a model variable, I need to do 
>>> a bit of lookup (it's a time series, so I need to look up the correct time 
>>> period and other bits). Otherwise it's just literals, operators and 
>>> function calls, all of which is fairly straightforward.
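>>>
>>> Continuing the snippet above, the per-period evaluation is roughly 
>>> (simplified; the real lookup does a bit more than index by t):
>>>
>>> func (n *node) eval(t int) float64 {
>>>     switch n.kind {
>>>     case 'l':
>>>         return n.lit
>>>     case 'v':
>>>         return (*n.ref)[t] // the time-series lookup for period t
>>>     case 'f':
>>>         return n.fn(n.args[0].eval(t))
>>>     default: // binary operator
>>>         x, y := n.args[0].eval(t), n.args[1].eval(t)
>>>         switch n.op {
>>>         case token.ADD:
>>>             return x + y
>>>         case token.SUB:
>>>             return x - y
>>>         case token.MUL:
>>>             return x * y
>>>         default:
>>>             return x / y
>>>         }
>>>     }
>>> }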
>>>
>>>  
>>>
>>> This is all well and good. One of the issues is that it's rather slow. I 
>>> thought it would be the recursive nature (and interface assertions), but 
>>> converting all this into a shunting-yard system didn't improve the 
>>> performance dramatically. I've profiled the thing and removed a few 
>>> hotspots, so my question is not about profiling. I'm after a bit more 
>>> general advice on how to handle these runtime evaluations and whether 
>>> there are better ways of doing so. Essentially some sort of JIT (but Go 
>>> does not have runtime assembly, right?), or maybe converting each 
>>> expression into a closure, or maybe a whole different algorithm, or...?
>>>
>>>  
>>>
>>> Reduce the amount of code and indirection that you need to execute; a few 
>>> basic ideas:
>>>
>>> 1. implement a VM https://play.golang.org/p/dlmZ2lGPY7 (rough sketch 
>>> after this list)
>>>
>>> 2. operate on vectors of variables instead of single values 
>>> https://play.golang.org/p/25MIjIXs0D
>>>
>>> 3. try to do the lookup of all necessary variables before starting to 
>>> compute with them; if possible
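>>>
>>> A stripped-down illustration of (1), untested here - the playground link 
>>> above has my actual take:
>>>
>>> // compile the expression tree once into a flat program for a tiny stack
>>> // VM, so evaluation is a loop over opcodes instead of a recursive walk
>>> type opcode byte
>>>
>>> const (
>>>     opConst opcode = iota // push a constant
>>>     opLoad                // push variable #arg at the current period
>>>     opAdd
>>>     opMul
>>> )
>>>
>>> type instr struct {
>>>     op  opcode
>>>     arg int     // variable index, used by opLoad
>>>     val float64 // constant, used by opConst
>>> }
>>>
>>> // run executes one compiled expression for time period t.
>>> // '1 + 3*c' compiles to: push 1, push 3, load c, mul, add.
>>> func run(prog []instr, vars [][]float64, t int) float64 {
>>>     var stack [32]float64
>>>     sp := 0
>>>     for _, in := range prog {
>>>         switch in.op {
>>>         case opConst:
>>>             stack[sp] = in.val
>>>             sp++
>>>         case opLoad:
>>>             stack[sp] = vars[in.arg][t]
>>>             sp++
>>>         case opAdd:
>>>             stack[sp-2] += stack[sp-1]
>>>             sp--
>>>         case opMul:
>>>             stack[sp-2] *= stack[sp-1]
>>>             sp--
>>>         }
>>>     }
>>>     return stack[0]
>>> }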
>>>
>>>  
>>>
>>> Obviously pprof is your friend. (
>>> https://blog.golang.org/profiling-go-programs)
>>>
>>>  
>>>
>>> + Egon
>>>
