I have been doing a lot of work with essentially declarative, nominally 
complex data types built up through several layers of encapsulation.

I had seen, here and there, examples of this 'fluent' method-based pipeline 
chaining, where each method returns its receiver so that multiple distinct 
functions on the type can be invoked in sequence. Here is a gist about it 
with an example:

https://git.parallelcoin.io/loki/gists/src/commit/d6cabfd0933d0cda731217c371e0295db331ebb1/tailrecursion-generic.md
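
To make the shape concrete, here is a minimal, runnable sketch of the 
pattern as I read it (Pipeline and Step are my own placeholder names, not 
taken from the gist): each method returns its receiver and short-circuits 
once an error has occurred, so distinct steps chain into one expression.

    package main

    import "fmt"

    // Pipeline carries a value and an error through a chain of steps.
    type Pipeline struct {
        data []byte
        err  error
    }

    // Step applies f unless an earlier step has already failed, then
    // returns the receiver so further calls can be chained.
    func (p *Pipeline) Step(f func([]byte) ([]byte, error)) *Pipeline {
        if p.err != nil {
            return p
        }
        p.data, p.err = f(p.data)
        return p
    }

    func main() {
        p := (&Pipeline{data: []byte("abc")}).
            Step(func(b []byte) ([]byte, error) { return append(b, '!'), nil }).
            Step(func(b []byte) ([]byte, error) { return b, nil })
        fmt.Println(string(p.data), p.err) // abc! <nil>
    }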

It occurred to me that if one used this to construct complex graphs of 
processing, the CPU's branch predictor would probably be on fire the whole 
time: even though these chains may be constructed arbitrarily from random 
inputs, they will have a substantial amount of scope in common, so...

It might then be possible to amplify this effect further by letting the 
runtime lay the code out ahead of execution, a bit like Magneto pulling 
those metal blocks up as he walks forward.

I don't know how verbose it gets; at first blush I am generally not fond of 
closure syntax in Go, but it seems to me that this dynamic construction 
pattern would be very good for speeding up complex processing with 
significant variance in sequence.

For example, when playing back a journal into a database, a scouting thread 
could pre-process some of the simple but salient facts about the segments 
of the journal and construct, ahead of time, cache-locality-optimised code 
and data segmentation that will then run with 100% confidence, based on the 
structure and composition of the data. Something like the sketch below.
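
A hypothetical sketch of that shape (segment, scout and apply are invented 
names): a goroutine runs ahead of the executor and emits one pre-bound 
closure per journal segment, so the hot loop only drains a prepared plan, 
in order:

    // segment describes one slice of the journal.
    type segment struct{ offset, length int }

    // scout runs ahead of the replayer, binding each segment into a
    // closure and queueing it; the replay loop just drains the plan.
    func scout(segs []segment, apply func(segment)) <-chan func() {
        plan := make(chan func(), len(segs))
        go func() {
            for _, s := range segs {
                s := s // copy so each closure captures its own segment
                plan <- func() { apply(s) }
            }
            close(plan)
        }()
        return plan
    }

The replay side is then just: for step := range scout(segs, apply) { step() }.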

At the moment I am just using it to chain validators together (a minimal 
sketch of that follows after this paragraph). But, for example, generating 
a graph from a blockchain ledger in order to perform validation could have 
a front-running pass that first generates the join/split paths of tokens 
intersecting with accounts. This graph forms the map of how to process the 
data, and for parallelisation such a graph would allow the replay 
processing to be split automatically to make optimal use of cores, caches 
and the memory bus. It could even farm the work out across the network, 
with all of the cluster nodes processing their mostly isolated segments, 
then sharing their database tables directly, and voila.
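
For the validator chaining, a minimal sketch of what I mean (Validator and 
Chain are placeholder names):

    // Validator checks one property of some enclosed state.
    type Validator func() error

    // Chain composes validators into a single one that stops at the
    // first failure.
    func Chain(vs ...Validator) Validator {
        return func() error {
            for _, v := range vs {
                if err := v(); err != nil {
                    return err
                }
            }
            return nil
        }
    }

Because each Validator is a closure, the chain can be assembled dynamically 
from whatever the front-running pass discovers.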

Such processing is naturally easier to construct using recursion, and with 
composition of closures in this way it should also be quite efficient. 
Although with the current Go 1.10+ syntax it is a little clumsy, each part 
is small, and that helps a lot.
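
For instance, a recursive compose (just a sketch, monomorphic since there 
are no generics yet) folds a list of small stages into one closure:

    // Compose recursively folds stages into a single closure:
    // Compose(f, g, h)(x) == h(g(f(x))).
    func Compose(fs ...func(int) int) func(int) int {
        if len(fs) == 0 {
            return func(x int) int { return x }
        }
        rest := Compose(fs[1:]...)
        return func(x int) int { return rest(fs[0](x)) }
    }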

When I am making big changes to code, I have this sensation of walking on 
unstable ground, because sometimes I get a long way into something and 
discover I passed the correct route some way back, didn't commit before it, 
and now have to start all over again.

Small pieces less than a screenful at a time are very manageable. Just 
gotta get a handle on that vertigo :)
