It's not just Fibonacci numbers. This is equivalent to a general method for computing continued fraction convergents in linear time, using matrix multiplication. There's a nice explanation of why it works here:
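For readers outside J, the same linear-time convergent computation can be sketched in Python (my own helper names; a minimal illustration of the matrix-product idea, not code from the thread):

```python
from fractions import Fraction

def matmul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def convergents(terms):
    """Convergents of the continued fraction [a0; a1, a2, ...]:
    each term a contributes the matrix [[a, 1], [1, 0]], and the
    running product's first column is (numerator, denominator)."""
    P = [[1, 0], [0, 1]]          # identity: empty product
    out = []
    for a in terms:
        P = matmul(P, [[a, 1], [1, 0]])
        out.append(Fraction(P[0][0], P[1][0]))
    return out

# phi's continued fraction is 1 1 1 ...
print(convergents([1] * 10)[-1])   # 89/55, i.e. 1.61818...
```

One matrix product per term gives linear work overall, matching the claim above.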
https://perl.plover.com/classes/cftalk/INFO/gosper.txt

And a J version, with phi's continued fraction 1 1 1 ...:

   %&:{./"_1 +/ .*/\ (1 0,:~,&1)"0 ]10$1
1 2 1.5 1.66667 1.6 1.625 1.61538 1.61905 1.61765 1.61818

Here's the continued fraction sequence for e:

   2,1}.,1 1&,"0]2*1+i.3
2 1 2 1 1 4 1 1 6

   %&:{./"_1 +/ .*/\ (1 0,:~,&1)"0 ]2,1}.,1 1&,"0]2*1+i.3
2 3 2.66667 2.75 2.71429 2.71875 2.71795 2.71831 2.71828

My timings show J taking quadratic time for the scan, so this formulation is pretty slow, but it shows the principle.

Marshall

On Sun, Jan 08, 2023 at 07:05:46PM -0800, Elijah Stone wrote:
> Can we generate phi, the golden ratio, in parallel?
>
> Of course we can! Follows is an exposition of a classic method for doing
> it, which may be of interest; I have not seen it satisfactorily described
> elsewhere.
>
> The classic method for generating phi in J uses a continued fraction:
>
>    (+%)/10#1
> 1.61818
>
> (It can be equivalently spelt (1+%)^:n]1.)
>
> Using rational numbers clarifies slightly:
>
>    (+%)/10#1x
> 89r55
>
> Unsurprisingly, it's generating ratios of successive fibonacci numbers.
> Can it be parallelised? The operation used for reduction is not
> associative:
>
>    ((1 +% 1) +% 1) -: (1 +% (1 +% 1))
> 0
>
> So it's not obvious how we would parallelise it. Taking a step back, it
> performs repeated division, where we don't know the divisor a priori,
> instead generating it recursively, so attacking it thus seems hopeless.
>
> Here's another method, based more directly on fibonacci numbers, which is
> more promising:
>
>    (u=. {:,+/)^:9]1 1
> 55 89
>
> Here we generate fibonacci numbers via a recurrence relation, using two
> successive terms to generate the next.
>
> At first glance, this seems just as hopelessly sequential as the first
> solution, but the use of ^: is a tell. u^:n is the same as u@:u@:u@: ...
> n times. And _@:_ is associative!
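As a cross-check outside J (my own sketch, not part of the original post): the recurrence u is just (a, b) -> (b, a+b), and nine applications starting from (1, 1) reproduce the J result:

```python
def u(ab):
    """One step of the fibonacci recurrence: (a, b) -> (b, a + b),
    mirroring the J verb u=. {:,+/ ."""
    a, b = ab
    return (b, a + b)

state = (1, 1)
for _ in range(9):   # u^:9 in J
    state = u(state)
print(state)   # (55, 89), matching (u=. {:,+/)^:9]1 1
```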
> So if we can somehow express the operation of u as 'data', and ditto the
> composition of any number of us, then we can create a big array of n
> copies of u, and then reduce @: over it (in parallel), and finally apply
> the reduced u^:n to our y.
>
> Of course, this all depends on finding an efficient way of representing
> compositions of u. If we can't do that, then we'll waste a bunch of
> parallel work constructing compositions, only to _still_ need linear
> sequential work at the end, once we apply our composed u.
>
> Let's look at the operation of u _symbolically_:
>
>    u a,b      is  b,a+b
>    u u a,b    is  (a+b),(a+2*b)
>    u u u a,b  is  (a+2*b),((2*a)+(3*b))
>
> In other words, u gives two results, each of which is a linear combination
> of its two inputs. And we have a convenient way of representing such
> functions: matrices! Hence obtains an alternate implementation of u, as a
> matrix product:
>
>    u2=. (0 1,:1 1)&(+/ . *)
>    u2^:9]1 1  NB. same result as u
> 55 89
>
> Hence, we can generate an efficient representation of u^:n in parallel,
> with only logarithmic span, for matrix multiplication is isomorphic to
> composition (and it is associative). The resultant function always takes
> the form of a 2x2 matrix, so we have only a constant amount of additional
> work to do at the end.
>
>    (+/ .*/9#,:0 1,:1 1) +/ .* 1 1
> 55 89
>
> The astute may notice that this is wasted parallelism. We use a parallel
> reduction to find +/ .*/n#,:0 1,:1 1 in logarithmic time, but this is in
> fact just the nth power of the matrix 0 1,:1 1, which can be calculated
> _sequentially_ in logarithmic time, by repeated squaring.
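The repeated-squaring alternative mentioned above can be sketched in Python (an illustrative translation with my own names, not code from the post):

```python
def matmul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(M, n):
    """n-th power of 2x2 matrix M by repeated squaring:
    O(log n) multiplications, done sequentially."""
    R = [[1, 0], [0, 1]]          # identity
    while n:
        if n & 1:
            R = matmul(R, M)
        M = matmul(M, M)
        n >>= 1
    return R

M9 = matpow([[0, 1], [1, 1]], 9)
# M9 applied to the vector (1, 1) gives the same result as the
# parallel reduction +/ .*/9#,:0 1,:1 1 applied to 1 1
print(M9[0][0] + M9[0][1], M9[1][0] + M9[1][1])   # 55 89
```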
> But this method has a key advantage over that: if we want the entire
> fibonacci sequence, we can generate it, still with logarithmic span and
> linear work, by simply replacing our reduction with a scan:
>
>    (+/ .*/\9#,:0 1,:1 1) +/ .* 1 1
>  1  2
>  2  3
>  3  5
>  5  8
>  8 13
> 13 21
> 21 34
> 34 55
> 55 89
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
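For comparison outside J, the scan corresponds to a prefix product of matrices. A minimal Python sketch (my own names; itertools.accumulate here is sequential, standing in for a parallel prefix scan, which would achieve the logarithmic span claimed above):

```python
from itertools import accumulate

def matmul(A, B):
    """2x2 integer matrix product, on tuples so results are hashable."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2))
                 for i in range(2))

M = ((0, 1), (1, 1))
# prefix products M, M^2, ..., M^9 -- the role of the J scan \
for P in accumulate([M] * 9, matmul):
    a, b = P[0][0] + P[0][1], P[1][0] + P[1][1]   # P applied to (1, 1)
    print(a, b)   # successive fibonacci pairs, 1 2 up through 55 89
```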