Actually, thinking about this, a path length of 100 is probably
unreasonable to measure precisely.

Here's an example case:

graph=: 0=?.2000 2000$1000
   $I.1 graph wrap 0
1 1
   $I.2 graph wrap 0
4 2
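
For reference, 0=?.2000 2000$1000 gives a 2000 by 2000 boolean table
where each entry is 1 with probability 1 in 1000 (?. rolls with a
fixed seed), so the expected number of edges is

   2000*2000%1000
4000

which works out to an average out-degree of about 2.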

I'm experimenting here, reissuing lines, and it's inconvenient to
edit in the middle of a line, so:

   $I.(] graph wrap 0:)2
4 2
   $I.(] graph wrap 0:)3
16 3
   $I.(] graph wrap 0:)4
69 4
   $I.(] graph wrap 0:)5
317 5
   $I.(] graph wrap 0:)6
1548 6

I can roughly approximate the size of the data at step N with
4^(N-1) - the actual growth rate looks slightly higher than that.
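
The step-to-step ratios bear that out:

   2 %~/\ 1 4 16 69 317 1548
4 4 4.3125 4.5942 4.88328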

So,
   4^100x
1606938044258990275541962092341162602522202993782792835301376

I do not think I have enough memory to represent a result of that size.
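
To put a number on that: if each result row at that point is 100
eight-byte integers (which is what the shapes above suggest), 4^100
of them come to roughly 10^63 bytes:

   800 * 4^100   NB. bytes: 4^100 rows of 100 integers, 8 bytes each
1.28555e63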

But, wait, at some point the graph representation will run out of
possibilities to represent (there are only 2^2000 distinct subsets of
the 2000 nodes), which will restrict the growth of the data set. So,
what does that limit look like?

   2^2000x
1148130695274254524232833201177681984022317702088695200477642736825766261392370313856659486316506269918445964638987462773447118960863055331425931356166653185391299891453122800006887791482400448714289269900634862447816154636463883639473170260404663539709049...

... oh ...
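
For scale: that ceiling has 603 digits, versus 61 for 4^100, so it
kicks in far too late to keep the intermediate results manageable.

   (#": 4^100x) , (#": 2^2000x)
61 603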

New implementation:

   ((+/%#)+/graph)^100
3.26311e30

That's probably accurate to within a few powers of 10...
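
For the record, +/graph totals each column and (+/%#) averages those
totals, so that expression is just the average degree raised to the
100th power. With a 1 in 1000 density over 2000 nodes the average
degree is about 2, so a round-number check is

   2^100
1.26765e30

which is in the same ballpark.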

-- 
Raul