I first explored the possibility of a lookup table to speed up calculations, 
but that approach would probably give wrong results.

I am interested in calculating all possibilities to see which one is truly 
the best, and I am also interested in speeding up the calculations to be able 
to handle larger models.

Many other possibilities exist for further exploration:

1. One recursive function, trying to split up the work so as to recycle some of it.
2. Perhaps multiple recursive functions to include different functionality at 
different "depths".
3. Other tree-like structures to try and save some computations by recycling 
results from previous visits across nodes.
4. Manually incrementing indexes (though this would probably require storing 
results in some form or another; perhaps a stack-based approach might work... 
pushing/popping as an index returns to a certain value, perhaps a stack per 
index might work, or something).
5. Detecting when an index changes and then reloading previous computations.
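Idea 1 above (recycling work inside one recursive function) can be sketched with memoization. This is only a toy model of my own invention, not the real game rules: sides alternate attacks, and each attack deals 1 or 2 damage to the first living enemy, 50/50. The point is just that identical sub-states get computed once and then recycled:

```python
from functools import lru_cache

# Hypothetical sketch: memoize a recursive battle calculation so that
# identical sub-states are computed only once instead of re-derived.
# Toy rules (my assumption): sides alternate attacks, each attack deals
# 1 or 2 damage (equally likely) to the first living enemy unit.

@lru_cache(maxsize=None)
def win_chance(attacker, defender):
    """Chance that `attacker` (a tuple of unit healths) wins, moving first."""
    if not any(h > 0 for h in defender):
        return 1.0  # defender wiped out: attacker has won
    if not any(h > 0 for h in attacker):
        return 0.0  # attacker wiped out: attacker has lost
    target = next(i for i, h in enumerate(defender) if h > 0)
    total = 0.0
    for damage in (1, 2):  # the two equally likely attack outcomes
        hit = list(defender)
        hit[target] = max(0, hit[target] - damage)
        # After our attack the defender moves, so flip the perspective.
        total += 0.5 * (1.0 - win_chance(tuple(hit), attacker))
    return total

print(win_chance((3, 3), (3, 3)))  # moving first should give > 0.5
```

Since total health strictly decreases on every attack, the recursion always terminates, and the cache is what makes the "recycling" happen.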

I still believe that it should be possible somehow to save some computations.

Perhaps I am fooling myself! Not sure yet! ;) :)

6. Healths will need to be reset, I think; however, another possibility is 
copying & pasting healths to new nodes to compute them further.

Sort of like a propagation algorithm... spreading already calculated results to 
other nodes.
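That copy&paste-the-healths idea might look something like this minimal sketch (the `Node` class and dict-of-healths representation are my assumptions): each child copies the parent's already-computed state and applies only the one new attack, instead of recomputing from the root.

```python
import copy

# Hypothetical sketch of idea 6: instead of resetting healths and
# recomputing from scratch, copy the already-computed state into each
# child node and apply only the delta (one attack) there.

class Node:
    def __init__(self, healths, parent=None):
        self.healths = healths          # dict of unit name -> health
        self.parent = parent
        self.children = []

    def expand(self, unit, damage):
        # Propagate: copy this node's state, then apply only the new attack.
        child_healths = copy.deepcopy(self.healths)
        child_healths[unit] = max(0, child_healths[unit] - damage)
        child = Node(child_healths, parent=self)
        self.children.append(child)
        return child

root = Node({"a": 10, "b": 10})
child = root.expand("b", 3)
print(child.healths)  # {'a': 10, 'b': 7}; root stays untouched
```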

One problem with all that node thinking is that the system will run out of 
memory, which should be fine: memory can be pre-allocated to speed stuff up, 
and once it runs out, it will have to fall back to full calculations.
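One way to sketch that "pre-allocate, then fall back" behaviour is a bounded memo table (the decorator and the `total_health` placeholder are illustrative assumptions, not the real sub-computation): once the table is full it simply stops storing, and later misses are recomputed in full — slower, but never out of memory.

```python
# Minimal sketch: a bounded memo table; once full it stops storing new
# entries, and later cache misses fall back to a full recomputation.

MEMO_LIMIT = 100_000  # arbitrary bound, standing in for pre-allocation
memo = {}

def cached(fn):
    def wrapper(state):
        if state in memo:
            return memo[state]
        result = fn(state)
        if len(memo) < MEMO_LIMIT:  # only store while the budget lasts
            memo[state] = result
        return result
    return wrapper

@cached
def total_health(healths):  # hypothetical placeholder sub-computation
    return sum(healths)

print(total_health((3, 2, 5)))  # → 10
```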

I wrote about this before but cancelled posting because it was kind of 
messy... but now it makes a bit more sense, and this text is a bit clearer.

7. Maybe CUDA or some other SIMD solution. Still not sure if CUDA can detect 
that some instructions/data have already been calculated... hmm, I would 
guess not, but not sure.
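As far as I know, a GPU/SIMD path won't detect repeated work for you — there is no automatic memoization; the win is brute force, evaluating many independent scenarios in one vectorized pass. A NumPy version shows the shape of the idea on the CPU (the hit chance and the "3 of 5 hits wins" rule are made-up assumptions); CUDA, e.g. via CuPy, would look almost identical:

```python
import numpy as np

# Hedged sketch of idea 7: vectorize over many independent battles at
# once instead of trying to recycle work between them.

rng = np.random.default_rng(0)
n = 1_000_000                      # simulate a million battles in one pass
hit_chance = 0.6                   # assumed per-attack hit chance
attacks = 5                        # assumed attacks per battle

hits = rng.random((n, attacks)) < hit_chance
wins = hits.sum(axis=1) >= 3       # toy rule: 3+ hits out of 5 wins
print(wins.mean())                 # ≈ P(at least 3 of 5 hits) ≈ 0.683
```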

It's kinda fun trying out all these possibilities to see which ones work... 
though I am also becoming a bit curious about what the actual outcome would be.

For now my heuristic for a "good enough" solution would be "each object 
attacks the object with the best victory chance".

This seems a bit too easy/cheesy a heuristic, and I want to know what is 
really the best way... is there perhaps a team strategy/combined attack 
effort that might work better? ;)
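The cheesy heuristic is at least trivial to write down. Here is a sketch with made-up unit names and a hypothetical precomputed chance table: each attacker greedily picks its own best target, which is exactly what a team strategy would improve on by scoring whole assignments instead.

```python
# Sketch of the "good enough" heuristic: each attacker independently
# picks the target with its best victory chance. The table below is a
# hypothetical placeholder for precomputed (attacker, target) chances.

victory_chance = {
    ("knight", "archer"): 0.8,
    ("knight", "mage"): 0.6,
    ("rogue", "archer"): 0.5,
    ("rogue", "mage"): 0.9,
}

def greedy_targets(attackers, targets):
    # Greedy: no coordination between attackers, just local maxima.
    return {
        a: max(targets, key=lambda t: victory_chance[(a, t)])
        for a in attackers
    }

print(greedy_targets(["knight", "rogue"], ["archer", "mage"]))
# {'knight': 'archer', 'rogue': 'mage'}
```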

I would also like to be able to change the chances in the future, or perhaps 
expand the model a little, so speeding it up would also be nice.

For now multi-threading is doable... and calculating while idle seems nice...
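The calculate-while-idle idea can be sketched with a daemon worker thread chewing through pending sub-calculations in the background while the main thread (a UI or game loop, say) stays responsive. The `sum(state)` body is a stand-in for a real calculation:

```python
import threading
import queue

# Hedged sketch of "calculate while idle": a background worker drains a
# task queue; the main thread can keep doing other work meanwhile.

tasks = queue.Queue()
results = {}
lock = threading.Lock()

def worker():
    while True:
        state = tasks.get()
        if state is None:          # sentinel: shut down the worker
            break
        value = sum(state)         # placeholder for a real calculation
        with lock:
            results[state] = value
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

for state in [(1, 2), (3, 4), (5, 6)]:
    tasks.put(state)
tasks.join()                       # a real app would keep running instead
print(results)
```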

Though I am also kinda curious about CUDA... these last ideas are probably 
the easiest to implement and will give the most satisfaction.

One problem with that, though: once I have the problem solved/calculated, I 
might not be interested in trying out the harder possibilities.

So maybe it's best to try the hardest first... maybe I will learn something 
amusing, and even something useful, from it.

More failures perhaps, but in the end, success will come! LOL.
-- 
https://mail.python.org/mailman/listinfo/python-list
