On Mar 3, 2004, at 6:10 PM, Yan King Yin wrote:
Secondly, fusion of two trained AIs may be a very costly procedure. I'm beginning to study this topic and it seems that the complexity is generally polynomial with small exponents such as O(n^3) but with EXTREMELY large n's like 1 billion. I'm mainly studying ANNs, maybe other architectures are superior in this respect...
A true merge is a tedious problem from an implementation standpoint, but it can have a time complexity of O(n) if approached in an "optimal" fashion. A much simpler mechanism for merging networks has worse complexity, something like O(n^(log n)) -- don't hold me to that -- but it is so much less involved to implement that I would recommend it over the fast algorithm despite the super-polynomial cost. Unless you are doing true hard merges all the time, you aren't going to save enough to make the harder implementation worth it. On something like an Opteron system, a billion-node merge may be coffee-break time, but it isn't a show-stopper unless you have to do several an hour. And if the network is remotely efficient, you can do a LOT with a billion nodes. But as you surmised, a lot of this may depend on architecture specifics, e.g. what the network is encoding and how.
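As a rough illustration of what a hard merge involves (a toy sketch only -- the greedy cosine matching and weight averaging below are my own assumptions for illustration, not any particular merge algorithm from the literature):

```python
import numpy as np

def hard_merge(w_a: np.ndarray, w_b: np.ndarray) -> np.ndarray:
    """Toy "hard merge" of two networks given as n x n weight matrices.

    Nodes of B are paired with nodes of A greedily by cosine similarity
    of their outgoing weight vectors, B is relabeled to match, and the
    aligned matrices are averaged. The pairwise similarity table alone
    is O(n^2); doing the matching optimally instead of greedily is
    where the higher-order costs come from.
    """
    n = w_a.shape[0]
    # Cosine similarity between every node of A and every node of B.
    norm_a = w_a / (np.linalg.norm(w_a, axis=1, keepdims=True) + 1e-12)
    norm_b = w_b / (np.linalg.norm(w_b, axis=1, keepdims=True) + 1e-12)
    sim = norm_a @ norm_b.T

    # Greedy one-to-one matching: best still-unused partner in B.
    perm = np.full(n, -1)
    used = np.zeros(n, dtype=bool)
    for i in range(n):
        order = np.argsort(-sim[i])
        j = next(k for k in order if not used[k])
        used[j] = True
        perm[i] = j

    # Relabel B's nodes by the matching (rows and columns together),
    # then average the two aligned matrices.
    w_b_aligned = w_b[np.ix_(perm, perm)]
    return (w_a + w_b_aligned) / 2.0
```

Even this crude version makes the scaling concern visible: the similarity table is already n^2 entries, which at a billion nodes is why implementation details dominate.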
For many purposes, a better (and vastly faster) approach is to grow a network fabric between the two networks in a kind of "soft" merge. It isn't quite as smart as a true merge in theory, but if the two networks have been trained on somewhat orthogonal classes of information, it may have a better real-world S/N ratio than a true merge.
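The soft-merge idea can be sketched as follows (an illustrative toy, with my own assumptions throughout: the two "trained" networks are stand-in random feature maps, and the connecting fabric is a single linear layer fit by least squares rather than grown incrementally):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "trained" networks, frozen. Random single-layer feature maps
# stand in for nets trained on orthogonal classes of information.
W_a = rng.standard_normal((16, 8))
W_b = rng.standard_normal((16, 8))

def net_a(x):
    """Frozen network A: fixed nonlinear feature map."""
    return np.tanh(x @ W_a.T)

def net_b(x):
    """Frozen network B: fixed nonlinear feature map."""
    return np.tanh(x @ W_b.T)

# The "fabric": a new trainable layer spanning both networks'
# outputs, while A and B themselves are left untouched. Here it
# is fit in one shot with least squares on a toy regression task.
X = rng.standard_normal((200, 8))
y = np.sin(X.sum(axis=1))            # toy target
H = np.hstack([net_a(X), net_b(X)])  # joint representation
fabric, *_ = np.linalg.lstsq(H, y, rcond=None)

def merged(x):
    """The soft-merged system: frozen A and B plus the fabric."""
    return np.hstack([net_a(x), net_b(x)]) @ fabric
```

The cost here is just training the fabric, not reconciling the two networks node by node -- which is why the soft merge is vastly faster, and why it can only be as good as what the fabric can express across the two fixed representations.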
j. andrew rogers
