The emphasis on superstar coders may contradict the general intuition that
we are working with, or within, a new and nuanced engineering paradigm. I
could occasionally use a superstar or two to write, in a week, correct code
that powers an experiment of mine which, left to my own devices, would take
me a couple of months and perhaps be of lower engineering quality. But I
seriously doubt one could constantly use a powerhouse software engineer for
anything other than the things that distinguish software engineers: defining
a few classes, deciding on components that fit together without bottlenecks,
serializing and deserializing remote objects correctly, and solving a couple
of graph problems here and there.

The most counter-intuitive OpenCog issue for me is the use of the link
parser. Perhaps the team does not see it in those terms, but it is almost
the opposite of what is needed: an utterance generator/predictor.
Generally, parsing is "obviously" great territory for a GI to flex its
muscles and attempt to solve the problem.

The only sin I am willing to admit to is that my thinking was not so
distributed in the past; I was expecting a lot of good things to come
from the kind of single-box processing power we have available these days.
I still believe we could have much more miraculous code running even
on a 386 with 100MB of storage, but generally speaking the architectures
and simulations I find most relevant these days are million-dollar ones.
And yes, if I fail with millions I will say the problem was that you didn't
give me billions, just letting you know :)

AT



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com