Do you mean, instead of feeding the net data to learn from, asking it to generate 
new output data/solutions?

Doing so is how we train ourselves mentally. We can step forward, chaining: we 
store the output data AND update the model. Once you know enough, you can stop 
eating data and start fabricating deeper discoveries yourself. It's better to 
keep researching along the way, actually.


As for w2v, the net has every word in it: cat, hi, dog, home, run. Based on 
context, you link words to each other with probabilities of how similar they 
are. You've just started a Heterarchy. These links are updatable weights. Once 
you're done digesting your data (e.g. you ran out of data), you can keep 
building your Heterarchy. You say: OK, cat=horse/zebra/etc. and 
dog=horse/zebra/etc., so I'm going to make a link between them, or make that 
link stronger. This is Online Learning. This web is meant for translation 
tasks, even entailment, because entailing words 'look' like similar words, and 
you'll generate either 'dog ran to' or 'dog cat horse', which are both 
sentences. The latter is just a list of items.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta664aad057469d5c-M5f3fd9534b0af29351f59add