Unless I'm missing something, all of these are involved in ML research, no? 
I mostly see ML more as a tool than as an end in itself, so I don't want to 
invest myself in ML research so much as pick up the tried and tested 
approaches from it. If anything, I fear that excessive focus on 
approximating humans will yield us an approximation so human-like that it 
would become near-impossible to determine whether we have really achieved 
AGI or just an approximation. I say this with respect to GPT and its 
variants; I'm unaware how true that holds for other ML approaches.

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/e8403048-8d44-4b66-83ae-ef1113ea47d5o%40googlegroups.com.