The most interesting thing to me now is the ~RAG and customer-data-with-ontology type roles potentially available with MeTTa. I think everybody realizes that reasoning with LLMs is a bit sketchy at present. It often works, but you are left wondering what will happen next time. Studies suggest there is some combination of memorized patterns and some genuine generalization, but it is still an open question (at least according to Melanie Mitchell's analysis of recent studies) how far you can get with LLM reasoning.
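To make the contrast concrete, here is a toy sketch (in Python, not MeTTa) of the kind of ontology-backed query a symbolic layer provides: a tiny is-a hierarchy with transitive lookup, where every answer is derived by an explicit, checkable chain rather than by pattern matching. The class and method names are purely illustrative, not any actual MeTTa or Hyperon API.

```python
# Illustrative only: a toy symbolic store standing in for ontology-backed
# reasoning of the kind MeTTa targets. Names here are hypothetical.

from collections import defaultdict


class ToyOntology:
    """Minimal is-a hierarchy with transitive queries."""

    def __init__(self):
        self.parents = defaultdict(set)

    def add_isa(self, child, parent):
        self.parents[child].add(parent)

    def isa(self, child, ancestor):
        # Deterministic transitive closure: each step is an explicit
        # edge in the ontology, so the derivation can be audited.
        stack, seen = [child], set()
        while stack:
            node = stack.pop()
            if node == ancestor:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.parents[node])
        return False


onto = ToyOntology()
onto.add_isa("poodle", "dog")
onto.add_isa("dog", "mammal")
onto.add_isa("mammal", "animal")
print(onto.isa("poodle", "animal"))  # True, via poodle -> dog -> mammal -> animal
print(onto.isa("poodle", "bird"))    # False: no chain exists
```

The point of the sketch: the same query always returns the same answer for the same facts, which is exactly the repeatability that is in doubt when an LLM is doing the reasoning.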
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T3ced54aaba4f0969-M2e182a4e089c96eb0f9c52cc
Delivery options: https://agi.topicbox.com/groups/agi/subscription
