As others have said, AI today merely replies based upon the data it was trained on, aided by a very fast and very broad real-time online search capability.  That search capability is one reason I use ChatGPT ... its replies are faster and broader than Google's, for me at least.

Sometimes when I ask ChatGPT a question it gives me an inaccurate response and I need to correct it to get the proper result.  ChatGPT remembers that (for me) and usually doesn't make the same mistake again if I ask it a related question, but AI currently doesn't aggregate what it learns from multiple users.  In other words, the feedback it gets from its interactions with one batch of users is not used to "teach" it how to respond more accurately to a similar question from other users.  At least part of that limitation is by the developers' choice, for reasons such as user privacy concerns.
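As a rough illustration of that separation, here's a toy Python sketch (every name in it is invented; this is not anyone's actual design).  The per-user memory lives outside the shared model, so my corrections never touch the weights other users query:

# Toy illustration: per-user "memory" sits outside the shared model,
# so one user's corrections never change the weights everyone queries.
from dataclasses import dataclass, field

@dataclass
class SharedModel:
    """Frozen weights: identical for every user until the next training run."""
    version: str = "2025-06"

    def answer(self, question: str, memory: dict) -> str:
        # A per-user memory can override the stock answer for that user only.
        if question in memory:
            return memory[question]
        return f"[{self.version} stock answer to: {question!r}]"

@dataclass
class UserSession:
    model: SharedModel
    memory: dict = field(default_factory=dict)  # private to this user

    def correct(self, question: str, better_answer: str) -> None:
        self.memory[question] = better_answer   # stored per user, not globally

    def ask(self, question: str) -> str:
        return self.model.answer(question, self.memory)

model = SharedModel()
me, someone_else = UserSession(model), UserSession(model)

me.correct("Q1", "the corrected answer I taught it")
print(me.ask("Q1"))            # reflects my correction
print(someone_else.ask("Q1"))  # still the stock answer: nothing aggregated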

However ... that is not necessarily how it will be in the future. Humans learn from others, not just their own perceptions and observations of the world around them ... i.e., the raw "data".  AI could do the same ... and then who knows what we will get.  So I asked ChatGPT this question:

"How long do you think it will be before AI is able to aggregate what it learns from a broad spectrum of interactions with multiple users?"  I got the response below.  It's a cut&paste so hopefully it makes it through the reflector.

Dave   AB7E

p.s.  The point about "drift" is a real issue I hadn't considered.  Pretty interesting.


     *Technically:* we’re already there (in a limited way)

OpenAI, Google, Anthropic, etc., already do something like this today — but not by letting the model itself remember. Instead:

 * They *collect and anonymize* user interactions.

 * Those logs feed into *future training runs or fine-tunes* of the model.

 * The new model you get months later has improved capabilities based on /aggregate/ patterns, but not on any specific identifiable conversation.

So in effect, “learning from a broad spectrum of interactions” already happens; it’s just *offline* and at periodic update intervals, not continuous and in real time.
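As a rough sketch of what that offline loop amounts to (function names here are hypothetical; real vendor pipelines are vastly more involved):

# Hypothetical offline aggregation loop: anonymize logs, batch them,
# fine-tune months apart. No single conversation touches the live model.
import re

def anonymize(text: str) -> str:
    """Strip obvious identifiers before a log can enter a training batch."""
    text = re.sub(r"[\w.+-]+@[\w.-]+", "<EMAIL>", text)                 # email addresses
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "<PHONE>", text)  # US phone numbers
    return text

def build_batch(raw_logs: list[str]) -> list[str]:
    """Aggregate many users' interactions into one anonymized batch."""
    return [anonymize(log) for log in raw_logs if log.strip()]

def periodic_finetune(model_version: str, batch: list[str]) -> str:
    """Stand-in for a months-apart fine-tuning run on aggregate data."""
    print(f"fine-tuning {model_version} on {len(batch)} anonymized interactions")
    return model_version + ".next"

logs = ["Reach me at [email protected] re: antenna tuners", "What is SWR?"]
new_version = periodic_finetune("model-v1", build_batch(logs))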

------------------------------------------------------------------------


     *What isn’t here yet:* true “online lifelong learning”

That would mean the model you’re chatting with today is directly shaped by everyone’s conversations from yesterday. That’s technically harder (you risk drift, bias, privacy leaks), and it’s a regulatory minefield.

To do it safely you'd need (a rough sketch follows the list):

 * Very robust anonymization and filtering.

 * Real-time evaluation for harmful or biased shifts.

 * Clear user consent and opt-out mechanisms.
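As a back-of-the-envelope illustration, those three requirements might reduce to a gate like this before any live weight update is applied (every name and threshold below is invented):

# Hypothetical safety gate an online-learning system might run before
# applying a weight update derived from live user traffic.

def looks_anonymized(text: str) -> bool:
    # Placeholder: a real system would use a trained PII detector here.
    return "@" not in text

def passes_safety_gate(update_batch: list[str],
                       user_consented: bool,
                       drift_score: float,   # from real-time evaluation
                       bias_score: float,
                       limit: float = 0.05) -> bool:
    """Apply the three requirements above, in order; reject on any failure."""
    if not all(looks_anonymized(x) for x in update_batch):  # 1. anonymization/filtering
        return False
    if drift_score > limit or bias_score > limit:           # 2. drift/bias evaluation
        return False
    if not user_consented:                                  # 3. consent and opt-out
        return False
    return True

# Example: an update sourced from a consenting user, within drift limits.
print(passes_safety_gate(["How do I tune a 40 m dipole?"], True, 0.01, 0.02))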

------------------------------------------------------------------------


     *Timeline guess (if done responsibly):*

 * *Within 2–4 years*: Large commercial AIs may begin limited, opt-in "online fine-tuning" where they adapt weekly or monthly using anonymized data.

 * *5–8 years*: We'll probably see truly "continuously learning" public models, but with heavy privacy layers and regulatory oversight.

 * *Beyond that*: Personal AIs that combine global learning with your private, encrypted data for hyper-personalized experiences, without leaking your details to others.

------------------------------------------------------------------------

So the short version:

 * The /aggregate learning/ you're imagining already happens today, but in batches and offline.

 * Real-time, ongoing learning from everyone at once is coming, but probably in a *measured, opt-in* way over the next decade, not as a sudden flip.


