I'm a bit surprised those of us who regularly whinge (yes, that's apparently a real word) 
about the computer-brain analogy haven't yet spoken up. While I'm not sure I agree the 
reasoning is circular, I'm not attracted to the argument because they hand-wave a bit too 
much about *changes* to scope and domain. When we remember an experience, it seems less 
about optimizing some particular objective(s) like finding a _good_ burger. If there's a 
"good", it might lean more towards good experiences.

E.g. the burger might have sucked. But it could be good that it sucked because that's 
where you first met your spouse ... and you may go back to that bad burger joint every 
year to savor the bad burger. Or there could be any number of other *choosable* puzzle 
piece shapes that "click" differently in different contexts, at different times.

Another example is "dive bars" - before they're discovered by hipsters, of
course.

Regardless, what they lay out is a fantastic foil for such discussions. So I'm 
glad they laid it out and I'll use it.

On 5/7/26 8:35 AM, Matteo Morini wrote:
Dear friends,

I've stumbled upon a controversial (to me) manuscript: 
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/excess-capacity-learning/909013EF15575CF119FD511904CDF0C3
 (Excess Capacity Learning, by Dubova and Sloman, appearing in Behavioral 
and Brain Sciences, a CUP journal).

In essence the claim is: double descent in NN training smells like the human 
cognitive system.  My very rough distillation, after a cursory read: the paper 
lays out three regimes of representational capacity.

i. constrained - not enough: I love food, remember burgers taste good to me, 
sort of can tell a burger joint if I see one;

ii. sufficient - I go to burger-making places to have one;

iii. excess - I retain more information than necessary (e.g. day of week on 
which I had a bad burger).

I don't need to know the day of the week, but given a bad burger experience on 
a given day, I will refrain from eating one on the same day.  Implication: 
maybe on Wednesdays there's a different burger flipper at work.
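The three regimes above track the classic double-descent curve from the ML
literature. A minimal numerical sketch (my own illustration, not from the
paper) using minimum-norm least squares over random Fourier features: test
error typically rises as the feature count p approaches the number of training
samples, then falls again in the overparameterised ("excess") regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: noisy sine with n_train samples.
n_train, n_test = 20, 200
x_train = rng.uniform(-1, 1, n_train)
x_test = np.linspace(-1, 1, n_test)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(n_train)
y_test = np.sin(np.pi * x_test)

# Fixed pool of random frequencies/phases, shared across capacities so
# smaller models are nested inside larger ones.
max_p = 200
freqs = rng.uniform(0, 10, max_p)
phases = rng.uniform(0, 2 * np.pi, max_p)

def features(x, p):
    # Random Fourier feature map of width p.
    return np.cos(np.outer(x, freqs[:p]) + phases[:p])

capacities = [2, 5, 10, 15, 20, 25, 40, 80, 160, 200]
test_errors = []
for p in capacities:
    # pinv gives the minimum-norm least-squares solution, covering both
    # the under- (p < n_train) and over-parameterised (p > n_train) cases.
    w = np.linalg.pinv(features(x_train, p)) @ y_train
    test_errors.append(np.mean((features(x_test, p) @ w - y_test) ** 2))

for p, e in zip(capacities, test_errors):
    print(f"p={p:4d}  test MSE={e:.3f}")
```

With a fixed seed one typically sees the error spike near p ≈ n_train (the
interpolation threshold) and descend again past it - which is the regime the
authors are calling "excess".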

Here's a proper synthesis: 
https://www.santafe.edu/news-center/news/upending-assumptions-about-learning-inspired-by-an-ai-phenomenon

I don't buy it.  For starters, the reasoning looks circular to me: the moment an "excess signal" 
becomes predictive, it stops being "excess" and becomes "sufficient", in their parlance.

The editors are requesting commentary here*, and I can't think of a more apt 
congregation to throw this curveball at.

Thank you for your attention to this matter,

Cheers,

-Matteo


*https://drive.google.com/file/d/1XkBQ2K0_hIz4Se8b0cOUKreM96ONBqqP/view <<< 
link appears on the SFI webpage, presumably safe



--
8647 ⊥ ɐןןǝdoɹ ǝ uǝןƃ
ὅτε oi μὲν ἄλλοι κύνες τοὺς ἐχϑροὺς δάκνουσιν, ἐγὰ δὲ τοὺς φίλους, ἵνα σώσω.


.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
