Discussions of curiosity are like discussions of side effects, spandrels, and the rest. The simple 
conception of curiosity is information seeking to no purpose, no "instrumental benefit". 
But that's clearly nonsense, barring some sophistry around "instrumental benefit".

Curiosity seems to me to be an affect(ation), i.e. it refers to some other 
thing. Of course, that begs us to ask whether more curious people have a larger 
domain for the curiosity operator than other people. So if you can call Sally 
curious about nearly every topic and Bob only curious about particular topics, 
then Sally is more curious than Bob. But that suffers from so many confounders 
as to be meaningless. If Sally only engages with any particular topic for an 
hour, whereas Bob, when he does engage, engages for decades, then which is more 
curious?

And if curiosity is always about some (domain of) referent(s), then how is it 
distinguishable from any other appetite (e.g. inquisitiveness, paraphilia, 
obsessive-compulsion)?

I can't help but hearken back to our past exchanges on this list discussing concepts like 
free will, consciousness, or qualia, all of which seem to me to occupy the same category 
as curiosity. The distinguishing factor seems only to be "energy" and a 
willingness to play others' games -- or any particular game that happens to plop down on 
the table. If one has the energy, one can entertain whatever arbitrary game others 
propose. But when you lack the energy, you're accused of incuriosity or whatever other 
epithet the privileged find convenient.

On 2/12/24 08:30, Marcus Daniels wrote:
With a robot using a generative model, one way curiosity could manifest is in
how it learns from experience. With a somewhat higher sampling temperature, the
performance of a skill would vary. At a much higher temperature, the skill would
not be evident at all. If the skill had not been mastered, or if there were
equivalently good ways to perform it, random deviations might find those
variants. The sampling temperature doesn't itself change the model, but the
feedback loop from the robot in its environment would lead to different losses,
which would then be corrected through the model, e.g. via backpropagation.

An example for me is learning sculling -- finding a rhythm is as much about
feeling the consequences of a set of movements on the water, as water
conditions vary, as it is about executing a specified set of moves in order.
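
A minimal sketch of the loop described above, under one possible reading: actions
are sampled from a generative policy at a chosen temperature, the environment
feeds back rewards, and a simple REINFORCE-style gradient step stands in for the
"losses ... corrected through the model". The names here (policy, env, optimizer)
are hypothetical placeholders, not any particular robot stack.

import torch

def explore_and_learn(policy, env, optimizer, temperature=1.5, episodes=100):
    # 'policy' is any generative model mapping an observation to action logits;
    # 'env' is any environment whose step() returns (obs, reward, done).
    # Both are hypothetical placeholders.
    for _ in range(episodes):
        obs, done = env.reset(), False
        log_probs, rewards = [], []
        while not done:
            logits = policy(obs)
            # A higher temperature flattens the distribution: the learned skill
            # is executed with more variation; at extremes it disappears entirely.
            dist = torch.distributions.Categorical(logits=logits / temperature)
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, done = env.step(action.item())
            rewards.append(float(reward))
        # The temperature never changes the model; only this feedback-driven
        # gradient step (backpropagation on a REINFORCE-style loss) does.
        loss = -(torch.stack(log_probs) * torch.tensor(rewards)).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()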

*From:* Friam <friam-boun...@redfish.com> *On Behalf Of* Prof David West
*Sent:* Monday, February 12, 2024 7:15 AM
*To:* friam@redfish.com
*Subject:* Re: [FRIAM] The problems of interdisciplinary research

The notion of search brings to mind two different experiences:

1- traditional "searching" of the library via the card catalog (yes, I know I 
am old) for relevant inputs; and,

2- the "serendipity of the stacks"—simply looking around me at the books I 
located via search type 1 to see what was in proximity.

My experience: the second type of "search" was far more valuable, to me, than 
the first.

Also, with the books found via search '1-', the included bibliography was 
frequently of more use, ultimately, than the book that contained it.

Computerized search, à la Google, has always seemed limited to me, precisely 
because it is exclusively search type '1-' (even Google Scholar). Attempts to 
"improve" search by narrowing it on the basis of prior searches make it really, 
really worse.

LLM-based search seems, to me, to have some capability to approximate the 
serendipity of the stacks.

davew

On Mon, Feb 12, 2024, at 6:12 AM, David Eric Smith wrote:

    It’s kind of fascinating.

    I imagine that one of the next concepts to come into focus will be 
“curiosity”.  I remember a discussion years ago (15? 18?), I think involving 
David K., about what the nature of “curiosity” is and what role it plays in 
learning.

    Where the paper talks about supervision to train weights while eschewing 
“search” per se as a component of the capability learned, it makes me think of 
the role of search in the pursuit of inputs whose ultimate worth you can’t know 
at the time of searching.  I can imagine (off the cuff) that whatever one wants 
to mean by “curiosity”, it has some flavor of a non-random search, but one not 
guided by known criteria, rather by appropriateness to fill existing gaps in 
(something: confidence? consistency?).
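
As a toy illustration of one reading of that idea -- treating "gaps in
confidence" as predictive entropy -- the sketch below orders candidate inputs by
where a hypothetical model is least sure, so the search is neither random nor
driven by a known external criterion. The model and candidates are placeholders,
not anything from the paper under discussion.

import torch

def rank_by_confidence_gap(model, candidate_inputs):
    # Toy illustration only: read "gaps in confidence" as predictive entropy,
    # and order candidate inputs so the ones the model is least sure about
    # come first.  'model' is any network returning class logits (hypothetical).
    with torch.no_grad():
        gaps = []
        for x in candidate_inputs:
            probs = torch.softmax(model(x), dim=-1)
            entropy = -(probs * (probs + 1e-12).log()).sum()  # bigger = less confident
            gaps.append(entropy.item())
    # The ordering comes from where the model's own confidence is thinnest,
    # rather than from a fixed relevance criterion.
    order = sorted(range(len(candidate_inputs)), key=lambda i: gaps[i], reverse=True)
    return [candidate_inputs[i] for i in order]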

    This also seems like it should tie into Leslie Valiant’s ideas in Probably 
Approximately Correct about how to formally conceptualize teaching in relation 
to learning.  I guess Valiant is now considered decades passé, as AI has 
charged ahead.  But the broad outlines of his argument don’t seem to have been 
completely superseded.

    We already have “attention” as a secret sauce with important impacts.  I 
wonder when some shift of architectural paradigm will include a design that we 
think is a good formalization of the pre-formal gestures toward curiosity.
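
For concreteness, the "attention" in question does have a crisp formal core --
scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V -- sketched here in
plain PyTorch, leaving out masking, multiple heads, and the learned projections.

import torch

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V: the formal core that got named "attention",
    # shown without masking, multiple heads, or learned projections.
    d_k = Q.size(-1)
    weights = torch.softmax(Q @ K.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return weights @ V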

    Eric

        On Feb 10, 2024, at 8:19 PM, Marcus Daniels <mar...@snoutfarm.com> wrote:

        If one takes results like this -- https://arxiv.org/abs/2402.04494 -- 
and then considers what happens with, say, Code Llama, it seems plausible that 
it is representing both the breadth and depth of what humans know about large 
and complex code bases.  It is not clear to me why knowledge can't be extended 
far beyond what the highest-bandwidth humans can learn in a lifetime.  I agree 
mastery of the idiomatic patterns could constrain invention, though.  For 
software engineering, the most impressive people to me are those who can 
navigate large and complex code bases, often remembering a lot of the code, but 
who can also discard whole modules at a time and reimagine them.  Managers are 
suspicious of such people because managers want to modularize expertise for 
division of labor.  Scrum is in some sense a way to impede the development of 
expertise and to deny the need for it.

        *From:* Friam <friam-boun...@redfish.com> *On Behalf Of* David Eric Smith
        *Sent:* Saturday, February 10, 2024 2:25 AM
        *To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
        *Subject:* Re: [FRIAM] The problems of interdisciplinary research

        There’s a famous old rant by von Neumann, known at least by those who 
were around to hear it, or so I was told by Martin Shubik.

        von Neumann was grumping that “math had become too big; nobody could 
understand more than 1/4 of it”.  As always with von Neumann, the point of 
saying something included an element of self-aggrandizement: von Neumann was 
inviting the listener to notice that _he_ was the one who could understand a 
quarter of all existing math at the time (whether or not such an absurdity 
could be called “true” in any sense).

        I have wondered if this problem marks a qualitative threshold from 
which to define a “complex systems” science.  The premise would be that all 
innovations ultimately occur in individual human heads, triggered somehow.  
(And much of the skill of science is to structure your environment of reading 
and experience and people to “trigger” you in productive ways, since insight 
isn’t something that can be willed into existence).  But those ideas need to be 
answerable to the fullest scope of whatever is currently understood that is 
pertinent.

        The old answer used to be to cram more and more of current knowledge 
into single heads as the fuel for their insights, and then to restrict that 
role to more and more rarefied heads that could hold the most and still come up 
with something.

        But at some point, that model no longer works because there is a limit 
(some kind of extreme-value distribution, I guess) to what human heads can 
hold, at all.

        The project then shifts over into an effort of community design with 
explicit concerns that are not reducible to head-packing.  How do good insights 
come into existence, still limited by heads, but properly responsible to much 
more knowledge than the heads do, or even could, contain?

        I can, of course, shoot down my own way of saying this, immediately.  
In a sense, engineers have been doing this for a very, very long time.  No 
“person” knows what is in a 777 aircraft (or, for the Europeans, an A380).  
Those cases still feel different to me somehow, more like a standard expansion 
of the concept of the assembly line and the modularization of tasks through 
reliable interfaces (the various ideas behind object design, etc.).  I imagine 
that the interesting problems of idea-finding for complex phenomena are those 
that arise when you have modularized as much as you can, and you have run out 
of interesting things to add within the modules, because the things you can’t 
see transcend them.

        But of course I haven’t “made” anything of this string of words, like a 
self-help consultancy or the presidency of any institution.

        Eric

            On Feb 9, 2024, at 7:45 PM, Roger Critchlow <r...@elf.org> wrote:

            Yeah, it seems like the premise of the cartoon, or maybe Jochen's 
interpretation of it, was that people have limited scopes of application, and 
the average scope of application doesn't include interdisciplinary research.  
But there are people who have larger scopes and have a lot of fun doing 
interdisciplinary projects.  And if an interdisciplinary group can adapt to its 
participants' areas of strength, lots of interesting things can happen.

            -- rec --

            On Fri, Feb 9, 2024 at 3:19 PM Frank Wimberly <wimber...@gmail.com> wrote:

                I didn't read the article, but Carnegie Mellon, where I worked 
for almost 20 years, prides itself on the amount of interdisciplinary research 
accomplished there.  Herb Simon had appointments in psychology, computer 
science, business, and public policy, I believe.  I was a coauthor of papers in 
robotics, public policy, computer science, and philosophy.

                On Fri, Feb 9, 2024 at 1:54 PM Jochen Fromm <j...@cas-group.net> wrote:

                    Tom Gauld describes most of the problems of 
interdisciplinary research in a single image

                    
https://www.newscientist.com/article/2389834-tom-gauld-on-areas-of-expertise/

                    -J.

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

