Re: [FRIAM] The problems of interdisciplinary research

2024-02-12 Thread glen

Discussions of curiosity are like discussions of side effects, spandrels, and the rest. The simple 
conception of curiosity is information seeking to no purpose, no "instrumental benefit". 
But that's clearly nonsense, barring some sophistry around "instrumental benefit".

Curiosity seems to me to be an affect(ation), i.e. it refers to some other 
thing. Of course, that begs us to ask whether more curious people have a larger 
domain for the curiosity operator than other people. So if you can call Sally 
curious about nearly every topic and Bob only curious about particular topics, 
then Sally is more curious than Bob. But that suffers from so many confounders 
as to be meaningless. If Sally only engages with any particular topic for an 
hour, whereas Bob, when he does engage, engages for decades, then which is more 
curious?

And if curiosity is always about some (domain of) referent(s), then how is it 
distinguishable from any other appetite (e.g. inquisitiveness, paraphilia, 
obsessive-compulsion)?

I can't help but hearken back to our past exchanges on this list discussing concepts like 
free will, consciousness, or qualia, all of which seem to me to occupy the same category 
as curiosity. The distinguishing factor seems only to be "energy" and a 
willingness to play others' games -- or any particular game that happens to plop down on 
the table. If one has the energy, one can entertain whatever arbitrary game others 
propose. But when you lack the energy, you're accused of incuriosity or whatever other 
epithet the privileged find convenient.

On 2/12/24 08:30, Marcus Daniels wrote:

With a robot using a generative model, one way curiosity could manifest is in 
how it learns from experience.  With a somewhat higher sampling temperature, 
the performance of a skill would vary.  At a much higher temperature, the skill 
would not be evident.  If the skill had not been mastered, or there were 
equivalently good ways to perform it, random deviations might find these 
variants.  This sampling temperature doesn't itself change the model, but the 
feedback loop from the robot in its environment would lead to different losses, 
which would then be corrected through the model, e.g. via backpropagation.

An example for me is learning sculling -- finding a rhythm is as much about 
feeling the consequences of a set of movements on the water, as water 
conditions vary, as it is about executing a specified set of moves in order.

From: Friam On Behalf Of Prof David West
Sent: Monday, February 12, 2024 7:15 AM
To: friam@redfish.com
Subject: Re: [FRIAM] The problems of interdisciplinary research

The notion of search brings to mind two different experiences:

1- traditional "searching" of the library via the card catalog (yes, I know I 
am old) for relevant inputs; and,

2- the "serendipity of the stacks"—simply looking around me at the books I 
located via search type 1 to see what was in proximity.

My experience: the second type of "search" was far more valuable, to me, than 
the first.

Also, with the books found via search '1-', the included bibliography was 
frequently of more ultimate use than the book containing the bibliography.

Computerized search—à la Google—has always seemed limited, precisely because it is 
exclusively search type '1-' (even Google Scholar). Attempts to "improve" 
search by narrowing it on the basis of prior searches make it really, really worse.

LLM based search seems, to me, to have some capability to approximate the 
serendipity of the stacks.

davew

On Mon, Feb 12, 2024, at 6:12 AM, David Eric Smith wrote:

It’s kind of fascinating.

I imagine that one of the next concepts to come into focus will be 
“curiosity”.  I remember a discussion years ago (15? 18?), I think involving 
David K., about what the nature of “curiosity” is and what role it plays in 
learning.

Where the paper talks about supervision to train weights, but eschewing 
“search” per se as a component of the capability learned, it makes me think of 
the role of search in the pursuit of inputs, the ultimate worth of which you 
can’t know at the time of searching.  I can imagine (off the cuff) that 
whatever one wants to mean by “curiosity”, it has some flavor of a non-random 
search, but one not guided by known criteria, rather by appropriateness to fit 
existing gaps in (something: confidence? consistency?).

This also seems like it should tie into Leslie Valiant’s ideas in Probably 
Approximately Correct about how to formally conceptualize teaching in relation 
to learning.  I guess Valiant is now considered decades passe, as AI has 
charged ahead.  But the broad outlines of his argument don’t seem like they 
have become completely superseded.

We already have “attention” as a secret sauce with important impacts.  I 
wonder when some shift of architectural paradigm will include a design that we 
think is a good formalization of the pre-formal gestures toward curiosity.

Re: [FRIAM] The problems of interdisciplinary research

2024-02-12 Thread Marcus Daniels
With a robot using a generative model, one way curiosity could manifest is in 
how it learns from experience.  With a somewhat higher sampling temperature, 
the performance of a skill would vary.  At a much higher temperature, the skill 
would not be evident.  If the skill had not been mastered, or there were 
equivalently good ways to perform it, random deviations might find these 
variants.  This sampling temperature doesn't itself change the model, but the 
feedback loop from the robot in its environment would lead to different losses, 
which would then be corrected through the model, e.g. via backpropagation.
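
A sketch by way of illustration only (toy numbers, and a five-armed bandit 
standing in for the robot; not the actual setup described above): temperature 
only widens the sampling, while the model itself changes only through the loss 
fed back from the environment.

    import numpy as np

    rng = np.random.default_rng(0)
    TARGET = 3        # the "skill": the environment rewards action 3
    N_ACTIONS = 5

    def sample(logits, temperature):
        # temperature widens or narrows the sampling; it does not change the model
        p = np.exp(logits / temperature)
        p /= p.sum()
        return rng.choice(N_ACTIONS, p=p)

    def train(temperature, steps=500, lr=0.1):
        logits = np.zeros(N_ACTIONS)                 # the "model"
        for _ in range(steps):
            a = sample(logits, temperature)
            reward = 1.0 if a == TARGET else 0.0     # feedback from the environment
            logits[a] += lr * (reward - 0.5)         # the loss is what corrects the model
        return logits

    for T in (0.5, 1.0, 5.0):
        print(T, np.round(train(T), 2))

At T = 0.5 the learned preference for the rewarded action shows up quickly in 
behavior; at T = 5 the same updates happen, but the executed actions stay 
scattered, which is roughly the "skill would not be evident" case.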

An example for me is learning sculling -- finding a rhythm is as much about 
feeling the consequences of a set of movements on the water, as water 
conditions vary, as it is about executing a specified set of moves in order.

From: Friam  On Behalf Of Prof David West
Sent: Monday, February 12, 2024 7:15 AM
To: friam@redfish.com
Subject: Re: [FRIAM] The problems of interdisciplinary research

The notion of search brings to mind two different experiences:

1- traditional "searching" of the library via the card catalog (yes, I know I 
am old) for relevant inputs; and,
2- the "serendipity of the stacks"—simply looking around me at the books I 
located via search type 1 to see what was in proximity.

My experience: the second type of "search" was far more valuable, to me, than 
the first.

Also, with the books found via search '1-', the included bibliography was 
frequently of more ultimate use than the book containing the bibliography.

Computerized search—à la Google—has always seemed limited, precisely because it 
is exclusively search type '1-' (even Google Scholar). Attempts to "improve" 
search by narrowing it on the basis of prior searches make it really, really 
worse.

LLM based search seems, to me, to have some capability to approximate the 
serendipity of the stacks.

davew


On Mon, Feb 12, 2024, at 6:12 AM, David Eric Smith wrote:
It’s kind of fascinating.

I imagine that one of the next concepts to come into focus will be “curiosity”. 
 I remember a discussion years ago (15? 18?), I think involving David K., about 
what the nature of “curiosity” is and what role it plays in learning.

Where the paper talks about supervision to train weights, but eschewing 
“search” per se as a component of the capability learned, it makes me think of 
the role of search in the pursuit of inputs, the ultimate worth of which you 
can’t know at the time of searching.  I can imagine (off the cuff) that 
whatever one wants to mean by “curiosity”, it has some flavor of a non-random 
search, but one not guided by known criteria, rather by appropriateness to fit 
existing gaps in (something: confidence? consistency?).

This also seems like it should tie into Leslie Valiant’s ideas in Probably 
Approximately Correct about how to formally conceptualize teaching in relation 
to learning.  I guess Valiant is now considered decades passe, as AI has 
charged ahead.  But the broad outlines of his argument don’t seem like they 
have become completely superseded.

We already have “attention” as a secret sauce with important impacts.  I wonder 
when some shift of architectural paradigm will include a design that we think 
is a good formalization of the pre-formal gestures toward curiosity.

Eric



On Feb 10, 2024, at 8:19 PM, Marcus Daniels <mar...@snoutfarm.com> wrote:

If one takes results like this -- https://arxiv.org/abs/2402.04494 -- and then 
considers what happens with, say, Code Llama, it seems plausible that it is 
representing both the breadth and depth of what humans know about large and 
complex code bases.   It is not clear to me why knowledge can’t be extended far 
beyond what the highest-bandwidth humans can learn in a lifetime.   I agree 
mastery of the idiomatic patterns could constrain invention, though.   For 
software engineering, the most impressive people to me are those that can 
navigate large and complex code bases, often remembering a lot of the code, but 
also can discard whole modules at a time and reimagine them.  Managers are 
suspicious of such people because managers want to modularize expertise for 
division of labor.   Scrum is in some sense a way to impede the development of 
expertise and to deny the need for it.

From: Friam <friam-boun...@redfish.com> On Behalf Of David Eric Smith
Sent: Saturday, February 10, 2024 2:25 AM
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: Re: [FRIAM] The problems of interdisciplinary research

There’s a famous old rant by von Neumann, known at least by those who were 
around to hear it, or so I was told by Martin Shubik.

von Neumann was grumping that “math had become too big; nobody could understand 
more than 1/4 of it”.  As always with von Neumann, the point of saying 
something included an element of self-aggrandizement: von Neumann was inviting 
the listener to notice that _he_ was the one who could understand a quarter of 
all existing math at the time (whether or not such an absurdity could be called 
"true" in any sense).

Re: [FRIAM] The problems of interdisciplinary research

2024-02-12 Thread Prof David West
The notion of search brings to mind two different experiences:

1- traditional "searching" of the library via the card catalog (yes, I know I 
am old) for relevant inputs; and,
2- the "serendipity of the stacks"—simply looking around me at the books I 
located via search type 1 to see what was in proximity.

My experience: the second type of "search" was far more valuable, to me, than 
the first.

Also, with the books found via search '1-', the included bibliography was 
frequently of more ultimate use than the book containing the bibliography.

Computerized search—à la Google—has always seemed limited, precisely because it 
is exclusively search type '1-' (even Google Scholar). Attempts to "improve" 
search by narrowing it on the basis of prior searches make it really, really 
worse.

LLM based search seems, to me, to have some capability to approximate the 
serendipity of the stacks.
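
For what it's worth, a toy sketch of the contrast; the titles and the tiny 
made-up "embedding" vectors below are purely illustrative, not any real 
catalog or product API. Type-1 search returns only exact term matches, while a 
nearest-neighbor lookup also surfaces the shelf neighbors that never matched 
the query.

    import numpy as np

    # made-up 3-d "embeddings" standing in for where books sit in idea-space
    books = {
        "Cybernetics":                    np.array([0.9, 0.1, 0.0]),
        "Design of Experiments":          np.array([0.1, 0.9, 0.1]),
        "The Sciences of the Artificial": np.array([0.8, 0.3, 0.2]),
        "Godel, Escher, Bach":            np.array([0.7, 0.2, 0.6]),
    }

    def catalog_search(term):
        # search type '1-': only titles that literally contain the query term
        return [t for t in books if term.lower() in t.lower()]

    def stacks_search(title, k=2):
        # search type '2-': whatever sits "nearby" the book you already found
        q = books[title]
        sims = {t: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
                for t, v in books.items() if t != title}
        return sorted(sims, key=sims.get, reverse=True)[:k]

    print(catalog_search("cybernetics"))   # ['Cybernetics']
    print(stacks_search("Cybernetics"))    # its un-asked-for neighbors

An embedding index or an LLM does something like the second function at scale, 
which is presumably why it can feel more like browsing the stacks than 
consulting the catalog.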

davew


On Mon, Feb 12, 2024, at 6:12 AM, David Eric Smith wrote:
> It’s kind of fascinating.
> 
> I imagine that one of the next concepts to come into focus will be 
> “curiosity”.  I remember a discussion years ago (15? 18?), I think involving 
> David K., about what the nature of “curiosity” is and what role it plays in 
> learning.  
> 
> Where the paper talks about supervision to train weights, but eschewing 
> “search” per se as a component of the capability learned, it makes me think 
> of the role of search in the pursuit of inputs, the ultimate worth of which 
> you can’t know at the time of searching.  I can imagine (off the cuff) that 
> whatever one wants to mean by “curiosity”, it has some flavor of a non-random 
> search, but one not guided by known criteria, rather by appropriateness to 
> fit existing gaps in (something: confidence? consistency?).
> 
> This also seems like it should tie into Leslie Valiant’s ideas in Probably 
> Approximately Correct about how to formally conceptualize teaching in 
> relation to learning.  I guess Valiant is now considered decades passe, as AI 
> has charged ahead.  But the broad outlines of his argument don’t seem like 
> they have become completely superseded.
> 
> We already have “attention” as a secret sauce with important impacts.  I 
> wonder when some shift of architectural paradigm will include a design that 
> we think is a good formalization of the pre-formal gestures toward curiosity.
> 
> Eric
> 
> 
> 
>> On Feb 10, 2024, at 8:19 PM, Marcus Daniels  wrote:
>> 
>> If one takes results like this -- https://arxiv.org/abs/2402.04494 -- and 
>> then consider what happens with, say, Code Llama, it seems plausible that it 
>> is representing both the breadth and depth of what humans know about large 
>> and complex code bases.   It is not clear to me why knowledge can’t be 
>> extended far beyond what the highest-bandwidth humans can learn in a 
>> lifetime.   I agree mastery of the idiomatic patterns could constrain 
>> invention, though.   For software engineering, the most impressive people to 
>> me are those that can navigate large and complex code bases, often 
>> remembering a lot of the code, but also can discard whole modules at a time 
>> and reimagine them.  Managers are suspicious of such people because 
>> managers want to modularize expertise for division of labor.   Scrum is in 
>> some sense a way to impede the development of expertise and to deny the need 
>> for it.
>>  
>> From: Friam On Behalf Of David Eric Smith
>> Sent: Saturday, February 10, 2024 2:25 AM
>> To: The Friday Morning Applied Complexity Coffee Group
>> Subject: Re: [FRIAM] The problems of interdisciplinary research
>>  
>> There’s a famous old rant by von Neumann, known at least by those who were 
>> around to hear it, or so I was told by Martin Shubik.  
>>  
>> von Neumann was grumping that “math had become too big; nobody could 
>> understand more than 1/4 of it”.  As always with von Neumann, the point of 
>> saying something included an element of self-aggrandizement: von Neumann was 
>> inviting the listener to notice that _he_ was the one who could understand a 
>> quarter of all existing math at the time (whether or not such an absurdity 
>> could be called “true” in any sense).
>>  
>> I have wondered if this problem marks a qualitative threshold from which to 
>> define a “complex systems” science.  The premise would be that all 
>> innovations ultimately occur in individual human heads, triggered somehow.  
>> (And much of the skill of science is to structure your environment of 
>> reading and experience and people to “trigger” you in productive ways, since 
insight isn’t something that can be willed into existence).

Re: [FRIAM] The problems of interdisciplinary research

2024-02-12 Thread David Eric Smith
It’s kind of fascinating.

I imagine that one of the next concepts to come into focus will be “curiosity”. 
 I remember a discussion years ago (15? 18?), I think involving David K., about 
what the nature of “curiosity” is and what role it plays in learning.  

Where the paper talks about supervision to train weights, but eschewing 
“search” per se as a component of the capability learned, it makes me think of 
the role of search in the pursuit of inputs, the ultimate worth of which you 
can’t know at the time of searching.  I can imagine (off the cuff) that 
whatever one wants to mean by “curiosity”, it has some flavor of a non-random 
search, but one not guided by known criteria, rather by appropriateness to fit 
existing gaps in (something: confidence? consistency?).
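
One common way that flavor gets formalized in the reinforcement-learning 
literature, offered here only as a sketch with made-up toy pieces rather than 
as what the paper (or anyone here) has in mind, is an intrinsic "curiosity" 
bonus: the agent prefers actions whose outcomes its own model currently 
predicts worst, so the search is non-random yet guided by the model's gaps 
rather than by an external criterion.

    from collections import defaultdict

    N_STATES = 10
    ACTIONS = (-1, +1)              # a toy 1-D chain world
    visits = defaultdict(int)       # crude stand-in for a forward model

    def predicted_surprise(state, action):
        nxt = max(0, min(N_STATES - 1, state + action))
        return 1.0 / (1 + visits[nxt])     # rarely visited outcomes look "surprising"

    def step(state):
        # choose the action whose outcome the model understands least (the gap)
        action = max(ACTIONS, key=lambda a: predicted_surprise(state, a))
        nxt = max(0, min(N_STATES - 1, state + action))
        visits[nxt] += 1                    # experiencing the outcome shrinks the gap
        return nxt

    state = 0
    for _ in range(50):
        state = step(state)
    print(dict(visits))                     # exploration spreads along the chain

Once a region is well predicted its bonus fades and the agent drifts to the 
next gap, a crude stand-in for "appropriateness to fit existing gaps."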

This also seems like it should tie into Leslie Valiant’s ideas in Probably 
Approximately Correct about how to formally conceptualize teaching in relation 
to learning.  I guess Valiant is now considered decades passe, as AI has 
charged ahead.  But the broad outlines of his argument don’t seem like they 
have become completely superseded.
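
For reference, the textbook core of Valiant's framework, stated generically 
(the standard finite-hypothesis-class, realizable-case bound, not anything 
from the paper under discussion): a learner that outputs any hypothesis 
consistent with

    m \;\ge\; \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)

labeled examples will, with probability at least 1 - \delta, have true error 
at most \epsilon. That is the sense of "probably approximately correct," and 
it is also where teaching can enter formally, through the choice of which 
examples to present.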

We already have “attention” as a secret sauce with important impacts.  I wonder 
when some shift of architectural paradigm will include a design that we think 
is a good formalization of the pre-formal gestures toward curiosity.

Eric



> On Feb 10, 2024, at 8:19 PM, Marcus Daniels  wrote:
> 
> If one takes results like this -- https://arxiv.org/abs/2402.04494 -- and 
> then consider what happens with, say, Code Llama, it seems plausible that it 
> is representing both the breadth and depth of what humans know about large 
> and complex code bases.   It is not clear to me why knowledge can’t be 
> extended far beyond what the highest-bandwidth humans can learn in a 
> lifetime.   I agree mastery of the idiomatic patterns could constrain 
> invention, though.   For software engineering, the most impressive people to 
> me are those that can navigate large and complex code bases, often 
> remembering a lot of the code, but also can discard whole modules at a time 
> and reimagine them.  Managers are suspicious of such people because 
> managers want to modularize expertise for division of labor.   Scrum is in 
> some sense a way to impede the development of expertise and to deny the need 
> for it.
>  
> From: Friam <friam-boun...@redfish.com> On Behalf Of David Eric Smith
> Sent: Saturday, February 10, 2024 2:25 AM
> To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
> Subject: Re: [FRIAM] The problems of interdisciplinary research
>  
> There’s a famous old rant by von Neumann, known at least by those who were 
> around to hear it, or so I was told by Martin Shubik.  
>  
> von Neumann was grumping that “math had become too big; nobody could 
> understand more than 1/4 of it”.  As always with von Neumann, the point of 
> saying something included an element of self-aggrandizement: von Neumann was 
> inviting the listener to notice that _he_ was the one who could understand a 
> quarter of all existing math at the time (whether or not such an absurdity 
> could be called “true” in any sense).
>  
> I have wondered if this problem marks a qualitative threshold from which to 
> define a “complex systems” science.  The premise would be that all 
> innovations ultimately occur in individual human heads, triggered somehow.  
> (And much of the skill of science is to structure your environment of reading 
> and experience and people to “trigger” you in productive ways, since insight 
> isn’t something that can be willed into existence).  But those ideas need to 
> be answerable to the fullest scope of whatever is currently understood that 
> is pertinent.  
>  
> The old answer used to be to cram more and more of current knowledge into 
> single heads as the fuel for their insights, and then to limit to more and 
> more rarified heads that could hold the most and still come up with 
> something.  
>  
> But at some point, that model no longer works because there is a limit (some 
> kind of extreme-value distribution, I guess) to what human heads can hold, at 
> all.
>  
> The project then shifts over into an effort of community design with explicit 
> concerns that are not reducible to head-packing.  How do good insights come 
> into existence, still limited by heads, but properly responsible to much more 
> knowledge than the heads do, or even could, contain?  
>  
>  
> I can, of course, shoot down my own way of saying this, immediately.  In a 
> sense, engineers have been doing this for some very very long time.  No 
> “person” knows what is in a 777 aircraft (or for the Europeans, an A380).  
Those cases still feel different to me somehow, and like a more standard 
expansion of the concept of the assembly line and modularization of tasks 
through reliable interfaces (the various ideas behind object design etc.).

Re: [FRIAM] The problems of interdisciplinary research

2024-02-10 Thread Marcus Daniels
If one takes results like this -- https://arxiv.org/abs/2402.04494 -- and then 
considers what happens with, say, Code Llama, it seems plausible that it is 
representing both the breadth and depth of what humans know about large and 
complex code bases.   It is not clear to me why knowledge can’t be extended far 
beyond what the highest-bandwidth humans can learn in a lifetime.   I agree 
mastery of the idiomatic patterns could constrain invention, though.   For 
software engineering, the most impressive people to me are those that can 
navigate large and complex code bases, often remembering a lot of the code, but 
also can discard whole modules at a time and reimagine them.  Managers are 
suspicious of such people because managers want to modularize expertise for 
division of labor.   Scrum is in some sense a way to impede the development of 
expertise and to deny the need for it.

From: Friam  On Behalf Of David Eric Smith
Sent: Saturday, February 10, 2024 2:25 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] The problems of interdisciplinary research

There’s a famous old rant by von Neumann, known at least by those who were 
around to hear it, or so I was told by Martin Shubik.

von Neumann was grumping that “math had become too big; nobody could understand 
more than 1/4 of it”.  As always with von Neumann, the point of saying 
something included an element of self-aggrandizement: von Neumann was inviting 
the listener to notice that _he_ was the one who could understand a quarter of 
all existing math at the time (whether or not such an absurdity could be called 
“true” in any sense).

I have wondered if this problem marks a qualitative threshold from which to 
define a “complex systems” science.  The premise would be that all innovations 
ultimately occur in individual human heads, triggered somehow.  (And much of 
the skill of science is to structure your environment of reading and experience 
and people to “trigger” you in productive ways, since insight isn’t something 
that can be willed into existence).  But those ideas need to be answerable to 
the fullest scope of whatever is currently understood that is pertinent.

The old answer used to be to cram more and more of current knowledge into 
single heads as the fuel for their insights, and then to limit to more and more 
rarified heads that could hold the most and still come up with something.

But at some point, that model no longer works because there is a limit (some 
kind of extreme-value distribution, I guess) to what human heads can hold, at 
all.

The project then shifts over into an effort of community design with explicit 
concerns that are not reducible to head-packing.  How do good insights come 
into existence, still limited by heads, but properly responsible to much more 
knowledge than the heads do, or even could, contain?


I can, of course, shoot down my own way of saying this, immediately.  In a 
sense, engineers have been doing this for some very very long time.  No 
“person” knows what is in a 777 aircraft (or for the Europeans, an A380).  
Those cases still feel different to me somehow, and like a more standard 
expansion of the concept of the assembly line and modularization of tasks 
through reliable interfaces (the various ideas behind object design etc.)  I 
imagine that the interesting problem of idea-finding for complex phenomena are 
those that arise when you have modularized as much as you can, and you have run 
out of interesting things to add within the modules, because the things you 
can’t see transcend them.

But of course I haven’t “made” anything of this string of words, like a 
self-help consultancy or the presidency of any institution.

Eric



On Feb 9, 2024, at 7:45 PM, Roger Critchlow <r...@elf.org> wrote:

Yeah, it seems like the premise of the cartoon, or maybe Jochen's 
interpretation, was that people have limited scopes of application, and the 
average scope of application doesn't include interdisciplinary research.  But 
there are people who have larger scope and have a lot of fun doing 
interdisciplinary projects.  And if an interdisciplinary group can adapt to its 
participant areas of strength, lots of interesting things can happen.

-- rec --

On Fri, Feb 9, 2024 at 3:19 PM Frank Wimberly <wimber...@gmail.com> wrote:
I didn't read the article but Carnegie Mellon, where I worked for almost 20 
years, prides itself on the amount of interdisciplinary research accomplished 
there.  Herb Simon had appointments in psychology, computer science, business 
and public policy, I believe.  I was a coauthor of papers in robotics, public 
policy, computer science and philosophy.

On Fri, Feb 9, 2024 at 1:54 PM Jochen Fromm <j...@cas-group.net> wrote:
Tom Gauld describes most of the problems of interdisciplinary research in a 
single image
https://www.newscientist.com/article/2389834-tom-gauld-on-areas-of-expertise/

Re: [FRIAM] The problems of interdisciplinary research

2024-02-10 Thread David Eric Smith
It is an interesting question.

A colleague of mine, to whom I refer either affectionately (sometimes) or in 
exasperation (most times) as The Mystic, believes that this utilization was what 
the Phenomenologists were after, though he considers only Husserl and Fink the 
real deal, and the others as closeted hair-splitting Analyticals who didn’t 
understand the purpose (which, like the purpose of everything of any worth, is 
Mysticism).  He also believes that the European phenomenologists were sort of 
undergrads at this, whereas the Vedic Hindus were maybe impressive post-docs, 
along with maybe Daoists and some Confucians, Buddhists in general were the 
professoriate, and among them the Tibetans the true grandmasters.  (He will 
give somewhat more credit to the Medieval Christian mystics in Europe — the 
group studied by people like Bernard McGinn — than to their post-enlightenment 
descendants in philosophy.)  I tend to get lost in the hierarchy of holies, 
since it seems to vary depending on what I might have said that he needs to 
tell me was wrong.  But I feel like I remember patterns in things said to me 
over many conversations over the span of years, going into decades.

And of course, I say “he believes”, as if I didn’t recognize the complete 
absurdity of that, as I have no place or way to say anything about what he 
believes, never having said (or asked, or seemingly, even thought) anything 
that he doesn’t consider an error so categorical as to be hard for him to 
express.  

But, with about the honor due to Sartre’s self-taught man, I can continue to 
listen and try to remember the surface forms of what gets told to me.  The idea 
that there should be room for growth here doesn’t seem crazy to me.  

Eric



> On Feb 10, 2024, at 11:55 AM, Prof David West  wrote:
> 
> Eric said:
> 
> "there is a limit (some kind of extreme-value distribution, I guess) to what 
> human heads can hold, at all."
> 
> I must disagree. 
> 
> It may very well be true that the human mind-brain is limited in the amount 
> of 'formal-abstract' knowledge (mathematical, scientific, computational—the 
> stuff you learn in school) it can hold; that kind of knowledge represents a 
> small portion of what every human 'knows'. (Maybe 10%)
> 
> The exemplar of 'non-formal-abstract' knowledge possessed by everyone is 
> culture: "a complex whole consisting, in part, of language, norms, values, 
> worldviews, technologies, behaviors, and appearances." This kind of knowledge 
> can grow without limit (excepting maybe death) as long as one remains open to 
> its acquisition with varied experiences, travel, engagement with the "other," 
> and reading (for pleasure as well as purpose).
> 
> An interesting question: can this knowledge be utilized in service of 
> innovation, insight, and understanding, and if so, how? I believe the 
> answers are "yes" and "via evocative contextualization." An example of 
> the latter is the "Wheel of Life" Thangka painting I have hanging above my 
> desk. (attached)
> 
> davew
> 
> 
> 
> 
> On Sat, Feb 10, 2024, at 4:25 AM, David Eric Smith wrote:
>> There’s a famous old rant by von Neumann, known at least by those who were 
>> around to hear it, or so I was told by Martin Shubik.  
>> 
>> von Neumann was grumping that “math had become too big; nobody could 
>> understand more than 1/4 of it”.  As always with von Neumann, the point of 
>> saying something included an element of self-aggrandizement: von Neumann was 
>> inviting the listener to notice that _he_ was the one who could understand a 
>> quarter of all existing math at the time (whether or not such an absurdity 
>> could be called “true” in any sense).
>> 
>> I have wondered if this problem marks a qualitative threshold from which to 
>> define a “complex systems” science.  The premise would be that all 
>> innovations ultimately occur in individual human heads, triggered somehow.  
>> (And much of the skill of science is to structure your environment of 
>> reading and experience and people to “trigger” you in productive ways, since 
>> insight isn’t something that can be willed into existence).  But those ideas 
>> need to be answerable to the fullest scope of whatever is currently 
>> understood that is pertinent.  
>> 
>> The old answer used to be to cram more and more of current knowledge into 
>> single heads as the fuel for their insights, and then to limit to more and 
>> more rarified heads that could hold the most and still come up with 
>> something.  
>> 
>> But at some point, that model no longer works because there is a limit (some 
>> kind of extreme-value distribution, I guess) to what human heads can hold, 
>> at all.
>> 
>> The project then shifts over into an effort of community design with 
>> explicit concerns that are not reducible to head-packing.  How do good 
>> insights come into existence, still limited by heads, but properly 
>> responsible to much more knowledge than the heads do, or even could, contain?

Re: [FRIAM] The problems of interdisciplinary research

2024-02-10 Thread Stephen Guerin
  “math had become too big; nobody could understand more than 1/4 of
it”.

"But with four neighbors I can compute most of it" ;-)

On Sat, Feb 10, 2024, 5:25 AM David Eric Smith  wrote:

> There’s a famous old rant by von Neumann, known at least by those who were
> around to hear it, or so I was told by Martin Shubik.
>
> von Neumann was grumping that “math had become too big; nobody could
> understand more than 1/4 of it”.  As always with von Neumann, the point of
> saying something included an element of self-aggrandizement: von Neumann
> was inviting the listener to notice that _he_ was the one who could
> understand a quarter of all existing math at the time (whether or not such
> an absurdity could be called “true” in any sense).
>
> I have wondered if this problem marks a qualitative threshold from which
> to define a “complex systems” science.  The premise would be that all
> innovations ultimately occur in individual human heads, triggered somehow.
>  (And much of the skill of science is to structure your environment of
> reading and experience and people to “trigger” you in productive ways,
> since insight isn’t something that can be willed into existence).  But
> those ideas need to be answerable to the fullest scope of whatever is
> currently understood that is pertinent.
>
> The old answer used to be to cram more and more of current knowledge into
> single heads as the fuel for their insights, and then to limit to more and
> more rarified heads that could hold the most and still come up with
> something.
>
> But at some point, that model no longer works because there is a limit
> (some kind of extreme-value distribution, I guess) to what human heads can
> hold, at all.
>
> The project then shifts over into an effort of community design with
> explicit concerns that are not reducible to head-packing.  How do good
> insights come into existence, still limited by heads, but properly
> responsible to much more knowledge than the heads do, or even could,
> contain?
>
>
> I can, of course, shoot down my own way of saying this, immediately.  In a
> sense, engineers have been doing this for some very very long time.  No
> “person” knows what is in a 777 aircraft (or for the Europeans, an A380).
> Those cases still feel different to me somehow, and like a more standard
> expansion of the concept of the assembly line and modularization of tasks
> through reliable interfaces (the various ideas behind object design etc.)
>  I imagine that the interesting problem of idea-finding for complex
> phenomena are those that arise when you have modularized as much as you
> can, and you have run out of interesting things to add within the modules,
> because the things you can’t see transcend them.
>
> But of course I haven’t “made” anything of this string of words, like a
> self-help consultancy or the presidency of any institution.
>
> Eric
>
>
> On Feb 9, 2024, at 7:45 PM, Roger Critchlow  wrote:
>
> Yeah, it seems like the premise of the cartoon, or maybe Jochen's
> interpretation, was that people have limited scopes of application, and the
> average scope of application doesn't include interdisciplinary research.
> But there are people who have larger scope and have a lot of fun doing
> interdisciplinary projects.  And if an interdisciplinary group can adapt to
> its participant areas of strength, lots of interesting things can happen.
>
> -- rec --
>
> On Fri, Feb 9, 2024 at 3:19 PM Frank Wimberly  wrote:
>
>> I didn't read the article but Carnegie Mellon, where I worked for almost
>> 20 years, prides itself on the amount of interdisciplinary research
>> accomplished there..  Herb Simon had appointments in psychology, computer
>> science, business and public policy, I believe.  I was a coauthor of papers
>> in robotics, public policy, computer science and philosophy.
>>
>> On Fri, Feb 9, 2024 at 1:54 PM Jochen Fromm  wrote:
>>
>>> Tom Gauld describes most of the problems of interdisciplinary research
>>> in a single image
>>>
>>> https://www.newscientist.com/article/2389834-tom-gauld-on-areas-of-expertise/
>>>
>>> -J.
>>>

Re: [FRIAM] The problems of interdisciplinary research

2024-02-10 Thread David Eric Smith
There’s a famous old rant by von Neumann, known at least by those who were 
around to hear it, or so I was told by Martin Shubik.  

von Neumann was grumping that “math had become too big; nobody could understand 
more than 1/4 of it”.  As always with von Neumann, the point of saying 
something included an element of self-aggrandizement: von Neumann was inviting 
the listener to notice that _he_ was the one who could understand a quarter of 
all existing math at the time (whether or not such an absurdity could be called 
“true” in any sense).

I have wondered if this problem marks a qualitative threshold from which to 
define a “complex systems” science.  The premise would be that all innovations 
ultimately occur in individual human heads, triggered somehow.  (And much of 
the skill of science is to structure your environment of reading and experience 
and people to “trigger” you in productive ways, since insight isn’t something 
that can be willed into existence).  But those ideas need to be answerable to 
the fullest scope of whatever is currently understood that is pertinent.  

The old answer used to be to cram more and more of current knowledge into 
single heads as the fuel for their insights, and then to limit to more and more 
rarified heads that could hold the most and still come up with something.  

But at some point, that model no longer works because there is a limit (some 
kind of extreme-value distribution, I guess) to what human heads can hold, at 
all.

The project then shifts over into an effort of community design with explicit 
concerns that are not reducible to head-packing.  How do good insights come 
into existence, still limited by heads, but properly responsible to much more 
knowledge than the heads do, or even could, contain?  


I can, of course, shoot down my own way of saying this, immediately.  In a 
sense, engineers have been doing this for some very very long time.  No 
“person” knows what is in a 777 aircraft (or for the Europeans, an A380).  
Those cases still feel different to me somehow, and like a more standard 
expansion of the concept of the assembly line and modularization of tasks 
through reliable interfaces (the various ideas behind object design etc.)  I 
imagine that the interesting problem of idea-finding for complex phenomena are 
those that arise when you have modularized as much as you can, and you have run 
out of interesting things to add within the modules, because the things you 
can’t see transcend them.

But of course I haven’t “made” anything of this string of words, like a 
self-help consultancy or the presidency of any institution.

Eric


> On Feb 9, 2024, at 7:45 PM, Roger Critchlow  wrote:
> 
> Yeah, it seems like the premise of the cartoon, or maybe Jochen's 
> interpretation, was that people have limited scopes of application, and the 
> average scope of application doesn't include interdisciplinary research.  But 
> there are people who have larger scope and have a lot of fun doing 
> interdisciplinary projects.  And if an interdisciplinary group can adapt to 
> its participant areas of strength, lots of interesting things can happen.  
> 
> -- rec --
> 
> On Fri, Feb 9, 2024 at 3:19 PM Frank Wimberly <wimber...@gmail.com> wrote:
>> I didn't read the article but Carnegie Mellon, where I worked for almost 20 
>> years, prides itself on the amount of interdisciplinary research 
>> accomplished there..  Herb Simon had appointments in psychology, computer 
>> science, business and public policy, I believe.  I was a coauthor of papers 
>> in robotics, public policy, computer science and philosophy.
>> 
>> On Fri, Feb 9, 2024 at 1:54 PM Jochen Fromm <j...@cas-group.net> wrote:
>>> Tom Gauld describes most of the problems of interdisciplinary research in a 
>>> single image
>>> https://www.newscientist.com/article/2389834-tom-gauld-on-areas-of-expertise/
>>> 
>>> -J.
>>> 

Re: [FRIAM] The problems of interdisciplinary research

2024-02-09 Thread Roger Critchlow
Yeah, it seems like the premise of the cartoon, or maybe Jochen's
interpretation, was that people have limited scopes of application, and the
average scope of application doesn't include interdisciplinary research.
But there are people who have larger scope and have a lot of fun doing
interdisciplinary projects.  And if an interdisciplinary group can adapt to
its participant areas of strength, lots of interesting things can happen.

-- rec --

On Fri, Feb 9, 2024 at 3:19 PM Frank Wimberly  wrote:

> I didn't read the article but Carnegie Mellon, where I worked for almost
> 20 years, prides itself on the amount of interdisciplinary research
> accomplished there..  Herb Simon had appointments in psychology, computer
> science, business and public policy, I believe.  I was a coauthor of papers
> in robotics, public policy, computer science and philosophy.
>
> On Fri, Feb 9, 2024 at 1:54 PM Jochen Fromm  wrote:
>
>> Tom Gauld describes most of the problems of interdisciplinary research in
>> a single image
>>
>> https://www.newscientist.com/article/2389834-tom-gauld-on-areas-of-expertise/
>>
>> -J.
>>
>
>
> --
> Frank Wimberly
> 140 Calle Ojo Feliz
> Santa Fe, NM 87505
> 505 670-9918
>
> Research:  https://www.researchgate.net/profile/Frank_Wimberly2


Re: [FRIAM] The problems of interdisciplinary research

2024-02-09 Thread Frank Wimberly
I didn't read the article but Carnegie Mellon, where I worked for almost 20
years, prides itself on the amount of interdisciplinary research
accomplished there.  Herb Simon had appointments in psychology, computer
science, business and public policy, I believe.  I was a coauthor of papers
in robotics, public policy, computer science and philosophy.

On Fri, Feb 9, 2024 at 1:54 PM Jochen Fromm  wrote:

> Tom Gauld describes most of the problems of interdisciplinary research in
> a single image
>
> https://www.newscientist.com/article/2389834-tom-gauld-on-areas-of-expertise/
>
> -J.
>


-- 
Frank Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505
505 670-9918

Research:  https://www.researchgate.net/profile/Frank_Wimberly2