Re: [FRIAM] Science Fiction Books

2023-09-08 Thread Steve Smith

Cody/Glen -

I like this conceit of dreaming as disinhibited/discursive next-token 
generation.


As a sometimes *lucid* dreamer myself, I feel as if that is very close 
to what is going on for me... my best lucid dreaming is 
hypnopompic... happening as I awake or as I drift in and out after a 
good sleep (including various fever dreams such as an overheated nap on 
a summer afternoon). Hypnagogic also, but not so much... it seems to be 
a difference in tuning/calibration/angle of entry vs angle of exit?


The *lucidity* of my dreams seems to stem from some kind of split 
attention or a pseudo-adversarial exchange between a *conscious* and a 
*sub*-conscious component of "self".   The mode in which I engage in my 
lucid dreaming experience is usually somewhat active... "guiding" the 
dream-wander willfully but always gently... experience tells me that 
deliberate lucid dreaming is a bit like catching dust-motes or herding 
cats... too much intention blows it up and shuts it down or ruins it 
qualitatively. "Directed disinhibition" might be a fair description of 
my best method.  Psychedelic (or opioid?) tripping may be similar for 
some, but my experience with such is so acutely limited that I'm just 
guessing from anecdotal references.


One of my early experiences with chatGPT was to try to co-write short 
fiction with it, based on some simple tech or socio-political tweak.   
The final result was rather unsatisfying, but, as Glen implies, an 
API interface might provide more degrees of freedom than merely 
web-whispering at it.   I hadn't thought of it before this thread came 
up, but the *problems* I had might have been in the category of *trying 
too hard*...  I was probably trying to force chatGPT to be more of a 
writing assistant than a collaborative dreamer.   Naturally I began 
those story-weavings with a specific idea of what each was about and where 
it was going.   Perhaps I can re-engage with something closer to Glen's 
prescription?
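
Something like the following, perhaps: a minimal sketch of the loop Glen 
describes (quoted below), written against the OpenAI Python client. The 
model name, temperature, composition rule, and scoring heuristic are 
placeholders of my own, not anything specified in the thread.

import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "hidden pre-prompt": disinhibit the generator rather than steer it.
SYSTEM = ("You are dreaming. Free-associate from the prompt; do not "
          "criticize, correct, or summarize yourself.")

def sample(prompt, n=10, temperature=1.5):
    """Draw n loosely-constrained completions for one dream step."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",   # placeholder model name
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": prompt}],
        n=n,
        temperature=temperature,
    )
    return [choice.message.content for choice in resp.choices]

def compose(responses, k=3):
    """A random composition of a few responses becomes the n+1 prompt."""
    return "\n\n".join(random.sample(responses, k))

def score(text):
    """Stand-in evaluator; the thread never specifies a scoring rule."""
    return len(set(text.split()))  # crude novelty proxy: distinct words

def dream(seed, iterations=5):
    prompt, journal = seed, []
    for _ in range(iterations):
        responses = sample(prompt)
        journal.extend(responses)
        prompt = compose(responses)  # feed the dream back into itself
    # "Wake up" and write the highest-scoring fragment in the dream journal.
    return max(journal, key=score)

print(dream("a short story about keyboards as extended phenotype"))

Swapping in Falcon (or any other model behind a chat-style API) should 
only require changing sample(); everything else is just the 
disinhibition loop itself.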


Tyson Yunkaporta, in his book Sand Talk, invoked an (Aboriginal) term 
for doing something like this intentionally with other people: he calls 
it "Yarning".  Somewhat discursive from the usual description of 
Aboriginal Dreaming, my own connotative understanding of it is 
"wandering about in the (larger) adjacent possible".  Yarning seems to 
be a collaborative, directed version of this?  The question of "what 
means adjacent?" is begged here, methinks.


I have resisted/puzzled-over/enjoyed Glen's assertion/question about 
"communication is an illusion" (probably mis-stating it here?)... and 
wonder how that relates to this "dreaming" business?  It seems that what 
we nominally call (or intend to be) communication is at best the weaving 
of yarns (and that is a good thing)?


For the musically inclined: "Dream On" - Aerosmith, or a Cinematic Slam 
version (more to the spirit of adversarial collaboration)...


For the western take on eastern mystics among us, Alan Watts 
...


Discursively yours,

    - Steve


On 9/8/23 8:03 AM, glen wrote:
One of the things we could easily try is cumulative, iterative 
prompting, particularly with some of the lower scoring responses. 
Dreams are nothing but lower scoring responses, right? While you're 
sleeping, your evaluation/selection mechanism is inhibited, which 
allows you to invest a little more in the total bullshit your own 
next-token generator generates. So for, say, ChatGPT to dream, it 
simply needs instructions, including higher temperatures, to be less 
critical of its own responses. It would be annoying to try to do it 
with the web interface, but trivial to do with the API. The hidden 
pre-prompt could be engineered such that the n+1 prompt is an 
(algorithmic or random) composition of the, say, 10 responses to the 
nth prompt. Etc. This would be akin to dreaming, I think. At the end 
of however many iterations, you wake it up and write the highest 
scoring result down in its dream journal.


Maybe I'll try that with Falcon. I can't divert my OAI budget to it.

On 9/7/23 12:37, cody dooderson wrote:
I asked ChatGPT if it dreamed and it said that it didn't. However, is 
adversarial training of neural networks much different than dreaming?


A new class from MITx showed up in my email today. It is called 
/Minds and Machines: An introduction to philosophy of mind, exploring 
consciousness, reality, AI, and more. The most in-depth philosophy 
course available online./
https://mitxonline.mit.edu/courses/course-v1:MITxT+24.09x/ 

It may help with this question.
_ Cody Smith _
c...@simtable.com 


On Thu, Sep 7, 2023 at 12:25 PM Steve Smith <sasm...@swcp.com> wrote:

Great observations as usual, Glen...   I have lapsed into *listening* to
almost all long-form writing, whether fiction or non, and it
definitely distorts (torts?) my perception/conception of the
material/subject/message.   A corollary to McLuhan's Medium/Message
duality?

   I find the "output" side to be more specific (or conscious) for me
than the "input" side.   Your point of cuneoform
sticks/quills/pencils/keyboard/gestural-interpreters being part of our
extended phenotype is very apt as is the idea that (if I understand your
intentions) it (intrinsically) effects our interoception and
inter-subjective realities.

I also appreciate your reflections on "mal" and "dis" which I have lived
with all of my life... "judging" or "discriminating" in ways which
themselves are "adaptive" for one suite of purposes but perhaps
"mal"/"dis" for another suite.   Having a vector or tensor fitness
function with (arbitrary) signs on the elements doesn't guarantee they
themselves are "fit" for what you think they are.

Do Androids Dream of Electric Sheep?   Do LLMs (or the larger adaptive
systems they are embedded in?) dream of the tensor fields they are
embedded in or create or co-create with the fields of human
activity/history/knowledge/experience/future/manifesting-destiny they
were designed to model/emulate/expose/facilitate/co-evolve with?

I dunno,  but it sure is a fascinating milieu to be surfing through in
these auspicious days at the beginning (or end) of the Anthropocene.

   - Steve

On 9/7/23 1:01 PM, glen wrote:
 > Both keyboards and pencils are part of our extended phenotype and play
 > (multiple) roles in interoception, including the induction of
 > inter-subjectivity. I've forgotten who it is, but there's someone on
 > this list who *listens* to our posts, rather than reads them. I tried
 > that with a blog post this morning during my mobility routine:
 >
 > https://www.emilkirkegaard.com/p/preferences-can-be-sick-mental-illness
 >
 > Then because I had an allergic reaction to what I heard, I *read* it
 > later. Listening to it disgusted me. I came away thinking this
 > Kirkegaard dude's akin to a scientific racist ... or maybe a
 > eugenicist. I admit to being a fan of Thomas Szasz back in the day. (A
 > friend's mom actually dated him at some point ... allegedly.) But at
 > this point, I've been infected by the Woke Mind Virus; and it's
 > difficult to stomach phrases like "strict homosexuality is more
 > disordered than bisexuality." Reading it, however, helped me remember
 > that maladaption is part and parcel of adaption. Disorder is part and
 > parcel of order. The "mal" and "dis" prefixes are nothing but
 > value-laden subjectivity. The goo of reality extruded through the mold
 > of the author/thinker/subject. For someone like Kirkegaard to claim
 > they're being "objective" while using the "mal" prefix is not even
 > wrong. It's just bullshit. Apparently, my Woke Virus infection is
 > worse near my ears than near my eyes.
 >
 > But the point