Re: [FRIAM] Book Lovers of Santa FE

2023-03-01 Thread Steve Smith
Seems I missed a trick (or two) again... Peggy apparently started 
looking for a buyout or partnership over six months ago, and it is only now 
(after an acute health challenge?) that she seems to be on a mission to 
sell or shutter in short order.


I'm futzing around with examples of people who have used GoFundMe 
campaigns to rescue bookstores financially, and it looks like some of 
those were resounding successes.  The key here is probably as much 
finding someone (or several someones) willing and able to manage such a 
store, even if the funds for it get raised.  I have no idea of the 
magnitude of what she needs in order to be released from this obligation 
she built over 40+ years.


https://www.santafenewmexican.com/news/local_news/queen-of-book-mountain-writing-lifes-last-chapter/article_fc4ede6e-0ce4-11ed-b550-f774b8b7aed5.html

On 3/1/23 3:43 PM, Steve Smith wrote:
I was just in Book Mountain, which has been operating in SFe primarily 
as a paperback exchange since 1980 (just before I moved to the area), 
and discovered that the owner, Peggy Frank, intends to shut down, 
probably sooner rather than later.  Here is an article about her move 
and rejuvenation of the store a couple of years ago:


https://www.santafenewmexican.com/news/business/book-mountain-alive-and-well-in-new-location/article_9003513a-3c8b-11ea-a81e-373351759896.html 



The new digs and layout were a modest upgrade from what she had going 
before, and she has a somewhat wider range of books shelved now, though 
the emphasis is still on high-turnover paperbacks.  She has one of the 
most extensive collections of Science Fiction I think you will find 
this side of Powell's (Portland) or Tattered Cover (Denver).


I probably unloaded 1/4 of my "2 cords of books" left over from *my* 
bookstore (Hunt and Gather, circa 2006) on her for trade (my credit is 
much too large to begin to drain in the short term).


What I am hoping is that one of her customers, or someone in the 
book-loving circle, would swoop in and buy it outright.  I have already 
talked to some of the other booksellers in town, and she has already 
approached them offering to sell them her inventory, but the ideal 
would be for it to remain a viable business.  Her current 
setup/inventory/curation works and is far more valuable than the 
mere inventory (which would need to be sorted, shelved, etc. again).


If anyone here knows of anyone suitable for taking on such an 
operation, I highly recommend it, but I suspect the time is short. 
Peggy is just coming out of some health challenges and I sense she 
will not be willing/able to wait very long.  I am holding out a slim 
hope that one of the other used-book sellers in SFe might pick it up as a 
satellite (Op Cit has already had multiple locations).


In any case, spread the word if you can.   Maybe there is a coop model 
or multiple-partner arrangement that might keep it going and help 
Peggy leave the business cleanly/gracefully.


- Steve




-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam

to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present 
https://redfish.com/pipermail/friam_redfish.com/

 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/





Re: [FRIAM] Book Lovers of Santa FE

2023-03-01 Thread Steve Smith

FWIW

Searching this US worker-coop directory yielded 10 examples of coop 
bookstores and bookstores registered as "democratic workplaces" (a term 
I do not yet know the definition of):


   https://www.usworker.coop/directory/

Orca in Olympia is not listed, which gives me the feeling that there may 
be (many) more out there also not registered here.  The locations of 
most of these are the usual suspects: San Francisco, Seattle, Asheville, 
Austin, etc.


My favorite coop experience was the Cheeseboard Pizza Collective in 
Berkeley.  When I was there they had a line around the block on 
Thursday and Friday nights (maybe Saturday also) when they opened at 4 
or 5 PM.  As the lore went, every member of the coop worked every role, 
from cook to cashier to cleanup crew, "equally"... and they apparently 
had a long waiting list of people wanting to join the coop.  I don't 
know how much profit/pay the members got, but it was reputed to be 
militaristic in its equality in all ways and was/is apparently 
high-functioning enough that people really want to join...


It was take-out only, but as your part of the line entered the building, 
you had the option of being handed a "slice" when you ordered, to eat 
while you waited for the order to be fulfilled (usually less than 5 
minutes; the ovens were running full-tilt all the time).  If you weren't 
eating one in place, they would instead place a single slice on top of your 
box(es) for the person picking up to eat on the way home, avoiding that 
embarrassment when you get there and everyone teases you about the 
"missing slice" in the box ("couldn't wait, could ya?").


On 3/1/23 4:16 PM, Steve Smith wrote:
Excellent! This is the kind of feedback I was fishing for.  I am 
reading some of Orca's materials on their CoOpness and can imagine 
that it is a hard thing to pull together without one or a very few 
strong personalities with the motivation to move it along crisply.   
That is definitely not me.


I was not aware of *any* bookstore coops in existence so this is very 
useful and motivates me to maybe look around for other examples.


Peggy has a new employee working there with her who seems pretty 
competent and motivated.  I will maybe stop in sometime when she is not 
around (not sure when that is, because she used to be there *all the 
time*) and feel out his interest/ability to step up in any 
capacity...  She is a dedicated but difficult personality, and I don't 
really want to interfere with her process, whatever it might be, just 
see if I can facilitate a "better outcome" showing up.


On 3/1/23 3:50 PM, glen wrote:
FWIW, if there were a lot of people interested in helping out a 
little bit, consider setting up a co-op. This store here in Oly did 
that soon after we moved up here. It was difficult because they 
transitioned, rather than constructed from scratch.


https://www.orcabooks.com/co-op

And I don't see Book Mountain on this list:

https://bookshop.org/pages/bookstores

It's probably too little/late for it to help Book Mountain now. But 
if someone does take over, it would make sense to get on the list.






Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Steve Smith

EricS -

These are good observations... I believe it will lead to more and more 
value in *good curation* and a finer distinction on "authority through 
reputation".  I also believe that formal blockchain and/or some 
informal analog will become critical to authenticating sources.  I 
already find myself questioning my friends and colleagues 
(gently/quietly) with "why do you believe what you believe?", because it 
feels like they (we) oftentimes expose ourselves to "this or that" and 
then risk passing it on.
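As a toy illustration of the kind of "informal analog" to a blockchain this might mean, a tamper-evident hash chain over attributed claims can be built from the Python standard library alone. This is a minimal sketch, not a real provenance scheme; all function names, authors, and claims here are hypothetical:

```python
import hashlib

def chain_hash(prev_hash: str, author: str, claim: str) -> str:
    """Hash a claim together with its author and the previous link,
    so tampering with anything upstream invalidates every later link."""
    payload = f"{prev_hash}|{author}|{claim}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def build_chain(claims):
    """claims: list of (author, claim) pairs -> list of link hashes."""
    links, prev = [], "genesis"
    for author, claim in claims:
        prev = chain_hash(prev, author, claim)
        links.append(prev)
    return links

def verify_chain(claims, links) -> bool:
    """Recompute the chain and compare it link by link."""
    return build_chain(claims) == links
```

A recipient who holds the link hashes can detect upstream tampering: altering any earlier claim changes every later link.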


The metaphor of "global pandemic" is still fresh enough to us that 
*maybe* some folks will be more discriminating about who they "conjugate 
with".  Also, many of us grew up through the AIDS and/or general STD 
period, when the motto was "when you have sex with someone, you are 
having sex with everyone *they* have had sex with".


I had two roommates a few years ago (they are peppered 
through my anecdotes here) who were somewhat polar opposites.  One was 
halfway to Q at the time, actually just waiting for a Q-like figure to 
show up.  She had one-liners like "I think people should be able to 
believe what they want to" and "*I* have an open mind".  The other was 
an artist-by-training but also very rational, if not always fully 
informed.  His response to those ideations of hers (rarely to her face, 
he was too polite) was "I don't want to know what you believe, I want 
to know what you *think*" and "the problem with an open mind is just 
about anyone can pour anything into it".


He also had his own pet "conspiracy theories" IMO, but much less wild 
and destructive than hers. (and of course, then there is me and all my 
rattling on)


In her environmentalism/anti-globalism she managed to turn it into a 
hate/fear of the Democrats while holding an entirely gullible, receptive 
posture toward the Republicans, the more absurd the better.  She held 
(and propagated) various extreme beliefs about the personal lives and 
circumstances of her anti-heroes (Clintons and Obamas in particular), yet 
did not recognize the term _ad hominem_ except when applied to the 
likes of her heroes (think Alex Jones and Donald Trump).  She did not 
openly endorse either of the latter, seeming to recognize that they were 
at least widely perceived as the epitome of toxic public personalities, 
but she was known to defend them on principle...  quoting free speech 
and decrying the term "conspiracy-theory" as if anyone labeled with it 
had earned it by being a true-visionary and hero-of-the-people whistleblower.


I dropped nearly all conversation with her mid-COVID over her rabid 
anti-vaxx rhetoric, which I could withstand when directed toward me, but 
her brush was pretty broad when it came to impugning just about anyone 
and everyone who might actually believe in any part of modern medicine.




SteveS


This is fun.  Will have to watch it when I have time.

Is there a large active genre just now combining ChatGPT with deepfakes, to 
generate video of whomever-saying-whatever?

I was thinking a couple of years ago about what direction in big-AI would be 
the most destructive, in requiring extra cognitive load to check what was 
coming in through every sense channel all the time.  Certainly, as much as we 
must live by habit, because doing everything through the prefrontal cortex all 
the time is exhausting (go to a strange country, wake up in the middle of the 
night, where are the light switches in this country and how do they work?), 
there clearly are whole sensory modalities that we have just taken for granted 
as long as we could.  I have assumed that the audiovisual channel of watching a 
person say something was near the top of that list.

Clearly a few years ago, deepfakes suddenly took laziness off the table for 
that channel.   The one help was that human-generated nonsense still takes 
human time, on which there is some limit.

But if we have machine-generated nonsense, delivered through machine-generated 
rendering, we can put whole servers onto it full-time.  Sort of like bitcoin 
mining.  Burn a lot of irreplaceable carbon fuel to generate something of no 
value and some significant social cost.

So I assume there is some component of the society that is bored and already 
doing this (?)

Eric




On Feb 28, 2023, at 9:10 PM, Gillian Densmore  wrote:

This John Oliver piece might amuse and/or mortify you.
https://www.youtube.com/watch?v=Sqa8Zo2XWc4_channel=LastWeekTonight

On Tue, Feb 28, 2023 at 4:00 PM Gillian Densmore  wrote:


On Tue, Feb 28, 2023 at 2:06 PM Jochen Fromm  wrote:
The "Transformer" movies are like the "Resident evil" movies based on a similar idea: we take a 
simple, almost primitive story such as "cars that can transform into alien robots" or "a bloody fight 
against a zombie apocalypse" and throw lots of money at it.

But maybe deep learning and large language models are the same: we take a 
simple idea (gradient descent learning for deep neural networks) and throw lots 
of money (and 

Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread glen

Right. I misspoke; "training" may not be the right word. But my understanding is 
they have humans monitoring at least some of the ChatGPT usage and using it for RL. I 
have no idea what the frequency of the feedback is, though. I speculate that it was much 
faster early on, when they effectively took it offline for most people.

So, whether one's little online demonstration of the chat toolchain telling you 
what you want to hear impacts the RL is an open question. Shirley, the 2+2=5 
lessons are ignored by the RL workflow.

On 3/1/23 09:08, Marcus Daniels wrote:

It is pre-trained.   Just because there is a chat doesn’t mean it considers the 
correspondent as providing new evidence.

*From:* Friam  *On Behalf Of *Gillian Densmore
*Sent:* Wednesday, March 1, 2023 9:05 AM
*To:* The Friday Morning Applied Complexity Coffee Group 
*Subject:* Re: [FRIAM] Magic Harry Potter mirrors or more?

Glen, funny you say that about ChatGPT:

https://twitter.com/tasty_gigabyte7/status/1620571251344551938 


On Wed, Mar 1, 2023 at 10:02 AM Marcus Daniels mailto:mar...@snoutfarm.com>> wrote:

On one hand, there needs to be ongoing debate (in training) to reflect 
actual uncertainty in responses.  On the other hand, humans spew a lot of 
nonsense, and a lot of it is just wrong.  That leads to the vulnerability to 
black hatters.  If there is bias in the (peer) review of the input data, there 
will be bias in the output distributions.

-Original Message-
From: Friam mailto:friam-boun...@redfish.com>> 
On Behalf Of glen
Sent: Wednesday, March 1, 2023 8:51 AM
To: friam@redfish.com 
Subject: Re: [FRIAM] Magic Harry Potter mirrors or more?

Exactly. We recently started a rough eval of the newer "text-embedding-ada-002" model 
versus the older "text-similarity-curie-001" model. The newer model produces a lower 
dimensional embedding (1536) than the older (4096), which could imply the older model might provide 
a more fine-grained [dis]similarity. I don't think that's the case, though, because the encoding 
for the new model allows for 8192 tokens and the old one only 2046 tokens. So, the ability of the 
high dimensional embedding is limited by the granularity of the encoding. We're not done with the 
evaluation yet, though.
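For anyone following along, the [dis]similarity such embedding models are usually evaluated on is cosine similarity between embedding vectors. A minimal sketch with toy 4-d vectors standing in for the 1536-d (ada-002) or 4096-d (curie) model outputs; all vector values are hypothetical, stdlib only:

```python
import math

def cosine_similarity(a, b):
    """dot(a, b) / (|a| * |b|): 1.0 for parallel vectors,
    0.0 for orthogonal, negative for opposed."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-d stand-ins for real embedding vectors.
doc_a = [0.1, 0.3, -0.2, 0.5]
doc_b = [0.1, 0.28, -0.19, 0.52]  # near-duplicate of doc_a
doc_c = [-0.4, 0.1, 0.6, -0.3]    # unrelated content
```

Here doc_a vs doc_b scores close to 1 and doc_a vs doc_c scores much lower, which is the ranking behavior a similarity eval checks, whatever the embedding dimension.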

One of the ideas I had when chatgpt took off, more along the lines of 
EricS' question, is to focus on red-teaming GPT. OpenAI's already doing this 
with their human-in-the-loop RL workflow. And the good faith skeptics in the 
world are publishing the edge cases they find (e.g. teaching GPT to say 2+2=5). 
But if a black hatter gets a backdoor into a *medically* focused app, she could 
really screw up particular domains (e.g. caregiver demographics, patient 
demographics, etc.). Or, if she were anti-corporate, she could screw up the 
interface between insurance companies and medical care.

On 3/1/23 08:33, Marcus Daniels wrote:
 > It seems to me the "mansplaining" is built into an algorithm that 
chooses the most likely response.  Choose all responses above probability 0.9 and present 
them all to give the user a sense of the uncertainty.
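Marcus's thresholding idea can be sketched in a few lines; the candidate responses and scores below are hypothetical stand-ins, treating each score as an independent confidence rather than a normalized distribution:

```python
def responses_above(candidates, threshold=0.9):
    """Keep every (response, score) pair whose score clears the
    threshold, highest first, so the user sees the spread of
    plausible answers instead of one confident pick."""
    kept = [(r, p) for r, p in candidates if p > threshold]
    return sorted(kept, key=lambda rp: rp[1], reverse=True)

# Hypothetical candidates with model-assigned confidences.
candidates = [
    ("The answer is 4.", 0.97),
    ("The answer is 5.", 0.93),
    ("The answer is 22.", 0.40),
]
```

With the 0.9 threshold, the first two candidates would both be shown, exposing the near-tie to the user.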
 >
 > -Original Message-
 > From: Friam mailto:friam-boun...@redfish.com>> On Behalf Of glen
 > Sent: Wednesday, March 1, 2023 8:31 AM
 > To: friam@redfish.com 
 > Subject: Re: [FRIAM] Magic Harry Potter mirrors or more?
 >
> Yep, that's the fundamental problem with the "chat" usage pattern. But it's much less of a 
problem with other usage patterns. For example, we have a project at UCSF where we're using GPT3.5 to help us 
with the embeddings for full-text biomedical articles. This produces opportunities for several other usage 
patterns that preserve the inherent uncertainty, allowing the user to gain some new insight without the 
"mansplaining" confidence of the chat mode. We're way upstream of the clinic so far, though. FDA 
approval for such a "device" might be sticky.
 >
 > On 3/1/23 08:19, Barry MacKichan wrote:
 >> When I bought back my company about 25 years ago, the mantra for 
programmers was “Google the error message!” Now ChatGPT will write some of the code 
for you. The job of programming still requires a lot of knowledge and experience 
since using ChatGPT-generated code without quality checking is far from failsafe.
 >>
 >> —Barry
 >>
 >> On 1 Mar 2023, at 11:04, Marcus Daniels wrote:
 >>
 >>      I have seen doctors run internet searches in front of me. If an LLM 
is given all the medical journals, biology textbooks, and hospital records for 
training, that could be a useful resource for society.
 >>
 >>      -Original Message-
 >>      From: Friam mailto:friam-boun...@redfish.com>> On Behalf Of Santafe
 >>      Sent: Wednesday, March 1, 2023 4:45 AM
 >>      To: The Friday Morning 

>> up in the middle of the night, 

Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Gillian Densmore
Glen, funny you say that about ChatGPT:
https://twitter.com/tasty_gigabyte7/status/1620571251344551938



Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Marcus Daniels
On one hand, there needs to be ongoing debate (in training) to reflect actual 
uncertainty in responses.  On the other hand, humans spew a lot of nonsense, 
and a lot of it is just wrong.  That leads to the vulnerability to black 
hatters.  If there is bias in the (peer) review of the input data, there will 
be bias in the output distributions.


Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread glen

Exactly. We recently started a rough eval of the newer "text-embedding-ada-002" 
model versus the older "text-similarity-curie-001" model. The newer model 
produces a lower-dimensional embedding (1536) than the older (4096), which 
could imply the older model provides a more fine-grained [dis]similarity. I 
don't think that's the case, though, because the encoding for the new model 
allows for 8192 tokens and the old one only 2046 tokens. So, the resolution of 
the high-dimensional embedding is limited by the granularity of the encoding. 
We're not done with the evaluation yet, though.
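For the curious, the comparison itself is just cosine [dis]similarity over the two embedding spaces. A minimal sketch, with mock random vectors standing in for real ada-002 / curie-001 output (the dimensions are the only faithful detail; a real eval would fetch embeddings from the API):

```python
import numpy as np

def cosine_dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity: 0 for identical directions, 2 for opposite."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Mock document embeddings: 1536-d (ada-002-sized) and 4096-d
# (curie-001-sized). Real evals would embed actual document pairs.
a_new, b_new = rng.normal(size=(2, 1536))
a_old, b_old = rng.normal(size=(2, 4096))

print(cosine_dissimilarity(a_new, b_new))
print(cosine_dissimilarity(a_old, b_old))
```

The open question in the eval is whether the old model's extra 2560 dimensions buy any real resolution once its 2046-token encoding has already thrown information away.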

One of the ideas I had when chatgpt took off, more along the lines of EricS' 
question, is to focus on red-teaming GPT. OpenAI's already doing this with 
their human-in-the-loop RL workflow. And the good faith skeptics in the world 
are publishing the edge cases they find (e.g. teaching GPT to say 2+2=5). But 
if a black hatter gets a backdoor into a *medically* focused app, she could 
really screw up particular domains (e.g. caregiver demographics, patient 
demographics, etc.). Or, if she were anti-corporate, she could screw up the 
interface between insurance companies and medical care.


Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Marcus Daniels
It seems to me the "mansplaining" is built into an algorithm that chooses the 
most likely response.  Choose all responses above probability 0.9 and present 
them all to give the user a sense of the uncertainty.  
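As a toy illustration of that presentation idea (the scores are invented for the sketch; real chat endpoints expose token-level logprobs rather than whole-response probabilities):

```python
def confident_responses(candidates: dict[str, float],
                        threshold: float = 0.9) -> dict[str, float]:
    """Keep every candidate response whose model-assigned score clears
    the threshold; an empty result is itself a signal of uncertainty."""
    return {text: p for text, p in candidates.items() if p >= threshold}

# Invented scores for the sketch -- not real model output.
candidates = {
    "diagnosis A": 0.95,
    "diagnosis B": 0.92,
    "diagnosis C": 0.40,
}
print(confident_responses(candidates))  # A and B survive; C is dropped
```

Presenting both A and B tells the user the model is genuinely torn, instead of asserting A alone.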


Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread glen

Yep, that's the fundamental problem with the "chat" usage pattern. But it's 
much less of a problem with other usage patterns. For example, we have a 
project at UCSF where we're using GPT-3.5 to help us with the embeddings for 
full-text biomedical articles. This opens up several other usage patterns that 
preserve the inherent uncertainty, allowing the user to gain some new insight 
without the "mansplaining" confidence of the chat mode. We're way upstream of 
the clinic so far, though. FDA approval for such a "device" might be sticky.
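One such pattern, roughly: embed the query, rank the full-text articles by cosine similarity, and hand the user the whole graded list rather than one confident answer. A sketch with mock vectors and hypothetical titles (nothing here reflects the actual UCSF pipeline):

```python
import numpy as np

def rank_articles(query_vec, article_vecs, titles, k=3):
    """Return the top-k (title, cosine-similarity) pairs, preserving
    uncertainty by exposing the scores instead of asserting one answer."""
    q = query_vec / np.linalg.norm(query_vec)
    a = article_vecs / np.linalg.norm(article_vecs, axis=1, keepdims=True)
    scores = a @ q
    order = np.argsort(scores)[::-1][:k]
    return [(titles[i], float(scores[i])) for i in order]

rng = np.random.default_rng(1)
vecs = rng.normal(size=(5, 1536))              # mock article embeddings
titles = [f"article-{i}" for i in range(5)]    # hypothetical titles
query = vecs[2] + 0.1 * rng.normal(size=1536)  # a query near article-2
print(rank_articles(query, vecs, titles))      # article-2 should rank first
```

The graded scores are the point: a clinician sees three candidates with 0.99 / 0.41 / 0.38 and draws a very different conclusion than from a single unqualified answer.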


Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Marcus Daniels
Sure, but a doctor doesn't need to take a candidate diagnosis as anything more 
than a hypothesis.

In M3gan (which I found to be more of a fun satire than a horror drama), the 
android is better able to counsel the child than the human guardian.  That also 
seems plausible.  All fiction and internet discussion is available for training.

It's been reported that places with high religious participation have fewer 
deaths of despair 
(https://www.economist.com/graphic-detail/2023/02/27/places-with-high-religious-participation-have-fewer-deaths-of-despair).
That seems like the M3gan observation/fictionalization.  Vulnerable people are 
easy to support (and manipulate).


Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Barry MacKichan
When I bought back my company about 25 years ago, the mantra for 
programmers was “Google the error message!” Now ChatGPT will write 
some of the code for you. The job of programming still requires a lot of 
knowledge and experience since using ChatGPT-generated code without 
quality checking is far from failsafe.


—Barry

On 1 Mar 2023, at 11:04, Marcus Daniels wrote:

I have seen doctors run internet searches in front of me.  If an LLM 
is given all the medical journals, biology textbooks, and hospital 
records for training, that could be a useful resource for society.


-Original Message-
From: Friam  On Behalf Of Santafe
Sent: Wednesday, March 1, 2023 4:45 AM
To: The Friday Morning Applied Complexity Coffee Group 


Subject: Re: [FRIAM] Magic Harry Potter mirrors or more?

This is fun.  Will have to watch it when I have time.

Is there a large active genre just now combining ChatGPT with 
deepfakes, to generate video of whomever-saying-whatever?


I was thinking a couple of years ago about what direction in big-AI 
would be the most destructive, in requiring extra cognitive load to 
check what was coming in through every sense channel all the time.  
Certainly, as much as we must live by habit, because doing everything 
through the prefrontal cortex all the time is exhausting (go to a 
strange country, wake up in the middle of the night, where are the 
lightswitches in this country and how do they work?), there clearly 
are whole sensory modalities that we have just taken for granted as 
long as we could.  I have assumed that the audiovisual channel of 
watching a person say something was near the top of that list.


Clearly a few years ago, deepfakes suddenly took laziness off the 
table for that channel.   The one help was that human-generated 
nonsense still takes human time, on which there is some limit.


But if we have machine-generated nonsense, delivered through 
machine-generated rendering, we can put whole servers onto it 
full-time.  Sort of like bitcoin mining.  Burn a lot of irreplaceable 
carbon fuel to generate something of no value and some significant 
social cost.


So I assume there is some component of the society that is bored and 
already doing this (?)


Eric



On Feb 28, 2023, at 9:10 PM, Gillian Densmore 
 wrote:


This John Oliver piece might either amuse or mortify you.
https://www.youtube.com/watch?v=Sqa8Zo2XWc4_channel=LastWeekTonight



On Mon, Feb 27, 2023 at 3:51 PM Jochen Fromm  
wrote:
Terrence Sejnowski argues that the new AI super chatbots are like a 
magic Harry Potter mirror that tells the user what he wants to hear: 
"When people discover the mirror, it seems to provide truth and 
understanding. But it does not. It shows the deep-seated desires of 
anyone who stares into it". ChatGPT, LaMDA, LLaMA and other large 
language models would "take in our words and reflect them back to 
us".

https://www.nytimes.com/2023/02/26/technology/ai-chatbot-information-truth.html

It is true that large language models have absorbed an unimaginably huge 
amount of text, but what if our prefrontal cortex in the brain works 
in the same way?

https://direct.mit.edu/neco/article/35/3/309/114731/Large-Language-Models-and-the-Reverse-Turing-Test

I think it is possible that the "transformer" architecture is so
successful because it is - like the cortical columns in the neocortex
- a modular solution to the problem of what comes next in an
unpredictable world: https://en.wikipedia.org/wiki/Cortical_column
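"What comes next" can be illustrated with the crudest possible language model, a bigram table; transformers learn the same next-token prediction task at vastly larger scale and with far richer context. The corpus and names below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(words):
    """Count, for each word, which words were seen following it."""
    table = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        table[cur][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most frequent successor seen in training, if any."""
    followers = table.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the mirror shows the desires of the one who stares".split()
model = train_bigrams(corpus)
print(predict_next(model, "who"))  # prints stares
```

The gap between this toy and a transformer is exactly the gap Jochen describes: the prediction task is the same, and the rest is architecture, data, and money.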

-J.

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe 
http://redfish.com/mailman/listinfo/friam_redfish.com

FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present
https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .

Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Marcus Daniels
I have seen doctors run internet searches in front of me.  If an LLM is given 
all the medical journals, biology textbooks, and hospital records for training, 
that could be a useful resource for society.


Re: [FRIAM] Magic Harry Potter mirrors or more?

2023-03-01 Thread Santafe
This is fun.  Will have to watch it when I have time.

Is there a large active genre just now combining ChatGPT with deepfakes, to 
generate video of whomever-saying-whatever?

I was thinking a couple of years ago about what direction in big-AI would be 
the most destructive, in requiring extra cognitive load to check what was 
coming in through every sense channel all the time.  Certainly, as much as we 
must live by habit, because doing everything through the prefrontal cortex all 
the time is exhausting (go to a strange country, wake up in the middle of the 
night, where are the lightswitches in this country and how do they work?), 
there clearly are whole sensory modalities that we have just taken for granted 
as long as we could.  I have assumed that the audiovisual channel of watching a 
person say something was near the top of that list.

Clearly a few years ago, deepfakes suddenly took laziness off the table for 
that channel.   The one help was that human-generated nonsense still takes 
human time, on which there is some limit.  

But if we have machine-generated nonsense, delivered through machine-generated 
rendering, we can put whole servers onto it full-time.  Sort of like bitcoin 
mining.  Burn a lot of irreplaceable carbon fuel to generate something of no 
value and some significant social cost.

So I assume there is some component of the society that is bored and already 
doing this (?)

Eric


