On 12/7/23 7:51 AM, glen wrote:

We need less *trust* and more *trustworthiness*. What I meant by "reality distortion field" seems different from what you meant. I meant the effect of being ensconced in privilege, having billions of dollars, swimming in an ocean of sycophants, etc. Musk's reality is severely distorted.

   Touché, and bravo.  Yes, this is yet-more-relevant, and yes to
   /trust vs. trustworthiness/.  I generally trust everyone to pursue
   their own self-interest; what I trust less clearly is my
   understanding of that self-interest and their level of enlightenment
   as they pursue it, which, convolved with their alignment to my idea
   of a "greater good", would seem to be their "trustworthiness".


Of course, I think a part of the TESCREAL club's rhetoric is similar to revelatory religions like Catholicism or Scientology ...

   TESCREAL, the acronym, is new to me but I appreciate the
   cluster/aggregation it offers.  I'm a sucker for all things
   hopeful/futurist/optimistic by some measure, yet also rather
   allergic (the allergy/addiction duality).  From the following paper:

   
https://akjournals.com/view/journals/2054/aop/article-10.1556-2054.2023.00292/article-10.1556-2054.2023.00292.xml

       /The backbone of this worldview is the TESCREAL bundle of
       ideologies—an acronym coined by the critical AI scholars Émile
       Torres and Timnit Gebru to describe an interrelated cluster of
       belief systems: transhumanism, Extropianism, singularitarianism,
       cosmism, Rationalism, Effective Altruism, and longtermism./

or even occult societies and heavy psychedelic users, where initiates have a distorted view but the masters who've "studied" for a long time have a clearer understanding of reality. A wealthy man once told me, "Money is like air. It's everywhere. The difference between you and I is that I know how to build engines that harvest and concentrate it." He clearly felt he had a better understanding of reality than me. The rhetoric inverts.

I feel the same way about inter-species mind reading. When I see humans engineer their local ecology (e.g. damming a river or introducing a biocontrol species), I don't see humans understanding biology *better* than, say, the rats whose day to day lives might be intensely impacted. I see the rats as having the clear view and the humans as having the "distorted" view. Musk, Thiel, and all the rest seem to think they're Hari Seldons.

   In fact, it is beyond inter-species mind-reading; it is an
   individual-of-an-arrogant-species reading the mind of Gaia (or some
   significant subset).  I take your point, however.  I've been
   re-reading Henry Petroski's "To Engineer is Human", and it is flooded
   with examples of the hubris implied here.

But we rats understand that *luck* is the primordial force and those with it (the lucky) are so badly skewed they can't see their hand in front of their face. Perhaps the continually unlucky are also badly skewed? Only those of us who can narratively map our wandering from luck to unluck and back are best situated to understand reality?

   I've put that under my hat and am letting it try to soak in.  It may
   take a while.  I sense something profound in it but haven't been
   able to absorb/parse/internalize it yet.


The same difference exists between, say, a front-end developer and a "close to the metal" embedded systems developer. The former is closer to ideal computation, computronium, as it were. The latter is closer to the actual world, where the rubber meets the road. Which of the two has the more distorted field? Or perhaps only full stack (writ large) developers experience the least distortion?

Arcing back to conceptions of openness: here are some indices that seem more trustworthy than whatever field is being whipped up by the byzantine AI Alliance:

https://opening-up-chatgpt.github.io/

   I had no idea how many list-worthy text-generators were out there (I
   only recognized a few), nor how many categories of openness (or
   closedness) there are within!  I'm a sucker for a good taxonomy, or
   maybe more to the point, for partitioning or embedding a space as a
   way to get some bearings/orientation on the larger landscape.

https://hai.stanford.edu/news/introducing-foundation-model-transparency-index

   I was not aware (either) of the term Foundation Model... useful and
   interesting (from the website):


               /In recent years, a new successful paradigm for building
               AI systems has emerged: Train one model on a huge amount
               of data and adapt it to many applications. We call such
               a model a foundation model./




On 12/6/23 10:47, Steve Smith wrote:
As the habitual tangenteer that I am, I'm left reacting to the phrase "Musk's reality distortion field". Tangent aside, I do very much appreciate Glen's take on this and found the multiple references (much more on-topic than my tangential riff here) interesting and useful. I too hope Stallman will weigh in, and I wonder what the next evolution of the EFF might look like, or be replaced by, in this new evolutionary landscape at the intersection of tech and culture.

I'm hung up, the last few years, on Yuval Harari's Intersubjective Reality <https://medium.com/amalgamate/inter-subjective-realities-64b4f6716f72> as derived from the social science Intersubjectivity <https://en.wikipedia.org/wiki/Intersubjectivity>.

When I first heard Harari's usage/coinage, I reacted to it somewhat the way I did to Kellyanne Conway's Alternative Facts <https://en.wikipedia.org/wiki/Alternative_facts>, but I now deeply appreciate what they are all alluding to, some more disingenuously than others.

I don't disagree that Musk's every action and statement has the effect of "distorting reality", but it is our /Intersubjective Reality/ that is being distorted, not the reality that most of us were trained/steeped in via the philosophical tradition of /Logical Positivism <https://plato.stanford.edu/entries/logical-empiricism/>/. Others here (Social Sciences, Humanism) were probably trained up and steeped more in Phenomenology <https://plato.stanford.edu/entries/phenomenology/> and are more comfortable with Intersubjective Reality.

I find the likes of Musk or Trump or Altman or ( * ) to be the /Personality/ in "Cult of Personality", in much the way the star of this recently discovered in-sync planetary system <https://mashable.com/article/nasa-exoplanets-orbit-star-sync> exists (see below), with the planets' orbits finding pairwise (and more generally n-wise) resonances, all (presumably) coupled exclusively by gravity (and synced through internal dissipative tidal forces?).

Is chatGPT or OpenAI or AIAlliance ( or, or, or, . . . ) yet another species of celestial body in an orbital dance?

Musk, of course, operates in a higher-dimensional field of forces, with Tweets (X's?), public appearances, financial transactions, and launch/contract/release announcements as the "intermediate vector particles", and the Sturm und Drang of individual drama/trauma among the companies, organizations, and individuals affected by it all as the internal dissipative forces.

Mashable article on in-sync planetary system <https://mashable.com/article/nasa-exoplanets-orbit-star-sync>

https://science.nasa.gov/missions/tess/discovery-alert-watch-the-synchronized-dance-of-a-6-planet-system/

On 12/6/23 10:57 AM, Pietro Terna wrote:
    Genius!


On 12/6/23 15:11, glen wrote:
For those of us who refuse to contribute to Musk's reality distortion field: https://thealliance.ai/

Yeah, it's interesting. 2 questions came to my mind: 1) Where is Mozilla? Are they a part of it? And 2) "open" is not a simple concept. Is it possible that so many organizations have a clear understanding of what it means? If so, what do they mean? We've seen, over and over again, a kind of exploitation of Utopian values, especially in infrastructure-level software. (I'd love to get Stallman's opinion.)

One way to clarify someone's position on their private conception of "open" is to ask how they feel about limits to the exportation of encryption software. <https://www.eff.org/deeplinks/2019/08/us-export-controls-and-published-encryption-source-code-explained>

Another tack is to ask how they feel about fake news, trust in institutions, free speech, platforming, etc.

IDK. The AI Alliance smells, to me, kinda like more TESCREAL [1], ripe for exploitation and *-washing [2] by the privileged. If a tech is open and stays open, it'll most likely be because individuals commit to it, not because some meta-corp of mega-corps gets together as "allies". But I'm a bit cynical.

[1] Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism
[2] Green-washing (fossil fuel lobbyists at COP28), ethics-washing ("AI safety"), DEI-washing (sensitivity training), etc.

On 12/5/23 23:47, Pietro Terna wrote:
Dear all,

what about the post below?

Star Wars?

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p St. Johns Cafe   /   Thursdays 9a-12p Zoom
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
