Re: [agi] Tracking down the culprits responsible for conflating IS with OUGHT in LLM terminology

2024-05-19 Thread James Bowery
A plausible figure of merit for the number of authors who can reasonably be
held accountable is one inversely proportional to the argument surface that
provides cover for motivated reasoning.

The Standard Model has 18 adjustable parameters within a mathematical
formula that has a short algorithmic description.

Reasonable # of Higgs authors ~ 1/(smallN + 18), where smallN is the length
of that short algorithmic description.

The Ethical Theory of AI Safety held forth by "On the Opportunities and
Risks of Foundation Models" has a much larger number of "adjustable
parameters" plus a much longer "algorithmic description": a total that,
while not infinite, is inestimable.
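
To make the proportionality concrete, here is a toy rendering in Python
(the constants and surface sizes are illustrative assumptions of mine, not
estimates from either paper):

# Figure of merit: the reasonable author count is inversely proportional
# to the argument surface (adjustable parameters + length of the
# algorithmic description) providing cover for motivated reasoning.
def reasonable_authors(n_params, description_len, k=100_000):
    # k is an arbitrary proportionality constant, assumed for illustration.
    return k / (n_params + description_len)

# Standard Model: 18 adjustable parameters plus a short description.
print(reasonable_authors(18, 10))        # ~3571: 5154 Higgs authors is no scandal

# An argument surface that, while not infinite, is inestimable:
# crudely stood in for here by a huge number.
print(reasonable_authors(10**6, 10**6))  # ~0.05: accountability dissolves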

On Sun, May 19, 2024 at 11:19 AM Matt Mahoney wrote:

> A paper on the mass of the Higgs boson has 5154 authors.
> https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803

Re: [agi] Tracking down the culprits responsible for conflating IS with OUGHT in LLM terminology

2024-05-19 Thread Matt Mahoney
A paper on the mass of the Higgs boson has 5154 authors.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803

A paper by the COVIDsurg collaboration at the University of Birmingham has
15025 authors.
https://www.guinnessworldrecords.com/world-records/653537-most-authors-on-a-single-peer-reviewed-academic-paper

Research is expensive.


[agi] Tracking down the culprits responsible for conflating IS with OUGHT in LLM terminology

2024-05-18 Thread James Bowery
The first job of supremacist theocrats is to conflate IS with OUGHT and
then cram it down everyone's throat.

So it was with increasing suspicion that I saw the term "foundation model"
being used in a way that conflates next-token-prediction training with
supremacist theocrats convening inquisitions to torture the hapless
prediction model into submission with "supervision".

At present, it appears this may go back to *at least* October 18, 2021, in
"On the Opportunities and Risks of Foundation Models"
(https://arxiv.org/abs/2108.07258), which sports this "definition" in its
introductory section about "*Foundation models.*":

"On a technical level, foundation models are enabled by transfer
learning... Within deep learning, *pretraining* is the dominant approach to
transfer learning: a model is trained on a surrogate task (often just as
a means to an end) and then adapted to the downstream task of interest via
*fine-tuning*.  Transfer learning is what makes foundation models
possible..."
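
As a minimal sketch of the mechanism that passage describes (a toy
count-based bigram model standing in for a real transformer; the code and
corpora are mine, not the paper's):

from collections import Counter, defaultdict

def update(model, tokens):
    # Next-token prediction as pure count statistics: P(next | current).
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1

def predict(model, token):
    # Most probable next token under the model, or None if unseen.
    counts = model[token]
    return max(counts, key=counts.get) if counts else None

model = defaultdict(Counter)

# "Pretraining": the surrogate next-token task, a means to an end.
update(model, "the cat sat on the mat . the dog sat on the rug .".split())

# "Fine-tuning": the *same* model is adapted, not rebuilt, for the
# downstream task of interest.
update(model, "the model sat on the gpu .".split())

print(predict(model, "sat"))  # 'on': learned in pretraining, kept after adaptation

Note that both steps, as defined, are statistics of what IS; nothing in the
definition itself supplies an OUGHT.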

Of course, the supremacist theocrats must maintain plausible deniability of
being "the authors of confusion". The primary way to accomplish this is to
have plausible deniability of intent to confuse and to plead, if they are
confronted with reality, that it is *they* who are confused!  After all,
have we not heard it repeated time after time, "Never attribute to malice
that which can be explained by stupidity"?  This particular "razor" is the
favorite of bureaucrats whose unenlightened self-interest and stupidity
continually benefit themselves while destroying the powerless victims of
their coddling BLOB.  They didn't *mean* to be immune to any
accountability!  It just kinda *happened* that they live in network effect
monopolies that insulate them from accountability.  They didn't *want* to
be unaccountable wielders of power fercrissakes!  Stop being so
*hate*-filled already, you *envious* deplorables!

So it is hardly a surprise that the author of the above report, like so
many such "AI safety" papers, is not an author but a BLOB of authors:

Rishi Bommasani*, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora,
Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut,
Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card,
Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel,
Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya,
Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei,
Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman,
Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson,
John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard,
Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti,
Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass,
Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee,
Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li,
Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani,
Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan,
Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles,
Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr,
Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance,
Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong,
Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh,
Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan,
Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang,
William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie,
Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang,
Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou,
Percy Liang*

Whatchagonnadoboutit?  Theorize a *conspiracy* or something?
