Re: [agi] Fwd: The Bloomer's Paradox

2025-11-27 Thread James Bowery
On Wed, Nov 26, 2025 at 5:50 PM Matt Mahoney 
wrote:

> ... 1 GB is as much as a human could read over a lifetime, and therefore
> should be enough to train a language model to human level. Of course LLMs
> train on much larger sets, which makes them far more knowledgeable than any
> human.
>

Your benchmark, using only the 1 GB, 20-year-old snapshot of Wikipedia, would
go a long way toward accomplishing that if it were incentivized under the
Genesis Mission.  By my estimate, $100M would be the minimum underwriting
required.

...
>
> Besides AI, what questions could you answer by a compression contest and
> what data would you use?
>

Don't conflate the lossless compression prize with what people are thinking
of as "AI".  It is forensic epistemology.  Human intervention is essential,
at least at this stage.  You aren't going to get a 120-bit Lagrangian of
the Big Bang localized to the context of Wikipedia's generation circa
2005.  The point is to get people to stop their damn yammering at each
other while we drift toward a rhyme with the Thirty Years' War over our
beliefs, and start operationalizing what they're saying -- and not just
about "the world" devoid of human actors subverting human knowledge with
their "edits".


> I did some work on your Laboratory of the Counties a couple of years ago.
> Have you made any discoveries from this data?
>

That was originally my attempt to get Charlie Smith (Tukey's student -- ask
Hinton about Charlie funding the second connectionist summer) to use his
connections to reform the social sciences.  I had finally, after 20 years,
gotten him to understand that algorithmic information could be a superior
information criterion to all others.  But this was in the midst of the
Trump upset of 2016, and Charlie was and still is solidly on "Team Blue".
I, being solidly NOT on "Team Blue", started to lose traction despite the
fact that I was trying to find a neutral ground between Team Red and Team
Blue based on the THIRD connectionist summer -- and thereby avert a Thirty
Years' War.

The biggest part of the problem I've found is that people on Team Blue,
despite all their hair-on-fire hysterics about Team Red pulling out their
guns and bibles any minute to round up anyone who isn't an Aryan Superman
Hitler Wet Dream, are, when it comes right down to it, certain that Team Red
will not resort to violence to protect individual moral agency against The
Unfriendly AGI known as The Global Economy.  They are certain, as
apparently are you, that Everything Is Under Control (as Robert Anton
Wilson wryly titled his critique).  So I'd really like to wake you guys up
if that is at all possible.  Killing 25% of the population just so people
can feel like they have some control over their local communities should
not be necessary.

As far as my own work on that dataset, yes, I've made some discoveries, but
they're mainly about how to properly estimate the algorithmic information
of a model vs. residual errors based on instrument precision, with
systematic tracking of the Jacobians into the compressed representation and
back out to its reconstruction.
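
A toy sketch of the accounting I mean (hypothetical names, zlib standing in
for a real estimator, not the code I've actually written):

    import zlib
    import numpy as np

    def codelength_bits(blob: bytes) -> int:
        # crude stand-in for algorithmic information: length of a lossless encoding
        return 8 * len(zlib.compress(blob, 9))

    def two_part_codelength(params, residuals, instrument_precision):
        # model part: the serialized parameters
        model_bits = codelength_bits(np.asarray(params, dtype=np.float64).tobytes())
        # data-given-model part: residuals quantized at instrument precision,
        # so the "errors in measurement" are kept rather than thrown away
        quantized = np.round(np.asarray(residuals) / instrument_precision).astype(np.int64)
        residual_bits = codelength_bits(quantized.tobytes())
        return model_bits + residual_bits

The model with the smallest total wins; lossy shortcuts would just hide the
residual term.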

Maybe the most important discovery is that my intuition that one could
extract dynamics from that dataset, despite its being primarily spatial in
nature, has now been vindicated by bioinformatics in the form of virtual
time used to infer cellular development trajectories with differential
equations.  I have a model of county dynamics that uses information
geometry to discover the state space and then backs that out to impute the
>90% missing data in the full time-series panel, so that I can then
discover the differential equations.  I haven't released that code yet, but
it is actually something the social sciences haven't gotten around to doing
yet.
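
A cartoon of that pipeline (plain SVD and a linear ODE standing in for the
information-geometric machinery; names and ranks are illustrative only):

    import numpy as np

    def impute_and_fit(panel, rank=3, dt=1.0):
        # panel: counties x years, with NaN for the >90% missing entries
        filled = np.where(np.isnan(panel),
                          np.nanmean(panel, axis=1, keepdims=True), panel)
        for _ in range(50):                        # alternate low-rank fit / re-impute
            u, s, vt = np.linalg.svd(filled, full_matrices=False)
            lowrank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
            filled = np.where(np.isnan(panel), lowrank, panel)
        state = vt[:rank, :]                       # latent state trajectory over time
        dstate = np.gradient(state, dt, axis=1)    # crude time derivative
        A, *_ = np.linalg.lstsq(state.T, dstate.T, rcond=None)   # ds/dt ~ A.T @ s
        return filled, state, A

The real model is nonlinear, and what gets charged is the codelength of the
dynamics plus the residuals, but the shape of the computation is the same.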

>
> -- Matt Mahoney, [email protected]
>
> On Wed, Nov 26, 2025, 4:24 PM James Bowery  wrote:
>
>> What is canonical human knowledge?
>>
>> You may recall me suggesting the Wikipedia change log as the Hutter Prize
>> corpus which would have proven impractical in 2005 (as Marcus pointed out
>> and as you may have as well).
>>
>> That would have been to assist in forensic epistemology:
>>
>> Discovering, not only the rampant generators of bias that were becoming
>> obvious in Wikipedia back in 2005, but, the morphisms between various
>> "schools of thought":  A Rosetta Stone of Human Knowledge (which was also a
>> reason I suggested including the other language versions of Wikipedia).
>>
>> Canonical human knowledge would include, of course, identities latent in
>> the data as generators of bias.  But it would also include scientific
>> discovery which sometimes arises when people get past inappropriate use of
>> symbols in technical languages, many of which are represented in Wikipedia.
>>
>> See https://spasim.org/docs/leibniz_quine_etter_identity.html for
>> something Tom Etter* and I were working toward when we were both basically
>> booted from Silicon Valley by H-1b fraud.  Language isn't "just" language.
>>
>> *I didn't know un

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-26 Thread Matt Mahoney
I didn't include multiple languages in the large text benchmark because I
reasoned that passing the Turing test does not require that the computer be
multilingual. I didn't consider including edits because it would be too
much data. 1 GB is as much as a human could read over a lifetime, and
therefore should be enough to train a language model to human level. Of
course LLMs train on much larger sets, which makes them far more
knowledgeable than any human.

But I agree that training on edits would allow you to predict text based on
user ID, in effect making smaller models of thousands of minds, including
their biases and areas of expertise. The ID in enwik9 is just that of the
last person to edit each article. The latest Hutter Prize entry compresses
IDs in a separate stream, not even using them.
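
Even something this crude would start to expose per-editor regularities
(zlib as a stand-in for a real model; field names are hypothetical):

    import zlib
    from collections import defaultdict

    def per_editor_bits_per_char(edits):
        # edits: iterable of (user_id, text) pairs from the change log
        streams = defaultdict(list)
        for uid, text in edits:
            streams[uid].append(text)
        # bits per character of each editor's concatenated stream; systematic
        # bias or narrow expertise shows up as an unusually predictable stream
        return {uid: 8 * len(zlib.compress("".join(texts).encode("utf-8")))
                     / max(1, sum(len(t) for t in texts))
                for uid, texts in streams.items()}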

Besides AI, what questions could you answer by a compression contest and
what data would you use? I did some work on your Laboratory of the Counties
a couple of years ago. Have you made any discoveries from this data?

-- Matt Mahoney, [email protected]

On Wed, Nov 26, 2025, 4:24 PM James Bowery  wrote:

> What is canonical human knowledge?
>
> You may recall me suggesting the Wikipedia change log as the Hutter Prize
> corpus which would have proven impractical in 2005 (as Marcus pointed out
> and as you may have as well).
>
> That would have been to assist in forensic epistemology:
>
> Discovering, not only the rampant generators of bias that were becoming
> obvious in Wikipedia back in 2005, but, the morphisms between various
> "schools of thought":  A Rosetta Stone of Human Knowledge (which was also a
> reason I suggested including the other language versions of Wikipedia).
>
> Canonical human knowledge would include, of course, identities latent in
> the data as generators of bias.  But it would also include scientific
> discovery which sometimes arises when people get past inappropriate use of
> symbols in technical languages, many of which are represented in Wikipedia.
>
> See https://spasim.org/docs/leibniz_quine_etter_identity.html for
> something Tom Etter* and I were working toward when we were both basically
> booted from Silicon Valley by H-1b fraud.  Language isn't "just" language.
>
> *I didn't know until years after Tom's death that he and Solomonoff were
> friends and arrived early at the Dartmouth Summer of AI together.  I didn't
> even hear from Tom about Solomonoff or Algorithmic Information Theory and
> related concepts. I only hired Tom to do this kind of work because I saw a
> problem with mathematical foundations going back to my work at VIEWTRON
> where I was the future's architect responsible for establishing what might
> have been _the_ computer network protocol we all live with today.  That's
> why I went to work at HP's "Internet Chapter 2" project despite their story
> of what they were doing making little sense to me.
>
>
>
>
>
> On Wed, Nov 26, 2025 at 10:35 AM Matt Mahoney 
> wrote:
>
>> Besides language model evaluation, what are some examples of questions
>> you want to answer using lossless data compression?
>>
>> -- Matt Mahoney, [email protected]
>>
>> On Tue, Nov 25, 2025, 10:39 PM James Bowery  wrote:
>>
>>>
>>>
>>> On Mon, Nov 24, 2025 at 9:05 AM Matt Mahoney 
>>> wrote:
>>>
 ...
 Which raises the even bigger problem that as you mentioned, motivation,
 ego, and money drive science. Scientists who should know better still want
 to prove themselves right...

>>>
>>> This holds also for scientists who want to prove that it is hopeless to
>>> hold them to account with an objective model selection criterion.
>>>
>>> Not only is that motivation enormous, it requires almost no motivation
>>> at all since those in power can't be held to account by those without power
>>> -- so, even if they are so foolish as to engage the powerless in argument,
>>> they can make BS arguments respond to any counter-argument with more BS.
>>> This is being automated with LLMs on a mass scale now that Turing's BS test
>>> has been passed.
>>>
>>>
 Suppose you want to answer the question of whether covid-19 vaccines
 are safe and effective...

>>>
>>> That's not what large models are for.  Large models either answer an
>>> enormous range of questions effectively because they have an effective
>>> world model or they are narrow pre-programmed small models that do
>>> simulations based on human expert specifications; merely encoding prior
>>> expert knowledge in simulation algorithms.
>>>
>>> The data set huge.
>>>
>>> As I said, there is a huge difference between the data that go into
>>> climate model and the data that go into macrosocial psychology models such
>>> as those upon which you base your argument in the OP.
>>>
>>>
 ...Do you trust the US CDC? Do you trust the Chinese CDC? Do you trust
 Turkmenistan, the only country to report zero cases throughout the
 pandemic? Who gets to decide which data to include?

>>>
>>> Data and models are in different categories there

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-26 Thread James Bowery
What is canonical human knowledge?

You may recall me suggesting the Wikipedia change log as the Hutter Prize
corpus which would have proven impractical in 2005 (as Marcus pointed out
and as you may have as well).

That would have been to assist in forensic epistemology:

Discovering not only the rampant generators of bias that were becoming
obvious in Wikipedia back in 2005, but also the morphisms between various
"schools of thought": A Rosetta Stone of Human Knowledge (which was also a
reason I suggested including the other language versions of Wikipedia).

Canonical human knowledge would include, of course, identities latent in
the data as generators of bias.  But it would also include scientific
discovery which sometimes arises when people get past inappropriate use of
symbols in technical languages, many of which are represented in Wikipedia.

See https://spasim.org/docs/leibniz_quine_etter_identity.html for something
Tom Etter* and I were working toward when we were both basically booted
from Silicon Valley by H-1b fraud.  Language isn't "just" language.

*I didn't know until years after Tom's death that he and Solomonoff were
friends and arrived early at the Dartmouth Summer of AI together.  I didn't
even hear from Tom about Solomonoff or Algorithmic Information Theory and
related concepts. I only hired Tom to do this kind of work because I saw a
problem with mathematical foundations going back to my work at VIEWTRON
where I was the future's architect responsible for establishing what might
have been _the_ computer network protocol we all live with today.  That's
why I went to work at HP's "Internet Chapter 2" project despite their story
of what they were doing making little sense to me.

On Wed, Nov 26, 2025 at 10:35 AM Matt Mahoney 
wrote:

> Besides language model evaluation, what are some examples of questions you
> want to answer using lossless data compression?
>
> -- Matt Mahoney, [email protected]
>
> On Tue, Nov 25, 2025, 10:39 PM James Bowery  wrote:
>
>>
>>
>> On Mon, Nov 24, 2025 at 9:05 AM Matt Mahoney 
>> wrote:
>>
>>> ...
>>> Which raises the even bigger problem that as you mentioned, motivation,
>>> ego, and money drive science. Scientists who should know better still want
>>> to prove themselves right...
>>>
>>
>> This holds also for scientists who want to prove that it is hopeless to
>> hold them to account with an objective model selection criterion.
>>
>> Not only is that motivation enormous, it requires almost no motivation at
>> all since those in power can't be held to account by those without power --
>> so, even if they are so foolish as to engage the powerless in argument,
>> they can make BS arguments respond to any counter-argument with more BS.
>> This is being automated with LLMs on a mass scale now that Turing's BS test
>> has been passed.
>>
>>
>>> Suppose you want to answer the question of whether covid-19 vaccines are
>>> safe and effective...
>>>
>>
>> That's not what large models are for.  Large models either answer an
>> enormous range of questions effectively because they have an effective
>> world model or they are narrow pre-programmed small models that do
>> simulations based on human expert specifications; merely encoding prior
>> expert knowledge in simulation algorithms.
>>
>> The data set huge.
>>
>> As I said, there is a huge difference between the data that go into
>> climate model and the data that go into macrosocial psychology models such
>> as those upon which you base your argument in the OP.
>>
>>
>>> ...Do you trust the US CDC? Do you trust the Chinese CDC? Do you trust
>>> Turkmenistan, the only country to report zero cases throughout the
>>> pandemic? Who gets to decide which data to include?
>>>
>>
>> Data and models are in different categories therefore data selection
>> criteria and model selection criteria are in different categories.  I
>> addressed this in the README at
>> https://github.com/jabowery/HumesGuillotine
>>
>>
>>> How do you convince people who believe that the moon landing was fake?
>>>
>>
>> You don't.  What you do is convince decisionmakers to take information
>> criteria for model selection seriously enough to apply algorithmic
>> information theory.
>>
>> As to the uncomputability of proving one has found the best possible
>> scientific model for a given dataset leading to a potentially bottomless
>> pit of resources being poured down the science rat hole:  Precisely!
>> That's why funding authorities need criteria that holds those receiving the
>> science funding objectively accountable and in such a manner that they
>> don't have to worry about leaked evaluation datasets.
>>
>> -- Matt Mahoney, [email protected]
>>>
>>> On Sun, Nov 23, 2025, 10:30 AM James Bowery  wrote:
>>>
 There are, of course, an infinite number of "arguments" one can come up
 with to expand what Nick Szabo calls the "Argument Surface" and that is
 where the real "problem for statistics about people" arises -- not in the
>>

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-26 Thread Matt Mahoney
Besides language model evaluation, what are some examples of questions you
want to answer using lossless data compression?

-- Matt Mahoney, [email protected]

On Tue, Nov 25, 2025, 10:39 PM James Bowery  wrote:

>
>
> On Mon, Nov 24, 2025 at 9:05 AM Matt Mahoney 
> wrote:
>
>> ...
>> Which raises the even bigger problem that as you mentioned, motivation,
>> ego, and money drive science. Scientists who should know better still want
>> to prove themselves right...
>>
>
> This holds also for scientists who want to prove that it is hopeless to
> hold them to account with an objective model selection criterion.
>
> Not only is that motivation enormous, it requires almost no motivation at
> all since those in power can't be held to account by those without power --
> so, even if they are so foolish as to engage the powerless in argument,
> they can make BS arguments respond to any counter-argument with more BS.
> This is being automated with LLMs on a mass scale now that Turing's BS test
> has been passed.
>
>
>> Suppose you want to answer the question of whether covid-19 vaccines are
>> safe and effective...
>>
>
> That's not what large models are for.  Large models either answer an
> enormous range of questions effectively because they have an effective
> world model or they are narrow pre-programmed small models that do
> simulations based on human expert specifications; merely encoding prior
> expert knowledge in simulation algorithms.
>
> The data set huge.
>
> As I said, there is a huge difference between the data that go into
> climate model and the data that go into macrosocial psychology models such
> as those upon which you base your argument in the OP.
>
>
>> ...Do you trust the US CDC? Do you trust the Chinese CDC? Do you trust
>> Turkmenistan, the only country to report zero cases throughout the
>> pandemic? Who gets to decide which data to include?
>>
>
> Data and models are in different categories therefore data selection
> criteria and model selection criteria are in different categories.  I
> addressed this in the README at
> https://github.com/jabowery/HumesGuillotine
>
>
>> How do you convince people who believe that the moon landing was fake?
>>
>
> You don't.  What you do is convince decisionmakers to take information
> criteria for model selection seriously enough to apply algorithmic
> information theory.
>
> As to the uncomputability of proving one has found the best possible
> scientific model for a given dataset leading to a potentially bottomless
> pit of resources being poured down the science rat hole:  Precisely!
> That's why funding authorities need criteria that holds those receiving the
> science funding objectively accountable and in such a manner that they
> don't have to worry about leaked evaluation datasets.
>
> -- Matt Mahoney, [email protected]
>>
>> On Sun, Nov 23, 2025, 10:30 AM James Bowery  wrote:
>>
>>> There are, of course, an infinite number of "arguments" one can come up
>>> with to expand what Nick Szabo calls the "Argument Surface" and that is
>>> where the real "problem for statistics about people" arises -- not in the
>>> choice of language ambiguity.  People who are not motivated to get rid of
>>> motivated reasoning will not be motivated to solve problems like the choice
>>> of language ambiguity -- as just one example of many.  I will grant,
>>> however, that particular redoubt is only for the elect who, like you and I,
>>> have been involved with judging the Hutter Prize.  IIRC, even Shane Legg
>>> sets forth that argument as a reason to avoid the ALgorithmic Information
>>> Criterion -- and you can't get much more authoritative than that unless you
>>> go to Hutter himself or, in the hypothetical case, Solomonoff.  I did
>>> express concern to Marcus at one time, when Solomonoff was still living and
>>> shortly after the Hutter Prize had been announced, that Solomonoff might
>>> "torpedo" the Hutter Prize with that argument (if I recall the exact
>>> wording).  Marcus reassured me that Solomonoff would do no such thing.
>>> IIRC shortly thereafter Solomoff posted something like that argument to his
>>> blog.  IIRC Marcus objected to using the ALIC for global warming despite
>>> the Biden administration setting the value of addressing that issue at
>>> around $10T/year -- and I can see merit in that objection given the scale
>>> of the data.
>>>
>>> But it all comes down to "incentives" when we are addressing the
>>> "motivated reasoning" problem and that's why I posted my Congressional
>>> testimony about the "incentives" regarding rocket technology -- which you
>>> commented on but did not seem to get the point I was trying to make about
>>> incentives.
>>>
>>> Once we're in the realm of macrosocial psychological dynamical models,
>>> the incentives are so great as to beggar the imagination.  This is far
>>> greater even than Biden's rNPV of $10T/year and the macrosocial psychology
>>> data is many orders of magnitude smaller than climate dat

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-26 Thread James Bowery
From the Hume's Guillotine README:

The reason you keep all "errors in measurement" -- the reason you avoid
lossy compression -- is to avoid what is known as "confirmation bias" or,
what might be called "Ockham's Chainsaw Massacre". Almost all criticisms of
Ockham's Razor boil down to mischaracterizing it as Ockham's Chainsaw
Massacre. The remaining criticisms of Ockham's Razor boil down to the claim
that those selecting the data never include data that doesn't fit their
preconceptions. That critique may be reasonable but it is not an argument
against the Algorithmic Information Criterion, which only applies to a
given dataset. Models and data are different. Therefore model selection
criteria are qualitatively different from data selection criteria.
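
In symbols, the criterion being described is just the two-part code

    M^* \;=\; \arg\min_M \big[\, K(M) + K(D \mid M) \,\big]

where K(M) is the (approximated) algorithmic information of the model and
K(D \mid M) is that of the residuals -- the measurement errors -- given the
model. Lossy compression silently deletes part of K(D \mid M), and that
deleted part is exactly where confirmation bias hides.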

Yes, people can and will argue over what data to include or exclude -- but
the Algorithmic Information Criterion traps the intellectually dishonest by
making their job much harder, since they must include exponentially more
data biased toward their particular agenda in order to wash out the data
coherence (and interdisciplinary consilience) in the rest of the dataset.
The ever-increasing diversity of data sources identifies the sources of
bias -- and then starts predicting the behavior of data sources in terms of
their bias, as bias. Trap sprung! This is much the same argument as that
leveled against conspiracy theories: at some point it becomes simply
impractical to hide a lie against the increasing diversity of observations
and perspectives.

On Tue, Nov 25, 2025 at 9:39 PM James Bowery  wrote:

>
>
> On Mon, Nov 24, 2025 at 9:05 AM Matt Mahoney 
> wrote:
>
>> ...
>> Which raises the even bigger problem that as you mentioned, motivation,
>> ego, and money drive science. Scientists who should know better still want
>> to prove themselves right...
>>
>
> This holds also for scientists who want to prove that it is hopeless to
> hold them to account with an objective model selection criterion.
>
> Not only is that motivation enormous, it requires almost no motivation at
> all since those in power can't be held to account by those without power --
> so, even if they are so foolish as to engage the powerless in argument,
> they can make BS arguments respond to any counter-argument with more BS.
> This is being automated with LLMs on a mass scale now that Turing's BS test
> has been passed.
>
>
>> Suppose you want to answer the question of whether covid-19 vaccines are
>> safe and effective...
>>
>
> That's not what large models are for.  Large models either answer an
> enormous range of questions effectively because they have an effective
> world model or they are narrow pre-programmed small models that do
> simulations based on human expert specifications; merely encoding prior
> expert knowledge in simulation algorithms.
>
> The data set huge.
>
> As I said, there is a huge difference between the data that go into
> climate model and the data that go into macrosocial psychology models such
> as those upon which you base your argument in the OP.
>
>
>> ...Do you trust the US CDC? Do you trust the Chinese CDC? Do you trust
>> Turkmenistan, the only country to report zero cases throughout the
>> pandemic? Who gets to decide which data to include?
>>
>
> Data and models are in different categories therefore data selection
> criteria and model selection criteria are in different categories.  I
> addressed this in the README at
> https://github.com/jabowery/HumesGuillotine
>
>
>> How do you convince people who believe that the moon landing was fake?
>>
>
> You don't.  What you do is convince decisionmakers to take information
> criteria for model selection seriously enough to apply algorithmic
> information theory.
>
> As to the uncomputability of proving one has found the best possible
> scientific model for a given dataset leading to a potentially bottomless
> pit of resources being poured down the science rat hole:  Precisely!
> That's why funding authorities need criteria that holds those receiving the
> science funding objectively accountable and in such a manner that they
> don't have to worry about leaked evaluation datasets.
>
> -- Matt Mahoney, [email protected]
>>
>> On Sun, Nov 23, 2025, 10:30 AM James Bowery  wrote:
>>
>>> There are, of course, an infinite number of "arguments" one can come up
>>> with to expand what Nick Szabo calls the "Argument Surface" and that is
>>> where the real "problem for statistics about people" arises -- not in the
>>> choice of language ambiguity.  People who are not motivated to get rid of
>>> motivated reasoning will not be motivated to solve problems like the choice
>>> of language ambiguity -- as just one example of many.  I will grant,
>>> however, that particular redoubt is only for the elect who, like you and I,
>>> have been involved with judging the Hutter Prize.  IIRC, even Shane Legg
>>> sets forth that argument as a reason to avoid the ALgorithmic Infor

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-25 Thread James Bowery
On Mon, Nov 24, 2025 at 9:05 AM Matt Mahoney 
wrote:

> ...
> Which raises the even bigger problem that as you mentioned, motivation,
> ego, and money drive science. Scientists who should know better still want
> to prove themselves right...
>

This holds also for scientists who want to prove that it is hopeless to
hold them to account with an objective model selection criterion.

Not only is that motivation enormous, it requires almost no motivation at
all since those in power can't be held to account by those without power --
so, even if they are so foolish as to engage the powerless in argument,
they can make BS arguments and respond to any counter-argument with more BS.
This is being automated with LLMs on a mass scale now that Turing's BS test
has been passed.


> Suppose you want to answer the question of whether covid-19 vaccines are
> safe and effective...
>

That's not what large models are for.  Models are either large ones that
answer an enormous range of questions effectively because they have an
effective world model, or narrow, pre-programmed small models that run
simulations from human expert specifications, merely encoding prior expert
knowledge in simulation algorithms.

The data set is huge.

As I said, there is a huge difference between the data that go into climate
model and the data that go into macrosocial psychology models such as those
upon which you base your argument in the OP.


> ...Do you trust the US CDC? Do you trust the Chinese CDC? Do you trust
> Turkmenistan, the only country to report zero cases throughout the
> pandemic? Who gets to decide which data to include?
>

Data and models are in different categories; therefore data selection
criteria and model selection criteria are in different categories.  I
addressed this in the README at https://github.com/jabowery/HumesGuillotine


> How do you convince people who believe that the moon landing was fake?
>

You don't.  What you do is convince decisionmakers to take information
criteria for model selection seriously enough to apply algorithmic
information theory.

As to the uncomputability of proving one has found the best possible
scientific model for a given dataset leading to a potentially bottomless
pit of resources being poured down the science rat hole: precisely!
That's why funding authorities need criteria that hold those receiving the
science funding objectively accountable, and in such a manner that they
don't have to worry about leaked evaluation datasets.
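
The entire evaluation can be as mechanical as this (paths hypothetical;
scoring essentially as the Hutter Prize already does it):

    import hashlib, os, subprocess

    def score_submission(decompressor_path, archive_path, original_path):
        # the score is just the total bytes the entrant ships
        score = os.path.getsize(decompressor_path) + os.path.getsize(archive_path)
        # accountability is bit-exact reconstruction of the benchmark data,
        # so there is no held-out evaluation set to leak in the first place
        subprocess.run([decompressor_path, archive_path, "reconstructed"], check=True)
        same = (hashlib.sha256(open("reconstructed", "rb").read()).digest() ==
                hashlib.sha256(open(original_path, "rb").read()).digest())
        return score if same else None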

> -- Matt Mahoney, [email protected]
>
> On Sun, Nov 23, 2025, 10:30 AM James Bowery  wrote:
>
>> There are, of course, an infinite number of "arguments" one can come up
>> with to expand what Nick Szabo calls the "Argument Surface" and that is
>> where the real "problem for statistics about people" arises -- not in the
>> choice of language ambiguity.  People who are not motivated to get rid of
>> motivated reasoning will not be motivated to solve problems like the choice
>> of language ambiguity -- as just one example of many.  I will grant,
>> however, that particular redoubt is only for the elect who, like you and I,
>> have been involved with judging the Hutter Prize.  IIRC, even Shane Legg
>> sets forth that argument as a reason to avoid the ALgorithmic Information
>> Criterion -- and you can't get much more authoritative than that unless you
>> go to Hutter himself or, in the hypothetical case, Solomonoff.  I did
>> express concern to Marcus at one time, when Solomonoff was still living and
>> shortly after the Hutter Prize had been announced, that Solomonoff might
>> "torpedo" the Hutter Prize with that argument (if I recall the exact
>> wording).  Marcus reassured me that Solomonoff would do no such thing.
>> IIRC shortly thereafter Solomoff posted something like that argument to his
>> blog.  IIRC Marcus objected to using the ALIC for global warming despite
>> the Biden administration setting the value of addressing that issue at
>> around $10T/year -- and I can see merit in that objection given the scale
>> of the data.
>>
>> But it all comes down to "incentives" when we are addressing the
>> "motivated reasoning" problem and that's why I posted my Congressional
>> testimony about the "incentives" regarding rocket technology -- which you
>> commented on but did not seem to get the point I was trying to make about
>> incentives.
>>
>> Once we're in the realm of macrosocial psychological dynamical models,
>> the incentives are so great as to beggar the imagination.  This is far
>> greater even than Biden's rNPV of $10T/year and the macrosocial psychology
>> data is many orders of magnitude smaller than climate data.  That said,
>> there is room for your concern about choice of language in conjunction with
>> the identification "noise" regarding which, as I've often pointed out:
>> "one man's noise is another man's cyphertext".
>>
>> So we have two "argument surfaces" here:
>>
>> How much of the macrosocial dataset is "*noise*" as opposed to
>> inadequately mot

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-24 Thread Matt Mahoney
The proof of language independence in Kolmogorov complexity as the data
gets larger is that you can always change the language by appending a
fixed-size translator. For English, that's about 10^9 bits, which is a
factor of 2^(10^9) ≈ 10^300,000,000 in probability.
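
Stated as the invariance theorem: for any two description languages A and B
there is a translator constant c_{A,B}, independent of x, with

    \forall x : \;\; \lvert K_A(x) - K_B(x) \rvert \;\le\; c_{A,B}

and a c_{A,B} of about 10^9 bits corresponds to a probability ratio of at
most 2^{c_{A,B}} \approx 10^{300{,}000{,}000}.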

We can reduce this problem by using the simplest language possible, like
your idea of a state machine made of the fewest 2-input NOR gates that
sequentially outputs enwik9. But such machines are really hard to program.
Wolfram held a contest to prove that a particular 2-state, 3-symbol
non-halting Turing machine is universal, but nobody knows how to even write
a "hello world" program on it.

Which raises the bigger problem that Kolmogorov complexity is not
computable. The winning theory then becomes the one that people put the
most effort into solving.

Which raises the even bigger problem that as you mentioned, motivation,
ego, and money drive science. Scientists who should know better still want
to prove themselves right. If the experiment doesn't give the right answer,
then fix the experiment. This happens even in physics, but is especially
bad in medicine and the social sciences where you can cherry pick the data
that supports your theories.

Suppose you want to answer the question of whether covid-19 vaccines are
safe and effective. The data set is huge. Just on Worldometer you have
case, hospitalization, and death rates by week and country, with vaccination
rates and test coverage. There are thousands of studies, millions of genome
sequences of different strains, billions of raw data points for individual
cases, and tracking data for billions of people in Asian countries where
people had to run apps or wear a device that continually reported their
location to the government. Do you compress all of it? What about data you
think is irrelevant? What about data you think is unreliable? What about
studies that were not peer reviewed? What about studies funded by vaccine
makers? Do you trust the US CDC? Do you trust the Chinese CDC? Do you trust
Turkmenistan, the only country to report zero cases throughout the
pandemic? Who gets to decide which data to include?

How do you convince people who believe that the moon landing was fake? How
do you convince people when anything on the Internet could be fake? When
any text or image or video could be created by AI?

-- Matt Mahoney, [email protected]

On Sun, Nov 23, 2025, 10:30 AM James Bowery  wrote:

> There are, of course, an infinite number of "arguments" one can come up
> with to expand what Nick Szabo calls the "Argument Surface" and that is
> where the real "problem for statistics about people" arises -- not in the
> choice of language ambiguity.  People who are not motivated to get rid of
> motivated reasoning will not be motivated to solve problems like the choice
> of language ambiguity -- as just one example of many.  I will grant,
> however, that particular redoubt is only for the elect who, like you and I,
> have been involved with judging the Hutter Prize.  IIRC, even Shane Legg
> sets forth that argument as a reason to avoid the ALgorithmic Information
> Criterion -- and you can't get much more authoritative than that unless you
> go to Hutter himself or, in the hypothetical case, Solomonoff.  I did
> express concern to Marcus at one time, when Solomonoff was still living and
> shortly after the Hutter Prize had been announced, that Solomonoff might
> "torpedo" the Hutter Prize with that argument (if I recall the exact
> wording).  Marcus reassured me that Solomonoff would do no such thing.
> IIRC shortly thereafter Solomoff posted something like that argument to his
> blog.  IIRC Marcus objected to using the ALIC for global warming despite
> the Biden administration setting the value of addressing that issue at
> around $10T/year -- and I can see merit in that objection given the scale
> of the data.
>
> But it all comes down to "incentives" when we are addressing the
> "motivated reasoning" problem and that's why I posted my Congressional
> testimony about the "incentives" regarding rocket technology -- which you
> commented on but did not seem to get the point I was trying to make about
> incentives.
>
> Once we're in the realm of macrosocial psychological dynamical models, the
> incentives are so great as to beggar the imagination.  This is far greater
> even than Biden's rNPV of $10T/year and the macrosocial psychology data is
> many orders of magnitude smaller than climate data.  That said, there is
> room for your concern about choice of language in conjunction with the
> identification "noise" regarding which, as I've often pointed out:  "one
> man's noise is another man's cyphertext".
>
> So we have two "argument surfaces" here:
>
> How much of the macrosocial dataset is "*noise*" as opposed to
> inadequately motivated forensic epistemology "decyphering" that noise?
>
> How much of the wiggle room for *choice of language *can be squeezed out
> by forensic epistemology motiv

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-23 Thread James Bowery
There are, of course, an infinite number of "arguments" one can come up
with to expand what Nick Szabo calls the "Argument Surface" and that is
where the real "problem for statistics about people" arises -- not in the
choice of language ambiguity.  People who are not motivated to get rid of
motivated reasoning will not be motivated to solve problems like the choice
of language ambiguity -- as just one example of many.  I will grant,
however, that particular redoubt is only for the elect who, like you and I,
have been involved with judging the Hutter Prize.  IIRC, even Shane Legg
sets forth that argument as a reason to avoid the Algorithmic Information
Criterion -- and you can't get much more authoritative than that unless you
go to Hutter himself or, in the hypothetical case, Solomonoff.  I did
express concern to Marcus at one time, when Solomonoff was still living and
shortly after the Hutter Prize had been announced, that Solomonoff might
"torpedo" the Hutter Prize with that argument (if I recall the exact
wording).  Marcus reassured me that Solomonoff would do no such thing.
IIRC shortly thereafter Solomonoff posted something like that argument to his
blog.  IIRC Marcus objected to using the ALIC for global warming despite
the Biden administration setting the value of addressing that issue at
around $10T/year -- and I can see merit in that objection given the scale
of the data.

But it all comes down to "incentives" when we are addressing the "motivated
reasoning" problem and that's why I posted my Congressional testimony about
the "incentives" regarding rocket technology -- which you commented on but
did not seem to get the point I was trying to make about incentives.

Once we're in the realm of macrosocial psychological dynamical models, the
incentives are so great as to beggar the imagination.  This is far greater
even than Biden's rNPV of $10T/year and the macrosocial psychology data is
many orders of magnitude smaller than climate data.  That said, there is
room for your concern about choice of language in conjunction with the
identification of "noise", regarding which, as I've often pointed out: "one
man's noise is another man's cyphertext".

So we have two "argument surfaces" here:

How much of the macrosocial dataset is "*noise*", as opposed to signal that
inadequately motivated forensic epistemology has simply not yet "decyphered"?

How much of the wiggle room for *choice of language* can be squeezed out by
forensic epistemology motivated by an rNPV of $10T/year, i.e., well in
excess of $100T -- with, let's say, only 1% of that amount going to ALIC
research: >$1T?

First of all, recognize that the exploit you regard as decisive is
minuscule compared to the argument surface presently not only tolerated
but exploited by the academy, think tanks and punditry.  At present there
is virtually nothing BUT macrosocial psychological "argument surface", e.g.
arguments such as the one to which you appealed for normative alignment of
young men to be optimistic lest their pessimism be a self-fulfilling
prophecy.

Secondly, forensic epistemology is precisely about *presuming* criminal
behavior such as that to which you appeal as a reason for despair.  With
>$1T at stake there will be enormous motivation to suss out issues
regarding "language choice" and I can easily demonstrate that none of the
existing authorities have been sufficiently motivated to reduce that aspect
of the argument surface:

As I've pointed out before, not only is there an entirely different
theoretical basis for addressing that reason (really, excuse) to support
avoidance of scientific accountability by our policy makers (i.e., NiNOR
Complexity), but there are obvious, at-hand techniques to reduce that
argument surface.  For example, a GPU provides an "instruction set", i.e.
"language", that is radically different from a CPU's.  So are we now to
throw up our hands in despair and let those in power get away with "Well
gee, who could have KNOWN???" when things don't go "according to
projections"?  Really?  Why am I the ONLY person to have addressed the
*obvious* fact that a GPU's "instruction set" is describable as a
relatively tiny procedure in a canonical instruction set, and that that
procedure's algorithmic length should be used?
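
Schematically (zlib standing in for a real estimator; the helpers are
hypothetical):

    import zlib

    def charged_length(gpu_program: bytes, gpu_isa_defined_in_canonical_isa: bytes) -> int:
        # Charge the model for the one-time, relatively tiny procedure that
        # defines the GPU's instruction set in the canonical instruction set,
        # plus the program itself -- no "but the language is different" excuse.
        return (len(zlib.compress(gpu_isa_defined_in_canonical_isa, 9)) +
                len(zlib.compress(gpu_program, 9)))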

Could it be that, perhaps, I'm the only sufficiently MOTIVATED person among
those who have been taking information criteria remotely seriously?


On Thu, Nov 20, 2025 at 5:27 PM Matt Mahoney 
wrote:

> On Thu, Nov 20, 2025, 10:11 AM James Bowery  wrote:
>
>>
>>
>> On Wed, Nov 19, 2025 at 11:19 AM Matt Mahoney 
>> wrote:
>>
>>> Algorithmic information or compression is great for evaluating language
>>> models but not for everything
>>>
>>> I could try compressing world population data by fitting it to a
>>> polynomial,
>>>
>>
>> Do you understand the difference between statistics and dynamics?
>>
>
> No, it's the difference between compressing text and compressing video.
> You can't accurately measure the compression of a tiny signal in a sea of
> 

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-20 Thread Matt Mahoney
On Thu, Nov 20, 2025, 10:11 AM James Bowery  wrote:

>
>
> On Wed, Nov 19, 2025 at 11:19 AM Matt Mahoney 
> wrote:
>
>> Algorithmic information or compression is great for evaluating language
>> models but not for everything
>>
>> I could try compressing world population data by fitting it to a
>> polynomial,
>>
>
> Do you understand the difference between statistics and dynamics?
>

No, it's the difference between compressing text and compressing video. You
can't accurately measure the compression of a tiny signal in a sea of noise.

This becomes a problem for statistics about people. It only takes a few
bits of Kolmogorov complexity for social scientists to construct models
that favor one group over another, and those bits can be hidden in the
choice of language ambiguity.

I think it would be great if we could answer political questions
objectively. So how would you solve the problem?


-- Matt Mahoney, [email protected]

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T504adacb23f3c455-M417cdb6912f7a31e584ba578
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Fwd: The Bloomer's Paradox

2025-11-20 Thread James Bowery
On Wed, Nov 19, 2025 at 11:19 AM Matt Mahoney 
wrote:

> Algorithmic information or compression is great for evaluating language
> models but not for everything
>
> I could try compressing world population data by fitting it to a
> polynomial,
>

Do you understand the difference between statistics and dynamics?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T504adacb23f3c455-Md49fd5f054dbc9f5d8062388
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Fwd: The Bloomer's Paradox

2025-11-19 Thread Matt Mahoney
Algorithmic information or compression is great for evaluating language
models but not for everything. It doesn't even work for other types of AI
like vision, because incompressible noise overwhelms the 10-bit-per-second
signal. I did apply for NSF funding for a text compression contest around
2000 while working on my dissertation, but it was rejected like 90% of
proposals, so I changed my Ph.D. topic and created a benchmark with no
prize money that later became the Hutter Prize.

I could try compressing world population data by fitting it to a
polynomial, which works in the short term but doesn't tell me anything
about when it will go to 0, like all species eventually do. Grok and
DeepSeek both say population will peak at 10.3 or 10.4 billion in the
mid-2080s, citing UN projections. I think it will happen sooner as AI leads
to social isolation, speeding up the drop in fertility rate. I'm not sure
what data I could compress to answer the question.
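
By "fitting it to a polynomial" I mean nothing deeper than this (populations
are rounded figures, purely illustrative):

    import numpy as np

    years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010, 2020])
    pop_billions = np.array([2.5, 3.0, 3.7, 4.4, 5.3, 6.1, 6.9, 7.8])  # approximate

    coeffs = np.polyfit(years, pop_billions, 2)   # compresses the short run nicely
    print(np.polyval(coeffs, 2085))               # but the extrapolation carries no
                                                  # information about when, or whether,
                                                  # the curve ever goes to zero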

On space travel: there is no reason to send humans into space except
tourism. We sent humans to the moon in 1969 because we didn't have the
technology to send robots.

-- Matt Mahoney, [email protected]

On Wed, Nov 19, 2025, 9:13 AM James Bowery  wrote:

>
>
> On Tue, Nov 18, 2025 at 10:11 PM Matt Mahoney 
> wrote:
>
>> The book argues that the only thing we have to fear is fear itself.
>>
>
> Macrosocial psychological dynamics sells.
>
> But it's all bullshit due, I'm afraid to say, Matt, because people like
> you don't understand the importance of the algorithmic information
> criterion for model selection at these scales.  In fact, you are, in
> particular, in a position to do something about this but you are too
> committed to your position to avoid motivated reasoning.
>
> The global scale of these dynamics makes what is at stake in getting these
> models right in the trillions of dollars a year and that means the stakes
> in motivated reasoning for getting them WRONG due to rent seeking is
> likewise astronomical.
>
> The NSF should be dispensing money in proportion to the improvements of
> lossless compression of a wide range of longitudinal measures.
>
> This is an idea quite related to your leadership regarding compression of
> text compression which IIRC, you thought the NSF should be financing.
>
> This may, in your mind, be excusable because you are so certain of your
> world model that it is really quite pointless to consider alternative
> dynamics that may entail emergent chaos.
>
> ...
>>
>> The facts about immigration
>>
>
> Are very sparse if you actually go looking for data panels.  This is one
> of the reasons I've spent the majority of my time over the last several
> months working through the information geometric treatment of what data is
> available so as to impute the 90+% missing from the laboratory of the
> counties data panel.
>
> This is a radically different approach to data compression of that sparse
> dataset to what you've tried which amounts to statistical text compression
> of numeric data.  It attempts to get to the root dynamics of development
> and then back that out from the manifold to the original data including
> precision measures based on the MDL of the residuals and parameters of the
> model.
>
> This works as an algorithmic information criterion because the ultimate
> model is not merely information geometric but information geometrodynamical.
>
> This is not an existential crisis. It's evolution.
>
> As though rudderless "evolution" as you call it, can't get into
> catastrophic attractors... As though human agency has no part in
> "evolution".
>
> Look, maybe it's because I actually had some small success at modifying
> the zeitgeist regarding space launch commercialization early on in the
> present breakout into space solar powered machine learning, but I don't
> take lightly your tendency to abjure your unique responsibility as a human
> with agency simply because you are comfortable with "the way things are".
>
> As Charlie Munger was fond of pointing out:
>
> "show me the incentive and I'll show you the outcome"
>
> Necessity and Incentives Opening the Space Frontier: Testimony before the
> House Subcommittee on Space
> Necessity and Incentives
> Opening the Space Frontier
>
> Testimony before the House Subcommittee on Space
> by James Bowery, Chairman
> Coalition for Science and Commerce
> July 31, 1991
>
>
> Mr. Chairman and Distinguished Members of the Subcommittee:
>
> I am James Bowery, Chairman of the Coalition for Science and Commerce. We
> greatly appreciate the opportunity to address the subcommittee on the
> critical and historic topic of commercial incentives to open the space
> frontier.
>
> The Coalition for Science and Commerce is a grassroots network of citizen
> activists supporting greater public funding for diversified scientific
> research and greater private funding for proprietary technology and
> services. We believe these are mutually reinforcing policies which have
> been v

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-19 Thread James Bowery
On Tue, Nov 18, 2025 at 10:11 PM Matt Mahoney 
wrote:

> The book argues that the only thing we have to fear is fear itself.
>

Macrosocial psychological dynamics sells.

But it's all bullshit, I'm afraid to say, Matt, because people like you
don't understand the importance of the algorithmic information criterion
for model selection at these scales.  In fact, you in particular are in a
position to do something about this, but you are too committed to your
position to avoid motivated reasoning.

The global scale of these dynamics puts what is at stake in getting these
models right in the trillions of dollars a year, and that means the stakes
in motivated reasoning for getting them WRONG due to rent seeking are
likewise astronomical.

The NSF should be dispensing money in proportion to the improvements of
lossless compression of a wide range of longitudinal measures.

This is an idea quite related to your leadership regarding text
compression, which, IIRC, you thought the NSF should be financing.
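
The payout rule could be essentially the one the Hutter Prize already uses,
just pointed at longitudinal measures (the fund size is obviously notional):

    def payout(prize_fund: float, prior_record_bytes: int, new_record_bytes: int) -> float:
        # pay in proportion to the relative improvement in lossless compressed size
        improvement = (prior_record_bytes - new_record_bytes) / prior_record_bytes
        return max(0.0, prize_fund * improvement)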

This may, in your mind, be excusable because you are so certain of your
world model that it is really quite pointless to consider alternative
dynamics that may entail emergent chaos.

...
>
> The facts about immigration
>

Are very sparse if you actually go looking for data panels.  This is one of
the reasons I've spent the majority of my time over the last several months
working through the information-geometric treatment of what data is
available, so as to impute the 90+% missing from the Laboratory of the
Counties data panel.

This is a radically different approach to compressing that sparse dataset
from the one you've tried, which amounts to statistical text compression of
numeric data.  It attempts to get at the root dynamics of development and
then back that out from the manifold to the original data, including
precision measures based on the MDL of the residuals and parameters of the
model.

This works as an algorithmic information criterion because the ultimate
model is not merely information geometric but information geometrodynamical.

This is not an existential crisis. It's evolution.

As though rudderless "evolution", as you call it, can't get into
catastrophic attractors... As though human agency has no part in
"evolution".

Look, maybe it's because I actually had some small success at modifying the
zeitgeist regarding space launch commercialization early on in the present
breakout into space solar powered machine learning, but I don't take
lightly your tendency to abjure your unique responsibility as a human with
agency simply because you are comfortable with "the way things are".

As Charlie Munger was fond of pointing out:

"show me the incentive and I'll show you the outcome"

Necessity and Incentives Opening the Space Frontier: Testimony before the
House Subcommittee on Space
Necessity and Incentives
Opening the Space Frontier

Testimony before the House Subcommittee on Space
by James Bowery, Chairman
Coalition for Science and Commerce
July 31, 1991


Mr. Chairman and Distinguished Members of the Subcommittee:

I am James Bowery, Chairman of the Coalition for Science and Commerce. We
greatly appreciate the opportunity to address the subcommittee on the
critical and historic topic of commercial incentives to open the space
frontier.

The Coalition for Science and Commerce is a grassroots network of citizen
activists supporting greater public funding for diversified scientific
research and greater private funding for proprietary technology and
services. We believe these are mutually reinforcing policies which have
been violated to the detriment of civilization. We believe in the
constitutional provision of patents of invention and that the principles of
free enterprise pertain to intellectual property. We therefore see
technology development as a private sector responsibility. We also
recognize that scientific knowledge is our common heritage and is therefore
a proper function of government. We oppose government programs that remove
procurement authority from scientists, supposedly in service of them.
Rather we support the inclusion, on a per-grant basis, of all funding
needed to purchase the use of needed goods and services, thereby creating a
scientist-driven market for commercial high technology and services. We
also oppose government subsidy of technology development. Rather we support
legislation and policies that motivate the intelligent investment of
private risk capital in the creation of commercially viable intellectual
property.

In 1990, after a 3 year effort with Congressman Ron Packard (CA) and a
bipartisan team of Congressional leaders, we succeeded in passing the
Launch Services Purchase Act of 1990, a law which requires NASA to procure
launch services in a commercially reasonable manner from the private
sector. The lobbying effort for this legislation came totally from
taxpaying citizens acting in their home districts without a direct
financial stake -- the kind 

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-18 Thread Matt Mahoney
The book argues that the only thing we have to fear is fear itself. The
review criticizes it as saying, in effect, that doomerism is bad unless
it's doomerism about doomerism.

The facts about immigration are that trade and travel across borders are
becoming easier, and that most babies today are being born in Africa and
Muslim countries. Resistance to immigration will decrease as their
economies improve and fertility drops along with the rest of the world. The
people who still don't like this are mostly older and not reproducing.

This is not an existential crisis. It's evolution. Evolution is what will
save humanity from a world of social isolation where AI gives us everything
we want.

People aren't happy because happiness is not utility. It is the rate of
increase of utility. There is no technology in a finite universe that can
fix that.

-- Matt Mahoney, [email protected]

On Tue, Nov 18, 2025, 7:14 PM James Bowery  wrote:

>
>
> On Thu, Nov 13, 2025 at 1:11 PM Matt Mahoney 
> wrote:
>
>> My take:
>> 1. Technology is making life objectively better.
>>
>
> "Objectively better" is a contradiction in terms.
>
>
>> 2. AI needs to know everything about you to work.
>>
>
> Never been true of anyone who does work for someone.  The best that can be
> said of this position is that the more AI knows about you the better it can
> serve you but that's true of people too and people are more accountable.
>
>
>> 3. People want to be controlled by positive reinforcement.
>>
>
> People want positive reinforcement.
>
> 4. Bad news is addictive.
>
> Especially when they have a bad feeling about the way things are that
> isn't being validated in a manner that satisfies.  For example, for 60
> years more than a supermajority of the US citizenry has told Gallup they
> didn't want increasing immigration rates and for 60 years that's all they
> got -- but no one will put the situation in terms that addresses just how
> deeply illegitimate our institutions and authorities are.  Not Trump.  Not
> Musk.  Not Carleson.  Not Fuentes.  All people get is the same old bullshit
> from both sides about immigration:
>
> "Immigration good."  "Immigration bad."
>
> Not
>
> "Our institutions don't work in the most primordial intrasexual selection
> competition since the Cambrian Explosion, let alone the military defense of
> territory."
>
>
>> 5. People are unhappy because they can't stop doom scrolling and think
>> the world is getting worse.
>>
>
> People are unhappy because they aren't being even thought of as people by
> those that have power over them.
>
>
>> -- Matt Mahoney, [email protected]
>>
>> -- Forwarded message -
>> From: Astral Codex Ten 
>> Date: Thu, Nov 6, 2025, 7:23 AM
>> Subject: The Bloomer's Paradox
>> To: 
>>
>>
>> ...
>> The Bloomer's Paradox
>> 

Re: [agi] Fwd: The Bloomer's Paradox

2025-11-18 Thread James Bowery
On Thu, Nov 13, 2025 at 1:11 PM Matt Mahoney 
wrote:

> My take:
> 1. Technology is making life objectively better.
>

"Objectively better" is a contradiction in terms.


> 2. AI needs to know everything about you to work.
>

Never been true of anyone who does work for someone.  The best that can be
said of this position is that the more AI knows about you the better it can
serve you but that's true of people too and people are more accountable.


> 3. People want to be controlled by positive reinforcement.
>

People want positive reinforcement.

> 4. Bad news is addictive.

Especially when they have a bad feeling about the way things are that isn't
being validated in a manner that satisfies.  For example, for 60 years
more than a supermajority of the US citizenry has told Gallup they didn't
want increasing immigration rates and for 60 years that's all they got --
but no one will put the situation in terms that addresses just how deeply
illegitimate our institutions and authorities are.  Not Trump.  Not Musk.
Not Carlson.  Not Fuentes.  All people get is the same old bullshit from
both sides about immigration:

"Immigration good."  "Immigration bad."

Not

"Our institutions don't work in the most primordial intrasexual selection
competition since the Cambrian Explosion, let alone the military defense of
territory."


> 5. People are unhappy because they can't stop doom scrolling and think the
> world is getting worse.
>

People are unhappy because they aren't even being thought of as people by
those who have power over them.


> -- Matt Mahoney, [email protected]
>
> -- Forwarded message -
> From: Astral Codex Ten 
> Date: Thu, Nov 6, 2025, 7:23 AM
> Subject: The Bloomer's Paradox
> To: 
>
>
> ...
> The Bloomer's Paradox
> 
> ...
>
> Nov 6