Thanks Matt

I'm satisfied with my processor's progress. I've learned a lot. Your input
was foundational and gripping. You stated that most accurately.

I understand the industrial and scientific significance of advancing
compression.

However, I think collaborating with pioneering researchers on the
unification of physics and specifying mechanistic and transactional
entropy-damping processes may be higher-order goals for emerging a
ground-state (3D) version of mathematical consciousness. This may be a race
against time, so the West won't be left behind in ASI.

There are real and present dangers to contend with, of which Oreshnik is a
harbinger. These posit as scientific challenges. No doubt, Oreshnik can be
stopped.

If I recall correctly, there was a thread about machine consciousness. I
may have drifted a little.

In summary, I think a 1st-level conscious machine may be able to remotely
bypass all such-like armament security and disable them in situ and later
still, would be able to affect them in flight.

It starts with the belief that it is scientifically possible, as a
hypothesis.

On Sat, 10 Jan 2026, 05:44 Matt Mahoney, <[email protected]> wrote:

> I don't understand what your graphs represent. But I do have an update to
> wpaq.
>
> https://encode.su/threads/4467-enwik9-preprocessor?p=86913&viewfull=1#post86913
>
> 1. Modeling capitalization at the start of the sentence.
> 2. Improved article sort order by Kaitz. I believe this is based on
> k-means clustering on a 1K vector space model. I was never able to
> produce the same result myself so I just used the list he supplied.
> 3. Improved LZ77 modeling. Literals, lengths, offset high bytes and
> low bytes are coded in 4 separate byte streams. The first 3 streams
> are non random and can be compressed further by a context model.
>
> enwik9 results on a 2.8 GHz Core i7-1165, 16 GB, Win11, compiled with g++
> -O2.
> a - article sorting, 1000 MB (no change), 7 sec.
> b - XML decoding, 912 MB, 9 sec.
> c - tokenizing (capitalization, space modeling and escape codes, 860 MB,
> 19 sec.
> d - 256 word dictionary built by 6 passes of byte pair encoding, 578 MB,
> 84 sec.
> l - LZ77 byte oriented compression, 266 MB, 200 sec.
> Order 0,1,2,3 ICM-ISSE chain compression with zpaq, 212 MB, 39 sec.
>
> All of the steps a,b,c,d,l are with test mode on by default, which
> includes the time to decompress each stage and compare with the
> original. The slowest step is the LZ77 compression, mostly to build a
> suffix array and inverse suffix array to find optimal matches.
> Decompression of all the steps except zpaq takes 18 seconds. zpaq
> decompresses at the same speed as compression, thus about 1 minute
> total to decompress. The Hutter prize allows 50 hours on my laptop.
>
> On Fri, Jan 9, 2026 at 2:29 AM Quan Tesla <[email protected]> wrote:
> >
> > Thanks Matt
> >
> > Correct, you won't find it. Publication would have to wait till the BNUT
> wave function model is completed. The compressor does exist though, and
> while the sims for a 1-2% improvement seems feasible, its real target is
> Shannon optimal.
> >
> > Sharing the latest BNUT test result. Outside verification's still
> required.
> >
> > On Tue, 06 Jan 2026, 19:29 Matt Mahoney, <[email protected]>
> wrote:
> >>
> >> There is no such thing as BNUT compression (I googled it) or Collatz
> entropy, and I don't understand the rest of your comments. The book proves
> two important facts right at the beginning.
> >>
> >> 1. There is no universal compressor for random data or that will
> compress all possible inputs above a certain size.
> >>
> >> 2. There is no test for randomness. There is no algorithm that finds
> the length of the shortest possible description of an input string.
> >>
> >> First, the vast majority of possible strings cannot be compressed at
> all. A compression algorithm maps an input string to a description or
> program that produces that string. But for almost all strings, the best you
> can do is output a literal copy because no such shorter program exists, for
> the simple reason that there are exponentially fewer short strings than
> long ones.
> >>
> >> We say that such a string is random. But you can never be sure that a
> string is random, either, just because every compression program you tried
> on it fails. It might be an encrypted file, and the only way to compress it
> would be to guess the key as part of the file's description. If there was a
> test for randomness, then you could write a simple program of length n to
> search for a random string of length n+1, which would be a contradiction.
> >>
> >> With all this, you might wonder how compression even works at all. It
> works because real data is created by physical processes like taking a
> picture or by neurons controlling fingers typing on a keyboard. Physical
> processes have fixed description lengths but can produce arbitrarily long
> output strings. In fact, it is very hard to produce random strings that you
> couldn't compress.
> >>
> >> As a Hutter prize committee member, I have to deal with crackpots that
> claim fantastic compression ratios by recursively compressing its own
> output. Their code (if they even know how to code or understand simple
> math) invariably doesn't work. If it did, they would have found an
> impossible 1 to 1 mapping between the infinite set of possible inputs and
> the finite set of possible outputs.
> >>
> >> More recently, the crackpots have been sending me AI generated code and
> saying "here, test this" without understanding what they are sending me.
> One of the submissions looked like a JPEG encoder. No, I don't think that
> would work very well on text.
> >>
> >> I mentioned in the book how compression is an AI problem. Prediction
> measures intelligence and compression measures prediction. I last updated
> the book in 2013. I have claimed since 1999 that all you need to pass the
> Turing test is text prediction, but this wasn't shown experimentally until
> ChatGPT was released in November 2022.
> >>
> >> -- Matt Mahoney, [email protected]
> >>
> >> On Mon, Jan 5, 2026, 1:50 PM Quan Tesla <[email protected]> wrote:
> >>>
> >>> Thanks Matt
> >>>
> >>> Here's some feedback: "The book is pragmatic—code snippets,
> benchmarks, no heavy proofs."
> >>> Relation to BNUT CompressionBNUT's damped Collatz entropy (H≈0.9675,
> structured ~42% uniform) + wave modulation directly echoes the book's core:
> modeling as prediction (PPM/context mixing) for redundancy reduction,
> approaching entropy bounds.
> >>>
> >>> Alignment: BNUT's transients mirror variable-order contexts (growth
> explores dependencies); damping α=1/137 analogs discounting/nonstationarity
> handling (prevents overfit like PAQ SSE).
> >>> Potential Gains: Collatz as preprocessor (hailstone ordering for
> repeats) could enhance BWT/dictionary stages; damped waves for logistic
> mixing weights → 1-5% over cmix baselines (Hutter enwik9 target <108MB).
> >>> AIT Tie: BNUT's nonlocal "pulls" (TSVF/Planck) extend book's
> uncomputability discussion—retrocausal extraction of compressible
> substructure from "random" data, bypassing classical K limits for
> structured text (e.g., wiki XML patterns).
> >>> Practical: Integrate with Mahoney's recent preprocessor (article
> sorting + BPE); BNUT modulation on stages C/D for entropy-tuned tokens.
> >>>
> >>> Overall: The book provides the engineering blueprint BNUT can
> bio-inspire/nonlocally enhance for superior text ratios. Strong synergy!"
> >>>
> >>> My focus is to complete my work for AI-enabled, 4D+ engineering, not
> programming. I learn from all fields. Compression isn't limited to
> programming alone and has relevance for industrialized, effective
> complexity and stochastic value-chain management.
> >>>
> >>> On Mon, 05 Jan 2026, 18:15 Matt Mahoney, <[email protected]>
> wrote:
> >>>>
> >>>> Actually, I'm writing this because programming is an art and I enjoy
> creating art. I know how artists feel when AI is taking over their job. I
> could let AI write the code, but what fun is that?
> >>>>
> >>>> The Hutter prize is useful for finding CPU efficient language models,
> but what I am discovering has very little to do with language modeling and
> more to do with the arcane details of the test set, basically hacks. I
> don't need the prize money. My reward is seeing smaller numbers and moving
> up the rankings.
> >>>>
> >>>> "Quantum Kolmogorov bypass" is just nonsense. If you want practical
> knowledge about text compression, see my book,
> >>>> https://mattmahoney.net/dc/dce.html
> >>>>
> >>>> -- Matt Mahoney, [email protected]
> >>>>
> >>>> On Mon, Jan 5, 2026, 9:56 AM Quan Tesla <[email protected]> wrote:
> >>>>>
> >>>>> Thanks Matt. The Hutter chalenge offers a great testbed opportunity
> for noveltech. Investigating a quantum-enabled Kolmogorov bypass.
> Theoretically, a potential improvement of 2% over record.
> >>>>>
> >>>>> On Mon, 05 Jan 2026, 06:38 Matt Mahoney, <[email protected]>
> wrote:
> >>>>>>
> >>>>>> I'm on the Hutter prize committee so I'm not eligible for prize
> money.
> >>>>>> Nevertheless I am working on a project that might produce some code
> >>>>>> (GPL) that others might find useful. At this point it is just a
> >>>>>> preprocessor to improve downstream compression by other compressors.
> >>>>>> Details at
> https://encode.su/threads/4467-enwik9-preprocessor?p=86853#post86853
> >>>>>>
> >>>>>> The current version compresses enwik9 to 268 MB in 5 minutes and
> >>>>>> decompresses in 19 seconds. It is a 4 stage preprocessor and a
> simple
> >>>>>> LZ77 compressor, but it is mainly useful to skip the LZ77 step and
> >>>>>> compress it with other compressors.
> >>>>>>
> >>>>>> --
> >>>>>> -- Matt Mahoney, [email protected]
> >
> > Artificial General Intelligence List / AGI / see discussions +
> participants + delivery options Permalink
> 
> --
> -- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0518db1e3a0c25c5-Mca78e42b81a3f3eab5d23abd
Delivery options: https://agi.topicbox.com/groups/agi/subscription

Reply via email to