"It's not because of the no free lunch theorem. It's because you can
always find something that your program can't compress, and you have to add
yet another special case."

I think it is exactly because of the No Free Lunch Theorem (NFLT). Let me
explain how I think about it, and you can correct me if you think this is
wrong.

We pay for better compression by adding more code, that is, by writing
longer algorithms. These longer algorithms are themselves a cost of
compression: if you send a message to someone, you have to send the
compressed data together with the algorithm that decompresses it. I think
there is no free lunch in compression efficiency: you can't add the
ability to compress X without paying the price of a longer algorithm.
Remember, the length of your algorithm has to be counted together with the
length of your compressed content. In the limit, if you let the algorithm
grow to the length of your data_to_be_compressed, you can "compress"
everything, but then the ultimate price is paid.
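A toy sketch of this accounting (my own illustration, not from the thread): using Python's zlib as a stand-in compressor, the true cost of transmission is the compressed data plus a minimal decompressor program the receiver would also need. The particular one-liner decompressor below is an assumption chosen for the example.

```python
import zlib

# The message to transmit: highly repetitive, so it compresses well.
message = b"abcabcabcabcabcabcabcabcabcabc"
compressed = zlib.compress(message)

# A minimal self-contained decompressor script the receiver would also
# need (an assumed stand-in for "the length of your algorithm").
decompressor = (
    b"import zlib,sys;"
    b"sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))"
)

# The honest cost counts both the compressed content and the algorithm.
total_cost = len(compressed) + len(decompressor)
print("message:", len(message), "bytes")
print("compressed alone:", len(compressed), "bytes")
print("compressed + decompressor:", total_cost, "bytes")
```

For a short message like this, the decompressor's length dominates, so the honest total exceeds the raw message size even though the compressed payload alone is smaller.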

This all comes down to the NFLT.

Danko



On Sun, Sep 27, 2020 at 8:05 PM Matt Mahoney <[email protected]>
wrote:

>
>
> On Sun, Sep 27, 2020, 1:41 PM Danko Nikolic <[email protected]>
> wrote:
>
>> I see the no free lunch theorem striking every day. Every time we pick
>> one ML architecture for one type of problem and another architecture for
>> another type of problem, it is the No Free Lunch Theorem dictating the fact
>> that we have to make those choices and are not able to have one and the same
>> architecture for all kinds of problems.
>>
>
> There is no simple, universal prediction algorithm. Suppose you have one.
> Then I can create a simple sequence that you can't predict. My program runs
> a copy of your program and outputs the opposite of your prediction.
>
> The best compressors have lots of code to handle lots of rare, special
> cases. It's not because of the no free lunch theorem. It's because you can
> always find something that your program can't compress, and you have to add
> yet another special case.
>
>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
> <https://agi.topicbox.com/groups/agi/Ta433301e9ac5fb42-Ma5ea798b6ef2e4396fa8ad4f>
>


-- 
Dr. Danko Nikolić
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta433301e9ac5fb42-Mf4e62051708cd1eb474f54e7
Delivery options: https://agi.topicbox.com/groups/agi/subscription
