You say: "i dont believe there isnt a solution" and "there is
solutions". But this is exactly what theorems do--they tell you that a
statement holds across a broad range of conditions without you having to
test every single condition yourself. For example, the Pythagorean theorem
tells you that in a right triangle, c^2 = a^2 + b^2. And a related result,
Fermat's Last Theorem, tells you that you need not go searching for whole
numbers a, b and c that satisfy c^3 = a^3 + b^3. The proof of the theorem
has already done the exhaustive search for you: it "knows" that you will
never find such a combination of a, b and c. Similarly, no-free-lunch is a
theorem, which means it has, in effect, already looked at every possible
learning algorithm you could ever devise. And what it found is that no
algorithm can exist that both works across a broad spectrum of data and is
at the same time efficient on all of those data.
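The "already looked at every possible algorithm" part can even be checked by brute force on a toy problem. Here is a minimal sketch (my own illustration, not from the NFL papers): a search space of 4 points, *all* 16 possible objective functions on it, and two different fixed-order search strategies. Averaged over every function, both strategies need exactly the same number of evaluations to find a maximum -- no strategy gets a free lunch.

```python
from itertools import product

DOMAIN = range(4)  # a tiny search space: 4 candidate points

def queries_to_find_max(f, visit_order):
    """Count how many evaluations a fixed-order search needs
    to find a point where f equals 1 (or exhaust the domain)."""
    for i, x in enumerate(visit_order, start=1):
        if f[x] == 1:
            return i
    return len(visit_order)  # no 1 anywhere; paid for every query

# Enumerate ALL possible objective functions f: DOMAIN -> {0, 1}.
all_functions = list(product([0, 1], repeat=len(DOMAIN)))

# Two different deterministic search strategies (fixed visit orders).
order_a = [0, 1, 2, 3]
order_b = [3, 1, 0, 2]

avg_a = sum(queries_to_find_max(f, order_a) for f in all_functions) / len(all_functions)
avg_b = sum(queries_to_find_max(f, order_b) for f in all_functions) / len(all_functions)

print(avg_a, avg_b)  # identical: averaged over all problems, neither strategy wins
```

Any strategy can look great on some of the 16 functions, but only by paying for it on the others; the average is pinned.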

This is why theorems are useful.

One can come up with an idea: "let me use logarithms here, maybe this works".
The NFL theorem tells us, "Nope, I've tried that. Doesn't work".

Then we go, "let me try putting in sinusoids and dividing everything by pi". The
NFL theorem says "Nope. Tried that too. Didn't work for me."

Then we go and say "let me make a network of neurons as similar to the human
brain as possible. Maybe this works." The NFL theorem goes again, telling
us "Nope, included that too in my proof. Not working."

Then we go and say "let me try a super big deep learning network + I
connect all neurons with all neurons + I apply Bayes' equation to each
parameter of the network + I connect a million such networks in a Markov
super-chain + I tune everything with an evolutionary algorithm using a
computer the size of the Milky Way galaxy". The NFL theorem has tried
that too, and guess what: it found out that this did not work either. Still
no free lunches there.

Theorems are useful.


On Mon, Feb 3, 2020 at 11:46 PM <[email protected]> wrote:

> Yeh I guess your right,  another problem is how long it takes to train,
> then gaps in the training set, yes,  but i dont believe there isnt a
> solution to get over these combinatorial explosions that get us a.i.
> programmers down.
>
> there is solutions,  and if you find them,  then you get to jump on the
> free lunch train once you get there. :)  anyhow, thats just me anyway,
> theres many ways to take this "ai programming life situation" i only do it
> as i do it.
>


-- 
Prof. Dr. Danko Nikolić
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T353f2000d499d93b-M0c0a5a5355eaa5d448c40397
Delivery options: https://agi.topicbox.com/groups/agi/subscription
