I'll try to give some constructive thoughts.

In my view you spend way too much time on the A>B rule, and use too many
names for the same thing. From what I know it is simply a basic pattern that
everything is based on (e.g. word2vec, etc.). You even made a diagram for it,
which seems overdone in trying to look formal and wastes time and energy.

"The gist of our theory is that Deep Learning provides us with neural networks 
(ie. non-linear functions) that serve
as the proof mechanism of logic via the Curry-Howard isomorphism. With this 
interpretation, we can impose the
mathematical structure of logic (such as symmetries) onto neural networks."

I do hope you know how my algorithm works; in case not, see:
https://encode.su/threads/3595-Star-Engine-AI-data-compressor
I don't see the need for any old-fashioned logic formulation, or the need to
suggest it can be made to work with deep nets, which is the same thing wearing
a different top (implementation method). Maybe you mean combining the
implementation of logic with the implementation of deep learning, i.e.
backprop plus very hard-coded rules. If so, I think what you really want is my
algorithm, which is logic in clean form with no blurry backprop in the way. Do
note mine is not some hard-coded chatbot.

My AI is an ultra-advanced Markov chain, and I have only just begun. The
mechanisms are clear in my explanation above, and the implementation, as you
know, is whatever code runs it fast and efficiently; others use backprop etc.,
I do it another way. But the AI itself is always the same: there is only one
way AGI works, many ways to code it, and one way you should code it.
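
As a toy illustration of that mechanism (count what follows each context,
then predict from the counts), here is a minimal letter-level Markov sketch
in Python. It is only a sketch of the general idea, not the Star Engine code
from the thread below:

from collections import defaultdict, Counter

def train(text, order=3):
    # Count which letter follows each context of the given length.
    counts = defaultdict(Counter)
    for i in range(len(text) - order):
        counts[text[i:i + order]][text[i + order]] += 1
    return counts

def predict(counts, context):
    # Return the most frequently seen next letter for this context.
    seen = counts.get(context)
    return seen.most_common(1)[0][0] if seen else ""

model = train("the cat sat on the mat, the cat sat on the hat")
print(predict(model, "he "))   # 'c': "he " is most often followed by 'c' here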

https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence

This too seems like wasteful abstraction; logic is algorithm and algorithm is
AI, the whole universe is. AI simply finds the most common patterns and then
comes up with small mental programs/code (memories) all on its own. See how
that table mentions truth, sums, categories, implication... these are all
explained in my AGI guide, which I once tried showing to a select few friends.
Specifically, truth and sums are not actually coded in my AI and probably
won't need to be either; those predictions are needed more rarely.
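
For anyone who has not opened that link, here is roughly what a few rows of
that table mean, written as a toy Lean sketch (my own illustration, not
anything from the paper):

-- truth        corresponds to the unit type (trivially provable)
-- implication  corresponds to a function type (a proof of A -> B is a function)
-- sums (or)    correspond to a sum type (one of two alternatives)

def truth_example : True := True.intro                    -- truth: trivially built
def impl_example {A : Prop} : A → A := fun a => a         -- implication: a function
def sum_example {A B : Prop} (a : A) : A ∨ B := Or.inl a  -- sum: pick the left case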

"In particular, logic propositions in a conjunction (such as A ∧ B) are 
commutative, ie. invariant under permutations,
which is a “symmetry” of logic. This symmetry essentially decomposes a logic 
“state” into a set of propositions, and
seems to be a fundamental feature of most logics known to humans. Imposing this 
symmetry on neural networks gives
rise to symmetric neural networks, which can be easily implemented thanks to 
separate research on the latter topic.
This is discussed in §3."

Again, I believe I see you doing it here too. It looks like you are trying
too hard to abstract it and connect things.
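
For what it is worth, the "symmetric neural network" that quote points at can
be sketched in a few lines; this is a generic DeepSets-style toy in Python,
not the paper's construction:

import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 8))      # per-item encoder weights (toy sizes)
W_dec = rng.normal(size=(8, 1))      # decoder weights

def symmetric_net(items):
    # Encode each item independently, sum the encodings (so order cannot
    # matter), then decode the pooled vector: output is permutation-invariant.
    encoded = np.tanh(items @ W_enc)
    return encoded.sum(axis=0) @ W_dec

props = rng.normal(size=(3, 4))      # three "propositions" as vectors
print(np.allclose(symmetric_net(props), symmetric_net(props[::-1])))   # True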

"As an aside, the Curry-Howard isomorphism also establishes connections to 
diverse disciplines. Whenever there is
a space of elements and some operations over them, there is a chance that it 
has an underlying “logic” to it."

All of AI has an underlying logic to it... all of AI is built up from the
Markov chain rule. The first pattern you can find in a dataset is how many
times a letter or word repeats, and what follows or surrounds it, e.g. zb or
bz or bzq.
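
As a toy example of that very first counting step (made-up letters, in the
spirit of zb / bz / bzq above):

from collections import Counter

data = "bzqbzbbzq"                   # made-up string of letters
letters = Counter(data)              # how many times each letter repeats
pairs = Counter(data[i:i + 2] for i in range(len(data) - 1))   # what follows what

print(letters)   # Counter({'b': 4, 'z': 3, 'q': 2})
print(pairs)     # 'bz' seen 3 times, 'zq' twice, and so on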

"Why BERT is a Logic
In the following diagram 2, observe that the Transformer is 
permutation-invariant (or more precisely, equivariant).
That is to say, for example, if input #1 and #2 are swapped, then output #1 and 
#2 would also be swapped:"
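
That swap property is easy to check numerically. Here is a minimal numpy
sketch of one self-attention head without positional encodings (my own toy
check, not code from the paper):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Each output mixes the values of all inputs, weighted by query-key
    # similarity; nothing here depends on the order of the rows of X.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                      # 5 input tokens, dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

swap = [1, 0, 2, 3, 4]                           # swap input #1 and #2
out = self_attention(X, Wq, Wk, Wv)
out_swapped = self_attention(X[swap], Wq, Wk, Wv)
print(np.allclose(out[swap], out_swapped))       # True: outputs #1 and #2 swap too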

Happy to see this in the paper. In my plans for my AI, I will turn the delay
matching into what Hinton calls "equivalence", so that h e l l o matches
hello: the input has many spaces, but most are ignored since they are 'all'
spaced, so the error is not as big as it would normally seem; it matches less
only if there is no pattern and much has changed. I also know how to teach it
abcdefg, show it gfedc, and have it predict the rest backwards. The idea is
that it matches strongly to the non-backwards memory, and then, instead of
predicting the next tail letter, it predicts in the reversed (delayed) order
as the input is seen; it predicts the rest of the memory. And I don't think
humans are good at this pattern; we can only do it by hand, using lots of
resources, stressfully.
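
My reading of those two ideas, as a toy Python sketch (not my actual engine,
just to show the matching):

def normalize(s):
    # Ignore spacing that is applied uniformly, so "h e l l o" matches "hello".
    return s.replace(" ", "")

def predict_backwards(memory, observed):
    # If the observed input lines up with the memory read backwards, continue
    # by emitting the rest of the memory in that same reversed order.
    rev = memory[::-1]
    obs = normalize(observed)
    return rev[len(obs):] if rev.startswith(obs) else ""

print(normalize("h e l l o"))                 # hello
print(predict_backwards("abcdefg", "gfedc"))  # ba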