On 7/6/21, [email protected] <[email protected]> wrote: > I'll try to give some constructive thoughts. > > In my view I see you talking way too much about the A>B rule, and using too > many names to say the same thing, from what I know this is just simply a > simple pattern that everything is based on (ex. word2vec...etc). You even > made a diagram, it seems overdone on trying to look formal, which wasted > time and energy.
Yes, I will make the introduction a bit more concise....

> "The gist of our theory is that Deep Learning provides us with neural
> networks (i.e. non-linear functions) that serve as the proof mechanism of
> logic via the Curry-Howard isomorphism. With this interpretation, we can
> impose the mathematical structure of logic (such as symmetries) onto
> neural networks."
>
> In case not, I do hope you know how my algorithm works.
> https://encode.su/threads/3595-Star-Engine-AI-data-compressor
> I don't see the need to use any old-fashioned logic formulation, or the
> need to suggest it can be made to work with deep nets, which is the same
> thing but wearing a different top (implementation method). Maybe you mean
> to combine the implementation of logic with the implementation of deep
> learning, i.e. backprop + very hard-coded rules. If so, I think what you
> really want is my algorithm, which is just logic in clean form, with no
> blurry backprop in the way. Do note mine is not some hard-coded chatbot.

If you don't use deep learning (in AGI) you're missing out on the most
powerful machine learning technique currently known. Deep learning allows us
to: 1) learn from a massive amount of data, AND 2) accomplish the learning
task quickly. Other symbolic or combinatorial learning methods cannot
achieve both. Think about this very important point... :)

> My AI is an ultra-advanced Markov chain, and I have only just begun. The
> mechanisms are clear in my explanation above, and the implementation, as
> you know, is the code that runs it, however you code that thing up to work
> fast and efficiently; others use backprop etc., I do it another way. But
> the AI is always the same really: there is only one way AGI works, many
> ways to code it, and one way you should code it.

I don't see a clear exposition of your theory, so it's difficult for me to
comment on it...

> https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence
>
> This too seems wastefully abstract; logic is algorithm and algorithm is
> AI, the whole universe is. AI is simply the most common patterns, and then
> it comes up with small mental programs/code (memories) all on its own. See
> how that table mentions truth, sums, categories, implication... these are
> all explained in my AGI guide I once tried showing to a select few
> friends. Specifically, truth and sums are not actually coded in my AI and
> probably won't need to be either; it is more rarely needed to do those
> predictions.
>
> ......
>
> Again (I believe) I see you doing it here too. It looks like you are
> trying too hard to abstract it and connect things.
>
> ......
>
> All of AI has an underlying logic to it... all of AI is just built up from
> the Markov chain rule. The first pattern you can find in a dataset is how
> many times a letter or word repeats, and what follows around it, e.g. zb
> or bz or bzq.

I like abstraction... it makes the design clearer and thus easier to
implement. An AGI theory with tens of modules will never be implemented...
no one will spend the time to read all the specifications.... LOL

> "Why BERT is a Logic
> In the following diagram 2, observe that the Transformer is
> permutation-invariant (or more precisely, equivariant). That is to say,
> for example, if input #1 and #2 are swapped, then output #1 and #2 would
> also be swapped:"
>
> Happy to see this here.
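Glad to see that point appreciated. To make it concrete for anyone following
along, here is a toy sketch of my own (not code from the paper): plain
dot-product self-attention with no positional encoding, where permuting the
input rows permutes the output rows in exactly the same way.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Single-head dot-product self-attention, no positional encoding."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[1])
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
        return weights @ V

    rng = np.random.default_rng(0)
    d = 4
    X = rng.normal(size=(3, d))                         # 3 input tokens
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

    out = self_attention(X, Wq, Wk, Wv)

    # Swap inputs #1 and #2 (rows 0 and 1): the outputs swap the same way.
    out_swapped = self_attention(X[[1, 0, 2]], Wq, Wk, Wv)
    print(np.allclose(out_swapped, out[[1, 0, 2]]))     # True: equivariant

Adding positional encodings deliberately breaks this symmetry, which is
exactly why they are needed to represent word order.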
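And since the Curry-Howard correspondence was quoted and linked further up:
the smallest textbook illustration of the "propositions as types, proofs as
programs" reading is below (a generic example in Lean 4, nothing specific to
our paper). The proposition A -> (B -> A) is a type, and the function that
returns its first argument is a proof of it.

    -- Curry-Howard in miniature: a proposition is a type,
    -- and a program of that type is a proof of the proposition.
    theorem weaken (A B : Prop) : A → B → A :=
      fun a _ => a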
> In my plans for my AI I will turn the delay matching into, as Hinton calls
> it, "equivalence", so that "h e l l o" matches "hello": the input has many
> spaces, but most are ignored since they are 'all' spaced, so the error is
> not as big as it would normally think; it matches less only if there is no
> pattern and much change. As well, I know how to teach it abcdefg, show it
> gfedc, and have it predict the rest backwards. The idea is that it matches
> a lot to the non-backwards memory, and then, instead of predicting the
> tail's next letter, it predicts the delayed order as the input is seen; it
> predicts the rest of the memory. And I don't think humans are good at this
> pattern; we can only do it by hand, using lots of resources, stressfully.

Your thinking is not abstract enough, and you talk about specific tasks
without a theory to unify them :)

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Tb5526c8a9151713b-M3c65c07d787c4f7c454ab38f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
