Re: [agi] I was wondering if the mathematics of superposition can be used effectively in AGI

2022-02-23 Thread Jim Bromer
I was wondering if some of the most elementary mathematics of superposition could be borrowed and shaped effectively for AGI. Is there any chance that something derived (or at least inspired) from the field might be useful for our own purposes? I came to the conclusion that there probably is

Re: [agi] I was wondering if the mathematics of superposition can be used effectively in AGI

2022-02-20 Thread Jim Bromer
Thanks for the comments. It is going to take me a while to understand what you are saying - I need to focus on a job right now. -- Artificial General Intelligence List: AGI Permalink:

[agi] Re: I was wondering if the mathematics of superposition can be used effectively in AGI

2022-02-19 Thread Jim Bromer
The teacher gets into the subject at 9 minutes 45 seconds.

[agi] I was wondering if the mathematics of superposition can be used effectively in AGI

2022-02-19 Thread Jim Bromer
I do not believe in quantum physics. I don't believe that it describes what is at the heart of the puzzle of modern physics. But I am not opposed to the use of various mathematical principles or anything like that. I was wondering if the principles of superposition might be used more

Re: [agi] All Compression is Lossy, More or Less

2021-11-23 Thread Jim Bromer
"Algorithmic Randomness is very clear: A random string of bits cannot be represented as a program in fewer bits." Your definition is the opposite of what I would consider to be random - unless you are defining randomness relative to a special bounded object. Which is what I was saying. I realized
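The quoted definition can be illustrated roughly in code, using zlib output length as a crude stand-in for program length (the numeric bounds below are loose heuristics for this illustration, not part of the formal definition):

```python
import os
import zlib

# A crude stand-in for Kolmogorov complexity: compressed length.
repetitive = b"ab" * 5000          # highly ordered, 10,000 bytes
random_ish = os.urandom(10_000)    # 10,000 bytes from the OS RNG

len_rep = len(zlib.compress(repetitive, 9))
len_rnd = len(zlib.compress(random_ish, 9))

# The ordered string shrinks enormously; the random-looking one
# essentially does not (it typically gains a few bytes of header).
assert len_rep < 100
assert len_rnd > 9_000
```

This only approximates the claim (zlib is not a universal machine), but it shows the asymmetry the definition is pointing at.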

Re: [agi] All Compression is Lossy, More or Less

2021-11-23 Thread Jim Bromer
Since you have to have some kind of ordering (or system of orderings) to define random within, I believe you are going to find that random is somewhat problematic.  For example, you might talk about a range for the value of randomness. Why should randomness be binary? But while this seems like

Re: [agi] All Compression is Lossy, More or Less

2021-11-20 Thread Jim Bromer
So if I see a string of counting numbers with a length greater than 3 or 4, I would conclude that those numbers are not "random" based on my experiences or samplings of strings of numbers, the psychology of elementary number theory and my awareness from thinking about this stuff. But that is

Re: [agi] All Compression is Lossy, More or Less

2021-11-20 Thread Jim Bromer
For example, you might take an extensive set of observed values (from some sensor or generator) and then discuss strings that did not occur as being random.  From that you might use statistical methods to determine the relative likelihood of substrings and use them to talk about relative

Re: [agi] All Compression is Lossy, More or Less

2021-11-20 Thread Jim Bromer
Randomness is not a strong mathematical object because it cannot be defined in strong mathematical terms. All the information entropy talk is informal arm-chair gab. Randomness can only be defined relative to a definition of a subset of all possible orderings for a finite set or collection of

Re: [agi] All Compression is Lossy, More or Less

2021-11-20 Thread Jim Bromer
John: There is no such thing as pure entropy. It may be useful as an idealization within an imagined containment or ideological structure but that is it.  There is no such thing as pure randomness except as an imaginary thing within a mathematical container.  If pure randomness is useful in

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-07 Thread Jim Bromer
Ideas have to be able to act on other kinds of ideas.

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-07 Thread Jim Bromer
Previous Training will not always be more primitive than a product of abstract thinking, but in *order to implement the insight that metacognition can provide* you need to be able to symbolically represent information in order to transform it (if you will) into information that is much more

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-07 Thread Jim Bromer
ANNs and DLNs (and contemporary Transformers) do not have mechanisms to implement metacognition. Metacognition requires the use of symbolic representations in order to think about your own thinking from a perspective that will be different from the habitual reactions of some more primitive

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-03 Thread Jim Bromer
The idea of a basic hypergraph is more mundane than I thought.  When I said that any part of data has to be "globally accessible" in the mundane sense I meant that in order for it to be used it has to be accessible, either directly or indirectly. A hierarchy might be used just to provide

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-01 Thread Jim Bromer
Brett, I will try to write a reply a few days from now.

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-01 Thread Jim Bromer
If logic (traditional logic) is used to refer to numerical values (like weighted values) then the evaluation of various logical combinations will require some numerical system that would be (or might be) governed by logic but which exists outside the logic representing the values. Similarly, if

[agi] Re: Developmental Network

2021-11-01 Thread Jim Bromer
I would like to study this more carefully. I will try to look at it next week.

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-01 Thread Jim Bromer
Brett: I think your Adaptron could be used to create interpretable networks of simple relationships between ideas. Interpretable Networks was the idea that got me interested in posting to this group again. However, I do not think that your overarching theory would be necessary and after quickly

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-01 Thread Jim Bromer
Quan: Yes, or Not Sure. I do not actually know what you mean. The problem with the simplistic answer that you demanded is that I am not sure what you mean by, my "approach to programmable logic" and it is complicated even more because of your use of the terms, "programmable", and "logic".  Your

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-27 Thread Jim Bromer
Nano: You do not seem to understand what I am saying and you constantly seem to add your own  interpretations which are wrong. Your comments about narrow AI are not relevant to what I am talking about. In order to communicate with other people you have to have the ability to recognize your own

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-27 Thread Jim Bromer
Nanograte, I said: "I wasn't thinking of hypergraphs as being completely connected all of the time, since relationships in AI are conditional." Your response was, "This should be true for narrow AI in reductionism only, which is the more physical (in the sense of programmable) aspect of any,

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-23 Thread Jim Bromer
Brett: You say that Adaptron uses compositional hierarchies of binary neurons (binons) as its representational system, and that as it learns, it builds up an integrated perception–action hierarchy of binons to represent its experiences. Are you saying that all knowledge is then stored in a

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-23 Thread Jim Bromer
Nanograte: I did not understand what it was that you were getting at. I still don't completely get it. And I wasn't thinking of hypergraphs as being completely connected all of the time, since relationships in AI are conditional. And I was also thinking that relations (or edges of hypergraphs) could

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-23 Thread Jim Bromer
Thank you for your offer Brett. I will take you up on it once I get some time. I just wanted to tell people that when I started working with some more focus, I realized that the algorithm I am starting to develop would not work. But when I started to think about it I remembered that there was

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-23 Thread Jim Bromer
I am working on a relatively simple computational project. I don't think it is going to work but I want to try it out anyway. However, as I try to design the algorithm I keep running into problems, some of which are manifesting themselves just because I am getting older.  But the main cause of

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-23 Thread Jim Bromer
Nanograte: I do not want to be rude, but if you have interpretable AI networks figured out then just apply those rules to an actual AI program.  I cannot understand what precisely you are saying, but the graph or model is never mature (in a constrained sense of graph theory) if you introduce

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-23 Thread Jim Bromer
Brett: I am looking forward to reading your book. But I am a slow reader. Jim

[agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-20 Thread Jim Bromer
I guess an interpretable network of (graph) components (or subcomponents) that uses hypergraphs to refer to them could work in the way I am thinking. To use them in a variety of what-if scenarios the lines as well as the nodes could be varied. But I think this would still be in the Relevancy

[agi] Ideas Have to Act on Other Kinds of Ideas

2021-10-20 Thread Jim Bromer
As I was thinking about a few presentations I realized that my ideas about more human-like AI must include something that acts a little like a trial and error method even when the subject matter is well understood. I think the component method of AI is necessary. The components would represent

Re: [agi] Re: The AGI-21 Presentations Are Interesting

2021-10-20 Thread Jim Bromer
I was disappointed by the end of Linas's presentation because he seemed to be promoting the method as scalable although he did not say anything about the complexity (complicatedness) of the integration of ideas that would need to be built on numerous and complicated references.  A concept has

[agi] Re: The AGI-21 Presentations Are Interesting

2021-10-18 Thread Jim Bromer
Abstract representations might be useful when the abstractions refer to reasoning along with the sources of the 'objects' used in the reasoning rather than just some resultant that is stripped of the derivation of its sources.

[agi] Re: The AGI-21 Presentations Are Interesting

2021-10-18 Thread Jim Bromer
Abstract representations might be useful when the abstractions refer to reasoning itself along with the 'factoid' 'objects' of reasoning.

[agi] The AGI-21 Presentations Are Interesting

2021-10-18 Thread Jim Bromer
I am really enjoying listening to AGI-21 presentations. I liked Ben's presentation and I learned something from it although I can't remember most of it off hand. I also got something about the discussion about databases even though that is more of a user-group thing. But I find myself heartily

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread Jim Bromer
The proposition that my unproven speculation only needs some tweaking but your unproven speculation is completely wrong is a weak proposition that I have often seen in these AI discussion groups. Having seen it in others, I am wary of it popping up in my own thinking.

[agi] Re: Attention is All you Need

2021-05-10 Thread Jim Bromer
I appreciated the links to the transformers. I found a slightly more readable one and see that the first step of transformer use in NLP is to turn words into embedding and positional vectors that indicate more than just co-occurrence. I appreciate that. But then the phraseology becomes confused

Re: [agi] Re: Making an AI from a compressor

2021-04-29 Thread Jim Bromer
I was reading your comment that, "Storage is transmission," and I realized, based on an idea I had a number of years ago, that if digital data was in constant transmission form then the null signal could be used for a value (like 0).  That can't be done when data is stored in a contemporary  

Re: [agi] Re: Making an AI from a compressor

2021-04-29 Thread Jim Bromer
Free Lunch? Maybe https://jamesbromer.com/Freelunch-Maybe.html

Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread Jim Bromer
Malleable compression is an interesting way to put it.

Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread Jim Bromer
In response to something I said about cross-generalization, John Rose replied "  You can optimally compress some of the data all of the time, you can optimally compress all of the data some of the time, but you can’t optimally compress all of the data all of the time. It is what it is bruh."

[agi] Re: Making an AI from a compressor

2021-04-27 Thread Jim Bromer
There is something about cross-generalizations and cross-categorizations or overlapping insights about concepts (or components of concepts) that make me think of a compression method that would not be an optimal compressor for a narrow task but would be highly sophisticated for AI / AGI. But I

[agi] Re: It is sometimes useful to expand data before compressing it

2021-04-22 Thread Jim Bromer
There are no vectors in the use of CNNs to detect spoken keywords that I mentioned in the first post. I only mentioned vector spaces after immortal.discoveries mentioned something about recognition of various sizes. I guess bringing vector spaces into the discussion was a mistake on my part. I

[agi] Re: It is sometimes useful to expand data before compressing it

2021-04-21 Thread Jim Bromer
All the vectors in a vector space that contain a magnitude could be multiplied by a constant in order to scale it.  Similarly, the directions of all the vectors could be rotated by a constant in order to rotate the image or objects defined by the vectors.  If the vector space is in more than 2
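The scaling and rotation described above can be sketched minimally for the 2-D case (the specific vectors and constants below are only illustrative):

```python
import math

def scale(v, k):
    """Multiply every component of a vector by a constant k."""
    return [k * x for x in v]

def rotate2d(v, theta):
    """Rotate a 2-D vector by theta radians about the origin."""
    x, y = v
    return [x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta)]

# Scaling every vector by the same constant scales every magnitude,
# so an image defined by the vectors is uniformly resized.
v = [3.0, 4.0]
assert math.hypot(*scale(v, 2)) == 2 * math.hypot(*v)

# Rotating every vector by the same angle rotates the whole image.
w = rotate2d([1.0, 0.0], math.pi / 2)  # close to [0.0, 1.0]
```

In more than 2 dimensions the rotation needs an axis (or a rotation matrix), but the same idea of applying one constant transformation to all vectors holds.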

[agi] Re: It is sometimes useful to expand data before compressing it

2021-04-21 Thread Jim Bromer
To put it in a more abstract form, some useful data that can be sensed from nature is effectively compressed and therefore would have to be decompressed (expanded) in order to extract it so that it could then be compressed in another form. AI is not just an entry in a compressor contest, it is

[agi] Re: It is sometimes useful to expand data before compressing it

2021-04-21 Thread Jim Bromer
Not all 'integral' objects can be recognized from visual edge filters. Some years ago I gave myself a few days to make an edge detector myself and I got amazing results where a person or a solid object was a different color and contrasted against the background. But when the pixels were mixed

[agi] Re: It is sometimes useful to expand data before compressing it

2021-04-21 Thread Jim Bromer
I am not familiar with what you are talking about. Can it be used for simple speech recognition?

[agi] It is sometimes useful to expand data before compressing it

2021-04-21 Thread Jim Bromer
I am taking a free course on Tiny Machine Learning. I was wondering why they converted speech (for an extremely simple recognition task) into (a form of) imagery. Part of the reason is that by extracting frequencies of the speech (with a Fourier Transform) the output could be simplified by
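The speech-to-imagery conversion being described can be sketched as follows: split the signal into frames and take each frame's frequency magnitudes, so the result is a 2-D array, i.e. (a form of) image. This is a naive DFT, not the course's actual pipeline, and the frame length and test tone are illustrative choices:

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT: magnitude of each frequency bin for one frame."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(signal, frame_len=64):
    """Each row is one frame's frequency magnitudes, so the whole
    result is a 2-D 'image' of the signal over time."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [dft_magnitudes(f) for f in frames]

# A pure tone at 8 cycles per frame: energy concentrates in bin 8.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

Real systems use an FFT plus mel filtering, but the simplification mentioned in the post (recognizing frequency patterns instead of raw waveforms) is already visible in this toy version.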

[agi] Re: Attention is All you Need

2021-04-21 Thread Jim Bromer
Shifting *local* windows for visual processing were emphasized in the video. It really gets me to think about how these can be applied to other AI applications.

[agi] Re: Attention is All you Need

2021-04-21 Thread Jim Bromer
That explains a lot. The link that I sent, DeepMind x UCL | Deep Learning Lectures | 8/12 | Attention and Memory in Deep Learning - YouTube, showed attention and window shifts but I was not able to fully integrate that into my thinking about

[agi] Re: Attention is All you Need

2021-04-20 Thread Jim Bromer
When I said that ANNs used linear approximations you knew what I meant because 'you are in the club.' But a newbie might have been confused and thought something like, "So that's how Neural Networks work. They use linear approximations." Seeing this I will try to find better phrases like - they

[agi] Re: Attention is All you Need

2021-04-20 Thread Jim Bromer
Transformer Attention does seem to be more than just those two fundamental points. I do not want to spend a lot of time working with NNs (other than on my TinyML projects) but I do want to get a better understanding about how these things work and then apply some of the ideas to some slightly

[agi] Re: Attention is All you Need

2021-04-19 Thread Jim Bromer
I have been watching this video. I can intuitively follow most of what he is saying. DeepMind x UCL | Deep Learning Lectures | 8/12 | Attention and Memory in Deep Learning - YouTube

[agi] Attention is All you Need

2021-04-19 Thread Jim Bromer
For those of you who can understand it: Attention is All you Need (nips.cc)

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-19 Thread Jim Bromer
I think that since the input to a node (in a NN) can consist of multiple scalars, it is sometimes called a vector. However, it is by no means certain that the use of the term is appropriate. From Wikipedia: Tuples that are not really vectors
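The usage being questioned can be made concrete: the "vector" is just the ordered tuple of scalar inputs, and the node takes its dot product with a weight tuple. A minimal sketch (ReLU is an arbitrary choice of activation here):

```python
def node_output(inputs, weights, bias):
    """One NN node: dot product of the input tuple ('vector') with
    the weight tuple, plus a bias, through a ReLU activation."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, s)

# 1*0.5 + 2*(-0.25) + 3*0.1 + 0.2 = 0.5, which ReLU passes through.
y = node_output([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], 0.2)
```

Nothing in this computation uses a geometric direction; the tuple is "vector-like" only in the sense that dot products and matrix notation apply to it.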

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-19 Thread Jim Bromer
That use of the term "vector" is confusing. But, in my opinion it is also sometimes used pretentiously.  In a typical neural network the direction of an input or an output to a node is not encoded in the input or output itself. But the input is coming from some other node or nodes and the

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-19 Thread Jim Bromer
Matt's use of the term "vector" helped me see why they use that term.

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-19 Thread Jim Bromer
The values of the Input and Output to a Node carry no information of direction, so their casual description as "Vectors" is formally pretentious even though that definition does help me see what they are getting at.

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-19 Thread Jim Bromer
The values of the Input and Output of a neural network carry no information of direction.

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-19 Thread Jim Bromer
Descartes was a 17th Century mathematician.

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-19 Thread Jim Bromer
Because I have been studying a little ML and DL in a TinyML course for using DL for microcontrollers (Simple sensors and activators for IoT kinds of things) I am starting to read more about DL. I have studied a lot of mathematics but I do not remember most of it and there are a lot of things

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-17 Thread Jim Bromer
And after you have had time to see it and relate it to other working models, you can then formalize it if you think it is helpful.

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-17 Thread Jim Bromer
I did not understand what they were saying well enough but the fundamentals of my criticisms are relevant - as are yours. The reference to GAN was something that I reacted to in the same way you did when I first read it, although I can appreciate it better now that I have had time to think about it.

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-17 Thread Jim Bromer
I did not understand what they were talking about enough but the fundamentals of what I said we

[agi] Re: Thursday, March 25, 2021 Constructing Transformers For Longer Sequences with Sparse Attention Methods

2021-04-17 Thread Jim Bromer
That is an interesting paper. Unfortunately, as I tried to follow up with their references I quickly discovered that the reference I landed on was written in a technical (abstract) form that I could not interpret. Although I think mathematics and formalizations expressed in mathematical terms

[agi] P=NP?

2021-03-31 Thread Jim Bromer
If a maximally efficient logical notation system, designed to express a system that can be denoted efficiently by our contemporary logical notation system and which was more efficient than the current system, were feasible, then equivalences would be possible only by going outside the notational

[agi] Can Computer Algorithms Learn to Fight Wars Ethically?

2021-02-27 Thread Jim Bromer
Future warfare will feature autonomous weaponry - The Washington Post

Re: [agi] Patterns of Cognition

2021-02-27 Thread Jim Bromer
I thought that your abstract contained terms that should have been explained.   Is your use of the term 'directed metagraph' referring to something similar to a directed graph but which is more abstract than or is abstracted from the more concrete graphs that would be used by the system to

Re: [agi] Patterns of Cognition

2021-02-27 Thread Jim Bromer
I thought your use of th

[agi] Re: I still say there are free lunches but...

2021-01-24 Thread Jim Bromer
So I did find some interesting ideas trying to see how trinary and binary might be related but I did not discover any breakthroughs of any kind.  And, the exaggerated search for narrow efficiencies can lead you to infinite dead-ends.  I did discover a new way to convert from trinary to binary
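The trinary-to-binary conversion mentioned above has an obvious baseline that any new scheme would be compared against: round-trip through an integer. This is not the author's new method, just the straightforward one:

```python
def trinary_to_binary(t):
    """Convert a base-3 digit string to a base-2 digit string by
    going through a Python int (the obvious baseline conversion,
    not the new method described in the post)."""
    return bin(int(t, 3))[2:]

# '1020' in base 3 is 1*27 + 0*9 + 2*3 + 0 = 33 = 0b100001.
assert trinary_to_binary("1020") == "100001"
```

A genuinely new conversion would have to beat this in some cost measure (digit operations, circuit depth, etc.) to count as an efficiency.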

[agi] Re: I still say there are free lunches but...

2021-01-20 Thread Jim Bromer
I said that the n-ary system, n>1, is a free lunch relative to the natural counting system of unary numbers. One can argue against that by saying that there is no free lunch in an n-ary system (like binary or decimal representations) and the computational methods of arithmetic that we use with

[agi] Re: I still say there are free lunches but...

2021-01-19 Thread Jim Bromer
The immediate issue is whether I can do anything efficiently useful using combinations of binary and trinary numbers that could be used in a future analog computer that was able to represent numbers in different n-ary forms. If I could that would be weak preliminary evidence that the

[agi] Re: I still say there are free lunches but...

2021-01-19 Thread Jim Bromer
If it isn't for combinations of n-ary systems, which could be implemented on analog computers, then this opens the possibility of much more efficient computers - within our lifetimes. And it brings another question to the forefront: What else might we have been missing?

[agi] Re: I still say there are free lunches but...

2021-01-19 Thread Jim Bromer
My recollection is that the no free lunch theory (or conjecture or belief) means that there is no perfect compression method that will compress all possible values. My counter argument is that the n-ary (or base n) system of representing numbers where n>1 proves that conjecture wrong when you
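The counter-argument can be made concrete: positional base-n notation (n > 1) is exponentially shorter than unary tallying, which is the "free lunch relative to unary" claimed in the next post. A sketch:

```python
def unary_len(n):
    """A unary ('tally') representation needs n marks for the number n."""
    return n

def base_len(n, b):
    """Digits needed to write n in base b (b > 1): roughly log_b(n) + 1."""
    digits = 0
    while n:
        digits += 1
        n //= b
    return max(digits, 1)

# One million tally marks versus 20 binary digits.
n = 1_000_000
u = unary_len(n)    # 1000000 marks
v = base_len(n, 2)  # 20 bits
```

Whether this counts against the no-free-lunch theorem is the point under dispute: the theorem concerns averaging over all inputs under a fixed encoding, whereas this gain comes from changing the encoding itself.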

[agi] I still say there are free lunches but...

2021-01-19 Thread Jim Bromer
There is no free lunch when you want to find an abstract system that will be capable of compressing all possible expressions, but there are free lunches in the practical day to day world where you do not need a 'perfect' system. Seems reasonable and yet, even that sensible statement is wrong.

[agi] Re: Are all AI's like this?

2020-10-13 Thread Jim Bromer
I do not want to parse everything immortal.discoveries said, but my feeling is that as an AI program learns more, it will need to keep relatively more specialized data and it will need to create more 'indexes' (or something that acts like an index) into the data. So it will create an

Re: [agi] There is such a thing as a Free Lunch

2020-10-09 Thread Jim Bromer
I think we have to explore in order to learn - and in order to utilize our knowledge. But it is nicer when we can do so wisely.  So we explore areas that we do not know well in order to expand our knowledge which includes the application of knowledge that we have already acquired. But when we

Re: [agi] There is such a thing as a Free Lunch

2020-10-04 Thread Jim Bromer
So I guess my point of view would be that mathematics, in the contemporary common sense of the term, is not adequate for AGI. I do not think that human beings have ultimate compression-decompression systems that are responsible for thinking  although some kind of effective compression

Re: [agi] There is such a thing as a Free Lunch

2020-10-03 Thread Jim Bromer
I was thinking about it and the No Free Lunch theory is relevant to a large number of computational algorithms, but it is not a problem when the time differences (between runs of an algorithm) do not bother us. By thinking about simpler problems I was able to start to see some of the related

Re: [agi] There is such a thing as a Free Lunch

2020-10-01 Thread Jim Bromer
I am learning a little. The no free lunch theorem depends on the analysis of an imperfect system that will lead to perfect knowledge. But in the real world it is possible that someone might come up with a solution to a complicated problem that would have to be heavily developed in order to be

Re: [agi] There is such a thing as a Free Lunch

2020-09-30 Thread Jim Bromer
Danko, I think your comment is closer to a reasonable definition but I am not sure that is the common definition.  The no free lunch idea is a very useful bit of insight but treating a rule that does not typically transcend the premises of its application as if it could is a mistake or at least

Re: [agi] There is such a thing as a Free Lunch

2020-09-30 Thread Jim Bromer
A 'free lunch' is possible because human beings do not know everything. That means that new efficiencies can be discovered. Even though you may discuss a framework as if it were the only basis to achieve a goal, the argument does not prove the premise that the particular framework you have in

Re: [agi] There is such a thing as a Free Lunch

2020-09-27 Thread Jim Bromer
It sounds like you are predicting that you will be unable to make any progress in the versions of ML that you have in mind.

Re: [agi] There is such a thing as a Free Lunch

2020-09-27 Thread Jim Bromer
If there were no such thing as a free lunch, all progress would have been impossible. By falling into your presumptions you lock yourself into them.

Re: [agi] There is such a thing as a Free Lunch

2020-09-26 Thread Jim Bromer
And it is relevant to the task and equipment that can be used on the task. My point is that there may be issues, like compression, that may be designed specifically for AI or AGI which may work better than more general compression methods. But that also means that they may not be as effective

Re: [agi] There is such a thing as a Free Lunch

2020-09-22 Thread Jim Bromer
Since you expend calories on things other than getting more calories, that implies that every lunch that is sufficient to sustain you must be partially free. You get more calories from lunch than you expend getting it. Suppose that there was some supremely efficient characterization of a

Re: [agi] There is such a thing as a Free Lunch

2020-09-22 Thread Jim Bromer
That is a very interesting point, but in talking about a uniform probability distribution over an infinite set you are characterizing the issue as if it were the universal characterization that underlies any characterization (of all possible universal generalizations).

[agi] There is such a thing as a Free Lunch

2020-09-22 Thread Jim Bromer
If there were no such thing as a free lunch then we would all be living in the Stone Age. Every advancement is based on some kind of efficiency. Yes, those achievements come at a cost. So there may be a relative trade-off but the loss of generality from a purely imaginary (unattainable ultimate

[agi] Heavily Typed Conceptual Language

2020-02-20 Thread Jim Bromer
I have not been able to come up with a way to overcome p!=np in logic, so I am thinking about developing a heavily typed logic (logic-like references) as a way to get around the bottleneck of exponential complexity. However, I have run into some difficulties there as well. I would like an object

[agi] Re: Mathematical Model of Intuition

2020-02-19 Thread Jim Bromer
A mathematical model of telepathic intuition?

[agi] Re: This Professor’s ‘Amazing’ Trick Makes Quadratic Equations Easier

2020-02-13 Thread Jim Bromer
I wasn't going to spend much time on this but the problem may be more subtle than you seem to appreciate. The solution for one or two points on a curve contains less information than the equation of the curve. And that made me realize that a curve or even a straight line has an infinite number

[agi] Re: Test your knowledge of probability theory

2020-02-12 Thread Jim Bromer
If you flipped a coin and it came up heads then the probability that it came up heads is 1. Was that a trick question?

Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-12 Thread Jim Bromer
Wow, I did not even see this discussion before I wrote my comments about free stuff in another thread. I do not remember (and do not care right now) where the no-free-lunch concept started. But I see it as: there is a cost somewhere. But there can always be efficiencies that have not been

[agi] Re: This Professor’s ‘Amazing’ Trick Makes Quadratic Equations Easier

2020-02-11 Thread Jim Bromer
The given solution to the parabola is a solution of the intersection of a straight line and the parabola. Is it possible that there is a solution to the intersection of a cubic equation and a quadratic equation that could be solved by solving for a quadratic equation? If the 'trick' can be

[agi] Re: This Professor’s ‘Amazing’ Trick Makes Quadratic Equations Easier

2020-02-11 Thread Jim Bromer
For instance, I can create a simple logical formula for any three-variable statement without using any of the variables in more than one sub-formula. However, I have to introduce a new kind of operation. I can probably do the same for any four-variable statement and so on, but the question is

[agi] Re: This Professor’s ‘Amazing’ Trick Makes Quadratic Equations Easier

2020-02-11 Thread Jim Bromer
Therefore your criticism is not relevant because you did not actually...

[agi] Re: This Professor’s ‘Amazing’ Trick Makes Quadratic Equations Easier

2020-02-11 Thread Jim Bromer
That is typical of exaggeration. You can often get things for free. Therefore your criticism is not relevant because you did not actually test your critical theory out in any way. I am unable to see how this solution, or the other surprising relations the conic sections have to a broad

[agi] Re: This Professor’s ‘Amazing’ Trick Makes Quadratic Equations Easier

2020-02-09 Thread Jim Bromer
Being able to solve a quadratic equation with a linear solution is a kind of compression, and it is a compressed operation (on the data), which is also important. I have not been able to show that it is a general solution (for parabolas that have been rotated, for example) but I think that it might

[agi] This Professor’s ‘Amazing’ Trick Makes Quadratic Equations Easier

2020-02-07 Thread Jim Bromer
What's this got to do with AGI? Maybe nothing. https://www.nytimes.com/2020/02/05/science/quadratic-equations-algebra.html

Re: [agi] Complexity - General Learning

2019-08-10 Thread Jim Bromer
son does not pick up on it. (Dig?) You should not get annoyed with other people when you are trying to explain cool. On the other hand, the Fonz did show some flashes of annoyance when Potsie or one of the other characters was being a little square. But keep your uncool brief, dude. That's you, not your squar

Re: [agi] Complexity - General Learning

2019-08-10 Thread Jim Bromer
Perhaps the detection of simple things that are composed of simpler but more general things is still rooted in metaphysics because we don't have good models (of how abstraction, composition and so on work) which can explain it adequately. Jim Bromer On Thu, Aug 8, 2019 at 1:06 PM Mike Archbold

Re: [agi] Complexity - General Learning

2019-08-10 Thread Jim Bromer
the individual features that occur in the data that contains something that it is trained to detect is why DL is not AGI. It is not even narrow AGI. It may become narrow AGI but it definitely is not there yet. Jim Bromer On Thu, Aug 8, 2019 at 12:46 PM Brett N Martensen wrote: > Jim, You are ri
