But the more significant question is how we choose the best concepts (or
sub-concepts, if you must) to apply to the new concept being formed. I think
the best way is to use existing concepts in ways that are similar to the
attempt to explain whatever phenomenon is the subject of the new concept.
But this isn't *just* an application of metaphor, for two reasons. First,
the application of a pre-existing concept may be genuinely appropriate
(especially at the particular level of concept formation at which it is
being used), so it might not be analogous (in the truest sense of the
word). Second, the application of the initial concepts can be followed by
the application of other concepts that shape how the new idea should be
formed.

The one thing that seems to be missing from contemporary AI and AGI is that
there is very little conscious application (from within the consciousness
of the programmer) of methods (or pseudo code) that would describe
abstractly (programmatically) how concepts could shape other concepts. So
while we have lots of variations of stock methods (logic, weighted
reasoning, prototype causation, and so on), there is very little discussion
of more sophisticated (abstract) methods. For example, if an abstract
concept (which is undefined) can use other concepts (also undefined) to
shape the formation of existing or new concepts, what is to stop them from
wiping good material out as they kick in, for preprogrammed reasons that
have never been shown to be highly reliable in the variety of situations
that might be expected to arise? This kind of question is as old as the
hills, but no one has answered it well without resorting to some kind of
magic fundamental basis that would verify one version (of a concept) over
another. So while it would seem to make sense to allow two versions of
(explanation-like) knowledge to exist during the refinement of a concept,
the next question is how the versions can be compared.
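To make that a little more concrete, here is a toy sketch of what I mean.
All of the names, features, and the scoring rule are entirely hypothetical
illustrations, not a proposal: one pre-existing concept lends its structure
to a new one, and two rival versions of the new concept are kept side by
side and compared only by how well each fits the same observations.

```python
# Toy sketch: concepts as feature sets that other concepts can reshape,
# with two rival versions scored against the same observations.
# Every name and rule here is a hypothetical illustration.

class Concept:
    def __init__(self, label, features=None):
        self.label = label
        # Start as little more than a label; features accrete later.
        self.features = dict(features or {})

    def shaped_by(self, other, overrides):
        """Return a new version shaped by another concept: inherit its
        features, then apply this formation's own overrides."""
        merged = {**other.features, **self.features, **overrides}
        return Concept(self.label, merged)

def explanatory_fit(concept, observations):
    """Crude comparison rule: count the observations that the concept's
    features agree with. Nothing about this rule is 'fundamental'."""
    return sum(1 for key, value in observations.items()
               if concept.features.get(key) == value)

# A pre-existing concept lends structure to a newly formed one.
pump = Concept("pump", {"moves_fluid": True, "has_moving_parts": True})
compressor_v1 = Concept("compressor").shaped_by(pump, {"raises_pressure": True})
compressor_v2 = Concept("compressor").shaped_by(pump, {"raises_pressure": False})

# Both versions coexist; neither is wiped out until compared.
observed = {"moves_fluid": True, "raises_pressure": True}
best = max([compressor_v1, compressor_v2],
           key=lambda c: explanatory_fit(c, observed))
print(best.features["raises_pressure"])  # the better-fitting version wins
```

Of course the hard part is exactly what this sketch waves away: where the
comparison rule comes from, and why it should be trusted across situations.
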

Jim Bromer

On Fri, Jan 23, 2015 at 8:22 PM, Aaron Hosford via AGI <[email protected]>
wrote:

> Well, we don't start with "nothing more than a label".
>
>
> Starting with just a label would make sense when someone uses an
> unfamiliar word. The contextual information would then quickly be applied
> to enhance the initial bare concept, but it would be little more than a
> placeholder at first.
>
> On Fri, Jan 23, 2015 at 6:11 PM, Samantha Atkins via AGI <[email protected]>
> wrote:
>
>>
>> On 1/22/15 7:21 PM, Aaron Hosford via AGI wrote:
>>
>>  So I can say
>>> something about the compressor in a jet engine with a jet propulsion
>>> engineer even though I don't know most of the details about a jet
>>> engine or about the compressors of jet engines.
>>
>>
>>  I think what enables this is that internally we work with changing
>> levels of detail, enhancing concepts as new information becomes available.
>> We start with a high-level description, which really consists of nothing
>> more than a label, and then progressively decorate that description with
>> details as we are exposed to them. Think of a class in OOP, and imagine
>> progressively adding members, properties, and methods as the need arises
>> rather than statically defining all of them up front. Adding these new
>> components to the class doesn't break existing code that doesn't expect
>> them to be there, but it enables new, more powerful code to be implemented
>> in terms of them. Likewise, new facets of an object or type of object
>> enable new expectations or thought patterns to be formed in terms of those
>> new facets without interfering with existing expectations or thought
>> patterns. As you learn more about jet engines and compressors, your mental
>> representation becomes richer, and so does your capacity for having
>> conversations about them.
>>
>>
>> Well, we don't start with "nothing more than a label".  There is a set of
>> attributions or generalizations for what is to be subsumed by the concept
>> even if that set is very "mushy" or intuitive.  OOP isn't a very good
>> analogy but even there you don't just create a Foo class with no idea of
>> what it is for or represents.   Of course you are correct that we refine
>> it.
>>
>> Whether you break existing code or not is quite orthogonal to concept
>> formation as such.  It is semi-orthogonal to OOP except we don't break
>> external messages ideally.  We can break internal expectations any way we
>> want.  That is encapsulation.  A refinement of a concept could result in
>> past associations and expectations around that concept becoming invalid.
>> That is actually not a bad thing.
>>
>> "When I was a child I spoke (applied and associated a concept) as a
>> child.."
>>
>> - samantha
>>
>>
>>
>> On Thu, Jan 22, 2015 at 5:59 PM, Jim Bromer via AGI <[email protected]>
>> wrote:
>>
>>> I meant to say:
>>> Now the thing is that instruction code is a kind of enumeration (as
>>> are most of the referential codes) but the value data may - in many
>>> cases - be something more.
>>>
>>> But I am both right and wrong about that.
>>>
>>> I wanted to ask the rhetorical question: Can an instruction code be
>>> something more than an enumeration just like I said that a value can
>>> be?
>>>
>>> However, after I formed this question I realized that value data can
>>> be something more than an enumeration just because it can refer to a
>>> dynamic system that can be superimposed on it and that system can be
>>> encoded somewhere else in the instructions or in the program. So if
>>> the data is typed, for example, then the extra power of the values is
>>> due to the algorithms that are used with that type of data, so data
>>> can be "something more," as I said, only because it can refer to other
>>> dynamic or multiple step instructions.
>>>
>>> However, with thought those systems may exist in other minds even if
>>> they are not explicitly described in a particular mind. So I can say
>>> something about the compressor in a jet engine with a jet propulsion
>>> engineer even though I don't know most of the details about a jet
>>> engine or about the compressors of jet engines.
>>>
>>> So in one sense I was wrong. The value data is not something more
>>> glorious than an enumeration. Technically I was right. The fact that
>>> certain data can be used in special ways does not mean that it is just
>>> an enumeration. And I am still right in the spirit of the idea, that
>>> some static data can implicitly refer to a set of instructions on how
>>> to use it.
>>>
>>> So then value data can also refer to more than one set of instructions.
>>> Jim Bromer
>>>
>>>
>>> On Thu, Jan 22, 2015 at 6:29 PM, Jim Bromer <[email protected]> wrote:
>>> > Look at the code for a computer program. Certain values represent
>>> > instructions and others represent data and others represent various
>>> > references to data. Suppose you had a computer that was nearly as
>>> > primitive as a Turing machine. Could you convert the whole program so
>>> > that the static data was all replaced by instruction values and the
>>> > programming instructions were replaced by value and reference data? I
>>> > mean, could this be virtually accomplished with something like a
>>> > universal Turing machine so that none of the original data was preserved
>>> > in its original form? Is there a way to make the instruction code do the
>>> > stuff that the parameters do and a way to make the parameters do the
>>> > stuff the instructions do - for that program?
>>> >
>>> > The point is that the distinction between instruction code and
>>> > parameter code is not set in stone. Now the thing is that instruction
>>> > code is a kind of enumeration (as are most of the references) but the
>>> > value code in the instruction data may - in many cases - be something
>>> > more.
>>> >
>>> > Is this off topic?
>>> >
>>> > Jim Bromer
>>>
>>>
>>> -------------------------------------------
>>> AGI
>>> Archives: https://www.listbox.com/member/archive/303/=now
>>> RSS Feed:
>>> https://www.listbox.com/member/archive/rss/303/23050605-2da819ff
>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>> Powered by Listbox: http://www.listbox.com
>>>
>>


