You might be able to think of ways to benefit the poor, but you would
have a lot of trouble implementing them. You might be able to help a
few people, but if you are like most of the rest of us, that would be
it.

So you think there are a lot of opportunities to use basic
implementation strategies to get an AI/AGI program to do something
beneficial in some way, and the only problem you foresee is the
coding? Why would that be difficult? For example, I think I could
develop a prototype of an AGI program using text only. Starting with
something like that would be simple because you can find code that
contains the basic forms for text IO. The problem I am having is that
even when I strip the plan down to what I think would be the minimum
for a simple database management program (of my own design), it still
cannot be done in the little time I have to code. And without any
reason to believe that I could get past something that would not work
very well, I don't have much commitment to get going on it.
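
Just to show how little is needed to get started, here is a rough
sketch in Python (the language, the commands, and the flat-file
'database' are all placeholders of my own invention, not part of any
real design) of a text-IO loop over a trivial store:

# Minimal sketch of a text-only prototype: read a line, act on it,
# store what was learned.  Every name and command here is a toy
# placeholder; a real design would replace all of it.

import json
import os

STORE_PATH = "store.json"   # stand-in for the database component

def load_store():
    if os.path.exists(STORE_PATH):
        with open(STORE_PATH) as f:
            return json.load(f)
    return {}

def save_store(store):
    with open(STORE_PATH, "w") as f:
        json.dump(store, f, indent=2)

def main():
    store = load_store()
    while True:
        line = input("> ").strip()
        if line == "quit":
            break
        if line.startswith("remember "):     # e.g. "remember sky is blue"
            key, _, value = line[len("remember "):].partition(" is ")
            store[key] = value
            print("ok")
        elif line.startswith("recall "):     # e.g. "recall sky"
            print(store.get(line[len("recall "):], "unknown"))
        else:
            print("unrecognized input")
    save_store(store)

if __name__ == "__main__":
    main()

A stub like that covers the basic forms of text IO in an afternoon; it
is everything after the stub that eats the time.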

You said:
"Values (rules about values) come into play as the AGI picks the next
thing to do.  But, we already know that early AGI doesn't have a
"values" structure to refer to.  To program one is really not much of
an option - it is too complex to "calculate" what the value of
something is.  To test the validity of my statement that it is too
complex to calculate, try it. Imagine that you are writing this into
code!"

I have tried to imagine writing that into code! (Why wouldn't I have?)
But the program has to be able to develop its own strategies to
'evaluate' some things, because that is a good strategy for a computer
program to use - in some cases. And the usefulness of logical
'evaluation' implies that some strategy for evaluating conceptual
relationships, other than simple numerical methods, would also be a
good one to use. But this would be complicated. I think the
opportunities you mentioned would be difficult to code as well - if
you wanted to avoid getting bogged down in code that is only good for
narrow AI. The problem is that once you commit to something that is
effectively narrow AI, all sorts of enticing shortcuts become
available, and you really need to keep those to a minimum.
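
To make the difficulty concrete, here is what a 'calculated' value
function starts to look like in a sketch of my own (Python again;
every feature and weight below is an arbitrary placeholder). The point
is not the code but how fast the hand-tuned constants and special
cases pile up:

# A deliberately naive "value calculator" - a sketch of why calculating
# value directly gets complicated.  Every feature and weight below is
# an arbitrary placeholder; a real system would need principled
# versions of each, and the list of special cases only grows.

def calculate_value(item, context):
    """Score how 'valuable' acting on `item` would be in `context`."""
    score = 0.0
    # Frequency: things seen often in context may matter more... or less.
    score += 0.3 * context.get("frequency", {}).get(item, 0)
    # Novelty: but rare things can matter too, so add an opposing term.
    score += 0.2 * (1.0 / (1 + context.get("frequency", {}).get(item, 0)))
    # Goal relevance: requires knowing the goals, which is the hard part.
    for goal in context.get("goals", []):
        if item in goal.get("related_items", []):
            score += 0.5 * goal.get("priority", 1.0)
    # ...and so on: cost, risk, timing, who benefits, side effects.
    # Each factor needs its own sub-calculation, and the weights
    # interact - which is exactly the complexity you are pointing at.
    return score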

I would start with a text-only program that can initially act only on
the simple 'opportunities' (or 'low-hanging fruit') of text - and
conversation, of course. But it should be clear that I don't want to
take all the shortcuts that sort of situation would offer. I want my
program to 'look' for opportunities on its own, so to speak. It may
not be possible for a program to do that at a very sophisticated level
from our point of view, but we know that computer programs are good at
some things that we are not so good at. So my view is that the program
should be able to pick up all sorts of patterns (opportunities) that
we would miss, and that is where I want to start. Having thought about
that, I concluded that it would have to look at the recombination of
all sorts of odd kinds of data in order to find a few combinations
that might be useful.
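
As a toy illustration of that recombination idea (again just my own
sketch, with invented data and an invented scoring rule), you can
brute-force pairings of stored items and keep only the combinations
that co-occur more often than chance would predict:

# Toy sketch of "recombination" search: pair up stored observations
# and keep combinations that co-occur more than their parts would
# predict.  The data, threshold, and scoring are all invented.

from itertools import combinations
from collections import Counter

# Each "observation" is just a set of tokens the program has stored.
observations = [
    {"rain", "umbrella", "wet"},
    {"rain", "wet", "cold"},
    {"sun", "warm", "dry"},
    {"rain", "umbrella"},
]

def surprising_pairs(observations, threshold=1.2):
    """Return token pairs that co-occur more often than independence predicts."""
    n = len(observations)
    singles = Counter(tok for obs in observations for tok in obs)
    pairs = Counter(
        pair for obs in observations for pair in combinations(sorted(obs), 2)
    )
    results = []
    for (a, b), joint in pairs.items():
        expected = (singles[a] / n) * (singles[b] / n) * n
        lift = joint / expected          # >1 means more common than chance
        if lift >= threshold:
            results.append(((a, b), round(lift, 2)))
    return sorted(results, key=lambda r: -r[1])

print(surprising_pairs(observations))

A program can grind through millions of such pairings without getting
bored, which is exactly the kind of thing we are not so good at.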
Jim Bromer


On Fri, Dec 4, 2015 at 5:27 PM, Stanley Nilsen <[email protected]> wrote:
> On 12/04/2015 11:24 AM, Jim Bromer wrote:
>>
>> If meta-data can be used to invoke rules, and rules (systems of rules
>> and conditional data) can be learned or acquired (perhaps implicitly)
>> then the program would have to have a way to govern the actions the
>> program might take. One way might be through the use of goals. But I
>> would want my program to be able to derive or develop some of its own
>> goals.
>
> The problem I see with goals is the way we tend to think of them. We humans
> set goals, change goals and dream of goals without knowing much about how we
> will make the goal happen.  We acquire ideas about reaching the goal and
> eventually take steps related to the goal. Fine, but we also have already
> developed strategies for pursuit. The AGI unit is far from developing much
> of anything, let alone a general strategy for reaching goals.
>
> In my thinking about AGI I rarely use the term goal, but rather think of
> governing the actions in terms of benefit.  Benefit ties things together for
> me.   If you lived in a country with really poor people, you would have very
> little trouble coming up with ways to benefit those poor.  And so it might
> be with the fledgling AGI.  The wannabe AGI is "functionality" poor, and
> needs to have more methods to increase the chance that it will be able to do
> something beneficial.   The AGI is a long way from having a world concept
> that allows it to assess what is beneficial to others.
>
> Values (rules about values) come into play as the AGI picks the next thing
> to do.  But, we already know that early AGI doesn't have a "values"
> structure to refer to.  To program one is really not much of an option - it
> is too complex to "calculate" what the value of something is.  To test the
> validity of my statement that it is too complex to calculate, try it.
> Imagine that you are writing this into code!
>
> What's the alternative to calculating a value factor?   Adoption (my
> preferred term).
>
> What I mean by Adoption is acquiring a "behavior" that the AGI could
> perform and, along with the instructions to implement the behavior, also
> acquiring the data that tells the AGI when the behavior applies and how
> much it matters.  When is this behavior to be used? What are the triggers,
> or combinations of triggers? And how significant is this behavior in
> terms of its priority to be executed?
>
> In my design concept, this package of information is referred to as the
> "opportunity."  I like the term opportunity because we relate to it as human
> beings.  People can share opportunities with each other. In describing an
> opportunity we say when it can be done, and we give a rough idea of why
> it is considered important, or at least offer a recommendation.  It is
> the recommendation that is of value to us if we ever come to a situation
> where the opportunity is an option for the moment.
>
> If the AGI had a large database of opportunities available to it, wouldn't
> that be smart!  It could probably produce some benefit.
>
> Stan

