Rules, policies,
everywhere a statement,
everywhere a sentence.

How we agree on rules:
parliamentary procedure --
a vocal editing of text.

How we learn rules:
we read or hear their statement,
or we infer the local custom,
by observing the examples.

How do we implement rules:
with love for safety and co-operation,
we enact them in ourselves,
for those that don't self-discipline
with policy enforcement officers,
a reminder of the fear of disorder.

How are rules used:
For safety, for co-operation.
For communication.

Text, text, text,
it's what we revolve around,
not here, not there,
but everywhere,
we find it.

Poetically,
Logan

On Tue, Dec 8, 2015 at 7:39 PM, Jim Bromer <[email protected]> wrote:

> I think I understand what you are getting at, and it makes a lot of
> sense. You and Aaron have convinced me that I should spend more time
> working on my AI / AGI project, but unfortunately I still do not seem
> to have the time to work on it.
>
> I think I do have some good ideas about things like artificial
> imagination, which is important.  And curiosity is something that I
> have always felt was easy.
> Jim Bromer
>
>
> On Sun, Dec 6, 2015 at 3:41 PM, Stanley Nilsen <[email protected]>
> wrote:
> > Thanks for giving this some thought, Jim.  I'm going out of town for a few
> > days, so don't consider silence to be a loss of interest.
> >
> > One of your comments was:
> >
> > "But the program has to be able to develop its
> > own strategies to 'evaluate' some things because that is a good
> > strategy for a computer program to use - in some cases. And the
> > usefulness of logical 'evaluation' implies that some strategy for
> > evaluating conceptual relationships other than simple numerical
> > methods would also be a good strategy to use."
> >
> > ---------------------
> > My problem with the program developing "its own strategies to
> > evaluate..." is that strategy is not a strength of a child. Somehow
> > children acquire the ability to put 2 and 2 together, but we haven't
> > discovered how to get a machine to do it.  What's the machine
> > equivalent of curiosity?  I'm not convinced that we have an adequate
> > "big" picture to see how the pieces will eventually fit together.
> >
> > The big picture looks kind of like "design and make a system that
> > works, even if one needs to substitute human effort for some of the
> > components."  Then, when the system is in place, determine how to
> > remove more and more of the human element.  Eventually one is left
> > with a system that may interface with humans, but only as though
> > using them as a resource.
> >
> > By the way, I think a text-only approach is a good start.  I'm
> > interested in looking at the use of words as a way to convey
> > "benefit."  Initial design is interesting because there are so many
> > words and phrases to choose from.  I get it that this sounds like a
> > chat bot, but for me it's a way of experimenting with the idea of a
> > benefit-driven system.
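> >
> > (A toy version of the benefit-driven idea, in Python, just as an
> > experiment; the candidate phrases and their "benefit" scores below
> > are invented placeholders, not a worked-out vocabulary:)
> >
> >     # Each candidate phrase carries an assumed "benefit" score, and
> >     # the system simply picks the highest. Scores are invented.
> >     CANDIDATES = {
> >         "Here is what I found.": 0.7,
> >         "I do not know yet.": 0.2,
> >         "Can you say more?": 0.5,
> >     }
> >
> >     def pick_reply(candidates):
> >         return max(candidates, key=candidates.get)
> >
> >     print(pick_reply(CANDIDATES))  # "Here is what I found."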
> >
> > Stan
> >
> >
> >
> > On 12/06/2015 09:17 AM, Jim Bromer wrote:
> >>
> >> You might be able to think of ways to benefit the poor, but you
> >> would have a lot of trouble implementing them. You might be able to
> >> help a few people, but if you are like most of the rest of us, that
> >> would be it.
> >>
> >> So you think that there are a lot of opportunities to use basic
> >> implementation strategies to get the AI/AGI program to do something
> >> that would be beneficial in some way? But the only problem that you
> >> foresee is the coding? Why would that be difficult? For example, I
> >> think that I could develop a prototype of an AGI program using text
> >> only. If you start with something like that, it would be simple to
> >> get started, because you can find code that contains the basic forms
> >> for text IO. The problem that I am having is that even when I strip
> >> the plan down to what I think would be a minimum for a simple
> >> database management program (of my own design), it still cannot be
> >> done in the little time I have to code; and without any reason to
> >> believe that I could get past something that would not work too
> >> well, I don't have much commitment to get going on it.
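> >>
> >> (To make "the basic forms for text IO" concrete, here is a minimal
> >> sketch in Python; the respond() function and its memory list are
> >> placeholders of my own invention, not a settled design:)
> >>
> >>     import sys
> >>
> >>     def respond(line, memory):
> >>         # Placeholder behavior: remember every input, acknowledge it.
> >>         memory.append(line)
> >>         return "noted (%d items stored)" % len(memory)
> >>
> >>     def main():
> >>         memory = []
> >>         for line in sys.stdin:  # one line of text in, one line out
> >>             line = line.strip()
> >>             if line == "quit":
> >>                 break
> >>             print(respond(line, memory))
> >>
> >>     if __name__ == "__main__":
> >>         main()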
> >>
> >> You said:
> >> "Values (rules about values) come into play as the AGI picks the next
> >> thing to do.  But, we already know that early AGI doesn't have a
> >> "values" structure to refer to.  To program one is really not much of
> >> an option - it is too complex to "calculate" what the value of
> >> something is.  To test the validity of my statement that it is too
> >> complex to calculate, try it. Imagine that you are writing this into
> >> code!"
> >>
> >> I have tried to imagine writing that into code! (Why wouldn't I have
> >> tried to imagine that?) But the program has to be able to develop its
> >> own strategies to 'evaluate' some things because that is a good
> >> strategy for a computer program to use - in some cases. And the
> >> usefulness of logical 'evaluation' implies that some strategy for
> >> evaluating conceptual relationships other than simple numerical
> >> methods would also be a good strategy to use. But this would be
> >> complicated. I think the opportunities that you mentioned would be
> >> difficult to code as well - if you wanted to avoid getting bogged
> >> down in code that is good for narrow-AI. The problem is that once
> >> you make the commitment to do something that is effectively
> >> narrow-AI, all sorts of enticing shortcuts become available, but you
> >> really need to keep them to a minimum.
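> >>
> >> (As a toy illustration of evaluating a conceptual relationship by
> >> something other than a simple numerical method, here is a sketch in
> >> Python; the facts and relation names are invented for the example:)
> >>
> >>     # Relationships are looked up and chained symbolically rather
> >>     # than reduced to a score. The facts below are invented.
> >>     RELATIONS = {
> >>         ("dog", "is_a"): "animal",
> >>         ("animal", "needs"): "food",
> >>     }
> >>
> >>     def evaluate(concept, relation):
> >>         # Follow is_a links until the relation is found or we run out.
> >>         while concept is not None:
> >>             value = RELATIONS.get((concept, relation))
> >>             if value is not None:
> >>                 return value
> >>             concept = RELATIONS.get((concept, "is_a"))
> >>         return None
> >>
> >>     print(evaluate("dog", "needs"))  # "food", inherited via is_a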
> >>
> >> Using a text-only program that starts out able to act only on the
> >> simple 'opportunities' (or 'low-hanging fruit') of text (and
> >> conversation, of course) is where I would start. But it should be
> >> clear that I don't want to take all the shortcuts that sort of
> >> situation would offer. So I want my program to 'look' for
> >> opportunities on its own, so to speak. It may not be possible for a
> >> program to do that at a very sophisticated level from our point of
> >> view, but we know that computer programs are good at some things
> >> that we are not so good at. So, my point of view is that the program
> >> should be able to pick up all sorts of patterns (opportunities) that
> >> we would miss, and that is where I want to start. Having thought
> >> about that, I concluded that it would have to be looking at the
> >> recombination of all sorts of odd kinds of data in order to find a
> >> few combinations that might be useful.
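> >>
> >> (A crude sketch of the kind of recombination I mean, in Python;
> >> treating each line of text as a bag of words and keeping the word
> >> pairings that recur is my own simplification, just to show the shape
> >> of the search:)
> >>
> >>     from collections import Counter
> >>     from itertools import combinations
> >>
> >>     def find_combinations(lines, min_count=2):
> >>         # Recombine words across lines and keep the pairs that
> >>         # recur, a stand-in for patterns a person might miss.
> >>         pairs = Counter()
> >>         for line in lines:
> >>             words = set(line.lower().split())
> >>             for pair in combinations(sorted(words), 2):
> >>                 pairs[pair] += 1
> >>         return [p for p, n in pairs.items() if n >= min_count]
> >>
> >>     sample = ["the cat sat", "the cat ran", "a dog ran"]
> >>     print(find_combinations(sample))  # [('cat', 'the')]
> >>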
> >> Jim Bromer


