David,
I did read your last message, but I want to add something more to what I was
saying that I think is relevant to what you were saying.
There are philosophical problems with the assertion that a computer program
would be able to choose what it wanted to do, or that it would be able to
determine what it was good at.  However, as we all know, programs can do
things that we cannot anticipate, so the question of whether we can fashion
some of those goals while discovering which instances or particulars it will
choose (or will have chosen for it) is not totally impossible to appreciate
from more objective vantages.
Jim

On Wed, Aug 11, 2010 at 4:24 PM, Jim Bromer <jimbro...@gmail.com> wrote:

> I guess what I was saying was that I can test my mathematical theory and my
> theories about primitive judgement both at the same time by trying to find
> those areas where the program seems to be good at something.  For example, I
> found that it was easy to write a program that found outlines where there
> was some contrast between a solid object and whatever was in the background
> or whatever was in the foreground.  Now I, as an artist, could use that to
> create interesting abstractions.  However, that does not mean that an AGI
> program that was supposed to learn and acquire greater judgement based on my
> ideas for a primitive judgement would be able to do that.  Instead, I would
> let it do what it seemed good at, so long as I was able to appreciate what
> it was doing.  Since this would lead to something - a next step at least - I
> could use this to test my theory that a good, more general SAT solution would
> be useful as well.
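> 
> For what it's worth, that contrast-based outline finding can be sketched in
> a few lines.  This is only an illustration of the idea (the function name,
> the numpy approach, and the threshold value are stand-ins, not the program I
> actually wrote):
> 
>   import numpy as np
> 
>   def contrast_outline(image, threshold=30.0):
>       # Mark pixels where brightness changes sharply between a solid
>       # object and whatever is in the background or foreground.
>       img = image.astype(float)
>       dy, dx = np.gradient(img)     # brightness change in each direction
>       contrast = np.hypot(dx, dy)   # local contrast magnitude
>       return contrast > threshold   # True along high-contrast outlines
> 
> Anything downstream of that (the abstractions, or the primitive judgement)
> would start from a map like the one this returns.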
> Jim Bromer
>
> On Wed, Aug 11, 2010 at 3:57 PM, David Jones <davidher...@gmail.com>wrote:
>
>> Slightly off the topic of your last email, but all this discussion has
>> made me realize how to phrase something... That is, solving AGI requires
>> understanding the constraints that problems impose on a solution. So it's
>> sort of an unbelievably complex constraint satisfaction problem. What we've
>> been talking about is how we come up with solutions to these problems when
>> we sometimes aren't actually trying to solve any of the real problems. As
>> I've been trying to articulate lately, in order to satisfy the constraints
>> of the problems AGI imposes, we must really understand the problems we want
>> to solve and how they can be solved (their constraints). I think that most
>> of us do not do this because the problem is so complex that we refuse to
>> attempt to understand all of its constraints. Instead we focus on something
>> very small and manageable with fewer constraints. But that's what creates
>> narrow AI, because the constraints you have developed the solution for only
>> apply to a narrow set of problems. Once you try to apply it to a different
>> problem that imposes new, incompatible constraints, the solution fails.
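>>
>> A toy sketch of what I mean (the constraint names and values here are
>> invented purely for illustration): a solution built against a narrow set
>> of constraints satisfies them, but fails the moment a constraint from a
>> different problem is added.
>>
>>   def satisfies(solution, constraints):
>>       # A solution is acceptable only if it meets every constraint.
>>       return all(check(solution) for check in constraints)
>>
>>   # Constraints the narrow solution was designed around:
>>   narrow = [lambda s: s["speed"] > 1.0, lambda s: s["memory"] < 100]
>>   # One extra constraint imposed by a different problem:
>>   broader = narrow + [lambda s: s["handles_occlusion"]]
>>
>>   solution = {"speed": 2.0, "memory": 50, "handles_occlusion": False}
>>   print(satisfies(solution, narrow))   # True  - fine on the narrow problem
>>   print(satisfies(solution, broader))  # False - breaks under new constraints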
>>
>> So, lately I've been pushing for people to truly analyze the problems
>> involved in AGI, step by step to understand what the constraints are. I
>> think this is the only way we will develop a solution that is guaranteed to
>> work without wasting undue time in trial and error. I don't think trial and
>> error approaches will work. We must know what the constraints are, instead
>> of guessing at what solutions might approximate the constraints. I think the
>> problem space is too large to guess.
>>
>> Of course, I think acquisition of knowledge through automated means is the
>> first step in understanding these constraints. But, unfortunately, few agree
>> with me.
>>
>> Dave
>>
>> On Wed, Aug 11, 2010 at 3:44 PM, Jim Bromer <jimbro...@gmail.com> wrote:
>>
>>> I've made two ultra-brilliant statements in the past few days.  One is
>>> that a concept can simultaneously be both precise and vague.  And the other
>>> is that without judgement even opinions are impossible.  (Ok, those two
>>> statements may not be ultra-brilliant but they are brilliant, right?  Ok,
>>> maybe not truly brilliant,  but highly insightful and
>>> perspicuously intelligent... Or at least interesting to the cognoscenti
>>> maybe?.. Well, they were interesting to me at least.)
>>>
>>> Ok, these two interesting-to-me comments made by me are interesting
>>> because they suggest that we do not know how to program a computer even to
>>> create opinions.  Or if we do, there is a big untapped difference between
>>> those programs that show nascent judgement (perhaps only at levels relative
>>> to the domain of their capabilities) and those that don't.
>>>
>>> This is the AGI programmer's utopia (or at least my utopia), because I
>>> need to find something that is simple enough for me to start with and which
>>> can lend itself to developing and testing theories of AGI judgement and
>>> scalability.  By allowing an AGI program to participate more in the
>>> selection of its own primitive 'interests' we will be able to interact with
>>> it, both as programmer and as user, to guide it toward selecting those
>>> interests which we can understand and which seem interesting to us.  By creating
>>> an AGI program that has a faculty for primitive judgement (as we might
>>> envision such an ability), and then testing the capabilities in areas where
>>> the program seems to work more effectively, we might be better able to
>>> develop more powerful AGI theories that show greater scalability, so long as
>>> we are able to understand what interests the program is pursuing.
>>>
>>> Jim Bromer
>>>
>>> On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer <jimbro...@gmail.com> wrote:
>>>
>>>> On Wed, Aug 11, 2010 at 10:53 AM, David Jones <davidher...@gmail.com>wrote:
>>>>
>>>>> I don't think it makes sense to apply sanitized and formal mathematical
>>>>> solutions to AGI. What reason do we have to believe that the problems we
>>>>> face when developing AGI are solvable by such formal representations? What
>>>>> reason do we have to think we can represent the problems as an instance of
>>>>> such mathematical problems?
>>>>>
>>>>> We have to start with the specific problems we are trying to solve,
>>>>> analyze what it takes to solve them, and then look for and design a
>>>>> solution. Starting with the solution and trying to hack the problem to fit
>>>>> it is not going to work for AGI, in my opinion. I could be wrong, but I
>>>>> would need some evidence to think otherwise.
>>>>>
>>>>>
>>>>
>>>> I agree that disassociated theories have not proved to be very
>>>> successful at AGI, but then again what has?
>>>>
>>>> I would use a mathematical method that gave me the number or percentage
>>>> of True cases that satisfy a propositional formula as a way to check the
>>>> internal logic of different combinations of logic-based conjectures.  Since
>>>> methods that can do this with logical variables are feasible for any logical
>>>> system that goes (a little) past 32 variables, the potential of this method
>>>> should be easy to check (although it would hit a rather low ceiling of
>>>> scalability).  So I do think that logic and other mathematical methods would
>>>> help in true AGI programs.  However, the other major problem, as I see it,
>>>> is one of application.  And strangely enough, this application problem is so
>>>> pervasive that it means you cannot even develop artificial opinions!
>>>> You can program the computer to jump on things that you expect it to see,
>>>> and you can program it to create theories about random combinations of
>>>> objects, but how could you have a true opinion without child-level
>>>> judgement?
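>>>>
>>>> To make that mathematical method concrete, here is a brute-force sketch
>>>> of it (only an illustration; it enumerates all 2^n assignments, which is
>>>> why it tops out somewhere past 30 variables, and the example formula is
>>>> invented):
>>>>
>>>>   from itertools import product
>>>>
>>>>   def satisfying_fraction(formula, variables):
>>>>       # Fraction of all truth assignments that make `formula` True.
>>>>       # `formula` is any function taking a dict of variable -> bool.
>>>>       total = satisfied = 0
>>>>       for values in product([False, True], repeat=len(variables)):
>>>>           if formula(dict(zip(variables, values))):
>>>>               satisfied += 1
>>>>           total += 1
>>>>       return satisfied / total
>>>>
>>>>   # (a or b) and not c  ->  3 of the 8 assignments satisfy it
>>>>   f = lambda v: (v["a"] or v["b"]) and not v["c"]
>>>>   print(satisfying_fraction(f, ["a", "b", "c"]))  # 0.375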
>>>>
>>>> This may sound like frivolous philosophy but I think it really shows
>>>> that the starting point isn't totally beyond us.
>>>>
>>>> Jim Bromer
>>>>
>>>>
>>>> On Wed, Aug 11, 2010 at 10:53 AM, David Jones <davidher...@gmail.com> wrote:
>>>>
>>>>> This seems to be an overly simplistic view of AGI from a mathematician.
>>>>> It's kind of funny how people overemphasize what they know or depend on
>>>>> their current expertise too much when trying to solve new problems.
>>>>>
>>>>> I don't think it makes sense to apply sanitized and formal mathematical
>>>>> solutions to AGI. What reason do we have to believe that the problems we
>>>>> face when developing AGI are solvable by such formal representations? What
>>>>> reason do we have to think we can represent the problems as an instance of
>>>>> such mathematical problems?
>>>>>
>>>>> We have to start with the specific problems we are trying to solve,
>>>>> analyze what it takes to solve them, and then look for and design a
>>>>> solution. Starting with the solution and trying to hack the problem to fit
>>>>> it is not going to work for AGI, in my opinion. I could be wrong, but I
>>>>> would need some evidence to think otherwise.
>>>>>
>>>>> Dave
>>>>>
>>>>>   On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer <jimbro...@gmail.com>wrote:
>>>>>
>>>>>>   You probably could show that a sophisticated mathematical structure
>>>>>> would produce a scalable AGI program, if that is true, by using
>>>>>> contemporary mathematical models to simulate it.  However, if scalability
>>>>>> were completely dependent on some as yet undiscovered mathemagical
>>>>>> principle, then you couldn't.
>>>>>>
>>>>>> For example, I think polynomial time SAT would solve a lot of problems
>>>>>> with contemporary AGI.  So I believe this could be demonstrated on a
>>>>>> simulation.  That means that I could demonstrate effective AGI that works
>>>>>> so long as the SAT problems are easily solved.  If the program reported
>>>>>> that a complicated logical problem could not be solved, the user could
>>>>>> provide his insight into the problem at those times to help with the
>>>>>> problem.  This would not work exactly as hoped, but by working from there,
>>>>>> I believe that I would be able to determine better ways to develop such a
>>>>>> program so it would work better - if my conjecture about the potential
>>>>>> efficacy of polynomial time SAT for AGI was true.
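>>>>>>
>>>>>> A rough sketch of that workflow (the solver below is just a naive
>>>>>> exhaustive check standing in for whatever polynomial time method I am
>>>>>> conjecturing, and the size cutoff is arbitrary): try to solve the SAT
>>>>>> instance automatically, and when it is too large, hand it back to the
>>>>>> user for insight.
>>>>>>
>>>>>>   from itertools import product
>>>>>>
>>>>>>   def try_sat(clauses, n_vars, max_vars=25):
>>>>>>       # Clauses are CNF lists of +/- variable numbers, e.g. [1, -2].
>>>>>>       if n_vars > max_vars:
>>>>>>           return "ask the user"   # too big; defer to the user's insight
>>>>>>       for bits in product([False, True], repeat=n_vars):
>>>>>>           if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in c)
>>>>>>                  for c in clauses):
>>>>>>               return bits         # satisfying assignment found
>>>>>>       return None                 # unsatisfiable
>>>>>>
>>>>>>   # (x1 or not x2) and (x2 or x3)
>>>>>>   print(try_sat([[1, -2], [2, 3]], 3))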
>>>>>>
>>>>>> Jim Bromer
>>>>>>
>>>>>> On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer <jimbro...@gmail.com>wrote:
>>>>>>
>>>>>>> On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose <
>>>>>>> johnr...@polyplexic.com> wrote:
>>>>>>>
>>>>>>>> > -----Original Message-----
>>>>>>>> > From: Jim Bromer [mailto:jimbro...@gmail.com]
>>>>>>>> >
>>>>>>>> >  how would these diverse examples
>>>>>>>> > be woven into highly compressed and heavily cross-indexed pieces of
>>>>>>>> > knowledge that could be accessed quickly and reliably, especially for
>>>>>>>> > the most common examples that the person is familiar with.
>>>>>>>>
>>>>>>>> This is a big part of it and for me the most exciting. And I don't think
>>>>>>>> that this "subsystem" would take up millions of lines of code either.
>>>>>>>> It's just that it is a *very* sophisticated and dynamic mathematical
>>>>>>>> structure IMO.
>>>>>>>>
>>>>>>>> John
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Well, if it were a mathematical structure then we could start developing
>>>>>>> prototypes using familiar mathematical structures.  I think the structure
>>>>>>> has to involve more ideological relationships than mathematical ones.
>>>>>>> For instance, you can apply an idea to your own thinking in such a way
>>>>>>> that you are capable of (gradually) changing how you think about
>>>>>>> something.  This means that an idea can be a compression of some greater
>>>>>>> change in your own programming.  While the idea in this example would be
>>>>>>> associated with a fairly strong notion of meaning, it would be somewhat
>>>>>>> vague at first, since you cannot accurately understand the full
>>>>>>> consequences of the change.  (It could be a very precise idea capable of
>>>>>>> having a strong effect, but the details of those effects would not be
>>>>>>> known until the change had progressed.)
>>>>>>>
>>>>>>> I think the more important question is how a general concept can be
>>>>>>> interpreted across a range of different kinds of ideas.  Actually this 
>>>>>>> is
>>>>>>> not so difficult, but what I am getting at is how are sophisticated
>>>>>>> conceptual interrelations integrated and resolved?
>>>>>>> Jim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>
>


