Of course, Jim. You're absolutely right. I couldn't possibly have known all
that stuff. It's not like it was stuff that only requires common sense or
something. I'm being absurd, I have no social skills, I can't comprehend
your subtle points, I have a pretty stupid ego, I'm in denial, I don't get
anything until you enlighten me, and I am so over & done with this
conversation. Good night.


On Fri, Jan 4, 2013 at 8:00 PM, Jim Bromer <[email protected]> wrote:

> But you have not thought of everything that I said before I said it.  You
> have interpreted what I have said so that it fit into your notions of the
> subject.  The details are what you have missed because, for example, it is
> not true that my "extensive verbiage can be easily summed up...[with]...The
> proof is in the pudding."
>
> To take the most trivial example, testing during development does not
> constitute a "proof" but only evidence of initial feasibility -- that
> this or that works and this or that does not.  This is a trivial insight,
> but a true one, and one that you were evidently unable to get until after
> I said it.
>
> There is something really lacking in your social skills.  So your
> insistence that you will eventually figure it out because you will not quit
> is just something that does not concern me.  I assume that you will say
> something that interests me in the future but the idea that you knew
> everything I said even before I said it is one of the most absurd things
> anyone in these groups has ever said.
>
> Jim Bromer
>
>
>
>
> On Fri, Jan 4, 2013 at 6:57 PM, Aaron Hosford <[email protected]> wrote:
>
>> Much better!
>>
>> Yes, I get it, and yes, I already got it before you said it (though I
>> didn't know that's what you were trying to say). Your extensive verbiage
>> can be easily summed up: *The proof is in the pudding.* I never claimed
>> to have everything figured out. I simply have a plan, and I strongly
>> believe that plan will lead me to eventual success, even if you're
>> unconvinced -- and you're quite welcome to be skeptical, by the way; it
>> won't hurt my feelings. I'm confident of my plan because I take great pains
>> to think of every possible requirement ahead of time, which explains the "I
>> thought of that already" phenomenon that seems to bother you so much. There
>> is nothing wrong with thinking ahead, though.
>>
>> No, I can't test the entire system yet, because I haven't gotten that
>> far. There's a lot of underlying work that has to be done first, and it's
>> not just a matter of whipping something up over the course of a few months
>> just because I know where I'm going with it. Nor is it a matter of
>> immediately testing my ideas, apart from the entire system, because in
>> order to test them I have to finish the underlying infrastructure first.
>>
>> However, I understand the importance of testing, and I am indeed testing
>> my system as I go -- at suitable points along the way -- to verify that it
>> works before I build more on top of it. And on those occasions where I run
>> into a fundamental flaw that prevents me from moving forward, I recognize
>> it as such and revise my design and code until the issue is resolved. This
>> is why I can say I'm sure I'll eventually get there, given the time; it's
>> simply because I won't quit until I make it, even if I have to start over
>> from scratch, not that I think I'll have to.
>>
>> So, in summary, I am confident, but not delusional, and it's fine if you
>> disagree with my assessment of my chances of success, because it doesn't
>> actually affect my chances of success.
>>
>>
>>
>>
>> On Fri, Jan 4, 2013 at 3:55 PM, Jim Bromer <[email protected]> wrote:
>>
>>> We all know that our projects are not working at human-level capacity.
>>> So how could you test the essential characteristic of the program if you
>>> only have a limited 'capacity' to try it out on?  This is the essential
>>> question of testing during development.  Saying that if an algorithm works
>>> then it works and if it doesn't then it needs some more work is not an
>>> adequate test of whether or not the essential quality of an AGI
>>> is achievable using your ideas.  There is not an easy answer to this
>>> question but I can at least try to start to answer it.
>>>
>>> Suppose that someone demonstrated that his numerical algorithm, which
>>> used averaging and weighting, was able to learn to speed up, slow down,
>>> and steer a remote control car based on some kind of numerical feedback
>>> for different goals.  Once done, once the program showed that it could
>>> control the car adequately for each learned trip, how would the programmer
>>> show, given the constraint of his computational resources, that the
>>> essential characteristics of the program were truly AGI?  He would, for
>>> example, have to show that the learning could be used in planning for new
>>> trips.  But then he would have to show that his program could work with
>>> other kinds of problems, including problems that used different IO
>>> modalities.  How does a purely numerical program solve word-based
>>> problems, for instance?  If the programmer thinks it could be done, then
>>> this would be a requirement to start to show that his program had adequate
>>> generality to work on this problem.
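For what it's worth, the kind of "averaging and weighting" learner described above could be sketched roughly like this -- the class name, the update rule, and the toy reward signal are all my own illustrative assumptions, not anything specified in the thread:

```python
# Illustrative sketch only: a controller whose commands are weighted
# averages of sensor readings, with weights nudged by scalar feedback.
# Names and the update rule are assumptions, not from the discussion.

class WeightedController:
    def __init__(self, n_inputs, lr=0.1):
        # One weight per sensor input, for each of two outputs
        # (speed command and steering command).
        self.weights = [[0.0] * n_inputs for _ in range(2)]
        self.lr = lr

    def act(self, sensors):
        # Each command is a weighted sum of the current sensor readings.
        return [sum(w * s for w, s in zip(ws, sensors)) for ws in self.weights]

    def learn(self, sensors, feedback):
        # Nudge every weight in proportion to the scalar feedback:
        # positive feedback reinforces the current input/output pairing.
        for ws in self.weights:
            for i, s in enumerate(sensors):
                ws[i] += self.lr * feedback * s

# Toy training loop: reward the controller for keeping speed below a
# target, punish it above, so the speed command settles near the target.
ctrl = WeightedController(n_inputs=2)
for _ in range(200):
    sensors = [1.0, 0.5]                      # fixed readings, toy example
    speed, steering = ctrl.act(sensors)
    feedback = 1.0 if speed < 3.0 else -1.0   # crude bang-bang reward
    ctrl.learn(sensors, feedback)

print(ctrl.act([1.0, 0.5])[0])   # hovers near the 3.0 target
```

Of course, this is exactly the sort of narrow numerical learner the paragraph above is questioning: nothing in it suggests how the same machinery would transfer to word-based problems or other IO modalities.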
>>>
>>> While many people say their program would be able to work with different
>>> kinds of modalities (with different kinds of problems) the scientific proof
>>> is making it do so.  It is not enough to say that we are creating the
>>> program to do exactly that when that is the claim that is actively being
>>> questioned.  Can't you guys get that?  To say that yeah we already thought
>>> of that is pure nonsense.  What I am questioning here is not whether or not
>>> you guys get this on a superficial level but whether or not you guys get
>>> that the claim that you already have thought of a general untried theory
>>> does not stand in for adequate testing methodology. To say that we already
>>> know that is a little like saying that we already know that the program
>>> would have to be just about capable of thinking like a human being to
>>> demonstrate true AGI.  Well, so what?  Of course you already know that you
>>> [more colorful language deleted].  If, for instance, you have a
>>> carefully worked-out algorithm with which you claim you could show the
>>> essence of AI generality, then what do you have to test the untried
>>> algorithm out with?  The claim that you have it all worked out means that
>>> you can get the coding done in a few months.  The belief that your
>>> carefully worked out method is going to work without substantial
>>> development is delusional.  If you have it all worked out but cannot test
>>> it because it will take a year of development, then what could you do to
>>> begin testing it now?  If you seriously think that you have it all figured
>>> out (except for the tweaking), then you should be able to contrive all
>>> sorts of small tests that will show almost immediately whether your ideas
>>> would work or whether they would need a lot more work.  But it would have
>>> to be done in a way that shows the potential to work within only a little
>>> complexity.  Did you get what I just said even before I said it?
>>> Jim Bromer
>>>
>>
>>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
