Matt,

I was going to do a point-by-point argument, but I see there is a central
disagreement hidden in our discussions.

The disagreement from **MY** POV is: There seems to be a presumption that
an AGI will be a LOT smarter than I think is possible, given the noise and
restrictions in the information we have available. There are just too many
unknowns and unknowables in our world to predict much of anything where
people are involved. The uncertainties are, and will continue to be, too
great to do a lot better than people now do.

I still remember having a meeting with the football coach at my high
school. I was going to show him the advantages of using game theory to make
decisions as to how to play. We constructed several simple 2x2 payoff
matrices, and he said what he would do as I computed the correct game
theory answer. I was amazed by the result. His gut-feeling answers were
spot on - within the uncertainty of the numbers in the matrices. I suspect
that much the same result will be seen when AGIs make their debut.
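Just to make that concrete, here is a minimal Python sketch (my own illustration, not part of the original exchange) of the kind of computation involved: it solves a 2x2 zero-sum game for the row player using the standard closed-form solution, falling back to a pure strategy when the matrix has a saddle point.

```python
def solve_2x2_zero_sum(m):
    """Solve a 2x2 zero-sum game for the row player.

    m = [[a, b], [c, d]] is the row player's payoff matrix.
    Returns (p, value): probability p of playing row 0, and the game value.
    """
    (a, b), (c, d) = m
    # Check for a saddle point (a pure-strategy solution).
    maximin = max(min(a, b), min(c, d))  # best guaranteed row payoff
    minimax = min(max(a, c), max(b, d))  # best the column player can force
    if maximin == minimax:
        # Play the row whose worst case achieves the maximin value.
        p = 1.0 if min(a, b) >= min(c, d) else 0.0
        return p, float(maximin)
    # Otherwise mix so the expected payoff is equal against either column.
    denom = (a - b) + (d - c)
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return p, value

# Matching pennies: no saddle point, so optimal play is 50/50 with value 0.
print(solve_2x2_zero_sum([[1, -1], [-1, 1]]))  # (0.5, 0.0)
```

The point of the anecdote survives the formalism: with payoffs this small and this uncertain, a coach's gut answer and the computed optimum tend to land within the noise of the numbers themselves.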

Attempting at great risk to translate this to your apparent POV: While
people's intelligence is nowhere near "perfect", it is close enough that
the noise and restrictions in our information swamps the shortcomings in
our intelligence, so that even limitless intelligence won't be able to do a
lot better than we now do, EXCEPT in areas of close competition like stock
trading. Sure, there will be tremendous advantages in filing the world's
information, but THAT information is no better than the people who created
it, and I don't trust ANY of those people any more than I would trust an
AGI.

Are we now on the same page as to what we are disagreeing about?

Steve


On Tue, Oct 7, 2014 at 7:46 PM, Matt Mahoney via AGI <[email protected]>
wrote:

> On Tue, Oct 7, 2014 at 7:41 PM, Steve Richfield via AGI <[email protected]>
> wrote:
> >
> > Hi all,
> >
> > There seem to be some highly questionable underlying assumptions in
> > this discussion. After spending 3 months of full-time effort researching a
> > health problem I had, only to find that there was plenty on the
> > Internet about it, but no way to instruct a search engine to find it for
> > me, I created DrEliza. Anyway, the questionable assumptions I see are:
> > 1.  That ANYONE can be "trusted" (EVERYONE is laboring under
> > misconceptions).
>
> The default behavior in my proposal is not to trust a new peer.
> Diffie-Hellman key exchange is vulnerable to a man-in-the-middle
> attack, but this is not a problem because at the point when it is used
> you have no more reason to trust the intended peer than the attacker.
>
> > 2.  That you can successfully guess what you want to search for (the
> > difference between problem solving and question answering).
>
> That is the incentive to make your peers smarter.
>
> > 3.  That the correct information won't be buried in millions of
> > miscreant entries.
>
> Of course it will. Google has solved this problem. The solution can be
> distributed.
>
> > 4.  That you can recognize the correct answer when it is displayed right
> > in front of you (preconceived misconceptions).
>
> Of course you can't. If you knew what the answer was, you wouldn't
> need to ask. You have to rely on the sender's reputation.
>
> --
> -- Matt Mahoney, [email protected]
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.


