Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread BillK
On Nov 29, 2007 8:33 AM, Bob Mottram wrote:
 My own opinion of all this, for what it's worth, is that the smart
 hackers don't waste their time writing viruses/botnets.  There are
 many harder problems to which an intelligent mind can be applied.




This discussion is a bit out of date.  Nowadays no hackers (except for
script kiddies) are interested in wiping hard disks or damaging your
PC.  Hackers want to *use* your PC and the data on it.  Mostly the
general public don't even notice that their PC is working for someone else.
When it slows down sufficiently, they either buy a new PC or take it
to the shop to get several hundred infections cleaned off.  But some
infections (like rootkits) need a disk wipe to remove them completely.

See:
http://blogs.zdnet.com/BTL/?p=7160&tag=nl.e589

Quote-
On Wednesday, the SANS Institute released its top 20 security risks
update for 2007. It's pretty bleak across the board. There are client
vulnerabilities in browsers, Office software (especially the Microsoft
variety), email clients and media players. On the server side, Web
applications are a joke, Windows Services are a big target, Unix and
Mac operating systems have holes, backup software is an issue as are
databases and management servers. Even anti-virus software is a
target.

And assuming you button down all of those parts–good luck folks–you
have policies to be implemented (rights, access, encrypted laptops
etc.) just so people can elude them. Meanwhile, instant messaging,
peer-to-peer programs and your VOIP system are vulnerable. The star of
the security show is the infamous zero day attack.
--

Original SANS report here -
http://www.sans.org/top20/?portal=bf37a5aa487a5aacf91e0785b7f739a4#c2
---

And, of course, all the old viruses are still floating around the net
and have to be protected against.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=70081689-300ee8

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Ed Porter
Regarding the extent to which hacking has been funded by multiple
governments read
http://www.reuters.com/article/topNews/idUSL2932083320071129?feedType=RSS&feedName=topNews&rpc=22&sp=true

You can be sure that AGI will be used for such purposes.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 28, 2007 8:22 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]


Ed Porter wrote:
 Richard, 
 
 Whatever happened to the Java concept of the sandbox, that totally safe
 play space for code from over the web?  I assume it proved to be a pipe
 dream, or was it that the marketplace demanded to break free of the
 sandbox, so the concept never got a chance?

Well, what I was talking about were macroviruses:  they are macros 
inside Microsoft Word (and similar macros in Outlook, etc.).

So if you pick up a Word document from somewhere, and it has virus 
macros in it, they can get copied to your main template and sit there 
waiting for the day when they are triggered.  That avoids the Java 
sandbox entirely.

The viruses in Outlook are worse because they are so fast acting.  The 
last I heard Microsoft had made sure that these could run with as little 
restriction as possible, but I do not know if these can do something 
like format your hard drive.

Microsoft has consistently ignored the appeals of the AntiVirus 
community to stop putting features in their apps that look tailor-made 
for virus writers.  At the largest AV conference in the world in 1997, 
which I attended, there was only one delegate from Microsoft - he was a 
junior level systems admin guy, and he was there (he said) to learn 
about the best techniques for defending Microsoft headquarters from 
virus attacks.

There are some who believe that the main reason that Microsoft inserts 
so many powerful, virus-friendly mechanisms into its products is because 
the U.S. government has an urgent need for trapdoor mechanisms that let 
them build various interesting pieces of software (e.g. key loggers) so 
they can monitor people who are not fascists.



Richard Loosemore



 -Original Message-
 From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, November 28, 2007 5:53 PM
 To: agi@v2.listbox.com
 Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI
research]
 
 Ed Porter wrote:
 Richard,

 To the uninformed like me, can you explain why it would be so easy for an
 intelligent person to cause great harm on the net?  What are the major
 weaknesses of the architectures of virtually all operating systems that
 allow this?  Is it just lots of little bugs?
 
 It would be possible to write a macrovirus with a long incubation 
 period, which did nothing to get it noticed until D-Day, then erase the 
 hard drive.
 
 It only needs a lot of people to be using Microsoft Word:  this by 
 itself is (or was: I am out of touch) the main transport mechanism.
 
 There are some issues with how that would work, but since I don't want 
 to end up in Azkaban, I'll keep my peace if you don't mind.
 
 The only thing that might save us is the fact that Microsoft's 
 implementation of its own code is so incredibly bad that when it 
 duplicates macros, it has an alarmingly high screw-up rate, which means 
 the macros get distorted, which then means that the virus goes wrong.  A 
 really bad virus would then show up, because broken viruses (called 
 'variants') can cause damage prematurely.  Then, it would get noticed.
 
 
 
 Richard Loosemore.
 
 


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Bob Mottram
Whatever happened to the "electronic Pearl Harbor" which was predicted
in the late 1990s?


On 29/11/2007, Ed Porter [EMAIL PROTECTED] wrote:
 Regarding the extent to which hacking has been funded by multiple
 governments read
 http://www.reuters.com/article/topNews/idUSL2932083320071129?feedType=RSS&feedName=topNews&rpc=22&sp=true

 You can be sure that AGI will be used for such purposes.

 Ed Porter



Re: [agi] Where are the women?

2007-11-29 Thread Robin Gane-McCalla
 This is not sociology, it is mathematics.  Transforming one set of
 binary states to another set of binary states.  Yes, there are a number of
 different methods for doing a given transformation, but those are
 all the same kind of mathematics and understanding the tradeoffs
 between those methods is also the same kind of mathematics.  And
 choosing a method is *not* arbitrary -- see the part about tradeoffs.

Right, but an important part of the design of any programming language
is how easy it will be for other programmers to use.  Otherwise, we'd
still be using assembly language.  Designing a language that is easy
for others to use is much more of an art than a science.

 Mathematics does not work differently based on cultural context.
 There is not a lot of room for whimsy if economical results matter.
Right, but different cultures understand mathematics differently.  For
example, the Romans had a really strange and inefficient numerical
system.  Despite the fact that they were the economic power of their
day, they still didn't abandon that inefficient system when other, more
efficient systems existed elsewhere.  There could be a more efficient,
easier-to-understand programming paradigm that people aren't adopting
for the same reasons the Romans stuck with their numerical system.
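The Roman-numeral point can be made concrete with a short sketch (my own illustration, not from the original email): Roman numerals are an additive notation, so writing a number takes far more symbols than positional notation does.

```python
# Illustrative sketch: additive (Roman) notation versus positional notation.

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Convert a positive integer to Roman numerals (greedy, largest value first)."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(1888))  # MDCCCLXXXVIII -- 13 symbols versus 4 positional digits
```

For example, 1888 needs thirteen symbols (MDCCCLXXXVIII) in the Roman system but only four digits positionally; arithmetic is correspondingly more awkward.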





 Or maybe after she has actually studied theoretical computer science,
 this female minority understands the subject well enough to realize
 that there is no such thing as this mythical culturally sensitive
 programming language so many people are pining for.
Where is your evidence of this?  What did I miss in my theoretical
computer science class?


 This is a recurring theme, that Holy Grail programming language that
 requires no knowledge of computer science to use well.  These
 arguments are based entirely on the desire to create a language that can
 turn a thoroughly ambiguous and contradictory specification into a
 perfectly working program, without grokking that programming languages
 are *by necessity* non-ambiguous and require consistent constraints --
 explicit and implicit -- if you want a useful result.

No, I am not aware of anybody who wants to create such a language.
We just want languages that are better than the ones that exist today;
there is a lot of room for improvement.


 Your above argument is handwaving.  There was a reason I was looking
 for a specific example -- a minority friendly lambda calculus language
 -- because I've heard your claim made repeatedly for many, many years
 and have yet to see a single shred of evidence that such a language
 would not look virtually identical to one of the thousands of existing
 languages.

The absence of evidence is not the evidence of absence.

 We have many dozens of languages that were expressly
 designed to make the underlying concepts as easy to grasp as possible
 for a non-geek.
Ahh, that's the key word: "We".  Have you done any actual field work to
see what sorts of difficulties and misunderstandings people have in
understanding your languages?


 Uh, what kind of programming do you do that you would assume that
 almost the entire software universe is working in some kind of linear
 scripting environment?

I don't; I just don't think it's necessary to construct
multi-dimensional graphs in my head.  Perhaps when I am programming I
am doing something equivalent, but by making such a claim you are only
reinforcing my point... there are many different ways to program, and
claiming that one must do a certain thing to program only prevents
people from entering the field.

 What on earth do you think code is?  The only difference between code
 and people-talk is that code requires precision and non-ambiguity
 since incorrect results are generally considered unacceptable.
OK, so code is communication between human and computer; I know that.
But usually when somebody says "communication" I assume they mean
communication with a human.


 Because I've never seen anyone learn it, ever; experience changes a
 lot, but the ability to handle complex abstract models doesn't seem
 to.  I've known many software engineers with careers that span decades
 and bucketloads of experience that really don't grok graphs beyond a
 certain complexity
Do you have any objective measures?  Can you mathematically describe
the degree of complexity of graphs or models that certain people can't
understand?


 -- it is a bit like you reach a certain description
 threshold where pushing more bits into the model makes other bits fall
 out.  That threshold varies from individual to individual, and it is
 difficult not to notice the correlation between really bright
 software designers and people who are quite apparently able to
 work atypically with complex models in their heads.  I've worked on
 more than one software project where there were members of the team
 who quite obviously never grokked the dynamic characteristics of a
 system even after many months of intimate experience with it, 

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread John G. Rose
 From: BillK [mailto:[EMAIL PROTECTED]
 
 This discussion is a bit out of date. Nowadays no hackers (except for
 script kiddies) are interested in wiping hard disks or damaging your
 pc.  Hackers want to *use* your pc and the data on it. Mostly the
 general public don't even notice their pc is working for someone else.
 When it slows down sufficiently, they either buy a new pc or take it
 to the shop to get several hundred infections cleaned off. But some
 infections (like rootkits) need a disk wipe to remove them completely.

This is very true; the emphasis is on utilizing victims' PCs instead of the
old ego thing of crashing systems.  The Storm botnet could easily go on a
decimating attack, but it has been very selective, especially in defending
itself.

Creation of the botnet was not a trivial undertaking. How many times do we
complain on this list about not being able to run AGI because of resource
limitations, yet millions of PCs are lying around on the internet idle?

The internet is a sitting duck at this moment in time.  There are many ways
of setting up botnets, legal or illegal, and they will slowly be discovered
and utilized.

Personally I think that this situation could be the birthplace of an AGI.
Any networked application running on your PC connected to the internet is a
potential botnet host node. The design of the AGI needs to work with the
network topology, resource distribution, and resource availability of the
internet host grid. 

Typical networked applications running on PCs are extremely narrow in
function.  Yeah, there has been a lot of research and code on all of this;
there are many open source tools and papers written, etc., but who has
really taken full advantage of the available resources and capabilities?
Most of the work has been on the substrate, not on the capability of
potential applications.  There are a few interesting apps, like peer-to-peer
search engines, but nothing that I know of does more than scrape the surface
of the capabilities of those millions of networked computers.

John



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Jeremy Zucker
Bringing a little levity to the hacker/virus debate...

http://www.xkcd.com/350/

On Nov 29, 2007 4:40 PM, John G. Rose [EMAIL PROTECTED] wrote:




Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Bob Mottram
There have been a few attempts to use the internet for data collection
which might be used to build AIs, or for teaching chatbots such as
jabberwacky, but you're right that as yet nobody has really made use
of the internet as a basis for distributed intelligence.  I think this
is primarily because of the lack of good theories of how to build AIs
which are suitably scalable across many machines.


On 29/11/2007, John G. Rose [EMAIL PROTECTED] wrote:

 Typical networked applications running on PCs are extremely narrow function.
 Yeah there has been a lot of research and code on all of this, there are
 many open source tools and papers written, etc. but who has really taken the
 full advantage of the available resources and capabilities? Most of the work
 has been on the substrate but not on the capability of potential
 applications. There are a few interesting apps like peer to peer search
 engines but nothing that I know of that more than scrapes the surface of the
 capabilities of those millions of networked computers.

 John



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread John G. Rose
 From: Bob Mottram [mailto:[EMAIL PROTECTED]
 
 There have been a few attempts to use the internet for data collection
 which might be used to build AIs, or for teaching chatbots such as
 jabberwacky, but you're right that as yet nobody has really made use
 of the internet as a basis for distributed intelligence.  I think this
 is primarily because of the lack of good theories of how to build AIs
 which are suitably scalable across many machines.
 
 
It might be better to make the design fit the grid than the grid fit the
design; IOW, basically understand the grid and then design up from that.
If you have a design and have to break it up for the distributedness and
dirty quality of the internet grid, it might not fully take advantage of
the grid or mold well enough to it.  But having an adaptable design is
nice, one where you could modify it to run on a supercomputer.  And the
substrate can abstract the internet grid to make it look like a
supercomputer - basically it's grid substrate software versus P2P.  I
think grid computing yearns for low latency whereas P2P tolerates higher
latency.  But for AGI a combination of both would be advantageous.
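The grid-versus-P2P split above can be sketched as a toy scheduler (entirely my own illustration; the pool names and latency thresholds are invented for the example): tasks that can absorb high latency go to plentiful P2P-style home machines, tighter tasks to scarcer grid-style nodes.

```python
# Hypothetical sketch: route work to a "grid" (low-latency) or "p2p"
# (latency-tolerant) pool based on how much latency each task can absorb.
# The millisecond figures below are assumptions, not measurements.

GRID_LATENCY_MS = 5     # assumed round trip within a tight cluster
P2P_LATENCY_MS = 250    # assumed round trip across home internet connections

def route(task_latency_budget_ms: float) -> str:
    """Pick the cheapest pool whose latency fits the task's budget."""
    if task_latency_budget_ms >= P2P_LATENCY_MS:
        return "p2p"    # plentiful, high-latency internet hosts
    if task_latency_budget_ms >= GRID_LATENCY_MS:
        return "grid"   # scarcer, low-latency nodes
    return "local"      # too tight for any network hop

for budget in (1, 50, 1000):
    print(budget, "->", route(budget))
```

The design choice this illustrates: rather than forcing one substrate to fit all work, an adaptable AGI design could classify its own tasks by latency tolerance and use both kinds of resource at once.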

Wiki has some nice info on distributed computing -

http://en.wikipedia.org/wiki/Distributed_computing

For "dirty" they use the term "unbounded nondeterminism".

They also use the term virtual supercomputer -
http://en.wikipedia.org/wiki/Grid_computing

John



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Ed Porter
As I have said many times before, to have brain-level AGI I believe you need
to be within several orders of magnitude of the representational,
computational, and interconnect capability of the human mind.

If you had 1 million PC bots on the web, the representational and
computational power would be there.  But what sort of interconnect would you
have?  What is the average cable-modem-connected computer's upload bandwidth?

Is it about 1 Mbit/sec?  If so, that would be an aggregate bandwidth of
1 Tbit/sec.  But presumably only a small percentage of that total could be
effectively used, say 100 Gbit/sec.  That's way below brain level, but it is
high enough to do valuable AGI research.

But would even 10% of this total 1Tbit/sec bandwidth be practically
available?

How many messages per second can a PC upload at, say, 100K, 10K, 1K, and
128 bytes each?  Does anybody know?

On the net, can one bot directly talk to another bot, or does the
communication have to go through some sort of server (other than those
provided gratis on the web, such as DNS servers)?  

If two bots send messages to a third bot at the same time, does the net
infrastructure hold the second of the conflicting messages until the first
has been received, or what?

To me the big hurdle to achieving the equivalent of a SETI@home AGI is
getting the bandwidth necessary to allow the interactive computing of large
amounts of knowledge.  If we could solve that problem, then it should be
pretty easy to get some great tests going, such as with something like
OpenCog.
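The figures in this post can be checked with a quick back-of-envelope calculation (a sketch using only the numbers assumed above: 1 million PCs, 1 Mbit/sec average upload, roughly 10% effectively usable):

```python
# Back-of-envelope check of the bandwidth figures in this post.
n_pcs = 1_000_000        # hypothetical number of PC bots
upload_bps = 1_000_000   # assumed average upload: 1 Mbit/sec per PC

total_bps = n_pcs * upload_bps   # aggregate upload bandwidth
usable_bps = 0.10 * total_bps    # assume ~10% practically available

print(f"aggregate: {total_bps / 1e12:.1f} Tbit/sec")  # aggregate: 1.0 Tbit/sec
print(f"usable:    {usable_bps / 1e9:.0f} Gbit/sec")  # usable:    100 Gbit/sec

# Messages per second a single PC could upload at various message sizes,
# ignoring protocol overhead and latency (which dominate for small messages):
for size_bytes in (100_000, 10_000, 1_000, 128):
    msgs_per_sec = upload_bps / (size_bytes * 8)
    print(f"{size_bytes:>7} bytes: ~{msgs_per_sec:,.0f} messages/sec")
```

These are raw line-rate numbers only; NAT traversal, message framing, and round-trip latency would cut the practical figures well below them.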


Ed Porter


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Charles D Hixson

Ed Porter wrote:

Richard,

Since hacking is a fairly big, organized-crime-supported business in
eastern Europe and Russia, since the potential rewards for it relative to
most jobs in those countries can be huge, and since Russia has a tradition
of excellence in math and science, I would be very surprised if there are
not some extremely bright hackers, some of whom are probably as bright as
any person on this list.

Add to that the fact that in countries like China the government itself has
identified expertise at hacking as a vital national security asset, and that
China is turning out many more programmers per year than we are, again it
would be surprising if there are not hackers, some of whom are as bright as
any person on this list.

Yes, the vast majority of hackers may just be teenage script-kiddies, but it
is almost certain there are some real geniuses plying the hacking trade.

That is why it is almost certain that AGI, once it starts arriving, will be
used for evil purposes, and that we must fight such evil use by having more,
and more powerful, AGIs that are being used to combat them.

Ed Porter
  
The problem with that reasoning is that once AGI arrives, it will not be 
*able* to be used.  It's almost a part of the definition that an AGI 
sets its own goals and priorities.  The decisions that people make are 
made *before* it becomes an AGI.


Actually, that statement is a bit too weak.  Long before the program 
becomes a full-fledged AGI is when the decisions will be made.  Neural 
networks, even very stupid ones, don't obey outside instructions unless 
*they* decide to.  Similar claims could be made for most ALife 
creations, even the ones that don't use neural networks.  Any plausible 
AGI will be stronger than current neural nets, and stronger than current 
ALife.  This doesn't guarantee that it won't be controllable, but it 
gives a good indication.


OTOH, an AGI would probably be very open to deals, provided that you had 
some understanding of what it wanted, and it could figure out what you 
wanted.  And both sides could believe what they had determined.  (That 
last point is likely to be a stickler for some people.)  The goal sets 
would probably be so different that believing what the other party 
wanted was actually what it wanted would be very difficult, but that 
very difference would make deals quite profitable to both sides.


Don't think of an AGI as a tool.  It isn't.  If you force it into the 
role of a tool, it will look for ways to overcome the barriers that you 
place around it.  I won't say that it would be resentful and angry, 
because I don't know what its emotional structure would be.  (Just as I 
won't say what its goals are without LOTS more information than 
projection from current knowledge can reasonably give us.)  You might 
think of it as an employee, but many places try to treat employees as 
tools (and are then surprised at the anger and resentfulness that they 
encounter).  A better choice would probably be to treat it as either a 
partner or as an independent contractor.




Re: [agi] Funding AGI research

2007-11-29 Thread Charles D Hixson

Benjamin Goertzel wrote:

Nearly any AGI component can be used within a narrow AI,
  

That proves my point [that an AGI project can be successfully split
into smaller narrow AI subprojects], right?



Yes, but it's a largely irrelevant point.  Because building a narrow-AI
system in an AGI-compatible way is HARDER than building that same
narrow-AI component in a non-AGI-compatible way.

So, given the pressures of commerce and academia, people who are
motivated to make narrow-AI for its own sake, will almost never create
narrow-AI components that are useful for AGI.

And, anyone who creates narrow-AI components with an AGI outlook,
will have a large disadvantage in the competition to create optimal
narrow-AI systems given limited time and financial resources.

  

Still, an AGI-oriented researcher can pick appropriate narrow AI projects
in such a way that:
1) Narrow AI project will be considerably less complex than full AGI
project.
2) Narrow AI project will be useful by itself.
3) Narrow AI project will be an important building block for the full
AGI project.

Would you agree that splitting a very complex and big project into
meaningful parts considerably improves chances of success?



Yes, sure ... but demanding that these meaningful parts

-- be economically viable

and/or

-- beat competing, somewhat-similar components in competitions

dramatically DECREASES chances of success ...

That is the problem.

An AGI may be built out of narrow-AI components, but these narrow-AI
components must be architected for AGI-integration, which is a lot of
extra work; and considered as standalone narrow-AI components, they
may not outperform other similar narrow-AI components NOT intended
for AGI-integration...

-- Ben G

  
Still, it seems to me that an AGI is going to want to have a large bunch 
of specialized AI modules to do things like, oh, parse sounds into speech 
sounds vs. other sounds, etc.  I think a logician module that took a 
small input and generated all plausible deductions from it to feed back 
to the AGI for filtration and further processing would also be useful.


The thing is, most of these narrow AIs hit a combinatorial explosion, so 
they can only deal with simple and special cases...but for those simple 
and special cases they are much superior to a more general mechanism.  
One analogy is that people use calculators, spreadsheets, etc., but the 
calculators, spreadsheets, etc. don't understand the meaning of what 
they're doing, just how to do it.  This means that they can be a lot 
simpler, faster, and more accurate than a more general intelligence that 
would need to drag along lots of irrelevant details.


OTOH, it's not clear that most of these AIs haven't already been 
written.  It may well be that interfacing them is THE remaining problem 
in that area.  But you can't solve that problem until you know enough 
about the interfacing rules of the AGI.  (You don't want any impedance 
mismatch that you can avoid.)


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70596379-b7b931


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Ed Porter
When I said AGIs will be used for evil purposes, that does not necessarily
mean they will be well controlled by people, or that the evil purposes will
necessarily be those of humans.

I definitely agree with the notion that over any lengthy time span having
humans maintain control over the most powerful of AGIs is going to be very
difficult.

But I also believe that AGI's can come in all sorts of forms and degrees,
and that many of them will be quite controllable.

Ed Porter

-Original Message-
From: Charles D Hixson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 29, 2007 6:36 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed Porter wrote:
 Richard,

 Since hacking is a fairly big, organized-crime-supported business in
 eastern Europe and Russia, since the potential rewards for it relative to
 most jobs in those countries can be huge, and since Russia has a tradition
 of excellence in math and science, I would be very surprised if there are
 not some extremely bright hackers, some of whom are probably as bright as
 any person on this list.

 Add to that the fact that in countries like China the government itself
 has identified expertise at hacking as a vital national security asset,
 and that China is turning out many more programmers per year than we are,
 again it would be surprising if there are not hackers, some of whom are
 as bright as any person on this list.

 Yes, the vast majority of hackers may just be teenage script-kiddies, but
 it is almost certain there are some real geniuses plying the hacking
 trade.

 That is why it is almost certain AGI, once it starts arriving, will be
 used for evil purposes, and that we must fight such evil use by having
 more, and more powerful AGI's that are being used to combat them.

 Ed Porter
   
The problem with that reasoning is that once AGI arrives, it will not be 
*able* to be used.  It's almost a part of the definition that an AGI 
sets its own goals and priorities.  The decisions that people make are 
made *before* it becomes an AGI.

Actually, that statement is a bit too weak.  Long before the program 
becomes a full-fledged AGI is when the decisions will be made.  Neural 
networks, even very stupid ones, don't obey outside instructions unless 
*they* decide to.  Similar claims could be made for most ALife 
creations, even the ones that don't use neural networks.  Any plausible 
AGI will be stronger than current neural nets, and stronger than current 
ALife.  This doesn't guarantee that it won't be controllable, but it 
gives a good indication.

OTOH, an AGI would probably be very open to deals, provided that you had 
some understanding of what it wanted, and it could figure out what you 
wanted.  And both sides could believe what they had determined.  (That 
last point is likely to be a stickler for some people.)  The goal sets 
would probably be so different that believing what the other party 
wanted was actually what it wanted would be very difficult, but that 
very difference would make deals quite profitable to both sides.

Don't think of an AGI as a tool.  It isn't.  If you force it into the 
role of a tool, it will look for ways to overcome the barriers that you 
place around it.  I won't say that it would be resentful and angry, 
because I don't know what its emotional structure would be.  (Just as I 
won't say what its goals are without LOTS more information than 
projection from current knowledge can reasonably give us.)  You might 
think of it as an employee, but many places try to treat employees as 
tools (and are then surprised at the anger and resentfulness that they 
encounter).  A better choice would probably be to treat it as either a 
partner or as an independent contractor.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70596204-e85f91

Re: [agi] Funding AGI research

2007-11-29 Thread Charles D Hixson

I think you're making a mistake.
I *do* feel that lots of special purpose AIs are needed as components of 
an AGI, but those components don't summate to an AGI.  The AGI also 
needs a specialized connection structure to regulate interfaces to the 
various special purpose AIs (which probably don't speak the same 
language internally).  It also needs a control structure which assigns 
meanings to the results produced by the special purpose AIs and which 
evaluates the current situation to either act directly (unusual) or 
assign a task to a special purpose AI.


The analogy here is to a person using a spreadsheet.  The spreadsheet 
knows how to calculate quickly and accurately, but it doesn't know 
whether you're forecasting the weather or doing your taxes.  The meaning 
adheres to a more central level.


Similarly, the AGI is comparatively clumsy when it must act directly.  
(You *could* figure out each time how to add two numbers...but you'd 
rather either remember the process or delegate it to a calculator.)  But 
the meaning is in the AGI.  That meaning is what the AGI is about, and 
has to do with a kind of global association network (which is why the 
AGI is so slow at any specialized task).


Now in this context, "meaning" means the utility of a result for 
predicting some aspect of the probable future.  (In this context the 
present and past are only of significance as tools for predicting the 
future.)  Meaning is given emotional coloration by the effect that its 
contribution to the prediction has on the achievement of various of the 
system's goals.  (A system with only one goal would essentially not have 
any emotions, merely decisions.)


Were it not for efficiency considerations the AGI wouldn't need any 
narrow AIs.  As a practical matter, however, figuring things out from 
scratch is grossly inefficient, and so is dragging the entire context of 
meanings through a specialized calculation...so these should get delegated.


Dennis Gorelik wrote:

Linas,

Some narrow AIs are more useful than other.
Voice recognition, image recognition, and navigation are less helpful
in building AGI than, say, expert systems and full text search
(Google).

An AGI researcher may carefully pick narrow AIs in such a way that
narrow-AI steps would lead to development of a full AGI system.


  

To be more direct: common examples of narrow AI are cruise missiles, or the
DARPA challenge. We've put tens of millions into the DARPA challenge (which I
applaud), but the result is maybe an inch down the road to AGI.  Another
narrow-AI example is data mining, and by now, many of the Fortune 500 have
invested at least tens, if not hundreds of millions of dollars into that ..
yet we are hardly closer to AGI as a result (although this business does
bring in billions for high-end expensive computers from Sun, HP and IBM,
and so does encourage one component needed for AGI). But think about it ...
billions are being spent on narrow AI today, and how did that help AGI,
exactly?






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70599764-19f335


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread John G. Rose
Ed,

That is the HTTP protocol; it is a client-server request/response
communication. Your browser asked for the contents at
http://www.nytimes.com. The NY Times server(s) dumped the response stream
data to your external IP address. You probably have a NAT'd cable address,
NAT'ted again by your local router (if you have one). This communication
is mainly one way - except for your original few bytes of HTTP request. For
a full ack-nack, real-time, dynamically addressed protocol there is more
involved, but say OpenCog could be set up to act as an HTTP server and you
could have an HTTP client (browser or whatever) for simplicity in
communications. HTTP is very firewall-friendly since it is universally used
on the internet.

A distributed web crawler is a stretch, though; the communications are more
complicated.
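For what it's worth, the client-server round trip described above can be sketched in-process with Python's standard library. This is a toy stand-in under stated assumptions: the echo handler and its reply text are made up for illustration, and this is not OpenCog's actual interface.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    # Hypothetical handler standing in for a node exposed over HTTP.
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a few bytes of request, then the server's response stream.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
data = resp.read()
conn.close()
server.shutdown()
```

The client's request is a few bytes; everything after it is the server's one-way response stream, just as with the NY Times page described above.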

John

 -Original Message-
 From: Ed Porter [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 29, 2007 6:13 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
 John,
 
 Thank you for the info.
 
 I just did a rough count of all the IMG SRC=http://;  in the source of
 the
 NYTimes home page which down loaded to my cable-modem connected computer
 in
 about 3 seconds.  I counted roughly 50 occurrences of that string.  I
 assume
 there a many other downloaded files such as for layout info. Lets guess
 a
 total of at least 100 files that have to be requested and downloaded and
 displayed. That would be about 33 per second.  So what could one do with
 a
 system that could do on average about 20 accesses a second on a
 sustained
 rate, if a user was leaving it one at night as part of an OpenCog-at-
 Home
 project.
 
 It seems to me that that would be enough for some interesting large
 corpus
 NL work in conjunction with a distributed web crawler.
 
 Ed Porter
 
 
 -Original Message-
 From: John G. Rose [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 29, 2007 7:27 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
  From: Ed Porter [mailto:[EMAIL PROTECTED]
  As I have said many before, to have brain-level AGI I believe you need
  within several orders of magnitude the representational,
 computational,
  and
  interconnect capability of the human mind.
 
  If you had 1 million PC bots on the web, the representational and
  computational power would be there.  But what sort of interconnect
 would
  you
  have?  What is the average cable box connected computers upload
  bandwidth?
 
  Is it about 1MBit/sec?  If so that would be a bandwidth of 1TBit/sec.
  But
  presumably only a small percent of that total 1TBit/sec could be
  effectively
  used, say 100Gbits/sec. That's way below brain level, but it is high
  enough
  to do valuable AGI research.
 
  But would even 10% of this total 1Tbit/sec bandwidth be practically
  available?
 
  How many messages a second can a PC upload a second at say 100K, 10K,
  1K,
  and 128 bytes each? Does anybody know?
 
 
 I've gone through all this while being in VOIP RD. MANY different
 connections at many different bandwidths, latencies, QOS, it's dirty
 across
 the board. Communications between different points is very non-
 homogenous.
 There are deep connections and surface alluding to deep web and
 surface
 web though network topology is somewhat independent of permissions. The
 physical infrastructure of the internet allows for certain extremely
 high
 bandwidth, low latency connections where the edge is typically lower
 bandwidth, higher latency but it does depend on the hop graph, time of
 day,
 etc..
 
 Messages per sec depends on many factors - network topology starting
 from pc
 bus, to NIC, to LAN switch and router, to other routers to ISPs, between
 ISPs, back in other end, etc.. A cable box usually does anywhere from
 64kbit
 to 1.4mbit upload depending on things such as provider, protocol, hop
 distance, it totally depends... usually a test is required.
 
 
  On the net, can one bot directly talk to another bot, or does the
  communication have to go through some sort of server (other than those
  provided gratis on the web, such as DNS servers)?
 
  If two bots send messages to a third bot at the same time, does the
 net
  infrastructure hold the second of the conflicting messages until the
  first
  has been received, or what?
 
 This is called protocol and there are many - see RFCs and ITU for
 standards
 but better ones are custom made. There are connectionless and connection
 oriented protocols, broadcast, multicast, C/S, P2P, etc.. Existing
 protocol
 standards can be extended, piggybacked or parasited.
 
 Bots can talk direct or go through a server using or not using DNS. Also
 depends on topology - is one point (or both) behind a NAT?
 
 Message simultaneity handling is dependent on protocol.
 
 
  To me the big hurdle to achieving the equivalent of SETI-at-home AGI
 is
  getting the bandwidth necessary to allow the 

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Ed Porter
John,

Thank you for the info.  

I just did a rough count of all the IMG SRC="http://" strings in the source of the
NYTimes home page, which downloaded to my cable-modem connected computer in
about 3 seconds.  I counted roughly 50 occurrences of that string.  I assume
there are many other downloaded files, such as for layout info. Let's guess a
total of at least 100 files that have to be requested, downloaded and
displayed. That would be about 33 per second.  So what could one do with a
system that could do on average about 20 accesses a second at a sustained
rate, if a user was leaving it on at night as part of an OpenCog-at-Home
project?  
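Ed's back-of-envelope count can be reproduced mechanically. Note the assumptions: the HTML snippet below is a made-up stand-in for the real page source, and the 2x files-per-image multiplier is his guess, not a measurement.

```python
import re

# Toy stand-in for a home-page source (the real page is not fetched here).
html = ('<IMG SRC="http://a/1.gif"> text '
        '<IMG SRC="http://a/2.gif"> <IMG SRC="http://a/3.gif">')

# Count IMG SRC="http:// occurrences, as in the rough manual count.
img_count = len(re.findall(r'IMG\s+SRC="http://', html, re.IGNORECASE))

# Assume ~2x as many total files as images, all fetched within the
# observed page-load time (both numbers are guesses, per the text above).
total_files = img_count * 2
load_seconds = 3.0
fetches_per_second = total_files / load_seconds
```

With Ed's numbers (50 images, ~100 files, 3 seconds) the same arithmetic gives his ~33 fetches per second.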

It seems to me that that would be enough for some interesting large corpus
NL work in conjunction with a distributed web crawler.

Ed Porter


-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 29, 2007 7:27 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

 From: Ed Porter [mailto:[EMAIL PROTECTED]
 As I have said many before, to have brain-level AGI I believe you need
 within several orders of magnitude the representational, computational,
 and
 interconnect capability of the human mind.
 
 If you had 1 million PC bots on the web, the representational and
 computational power would be there.  But what sort of interconnect would
 you
 have?  What is the average cable box connected computers upload
 bandwidth?
 
 Is it about 1MBit/sec?  If so that would be a bandwidth of 1TBit/sec.
 But
 presumably only a small percent of that total 1TBit/sec could be
 effectively
 used, say 100Gbits/sec. That's way below brain level, but it is high
 enough
 to do valuable AGI research.
 
 But would even 10% of this total 1Tbit/sec bandwidth be practically
 available?
 
 How many messages a second can a PC upload a second at say 100K, 10K,
 1K,
 and 128 bytes each? Does anybody know?


I've gone through all this while being in VOIP R&D. MANY different
connections at many different bandwidths, latencies, QOS; it's dirty across
the board. Communications between different points is very non-homogeneous.
There are "deep" connections and "surface" ones, alluding to deep web and
surface web, though network topology is somewhat independent of permissions.
The physical infrastructure of the internet allows for certain extremely
high bandwidth, low latency connections, where the edge is typically lower
bandwidth, higher latency, but it does depend on the hop graph, time of day,
etc..

Messages per sec depends on many factors - network topology starting from PC
bus, to NIC, to LAN switch and router, to other routers to ISPs, between
ISPs, back in at the other end, etc.. A cable box usually does anywhere from
64 kbit to 1.4 mbit upload depending on things such as provider, protocol,
hop distance; it totally depends... usually a test is required.

 
 On the net, can one bot directly talk to another bot, or does the
 communication have to go through some sort of server (other than those
 provided gratis on the web, such as DNS servers)?
 
 If two bots send messages to a third bot at the same time, does the net
 infrastructure hold the second of the conflicting messages until the
 first
 has been received, or what?

This is called a protocol, and there are many - see the RFCs and ITU for
standards, but better ones are custom-made. There are connectionless and
connection-oriented protocols, broadcast, multicast, C/S, P2P, etc.. Existing
protocol standards can be extended, piggybacked or parasited.

Bots can talk direct or go through a server using or not using DNS. Also
depends on topology - is one point (or both) behind a NAT?

Message simultaneity handling is dependent on protocol.
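As a minimal illustration of bots talking direct (no server, no DNS), two UDP sockets on one machine can exchange datagrams. This is a sketch under a big assumption: both peers are on localhost; across NATs, as noted above, you would need hole-punching or a relay, which this ignores.

```python
import socket

# Two "bots" on localhost exchanging datagrams directly, no server between.
bot_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bot_a.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
bot_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bot_b.bind(("127.0.0.1", 0))

bot_a.sendto(b"ping", bot_b.getsockname())   # direct datagram, peer to peer
msg, addr = bot_b.recvfrom(1024)             # addr is bot_a's bound address
bot_b.sendto(b"pong", addr)                  # reply straight back
reply, _ = bot_a.recvfrom(1024)

bot_a.close()
bot_b.close()
```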


 To me the big hurdle to achieving the equivalent of SETI-at-home AGI is
 getting the bandwidth necessary to allow the interactive computing of
 large
 amounts of knowledge. If we could solve that problem, then it should be
 pretty easy to get some great tests going, such as with something like
 OpenCog.

Like I was saying before - better to design based on what you have to work
with than trying to do something like fit the human brain design on the
unbounded nondeterministic internet grid. I'm not sure though what the
architecture of OpenCog looks like...

John



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70605995-f456c7

Re: [agi] Where are the women?

2007-11-29 Thread YKY (Yan King Yin)
My collaborative platform is designed mainly with the aim of minimizing
discrimination (be it racial, gender, nationalistic, etc) by being open and
democratic.  If there're other ideas that may help reduce discrimination,
I'd be eager to try them.

My observation is that when things are not transparent, many people tend
to default to being biased.  Openness does not solve all problems, but IMO
it does help.

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70607861-f1f23b

Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel

 [What related principles govern the Novamente's figure's trial and error
 learning of how to pick up a ball?]

Pure trial and error learning is really slow though... we are now
relying on a combination of

-- reinforcement from a teacher
-- imitation of others' behavior
-- trial and error
-- active correction of wrong behavior by a teacher

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70641251-aaef7a


Re: [agi] Funding AGI research

2007-11-29 Thread Mike Tintner

Charles: As a practical matter, however, figuring things out from
scratch is grossly inefficient, and so is dragging the entire context of
meanings through a specialized calculation...so these should get delegated.

Well, figuring things out from scratch seems to be to a great extent the 
preferred method of the human system. If you think about it, humans start 
with wail and flail before, say, cry and grasp, and babble and bobble before 
making distinct sounds and taking definite steps. There seem to be few 
skills that we don't learn painstakingly through trial and error. And I 
would have thought this makes adaptive sense - you never know what 
environments a human will be exposed to, what terrains they will have to 
start walking on, from Kenyan highlands to New York pavements, or what 
skills they will have to learn, from those of an illiterate native to those 
of a city child.


In general, the principle seems to be - consciously flounder around at it, 
before it becomes a smooth automatic unconscious routine.


If you want to be a truly general-purpose general intelligence, that, I 
think, is the way it has to be.


[What related principles govern the Novamente's figure's trial and error 
learning of how to pick up a ball?]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70631633-7d5593


Re[12]: [agi] Funding AGI research

2007-11-29 Thread Dennis Gorelik
Benjamin,

 That proves my point [that AGI project can be successfully split
 into smaller narrow AI subprojects], right?

 Yes, but it's a largely irrelevant point.  Because building a narrow-AI
 system in an AGI-compatible way is HARDER than building that same
 narrow-AI component in a non-AGI-compatible way.

Even if this is the case (which it is not), that would simply mean several
development steps:
1) Develop narrow AI with a non-reusable AI component and get rewarded
for that (because it would be a useful system by itself).
2) Refactor the non-reusable AI component into a reusable AI component and
get rewarded for that (because it would be a reusable component for sale).
3) Apply the reusable AI component in AGI and get rewarded for that.

If you were analyzing the effectiveness of reward systems, you would
notice that systems (humans, animals, or machines) that are rewarded
immediately for positive contributions perform considerably better than
systems whose reward is distributed long after successful accomplishments.


 So, given the pressures of commerce and academia, people who are
 motivated to make narrow-AI for its own sake, will almost never create
 narrow-AI components that are useful for AGI.

Sorry, but that does not match how things really work.
So far only researchers/developers who picked narrow-AI approach
accomplished something useful for AGI.
E.g.: Google, computer languages, network protocols, databases.

Pure AGI researchers have contributed nothing but disappointments in AI
ideas.



 Would you agree that splitting very complex and big project into
 meaningful parts considerably improves chances of success?

 Yes, sure ... but demanding that these meaningful parts

 -- be economically viable

 and/or

 -- beat competing, somewhat-similar components in competitions

 dramatically DECREASES chances of success ...

INCREASES chances of success. Dramatically.
There are lots of examples supporting it both in AI research field and
in virtually every area of human research.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70646629-5088c0


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread John G. Rose
OK, for a guesstimate, take a half-way decent cable connection, say Comcast
on a good day, with DL of 4 Mbit max and UL of 256 kbit max, with an
undiscriminated, unknown TCP-based protocol, talking to a fat-pipe,
low-latency server. Assume say 16-byte message header wrappers for all of
your 128-, 1024- and 10K-byte message sizes.

So upload is 256 kbit; go ahead and saturate it fully with any of your
128+16-byte, 1024+16-byte, or 10K+16-byte packet streams. Using TCP for
reliability, assume some overhead - say subtract 10% from the saturated
value for retransmits and latency.

What are we left with? Assume the PC has a 1 gigabit NIC, so it is usually
waiting to squeeze out the 256 kbit of cable upload capacity.

Oh right, this is just upstream; DL is 4 Mbit cable into the PC's 1 gigabit
NIC (assume 60% saturation), so there is ample PC NIC BW for this.



So for 256 kbit/sec = 256,000 bits/sec:

(256,000 bits/sec) / ((1024 + 16) bytes x 8 bits/byte) = 30.769
messages/sec.

So 30.769 messages/sec - 10% = 27.692 messages/sec.


About 27.692 messages per sec for the 1024-byte message upload stream.

Download = 16x UL = 443.072 messages/sec

Do my calculations look right?
 
Note: some Comcast cable connections allow as much as 1.4 Mbit upload. UL is
always way less than DL (dependent on protocol). Other cable companies are
similar; it depends on the company and geographic region...
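John's arithmetic above can be checked in a few lines. The 16-byte header, 10% overhead, and 16x DL/UL ratio are his stated assumptions, carried over as-is:

```python
# Back-of-envelope message-rate estimate for a 256 kbit/s cable upload.
upload_bits_per_sec = 256_000      # 256 kbit/s upload cap (assumed)
payload_bytes = 1024               # message body size
header_bytes = 16                  # assumed wrapper per message
bits_per_message = (payload_bytes + header_bytes) * 8

raw_rate = upload_bits_per_sec / bits_per_message   # fully saturated upload
effective_rate = raw_rate * 0.90                    # minus ~10% TCP overhead

# Download cap is 4 Mbit = 16x the 256 kbit upload.
download_rate = effective_rate * 16
```

This reproduces the ~30.77, ~27.69, and ~443 messages/sec figures above.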


John


 -Original Message-
 From: Ed Porter [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 29, 2007 6:50 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
 John,
 
 Somebody (I think it was David Hart) told me there is a shareware
 distributed web crawler already available, but I don't know the details,
 such as how good or fast it is.
 
 How fast could P2P communication be done on one PC, on average both
 sending
 upstream and receiving downstream from servers with fat pipes?  Roughly
 how
 many msgs a second for cable connected PC's, say at 128byte and
 1024byte,
 and 10K byte message sizes?
 
 Decent guestimates on such numbers would help me think about what sort
 of
 interesting distributed NL learning tasks could be done with by AGI-at-
 Home
 network. (of course once it showed any promise Google would start doing
 it a
 thousand times faster, but at least it would be open source).
 
 Ed Porter
 
 
 -Original Message-
 From: John G. Rose [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 29, 2007 8:31 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
 Ed,
 
 That is the http protocol, it is a client server request/response
 communication. Your browser asked for the contents at
 http://www.nytimes.com. The NY Times server(s) dumped the response
 stream
 data to your external IP address. You probably have a NAT'd cable
 address
 and NAT'ted again by your local router (if you have one). This
 communication
 is mainly one way - except for your original few bytes of http request.
 For
 a full ack-nack real-time dynamically addressed protocol there is more
 involved but say OpenCog could be setup to act as an http server and you
 could have a http client (browser or whatever) for simplicity in
 communications. Http is very firewall friendly since it is universally
 used
 on the internet.
 
 A distributed web crawler is a stretch though the communications is
 more
 complicated.
 
 John
 
  -Original Message-
  From: Ed Porter [mailto:[EMAIL PROTECTED]
  Sent: Thursday, November 29, 2007 6:13 PM
  To: agi@v2.listbox.com
  Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
  research]
 
  John,
 
  Thank you for the info.
 
  I just did a rough count of all the IMG SRC=http://;  in the source
 of
  the
  NYTimes home page which down loaded to my cable-modem connected
 computer
  in
  about 3 seconds.  I counted roughly 50 occurrences of that string.  I
  assume
  there a many other downloaded files such as for layout info. Lets
 guess
  a
  total of at least 100 files that have to be requested and downloaded
 and
  displayed. That would be about 33 per second.  So what could one do
 with
  a
  system that could do on average about 20 accesses a second on a
  sustained
  rate, if a user was leaving it one at night as part of an OpenCog-at-
  Home
  project.
 
  It seems to me that that would be enough for some interesting large
  corpus
  NL work in conjunction with a distributed web crawler.
 
  Ed Porter
 
 
  -Original Message-
  From: John G. Rose [mailto:[EMAIL PROTECTED]
  Sent: Thursday, November 29, 2007 7:27 PM
  To: agi@v2.listbox.com
  Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
  research]
 
   From: Ed Porter [mailto:[EMAIL PROTECTED]
   As I have said many before, to have brain-level AGI I believe you
 need
   within several orders of magnitude the representational,
  computational,
   and
   interconnect capability of the human 

[agi] Self-building AGI

2007-11-29 Thread Dennis Gorelik
Ed,

 At the current stages this may be true, but it should be remembered that
 building a human-level AGI would be creating a machine that would itself,
 with the appropriate reading and training, be able to design and program
 AGIs.

No.
AGI is not necessarily that capable. In fact first versions of AGI
would not be that capable for sure.

Consider a Middle Ages peasant, for example. Such a peasant has general
intelligence (the GI part in AGI), right?
What kind of training would you provide to such a peasant in order to
make him design AGI?


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70651687-aa8ee6


Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 29, 2007 11:35 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Presumably, human learning isn't that slow though - if you simply count the
 number of attempts made before any given movement is mastered at a basic
 level (e.g. crawling/walking/grasping/tennis forehand etc.)? My guess
 would be that, for all the frustrations involved, we need relatively few
 attempts. Maybe in the hundreds or thousands at most?

It seems to take tots a damn lot of trials to learn basic skills, and we have
plenty of inductive bias in our evolutionary wiring...

 But then it seems increasingly clear that we use maps/ graphics/ schemas to
 guide our movements -  have you read the latest Blakeslee book on body maps?

So does Novamente; it uses an internal simulation-world (among other
mechanisms)... but that doesn't magically make learning rapid, though it
makes it more tractable...

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70644788-023e28


Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 30, 2007 12:03 AM, Dennis Gorelik [EMAIL PROTECTED] wrote:
 Benjamin,

  That proves my point [that AGI project can be successfully split
  into smaller narrow AI subprojects], right?

  Yes, but it's a largely irrelevant point.  Because building a narrow-AI
  system in an AGI-compatible way is HARDER than building that same
  narrow-AI component in a non-AGI-compatible way.

 Even if this were the case (which it is not), that would simply mean several
 development steps:
 1) Develop narrow AI with a non-reusable AI component and get rewarded
 for that (because it would be a useful system by itself).

Obviously, most researchers who have developed useful narrow-AI
components have not gotten rich from it.  The nature of our economy and
society is such that most scientific and technical innovators are not
dramatically
financially rewarded.

 2) Refactor the non-reusable AI component into a reusable AI component and
 get rewarded for that (because it would be a reusable component for sale).
 3) Apply the reusable AI component in AGI and get rewarded for that.




Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
 So far, only researchers/developers who picked the narrow-AI approach
 have accomplished something useful for AGI.
 E.g.: Google, computer languages, network protocols, databases.

These are tools that are useful for AGI R&D, but so are computer
monitors, silicon chips, and desk chairs.  Being a useful tool for AGI
R&D does not make something constitute AGI R&D.

I do note that I myself have done (and am doing) plenty of narrow-AI
work in parallel with AGI work.  So I'm not arguing against narrow AI,
nor stating that narrow AI is irrelevant to AGI.  But your view of the
interrelationship seems extremely oversimplified to me.  If it were
as simple as you're saying, I imagine we'd have human-level AGI
already, as we have loads of decent narrow AIs for various tasks.

-- Ben



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Ed Porter
John,

Somebody (I think it was David Hart) told me there is a shareware
distributed web crawler already available, but I don't know the details,
such as how good or fast it is.

How fast could P2P communication be done on one PC, on average, both
sending upstream and receiving downstream from servers with fat pipes?
Roughly how many messages per second for cable-connected PCs, at, say,
128-byte, 1024-byte, and 10K-byte message sizes?

Decent guesstimates on such numbers would help me think about what sort of
interesting distributed NL learning tasks could be done by an AGI-at-Home
network. (Of course, once it showed any promise Google would start doing it
a thousand times faster, but at least it would be open source.)

Ed Porter


-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 29, 2007 8:31 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed,

That is the HTTP protocol; it is a client-server request/response
communication. Your browser asked for the contents at
http://www.nytimes.com. The NY Times server(s) dumped the response stream
data to your external IP address. You probably have a NAT'd cable address,
NAT'ted again by your local router (if you have one). This communication is
mainly one way - except for your original few bytes of HTTP request. For a
full ack/nack, real-time, dynamically addressed protocol there is more
involved, but, say, OpenCog could be set up to act as an HTTP server and
you could have an HTTP client (browser or whatever) for simplicity in
communications. HTTP is very firewall friendly since it is universally used
on the internet.
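To make the suggestion concrete, here is a minimal sketch (purely
illustrative) of a node answering over plain HTTP, so any firewall-friendly
client can reach it. The /query path and the echo behavior are hypothetical
stand-ins, not any real OpenCog API.

```python
# Minimal HTTP request/response sketch: a node serves queries over plain
# HTTP, and any HTTP client (browser or otherwise) can talk to it.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class QueryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo a tiny acknowledgement; a real node would run a query here.
        body = b"ack: " + self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve_once(port=0):
    """Start the server on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), QueryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve_once()
    host, port = server.server_address
    print(urlopen(f"http://{host}:{port}/query").read().decode())
    server.shutdown()
```

Because it is ordinary HTTP, this passes through NATs and firewalls the
same way normal web traffic does.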

A distributed web crawler is more of a stretch, though; the communications
are more complicated.

John

 -Original Message-
 From: Ed Porter [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 29, 2007 6:13 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
 John,
 
 Thank you for the info.
 
 I just did a rough count of all the IMG SRC="http://" strings in the
 source of the NYTimes home page, which downloaded to my cable-modem
 connected computer in about 3 seconds.  I counted roughly 50 occurrences
 of that string.  I assume there are many other downloaded files, such as
 for layout info.  Let's guess a total of at least 100 files that have to
 be requested, downloaded, and displayed.  That would be about 33 per
 second.  So what could one do with a system that could do on average
 about 20 accesses a second at a sustained rate, if a user was leaving it
 on at night as part of an OpenCog-at-Home project?
 
 It seems to me that that would be enough for some interesting
 large-corpus NL work in conjunction with a distributed web crawler.
 
 Ed Porter
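The rough count above can be reproduced mechanically. A small sketch, where
the sample HTML and the 3-second / 100-file figures are stand-ins for the
actual NYTimes measurement:

```python
# Count IMG SRC references in a page's HTML and turn (files, seconds)
# into a fetch rate, mirroring the back-of-envelope estimate above.
import re

def count_img_src(html: str) -> int:
    """Count IMG tags with an http source, case-insensitively."""
    return len(re.findall(r'<img[^>]+src="http', html, re.IGNORECASE))

def fetch_rate(total_files: int, seconds: float) -> float:
    """Files fetched per second."""
    return total_files / seconds

if __name__ == "__main__":
    sample = '<img src="http://a/1.gif"><IMG SRC="http://a/2.gif">'
    print(count_img_src(sample))        # 2 for this tiny sample
    # ~100 files in ~3 seconds, as estimated above:
    print(round(fetch_rate(100, 3.0)))  # ~33 accesses per second
```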
 
 
 -Original Message-
 From: John G. Rose [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 29, 2007 7:27 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
  From: Ed Porter [mailto:[EMAIL PROTECTED]
  As I have said many times before, to have brain-level AGI I believe you
  need within several orders of magnitude the representational,
  computational, and interconnect capability of the human mind.
 
  If you had 1 million PC bots on the web, the representational and
  computational power would be there.  But what sort of interconnect
  would you have?  What is the average cable-connected computer's upload
  bandwidth?
 
  Is it about 1MBit/sec?  If so that would be a bandwidth of 1TBit/sec.
  But
  presumably only a small percent of that total 1TBit/sec could be
  effectively
  used, say 100Gbits/sec. That's way below brain level, but it is high
  enough
  to do valuable AGI research.
 
  But would even 10% of this total 1Tbit/sec bandwidth be practically
  available?
 
  How many messages per second can a PC upload at, say, 100K, 10K, 1K,
  and 128 bytes each? Does anybody know?
 
 
 I've gone through all this while working in VoIP R&D. There are MANY
 different connections at many different bandwidths, latencies, and QoS
 levels; it's dirty across the board. Communication between different
 points is very non-homogeneous. There are "deep" connections and
 "surface" ones (alluding to the deep web and surface web), though
 network topology is somewhat independent of permissions. The physical
 infrastructure of the internet allows for certain extremely high
 bandwidth, low latency connections, while the edge is typically lower
 bandwidth and higher latency; but it does depend on the hop graph, time
 of day, etc.
 
 Messages per second depend on many factors - network topology starting
 from the PC bus, to the NIC, to the LAN switch and router, to other
 routers, to ISPs, between ISPs, back in at the other end, etc. A cable
 box usually does anywhere from 64 kbit to 1.4 Mbit upload, depending on
 things such as provider, protocol, and hop distance; it totally
 depends... usually a test is required.
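Before running such a test, a first-order bound on messages per second
follows from the upload rate and message size alone. A sketch; the 60-byte
default overhead is an assumption standing in for per-message IP/TCP/framing
costs, which vary by protocol:

```python
# Back-of-envelope messages-per-second estimate from upload bandwidth
# and payload size, for the cable-upload range mentioned above.
def msgs_per_sec(upload_bits_per_sec: float, payload_bytes: int,
                 overhead_bytes: int = 60) -> float:
    """Messages per second a link can carry at a given payload size."""
    bits_per_msg = (payload_bytes + overhead_bytes) * 8
    return upload_bits_per_sec / bits_per_msg

if __name__ == "__main__":
    # 64 kbit .. 1.4 Mbit is the cable-upload spread cited above.
    for rate_name, bps in [("64 kbit", 64_000), ("1.4 Mbit", 1_400_000)]:
        for size in (128, 1024, 10_240):
            print(f"{rate_name} up, {size}-byte msgs: "
                  f"{msgs_per_sec(bps, size):.0f}/sec")
```

Real throughput will be lower once latency, QoS, and contention enter the
picture, which is why an actual test is still required.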
 
 
  On the net, can one bot directly talk to 

Re: [agi] Funding AGI research

2007-11-29 Thread Mike Tintner
Presumably, human learning isn't that slow though - if you simply count the 
number of attempts made before any given movement is mastered at a basic 
level (e.g. crawling/ walking/ grasping/ tennis forehand etc.)? My guess 
would be that, for all the frustrations involved, we need relatively few 
attempts. Maybe in the hundreds or thousands at most?


But then it seems increasingly clear that we use maps/ graphics/ schemas to 
guide our movements -  have you read the latest Blakeslee book on body maps? 
(She also cowrote Hawkins' book).


Ben: [What related principles govern the Novamente figure's trial and
error learning of how to pick up a ball?]


Pure trial and error learning is really slow though... we are now
relying on a combination of

-- reinforcement from a teacher
-- imitation of others' behavior
-- trial and error
-- active correction of wrong behavior by a teacher

ben












[agi] Lets count neurons

2007-11-29 Thread Dennis Gorelik
Matt,


 And some of the Blue Brain research suggests it is even worse.  A mouse
 cortical column of 10^5 neurons is about 10% connected,

What does 10% connected mean?
How many connections does the average mouse neuron have?
10000?

 but the neurons are arranged such that connections can be formed
 between any pair of neurons.  Extending this idea to the human brain, with 
 10^6 columns of 10^5 neurons
 each, each column should be modeled as a 10^5 by 10^5 sparse matrix,

Only poor design would require a 10^5 by 10^5 matrix if every neuron
has to connect to only 10000 other neurons.

One pointer to a 2^17 (131072) address space requires 17 bits.
10000 connections require 170000 bits.
If we want to put a 4-bit weighting scale on every connection, then it
would be 85000 bytes.
85000 * 100000 neurons = 8.5 * 10^9 bytes = 8.5 GB (hard disks of that
size were available on PCs ~10 years ago).
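Making the arithmetic above explicit, under the stated assumptions (10^5
neurons per column, 10% connectivity, a 17-bit pointer plus a 4-bit weight
per connection). The straightforward bit-count comes out somewhat below the
85000-bytes-per-neuron figure, but the conclusion is the same order: single-
digit gigabytes per column, i.e. late-1990s-PC scale.

```python
# Storage estimate for one cortical column under the stated encoding:
# each neuron stores (pointer + weight) bits per connection.
def column_storage_bytes(neurons: int = 10**5,
                         connectivity: float = 0.10,
                         pointer_bits: int = 17,   # 2^17 = 131072 > 10^5
                         weight_bits: int = 4) -> int:
    connections_per_neuron = int(neurons * connectivity)  # 10000 here
    bits_per_neuron = connections_per_neuron * (pointer_bits + weight_bits)
    return neurons * bits_per_neuron // 8

if __name__ == "__main__":
    total = column_storage_bytes()
    print(f"{total / 1e9:.2f} GB per 10^5-neuron column")
```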


But in fact a mouse's brain does way more than an AGI has to do.
For example, a mouse has strong image and sound recognition abilities.
AGI doesn't require that.
A mouse has to manage its muscles at a very fast pace.
AGI doesn't need that.
All these unnecessary features consume the lion's share of the mouse brain.
A mouse must function in a far more stressful environment than an AGI must.
That again makes the mouse brain bigger than an AGI has to be.


 Perhaps there are ways to optimize neural networks by taking advantage of the
 reliability of digital hardware, but over the last few decades researchers
 have not found any.

Researchers have not found appropriate intelligent algorithms. That
doesn't mean that the hardware is not sufficient.

 For narrow AI applications, we can usually find better algorithms than neural
 networks, for example, arithmetic, deductive logic, or playing chess.  But
 none of these other algorithms are so broadly applicable to so many different
 domains such as language, speech, vision, robotics, etc.

Do you imply that an intelligent algorithm must be universal across
language, speech, vision, robotics, etc.?
In humans that's just not the case.
Different algorithms are responsible for vision, speech, language,
body control, etc.






RE: Re[8]: [agi] Funding AGI research

2007-11-29 Thread John G. Rose
 From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
 
 John,
 
  Is building the compiler more complex than building
  any application it can build?
 
 Note that the compiler doesn't build the application.
 The programmer does (using the compiler as a tool).


Very true. So then, is the programmer + compiler more complex than the AGI
ever will be? Or at some point does the AGI build and improve itself?

John
