Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
I guess the trans-infinite is computable, given infinite resources.  It
doesn't make sense to me except that the infinite does not exist as a
number-like object; it is an active process of incrementation, or something
like that.  End of Count.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
I see that erasure is from an alternative definition for a Turing Machine.
I am not sure if a four state Turing Machine could be used to
make Solomonoff Induction convergent.  If all programs that required working
memory greater than the length of the output string could be eliminated then
that would have an impact on convergent feasibility.
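
A toy illustration of that pruning (my own sketch; the little instruction
set below is invented for illustration and is not Solomonoff's machine):
enumerate short programs for a made-up machine, abort any run whose working
memory grows past the length of the target string, and keep the survivors
that still reproduce the target.

    from itertools import product

    OPS = '01DH'  # hypothetical ops: write '0', write '1', Duplicate tape, Halt

    def run_capped(prog, mem_cap):
        """Run a toy program, aborting if working memory ever exceeds mem_cap.
        Returns the output string on a clean halt, else None."""
        tape = ''
        for op in prog:
            if op == '0':
                tape += '0'
            elif op == '1':
                tape += '1'
            elif op == 'D':
                tape += tape      # the one op that can blow past the cap
            elif op == 'H':
                return tape
            if len(tape) > mem_cap:
                return None       # pruned: needs more memory than the output is long
        return None               # ran off the end without halting

    def surviving_programs(target, max_len):
        """All programs up to max_len that print target within the memory cap."""
        for n in range(1, max_len + 1):
            for prog in product(OPS, repeat=n):
                if run_capped(prog, len(target)) == target:
                    yield ''.join(prog)

    print(list(surviving_programs('0101', 5)))  # e.g. '01DH' survives the cap

Whether a cap like this leaves enough programs for the induction to remain
meaningful is exactly the question raised above.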





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
On Mon, Aug 2, 2010 at 7:21 AM, Jim Bromer jimbro...@gmail.com wrote:

 I see that erasure is from an alternative definition for a Turing Machine.
 I am not sure if a four state Turing Machine could be used to
 make Solomonoff Induction convergent.  If all programs that required working
 memory greater than the length of the output string could be eliminated then
 that would have an impact on convergent feasibility.

But then again this is getting back to my whole thesis.  By constraining the
definition of all possible programs sufficiently, we should be left with a
definable subset of programs that could be used in actual computations.

I want to study more to try to better understand Abram's definition of a
convergent derivation of Solomonoff Induction.
Jim Bromer





[agi] Shhh!

2010-08-02 Thread Jim Bromer
I can write an algorithm that is capable of describing ('reaching') every
possible irrational number - given infinite resources.  The infinite is not
a number-like object; it is an active form of incrementation or
concatenation.  So I can write an algorithm that can write *every* finite
state of *every* possible number.  However, it would take another algorithm
to 'prove' it.  Given an irrational number, this other algorithm could find
the infinite incrementation for every digit of the given number.  Each
possible number (including the incrementation of those numbers that cannot
be represented in truncated form) is embedded within the single infinite
incrementation of digits that is produced by the algorithm, so the second
algorithm would have to calculate, increment by increment, where each digit
of the given irrational number would be found.  But the thing is, both
functions would be computable and provable.  (I haven't actually figured
the second algorithm out yet, but it is not a difficult problem.)

This means that the Trans-Infinite Is Computable.  But don't tell anyone
about this, it's a secret.
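
One way to read the first claim (my own sketch, in decimal; nothing here
settles the trans-infinite question): a shortest-first enumeration reaches
every finite digit string, hence every finite prefix of every real number's
expansion, and the 'second algorithm' reduces to index arithmetic.

    from itertools import count, product

    def all_finite_digit_strings():
        """Yield every finite decimal digit string, shortest first.
        Every finite prefix of every real's expansion appears exactly once."""
        for length in count(1):
            for digits in product('0123456789', repeat=length):
                yield ''.join(digits)

    def index_of(prefix):
        """The 'second algorithm': where a given digit string appears in the
        enumeration, computed by arithmetic rather than by search."""
        n = len(prefix)
        shorter = sum(10**k for k in range(1, n))  # count of all shorter strings
        return shorter + int(prefix)               # rank among the length-n strings

    gen = all_finite_digit_strings()
    first = [next(gen) for _ in range(200)]
    assert first[index_of('14')] == '14'   # '14', the first two fraction digits of pi

The catch, of course, is that the enumeration reaches every finite prefix
but never emits a completed infinite expansion, which is where the
trans-infinite disagreement in this thread lives.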





[agi] Re: Shhh!

2010-08-02 Thread Jim Bromer
I think I can write an abbreviated version, but there would only be a few
people in the world who would both believe me and understand why it would
work.

On Mon, Aug 2, 2010 at 8:53 AM, Jim Bromer jimbro...@gmail.com wrote:

 [Jim's Shhh! message quoted in full; see above]







[agi] Walker Lake

2010-08-02 Thread Steve Richfield
Sometime when you are flying between the northwest US and Las Vegas, look
out your window as you pass over Walker Lake in eastern Nevada. At the
south end you will see a system of roads leading to tiny buildings, all
surrounded by military security. From what I have been able to figure out,
you will find the U.S. arsenal of chemical and biological weapons housed
there. No, we are not now making these weapons, but neither are we
disposing of them.

Similarly, there has been discussion of developing advanced military
technology using AGI and other computer-related methods. I believe that
these efforts are fundamentally anti-democratic, as they allow a small
number of people to control a large number of people. Gone are the days when
people voted with their swords. We now have the best government that money
can buy monitoring our every email, including this one, to identify anyone
resisting such efforts. 1984 has truly arrived. This can only lead to a
horrible end to freedom, with AGIs doing their part and more.

Like chemical and biological weapons, unmanned and automated weapons should
be BANNED. Unfortunately, doing so would provide a window of opportunity for
others to deploy them. However, if we make these and stick them in yet
another building at the south end of Walker Lake, we would be ready in case
other nations deploy such weapons.

How about an international ban on the deployment of all unmanned and
automated weapons? The U.S. won't now even agree to ban land mines. At least
this would restore SOME relationship between popular support and military
might. Doesn't it sound ethical to insist that a human being decide when
to end another human being's life? Doesn't it sound fair to require the
decision maker to be in harm's way, especially when the person being killed
is in or around their own home? Doesn't it sound unethical to add to the
present situation? When deployed on a large scale, aren't these WMDs?

Steve





Re: [agi] Walker Lake

2010-08-02 Thread David Jones
How about you go to war yourself, or send your children? I'd rather send a
robot. It's safer for both the soldier and the people on the ground,
because you don't have to shoot first and ask questions later.

And you're right, we shouldn't monitor anyone. We should just allow
terrorists to talk openly to plot attacks on us. After all, I'd rather have
my privacy than my life.

dumb.

On Mon, Aug 2, 2010 at 10:40 AM, Steve Richfield
steve.richfi...@gmail.com wrote:

 [Steve Richfield's Walker Lake message quoted in full; see above]






Re: [agi] Walker Lake

2010-08-02 Thread Mike Tintner
Steve: How about an international ban on the deployment of all unmanned
and automated weapons?

You might as well ask for a ban on war (or, perhaps, aggression). I strongly
recommend reading the SciAm July 2010 issue on robotic warfare. The US
already operates, from memory, somewhere between 13,000 and 20,000 unmanned
weapons. Unmanned war (obviously with some, but ever less, human
supervision) IS the future of war.

If you used a little lateral thinking, you'd realise that this may well be a 
v.g. thing - let robots kill each other rather than humans - whoever's robots 
win, wins the war. It would be interesting to compare Afghan./Vietnam - I 
imagine the kill count is considerably down (but correct me) - *because* of 
superior, more automated technology.




Re: [agi] Walker Lake

2010-08-02 Thread Russell Wallace
I don't often request list moderation, but if this kind of off-topic spam
and clueless trolling doesn't call for it, nothing does, so: I hereby
request that a moderator take appropriate action.

On Mon, Aug 2, 2010 at 3:40 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 [Steve Richfield's Walker Lake message quoted in full; see above]






[agi] Robot Warriors - the closest to real AGI?

2010-08-02 Thread Mike Tintner
[Here's the SciAm article - go see the illustrations too. We should really be
discussing all this technologically because it strikes me as the closest to
real AGI there is - and probably where we're likely to see the soonest
advances]

WAR MACHINES

Robots on and above the battlefield are bringing about the most profound
transformation of warfare since the advent of the atom bomb

By P. W. Singer

Back in the early 1970s, a handful of scientists, engineers, defense
contractors and U.S. Air Force officers got together to form a professional
group. They were essentially trying to solve the same problem: how to build
machines that can operate on their own without human control and to figure
out ways to convince both the public and a reluctant Pentagon brass that
robots on the battlefield are a good idea. For decades they met once or
twice a year, in relative obscurity, to talk over technical issues, exchange
gossip and renew old friendships. This once cozy group, the Association for
Unmanned Systems International, now encompasses more than 1,500 member
companies and organizations from 55 countries. The growth happened so fast,
in fact, that it found itself in something of an identity crisis. At one of
its meetings in San Diego, it even hired a master storyteller to help the
group pull together the narrative of the amazing changes in robotic
technology. As one attendee summed up, "Where have we come from? Where are
we? And where should we - and where do we want to - go?"

What prompted the group's soul-searching is one of the most profound changes
in modern warfare since the advent of gunpowder or the airplane: an
astonishingly rapid rise in the use of robots on the battlefield. Not a
single robot accompanied the U.S. advance from Kuwait toward Baghdad in
2003. Since then, 7,000 unmanned aircraft and another 12,000 ground vehicles
have entered the U.S. military inventory, entrusted with missions that range
from seeking out snipers to bombing the hideouts of al-Qaeda higher-ups in
Pakistan. The world's most powerful fighting forces, which once eschewed
robots as unbecoming to their warrior culture, have now embraced a war of
the machines as a means of combating an irregular enemy that triggers remote
explosions with cell phones and then blends back into the crowd. These
robotic systems are not only having a big effect on how this new type of
warfare is fought, but they also have initiated a set of contentious
arguments about the implications of using ever more autonomous and
intelligent machines in battle. Moving soldiers out of harm's way may save
lives, but the growing use of robots also raises deep political, legal and
ethical questions about the fundamental nature of warfare and whether these
technologies could inadvertently make wars easier to start.

The earliest threads of this story arguably hark back to the 1921 play
R.U.R., in which Czech writer Karel Čapek coined the word robot to describe
mechanical servants that eventually rise up against their human masters. The
word was packed with meaning, because it derived from the Czech word for
servitude and the older Slavic word for slave, historically linked to the
robotniks, peasants who had revolted against rich landowners in the 1800s.
This theme of robots taking on the work we don't want to do but then
ultimately assuming control is a staple of science fiction that continues
today in The Terminator and The Matrix. Today roboticists invoke the
descriptors unmanned or remote-operated to avoid Hollywood-fueled visions of
machines that are plotting our demise.

In the simplest terms, robots are machines built to operate in a
sense-think-act paradigm. That is, they have sensors that gather information
about the world. Those data are then relayed to computer processors, and
perhaps artificial-intelligence software, that use them to make appropriate
decisions. Finally, based on that information, mechanical systems known as
effectors carry out some physical action on the world around them. Robots do
not have to be anthropomorphic, as is the other Hollywood trope of a man in
a metal suit. The size and shape of the systems that are beginning to carry
out these actions vary widely and rarely evoke the image of C-3PO or the
Terminator.

The Global Positioning Satellite system, videogame-like remote controls and
a host of other technologies have made robots both useful and usable on the
battlefield during the past decade. The increased ability to observe,
pinpoint and then attack targets in hostile settings without having to
expose the human operator to danger became a priority after the 9/11
attacks, and each new use of the systems on the ground created a success
story that had broader repercussions. As an example, in the first few months
of the Afghan campaign in 2001, a prototype of the PackBot, now used
extensively to defuse bombs, was sent into the field for testing. The
soldiers liked it so much that they would not return it to its manufacturer,
iRobot,

Re: [agi] Walker Lake

2010-08-02 Thread Steve Richfield
Matt,

I grant you your points, but they miss my point. Where is this ultimately
leading? To a superpower with the ability to kill its opponents without any
risk to itself. This may be GREAT so long as you agree with and live under
that superpower, but how about when things change for the worse? What if we
get another Bush who lies to Congress and wages unprovoked war with other
nations, only next time with vast armies of robots a la *The Clone Wars*?
Sure, the kill rate will be almost perfect. Sure, we can more accurately
kill their heads of government without killing so many civilians along the
way.

How about when you flee future U.S. tyranny, and your new destination
becomes valued by the U.S. enough to send a bunch of robots in to seize it?
Your last thought could be of the U.S. robot that is killing YOU. Oops, too
late to reconsider where this is all going.

Note in passing that our standard of living has been gradually declining as
the wealth of the world is concentrated into fewer and fewer hands. Note in
passing that the unemployment situation is looking bleaker and bleaker, with
no prospect for improvement in sight. Do you REALLY want to concentrate SO
much power in the hands of SUCH a dysfunctional government? If this doesn't
work out well, what would be the options for improvement? This appears to be
a one-way street with no exit.

Steve
=
On Mon, Aug 2, 2010 at 7:55 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

 [Mike Tintner's reply quoted in full; see above]






Re: [agi] AGI Int'l Relations

2010-08-02 Thread Ian Parker
On 1 August 2010 21:18, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  McNamara's dictum seems on
  the face of it to contradict the validity of Psychology as a
  science.

 I don't think so. That in unforeseen events people switch to
 improvisation isn't surprising. Even an AGI, confronted with a novel
 situation and lacking data and models and rules for that, has to
 switch to ad-hoc heuristics.

  Psychology, if is is a valid science can be used for modelling.

 True. And it's used for that purpose. In fact some models of
 psychology are so good that the simulation's results are consistent
 with what is empirically found in the real world.

  Some of what McNamara has to say seems to me to be a little bit
  contradictory. On the one hand he espouses *gut feeling*. On the other
  he says you should be prepared to change your mind.

 I don't see the contradiction. Changing one's mind refers to one's
 assumption and conceptual framings. You always operate under uncertainty
 and should be open for re-evaluation of what you believe.

 And the lower the probability of an event, the less prepared you are
 for it, and you switch to gut feelings since you lack empirical
 experience. Likely one's gut feelings operate within one's frame of mind.

 So these are two different levels.


This seems to link in with the very long-running set of postings on
Solomonoff (or should it be -ov, -ов in Cyrillic?). Laplace assigned a
probability of 50% to something we knew absolutely nothing about. I feel
that *gut feelings* are quite often wrong. Freeloading is very much
believed in by the man in the street, but it is wrong and very much
oversimplified.
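
Presumably the Laplace figure meant here is the rule of succession (my
reading; it is not named above), which after s successes in n trials
assigns

    \[
    P(\text{success on trial } n+1) = \frac{s+1}{n+2},
    \qquad \text{so } P = \frac{0+1}{0+2} = \frac{1}{2} \text{ when } n = s = 0.
    \]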

Could I tell you something of the background of John Prescott? He is very
much a bruiser. He has a Trade Union background and has not had much
education. Many such people have a sense of inverted snobbery. Alan Sugar
says that he got around the world speaking only English, yet a firm that
employs linguists can more than double its sales overseas. Of course, as I
think we all agree, one of the main characteristics of AGI is its ability to
understand NL. AGI will thus be polyglot. Indeed, one of the main tests will
be translation. What is the difference between laying concrete at 50C and
fighting Israel? First Turing question!


  John Prescott at the Chilcot Iraq inquiry said that the test of
  politicians was not hindsight, but courage and leadership. What the 
  does he mean.

 Rule of thumb is that it's better to do something than to do nothing.
 You act, others have to react. As long as you lead the game, you can
 correct your own errors. But when you hesitate, the other parties will
 move first and you eat what they hand out to you.

 And don't forget that the people still prefer alpha-males that lead,
 not those that deeply think. It's more important to unite the tribe
 with screams and jumps against the enemy than to reason about budgets
 or rule of law--gawd how boring... :)


Yes, but an AGI system will have to balance budgets. In fact, narrow AI is
making a contribution in the shape of Forex. I have claimed that perhaps AGI
will consist of a library of narrow AIs. Forex, or rather software of the
Forex type, will be an integral part of AGI. Could Forex manage the European
Central Bank? With modifications, I think yes.

AGI will have to think about the rule of law as well, otherwise it will be
intolerable and dangerous.

The alpha male syndrome is something we have to get away from, if we are
going to make progress of any kind.


  It seems that *getting things right* is not a priority
  for politicians.

 Keeping things running is the priority.


Things will run, sort of, even if bad decisions are taken.


 --- Now to the next posting ---

  This is an interesting article.

 Indeed.

  Google is certain to uncover the *real motivators.*

 Sex and power.


Are you in effect claiming that the leaders of (say) terrorist movements
are motivated by power and do not have any ideology? It has been said that
war is individual unselfishness combined with corporate selfishness (an
interesting quote to remember). I am not sure. What are the motivations of
the unselfish foot soldiers? How do leaders obtain their power? As Mr
Cameron rightly said, the ISI is exporting terror. British Pakistanis,
though, are free agents. They do not have to be *exported* by the ISI. Why
do they allow themselves to be? They are *not* conscripts.


  - Ian Parker









Re: [agi] AGI Int'l Relations

2010-08-02 Thread Matt Mahoney
Steve Richfield wrote:
 I would feel a **LOT** better if someone explained SOME scenario to
 eventually emerge from our current economic mess.

What economic mess?
http://www.google.com/publicdata?ds=wb-wdictype=lstrail=falsenselm=hmet_y=ny_gdp_mktp_cdscale_y=linind_y=falserdim=countryidim=country:USAtdim=truetstart=-31561920tunit=Ytlen=48hl=endl=en


 Unemployment appears to be permanent and getting worse, 

When you pay people not to work, they are less inclined to work.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Steve Richfield steve.richfi...@gmail.com
To: agi agi@v2.listbox.com
Sent: Mon, August 2, 2010 11:54:25 AM
Subject: Re: [agi] AGI  Int'l Relations

Jan,

I can see that I didn't state one of my points clearly enough...


On Sun, Aug 1, 2010 at 3:04 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:



 My simple (and completely unacceptable) cure for this is to tax savings,
 to force the money back into the economy.

You have either consumption or savings. The savings are put back into
the economy in form of credits to those who invest the money.


Our present economic problem is that those credits aren't being turned over
fast enough to keep the economic engine running well. At present, with
present systems in place, there is little motivation to quickly turn over
one's wealth, and lots of motivation to very carefully protect it. The
result is that most of the wealth of the world is just sitting there in
various accounts, and is NOT being spent/invested on various business
propositions to benefit the population of the world.

We need to do SOMETHING to get the wealth out of the metaphorical mattresses
and back into the economy. Taxation is about the only effective tool that
the government hasn't already dulled beyond utility. However, taxation
doesn't stand a chance without the cooperation of other countries to do the
same. There seems to be enough lobbying power in the hands of those with the
money to stop any such efforts, or at least to leave enough safe havens to
make such efforts futile.

I would feel a **LOT** better if someone explained SOME scenario to
eventually emerge from our current economic mess. Unemployment appears to be
permanent and getting worse, as does the research situation. All I hear are
people citing stock prices and claiming that the economy is turning around,
when I see little connection between stock prices and the on-the-street
economy.

This is an IR problem of monumental proportions. What would YOU do about it?

Steve



 




Re: [agi] Walker Lake

2010-08-02 Thread Matt Mahoney
Steve Richfield wrote:
 How about an international ban on the deployment of all unmanned and
 automated weapons?
 
How about a ban on suicide bombers to level the playing field?

 1984 has truly arrived.

No it hasn't. People want public surveillance. It is also necessary for AGI. In 
order for machines to do what you want, they have to know what you know. In 
order for a global brain to use that knowledge, it has to be public. AGI has to 
be a global brain because it is too expensive to build any other way, and 
because it would be too dangerous if the whole world didn't control it.

-- Matt Mahoney, matmaho...@yahoo.com





From: Steve Richfield steve.richfi...@gmail.com
To: agi agi@v2.listbox.com
Sent: Mon, August 2, 2010 10:40:20 AM
Subject: [agi] Walker Lake

[Steve Richfield's Walker Lake message quoted in full; see above]

 




Re: [agi] Re: Shhh!

2010-08-02 Thread Matt Mahoney
Jim, you are thinking out loud. There is no such thing as the
trans-infinite. How about posting when you actually solve the problem?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Mon, August 2, 2010 9:06:53 AM
Subject: [agi] Re: Shhh!


[Jim's Shhh! and Re: Shhh! messages quoted in full; see above]
 





Re: [agi] Shhh!

2010-08-02 Thread Abram Demski
Jim,

:) Looks to me like you are developing your own internally consistent
mathematics without worrying about relating it back to the standard stuff.
(How do you define the result of running a program continuum long? Is the
result unique?) This is great, but it might be worth your while to later
come back to basic computability theory and see if/how you can present your
ideas as an extension of it.

Whenever I have done this, I've later found out that whatever-great-idea has
already been thought of (but with very different terminology, of course). I
take this as evidence that there is a very strong mental landscape... if
you go in a particular direction there is a natural series of landmarks,
including both great ideas and pitfalls that everyone runs into. (Different
people take different amounts of time to climb out of the pitfalls, though.
Some may keep looking for gold at a dead end for a long time.)

--Abram

On Mon, Aug 2, 2010 at 8:53 AM, Jim Bromer jimbro...@gmail.com wrote:

 [Jim's Shhh! message quoted in full; see above]




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Shhh!

2010-08-02 Thread David Jones
Abram Wrote:

 I take this as evidence that there is a very strong mental landscape...
 if you go in a particular direction there is a natural series of landmarks,
 including both great ideas and pitfalls that everyone runs into. (Different
 people take different amounts of time to climb out of the pitfalls, though.
 Some may keep looking for gold at a dead end for a long time.)



That is a very nice description of AI research and the pitfalls we come
across in our quest.  :)

Dave





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Abram Demski
Jim,

Interestingly, the formalization of Solomonoff induction I'm most familiar
with uses a construction that relates the space of programs with the real
numbers just as you say. This formulation may be due to Solomonoff, or
perhaps Hutter... not sure. I re-formulated it to gloss over that in order
to make it simpler; I'm pretty sure the version I gave is equivalent in the
relevant aspects. However, some notes on the original construction.

Programs are created by flipping coins to come up with the 1s and 0s. We are
to think of it like this: whenever the computer reaches the end of the
program and tries to continue on, we flip a coin to decide what the next bit
of the program will be. We keep doing this for as long as the computer wants
more bits of instruction.

This framework makes room for infinitely long programs, but makes them
infinitely improbable-- formally, they have probability 0. (We could alter
the setup to allow them an infinitesimal probability.) Intuitively, this
means that if we keep flipping a coin to tell the computer what to do,
eventually we will either create an infinite loop-back (so the computer
keeps executing the already-written parts of the program and never asks for
more) or write out the HALT command. Avoiding doing one or the other
forever is just too improbable.

This also means all real numbers are output by some program! It just may be
one which is infinitely long.

However, all the programs that slip past my time bound as T increases to
infinity will have measure 0, meaning they don't add anything to the sum.
This means the convergence is unaffected.

Note: in this construction, program space is *still* a well-defined entity.
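
A toy Monte Carlo rendering of that coin-flipping construction (my sketch;
the two-opcode machine below is invented for illustration and stands in for
a universal machine, so the numbers show the 2^-length weighting rather
than any real Solomonoff prior):

    import random

    def sample_output(max_steps=100):
        """Flip coins to drive a toy prefix machine, in the spirit of the
        construction above: bits are drawn only as the machine asks for them.
        Opcode 1 means HALT; opcode 0 means 'emit the next coin flip as an
        output bit'.  Returns the output string, or None if the run exceeds
        the step bound (probability 2^-max_steps: the measure-0 tail that
        plays the role of the infinite programs)."""
        out = []
        for _ in range(max_steps):
            if random.randint(0, 1) == 1:          # opcode 1 -> HALT
                return ''.join(out)
            out.append(str(random.randint(0, 1)))  # opcode 0b -> emit bit b
        return None

    def estimate_prior(prefix, trials=200_000):
        """Monte Carlo estimate of the prior mass of outputs starting with prefix."""
        hits = 0
        for _ in range(trials):
            s = sample_output()
            if s is not None and s.startswith(prefix):
                hits += 1
        return hits / trials

    for p in ['0', '00', '000']:
        print(p, estimate_prior(p))  # ~1/4, ~1/16, ~1/64 on this toy machine

Each extra output bit costs two coin flips here, so the mass of programs
extending a given prefix shrinks geometrically, and runs that never halt
contribute probability 0, just as described above.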

--Abram

On Sun, Aug 1, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,

 This is a very interesting function.  I have spent a lot of time thinking
 about it.  However, I do not believe that does, in any way, prove or
 indicate that Solomonoff Induction is convergent.  I want to discuss the
 function but I need to take more time to study some stuff and to work
 various details out.  (Although I have thought a lot about it, I am writing
 this under a sense of deadline, so it may not be well composed.)



 My argument was that Solomonoff's conjecture, which was based (as far as I
 can tell) on 'all possible programs', was fundamentally flawed because the
 idea of 'all possible programs' is not a programmable definition.  All
 possible programs is a domain, not a class of programs that can be feasibly
 defined in the form of an algorithm that could 'reach' all the programs.



 The domain of all possible programs is trans-infinite, just as the domain of
 irrational numbers is.  Why do I believe this?  Because if we imagine
 that infinite algorithms are computable, then we could compute irrational
 numbers.  That is, there are programs that, given infinite resources,
 could compute irrational numbers.  We can use the binomial theorem, for
 example, to compute the square root of 2.  And we can use trial-and-error
 methods to compute the nth root of any number.  So that proves that there
 are infinitely many irrational numbers that can be computed by algorithms
 that run for infinity.



 So what does this have to do with Solomonoff's conjecture of all possible
 programs?  Well, if I could prove that any individual irrational number
 could be computed (with programs that ran through infinity) then I might be
 able to prove that there are trans-infinite programs.  If I could prove
 that some trans-infinite subset of irrational numbers could be computed then
 I might be able to prove that 'all possible programs' is a trans-infinite
 class.


 Now Abram said that since his sum, based on runtime and program length, is
 convergent, it can prove that Solomonoff Induction is convergent.  Even
 assuming that his convergent sum method could be fixed up a little, I
 suspect that this time-length bound is misleading.  Since a Turing Machine
 allows for erasures this means that a program could last longer than his
 time parameter and still produce an output string that matches the given
 string.  And if 'all possible programs' is a trans-infinite class then
 there are programs that you are going to miss.  Your encoding method will
 miss trans-infinite programs (unless you have transcended the
 trans-infinite).

 However, I want to study the function and some other ideas related to this
 kind of thing a little more.  It is an interesting function.
 Unfortunately I also have to get back to other-worldly things.

 Jim Bromer


 On Mon, Jul 26, 2010 at 2:54 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I'll argue that solomonoff probabilities are in fact like Pi, that is,
 computable in the limit.

 I still do not understand why you think these combinations are necessary.
 It is not necessary to make some sort of ordering of the sum to get it to
 converge: ordering only matters for infinite sums which include negative
 numbers. (Perhaps that's where 

Re: [agi] Walker Lake

2010-08-02 Thread Steve Richfield
Matt,

On Mon, Aug 2, 2010 at 1:10 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Steve Richfield wrote:
  How about an international ban on the deployment of all unmanned and
 automated weapons?

 How about a ban on suicide bombers to level the playing field?


Of course we already have that. Unfortunately, one begets the other. Hence,
we seem to have a choice: neither or both. I vote for neither.


  1984 has truly arrived.

 No it hasn't. People want public surveillance.


I'm not sure what you mean by public surveillance. Monitoring private
phone calls? Monitoring otherwise unused web cams? Monitoring your output
when you use the toilet? Where, if anywhere, do YOU draw the line?


 It is also necessary for AGI. In order for machines to do what you want,
 they have to know what you know.


Unfortunately, with everything known, any use of this information will be
either to my benefit or to my detriment. Do you foresee any way to limit it
to only beneficial uses?

BTW, decades ago I adopted the plan that, when my kids got into some sort of
trouble at school or elsewhere, I would represent their interests as well as
possible, regardless of whether I agreed with them or not. This worked
EXTREMELY well for me, and for several other families who have tried it.
The point is that to successfully represent their interests, I had to know
what was happening. Potential embarrassment and explainability limited the
kids' actions. I wonder if the same would work for AGIs?


 In order for a global brain to use that knowledge, it has to be public.


Again, where do you draw the line between public and private?


 AGI has to be a global brain because it is too expensive to build any other
 way, and because it would be too dangerous if the whole world didn't control
 it.


I'm not sure what you mean by control.

Here is the BIG question in my own mind, that I have asked in various ways,
so far without any recognizable answer:

There are plainly lots of things wrong with our society. We got here by
doing what we wanted, and by having our representatives do what we wanted
them to do. Clearly some social re-engineering is in our future, if we are
to thrive in the foreseeable future. All changes are resisted by some, and I
suspect that some needed changes will be resisted by most, and perhaps
nearly everyone. Disaster scenarios aside, what would YOU have YOUR AGI do
to navigate this future?

To help guide your answer: I see that the various proposed systems of
ethics would prevent breaking the eggs needed to make a good futuristic
omelet. I suspect that completely democratic systems have run their course.
Against this, letting AGI loose has its own unfathomable hazards. I've been
hanging around here for quite a while, and I don't yet see any success path
to work toward.

I'm on your side in that any successful AGI would have to have the
information and the POWER to succeed, akin to *Colossus, the Forbin Project*,
which I personally see as more of a success story than a horror scenario.
Absent that, AGIs will only add to our present problems.

What is the success path that you see?

Steve





Re: [agi] AGI Int'l Relations

2010-08-02 Thread Steve Richfield
Matt,

On Mon, Aug 2, 2010 at 1:05 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Steve Richfield wrote:
  I would feel a **LOT** better if someone explained SOME scenario to
 eventually emerge from our current economic mess.

 What economic mess?

 http://www.google.com/publicdata?ds=wb-wdictype=lstrail=falsenselm=hmet_y=ny_gdp_mktp_cdscale_y=linind_y=falserdim=countryidim=country:USAtdim=truetstart=-31561920tunit=Ytlen=48hl=endl=en

Perhaps you failed to note the great disparity between the US's and the
world's performance since 2003, or that with each year, greater percentages
of the GDP are going into fewer and fewer pockets. Kids starting out now
don't really have a chance.



 http://www.google.com/publicdata?ds=wb-wdimet=ny_gdp_mktp_cdtdim=truedl=enhl=enq=world+gdp#met=ny_gdp_mktp_cdidim=country:USAtdim=true

  Unemployment appears to be permanent and getting worse,

 When you pay people not to work, they are less inclined to work.


That does NOT explain why there are MANY unemployed for every available
job, or why many are falling off the end of their benefits with nothing to
help them. This view may have been true long ago, but it is now dated and
wrong.

Steve





[agi] Brief mention of bio-AGI in the Boston Globe...

2010-08-02 Thread Ben Goertzel
Open science is, to some, humanity's best hope
Boston Globe:
http://www.boston.com/business/healthcare/articles/2010/08/02/biotech_movement_hopes_to_spur_rise_of_citizen_scientists/

“What is really needed to cure diseases and extend life,” Goertzel said,
“is to link together all available bio data in a vast public database, ...”



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky


