The special rate at the Crowne Plaza does not
apply to the night of Monday, 9 March. If the
post-conference workshops on Monday extend
into the afternoon, it would be useful if the
special rate were available on Monday night.
Thanks,
Bill
---
Hi Stephen,
As a small operation independent of Cyc, distributing
your AGI system as open source is likely to be a good
strategy.
As a small university PI developing visualization
software, distributing my systems as open source turned
out to be very good for my project. Our collaborators
and
Eliezer Yudkowsky wrote:
Bill Hibbard wrote:
Eliezer,
I don't think it
inappropriate to cite a problem that is general to supervised learning
and reinforcement, when your proposal is to, in general, use supervised
learning and reinforcement. You can always appeal to a different
algorithm or a different implementation that, in some
Thank you for your responses.
Jeff, I have taken your suggestion and sent a couple
of questions to the Summit. My concern is motivated by
noticing that the Summit includes speakers who have
been very clear about their opposition to regulating
AI, but none, as far as I am aware, who have advocated it
at:
http://www.ssec.wisc.edu/~billh/g/Singularity_Notes.html
Bill Hibbard
---
http://www.nytimes.com/2005/03/24/technology/24think.html?
The name is a lot like Novamente. Interesting to see what
he comes up with.
---
I agree with the posters who say that Google is not strong AI.
But it is amazingly useful because it, along with the web, forms
a huge content-addressable memory. That's an important part of
human brains. I think of Google as my second brain. It can't
think, but it is a wonderful complement to our
in the near term
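To illustrate what I mean by a content-addressable memory, here is a toy sketch of my own (the names and data are purely illustrative): documents are retrieved by the words they contain, rather than by an address or index number.

  # Toy sketch of content-addressable lookup: an inverted index
  # mapping words to the documents that contain them, roughly the
  # way a search engine retrieves pages by their content.
  from collections import defaultdict

  docs = {
      "doc1": "google and the web form a huge content addressable memory",
      "doc2": "human brains also use content addressable recall",
  }

  index = defaultdict(set)
  for name, text in docs.items():
      for word in text.split():
          index[word].add(name)

  print(sorted(index["content"]))   # -> ['doc1', 'doc2']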
Anyway, in addition to catching up with Pei and Bill Hibbard, I made a
couple of useful new contacts at the conference -- and interestingly, both were
industry scientists rather than academics. For some reason there was more
broad AI vision in the industry AI researchers than
My talk is available at:
http://www.ssec.wisc.edu/~billh/g/FS104HibbardB.pdf
There was a really interesting talk by the neuroscientist
Richard Grainger with some publications available at:
http://www.brainengineering.com/publications.html
Cheers,
Bill
---
years) I'd definitely
see creating the first open source AGI system as a big
opportunity.
Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
[EMAIL PROTECTED] 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu
Ben,
I think that emotions in humans are CORRELATED with value-judgments, but are
certainly not identical to them.
We can have emotions that are ambiguous in value, and we can have strong
value judgments with very little emotion attached to them.
I said:
That is reasonable. As I said in my first post on this topic,
there is variation in the way people define emotion. The
quotes from Edelman and Crick show some precedent for
defining emotion essentially as value, but it is also common
to define emotion more in terms of expression or
On Wed, 21 Jan 2004, Eric Baum wrote:
New Book:
What is Thought?
Eric B. Baum
What a great book.
---
It seems that Baum is arguing that biological minds are amazingly quick
at making sense of the world because, as a result of evolution, the
structure of the brain is set up with inbuilt limitations/assumptions
based on likely possibilities in the real world - thus cutting out vast
areas for
market an agent only has to be
better than competing agents to make money. And in
predicting the weather, there is a real limit on how well
an agent can do.
Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
[EMAIL
On Mon, 15 Sep 2003, Amara D. Angelica wrote:
Any comments on this paper?
http://www.kluweronline.com/issn/1389-1987/current
Anders Sandberg's PhD thesis (thanks to Cole Kitchen for
originally posting this to the AGI list) at:
http://akira.nada.kth.se/~asa/Thesis/thesis.pdf
entitled
http://www.newscientist.com/news/news.jsp?id=ns3488
---
On Wed, 26 Feb 2003, Brad Wyble wrote:
The limitation in multi-agent systems is usually the degree of interaction they can
have. The bandwidth between ants, for example, is fairly low even when they are in
direct contact, let alone 1 inch apart.
This limitation keeps their behavior
On Tue, 18 Feb 2003, Brad Wyble wrote:
. . .
Incorrect. The cortex has genetically pre-programmed systems.
It cannot be said that it is a matrix loaded with software from
subcortical structures.
. . .
Yes, but there is a very interesting experiment with rewiring
brains of young ferrets so
Eliezer S. Yudkowsky wrote:
Bill Hibbard wrote:
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
It *could* do this but it *doesn't* do this. Its control process is such
that it follows an iterative trajectory through chaos which is forbidden
to arrive at a truthful solution, though
Eliezer S. Yudkowsky wrote:
. . .
Yes. Laws (logical constraints) are inevitably ambiguous.
Does that include the logical constraints governing the reinforcement
process itself?
There is a logic of the reinforcement process, but it is a
behavior rather than a constraint on a behavior.
By
Ed,
I agree that it was very decent of Philip to admit
to starting the mis-spelling of my name. My general
complaint about the mis-spelling was sent hours before
I even read Eliezer's message, but due to the vagaries
of email was delivered hours after my reply to Eliezer,
giving the impression
Ben,
As Moshe pointed out to me, Marcus Hutter and his students tried to
replicate Baum's work, with mixed results:
go to
http://www.idsia.ch/~marcus/
click on Artificial Intelligence and scroll down to
Market-Based Reinforcement Learning in Partially Observable Worlds (with I.
Kwee
On Sat, 15 Feb 2003, Ben Goertzel wrote:
In my book I say that consciousness is part of the way
the brain implements reinforcement learning, and I think
something like that is necessary for a really robust
solution. That's why I think it will take 100 years.
I would say, rather, that
that the stakes are high, but think
the safer approach is to build ethics into the fundamental
driver of super-intelligent machines, which will be their
reinforcement values.
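As a very rough sketch of what I mean by reinforcement values being the fundamental driver (the reward function below is a placeholder of my own invention, not a proposal for actual ethical content): whatever the designers encode as reward is what the learned behavior ends up optimizing.

  # Rough sketch: the agent's "values" are simply its reward signal,
  # so whatever ethics is encoded in reward() becomes the fundamental
  # driver of the behavior it learns. reward() here is a placeholder.
  import random

  def reward(state, action):
      return 1.0 if (state + action) % 2 == 0 else 0.0  # placeholder "values"

  q = {}        # (state, action) -> estimated value
  alpha = 0.1   # learning rate
  for _ in range(1000):
      s = random.randrange(10)
      a = random.randrange(2)
      r = reward(s, a)
      q[(s, a)] = q.get((s, a), 0.0) + alpha * (r - q.get((s, a), 0.0))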
Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI
Hey Eliezer, my name is Hibbard, not Hubbard.
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
Bill Hibbard wrote:
I never said perfection, and in my book make it clear that
the task of a super-intelligent machine learning behaviors
to promote human happiness will be very messy. That's
Strange that there would be someone on this list with a
name so similar to mine.
Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
[EMAIL PROTECTED] 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh
Hi Ben,
I'd like to reference your soon-to-be-published book in
a paper. Could you please send me the proper form of
reference? I am sending this to the AGI list as others
may want this information.
Thanks,
Bill
--
Bill Hibbard, SSEC, 1225
Hi Arthur,
On Wed, 12 Feb 2003, Arthur T. Murray wrote:
. . .
Since the George and Barbara Bushes of this world
are constantly releasing their little monsters onto the planet,
why should we creators of Strong AI have to take any
more precautions with our Moravecian Mind Children
than human
On Wed, 12 Feb 2003, Arthur T. Murray wrote:
The quest is as hopeless as it is with human children.
Although Bill Hibbard singles out the power of super-intelligence
as the reason why we ought to try to instill morality and friendliness
in our AI offspring, such offspring are made in our own
is that an AIXI's optimality is only as
valid as its assumption about the probability distribution
of universal Turing machine programs.
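To make that concrete, here is a minimal sketch of my own (not taken from Hutter's paper) of the usual length-weighted prior, where a binary program p gets weight 2^-l(p); assume a different distribution over programs and a different agent becomes "optimal".

  # Sketch of a Solomonoff-style length prior over binary program strings.
  # AIXI's optimality argument weights environments roughly like this;
  # the argument is only as strong as this assumed distribution.
  def length_prior(program_bits: str) -> float:
      """Weight 2**-l(p) for a binary program of length l(p)."""
      return 2.0 ** -len(program_bits)

  for p in ["0", "10", "1101", "10110101"]:
      print(p, length_prior(p))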
Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
[EMAIL PROTECTED] 608-263-4427 fax
On Wed, 12 Feb 2003, Philip Sutton wrote:
Ben/Bill,
My feeling is that goals and ethics are not identical concepts. And I
would think that goals would only make an intentional ethical
contribution if they related to the empathetic consideration of others.
. . .
Absolutely goals (I prefer
On Tue, 11 Feb 2003, Ben Goertzel wrote:
Eliezer wrote:
* a paper by Marcus Hutter giving a Solomonoff induction based theory
of general intelligence
Interesting you should mention that. I recently read through Marcus
Hutter's AIXI paper, and while Marcus Hutter has done valuable
Ben,
On Tue, 11 Feb 2003, Ben Goertzel wrote:
The formality of Hutter's definitions can give the impression
that they cannot evolve. But they are open to interactions
with the external environment, and can be influenced by it
(including evolving in response to it). If the reinforcement
Hi Philip,
On Tue, 11 Feb 2003, Philip Sutton wrote:
Ben,
If in the Novamente configuration the dedicated Ethics Unit is focussed
on GoalNode refinement, it might be worth using another term to
describe the whole ethical architecture/machinery which would involve
aspects of most/all (??)
situation it observes... i.e. it's a
'valuation' ;-)
Interesting. Are these values used for reinforcing behaviors
in a learning system? Or are they used in a continuous-valued
reasoning system?
Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W
Hi Ben,
I think that true machine intelligence will be computationally
demanding and will initially appear on expensive hardware
available only to wealthy institutions like the government or
corporations. Even when it is possible on commodity hardware,
expensive hardware will still support much