I agree -- these things suck. I have spent time trying to fool the sensor.
It brings up the bigger issue of what makes good technology vs. what is not
a good use of technology. A lot of technology I simply hate, like digital
pictures (intended for some aesthetic use). We have to be selective in
evolution. Just a guess.
Mike Archbold
cheers,
Deepak
I guess I don't see how cloud computing is materially different from
open source, insofar as both involve the sharing of resources and now
increased availability, with no need to buy so much hardware at the outset.
But it seems more a case of convenience.
So what does that have to do with AGI? I
2008/8/14 Ciro Aisa [EMAIL PROTECTED]:
On Thu, Aug 14, 2008 at 10:25:57AM +0100, Bob Mottram wrote:
I doubt that there will be much practical application of biological
neuron powered robots, since the overhead of keeping the biology alive
would be too troublesome (requiring feeding and
OK, this brings up something that I'd like to pose to the list as a whole.
I realize this will be a somewhat antagonistic question - my intent here
is not to offend (or to single anyone out), especially since I could be
wrong.
But my impression is that with some exceptions, AI researchers
2008/7/22 Mike Archbold [EMAIL PROTECTED]:
It looks to me to be borrowed from Aristotle's ethics. Back in my college
days, I was trying to explain my project and the professor kept
interrupting me to ask: What does it do? Tell me what it does. I don't
understand what your system does
- Original Message -
From: Mike Archbold [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, July 22, 2008 1:04 PM
Subject: Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS
It looks to me to be borrowed from Aristotle's ethics. Back in my college
days, I
it is deduction as far as I've ever heard of. With induction we are
implying repeated observations that lead to some new knowledge (i.e., some
new rule in this case). That was my understanding anyway, and I'm no PhD
scientist.
Mike Archbold
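Mike's distinction between deduction and induction can be sketched as a toy example (the raven example and all names here are hypothetical, purely to illustrate the direction of inference):

```python
# Toy sketch of deduction vs. induction (hypothetical illustration).

# Deduction: apply a known general rule to a specific case.
def deduce(general_rule, case):
    # The conclusion follows necessarily from the rule.
    return general_rule(case)

all_ravens_are_black = lambda bird: "black" if bird == "raven" else "unknown"
conclusion = deduce(all_ravens_are_black, "raven")

# Induction: repeated observations suggest a new general rule (fallibly).
observations = ["black", "black", "black"]  # colors of observed ravens
if all(color == observations[0] for color in observations):
    induced_rule = "all ravens are " + observations[0]
else:
    induced_rule = "no uniform rule observed"

print(conclusion)    # black
print(induced_rule)  # all ravens are black
```

Deduction moves from an existing rule down to a case; induction moves from repeated cases up to a candidate rule, which is why it can yield genuinely new (though fallible) knowledge.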
and a loosely-knit individual, where AI is concerned. So I
argue that if a society can (and should) hypercompute, there is no
reason to suspect that an individual can't (or shouldn't).
On Mon, Jun 16, 2008 at 11:37 PM, Mike Archbold [EMAIL PROTECTED] wrote:
I'm not sure that I'm responding to your
Mike Archbold,
It seems you've made a counterargument without meaning to.
When we make this transition, it seems to me that the shift is so radical
that it is impossible to justify making the step, because as I mentioned
it involves a surreptitious shift from quantity to quality.
I
in his Science of Logic, which I think is a good ontology in general
because it mixes up a lot of issues AI struggles with, like the ideal
nature of quality and quantity, and also infinity.
Mike Archbold
Seattle
The paper includes a study of the uncomputable busy beaver function up to
x=6, and I realize there aren't a lot of Hegelians around. But tomorrow I
will read further.
Mike
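For context on the busy beaver reference, a minimal sketch of the known values of Radó's Σ function for small n. Σ(n) counts the most 1s a halting n-state, 2-symbol Turing machine can leave on the tape; the values below are the established results for small cases, and Σ(6) remains unknown:

```python
# Known values of the busy beaver function Sigma(n): the maximum number of
# 1s a halting n-state, 2-symbol Turing machine can leave on a blank tape.
# Sigma grows faster than any computable function, so it is uncomputable
# in general; only the small cases below have been settled by exhaustive
# analysis of all machines of that size.
SIGMA = {1: 1, 2: 4, 3: 6, 4: 13, 5: 4098}  # Sigma(6) is still unknown

for n in sorted(SIGMA):
    print(f"Sigma({n}) = {SIGMA[n]}")
```

The uncomputability is exactly why a paper can only study it "up to x=6": each new n requires a case analysis that no single algorithm can perform for all n.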
On Mon, Jun 16, 2008 at 9:19 PM, Mike Archbold [EMAIL PROTECTED] wrote:
I previously posted here claiming that the human mind (and therefore an
ideal AGI) entertains uncomputable models, counter