On 27 Feb 2012, at 23:15, Craig Weinberg wrote:
> On Feb 27, 4:52 pm, meekerdb <meeke...@verizon.net> wrote:
>> On 2/27/2012 1:09 PM, Craig Weinberg wrote:
>>> On Feb 27, 3:32 pm, meekerdb <meeke...@verizon.net> wrote:
>>>> On 2/27/2012 11:54 AM, Craig Weinberg wrote:
>>>>> ...when we program them specifically to 'learn' in the exact ways which we want them to.
>>>> Not exactly. AI learns from interactions which are not known to those who write the AI.
>>> They don't have to generate their own software though, we have to tell them to do that and specify exactly how we want them to do it.
>> AIs can generate their own software. That is the point of AI.
>> They can learn by higher level program modifications too, and those can also be random. So there is no evidence that their learning is qualitatively different from yours.
> There is no such thing as evidence when it comes to qualitative phenomenology. You don't need evidence to infer that a clock doesn't know what time it is.
A clock has no self-referential ability. You reason like this: no animals can fly, because pigs cannot fly.
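The claim above that an AI can "learn by higher level program modifications too, and those can also be random" can be made concrete with a toy sketch. This is a minimal illustration of the general idea, not anyone's actual system: a "program" is just a list of primitive operations, and random edits to the program itself are kept when they improve performance on examples the programmer supplies without spelling out the solution. All names and the toy task here are invented for illustration.

```python
import random

# Toy sketch of learning by random program modification: a program is
# a list of named primitive operations, mutated at random and kept
# when the mutation does not make it worse on the training cases.

OPS = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
}

def run(program, x):
    # Interpret the program: apply each named op in order.
    for op in program:
        x = OPS[op](x)
    return x

def score(program, cases):
    # Negative total error on input/output examples; higher is better.
    return -sum(abs(run(program, x) - y) for x, y in cases)

def evolve(cases, steps=3000, seed=0):
    rng = random.Random(seed)
    program = ["inc"]  # arbitrary starting program
    best = score(program, cases)
    for _ in range(steps):
        candidate = list(program)
        # Random modification of the program itself:
        # insert, delete, or replace one operation.
        edit = rng.choice(["insert", "delete", "replace"])
        i = rng.randrange(len(candidate) + (edit == "insert"))
        if edit == "insert":
            candidate.insert(i, rng.choice(list(OPS)))
        elif edit == "delete" and len(candidate) > 1:
            del candidate[i]
        else:
            candidate[i % len(candidate)] = rng.choice(list(OPS))
        s = score(candidate, cases)
        if s >= best:  # keep changes that don't hurt
            program, best = candidate, s
    return program, best

# Target behaviour f(x) = 2x + 2, given only examples:
cases = [(0, 2), (1, 4), (2, 6), (3, 8)]
program, best = evolve(cases)
```

The point of the sketch is that the final program is not written by the programmer; it is found by random edits, and the person who wrote `evolve` need not know which sequence of operations will emerge.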
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to email@example.com.