Believe it or not, this sentence is grammatically correct and has
meaning: 'Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo
buffalo.'
source: http://www.mentalfloss.com/blogs/archives/13120
:-)
---
then you don't understand it.
That was Richard Feynman.
Regards,
Jiri Jelinek
PS: Sorry if I'm missing anything. Being busy, I don't read all posts.
---
, 2008 at 3:39 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:
Trent Waddington wrote:
Apparently, it was Einstein who said that if you can't explain it to
your grandmother, then you don't understand it.
That was Richard Feynman.
controls, then we (just like many other species) are
due for extinction because of adaptability limitations.
Regards,
Jiri Jelinek
---
in that definition ;-)
Regards,
Jiri
On Wed, Nov 12, 2008 at 12:16 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Jiri Jelinek wrote:
On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED]
wrote:
is it really necessary for an AGI to be conscious?
Depends on how you define it.
it comes to
problem solving.
So what? Equal/superior in whatever - who cares, as long as we can
progress safely and enjoy life - which is what our tools (including AGI)
are being designed to help us with.
Regards,
Jiri Jelinek
---
Mike,
The chance of someone stealing your idea is very remote.
There are many companies that made a fortune with stolen ideas (e.g. Microsoft).
But of course, they are primarily after proven ideas.
YKY,
If practically doable, I would recommend closed source, utilizing (
possibly developing) as
Matt,
So, what formal language model can solve this problem?
An FL that clearly separates basic semantic concepts like objects,
attributes, time, space, actions, roles, relationships, etc. + core
subjective concepts, e.g. want, need, feel, aware, believe, expect,
unreal/fantasy. Humans have senses
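For illustration only, a minimal sketch of what I have in mind (my own assumed structure, field names purely illustrative - not an actual GSL definition), written as a Python literal:

# Hypothetical FL statement: basic semantic slots kept separate from subjective ones.
statement = {
    "action": "give",
    "agent": "John",                 # role
    "recipient": "Mary",             # role
    "object": "book",
    "time": "yesterday",
    "space": "library",
    "subjective": {                  # core subjective concepts
        "agent_wants": True,
        "agent_expects": "gratitude",
        "reality": "real",           # vs. unreal/fantasy
    },
}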
On Fri, Sep 19, 2008 at 10:46 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Google is the closest we have to AI at the moment.
Matt,
There is a difference between being good at
a) finding problem-related info/pages, and
b) finding functional solutions (through reasoning), especially when
all the
Matt,
Q: how many fluid ounces in a cubic mile?
Google: 1 cubic mile = 1.40942995 × 10^14 US fluid ounces
Q: who is the tallest U.S. president?
Google: Abraham Lincoln at six feet four inches. (along with other text)
Try: What's the color of Dan Brown's black coat? What's the excuse
for a
URLs for images and users describe it using the system's
formal language (which I named GSL, by the way - General Scripting
Language). GINA deals with images in a similar way as with the
above-mentioned phrases.
Regards,
Jiri Jelinek
---
Samantha, Mike,
Would you also say that without a body, you couldn't understand
3D space?
It depends on what is meant by, and the value of, 'understand 3D space'.
If the intelligence needs to navigate or work with 3D space or even
understand intelligence whose very concepts are filled with
Mike,
Imagine a simple 3D scene with 2 different-size spheres. A simple
program allows you to change the positions of the spheres and it can
answer the question 'Is the smaller sphere inside the bigger sphere?'
[Yes|Partly|No]. I can write such a program in no time. Sure, it's
extremely simple, but it deals
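A minimal sketch of such a program (assuming spheres are given as center coordinates plus radii; the function name and I/O are mine, just to illustrate):

import math

def smaller_inside_bigger(c_small, r_small, c_big, r_big):
    # Answer the question: is the smaller sphere inside the bigger one? [Yes|Partly|No]
    d = math.dist(c_small, c_big)    # distance between the two centers
    if d + r_small <= r_big:
        return "Yes"                 # fully contained
    if d < r_small + r_big:
        return "Partly"              # the spheres overlap
    return "No"                      # completely outside

print(smaller_inside_bigger((0.5, 0, 0), 1.0, (0, 0, 0), 3.0))  # prints: Yes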
classification..
Regards,
Jiri Jelinek
---
On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Without a body, you couldn't understand the joke.
False. Would you also say that without a body, you couldn't understand
3D space?
BTW, it's kind of sad that people find it funny when others get hurt. I
wonder what are the
On Wed, Sep 10, 2008 at 5:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:
You're saying that 3D space *can* be understood without a body?
Er, false.
http://en.wikipedia.org/wiki/SHRDLU
Jiri
---
than others. And of course, there are also
certain things in particular societies you need to avoid. If the
system gets feedback and joke samples, it can tweak/generate its joke
templates (always considering info about the audience) and get better.
Decent KR - that's the first thing.
Regards,
Jiri
improving a particular AGI design, your views would
change drastically. Just my opinion..
Regards,
Jiri Jelinek
---
Mike,
every kind of representation, not just mathematical and logical and
linguistic, but everything - visual, aural, solid, models, embodied etc etc.
There is a vast range. That means also every subject domain - artistic,
historical, scientific, philosophical, technological, politics,
-players changes in relevant stages. You
can design a user-friendly interface for teaching systems in meaningful
ways so it can later think using queryable models and understand
relationships [changes] between concepts, etc... Sorry about the
brevity (busy schedule).
Regards,
Jiri Jelinek
PS: we might
, but
we humans have limitations of that nature as well.
Regards,
Jiri Jelinek
---
[from whatever source] and *then* it's time to apply its
intelligence.
Regards,
Jiri Jelinek
---
of those other tools being used for cutting bread (and
is not self-aware in any sense), it can still (when asked for advice)
make a reasonable suggestion to try the T2 (because of the
similarity) = coming up with a novel idea that demonstrates general
intelligence.
Regards,
Jiri Jelinek
for safety, you would IMO end up basically giving
the goals - which is of course easier without messing with qualia
implementation. Forget qualia as a motivation for our AGIs. Our AGIs
are supposed to work for us, not for themselves.
Regards,
Jiri Jelinek
-in. There are simply certain concepts for which the
AGI's attempt to somehow learn them on its own would [IMO] be a
complete waste of resources + it would require senses similar to those
we have.
Regards,
Jiri Jelinek
---
), OR
you can use a formal language which will help your AGI to semantically
sort out the input (= possibly [initially] less user friendly, but fewer
resources needed for implementation + you can go for NL support later,
after implementing input-understanding, reasoning, and possibly scaling).
Regards,
Jiri
On Tue, Aug 5, 2008 at 12:48 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
The problem is that writing stories in a formal language, with enough nuance
and volume to really contain the needed commonsense info, would require a
Cyc-scale effort at formalized story entry. While possible in principle,
On Tue, Jul 22, 2008 at 11:22 AM, Jan Klauck [EMAIL PROTECTED] wrote:
Opinions?
Can your (ethical) AI read the content of the
following link, translate it conceptually from
physics to AI and give us a friendly answer
(including a score)?
http://www.math.ucr.edu/home/baez/crackpot.html
becomes available.
Regards,
Jiri Jelinek
---
if you wire-head, you go extinct
Doing it today certainly wouldn't be a good idea, but
whatever we do to take care of risks and improvements, our AGI(s) will
eventually do a better job, so why not then?
Going into a degenerate mental state is no different than death. If you can't
see
On Fri, Jun 13, 2008 at 1:28 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I think that our culture of self-indulgence is to some extent in a Nirvana
attractor. If you think that's a good thing, why shouldn't we all lie around
with wires in our pleasure centers (or hopped up on cocaine, same
Mark,
Assuming that
a) pain avoidance and pleasure seeking are our primary driving forces; and
b) our intelligence wins over our stupidity; and
c) we don't get killed by something we cannot control;
Nirvana is where we go.
Jiri
---
,
Jiri Jelinek
---
a) pain avoidance and pleasure seeking are our primary driving forces;
On Fri, Jun 13, 2008 at 3:47 PM, Mark Waser [EMAIL PROTECTED] wrote:
Yes, but I strongly disagree with assumption one. Pain avoidance and
pleasure are best viewed as status indicators, not goals.
Pain and pleasure [levels]
Buddhism teaches that happiness comes from within, so stop twisting the
world around to make yourself happy, because this can't succeed.
Which is of course false... It might come from within, but triggers can be
internal as well as external, and both work pretty well. As for the world
twisting, it's just
On Fri, Jun 13, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote:
if you wire-head, you go extinct
Doing it today certainly wouldn't be a good idea, but whatever we do
to take care of risks and improvements, our AGI(s) will eventually do
a better job, so why not then?
Regards,
Jiri Jelinek
. People are more interested in
pleasure than in messing with terribly complicated problems.
Regards,
Jiri Jelinek
*** Problems for AIs, work for robots, feelings for us. ***
---
understand that:
Humans demonstrate GI, but being fully human-level is not
necessarily required for true AGI.
In some ways, it might even hurt problem-solving abilities.
Regards,
Jiri Jelinek
---
/rule [self-]modifications.
Regards,
Jiri Jelinek
---
YKY,
Can you give an example of something expressed in PLN that is very
hard or impossible to express in FOL?
FYI, I recently ran into some issues with my [under-development]
formal language (which is being designed for my AGI-user
communication) when trying to express statements like:
John
/capabilities of X?
In the case of self-consciousness, the X would simply = self.
Regards,
Jiri Jelinek
---
I got the info below from my supervisor. Contact me off-list if interested.
Thanks,
Jiri Jelinek
We are looking for a person with the following:
- Artificial Intelligence background, preferably with modeling and
simulation experience
If your AGI project supports formal language (FL) communication, I
would be interested to see how the following sentence would be
expressed in that FL:
John said that if he knew yesterday what he knows today, he wouldn't
have done what he did back then.
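For illustration only, here is one way the nesting could be made explicit (purely my own assumed sketch, not any project's actual FL): a speech act wrapping a counterfactual conditional, with explicit time references, written as a Python literal:

# Hypothetical encoding; slot names are illustrative assumptions only.
statement = {
    "act": "say",
    "agent": "John",
    "content": {                          # what John said
        "type": "counterfactual",
        "condition": {                    # if he knew yesterday what he knows today
            "act": "know",
            "agent": "John",
            "object": {"ref": "knowledge", "of": "John", "time": "today"},
            "time": "yesterday",
            "reality": "unreal",          # contrary to fact
        },
        "consequence": {                  # he wouldn't have done what he did back then
            "act": "do",
            "agent": "John",
            "negated": True,
            "object": {"ref": "actions", "of": "John", "time": "back_then"},
            "time": "back_then",
            "reality": "unreal",
        },
    },
}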
Thanks,
Jiri Jelinek
PS: Sorry if similar
did you use to get funding? (if applicable)
Regards,
Jiri Jelinek
---
: Interesting... Sapphire is the name of my project.
It's hard to find a nice unique project name these days. One of
the reasons why I picked the GINA (General Intelligence Narrative Agent)
name for my AGI experiment is that it's also a regular name = widely
accepted for reuse :-)
Regards,
Jiri Jelinek
primate mammal. I hope the future holds something better for
us and the world will be more mechanized and controlled by our
technology.
Sorry if I read too quickly and missed something important. I have
tons of AGI stuff to catch up with after months of non-AI captivity.
Regards,
Jiri Jelinek
I would like to learn more about approaches people took when trying to
implement indexing for case-based reasoning (to support searches for
semantic similarities in large case repositories) - preferably in AGI
implementations. Any good online sources to learn from?
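To frame what I mean by indexing, here is a toy sketch (my own assumption, not any particular AGI implementation): cases stored as sets of semantic features, an inverted index to narrow down candidates, and a simple overlap score to rank them, in Python:

from collections import defaultdict

class CaseIndex:
    # Toy case-based reasoning index: feature -> candidate cases -> ranked by overlap.
    def __init__(self):
        self.cases = {}                    # case_id -> set of semantic features
        self.inverted = defaultdict(set)   # feature -> ids of cases containing it

    def add(self, case_id, features):
        self.cases[case_id] = set(features)
        for f in features:
            self.inverted[f].add(case_id)

    def most_similar(self, features, k=3):
        query = set(features)
        candidates = set()
        for f in query:                    # only score cases sharing a feature
            candidates |= self.inverted.get(f, set())
        def jaccard(cid):
            case = self.cases[cid]
            return len(case & query) / len(case | query)
        return sorted(candidates, key=jaccard, reverse=True)[:k]

index = CaseIndex()
index.add("cut_bread_with_knife", ["cut", "bread", "knife", "kitchen"])
index.add("slice_cheese_with_wire", ["cut", "cheese", "wire", "kitchen"])
print(index.most_similar(["cut", "bread"]))  # most similar case first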
Thanks,
Jiri Jelinek
unless he gets off his knees and actually does something about
himself.
Regards,
Jiri Jelinek
-
, planning to rewrite textbooks (/lectures) using neutral and
woman-appealing analogies. I did not really follow it, so I'm not sure what
the outcome was.
Regards,
Jiri Jelinek
-
.
Regards,
Jiri Jelinek
-
Dennis,
Could you give an example of such a problem?
For example, figuring out a country's foreign policies to protect the
best interests of the nation (considering short- and long-term
consequences).
Sorry for not responding to some of the stuff you wrote recently. I'm
deep in a coding mood these days
so much for us that it would IMO be worth it
to immediately stop working on all non-critical projects and
temporarily spend as many resources as possible on AGI R&D.
Regards,
Jiri Jelinek
-
a behavior changes through
reinforcement based on given rules.
Good luck with this,
Jiri Jelinek
-
(and other pleasant) from
their perspective?
Regards,
Jiri Jelinek
-
full time but limited
resources force them to focus on other stuff. Money can buy their
time.
Regards,
Jiri Jelinek
-
Thanks for the responses. Sorry, I picked just a couple of folks.
Dealing with the wide audience of the whole AGI list would IMO make
things more difficult for me. I may share selected stuff later.
Regards,
Jiri Jelinek
-
to help, please get in touch through my private gmail account.
Thanks,
Jiri Jelinek
-
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
We just need to control the AGI's goal system.
You can only control the goal system of the first iteration.
...and you can add rules for its creations (e.g. stick with the same
goals/rules unless authorized otherwise)
But if
these days.
Regards,
Jiri Jelinek
-
it in another.
Thanks,
Jiri
On Nov 6, 2007 5:29 AM, BillK [EMAIL PROTECTED] wrote:
On 11/6/07, Jiri Jelinek wrote:
Did you read the following definition somewhere?
General intelligence is the ability to gain knowledge in one context
and correctly apply it in another.
I found it in notes I wrote
On Nov 5, 2007 7:01 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:
On Nov 4, 2007 12:40 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
How do you propose to measure intelligence in a proof of concept?
Hmmm, let me check my schedule...
Ok, I'll figure this out on Thursday night (unless I get hit
based on section 14,15,.. keyword-matches) for use in a particular AGI
project.
It would be nice if the system had features for reporting similar
efforts by independent groups.
Regards,
Jiri Jelinek
On Nov 7, 2007 11:55 AM, Derek Zahn [EMAIL PROTECTED] wrote:
Let me give a couple of examples
for less significant
stuff.
Tell me again why *anyone* would want to fill this out?
Because most AGI developers need all the help they can get. Try to put
together a well-functioning AGI dev team with very limited resources. I
have great respect for the few who managed to do that.
Regards,
Jiri
When listening to that like-filled dialogue, I was a few times under
the strong impression that the very specific timing with which particular parts
of the like-containing sentences were pronounced played a critical
role in figuring out the meaning of the particular 'like' instance.
Jiri
On Nov 6, 2007 12:49
Matt,
We can compute behavior, but nothing indicates we can compute
feelings. Qualia research is needed to figure out new platforms for
uploading.
Regards,
Jiri Jelinek
On Nov 4, 2007 1:15 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
Matt,
Create
On Nov 4, 2007 12:40 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
If you can't get meaning from a clean input format, then what makes you
think you can handle NL?
Humans seem to get meaning more easily from ambiguous statements than from
mathematical
with our value system.
Regards,
Jiri Jelinek
-
On Nov 2, 2007 3:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not something you can add on later.
I disagree. You can
look at
the human goal system and investigate where it's likely to lead us. My
impression is that most of us have only a very shallow understanding of
what we really want. When messing with AGI, we had better know what we
really want.
Regards,
Jiri Jelinek
-
algorithms working and then (possibly much later) let the system
focus on NL analysis/understanding or build some
NL-to-structured-format translation tools.
Regards,
Jiri Jelinek
-
and plugging you into the pleasure grid. ;-)
Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it in the end, as far as you can see?
Regards,
Jiri Jelinek
-
Linas, BillK
It might currently be hard to accept for association-based human
minds, but things like roses, power-over-others, being worshiped
or loved are just a waste of time with indirect feeling triggers
(assuming the nearly-unlimited ability to optimize).
Regards,
Jiri Jelinek
On Nov 2, 2007
, but not enough to
actually build one.
Just build an AGI that follows given rules.
Regards,
Jiri Jelinek
-
destroying the world by launching 10,000 nuclear
bombs. We should be more worried that it will give us what we want.
I'm optimistic.
Regards,
Jiri Jelinek
-
.
Regards,
Jiri Jelinek
On Nov 2, 2007 12:54 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Jiri Jelinek wrote:
Let's go to an extreme: Imagine being an immortal idiot.. No matter
what you do and how hard you try, the others will always be so much
better at everything that you will eventually
(/limitations) in some way.
What a life. Suddenly, there is this amazing pleasure machine as a new
god-like style of living for poor creatures like you. What do you do?
Regards,
Jiri Jelinek
You know, survival of the
fittest and all that other boring rot that just happens to dominate reality.
Nirvana
be viewed as a whole in this respect.
Regards,
Jiri Jelinek
On Nov 2, 2007 1:37 AM, Stefan Pernar [EMAIL PROTECTED] wrote:
On Nov 2, 2007 1:19 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:
Is this really what you *want*?
Out of all the infinite possibilities, this is the world in which you
.
Regards,
Jiri Jelinek
On Oct 31, 2007 4:19 AM, Bob Mottram [EMAIL PROTECTED] wrote:
From a promotional perspective these ideas seem quite weak. To most
people AI saving the world or destroying it just sounds crackpot (a
cartoon caricature of technology), whereas helping us to accomplish
our goals
concepts like intelligence are totally meaningless.
Regards,
Jiri Jelinek
-
I'll probably include a reference to: Risks to civilization,
humans and planet Earth
http://en.wikipedia.org/wiki/Risks_to_civilization%2C_humans_and_planet_Earth
Jiri
On Oct 30, 2007 10:18 AM, Jiri Jelinek [EMAIL PROTECTED] wrote:
The idea that we really need to build smarter machines
powerful tools than not
have. If we are too stupid to live then we don't deserve to live.. IMO
fair enough.. Let's give it a shot :-)
Regards,
Jiri Jelinek
On Oct 30, 2007 6:09 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
I'll probably include a reference
such overall-very-human-like AGIs since many parts of
the human architecture are not worth mimicking (or even kind of stupid
to mimic).
Jiri Jelinek
PS: Being busy with dev work, I'm unlikely to get back to these discussions.
On 10/16/07, Mike Tintner [EMAIL PROTECTED] wrote:
In 2006, Henrik Christensen
--
Regards,
Jiri Jelinek
On 10/1/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Check out the following article entitled: The Future of Computing,
According to Intel -- Massively multicore processors will enable smarter
computers that can infer our activities.
http://www.technologyreview.com
Eric,
I'm not 100% sure whether someone/something other than me feels pain, but
considerable similarities between me and other humans
- architecture
- [triggers of] internal and external pain-related responses
- independent descriptions of subjective pain perceptions which
correspond in certain ways
should we expect a computer
program to feel pain (?) ..
Regards,
Jiri Jelinek
-
algorithmic
complexity on VNA.
Regards,
Jiri Jelinek
On 6/14/07, Mark Waser [EMAIL PROTECTED] wrote:
Oh. You're stuck on qualia (and zombies). I haven't seen a good
compact argument to convince you (and e-mail is too low band-width and
non-interactive to do one of the longer ones). My
James,
determine for some reason that the physical is truly missing something
Look at twin particles = just another example of something missing in
the world as we can see it.
Is it good enough to act and think and reason as if you have
experienced the feeling?
For AGI - yes. Why not (?).
4D to
figure out how qualia really work.
But OK, let's assume for a moment that certain VNA-processed
algorithms can produce qualia as a side-effect. What factors do you
expect to play an important role in making a particular quale pleasant
vs unpleasant?
Regards,
Jiri Jelinek
On 6/11/07, Mark
doing this any more'
point. ;-)) Looks like entropy is a kind of pain to us (and to our
devices) and negative entropy might be a kind of pain to the
universe. Hopefully, when (/if) our AGI figures this out, it will not
attempt to squeeze the Universe into a single spot to solve it.
Regards,
Jiri
of protected mode with access to the 1K data, ask it to solve the
tricky problems and auto-check the solution. Happy waiting! ;-))
Regards,
Jiri Jelinek
On 6/12/07, John G. Rose [EMAIL PROTECTED] wrote:
There are always the difficulties of creating AGI in software written by
people. Maybe it would
James,
Frank Jackson (in Epiphenomenal Qualia) defined qualia as
"...certain features of the bodily sensations especially, but also of
certain perceptual experiences, which no amount of purely physical
information includes." :-)
If it walks like a human, talks like a human, then for all those
Mark,
Could you specify some of those good reasons (i.e. why a sufficiently
large/fast enough von Neumann architecture isn't a sufficient substrate
for a sufficiently complex mind to be conscious and feel -- or, at
least, to believe itself to be conscious and believe itself to feel
For being
Hi Mark,
Your brain can be simulated on a large/fast enough von Neumann architecture.
From the behavioral perspective (which is good enough for AGI) - yes,
but that's not the whole story when it comes to the human brain. In our
brains, information not only is and moves but also feels. From
my
*** I want it now! ***
Friendly AGI? many worry,
but I would start with input story:
NL support - what a pain,
human-senses - how insane..
But, there's a shortcut one can grab,
Form-based I/O does the job.
You can get it NL-close,
still fun for users - the way I chose.
Many keep trying the NL
the pain
sensation.
Regards,
Jiri Jelinek
-
(at least in our bodies) are not enough for actual
feelings. For example, to feel pleasure, you also need things like
serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
endorphins. The worlds of real feelings and logic are loosely coupled.
Regards,
Jiri Jelinek
On 5/23/07, Mark Waser
will be OK. My emotions say that
there is far too much that can go awry if you depend upon *everything* that
you say you're depending upon *plus* everything that you don't realize
you're depending upon *plus* . . .
Mark
- Original Message -
From: Jiri Jelinek [EMAIL PROTECTED]
To: agi
- Sure.
The ultimate decision maker - I would not vote for that.
Sorry it took me a while to get back to you, but (even though I don't
post to this AGI list much) I felt guilty about too much AGI talk and not
enough AGI work, so I had to do something about it. :)
Regards,
Jiri Jelinek
On 5/3/07, Mark
Make sure you don't spend too much time pondering x, y, z before
solving a, b, c. The x, y, z may later look different to you. Work out the
knowledge representation first.
Regards,
Jiri Jelinek
On 5/3/07, a [EMAIL PROTECTED] wrote:
Hello,
I have trouble implementing my AGI algorithm
of and stories about emotionless people.
Mark
P.S. Great discussion. Thank you.
- Original Message -
*From:* Jiri Jelinek [EMAIL PROTECTED]
*To:* agi@v2.listbox.com
*Sent:* Tuesday, May 01, 2007 6:21 PM
*Subject:* Re: [agi] Pure reason is a disease.
Mark,
I understand your point but have