[agi] NL parsing

2010-07-16 Thread Jiri Jelinek
Believe it or not, this sentence is grammatically correct and has meaning: 'Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.' source: http://www.mentalfloss.com/blogs/archives/13120 :-)

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
then you don't understand it. That was Richard Feynman. Regards, Jiri Jelinek PS: Sorry if I'm missing anything. Being busy, I don't read all posts.

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
On Wed, Nov 19, 2008 at 3:39 AM, Trent Waddington [EMAIL PROTECTED] wrote: On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek [EMAIL PROTECTED] wrote: Trent Waddington wrote: "Apparently, it was Einstein who said that if you can't explain it to your grandmother then you don't understand it." That was Richard Feynman.

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread Jiri Jelinek
controls then we (just like many other species) are due for extinction for adaptability limitations. Regards, Jiri Jelinek

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Jiri Jelinek
in that definition ;-) Regards, Jiri On Wed, Nov 12, 2008 at 12:16 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Jiri Jelinek wrote: On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote: is it really necessary for an AGI to be conscious? Depends on how you define it. H

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Jiri Jelinek
it comes to problem solving. So what? Equal/superior in whatever - who cares, as long as we can progress safely and enjoy life - which is what our tools (including AGI) are being designed to help us with. Regards, Jiri Jelinek

Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Jiri Jelinek
Mike, The chance of someone stealing your idea is v. remote. There are many companies that made a fortune with stolen ideas (e.g. Microsoft). But of course they are primarily after proven ideas. YKY, If practically doable, I would recommend closed source, utilizing (and possibly developing) as

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Jiri Jelinek
Matt, So, what formal language model can solve this problem? A FL that clearly separates basic semantic concepts like objects, attributes, time, space, actions, roles, relationships, etc., plus core subjective concepts, e.g. want, need, feel, aware, believe, expect, unreal/fantasy. Humans have senses
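[Editorial sketch: the thread never specifies such a formal language, but a minimal illustration of "clearly separated semantic concepts" might look like the frame below. Every slot name is an assumption, not a published FL specification.]

```python
# Illustrative only: one event with semantic concepts kept in separate
# slots (action, roles, objects/attributes, time, space), plus a slot
# for subjective concepts such as belief and certainty.
event = {
    "action": "give",
    "roles": {"agent": "john", "recipient": "mary", "object": "book1"},
    "objects": {"book1": {"type": "book", "attributes": {"color": "red"}}},
    "time": {"date": "2008-09-20", "tense": "past"},
    "space": {"location": "library"},
    "subjective": {"believed_by": "speaker", "certainty": 0.9},
}
```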

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Jiri Jelinek
On Fri, Sep 19, 2008 at 10:46 AM, Matt Mahoney [EMAIL PROTECTED] wrote: "Google is the closest we have to AI at the moment." Matt, There is a difference between being good at a) finding problem-related info/pages, and b) finding functional solutions (through reasoning), especially when all the

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Jiri Jelinek
Matt, Q: how many fluid ounces in a cubic mile? Google: 1 cubic mile = 1.40942995 × 10^14 US fluid ounces. Q: who is the tallest U.S. president? Google: Abraham Lincoln at six feet four inches (along with other text). Try "What's the color of Dan Brown's black coat?" What's the excuse for a
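[Editorial note: the first answer is a deterministic unit conversion, which checks out from first principles; this snippet is an added illustration, not from the thread.]

```python
# Verify Google's figure: 1 cubic mile in US fluid ounces.
INCHES_PER_MILE = 5_280 * 12          # 63,360 inches per mile
CUBIC_INCHES_PER_US_GALLON = 231      # US liquid gallon, by definition
FL_OZ_PER_US_GALLON = 128

cubic_inches = INCHES_PER_MILE ** 3
fl_oz = cubic_inches * FL_OZ_PER_US_GALLON / CUBIC_INCHES_PER_US_GALLON
print(f"{fl_oz:.8e}")                 # 1.40942995e+14
```

The contrast stands: this is lookup and arithmetic, not the multi-step reasoning the follow-up questions would require.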

Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Jiri Jelinek
URLs for images and users describe it using the system's formal language (which I named GSL by the way - General Scripting Language). GINA deals with images in a similar way as with the above-mentioned phrases. Regards, Jiri Jelinek

Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Samantha, Mike, Would you also say that without a body, you couldn't understand 3D space? It depends on what is meant by, and the value of, "understand 3D space". If the intelligence needs to navigate or work with 3D space or even understand intelligence whose very concepts are filled with

Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Mike, Imagine a simple 3D scene with 2 different-size spheres. A simple program allows you to change positions of the spheres and it can answer the question "Is the smaller sphere inside the bigger sphere?" [Yes|Partly|No]. I can write such a program in no time. Sure, it's extremely simple, but it deals
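[Editorial sketch: the excerpt doesn't include the code, but the described program reduces to comparing the center distance against the two radii; a reconstruction assuming solid spheres:]

```python
# Three-valued containment test for the two-sphere scene:
# is the smaller sphere inside the bigger one?
import math

def containment(c_small, r_small, c_big, r_big):
    d = math.dist(c_small, c_big)     # distance between sphere centers
    if d + r_small <= r_big:
        return "Yes"                  # fully enclosed
    if d >= r_small + r_big:
        return "No"                   # no overlap at all
    return "Partly"                   # partial overlap

print(containment((0.5, 0, 0), 1.0, (0, 0, 0), 3.0))  # Yes
print(containment((5.0, 0, 0), 1.0, (0, 0, 0), 3.0))  # No
```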

Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
classification.. Regards, Jiri Jelinek

Re: [agi] Artificial humor

2008-09-10 Thread Jiri Jelinek
On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED] wrote: "Without a body, you couldn't understand the joke." False. Would you also say that without a body, you couldn't understand 3D space? BTW it's kind of sad that people find it funny when others get hurt. I wonder what are the

Re: [agi] Perception Understanding of Space

2008-09-10 Thread Jiri Jelinek
On Wed, Sep 10, 2008 at 5:23 PM, Mike Tintner [EMAIL PROTECTED] wrote: You're saying it's that 3D space *can* be understood without a body? Er, false. http://en.wikipedia.org/wiki/SHRDLU Jiri

Re: [agi] Philosophy of General Intelligence

2008-09-08 Thread Jiri Jelinek
than others. And of course, there are also certain things in particular societies you need to avoid. If the system gets feedback and joke samples, it can tweak/generate its joke templates (always considering info about the audience) and get better. Decent KR - that's the first thing. Regards, Jiri

Re: [agi] draft for comment

2008-09-07 Thread Jiri Jelinek
improving a particular AGI design, your views would change drastically. Just my opinion.. Regards, Jiri Jelinek

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Jiri Jelinek
Mike, every kind of representation, not just mathematical and logical and linguistic, but everything - visual, aural, solid, models, embodied etc etc. There is a vast range. That means also every subject domain - artistic, historical, scientific, philosophical, technological, politics,

Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Jiri Jelinek
-players changes in relevant stages. You can design a user-friendly interface for teaching systems in meaningful ways so it can later think using queriable models and understand relationships [changes] between concepts etc... Sorry about the brevity (busy schedule). Regards, Jiri Jelinek PS: we might

Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Jiri Jelinek
, but we humans have limitations of that nature as well. Regards, Jiri Jelinek

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Jiri Jelinek
[from whatever source] and *then* it's time to apply its intelligence. Regards, Jiri Jelinek

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Jiri Jelinek
of those other tools being used for cutting bread (and is not self-aware in any sense), it still can (when asked for advice) make a reasonable suggestion to try the T2 (because of the similarity) = coming up with a novel idea demonstrating general intelligence. Regards, Jiri Jelinek

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Jiri Jelinek
for safety, you would IMO end up with basically giving the goals - which is of course easier without messing with qualia implementation. Forget qualia as a motivation for our AGIs. Our AGIs are supposed to work for us, not for themselves. Regards, Jiri Jelinek

Re: [agi] Groundless reasoning

2008-08-07 Thread Jiri Jelinek
-in. There are simply certain concepts for which the AGI's attempt to somehow learn them on its own would [IMO] be a complete waste of resources + it would require senses similar to those we have. Regards, Jiri Jelinek

Re: [agi] Groundless reasoning

2008-08-04 Thread Jiri Jelinek
), OR you can use a formal language which will help your AGI to semantically sort out the input (= possibly [initially] less user-friendly, but fewer resources needed for implementation + you can go for NL support later, after implementing input-understanding, reasoning, and possibly scaling). Regards, Jiri

Re: [agi] Groundless reasoning

2008-08-04 Thread Jiri Jelinek
On Tue, Aug 5, 2008 at 12:48 AM, Ben Goertzel [EMAIL PROTECTED] wrote: The problem is that writing stories in a formal language, with enough nuance and volume to really contain the needed commonsense info, would require a Cyc-scale effort at formalized story entry. While possible in principle,

Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-22 Thread Jiri Jelinek
On Tue, Jul 22, 2008 at 11:22 AM, Jan Klauck [EMAIL PROTECTED] wrote: Opinions? Can your (ethical) AI read the content of the following link, translate it conceptually from physics to AI and give us a friendly answer (including a score)? http://www.math.ucr.edu/home/baez/crackpot.html

Re: [agi] Nirvana

2008-06-15 Thread Jiri Jelinek
becomes available. Regards, Jiri Jelinek

Re: [agi] Nirvana

2008-06-14 Thread Jiri Jelinek
"if you wire-head, you go extinct" Doing it today certainly wouldn't be a good idea, but whatever we do to take care of risks and improvements, our AGI(s) will eventually do a better job, so why not then? "Going into a degenerate mental state is no different than death." If you can't see

Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
On Fri, Jun 13, 2008 at 1:28 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I think that our culture of self-indulgence is to some extent in a Nirvana attractor. If you think that's a good thing, why shouldn't we all lie around with wires in our pleasure centers (or hopped up on cocaine, same

Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
Mark, Assuming that a) pain avoidance and pleasure seeking are our primary driving forces; and b) our intelligence wins over our stupidity; and c) we don't get killed by something we cannot control; Nirvana is where we go. Jiri

Re: [agi] The Logic of Nirvana

2008-06-13 Thread Jiri Jelinek
, Jiri Jelinek

Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
a) pain avoidance and pleasure seeking are our primary driving forces; On Fri, Jun 13, 2008 at 3:47 PM, Mark Waser [EMAIL PROTECTED] wrote: Yes, but I strongly disagree with assumption one. Pain avoidance and pleasure are best viewed as status indicators, not goals. Pain and pleasure [levels]

Re: [agi] The Logic of Nirvana

2008-06-13 Thread Jiri Jelinek
"Buddhism teaches that happiness comes from within, so stop twisting the world around to make yourself happy, because this can't succeed." Which is of course false... It might come from within, but triggers can be internal as well as external and both work pretty well. As for the world twisting, it's just

Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
On Fri, Jun 13, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote: "if you wire-head, you go extinct" Doing it today certainly wouldn't be a good idea, but whatever we do to take care of risks and improvements, our AGI(s) will eventually do a better job, so why not then? Regards, Jiri Jelinek

Re: [agi] Nirvana

2008-06-12 Thread Jiri Jelinek
. People are more interested in pleasure than in messing with terribly complicated problems. Regards, Jiri Jelinek *** Problems for AIs, work for robots, feelings for us. ***

Re: [agi] Nirvana

2008-06-12 Thread Jiri Jelinek
understand that: Humans demonstrate GI, but being fully human-level is not necessarily required for true AGI. In some ways, it might even hurt the problem solving abilities. Regards, Jiri Jelinek

Re: [agi] Nirvana

2008-06-11 Thread Jiri Jelinek
/rule [self-]modifications. Regards, Jiri Jelinek

Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Jiri Jelinek
YKY, Can you give an example of something expressed in PLN that is very hard or impossible to express in FOL? FYI, I recently ran into some issues with my [under-development] formal language (which is being designed for my AGI-user communication) when trying to express statements like: John

Re: [agi] Did this message get completely lost?

2008-06-01 Thread Jiri Jelinek
/capabilities of X? In the case of self-consciousness, the X would simply = self. Regards, Jiri Jelinek

[agi] AI job in Arlington, VA (DARPA)

2008-05-19 Thread Jiri Jelinek
I got the below info from my supervisor. Contact me off-list if interested. Thanks, Jiri Jelinek We are looking for a person with the following: - Artificial Intelligence background, preferably with modeling and simulation experience

[agi] Formal Language Expressions

2008-05-16 Thread Jiri Jelinek
If your AGI project supports formal language (FL) communication, I would be interested to see how the following sentence would be expressed in that FL: "John said that if he knew yesterday what he knows today, he wouldn't do what he did back then." Thanks, Jiri Jelinek PS: Sorry if similar
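[Editorial sketch: to make the difficulty concrete, here is one hypothetical structuring of that sentence; it is not Jelinek's GSL or any list member's FL, and every slot name is an assumption. The hard parts are the embedded speech act, the counterfactual, and the three distinct time references.]

```python
# Hypothetical structured encoding of the test sentence.
statement = {
    "act": "say", "agent": "john", "time": "past",
    "content": {
        "type": "counterfactual",
        "condition": {        # "if he knew yesterday what he knows today"
            "act": "know", "agent": "john", "time": "yesterday",
            "object": {"ref": "knowledge_of", "agent": "john",
                       "time": "today"},
        },
        "consequence": {      # "he wouldn't do what he did back then"
            "act": "do", "agent": "john", "negated": True,
            "object": {"ref": "actions_of", "agent": "john",
                       "time": "back_then"},
        },
    },
}
```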

[agi] SAFAIRE project

2008-04-11 Thread Jiri Jelinek
did you use to get funding? (if applicable) Regards, Jiri Jelinek

Re: [agi] SAFAIRE project

2008-04-11 Thread Jiri Jelinek
: Interesting... Sapphire is the name of my project. It's hard to find a nice unique project name these days. One of the reasons why I picked the name GINA (General Intelligence Narrative Agent) for my AGI experiment is that it's also a regular name = widely accepted for reuse :-) Regards, Jiri Jelinek

Re: [agi] Nine Misunderstandings About AI

2008-04-09 Thread Jiri Jelinek
primate mammal. I hope the future holds something better for us and the world will be more mechanized and controlled by our technology. Sorry if I read too quickly and missed something important. I have tons of AGI stuff to catch up with after months of non-AI captivity. Regards, Jiri Jelinek

[agi] Indexing for CBR

2008-01-07 Thread Jiri Jelinek
I would like to learn more about approaches people took when trying to implement indexing for case-based reasoning (to support searches for semantic similarities in large case repositories) - preferably in AGI implementations. Any good online sources to learn from? Thanks, Jiri Jelinek
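[Editorial sketch: no specific technique is named in the thread. As a minimal baseline for the idea, one common approach indexes each case as a feature vector and retrieves the nearest case by cosine similarity; real CBR indexes layer domain-specific feature weighting and adaptation on top.]

```python
# Minimal CBR retrieval baseline: cases as bags of features,
# nearest case by cosine similarity.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

cases = {
    "stalled_engine": Counter(engine=2, noise=1, cold_start=1),
    "worn_brakes":    Counter(brake=2, noise=1),
}
query = Counter(engine=1, cold_start=1)
print(max(cases, key=lambda c: cosine(cases[c], query)))  # stalled_engine
```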

Re: [agi] AGI and Deity

2007-12-13 Thread Jiri Jelinek
unless he gets off his knees and actually does something about himself. Regards, Jiri Jelinek

Re: [agi] Where are the women?

2007-11-28 Thread Jiri Jelinek
, planning to rewrite text-books (/lectures) using neutral and woman-appealing analogies. I did not really follow it, so I'm not sure what the outcome was. Regards, Jiri Jelinek

Re: Re[4]: [agi] Funding AGI research

2007-11-20 Thread Jiri Jelinek
. Regards, Jiri Jelinek

Re: Re[6]: [agi] Funding AGI research

2007-11-20 Thread Jiri Jelinek
Dennis, Could you give an example of such a problem? For example, figuring out a country's foreign policies to protect the best interest of the nation (considering short- and long-term consequences). Sorry for not responding to some of the stuff you wrote recently. I'm deep in a coding mood these days

Re: Re[2]: [agi] Funding AGI research

2007-11-18 Thread Jiri Jelinek
so much for us that it would IMO be worth immediately stopping work on all non-critical projects and temporarily spending as many resources as possible on AGI R&D. Regards, Jiri Jelinek

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Jiri Jelinek
a behavior changes through reinforcement based on given rules. Good luck with this, Jiri Jelinek

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Jiri Jelinek
(and other pleasant) from their perspective? Regards, Jiri Jelinek

Re: [agi] Funding AGI research

2007-11-17 Thread Jiri Jelinek
full time but limited resources force them to focus on other stuff. Money can buy their time. Regards, Jiri Jelinek

Re: [agi] advice-level dev collaboration

2007-11-14 Thread Jiri Jelinek
Thanks for the responses. Sorry, I picked just a couple of folks. Dealing with the wide audience of the whole AGI list would IMO make things more difficult for me. I may share selected stuff later. Regards, Jiri Jelinek

[agi] advice-level dev collaboration

2007-11-13 Thread Jiri Jelinek
to help, please get in touch through my private gmail account. Thanks, Jiri Jelinek

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-12 Thread Jiri Jelinek
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote: "We just need to control the AGI's goal system." You can only control the goal system of the first iteration. ..and you can add rules for its creations (e.g. stick with the same goals/rules unless authorized otherwise) But if
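[Editorial sketch of the inheritance rule suggested above, not code from the thread: goals are frozen at creation, copied verbatim to anything the agent spawns, and changeable only with explicit authorization.]

```python
# Toy model: goal set fixed at construction, inherited by creations,
# modifiable only via an authorization key. Purely illustrative.
class Agent:
    def __init__(self, goals, authority_key):
        self.goals = tuple(goals)          # immutable snapshot
        self._authority_key = authority_key

    def spawn(self):
        # Creations inherit the same goals and the same rule.
        return Agent(self.goals, self._authority_key)

    def modify_goals(self, new_goals, key):
        if key != self._authority_key:
            raise PermissionError("goal change not authorized")
        self.goals = tuple(new_goals)

parent = Agent(["serve_users"], authority_key="k3y")
child = parent.spawn()                     # same goals, same constraint
# child.modify_goals(["self_preserve"], key="guess")  # would raise
```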

Re: [agi] Soliciting Papers for Workshop on the Broader Implications of AGI

2007-11-12 Thread Jiri Jelinek
in these days. Regards, Jiri Jelinek

Re: [agi] definition source?

2007-11-08 Thread Jiri Jelinek
it in another. Thanks, Jiri On Nov 6, 2007 5:29 AM, BillK [EMAIL PROTECTED] wrote: On 11/6/07, Jiri Jelinek wrote: Did you read the following definition somewhere? "General intelligence is the ability to gain knowledge in one context and correctly apply it in another." I found it in notes I wrote

Re: [agi] NLP + reasoning?

2007-11-08 Thread Jiri Jelinek
On Nov 5, 2007 7:01 PM, Jiri Jelinek [EMAIL PROTECTED] wrote: On Nov 4, 2007 12:40 PM, Matt Mahoney [EMAIL PROTECTED] wrote: How do you propose to measure intelligence in a proof of concept? Hmmm, let me check my schedule... Ok, I'll figure this out on Thursday night (unless I get hit

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Jiri Jelinek
based on section 14,15,.. keyword-matches) for use in a particular AGI project. It would be nice if the system had features for reporting similar efforts by independent groups. Regards, Jiri Jelinek On Nov 7, 2007 11:55 AM, Derek Zahn [EMAIL PROTECTED] wrote: Let me give a couple of examples

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Jiri Jelinek
for less significant stuff. Tell me again why *anyone* would want to fill this out? Because most AGI developers need all the help they can get. Try to put together a well-functioning AGI dev team with very limited resources. I have great respect for the few who managed to do that. Regards, Jiri

Re: [agi] NLP + reasoning?

2007-11-06 Thread Jiri Jelinek
When listening to that 'like'-filled dialogue, I was a few times under the strong impression that the very specific timing with which particular parts of the 'like'-containing sentences were pronounced played a critical role in figuring out the meaning of the particular 'like' instance. Jiri On Nov 6, 2007 12:49

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-05 Thread Jiri Jelinek
Matt, We can compute behavior, but nothing indicates we can compute feelings. Qualia research is needed to figure out new platforms for uploading. Regards, Jiri Jelinek On Nov 4, 2007 1:15 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Jiri Jelinek [EMAIL PROTECTED] wrote: Matt, Create

Re: [agi] NLP + reasoning?

2007-11-05 Thread Jiri Jelinek
On Nov 4, 2007 12:40 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Jiri Jelinek [EMAIL PROTECTED] wrote: If you can't get meaning from a clean input format then what makes you think you can handle NL? Humans seem to get meaning more easily from ambiguous statements than from mathematical

Re: [agi] Nirvana? Manyana? Never!

2007-11-04 Thread Jiri Jelinek
with our value system. Regards, Jiri Jelinek

Re: [agi] NLP + reasoning?

2007-11-03 Thread Jiri Jelinek
On Nov 2, 2007 3:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Jiri Jelinek [EMAIL PROTECTED] wrote: On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Natural language is a fundamental part of the knowledge base, not something you can add on later. I disagree. You can

Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Jiri Jelinek
look at the human goal system and investigate where it's likely to lead us. My impression is that most of us have only a very shallow understanding of what we really want. When messing with AGI, we had better know what we really want. Regards, Jiri Jelinek

Re: [agi] NLP + reasoning?

2007-11-02 Thread Jiri Jelinek
algorithms working and then (possibly much later) let the system focus on NL analysis/understanding or build some NL-to-the_structured_format translation tools. Regards, Jiri Jelinek

Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
and plugging you into the pleasure grid. ;-) Ok, seriously, what's the best possible future for mankind you can imagine? In other words, where do we want our cool AGIs to get us? I mean ultimately. What is it at the end as far as you can see? Regards, Jiri Jelinek

Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
Linas, BillK, It might currently be hard to accept for association-based human minds, but things like roses, power-over-others, being worshiped or loved are just a waste of time with indirect feeling triggers (assuming the nearly-unlimited ability to optimize). Regards, Jiri Jelinek On Nov 2, 2007

Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
, but not enough to actually build one. Just build an AGI that follows given rules. Regards, Jiri Jelinek

Re: [agi] popularizing injecting sense of urge

2007-11-01 Thread Jiri Jelinek
destroying the world by launching 10,000 nuclear bombs. We should be more worried that it will give us what we want. I'm optimistic. Regards, Jiri Jelinek

Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
. Regards, Jiri Jelinek On Nov 2, 2007 12:54 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: Jiri Jelinek wrote: Let's go to an extreme: Imagine being an immortal idiot.. No matter what you do and how hard you try, the others will always be so much better in everything that you will eventually

Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
(/limitations) in some way. What a life. Suddenly, there is this amazing pleasure machine as a new god-like style of living for poor creatures like you. What do you do? Regards, Jiri Jelinek You know, survival of the fittest and all that other boring rot that just happens to dominate reality. Nirvana

Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
be viewed as a whole in this respect. Regards, Jiri Jelinek On Nov 2, 2007 1:37 AM, Stefan Pernar [EMAIL PROTECTED] wrote: On Nov 2, 2007 1:19 PM, Jiri Jelinek [EMAIL PROTECTED] wrote: Is this really what you *want*? Out of all the infinite possibilities, this is the world in which you

Re: [agi] popularizing injecting sense of urgenc

2007-10-31 Thread Jiri Jelinek
. Regards, Jiri Jelinek On Oct 31, 2007 4:19 AM, Bob Mottram [EMAIL PROTECTED] wrote: From a promotional perspective these ideas seem quite weak. To most people AI saving the world or destroying it just sounds crackpot (a cartoon caricature of technology), whereas helping us to accomplish our goals

Re: [agi] popularizing injecting sense of urgen

2007-10-31 Thread Jiri Jelinek
concepts like intelligence are totally meaningless. Regards, Jiri Jelinek

Re: [agi] popularizing injecting sense of urgency

2007-10-30 Thread Jiri Jelinek
I'll probably include a reference to: Risks to civilization, humans and planet Earth http://en.wikipedia.org/wiki/Risks_to_civilization%2C_humans_and_planet_Earth Jiri On Oct 30, 2007 10:18 AM, Jiri Jelinek [EMAIL PROTECTED] wrote: The idea that we really need to build smarter machines

Re: [agi] popularizing injecting sense of urgenc

2007-10-30 Thread Jiri Jelinek
powerful tools than not have. If we are too stupid to live then we don't deserve to live.. IMO fair enough.. Let's give it a shot :-) Regards, Jiri Jelinek On Oct 30, 2007 6:09 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Jiri Jelinek [EMAIL PROTECTED] wrote: I'll probably include a reference

Re: [agi] Why roboticists have more fun

2007-10-16 Thread Jiri Jelinek
such overall-very-human-like AGIs since many parts of the human architecture are not worth mimicking (or even kind of stupid to mimic). Jiri Jelinek PS: Being busy with dev work, I'm unlikely to get back to these discussions. On 10/16/07, Mike Tintner [EMAIL PROTECTED] wrote: In 2006, Henrik Christensen

Re: [agi] The Future of Computing, According to Intel -- Massively multicore processors will enable smarter computers that can infer our activities

2007-10-01 Thread Jiri Jelinek
-- Regards, Jiri Jelinek On 10/1/07, Edward W. Porter [EMAIL PROTECTED] wrote: Check out the following article entitled: The Future of Computing, According to Intel -- Massively multicore processors will enable smarter computers that can infer our activities. http://www.technologyreview.com

Re: [agi] Pure reason is a disease.

2007-06-16 Thread Jiri Jelinek
Eric, I'm not 100% sure if someone/something other than me feels pain, but considerable similarities between mine and other humans':
- architecture
- [triggers of] internal and external pain-related responses
- independent descriptions of subjective pain perceptions which correspond in certain ways

Re: [agi] Pure reason is a disease.

2007-06-15 Thread Jiri Jelinek
should we expect a computer program to feel pain (?) .. Regards, Jiri Jelinek

Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek
algorithmic complexity on VNA. Regards, Jiri Jelinek On 6/14/07, Mark Waser [EMAIL PROTECTED] wrote: Oh. You're stuck on qualia (and zombies). I haven't seen a good compact argument to convince you (and e-mail is too low band-width and non-interactive to do one of the longer ones). My

Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek
James, "determine for some reason that the physical is truly missing something" Look at twin particles = just another example of something missing in the world as we can see it. "Is it good enough to act and think and reason as if you have experienced the feeling?" For AGI - yes. Why not (?).

Re: [agi] Pure reason is a disease.

2007-06-13 Thread Jiri Jelinek
4D to figure out how qualia really work. But OK, let's assume for a moment that certain VNA-processed algorithms can produce qualia as a side-effect. What factors do you expect to play an important role in making a particular quale pleasant vs unpleasant? Regards, Jiri Jelinek On 6/11/07, Mark

Re: [agi] Pure reason is a disease.

2007-06-12 Thread Jiri Jelinek
doing this any more' point. ;-)) Looks like entropy is a kind of pain to us (and to our devices) and negative entropy might be a kind of pain to the universe. Hopefully, when (/if) our AGI figures this out, it will not attempt to squeeze the Universe into a single spot to solve it. Regards, Jiri

Re: [agi] AGI Generator - Make life easier?

2007-06-12 Thread Jiri Jelinek
of protected mode with access to the 1K data, ask it to solve the tricky problems and auto-check the solution. Happy waiting! ;-)) Regards, Jiri Jelinek On 6/12/07, John G. Rose [EMAIL PROTECTED] wrote: There are always the difficulties of creating AGI in software written by people. Maybe it would

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Jiri Jelinek
James, Frank Jackson (in "Epiphenomenal Qualia") defined qualia as "...certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes".. :-) If it walks like a human, talks like a human, then for all those

Re: [agi] Pure reason is a disease.

2007-06-10 Thread Jiri Jelinek
Mark, Could you specify some of those good reasons (i.e. why a sufficiently large/fast enough von Neumann architecture isn't a sufficient substrate for a sufficiently complex mind to be conscious and feel -- or, at least, to believe itself to be conscious and believe itself to feel For being

Re: [agi] Pure reason is a disease.

2007-06-04 Thread Jiri Jelinek
Hi Mark, "Your brain can be simulated on a large/fast enough von Neumann architecture." From the behavioral perspective (which is good enough for AGI) - yes, but that's not the whole story when it comes to the human brain. In our brains, information not only is and moves but also feels. From my

Re: [agi] Bad Friendly AI poetry ;-)

2007-05-29 Thread Jiri Jelinek
*** I want it now! ***
Friendly AGI? many worry,
but I would start with input story:
NL support - what a pain,
human-senses - how insane..
But, there's a shortcut one can grab,
Form-based I/O does the job.
You can get it NL-close,
still fun for users - the way I chose.
Many keep trying the NL

Re: [agi] Pure reason is a disease.

2007-05-26 Thread Jiri Jelinek
the pain sensation. Regards, Jiri Jelinek

Re: [agi] Pure reason is a disease.

2007-05-24 Thread Jiri Jelinek
(at least in our bodies) are not enough for actual feelings. For example, to feel pleasure, you also need things like serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and endorphins. Worlds of real feelings and logic are loosely coupled. Regards, Jiri Jelinek On 5/23/07, Mark Waser

Re: [agi] Pure reason is a disease.

2007-05-20 Thread Jiri Jelinek
will be OK. My emotions say that there is far too much that can go awry if you depend upon *everything* that you say you're depending upon *plus* everything that you don't realize you're depending upon *plus* . . . Mark - Original Message - From: Jiri Jelinek [EMAIL PROTECTED] To: agi

Re: [agi] Pure reason is a disease.

2007-05-16 Thread Jiri Jelinek
- Sure. The ultimate decision maker - I would not vote for that. Sorry it took me a while to get back to you but (even though I don't post to this AGI list much) I felt guilty of too much AGI talk and not enough AGI work so I had to do something about it. :) Regards, Jiri Jelinek On 5/3/07, Mark

Re: [agi] Trouble implementing my AGI Algorithm

2007-05-03 Thread Jiri Jelinek
Make sure you don't spend too much time pondering x,y,z before solving a,b,c. The x,y,z may later look different to you. Work out the knowledge representation first. Regards, Jiri Jelinek On 5/3/07, a [EMAIL PROTECTED] wrote: Hello, I have trouble implementing my AGI algorithm

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Jiri Jelinek
of and stories about emotionless people. Mark P.S. Great discussion. Thank you. - Original Message - From: Jiri Jelinek [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, May 01, 2007 6:21 PM Subject: Re: [agi] Pure reason is a disease. Mark, I understand your point but have
