[agi] Logical Satisfiability

2008-01-13 Thread Jim Bromer
or another. Jim Bromer - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=8660244id_secret=85461334-795c26

[agi] Logical Satisfiability

2008-01-15 Thread Jim Bromer
examples of problems in NP by definition. So if SAT and the equivalent problems are in P, that does not mean that anything in NP is in P.) Jim Bromer

Re: [agi] Logical Satisfiability

2008-01-15 Thread Jim Bromer
will still have some problems that cannot be solved in P-Time. Jim Bromer Robin Gane-McCalla [EMAIL PROTECTED] wrote: Actually, SAT is an NP-complete problem (http://en.wikipedia.org/wiki/Boolean_satisfiability_problem#NP-completeness), so if it were calculable in polynomial time, then P = NP
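Robin's point is the standard one: because SAT is NP-complete, a polynomial-time SAT algorithm would imply P = NP. The obviously correct baseline algorithm is exhaustive search over all 2^n truth assignments, which is exponential. The sketch below (a hypothetical `brute_force_sat` helper, not any solver discussed in this thread) illustrates that baseline, using the common convention that literal `i` means "variable i is true" and `-i` means "variable i is false":

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses; each clause is a list of nonzero ints,
    where i means variable i is true and -i means variable i is false.
    Tries all 2**n_vars assignments, so the worst case is exponential --
    exactly the blow-up a polynomial-time SAT algorithm would have to avoid.
    """
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied if any literal in it evaluates to true.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # no assignment works: unsatisfiable

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # -> (False, True, True)
```

Modern DPLL/CDCL solvers prune this search dramatically in practice, but no known method avoids exponential worst-case time, which is why a genuinely polynomial general solver would be such a large result.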

Re: [agi] Logical Satisfiability

2008-01-17 Thread Jim Bromer
. But it suggests that it also can be made much more efficient than it will be as soon as I figure it out (if it can be figured out). Thanks for the links. I just downloaded Ghostscript and I am looking forward to studying the lecture notes. Jim Bromer Lukasz Stafiniak [EMAIL PROTECTED] wrote: Lucky

Re: [agi] Logical Satisfiability

2008-01-18 Thread Jim Bromer
, and because this problem has something in common with the NP-complete problem of Satisfiability, some people might be confused by the problem. Jim Bromer

Re: [agi] Logical Satisfiability

2008-01-20 Thread Jim Bromer
I believe that a polynomial solution to the Logical Satisfiability problem will have a major effect on AI, and I would like to discuss that at some time. Jim Bromer Richard Loosemore [EMAIL PROTECTED] wrote: This thread has nothing to do with artificial general intelligence

Re: [agi] Logical Satisfiability

2008-01-20 Thread Jim Bromer
I had no idea what you were talking about until I read Matt Mahoney's remarks. I do not understand why people have so much trouble reading my messages but it is not entirely my fault. I may have misunderstood something that I read, or you may have misinterpreted something that I was saying.

Re: KILLTHREAD -- Re: [agi] Logical Satisfiability

2008-01-20 Thread Jim Bromer
I am disappointed because the question of how a polynomial time solution of logical satisfiability might affect agi is very important to me. Jim Bromer Ben Goertzel [EMAIL PROTECTED] wrote: Hi all, I'd like to kill this thread, because not only is it off-topic, but it seems not to be going

[agi] SAT, SMT and AGI

2008-01-21 Thread Jim Bromer
On Jan 20, 2008 2:34 PM, Jim Bromer [EMAIL PROTECTED] wrote: I am disappointed because the question of how a polynomial time solution of logical satisfiability might affect agi is very important to me. Ben Wrote: Well, feel free to start a new thread on that topic, then ;-) In fact, I will do

Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Jim Bromer
as such. Many paradoxes can be resolved by recognizing that determinism and randomness do not exist as separate fundamentals of the universe. Jim Bromer

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Jim Bromer
use to them? If it would be useful, then there is a reason to believe that it might be useful to AGI. Jim Bromer

Re: [agi] would anyone want to use a commonsense KB?

2008-02-24 Thread Jim Bromer
, but if a reasonable polytime general solver is feasible then it means that we can significantly boost computing power through software. Even if this doesn't produce a significant leap in AI it might produce the overdue next step. Jim Bromer

Re: [agi] reasoning knowledge

2008-02-26 Thread Jim Bromer
of) imitation? I think that childish imitation, in all of its variations, can only be explained by theories of complex conceptual integration. Jim Bromer

Re: [agi] reasoning knowledge

2008-02-26 Thread Jim Bromer
ways appropriately, how to incorporate reason effectively and how these imaginative processes can be integrated with empirical methods and cross analysis are still major complications that no one has seemed to master. Jim Bromer

Re: Common Sense Consciousness [WAS Re: [agi] reasoning knowledge]

2008-02-28 Thread Jim Bromer
it would not necessarily translate into a feasible and extensible general program. Jim Bromer

Re: [agi] reasoning knowledge

2008-02-28 Thread Jim Bromer
by the way, was very helpful in giving me some understanding of how complex ideas work. Or at least I think it was. Jim Bromer

Re: [agi] The Effect of Application of an Idea

2008-03-24 Thread Jim Bromer
start out as being simplistic. But by carefully studying how complicated interactions interfere or cohere I believe that some new AI principles may be found. I will try to come up with a simple model during the next week. Jim Bromer On Sun, Mar 23, 2008 at 4:53 AM, Vladimir Nesov [EMAIL

Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Jim Bromer
On Tue, Mar 25, 2008 at 11:23 AM, William Pearson [EMAIL PROTECTED] wrote: On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote: To try to understand what I am talking about, start by imagining a simulation of some physical operation, like a part of a complex factory in a Sim City

Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Jim Bromer
is significant because the potential problem is so complex that constrained models may be used to study details that would be impossible in more dynamic learning models. Jim Bromer

Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Jim Bromer
of problems that my theory is meant to address. Jim Bromer

Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Jim Bromer
to an online video that were recently posted. Is this similar to what you mean by prototyping? Jim Bromer

Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Jim Bromer
perfectly. Although this kind of talk may not solve the problem, I believe that this is where we are going to end up working if we continue to work on the problem. Jim Bromer

Re: [agi] Instead of an AGI Textbook

2008-03-29 Thread Jim Bromer
more precision, or at least differentiation, some of the more obscure issues may eventually be revealed. Jim Bromer

Re: [agi] Novamente's next 15 minutes of fame...

2008-03-29 Thread Jim Bromer
It sounds interesting. Can anyone go and try it, or does it cost money or something? Is it set up already? Jim Bromer On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote: http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Jim Bromer
would be significant in the advancement of AI programming. Jim Bromer On Sun, Mar 30, 2008 at 11:47 AM, Jim Bromer [EMAIL PROTECTED] wrote: The issue that I am still trying to develop is whether or not a general SAT solver would be useful for AGI. I believe it would be. So I am going to go

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Jim Bromer
to a hybrid approach. Thank you for your politeness and your insightful comments. I am going to quit this group because I have found that it is a pretty bad sign when the moderator mocks an individual for his religious beliefs. However, I hope to talk to you again on some other forum. Jim Bromer

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Jim Bromer
On Mon, Mar 31, 2008 at 9:46 AM, Ben Goertzel [EMAIL PROTECTED] wrote: All this talk about the Lord and SAT solvers has me thinking up variations to the Janis Joplin song http://www.azlyrics.com/lyrics/janisjoplin/mercedesbenz.html Oh Lord, won't you buy me a polynomial-time SAT solution

[agi] The resource allocation problem

2008-04-07 Thread Jim Bromer
indexing overtakes the decrease in complexity that the indexing can offer, and this point can be reached pretty quickly. Jim Bromer

[agi] Logical Satisfiability...Get used to it.

2008-04-14 Thread Jim Bromer
for any immediate AGI project, but the novel logical methods that the solver will reveal may be more significant. Jim Bromer

[agi] Rationalism and Scientific Rationalism: Was Logical Satisfiability...Get used to it.

2008-04-15 Thread Jim Bromer
, because it would need to explore alternatives through the use of imagination. Jim Bromer

[agi] Logical Satisfiability...Get used to it.

2008-04-15 Thread Jim Bromer
are reasonable and rational. Your comment is interesting. I would like to write more about this once I have a little more time. Jim Bromer

[agi] Rationalism and Empricial Rationalism

2008-04-17 Thread Jim Bromer
concepts used in AGI. But I do believe that some kind of 'grounding' is absolutely necessary for it. Jim Bromer

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Jim Bromer
On Thu, Apr 24, 2008 at 9:00 AM, Jim Bromer [EMAIL PROTECTED] wrote: I appreciate what is trying to be said, but there is much more to it. It is not a symbolization-of-words vs. symbolization-of-images issue. Jim Bromer More grammatically: I appreciate what Mike and Bob are reaching

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Jim Bromer
problem. Jim Bromer

Re: [agi] Why Symbolic Representation P.S.

2008-04-25 Thread Jim Bromer
describe something of your approach for visual reasoning? Jim Bromer

RE: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES

2008-04-25 Thread Jim Bromer
. I want to know why he thinks complexity cannot be tolerated and bounded by a programmed AGI system (of limited complexity). Jim Bromer

Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES

2008-04-26 Thread Jim Bromer
not be adequate to deal with this kind of complexity regardless of the amount of memory, speed and parallelism that can be brought in? Jim Bromer

Re: [agi] Complexity is in the system, not the rules themselves

2008-04-30 Thread Jim Bromer
such as approximate correlations. But I think your insight that since interactive symbolic references are not necessarily 'continuous' in some way they may require more elaborate methodologies to understand them is important. Jim Bromer

[agi] Graph mining

2008-05-07 Thread Jim Bromer
might be used in AGI for reason-derived what-if kinds of conjectures. Jim Bromer

[agi] Overlapping Interrelated Bounded Logical Models

2008-05-07 Thread Jim Bromer
. This overlapping models theory requires the explicit use of more complex programming constructs than are typically discussed in these AI discussion groups. But I believe that overlapping logical models will develop naturally in a program that is written around the theory. Jim Bromer

Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-08 Thread Jim Bromer
this. Jim Bromer

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Jim Bromer
you are going to think without thinking it. I don't want to get into a quibble fest, but understanding is not necessarily constrained to prediction. Jim Bromer

Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-09 Thread Jim Bromer
call conceptual integration. My idea of conceptual integration includes blending but it is not limited to it. (And the computers in that era were too wimpy.) Jim Bromer Hi Jim, It's simply I think - and I stand to be corrected - that he has never pushed those levels v. hard at all

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer
- Original Message From: Matt Mahoney [EMAIL PROTECTED] --- Jim Bromer [EMAIL PROTECTED] wrote: I don't want to get into a quibble fest, but understanding is not necessarily constrained to prediction. What would be a good test for understanding an algorithm? -- Matt Mahoney

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer
Richfield --- I agree. And you have to find the instructions before you can read them. (Seriously.) Jim Bromer

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-13 Thread Jim Bromer
to imply that you have some effective ability to use that understanding in some way. A little like the woodworker who knows how to work wood or the engineer who understands a great deal about bridges. Jim Bromer

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Jim Bromer
in your argument that an accurate result from a compressed form is equivalent to prediction), you will need to make it more sophisticated. Jim Bromer

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-15 Thread Jim Bromer
be produced. That is another weakness of the compression=understanding theory. Jim Bromer

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Jim Bromer
algorithm is just a compression algorithm. Jim Bromer

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Jim Bromer
now that I did not have 7 months ago that it may actually work. Jim Bromer

Re: [agi] Porting MindForth AI into JavaScript Mind.html

2008-05-17 Thread Jim Bromer
on the ruminations of the other crackpots and cranks in these groups. Jim Bromer

Re: [agi] Porting MindForth AI into JavaScript Mind.html

2008-05-19 Thread Jim Bromer
on yourself as well. I can say that there are discussions that I find really interesting and discussions that I do not find interesting. I usually skim over the comments that are not very interesting to me. This message is one of those that would not be very interesting to me. Jim Bromer

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Jim Bromer
pulling conclusions out of thin air is just bluster. Jim Bromer

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Jim Bromer
was expressing. But maybe I found a different paper than the one being discussed. I noticed that the abstract he wrote for his paper was not written too well (in my opinion). Jim Bromer

Re: [agi] More Info Please

2008-05-26 Thread Jim Bromer
for a web site that also has some introductory material on how one goes about working on a listed open source project. Jim Bromer

Re: [agi] Compression PLUS a fitness function motoring for hypothesized compressibility is intelligence?

2008-05-31 Thread Jim Bromer
useful to him in his work. But to declare that an eccentric individualistic vision of the problem is the only truly objective method that should be used in all AI research is a case of putting the cart before the horse. Jim Bromer - Original Message From: Tudor Boloni [EMAIL PROTECTED

Re: [agi] Simplistic Paradigms Are Not Substitutes For Insight Into Conceptual Complexity

2008-05-31 Thread Jim Bromer
). --- That is not what I was talking about. Jim Bromer

[agi] Ideological Interactions Need to be Studied

2008-06-01 Thread Jim Bromer
theories must be combined and integrated with previously acquired knowledge through some complicated processes of intelligence which are not yet widely appreciated. Jim Bromer

Re: [agi] Uncertainty

2008-06-02 Thread Jim Bromer
in some cases with the efficacy of methods that would be needed to produce good results in a greater variety of situations. So while I am sure that you are an exceptional teacher, I am also able to assign a made-up probability of .96532 that you have not yet found the yellow brick road. Jim Bromer

Re: [agi] Ideological Interactions Need to be Studied

2008-06-02 Thread Jim Bromer
be discovered. Of course actual experiments with AI prototypes are necessary as well, but I do feel that there are some significant mysteries about the way ideas work and interact that are still to be discovered. Jim Bromer

Re: [agi] Neurons

2008-06-04 Thread Jim Bromer
.) It cannot be proved or disproved for some time, it does not prove or disprove some other interesting technical question, nor does it provide new insight into the more interesting questions of what is feasible and what is not feasible in contemporary AI. Jim Bromer

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Jim Bromer
solution technically... -- Instead of talking about what you would do, do it. Jim Bromer

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Jim Bromer
solution technically... -- Instead of talking about what you would do, do it. I mean, work out your ideal way to solve the questions of the mind and share it with us after you've found some interesting results. Jim Bromer

Re: [agi] Pearls Before Swine...

2008-06-10 Thread Jim Bromer
it a primary concern to me. I would say that I am interested in the problems of complexity and integration of concepts. Jim Bromer

Re: [agi] Pearls Before Swine...

2008-06-10 Thread Jim Bromer
- Original Message From: Steve Richfield [EMAIL PROTECTED] 3-- integrative ... which itself is a very broad category with a lot of heterogeneity ... including e.g. systems composed of wholly distinct black boxes versus systems that have intricate real-time feedbacks between different

[agi] Complexity and Ideological Integration

2008-06-14 Thread Jim Bromer
to individuate and highlight the true nature of the complex relations being considered. Jim Bromer

[agi] A criticism of minimal guidance during instruction.

2008-06-15 Thread Jim Bromer
programs that can learn, and from that point of view this is very interesting. Jim Bromer

Re: [agi] A criticism of minimal guidance during instruction.

2008-06-15 Thread Jim Bromer
the bottlenecks that have been encountered with other AI paradigms of the past then it is not likely to be a true intermediate step toward a better AI product. Jim Bromer - Original Message From: Mike Tintner [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, June 15, 2008 11:02:39 AM

Re: [agi] Learning without Understanding?

2008-06-17 Thread Jim Bromer
is acquired and becomes explicit, then the results might be important. Jim Bromer

[agi] Approximations of Knowledge

2008-06-21 Thread Jim Bromer
difficult. But perhaps Abram's idea could be useful here. As the program has to deal with more complicated collections of simple insights that concern some hard subject matter, it could tend to rely more on approximations to manage those complexes of insight. Jim Bromer

Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer
. Our conclusions are often only approximations, but they can contain unarticulated links to other possibilities that may indicate other ways of looking at the data or conditional variations to the base conclusion. Jim Bromer

Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer
kinds of question is very relevant to discussions about advanced AI.) What do you mean by the "figure 6" shape of cause-and-effect chains? It must refer to some kind of feedback-like effect. Jim Bromer

Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer
show whether advancements in complexity can make a difference to AI even if its application does not immediately result in a human level of intelligence. Jim Bromer - Original Message From: Abram Demski [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, June 22, 2008 4:38:02 PM Subject

Re: [agi] Approximations of Knowledge

2008-06-23 Thread Jim Bromer
should work, or the way AI programs and research into AI should work? Jim Bromer - Original Message From: Abram Demski [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, June 23, 2008 3:11:16 PM Subject: Re: [agi] Approximations of Knowledge Thanks for the comments. My replies

Re: [agi] Approximations of Knowledge

2008-06-24 Thread Jim Bromer
man carrying some books was walking behind me, I would not be too worried about that either. Your statement was way over the line, and it showed some really bad judgment. Jim Bromer - Original Message From: Steve Richfield [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, June 23

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Jim Bromer
to discover the pseudo-elements (or relative elements) of the system relative to the features of the problem. Jim Bromer - Original Message From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, June 24, 2008 9:02:31 PM Subject: Re: [agi] Approximations

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Jim Bromer
in the technical sense of that term, which does not mean a complicated system in ordinary language). Richard Loosemore -- I don't feel that you are seriously interested in discussing the subject with me. Let me know if you ever change your mind. Jim Bromer

Re: [agi] Approximations of Knowledge

2008-06-27 Thread Jim Bromer
is the reality of advanced AI programming. But if you are throwing technical arguments at me, some of which are trivial from my perspective, like the definition of "continuous mathematics" (as distinguished from discrete mathematics), then all I can do is wonder why. Jim Bromer

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Jim Bromer
in the future that you would like to discuss this with me please let me know. Jim Bromer - Original Message From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, June 27, 2008 9:13:01 PM Subject: Re: [agi] Approximations of Knowledge Jim Bromer wrote: From: Richard

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-11 Thread Jim Bromer
that problems are solved through study and experimentation, Richard has no response to the most difficult problems in contemporary AI research except to cry foul. He does not even consider such questions to be valid. Jim Bromer

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-12 Thread Jim Bromer
referring to, and I only glanced at one paper on SHRUTI, but I am pretty sure that I got enough of what was being discussed to talk about it.) Jim Bromer

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Jim Bromer
can be helpful in the analysis of the kinds of problems that can be expected from more ambitious AI models. Jim Bromer

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Jim Bromer
theories. However, I will not know for sure until I test it, and right now that looks like it would be years off. I would be happy to continue the dialog if it can be conducted in a less confrontational and more genial manner than it has been during the past week. Jim Bromer

Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-22 Thread Jim Bromer
, you get this one on me. ;-) Let me know if you're interested. I have everything I need to get started right away. Cheers, Brad --- Dude... Get a life. I mean that in the friendliest way possible, but honestly. Get a life. Jim Bromer

Re: [agi] a fuzzy reasoning problem.. P.S.

2008-07-28 Thread Jim Bromer
Mike said: I didn't emphasize the first flaw in logic, (which is more relevant to your question, and why such questions will keep recurring and can never be *methodologically* sorted out) - the assumption that we know what the terms *refer to*. Example: Mary says Clinton had sex with her. Clinton

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Jim Bromer
ideas. But this means that the program has to be able to deal with greater complexity. Jim Bromer On Mon, Jul 28, 2008 at 10:04 AM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Here is an example of a problematic inference: 1. Mary has cybersex with many different partners 2. Cybersex is a kind

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Jim Bromer
have to recognize that rules or rule-like systems need to be applied so that the program could learn to recognize that additional information derived from the IO environment can be applied to another situation to develop a more sophisticated understanding of some other rule. Jim Bromer

Re: [agi] How do we know we don't know?

2008-07-28 Thread Jim Bromer
in the surface input data. Jim Bromer

Re: [agi] How do we know we don't know?

2008-07-28 Thread Jim Bromer
us to delineate some of the processes of thinking with the hope of finding feasible ways this might be done in an AI program. Jim Bromer On Mon, Jul 28, 2008 at 5:23 PM, James Ratcliff [EMAIL PROTECTED] wrote: It is fairly simple at that point, we have enough context to have a very limited

Re: [agi] How do we know we don't know?

2008-07-30 Thread Jim Bromer
grounded, I would like to see some research that shows that unknown words will not strongly activate any neurons. Take your time; I am only asking a question, not challenging you to fantasy combat. Jim Bromer

Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Jim Bromer
and an objective appreciation of the frame and nature of the kinds of experiments which would be required to examine them scientifically. We all have the ability to help and guide each other toward achieving our personal goals while improving our social skills at the same time. It's not rocket science. Jim Bromer

[agi] Thinking About Controlled Experiments in Extensible Complexity of Reference

2008-08-03 Thread Jim Bromer
. My point is: I often get something out of these conversations even though other people's thinking is usually very different from mine. Jim Bromer

Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Jim Bromer
I seriously meant it to be a friendly statement. Obviously I expressed myself poorly. Jim Bromer On Sun, Aug 3, 2008 at 6:41 PM, Brad Paulsen [EMAIL PROTECTED] wrote: This from the guy who only about three or four days ago responded to a post I made here by telling me to get a life

Re: [agi] Groundless reasoning

2008-08-04 Thread Jim Bromer
that would provide them with more grounding. But first we have to figure it out, because there is not a robot in the world that will be able to figure it out before we do. Jim Bromer

Re: [agi] Groundless reasoning

2008-08-05 Thread Jim Bromer
in complexity is a primary problem that has to be solved if these kinds of programs are ever going to be capable of the kind of higher reasoning that we are thinking of. Jim Bromer

Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-05 Thread Jim Bromer
on all the terms as you were talking about them before. However, I did, at least, get the essence of what you are working on. If you want to share a draft of the paper, let us know, because I would be interested in looking at it. Jim Bromer

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Jim Bromer
about its IO data environment through its interactions with it. This is a subtle argument that cannot be dismissed with an appeal to a hidden presumption of human dominion over understanding, or by fixing it to some primitive theory about AI which was unable to learn through trial and error. Jim Bromer

Re: [agi] For an indication of the complexity of primate brain hardware

2008-08-07 Thread Jim Bromer
what other experts in the field think is being imaged through the method. Jim Bromer

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Jim Bromer
more to be learned. The apparent paradox can be reduced to the never ending deterministic vs free will argument. I think the resolution of these two paradoxical problems is a necessary design principle. Jim Bromer

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Jim Bromer
, no one else is even talking about it. Everyone knows it's a problem, but everyone thinks their particular theory has already solved the problem. I say it should be the focus of study and experiment. Jim Bromer
