or another.
Jim Bromer
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=85461334-795c26
examples of problems in NP by
definition. So even if SAT and the equivalent problems were in P, that would not
mean that everything in NP is in P.)
Jim Bromer
-
Never miss a thing. Make Yahoo your homepage.
will still have
some problems that cannot be solved in P-Time.
Jim Bromer
Robin Gane-McCalla [EMAIL PROTECTED] wrote: Actually, SAT is an NP-complete
problem
(http://en.wikipedia.org/wiki/Boolean_satisfiability_problem#NP-completeness)
so if it were solvable in polynomial time, then P = NP
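Robin's point can be made concrete with a toy solver: every known general approach to SAT takes exponential time in the worst case, which is exactly why a polynomial-time algorithm would imply P = NP. A minimal sketch (the function name and DIMACS-style literal encoding are my own choices for illustration):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Check satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses; each clause is a list of nonzero ints,
    where k means variable k is true and -k means it is false
    (DIMACS-style encoding). Tries all 2^n assignments, so the
    running time is exponential in the number of variables.
    """
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)
print(brute_force_sat([[1, 2], [-1, 2], [1, -2]], 2))  # (True, True)
```

Any polynomial-time replacement for the loop above, correct on all inputs, would settle the P vs. NP question.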
. But it suggests that it can also be made much more
efficient than it otherwise would be, as soon as I figure it out (if
it can be figured out).
Thanks for the links. I just downloaded Ghostscript and I am looking forward
to studying the lecture notes.
Jim Bromer
Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Lucky
, and because this
problem has something in common with the NP-complete problem of satisfiability,
some people might be confused by the problem.
Jim Bromer
I believe that a polynomial solution to the Logical Satisfiability problem
will have a major effect on AI, and I would like to discuss that at some time.
Jim Bromer
Richard Loosemore [EMAIL PROTECTED] wrote:
This thread has nothing to do with artificial general intelligence
I had no idea what you were talking about until I read
Matt Mahoney's remarks. I do not understand why people have so much trouble
reading my messages but it is not entirely my fault. I may have misunderstood
something that I read, or you may have misinterpreted something that I was
saying.
I am disappointed because the question of how a polynomial time solution of
logical satisfiability might affect agi is very important to me.
Jim Bromer
Ben Goertzel [EMAIL PROTECTED] wrote: Hi all,
I'd like to kill this thread, because not only is it off-topic, but it seems not
to be going
On Jan 20, 2008 2:34 PM, Jim Bromer [EMAIL PROTECTED] wrote:
I am disappointed because the question of how a polynomial time solution of
logical satisfiability might affect agi is very important to me.
Ben Wrote:
Well, feel free to start a new thread on that topic, then ;-)
In fact, I will do just that. Many paradoxes can be resolved by
recognizing that determinism and randomness do not exist as separate
fundamentals of the universe.
Jim Bromer
Looking for last minute shopping deals
use to them? If it would be useful, then there is a reason
to believe that it might be useful to AGI.
Jim Bromer
---
agi
Archives: http://www.listbox.com/member
, but if a reasonable polytime general solver is feasible, then it means that
we can significantly boost computing power through software. Even if this
doesn't produce a significant leap in AI it might produce the overdue next step.
Jim Bromer
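The claim that a fast SAT solver would boost computing power through software rests on the fact that many combinatorial problems reduce to SAT. A hedged sketch of one standard encoding building block (the helper names are mine, not from any particular library): forcing exactly one of a set of Boolean variables to be true, as used in reductions of scheduling and coloring problems.

```python
from itertools import product

def exactly_one(variables):
    """CNF clauses (positive ints = true literals, negative = false)
    forcing exactly one of the given variables to be true."""
    clauses = [list(variables)]                             # at least one true
    for i in range(len(variables)):
        for j in range(i + 1, len(variables)):
            clauses.append([-variables[i], -variables[j]])  # not both true
    return clauses

def satisfies(assignment, clauses):
    """assignment maps var -> bool; check every clause has a true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

cnf = exactly_one([1, 2, 3])
models = [a for a in product([False, True], repeat=3)
          if satisfies(dict(zip([1, 2, 3], a)), cnf)]
print(models)  # exactly the three one-hot assignments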
of) imitation? I think
that childish imitation, in all of its variations, can only be explained by
theories of complex conceptual integration.
Jim Bromer
ways appropriately, how to incorporate
reason effectively and how these imaginative
processes can be integrated with empirical methods and cross analysis are
still major complications that no one has seemed to master.
Jim Bromer
it would not necessarily translate into a feasible and extensible general
program.
Jim Bromer
-
Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now.
by the way, was very helpful
in giving me some understanding of how complex ideas work. Or at least I think
it was.
Jim Bromer
start out
as being simplistic. But by carefully studying how complicated interactions
interfere or cohere I believe that some new AI principles may be found. I
will try to come up with a simple model during the next week.
Jim Bromer
On Sun, Mar 23, 2008 at 4:53 AM, Vladimir Nesov [EMAIL
On Tue, Mar 25, 2008 at 11:23 AM, William Pearson [EMAIL PROTECTED]
wrote:
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:
To try to understand what I am talking about, start by imagining a
simulation of some physical operation, like a part of a complex factory in a
Sim City
is
significant because the potential problem is so complex that constrained
models may be used to study details that would be impossible in more dynamic
learning models.
Jim Bromer
of problems that my theory is meant to address.
Jim Bromer
to an online video that were recently
posted. Is this similar to what you mean by prototyping?
Jim Bromer
perfectly.
Although this kind of talk may not solve the problem, I believe that this is
where we are going to end up working if we continue to work on the problem.
Jim Bromer
more precision, or
at least differentiation, some of the more obscure issues may eventually be
revealed.
Jim Bromer
It sounds interesting. Can anyone go and try it, or does it cost money or
something? Is it set up already?
Jim Bromer
On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
would be significant in the
advancement of AI programming.
Jim Bromer
On Sun, Mar 30, 2008 at 11:47 AM, Jim Bromer [EMAIL PROTECTED] wrote:
The issue that I am still trying to develop is whether or not a general SAT
solver would be useful for AGI. I believe it would be. So I am going to move
to a hybrid approach.
Thank you for your politeness and your insightful comments. I am
going to quit this group because I have found that it is a pretty bad
sign when the moderator mocks an individual for his religious beliefs.
However, I hope to talk to you again on some other forum.
Jim Bromer
On Mon, Mar 31, 2008 at 9:46 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
All this talk about the Lord and SAT solvers has me thinking up variations
to the Janis Joplin song
http://www.azlyrics.com/lyrics/janisjoplin/mercedesbenz.html
Oh Lord, won't you buy me
a polynomial-time SAT solution
indexing overtakes the decrease in
complexity that the indexing can offer, and this point can be reached
pretty quickly.
Jim Bromer
for any immediate AGI project, but the
novel logical methods that the solver will reveal may be more
significant.
Jim Bromer
, because it
would need to explore alternatives through the use of imagination.
Jim Bromer
are reasonable and rational. Your comment is interesting. I would
like to write more about this once I have a little more time.
Jim Bromer
concepts used in AGI. But I do
believe that some kind of 'grounding' is absolutely necessary for it.
Jim Bromer
On Thu, Apr 24, 2008 at 9:00 AM, Jim Bromer [EMAIL PROTECTED] wrote:
I appreciate what is trying to be said, but there is much more to it.
It is not a symbolization-of-words vs. symbolization-of-images issue.
Jim Bromer
More grammatically:
I appreciate what Mike and Bob are reaching
problem.
Jim Bromer
describe something of your approach for visual reasoning?
Jim Bromer
.
I want to know why he thinks complexity cannot be tolerated and
bounded by a programmed AGI system (of limited complexity).
Jim Bromer
not be
adequate to deal with this kind of complexity regardless of the amount
of memory, speed and parallelism that can be brought in?
Jim Bromer
such as
approximate correlations. But I think your insight is important: since
interactive symbolic references are not necessarily 'continuous' in some way,
they may require more elaborate methodologies to understand.
Jim Bromer
might be
used in AGI for reason-derived what-if kinds of conjectures.
Jim Bromer
.
This overlapping models theory requires the explicit use of more
complex programming constructs than are typically discussed in these AI
discussion groups. But I believe that overlapping logical models will develop
naturally in a program that is written around the theory.
Jim Bromer
this.
Jim Bromer
you are going to think without thinking it.
I don't want to get into a quibble fest, but understanding is not necessarily
constrained to prediction.
Jim Bromer
call conceptual integration. My idea of conceptual integration includes
blending but it is not limited to it. (And the computers in that era were too
wimpy.)
Jim Bromer
Hi Jim,
It's simply, I think (and I stand to be corrected), that he has never pushed
those levels very hard at all.
- Original Message
From: Matt Mahoney [EMAIL PROTECTED]
--- Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest, but understanding is not
necessarily constrained to prediction.
What would be a good test for understanding an algorithm?
-- Matt Mahoney
Richfield
---
I agree. And you have to find the instructions before you can read them.
(Seriously.)
Jim Bromer
to imply that you have some effective ability to use that
understanding in some way. A little like the woodworker who knows how to work
wood or the engineer who understands a great deal about bridges.
Jim Bromer
Jim Bromer
in your argument that an accurate result from a compressed form is equivalent
to prediction), you will
need to make it more sophisticated.
Jim Bromer
be produced. That is another weakness of
compression=understanding theory.
Jim Bromer
algorithm is just a compression algorithm.
Jim Bromer
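The compression-equals-prediction identity being debated in this thread can be illustrated numerically: under ideal arithmetic coding, the cost of a symbol is -log2 of the probability the model assigned to it, so a better predictor directly yields a shorter compressed length. A small sketch (the two predictors are invented for illustration):

```python
import math

def code_length_bits(bits, predict):
    """Ideal (arithmetic-coding) cost of a bit string under a predictor.

    predict(history) returns P(next bit = 1); the Shannon cost of each
    bit is -log2 of the probability the model assigned to the bit that
    actually occurred.
    """
    total = 0.0
    for i, b in enumerate(bits):
        p1 = predict(bits[:i])
        p = p1 if b == 1 else 1.0 - p1
        total += -math.log2(p)
    return total

data = [1, 0] * 8  # strongly patterned sequence

uniform = lambda hist: 0.5                                  # knows nothing
alternator = lambda hist: 0.9 if (not hist or hist[-1] == 0) else 0.1

print(code_length_bits(data, uniform))               # 16.0 bits
print(round(code_length_bits(data, alternator), 2))  # about 2.43 bits
```

The model that "understands" the alternating pattern compresses it to a fraction of the raw size; whether that numeric identity exhausts what "understanding" means is exactly the point in dispute here.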
now that I did not
have 7 months ago that it may actually work.
Jim Bromer
on the ruminations of the other
crackpots and cranks in these groups.
Jim Bromer
on yourself as well.
I can say that there are discussions that I find really interesting and
discussions that I do not find interesting. I usually skim over the comments
that are not very interesting to me. This message is one of those that would
not be very interesting to me.
Jim Bromer
pulling conclusions out of
thin air is just bluster.
Jim Bromer
was
expressing.
But maybe I found a different paper than was being discussed. I noticed that
the abstract he wrote for his paper was not written very well (in my opinion).
Jim Bromer
for a web site that also has
some introductory material on
how one goes about working on a listed open source project.
Jim Bromer
useful to him in his work. But to declare that an eccentric
individualistic vision of the problem is the only truly objective method that
should be used in all AI research is a case of putting the cart before the
horse.
Jim Bromer
- Original Message
From: Tudor Boloni [EMAIL PROTECTED
).
---
That is not what I was talking about.
Jim Bromer
theories must be combined and integrated with previously acquired
knowledge through some complicated processes of intelligence which are not yet
widely appreciated.
Jim Bromer
in some cases
with the efficacy of methods that would be needed to produce good results in a
greater variety of situations. So while I am sure that you are an exceptional
teacher, I am also able to assign a made up probability of .96532 that you have
not yet found the yellow brick road.
Jim
be discovered. Of course actual
experiments with AI prototypes are necessary as well, but I do feel that there
are some significant mysteries about the way ideas work and interact that are
still to be discovered.
Jim Bromer
.) It cannot be proved or disproved for
some time, it does not prove or disprove some other interesting technical
question, nor does it provide new insight into the more interesting questions
of what is feasible and what is not feasible in contemporary AI.
Jim Bromer
solution technically...
--
Instead of talking about what you would do, do it.
I mean, work out your ideal way to solve the questions of the mind and share it
with us after you've found some interesting results.
Jim Bromer
it a primary concern to me. I would say that I am interested
in the problems of complexity and integration of concepts.
Jim Bromer
- Original Message
From: Steve Richfield [EMAIL PROTECTED]
3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different
to
individuate and highlight the true nature of the complex relations being
considered.
Jim Bromer
programs that can learn and from that point of view this is very interesting.
Jim Bromer
the bottlenecks that have
been encountered with other AI paradigms of the past then it is not likely to
be a true intermediate step toward a better AI product.
Jim Bromer
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, June 15, 2008 11:02:39 AM
is acquired and becomes explicit, then the results might be
important.
Jim Bromer
difficult. But perhaps Abram's
idea could be useful
here. As the program has to deal with
more complicated collections of simple insights that concern some hard subject
matter, it could tend to rely more on approximations to manage those complexes
of insight.
Jim Bromer
.
Our conclusions are often only approximations, but they can contain
unarticulated links to other possibilities that may indicate other ways of
looking at the data or conditional variations to the base conclusion.
Jim Bromer
kinds of
question is very relevant to discussions about advanced AI.)
What do you mean by the 'figure 6' shape of cause-and-effect chains? It must
refer to some kind of feedback-like effect.
Jim Bromer
show whether advancements in complexity can make a difference to AI
even if its application does not immediately result in human-level
intelligence.
Jim Bromer
- Original Message
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, June 22, 2008 4:38:02 PM
Subject
should work, or the way AI programs and research into AI
should work?
Jim Bromer
- Original Message
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, June 23, 2008 3:11:16 PM
Subject: Re: [agi] Approximations of Knowledge
Thanks for the comments. My replies
man carrying some
books was walking behind me, I would not be too worried about that either. Your
statement was way over the line, and it showed some really bad judgment.
Jim Bromer
- Original Message
From: Steve Richfield [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, June 23
to discover the pseudo-elements (or relative elements) of the system
relative to the features of the problem.
Jim Bromer
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 24, 2008 9:02:31 PM
Subject: Re: [agi] Approximations
in the technical sense
of that term, which does not mean a complicated system in ordinary
language).
Richard Loosemore
--
I don't feel that you are seriously interested in discussing the subject with
me. Let me know if you ever change your mind.
Jim Bromer
is the reality of advanced
AI programming.
But if you are throwing technical arguments at me, some of which are trivial
from my perspective, like the definition of continuous mathematics (as
distinguished from discrete mathematics), then all I can do is wonder why.
Jim Bromer
in the future that you would
like to discuss this with me please let me know.
Jim Bromer
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, June 27, 2008 9:13:01 PM
Subject: Re: [agi] Approximations of Knowledge
Jim Bromer wrote:
From: Richard
that problems are solved through study and
experimentation, Richard has no response to the most difficult problems in
contemporary AI research except to cry foul. He does not even consider such
questions to be valid.
Jim Bromer
referring to and I only
glanced at one paper on SHRUTI but I am pretty sure that I got enough of what
was being discussed to talk about it.)
Jim Bromer
can be helpful in the analysis of the kinds of problems that can be
expected from more ambitious AI models.
Jim Bromer
theories. However, I will not know for sure until I test it, and right now
that looks like it would be years off.
I would be happy to continue the dialog if it can be conducted in a less
confrontational and more genial manner than it has been during the past week.
Jim Bromer
Jim, you get this one on me.
;-) Let me know if you're interested. I have everything I need to get started
right away.
Cheers,
Brad
---
Dude... Get a life.
I mean that in the friendliest way possible, but honestly. Get a life.
Jim Bromer
Mike said:
I didn't emphasize the first flaw in logic, (which is more relevant to
your question, and why such questions will keep recurring and can
never be *methodologically* sorted out) - the assumption that we know
what the terms *refer to*. Example:
Mary says Clinton had sex with her.
Clinton
ideas. But this
means that the program has to be able to deal with greater complexity.
Jim Bromer
On Mon, Jul 28, 2008 at 10:04 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
Here is an example of a problematic inference:
1. Mary has cybersex with many different partners
2. Cybersex is a kind
have to
recognize that rules or rule-like systems need to be applied so that
the program could learn to recognize that additional information that
is derived from the IO environment can be applied to another situation
to develop a more sophisticated understanding of some other rule.
Jim Bromer
in the
surface input data.
Jim Bromer
us to delineate some of the processes of thinking, with the hope of finding
feasible ways this might be done in an AI program.
Jim Bromer
On Mon, Jul 28, 2008 at 5:23 PM, James Ratcliff [EMAIL PROTECTED] wrote:
It is fairly simple at that point, we have enough context to have a very
limited
grounded, I would like to see some research that shows
that unknown words will not strongly activate any neurons. Take your
time, I am only asking a question, not challenging you to fantasy
combat.
Jim Bromer
and
an objective appreciation of the frame and nature of the kinds of
experiments which would be required to examine them scientifically.
We all have the ability to help and guide each other toward achieving
our personal goals while improving our social skills at the same time.
It's not rocket science.
Jim
.
My point is: I often get something out of these conversations even
though other people's thinking is usually very different from mine.
Jim Bromer
I seriously meant it to be a friendly statement. Obviously I
expressed myself poorly.
Jim Bromer
On Sun, Aug 3, 2008 at 6:41 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
This from the guy who only about three or four days ago responded to a post
I made here by telling me to get a life
that would provide them with more grounding.
But first we have to figure it out, because there is not a robot in
the world that will be able to figure it out before we do.
Jim Bromer
in complexity is a primary
problem that has to be solved if these kinds of programs are ever
going to be capable of the kind of higher reasoning that we are
thinking of.
Jim Bromer
on all the terms as you were
talking about them before. However, I did, at least, get the essence
of what you are working on. If you want to share a draft of the paper
let us know, because I would be interested in looking at it.
Jim Bromer
about its IO data environment through its interactions with
it. This is a subtle argument that cannot be dismissed with an appeal
to a hidden presumption of the human dominion over understanding or by
fixing it to some primitive theory about AI which was unable to learn
through trial and error.
Jim
what other experts in the field think is being imaged
through the method.
Jim Bromer
more to be learned. The apparent paradox can be reduced to the never-ending
determinism vs. free will argument. I think the resolution of these two
paradoxical problems is a necessary design principle.
Jim Bromer
, no one else is even talking about it.
Everyone knows it's a problem, but everyone thinks their particular
theory has already solved the problem. I say it should be the focus
of study and experiment.
Jim Bromer