Six and a half years ago I had an experience in which I interpreted
something I overheard someone say as a possible indication from the
Lord that I would find a polynomial-time solution to the logical
satisfiability problem. Even though I thought the chance that I would
actually be able to do it was extremely remote, I felt that even the
slightest possibility was worth exploring. I also realized it would be
a good example of the rationality of belief. So I decided to try to
recall everything I was thinking about at that moment and to keep
working on the problem.

I am just about on my last effort. If the current method I am
considering does not lead to something effective, I will concede that
there is no good reason for my belief that the Lord actually told me
(through someone else's words) that I would solve this problem.

I am not sure why this kind of statement angers some people, because it
seems to me that the rational construction of beliefs is a significant
part of AI, and I don't talk about religious beliefs in the
overwhelming majority of the messages that I post.

My current method has passed an important preliminary feasibility test:
there is nothing inherently exponential (or NP-hard) about expanding it
for the simplest 3-SAT problems. That does not mean much in itself, but
it would have mattered if the method had failed. One thing that is
encouraging is that the few variations I tried on the method seemed to
be very inefficient. Why is this encouraging? Because it means that if
the current method is at all useful, then it should be possible to find
further efficiencies. There is one problem, however. I am not sure that
the step-by-step conversion is really going to be in P, because I had
to rely on intuition to find good compressions (which are used in the
subsequent conversion step). While these were easy to find in the
simplest problems, they may become more elusive in more complicated
ones. But, as I said, it would have been a serious problem if the
method had failed this preliminary test.
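The method itself is not described here, but for readers less familiar with the problem, here is a minimal sketch (my own illustration, not the method above) of what 3-SAT asks and why the naive approach is exponential: every clause has three literals, and brute force tries all 2^n truth assignments, which is exactly the cost a polynomial-time method would have to avoid.

```python
from itertools import product

def brute_force_3sat(num_vars, clauses):
    """Try every truth assignment; cost is 2^n in the number of variables.
    A clause is a tuple of nonzero ints in DIMACS style: positive i means
    variable i is true, negative i means variable i is false (1-indexed)."""
    for assignment in product([False, True], repeat=num_vars):
        # A clause is satisfied if at least one of its literals is true;
        # the formula is satisfied if every clause is.
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # no assignment works: unsatisfiable

# (x1 or x2 or not x3) and (not x1 or x3 or x2)
clauses = [(1, 2, -3), (-1, 3, 2)]
print(brute_force_3sat(3, clauses))  # -> (False, False, False)
```

Any polynomial-time alternative would have to reach the same answer without ever enumerating the assignment space.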

One other coincidence: I started thinking that an advance in logical
satisfiability might help create an advance in paralysis research, and
this motivated me to work on the problem some more. But then I started
wondering about that; there is not much reason to believe that an
advance in logic would translate into a major advance in medical
biochemical research any time soon. But just as I was starting to think
of giving up on my unlikely effort, I happened to see a paralyzed
person walking in a robotic exoskeleton-boot setup. I realized that
even though an advance in logical satisfiability might not foster a
major medical breakthrough, it would definitely advance robotics
research big-time.
Jim Bromer



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now