But there are other questions about a system that cannot be stated as
theorems within the system.  Some of these may be studied using
approximations, for example.  Now if the system were, as far as we could
tell, complete, then (continuing with the example I have in mind) we might
entertain the idea that all the approximations could one day be reduced to
absolute theorems (about the continuum, for example).

If the underlying methods of logic and mathematics are incomplete, then
their application (or the application of methods that are both consistent
and decidable) is clearly going to be incomplete as well.  This means that
a system powerful enough to represent all interesting systems (about some
referential events) using a method that is both consistent and potentially
complete is impossible, at least with the only systems that we presently
know about.  It follows that there will always be new things to discover,
some of which will be interesting and some of which will concern things
that we already know something about.

Suppose a program had an extensive amount of information about the usage of
a compound concept, but it had to go through an extraordinary amount of
processing in order to determine which usage was appropriate for a
particular situation.  If the problem were mathematical, we might hope that
a logically sound look-ahead method could determine whether a decision is
feasible.  The halting problem shows that this may not be possible in
general.  So there may be some (or many) decision processes that are
extremely complex and time consuming, because there is no way even to check
whether a decision is feasible for all problems.  As a workaround we might
use an approximation of the solution, but even here the approximation
method is not necessarily convergent.  The halting problem does not prove
this, but (I believe) a variation of the halting problem could.
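The workaround described above can be sketched in a few lines.  The names
here (bounded_halts and the two sample programs) are hypothetical, chosen
only for illustration: since no general halting test exists, the best a
program can do is a step-bounded approximation that can answer "halted" but
never a definite "never halts", which is exactly the non-convergence worry.

```python
def bounded_halts(program, max_steps):
    """Approximate halting check: run the program for at most max_steps
    steps.  Returns True if it finished, or None (undecided) if the step
    budget ran out.  Note it can never return a definite False -- raising
    max_steps only ever converts None answers into True answers."""
    gen = program()  # program is a generator function; each yield is one step
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True  # the program ran to completion
    return None  # budget exhausted: halting status remains unknown


def halts_quickly():
    # A program that halts after a few steps.
    for _ in range(3):
        yield


def loops_forever():
    # A program that never halts; bounded_halts can only say "undecided".
    while True:
        yield


print(bounded_halts(halts_quickly, 100))   # True
print(bounded_halts(loops_forever, 100))   # None
```

No matter how large max_steps is made, the second answer stays undecided,
so the sequence of approximations never converges to a verdict for
non-halting programs.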

I was a little dubious of Russell's statement at first, but when I started
thinking about it...

Jim Bromer

On Tue, Mar 19, 2013 at 4:51 PM, Aaron Hosford <[email protected]> wrote:

> All Godel's Theorem says is that there are statements that can't be proven
> or disproven within an axiomatic system sufficiently rich to
> represent arithmetic operations. It doesn't say how interesting those
> statements are. Really, after the novelty has worn off, how important or
> interesting is the statement "This statement cannot be proven within the
> current axiomatic system"? Outside of the general implications for
> mathematics as a field, who cares whether that particular statement can or
> can't be proven? Just because there is a guarantee that there will always
> be new things to discover doesn't mean they will be interesting.
>
>
> On Mon, Mar 18, 2013 at 8:21 PM, just camel <[email protected]> wrote:
>
>> Neither the Halting Problem nor Goedel's incompleteness theorem says that
>> there will always be new things to discover? Could you elaborate on that?
>> Being unable to predict/prove things within system A does not say anything
>> about whether you can actually discover new stuff ad infinitum? Just
>> because I cannot disprove the existence of God or turtles carrying the
>> cosmos on their shoulders does not mean that I will discover anything? If
>> we are living in a perfect simulation, the system will not allow you to find
>> out anything about the entity running those simulations, for example. If you
>> cannot access anything outside of your framework, then you will have a hard
>> time discovering new things, and Goedel's theorem will still hold true.
>>
>>
>> On 03/14/2013 01:52 PM, Russell Wallace wrote:
>>
>>> The answer turns out to be no, not as a matter of opinion, but as a
>>> matter of mathematical proof: check out Godel's theorem, the Halting
>>> Problem etc. No matter how much you know, there will always be new
>>> discoveries to make.
>>>
>>
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/23050605-2da819ff
>> Modify Your Subscription: https://www.listbox.com/member/?&id_secret=23050605-53e85d0d
>>
>> Powered by Listbox: http://www.listbox.com
>>
>
>


