RE: my model revision
I had some trouble with this post the first time. It is in the archives but I got no bounce back, so I am not sure it got distributed, and this is an unfamiliar computer. The post is only about a page, so I posted it again. Sorry if it duplicates a distribution that worked before.

Hal Ruhl

Hi Everyone:

I have not posted for a while, but here is the latest revision to my model:

Hal Ruhl

DEFINITIONS: V k 04/03/10

1) Distinction: That which describes a cut [boundary], such as the cut between red and other colors.

2) Devisor: That which encompasses a quantity of distinctions. Some devisors are collections of devisors. [A devisor may be "information" but I will not use that term here.] Since a distinction is a description, a devisor is a quantity of descriptions. [A description can be encoded in a number, so a devisor may be simply a number encoding some multiplicity of distinctions. There is no restriction on the variety of encoding schemes, so the number can include them all. I wish not to include other properties of numbers herein and mention them only in passing to establish a possible link.]

3) Incomplete: The inability of a devisor to answer a question that is meaningful to that devisor. [This has a mirror image in inconsistency, wherein all possible answers to a meaningful question are in the devisor [yes and no, true and false, etc.].]

MODEL:

1) Assumption #1: There exists a complete ensemble [possibly a "set," but I wish not to use that term here] of all possible devisors - call it the "All." [The "All" may be the "Everything," but I wish not to use that term here.]

2) The All therefore encompasses every distinction. The All is thus itself a devisor and therefore contains itself an unbounded number of times.

3) Define N(j) as devisors that encompass a zero quantity of distinction. Call them Nothings. By definition each copy of the All contains at least one N(j).

4) Define S(k) as devisors that encompass a non-zero quantity of distinction but not all distinction. Call them Somethings.

5) An issue that arises is whether or not a particular devisor is static or dynamic in any way [the relevant possibilities are discussed below]. Devisors cannot be both. This requires that all devisors individually encompass the self-referential distinction of being static or dynamic.

6) From #3, one devisor type - the Nothings - encompasses zero distinction but must encompass this static/dynamic distinction; thus they are incomplete.

7) The N(j) are thus unstable with respect to their zero-distinction condition [dynamic one]. They each must at some point spontaneously "seek" to encompass this static/dynamic distinction. That is, they spontaneously become Somethings.

8) Somethings can also be incomplete and/or inconsistent.

9) The result is a "flow" of a "condition" from an incomplete and/or inconsistent Something to a successor Something that encompasses a new quantity of distinction.

10) The "condition" is whether or not a particular Something is the current terminus of a path.

11) Since a Something can have a multiplicity of successors, the "flow" is a multiplicity of paths of successions of Somethings until a complete Something is arrived at, which stops the individual path [i.e., a path stasis [dynamic three]].

12) Some members of the All describe individual states of universes.

13) Our universe's path would be a succession of such members of an All. A particular succession of Somethings can vary from fully random to strictly driven by the incompleteness and/or inconsistency of the current terminus Something. I suspect our universe's path has until now been close to the latter.

-- You received this message because you are subscribed to the Google Groups "Everything List" group. To post to this group, send email to everything-l...@googlegroups.com. To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
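The "devisor as a number encoding some multiplicity of distinctions" idea in definition 2 above can be given a minimal concrete sketch. This is an illustrative assumption, not part of the original model: here each named distinction becomes one bit of an integer, so a Nothing encodes to 0 and richer devisors encode to larger numbers. The distinction names are invented for the example.

```python
# Minimal sketch (illustrative, not from the post): encode a devisor,
# i.e. a set of distinctions, as a single integer bitmask.
DISTINCTIONS = ["red/other-colors", "static/dynamic", "inside/outside"]

def encode(devisor):
    """Encode a set of distinction names as one integer."""
    n = 0
    for name in devisor:
        n |= 1 << DISTINCTIONS.index(name)
    return n

def decode(n):
    """Recover the set of distinction names from the integer."""
    return {name for i, name in enumerate(DISTINCTIONS) if n & (1 << i)}

nothing = encode(set())                  # a Nothing: zero distinctions -> 0
something = encode({"static/dynamic"})   # a Something: one distinction -> 2
print(nothing, something, decode(something))
```

Under this toy encoding, "encompassing more distinction" simply means setting more bits; the choice of bitmasks stands in for the post's unrestricted "variety of encoding schemes."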
RE: everything-list and the Singularity
I believe Stephen Jay Gould indicated evolution was a random walk with a lower bound. It seems reasonable that the longest random walk would more or less double in length more or less periodically, i.e., exponential growth.

Hal Ruhl

_
From: everything-list@googlegroups.com [mailto:everything-l...@googlegroups.com] On Behalf Of Jason Resch
Sent: Sunday, April 04, 2010 10:46
To: everything-list@googlegroups.com
Subject: Re: everything-list and the Singularity

Hello Skeletori,

Welcome to the list. I enjoy your comments and rationale regarding personal identity and why we should consider "I" to be the universe / multiverse / or the everything. I have some comments regarding the technological singularity below.

On Sat, Apr 3, 2010 at 5:23 PM, Skeletori wrote:

Hello! I have some tentative arguments on TS and wanted to put them somewhere where knowledgeable people could comment. This seemed like a good place. I also believe in an ultimate ensemble but that's a different story.

Let's start with intelligence explosion. This part is essentially the same as Hawkins' argument against it (it can be found on the Wikipedia page on TS).

When we're talking about self-improving intelligence, making improved copies of oneself, we're talking about a very, very complex optimization problem. So complex that our only tool is heuristic search, making guesses and trying to create better rules for taking stabs in the dark.

The recursive optimization process improves by making better heuristics. However, an instinctual misassumption behind IE is that intelligence is somehow a simple concept and could be recursively leveraged not only descriptively but also algorithmically. If the things we want a machine to do have no simple description then it's unlikely they can be captured by simple heuristics. And if heuristics can't be simple then the metasearch space is vast. I think some people don't fully appreciate the huge complexity of self-improving search.
The notion that an intelligent machine could accelerate its optimization exponentially is just as implausible as the notion that a genetic algorithm equipped with open-ended metaevolution rules would be able to do so. It just doesn't happen in practice, and we haven't even attempted to solve any problems that are anywhere near the magnitude of this one.

So I think that the flaw in IE reasoning is that there should, at some higher level of intelligence, emerge a magic process that is able to achieve miraculous things.

If you accept that, it precludes the possibility of TS happening (solely) through an IE. What then about Kurzweil's law of accelerating returns? Well, technological innovation is similarly a complex optimization problem, just in a different setting. We can regard the scientific community as the optimizing algorithm here and come to the same conclusions as with IE. That is, unless humans possess some kind of higher intelligence that can defeat heuristic search. I don't think there's any reason to believe that.

Complex optimization problems exhibit the law of diminishing returns and the law of fits and starts, where the optimization process gets stuck in a plateau for a long time, then breaks out of it and makes quick progress for a while. But I've never seen anything exhibiting a law of accelerating returns. This would imply that, e.g., Moore's law is just "an accident", a random product of exceedingly complex interactions. It would take more than some plots of a few data points to convince me to believe in a law of accelerating returns.

If not the plots, what would it take to convince you? I think one should accept the law of accelerating returns until someone can describe what accident caused the plot.
Kurzweil's page describes a model and assumptions which re-create the real-world data plot: http://www.kurzweilai.net/articles/art0134.html?printable=1 It is a rather long page; Ctrl+F for "The Model considers the following variables:" to find where he describes the reasoning behind the law of accelerating returns.

It also depends on how one defines exponential growth, as one can always take X as exp(X) - I suppose we want the exponential growth of some variable that is needed for TS and whose linear growth corresponds to linear increase in "technological ability" (that's very vague, can anybody help here?).

In conclusion, I haven't yet found a credible lawlike explanation of anything that could cause a "runaway" TS where things become very unpredictable. All comments are welcome.

I think intelligence optimization is composed of several different, but interrelated, components, and that it makes sense to clearly define these components of intelligence rather than talk about intelligence as a single entity. I think intelligence embodies:

1. knowledge - information that is useful for something
2. memory - the capacity to store, index and organize information
3. processing rate - the rate at which information can be processed
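The random-walk-with-a-lower-bound picture at the top of this post can be made concrete with a small simulation. This is an illustrative sketch, not from the original posts: a symmetric one-step random walk reflected at zero (the "left wall"), tracking the running maximum, i.e. the "longest walk so far." How quickly that maximum actually grows is exactly the empirical question at issue.

```python
import random

def reflected_walk_max(steps, seed=0):
    """Symmetric +/-1 random walk with a reflecting lower bound at 0.
    Returns the running maximum after every step."""
    rng = random.Random(seed)
    pos, peak, peaks = 0, 0, []
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        if pos < 0:          # reflecting lower bound: Gould's "left wall"
            pos = 0
        peak = max(peak, pos)
        peaks.append(peak)
    return peaks

peaks = reflected_walk_max(10_000)
# The running maximum never decreases; inspecting it at 100, 1000, and
# 10000 steps shows how fast the record actually grows.
print(peaks[99], peaks[999], peaks[9999])
```

One design note: tracking the running maximum separates the claim "the record grows" (guaranteed) from the claim "the record grows exponentially" (the contested part), so the sketch illustrates the setup without assuming the conclusion.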
Re: everything-list and the Singularity
Hello Skeletori,

Welcome to the list. I enjoy your comments and rationale regarding personal identity and why we should consider "I" to be the universe / multiverse / or the everything. I have some comments regarding the technological singularity below.

On Sat, Apr 3, 2010 at 5:23 PM, Skeletori wrote:
> Hello!
>
> I have some tentative arguments on TS and wanted to put them somewhere
> where knowledgeable people could comment. This seemed like a good
> place. I also believe in an ultimate ensemble but that's a different
> story.
>
> Let's start with intelligence explosion. This part is essentially the
> same as Hawkins' argument against it (it can be found on the Wikipedia
> page on TS).
>
> When we're talking about self-improving intelligence, making improved
> copies of oneself, we're talking about a very, very complex
> optimization problem. So complex that our only tool is heuristic
> search, making guesses and trying to create better rules for taking
> stabs in the dark.
>
> The recursive optimization process improves by making better
> heuristics. However, an instinctual misassumption behind IE is that
> intelligence is somehow a simple concept and could be recursively
> leveraged not only descriptively but also algorithmically. If the
> things we want a machine to do have no simple description then it's
> unlikely they can be captured by simple heuristics. And if heuristics
> can't be simple then the metasearch space is vast. I think some people
> don't fully appreciate the huge complexity of self-improving search.
>
> The notion that an intelligent machine could accelerate its
> optimization exponentially is just as implausible as the notion that a
> genetic algorithm equipped with open-ended metaevolution rules would
> be able to do so. It just doesn't happen in practice, and we haven't
> even attempted to solve any problems that are anywhere near the
> magnitude of this one.
> So I think that the flaw in IE reasoning is that there should, at some
> higher level of intelligence, emerge a magic process that is able to
> achieve miraculous things.
>
> If you accept that, it precludes the possibility of TS happening
> (solely) through an IE. What then about Kurzweil's law of accelerating
> returns? Well, technological innovation is similarly a complex
> optimization problem, just in a different setting. We can regard the
> scientific community as the optimizing algorithm here and come to the
> same conclusions as with IE. That is, unless humans possess some kind
> of higher intelligence that can defeat heuristic search. I don't think
> there's any reason to believe that.
>
> Complex optimization problems exhibit the law of diminishing returns
> and the law of fits and starts, where the optimization process gets
> stuck in a plateau for a long time, then breaks out of it and makes
> quick progress for a while. But I've never seen anything exhibiting a
> law of accelerating returns. This would imply that, e.g., Moore's law
> is just "an accident", a random product of exceedingly complex
> interactions. It would take more than some plots of a few data points
> to convince me to believe in a law of accelerating returns.

If not the plots, what would it take to convince you? I think one should accept the law of accelerating returns until someone can describe what accident caused the plot.

Kurzweil's page describes a model and assumptions which re-create the real-world data plot: http://www.kurzweilai.net/articles/art0134.html?printable=1 It is a rather long page; Ctrl+F for "The Model considers the following variables:" to find where he describes the reasoning behind the law of accelerating returns.
> It also depends on how one defines exponential growth, as one can
> always take X as exp(X) - I suppose we want the exponential growth of
> some variable that is needed for TS and whose linear growth
> corresponds to linear increase in "technological ability" (that's very
> vague, can anybody help here?).
>
> In conclusion, I haven't yet found a credible lawlike explanation of
> anything that could cause a "runaway" TS where things become very
> unpredictable.
>
> All comments are welcome.

I think intelligence optimization is composed of several different, but interrelated, components, and that it makes sense to clearly define these components of intelligence rather than talk about intelligence as a single entity. I think intelligence embodies:

1. knowledge - information that is useful for something
2. memory - the capacity to store, index and organize information
3. processing rate - the rate at which information can be processed

The faster the processing rate, the faster knowledge can be applied and the faster new knowledge may be acquired. There are several methods in which new knowledge can be generated: Searching for patterns and relations within the existing store of knowledge (data mining). Proposing and investigating currently unknown areas (research). Applying crea
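Skeletori's quoted point that "one can always take X as exp(X)" - that is, whether growth looks exponential depends on how the variable is parameterized - can be sketched numerically. This is an illustrative example, not from the posts: a quantity X that grows linearly in time, relabeled as Y = exp(X), shows a constant step-to-step ratio, the signature of exponential growth, even though nothing about the underlying history changed.

```python
import math

# Sketch of the reparameterization point: if "ability" X grows linearly
# in time t, the relabeled variable Y = exp(X) grows exponentially in t.
ts = range(10)
X = [t for t in ts]                  # linear growth of X over time
Y = [math.exp(x) for x in X]         # the same history, relabeled

# A constant successive ratio is the hallmark of exponential growth.
ratios = [Y[i + 1] / Y[i] for i in range(len(Y) - 1)]
print(ratios[0], ratios[-1])         # every ratio equals e ≈ 2.718
```

This is why the post asks for a variable whose *linear* growth corresponds to linear increase in "technological ability": without pinning that down, "exponential" is a statement about the chosen coordinates as much as about the process.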
Re: Generalization and Observers
Hi! I read some more and noticed that this line of thought is quite common here. So feel free to ignore this :).
Re: Free will: Wrong entry.
I also think that free will is a meaningless concept, for many reasons. Like, let's say I'm in situation X and can choose A or B. What is it that could make me choose differently in an otherwise identical situation? Presumably my will. But the will has to be part of me to be my free will, thus I'm not identical, and the situation is not identical either. Then there are regression problems and the assumption behind free will that I'm a person who's traveling on a single path through time, so I can only pick A or B but not both.
What physical object am I?
I'm examining this question: if I'm a physical object, what object am I? We'll try two competing hypotheses:

1. I am a body. This hypothesis has massive descriptive complexity (Kolmogorov complexity), as we must trace its form in spacetime when we describe it. It has no clear boundaries in spacetime. It is not causally separate from its environment. However, if the mind, the cognitive process that rationalizes existence, doesn't look after the body, the body will die.

2. I am the universe. This hypothesis is simple and concise. It is a self-contained physical object. It is causally complete.

I'd say the fact that the mind must take care of the body is contingent and should carry less weight to a materialist than more fundamental physical arguments like causality. But of course this is metaphysics :). Note that the physical processes that are responsible for cognition exist inside the body, but they exist just as equally inside the universe.

The descriptive complexity of the universe can be questioned. What if the universe is part of some multiverse and doesn't exist as an independent physical object? Then we would need to describe how it's a part of this multiverse, as we can't take a non-physical thing as a descriptional primitive, increasing the complexity. In that case we would simply carry the abstraction one step further and assume the hypothesis "I am the multiverse", which is again simple. This process can be continued until only one object remains, I.

To continue on a tangent, I believe the person to be a necessary mental model for survival, but serious thought should be given before it's turned from an unstable psychological identity to something more fundamental.
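The descriptive-complexity comparison above can be given a rough computable analogue. Kolmogorov complexity itself is uncomputable, but compressed size is a standard upper-bound proxy for it. The sketch below is illustrative only (the strings stand in for "an object governed by a simple law" versus "an object with no short description"; nothing here is from the post):

```python
import random
import zlib

def description_length(s: str) -> int:
    """Compressed size in bytes - a crude, computable upper bound on
    Kolmogorov complexity (the true K(x) is uncomputable)."""
    return len(zlib.compress(s.encode()))

# A highly regular "object" (a simple rule repeated) versus an
# irregular one of the same length with no short description.
regular = "ab" * 5000
rng = random.Random(42)
irregular = "".join(rng.choice("ab") for _ in range(10_000))

print(description_length(regular), description_length(irregular))
# The regular string compresses far better: its estimated descriptive
# complexity is much lower despite equal length.
```

This is the intuition behind preferring the "simple and concise" hypothesis: an object specified by a short rule needs a short description, while one whose boundaries must be traced point by point does not compress.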
JOINING post
Hi! My academic background is an MSc in computer science including lots of math, plus some years of graduate studies and AI research. I'm a very rational person (with Asperger's) and am interested in many intellectual topics. I subscribed to the multiverse idea a long time ago by following this simple line of thought: if something exists, is there a reason for something else not to exist? I answered no and concluded that everything exists :).
Generalization and Observers
I think the "everything exists" hypothesis is a very nice generalization from a universe to a multiverse. I suggest that the same generalization can be carried out for observers: by reducing them to just one we can avoid many problems. For instance: Am I following some path in time? How is the path chosen? If I follow many paths, will I end anywhere? When did I begin?

When the number of observers is reduced to one, then we can think of "successive" OMs (I think of an OM here as consciousness and its contents, not necessarily a whole brain) being generated at random from the set of all OMs. But we may as well say that the one observer (some would call it Brahman) is experiencing everything simultaneously. Time in this model is a product of mind anyway, as there are no paths to follow.

I believe (I'm still working on arguments) our metaphysical identity is very simple, just being. The various manifestations like brains where cognitive processes take place would be secondary to "pure existence" in some sense. Ultimately, the only thing that can be said about an OM is P(X | X) = 1 or something :).
Re: everything-list and the Singularity
Hello! I have some tentative arguments on TS and wanted to put them somewhere where knowledgeable people could comment. This seemed like a good place. I also believe in an ultimate ensemble but that's a different story.

Let's start with intelligence explosion. This part is essentially the same as Hawkins' argument against it (it can be found on the Wikipedia page on TS).

When we're talking about self-improving intelligence, making improved copies of oneself, we're talking about a very, very complex optimization problem. So complex that our only tool is heuristic search, making guesses and trying to create better rules for taking stabs in the dark.

The recursive optimization process improves by making better heuristics. However, an instinctual misassumption behind IE is that intelligence is somehow a simple concept and could be recursively leveraged not only descriptively but also algorithmically. If the things we want a machine to do have no simple description then it's unlikely they can be captured by simple heuristics. And if heuristics can't be simple then the metasearch space is vast. I think some people don't fully appreciate the huge complexity of self-improving search.

The notion that an intelligent machine could accelerate its optimization exponentially is just as implausible as the notion that a genetic algorithm equipped with open-ended metaevolution rules would be able to do so. It just doesn't happen in practice, and we haven't even attempted to solve any problems that are anywhere near the magnitude of this one.

So I think that the flaw in IE reasoning is that there should, at some higher level of intelligence, emerge a magic process that is able to achieve miraculous things.

If you accept that, it precludes the possibility of TS happening (solely) through an IE. What then about Kurzweil's law of accelerating returns? Well, technological innovation is similarly a complex optimization problem, just in a different setting.
We can regard the scientific community as the optimizing algorithm here and come to the same conclusions as with IE. That is, unless humans possess some kind of higher intelligence that can defeat heuristic search. I don't think there's any reason to believe that.

Complex optimization problems exhibit the law of diminishing returns and the law of fits and starts, where the optimization process gets stuck in a plateau for a long time, then breaks out of it and makes quick progress for a while. But I've never seen anything exhibiting a law of accelerating returns. This would imply that, e.g., Moore's law is just "an accident", a random product of exceedingly complex interactions. It would take more than some plots of a few data points to convince me to believe in a law of accelerating returns.

It also depends on how one defines exponential growth, as one can always take X as exp(X) - I suppose we want the exponential growth of some variable that is needed for TS and whose linear growth corresponds to linear increase in "technological ability" (that's very vague, can anybody help here?).

In conclusion, I haven't yet found a credible lawlike explanation of anything that could cause a "runaway" TS where things become very unpredictable.

All comments are welcome.
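The genetic-algorithm analogy above can be made concrete with a toy experiment. The sketch below is illustrative only, not anyone's proposed implementation: a (1+1) evolutionary algorithm on the standard OneMax problem, with the mutation rate itself subject to mutation as a crude stand-in for "open-ended metaevolution." The best-so-far fitness can only go up, and on problems like this improvements become rarer as the optimum nears, which is the diminishing-returns pattern the post describes.

```python
import random

def one_plus_one_ea(n=200, steps=3000, seed=1):
    """(1+1) evolutionary algorithm on OneMax (maximize the number of
    1-bits) with a self-adaptive mutation rate - a toy stand-in for
    'a genetic algorithm with metaevolution rules'."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(n)]
    rate = 1.0 / n                       # per-bit mutation probability
    best = sum(genome)
    history = []
    for _ in range(steps):
        # mutate both the genome and the mutation rate itself
        new_rate = min(0.5, max(1.0 / n, rate * rng.choice((0.5, 1.0, 2.0))))
        child = [b ^ (rng.random() < new_rate) for b in genome]
        fit = sum(child)
        if fit >= best:                  # elitist: keep improvements and ties
            genome, best, rate = child, fit, new_rate
        history.append(best)
    return history

h = one_plus_one_ea()
# Best-so-far fitness at the start, after 10% of the run, and at the end;
# late-stage gains come much more slowly than early ones.
print(h[0], h[len(h) // 10], h[-1])
```

The design choice worth noting: making the mutation rate evolvable is exactly the kind of "improving the improver" step the intelligence-explosion argument invokes, yet here it changes the constants, not the diminishing-returns shape of the curve.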