Re: [agi] How AI will kill us

2024-03-29 Thread Matt Mahoney
On Thu, Mar 28, 2024, 5:56 PM Keyvan M. Sadeghi wrote: > The problem with finer grades of like/dislike is that it slows down humans another half a second, which adds up over thousands of times per day. I'm not sure the granularity of the feedback mechanism is the problem. I think the

[agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-29 Thread James Bowery
I got involved with the Alternative Natural Philosophy Association back in the late 1990s when I hired one of the attendees of the Dartmouth Summer of AI Workshop, Tom Etter, to work on the foundation of programming languages.  ANPA was founded on the late 1950s discovery of the Combinatorial

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
Musical tuning and resonant conspiracy? Coincidentally, I spent some time researching that just today. It seems that, while the tuning of instruments is a matter of personal taste (e.g., Verdi tuning), there's no real merit to the claim that the pitch of a musical instrument affects humankind, or the cosmos. Having said

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
The fine structure constant, in conjunction with the triple-alpha process, could be coded and managed via AI. Computational code. On Fri, Mar 29, 2024, 16:18 Quan Tesla wrote: > 3rd point. The potential exists to bring any form to same functions, where > gestalt as an emergent property may be
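As a side note on "coding" the fine structure constant: its value can at least be reproduced from the CODATA 2018 defining constants. A minimal sketch in Python (stdlib only; the variable names and tolerance are illustrative, not from the thread):

```python
import math

# CODATA 2018 values (SI units); e and c are exact by definition
e     = 1.602176634e-19    # elementary charge, C
c     = 299792458.0        # speed of light, m/s
h_bar = 1.054571817e-34    # reduced Planck constant, J*s
eps0  = 8.8541878128e-12   # vacuum permittivity, F/m

# fine-structure constant: alpha = e^2 / (4 * pi * eps0 * h_bar * c)
alpha = e**2 / (4 * math.pi * eps0 * h_bar * c)

print(f"alpha   = {alpha:.10f}")   # ~0.0072973526
print(f"1/alpha = {1 / alpha:.6f}")  # ~137.035999
```

This only reproduces the measured value from other constants, of course; it says nothing about "managing" alpha, which is a dimensionless constant of nature.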

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
4th point. The matrix is an illusion. It glitches and shifts whimsically, as does AI. By contrast, the aether is relatively stable and "hackable", meaning interactively understandable. AGI could potentially be similar to the aether. Limited, but similar. On Fri, Mar 29, 2024, 16:18 Quan Tesla

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
3rd point. The potential exists to bring any form to the same functions, where gestalt as an emergent property may be different, in being a function of the overall potential. Meaning, gestalt may be more real in a natural sense than engineered form. On Fri, Mar 29, 2024, 15:33 John Rose wrote: >

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
To your first point: constants are taken as being persistent within a narrow range. There's no use modifying them out-of-range experimentally. Hence, the problem with the cosmological constant. To your 2nd point: not enough physics in "soft" engineering. Agreed. On Fri, Mar 29, 2024, 15:33 John

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote: > Alpha won't directly result in AGI, but it probably did result in all > intelligence on Earth, and would definitely resolve the power issues plaguing > AGI (and much more), especially as Moore's Law may be stalling, and > Kurzweil's