Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread immortal . discoveries
*now 6.5 months ago..not "not" -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M8af033a8bd9db72c85b629e6 Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread immortal . discoveries
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote: > Time's running out. How many years of talking shit on this forum and still no > real progress to show? Hands up! How many here entered serious contracts of > collaborative AGI via this forum?  > Haha ya. The best are below, all made

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread John Rose
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote: > Musical tuning and resonant conspiracy? Coincidentally, I spent some time > researching that just today. Seems, while tuning of instruments is a matter > of personal taste (e.g., Verdi tuning)  there's no real merit in the pitch of > a

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:35 PM John Rose wrote: > On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote: > > Prediction measures intelligence. Compression measures prediction. > > > Can you reorient the concept of time from prediction? If time is on an > axis, if you reorient the time

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote: > Prediction measures intelligence. Compression measures prediction. Can you reorient the concept of time from prediction? If time is on an axis, if you reorient the time perspective is there something like energy complexity? The
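Mahoney's slogan "compression measures prediction" follows from Shannon's source-coding bound: a model that assigns probability p to the next symbol can code it in -log2 p bits, so the better the predictor, the shorter the file. A minimal sketch (order-0 byte model vs. zlib; the sample text is only illustrative):

```python
import math
import zlib
from collections import Counter

text = b"the quick brown fox jumps over the lazy dog " * 50

# Order-0 predictor: estimate p(byte) from its frequency in the text.
counts = Counter(text)
total = len(text)

# Shannon bound: an ideal coder driven by this predictor spends
# -log2 p(c) bits on each occurrence of byte c.
model_bits = sum(-n * math.log2(n / total) for n in counts.values())

# A real compressor also models context (LZ77 matches), so on
# repetitive text it beats the order-0 bound.
zlib_bits = 8 * len(zlib.compress(text, 9))

print(f"raw size:              {len(text)} bytes")
print(f"order-0 Shannon bound: {model_bits / 8:.0f} bytes")
print(f"zlib output:           {zlib_bits / 8:.0f} bytes")
```

The gap between the three numbers is the point: each strictly better predictive model of the data yields a strictly smaller coded size.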

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 11:13 AM Nanograte Knowledge Technologies < nano...@live.com> wrote: > > I can see there's no serious interest here to take a fresh look at doable > AGI. Best to then leave it there. > AI is a solved problem. It is nothing more than text prediction. We have LLMs that pass

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies wrote: > Who said anything about modifying the fine structure constant? I used the > terms: "coded and managed". > > I can see there's no serious interest here to take a fresh look at doable > AGI. Best to then leave

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Nanograte Knowledge Technologies
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote: The fine structure constant, in conjunction with the triple-alpha process could be coded and managed via AI. Computat

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:02 AM John Rose wrote: > On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote: > > The fine structure constant, in conjunction with the triple-alpha process > could be coded and managed via AI. Computational code. > > > Imagine the government in its profound wisdom

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote: > The fine structure constant, in conjunction with the triple-alpha process > could be coded and managed via AI. Computational code.  Imagine the government in its profound wisdom declared that the fine structure constant needed to be

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
Musical tuning and resonant conspiracy? Coincidentally, I spent some time researching that just today. Seems, while tuning of instruments is a matter of personal taste (e.g., Verdi tuning) there's no real merit in the pitch of a musical instrument affecting humankind, or the cosmos. Having said

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
The fine structure constant, in conjunction with the triple-alpha process could be coded and managed via AI. Computational code. On Fri, Mar 29, 2024, 16:18 Quan Tesla wrote: > 3rd point. The potential exists to bring any form to same functions, where > gestalt as an emergent property may be

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
4th point. The matrix is an illusion. It glitches and shifts whimsically, as is AI. By contrast, the aether is relatively stable and "hackable", meaning interactively understandable. AGI could potentially be similar to the aether. Limited, but similar. On Fri, Mar 29, 2024, 16:18 Quan Tesla

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
3rd point. The potential exists to bring any form to same functions, where gestalt as an emergent property may be different, in being a function of the overall potential. Meaning, gestalt may be more real in a natural sense, than engineered form. On Fri, Mar 29, 2024, 15:33 John Rose wrote: >

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
To your first point. Constants are taken as being persistent within a narrow range. There's no use modifying them out-of-range experimentally. Hence, the problem with the cosmological constant. To your 2nd point, not enough physics in "soft" engineering. Agreed On Fri, Mar 29, 2024, 15:33 John

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote: > Alpha won't directly result in AGI, but it probably did result in all > intelligence on Earth, and would definitely resolve the power issues plaguing > AGI (and much more), especially as Moore's Law may be stalling, and > Kurzweil's

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Counter argument. How did neural networks evolve at all on Earth without the fine structure constant (alpha)? For AGI, thinking a biotech jumpstart would do the physics trick, it won't. It's merely a desperate hack, most inelegant and riddled with single points of failure. Essentially, a serial

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread James Bowery
It is nonsense to respond to the OP the way you did unless your purpose is to derail objective metrics of AGI. I can think of lots of reasons to do that, not the least of which is you don't want AGI to happen. On Thu, Mar 28, 2024 at 1:34 PM Quan Tesla wrote: > Would you like a sensible

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Matt Mahoney
On Thu, Mar 28, 2024, 2:34 PM Quan Tesla wrote: > Would you like a sensible response? What's your position on the > probability of AGI without the fine structure constant? > If the fine structure constant were much different than 1/137.0359992 then the binding energy between atoms relative to
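The figure Mahoney quotes can be checked directly from the defining relation α = e² / (4π ε₀ ħ c). A minimal sketch using 2018 CODATA/SI values (e, ħ, and c are exact by definition in the 2019 SI; ε₀ is the measured one):

```python
import math

# 2018 CODATA / revised-SI values, in SI units.
e    = 1.602176634e-19    # elementary charge, C (exact)
hbar = 1.054571817e-34    # reduced Planck constant, J*s (exact)
c    = 299792458.0        # speed of light, m/s (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (measured)

# Fine structure constant: alpha = e^2 / (4*pi*eps0*hbar*c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha   = {alpha:.10f}")   # ~0.0072973526
print(f"1/alpha = {1/alpha:.6f}")  # ~137.035999
```

This reproduces 1/α ≈ 137.036, consistent (to the displayed digits) with the value quoted in the message.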

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Would you like a sensible response? What's your position on the probability of AGI without the fine structure constant? On Thu, Mar 28, 2024, 18:00 James Bowery wrote: > This guy's non sequitur response to my position is so inept as to exclude > the possibility that it is an LLM. > *Artificial

[agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread James Bowery
This guy's non sequitur response to my position is so inept as to exclude the possibility that it is an LLM. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M09295b0b8f3b9a1334921b1e