John, in terms of dependencies, what came first: the fine-structure
constant, or AGI? Thus, AGI is indirectly a function of the fine-structure
constant. If alpha hadn't made the triple-alpha process possible, what
would be on Earth today?

I'm not out to keep my theory secret. I shared these same thoughts on
LinkedIn. Given the proposed (not proven) alpha value of ~1/137, much
confusion may have been caused; a misalignment ensued.

However, if the value were changed to 1/137.5 (aligned to known, fractal
evolution), suddenly the hand slides perfectly into the glove. I know I'm
going to be pummeled with pi, but I'll counter that pi is confirmation
bias (overly convenient) in this case. I'm stepping away from the pi
argument for now.

For alpha, the digital root of the resultant calculation equals 9,
resembling a true singularity. Alpha is dimensionless, probably a
singularity, meaning the same value should also equal zero (even as a
placeholder in 1D space).
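
To make the digital-root step concrete (assuming by "the resultant
calculation" I mean the decimal expansion itself): 1/137.5 = 2/275 =
0.00727272..., with repeating block 72, and 7 + 2 = 9. A quick Python
sketch of that check:

def decimal_digits(num, den, n):
    # First n decimal digits of num/den (0 < num/den < 1), by long division.
    out = []
    for _ in range(n):
        num *= 10
        out.append(str(num // den))
        num %= den
    return "".join(out)

def digital_root(n):
    # Repeatedly sum the decimal digits until a single digit remains.
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

digits = decimal_digits(2, 275, 12)   # 1/137.5 == 2/275
print(digits)                         # 007272727272
print(digital_root(72))               # 9
print(digital_root(int(digits)))      # 9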

I only shared this information in an attempt to level the AGI playing
field. I'm not sure it's going to do the world much good; it seems humans
are intent on killing off as many humans as possible.

Mostly, I shared publicly because I suspected some scientists already had
this answer but never published it (possibly under contract with tech
giants). Obfuscating truth to get the upper hand?

Alpha won't directly result in AGI, but it probably did result in all
intelligence on Earth, and it would definitely resolve the power issues
plaguing AGI (and much more), especially as Moore's Law may be stalling,
and Kurzweil's singularity with it.

The road to AGI seems less cluttered now.

On Thu, Mar 28, 2024, 23:07 John Rose <[email protected]> wrote:

> On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote:
>
> At least with an AI-enabled fine structure constant, we could've tried
> repopulating selectively and perhaps reversed a lot of the damage we caused
> Earth.
>
>
> The idea of AI-enabling the fine-structure constant is thought-provoking
> but... how? It seems like a far-out concept. Is it theoretically and
> practicably changeable? Perhaps AI-enable the perception of it?
>
> As an aside, look at this beautiful book I found with that as title:
> https://shorturl.at/hJRUY
>
