Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Sunday, June 02, 2024, at 9:04 AM, Sun Tzu InfoDragon wrote:
> The most important metric, obviously, is whether GPT can pass for a doctor on 
> the US Medical Licensing Exam by scoring the requisite 60%.

Not sure who I trust less, lawyers, medical doctors, or an AI trying to imitate 
them as is :)

Humor aside, can an AI take oaths? Otherwise it may prioritize revenue over 
wellbeing... which is a very difficult problem.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Me156069d98089b564d12302a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Sunday, June 02, 2024, at 10:32 AM, Keyvan M. Sadeghi wrote:
> Aka click bait? :) ;)

Jabbed?

https://www.bitchute.com/video/jB9JXD9lvK8m/

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M9d5dc91e461de7ef3f157953
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Saturday, June 01, 2024, at 7:03 PM, immortal.discoveries wrote:
> I love how a thread I started ends up with Matt and Jim and others having a 
> conversation again lol.

Tame the butterfly effect. Just imagine: you switch a couple of words around and 
the whole world starts conversing.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Me35ef5ce96c0eb10ad393d1d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Wednesday, May 29, 2024, at 3:56 PM, Keyvan M. Sadeghi wrote:
> Judging the future of AGI (not distant, 5 years), with our current premature 
> brains is a joke. Worse, it's an unholy/profitable business for Sam Altmans / 
> Eric Schmidts / Elon Musks of the world.

I was referring to extracting value, meaning real value versus nominal value. 
You can predict the future using economic cycles over hundreds to thousands of 
years. Also, human vices and virtues drive similar actions over time. People 
exploit people with new tools and technologies, and the technologies take on a 
life of their own. Thinking AGI is going to instantly change all that for the 
good, as in some immanentized eschaton, is naïve. Are there laws that protect us 
if it goes wrong? I keep hearing now about nano dust already in the food and 
have seen recent congressional hearings on weapons being used that modify human 
minds remotely. These technologies are deployed and our reactions take years... 
in some cases decades. Judging by recent events we need to self-organize, since 
our governments are not going to protect us. In fact, it's going in the 
direction of overthrowing governments, likely leading to new governmental 
structures. Big tech is already embedded and fused with big gov't. But the real 
government is the central banks. Central bank behavior is another input into 
the predictor, unless AGI makes central banks somehow magically go away :) 
That is also naïve thinking IMO.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mf9e859a8fd420b8c363401cd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Monday, May 27, 2024, at 6:58 PM, Keyvan M. Sadeghi wrote:
> Good thing is some productive chat happens outside this forum:
> 
> https://x.com/ylecun/status/1794998977105981950

Smearing those who are concerned about particular AI risks by pooling them into a 
prejudged category labelled "Doomers" is not really being serious. It's similar 
to smearing those who scrutinize and reject particular medical injections as 
"anti-vaxxers" or anti-science when they are really pro-science.

AI will further embed into existing systems and will be used to extract more 
value from all of us in ways we can’t even imagine. We are farm animals but the 
longer we are kept happy, oblivious and indoctrinated, the more value will be 
extracted. When there is little value left, we will be culled. It’s really that 
simple. BTW H5N1 mRNA incoming 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mf0b8b619f7e13adf152bd1d2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-24 Thread John Rose
On Wednesday, May 15, 2024, at 12:28 AM, Matt Mahoney wrote:
> The top entry on the large text benchmark, nncp, uses a transformer. It is 
> closed source but there is a paper describing the algorithm. It doesn't 
> qualify for the Hutter prize because it takes 3 days to compress 1 GB on a 
> GPU with 10K cores.

If we have MLP, KAN, NN1, NN2, etc., a discoverer could perhaps apply the 
Principle of Least Action to the mathematical system to find/generate the 
structure that minimizes bit flips while producing similar results. It would be 
the laziest structure… or at least lazier structures, since the laziest might not 
be provable.
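
Roughly, something like this toy ranking is what I have in mind, a minimal 
sketch assuming each trained candidate is summarized by a benchmark error and a 
rough bit-cost of its structure (the Candidate record, the lam weighting, and 
the numbers are hypothetical placeholders):

# Hypothetical sketch: rank candidate structures (MLP, KAN, ...) by a
# least-action style score trading task error against a crude bit cost.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str          # e.g. "MLP", "KAN", "NN1"
    task_error: float  # error on a shared benchmark
    param_bits: int    # rough size of the structure in bits

def laziness_score(c: Candidate, lam: float = 1e-7) -> float:
    # Lower is "lazier": similar results for fewer bits moved around.
    return c.task_error + lam * c.param_bits

candidates = [
    Candidate("MLP", task_error=0.042, param_bits=3_200_000),
    Candidate("KAN", task_error=0.045, param_bits=800_000),
]
for c in sorted(candidates, key=laziness_score):
    print(c.name, round(laziness_score(c), 4))

The interesting (and hard) part would be replacing the placeholder numbers with 
real training runs and a principled bit-cost measure.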

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M790672f3424e5b9a96e27236
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread John Rose
On Tuesday, May 21, 2024, at 10:34 PM, Rob Freeman wrote:
> Unless I've missed something in that presentation. Is there anywhere
> in the hour long presentation where they address a decoupling of
> category from pattern, and the implications of this for novelty of
> structure?

I didn't watch the video, but isn't this just morphisms and functors, so you can 
map ML between knowledge domains? Some may need to be fuzzy, and the best 
structure I've found is Smarandache's neutrosophic... So a generalized 
intelligence will manage sets of various morphisms across N domains. For 
example, if an AI that knows how to drive a car attempts to build a birdhouse, 
it takes a small subset of morphisms between the two but grows more towards the 
birdhouse. As it attempts to build the birdhouse there may actually be some 
morphism structure that applies to driving a car, but most will be utilized and 
grow one way… N morphisms, for example epi, mono, homo, homeo, endo, auto, zero, 
etc., and most obviously iso. Another mapping, from car driving to motorcycle 
driving, would have more utilizable morphisms… like steering wheel to 
handlebars… there is some symmetry mapping between group operations but they 
are not fully iso. The pattern recognition is morphism recognition, and novelty 
is created from mathematical structure manipulation across knowledge domains. 
This works very well when building new molecules since there are tight, almost 
lossless (IOW iso) morphism relationships.
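
As a toy sketch of the idea, assuming each knowledge domain is just a small set 
of labelled roles and a "morphism" is a partial mapping between them (the 
domains and mappings below are hypothetical, only meant to illustrate how much 
structure carries over):

# Hypothetical toy: partial structure mapping between skill domains.
car = {"steering": "wheel", "throttle": "pedal", "balance": None}
motorcycle = {"steering": "handlebars", "throttle": "twist grip", "balance": "rider"}
birdhouse = {"cutting": "saw", "joining": "nails"}

def shared_morphisms(a: dict, b: dict) -> dict:
    # Map each role present in both domains; this is the reusable structure.
    return {role: (a[role], b[role]) for role in a.keys() & b.keys()}

print(shared_morphisms(car, motorcycle))  # large overlap: steering, throttle, balance
print(shared_morphisms(car, birdhouse))   # little overlap: most structure grows one way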

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-Me455a509be8e5e3671c3b5e0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread John Rose
On Saturday, May 18, 2024, at 6:53 PM, Matt Mahoney wrote:
> Surely you are aware of the 100% failure rate of symbolic AI over the last 70 
> years? It should work in theory, but we have a long history of 
> underestimating the cost, lured by the early false success of covering half 
> of the cases with just a few hundred rules.
> 

I view LLMs as systems within symbolic systems. Why? Simply that we exist in a 
spacetime environment and ALL COMMUNICATION is symbolic. And sub-symbolic 
representation is required for computation. All bits are symbols based on 
probabilities. Then as LLMs become more intelligent, the physical power 
consumption required to produce similar results will decrease as their symbolic 
networks grow and optimize.

Could be wrong but it makes sense to me… saying everything is symbolic 
eliminates the argument. I know it's lazy, but that's often how developers look 
at things in order to code them up :) Laziness is a form of optimization... 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M2252941b1c7cca5b59b32c1f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Friday, May 17, 2024, at 10:07 AM, Sun Tzu InfoDragon wrote:

>> the AI just really a regurgitation engine that smooths everything over and
>> appears smart.
>
> No you!

I agree. Humans are like memetic switches, information repeaters, reservoirs. 
The intelligence is in the collective, we’re just individual host nodes. Though 
some originate intelligence more than others.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M5c3feeec4fa21dc6b3116830
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote:
> Yet another demonstration of how Alan Turing poisoned the future with his 
> damnable "test" that places mimicry of humans over truth.

This unintentional result of Turing's idea is an intentional component of some 
religions. The elder wise men wanted to retain control over science as science 
spun off from religion, since they knew humans might become irrelevant. So they 
attempted to control the future and slow things down; thus Galileo gets put on 
trial. Perhaps they saw it as a small sacrifice for the larger whole.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Ma35aaeb8de27a4ee42f6e993
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote:
> Does everyone agree this is AGI?

Ya, is the AI just really a regurgitation engine that smooths everything over 
and appears smart? Kinda like a p-zombie: poke it, prod it, and it sounds 
generally intelligent! But… artificial is what everyone is going for, it seems. 
Is there a difference?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Ma141cb8a667972f0df709a6b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-17 Thread John Rose
On Thursday, May 16, 2024, at 11:26 AM, ivan.moony wrote:
> What should symbolic approach include to entirely replace neural networks 
> approach in creating true AI?

Symbology will compress NN monstrosities… right? Or I should say, increasing 
efficiency via emerging symbolic activity for complexity reduction. Then less 
NN will be required since the "intelligence" will have been formed. But we 
still need sensory…

There is much room for innovation in mathematics… some of us have been working 
on that for a while.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M5b45da5fff085a720d8ea765
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Also, with TikTok, governments don't want the truth exposed because populations 
tend to get rebellious, so they want "unsafe" information suppressed. E.g. the 
Canadian trucker protests…. I sometimes wonder, do Canadians know that Trudeau 
is Castro's biological son? Thanks TikTok, didn't know that. And the American 
gov't really needs to have some control in TikTok urgently because big bad evil 
China is stealing people's data. Uhm, or is it that a tool like TikTok could 
result in millions of angry residents storming DC, similar to what happened in 
Sri Lanka in 2022 but on a 1000x scale? The grifters in control fear us using 
TikTok. They already have Facebook, etc.; see Taibbi and the censorship 
industrial complex. We're not smart enough to make our own decisions, so 
deepfakes will be banned except deepstate deepfakes. AI and state-embedded AGI 
are going to obscure truth more smartly.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-M5ad931d352187cbc8b6fc1c6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Mike Gunderloy disconnected. Before the internet he did Factsheet Five which 
connected alt undergrounders. It really was an amazing publication that could 
be considered a type of pre-internet search engine with zines as websites.

https://en.wikipedia.org/wiki/Factsheet_Five

Then as the internet expanded he wrote umpteen books on Microsoft software 
technologies and blogged incessantly.

Here is his last blog post, apparently during Covid:

https://afreshcup.com/home/2020/10/30/double-shot-2717.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-M89ab7ed9b30568df607c3a4e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote:
> All neural networks are trained by some variation of adjusting anything that 
> is adjustable in the direction that reduces error. The problem with KAN alone 
> is you have a lot fewer parameters to adjust, so you need a lot more neurons 
> to represent the same function space. That's even with 2 parameters per 
> neuron, threshold level and steepness. The human brain has another 7000 
> parameters per neuron in the synaptic weights.

I bet that in some of these so-called "compressor" apps Matt always looks at 
there is some serious NN structure tweaking going on. They're open source, 
right? Do people obfuscate the code when submitting?


Well, it's kinda obvious, but transformations like this:

(Universal Approximation Theorem) => (Kolmogorov-Arnold Representation Theorem)

There are going to be more of them.

Automating or not I’m sure researchers are on it.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-Md991f57050d37e51db0e68c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 12:13 AM, immortal.discoveries wrote:
> But doesn't it have to run the code to find out no?

The people who wrote the paper did some nice work on this. They laid it out 
perhaps intentionally so that doing it again with modified structures is easy 
to visualize.

A simple analysis would be to basically "tween" the mathematics and graph 
structure as a vector from MLP to KAN, to open a peephole into the larger thing.

Right, a generalized software system test host… think reflection. Many 
programming languages have reflection, so you reflect off of the test structure, 
as the abstraction layer, into a fixed computing-resource measurement used to 
rank candidates.
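
A minimal sketch of that reflection idea, assuming candidate structures are 
simply classes discovered at runtime and ranked under a fixed evaluation budget 
(the candidate classes, their scores, and the budget are hypothetical 
placeholders):

# Hypothetical sketch: use reflection to discover candidate structures in a
# namespace and rank them under a fixed, shared evaluation budget.
import inspect, time

class MLPCandidate:
    def evaluate(self) -> float:
        return 0.042   # placeholder benchmark error

class KANCandidate:
    def evaluate(self) -> float:
        return 0.045   # placeholder benchmark error

def discover_candidates(namespace: dict):
    # Reflect over the namespace; keep anything exposing an evaluate() method.
    return [obj() for obj in namespace.values()
            if inspect.isclass(obj) and hasattr(obj, "evaluate")]

def rank(candidates, budget_seconds: float = 1.0):
    scores = []
    start = time.time()
    for c in candidates:
        if time.time() - start > budget_seconds:
            break  # the fixed computing-resource measurement
        scores.append((c.evaluate(), type(c).__name__))
    return sorted(scores)

print(rank(discover_candidates(globals())))

Swapping the placeholder scores for real training and evaluation runs is, of 
course, where all the actual cost lives.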

It's not difficult to generate the structures, but how do you find the best 
candidates to run the tester on? Perhaps couple it with some topology of 
computational complexity classes and see which structures push easiest into 
that?… or some other method… this is probably the difficult part... unless you 
just throw massive computing power at it :)

But yes, when you start thinking about it there might be a recursion where the 
MLP/KANs or whatever inspect themselves in order to self-modify.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M50970ab0535f6725bf2e12ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-11 Thread John Rose
On Wednesday, May 08, 2024, at 6:24 PM, Keyvan M. Sadeghi wrote:
>> Perhaps we need to sort out human condition issues that stem from human 
>> consciousness?
> 
> Exactly what we should do and what needs funding, but shitheads of the world 
> be funding wars. And Altman :))

If Jeremy Griffith’s explanation is correct it would invalidate some literature 
on the subject. I would like to see rebuttals.
And if he is right then models of the development of human intelligence may be 
affected.

I do see potential issues, but it's worth entertaining to see if it "fixes" or 
alters some models. Though many individuals may not take notice, since it could 
require a deep refactoring of their worldview. But some may find comfort in 
this explanation regarding their own personal behavior, and an understanding of 
observations of such behavior.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M6fae07260db6c30dcb0a97c0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-11 Thread John Rose
Seems that software and more generalized mathematics should be discovering 
these new structures. If a system projects candidates into a test domain, 
abstracted, and wires them up for testing in a software host, how would you 
narrow the search space of potential candidates? You'd need a more general 
mathematical model that has insight into efficiency projections. And the 
abstracted software may require a somewhat open-ended generalization capability 
for testing, since the candidates would take on unknown forms.

https://arxiv.org/abs/2404.19756

Ballmer: "Developers, developers, developers, developers!"
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M8b051de20c2a71345de3edf1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 9:41 PM, Keyvan M. Sadeghi wrote:
> It's because of biology. There, I said it. But it's more nuanced. Brain cells 
> are almost identical at birth. The experiences that males and females go 
> through in life, however, are societally different. And that's rooted in 
> chimps being our forefathers, and muscular difference of males and females in 
> most species.
> 

Perhaps we need to sort out human condition issues that stem from human 
consciousness?

“Selfish, competitive and aggressive behavior is not due to savage instincts 
but to a psychologically upset state or condition.”

https://youtu.be/q-TK6_aWqGU

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Mc710fa6e6eb7fd3c2015948d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 6:53 PM, Matt Mahoney wrote:
> Kolmogorov proved there is no such thing as an infinitely powerful
> compressor. Not even if you have infinite computing power.

Compressing the universe is a unique case, especially being supplied with 
infinite computing power. Would the compressor ever stop? And would we be 
copy-compressing the universe, or actually compressing the full universe as 
data, including the compressor itself? Would the compressor only run once, since 
the whole universe would potentially go with it, prohibiting another compression 
comparison or a decompression?

Assuming we are actually compressing the universe and not a copy, and there is 
no infinitely powerful compressor according to Kolmogorov, then it seems that 
the universe might still expand against the finite compressor that is being 
supplied with infinite power.

But then does the infinite power come from within the U or from outside 
somehow... hmm…
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M1f1c33b606b4df64d1bdc119
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
> We don't
> know the program that computes the universe because it would require
> the entire computing power of the universe to test the program by
> running it, about 10^120 or 2^400 steps. But we do have two useful
> approximations. If we set the gravitational constant G = 0, then we
> have quantum mechanics, a complex differential wave equation whose
> solution is observers that see particles. Or if we set Planck's
> constant h = 0, then we have general relativity, a tensor field
> equation whose solution is observers that see space and time. Wolfram
> and Yudkowsky both estimate this unknown program is only a few hundred
> bits long, and I agree. It is roughly the complexity of quantum
> mechanics and relativity taken together, and roughly the minimum size
> by Occam's Razor of a multiverse where the n'th universe is run for n
> steps until we observe one that necessarily contains intelligent life.

Sounds like the KC of U, the maximum lossless compression of the universe, 
assuming infinite resources for perfect prediction. But there is a lot of 
lossylosslessness out there for imperfect prediction, or locally perfect 
lossless, near lossless, etc. That intelligence has a physical computational 
topology across spacetime where much is redundant though estimable… and 
temporally changing. I don't rule out, no matter how improbable, that 
there could be an infinitely powerful compressor within this universe, an 
InfiniComp. Weird stuff has been shown to be possible. We can conceive of it, 
but there may be issues with our conception since even that is bound by limits.
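
For reference, the standard definition being gestured at by "KC", with U a 
fixed universal machine and |p| the length of program p:

    K_U(x) = min{ |p| : U(p) = x }

i.e. the length of the shortest program that outputs x; it is uncomputable in 
general and defined only up to an additive constant that depends on the choice 
of U.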

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8f6799ef3b2e99f86336b4cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 8:04 AM, Quan Tesla wrote:
> To suggest that every hypothetical universe has its own alpha, makes no 
> sense, as alpha is all encompassing as it is.

You are exactly correct. There is another special case besides expressing the 
intelligence of the universe, and that is expressing the intelligence of a 
hypothetical universe at zero communication complexity... unless there is some 
unknown Gödel channel.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me43083c2dce972b7746d22ed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Friday, May 03, 2024, at 7:10 PM, Matt Mahoney wrote:
> So when we talk about the intelligence of the universe, we can only really 
> measure it's computing power, which we generally correlate with prediction 
> power as a measure of intelligence.

The universe's overall prediction power should increase, for example with the 
rise of intelligent civilizations among galaxies, even though physical entropy 
is increasingly generated in the universe environment. All these prediction 
powers would increase unevenly, though they would become increasingly networked 
via interstellar communication. A prediction-power apex would be different from 
a sum; it emerges from biological negentropy and then from synthetic AGI, but 
physical prediction "power" across the universe implies a sum versus an apex… 
if one civilization's AGI has more prediction capacity or potential.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M00d6486e8f5ef51067361ff8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-05-07 Thread John Rose

For those genuinely interested in this particular imminent threat, here is a 
case study (long video) circulating on how western consciousness is being 
programmatically hijacked, presented by a gentleman who has been involved in 
researching it for several decades. He describes this particular "rogue, 
unfriendly" as a cloaked remnant "KGB Hydra". We can only speculate what it 
really is in this day and age, since the Soviet Union and KGB were officially 
dissolved in 1991 and some of us are aware of the advanced technologies that 
they were working on back then.

https://twitter.com/i/status/1779017982733107529

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M40062529b066bd7448fe50a0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread John Rose
Expressing the intelligence of the universe is a unique case, versus say 
expressing the intelligence of an agent like a human mind. A human mind is very 
lossy versus the universe, where there is theoretically no loss. If lossy and 
lossless were a duality then the universe would be a singularity of 
lossylosslessness.

There is a strange reflective duality though, in that when one attempts to 
mathematically/algorithmically express the intelligence of the universe, the 
universe at that moment is expressing the intelligence of the agent, since the 
agent's conceptual expression is contained and created by the universe.

Whatever happened to Wissner-Gross's Causal Entropic Force? I haven't heard of 
that in a while...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me821389c43b756e156ceef66
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread John Rose
On Thursday, May 02, 2024, at 6:03 AM, YKY (Yan King Yin, 甄景贤) wrote:
> It's not easy to prove new theorems in category theory or categorical 
> logic... though one open problem may be the formulation of fuzzy toposes.

Or perhaps a neutrosophic topos; Florentin Smarandache has written much 
interesting work in this area.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T45b9784382269087-Mdb56eae8d4bc3eeff6b6e40c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-21 Thread John Rose
If the fine structure constant was tunable across different hypothetical 
universes how would that affect the overall intelligence of each universe? Dive 
into that rabbit hole, express and/or algorithmicize the intelligence of a 
universe. There are several potential ways to do that, some of which offer 
rather curious implications.

Apparently though alpha may vary significantly within our own universe... 
according to some unsubstantiated articles I've read.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M292d0a064091603346d3095e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-18 Thread John Rose
On Wednesday, April 17, 2024, at 11:17 AM, Alan Grimes wrote:
> It's a stage play. I think Iran is either a puppet regime or living 
> under blackmail. The entire thing was done to cover up / distract from / 
> give an excuse for the collapse of the banking system. Simultaneously, 
> the market riggers ran 1.4 billion ounces of silver derivatives through 
> the market to keep the price from rising above $30/oz.

Some of these Macro Event Probabilities look like:

P(E)= f(DXY, 10Y, Price of Crude,…)

Middle East charades are also a distraction from the WHO Pandemic Treaty which 
the globalists are attempting to jam through despite protests in various 
countries. They really want that jab juice in as many as possible... in as many 
intelligent human agents as possible... programmable agents... wirelessly 
programmable agents... like hordes of remote controlled NPC Wojaks. Oy ve!

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-Me44a6b55e65ffe990a4ddca5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-18 Thread John Rose
On Thursday, April 11, 2024, at 1:13 PM, James Bowery wrote:
> Matt's use of Planck units in his example does seem to support your 
> suspicion.  Moreover, David McGoveran's Ordering Operator Calculus approach 
> to the proton/electron mass ratio (based on just the first 3 of the 4 levels 
> of the CH) does treat those pure/dimensionless numbers as possessing a 
> physical dimension -- mass IIRC.

Different alphas across different hypothetical universes might affect the 
overall intelligence of each universe. Perhaps affecting the rate at which 
intelligence increases. I don’t buy what some say though that if alpha wasn’t 
perfectly tuned to what it is now then intelligent life wouldn’t exist. It 
might exist but in a different form. Unless there is some particular strong 
physical coupling.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M9f71087c9ae68ae4aae0896e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote:
> What assumption is that?

The assumption that alpha is unitless. Yes, the units cancel out, but the simple 
process of cancelling units seems incomplete.

Many of these constants, though, are re-representations of each other. How many 
constants does everything boil down to, I wonder...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M1b3cc8ce2f8e3f5ba2c77697
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Will SORA lead to AGI?

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 12:27 AM, immortal.discoveries wrote:
> Anyway, very interesting thoughts I share here maybe? Hmm back to the 
> question, do we need video AI? Well, AGI is a good exact matcher if you get 
> me :):), so if it is going to think about how to improve AGI in video format 
> style, it would dream of the matching happening. But it can too in text form 
> hmm.

It’s machine <=> human visual feedback on progress towards AGI. What we can 
imagine and what is currently being done in the visual department. There are 
things that we need to see visually… but it’s difficult to monitor it all. 
Visual needs to be coupled with experiential in reliable knowledge generation 
from uncertainty. Though it's a small window really...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5ac1f9ef84312f96-Mb4e474e1c25acd7c50fdf4a9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Entering the frenzy.

2024-04-11 Thread John Rose
On Friday, April 05, 2024, at 6:22 PM, Alan Grimes wrote:
> It's difficult to decide whether this is actually a good investment:

Dell Precisions are very reliable IMO and the cloud is great for scaling up. 
You can script up a massive amount of compute in a cloud then turn it off when 
done.

Is consciousness itself really the resource hog? Or is it the intelligence 
side of things?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc72db78d6b880c68-M2cc3a9a4ed0998d782a038f2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
> "Abstract Fundamental physical constants need not be constant, neither 
> spatially nor temporally."

Imagine if we could remote view somehow across multiple multiverse instances 
simultaneously, in various non-deterministic states, and perceive how the 
universe structure varies across different alphas. Do the different universe 
alphas coalesce to a similar value temporally? I think they may get stuck at 
different stabilization states and have non-continuous variation across 
universes. But if they trended to the same value, that would tell you something 
about a core inception algorithm.

Have to read up on contemporary cosmology… I have assumed a sort of injection 
model. But the injection might really be a generative perception, as if each 
universe is generatively perceived from a consciously creative rendition. The 
different alpha structures may then give insight into any injector cognition 
model…. Kind of speculative though.

I also question though the unitless assumption.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Ma10187a154c485f1f53d8506
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Entering the frenzy.

2024-04-05 Thread John Rose
On Friday, April 05, 2024, at 12:21 AM, Alan Grimes wrote:
> So let me tell you about the venture I want to start. I would like to 
> put together a lab / research venture to sprint to achieve machine 
> consciousness. I think there is enough tech available these days that I 
> think there's enough tech to try out my theory of consciousness. For the 
> sake of completing the project, all discussion is prohibited. If you 
> mention the Hard Problem, then you're off the project, no discussion! I 
> want to actually do this; go ruminate on hard problems for the next ten 
> millennia, I don't care. You are allowed to argue with me but I have 
> absolute authority to shut down any argument with prejudice.

The "Hard Problem of Consciousness" term is similar to "Conspiracy Theory" and 
is probably an unintentional psycholinguistic tool for memetic hegemony. That 
technique can be utilized… we need to take back the narrative from authorities, 
since people are being led astray. Not by Chalmers, who is relatively harmless 
in doing that, but by other newly defined existentially democratic structural 
entities, moving from individual democratic autonomy to institutional 
democratic autonomy, i.e. the institution versus the individual in the newly 
defined control narrative.

Menlo Park VCs are connected to banks like the failed Silicon Valley Bank, where 
the capital is set up to flow freely on insider agreements with the Federal 
Reserve. If you highlight your project with ESG, DEI, etc. you will get favored 
status and the money flow is relatively unlimited. Until more banks fail, which 
is coming soon, as there is a massive move towards bank centralization. Also, we 
are moving towards a war economy as the last vestiges of value in the currency 
are getting expended in defense of itself, which makes one wonder if intelligent 
war machines have some value in having a consciousness.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc72db78d6b880c68-Ma0613726134d24f173b3fe64
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-04 Thread John Rose
I was just thinking here that the ordering of the consciousness in permutations 
of strings is related to their universal pattern frequency, so we would need 
algorithms to represent that... 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M67fae77e54378c18f8497550
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-04 Thread John Rose
On Wednesday, April 03, 2024, at 2:39 PM, James Bowery wrote:
> * and I realize this is getting pretty far removed from anything relevant to 
> practical "AGI" except insofar as the richest man in the world (last I heard) 
> was the guy who wants to use it to discover what makes "the simulation" tick 
> (xAI) and he's the guy who founded OpenAI, etc.

This is VERY interesting, James, and a useful exercise; it does all relate. We 
might be able to find some answers by looking at the code you are pasting. I 
haven't seen it presented in this way; it's sort of like reworking a macro/micro 
view. Many people pursuing AGI are approaching "the simulation" source code, 
either knowingly or unbeknownst to themselves. As a youngster I realized that 
the key to understanding everything was in the relationship between the big and 
the small, and that seems still to be true.
 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Md441902c49d7fc2595fdacdf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
Or perhaps better, describe an algorithm that ranks the consciousness of some 
of the integers in [0..N]. There may be a stipulation that the integers be 
represented as atomic states all unobserved or all observed once… or allow ≥ 0 
observations for all and see what various theories say.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Ma62fd8f51ea4c6b7c92a2ee7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
On Monday, April 01, 2024, at 3:24 PM, Matt Mahoney wrote:
> Tononi doesn't even give a precise formula for what he calls phi, a measure 
> of consciousness, in spite of all the math in his papers. Under reasonable 
> interpretations of his hand wavy arguments, it gives absurd results. For 
> example, error correcting codes or parity functions have a high level of 
> consciousness. Scott Aaronson has more to say about this. 
> https://scottaaronson.blog/?p=1799

Yes, I remember Aaronson completely tearing up IIT, redoing it several ways, 
and handing it back to him. There is a video too, I think. A prospective 
consciousness model should need to pass the Aaronson test.

Besides the simplistic one-to-one mapping of bits to bits, a question might be: 
describe an algorithm that ranks the consciousness of some of the permutations 
of a string. It would be interesting to see what various consciousness models 
say about that, if anything.
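
As a purely illustrative toy, and not an endorsement of any particular 
consciousness measure, one could rank permutations by a crude structure proxy 
such as compressed length (the base string and the scoring choice here are 
arbitrary):

# Hypothetical toy: rank some permutations of a string by compressed length,
# a crude stand-in for "structure", not a phi-like consciousness measure.
import itertools, zlib

def structure_score(s: str) -> int:
    return len(zlib.compress(s.encode()))  # shorter = more compressible/regular

base = "abcabcabc"
perms = {"".join(p) for p in itertools.islice(itertools.permutations(base), 200)}
for s in sorted(perms, key=structure_score)[:5]:
    print(structure_score(s), s)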

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M46e52b8511bf1d7bd31a856c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread John Rose
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote:
> Musical tuning and resonant conspiracy? Cooincidently, I spent some time 
> researching that just today. Seems, while tuning of instruments is a matter 
> of personal taste (e.g., Verdi tuning)  there's no real merit in the pitch of 
> a musical instrument affecting humankind, or the cosmos. 
> 
> Having said that, resonance is a profound study and well-worth pursuing. 
> Consider how the JWST can "see" way beyond its technical capabilities. 

Conspiracy theory? On it  :)

https://www.youtube.com/watch?v=BQCbjS4xOfs

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M04a527cf59256b52a4968c57
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-01 Thread John Rose
On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote:
> The problem with this explanation is that it says that all systems with 
> memory are conscious. A human with 10^9 bits of long term memory is a billion 
> times more conscious than a light switch. Is this definition really useful?

A scientific panpsychist might say that a broken, one-state light switch has 
consciousness. I agree it would be useful to have a mathematical formula that 
shows how much more conscious a human mind is than a working or broken 
light switch. I still haven't read Tononi's computations since I don't want them 
to influence my model one way or another, but IIT may have that formula? In the 
model you expressed you assume a 1-bit-to-1-bit scaling, which may be a gross 
estimate, but there are other factors.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M9c1f29e200e462ef29fbfcdf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote:
> Prediction measures intelligence. Compression measures prediction.

Can you reorient the concept of time from prediction? If time is on an axis, and 
you reorient the time perspective, is there something like energy complexity?

The reason I ask is that I was mentally attempting to eliminate time from 
thought and energy complexity came up... versus, say, a physical power 
complexity. Or is this a non sequitur?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M960152aadc5494156052b57d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies 
wrote:
> Who said anything about modifying the fine structure constant? I used the 
> terms: "coded and managed".
>  
>  I can see there's no serious interest here to take a fresh look at doable 
> AGI. Best to then leave it there.

I can’t get it out of my head now, researching, asking ChatGPT what it thinks. 
Kinda makes you wonder.

They say people become obsessed with it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M16bd0477206ddf4e2ecaa55c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:33 AM, Keyvan M. Sadeghi wrote:
> For the same reason that we, humans, don't kill dogs to save the planet.

Exactly. If people can’t snuff Wuffy to save the planet how could they decide 
to kill off a few billion useless eaters? Although central banks do fuel both 
sides of wars for reasons that include population modifications across 
multi-decade currency cycles.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Mfe60caa2e1c211ec6f07c236
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:22 AM, Keyvan M. Sadeghi wrote:
> With all due respect John, thinking an AI that has digested all human 
> knowledge, then goes on to kill us, is fucking delusional 

Why is that delusional? It may be a logical decision for the AI to make an 
attempt to save the planet from natural destruction.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M4379121c7778c79b8be00581
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Thursday, March 28, 2024, at 5:55 PM, Keyvan M. Sadeghi wrote:
> I'm not sure the granularity of feedback mechanism is the problem. I think 
> the problem lies in us not knowing if we're looping or contributing to the 
> future. This thread is a perfect example of how great minds can loop forever.

Contributing to the future might mean figuring out ways to have AI stop killing 
us. An issue is that living people need to do this, the dead ones only leave 
memories. Many scientists have proven now that the mRNA jab system is a death 
machine but people keep getting zapped. That is a non-forever loop.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Me755cab585f5cb9f665c8b0c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote:
> The fine structure constant, in conjunction with the triple-alpha process 
> could be coded and managed via AI. Computational code. 

Imagine the government in its profound wisdom declared that the fine structure 
constant needed to be modified and anyone that didn’t follow the new rule would 
be whisked away and have their social media accounts cancelled. I know that 
could never, ever happen *wink* but entertain the possibility. What would be 
fixed and what would break?

It's true, governments collude to modify physical constants, for example time: 
daylight saving time, adding leap seconds to years, shifting calendars for 
example from 13 months to 12. And some say this intentionally caused a natural 
human cyclic decoupling, rendering turtle shell calendars obsolete, thus 
retarding turtle effigy consciousness 

But you want to physically modify the constant with AI in a nuclear lab. That’s 
a long shot to emerge an AGI.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Mac063c8e597998109b576ec9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
> Alpha won't directly result in AGI, but it probsbly did result in all 
> intelligence on Earth, and would definitely resolve the power issues plaguing 
> AGI (and much more), especially as Moore's Law may be stalling, and 
> Kurzweil's singularity with it. 

There are many ways to potentially modify these physical constants. Most, I 
think, have to do with perception, but perception is generation. Are they 
really constants? For all practical purposes, yes… well, not all apparently, and 
calling them constants may be a form of bias.

There is reality and perception of reality. We know perception changes, for 
example Newtonian => Relativistic. There were measurements that didn’t add up. 
Relativistic now doesn’t add up. Engineering lags physics often…

I do believe that we can modify more than just the perception of reality 
outside of spacetime and have thought about it somewhat, it would be like 
REALLY hacking the matrix. But something tells me not to go there as it could 
be extremely dangerous. I’m sure some people are going there.

You would have to be more specific on what modification (AI enabling) of the 
fine structure constant you are referring to.

There is this interesting thing I see once in a while (not sure if it's 
related) but have never pursued, where people say that some standard music 
tuning frequency was slightly modified by the Rockefellers for some reason, like 
adding a slight dissonance or something… I do know they modified the medical 
system to be more predatory and monopolistic in the early 1900s, and that led to 
where we are now.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M5b91bea0fa77902a0b0bc7fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote:
> At least with an AI-enabled fine structure constant, we could've tried 
> repopulating selectively and perhaps reversed a lot of the damage we caused 
> Earth.

The idea of AI-enabling the fine-structure constant is thought provoking but... 
how? Seems like a far out concept. Is it theoretically and practicably 
changeable? Perhaps AI-enable the perception of it?

As an aside, look at this beautiful book I found with that as its title:
https://shorturl.at/hJRUY

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Ma224f6d8bfd11b3d8aa0ea2f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote:
> One cannot disparage that which already makes no difference either way. 
> John's well, all about John, as can be expected.

What?? LOL listen to you 

On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote:
> I've completed work and am still researching. Latest contribution is my 
> theory as to the "where from?" and "why?" of the fine structure constant. 
> Can't imagine achieving AGI without it. Can you? 

Where does it come from then? What’s the story?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Ma783b61c757709857a923c99
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-28 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote:
> In my 2008 distributed AGI proposal (
> https://mattmahoney.net/agi2.html ) I described a hostile peer to peer
> network where information has negative value and people (and AI)
> compete for attention. My focus was on distributing storage and
> computation in a scalable way, roughly O(n log n).

By waiting all this time, many technical issues have been sorted out in forkable 
tools and technologies to build something like your CMR. I was actually 
thinking about it a few months ago regarding a DeSci system for these vax 
issues, since I have settled on an implementable model of consciousness which 
provides a virtual fabric and generally explains an intelligent system like a 
CMR. I mean, CMR could be extended into a panpsychist world... wouldn't that be 
exciting?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M2eae32fa79678c15892395f7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote:
> If yes, what results have you to show for it?

There's no need to disparage the generous contributions by some highly valued 
and intelligent individuals on this list. I've obtained invaluable knowledge 
and insight from these discussions, even though I may not sound like it, having 
been distracted the last few years by the contemporary state of affairs. And 
some of us choose not to disclose certain things for various reasons.

It's an email list, very asynchronous, which has its benefits. What are you 
expecting? AI to go Foom? What results have you to show? We're all ears (eyes).

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M94b5e733c65bbf20218206e3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote:
> I predict a return of smallpox and polio because people won't get vaccinated. 
> We have already seen it happen with measles.

I think a much higher priority is what's up with that non-human DNA 
integrated into chromosomes 9 and 12 for millions of people. Measles and a rare 
smallpox case we can address later… Is it to unsuppress tumors for depop 
purposes? I can understand that. And there is an explosion of turbo cancers 
across many countries now, esp. in young people. BUT... I suspect more than that 
and potentially other "features". This must be analyzed ASAP.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M2b017a488fcbbff4f4b81c65
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote:
> Flat Earthers, including the majority who secretly know the world is
> round, have a more important message. How do you know what is true?

We need to emphasize hard science versus intergenerational pseudo-religious 
belief systems that are accepted as de facto truth. For example, vaccines are 
good for you and won't modify your DNA :)

https://twitter.com/i/status/1738303046965145848
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M66e2cfff4f8461d3f15cd897
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote:
> We have a fairly good understanding of biological self replicators and
> how to prime the immune systems of humans and farm animals to fight
> them. But how to fight misinformation?

Regarding the kill-shots, you emphasize reproduction versus peer review, 
especially when journals such as The Lancet and the NE Journal of Medicine are 
now captured by pharma. And ignore manipulated media like CNN, etc., including, 
unfortunately, information from your own federal government. 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M50126dd1549d1b40f2990b80
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote:
> On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote:
>> Also I have been eating foods containing DNA every day of my life without 
>> any bad effects.
> 
> Why would that have bad effects?

That used to not be an issue. Now they are mRNA jabbing farm animals and 
putting nano dust in the food. The control freaks think they have the right to 
see out of your eyes… and you’re just a rented meatsuit.

We need to understand what this potential rogue unfriendly looks like. It 
started out embedded with dumbed down humans mooch leeching on it…. like a big 
queen ant.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M799cc6d0a090f0c1e8d83050
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-24 Thread John Rose
On Saturday, March 23, 2024, at 6:10 PM, Matt Mahoney wrote:
> But I wonder how we will respond to existential threats in the future, like 
> genetically engineered pathogens or self replicating nanotechnology. The 
> vaccine was the one bright spot in our mostly bungled response to covid-19. 
> We have never before developed a vaccine to a novel disease this fast, just 
> over a year from identifying the virus to widespread distribution.

This is the future: we have a live one to study, but it requires regurgitating 
any blue pills :)

The jab was decades in development and the disease contains patented genetic 
sequences.

Documentary on how they blackholed hydroxy (among others) to force your 
chromosomal modifications: 
 https://twitter.com/i/status/1768799083660231129

Unfriendly AGI is one thing but a rogue unfriendly is another so a diagnosis is 
necessary.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M9a0ac94d8b6a4d1cd960cb3e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-22 Thread John Rose
On Thursday, March 21, 2024, at 1:07 PM, James Bowery wrote:
> Musk has set a trap far worse than censorship.

I wasn't really talking about Musk, OK mutants? Though he had the cojones to do 
something big about the censorship and opened up a temporary window, basically, 
by acquiring Twitter.

A question is who or what is behind the curtain? Those in the know that leak 
data seem to get snuffed…

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Mb4e8e4edcd88a6b1bb9e9667
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
On Thursday, March 21, 2024, at 11:41 AM, Keyvan M. Sadeghi wrote:
> Worship stars, not humans 

The censorship the last few years was like an eclipse.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Mf2b0a65e2f58709ef10adfec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
I don’t like beating this drum, but this has to be studied in relation to 
unfriendly AGI, and the WHO pandemic treaty coming up in May has to be stopped. 
Here is a passionate interview with Dr. Chris Shoemaker after his presentation 
in the US Congress, worth watching for a summary of the event and the current 
mainstream status. It’s not too technical.

My hypothesis still stands IMO… I do want it to fail. Chromosomes 9 and 12 are 
modified, why? Tumor-suppression-related chromosomes? I don't know... The 
interview doesn’t cover the graphene oxide, quantum dots, etc., or the 
radiation-related mechanisms, which are also potentially mind-blowing.

Thank you Elon for fixing Twitter without which we were in a very, very dark 
place.

https://twitter.com/i/status/1770522686210343392

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M1bbfbd0c1261f7e85119dff4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: "The Optimal Choice of Hypothesis Is the Weakest, Not the Shortest"

2024-01-29 Thread John Rose
Weakness reminds me of losylossslessyzygy (sic. lossy lossless syzygy)… hmm… I 
wonder if it’s related.

Cardinality (Description Length) versus cardinality of its extension (Weakness)…

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T78fb8d90b9a51bf0-M8ce9daaebe2365093ca8c16f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-01-15 Thread John Rose
...continuing P# research…

Though I will say that the nickname for P# code used for authoritarian and 
utilitarian zombification is Z#, for zombie cybernetic script. And as for 
language innovation, which seems scarce lately since many new programming 
languages are syntactic rehashes, new intelligence-inspired, innovative 
representations are imperative.

AGI/Singularity are commonly thought of as an immanentizing eschaton:
https://en.wikipedia.org/wiki/Immanentize_the_eschaton
But before that imagined scalar event horizon there are noticeable 
reconfigurations in systems that might essentially be self-organization in an 
emergent autopoiesis. Entertaining that, as well as a potential unfriendly 
AGI-like Globocap in the crosshairs, wielding new obligatory digitized fiat 
coupled with medical-based tyranny (CBDC), there is evidence that can be 
dot-connected into a broader configurative view. And with a potentially 
emergent or already emerged intelligence preparing to dominate, we need to 
attempt to negotiate the best deal for humanity, instead of having unseen and 
unknown figures, whoever or whatever they are, engineering a new feudal system 
while we still have the capability:

https://youtu.be/4MrIsXDKrtE?t=14359

https://thegreattaking.com/

https://www.uimedianetwork.com/293534/the-great-setup-part1.htm

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M08dc9498c96683f9c3924c19
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-01-08 Thread John Rose
…continuing P# research…

This book by Dr. Michael Nehls, “The Indoctrinated Brain,” offers an interesting 
neuroscience explanation and self-defense tips on how the contemporary 
zombification of human minds is being implemented. Essentially, he describes a 
mental immune system and there is a sustained attack on the autobiographical 
memory center. The mental immune system involves “index neurons” which are 
created nightly. Index neuron production is the neural correlate of natural 
curiosity. To manipulate a population the neurogenesis is blocked via 
neuroinflammation so people’s ability to think independently is hacked and 
replaced with indoctrinated scripts. The continual creation of crises 
facilitates this. The result is that individuals spontaneously and 
uncontrollably blurt narrative phrases like “safe and effective” and 
“conspiracy theory” from propaganda sources when challenged to independently 
think critically on something like the kill-shots… essentially acting as 
memetic switches and routers. The goal is to strengthen the topology of this 
network of intellectually castrated zombies, or zomb-net, that programmatically 
obeys a centralized command intelligence:

https://rumble.com/v42conr-neurohacking-exposed-dr.-michael-nehls-reveals-how-the-global-mind-manipula.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M861d73d982b6cb6575bb6c5e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-23 Thread John Rose
…continuing

The science changes when conflicts of interest are removed. This is a fact. And 
a behavior seems to be that injected individuals go into this state of “Where’s 
the evidence?” And when evidence is presented, they can’t acknowledge it or 
grok it and go into a type of loop:

“Where’s the evidence?”
“There is no evidence.”
“Where’s the evidence?”
“There is no evidence.”
…

Understanding loops from computer programming, sometimes they occur on 
exceptions… sometimes they come from a state machine where a particular state 
hasn’t been built out yet. Perhaps a new language could be learned via 
programming-language detection/recognition so we can view the code. I had 
suggested that this would be P#.

But who or what is the programmer? 

Evidently the misfoldings do have effects. An increasing amount of 
post-injection neurodegenerative evidence is being observed, and this is most 
likely related to misfolding. This paper provides some science on significant 
spike-seeded acceleration of amyloid formation:

“An increasing number of reports suggest an association between COVID-19 
infection and initiation or acceleration of neurodegenerative diseases (NDs) 
including Alzheimer’s disease (AD) and Creutzfeldt-Jakob disease (CJD). Both 
these diseases and several other NDs are caused by conversion of human proteins 
into a misfolded, aggregated amyloid fibril state… We here provide evidence of 
significant Spike-amyloid fibril seeded acceleration of amyloid formation of 
CJD associated human prion protein (HuPrP) using an in vitro conversion assay.” 

“…Data from Brogna and colleagues demonstrate that Spike protein produced in 
the host as response to mRNA vaccine, as deduced by specific amino acid 
substitutions, persists in blood samples from 50% of vaccinated individuals for 
between 67 and 187 days after mRNA vaccination (23). Such prolonged Spike 
protein exposure has previously been hypothesized to stem from residual virus 
reservoirs, but evidently this can occur also as consequence of mRNA 
vaccination. “

https://www.biorxiv.org/content/10.1101/2023.09.01.555834v1


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M79f0fd78330318f219c4b110
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-21 Thread John Rose
On Tuesday, December 19, 2023, at 9:47 AM, John Rose wrote:
> On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote:
>> That's just a silly conspiracy theory. Do you think polio and smallpox were 
>> also attempts to microchip us?
> 
> That is a very strong signal in the genomic data. What will be interesting is 
> how this signal changes now that it has been identified. Is it possible that 
> the mutations are self-correcting somehow? The paper is still undergoing peer 
> review with 73,000 downloads so far...

There are multiple ways that genetic mutations can “unmutate” or appear to have 
been unmutated. I’m not familiar enough with GenBank to look at that with 
regard to the study…

But intelligence detection is important in AGI. What might be interesting in 
this systems-signaling analysis is perhaps comparing the frequency of the 
variants' synthesis and dispersal with the half-life of the injected human test 
subjects producing and emitting spike protein. What are the correlations there?

This study shows up to 187 days of spike emission:
https://pubmed.ncbi.nlm.nih.gov/37650258/

Other "issues" exist though in addition to spike emission. There are misfolded 
protein factors as well as ribosomal frameshifting:
https://www.nature.com/articles/s41586-023-06800-3

BUT, these misfoldings and frameshifts may just appear to be noise or errors 
and may in fact be intentional and utilitarian. We are observing all of this 
from a discovery perspective. Also, the lipid nanoparticles utilized are 
carrier molecules across the blood brain barrier (BBB). We can measure 
sociological and psychological behavior anomalies externally but it can be 
difficult to decipher changes that occurred in people’s minds individually 
after they got the injections...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mbad6e64e7d9263447bf7ffe4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote:
> That's just a silly conspiracy theory. Do you think polio and smallpox were 
> also attempts to microchip us?

That is a very strong signal in the genomic data. What will be interesting is 
how this signal changes now that it has been identified. Is it possible that 
the mutations are self-correcting somehow? The paper is still undergoing peer 
review with 73,000 downloads so far...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M9bef38f970d3fcbd86376af7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote:
> I'm not sure what your point is.

The paper shows that the variants are from genomically generative non-mutative 
origination. Look at the step ladder in the mutation diagrams showing corrected 
previous mutations on each variant. IOW they are getting artificially and 
systemically synthesized and dispersed. Keep in mind the variants are meant to 
drive the smart-ape injections. BTW, good job by the researchers analyzing the 
GenBank data.

On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote:
> But I'm not sure what this has to do with AGI except to delay it for a couple 
> of years.

How do you know that AGI isn't deployed yet?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mafbfd6f5016f26536ba3c37c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-18 Thread John Rose
Evidence comin' at ya, check out Supplemental Figure 2:

https://zenodo.org/records/8361577


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mb096662703220edbaab50359
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: By fire or by ice.

2023-12-08 Thread John Rose
On Wednesday, December 06, 2023, at 1:52 PM, Alan Grimes wrote:
> Whether it is even possible for the fed to print that much money is an 
open question. I want to be on the other side of the financial collapse 
(fiscal armageddon) as soon as possible. Right now we are just waiting 
to see whether the old system will die by fire or by ice... =\

For an AGI to model the global financial system accurately it would have to 
understand how it REALLY works. As in a “Creature from Jekyll Island” model, 
the Titanic, etc. How predictions can be made, for example if the price of oil 
falls below a particular range then an “event” happens like a random missile 
gets fired or a terrorist attack occurs or there is an “accident”. And if DXY + 
10Y rate rise above particular thresholds then central banks secretly start 
buying debt and large caps. And how inflation is really managed since inflation 
is the control mechanism where value is slowly extracted from the system, as in 
a trickle-up economics… and how precious metals are suppressed with naked 
shorts to anchor the whole thing and how crypto markets are manipulated… etc. 
etc..

There is still value to be extracted from the existing system so an illusion of 
stability is being maintained.

At the same time war is getting tweaked up as inflation smolders and awareness 
of the kill-shots rises. The scamdemic was demand destruction with a liquidity 
flood, and the wars are massive money laundering.

Their plan is to create distress where the public welcomes CBDC, probably with 
free food credits as food scarcity increases (“start using this app”), with 
social credit scoring. “They” being “the cabal” and friends.

If a GloboCap is an unfriendly AGI, then it is a type of massive multiagent 
system with human agents, and fiat is a protocol. But the fiat is not one-to-one 
with commodities, whereas the BRICs are looking at one-to-one backing. That 
won’t happen, though, since debt creation might actually be a required mechanism 
in a monetary system, with its corruptibility a strong catalyst…

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdab60d6adcb6250c-M23343c8287a07b1a3ba6e5e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-07 Thread John Rose
On Wednesday, December 06, 2023, at 12:50 PM, James Bowery wrote:
> Please note the, uh, popularity of the notion that there is no free will.  
> Also note Matt's prior comment on recursive self improvement having started 
> with primitive technology.  
> 
> From this "popular" perspective, there is no *principled* reason to view "AGI 
> Safety" as distinct from the de facto utility function guiding decisions at a 
> global level.

Oh, that’s bad. Any sort of semblance of free will is a threat. These far-right 
extremists will be hunted down and investigated as potential harborers of 
testosterone.

It’s flawed thinking that if everyone speaks the same language, for example, or 
if there is just one world government, everything will be better and more 
efficient. The homogenization becomes unbearable. It might be entropy at work, 
squeezing out excess complexity and imposing a control framework onto human 
negentropic slave resources.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mc393cedb2b870e339c30636b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-06 Thread John Rose
On Tuesday, December 05, 2023, at 9:53 AM, James Bowery wrote:
> The anti-vaxers, in the final analysis, and at an inchoate level, want to be 
> able to maintain strict migration into their territories of virulent agents 
> of whatever level of abstraction.  That is what makes the agents of The 
> Unfriendly AGI Known As The Global Economy treat them as the "moral" 
> equivalent of "xenophobes":  to be feared and controlled by any means 
> necessary.

The concept of GloboCap from CJ Hopkins, which I thought was brilliant, can 
indeed be viewed as an Unfriendly AGI, yes:
https://youtu.be/-n2OhCuf8_s

These fiat systems, though, are at least semi-cyclic through time. We are at the 
end of various cycles here, including a fiat cycle; next is a digital control 
system. This time is different, but full of opportunities and dangers.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M813aead2a2f32726c8a69005
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
People are going to go Amish.

Faraday clothingware is gaining traction for the holidays. 

And mobile carriers are offering the iPhone 15 upgrade for next to nothing. I 
need someone to confirm that Voice-to-Skull is NOT in the 15 series but I keep 
getting blank stares…

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mb05a84e6219f0149a5f09798
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
On Tuesday, December 05, 2023, at 2:14 AM, Alan Grimes wrote:
> It's been said that the collective IQ of humanity rises with every 
vaccine death... I'm still waiting for it to reach room temperature...

It’s not all bad news. I heard that in some places unvaxxed sperm is going for 
$1200 a pop. And unvaxxed blood is paying an increasing premium...

Sorry Matt, it doesn’t scale with the number of shots  >=)

Was asking around for a friend… people gotta pay bills ‘n stuff.

https://rumble.com/v3ofzq9-klaus-schwabs-greatest-hits.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Maa7ad3866377b34ed3d49679
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-04 Thread John Rose
On Sunday, December 03, 2023, at 10:00 AM, Matt Mahoney wrote:
> I don't mean to sound dystopian.


OK, let me present this a bit differently.

THIS MUTHERFUCKER WAS DESIGNED TO KILL YOU

Mkay?

In a nice way, not using gas or guns or bombs. It was a trial balloon developed 
over several decades and released to see how it would go. The shot, that is; 
covid’s purpose was to facilitate the shots. It went quite well, with little 
resistance. It took out 12 to 17 million lives according to conservative ACM 
estimates. I’ve seen other estimates much higher, with the vax injuries in the 
hundreds of millions, not to mention natality rates, disabilities, and the yet 
to be made dead.

Now you might ask, what’s all this got to do with AGI? Well let’s call it AI 
for now to obfuscate and not give AGI a bad name.

Two things: This weaponry is getting further honed by AI, and, AI can fight AI.

The scope is quite large and difficult to maintain a comprehensive focus on, as 
it extends into various realms. As well, most people are still playing catch-up 
by just proving and acknowledging that it actually maims and kills, versus what 
it is all about. For example, the Philippines gov’t has just voted on 
investigating what happened to those surplus 300+ thousand dead people from a 
couple of years ago.

To me, tens of millions dead with many more injuries and mortality and natality 
plunging are some red flags and cause for concern.

You could say it was human-driven by the deep state or transnational elites, or 
aliens or whatever, but it could be AI. And it is/was definitely AI-assisted, 
and increasingly more so… so fighting this will require AI/AGI and/or other 
technologies yet to be provided. And if this is merely Satan Klaus with the WEF 
and Kill Gates, they will be taken care of using other mechanisms. But if it is 
AI, some unique skills may be required to deal with it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M4ada06808870efab3a89b104
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Sunday, December 03, 2023, at 7:59 AM, James Bowery wrote:
> 
> A dream to some, a nightmare to others.  
> 

All those paleolithic megaliths around the globe… hmmm…could they be from 
previous human technological cycles? 
Unless there's some supercyclic AI keepin' us down, now that's conspiracy 
theory :) Bleeding off elite souls from the NPC's.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M1e1775cac8b1ea833360c625
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote:
> AGI gives whatever we want so that is the end of us, so idiotic conclusion, 
> sorry.

Although I would say, after looking at the definition of dystopia, and once one 
fully understands the gravity of what is happening, it is already globally 
dystopic, by far.

An intentionally sustained ACM burn rate increasingly tweaked up by 
artificially intelligent actors, while at the same time mindscrewing the masses 
into a lemming-like state, and defending it, including top thinkers and 
scientists: what’s the terminology for that?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M088429e6d9556972fbf0f71a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote:
> I cannot believe this group is full of dystopians. Dystopia never happen, at 
> least not for long or globally. They are always localized in time or space. 
> Hollywood is full of dystopia because they lack imagination. 

This group is not full of dystopians, don’t smear.

Why do you think dystopias haven't happened, like nukes not killing us? Nuclear 
explosions make great art, why be such a doomer! Enjoy the sunshine.

Not.  We need to develop plans because this thing is just getting started.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M778b5a27ab9f1c1a1e65145d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-02 Thread John Rose
People need to understand the significance of this global mindscrew. And 
ChatGPT is blue-pilled on the shots, as if anyone expected differently.

What is absolutely amazing is that Steve Kirsch wasn’t able to speak at the MIT 
auditorium named after him, since he was labeled a misinformation superspreader, 
until it was arranged by truth-seeking and freedom-loving undergrads…

https://rumble.com/v3yovx4-vsrf-live-104-exclusive-mit-speech-by-steve-kirsch.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M92f2f141ecb6d16a44d51d85
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Lexical model learning for LLMs

2023-11-23 Thread John Rose
A compression ratio of 0.1072 seems like there is plenty of room still. What is 
the limiting ratio estimate, something like 0.08 to 0.04? Though 0.04 might be 
impossibly tight... even at 0.05 the resource consumption has got to explode 
exponentially, unless there are overlooked discoveries yet to be made.
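
As a rough sanity check (my own sketch, not from the original post or any 
benchmark rules), a byte-oriented compression ratio converts to bits per 
character by multiplying by 8, which makes the gap between 0.1072 and the 
speculated limits easier to see:

#include <stdio.h>

int main(void) {
    /* hypothetical ratios: the current result plus the speculated limits above */
    double ratios[] = {0.1072, 0.08, 0.05, 0.04};
    for (int i = 0; i < 4; i++)
        printf("ratio %.4f -> %.3f bits per character\n",
               ratios[i], ratios[i] * 8.0);   /* 8 bits per input byte */
    return 0;
}

So 0.1072 is roughly 0.86 bits per character and 0.04 would be about 0.32, 
which is why it looks impossibly tight.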

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdc371ce11a040352-Mb338eb5e669be29a59811a86
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 Turbo Fails AIC Test -- HARD

2023-11-09 Thread John Rose
All strings are numbers. Base 65,536 in Basic Multilingual Plane Unicode:

printf("D㈔읂咶㗏葥㩖ퟆ疾矟");
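
A minimal sketch of that idea (my own illustration, not from the original post): 
read a BMP string as UTF-16 code units and treat each unit as a digit of a 
base-65536 number, most significant digit first. A short ASCII sample is used 
here so the value fits in 64 bits.

#include <uchar.h>
#include <stdio.h>

int main(void) {
    const char16_t s[] = u"AGI";   /* hypothetical sample; any BMP string works */
    unsigned long long value = 0;
    for (size_t i = 0; s[i] != 0; i++)
        value = value * 65536ULL + (unsigned long long)s[i];   /* accumulate base-65536 digits */
    printf("\"AGI\" as a base-65536 number: %llu\n", value);   /* prints 279177527369 */
    return 0;
}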

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8581faf50bfa0ead-Mcaf3b77c1dd3b8e2295f6c96
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-10-25 Thread John Rose
Etcetera:

https://correlation-canada.org/nobel-vaccine-and-all-cause-mortality/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M7c76a4ad6e4459816b12787d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-10-25 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote:
> 4. No credible scientific evidence for creating amyloid clots. Even the 
> possibly *extremely rare* cases that could *possibly* be attributed to the 
> vaccines are vanishingly small compared to the vaccine benefits in protecting 
> against severe disease, hospitalization, and death.

It's not an alarm clock, it's an opportunity clock. Please wake up.

Peer-reviewed literature for those who "trust the science":
https://drtrozzi.org/2023/09/28/1000-peer-reviewed-articles-on-vaccine-injuries/

Dr. Yeadon explaining intentionality:
https://rumble.com/v3aoa7z-dr.-michael-yeadon-are-the-mrna-injections-toxic-by-mistake-or-by-design.html
 

Could a non-AI design something so effective? 

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mff3abc4e9a33ec0e1653a8ef
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-28 Thread John Rose
On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote:
> So like many scientists, they look for evidence that supports their theories 
> instead of evidence that refutes them.

"In formulating their theories, “most physicists think about experiments,” he 
said. “I think they should be thinking, ‘Is my theory compatible with 
consciousness?’—because we know that’s real.”"

https://www.scientificamerican.com/article/is-consciousness-part-of-the-fabric-of-the-universe/
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mc65c00e7e9331e5b69bce1d0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 12:13 PM, Matt Mahoney wrote:
> If you are going to define consciousness as intelligence, then you need to 
> define intelligence. We have two widely accepted definitions applicable to 
> computers.

It’s not difficult. Entertain a panpsychist model of consciousness. What is the 
physical property in the universe that can be defined as consciousness, where 
its presence would exist in everything? Whatever that is, it would be present 
when implementing any intelligence model. It might explain many existing 
theories of consciousness, since this one would need to have relatively low 
complexity. It should have a strict mathematical and physical definition that 
simplifies many of these issues... and perhaps adds understanding to various 
models of intelligence. I'm sure there are a number of candidates that may fit 
these criteria, including perhaps pieces of Orch-OR.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mf0a091a75e50bde406521792
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote:
> 1. Medical consciousness. The mental state of being awake and able to form 
> memories. The opposite of unconsciousness.
> 2. Ethical consciousness. The property of higher animals that makes it 
> unethical to inflict pain or to harm or kill them.
> 3. Phenomenal consciousness. The undefinable property that makes humans 
> different from zombies, where a zombie is exactly like a human by any 
> behavioral test. The thing that various religions claim goes to heaven when 
> you die. The little person in your head. Awareness of your own awareness. 
> What thinking feels like.

There is another one you’re omitting, and that is how consciousness relates to 
intelligence. We can call it CI, for Conscio-Intelligence. Trying to stay 
focused on intelligence-related aspects...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M6fdd681aa75ede80687382cc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Tuesday, September 26, 2023, at 11:53 PM, Quan Tesla wrote:
> Incredible. We won't believe hard science, but we'll believe almost 
> everything else. This is "The Truman Show" all over again. 
> 

Orch-OR is a macro-level, human-brain-centric consciousness theory, though it 
may apply to animals, not sure... No one here is disbelieving hard science.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M2aaf7dea7bcd44f16379e038
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 8:00 AM, Quan Tesla wrote:
> Yip. It's called the xLimit. We've hit the ceiling...lol

It's difficult to make progress on an email list if disengaged people 
spontaneously emit useless emotionally triggered quips... 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Me1289bbe433d1f6493dc7452
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Tuesday, September 26, 2023, at 5:18 PM, EdFromNH wrote:
> Of course it is possible that advanced AI might find organic lifeforms 
> genetically engineered with organic brains to be the most efficient way to 
> mass produce brainpower under their control, and that such intelligent 
> organic lifeforms have been genetically engineered to be slaves of such AIs.

Yes, when you go down that rabbit hole there are many security gates. Perhaps 
there are things that we are better off not knowing. Are we being protected for 
our own good by the many-decades-long knowledge suppression by the three-letter 
agencies?

A problem is that we are moving closer to WW3 and multiple countries are 
waiting to roll out their alien-derived warfare technologies. At some point 
biologic and non-biologic will be indistinguishable and we are probably there 
now.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M0530edcf1a1f968b357293c0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 3:17 PM, Nanograte Knowledge Technologies 
wrote:
> But according to all scientific evidence, and even Dr. Stuart Hammerhoff's 
> latest theory of anaesthetics, such patients aren't conscious at all. It's 
> hard science.
>  
>  AGI pertains to human intelligence, thus human consciousness, not to all 
> matter.

Yes, he’s theorizing human consciousness using Orch-OR. Human consciousness 
from the perspective of a panpsychist physical model may or may not support his 
theory, depending. I think his is still under evaluation. Human consciousness, 
though, has all sorts of added attributes like experiencing qualia, having a 
subconscious, etc. A utilitarian panpsychist physical model can have the 
objective of incorporating a non-biological intelligence structure.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mdc5fcd50809c145612347cbd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 8:56 AM, James Bowery wrote:
> Since property rights are founded on civil society and civil society is 
> founded on the abrogation of individual male intrasexual selection by young 
> males in exchange for collectivized force that would act to protect 
> collective territory, we have been in a state of civil collapse since at 
> least 1965.  All property rights acquired since then are at risk.

Oh I see, so that takes care of the first part of “you’ll own nothing and be 
happy”. Not sure about the happy part, unless that means… no more walketh upon 
the earth happy. I feel somewhat uncomfortable with these terms. Perhaps that 
mantra needs to be renegotiated…
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mb39557fb1627391d89912e54
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Monday, September 25, 2023, at 2:14 AM, Quan Tesla wrote:
> But, in the new world (this dystopia we're existing in right now), free 
> lunches for AI owners are all the rage.  It's patently obvious in the total 
> onslaught by owners of cloud-based AI who are stealing IP, company video 
> meetings, home footage, biometrics, privacy-protected data, government data, 
> voice samples, trade secrets, etcetera, hand over fist. 

I’m baffled as to how many people willingly submitted their DNA. Who owns 
perpetual rights to that DNA now?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M99e16e82a32061fb060d8141
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 1:02 AM, Nanograte Knowledge Technologies 
wrote:
> Are you asserting that a patient under aneasthesia is conscious? How then, if 
> there's no memory of experience, or sensation, or cognitive interaction, do 
> we claim human consciousness? 
>  
>  Just a reminder, the topic still is AGI and not the philosophy of 
> consciousness. Meaning, the target would have to be emergent and/or 
> programmable consciousness.
>  
>  "Boom done!"?, nothing of the sort!

LOL

If it is defined as a physical attribute across all matter, yes they would have 
to be.

I work off of a model of conscious intelligence or conscio-intelligence (CI) 
for AGI. Otherwise, I wouldn’t bring up the topic here so often... Mmmkay? 

Not everyone is all juiced up over neural networks… though you can see how 
those models are evolving.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mabe80642a20350426a3b5078
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-25 Thread John Rose
On Monday, September 25, 2023, at 3:27 PM, Matt Mahoney wrote:
> OK. Give me a test for consciousness and I'll do the experiment. If you mean 
> the Turing test then there is an easy proof.

If you define consciousness as a panpsychist physical attribute, then all 
implemented compressors would be conscious to some extent, so you would need a 
test for non-consciousness. But everything is conscious, therefore conscious 
compressors are better than non-conscious ones.

Boom done. Next problem?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M86ef1e4782863a7ee1ba03de
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-25 Thread John Rose
On Monday, September 25, 2023, at 1:09 PM, Matt Mahoney wrote:
> For those still here, what is there left to do?

I think we need a mathematical proof that conscious compressors compress better 
than non… 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mb4fefdd7838d9a8b3952003e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-24 Thread John Rose
On Sunday, September 24, 2023, at 11:23 AM, Nanograte Knowledge Technologies 
wrote:
> Would this perhaps be from the disease of an overly accentuated sense of self 
> important abilities, hypertension and performance anxiety? 
> 

If you are an MD or medical professional and you challenge the narrative of 
“safe and effective”, you will be railroaded into losing your medical license 
and rendered mentally unstable ASAP. That's why most violate their oaths and 
say all's good. So yes, the continuation of the kill-shots is related to that. 
The supranational biopharmaceutical complex is in command at a lower level, 
above governments. But there are levels above that...

If AI is going to kill us do you think that it will just come out and start 
zapping people with lasers? No, it will use intelligence so that most people 
will be like clueless imbeciles as to what is being done to them. 

https://twitter.com/i/status/1704617174676218107

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M2a620dea83ea486d6a2f7bab
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-23 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote:
> 4. No credible scientific evidence for creating amyloid clots. Even the 
> possibly *extremely rare* cases that could *possibly* be attributed to the 
> vaccines are vanishingly small compared to the vaccine benefits in protecting 
> against severe disease, hospitalization, and death.

Clots = Thrombotic Thrombocytopenia Syndrome (TTS)

You have to understand that the White House is several rungs down in the chain 
of command. That alone takes some 
research. Who's at the top? AI? 

Notice though that Naomi talks about scripts; we have the scripts. P#! 
Programming people!

https://rumble.com/v3jynww-bombshell-foia-documents-prove-biden-white-house-hid-covid-vaccine-harms-fr.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M19b1bbe5d16d8c1ac08f1327
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Toyota's groundbreaking robotics research

2023-09-23 Thread John Rose
It's mimicry, taught by a human, then behaviors are remapped down. He mentioned 
generalizing behaviors soon. Not sure if they're doing a sort of “tweening” 
between behavior models or perhaps modelling behaviors of behaviors…
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6c69cd380bb41c01-Mda14d5b790e2f538dc4e75af
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-21 Thread John Rose
On Wednesday, September 20, 2023, at 5:57 PM, James Bowery wrote:
> Sure, you'll have marching morons overpowering the scientific community -- 
> and the marching morons willf frequently be carrying placards saying "Trust 
> The Science" as they crush any opposition. And, sure, they will have been the 
> creatures of the guys who control the the incentives these animals are 
> reinforced by via the dispensation of civilization's positive network 
> externalities in the form of fiat currency issuance.  We see that all over 
> the place.

I know it’s difficult for people to comprehend and confront the evil of what 
was done, and most would rather go on living their lives in their oblivious 
dream-state, equivocating their perception of it all, especially those who 
profited from the whole thing. And I am wrong to call it a clot shot; that does 
it an injustice. A more accurate term would be kill-shot, since the clots alone 
don’t cover all the other detriments related to the spike protein and that 
whole scenario. If the s-protein were restricted to only those duped into 
taking the shots, that would be one thing, but the s-protein proliferates to 
the unjabbed and unborn via various means, and that is where it gets even more 
sinister…. As well as the whole EUA still in effect, with other effective 
treatments banned, see https://c19ivm.org/. And the propaganda, censorship, and 
gaslighting of those of us who knew/know what is going on…

The problem is that Scamdemic 1.0 was a trial run, and it’s coming back soon. 
The movement is underway to cede multi-national sovereign authority to 
globalist control via this predatory medical tyranny, and that will be coupled 
with central-bank domination using vaccine passports and CBDCs, without any 
democratic input. Then AI will have full control to clear the planet of excess 
smart monkeys by 2030… so enjoy it while you’re still able.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M27731517b59a67752d042ddb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote:
> 6. “Clot shot” is loaded language in that it's conspiracy=theory lingo 
> designed to induce fear, etc.

Note title of message thread:  "How AI will kill us"

The clot shots were designed and tested via extensive AI modelling run by the 
DoD, whose technology is decades ahead of contemporary tech. The term 
"conspiracy theory" is a psycholinguistic tool for memetic hegemony... from the 
CIA, I believe.

Do the research and become deprogrammed but I'm not obligated to conform to the 
narradigm so don't expect it.
 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M83f19321fbbb417e6ed7c097
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 4:09 PM, David Williams wrote:
> Matt, on #12 under “what else,” you could add “gamma ray burst”
> 
> Btw I only check in periodically, when did loaded language like “clot shot” 
> enter the thread? Not useful imho.

It is from the autopsies. The traditional definition of vaccine was changed 
without consultation, specifically for the covid injections. They are not 
traditional vaccines; they are gene-therapy shots. The reality is that they 
create amyloid clots in their victims, so perhaps clot-shot is a better name? 
Who gets to decide that anyway? Who's in charge of the naming? :)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M13f5ce2f1c3ebe57f2cdb8c1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Programming Humans, in what Language?

2023-09-20 Thread John Rose
I wonder if people with calcified pineal glands from fluoride are more 
programmable than the non-calcified. I assume the calcified are. Is it easier 
to program someone who is unaware of their programming versus someone who is 
aware of it? I suppose it depends on the “scripts”.

P# is just a placeholder for a name like Peeps# but then perhaps the language 
should be called Sheeple?

P# scripts could derive from an any-to-any multimodal LLM, then generate or 
transpile into, for example, JavaScript for a web page that spits out 
multimedia. P#: string s = GenerateCuriosity(Javascript,…); A rough sketch of 
that idea follows.
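
Here is a minimal, purely hypothetical sketch of that pipeline (my own 
illustration; P#, GenerateCuriosity, and the emitted snippet are made-up names, 
and the LLM call is stubbed out rather than real):

#include <stdio.h>

/* stand-in for an LLM-backed generator; real model plumbing is out of scope */
static const char *generate_curiosity(const char *target_language) {
    (void)target_language;   /* a real generator would branch on this */
    return "document.body.insertAdjacentHTML('beforeend', '<p>curious?</p>');";
}

int main(void) {
    /* "transpile" the P# call into JavaScript and wrap it for a web page */
    const char *js = generate_curiosity("Javascript");
    printf("<script>%s</script>\n", js);
    return 0;
}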

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1bcda5fdb4147f4-Meaae0fc0d2040d31be1fda83
Delivery options: https://agi.topicbox.com/groups/agi/subscription

