Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Wednesday, May 29, 2024, at 3:56 PM, Keyvan M. Sadeghi wrote: > Judging the future of AGI (not distant, 5 years), with our current premature > brains is a joke. Worse, it's an unholy/profitable business for Sam Altmans / > Eric Schmidts / Elon Musks of the world. I was referring to

Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Monday, May 27, 2024, at 6:58 PM, Keyvan M. Sadeghi wrote: > Good thing is some productive chat happens outside this forum: > > https://x.com/ylecun/status/1794998977105981950 Smearing those who are concerned about particular AI risks by pooling them into a prejudged category labeled “Doomers”

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-24 Thread John Rose
On Wednesday, May 15, 2024, at 12:28 AM, Matt Mahoney wrote: > The top entry on the large text benchmark, nncp, uses a transformer. It is > closed source but there is a paper describing the algorithm. It doesn't > qualify for the Hutter prize because it takes 3 days to compress 1 GB on a > GPU

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread John Rose
On Tuesday, May 21, 2024, at 10:34 PM, Rob Freeman wrote: > Unless I've missed something in that presentation. Is there anywhere in the hour long presentation where they address a decoupling of category from pattern, and the implications of this for novelty of structure? I didn’t watch the video

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread John Rose
On Saturday, May 18, 2024, at 6:53 PM, Matt Mahoney wrote: > Surely you are aware of the 100% failure rate of symbolic AI over the last 70 > years? It should work in theory, but we have a long history of > underestimating the cost, lured by the early false success of covering half > of the

Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Friday, May 17, 2024, at 10:07 AM, Sun Tzu InfoDragon wrote: the AI just really a regurgitation engine that smooths everything over and appears smart. > > No you! I agree. Humans are like memetic switches, information repeaters, reservoirs. The intelligence is in the collective, we’re just

Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote: > Yet another demonstration of how Alan Turing poisoned the future with his > damnable "test" that places mimicry of humans over truth. This unintentional result of Turing’s idea is an intentional component of some religions. The elder

Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote: > Does everyone agree this is AGI? Ya is the AI just really a regurgitation engine that smooths everything over and appears smart. Kinda like a p-zombie, poke it, prod it, sounds generally intelligent!  But… artificial is what everyone

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-17 Thread John Rose
On Thursday, May 16, 2024, at 11:26 AM, ivan.moony wrote: > What should symbolic approach include to entirely replace neural networks > approach in creating true AI? Symbology will compress NN monstrosities… right?  Or should we say increasing efficiency via emerging symbolic activity for

Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Also, with TikTok governments don’t want the truth exposed because populations tend to get rebellious, so they want “unsafe” information suppressed. E.g. the Canadian trucker protests…. I sometimes wonder, do Canadians know that Trudeau is Castro’s biological son? Thanks TikTok, didn’t know that. And

Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Mike Gunderloy disconnected. Before the internet he did Factsheet Five which connected alt undergrounders. It really was an amazing publication that could be considered a type of pre-internet search engine with zines as websites. https://en.wikipedia.org/wiki/Factsheet_Five Then as the

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote: > All neural networks are trained by some variation of adjusting anything that > is adjustable in the direction that reduces error. The problem with KAN alone > is you have a lot fewer parameters to adjust, so you need a lot more neurons
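The quoted recipe — adjust anything adjustable in the direction that reduces error — is just gradient descent. A toy one-parameter sketch (my illustration, not code from the post or from the KAN paper):

```python
# Toy gradient descent: fit y_hat = w * x to a single example (x=2, y=6)
# by nudging w against the gradient of the squared error.
def step(w: float, x: float, y: float, lr: float = 0.1) -> float:
    grad = 2.0 * (w * x - y) * x  # d/dw of (w*x - y)^2
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = step(w, x=2.0, y=6.0)
print(round(w, 6))  # converges toward 3.0
```

Whether the adjustable things are MLP weights or KAN spline coefficients, the update rule is the same; only the count and shape of the parameters differ.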

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 12:13 AM, immortal.discoveries wrote: > But doesn't it have to run the code to find out no? The people who wrote the paper did some nice work on this. They laid it out perhaps intentionally so that doing it again with modified structures is easy to visualize. A

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-11 Thread John Rose
On Wednesday, May 08, 2024, at 6:24 PM, Keyvan M. Sadeghi wrote: >> Perhaps we need to sort out human condition issues that stem from human >> consciousness? > > Exactly what we should do and what needs funding, but shitheads of the world > be funding wars. And Altman :)) If Jeremy Griffith’s

[agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-11 Thread John Rose
Seems that software and more generalized mathematics should be discovering these new structures. If a system projects candidates into a test domain, abstracted, and wires them up for testing in a software host how would you narrow the search space of potential candidates? You’d need a more

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 9:41 PM, Keyvan M. Sadeghi wrote: > It's because of biology. There, I said it. But it's more nuanced. Brain cells > are almost identical at birth. The experiences that males and females go > through in life, however, are societally different. And that's rooted in >

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 6:53 PM, Matt Mahoney wrote: > Kolmogorov proved there is no such thing as an infinitely powerful compressor. Not even if you have infinite computing power. Compressing the universe is a unique case, especially when supplied with infinite computing power. Would the

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote: > We don't know the program that computes the universe because it would require the entire computing power of the universe to test the program by running it, about 10^120 or 2^400 steps. But we do have two useful approximations. If we set

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 8:04 AM, Quan Tesla wrote: > To suggest that every hypothetical universe has its own alpha, makes no > sense, as alpha is all encompassing as it is. You are exactly correct. There is another special case besides expressing the intelligence of the universe. And that

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Friday, May 03, 2024, at 7:10 PM, Matt Mahoney wrote: > So when we talk about the intelligence of the universe, we can only really > measure its computing power, which we generally correlate with prediction > power as a measure of intelligence. The universe’s overall prediction power should

Re: [agi] How AI will kill us

2024-05-07 Thread John Rose
For those genuinely interested in this particular imminent threat, here is a case study (long video) circulating on how western consciousness is being programmatically hijacked, presented by a gentleman who has been involved in and researching it for several decades. He describes this particular

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread John Rose
Expressing the intelligence of the universe is a unique case, versus say expressing the intelligence of an agent like a human mind. A human mind is very lossy versus the universe, where there is theoretically no loss. If lossy and lossless were a duality then the universe would be a singularity

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread John Rose
On Thursday, May 02, 2024, at 6:03 AM, YKY (Yan King Yin, 甄景贤) wrote: > It's not easy to prove new theorems in category theory or categorical > logic... though one open problem may be the formulation of fuzzy toposes. Or perhaps neutrosophic topos, Florentin Smarandache has written much

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-21 Thread John Rose
If the fine structure constant was tunable across different hypothetical universes how would that affect the overall intelligence of each universe? Dive into that rabbit hole, express and/or algorithmicize the intelligence of a universe. There are several potential ways to do that, some of

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-18 Thread John Rose
On Wednesday, April 17, 2024, at 11:17 AM, Alan Grimes wrote: > It's a stage play. I think Iran is either a puppet regime or living under blackmail. The entire thing was done to cover up / distract from / give an excuse for the collapse of the banking system. Simultaneously, the market riggers

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-18 Thread John Rose
On Thursday, April 11, 2024, at 1:13 PM, James Bowery wrote: > Matt's use of Planck units in his example does seem to support your > suspicion.  Moreover, David McGoveran's Ordering Operator Calculus approach > to the proton/electron mass ratio (based on just the first 3 of the 4 levels > of

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote: > What assumption is that? The assumption that alpha is unitless. Yes they cancel out but the simple process of cancelling units seems incomplete. Many of these constants though are re-representations of each other. How many

[agi] Re: Will SORA lead to AGI?

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 12:27 AM, immortal.discoveries wrote: > Anyway, very interesting thoughts I share here maybe? Hmm back to the > question, do we need video AI? Well, AGI is a good exact matcher if you get > me :):), so if it is going to think about how to improve AGI in video

Re: [agi] Re: Entering the frenzy.

2024-04-11 Thread John Rose
On Friday, April 05, 2024, at 6:22 PM, Alan Grimes wrote: > It's difficult to decide whether this is actually a good investment: Dell Precisions are very reliable IMO and the cloud is great for scaling up. You can script up a massive amount of compute in a cloud then turn it off when done. Is

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
> "Abstract Fundamental physical constants need not be constant, neither > spatially nor temporally." If we could remote view somehow across multiple multiverse instances simultaneously in various non-deterministic states and perceive how the universe structure varies across different alphas.

[agi] Re: Entering the frenzy.

2024-04-05 Thread John Rose
On Friday, April 05, 2024, at 12:21 AM, Alan Grimes wrote: > So let me tell you about the venture I want to start. I would like to put together a lab / research venture to sprint to achieve machine consciousness. I think there's enough tech available these days

Re: [agi] How AI will kill us

2024-04-04 Thread John Rose
I was just thinking here that the ordering of the consciousness in permutations of strings is related to their universal pattern frequency so would need algorithms to represent that... -- Artificial General Intelligence List: AGI Permalink:

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-04 Thread John Rose
On Wednesday, April 03, 2024, at 2:39 PM, James Bowery wrote: > * and I realize this is getting pretty far removed from anything relevant to > practical "AGI" except insofar as the richest man in the world (last I heard) > was the guy who wants to use it to discover what makes "the simulation"

Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
Or perhaps better, describe an algorithm that ranks the consciousness of some of the integers in [0..N]. There may be a stipulation that the integers be represented as atomic states all unobserved or all observed once… or allow ≥ 0 observations for all and see what various theories say.

Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
On Monday, April 01, 2024, at 3:24 PM, Matt Mahoney wrote: > Tononi doesn't even give a precise formula for what he calls phi, a measure > of consciousness, in spite of all the math in his papers. Under reasonable > interpretations of his hand wavy arguments, it gives absurd results. For  >

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread John Rose
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote: > Musical tuning and resonant conspiracy? Coincidentally, I spent some time > researching that just today. Seems, while tuning of instruments is a matter > of personal taste (e.g., Verdi tuning)  there's no real merit in the pitch of > a

Re: [agi] How AI will kill us

2024-04-01 Thread John Rose
On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote: > The problem with this explanation is that it says that all systems with > memory are conscious. A human with 10^9 bits of long term memory is a billion > times more conscious than a light switch. Is this definition really useful? A

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote: > Prediction measures intelligence. Compression measures prediction. Can you reorient the concept of time from prediction? If time is on an axis, if you reorient the time perspective is there something like energy complexity? The

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies wrote: > Who said anything about modifying the fine structure constant? I used the > terms: "coded and managed". > > I can see there's no serious interest here to take a fresh look at doable > AGI. Best to then leave

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:33 AM, Keyvan M. Sadeghi wrote: > For the same reason that we, humans, don't kill dogs to save the planet. Exactly. If people can’t snuff Wuffy to save the planet how could they decide to kill off a few billion useless eaters? Although central banks do fuel both

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:22 AM, Keyvan M. Sadeghi wrote: > With all due respect John, thinking an AI that has digested all human > knowledge, then goes on to kill us, is fucking delusional  Why is that delusional? It may be a logical decision for the AI to make an attempt to save the

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Thursday, March 28, 2024, at 5:55 PM, Keyvan M. Sadeghi wrote: > I'm not sure the granularity of feedback mechanism is the problem. I think > the problem lies in us not knowing if we're looping or contributing to the > future. This thread is a perfect example of how great minds can loop

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote: > The fine structure constant, in conjunction with the triple-alpha process > could be coded and managed via AI. Computational code.  Imagine the government in its profound wisdom declared that the fine structure constant needed to be

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote: > Alpha won't directly result in AGI, but it probably did result in all > intelligence on Earth, and would definitely resolve the power issues plaguing > AGI (and much more), especially as Moore's Law may be stalling, and > Kurzweil's

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote: > At least with an AI-enabled fine structure constant, we could've tried > repopulating selectively and perhaps reversed a lot of the damage we caused > Earth. The idea of AI-enabling the fine-structure constant is thought-provoking

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote: > One cannot disparage that which already makes no difference either way. > John's well, all about John, as can be expected. What?? LOL listen to you  On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote: > I've completed work and

Re: [agi] How AI will kill us

2024-03-28 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote: > In my 2008 distributed AGI proposal ( > https://mattmahoney.net/agi2.html ) I described a hostile peer to peer > network where information has negative value and people (and AI) > compete for attention. My focus was on distributing

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote: > If yes, what results have you to show for it? There’s no need to disparage the generous contributions by some highly valued and intelligent individuals on this list. I’ve obtained invaluable knowledge and insight from these

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote: > I predict a return of smallpox and polio because people won't get vaccinated. > We have already seen it happen with measles. I think it’s a much higher priority as to what’s with that non-human DNA integrated into chromosomes 9 and

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote: > Flat Earthers, including the majority who secretly know the world is round, have a more important message. How do you know what is true? We need to emphasize hard science versus intergenerational pseudo-religious belief systems

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote: > We have a fairly good understanding of biological self replicators and how to prime the immune systems of humans and farm animals to fight them. But how to fight misinformation? Regarding the kill-shots you emphasize reproduction

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote: > On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote: >> Also I have been eating foods containing DNA every day of my life without >> any bad effects. > > Why would that have bad effects? That used to not be an

Re: [agi] How AI will kill us

2024-03-24 Thread John Rose
On Saturday, March 23, 2024, at 6:10 PM, Matt Mahoney wrote: > But I wonder how we will respond to existential threats in the future, like > genetically engineered pathogens or self replicating nanotechnology. The > vaccine was the one bright spot in our mostly bungled response to covid-19. >

Re: [agi] How AI will kill us

2024-03-22 Thread John Rose
On Thursday, March 21, 2024, at 1:07 PM, James Bowery wrote: > Musk has set a trap far worse than censorship. I wasn’t really talking about Musk OK mutants? Though he had the cojones to do something big about the censorship and opened up a temporary window basically by acquiring Twitter. A

Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
On Thursday, March 21, 2024, at 11:41 AM, Keyvan M. Sadeghi wrote: > Worship stars, not humans  The censorship the last few years was like an eclipse.

Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
I don’t like beating this drum but this has to be studied in relation to unfriendly AGI and the WHO pandemic treaty is coming up in May which has to be stopped. Here is a passionate interview after Dr. Chris Shoemaker presenting in US congress, worth watching for a summary of the event and the

[agi] Re: "The Optimal Choice of Hypothesis Is the Weakest, Not the Shortest"

2024-01-29 Thread John Rose
Weakness reminds me of losylossslessyzygy (sic. lossy lossless syzygy)… hmm… I wonder if it’s related. Cardinality (Description Length) versus cardinality of its extension (Weakness)…

Re: [agi] How AI will kill us

2024-01-15 Thread John Rose
...continuing P# research… Though I will say that the nickname for P# code used for authoritarian and utilitarian zombification is Z# for zomby cybernetic script. And for language innovation which seems scarce lately since many new programming languages are syntactic rehashes, new intelligence

Re: [agi] How AI will kill us

2024-01-08 Thread John Rose
…continuing P# research… This book by Dr. Michael Nehls, “The Indoctrinated Brain,” offers an interesting neuroscience explanation and self-defense tips on how the contemporary zombification of human minds is being implemented. Essentially, he describes a mental immune system and there is a

Re: [agi] How AI will kill us

2023-12-23 Thread John Rose
…continuing The science changes when conflicts of interest are removed. This is a fact. And a behavior seems to be that injected individuals go into this state of “Where’s the evidence?” And when evidence is presented, they can’t acknowledge it or grok it and go into a type of loop: “Where’s

Re: [agi] How AI will kill us

2023-12-21 Thread John Rose
On Tuesday, December 19, 2023, at 9:47 AM, John Rose wrote: > On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote: >> That's just a silly conspiracy theory. Do you think polio and smallpox were >> also attempts to microchip us? > > That is a very strong signal in t

Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote: > That's just a silly conspiracy theory. Do you think polio and smallpox were > also attempts to microchip us? That is a very strong signal in the genomic data. What will be interesting is how this signal changes now that it has

Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote: > I'm not sure what your point is. The paper shows that the variants are from genomically generative non-mutative origination. Look at the step ladder in the mutation diagrams showing corrected previous mutations on each variant. IOW

Re: [agi] How AI will kill us

2023-12-18 Thread John Rose
Evidence comin' at ya, check out Supplemental Figure 2: https://zenodo.org/records/8361577

[agi] Re: By fire or by ice.

2023-12-08 Thread John Rose
On Wednesday, December 06, 2023, at 1:52 PM, Alan Grimes wrote: > Whether it is even possible for the fed to print that much money is an open question. I want to be on the other side of the financial collapse (fiscal armageddon) as soon as possible. Right now we are just waiting to see whether

Re: [agi] How AI will kill us

2023-12-07 Thread John Rose
On Wednesday, December 06, 2023, at 12:50 PM, James Bowery wrote: > Please note the, uh, popularity of the notion that there is no free will.  > Also note Matt's prior comment on recursive self improvement having started > with primitive technology.   > > From this "popular" perspective, there

Re: [agi] How AI will kill us

2023-12-06 Thread John Rose
On Tuesday, December 05, 2023, at 9:53 AM, James Bowery wrote: > The anti-vaxers, in the final analysis, and at an inchoate level, want to be > able to maintain strict migration into their territories of virulent agents > of whatever level of abstraction.  That is what makes the agents of The >

Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
People are going to go Amish. Faraday clothingware is gaining traction for the holidays. And mobile carriers are offering the iPhone 15 upgrade for next to nothing. I need someone to confirm that Voice-to-Skull is NOT in the 15 series but I keep getting blank stares…

Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
On Tuesday, December 05, 2023, at 2:14 AM, Alan Grimes wrote: > It's been said that the collective IQ of humanity rises with every vaccine death... I'm still waiting for it to reach room temperature... It’s not all bad news. I heard that in some places unvaxxed sperm is going for $1200 a pop.

Re: [agi] How AI will kill us

2023-12-04 Thread John Rose
On Sunday, December 03, 2023, at 10:00 AM, Matt Mahoney wrote: > I don't mean to sound dystopian. OK, let me present this a bit differently. THIS MUTHERFUCKER WAS DESIGNED TO KILL YOU Mkay? In a nice way, not using gas or guns or bombs. It was a trial balloon developed over several decades

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Sunday, December 03, 2023, at 7:59 AM, James Bowery wrote: > > A dream to some, a nightmare to others.  > All those paleolithic megaliths around the globe… hmmm…could they be from previous human technological cycles? Unless there's some

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote: > AGI gives whatever we want so that is the end of us, so idiotic conclusion, > sorry. Although I would say after looking at the definition of dystopia and once one fully understands the gravity of what is happening it is

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote: > I cannot believe this group is full of dystopians. Dystopia never happen, at > least not for long or globally. They are always localized in time or space. > Hollywood is full of dystopia because they lack imagination. 

Re: [agi] How AI will kill us

2023-12-02 Thread John Rose
People need to understand the significance of this global mindscrew. And ChatGPT is blue-pilled on the shots, as if anyone expected differently. What is absolutely amazing is that Steve Kirsch wasn’t able to speak at the MIT auditorium named after him since he was labeled as a misinformation

Re: [agi] Re: Lexical model learning for LLMs

2023-11-23 Thread John Rose
A compression ratio of 0.1072 seems like there is plenty of room still. What is the max ratio estimate, something like 0.08 to 0.04? Though 0.04 might be impossibly tight... even at 0.05 the resource consumption has got to grow exponentially out of control unless there are overlooked discoveries
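The ratio being discussed is simply compressed size over original size. A minimal illustration with zlib (my sketch — not the nncp codec or the benchmark corpus discussed above) on a deliberately repetitive string, which compresses far below what real text allows:

```python
import zlib

# ratio = compressed bytes / original bytes.
# Repetitive input compresses extremely well; natural-language text
# like enwik9 is much harder, hence ratios near 0.107 on the benchmark.
data = ("the quick brown fox jumps over the lazy dog. " * 200).encode()
compressed = zlib.compress(data, 9)
ratio = len(compressed) / len(data)
print(f"{ratio:.4f}")
```

The hypothesized floor of 0.04–0.08 for enwik9 is about the estimated entropy of English text, not a property of the compressor.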

Re: [agi] GPT-4 Turbo Fails AIC Test -- HARD

2023-11-09 Thread John Rose
All strings are numbers. Base 65,536 in Basic Multilingual Plane Unicode: printf(“D㈔읂咶㗏葥㩖ퟆ疾矟”);
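Read literally, the claim can be sketched in a few lines of Python (my illustration, not the poster's code): treat each BMP code point as one digit of a base-65,536 numeral, most significant digit first.

```python
# Interpret a BMP Unicode string as a base-65,536 integer and back.
# Caveat: leading U+0000 "digits" are lost on the round trip, just as
# leading zeros are lost in decimal notation.
def string_to_number(s: str) -> int:
    n = 0
    for ch in s:
        cp = ord(ch)
        assert cp < 0x10000, "digit must lie in the Basic Multilingual Plane"
        n = n * 0x10000 + cp
    return n

def number_to_string(n: int) -> str:
    digits = []
    while n:
        n, cp = divmod(n, 0x10000)
        digits.append(chr(cp))
    return "".join(reversed(digits)) or "\x00"

assert number_to_string(string_to_number("AGI")) == "AGI"
```

So every BMP string names a unique natural number, and every natural number (up to leading-zero ambiguity) names a string.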

Re: [agi] How AI will kill us

2023-10-25 Thread John Rose
Etcetera: https://correlation-canada.org/nobel-vaccine-and-all-cause-mortality/

Re: [agi] How AI will kill us

2023-10-25 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote: > 4. No credible scientific evidence for creating amyloid clots. Even the > possibly *extremely rare* cases that could *possibly* be attributed to the > vaccines are vanishingly small compared to the vaccine benefits in

Re: [agi] How AI will kill us

2023-09-28 Thread John Rose
On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote: > So like many scientists, they look for evidence that supports their theories > instead of evidence that refutes them. "In formulating their theories, “most physicists think about experiments,” he said. “I think they should be

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 12:13 PM, Matt Mahoney wrote: > If you are going to define consciousness as intelligence, then you need to > define intelligence. We have two widely accepted definitions applicable to > computers. It’s not difficult. Entertain a panpsychist model of

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote: > 1. Medical consciousness. The mental state of being awake and able to form > memories. The opposite of unconsciousness. > 2. Ethical consciousness. The property of higher animals that makes it > unethical to inflict pain or to

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Tuesday, September 26, 2023, at 11:53 PM, Quan Tesla wrote: > Incredible. We won't believe hard science, but we'll believe almost > everything else. This is "The Truman Show" all over again.  > Orch-OR is a macro-level, human-brain-centric consciousness theory, though it may apply to animals,

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 8:00 AM, Quan Tesla wrote: > Yip. It's called the xLimit. We've hit the ceiling...lol It's difficult to make progress on an email list if disengaged people spontaneously emit useless emotionally triggered quips...

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Tuesday, September 26, 2023, at 5:18 PM, EdFromNH wrote: > Of course it is possible that advanced AI might find organic lifeforms > genetically engineered with organic brains to be the most efficient way to > mass produce brainpower under their control, and that such intelligent > organic

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 3:17 PM, Nanograte Knowledge Technologies wrote: > But according to all scientific evidence, and even Dr. Stuart Hameroff's > latest theory of anaesthetics, such patients aren't conscious at all. It's > hard science. > > AGI pertains to human

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 8:56 AM, James Bowery wrote: > Since property rights are founded on civil society and civil society is > founded on the abrogation of individual male intrasexual selection by young > males in exchange for collectivized force that would act to protect >

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Monday, September 25, 2023, at 2:14 AM, Quan Tesla wrote: > But, in the new world (this dystopia we're existing in right now), free > lunches for AI owners are all the rage.  It's patently obvious in the total > onslaught by owners of cloud-based AI who are stealing IP, company video >

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 1:02 AM, Nanograte Knowledge Technologies wrote: > Are you asserting that a patient under anaesthesia is conscious? How then, if > there's no memory of experience, or sensation, or cognitive interaction, do > we claim human consciousness? > > Just a

Re: [agi] How AI will kill us

2023-09-25 Thread John Rose
On Monday, September 25, 2023, at 3:27 PM, Matt Mahoney wrote: > OK. Give me a test for consciousness and I'll do the experiment. If you mean > the Turing test then there is an easy proof. If you define consciousness as a panpsychist physical attribute then all implemented compressors would be

Re: [agi] How AI will kill us

2023-09-25 Thread John Rose
On Monday, September 25, 2023, at 1:09 PM, Matt Mahoney wrote: > For those still here, what is there left to do? I think we need a mathematical proof that conscious compressors compress better than non…

Re: [agi] How AI will kill us

2023-09-24 Thread John Rose
On Sunday, September 24, 2023, at 11:23 AM, Nanograte Knowledge Technologies wrote: > Would this perhaps be from the disease of an overly accentuated sense of self-important abilities, hypertension and performance anxiety?  > If you are an MD or medical professional and you challenge the

Re: [agi] How AI will kill us

2023-09-23 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote: > 4. No credible scientific evidence for creating amyloid clots. Even the > possibly *extremely rare* cases that could *possibly* be attributed to the > vaccines are vanishingly small compared to the vaccine benefits in

[agi] Re: Toyota's groundbreaking robotics research

2023-09-23 Thread John Rose
It's mimicry: taught by a human, then behaviors are remapped down. He mentioned generalizing behaviors soon. Not sure if they're doing a sort of “tweening” between behavior models or perhaps modelling behaviors of behaviors…

Re: [agi] How AI will kill us

2023-09-21 Thread John Rose
On Wednesday, September 20, 2023, at 5:57 PM, James Bowery wrote: > Sure, you'll have marching morons overpowering the scientific community -- > and the marching morons will frequently be carrying placards saying "Trust > The Science" as they crush any opposition. And, sure, they will have been

Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote: > 6. “Clot shot” is loaded language in that it's conspiracy-theory lingo > designed to induce fear, etc. Note title of message thread:  "How AI will kill us" The clot shots were designed and tested via extensive AI modelling

Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 4:09 PM, David Williams wrote: > Matt, on #12 under “what else,” you could add “gamma ray burst” > > Btw I only check in periodically, when did loaded language like “clot shot” > enter the thread? Not useful imho. It is from the autopsies. The traditional

Re: [agi] Programming Humans, in what Language?

2023-09-20 Thread John Rose
I wonder if people with calcified pineal glands from fluoride are more programmable than non-calcified. I assume calcified. Is it easier to program one that is unaware of their programming versus one that is aware of it? I suppose it depends on the “scripts”. P# is just a placeholder for a

Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 12:44 PM, James Bowery wrote: > At _that_ point, the "political" factions will, in fact, be the scientific > faction and the morons. > > Guess who wins? If morons control the debt-based fiat currency, they will win, since they can capture perception and they

Re: [agi] How AI will kill us

2023-09-19 Thread John Rose
On Friday, September 08, 2023, at 3:06 PM, John Rose wrote: > A big threat is for people’s minds to be programmed to believe that something > that will slowly kill them is good for them.  > > *hint* *hint* But then, in order to maximize the seductive efficacy of an AI generated Da

Re: [agi] Programming Humans, in what Language?

2023-09-10 Thread John Rose
On Saturday, September 09, 2023, at 4:21 PM, Matt Mahoney wrote: > Because of this rate limit, effectively programming humans requires holding > their attention for long periods. It takes about 8 hours per day for 2% of > your age to change your political views. Education and religious >
