Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-21 Thread John Rose
If the fine structure constant was tunable across different hypothetical universes how would that affect the overall intelligence of each universe? Dive into that rabbit hole, express and/or algorithmicize the intelligence of a universe. There are several potential ways to do that, some of

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-18 Thread John Rose
On Wednesday, April 17, 2024, at 11:17 AM, Alan Grimes wrote: > It's a stage play. I think Iran is either a puppet regime or living under blackmail. The entire thing was done to cover up / distract from / give an excuse for the collapse of the banking system. Simultaneously, the market riggers

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-18 Thread John Rose
On Thursday, April 11, 2024, at 1:13 PM, James Bowery wrote: > Matt's use of Planck units in his example does seem to support your > suspicion.  Moreover, David McGoveran's Ordering Operator Calculus approach > to the proton/electron mass ratio (based on just the first 3 of the 4 levels > of

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote: > What assumption is that? The assumption that alpha is unitless. Yes they cancel out but the simple process of cancelling units seems incomplete. Many of these constants though are re-representations of each other. How many

[agi] Re: Will SORA lead to AGI?

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 12:27 AM, immortal.discoveries wrote: > Anyway, very interesting thoughts I share here maybe? Hmm back to the > question, do we need video AI? Well, AGI is a good exact matcher if you get > me :):), so if it is going to think about how to improve AGI in video

Re: [agi] Re: Entering the frenzy.

2024-04-11 Thread John Rose
On Friday, April 05, 2024, at 6:22 PM, Alan Grimes wrote: > It's difficult to decide whether this is actually a good investment: Dell Precisions are very reliable IMO and the cloud is great for scaling up. You can script up a massive amount of compute in a cloud then turn it off when done. Is

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
> "Abstract Fundamental physical constants need not be constant, neither > spatially nor temporally." If we could remote view somehow across multiple multiverse instances simultaneously in various non-deterministic states and perceive how the universe structure varies across different alphas.

[agi] Re: Entering the frenzy.

2024-04-05 Thread John Rose
On Friday, April 05, 2024, at 12:21 AM, Alan Grimes wrote: > So let me tell you about the venture I want to start. I would like to put together a lab / research venture to sprint to achieve machine consciousness. I think there is enough tech available these days that

Re: [agi] How AI will kill us

2024-04-04 Thread John Rose
I was just thinking here that the ordering of the consciousness in permutations of strings is related to their universal pattern frequency so would need algorithms to represent that... -- Artificial General Intelligence List: AGI Permalink:

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-04 Thread John Rose
On Wednesday, April 03, 2024, at 2:39 PM, James Bowery wrote: > * and I realize this is getting pretty far removed from anything relevant to > practical "AGI" except insofar as the richest man in the world (last I heard) > was the guy who wants to use it to discover what makes "the simulation"

Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
Or perhaps better, describe an algorithm that ranks the consciousness of some of the integers in [0..N]. There may be a stipulation that the integers be represented as atomic states all unobserved or all observed once… or allow ≥ 0 observations for all and see what various theories say.
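There is no agreed-upon definition behind the ranking the post asks for; purely as a hypothetical sketch (the compressibility proxy and the `proxy_rank` helper are my own stand-ins, not anything proposed in the thread), one could rank the integers in [0..N] by how compressible their binary expansions are:

```python
import zlib

def proxy_rank(n: int) -> int:
    """Crude complexity proxy: zlib-compressed length of the integer's
    binary expansion, repeated so patterns have room to show up."""
    s = format(n, "b").encode() * 32
    return len(zlib.compress(s, 9))

N = 100
# Sort 0..N from "most ordered" to "least ordered" under the proxy.
ranking = sorted(range(N + 1), key=proxy_rank)
```

Whether such a proxy tracks anything about "consciousness" is exactly the open question the post raises; the sketch only shows that *some* total ordering is easy to produce once a measure is fixed.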

Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
On Monday, April 01, 2024, at 3:24 PM, Matt Mahoney wrote: > Tononi doesn't even give a precise formula for what he calls phi, a measure > of consciousness, in spite of all the math in his papers. Under reasonable > interpretations of his hand-wavy arguments, it gives absurd results. For  >

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread John Rose
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote: > Musical tuning and resonant conspiracy? Coincidentally, I spent some time > researching that just today. Seems, while tuning of instruments is a matter > of personal taste (e.g., Verdi tuning)  there's no real merit in the pitch of > a

Re: [agi] How AI will kill us

2024-04-01 Thread John Rose
On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote: > The problem with this explanation is that it says that all systems with > memory are conscious. A human with 10^9 bits of long term memory is a billion > times more conscious than a light switch. Is this definition really useful? A

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote: > Prediction measures intelligence. Compression measures prediction. Can you reorient the concept of time from prediction? If time is on an axis, if you reorient the time perspective is there something like energy complexity? The
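The link Matt states (compression measures prediction) can be seen in a small experiment: data a model can predict compresses far better than data it cannot. A minimal sketch, using zlib as a stand-in predictor (the streams and sizes are my own illustration):

```python
import os
import zlib

# Highly predictable stream: byte values 0..255 cycling.
predictable = bytes(i % 256 for i in range(10_000))
# Unpredictable stream: cryptographic random bytes.
random_data = os.urandom(10_000)

size_p = len(zlib.compress(predictable, 9))
size_r = len(zlib.compress(random_data, 9))
# The predictable stream shrinks to a small fraction of its size;
# the random stream barely compresses at all.
```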

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies wrote: > Who said anything about modifying the fine structure constant? I used the > terms: "coded and managed". > > I can see there's no serious interest here to take a fresh look at doable > AGI. Best to then leave

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:33 AM, Keyvan M. Sadeghi wrote: > For the same reason that we, humans, don't kill dogs to save the planet. Exactly. If people can’t snuff Wuffy to save the planet how could they decide to kill off a few billion useless eaters? Although central banks do fuel both

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:22 AM, Keyvan M. Sadeghi wrote: > With all due respect John, thinking an AI that has digested all human > knowledge, then goes on to kill us, is fucking delusional  Why is that delusional? It may be a logical decision for the AI to make an attempt to save the

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Thursday, March 28, 2024, at 5:55 PM, Keyvan M. Sadeghi wrote: > I'm not sure the granularity of feedback mechanism is the problem. I think > the problem lies in us not knowing if we're looping or contributing to the > future. This thread is a perfect example of how great minds can loop

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote: > The fine structure constant, in conjunction with the triple-alpha process > could be coded and managed via AI. Computational code.  Imagine the government in its profound wisdom declared that the fine structure constant needed to be

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote: > Alpha won't directly result in AGI, but it probsbly did result in all > intelligence on Earth, and would definitely resolve the power issues plaguing > AGI (and much more), especially as Moore's Law may be stalling, and > Kurzweil's

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote: > At least with an AI-enabled fine structure constant, we could've tried > repopulating selectively and perhaps reversed a lot of the damage we caused > Earth. The idea of AI-enabling the fine-structure constant is thought provoking

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote: > One cannot disparage that which already makes no difference either way. > John's well, all about John, as can be expected. What?? LOL listen to you  On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote: > I've completed work and

Re: [agi] How AI will kill us

2024-03-28 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote: > In my 2008 distributed AGI proposal ( > https://mattmahoney.net/agi2.html ) I described a hostile peer to peer > network where information has negative value and people (and AI) > compete for attention. My focus was on distributing

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote: > If yes, what results have you to show for it? There’s no need to disparage the generous contributions by some highly valued and intelligent individuals on this list. I’ve obtained invaluable knowledge and insight from these

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote: > I predict a return of smallpox and polio because people won't get vaccinated. > We have already seen it happen with measles. I think it’s a much higher priority as to what’s with that non-human DNA integrated into chromosomes 9 and

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote: > Flat Earthers, including the majority who secretly know the world is round, have a more important message. How do you know what is true? We need to emphasize hard science versus intergenerational pseudo-religious belief systems

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote: > We have a fairly good understanding of biological self replicators and how to prime the immune systems of humans and farm animals to fight them. But how to fight misinformation? Regarding the kill-shots you emphasize reproduction

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote: > On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote: >> Also I have been eating foods containing DNA every day of my life without >> any bad effects. > > Why would that have bad effects? That used to not be an

Re: [agi] How AI will kill us

2024-03-24 Thread John Rose
On Saturday, March 23, 2024, at 6:10 PM, Matt Mahoney wrote: > But I wonder how we will respond to existential threats in the future, like > genetically engineered pathogens or self replicating nanotechnology. The > vaccine was the one bright spot in our mostly bungled response to covid-19. >

Re: [agi] How AI will kill us

2024-03-22 Thread John Rose
On Thursday, March 21, 2024, at 1:07 PM, James Bowery wrote: > Musk has set a trap far worse than censorship. I wasn’t really talking about Musk OK mutants? Though he had the cojones to do something big about the censorship and opened up a temporary window basically by acquiring Twitter. A

Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
On Thursday, March 21, 2024, at 11:41 AM, Keyvan M. Sadeghi wrote: > Worship stars, not humans  The censorship the last few years was like an eclipse.

Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
I don’t like beating this drum but this has to be studied in relation to unfriendly AGI and the WHO pandemic treaty is coming up in May which has to be stopped. Here is a passionate interview after Dr. Chris Shoemaker presenting in US congress, worth watching for a summary of the event and the

[agi] Re: "The Optimal Choice of Hypothesis Is the Weakest, Not the Shortest"

2024-01-29 Thread John Rose
Weakness reminds me of losylossslessyzygy (sic. lossy lossless syzygy)… hmm… I wonder if it’s related. Cardinality (Description Length) versus cardinality of its extension (Weakness)…
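As a toy illustration of the distinction the paper's title draws (the 8-bit domain and the three hypotheses here are my own made-up example, not from the paper): weakness is the cardinality of a hypothesis's extension, and among hypotheses consistent with the observed data the weakest is the one admitting the most strings.

```python
from itertools import product

# Domain: all 8-bit binary strings.
strings = ["".join(bits) for bits in product("01", repeat=8)]
data = ["00000000", "00000001"]  # observations to explain

hypotheses = {
    "starts_with_0000000": lambda s: s.startswith("0000000"),
    "starts_with_0": lambda s: s.startswith("0"),
    "anything_goes": lambda s: True,
}

def weakness(h) -> int:
    """Cardinality of the hypothesis's extension over the domain."""
    return sum(1 for s in strings if h(s))

consistent = {n: h for n, h in hypotheses.items() if all(h(s) for s in data)}
weakest = max(consistent, key=lambda n: weakness(consistent[n]))
# "anything_goes" has the largest extension (256) among consistent hypotheses.
```

Description length pulls the other way: the shortest hypothesis need not be the weakest, which is the tension the paper is about.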

Re: [agi] How AI will kill us

2024-01-15 Thread John Rose
...continuing P# research… Though I will say that the nickname for P# code used for authoritarian and utilitarian zombification is Z# for zombie cybernetic script. And for language innovation which seems scarce lately since many new programming languages are syntactic rehashes, new intelligence

Re: [agi] How AI will kill us

2024-01-08 Thread John Rose
…continuing P# research… This book by Dr. Michael Nehls, “The Indoctrinated Brain,” offers an interesting neuroscience explanation and self-defense tips on how the contemporary zombification of human minds is being implemented. Essentially, he describes a mental immune system and there is a

Re: [agi] How AI will kill us

2023-12-23 Thread John Rose
…continuing The science changes when conflicts of interest are removed. This is a fact. And a behavior seems to be that injected individuals go into this state of “Where’s the evidence?” And when evidence is presented, they can’t acknowledge it or grok it and go into a type of loop: “Where’s

Re: [agi] How AI will kill us

2023-12-21 Thread John Rose
On Tuesday, December 19, 2023, at 9:47 AM, John Rose wrote: > On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote: >> That's just a silly conspiracy theory. Do you think polio and smallpox were >> also attempts to microchip us? > > That is a very strong signal in t

Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote: > That's just a silly conspiracy theory. Do you think polio and smallpox were > also attempts to microchip us? That is a very strong signal in the genomic data. What will be interesting is how this signal changes now that it has

Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote: > I'm not sure what your point is. The paper shows that the variants are from genomically generative non-mutative origination. Look at the step ladder in the mutation diagrams showing corrected previous mutations on each variant. IOW

Re: [agi] How AI will kill us

2023-12-18 Thread John Rose
Evidence comin' at ya, check out Supplemental Figure 2: https://zenodo.org/records/8361577

[agi] Re: By fire or by ice.

2023-12-08 Thread John Rose
On Wednesday, December 06, 2023, at 1:52 PM, Alan Grimes wrote: > Whether it is even possible for the fed to print that much money is an open question. I want to be on the other side of the financial collapse (fiscal armageddon) as soon as possible. Right now we are just waiting to see whether

Re: [agi] How AI will kill us

2023-12-07 Thread John Rose
On Wednesday, December 06, 2023, at 12:50 PM, James Bowery wrote: > Please note the, uh, popularity of the notion that there is no free will.  > Also note Matt's prior comment on recursive self improvement having started > with primitive technology.   > > From this "popular" perspective, there

Re: [agi] How AI will kill us

2023-12-06 Thread John Rose
On Tuesday, December 05, 2023, at 9:53 AM, James Bowery wrote: > The anti-vaxers, in the final analysis, and at an inchoate level, want to be > able to maintain strict migration into their territories of virulent agents > of whatever level of abstraction.  That is what makes the agents of The >

Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
People are going to go Amish. Faraday clothingware is gaining traction for the holidays. And mobile carriers are offering the iPhone 15 upgrade for next to nothing. I need someone to confirm that Voice-to-Skull is NOT in the 15 series but I keep getting blank stares…

Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
On Tuesday, December 05, 2023, at 2:14 AM, Alan Grimes wrote: > It's been said that the collective IQ of humanity rises with every vaccine death... I'm still waiting for it to reach room temperature... It’s not all bad news. I heard that in some places unvaxxed sperm is going for $1200 a pop.

Re: [agi] How AI will kill us

2023-12-04 Thread John Rose
On Sunday, December 03, 2023, at 10:00 AM, Matt Mahoney wrote: > I don't mean to sound dystopian. OK, let me present this a bit differently. THIS MUTHERFUCKER WAS DESIGNED TO KILL YOU Mkay? In a nice way, not using gas or guns or bombs. It was a trial balloon developed over several decades

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Sunday, December 03, 2023, at 7:59 AM, James Bowery wrote: > > A dream to some, a nightmare to others.  > All those paleolithic megaliths around the globe… hmmm…could they be from previous human technological cycles? Unless there's some

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote: > AGI gives whatever we want so that is the end of us, so idiotic conclusion, > sorry. Although I would say after looking at the definition of dystopia and once one fully understands the gravity of what is happening it is

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote: > I cannot believe this group is full of dystopians. Dystopia never happen, at > least not for long or globally. They are always localized in time or space. > Hollywood is full of dystopia because they lack imagination. 

Re: [agi] How AI will kill us

2023-12-02 Thread John Rose
People need to understand the significance of this global mindscrew. And ChatGPT is blue-pilled on the shots, as if anyone expected differently. What is absolutely amazing is that Steve Kirsch wasn’t able to speak at the MIT auditorium named after him since he was labeled as a misinformation

Re: [agi] Re: Lexical model learning for LLMs

2023-11-23 Thread John Rose
A compression ratio of 0.1072 seems like there is plenty of room still. What is the max ratio estimate, something like 0.08 to 0.04?  Though 0.04 might be impossibly tight... even at 0.05 the resource consumption has got to exponentiate out of control unless there are overlooked discoveries
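For context on figures like 0.1072: a compression ratio here is compressed size divided by original size, smaller being better. A minimal sketch of how such a ratio is computed (the sample text is arbitrary, chosen only to show the calculation):

```python
import lzma

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; smaller is better."""
    return len(lzma.compress(data, preset=9)) / len(data)

sample = b"the quick brown fox jumps over the lazy dog. " * 200
ratio = compression_ratio(sample)
# Repetitive text compresses far below 0.1072; natural-language corpora
# like enwik9 are much harder, which is why 0.04-0.08 would be remarkable.
```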

Re: [agi] GPT-4 Turbo Fails AIC Test -- HARD

2023-11-09 Thread John Rose
All strings are numbers. Base 65,536 in Basic Multilingual Plane Unicode: printf(“D㈔읂咶㗏葥㩖ퟆ疾矟”);
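The claim can be made concrete with a small sketch (my own illustration, not from the post): treat each Basic Multilingual Plane code point as one digit in base 65,536, so any such string maps to a unique number and back (up to leading U+0000 digits):

```python
BASE = 65536  # one digit per Basic Multilingual Plane code point

def string_to_number(s: str) -> int:
    n = 0
    for ch in s:
        cp = ord(ch)
        assert cp < BASE, "restricted to the Basic Multilingual Plane"
        n = n * BASE + cp
    return n

def number_to_string(n: int) -> str:
    digits = []
    while n > 0:
        n, cp = divmod(n, BASE)
        digits.append(chr(cp))
    return "".join(reversed(digits))
```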

Re: [agi] How AI will kill us

2023-10-25 Thread John Rose
Etcetera: https://correlation-canada.org/nobel-vaccine-and-all-cause-mortality/

Re: [agi] How AI will kill us

2023-10-25 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote: > 4. No credible scientific evidence for creating amyloid clots. Even the > possibly *extremely rare* cases that could *possibly* be attributed to the > vaccines are vanishingly small compared to the vaccine benefits in

Re: [agi] How AI will kill us

2023-09-28 Thread John Rose
On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote: > So like many scientists, they look for evidence that supports their theories > instead of evidence that refutes them. "In formulating their theories, “most physicists think about experiments,” he said. “I think they should be

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 12:13 PM, Matt Mahoney wrote: > If you are going to define consciousness as intelligence, then you need to > define intelligence. We have two widely accepted definitions applicable to > computers. It’s not difficult. Entertain a panpsychist model of

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote: > 1. Medical consciousness. The mental state of being awake and able to form > memories. The opposite of unconsciousness. > 2. Ethical consciousness. The property of higher animals that makes it > unethical to inflict pain or to

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Tuesday, September 26, 2023, at 11:53 PM, Quan Tesla wrote: > Incredible. We won't believe hard science, but we'll believe almost > everything else. This is "The Truman Show" all over again.  > Orch-OR is a macro-level, human-brain-centric consciousness theory, though it may apply to animals,

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Wednesday, September 27, 2023, at 8:00 AM, Quan Tesla wrote: > Yip. It's called the xLimit. We've hit the ceiling...lol It's difficult to make progress on an email list if disengaged people spontaneously emit useless emotionally triggered quips...

Re: [agi] How AI will kill us

2023-09-27 Thread John Rose
On Tuesday, September 26, 2023, at 5:18 PM, EdFromNH wrote: > Of course it is possible that advanced AI might find organic lifeforms > genetically engineered with organic brains to be the most efficient way to > mass produce brainpower under their control, and that such intelligent > organic

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 3:17 PM, Nanograte Knowledge Technologies wrote: > But according to all scientific evidence, and even Dr. Stuart Hammerhoff's > latest theory of anaesthetics, such patients aren't conscious at all. It's > hard science. > > AGI pertains to human

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 8:56 AM, James Bowery wrote: > Since property rights are founded on civil society and civil society is > founded on the abrogation of individual male intrasexual selection by young > males in exchange for collectivized force that would act to protect >

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Monday, September 25, 2023, at 2:14 AM, Quan Tesla wrote: > But, in the new world (this dystopia we're existing in right now), free > lunches for AI owners are all the rage.  It's patently obvious in the total > onslaught by owners of cloud-based AI who are stealing IP, company video >

Re: [agi] How AI will kill us

2023-09-26 Thread John Rose
On Tuesday, September 26, 2023, at 1:02 AM, Nanograte Knowledge Technologies wrote: > Are you asserting that a patient under aneasthesia is conscious? How then, if > there's no memory of experience, or sensation, or cognitive interaction, do > we claim human consciousness? > > Just a

Re: [agi] How AI will kill us

2023-09-25 Thread John Rose
On Monday, September 25, 2023, at 3:27 PM, Matt Mahoney wrote: > OK. Give me a test for consciousness and I'll do the experiment. If you mean > the Turing test then there is an easy proof. If you define consciousness as a panpsychist physical attribute then all implemented compressors would be

Re: [agi] How AI will kill us

2023-09-25 Thread John Rose
On Monday, September 25, 2023, at 1:09 PM, Matt Mahoney wrote: > For those still here, what is there left to do? I think we need a mathematical proof that conscious compressors compress better than non…

Re: [agi] How AI will kill us

2023-09-24 Thread John Rose
On Sunday, September 24, 2023, at 11:23 AM, Nanograte Knowledge Technologies wrote: > Would this perhaps be from the disease of an overly accentuated sense of self > important abilities, hypertension and performance anxiety?  > If you are an MD or medical professional and you challenge the

Re: [agi] How AI will kill us

2023-09-23 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote: > 4. No credible scientific evidence for creating amyloid clots. Even the > possibly *extremely rare* cases that could *possibly* be attributed to the > vaccines are vanishingly small compared to the vaccine benefits in

[agi] Re: Toyota's groundbreaking robotics research

2023-09-23 Thread John Rose
It's mimicry, taught by a human then remap behaviors down. He mentioned generalizing behaviors soon. Not sure if they're doing a sort of “tweening” between behavior models or perhaps modelling behaviors of behaviors…

Re: [agi] How AI will kill us

2023-09-21 Thread John Rose
On Wednesday, September 20, 2023, at 5:57 PM, James Bowery wrote: > Sure, you'll have marching morons overpowering the scientific community -- > and the marching morons will frequently be carrying placards saying "Trust > The Science" as they crush any opposition. And, sure, they will have been

Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 5:28 PM, David Williams wrote: > 6. “Clot shot” is loaded language in that it's conspiracy=theory lingo > designed to induce fear, etc. Note title of message thread:  "How AI will kill us" The clot shots were designed and tested via extensive AI modelling

Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 4:09 PM, David Williams wrote: > Matt, on #12 under “what else,” you could add “gamma ray burst” > > Btw I only check in periodically, when did loaded language like “clot shot” > enter the thread? Not useful imho. It is from the autopsies. The traditional

Re: [agi] Programming Humans, in what Language?

2023-09-20 Thread John Rose
I wonder if people with calcified pineal glands from fluoride are more programmable than non-calcified. I assume calcified. Is it easier to program one that is unaware of their programming versus one that is aware of it? I suppose it depends on the “scripts”. P# is just a placeholder for a

Re: [agi] How AI will kill us

2023-09-20 Thread John Rose
On Wednesday, September 20, 2023, at 12:44 PM, James Bowery wrote: > At _that_ point, the "political" factions will, in fact, be the scientific > faction and the morons. > > Guess who wins? If morons control the debt-based fiat currency they will win since they can capture perception and they

Re: [agi] How AI will kill us

2023-09-19 Thread John Rose
On Friday, September 08, 2023, at 3:06 PM, John Rose wrote: > A big threat is for people’s minds to be programmed to believe that something > that will slowly kill them is good for them.  > > *hint* *hint* But then, in order to maximize the seductive efficacy of an AI generated Da

Re: [agi] Programming Humans, in what Language?

2023-09-10 Thread John Rose
On Saturday, September 09, 2023, at 4:21 PM, Matt Mahoney wrote: > Because of this rate limit, effectively programming humans requires holding > their attention for long periods. It takes about 8 hours per day for 2% of > your age to change your political views. Education and religious >

[agi] Re: Programming Humans, in what Language?

2023-09-09 Thread John Rose
On Saturday, September 09, 2023, at 12:39 PM, ivan.moony wrote: > Maybe mass hypnosis, hehe? > >:-] Yes, like Mass Formation Psychosis... TDS, Trump Derangement Syndrome :) triggering woketards. Rhetoric, propaganda, neuro-linguistic programming are all components of it I suppose. The

[agi] Re: Programming Humans, in what Language?

2023-09-09 Thread John Rose
On Saturday, September 09, 2023, at 10:13 AM, ivan.moony wrote: > Actually, crux might be quite simple - a corner stone could be dealing with > causes and consequences similar to Prolog, but applied to and predicting > human behavior. More complicated part would be proper axiomatization of human

[agi] Re: image to 3D perfect almost so it seems

2023-09-09 Thread John Rose
Need to be able to do this with concept topologies...

Re: [agi] How AI will kill us

2023-09-09 Thread John Rose
On Saturday, September 09, 2023, at 12:05 AM, immortal.discoveries wrote: > Right now, that is true, humans can only get faster at upgrading themselves > by learning more knowledge and having more babies, hence getting even faster > at doing that, and so on repeat. Schools are indoctrination

[agi] Programming Humans, in what Language?

2023-09-08 Thread John Rose
So…… If we take the algebraic structure of consciousness perhaps we can unravel a language. I think it should be called P# so it has a built in memory manager, garbage collector, etc.. Although… it’s a good thing to figure out before embarking on such a BNF.. what type of memory management

Re: [agi] Re: How AI will kill us

2023-09-08 Thread John Rose
On Friday, September 08, 2023, at 2:42 PM, Matt Mahoney wrote: > Which of these are (a) realistic threats and (b) 100% fatal? A big threat is for people’s minds to be programmed to believe that something that will slowly kill them is good for them.  *hint* *hint*

[agi] Re: How AI will kill us

2023-09-06 Thread John Rose
On Wednesday, September 06, 2023, at 6:23 PM, Matt Mahoney wrote: > - 3% of deaths in the US are drug overdoses (mostly fentanyl and meth), > doubling in 6 years. You’re omitting the massive impact on mortality and natality from the clot shots. Safe and Effective is an illusion. Follow Ed Dowd

Re: [agi] Re: my take on the Singularity

2023-08-20 Thread John Rose
On Wednesday, August 16, 2023, at 3:38 PM, Matt Mahoney wrote: > On Tue, Aug 15, 2023, 7:44 AM John Rose wrote: >> I suspect human K complexity is larger than most people realize. > > It's about 10^9 bits of long term memory (based on recall tests for words and > images) and

Re: [agi] Re: my take on the Singularity

2023-08-15 Thread John Rose
On Monday, August 14, 2023, at 5:47 PM, Matt Mahoney wrote: > On Sun, Aug 13, 2023 at 9:27 PM John Rose wrote: >> A clone would have a different K complexity than you therefore it's not you. > No it wouldn't. An atom for atom identical copy of you will have exactly the same Kolmogoro

Re: [agi] Re: my take on the Singularity

2023-08-13 Thread John Rose
On Sunday, August 13, 2023, at 4:29 AM, immortal.discoveries wrote: > Yes it's true, an exact clone of me would be exactly me, yet, due to my silly > human nature that I truly yes "believe" and "abide by" (yes, I do), I believe > I am a viewer of the senses that pass through my eyes/brain, and

Re: [agi] Are we entering NI winter?

2023-08-07 Thread John Rose
LLM's are one giant low hanging proto-AGI fruit that many of us predicted years ago. At the time I was thinking everyone else is pursuing that so I'll do something else... They came in much later than I expected though. https://www.youtube.com/watch?v=-4D8EoI2Aqw

Re: [agi] Re: my take on the Singularity

2023-08-07 Thread John Rose
On Sunday, August 06, 2023, at 7:06 PM, James Bowery wrote: > Better compression requires not just correlation but causation, which is the > entire point of going beyond statistics/Shannon Information criteria to > dynamics/Algorithmic information criterion. > > Regardless of your values, if

Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-25 Thread John Rose
On Tuesday, July 25, 2023, at 10:10 AM, Matt Mahoney wrote: > Consciousness is what thinking feels like, the positive reinforcement that > you want to preserve by not dying. I use another definition, but taking yours, we could say that the Universe is simulating itself. Through all

Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-24 Thread John Rose
On Sunday, July 23, 2023, at 12:33 PM, Matt Mahoney wrote: > Right now we have we have working 10 qubit quantum computers that can factor > 48 bit numbers using a different algorithm called QAOA, a huge leap over > Shor's algorithm. They estimate RSA-2048 can be broken with 372 qubits and a >

Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-22 Thread John Rose
On Sunday, July 16, 2023, at 5:18 PM, Matt Mahoney wrote: > The result is helpful by giving one more reason why quantum computing is not > the path to AGI. Quantum computing is often misunderstood as computing an > exponential set of paths in parallel, when for most purposes it is actually >

[agi] Re: Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-15 Thread John Rose
On Friday, July 14, 2023, at 9:00 PM, James Bowery wrote: > https://www.mdpi.com/1099-4300/25/5/763 ChatGPT is so very useful for AGI research: Me:  "What is the Kolmogorov complexity of a string of qubits?" ChatGPT:  "In quantum information theory, the concept analogous to the Kolmogorov
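For the classical half of that question there is at least a practical handle: any lossless compressor gives an upper bound on the Kolmogorov complexity of a bit string (the true K is uncomputable). A minimal sketch, assuming DEFLATE as the compressor; for qubit strings the analogue would be quantum circuit complexity, which this classical sketch does not capture:

```python
import zlib

def k_upper_bound(data: bytes) -> int:
    """Upper bound on K(data) in bits via DEFLATE.

    The constant-size decoder is omitted, so this is only
    useful as a relative measure between strings."""
    return 8 * len(zlib.compress(data, 9))

structured = b"\x00" * 1024      # highly regular -> small bound
varied = bytes(range(256)) * 4   # more varied -> larger bound
print(k_upper_bound(structured), k_upper_bound(varied))
```

Different compressors give different bounds; none ever gives a lower bound, which is exactly why K itself stays out of reach.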

[agi] Re: Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-15 Thread John Rose
Nice one. I didn't realize the importance of circuit complexity. The paper discusses some of that. John -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-M66cdfc79bf200b9a6f634ef0 Delivery

Re: [agi] Re: HumesGuillotine

2023-07-10 Thread John Rose
OK, I see. For example, SINDy-PI, a variant of SINDy, has function libraries in MATLAB: https://github.com/dynamicslab/SINDy-PI/tree/master/PhysicalLawDiscovery/DoublePendulum/Functions And it's an algorithm for identifying nonlinear dynamics… so you're talking about using SINDy variants in
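The core idea behind those function libraries can be sketched in a few lines: fit the observed derivative as a sparse combination of candidate functions, then threshold small coefficients away (sequentially thresholded least squares). This is a toy sketch of the technique, not the SINDy-PI code itself; the candidate library and the test system are illustrative choices:

```python
import numpy as np

def library(x):
    # Candidate function library Theta(x) = [1, x, x^2, x^3]
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(theta, dxdt, threshold=0.05, iters=10):
    # Sequentially thresholded least squares: fit, zero small
    # coefficients, refit on the surviving columns, repeat.
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi

# Noise-free samples from dx/dt = -2x + 0.5x^3
x = np.linspace(-1, 1, 200)
dxdt = -2.0 * x + 0.5 * x**3
xi = stlsq(library(x), dxdt)
print(xi)  # only the x and x^3 coefficients survive
```

With noisy data the threshold becomes the interesting knob: too low and spurious terms survive, too high and real dynamics get pruned.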

Re: [agi] Re: HumesGuillotine

2023-07-10 Thread John Rose
On Sunday, July 09, 2023, at 7:06 PM, James Bowery wrote: > So when you mix the phrase "mathematical compression" with "constrained > compressor", you're conflating the engine doing the "mathematical > compression" with the only thing that is constrained in size:  The compressed > model

[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
In the spectrum of computable AIXI models, some known and some unknown, certainly some are better than others and there must be features for favoring those. This paper discusses some: https://arxiv.org/abs/1805.08592

[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
On Sunday, July 09, 2023, at 2:19 PM, James Bowery wrote: >> Good predictors (including AIXI, other AI, and lossless compression) >> are necessarily complex... >> Two examples: >> 1. SINDy, mentioned earlier, predicts a time series of real numbers by >> testing against a library of different

[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
On Saturday, July 08, 2023, at 2:32 PM, James Bowery wrote: > Here's the critical difference in a nutshell: > Shannon Information regards the first billion bits of the number Pi to be > random. That is to say, there is no description of those bits in terms of > Shannon Information that is
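That pi example can be made concrete: the digits pass statistical randomness tests (high Shannon entropy), yet their algorithmic information is tiny because a short program generates them. A hedged sketch using Machin's formula with big-integer arithmetic; a toy illustration, not an efficient spigot:

```python
def pi_digits(n: int) -> str:
    """First n decimal digits of pi (no decimal point)."""
    scale = 10 ** (n + 10)  # 10 guard digits

    def arctan_inv(x: int) -> int:
        # scale * arctan(1/x) via the alternating Taylor series
        total, power, k, sign = 0, scale // x, 1, 1
        while power:
            total += sign * (power // k)
            power //= x * x
            k += 2
            sign = -sign
        return total

    # Machin: pi = 16*arctan(1/5) - 4*arctan(1/239)
    return str(16 * arctan_inv(5) - 4 * arctan_inv(239))[:n]

print(pi_digits(20))
```

A few hundred bytes of program versus a billion statistically random bits of output: that gap is precisely what algorithmic information measures and Shannon information misses.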

Re: [agi] Re: AGI world

2023-05-02 Thread John Rose
On Monday, May 01, 2023, at 7:45 PM, Alan Grimes wrote: > I was feeling crackpot enough as it was writing that so I pulled my punches a little there. =\ You mean all these baseless debunked conspiracy theories that turn out to be fact? Someone/something doesn’t want us to know things. Once you
