[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
In the spectrum of computable AIXI models, some known and some unknown, certainly some are better than others and there must be features for favoring those. This paper discusses some: https://arxiv.org/abs/1805.08592

[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
On Sunday, July 09, 2023, at 2:19 PM, James Bowery wrote: >> Good predictors (including AIXI, other AI, and lossless compression) >> are necessarily complex... >> Two examples: >> 1. SINDy, mentioned earlier, predicts a time series of real numbers by >> testing against a library of different

[agi] Re: Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-15 Thread John Rose
Nice one. I didn't realize the importance of circuit complexity. The paper discusses some of that. John

[agi] Re: Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-15 Thread John Rose
On Friday, July 14, 2023, at 9:00 PM, James Bowery wrote: > https://www.mdpi.com/1099-4300/25/5/763 ChatGPT is so very useful for AGI research: Me:  "What is the Kolmogorov complexity of a string of qubits?" ChatGPT:  "In quantum information theory, the concept analogous to the Kolmogorov

Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-24 Thread John Rose
On Sunday, July 23, 2023, at 12:33 PM, Matt Mahoney wrote: > Right now we have working 10 qubit quantum computers that can factor > 48 bit numbers using a different algorithm called QAOA, a huge leap over > Shor's algorithm. They estimate RSA-2048 can be broken with 372 qubits and a >
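
A quick scale check (my own sketch, not from the thread, assuming sympy is available): a 48-bit number is trivial to factor classically, which is why the quoted QAOA result matters as a demonstration of method rather than of raw power.

    # Build a ~48-bit semiprime and factor it classically; this completes
    # in milliseconds, versus the 372+ qubits projected for RSA-2048.
    from sympy import factorint, nextprime

    p, q = nextprime(2**24), nextprime(2**24 + 1000)
    n = int(p * q)
    print(n.bit_length(), factorint(n))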

Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-25 Thread John Rose
On Tuesday, July 25, 2023, at 10:10 AM, Matt Mahoney wrote: > Consciousness is what thinking feels like, the positive reinforcement that > you want to preserve by not dying. I use another definition but taking yours then we could say that the Universe is self-simulating. Through all

Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-22 Thread John Rose
On Sunday, July 16, 2023, at 5:18 PM, Matt Mahoney wrote: > The result is helpful by giving one more reason why quantum computing is not > the path to AGI. Quantum computing is often misunderstood as computing an > exponential set of paths in parallel, when for most purposes it is actually >

Re: [agi] Are we entering NI winter?

2023-08-07 Thread John Rose
LLMs are one giant low-hanging proto-AGI fruit that many of us predicted years ago. At the time I was thinking everyone else is pursuing that so I'll do something else... They came in much later than I expected though. https://www.youtube.com/watch?v=-4D8EoI2Aqw

Re: [agi] Re: my take on the Singularity

2023-08-07 Thread John Rose
On Sunday, August 06, 2023, at 7:06 PM, James Bowery wrote: > Better compression requires not just correlation but causation, which is the > entire point of going beyond statistics/Shannon Information criteria to > dynamics/Algorithmic information criterion. > > Regardless of your values, if

Re: [agi] How AI will kill us

2023-12-18 Thread John Rose
Evidence comin' at ya, check out Supplemental Figure 2: https://zenodo.org/records/8361577

Re: [agi] How AI will kill us

2023-12-23 Thread John Rose
…continuing. The science changes when conflicts of interest are removed. This is a fact. And a behavior seems to be that injected individuals go into this state of “Where’s the evidence?” And when evidence is presented, they can’t acknowledge it or grok it and go into a type of loop: “Where’s

Re: [agi] How AI will kill us

2023-12-21 Thread John Rose
On Tuesday, December 19, 2023, at 9:47 AM, John Rose wrote: > On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote: >> That's just a silly conspiracy theory. Do you think polio and smallpox were >> also attempts to microchip us? > > That is a very strong signal in t

Re: [agi] How AI will kill us

2024-01-15 Thread John Rose
...continuing P# research… Though I will say that the nickname for P# code used for authoritarian and utilitarian zombification is Z# for zombie cybernetic script. And for language innovation, which seems scarce lately since many new programming languages are syntactic rehashes, new intelligence

Re: [agi] How AI will kill us

2024-01-08 Thread John Rose
…continuing P# research… This book by Dr. Michael Nehls, “The Indoctrinated Brain,” offers an interesting neuroscience explanation and self-defense tips on how the contemporary zombification of human minds is being implemented. Essentially, he describes a mental immune system and there is a

Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote: > That's just a silly conspiracy theory. Do you think polio and smallpox were > also attempts to microchip us? That is a very strong signal in the genomic data. What will be interesting is how this signal changes now that it has

Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote: > I'm not sure what your point is. The paper shows that the variants are from genomically generative non-mutative origination. Look at the step ladder in the mutation diagrams showing corrected previous mutations on each variant. IOW

Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
People are going to go Amish. Faraday clothing is gaining traction for the holidays. And mobile carriers are offering the iPhone 15 upgrade for next to nothing. I need someone to confirm that Voice-to-Skull is NOT in the 15 series but I keep getting blank stares…

Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
On Tuesday, December 05, 2023, at 2:14 AM, Alan Grimes wrote: > It's been said that the collective IQ of humanity rises with every vaccine death... I'm still waiting for it to reach room temperature... It’s not all bad news. I heard that in some places unvaxxed sperm is going for $1200 a pop.

Re: [agi] How AI will kill us

2023-12-06 Thread John Rose
On Tuesday, December 05, 2023, at 9:53 AM, James Bowery wrote: > The anti-vaxers, in the final analysis, and at an inchoate level, want to be > able to maintain strict migration into their territories of virulent agents > of whatever level of abstraction.  That is what makes the agents of The >

Re: [agi] Re: Lexical model learning for LLMs

2023-11-23 Thread John Rose
A compression ratio of 0.1072 seems like there is plenty of room still. What is the max ratio estimate, something like 0.08 to 0.04? Though 0.04 might be impossibly tight... even at 0.05 the resource consumption has got to exponentiate out of control unless there are overlooked discoveries
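
For concreteness, a back-of-envelope sketch (mine, assuming the 1 GB corpus of the large-text benchmark) of what those ratios mean in output size:

    # Compressed sizes of a 1 GB corpus at the ratios under discussion.
    corpus = 10**9  # bytes, assumed benchmark size
    for ratio in (0.1072, 0.08, 0.05, 0.04):
        print(f"{ratio:.4f} -> {corpus * ratio / 10**6:.0f} MB")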

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote: > AGI gives whatever we want so that is the end of us, so idiotic conclusion, > sorry. Although I would say, after looking at the definition of dystopia, and once one fully understands the gravity of what is happening, it is

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Sunday, December 03, 2023, at 7:59 AM, James Bowery wrote: > > A dream to some, a nightmare to others.  > All those paleolithic megaliths around the globe… hmmm… could they be from previous human technological cycles? Unless there's some

Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote: > I cannot believe this group is full of dystopians. Dystopia never happen, at > least not for long or globally. They are always localized in time or space. > Hollywood is full of dystopia because they lack imagination. 

Re: [agi] How AI will kill us

2023-12-02 Thread John Rose
People need to understand the significance of this global mindscrew. And ChatGPT is blue-pilled on the shots, as if anyone expected differently. What is absolutely amazing is that Steve Kirsch wasn’t able to speak at the MIT auditorium named after him since he was labeled as a misinformation

Re: [agi] How AI will kill us

2023-12-04 Thread John Rose
On Sunday, December 03, 2023, at 10:00 AM, Matt Mahoney wrote: > I don't mean to sound dystopian. OK, let me present this a bit differently. THIS MUTHERFUCKER WAS DESIGNED TO KILL YOU Mkay? In a nice way, not using gas or guns or bombs. It was a trial balloon developed over several decades

[agi] Re: By fire or by ice.

2023-12-08 Thread John Rose
On Wednesday, December 06, 2023, at 1:52 PM, Alan Grimes wrote: > Whether it is even possible for the fed to print that much money is an open question. I want to be on the other side of the financial collapse (fiscal armageddon) as soon as possible. Right now we are just waiting to see whether

Re: [agi] How AI will kill us

2023-12-07 Thread John Rose
On Wednesday, December 06, 2023, at 12:50 PM, James Bowery wrote: > Please note the, uh, popularity of the notion that there is no free will.  > Also note Matt's prior comment on recursive self improvement having started > with primitive technology.   > > From this "popular" perspective, there

[agi] Re: "The Optimal Choice of Hypothesis Is the Weakest, Not the Shortest"

2024-01-29 Thread John Rose
Weakness reminds me of losylossslessyzygy (sic. lossy lossless syzygy)… hmm… I wonder if it’s related. Cardinality (Description Length) versus cardinality of its extension (Weakness)…

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote: > What assumption is that? The assumption that alpha is unitless. Yes, they cancel out, but the simple process of cancelling units seems incomplete. Many of these constants though are re-representations of each other. How many
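
A minimal numeric check of the point being debated, alpha as a dimensionless combination of dimensioned constants (my sketch, using SI values from scipy.constants):

    # e^2 / (4*pi*eps0*hbar*c): the SI units cancel, leaving ~1/137.036.
    from scipy.constants import e, epsilon_0, hbar, c, pi

    alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
    print(alpha, 1 / alpha)  # ~0.0072973525693  ~137.036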

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-18 Thread John Rose
On Wednesday, April 17, 2024, at 11:17 AM, Alan Grimes wrote: > It's a stage play. I think Iran is either a puppet regime or living under blackmail. The entire thing was done to cover up / distract from / give an excuse for the collapse of the banking system. Simultaneously, the market riggers

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-18 Thread John Rose
On Thursday, April 11, 2024, at 1:13 PM, James Bowery wrote: > Matt's use of Planck units in his example does seem to support your > suspicion.  Moreover, David McGoveran's Ordering Operator Calculus approach > to the proton/electron mass ratio (based on just the first 3 of the 4 levels > of

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-21 Thread John Rose
If the fine structure constant were tunable across different hypothetical universes, how would that affect the overall intelligence of each universe? Dive into that rabbit hole, express and/or algorithmicize the intelligence of a universe. There are several potential ways to do that, some of

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
> "Abstract Fundamental physical constants need not be constant, neither > spatially nor temporally." If we could remote view somehow across multiple multiverse instances simultaneously in various non-deterministic states and perceive how the universe structure varies across different alphas.

Re: [agi] Re: Entering the frenzy.

2024-04-11 Thread John Rose
On Friday, April 05, 2024, at 6:22 PM, Alan Grimes wrote: > It's difficult to decide whether this is actually a good investment: Dell Precisions are very reliable IMO and the cloud is great for scaling up. You can script up a massive amount of compute in a cloud then turn it off when done. Is

[agi] Re: Will SORA lead to AGI?

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 12:27 AM, immortal.discoveries wrote: > Anyway, very interesting thoughts I share here maybe? Hmm back to the > question, do we need video AI? Well, AGI is a good exact matcher if you get > me :):), so if it is going to think about how to improve AGI in video

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread John Rose
Expressing the intelligence of the universe is a unique case, versus say expressing the intelligence of an agent like a human mind. A human mind is very lossy versus the universe where there is theoretically no loss. If lossy and lossless were a duality then the universe would be a singularity

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread John Rose
On Thursday, May 02, 2024, at 6:03 AM, YKY (Yan King Yin, 甄景贤) wrote: > It's not easy to prove new theorems in category theory or categorical > logic... though one open problem may be the formulation of fuzzy toposes. Or perhaps neutrosophic topos, Florentin Smarandache has written much

[agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-11 Thread John Rose
Seems that software and more generalized mathematics should be discovering these new structures. If a system projects candidates into a test domain, abstracted, and wires them up for testing in a software host how would you narrow the search space of potential candidates? You’d need a more

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-11 Thread John Rose
On Wednesday, May 08, 2024, at 6:24 PM, Keyvan M. Sadeghi wrote: >> Perhaps we need to sort out human condition issues that stem from human >> consciousness? > > Exactly what we should do and what needs funding, but shitheads of the world > be funding wars. And Altman :)) If Jeremy Griffith’s

Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Mike Gunderloy disconnected. Before the internet he did Factsheet Five which connected alt undergrounders. It really was an amazing publication that could be considered a type of pre-internet search engine with zines as websites. https://en.wikipedia.org/wiki/Factsheet_Five Then as the

Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Also, with TikTok governments don’t want the truth exposed because populations tend to get rebellious so they want “unsafe” information suppressed. E.g. Canadian trucker protests…. I sometimes wonder do Canadians know that Trudeau is Castro’s biological son? Thanks TikTok, didn’t know that. And

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote: > All neural networks are trained by some variation of adjusting anything that > is adjustable in the direction that reduces error. The problem with KAN alone > is you have a lot fewer parameters to adjust, so you need a lot more neurons
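
A rough parameter-count comparison behind Matt's point (my own accounting, assuming each KAN edge carries roughly G + k learnable spline coefficients for grid size G and spline order k, versus one weight per MLP edge, so matching a parameter budget changes the usable width):

    # Parameters per layer: MLP edges carry 1 weight (plus a bias per
    # output); KAN edges carry roughly G + k spline coefficients.
    def mlp_params(widths):
        return sum(a * b + b for a, b in zip(widths, widths[1:]))

    def kan_params(widths, G=5, k=3):
        return sum(a * b * (G + k) for a, b in zip(widths, widths[1:]))

    print(mlp_params([2, 64, 64, 1]))          # 4417
    print(kan_params([2, 64, 64, 1]))          # 34304: same widths, ~8x params
    print(kan_params([2, 8, 8, 1], G=5, k=3))  # 704: far narrower at equal budget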

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 12:13 AM, immortal.discoveries wrote: > But doesn't it have to run the code to find out no? The people who wrote the paper did some nice work on this. They laid it out perhaps intentionally so that doing it again with modified structures is easy to visualize. A

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote: > We don't know the program that computes the universe because it would require the entire computing power of the universe to test the program by running it, about 10^120 or 2^400 steps. But we do have two useful approximations. If we set
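
The two figures in the quote agree, as a one-line check shows (my arithmetic): 10^120 steps is about 2^398.6, i.e. roughly 2^400.

    import math
    print(120 * math.log2(10))  # ~398.63, so 10**120 is about 2**400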

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 6:53 PM, Matt Mahoney wrote: > Kolmogorov proved there is no such thing as an infinitely powerful compressor. Not even if you have infinite computing power. Compressing the universe is a unique case especially being supplied with infinite computing power. Would the

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 9:41 PM, Keyvan M. Sadeghi wrote: > It's because of biology. There, I said it. But it's more nuanced. Brain cells > are almost identical at birth. The experiences that males and females go > through in life, however, are societally different. And that's rooted in >

Re: [agi] How AI will kill us

2024-05-07 Thread John Rose
For those genuinely interested in this particular imminent threat here is a case study (long video) circulating on how western consciousness is being programmatically hijacked, presented by a gentleman who has been involved in and researching it for several decades. He describes this particular

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 8:04 AM, Quan Tesla wrote: > To suggest that every hypothetical universe has its own alpha, makes no > sense, as alpha is all encompassing as it is. You are exactly correct. There is another special case besides expressing the intelligence of the universe. And that

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Friday, May 03, 2024, at 7:10 PM, Matt Mahoney wrote: > So when we talk about the intelligence of the universe, we can only really > measure its computing power, which we generally correlate with prediction > power as a measure of intelligence. The universe's overall prediction power should

Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
I don’t like beating this drum but this has to be studied in relation to unfriendly AGI, and the WHO pandemic treaty is coming up in May which has to be stopped. Here is a passionate interview with Dr. Chris Shoemaker after presenting in US Congress, worth watching for a summary of the event and the

Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
On Thursday, March 21, 2024, at 11:41 AM, Keyvan M. Sadeghi wrote: > Worship stars, not humans  The censorship the last few years was like an eclipse.

Re: [agi] How AI will kill us

2024-03-24 Thread John Rose
On Saturday, March 23, 2024, at 6:10 PM, Matt Mahoney wrote: > But I wonder how we will respond to existential threats in the future, like > genetically engineered pathogens or self replicating nanotechnology. The > vaccine was the one bright spot in our mostly bungled response to covid-19. >

Re: [agi] How AI will kill us

2024-03-22 Thread John Rose
On Thursday, March 21, 2024, at 1:07 PM, James Bowery wrote: > Musk has set a trap far worse than censorship. I wasn’t really talking about Musk OK mutants? Though he had the cojones to do something big about the censorship and opened up a temporary window basically by acquiring Twitter. A

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:22 AM, Keyvan M. Sadeghi wrote: > With all due respect John, thinking an AI that has digested all human > knowledge, then goes on to kill us, is fucking delusional  Why is that delusional? It may be a logical decision for the AI to make an attempt to save the

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Thursday, March 28, 2024, at 5:55 PM, Keyvan M. Sadeghi wrote: > I'm not sure the granularity of feedback mechanism is the problem. I think > the problem lies in us not knowing if we're looping or contributing to the > future. This thread is a perfect example of how great minds can loop

Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:33 AM, Keyvan M. Sadeghi wrote: > For the same reason that we, humans, don't kill dogs to save the planet. Exactly. If people can’t snuff Wuffy to save the planet how could they decide to kill off a few billion useless eaters? Although central banks do fuel both

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote: > The fine structure constant, in conjunction with the triple-alpha process > could be coded and managed via AI. Computational code.  Imagine the government in its profound wisdom declared that the fine structure constant needed to be

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies wrote: > Who said anything about modifying the fine structure constant? I used the > terms: "coded and managed". > > I can see there's no serious interest here to take a fresh look at doable > AGI. Best to then leave

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote: > Prediction measures intelligence. Compression measures prediction. Can you reorient the concept of time from prediction? If time is on an axis, if you reorient the time perspective is there something like energy complexity? The
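
A toy illustration of the "compression measures prediction" premise (my sketch, not Matt's code): a compressor's output length upper-bounds the average bits needed per symbol, which is exactly a predictive log-loss.

    import os, zlib

    def bits_per_byte(data: bytes) -> float:
        # Compressed bits divided by input bytes: lower = more predictable.
        return 8 * len(zlib.compress(data, 9)) / len(data)

    print(bits_per_byte(b"the cat sat on the mat " * 100))  # well under 1
    print(bits_per_byte(os.urandom(2300)))                  # close to 8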

Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
On Monday, April 01, 2024, at 3:24 PM, Matt Mahoney wrote: > Tononi doesn't even give a precise formula for what he calls phi, a measure > of consciousness, in spite of all the math in his papers. Under reasonable > interpretations of his hand wavy arguments, it gives absurd results. For  >

Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
Or perhaps better, describe an algorithm that ranks the consciousness of some of the integers in [0..N]. There may be a stipulation that the integers be represented as atomic states all unobserved or all observed once… or allow ≥ 0 observations for all and see what various theories say.
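
One hedged way to make the exercise concrete (entirely my stand-in, not a real consciousness measure): rank the integers by a crude computable complexity proxy, here the zlib-compressed size of a repeated binary expansion.

    import zlib

    def proxy(n: int) -> int:
        # Repeat the binary expansion so the compressor can exploit regularity.
        return len(zlib.compress(format(n, "b").encode() * 64, 9))

    ranking = sorted(range(65), key=proxy)
    print(ranking[:8])  # integers with highly regular expansions sort first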

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread John Rose
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote: > Musical tuning and resonant conspiracy? Coincidentally, I spent some time > researching that just today. Seems, while tuning of instruments is a matter > of personal taste (e.g., Verdi tuning), there's no real merit in the pitch of > a

Re: [agi] How AI will kill us

2024-04-01 Thread John Rose
On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote: > The problem with this explanation is that it says that all systems with > memory are conscious. A human with 10^9 bits of long term memory is a billion > times more conscious than a light switch. Is this definition really useful? A

[agi] Re: Entering the frenzy.

2024-04-05 Thread John Rose
On Friday, April 05, 2024, at 12:21 AM, Alan Grimes wrote: > So let me tell you about the venture I want to start. I would like to put together a lab / research venture to sprint to achieve machine consciousness. I think there's enough tech available these days

Re: [agi] How AI will kill us

2024-04-04 Thread John Rose
I was just thinking here that the ordering of the consciousness in permutations of strings is related to their universal pattern frequency so would need algorithms to represent that...

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-04 Thread John Rose
On Wednesday, April 03, 2024, at 2:39 PM, James Bowery wrote: > * and I realize this is getting pretty far removed from anything relevant to > practical "AGI" except insofar as the richest man in the world (last I heard) > was the guy who wants to use it to discover what makes "the simulation"

Re: [agi] How AI will kill us

2024-03-28 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote: > In my 2008 distributed AGI proposal ( > https://mattmahoney.net/agi2.html ) I described a hostile peer to peer > network where information has negative value and people (and AI) > compete for attention. My focus was on distributing

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote: > If yes, what results have you to show for it? There’s no need to disparage the generous contributions by some highly valued and intelligent individuals on this list. I’ve obtained invaluable knowledge and insight from these

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote: > One cannot disparage that which already makes no difference either way. > John's well, all about John, as can be expected. What?? LOL listen to you  On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote: > I've completed work and

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote: > At least with an AI-enabled fine structure constant, we could've tried > repopulating selectively and perhaps reversed a lot of the damage we caused > Earth. The idea of AI-enabling the fine-structure constant is thought-provoking

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote: > On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote: >> Also I have been eating foods containing DNA every day of my life without >> any bad effects. > > Why would that have bad effects? That used to not be an

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote: > Alpha won't directly result in AGI, but it probably did result in all > intelligence on Earth, and would definitely resolve the power issues plaguing > AGI (and much more), especially as Moore's Law may be stalling, and > Kurzweil's

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote: > We have a fairly good understanding of biological self replicators and how to prime the immune systems of humans and farm animals to fight them. But how to fight misinformation? Regarding the kill-shots you emphasize reproduction

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote: > Flat Earthers, including the majority who secretly know the world is round, have a more important message. How do you know what is true? We need to emphasize hard science versus intergenerational pseudo-religious belief systems

Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote: > I predict a return of smallpox and polio because people won't get vaccinated. > We have already seen it happen with measles. I think it’s a much higher priority as to what’s with that non-human DNA integrated into chromosomes 9 and

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-24 Thread John Rose
On Wednesday, May 15, 2024, at 12:28 AM, Matt Mahoney wrote: > The top entry on the large text benchmark, nncp, uses a transformer. It is > closed source but there is a paper describing the algorithm. It doesn't > qualify for the Hutter prize because it takes 3 days to compress 1 GB on a > GPU

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread John Rose
On Saturday, May 18, 2024, at 6:53 PM, Matt Mahoney wrote: > Surely you are aware of the 100% failure rate of symbolic AI over the last 70 > years? It should work in theory, but we have a long history of > underestimating the cost, lured by the early false success of covering half > of the

Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Friday, May 17, 2024, at 10:07 AM, Sun Tzu InfoDragon wrote: >> the AI just really a regurgitation engine that smooths everything over and appears smart. > No you! I agree. Humans are like memetic switches, information repeaters, reservoirs. The intelligence is in the collective, we’re just

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-17 Thread John Rose
On Thursday, May 16, 2024, at 11:26 AM, ivan.moony wrote: > What should symbolic approach include to entirely replace neural networks > approach in creating true AI? Symbology will compress NN monstrosities… right?  Or I should say, increasing efficiency via emerging symbolic activity for

Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote: > Yet another demonstration of how Alan Turing poisoned the future with his > damnable "test" that places mimicry of humans over truth. This unintentional result of Turing’s idea is an intentional component of some religions. The elder

Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote: > Does everyone agree this is AGI? Ya is the AI just really a regurgitation engine that smooths everything over and appears smart. Kinda like a p-zombie, poke it, prod it, sounds generally intelligent!  But… artificial is what everyone

Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Monday, May 27, 2024, at 6:58 PM, Keyvan M. Sadeghi wrote: > Good thing is some productive chat happens outside this forum: > > https://x.com/ylecun/status/1794998977105981950 Smearing those who are concerned about particular AI risks by pooling them into a prejudged category entitled “Doomers”

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread John Rose
On Tuesday, May 21, 2024, at 10:34 PM, Rob Freeman wrote: > Unless I've missed something in that presentation. Is there anywhere in the hour long presentation where they address a decoupling of category from pattern, and the implications of this for novelty of structure? I didn’t watch the video

Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Wednesday, May 29, 2024, at 3:56 PM, Keyvan M. Sadeghi wrote: > Judging the future of AGI (not distant, 5 years), with our current premature > brains is a joke. Worse, it's an unholy/profitable business for Sam Altmans / > Eric Schmidts / Elon Musks of the world. I was referring to

Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote: > https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf I know, I know that we could construct a test that breaks the p-zombie barrier. Using text alone though? Maybe not. Unless we could somehow make our brains

Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 2:33 PM, Mike Archbold wrote: > Now time for the usual goal post movers A few years ago it would be a big thing, though I remember these chatbots from the BBS days in the early '90s that were pretty convincing. Some of those bots were hybrids, part human part bot so

Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Saturday, June 01, 2024, at 7:03 PM, immortal.discoveries wrote: > I love how a thread I started ends up with Matt and Jim and others having a > conversation again lol. Tame the butterfly effect. Just imagine you switch a couple words around and the whole world starts conversing.

Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Sunday, June 02, 2024, at 10:32 AM, Keyvan M. Sadeghi wrote: > Aka click bait? :) ;) Jabbed? https://www.bitchute.com/video/jB9JXD9lvK8m/

Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Sunday, June 02, 2024, at 9:04 AM, Sun Tzu InfoDragon wrote: > The most important metric, obviously, is whether GPT can pass for a doctor on > the US Medical Licensing Exam by scoring the requisite 60%. Not sure who I trust less, lawyers, medical doctors, or an AI trying to imitate them as

[agi] Re: Internal Time-Consciousness Machine (ITCM)

2024-06-15 Thread John Rose
> For those of us pursuing consciousness-based AGI this is an interesting paper > that gets more practical... LLM agent based but still v. interesting: > > https://arxiv.org/abs/2403.20097 I meant to say that this is an exceptionally well-written paper just teeming with insightful research on

[agi] Internal Time-Consciousness Machine (ITCM)

2024-06-15 Thread John Rose
For those of us pursuing consciousness-based AGI this is an interesting paper that gets more practical... LLM agent based but still v. interesting: https://arxiv.org/abs/2403.20097

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread John Rose
On Sunday, June 16, 2024, at 7:09 PM, Matt Mahoney wrote: > Not everything can be symbolized in words. I can't describe what a person > looks as well as showing you a picture. I can't describe what a novel > chemical smells like except to let you smell it. I can't tell you how to ride > a

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread John Rose
On Friday, June 14, 2024, at 3:43 PM, James Bowery wrote: >> Etter: "Thing (n., singular): anything that can be distinguished from >> something else." I simply use “thing” as anything that can be symbolized, and a unique case is qualia, where from a first-person experiential viewpoint a qualia

Re: [agi] Re: Internal Time-Consciousness Machine (ITCM)

2024-06-17 Thread John Rose
On Sunday, June 16, 2024, at 6:49 PM, Matt Mahoney wrote: > Any LLM that passes the Turing test is conscious as far as you can tell, as > long as you assume that humans are conscious too. But this proves that there > is nothing more to consciousness than text prediction. Good prediction >

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-06-11 Thread John Rose
Much active research on KANs getting published lately, for example PINNs and DeepONets versus PIKANNs and DeepOKANs: https://arxiv.org/abs/2406.02917

[agi] The Implications

2024-06-18 Thread John Rose
It helps to know this: https://www.quantamagazine.org/in-highly-connected-networks-theres-always-a-loop-20240607/ Proof: https://arxiv.org/abs/2402.06603
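
A toy check of a much weaker classical cousin of the headline result (my own sketch; the paper proves far stronger statements about highly connected graphs): any finite graph with minimum degree at least 2 must contain a cycle.

    def has_cycle(adj):
        # Undirected DFS: reaching a seen vertex that isn't the parent closes a loop.
        seen = set()
        def dfs(u, parent):
            seen.add(u)
            for v in adj[u]:
                if v != parent and (v in seen or dfs(v, u)):
                    return True
            return False
        return any(dfs(u, None) for u in adj if u not in seen)

    # Every vertex below has degree >= 2, so a cycle is guaranteed.
    adj = {0: [1, 2, 4], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 0]}
    print(has_cycle(adj))  # True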

Re: [agi] GPT-4 passes the Turing test

2024-06-18 Thread John Rose
On Tuesday, June 18, 2024, at 10:37 AM, Matt Mahoney wrote: > The p-zombie barrier is the mental block preventing us from understanding > that there is no test for something that is defined as having no test for. > https://en.wikipedia.org/wiki/Philosophical_zombie > Perhaps we need to get past

Re: [agi] Internal Time-Consciousness Machine (ITCM)

2024-06-18 Thread John Rose
On Monday, June 17, 2024, at 5:07 PM, Mike Archbold wrote: > It seems like a reasonable start  as a basis. I don't see how it relates to > consciousness really, except that I think they emphasize a real time aspect > and a flow of time which is good.  If you read the appendix a few times you

Re: [agi] GPT-4 passes the Turing test

2024-06-20 Thread John Rose
On Thursday, June 20, 2024, at 12:32 AM, immortal.discoveries wrote: > I have a test puzzle that shows GPT-4 to be not human. It is simple enough > any human would know the answer. But it makes GPT-4 rattle on nonsense ex. > use spoon to tickle the key to come off the wall even though I said

Re: [agi] The Implications

2024-06-20 Thread John Rose
On Wednesday, June 19, 2024, at 11:36 AM, Matt Mahoney wrote: > I give up. What are the implications? Confidence really and a firm footing for further speculations in graphs, networks, search spaces, topologies, algebraic structures, etc. related to cognitive modelling. Potentially all kinds of
