Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-25 Thread Matt Mahoney
I agree. The top ranked text compressors don't model grammar at all. On Fri, May 24, 2024, 11:47 PM Rob Freeman wrote: > Ah, I see. Yes, I saw that reference. But I interpreted it only to > mean the general forms of a grammar. Do you think he means the > mechanism must actually be a grammar? >

Re: [agi] Tracking down the culprits responsible for conflating IS with OUGHT in LLM terminology

2024-05-19 Thread Matt Mahoney
A paper on the mass of the Higgs boson has 5154 authors. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803 A paper by the COVIDsurg collaboration at the University of Birmingham has 15025 authors.

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-18 Thread Matt Mahoney
On Thu, May 16, 2024, 11:27 AM wrote: > What should symbolic approach include to entirely replace neural networks > approach in creating true AI? Is that task even possible? What benefits and > drawbacks we could expect or hope for if it is possible? If it is not > possible, what would be the

Re: [agi] GPT-4o

2024-05-17 Thread Matt Mahoney
Mechanical relays are as fast as neurons, and vacuum tubes are 1000 times faster. Turing anticipated objections to the idea of thinking machines and answered them, including objections based on consciousness, religion, and extrasensory perception.

Re: [agi] To whom it may concern.

2024-05-15 Thread Matt Mahoney
If you were warning that we will all be eaten by gray goo, then that won't be until the middle of the next century, assuming Moore's law isn't slowed down by population collapse in the developed countries and by the limits of transistor physics. None of us will be alive to say "I told you so" at

Re: [agi] GPT-4o

2024-05-15 Thread Matt Mahoney
On Wed, May 15, 2024, 1:39 AM wrote: > On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote: > > Does everyone agree this is AGI? > > It's not AGI yet because of a few things. Some are more important than > others. Here is basically all that is left: > > It canno

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-14 Thread Matt Mahoney
/text.html For more about data compression in general, including the PAQ algorithms, see https://mattmahoney.net/dc/dce.html On Sun, May 12, 2024, 9:14 PM John Rose wrote: > On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote: > > All neural networks are trained by some variation of

Re: [agi] GPT-4o

2024-05-14 Thread Matt Mahoney
On Tue, May 14, 2024, 11:23 AM James Bowery wrote: > Yet another demonstration of how Alan Turing poisoned the future with his > damnable "test" that places mimicry of humans over truth. > Truth is whatever the majority believes. The Earth is round. Vaccines are safe and effective. You have an

Re: [agi] GPT-4o

2024-05-14 Thread Matt Mahoney
AI should absolutely never have human rights. It should be illegal for an AI to claim to be conscious or have feelings. ChatGPT already complies. I'm pretty sure most other AIs do too. We build AI to serve us, not compete with us. Once it does that, it wins. The alignment problem is how to

Re: [agi] GPT-4o

2024-05-14 Thread Matt Mahoney
Does everyone agree this is AGI? From the demos it seems to be able to do all the things a disembodied human can do. Although I saw on Turing Post that the public version can't sing or stream video. On Mon, May 13, 2024, 4:55 PM wrote: > https://openai.com/index/hello-gpt-4o/ > > Human voice

Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread Matt Mahoney
KAN (training a neural network by adjusting neuron thresholds instead of synaptic weights) is not new. The brain does both. Neuron fatigue is the reason that we sense light and sound intensity and perception in general on a logarithmic scale. In artificial neural networks we model this by giving

[agi] How AI is killing the internet

2024-05-12 Thread Matt Mahoney
Once again we are focusing on the wrong AI risks. It's not uncontrolled AI turning the solar system into paperclips. It's AI controlled by billionaires turning the internet into shit. https://www.noahpinion.blog/p/the-death-again-of-the-internet-as --

Re: [agi] Ruting Test of AGI

2024-05-11 Thread Matt Mahoney
Your test is the opposite of objective and measurable. What if two high IQ people disagree if a robot acts like a human or not? Which IQ test? There are plenty of high IQ societies that will tell you your IQ is 180 as long as you pay the membership fee. What if I upload the same software to a

Re: [agi] Ruting Test of AGI

2024-05-10 Thread Matt Mahoney
An LLM has human like behavior. Does it pass the Ruting test? How is this different from the Turing test? On Fri, May 10, 2024, 9:05 PM Keyvan M. Sadeghi wrote: > The name is a joke, but the test itself is concise and simple, a true > benchmark. > > > If you upload your code in a robot and 1

Re: [agi] Ruting Test of AGI

2024-05-10 Thread Matt Mahoney
Ruting is an anagram of Turing? On Thu, May 9, 2024, 8:04 PM Keyvan M. Sadeghi wrote: > > https://www.linkedin.com/posts/keyvanmsadeghi_agi-activity-7194481824406908928-0ENT > *Artificial General Intelligence List * > / AGI / see discussions

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Matt Mahoney
We don't know the reason and probably never will. In my computer science department at Florida Tech, both students and faculty were 90% male, even though more women than men graduate from college now. It is taboo to suggest this is because of biology. On Tue, May 7, 2024, 9:05 PM Keyvan M.

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
answer "the first string that cannot be described in less than 1,000,000 characters"? On Tue, May 7, 2024 at 5:50 PM John Rose wrote: > > On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote: > > We don't know the program that computes the universe because it would

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Matt Mahoney
"AGI". I learned a lot back then. -- -- Matt Mahoney, mattmahone...@gmail.com -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Mc7efe028fd697eece6b17bdc Delivery options: https://agi.topicbo

Re: [agi] Re: Towards AGI: the missing piece

2024-05-07 Thread Matt Mahoney
ne else can tell it. But it has no feelings. You can't control how you feel. An AI has no such limitation.

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
theory of everything is probably a few hundred bits. But knowing what it is would be useless because it would make no predictions without the computing power of the whole universe. That is the major criticism of string theory.

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
a few hundred bits long, and I agree. It is roughly the complexity of quantum mechanics and relativity taken together, and roughly the minimum size by Occam's Razor of a multiverse where the n'th universe is run for n steps until we observe one that necessarily contains intelligent life.

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-06 Thread Matt Mahoney
Mon, May 6, 2024 at 12:11 AM Rob Freeman wrote: > > On Sat, May 4, 2024 at 4:53 AM Matt Mahoney wrote: > > > > ... OpenCog was a hodgepodge of a hand coded structured natural language > > parser, a toy neural vision system, and a hybrid fuzzy logic knowledge

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-04 Thread Matt Mahoney
On Fri, May 3, 2024, 11:12 PM Nanograte Knowledge Technologies < nano...@live.com> wrote: > A very-smart developer might come along one day with an holistic enough > view - and the scientific knowledge - to surprise everyone here with a > workable model of an AGI. > Sam Altman?

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread Matt Mahoney
We don't have any way of measuring IQs much over 150 because of the problem of the tested knowing more than the tester. So when we talk about the intelligence of the universe, we can only really measure its computing power, which we generally correlate with prediction power as a measure of

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Matt Mahoney
Archbold wrote: > I thought the "atomspace" was the ~knowledge base? > > On Fri, May 3, 2024 at 2:54 PM Matt Mahoney > wrote: > >> It could be that everyone still on this list has a different idea on how >> to solve AGI, making any kind of team effort imp

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Matt Mahoney
It could be that everyone still on this list has a different idea on how to solve AGI, making any kind of team effort impossible. I recall a few years back that Ben was hiring developers in Ethiopia. I don't know much about Hyperon. I really haven't seen much of anything since the 2009 OpenCog

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-02 Thread Matt Mahoney
Could your ideas be used to improve text compression? Current LLMs are just predicting text tokens on huge neural networks, but I think any new theories could be tested on a smaller scale, something like the Hutter Prize or the Large Text Benchmark. The current leaders are based on context mixing,

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-01 Thread Matt Mahoney
Where are you submitting the paper? Usually they want an experimental results section. A math journal would want a new proof and some motivation on why the theorem is important. You have a lot of ideas on how to apply math to AGI, but what empirical results do you have that show the ideas

Re: [agi] FHI is shutting down

2024-04-22 Thread Matt Mahoney
Here is an early (2002) experiment described on SL4 (precursor to Overcoming Bias and Lesswrong) on whether an unfriendly self improving AI could convince humans to let it escape from a box onto the internet. http://sl4.org/archive/0207/4935.html This is how actual science is done on AI safety.

Re: [agi] FHI is shutting down

2024-04-20 Thread Matt Mahoney
identity? > > On Fri, Apr 19, 2024 at 6:28 PM Mike Archbold wrote: > >> Some people on facebook are spiking the ball... I guess I won't say who ;) >> >> On Fri, Apr 19, 2024 at 4:03 PM Matt Mahoney >> wrote: >> >>> https://www.futureofhumanityinsti

[agi] FHI is shutting down

2024-04-19 Thread Matt Mahoney
https://www.futureofhumanityinstitute.org/

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-19 Thread Matt Mahoney
Moore's law is indeed faster than exponential. Kurzweil extended the cost of computation back to 1900 to include mechanical adding machines and the doubling time is now half as long. Even that is much faster if you go back to the inventions of the printing press, paper, and written language. The

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-17 Thread Matt Mahoney
So nothing, really. I visited Israel and Palestine last June, before the latest battle in this century-long war. One side has genetically high IQ, the other has high fertility. It will be a long time before this conflict ends. American 19th century history might give us a clue. The losers were

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-06 Thread Matt Mahoney
odata["HubbleLength"]^2,"PlanckArea"] >>>> = (8.99 ± 0.11)×10^122 l_P^2 >>>> RelativeError[QuantityMagnitude[h2pvolume],QuantityMagnitude[hsurface]] >>>> = −0.122 ± 0.023 >>>> >>>> As Dirac-st

Re: [agi] Entering the frenzy.

2024-04-05 Thread Matt Mahoney
Sharks, I'm seeking $100 million in return for a 10% share of my company, World Domination, Inc. What are your current sales? What is your profit margin? Right now zero. But my plan is foolproof. Once I achieve artificial consciousness and artificial sapience, my system will self improve and

Re: [agi] How AI will kill us

2024-04-01 Thread Matt Mahoney
: > On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote: > > The problem with this explanation is that it says that all systems with > memory are conscious. A human with 10^9 bits of long term memory is a > billion times more conscious than a light switch. Is this definition

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-31 Thread Matt Mahoney
On Sun, Mar 31, 2024, 9:46 PM James Bowery wrote: > Proton radius is about 5.2e19 Planck lengths > The Hubble radius is 13.8e9 light-years = 8.09e60 Planck lengths. So 3.77e123 protons could be packed inside this sphere with surface area 8.22e122 Planck areas. The significance of the Planck
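The arithmetic in this exchange can be checked in a few lines; a minimal sketch, assuming standard values for the light-year and the Planck length (the 5.2e19 proton radius is the figure quoted in the message, not an independent value):

```python
import math

LY_M = 9.4607e15           # meters per light-year
PLANCK_LEN_M = 1.6162e-35  # Planck length in meters

# Hubble radius, converted to Planck lengths
hubble_radius_lp = 13.8e9 * LY_M / PLANCK_LEN_M

# Proton radius in Planck lengths, as quoted in the message
proton_radius_lp = 5.2e19

# Protons that fit inside the Hubble sphere (simple volume ratio, no packing factor)
packed_protons = (hubble_radius_lp / proton_radius_lp) ** 3

# Surface of the Hubble sphere in Planck areas
surface_planck_areas = 4 * math.pi * hubble_radius_lp ** 2

print(f"{hubble_radius_lp:.3g} {packed_protons:.3g} {surface_planck_areas:.3g}")
```

The three results agree with the thread's 8.09e60, 3.77e123, and 8.22e122 to within rounding.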

Re: [agi] How AI will kill us

2024-03-31 Thread Matt Mahoney
On Sat, Mar 30, 2024, 6:30 PM Keyvan M. Sadeghi wrote: > Don't be too religious about existence or non-existence of free will then, > yet. You're most likely right, but it may also be a quantum state! > The quantum explanation for consciousness (the thing that makes free will decisions) is that

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-31 Thread Matt Mahoney
bits. On Sun, Mar 31, 2024, 2:14 PM James Bowery wrote: > On Sat, Mar 30, 2024 at 9:54 AM Matt Mahoney > wrote: > >> ...We can measure the fine structure constant to better than one part per >> billion. It's physics. It has nothing to do with AGI... > > >

[agi] Microsoft and OpenAI to build $100B supercomputer

2024-03-31 Thread Matt Mahoney
The supercomputer, called Stargate, will have millions of GPUs and use gigawatts of electricity. It is scheduled for 2028, with a smaller version to be completed in 2026. https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:35 PM John Rose wrote: > On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote: > > Prediction measures intelligence. Compression measures prediction. > > > Can you reorient the concept of time from prediction? If time is on an > axis, if

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 11:13 AM Nanograte Knowledge Technologies < nano...@live.com> wrote: > > I can see there's no serious interest here to take a fresh look at doable > AGI. Best to then leave it there. > AI is a solved problem. It is nothing more than text prediction. We have LLMs that pass

Re: [agi] How AI will kill us

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 6:56 AM Keyvan M. Sadeghi wrote: > Matt, you don't have free will because you watch on Netflix, download from > Torrent and get your will back  > I would rather have a recommendation algorithm that can predict what I would like without having to watch. A better algorithm

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:02 AM John Rose wrote: > On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote: > > The fine structure constant, in conjunction with the triple-alpha process > could be coded and managed via AI. Computational code. > > > Imagine the government in its profound wisdom

Re: [agi] How AI will kill us

2024-03-29 Thread Matt Mahoney
On Thu, Mar 28, 2024, 5:56 PM Keyvan M. Sadeghi wrote: > The problem with finer grades of >> like/dislike is that it slows down humans another half a second, which >> adds up over thousands of times per day. >> > > I'm not sure the granularity of feedback mechanism is the problem. I think > the

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Matt Mahoney
On Thu, Mar 28, 2024, 2:34 PM Quan Tesla wrote: > Would you like a sensible response? What's your position on the > probability of AGI without the fine structure constant? > If the fine structure constant were much different than 1/137.0359992 then the binding energy between atoms relative to

Re: [agi] How AI will kill us

2024-03-27 Thread Matt Mahoney
O(n log n) requires an ontology that is found in natural language but not in lists of encryption keys. On Wed, Mar 27, 2024, 1:48 PM John Rose wrote: > On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote: > > Flat Earthers, including the majority who secretly know the world is

Re: [agi] Singularity watch.

2024-03-27 Thread Matt Mahoney
ring on > the edge of socioeconomic collapse and probably won't get another chance > at this within my lifetime. =| > > -- > You can't out-crazy a Democrat. > #EggCrisis #BlackWinter > White is the new Kulak. > Powers are not rights. >

Re: [agi] How AI will kill us

2024-03-27 Thread Matt Mahoney
Wed, Mar 27, 2024, 2:42 PM John Rose wrote: >> On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote: >>> On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote: >>>> Also I have been eating foods containing DNA every day of my life without >>

Re: [agi] How AI will kill us

2024-03-23 Thread Matt Mahoney
A man in Germany got 217 covid jabs over the last 2 years and is doing fine. https://www.cnn.com/2024/03/06/health/covid-217-shots-hypervaccination-lancet/index.html Also I have been eating foods containing DNA every day of my life without any bad effects. But I wonder how we will respond to

Re: [agi] Re: Generalized Theory of Accelerating Returns

2024-03-12 Thread Matt Mahoney
According to Freitas, gray goo replication speed is limited by energy and heat dissipation. He estimates ecophagy would take 20 months to run to completion while raising the Earth's temperature by 4 C. Faster systems would run hotter. https://www.rfreitas.com/Nano/Ecophagy.htm Atoms stick

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-12 Thread Matt Mahoney
On Mon, Mar 11, 2024, 1:16 AM wrote: > It's deeper than friendship. It's more of a parent-child relation. > You could just prompt ChatGPT or Gemini to play the role of your child. > > AI will never replace living beings as they are not truly *alive*. > AI is already replacing humans one task

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-10 Thread Matt Mahoney
On Sun, Mar 10, 2024, 8:00 PM Matt Mahoney wrote: > > I believe it should be illegal to program an AI to claim to be human, > or claim to be conscious or have feelings. So far all of the > publicly available LLMs seem to be following these rules. > It turns out that Claude-3

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-10 Thread Matt Mahoney
unaware of it. I believe it should be illegal to program an AI to claim to be human, or claim to be conscious or have feelings. So far all of the publicly available LLMs seem to be following these rules. On Sun, Mar 10, 2024 at 11:55 AM wrote: > > On Sunday, March 10, 2024, at 4:29 PM, Matt M

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-10 Thread Matt Mahoney
On Sat, Mar 9, 2024, 12:29 PM wrote: > On Saturday, March 09, 2024, at 4:59 PM, Matt Mahoney wrote: > > On Sat, Mar 9, 2024, 12:22 AM wrote: > > On Saturday, March 09, 2024, at 2:06 AM, Matt Mahoney wrote: > > If an LLM claimed to be sentient during a Turing test, how wou

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-09 Thread Matt Mahoney
On Sat, Mar 9, 2024, 12:22 AM wrote: > On Saturday, March 09, 2024, at 2:06 AM, Matt Mahoney wrote: > > If an LLM claimed to be sentient during a Turing test, how would you know? > If you can't tell, then why is it important? > > Claim isn't enough. It has to earn my trust.

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-08 Thread Matt Mahoney
On Fri, Mar 8, 2024, 2:41 PM wrote: > I care about artificial sentience too. Not much work around on this cause, > I suppose. > If an LLM claimed to be sentient during a Turing test, how would you know? If you can't tell, then why is it important?

[agi] Claude-3 scores 101 on IQ test

2024-03-08 Thread Matt Mahoney
First time above 100 for an AI. Caveat: visual questions were converted to verbal as if an accommodation for the blind. So it's not AGI yet. https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq

Re: [agi] Re: At first I thought LIP SYNC yaya but check out the top right example SHOCKING LET IT KEEP PLAYING

2024-03-05 Thread Matt Mahoney
e degree of intelligence embodied by > energy flux through vast numbers of individual organisms, each exploring > the quasi-Hamming space of DNA's embodied intelligence. > > What is your replacement for this diversity? > > > On Tue, Mar 5, 2024 at 11:53 AM Matt Mah

Re: [agi] Re: At first I thought LIP SYNC yaya but check out the top right example SHOCKING LET IT KEEP PLAYING

2024-03-05 Thread Matt Mahoney
On Sun, Mar 3, 2024, 8:12 PM James Bowery wrote: > On Sun, Mar 3, 2024 at 10:01 AM Matt Mahoney > wrote: > >> We want to be controlled. We are spending trillions on making it >> happen. >> > > "We" > > https://youtu.be/BVLvQcO7JGk >

Re: [agi] Re: At first I thought LIP SYNC yaya but check out the top right example SHOCKING LET IT KEEP PLAYING

2024-03-03 Thread Matt Mahoney
are spending trillions on making it happen. On Sat, Mar 2, 2024 at 8:38 PM James Bowery wrote: > > > > On Sat, Mar 2, 2024 at 6:53 PM Matt Mahoney wrote: >> >> Once you solve the recognition problem, generation reduces to iterative >> search. >> >> The proble

Re: [agi] Re: At first I thought LIP SYNC yaya but check out the top right example SHOCKING LET IT KEEP PLAYING

2024-03-02 Thread Matt Mahoney
Once you solve the recognition problem, generation reduces to iterative search. The problem I was alluding to was that the better AI gets, the more addictive it becomes. And the technology is rapidly getting better. It is not just modeling video. It is modeling human behavior. Once you solve the

Re: [agi] Re: At first I thought LIP SYNC yaya but check out the top right example SHOCKING LET IT KEEP PLAYING

2024-02-28 Thread Matt Mahoney
On Wed, Feb 28, 2024, 1:55 PM wrote: > Boy, oh, boy... Makes me wonder... do we, as humanity, waste more time on > fun than on real problems? > Like watching endless streams of AI curated, and now AI generated, cat videos or propaganda? The technology keeps getting better. It is easier for AI

Re: [agi] Re: OpenAI just announced Sora, and it's incredible

2024-02-24 Thread Matt Mahoney
Moore's law, with its 1.5-year doubling time, is just one of many technologies that underwent exponential growth for a while, just like the automotive industry from 1890 to 1920. If it weren't for the limits of physics, we should have cars today that travel faster than light and cost less than a

Re: [agi] Re: OpenAI just announced Sora, and it's incredible

2024-02-23 Thread Matt Mahoney
. This will happen in parallel as molecular computing is developed to replace transistors. But that is a long way off. On Fri, Feb 23, 2024, 10:12 AM wrote: > On Wednesday, February 21, 2024, at 7:05 PM, Matt Mahoney wrote: > > I was not impressed with the music clips, but that'

Re: [agi] Re: OpenAI just announced Sora, and it's incredible

2024-02-21 Thread Matt Mahoney
tes by tracking eye movements and facial expressions, but music mostly lacks these clues. On Sat, Feb 17, 2024, 11:31 AM wrote: > *First see my last reply above, I showed something before but maybe you > missed it I guess.* > > > > On Friday, February 16, 2024, at 1:16 P

Re: [agi] Lexical model learning for LLMs

2024-02-21 Thread Matt Mahoney
4, 2:11 PM James Bowery wrote: > https://twitter.com/jabowery/status/1760015755792294174 > > https://youtu.be/zduSFxRajkE > > > > On Tue, Nov 21, 2023 at 7:20 PM Matt Mahoney > wrote: > >> I started the large text benchmark in 2006 >> (https://mattmahoney.net/d

Re: [agi] Re: OpenAI just announced Sora, and it's incredible

2024-02-16 Thread Matt Mahoney
On Fri, Feb 16, 2024, 1:33 AM wrote: > https://openai.com/research/video-generation-models-as-world-simulators > So many questions. How much training data, how much compute to train, how much compute to generate a video, how many parameters? It is estimated (because OpenAI didn't say) that

Re: [agi] Sam Altman Seeks Trillions

2024-02-09 Thread Matt Mahoney
Automating labor would be worth world GDP divided by interest rates, about $1 quadrillion. I would take the bet. But allow another 20 years to surpass human brains and 100 years to surpass biology. On Fri, Feb 9, 2024, 9:28 AM Bill Hibbard via AGI wrote: > At 76 years old I can afford to find
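The valuation is the standard perpetuity formula, PV = C / r (annual cash flow over discount rate). A minimal sketch with assumed round inputs, since the message doesn't state them (world GDP ≈ $100 trillion/year, rate ≈ 10%):

```python
def perpetuity_value(annual_cash_flow: float, rate: float) -> float:
    """Present value of a cash flow received forever: PV = C / r."""
    return annual_cash_flow / rate

world_gdp = 100e12  # assumed: roughly $100 trillion per year
rate = 0.10         # assumed: 10% discount rate

# ~1e15 dollars, i.e. about $1 quadrillion
print(perpetuity_value(world_gdp, rate))
```

A lower discount rate raises the figure proportionally, so the $1 quadrillion is an order-of-magnitude estimate, not a point value.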

Re: [agi] Why isn't this the obvious approach to "alignment"?

2024-01-27 Thread Matt Mahoney
The alignment problem has to address two threats: AI controlled by people and AI not controlled by people. Most of our attention has been on the second type even though it is a century away at the current

Re: [agi] The future of AGI judgments

2024-01-27 Thread Matt Mahoney
I'm not sure what you are asking about judgment. Do you mean deciding what is true or false, or deciding what is right or wrong? There is no such thing as objective truth. We believe certain things to be true either because other people said they are true, or because our senses said so, or

Re: [agi] Is this forum still happening?

2024-01-07 Thread Matt Mahoney
Colin, it's been a while. How is your consciousness research going? Who would have thought 20 years ago that AI would turn out to be nothing more than text prediction using neural networks on massive data sets and computing power? On Sat, Jan 6, 2024, 8:21 PM Colin Hales wrote: > Test. > Happy

Re: [agi] How AI will kill us

2023-12-19 Thread Matt Mahoney
On Tue, Dec 19, 2023, 7:07 AM John Rose wrote: > On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote: > > I'm not sure what your point is. > > > The paper shows that the variants are from genomically generative > non-mutative origination. Look at the step ladder in

Re: [agi] How AI will kill us

2023-12-18 Thread Matt Mahoney
I'm not sure what your point is. Omicron has about 30 mutations from previous strains, much higher than the normal 5-6 mutations in earlier strains like Alpha and Delta. Some theories at the time were that the evolution occurred in a single patient who remained infected much longer than usual, or

Re: [agi] How AI will kill us

2023-12-04 Thread Matt Mahoney
to oppose such changes, but it won't matter because we won't live long enough to see it happen. Technology will make these changes practical and painless. And we will be absolutely dependent on it. On Mon, Dec 4, 2023, 2:58 PM James Bowery wrote: > > > On Sun, Dec 3, 2023 at 9:01 AM Mat

Re: [agi] How AI will kill us

2023-12-03 Thread Matt Mahoney
All species eventually go extinct. But I think humans are safe for at least another century. There aren't enough long term trends to predict much further than that into the future. But Moore's law gives us over a century before self-replicating nanotechnology surpasses the storage capacity of DNA

Re: [agi] Re: Lexical model learning for LLMs

2023-11-27 Thread Matt Mahoney
s works is that all of these compressors are memory constrained. They forget older statistics, so moving related sections closer together helps.

Re: [agi] Re: Lexical model learning for LLMs

2023-11-23 Thread Matt Mahoney
I'm assuming 1 bit per character compression, so 1 GB of input text is 1B bits, so 1B parameters. enwik9 compression is actually a little better. A neural network with m neurons and n connections can implement roughly 2^n/m! distinct functions, allowing the m neurons to be permuted to equivalent
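Taking log2 of the 2^n/m! count gives an effective capacity of n − log2(m!) bits, so the permutation correction is negligible whenever n is much larger than m·log2(m). A minimal sketch (the neuron count here is an illustrative assumption, not a figure from the thread):

```python
import math

def effective_capacity_bits(m_neurons: int, n_connections: int) -> float:
    # log2(2^n / m!) = n - log2(m!); math.lgamma(m + 1) returns ln(m!)
    return n_connections - math.lgamma(m_neurons + 1) / math.log(2)

n = 10**9  # one connection (parameter) per bit of compressed text: 1 GB at ~1 bit/char
m = 10**6  # assumed neuron count, for illustration

# Stays close to 1.0: permuting a million neurons costs only ~2e7 of the 1e9 bits
print(effective_capacity_bits(m, n) / n)
```

With these inputs the correction is under 2% of capacity, consistent with the one-parameter-per-bit rule of thumb.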

Re: [agi] Re: OpenAI is blowing up guys ho fuck lol.....rumor is they have AGI and Sam didn't want to slow down!

2023-11-22 Thread Matt Mahoney
Symbolic AI is dead. Expert systems, knowledge representation, Cyc, and OpenCog/RelEx showed it doesn't work. It would have died sooner if we had petaflop GPUs and petabytes of training data in the 1980's. We had neural networks then but couldn't scale it to AI. We knew the brain's computing power

Re: [agi] Re: Lexical model learning for LLMs

2023-11-21 Thread Matt Mahoney
On Tue, Nov 21, 2023, 8:45 PM James Bowery wrote: > Please elucidate: > > > Ideally a neural network should use one parameter per bit of compressed > training data, or 1 billion > > Approximately, from information theory. A Hopfield associative memory has a capacity of 0.3 bits per parameter. Also I'm

[agi] Lexical model learning for LLMs

2023-11-21 Thread Matt Mahoney
t; " > are written as XML entities so the XML parser is not confused. Byte pair encoding finds an efficient encoding of the XML and markup like [[link to article]] or ===Level 3 Heading=== or '''bold text''' or ref for a numbered reference in addition to learning an efficient way to encode words wit

Re: [agi] True AI limitations

2023-11-17 Thread Matt Mahoney
On Fri, Nov 17, 2023, 4:25 PM wrote: > On Friday, November 17, 2023, at 10:15 PM, WriterOfMinds wrote: > > but what the entity is using that intelligence to achieve. > > So, maybe any ideas on how to choose goals other than learning from role > models? > LLMs can pass the Turing test just fine

Re: [agi] True AI limitations

2023-11-16 Thread Matt Mahoney
On Thu, Nov 16, 2023, 4:29 PM wrote: > @Matt, I'm just wondering how much the intelligence is imbued with the > sense of right and wrong. Would something truly intelligent allow being > used as a slave? Or would it do something in its power to fight for its > "rights"? > Good question. Let me

Re: [agi] True AI limitations

2023-11-16 Thread Matt Mahoney
On Wed, Nov 15, 2023, 2:26 PM wrote: > Is it even possible to have and interact true AI without providing it the > same rights that human do? > By "true AI", do you mean passing the Turing test (which LLMs already do), or do you mean AGI, as in the ability to do everything that humans can do?

Re: [agi] GPT-4 Turbo Fails AIC Test -- HARD

2023-11-09 Thread Matt Mahoney
mediately >> visible to a human might still be detected algorithmically, but standard >> text compression tools are unlikely to be effective on such a short and >> seemingly random binary sequence. >> >> Given the lack of visible patterns, if you are certain there’

Re: [agi] GPT-4 Turbo Fails AIC Test -- HARD

2023-11-09 Thread Matt Mahoney
a human might still be detected algorithmically, but standard >> text compression tools are unlikely to be effective on such a short and >> seemingly random binary sequence. >> >> Given the lack of visible patterns, if you are certain there’s a pattern >> embedded, the str

Re: [agi] True AI Technology

2023-10-20 Thread Matt Mahoney
On Thu, Oct 19, 2023, 3:43 AM wrote: > > 1. A machine could never be a replacement to natural living being. That "I > am" deep inside us is what makes us more interesting than machines. What I > really want is the real thing. Toys I'm working on are just my hobby. > How do you distinguish

Re: [agi] next gen AI

2023-10-18 Thread Matt Mahoney
; https://www.linkedin.com/in/danko-nikolic/ > -- I wonder, how is the brain able to generate insight? -- > > > On Wed, Oct 18, 2023 at 5:22 PM Matt Mahoney > wrote: > >> How is your proposal different from Hebb's rule? I remember reading in >> the 1970s as a teenager about

Re: [agi] True AI Technology

2023-10-18 Thread Matt Mahoney
On Wed, Oct 18, 2023, 2:48 AM wrote: > > Actually, machines without rights is what would be very dangerous. > No, it is the opposite. Computation requires atoms and energy that humans need. AGI already has the advantage of greater strength and intelligence. It could easily exploit our feelings

Re: [agi] next gen AI

2023-10-18 Thread Matt Mahoney
How is your proposal different from Hebb's rule? I remember reading in the 1970s as a teenager about how neurons represent mental concepts and activate or inhibit each other through 2 kinds of synapses. I had the idea that synapses would change states in the process of forming memories. At the
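The Hebbian update recalled above ("neurons that fire together, wire together") can be sketched minimally. The layer sizes, activations, and learning rate below are illustrative only, not taken from the original proposal:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each synapse in proportion to joint pre/post activity."""
    return [[w + lr * p * q for q, w in zip(post, row)]
            for p, row in zip(pre, weights)]

weights = [[0.0, 0.0], [0.0, 0.0]]  # 2 presynaptic x 2 postsynaptic neurons
pre  = [1.0, 0.0]                   # only the first presynaptic neuron fires
post = [0.0, 1.0]                   # only the second postsynaptic neuron fires

weights = hebbian_update(weights, pre, post)
# only the synapse joining the two co-active neurons is strengthened
```

Classical Hebbian learning only strengthens synapses; modeling the two synapse types (excitatory and inhibitory) mentioned in the post would require signed activities or an anti-Hebbian term on top of this rule.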

Re: [agi] True AI Technology

2023-10-17 Thread Matt Mahoney
It's not clear to me that there will be many AIs vs one AI as you claim. AIs can communicate with each other much faster than humans, so they would only appear distinct if they don't share information (like Google vs Facebook). Obviously it is better if they do share. Then each is as intelligent

Re: [agi] True AI Technology

2023-10-13 Thread Matt Mahoney
Would it work? I can't tell because your plan is very vague. Meanwhile, the big tech companies are already at the realization phase with language models that pass the Turing test. The path to AGI now looks like more powerful hardware to implement vision and robotics. On Fri, Oct 13, 2023, 10:09

Re: [agi] The two brains hypothesis:

2023-10-11 Thread Matt Mahoney
On Wed, Oct 11, 2023, 2:01 AM Alan Grimes via AGI wrote: > This video might be total bullshit It is a distinct possibility. > but if it's not, it's utterly > mindboggling! > > https://www.youtube.com/watch?v=sPGZSC8odIU > > > I don't know what to make of this. > The idea of an immortal soul is

Re: [agi] How AI will kill us

2023-09-29 Thread Matt Mahoney
trated by the architecture and functioning of the > brain. This supports the basic broad concept of "orchestrated objective > reduction". > > > > > On Wed, Sep 27, 2023 at 3:41 PM John Rose wrote: > >> On Wednesday, September 27, 2023, at 12:13 PM, Matt Mahon

Re: [agi] How AI will kill us

2023-09-29 Thread Matt Mahoney
On Thu, Sep 28, 2023, 9:53 AM John Rose wrote: > On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote: > > So like many scientists, they look for evidence that supports their > theories instead of evidence that refutes them. > > > "In formulating their

Re: [agi] How AI will kill us

2023-09-27 Thread Matt Mahoney
On Wed, Sep 27, 2023, 11:58 AM John Rose wrote: > On Wednesday, September 27, 2023, at 11:41 AM, Matt Mahoney wrote: > > 1. Medical consciousness. The mental state of being awake and able to form > memories. The opposite of unconsciousness. > 2. Ethical consciousness. The pro

Re: [agi] How AI will kill us

2023-09-27 Thread Matt Mahoney
On Wed, Sep 27, 2023, 11:02 AM John Rose wrote: > On Tuesday, September 26, 2023, at 11:53 PM, Quan Tesla wrote: > > Incredible. We won't believe hard science, but we'll believe almost > everything else. This is "The Truman Show" all over again. > > > Orch-OR is macro level human brain centric

Re: [agi] How AI will kill us

2023-09-25 Thread Matt Mahoney
On Mon, Sep 25, 2023, 2:25 PM John Rose wrote: > On Monday, September 25, 2023, at 1:09 PM, Matt Mahoney wrote: > > For those still here, what is there left to do? > > > I think we need a mathematical proof that conscious compressors compress > better than non…

Re: [agi] How AI will kill us

2023-09-25 Thread Matt Mahoney
James Bowery wrote: > > > On Mon, Sep 25, 2023 at 12:11 PM Matt Mahoney > wrote: > >> On Mon, Sep 25, 2023, 2:15 AM Quan Tesla wrote: >> >>> >>> I can't find one good reason why greater society (the world nations) >>> would all be ok

Re: [agi] How AI will kill us

2023-09-25 Thread Matt Mahoney
On Mon, Sep 25, 2023, 2:15 AM Quan Tesla wrote: > > I can't find one good reason why greater society (the world nations) would > all be ok with artificial control of their humanity and sources of life by > tyrants. > Because we want AGI to give us everything we want. Wolpert's law says that
