On Friday, November 17, 2023, at 2:03 PM, ivan.moony wrote:
> Isn't the choice of goals what really matters? The effective procedure of
> achieving goals may be merely an asset.
Yes. That's exactly what I think intelligence is: an asset, a power, a tool.
The determinant of whether an
On Wednesday, November 15, 2023, at 3:08 PM, ivan.moony wrote:
> As a thought experiment, consider yourself being alone on some planet since
> your first day alive (assuming you are given all resources needed to keep you
> alive). What would you do without other livings? That situation is
>
My personal definition of intelligence is, "the ability to discern facts about
oneself and one's environment, and to derive from those facts the actions that
will be most effective for achieving one's goals." By this definition, a true
intelligence could behave very differently from a human. It
On Monday, September 25, 2023, at 11:09 AM, Matt Mahoney wrote:
> For those still here, what is there left to do?
Work on my own project because I love it, and I don't give a hoot about
automating the global economy. I mean, it's a worthy goal, but I don't have to
personally achieve it. My
I thought this was just called "rhetoric."
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Tc1bcda5fdb4147f4-M6cc943874f7c007117b3a96d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
As a fellow user of AiDreams, I get the impression that its decline has little
to do with generative AI. A number of the regulars aged out of the hobby or
simply quit the forum for personal reasons. Others were banned. And I think we
aren't getting new people because forums just aren't "the
When I see the word "science," I think of something more specific than the
ability to predict the outcomes of actions.
I don't use the formal scientific method to determine how a friend will react
to a particular gift, for example.
I don't have any great fear of what IS or of letting other people know what IS.
But with regards to the specific issue we're discussing, I consider the
question of what IS at a racial level irrelevant, since it should not affect
decisions. Therefore answering it is not important; if you aren't
On Tuesday, June 27, 2023, at 9:55 AM, Matt Mahoney wrote:
> I am at least aware of my own biases, but that hasn't stopped me from being
> biased. I judge people by their appearance and anyone who says they don't is
> lying.
But we aren't talking about you, are we? We're talking about AGI,
GPT algorithms (setting aside the reinforcement learning filter layer) do not
have a goal-driven architecture, or homeostatic drives, or any feature that
would make them capable of actually wanting anything.
In effect, what GPT algorithms do is simulate a wide range of fictional
characters.
On Saturday, June 17, 2023, at 2:05 AM, YKY (Yan King Yin, 甄景贤) wrote:
> I don't know what you're sorry about. You said sometimes people hide
> pejorative meanings under objective guises. And then you cite the Bell Curve
> as an example of scholarship being attacked because it hurts the
> The purpose of AGI is to automate human labor
That is what YOU want AGI for, Matt. We've talked about this enough times that
it should be clear to you that some of us are here from quite different
motives. Some of us are here because we'd like to meet the Rational Other. Why
do you keep
The human population was (much) less than 1 billion for the majority of human
history. Now there are 8 billion of us. Humans and our domestic animals make up
the vast majority of mammalian biomass. And you're crying disaster just because
the birth rate has finally started to decline for once?
Felipe, I think you'll have to do that yourself. Go to
https://agi.topicbox.com/groups/agi, sign in, and click "Delivery Options" in
the right-hand sidebar, then choose not to receive e-mails.
Technically "produce one unit" is an open-ended goal for any AI with a
sufficient idea of uncertainty in its epistemology. It might make one and then
spend the rest of time ensuring that it *really* made one.
And finally ... while it can present stock arguments for a point, the attempts
are mediocre, and I could do better. But it's doubtful how much the people
reading these actually care about the quality of the writing. As long as the
e-mail presents my opinion and is polite, it probably just gets
I recently subjected it to my favorite LLM test: can it write political and
corporate feedback e-mails for me? I plan to put a full writeup on my blog
later, but here's the gist:
Some of the e-mails were actually usable. I.e. I think I could send them to the
real live target without feeling
Do you perhaps mean "functions" instead of "theorems"? You're talking about a
program that can generate all functions which would implement a given input ->
output mapping?
Theorems would (unless I miss my guess) be a special case if the inputs were
all propositions and the outputs were all
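A brute-force sketch of what a "generate all functions implementing a given input -> output mapping" program might look like in the simplest case. The DSL and its primitives below are invented purely for illustration, not anything from this thread:

```python
# Hypothetical toy DSL of unary integer functions (names and primitives
# are invented examples).
PRIMITIVES = {
    "x":    lambda x: x,
    "x+1":  lambda x: x + 1,
    "2*x":  lambda x: 2 * x,
    "x*x":  lambda x: x * x,
    "0":    lambda x: 0,
}

def functions_matching(mapping):
    """All DSL programs consistent with the input -> output examples:
    program synthesis by exhaustive search over the primitive set."""
    return [name for name, f in PRIMITIVES.items()
            if all(f(i) == o for i, o in mapping.items())]
```

An underdetermined mapping like {0: 0, 1: 1} is matched by both "x" and "x*x" here, which is the sense in which "all functions" (rather than one) matters.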
To my shock I agree with your core point for once. Gaming the market and
earning money based on something essentially meaningless ... such as the
precise time when transactions are made ... is just a way to extract wealth
from the rest of society without giving anything back. It's a long way
I hate to look like I'm ignoring this, but at the moment I guess I'm not sure
what I could use it for. The logic puzzle solver examples are the most
interesting - I might have to study them more when I get the time.
It looks pretty slick though, for someone with the right application.
Also,
Using a computation-hungry algorithm feels like a new approach for you.
I'm guessing it's for things like background removal. There's an OpenCV tool
for this that I've played with before, but it leaves something to be desired.
Okay, sell us on it. Of all the AI-related classics we could read, why this
one? It doesn't even appear to be an AGI book, though nanotechnology is
potentially something that AGIs could interact with or use.
This guy has been working with GPT-3 and has multiple advice blogs about
designing good prompts:
https://andrewmayneblog.wordpress.com/2022/01/22/how-to-get-better-qa-answers-from-gpt-3/
Maybe! Good luck. I still have some reservations about your ultimate plans for
the universe, but I hope you achieve something you can feel proud of.
Well, did it actually help you? Would you say you learned anything from this?
It reproduces the flow of a conversation well. But I wouldn't take anything
here on trust ... I'd have to go check it against other sources to make sure it
was accurate ...
I think he's talking about online learning and in-the-moment reasoning
processes for understanding zero-shot cases.
Thirty-three and no cheerios here. Oatmeal is my breakfast of choice; I buy it
in bulk. One-half cup (measured dry) per day.
I'm happy to acknowledge that more advanced minds than the human exist/are
possible. I don't think that means any of us need to wipe ourselves out when
they arrive.
Well aren't you just a bright ray of sunshine. Hahaha
Boris! Really! That was uncalled for. Unless you were talking about physical
bodies being poor material to house the human mind or something.
I suppose we'll all have a good chuckle about this when Matt and I are still
alive in June 2022. Or will y'all just keep moving out the date of our
supposedly inevitable deaths by 6 months at a time?
Taking a quick look back, it looks like you cited a couple of compressors that
perform multiple compression steps, using a lossy algorithm for one step and a
lossless algorithm for another. These do not refute my point. The individual
algorithms are still either lossy or lossless, not
On Thursday, November 04, 2021, at 12:30 PM, John Rose wrote:
> WoM there are existing compressions defined technically by the professional
> community that don't categorize into either lossy or lossless... I think we
> reviewed those a while back? Though most generally tend towards lossy.
My
"Lossless compression" refers to the nature of the algorithm itself. All those
other potential sources of data corruption or algorithm storage corruption that
you mention are irrelevant to "lossless compression" and should be discussed
under their own names. The standard definition given to
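The point that "lossless" describes the algorithm itself, not the fate of the stored bits, can be shown with a quick sketch (Python standard library only; the quantization step is an arbitrary invented example of a lossy transform, not any specific codec):

```python
import zlib

data = bytes(range(256)) * 4  # arbitrary sample bytes

# Lossless: zlib's exact round-trip guarantee is part of the algorithm's
# definition, independent of any later storage corruption.
compressed = zlib.compress(data, level=9)
assert zlib.decompress(compressed) == data

# Lossy: quantizing each byte to a multiple of 16 discards information;
# no decoder can undo it, whatever happens to the stored bits afterward.
quantized = bytes(b & 0xF0 for b in data)
assert quantized != data
```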
I get nauseated when I'm frightened, so for me maybe it is a little more
general-purpose ... if I do anything risky, my body punishes me. But then I end
up choosing to do risky things anyway. So I'm definitely not as simple as a
minimizer for that particular signal.
I'm sorry, ID. A dead end isn't necessarily a failure, though, not if you
learned something. Sometimes we have to go down a path just to find out that
there's a wall at the end.
You think it "should have been easy for others to explain" that the wall was
there, but are you sure? Communication
I especially like the Kasparov/Deep Blue illustration.
On Wednesday, September 15, 2021, at 9:08 AM, Matt Mahoney wrote:
> Here is a robot that looks and acts like you as far as anyone can tell,
> except that it is younger, healthier, stronger, smarter, upgradable, immortal
> through backups, and it has super powers like infrared vision and wireless
Ohhh, gotcha. Yes, if the attribution comes after the text then you've got
things right.
Hey Mike ... I took a look at the Survey doc, and it appears that a lot of the
opinions are under the wrong names. You've entered my definition as James
Bowery's, Daniel Jue's definition as mine, and so forth (looks like an "off by
one" sort of error that continues down the document).
On Thursday, August 19, 2021, at 10:11 AM, Quan Tesla wrote:
> ... would you consider your intelligence to be committed to a rapid
> evolutionary process with purpose to eventually assume network-interactive
> cyborgian functionality?
Nope.
I might describe myself as transhumanism-curious. I
On Wednesday, August 18, 2021, at 10:31 PM, Nanograte Knowledge Technologies
wrote:
> We're somewhat out of time. Let's see what happens this month, then - if
> we're lucky - we'll talk again.
What's special about this month?
And I take it you think *I'm* a cyborg now. You don't even have any
I would comfort you if I could. But I know I can't.
I took Pfizer dose #2 back in April. Ten months, eh. Look for my proof-of-life
message in June 2022.
And I will continue my work in the meantime.
On Sunday, August 08, 2021, at 12:55 PM, stefan.reich.maker.of.eye wrote:
> Reality trumps faith
Do you not realize that you're asking for people's faith in your unproven
theories and uncertain business venture, then? You're certainly not asking them
to buy reality. You haven't built a real AGI
On Wednesday, August 04, 2021, at 2:12 PM, immortal.discoveries wrote:
> They say all of humanity...but not animality
This gets at one reason I don't care for most of the AI ethics charters I've
seen ... they're too anthropocentric. They ignore not only animals but also
possible/hypothetical
I haven't seen an update from you in a while. How goes the work? What's the
state of Gazelle right now?
Here's my crack at it:
To *understand* a concept or statement is to determine its relevancy or
relationship to one's personal goals, experiences, or world-model.
I recently read this paper and thought it was pretty interesting (and a
relatively easy read):
On Sunday, May 09, 2021, at 8:17 PM, Colin Hales wrote:
> OK. I am going to shout. Ready? I AM NOT EMULATING BRAIN PHYSICS. There. That
> feels better! :-).
>
> I am REPLICATING brain physics.
What you ended up describing is what I meant ... I just used the wrong word,
apparently. And I've
On Tuesday, May 04, 2021, at 12:15 PM, immortal.discoveries wrote:
> He wants you to read his *formal *papers WOM.
I already did -- the last one he posted here, that is. I still have questions.
On Tuesday, May 04, 2021, at 12:14 PM, John Rose wrote:
> That's similar to saying consciousness is
On Tuesday, May 04, 2021, at 11:31 AM, Mike Archbold wrote:
> Colin's methods are first and foremost scientific. You can't fault that.
The scientific methods by which Colin hopes to test his claims remain pretty
cloudy to me.
He has a proposed hardware device/architecture, which he believes does
Disclaimer: I'm just trying to translate these talking points for better
comprehension. I am not a member of the "all intelligence is best modeled as
sequence prediction" camp. For one thing, that's a bit too flat or simplistic.
(Like saying, "all intelligence is just choosing your next
It's not so much "making an AI from a compressor" as "making a compressor from
an AI." Specifically, it has to be a kind of AI that predicts the next element
in a sequence if given past elements of the sequence.
You run your prediction algorithm on the file you wish to compress, and store
only
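The predictor-to-compressor direction can be sketched without writing a full arithmetic coder: charge each byte the ideal code length, -log2 P(byte), under the predictor's distribution. The adaptive byte-frequency model below is my own minimal stand-in, not any particular compressor's:

```python
import math
from collections import Counter

def ideal_code_length_bits(data: bytes) -> float:
    """Bits an ideal entropy coder would need, driven by an adaptive,
    Laplace-smoothed byte-frequency predictor. For each position:
    predict, charge -log2 P(byte), then update the model with the byte
    actually seen (the decoder can mirror this, so no model is stored)."""
    counts = Counter()
    total_bits = 0.0
    for i, b in enumerate(data):
        p = (counts[b] + 1) / (i + 256)  # +1 smoothing over 256 symbols
        total_bits += -math.log2(p)
        counts[b] += 1
    return total_bits

skewed = b"a" * 900 + b"b" * 100          # highly predictable data
assert ideal_code_length_bits(skewed) < 8 * len(skewed)  # beats 8 bits/byte
```

A better predictor assigns higher probability to what actually comes next, so the same loop yields a smaller total: that is the whole "compressor from an AI" direction in miniature.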
Interesting. Could be useful for answering questions of the form "What does X
have to do with Y?"
On Wednesday, April 07, 2021, at 3:09 PM, Matt Mahoney wrote:
> AI = passing the Turing test.
> AGI = doing everything a human can do.
>
> AI requires only text I/O. AGI requires a body. That's why the new term was
> introduced.
I thought the new term was introduced to distinguish AIs
On Wednesday, April 07, 2021, at 2:30 PM, Matt Mahoney wrote:
> A blind quadriplegic robot can't paint your house. Claiming general
> intelligence is meaningless if we still need to pay humans to do work that
> machines can't do.
No it isn't. I'm not pursuing AGI because I want it to do all my
On Monday, April 05, 2021, at 9:15 PM, Matt Mahoney wrote:
> But yeah it's not AGI without vision and robotics.
A blind quadriplegic human is still a general intelligence. So I would not
consider these valid requirements.
I played with it a little bit.
> My input: "You can't say cows are not alive."
>
> Successfully matched on Pattern 46, "You can't say ".
> Also matched on Patterns 1367, 697, 740, 1082, and 676. Some of these are
> patterns for which the entire content is inside brackets, e.g. " autopilot
@Ben: I've been needing a thousand-foot overview of your work, and this paper
sounds like the most recent version of that? I can't comment yet, but will look
forward to reading it when I can find the time.
On Friday, April 02, 2021, at 10:47 PM, Matt Mahoney wrote:
> Quantum computing may be
Here we go! Mostly real words in there, nice, with some plausible made-up ones
(I like "Johnsciousness"). But seemingly no awareness of sentence or
higher-level structure yet.
I think I asked this question on that Google doc someone made of your
summary.txt, but I don't know if you looked at it
Did you slurp up an existing semantic database to get the "is-a" mappings, does
your bot learn them, or did you manually input them?
On Tuesday, March 09, 2021, at 1:05 PM, Matt Mahoney wrote:
> Consciousness seems real to me. I would not have a reason to live if I didn't
> have this illusion of a soul or little person in my head that experiences the
> world and that I could imagine going to heaven or a computer after I die.
On Monday, March 08, 2021, at 9:17 PM, Nanograte Knowledge Technologies wrote:
> I connect my brain to its brain with a "wire" of variable bandwidth
> and see what it feels like to sense myself fusing with it
>
> Then I carry out the same experiment with another human, a dog, a
> mushroom, a
On Wednesday, December 23, 2020, at 5:54 PM, Colin Hales wrote:
> Thanks for opening this door.
>
> The *paper* (not me) claims (with empirical evidence) that a science that
> assumes a claim "cognition can be achieved by algorithms in GP-computers", an
> equivalence of nature and abstract
Colin reminds me of Searle. I think the claim that underlies all his arguments
is "cognition cannot be achieved by algorithms." Therefore, he regards any
algorithmic approach (including algorithms that model neuronal EM fields) as a
non-starter. In his mind, experiments that measure the
The thing I hated most about my childhood was the lack of freedom. I also
suffered some degree of mental burnout because I was overworked in school. What
you're proposing is even worse. I wouldn't inflict it on any child.
Also, you *can't* force people to love anything. And sometimes they get
Well stay away and don't breathe on me, then.
For me, the bulk of the distraction so far has come from careless people who
want to treat it as a non-threat, not the virus itself. I've probably been more
productive during the pandemic than usual.
The notion that it isn't a big deal is very
On Monday, June 29, 2020, at 9:13 AM, Matt Mahoney wrote:
> Surely anyone who believes that AGI is possible wouldn't also believe
> in souls or heaven or ghosts??? Your brain is a computer, right?
Belief in souls and whatnot is fully compatible with the belief that AGI is
possible, if one avoids
On Sunday, June 14, 2020, at 9:08 PM, immortal.discoveries wrote:
> Wait. If the cop knew he'd go to jail and pay a million like the other
> cop did, then why would he shoot? He did it in self defense.
If this logic actually worked, no one would commit serious crimes ever. And
yet, people
"They" burned a Wendy's because a different "they" killed another man for petty
reasons. Neither one is good, but ... we have a lot of Wendy's restaurants, and
they're replaceable. Individual lives aren't.
> ... and I come first just like others manage themselves/ family.
There is the problem, right there. I at least *try* to manage myself in such a
way that *others* come first, and I come second. And I'm not going to change
that, no matter how many wonderful things you offer me.
I have concerns about the Federal Reserve myself, but it is not the topic of
this list. Opinions on current events or politics in general are not the topic
of this list. I am tired of having this be a place where I feel the need to put
out fires and counter misinformation. Let's stick to AGI,
No. Buckle down and implement your own idea.
Implementing it is a great way to work out the details and learn about the
fatal flaws, if there are any. You can be "very sure" of your predictions and
they can still be totally wrong. The proof is in the product.
If you're determined to remain in
> :)Anything you say that you do feel stuff, is the machine. And you can make
> that:)
*Saying* that you feel things is behavior. So I agree, and this doesn't
contradict anything in my post. Actually feeling the things, whether you say so
or not, is experience, and that's the part that would
What if all of human behavior is computable, but not all of human experience is
computable?
When Searle (and that apparent disciple of his who sometimes visits this list
... I don't remember the name now) start ranting that you couldn't possibly
make an AGI without "brain physics," I think
As promised, here's an update on my own project. This is a symbolic AI that
I've been slowly making bigger and more capable over the course of several
years. I just added the beginnings of narrative processing. Check out the
demo video in which I tell the AI a couple of brief stories:
On Thursday, March 19, 2020, at 8:53 PM, Alan Grimes wrote:
> So I propose, out of shame, that we disassociate ourselves from their
> esteemed company BY KICKING THEM OFF THE DAMN LIST
If the compression-related content were generating a lot of noise and drowning
out other, more
Are your "weak words" synonymous with functional words, and "strong words" with
lexical or content words, or is the concept you're aiming at slightly different?
http://linguistlaura.blogspot.com/2012/08/functional-and-lexical-words.html
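For what it's worth, the split that blog post describes can be demonstrated mechanically. The word list below is my own ad-hoc invention for illustration, not a linguistic resource (a real analysis would use a part-of-speech tagger or lexicon):

```python
# Tiny invented sample of English function (grammatical) words.
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "at", "is", "are",
                  "and", "or", "to", "that", "it"}

def split_words(sentence):
    """Partition lowercase tokens into (function words, content words):
    function words carry grammar, content words carry meaning."""
    func, content = [], []
    for word in sentence.lower().split():
        (func if word in FUNCTION_WORDS else content).append(word)
    return func, content

func, content = split_words("the cat sat on the mat")
```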
On Saturday, February 22, 2020, at 11:31 AM, Stanley Nilsen wrote:
> Simply to say that a "goal" is the way you determine what is best (e.g.
> does it "lead to" the goal) is to miss the point that goals need to
> constantly change when circumstances change.
Instrumental goals or subgoals
> If you take the morality out of intelligence then you should use the term
> "power."
But that's exactly what intelligence is: a form of power, specifically
concerned with the skills of thinking, planning, strategizing, etc. Just look
at a standard IQ (Intelligence Quotient) test: nothing
I think it works just as well in e-mail format if you have an e-mail client
that sorts replies into threads. (And if you don't ... why don't you?)
No, to my knowledge the Winograd challenge has not been solved (at least, not
to the point of a program getting correct answers on all the sentences in a
test set). The only other hobbyist/independent researcher that I've seen
openly working on it is Don Patrick, who describes his approach and
@James:
If the one thing that puffs you up with pride is your own humility, then you're
not humble. If the one thing that makes you consider your race superior is its
general disdain for the idea of racial superiority, then you don't disdain the
idea of racial superiority. Nice try. You can
On Monday, February 17, 2020, at 8:57 AM, John Rose wrote:
> It is crying wolf and neutering the term “racist”.
Umm, hold on. James Bowery actually is racist by the standard definition of
the word. From Oxford:
noun: racist; plural noun: racists
a person who shows or feels discrimination or
On Sunday, February 16, 2020, at 11:09 AM, James Bowery wrote:
> How about nuking the social pseudosciences with selection of the best unified
> model of society based on lossless compression of a wide range of
> longitudinal measures?
How about not being narrow and obsessive? And how about
Well, you'll be *really* unimpressed with me, because ... I have no plan!
I'm kidding ... partly. I sit down at the beginning of every year and sketch
out a development schedule for the next year. Then sometimes, after I actually
start working, I end up changing my mind. And I definitely don't
On Friday, February 14, 2020, at 11:33 AM, James Bowery wrote:
>> ...The only correct way to judge individuals is on the basis of their own
>> character and behavior...
>
> Decision theory doesn't dictate that an intelligent agent stop all actions
> just because it has incomplete information.
>
I myself hold opinions that are "taboo" in some circles. For instance: I, as a
woman, am open to the idea that there could be *on-average* differences between
the sexes, in terms of career interest and perhaps even aptitude. However, I
would still take it poorly if people on this list started
Props for making something of your own. However, you don't need to spam the
list with every intermediate version. Polish your code until it's *done* and
post the *final* version, with a summary of what it accomplishes and the
significance thereof.
As for the "something big" ... don't count
"It was more of just a thing to say to counteract the general saying which I
don't like. Being born was a free lunch, I didn't pay for it!
Oh wait, and I didn't pay for meals the next years either. So there are free
lunches!"
*Somebody* paid for your birth, and all those lunches. It just
Humans quite obviously exist. So if there *were* a contradiction between
general intelligence and no-free-lunch ... either humans would not be general
intelligences, or the NFL would be inapplicable to them.
People who pursue AGI generally have something of roughly human intellectual
capacity
"In either case, the numbers are finite, so there will be no singularity."
Does the average person (or indeed any person) who uses the term "singularity"
genuinely expect that any physical quantity will go to infinity? That was not
my impression. I take "technological singularity" as a
@Stefan: I'm interested in your project, but I haven't yet formed an opinion on
its prospects of success.
I have to agree that long videos of you actually coding might not be the best
presentation format. I listened to the 3-hour one, but I put it on in the
background while I was working (no
I would think the "man with two coins" thought experiment could be handled by a
sequential system using a state-transition paradigm. The following sequence is
only adequate to prove the existence of one coin:
Coin X state-change Hidden -> Visible
Coin X state-change Visible -> Hidden
Coin Y
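A minimal sketch of that state-transition reading, in my own toy encoding (not anyone's published system): each event names an object and a state change, and a trace proves exactly as many coins as the distinct identities it mentions.

```python
def coins_evidenced(events):
    """Events are (object_id, old_state, new_state) tuples. A trace
    evidences exactly the distinct object identities it mentions."""
    seen = set()
    for obj, old, new in events:
        if old == new:
            raise ValueError("a state change must actually change state")
        seen.add(obj)
    return len(seen)

# The sequence from the post: only coin X changes state, so only one
# coin's existence is proven ...
one_coin = [("X", "Hidden", "Visible"), ("X", "Visible", "Hidden")]
# ... until an event mentioning a second identity appears.
two_coins = one_coin + [("Y", "Hidden", "Visible")]
```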
There's no edit function because it's a mailing list, not a forum. Once you
make your post, it goes to people's e-mail inboxes and you can't call it back.
I get the feeling that the people in this thread who are saying "compression is
faster" might really be thinking about levels of abstraction ... the idea of
"compressing" low-level concepts into high-level ones by eliminating detail.
If you do all your work at a high level of abstraction, then
I have some thoughts, but ... isn't this discussion going to become yet another
distraction? The question of whether AGI will result in a technological
singularity doesn't seem to have a lot of relevance to the question of *how* to
build AGI. So the disciples of the Singularity can believe
@James: the trouble is that we are, in fact, talking about philosophical
zombies. A discussion about some other type of zombie would be a different
discussion. It's not the *word* "zombie" that is the problem, it's the
*concept* of a p-zombie. Some of us find that a useful concept, some of
Poor analogy.
Suppose you receive a requirement from a customer, for a "lossy compressor,"
and you design them a compressor that delivers lossless results for some data
sets. No one will mind. You have met the requirement.
Suppose you receive a requirement from a customer, for a "lossless
What? You finally figured out "I think, therefore I am," sort of? It's about
time.
I'm perfectly happy to consider myself to be a ghost, or observer, or whatever
you want to call it. I can't objectively measure/detect/verify the existence of
*any other* consciousness. I agree with Matt that