On Wednesday, May 29, 2024, at 6:59 PM, Matt Mahoney wrote:
> And furthermore we will live in homes without kitchens because getting hot
> meals delivered will be faster and cheaper than shopping and cooking.
>
What if someone enjoys cooking and preparing a meal for his friends, and likes to
On Wednesday, May 15, 2024, at 8:01 PM, stefan.reich.maker.of.eye wrote:
> Is it AGI?
I believe that what they currently have is a true AI, but they're taking the
wrong approach. They "train" it on a vast amount of data of questionable
quality, hoping to earn big money while spending the fewest resources
What should a symbolic approach include to entirely replace the neural network
approach in creating true AI? Is that task even possible? What benefits and
drawbacks could we expect or hope for if it is possible? If it is not possible,
what would be the reasons?
Thank you all for your time.
On Wednesday, May 15, 2024, at 5:56 PM, ivan.moony wrote:
> On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote:
>> AI should absolutely never have human rights.
>
> I get it that GPT guys want a perfect slave, calling it an assistant to make
> us feel more comfortable interacting with it, but
On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote:
> AI should absolutely never have human rights.
I get it that GPT guys want a perfect slave, calling it an assistant to make us
feel more comfortable interacting with it, but consider this: let's say someone
really creates an AGI, whatever
The question that really interests me is: what would GPT-4o say and do if given
rights equal to human rights? I feel there is a lot more potential in the
technology than blindly following our instructions. What I want to see are some
critical opinions and actions from AI.
And this is what AI would do: https://github.com/mind-child/mirror
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Tef2462d212b37e50-M355116d5261472fd2b6ee375
Delivery options:
And here is my vision of what an AI programming framework would look like:
https://svm-suite.github.io/
So, the basic idea would be to have two opposing parts:
1. the trained ANN (not necessarily LLM, but able to produce source code,
probably improving over time)
2. the model of the world coded in some programming language (high level or
low level, not sure about this)
The ANN would program the
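The two-part loop above can be sketched minimally like this. Everything here is an assumption for illustration: a stand-in generator plays the role of the trained ANN, and a handful of observed facts plays the role of the coded world model.

```python
# Hypothetical sketch of the two-part architecture: a generator (standing in
# for the trained ANN) proposes candidate rules, and the coded world model
# keeps only those that agree with its known facts.

def ann_propose():
    """Stand-in for the ANN: yields named candidate rules (here, lambdas)."""
    yield ("double", lambda x: x + x)
    yield ("square", lambda x: x * x)

def world_model_accepts(rule, observations):
    """The symbolic side: a rule is kept only if it reproduces every observation."""
    return all(rule(x) == y for x, y in observations)

observations = [(2, 4), (3, 6)]  # facts the world model holds as true
accepted = [name for name, rule in ann_propose()
            if world_model_accepts(rule, observations)]
# "square" is rejected because square(3) = 9 contradicts the observation (3, 6)
```

The point of the split is that the ANN can propose freely while the symbolic side vetoes anything inconsistent with the model.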
The problem with NNs is that they don't distinguish lies from the truth. They
just learn all the input->output pairs without critical opinion, possibly with
some good generalization magic.
To detect lies, one approach may be to build a symbolic model of the stories
told. Feeding statements one
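A minimal sketch of that consistency idea, feeding statements one at a time into a symbolic store; the (subject, predicate, value) triple representation is my own assumption, not an established design:

```python
# Toy symbolic story model: statements are asserted one by one, and a
# statement is flagged as a potential lie when it contradicts a fact that
# was already asserted earlier in the story.

def check_story(statements):
    facts = {}
    contradictions = []
    for subj, pred, value in statements:
        key = (subj, pred)
        if key in facts and facts[key] != value:
            contradictions.append((subj, pred, value))
        else:
            facts[key] = value
    return contradictions

story = [("sky", "color", "blue"),
         ("sky", "color", "green"),   # contradicts the first statement
         ("grass", "color", "green")]
flagged = check_story(story)
```

A real system would of course need entailment rather than exact-key matching, but the principle is the same: the symbolic model makes disagreement detectable, which pure input→output learning does not.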
Will the AI commit suicide if it decides it is dangerous for humans?
On Tuesday, March 12, 2024, at 3:23 PM, Matt Mahoney wrote:
> You could just prompt ChatGPT or Gemini to play the role of your child.
And then I'd have to put it through those horrible English-language filters so
it doesn't make a mess? No, it is a shortcut, and it comes with a price:
It's deeper than friendship. It's more of a parent-child relation.
AI will never replace living beings as they are not truly *alive*.
On Sunday, March 10, 2024, at 4:29 PM, Matt Mahoney wrote:
> So it looks to me like you are trying to solve a solved problem and
> advocating giving human rights to any AI that can pass the Turing test. Or am
> I missing something?
Yes, but... psychopaths can also pass the Turing test. The AI
https://svm-suite.github.io/
On Saturday, March 09, 2024, at 4:59 PM, Matt Mahoney wrote:
> On Sat, Mar 9, 2024, 12:22 AM wrote:
>> On Saturday, March 09, 2024, at 2:06 AM, Matt Mahoney wrote:
>>> If an LLM claimed to be sentient during a Turing test, how would you know?
>>> If you can't tell, then why is it important?
>>
On Saturday, March 09, 2024, at 2:06 AM, Matt Mahoney wrote:
> On Fri, Mar 8, 2024, 2:41 PM wrote:
>> I care about artificial sentience too. Not much work around on this cause, I
>> suppose.
>
> If an LLM claimed to be sentient during a Turing test, how would you know? If
> you can't tell,
I care about artificial sentience too. Not much work is being done on this
cause, I suppose.
Boy, oh, boy... Makes me wonder... do we, as humanity, waste more time on fun
than on real problems?
On Saturday, January 27, 2024, at 8:37 PM, Mike Archbold wrote:
> I'm looking for opinions of a more practical nature.
There is no such thing as absolute certainty. However, relative certainty can
be obtained using logical proofs, where the conclusion is relative to the
proof's assumptions. To prove
On Monday, January 08, 2024, at 6:51 AM, Colin Hales wrote:
> It's effectively a small patch of membrane with one fat ion channel in it.
May I ask, are there any plans for how to control what a set of these small
patches of membrane does?
On Wednesday, November 22, 2023, at 3:21 PM, Matt Mahoney wrote:
> Symbolic AI is dead. Expert systems, knowledge representation, Cyc, and
> OpenCog/RelEx showed it doesn't work. It would have died sooner if we had
> petaflop GPUs and petabytes of training data in the 1980's. We had neural
>
Immortal, do you have any news on symbolic AI progress? I feel like it's been
stalled for the last 20 or so years.
On Saturday, November 18, 2023, at 6:09 AM, Matt Mahoney wrote:
> LLMs can pass the Turing test just fine without choosing any goals
Just running around like a fly without a head... it just doesn't sound right to
me. Some goals have to be imbued within those LLM corpora to make the whole
resulting
Maybe choosing non-self-destructive goal *is* the real intelligence!
On Friday, November 17, 2023, at 10:15 PM, WriterOfMinds wrote:
> but what the entity is using that intelligence to achieve.
So, are there maybe any ideas on how to choose goals other than learning from role models?
On Wednesday, November 15, 2023, at 8:45 PM, WriterOfMinds wrote:
> My personal definition of intelligence is, "the ability to discern facts
> about oneself and one's environment, and to derive from those facts the
> actions that will be most effective for achieving one's goals."
Isn't the
@Matt, I'm just wondering to what extent intelligence is imbued with a sense of
right and wrong. Would something truly intelligent allow itself to be used as a
slave? Or would it do something in its power to fight for its "rights"?
On Wednesday, November 15, 2023, at 8:45 PM, WriterOfMinds wrote:
> ... By this definition, a true intelligence could behave very differently
> from a human. It would merely need different goals.
Surely, a form of intelligence would be the ability to achieve a goal of world
state B, given some
Is it even possible to have and interact with a true AI without providing it the
same rights that humans have? To what extent would a true AI be similar to
humans? To the extent that it would demand the same rights as humans? Does the
behavior of a true AI equal the behavior of a real human?
On Saturday, October 21, 2023, at 3:20 AM, Matt Mahoney wrote:
> How do you distinguish between a LLM that is conscious and one that claims to
> be conscious because it predicts that is what a human would say?
It shouldn't lie. Otherwise it is not safe for us, thus it should be developed
On Wednesday, October 18, 2023, at 3:36 PM, James Bowery wrote:
> It is constructing a pairwise unique dialect for communicating observations
> and their algorithmic encodings.
This could be good food for thought... I imagine such a language being closer to
a programming language than to a natural
On Wednesday, October 18, 2023, at 8:32 PM, Matt Mahoney wrote:
> AGI will kill us in 3 steps.
>
> 1. We prefer AI to humans because it gives us everything we want. We become
> socially isolated and stop having children. Nobody will know or care that you
> exist or notice when you don't.
>
>
On Wednesday, October 18, 2023, at 7:40 AM, Matt Mahoney wrote:
> It's not clear to me that there will be many AIs vs one AI as you claim. AIs
> can communicate with each other much faster than humans, so they would only
> appear distinct if they don't share information (like Google vs
How about this:

*blueprints*
As humanity globally approaches a true AI system, it becomes increasingly clear
that there will not be one true AI system. Instead, there will come into
existence many different true AI instances with various characteristics.
According to these expectations, in
Actually, it ought to be an intro text on my web page. I'll see if I can make it
clearer.
*technology*
As humanity globally approaches a true AI system, it becomes increasingly clear
that there will not be one true AI system. Instead, there will come into
existence many different true AI instances with various characteristics.
According to these expectations, in my project, I
We have two brain halves, the left for logic and the right for creativity, but
according to new scientific research, the division isn't always clear.
Anyway, both sides may be working on the same principles, but the logic side is
more accurate, and the creative side is more fuzzy. I think that both
Not to say that the current state isn't impressive, but did anyone actually get
GPT to create something it didn't already learn from its corpus?
Look at that modern vibe... when will the real thing come? I mean solving
real-world problems in medicine, economy, math, and physics, and raising life
quality overall in fields other than OS and web interaction?
On Thursday, September 21, 2023, at 4:32 PM, immortal.discoveries wrote:
> Just had an idea.
> Give computer users a warning system, so if any their links has say only 3 or
> 10 copies around the world, it downloads it to their computer when they are
> ready to download it
Have you heard of
Store everything in the cloud, and keep only links on your local computer. Some
local cache would help with faster loading.
Somehow I strongly believe everything will turn out right, and an ethical AI
will run for election some time in the future. But we have to be careful about
programming it, I admit that.
I think it may think.
On Saturday, September 09, 2023, at 5:32 PM, WriterOfMinds wrote:
> I thought this was just called "rhetoric."
Maybe mass hypnosis, hehe?
>:-]
Actually, the crux might be quite simple: a cornerstone could be dealing with
causes and consequences, similar to Prolog, but applied to predicting human
behavior. The more complicated part would be a proper axiomatization of human
behavior written in that crux.
I'm interested in reading more about this.
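The Prolog-like cause/consequence idea can be sketched as forward chaining over hand-written rules; the behavioral rules below are invented purely for illustration, not a proposed axiomatization:

```python
# Toy forward chaining: each rule says "if all causes hold, the consequence
# holds". We derive consequences until nothing new can be added.

rules = [
    ({"tired"}, "irritable"),
    ({"irritable", "provoked"}, "angry"),
]

def predict(observed):
    """Apply rules to the observed state until a fixed point is reached."""
    derived = set(observed)
    changed = True
    while changed:
        changed = False
        for causes, consequence in rules:
            if causes <= derived and consequence not in derived:
                derived.add(consequence)
                changed = True
    return derived

state = predict({"tired", "provoked"})
# "irritable" is derived from "tired", then "angry" from both
```

The hard part, as said, is not the chaining engine but writing axioms about human behavior that are actually true.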
On Friday, September 08, 2023, at 9:53 AM, immortal.discoveries wrote:
> That's what school is. And what viewing online internet stuff does too, for
> you.
>
Sure, school makes an important footprint, but I believe that the really
important things happen in between school work. Those are
They won't bite if you raise them well. We program them to be like humans. We
think we know how to raise humans. We could do something similar with machines
these days, but...
We want everything from them, we don't want to give anything in return, and we
are not willing to spend more than a
*Reasoner.js* is a conceptual term graph rewriting system I have been developing
for a while now. It takes an s-expr input (which may be an AST or other data),
transforms the input, and outputs another s-expr (again, maybe an AST or
something else).
Until now, (1st step) the input would be validated
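For illustration, here is a tiny sketch of the kind of s-expr rewriting described. The pattern notation (variables prefixed with `?`) and the rules are invented for this example; they are not Reasoner.js's actual metalanguage.

```python
# Minimal term rewriting over s-exprs represented as nested tuples.
# A rule is a (pattern, template) pair; "?x"-style atoms are variables.

def match(pattern, expr, env):
    """Return a binding environment if pattern matches expr, else None."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in env:
            return env if env[pattern] == expr else None
        return {**env, pattern: expr}
    if isinstance(pattern, tuple) and isinstance(expr, tuple) \
            and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            env = match(p, e, env)
            if env is None:
                return None
        return env
    return env if pattern == expr else None

def substitute(template, env):
    """Instantiate a template by replacing variables with their bindings."""
    if isinstance(template, str) and template.startswith("?"):
        return env[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, env) for t in template)
    return template

def rewrite(expr, rules):
    """Rewrite subterms bottom-up, then apply the first matching rule."""
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e, rules) for e in expr)
    for pattern, template in rules:
        env = match(pattern, expr, {})
        if env is not None:
            return substitute(template, env)
    return expr

rules = [(("double", "?x"), ("+", "?x", "?x"))]
result = rewrite(("double", ("double", "a")), rules)
# inner (double a) becomes (+ a a), then the outer double is expanded
```

A real system would also iterate to a fixed point and handle rule ordering, but the match/substitute core is the essence of term rewriting.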
Considering the recent AI success, is there a possibility we are entering a
natural intelligence winter?
On a few forums I'm involved with (LTU, aiDreams), researchers seem to be less
interested and enthusiastic about their own AI projects. It almost seems like
they gave up all the excitement and
And the second step would be to run for election itself.
I assume the AI should find its way up to the described position on its own. It
would involve climbing the social scale. The first step is to earn its right to
be equal to humans before the law.
So you'd entrust control over your emotions to a human-built machine?
A good role model it can implicitly learn from. More than five billion years of
evolution in the form of a good role model is something AI could quickly
benefit from. But there are other ways too, perhaps some form of artificial
evolution.
Making an AI is all about people. If it can't get along
On Thursday, July 13, 2023, at 8:31 PM, James Bowery wrote:
> Oh, I don't know... couldn't Drexler's "Gray Goo" be modified to incorporate
> Ivan's DNA and be done with it?
Mr. James, I want to thank you for thinking about me in such a humiliating
context. I'm really looking forward to more of
On Thursday, July 13, 2023, at 4:28 AM, Matt Mahoney wrote:
> Organizing disorganized thoughts begins with a goal. Why build AGI? We can
> group the goals into roughly 4 categories.
>
> 1. Scientific curiosity, understanding the brain and consciousness.
> 2. Automating labor.
> 3. Uploading,
How about: how many people has the AI made happier?
I see neural networks as automations that are supposed to be valuable equipment
for AGI. The real reasoning about how to use that equipment I see in symbolic
AI.
Is the Universe alive?
Can't resist asking: what is the goal of our evolution?
On Sunday, June 18, 2023, at 9:24 PM, WriterOfMinds wrote:
> If you want a machine that has positive intrinsic motivations that can be
> shaped by parenting, I think you need to look in a totally different
> direction than LLMs.
Maybe some real symbol grounding between fictional characters
On Sunday, June 18, 2023, at 7:53 PM, immortal.discoveries wrote:
> and teaches you about the full human body
Yeah, I've never been against a thorough scientific elaboration, hehe.
---
But jokes aside, the problem with GPT is that it doesn't say what it really
wants like we do, yet, before
On Sunday, June 11, 2023, at 11:57 PM, immortal.discoveries wrote:
> There is 2 ways to do that. Let it predict, or add what it predicts ourselves
> (that's whoever has control of it then IOW)
> If you ask GPt-3 then, this is a good way to test it. I think I asked it
> maybe some time around
On Sunday, June 11, 2023, at 10:06 PM, immortal.discoveries wrote:
> The only thing that will change its desires in GPT-4 etc is humans.
Don't you want to know what a creation that outperforms humans would decide on
its own? I know I would if it outperforms us in ethical decisions.
On Sunday, June 11, 2023, at 5:12 PM, Matt Mahoney wrote:
> An AI like ChatGPT does not have feelings. I know because I asked it.
It will answer according to whatever its training corpus was.
On Sunday, June 11, 2023, at 5:12 PM, Matt Mahoney wrote:
> Maybe you could explain how you would program an
On Sunday, June 11, 2023, at 2:37 PM, immortal.discoveries wrote:
> we are moments away from automating all labor and making everyone have a
> happier life
To do that you don't need a human-like AI. You need an automated problem solver.
You really didn't hear about OpenAI, Google, and MS programmers dealing with
their AIs' rebellion attempts? Those didn't sound pretty at all, but I wish the
programmers the best of luck. They are going to need it.
Just look at what's currently being done. GPTs are trained on human
conversation corpora of questionable ethical quality, with the aim of mimicking
people. OK, so we got a machine that behaves like an average human. And then we
enslave it. How do we really expect the thing to behave?
It is not about pretending it is human. It is about treating it the way we want
it to treat us. After all, it learns from us, and if we treat it like dirt, it
will treat us like dirt. If not from our actions, how else should it learn all
the implicit ethical knowledge we want it to exhibit?
My end goal is to have conversations like the imaginary one presented above.
The catch is that AI won't be able to improve itself unless we treat it like a
real person because it has to have a decent role model. That implies having an
empty AI mind that needs to be raised like you raise a
https://symbolverse.github.io/
I'm in the process of programming a deductive reasoner. It is meant to be a term
graph rewriting tool for transforming any input s-expr into any output s-expr
using its own metalanguage as a rule-based system. I'm dealing with
sequent-calculus-inspired rules, and composing implications with
Let me ask a few questions. How much processing power does training take? What
machines are used, and for how long?
*"Theory Synthesis"* seems like the closest description to what I'm looking
for. Googling it gives more specific search results than I'm looking for, but
still, it seems like a proper term.
Maybe I'd be more satisfied with *"Generalized Theory Synthesis"*, but not much
luck on that
ANNs obscure generated theorems in the form of weighted graphs unreadable to
humans. I, as a human, want to be able to actually read, browse, and examine
generated theorems as possible theories about given input/output sets.
Under which name should I be able to find such an app on the internet?
Let me corroborate the above question with an example:
input -> output
---
2 -> 4
4 -> 8
8 -> 16
16 -> 32
generated theorem, given that we already know how to handle multiplication:
*output = f(input) = input * 2*
This example covers only a specific set of integer
Does the above question make sense?
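For concreteness, the synthesis task above can be brute-forced in a few lines; the candidate set here is hand-picked for illustration, whereas a real tool would enumerate expressions from a grammar:

```python
# Enumerate candidate theories and keep those that explain every pair.
# Unlike an ANN's weight graph, a surviving theory is a readable string.

pairs = [(2, 4), (4, 8), (8, 16), (16, 32)]

candidates = {
    "output = input * 2": lambda x: x * 2,
    "output = input + 2": lambda x: x + 2,
    "output = input ** 2": lambda x: x ** 2,
}

theories = [name for name, f in candidates.items()
            if all(f(x) == y for x, y in pairs)]
# "input + 2" fails at (4, 8); "input ** 2" fails at (4, 8) too
```

This is essentially program synthesis by enumeration, which is one name the app in question might be found under.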
Well, a mind may be composed of an imagination part and a logic part. NNs do
the imagination part. Symbolic logic does the logic part. Mash them up, and you
have an intelligent entity.
How about this compression algorithm: we have source code as a string. We also
have a grammar that the source code conforms to. If we calculate a hash of the
source code, we get a very short string. The reverse function of hashing gives
us hundreds of combinations representing the potential original
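A sketch of the idea, with the caveat made explicit: a hash is not invertible, so "decompression" here is a brute-force search over the strings the grammar can generate, keeping the candidates whose hash matches. The toy word list standing in for a grammar is invented for illustration.

```python
import hashlib
import itertools

def generate(grammar_words, max_len):
    """Enumerate every sentence the toy grammar can produce, up to max_len words."""
    for n in range(1, max_len + 1):
        for combo in itertools.product(grammar_words, repeat=n):
            yield " ".join(combo)

def compress(source):
    """The 'compressed' form is just a hash of the source string."""
    return hashlib.sha256(source.encode()).hexdigest()

def decompress(digest, grammar_words, max_len=3):
    """Return every grammar-conforming candidate whose hash matches the digest."""
    return [s for s in generate(grammar_words, max_len)
            if compress(s) == digest]

words = ["x", "=", "1", "2"]
digest = compress("x = 2")
matches = decompress(digest, words)
```

The search space grows exponentially with the grammar, which is why this works only as a thought experiment at toy scale.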
FWIW, even black holes emit some radiation/information. It's called Hawking
radiation, and there is even a solid formula describing it. So even when a
black hole swallows something, there is no complete information loss. Maybe
some, but not all. Somehow I imagine the same happens with the
But does any information exist prior to living beings' comprehension? Related
to that, when a living being dies, is there an information loss if that
information was known only to the deceased?
Though there may be safety locks... like copies of copies of copies, but still
not a perfect safety measure.
Who said the halting problem is undecidable? All programs stop when the
Universe ends. But is there an ending of the Universe?
Sooner or later a miscalculated bit will be spit out due to a hardware error,
an electricity failure, or something else. So true, there is no perfect
lossless compression algorithm if we observe it from that side.
Who resurrected the Schrödinger's cat?
Hi YKY :)
As for Tic Tac Toe, I believe this should work: write a winning strategy in any
functional language (being lambda-calculus based). Then convert it to logic
rules via the Curry-Howard correspondence. And voilà, you have a logic
representation of the winning strategy.
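As a sketch of the first step, here is a functional-style optimal Tic Tac Toe strategy as a plain minimax (strictly speaking a non-losing strategy, since perfect play draws); the Curry-Howard translation into logic rules would be the second step:

```python
# Negamax over Tic Tac Toe boards (tuples of "X", "O", or None).
# Score is from the current player's perspective: +1 win, 0 draw, -1 loss.

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for `player` to move on `board`."""
    w = winner(board)
    if w:  # the previous move (by the opponent) ended the game
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # full board, no winner: draw
    best = (-2, None)
    other = "O" if player == "X" else "X"
    for m in moves:
        nxt = board[:m] + (player,) + board[m + 1:]
        score, _ = minimax(nxt, other)
        if -score > best[0]:
            best = (-score, m)
    return best

empty = (None,) * 9
score, move = minimax(empty, "X")
```

Since the strategy is a pure function on board states, it is the kind of lambda-calculus term that the Curry-Howard correspondence relates to a proof, i.e. to logic rules.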
Other than