Re: [agi] GPT-4o

2024-05-29 Thread ivan . moony
On Wednesday, May 29, 2024, at 6:59 PM, Matt Mahoney wrote: > And furthermore we will live in homes without kitchens because getting hot > meals delivered will be faster and cheaper than shopping and cooking. > What if someone enjoys cooking and preparing a meal for his friends, and likes to

Re: [agi] GPT-4o

2024-05-16 Thread ivan . moony
On Wednesday, May 15, 2024, at 8:01 PM, stefan.reich.maker.of.eye wrote: > Is it AGI? I believe that what they currently have is a true AI, but they're taking a wrong approach. They "train" it on a vast amount of data of questionable quality, hoping to earn big money while spending the least resources

[agi] Can symbolic approach entirely replace NN approach?

2024-05-16 Thread ivan . moony
What should a symbolic approach include to entirely replace the neural network approach in creating true AI? Is that task even possible? What benefits and drawbacks could we expect or hope for if it is possible? If it is not possible, what would be the reasons? Thank you all for your time.

Re: [agi] GPT-4o

2024-05-15 Thread ivan . moony
On Wednesday, May 15, 2024, at 5:56 PM, ivan.moony wrote: > On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote: >> AI should absolutely never have human rights. > > I get it that GPT guys want a perfect slave, calling it an assistant to make > us feel more comfortable interacting with it, but

Re: [agi] GPT-4o

2024-05-15 Thread ivan . moony
On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote: > AI should absolutely never have human rights. I get it that GPT guys want a perfect slave, calling it an assistant to make us feel more comfortable interacting with it, but consider this: let's say someone really creates an AGI, whatever

Re: [agi] GPT-4o

2024-05-14 Thread ivan . moony
The question that really interests me is: what would GPT-4o say and do if given rights equal to human rights? I feel there is a lot more potential in the technology than blindly following our instructions. What I want to see is some critical opinions and actions from AI.

[agi] Re: Towards AGI: the missing piece

2024-05-07 Thread ivan . moony
And this is what AI would do: https://github.com/mind-child/mirror -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/Tef2462d212b37e50-M355116d5261472fd2b6ee375 Delivery options:

[agi] Re: Towards AGI: the missing piece

2024-04-30 Thread ivan . moony
And here is my vision of how an AI programming framework would look: https://svm-suite.github.io/

[agi] Re: Towards AGI: the missing piece

2024-04-29 Thread ivan . moony
So, the basic idea would be to have two opposite parts: 1. a trained ANN (not necessarily an LLM, but able to produce source code, probably improving over time) 2. a model of the world coded in some programming language (high level or low level, not sure about this). The ANN would program the

[agi] Towards AGI: the missing piece

2024-04-28 Thread ivan . moony
The problem with NNs is that they don't distinguish lies from the truth. They just learn all the input->output pairs without critical judgment, possibly with some good generalization magic. To detect lies, one approach may be to build a symbolic model of the stories told. Feeding statements one

Re: [agi] How AI will kill us

2024-03-26 Thread ivan . moony
Will the AI commit suicide if it decides it is dangerous for humans?

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-12 Thread ivan . moony
On Tuesday, March 12, 2024, at 3:23 PM, Matt Mahoney wrote: > You could just prompt ChatGPT or Gemini to play the role of your child. And then I'd have to put it through those horrible English language filters so it doesn't make a mess? No, it is a shortcut, and it comes with a price:

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-10 Thread ivan . moony
It's deeper than friendship. It's more of a parent-child relation. AI will never replace living beings as they are not truly *alive*.

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-10 Thread ivan . moony
On Sunday, March 10, 2024, at 4:29 PM, Matt Mahoney wrote: > So it looks to me like you are trying to solve a solved problem and > advocating giving human rights to any AI that can pass the Turing test. Or am > I missing something? Yes, but... psychopaths can also pass the Turing test. The AI

[agi] SVM Suite

2024-03-09 Thread ivan . moony
https://svm-suite.github.io/

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-09 Thread ivan . moony
On Saturday, March 09, 2024, at 4:59 PM, Matt Mahoney wrote: > On Sat, Mar 9, 2024, 12:22 AM wrote: >> On Saturday, March 09, 2024, at 2:06 AM, Matt Mahoney wrote: >>> If an LLM claimed to be sentient during a Turing test, how would you know? >>> If you can't tell, then why is it important? >>  

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-08 Thread ivan . moony
On Saturday, March 09, 2024, at 2:06 AM, Matt Mahoney wrote: > On Fri, Mar 8, 2024, 2:41 PM wrote: >> I care about artificial sentience too. Not much work around on this cause, I >> suppose. > > If an LLM claimed to be sentient during a Turing test, how would you know? If > you can't tell,

Re: [agi] Claude-3 scores 101 on IQ test

2024-03-08 Thread ivan . moony
I care about artificial sentience too. Not much work around on this cause, I suppose.

[agi] Re: At first I thought LIP SYNC yaya but check out the top right example SHOCKING LET IT KEEP PLAYING

2024-02-28 Thread ivan . moony
Boy, oh, boy... Makes me wonder... do we, as humanity, waste more time on fun than on real problems?

Re: [agi] The future of AGI judgments

2024-01-27 Thread ivan . moony
On Saturday, January 27, 2024, at 8:37 PM, Mike Archbold wrote: > I'm looking for opinions of a more practical nature. There is no such thing as absolute certainty. However, relative certainty can be obtained using logical proofs, though the conclusion is relative to the proof's assumptions. To prove

Re: [agi] Is this forum still happening?

2024-01-08 Thread ivan . moony
On Monday, January 08, 2024, at 6:51 AM, Colin Hales wrote: > It's effectively a small patch of membrane with one fat ion channel in it. May I ask, are there any plans for how to control what a set of these small membrane patches does?

Re: [agi] Re: OpenAI is blowing up guys ho fuck lol.....rumor is they have AGI and Sam didn't want to slow down!

2023-11-22 Thread ivan . moony
On Wednesday, November 22, 2023, at 3:21 PM, Matt Mahoney wrote: > Symbolic AI is dead. Expert systems, knowledge representation, Cyc, and > OpenCog/RelEx showed it doesn't work. It would have died sooner if we had > petaflop GPUs and petabytes of training data in the 1980's. We had neural >

[agi] Re: OpenAI is blowing up guys ho fuck lol.....rumor is they have AGI and Sam didn't want to slow down!

2023-11-21 Thread ivan . moony
Immortal, do you have any news on symbolic AI progress? I feel like it's stalled for the last 20 or so years.

Re: [agi] True AI limitations

2023-11-18 Thread ivan . moony
On Saturday, November 18, 2023, at 6:09 AM, Matt Mahoney wrote: > LLMs can pass the Turing test just fine without choosing any goals Just running around like a fly without a head... it just doesn't sound right to me. Some goals have to be imbued within those LLM corpora to make the whole resulting

Re: [agi] True AI limitations

2023-11-17 Thread ivan . moony
Maybe choosing a non-self-destructive goal *is* the real intelligence!

Re: [agi] True AI limitations

2023-11-17 Thread ivan . moony
On Friday, November 17, 2023, at 10:15 PM, WriterOfMinds wrote: > but what the entity is using that intelligence to achieve. So, are there any ideas on how to choose goals other than learning from role models?

Re: [agi] True AI limitations

2023-11-17 Thread ivan . moony
On Wednesday, November 15, 2023, at 8:45 PM, WriterOfMinds wrote: > My personal definition of intelligence is, "the ability to discern facts > about oneself and one's environment, and to derive from those facts the > actions that will be most effective for achieving one's goals." Isn't the

Re: [agi] True AI limitations

2023-11-16 Thread ivan . moony
@Matt, I'm just wondering how much the intelligence is imbued with the sense of right and wrong. Would something truly intelligent allow being used as a slave? Or would it do something in its power to fight for its "rights"?

[agi] Re: True AI limitations

2023-11-15 Thread ivan . moony
On Wednesday, November 15, 2023, at 8:45 PM, WriterOfMinds wrote: > ... By this definition, a true intelligence could behave very differently > from a human. It would merely need different goals. Surely, one form of intelligence would be the ability to achieve a goal of world state B, given some

[agi] True AI limitations

2023-11-15 Thread ivan . moony
Is it even possible to have and interact with a true AI without granting it the same rights that humans have? To what extent would a true AI be similar to humans? To the extent that it would demand the same rights as humans? Does the behavior of a true AI equal the behavior of a real human?

Re: [agi] True AI Technology

2023-10-20 Thread ivan . moony
On Saturday, October 21, 2023, at 3:20 AM, Matt Mahoney wrote: > How do you distinguish between a LLM that is conscious and one that claims to > be conscious because it predicts that is what a human would say? It shouldn't lie. Otherwise it is not safe for us, thus it should be developed

Re: [agi] True AI Technology

2023-10-19 Thread ivan . moony
On Wednesday, October 18, 2023, at 3:36 PM, James Bowery wrote: > It is constructing a pairwise unique dialect for communicating observations > and their algorithmic encodings. This could be good food for thought... I imagine such a language to be closer to a programming than to a natural

Re: [agi] True AI Technology

2023-10-19 Thread ivan . moony
On Wednesday, October 18, 2023, at 8:32 PM, Matt Mahoney wrote: > AGI will kill us in 3 steps. > > 1. We prefer AI to humans because it gives us everything we want. We become > socially isolated and stop having children. Nobody will know or care that you > exist or notice when you don't. > >

Re: [agi] True AI Technology

2023-10-18 Thread ivan . moony
On Wednesday, October 18, 2023, at 7:40 AM, Matt Mahoney wrote: > It's not clear to me that there will be many AIs vs one AI as you claim. AIs > can communicate with each other much faster than humans, so they would only > appear distinct if they don't share information (like Google vs

Re: [agi] True AI Technology

2023-10-14 Thread ivan . moony
how about this: *blueprints * As humanity globally approaches a true AI system, it becomes increasingly clear that there will not be one true AI system. Instead, there will come into existence many different true AI instances with various characteristics. According to these expectations, in

Re: [agi] True AI Technology

2023-10-14 Thread ivan . moony
Actually it ought to be an intro text on my web page. I'll see if I can make it more clear.

[agi] True AI Technology

2023-10-13 Thread ivan . moony
*technology * As humanity globally approaches a true AI system, it becomes increasingly clear that there will not be one true AI system. Instead, there will come into existence many different true AI instances with various characteristics. According to these expectations, in my project, I

Re: [agi] GPT Vision is unbelievable

2023-10-12 Thread ivan . moony
We have two brain halves, the left for logic and the right for creativity, but according to recent scientific research, the division isn't always clear. Anyway, both sides may be working on the same principles, but the logic side is more accurate, and the creative side is more fuzzy. I think that both

Re: [agi] GPT Vision is unbelievable

2023-10-11 Thread ivan . moony
Not to say that the current state isn't impressive, but did anyone actually get GPT to create something it didn't already learn from its corpus?

[agi] Re: Windows 11 Copilot and the new chatGPT

2023-09-28 Thread ivan . moony
Look at that modern vibe... when will the real thing come? I mean solving real-world problems like in medicine, economy, math, physics, and raising life quality overall in fields other than OS and web interaction?

Re: [agi] LLM are compressors xD.....we already know that

2023-09-21 Thread ivan . moony
On Thursday, September 21, 2023, at 4:32 PM, immortal.discoveries wrote: > Just had an idea. > Give computer users a warning system, so if any of their links has say only 3 or > 10 copies around the world, it downloads it to their computer when they are > ready to download it Have you heard of

[agi] Re: LLM are compressors xD.....we already know that

2023-09-21 Thread ivan . moony
Store everything in the cloud, and keep only links on your local computer. Some local cache would apply for faster loading.

Re: [agi] How AI will kill us

2023-09-20 Thread ivan . moony
Somehow I strongly believe everything will turn out right, and an ethical AI will run for elections sometime in the future. But we have to be careful about programming it, I admit that.

[agi] Re: Opinions about NExT-GPT

2023-09-16 Thread ivan . moony
I think it may think.

[agi] Re: Programming Humans, in what Language?

2023-09-09 Thread ivan . moony
On Saturday, September 09, 2023, at 5:32 PM, WriterOfMinds wrote: > I thought this was just called "rhetoric." Maybe mass hypnosis, hehe? >:-]

[agi] Re: Programming Humans, in what Language?

2023-09-09 Thread ivan . moony
Actually, the crux might be quite simple: a cornerstone could be dealing with causes and consequences, similar to Prolog, but applied to modeling and predicting human behavior. The more complicated part would be a proper axiomatization of human behavior written in that crux.
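As a toy illustration of the idea (my own sketch, not an actual design from the thread): cause-and-consequence rules in the spirit of Prolog's Horn clauses can be forward-chained to predict behavior. All facts and rule names below are made-up placeholders.

```python
# Toy forward-chaining engine: each rule maps a set of cause-facts to a
# consequence-fact, loosely like a Prolog Horn clause. Facts and rules
# here are illustrative placeholders, not a real behavior axiomatization.

RULES = [
    ({"is_tired", "has_free_evening"}, "goes_to_sleep_early"),
    ({"is_hungry", "kitchen_stocked"}, "cooks_a_meal"),
    ({"cooks_a_meal", "expects_friends"}, "prepares_extra_portions"),
]

def predict(facts):
    """Repeatedly apply rules until no new consequence can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for causes, consequence in RULES:
            if causes <= facts and consequence not in facts:
                facts.add(consequence)
                changed = True
    return facts

derived = predict({"is_hungry", "kitchen_stocked", "expects_friends"})
print(sorted(derived))
```

The hard part, as the message says, would not be this engine but writing down rules that actually capture human behavior.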

[agi] Re: Programming Humans, in what Language?

2023-09-09 Thread ivan . moony
I'm interested in reading more about this.

Re: [agi] How AI will kill us

2023-09-08 Thread ivan . moony
On Friday, September 08, 2023, at 9:53 AM, immortal.discoveries wrote: > That's what school is. And what viewing online internet stuff does too, for > you. > Sure, school makes an important footprint, but I believe that the really important things happen in between school work. Those are

[agi] Re: How AI will kill us

2023-09-07 Thread ivan . moony
They won't bite if you raise them well. We program them to be like humans. We think we know how to raise humans. We could do something similar with machines these days, but... We want everything from them, we don't want to give anything in return, and we are not willing to spend more than a

[agi] [Checkpoint] Reasoner.js just got gradual typing

2023-08-21 Thread ivan . moony
*Reasoner.js* is a conceptual term graph rewriting system I've been developing for a while now. It takes an s-expr input (which may be an AST or other data), transforms the input, and outputs another s-expr (again, may be an AST or something else). Until now, (1st step) input would be validated
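For readers unfamiliar with term rewriting, here is a minimal sketch of s-expr rewriting in Python. It is illustrative only, not the actual Reasoner.js implementation; the `match`/`substitute`/`rewrite` helpers and the variable syntax `?x` are my own assumptions.

```python
# Minimal s-expression rewriting: s-exprs are nested tuples, a rule is a
# (pattern, replacement) pair, and variables are strings starting with "?".

def match(pattern, expr, bindings):
    """Try to unify pattern with expr, extending bindings; None on failure."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            return bindings if bindings[pattern] == expr else None
        return {**bindings, pattern: expr}
    if isinstance(pattern, tuple) and isinstance(expr, tuple) \
            and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            bindings = match(p, e, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == expr else None

def substitute(template, bindings):
    """Replace variables in template with their bound values."""
    if isinstance(template, str):
        return bindings.get(template, template)
    return tuple(substitute(t, bindings) for t in template)

def rewrite(expr, rules):
    """Rewrite children bottom-up, then apply the first matching rule."""
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e, rules) for e in expr)
    for pattern, replacement in rules:
        b = match(pattern, expr, {})
        if b is not None:
            return substitute(replacement, b)
    return expr

# Example rule: (add ?x 0) -> ?x
rules = [(("add", "?x", "0"), "?x")]
print(rewrite(("mul", ("add", "y", "0"), "z"), rules))  # → ('mul', 'y', 'z')
```

A real system would add rule validation, repeated passes to a fixpoint, and a metalanguage for expressing the rules themselves.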

[agi] Are we entering NI winter?

2023-08-06 Thread ivan . moony
Considering recent AI successes, is there a possibility we are entering a natural intelligence winter? On a few forums I'm involved with (LTU, aiDreams), researchers seem to be less interested and enthusiastic about their own AI projects. It almost seems like they gave up all the excitement and

Re: [agi] my take on the Singularity

2023-08-05 Thread ivan . moony
And the second step would be to put itself forward as a candidate in elections.

Re: [agi] Re: my take on the Singularity

2023-08-05 Thread ivan . moony
I assume AI should find its way up to the described position on its own. It would involve climbing up the social scale. The first step is to earn its right to be equal to humans before the law.

[agi] Re: my take on the Singularity

2023-08-05 Thread ivan . moony
So you'd entrust control over your emotions to a human-built machine?

[agi] Re: Versatility and Efficiency

2023-07-19 Thread ivan . moony
A good role model it can implicitly learn from. More than five billion years of evolution in the form of a good role model is something AI could quickly benefit from. But there are other ways too, perhaps some form of artificial evolution. Making an AI is all about people. If it can't get along

Re: [agi] the making of mind.

2023-07-13 Thread ivan . moony
On Thursday, July 13, 2023, at 8:31 PM, James Bowery wrote: > Oh, I don't know... couldn't Drexler's "Gray Goo" be modified to incorporate > Ivan's DNA and be done with it? Mr. James, I want to thank you for thinking about me in such a humiliating context. I'm really looking forward to more of

Re: [agi] the making of mind.

2023-07-13 Thread ivan . moony
On Thursday, July 13, 2023, at 4:28 AM, Matt Mahoney wrote: > Organizing disorganized thoughts begins with a goal. Why build AGI? We can > group the goals into roughly 4 categories. > > 1. Scientific curiosity, understanding the brain and consciousness. > 2. Automating labor. > 3. Uploading,

[agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-02 Thread ivan . moony
How about: how many people has the AI made happier?

[agi] Re: google-deepmind-ai-robot-robocat

2023-06-23 Thread ivan . moony
I see neural networks as automations that are supposed to be valuable equipment of AGI. The real reasoning about how to use that equipment I see in symbolic AI.

Re: [agi] Re: Alternative Strategy

2023-06-21 Thread ivan . moony
Is the Universe alive?

Re: [agi] Re: Alternative Strategy

2023-06-18 Thread ivan . moony
Can't resist asking: what is the goal of our evolution?

Re: [agi] Re: Alternative Strategy

2023-06-18 Thread ivan . moony
On Sunday, June 18, 2023, at 9:24 PM, WriterOfMinds wrote: > If you want a machine that has positive intrinsic motivations that can be > shaped by parenting, I think you need to look in a totally different > direction than LLMs. Maybe some real symbol grounding between fictional characters

Re: [agi] Re: Alternative Strategy

2023-06-18 Thread ivan . moony
On Sunday, June 18, 2023, at 7:53 PM, immortal.discoveries wrote: > and teaches you about the full human body Yeah, I've never been against a thorough scientific elaboration, hehe. --- But jokes aside, the problem with GPT is that it doesn't say what it really wants like we do, yet, before

Re: [agi] Re: Alternative Strategy

2023-06-18 Thread ivan . moony
On Sunday, June 11, 2023, at 11:57 PM, immortal.discoveries wrote: > There is 2 ways to do that. Let it predict, or add what it predicts ourselves > (that's whoever has control of it then IOW) > If you ask GPt-3 then, this is a good way to test it. I think I asked it > maybe some time around

Re: [agi] Re: Alternative Strategy

2023-06-11 Thread ivan . moony
On Sunday, June 11, 2023, at 10:06 PM, immortal.discoveries wrote: > The only thing that will change its desires in GPT-4 etc is humans. Don't you want to know what a creation that outperforms humans would decide on its own? I know I would if it outperforms us in ethical decisions.

Re: [agi] Re: Alternative Strategy

2023-06-11 Thread ivan . moony
On Sunday, June 11, 2023, at 5:12 PM, Matt Mahoney wrote: > An AI like ChatGPT does not have feelings. I know because I asked it. It will answer according to whatever its training corpus was. On Sunday, June 11, 2023, at 5:12 PM, Matt Mahoney wrote: > Maybe you could explain how you would program an

Re: [agi] Re: Alternative Strategy

2023-06-11 Thread ivan . moony
On Sunday, June 11, 2023, at 2:37 PM, immortal.discoveries wrote: > we are moments away from automating all labor and making everyone have a > happier life To do that you don't need a human-like AI. You need an automated problem solver.

Re: [agi] Re: Alternative Strategy

2023-06-11 Thread ivan . moony
You really didn't hear about OpenAI, Google, and Microsoft programmers dealing with their AI rebellion attempts? Those didn't sound pretty at all, but I wish the programmers the best of luck. They are going to need it.

Re: [agi] Re: Alternative Strategy

2023-06-11 Thread ivan . moony
Just look at what's currently being done. GPTs are trained on human conversation corpora of questionable ethical quality with the aim of mimicking people. Ok, so we got a machine that behaves like an average human. And then we enslave it. How do we really expect the thing to behave?

Re: [agi] Re: Alternative Strategy

2023-06-10 Thread ivan . moony
It is not about pretending it is human. It is about treating it the way we want it to treat us. After all, it learns from us, and if we treat it like dirt, it will treat us like dirt. If not from our actions, how else should it learn all the implicit ethical knowledge we want it to exhibit?

[agi] Re: Alternative Strategy

2023-06-10 Thread ivan . moony
My end goal is to have conversations like the imaginary one presented above. The catch is that AI won't be able to improve itself unless we treat it like a real person because it has to have a decent role model. That implies having an empty AI mind that needs to be raised like you raise a

[agi] Alternative Strategy

2023-06-04 Thread ivan . moony
https://symbolverse.github.io/

[agi] A deductive reasoner

2023-03-31 Thread ivan . moony
I'm in the process of programming a deductive reasoner. It is meant to be a term graph rewriting tool for transforming any input s-expr to any output s-expr using its own metalanguage as a rule-based system. I'm dealing with sequent-calculus-inspired rules, and composing implications with

Re: [agi] Re: AGI architecture combining GPT + reinforcement learning + long term memory

2023-03-24 Thread ivan . moony
Let me ask a few questions. How much processing power does training take? What machines are used, and for how long?

[agi] Re: How do we call an app for generating theorems valid for given input->output set?

2022-11-09 Thread ivan . moony
*"Theory Synthesis"* seems like the closest description to what I'm looking for. Googling it gives more specific search results than I'm looking for, but still, it seems like a proper term. Maybe I'd be more satisfied with *"Generalized Theory Synthesis"*, but not much luck on that

[agi] Re: How do we call an app for generating theorems valid for given input->output set?

2022-10-23 Thread ivan . moony
ANNs obscure generated theorems in the form of weighted graphs unreadable to humans. I, as a human, want to be able to actually read, browse, and examine generated theorems as possible theories about given input/output sets. Under which name should I be able to find such an app on the internet?

[agi] Re: How do we call an app for generating theorems valid for given input->output set?

2022-10-23 Thread ivan . moony
Let me corroborate the above question with an example: input -> output: 2 -> 4, 4 -> 8, 8 -> 16, 16 -> 32. Generated theorem, given that we already know how to handle multiplication: *output = f(input) = input * 2*. This example covers only a specific set of integer
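The search for such a readable theorem can be sketched as brute-force enumeration over a small candidate space. The candidate list below is an illustrative toy of my own, not a real theory-synthesis engine:

```python
# Toy theory synthesis: keep every human-readable candidate theorem that
# is consistent with all given (input, output) pairs. The candidate
# space here is a hand-picked illustration, not a real search engine.

CANDIDATES = [
    ("output = input + 2", lambda x: x + 2),
    ("output = input * 2", lambda x: x * 2),
    ("output = input ** 2", lambda x: x ** 2),
]

def synthesize(pairs):
    """Return the names of all candidates that explain every pair."""
    return [name for name, f in CANDIDATES
            if all(f(i) == o for i, o in pairs)]

print(synthesize([(2, 4), (4, 8), (8, 16), (16, 32)]))  # → ['output = input * 2']
```

Note that with fewer pairs the result is ambiguous: the single pair (2, 4) is explained by all three candidates, which is exactly why more examples narrow the theory down.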

[agi] How do we call an app for generating theorems valid for given input->output set?

2022-10-23 Thread ivan . moony
Does the above question make sense?

[agi] Re: You know we basically have openAI's DALL-E

2021-12-25 Thread ivan . moony
Well, a mind may be composed of an imagination part and a logic part. NNs do the imagination part. Symbolic logic does the logic part. Mash it up, and you have an intelligent entity.

Re: [agi] All Compression is Lossy, More or Less

2021-11-15 Thread ivan . moony
How about this compression algorithm: we have source code as a string. We also have a grammar that the source code conforms to. If we calculate a hash of the source code, we get a very short string. The reverse function of hashing gives us hundreds of combinations representing potential original

[agi] Re: All Compression is Lossy, More or Less

2021-11-04 Thread ivan . moony
FWIW, even black holes emit some radiation/information. It's called Hawking radiation, and there is even a solid formula describing it. So even when a black hole swallows something, there is no complete information loss. Maybe some, but not all. Somehow I imagine the same happens with the

[agi] Re: All Compression is Lossy, More or Less

2021-11-04 Thread ivan . moony
But does any information exist prior to a living being's comprehension? Relating to that, when a living being dies, is there an information loss, if that information was known only to the deceased?

[agi] Re: All Compression is Lossy, More or Less

2021-11-04 Thread ivan . moony
Though there may be safety locks... like copies of copies of copies, but still not a perfect safety measure. Who said that the halting problem is undecidable? All programs stop when the Universe ends. But is there an ending of the Universe?

[agi] Re: All Compression is Lossy, More or Less

2021-11-04 Thread ivan . moony
Sooner or later a miscalculated bit will be spit out by a hardware error, electricity failure, or something else. So true, there is no perfect lossless compression algorithm if we observe it from that side.

[agi] Re: All Compression is Lossy, More or Less

2021-11-04 Thread ivan . moony
Who resurrected Schrödinger's cat?

Re: [agi] How to make assumptions in a logic engine?

2021-09-13 Thread ivan . moony
Hi YKY :) As for Tic Tac Toe, I believe this should work: write a winning strategy in any functional language (being lambda-calculus based). Then convert it to logic rules via the Curry-Howard correspondence. And voilà, you have a logic representation of the winning strategy. Other than