[agi] Re: "Efficient Learning in AI" by Dr. Rachel St.Clair, Nov 20th @ 12pm Pacific Time

2022-11-13 Thread Boris Kazachenko
Consequences are way too coarse a driver for cognition, as a principal mechanism anyway. Good enough for brainless evolution, but cognition is way beyond that.

[agi] Re: Huge step forward in computer vision

2022-10-08 Thread Boris Kazachenko
That's cool, this is the connectivity clustering I was talking about: "Equipped with this scene graph, we can then retraverse the frames and assign the same label to each surface in the segmentation map that belongs to the same connected component in the scene graph. This allows distinct surface
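A minimal sketch of the relabeling step the quote describes, assuming a union-find over scene-graph edges; the names (seg_map, edges) are illustrative, not from the paper:

def relabel_by_component(seg_map, edges, n_surfaces):
    # union-find: surfaces linked by scene-graph edges share one root label
    parent = list(range(n_surfaces))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for a, b in edges:
        parent[find(a)] = find(b)
    # assign each surface id in the segmentation its component root as label
    return [[find(s) for s in row] for row in seg_map]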

[agi] Re: Engineering Intelligence (an open source project)

2022-10-06 Thread Boris Kazachenko
Ok, so you have a correspondence theory. Not terribly novel or specific, but definitely the right focus. That’s correspondence between model and accessible environment, and the only way to quantify it is comparison between the two. Both are supposed to expand with an indefinite input stream to

Re: [agi] Online Presentation by Hugo Latapie: "Time Binding Artificial Intelligence - A Next Step on the Path to AGI," this Sunday 09/18, 1 pm Pacific Time

2022-09-14 Thread Boris Kazachenko
"Is" here is the brain, neuromorphics, any sort of NN. In general, some version of centroid-based clustering, starting with perceptron. Which is summation-first, comparison-last. "Ought to be" is comparison-first, summation-last: connectivity-based clustering. Because it is the comparison that

Re: [agi] The next advance over transformer models

2022-07-01 Thread Boris Kazachenko
We are talking about general intelligence, so this has to be framed in general terms. Language is just a high-level communication medium, a surface of the mind.

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
On Thursday, June 30, 2022, at 10:15 AM, Rob Freeman wrote: > But in the sense of having the same internal connectivity within two groups > which are not directly connected together. Yes, in the sense that inputs (input clusters) are parameterized with derivatives ("connections") from

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
On Thursday, June 30, 2022, at 6:10 AM, Rob Freeman wrote: > what method do you use to do the "connectivity clustering" over it?  I design from scratch; that's the only way to get conceptual integrity in the algorithm: http://www.cognitivealgorithm.info. Couldn't find any existing method that's

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
I think "prediction" is a redundant term,  any representation is some kind of prediction.  "Shared property": I meant initially shared between two compared representations, and only later aggregated into higher-level shared property, within a cluster defined by one-to-one matches.  Vs. summing

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
On Thursday, June 30, 2022, at 3:00 AM, Rob Freeman wrote: > I'm interested to hear what other mechanisms people might come up with to > replace back-prop, and do this on the fly... For shared predictions, I don't see much of an alternative to backprop; it would have to be feedback-driven

Re: [agi] Re: The next advance over transformer models

2022-06-29 Thread Boris Kazachenko
On Wednesday, June 29, 2022, at 10:29 AM, Rob Freeman wrote: > You would start with the relational principle those dot products learn, by > which I mean grouping things according to shared predictions, make it instead > a foundational principle, and then just generate groupings with them. Isn't

Re: [agi] Systemic Issues, Rays of Light

2022-06-16 Thread Boris Kazachenko
You are an idiot, sir :).

Re: [agi] Systemic Issues, Rays of Light

2022-06-15 Thread Boris Kazachenko
Hey, that's most of the people here :). Idiocy is normal, that's what you get from hypertrophic monkey brains trying to do things they didn't evolve for.

Re: [agi] Thoughts on Judgments

2022-05-22 Thread Boris Kazachenko
The "G" part is %100 unsupervised, the rest is application-specific and optional. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/Ta80108e594369c8d-Me30fe8b079cabdbda792215d Delivery options:

Re: [agi] Thoughts on Judgments

2022-05-20 Thread Boris Kazachenko
Then I guess your "judgement" is top-level choices. The problem is, GI can't have a fixed top level, forming incrementally higher levels of generalization is what scalable learning is all about. So, any choice on any level is "judgement", which renders the term meaningless. 

[agi] Re: Thoughts on Judgments

2022-05-20 Thread Boris Kazachenko
So, you are talking about motivation. Which depends on the type of learning process: it's an equivalent of pure curiosity in unsupervised learning, a specific set of "instincts" in supervised learning, or some indirect conditioning values in reinforcement learning. The 1st is intrinsic to GI,

Re: [agi] Re: Presentation to Northwest AGI Forum

2022-05-02 Thread Boris Kazachenko
Thanks Mike! I just updated my introduction, it's even more abstract than Brett's :) http://www.cognitivealgorithm.info/ Intelligence is a general cognitive ability, ultimately the ability to predict. That includes planning, which technically is a self-prediction. Any prediction is interactive

[agi] Re: Presentation to Northwest AGI Forum

2022-04-30 Thread Boris Kazachenko
As we discussed, Brett, I agree on most of the principles above. But your implementation is not defined / justified strictly bottom-up. To me, that means pixels-up cross-comparison, which defines variance: differences/gradients, and invariance: match/compression. Without that, your specifics are

[agi] Re: full cryopreservation paper

2021-12-03 Thread Boris Kazachenko
I know how we work. Am 59, grew up in the Soviet Union, spent 5 years in the military, walked across the Turkish border chased by a platoon of riflemen to get to the States. Stop freaking out about trivial crap.

[agi] Re: full cryopreservation paper

2021-12-03 Thread Boris Kazachenko
Get some fucking meaning of life, no matter for how long.

[agi] Re: full cryopreservation paper

2021-12-03 Thread Boris Kazachenko
Just try to get those stupid emotions out of your mind.

[agi] Re: full cryopreservation paper

2021-12-03 Thread Boris Kazachenko
Interesting that I was never concerned about my happiness, or living for a gazillion years with nothing to do. Maybe because I am not unhappy or insecure about the next minute, never mind septillion years.

[agi] Re: full cryopreservation paper

2021-12-03 Thread Boris Kazachenko
It's funny how transhumanism is just a form of escape for a lot of people, like religion. They turn to it out of misery and fear.

[agi] Re: full cryopreservation paper

2021-12-03 Thread Boris Kazachenko
This is just an emotional attitude, which tells me you don't feel terribly secure.

[agi] Re: full cryopreservation paper

2021-12-03 Thread Boris Kazachenko
Uhh, do you believe in self-improvement? Once you have direct access to your "source code" and start improving it, how long do you think it will be until you are no longer recognizable to current "you"? See, "you" is whatever you identify with, it changes all the time anyway. As you get

[agi] Re: full cryopreservation paper

2021-12-02 Thread Boris Kazachenko
It's just that I do have something better to compare the human mind with, in theory.

[agi] Re: full cryopreservation paper

2021-12-02 Thread Boris Kazachenko
It's neither personal nor emotional :)

[agi] Re: full cryopreservation paper

2021-12-02 Thread Boris Kazachenko
It's garbage on all levels, relatively speaking. We just have nothing to compare it with yet. Human "mind" is 99% about the body, and the rest is crap too.

[agi] Re: full cryopreservation paper

2021-12-02 Thread Boris Kazachenko
You will recycle yourself. As soon as you realize what kind of garbage you are made of.

[agi] Re: Sustainable Mountain-Based Health and Wellness Tourist Destinations: The Interrelationships between Tourists’ Satisfaction, Behavioral Intentions, and Competitiveness

2021-12-02 Thread Boris Kazachenko
Fucking spammer.

[agi] Re: full cryopreservation paper

2021-12-02 Thread Boris Kazachenko
Who gives a shit. You will be obsolete before you grow old, never mind freezing and re-animation.

Re: [agi] Anyone else want to learn or teach each other GPT?

2021-11-28 Thread Boris Kazachenko
No, for the reasons I explain in my "Comparison to ANN and BNN" section: www.cognitivealgorithm.info. Forget about GPT, it's a flavor of the year; variations of MLP can work almost as well, same for some other architectures. And forget about architectures, try to first understand the principles

Re: [agi] Anyone else want to learn or teach each other GPT?

2021-11-28 Thread Boris Kazachenko
You need to understand the core principle behind all NNs, GPT or not. And that is fuzzy centroid clustering in a perceptron, with any sort of feedback, local or backprop.
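One hedged reading of that claim, as a sketch only: each node's weight vector acts as a centroid, and every input updates every centroid in proportion to its activation (a soft assignment); the names and update rule are assumptions, not any specific NN's API:

def fuzzy_centroid_update(weights, x, lr=0.1):
    # summation-first: each node's activation is a dot product with its "centroid"
    acts = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    total = sum(abs(a) for a in acts) or 1.0
    for w, a in zip(weights, acts):
        share = abs(a) / total  # fuzzy membership of x in this node's cluster
        for i in range(len(w)):
            w[i] += lr * share * (x[i] - w[i])  # feedback nudges centroid toward x
    return weights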

Re: [agi] drug name

2021-11-24 Thread Boris Kazachenko
Urgh... Never seen an "AGI" list that is not a freak show.

Re: [agi] Re: Another Kind of KC Prize

2021-10-16 Thread Boris Kazachenko
He is talking about the instruction set, which is part of the decompressing program. I agree that it's a red herring. And I agree that KC is a trivial principle.

Re: [agi] Re: Another Kind of KC Prize

2021-10-16 Thread Boris Kazachenko
It's really the number of bits in (compressed representation + decompressing program). If the amount of data is large enough, the second component becomes insignificant.
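A quick illustration of that accounting with zlib; the decompressor size here is a stand-in constant, assumed purely for the example:

import zlib

data = b"ab" * 100_000                  # highly regular input
compressed = zlib.compress(data, 9)
DECOMPRESSOR_BITS = 8 * 50_000          # assumed size of the decompressing program
total_bits = 8 * len(compressed) + DECOMPRESSOR_BITS
print(8 * len(data), total_bits)        # the program term's share shrinks as data grows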

Re: [agi] AGI discussion group, Sep 10 7AM Pacific: Characterizing and Implementing Human-Like Consciousness

2021-09-16 Thread Boris Kazachenko
Yeah, let's get deeper into that bullshit. Because there's no real work to be done, right?

Re: [agi] Re: UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-14 Thread Boris Kazachenko
Working on it: https://github.com/boris-kz/CogAlg/wiki

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-12 Thread Boris Kazachenko
On Sunday, September 12, 2021, at 5:14 PM, doddy wrote: > is algo the abreviation for algorithm? Yes: http://www.cognitivealgorithm.info/

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-12 Thread Boris Kazachenko
That's two things: comparison to quantify similarity and clustering to group the results. This is what my alg is specifically and uniquely designed to do.

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-11 Thread Boris Kazachenko
> The model is presumed to be hierarchical, and its "validation": comparison to > lower-level experience, should proceed top-down. To be precise, we are almost never comparing a model to raw experience; it's always a comparison between models (I think pattern is a better name) of some level.

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-11 Thread Boris Kazachenko
Brett, your True is what I call binary match between a specific model, or any part thereof, and specific experience. Belief, certainty - those are just emotional connotations of such match. And it doesn't have to be binary, that's the crudest version; match can be expressed in any order of
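A minimal numeric reading of that gradation, taking min of the comparands as the shared quantity (a convention Boris uses elsewhere for match); the function names are illustrative:

def binary_match(model, experience):
    return model == experience          # crudest version: True / False

def graded_match(model, experience):
    # match as shared magnitude, kept alongside the difference (a derivative)
    return min(model, experience), model - experience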

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-11 Thread Boris Kazachenko
@Brett, "true" means a match of current experience to to the model, which is what I meant by "understanding is recognition". And a model doesn't fall from the sky, it's composed from "recognitions" / confirmations of its element sub-models, starting from raw input. So, it's comparison ->

Re: [agi] Re: UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-08 Thread Boris Kazachenko
The only high-level term that needs to be defined constructively is GI. Understanding, wisdom, qualia... those are all excuses for loose talk.

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
I aim for simplicity, but not baby talk. There is a reason people talk like complete morons on abstract subjects: we evolved to hunt and gather. You have to be a mutant to work on AGI.

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Yeah, me too. Maybe another time? Think about it.

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
I am confused. You keep saying that you are the smartest guy in AGI...

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
All patterns *are* predictions, it's just a matter of where and how strong. These are determined by projected accumulated match among constituents of each pattern (which is a set of matching inputs).

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Gave you a link: On Sunday, August 15, 2021, at 2:56 PM, Boris Kazachenko wrote: > https://github.com/boris-kz/CogAlg/blob/master/line_1D_alg/line_patterns.py On Sunday, August 15, 2021, at 4:50 PM, immortal.discoveries wrote: > But I still want to know your simplest pattern finder, and how

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Screw letters. Every pixel predicts adjacent pixels, prediction is merely a difference-projected match. If confirmed by cross-comp, it forms patterns, which predict proximate patterns. Then pattern cross-comp forms patterns of patterns, etc. You have to understand compositional hierarchy,
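A loose sketch of that 1-D scheme, not the actual line_patterns.py; "ave" is an assumed filter value:

def form_patterns(pixels, ave=15):
    # cross-compare adjacent pixels; spans with the same sign of
    # (match - ave) group into positive patterns or negative patterns (gaps)
    patterns, current, sign = [], [], None
    for p1, p2 in zip(pixels, pixels[1:]):
        d = p2 - p1               # difference: projects the match
        m = min(p1, p2) - ave     # match above average: predictive value
        s = m > 0
        if s != sign and current:
            patterns.append((sign, current))
            current = []
        current.append((p2, d, m))
        sign = s
    if current:
        patterns.append((sign, current))
    return patterns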

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Delays / holes are what I call negative patterns / gaps, formed along with positive patterns.

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
All that is explained in "outline of my approach", more code-specifically in the wiki: https://github.com/boris-kz/CogAlg/wiki. No backprop, my feedback is only adjusting hyperparameters. I don't use any statistical methods. On Sunday, August 15, 2021, at 4:11 PM, immortal.discoveries wrote: > Also

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
I don't have any scores, the alg is far from complete. It won't be doing anything interesting until I implement level-recursion, which is a ways off even for the 1D alg. This whole project is theory-first, as distinct from anything you may come across in ML.

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Sound is actually a lot more complex, it has a huge frequency spectrum. You can make sense of grey-scale images, but not grey-scale sound. I started doing it here: https://github.com/boris-kz/CogAlg/blob/master/line_1D_alg/frequency_separation_audio.py, but it's not a priority, this whole
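A hedged sketch of one way to split audio into band-limited 1-D streams before patterning; the band edges and names are assumptions, not the linked file's code:

from scipy.signal import butter, sosfilt

def split_bands(signal, rate, edges=(300, 2000, 8000)):
    bands, lo = [], 20.0
    for hi in edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
        bands.append(sosfilt(sos, signal))  # one band-limited 1-D stream
        lo = hi
    return bands  # each band can feed the 1-D pattern algorithm separately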

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
It won't be anything like your text predictor, if you ever get around to it. And you don't even have to do it in 2D, basic principles should be worked out in 1D first: just process one row of pixels of an image. That's my 1D alg:  https://github.com/boris-kz/CogAlg/tree/master/line_1D_alg, 1st

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-14 Thread Boris Kazachenko
Well, show me your code processing images, then we will talk.

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-14 Thread Boris Kazachenko
Hint: pattern is a set of matching inputs.

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-14 Thread Boris Kazachenko
On Saturday, August 14, 2021, at 6:00 PM, immortal.discoveries wrote: > ...it would be better if you used more common words and examples to explain > what you're seeing visually. This happens to be the most abstract subject ever. Which means you need to think in terms of definitions, not

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-14 Thread Boris Kazachenko
On Saturday, August 14, 2021, at 1:20 PM, immortal.discoveries wrote: > You must not know how GPT nor my AI works then. How about keeping this discussion on a conceptual level? I explain my objections to all statistical / perceptron-based methods in "Comparison to ANN and BNN" section. On

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-14 Thread Boris Kazachenko
On Friday, August 13, 2021, at 10:45 PM, immortal.discoveries wrote: > I like this part. Well, cognition is very similar to goals and priming, it > all weighs in in predicting the next word to a sentence. Thanks. But I think it's quite different: predictive value should be maximized

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-13 Thread Boris Kazachenko
Sorry, wrong link: https://meta-evolution.blogspot.com/2012/01/cognitive-expansion-curiosity-as.html

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-13 Thread Boris Kazachenko
You are mixing up instincts, conditioning, and cognition. I am only talking about cognitive function. I make those distinctions in the last post: https://www.blogger.com/blog/post/edit/4539256615029980916/407095229373126

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-13 Thread Boris Kazachenko
On Friday, August 13, 2021, at 7:28 PM, immortal.discoveries wrote: > The mind ideas are trying to survive, really they are, and in doing so they > eventually help the host survive. I don't think so. Ideas have no agency, and they only help the host in proportion to their predictive power. Ok,

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-13 Thread Boris Kazachenko
On Friday, August 13, 2021, at 6:36 PM, James Bowery wrote: > Are you aware that field observations of eusocial insects has pretty well > debunked the idea that the "cooperative" behavior of the sterile castes has > little to do with their relatedness and even less to do with reciprocation? I

[agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-13 Thread Boris Kazachenko
Utterly vacuous, as usual. Here is something actually meaningful, if you can understand it: https://meta-evolution.blogspot.com/

Re: [agi] How do I leave the group?

2019-05-20 Thread Boris Kazachenko
I am definitely a jackass, but that's what it takes to think on your own. On Sat, May 18, 2019 at 9:03 AM Jim Bromer wrote: > Am I a crackpot or a jackass? I think I'm more of a crackpot - although I > do have my jackass moments. > Jim Bromer > > > On Fri, May 17, 2019 at 9:25 PM MP via AGI

Re: [agi] How do I leave the group?

2019-05-20 Thread Boris Kazachenko
Out of touch with the nature of the problem he is trying to solve. Wanting doesn't make it so. He can be a great programmer and sell 10 chatbots every year, and still be a crackpot in AGI. On Sun, May 19, 2019 at 1:20 PM Mike Archbold wrote: > A crackpot to me is somebody out of touch with

Re: [agi] How do I leave the group?

2019-05-20 Thread Boris Kazachenko
"Outside the box" doesn't mean much, it includes " out of your fucking mind". What it really needs is a deep introspection, AKA intellectual integrity. And there is a dire scarcity of that in a monkey brain. Because there always are bananas to be picked up. On Mon, May 20, 2019 at 6:31 AM Brett N

Re: [agi] Yours truly, the world's brokest researcher, looks for a bit of credit

2019-03-09 Thread Boris Kazachenko
e, it has to do with agents being highly optimized to control >>> an environment in terms of ecological information supporting >>> perception/action. Just as uplifting apes will likely require only minor >>> changes, uplifting animaloid AGI will likely require only min

Re: [agi] Yours truly, the world's brokest researcher, looks for a bit of credit

2019-03-08 Thread Boris Kazachenko
uplifting apes will likely require only minor > changes, uplifting animaloid AGI will likely require only minor changes. > Even then we still haven't explicitly cared about language, we've cared > about cooperation by means of joint attention, which can be made use of > culturally develop l

Re: [agi] An Experiment

2019-03-08 Thread Boris Kazachenko
Doesn't surprise me, you have friends like Mentifex too. On Thu, Mar 7, 2019 at 4:24 PM Steve Richfield wrote: > Boris, > > I would like to introduce your AGI to a magician friend of mine. > > Steve > > > On Thu, Mar 7, 2019, 12:05 Boris Kazachenko wrote: > >>

Re: [agi] Yours truly, the world's brokest researcher, looks for a bit of credit

2019-03-07 Thread Boris Kazachenko
I would be more than happy to pay: https://github.com/boris-kz/CogAlg/blob/master/CONTRIBUTING.md, but I don't think you are working on AGI. No one here does; this is an NLP chatbot crowd. Anyone who thinks that AGI should be designed for NL data as a primary input is profoundly confused. On

Re: [agi] An Experiment

2019-03-07 Thread Boris Kazachenko
"But why would you think that AGI would not hallucinate?" Your "AGI" may hallucinate, because it is designed to feed on that incoherent second-hand natural-language data. Mine won't, it is designed to be integral and self-sufficient. It will believe what it sees, not what a bunch of nuts on the

Re: [agi] Logical Inference in First Working AGI MindForth

2018-06-25 Thread Boris Kazachenko via AGI
someone honestly > thought that, though! Maybe we could use an AI to tell the difference > between the two of us :P > > Sent from ProtonMail Mobile > > > On Mon, Jun 25, 2018 at 6:46 AM, Boris Kazachenko via AGI < > agi@agi.topicbox.com> wrote: > > Yeah, I thoug

Re: [agi] Logical Inference in First Working AGI MindForth

2018-06-25 Thread Boris Kazachenko via AGI
Yeah, I thought that too. But this list is a freak show, go figure. On Mon, Jun 25, 2018 at 3:42 AM Giacomo Spigler via AGI < agi@agi.topicbox.com> wrote: > Is it only me that thinks that MP is another email controlled by AT Murray? > > > On Monday, June 25, 2018, MP via AGI wrote: > >> This

Re: [agi] Anyone interested in sharing your projects / data models

2018-06-13 Thread Boris Kazachenko via AGI
approach you seemingly take). One could consider a pattern of learning, > as a possible example of this. > > Comments? > > Rob > -- > *From:* Boris Kazachenko via AGI > *Sent:* Wednesday, 13 June 2018 12:03 AM > *To:* agi@agi.topicbox.com > *Subject:*

Re: [agi] Anyone interested in sharing your projects / data models

2018-06-12 Thread Boris Kazachenko via AGI
; set, in the role of knowledge codifier. > > I'm sure this is all old hat to you, but I'd appreciate your views on the > probable application of the points I raised. > > Rgds > > Rob > -- > *From:* Boris Kazachenko via AGI > *Sent:* Mond

Re: [agi] Anyone interested in sharing your projects / data models

2018-06-11 Thread Boris Kazachenko via AGI
of essential components still missing from your design > schema. Such system components may be related to your thinking on levels > of pattern recognition, and suitable to your notion of hierarchical > increments from the lowest (meta) level. > > The wood for the trees, and the RNA fo

Re: [agi] Anyone interested in sharing your projects / data models

2018-06-11 Thread Boris Kazachenko via AGI
> > > AGI's bottleneck must be in *learning*, anyone who focuses on something > else is barking up the wrong tree... > Not just a bottleneck, it's the very definition of GI, the fitness / objective function of intelligence. Specifically, unsupervised / value-free learning, AKA pattern