[agi] Kraken API will make me RICH (Scalping Bot)

2022-06-30 Thread stefan.reich.maker.of.eye via AGI
*Kraken API obeys my commands!!* (https://www.youtube.com/watch?v=4apATiVWCgI)

Re: [agi] Re: Google engineer claims AI bot LaMDA became 'sentient'

2022-06-30 Thread EdFromNH
This article claims China has built a 174-trillion-parameter artificial brain (174 trillion is larger than the number of synapses in the human cortex): https://eurasiantimes.com/aping-a-human-brain-chinese-supercomputer-achieves-breakthrough-in-ai/. If that is true, it is impressive, at least

[agi] Re: Is there a file format purely for Huffman encoding of text? I need that right now.

2022-06-30 Thread stefan.reich.maker.of.eye via AGI
Solution 4: Extend the symbol range to 9 bits (thus wasting up to 32 bytes in the tree) so we can also encode control symbols (between 0x100 and 0x1FF). One of those, e.g. 0x100, would be a NOP ("no output") control symbol that you can use to fill up bytes. An incomplete byte following a NOP is
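
A minimal sketch of Solution 4 in Python. The 9-bit symbol range and the NOP symbol (0x100) are from the post; the tree construction, the zero-fill after the NOP, and the decoder rule that an incomplete byte following a NOP is filler are assumptions reconstructing the truncated text:

    # Huffman coding over 9-bit symbols: 0x00-0xFF are literals,
    # 0x100-0x1FF are control symbols; 0x100 is the NOP used for padding.
    import heapq
    from collections import Counter
    from itertools import count

    NOP = 0x100  # "no output" control symbol

    def build_codes(freqs):
        """Standard Huffman build: map each symbol to a bitstring."""
        tie = count()  # tie-breaker so heapq never compares dicts
        heap = [(f, next(tie), {sym: ""}) for sym, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            fa, _, a = heapq.heappop(heap)
            fb, _, b = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in a.items()}
            merged.update({s: "1" + c for s, c in b.items()})
            heapq.heappush(heap, (fa + fb, next(tie), merged))
        return heap[0][2]

    def encode(data: bytes):
        freqs = Counter(data)
        freqs[NOP] = 1  # ensure the control symbol gets a code
        codes = build_codes(freqs)
        bits = "".join(codes[b] for b in data)
        if len(bits) % 8:  # not byte-aligned: emit a NOP, then zero-fill
            bits += codes[NOP]
            bits += "0" * (-len(bits) % 8)
        blob = int(bits, 2).to_bytes(len(bits) // 8, "big") if bits else b""
        return blob, codes

    def decode(blob: bytes, codes: dict) -> bytes:
        rev = {c: s for s, c in codes.items()}
        bits = "".join(f"{byte:08b}" for byte in blob)
        out, buf, i = [], "", 0
        while i < len(bits):
            buf += bits[i]; i += 1
            if buf in rev:
                sym, buf = rev[buf], ""
                if sym == NOP:
                    if len(bits) - i < 8:  # incomplete byte after a NOP: filler
                        break
                else:
                    out.append(sym)
        return bytes(out)

    blob, codes = encode(b"hello huffman")
    assert decode(blob, codes) == b"hello huffman"

This variant needs neither a length prefix (so streaming still works) nor a dedicated END symbol, at the cost of the wider symbol range.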

[agi] Re: Is there a file format purely for Huffman encoding of text? I need that right now.

2022-06-30 Thread stefan.reich.maker.of.eye via AGI
Ah there is one little thing I missed... *How do you know the text has ended?* It is encoded as a bit stream, so it might end in the middle of a byte. Solution 1: Insert text length somewhere. Kinda shit though, prohibits streaming. Solution 2: Put a special END symbol in the tree. But we have
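
A toy illustration of Solution 2 in Python: reserve a dedicated END symbol in the tree, emit it last, and let the decoder stop there, ignoring any padding bits in the final byte. The fixed code table is hypothetical, just to show the termination logic; a real coder would derive it from symbol frequencies:

    END = "end"
    CODES = {"a": "0", "b": "10", END: "11"}  # prefix-free toy code table
    DECODE = {c: s for s, c in CODES.items()}

    def encode(text: str) -> bytes:
        bits = "".join(CODES[ch] for ch in text) + CODES[END]
        bits += "0" * (-len(bits) % 8)  # pad the final byte; never decoded
        return int(bits, 2).to_bytes(len(bits) // 8, "big")

    def decode(blob: bytes) -> str:
        bits = "".join(f"{b:08b}" for b in blob)
        out, buf = [], ""
        for bit in bits:
            buf += bit
            if buf in DECODE:
                if DECODE[buf] == END:
                    break  # everything after END is padding
                out.append(DECODE[buf])
                buf = ""
        return "".join(out)

    assert decode(encode("abba")) == "abba"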

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
On Thursday, June 30, 2022, at 10:15 AM, Rob Freeman wrote: > But in the sense of having the same internal connectivity within two groups > which are not directly connected together. Yes, in the sense that inputs (input clusters) are parameterized with derivatives ("connections") from

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 2:18 PM Boris Kazachenko wrote: > On Thursday, June 30, 2022, at 6:10 AM, Rob Freeman wrote: > > what method do you use to do the "connectivity clustering" over it? > > > I design from scratch; that's the only way to achieve conceptual integrity > in the algorithm:

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
On Thursday, June 30, 2022, at 6:10 AM, Rob Freeman wrote: > what method do you use to do the "connectivity clustering" over it? I design from scratch; that's the only way to achieve conceptual integrity in the algorithm: http://www.cognitivealgorithm.info. Couldn't find any existing method that's

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
I think "prediction" is a redundant term,  any representation is some kind of prediction.  "Shared property": I meant initially shared between two compared representations, and only later aggregated into higher-level shared property, within a cluster defined by one-to-one matches.  Vs. summing

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 1:51 PM Rob Freeman wrote: > On Thu, Jun 30, 2022 at 1:33 PM Boris Kazachenko > wrote: > >> ... >> My alternative is to directly search for shared properties: lateral >> cross-comparison and connectivity clustering. >> By the way, independently of what shared properties
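
One toy guess at what "lateral cross-comparison and connectivity clustering" could look like in Python: cross-compare elements pairwise, keep links whose shared-property overlap clears a threshold, then read clusters off as connected components. The elements, the match measure, and the threshold are illustrative assumptions, not the actual algorithm from http://www.cognitivealgorithm.info:

    # elements with toy property sets
    elements = {
        "a": {1, 2, 3},
        "b": {2, 3, 4},
        "c": {3, 4, 5},
        "x": {7, 8, 9},
        "y": {8, 9, 10},
    }
    THRESHOLD = 2  # minimum shared properties to form a link

    def match(p, q):
        """Lateral cross-comparison: size of the shared-property overlap."""
        return len(elements[p] & elements[q])

    # build the link graph from all pairwise comparisons
    names = list(elements)
    links = {n: set() for n in names}
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            if match(p, q) >= THRESHOLD:
                links[p].add(q); links[q].add(p)

    # connectivity clustering = connected components of the link graph
    def clusters():
        seen, out = set(), []
        for n in names:
            if n in seen:
                continue
            comp, stack = set(), [n]
            while stack:
                m = stack.pop()
                if m not in comp:
                    comp.add(m); stack.extend(links[m])
            seen |= comp
            out.append(comp)
        return out

    print(clusters())  # -> [{'a', 'b', 'c'}, {'x', 'y'}]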

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 1:33 PM Boris Kazachenko wrote: > On Thursday, June 30, 2022, at 3:00 AM, Rob Freeman wrote: > > I'm interested to hear what other mechanisms people might come up with to > replace back-prop, and do this on the fly.. > > > For shared predictions, I don't see much of an

Re: [agi] The next advance over transformer models

2022-06-30 Thread Boris Kazachenko
On Thursday, June 30, 2022, at 3:00 AM, Rob Freeman wrote: > I'm interested to hear what other mechanisms people might come up with to > replace back-prop, and do this on the fly.. For shared predictions, I don't see much of an alternative to backprop, it would have to be feedback-driven

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 10:40 AM Ben Goertzel wrote: > "what method could directly group hierarchies of elements in language > which share predictions?" > > First gut reaction is some form of evolutionary learning where the > genomes are element-groups > > Thinking in terms of NN-ish models,

Re: [agi] The next advance over transformer models

2022-06-30 Thread immortal . discoveries
On Thursday, June 30, 2022, at 1:47 AM, Rob Freeman wrote: > "ham and eggs", not "eggs and ham", that kind of thing, or "strong tea" not > "powerful tea" "I'm havving somme powerfulll tea right nowww!!" :DD not me, but someone who likes coffee --

Re: [agi] The next advance over transformer models

2022-06-30 Thread Ben Goertzel
"what method could directly group hierarchies of elements in language which share predictions?" First gut reaction is, some form of evolutionary learning where the genomes are element-groups Thinking in terms of NN-ish. models, this might mean some Neural Darwinism type approach for evolving the