Re: [agi] Re: Making an AI from a compressor

2021-05-05 Thread John Rose
Oh wow, this is an interesting approach; glad somebody is on it:
https://arxiv.org/abs/2104.10670

We'd need to estimate consciousness, and intelligence as well, by analyzing the 
compressive characteristics of the mutually communicated information... hmm...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M4639df5ae3ad26f4aed4141a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Making an AI from a compressor

2021-04-30 Thread John Rose
On Thursday, April 29, 2021, at 10:49 PM, Jim Bromer wrote:
> I was reading your comment that "Storage is transmission," and I realized, 
> based on an idea I had a number of years ago, that if digital data were in 
> constant-transmission form, then the null signal could be used as a value 
> (like 0).

Yes, codecs are part of spatiotemporal transmitters, usually operating between 
conscious entities. Storage is transmission through time: imagine a 
post-glacial petroglyph carved into rock by an ancient member of some society. 
The carver is creating a compressed full-duplex transmission to him- or 
herself, to be received half-duplex by other conscious entities, namely us. We 
can't transmit backwards through time to the transmitter (not yet, at least), 
and we decompress much as the original compressor did. Storage can be 
single-hop (one rock) or multi-hop (many rocks), when other conscious entities 
copy it and retransmit.

I’m still trying to decide whether this is true: all compressed data is meant 
for transmission.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Mcc15060981b1c62ea1cf76f3


Re: [agi] Re: Making an AI from a compressor

2021-04-29 Thread Jim Bromer
I was reading your comment that "Storage is transmission," and I realized, 
based on an idea I had a number of years ago, that if digital data were in 
constant-transmission form, then the null signal could be used as a value (like 
0). That can't be done when data is stored in a contemporary memory device or 
when transmitted data needs to be boosted. I only mention it because it 
occurred to me while I was reading your comment; it is not actually about 
making AI from a compressor.
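Jim's idea can be sketched in a few lines: if data is always in flight, a time slot with no pulse can itself encode a symbol. A toy illustration (the slot/pulse representation here is my own assumption, not Jim's design):

```python
def encode(bits):
    """Map each bit to a timed slot: 1 becomes a pulse ('P'),
    0 becomes the null signal (None); absence itself carries the value."""
    return ['P' if b else None for b in bits]

def decode(slots):
    """Recover the bits: a pulse reads as 1, the null signal as 0."""
    return [1 if s == 'P' else 0 for s in slots]
```

As Jim notes, this only works while the stream is actually in transit; a conventional storage cell or signal booster has to re-materialize the nulls explicitly.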
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M5ede26a0343f292ecacdcdc2


Re: [agi] Re: Making an AI from a compressor

2021-04-29 Thread immortal . discoveries
You're gonna have to explain that one more clearly, Jim. How are you improving 
prediction here? What is the pattern?
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Mc592ea81ca118c5314397980


Re: [agi] Re: Making an AI from a compressor

2021-04-29 Thread Jim Bromer
Free Lunch? Maybe
https://jamesbromer.com/Freelunch-Maybe.html
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M03dd226382fa10c5519d6c69


Re: [agi] Re: Making an AI from a compressor

2021-04-29 Thread John Rose
On Wednesday, April 28, 2021, at 6:22 PM, immortal.discoveries wrote:
> Either JR is just joking in every post or he really believes what he says. 
> Neither is a good case... My suggestion: build an AI like GPT-2 / PPM, and 
> see for yourself exactly how one works, to get out of this qualia bubble 
> you're seriously stuck in (clearly you are; yes, I'm telling you, I've been 
> there, back when I was a young and dumb beginner).

Spuriously spewing verbal vomit, then watching it dribble down and putrefy, 
does not a bubble-popper make. It just forces a wipe-off...

Do you think this is some sort of joke with nothing behind it?

You've been there as a young and dumb beginner? I doubt that, and if you were, 
you abandoned the pursuit due to micro-cojones... now adrift on an AI sea in a 
rudderless boat, blowing whichever way the crowd goes, getting a whiff of 
Elongated Muskiness on your dinghy once in a while that perks your attention.


On Wednesday, April 28, 2021, at 6:22 PM, immortal.discoveries wrote:
> To test for A[G]I, you check that it can [accurately] solve all sorts of 
> [diverse] problems that humans can, e.g. building tall, reliable towers, 
> running fast, riding bikes, solving cancer, etc. But that is time-consuming 
> and subjective/unclear.

Why waste the cycles if you don’t get it? Perhaps try contemplating once in a 
while instead of knee-jerkingly regurgitating errata.

--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M75f2a69457ccd7c6c1c2f523


Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread immortal . discoveries
Either JR is just joking in every post or he really believes what he says. 
Neither is a good case... My suggestion: build an AI like GPT-2 / PPM, and see 
for yourself exactly how one works, to get out of this qualia bubble you're 
seriously stuck in (clearly you are; yes, I'm telling you, I've been there, 
back when I was a young and dumb beginner).

> Single brains specialize. Multibrains generalize
Wrong. AGI is general intelligence.
Specialization is when a brain, or a group of brains, gets better at problem 
solving and tends to prefer solving and learning about only a few specific 
domains later in life (crystallization/immortality).

To test for A[G]I, you check that it can [accurately] solve all sorts of 
[diverse] problems that humans can, e.g. building tall, reliable towers, 
running fast, riding bikes, solving cancer, etc. But that is time-consuming and 
subjective/unclear. With Lossless Compression, the benchmark checks millions of 
tasks given to the AI for you, and scores each task extremely accurately, in 
hours! The text or image dataset contains the same data/problems as real-life 
problems. AI can only find/create patterns; nothing else in the universe can be 
leveraged, and therefore LC tests that the AI finds patterns and then uses its 
experiences to predict the answers. All any AI can do is exact matching, 
translation, recency boosting, etc. It's simple to make AGI, really; there are 
no nouns and verbs and so on. That is not what makes DALL-E or GPT-2, just 
simple pattern-finding mechanisms. Do note you can teach it thousands of rare 
patterns like nouns etc., but this isn't what you hardcode; rather, it is the 
data we or it make and feed to it, which it then processes using the hardcoded 
pattern mechanisms I describe.
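The "compression as a score" idea can be made concrete: run a predictive model over the text and total up -log2 of the probability it assigned to each symbol, which (up to coding overhead) is the compressed size in bits. A minimal sketch with an adaptive order-2 context model; the model choice and the Laplace smoothing over 256 byte values are my own assumptions, not a reference implementation:

```python
import math
from collections import defaultdict, Counter

def score_bits(text, order=2):
    """Total -log2 p over the text under an adaptive order-k model.
    Lower means better prediction, i.e. a better compression 'score'."""
    counts = defaultdict(Counter)
    total = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]          # the k preceding characters
        c = counts[ctx]
        p = (c[ch] + 1) / (sum(c.values()) + 256)  # Laplace smoothing
        total += -math.log2(p)
        c[ch] += 1                               # learn as we go (adaptive)
    return total
```

Highly patterned text scores far fewer bits than random-looking text of the same length, which is exactly the property a lossless-compression benchmark exploits.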
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M9e7f671aa7d6ef991ee3d7c4


Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread John Rose
On Wednesday, April 28, 2021, at 11:55 AM, immortal.discoveries wrote:
> What matters here is brains can solve many problems by predicting solutions 
> based on context/ problem given

Single brains specialize. Multibrains generalize. That's why they communicate. 
Multiparty intelligence on a computational topology benefits from optimal 
information transmissivity. I can feel what you are thinking and predict based 
on that, which helps minimize language transmission. Language is a 
compression... imperfect as it is. Humans didn’t physically evolve into one 
giant brain, but we did virtually... even before the internet, I mean, though 
now even more so.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Ma20b0d75c37fb797974c2fe6


Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread John Rose
On Wednesday, April 28, 2021, at 12:01 PM, Jim Bromer wrote:
> Malleable compression is an interesting way to put it.

Well, we could reframe the concept of compression and redefine it in terms of 
consciousness and intelligence.

Assume panpsychism: all compressors have non-zero consciousness, and all 
compressed data is meant for transmission. Storage is transmission, 
spatiotemporally, to others and to oneself. Then “compression” could be 
reworded as perceptual information malleation for transfer with qualia-utility 
minimization. Or something like that. IOW, decompressors minimize conscious 
data perception via information re-representation/re-dimensionalization for 
transmission efficiency, by means of intelligent exertion. Thus “compression” 
is gone… along with all of its associated stigma.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M50b5f0edba648a036a769023


Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread Jim Bromer
Malleable compression is an interesting way to put it. 
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Mb517e1b5a5fc923ff46099fc


Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread immortal . discoveries
John says you can sometimes compress horribly or not at all, sometimes ace it, 
but not ace everything all the time. Here's my answer: given 100 letters of 
context that are random, yes, a smart brain will fail, because the data is 
random; and given 100 letters that are all 'a', e.g. 'aaaa', it will ace it and 
compress it maximally. And no, it can't ace it all the time, because sometimes 
it sees the 'aaa' and sometimes it sees total randomness, e.g. '5!0fIs8'. What 
matters is that brains can solve many problems by predicting solutions based on 
the context/problem given. It may not be perfect (being perfect would require 
knowing where the particles of your RAM are, by storing that in the very same 
RAM), but it can get close to perfect. For any given problem, like the 'js62nf' 
or the 'a', it is trying its hardest to predict accurately; there is no sad 
'sometimes it can compress perfectly, sometimes not at all'. AGI works.

And to Jim etc.: AI takes a context/problem and predicts the answer/solution 
using past similar experiences; it's a predictor that sees patterns. This is 
the only way you can take advantage of the universe and come out on top: by 
finding patterns and exploiting the fact that things are not random but repeat. 
The first thing you notice in a text dataset is that the same letter/word/etc. 
re-occurs, which allows compression/prediction. Which is AI. All deeper 
patterns are rooted in exact matches. For example, translation is shared 
contexts: of all the things 'cat' and 'dog' predict, they share, say, 80% of 
their predictions ('dog ran', 'dog play', 'cat ran', 'cat play'), and only two 
are unshared ('cat meowed', 'dog barked'), though even those are similar. So 
'cat' and 'dog' likely share other contexts too, and if I see 'dog' I can 
predict dog>meowed, because cat and dog share many contexts/predictions, hence 
the contexts they do not share are likely also valid after either word. 
Translation uses exact matches. And translation also tells you how similar 
'cat' is to 'dog', not just how likely 'meow' is to follow 'dog'. Also, like 
things clump together in text and images: one paragraph is about dogs, or 
rockets, so all the words will be related, dog-based. You'll see 'dog saw a 
dog' and 'my dog loves cats' and 'my cat saw a dog' and 'my cat...', so it is 
easy to predict what follows 'cat': it is 'meow', because you know it already, 
or it is 'cat' again! And having seen cat>meow more often than cat>play tells 
you to predict it more often.
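The shared-context argument above can be demonstrated in a few lines. Here is a toy sketch; the Jaccard overlap measure and the tiny corpus are my own illustrative choices, not anything from the post:

```python
def following_contexts(tokens):
    """Map each word to the set of words observed immediately after it."""
    ctx = {}
    for a, b in zip(tokens, tokens[1:]):
        ctx.setdefault(a, set()).add(b)
    return ctx

def similarity(ctx, w1, w2):
    """Jaccard overlap of two words' following-contexts: a crude
    'translation' score in the sense described above."""
    s1, s2 = ctx.get(w1, set()), ctx.get(w2, set())
    return len(s1 & s2) / len(s1 | s2) if (s1 | s2) else 0.0
```

On a corpus where 'cat' and 'dog' are each followed by 'ran' and 'play' but only one is followed by 'meowed' or 'barked', the overlap is 2/4 = 0.5, and that overlap is what licenses predicting 'meowed' after 'dog'.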

--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M6abf9204bc54904f4afe3ef4


Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread John Rose
On Wednesday, April 28, 2021, at 9:24 AM, Jim Bromer wrote:
> I do not think that "compression" per se is the basis of making AI (which is 
> directly related to the topic). However, I do believe that an AGI (or an 
> advanced AI) program would be like a compressor.

I'm with you there, Jim. Unlike some researchers, I don't pigeonhole 
"compression" into crisp/restrictive functional mechanisms. Competing for the 
best data compressor is one goal; utilizing malleable-compression concepts in 
AGI theory and engineering is another. I do accept that some researchers 
strongly believe their work on optimal compression is the highest priority, 
which is fine; we can all coexist since it's all related...
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M715adf084102787bdce9f4f5


Re: [agi] Re: Making an AI from a compressor

2021-04-28 Thread Jim Bromer
In response to something I said about cross-generalization, John Rose replied, 
"You can optimally compress some of the data all of the time, you can optimally 
compress all of the data some of the time, but you can’t optimally compress all 
of the data all of the time. It is what it is bruh."

Generalization is not as narrowly defined as you seem to think. This topic is 
about making AI from a compressor, so I started thinking about 
cross-generalization as a compressor. Cross-generalization is a network theory, 
not just something to do with uniform horizontal and vertical relations, or a 
filing-cabinet system, or anything tightly constrained in that way. The term 
generalization includes variations of generality that would include non-optimal 
compression, cross-topical compression, and so on.

I do not think that "compression" per se is the basis of making AI (which is 
directly related to the topic). However, I do believe that an AGI (or an 
advanced AI) program would be like a compressor. I am also thinking of an 
Artificial Artificial Neural Network, though I do not use that term literally. 
I want to develop a discrete network that can create and include ANN-like 
encodings. So the idea I am trying to develop is that there could be a network 
that could be traversed to yield direct insights but which could also act like 
an ANN. So I do not spend much time on general optimal compressors as the basis 
for an AI device; I am thinking about specialized relations that will act to 
compress concepts, parts of concepts, and so on.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M9251b32ea731788115cc09c9


Re: [agi] Re: Making an AI from a compressor

2021-04-27 Thread James Bowery
Ideal induction compresses all data you have to the smallest of all
programs that outputs that data exactly.
Ideal deduction/prediction runs that program until it outputs all that data
plus data you didn't have which are "deductions/predictions".
Ideal decision appends an action to your original data, does induction
again (re-compresses it) and runs that new program to yield predictions,
but it does so for each possible action, assigning to each consequence of
each action a value/utility measure and then decides to take the action
whose predicted consequences have the maximum value/utility measure.

There are assumptions that go into the above, but they are pretty hard to
argue with if you subject yourself to the same degree of rigor/critique in
offering your own alternatives as to what AGI means.
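As a toy illustration of the induction, prediction, and decision loop above, here is a sketch in which a bigram frequency model stands in for the "smallest program" (a deliberate and crude substitution, since ideal induction is uncomputable) and utility is scored on the one-step predicted consequence of each action:

```python
from collections import defaultdict, Counter

def induce(seq):
    """'Induction' stand-in: fit a bigram model to the data."""
    model = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        model[a][b] += 1
    return model

def predict_next(model, last):
    """'Deduction' stand-in: run the model one step past the data."""
    return model[last].most_common(1)[0][0] if model[last] else last

def decide(history, actions, utility):
    """For each action: append it, re-induce, predict the consequence,
    and pick the action whose predicted consequence scores highest."""
    def value(action):
        model = induce(history + [action])
        return utility(predict_next(model, action))
    return max(actions, key=value)
```

With a history like ['wake', 'coffee', 'work', 'wake', 'coffee', 'work', 'wake'] and a utility that rewards 'work', decide picks 'coffee', since the re-induced model predicts 'work' follows it; that is the append/re-compress/run/score loop in miniature.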


On Sat, Apr 24, 2021 at 11:07 AM, WriterOfMinds wrote:

> Disclaimer: I'm just trying to translate these talking points for better
> comprehension. I am not a member of the "all intelligence is best modeled
> as sequence prediction" camp. For one thing, that's a bit too flat or
> simplistic. (Like saying, "all intelligence is just choosing your next
> action." Well yes, but *how* to do it is the big awkward question.) And on
> an intuitive level, I don't think my understanding of a book is measured by
> how well I guessed the ending. Understanding is more like ... the ability
> to connect things in the text to my current world-model in the way that was
> intended by the author.

--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M85d25e25d492178c9709d656


[agi] Re: Making an AI from a compressor

2021-04-27 Thread Jim Bromer
There is something about cross-generalizations and cross-categorizations, or 
overlapping insights about concepts (or components of concepts), that makes me 
think of a compression method that would not be an optimal compressor for a 
narrow task but would be highly sophisticated for AI/AGI. But I can't quite 
figure it out.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Mb0d295e3890bb20eb0537743


[agi] Re: Making an AI from a compressor

2021-04-24 Thread immortal . discoveries
And we've already seen DALL-E and Blender; clearly we have nearly made AGI 
already, we are extremely close now. You will be able to throw it at anything 
and let it do the task, e.g. throw it down the vent and it'll clean your ducts 
for you, if you get me. Especially if it has a body, of course.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Mb3f8ecdce91c9d83a9f5cc41


[agi] Re: Making an AI from a compressor

2021-04-24 Thread immortal . discoveries
AGI, with some enhancements to its brain, will be able to do anything I can do, 
better. But we don't want to test it on a single narrow task, e.g. can it build 
towers taller than the ones humans have made? We want to test it on a very 
large, diverse set of tasks: running, cancer, computer speedups, GPU 
inventions, etc. To do this, to make sure it is a general-purpose solver like a 
man (or handy woman) that you can just throw down a vent or alley and say "fix 
that air duct" or "that mark on the wall" and voila, job done without you 
needing to pay attention, we use enwik8 (yes, enwik8 has a bit too much HTML, 
markup, and biography, but it's OK-ish and has been used for years, so we kind 
of have to stick with it now). So, yeah, we use enwik8, text, to see if it can 
accurately predict the right solution/procedure/data for many diverse problems. 
If it can find patterns in our real-world data (images/text), then it can 
predict likely-true unseen discoveries anew and compress data.

On Saturday, April 24, 2021, at 11:52 AM, WriterOfMinds wrote:
> You run your prediction algorithm on the file you wish to compress, and store 
> only those elements in the sequence that differ from the element your 
> algorithm predicted at that position. Store the errors, in other words. You 
> can then regenerate the original file (decompress) by running your algorithm 
> and substituting the stored errors into the output.
> 
> If your algorithm requires training, then you should be able to train it on 
> any representative example of the kind of data you want to compress
Correct, mine already does that. So do many others.

Do note that a brain stores memories overlaid on top of other memories, e.g. 
c>a/i>t/n stores both 'cat' and 'cin'. A brain compresses. This allows it to 
combine predictions and predict unseen things accurately. This good prediction 
also allows compression of a file, by storing, outside the brain-compressor, an 
error correction "on paper". That is the other form of compression, the one we 
usually mean when we talk about compression, though yes, a brain also 
compresses memories into the network to crunch it all together. In fact that is 
part of the storage cost of the compression as well, because you add up the 
brain size plus the error correction on that "paper" to get the total size 
needed to compress it.

On Saturday, April 24, 2021, at 12:06 PM, WriterOfMinds wrote:
> the ability to connect things in the text to my current world-model in the 
> way that was intended by the author.
Recognition is prediction; it is part of how you do prediction of the next 
letter. All machines can only react based on context (prediction... 
input>output).
Do note a full AGI will have sound and vision... it does use multi-sensory 
input, like you say you use in your own human brain.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M836ad3c5860fe89d517aa773


[agi] Re: Making an AI from a compressor

2021-04-24 Thread WriterOfMinds
Disclaimer: I'm just trying to translate these talking points for better 
comprehension. I am not a member of the "all intelligence is best modeled as 
sequence prediction" camp. For one thing, that's a bit too flat or simplistic. 
(Like saying, "all intelligence is just choosing your next action." Well yes, 
but *how* to do it is the big awkward question.) And on an intuitive level, I 
don't think my understanding of a book is measured by how well I guessed the 
ending. Understanding is more like ... the ability to connect things in the 
text to my current world-model in the way that was intended by the author.
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-M7923cf6c20935776d1494d65


[agi] Re: Making an AI from a compressor

2021-04-24 Thread WriterOfMinds
It's not so much "making an AI from a compressor" as "making a compressor from 
an AI." Specifically, it has to be a kind of AI that predicts the next element 
in a sequence if given past elements of the sequence.

You run your prediction algorithm on the file you wish to compress, and store 
only those elements in the sequence that differ from the element your algorithm 
predicted at that position. Store the errors, in other words. You can then 
regenerate the original file (decompress) by running your algorithm and 
substituting the stored errors into the output.

If your algorithm requires training, then you should be able to train it on any 
representative example of the kind of data you want to compress.
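The scheme above can be sketched directly: run the predictor over the file, keep only the positions where it guessed wrong, and replay the predictor with those corrections to decompress. A minimal sketch; the repeat-the-last-byte predictor is a placeholder assumption, and any deterministic next-element predictor slots in:

```python
def predict(prefix):
    """Placeholder predictor: guess that the previous element repeats."""
    return prefix[-1] if prefix else 0

def compress(data):
    """Store the length plus (position, actual) pairs where prediction failed."""
    errors = [(i, b) for i, b in enumerate(data) if predict(data[:i]) != b]
    return len(data), errors

def decompress(length, errors):
    """Replay the predictor, substituting the stored corrections."""
    fixes, out = dict(errors), []
    for i in range(length):
        out.append(fixes.get(i, predict(out)))
    return bytes(out)
```

The better the predictor, the fewer stored errors. With the trivial predictor above, runs of identical bytes compress well and random data does not, which is the whole point of substituting a strong sequence model.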
--
Permalink: https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Mb1102cc0e0e1b8982d6c70a9