Ah oh wow this is an interesting approach, glad somebody is on it:
https://arxiv.org/abs/2104.10670
Need to estimate consciousness, and also intelligence, by analyzing the
compressive characteristics of the mutually communicated information... hmm...
--
On Thursday, April 29, 2021, at 10:49 PM, Jim Bromer wrote:
> I was reading your comment that, "Storage is transmission," and I realized,
> based on an idea I had a number of years ago, that if digital data were in
> constant transmission form then the null signal could be used for a value (like
> 0). That can't be done when data is stored in a contemporary
Gonna have to explain that one more clearly, Jim: how are you improving
prediction here? What is the pattern?
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Tbdfca102d702de94-Mc592ea81ca118c5314397980
Delivery options:
Free Lunch? Maybe
https://jamesbromer.com/Freelunch-Maybe.html
On Wednesday, April 28, 2021, at 6:22 PM, immortal.discoveries wrote:
> Either JR is just joking in every post or he really believes what he says.
> Both are not good cases... My suggestion: build an AI like GPT-2 / PPM, and
> see for yourself exactly how one works, to get out of this qualia bubble
> you're seriously stuck in (clearly you are, yes, I'm telling you, I've
On Wednesday, April 28, 2021, at 11:55 AM, immortal.discoveries wrote:
> What matters here is that brains can solve many problems by predicting
> solutions based on the context / problem given
Single brains specialize. Multibrains generalize. That's why they communicate.
Multiparty intelligence on a
On Wednesday, April 28, 2021, at 12:01 PM, Jim Bromer wrote:
> Malleable compression is an interesting way to put it.
Well, we could reframe the concept of compression and redefine it in terms of
consciousness and intelligence.
Assume panpsychism: all compressors have non-zero consciousness.
John says you can sometimes compress horribly or not at all, and sometimes ace
it, but you can't ace everything all the time. Here's my answer: given 100
letters of context that are random, yes, a smart brain will fail here because
the data is random; and given 100 letters that are all 'a', e.g. 'aaaa', it will
On Wednesday, April 28, 2021, at 9:24 AM, Jim Bromer wrote:
> I do not think that "compression" per se is the basis of making AI (which is
> directly related to the topic). However, I do believe that an AGI (or an
> advanced AI) program would be like a compressor.
I'm with you there Jim, unlike
In response to something I said about cross-generalization, John Rose replied "
You can optimally compress some of the data all of the time, you can optimally
compress all of the data some of the time, but you can’t optimally compress all
of the data all of the time. It is what it is bruh."
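John's aphorism is, at bottom, the pigeonhole principle: no lossless scheme
can shrink every input, because there are fewer short strings than long ones.
A minimal counting sketch:

```python
# Pigeonhole argument: there are 2**n bit strings of length n, but only
# 2**n - 1 strings of any length strictly less than n. So no injective
# (lossless) compressor can map every length-n input to a shorter output.
n = 8
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))  # lengths 0 .. n-1

print(inputs, shorter_outputs)  # one input is always left without a shorter code
```

So "optimal for some of the data some of the time" is not a limitation of any
particular compressor; it is forced by counting alone.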
Ideal induction compresses all the data you have into the smallest program
that outputs that data exactly.
Ideal deduction/prediction runs that program until it outputs all of that data
plus data you didn't have; the extra output is the "deductions/predictions".
Ideal decision appends an action to your original
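The induction/prediction split above can be sketched in a deliberately tiny,
hypothetical program space: here a "program" is just a pair (seed, n) meaning
seed repeated n times. True smallest-program search (Solomonoff induction) is
uncomputable; restricting the space makes the search trivial but keeps the
shape of the idea.

```python
def smallest_program(data: str):
    """Ideal induction, toy version: find the shortest (seed, n)
    with seed * n == data, scoring size as len(seed) + len(str(n))."""
    best = None
    for length in range(1, len(data) + 1):
        if len(data) % length:
            continue
        seed, n = data[:length], len(data) // length
        if seed * n == data:
            size = len(seed) + len(str(n))  # crude program length
            if best is None or size < best[0]:
                best = (size, seed, n)
    return best[1], best[2]

def run(seed: str, n: int) -> str:
    """Ideal deduction, toy version: run the program longer than the
    observed data to get extrapolated 'predictions'."""
    return seed * n

seed, n = smallest_program("abab" * 10)
print(seed, n)  # the shortest description in this space: 'ab' repeated 20 times
```

Running `run(seed, 25)` then emits the 40 observed characters plus 10
predicted ones, which is exactly the induction-then-deduction pipeline above.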
There is something about cross-generalizations and cross-categorizations or
overlapping insights about concepts (or components of concepts) that make me
think of a compression method that would not be an optimal compressor for a
narrow task but would be highly sophisticated for AI / AGI. But I
And we've already seen DALL-E and Blender; clearly we have nearly made AGI
already, we are extremely close now. You will be able to throw it at anything
and let it do the task, e.g. throw it down the vent and it'll clean your ducts
for you, if you get me. Especially if it has a body, of course.
AGI, with some enhancements to its brain, will be able to do anything I can
do, better. But we don't want to test it on a single narrow task, e.g. can it
build towers taller than the ones humans have made? We want to test it on a
very large, diverse set of tasks, like running, cancer, computer speedups,
Disclaimer: I'm just trying to translate these talking points for better
comprehension. I am not a member of the "all intelligence is best modeled as
sequence prediction" camp. For one thing, that's a bit too flat or simplistic.
(Like saying, "all intelligence is just choosing your next
It's not so much "making an AI from a compressor" as "making a compressor from
an AI." Specifically, it has to be a kind of AI that predicts the next element
in a sequence if given past elements of the sequence.
You run your prediction algorithm on the file you wish to compress, and store
only
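A minimal sketch of that predictor-to-compressor direction, assuming a
deliberately dumb stand-in model ("the next character repeats the previous
one") purely for illustration; any real sequence model such as PPM or GPT-2
would replace `predict`, and a real coder would use arithmetic coding rather
than these flag tokens:

```python
def predict(prev: str) -> str:
    # Stand-in model: guess that the previous character repeats.
    return prev

def compress(text: str):
    """Emit a cheap 'hit' token when the model is right; a 'miss'
    token carrying the actual character when it is wrong."""
    out, prev = [], ""
    for ch in text:
        out.append(("hit",) if predict(prev) == ch else ("miss", ch))
        prev = ch
    return out

def decompress(tokens):
    """Rerun the same model to turn the token stream back into text."""
    text, prev = [], ""
    for tok in tokens:
        ch = prev if tok[0] == "hit" else tok[1]
        text.append(ch)
        prev = ch
    return "".join(text)

coded = compress("aaaabbbb")
print(sum(t[0] == "hit" for t in coded))  # 6 of the 8 predictions were right
print(decompress(coded))                  # round-trips to "aaaabbbb"
```

The better the model predicts, the more of the stream is cheap "hit" flags,
which is exactly why a strong next-element predictor is a strong compressor.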