> …theory published, but the majority of it in private R&D, in preparation
> for a proof-of-concept prototype.
>
> Rob
>
> From: Jim Bromer via AGI
> Sent: Saturday, 13 October 2018 4:12 PM
> To: AGI
> Subject: Re: [agi] Compressed Algorithms that can work on compressed data.
> > …domain (autonomous effective complexity with least "brain power" required).
> > Suppose we viewed this as part of AGI "DNA".
> >
> > How would such a computational architecture be different to your version?
> >
> >
> >
> -----Original Message-----
> From: Nanograte Knowledge Technologies via AGI
>
>
> And both are beautifully brought together by this passage:
> "In order to discuss more fully the concept of effective complexity, it is
> essential to ex…"
On Sat, Oct 13, 2018 at 6:12 AM John Rose wrote:
> > It takes kT ln 2 joules, where k ln 2 = 9.57 x 10^-24 joules per kelvin,
> > to retrieve (and copy) a bit of information.
> >
>
> Interesting! That's an average, I bet. When there are many bits, would
> intelligence optimize the sum?
Actually, no. That is the minimum.
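For scale, a minimal sketch (plain Python; the 300 K temperature and the
gigabyte figure are my assumptions) of what the kT ln 2 bound works out to:

    import math

    K_BOLTZMANN = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
    T = 300.0                    # assumed temperature in kelvin

    # Landauer bound: minimum energy to irreversibly handle one bit.
    per_bit = K_BOLTZMANN * T * math.log(2)
    print(per_bit)          # ~2.87e-21 J per bit at 300 K
    print(per_bit * 8e9)    # ~2.3e-11 J for a gigabyte (8e9 bits)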
Sent: … 2018 8:59 PM
To: AGI
Subject: Re: [agi] Compressed Algorithms that can work on compressed data
"which had the sole intent to achieve and maintain the
highest-potential level of competency"
This is obviously an exaggerated goal for anyone today, and certainly
for any of us, and it is really…
>
> From: Jim Bromer via AGI
> Sent: Friday, 12 October 2018 7:35 PM
> To: AGI
> Subject: Re: [agi] Compressed Algorithms that can work on compressed data
>
> The potential to create specialized data structures for AGI m…
> Furthermore, this has potential to open the door for access to the
> magical 256 NP-Complete findings.
>
> Rob
>
> From: Jim Bromer via AGI
> Sent: Friday, 12 October 2018 11:27 AM
> To: AGI
> Subject: Re: [agi] Compressed Algorithms that can work on compressed data.
>
> The idea of relative randomness of a given compr…
(The Quark and the Jaguar, p. 50)
Is Matt the observing CAS?
Rob
From: John Rose
Sent: Friday, 12 October 2018 4:36 PM
To: 'AGI'
Subject: RE: [agi] Compressed Algorithms that can work on compressed data.
> -----Original Message-----
> From: Nanograte Knowledge Technologies via AGI
> In my mind, my system would potentially cope with up to 16 real-time,
> integrated levels of abstraction.
Why 16? Do they have names? Is it a random number?
> Furthermore, this has potential to open the door for access to the
> magical 256 NP-Complete findings.
Whatever that is.
Cheers
>
> Still, easy to translate across boundaries as well.
>
> *One's shoe may be another's steak. That is the nature of true relativity
> in motion.
>
> Rob
> ----------
> *From:* Jim Bromer via AGI
> *Sent:* Friday, 12 October 2018 3:34 AM
> *To:* AGI
Matt said, "A string is random if there is no shorter description of
the string."
That is a conjecture, or a hypothesis.
Matt said, "... but there is no general algorithm to distinguish them in any
language.
"Encrypted data appears random if you don't know the key. But it is not
random because
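A concrete illustration of that point (my sketch, using Python's zlib; not
code from the thread): a general-purpose compressor finds no shorter
description of random-looking bytes, which is why ciphertext "appears
random" even though key-plus-plaintext is a short description of it:

    import os, zlib

    def ratio(data: bytes) -> float:
        # Compressed size over original size; ~1.0 means "looks random".
        return len(zlib.compress(data, 9)) / len(data)

    structured = b"the quick brown fox jumps over the lazy dog " * 200
    random_like = os.urandom(len(structured))

    print(ratio(structured))    # well below 1.0: regularities found
    print(ratio(random_like))   # about 1.0: no shorter description found

Of course zlib failing to compress something proves nothing, which is
exactly Matt's point that no general algorithm can certify randomness.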
> -----Original Message-----
> From: Matt Mahoney via AGI
>
> On Thu, Oct 11, 2018 at 12:38 PM John Rose wrote:
> > OK, what then is between a compression agent's perspective (or any
> > agent's, for that matter) and randomness? Including shades of randomness,
> > up to relatively "pure" randomness.
>
>
To heck with work! A couple of beers makes it all OK. (Note to
employer: I am only kidding about that, of course.)
As I said, I think the definition of randomness within a constrained
system makes a lot more sense than the alternative. However, a
definition must then be relative. Perhaps Matt…
I think Matt's last post is wrong about the idea of the randomness of
a string, but I am really supposed to be working.
I think John's abstract example would constitute an instance of what I
was thinking about, but there are also other exemplars, both abstract
and explicit.
Jim Bromer
On Thu, Oct…
> -----Original Message-----
> From: Jim Bromer via AGI
>
> "Randomness" is merely computational distance from agent perspective."
>
> That is really interesting but why the fixation on the particular
> fictionalization? Randomness is computation distance from the agent
> perspective? No it
John said, ""Entropic extrema" as in computational resource expense
barrier, including chaotic boundaries, too expensive to mine into for
the compression agent causing symbol explosion and unpredictable time
complexity.. so effectively one-time symbolizing the whole region and
working around it
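One concrete reading of that (my sketch, not John's design; the chunk size
and threshold are arbitrary): test each region's compressibility, and emit
regions that are too expensive to model as single opaque literals, working
around them:

    import zlib

    CHUNK = 4096        # region size: an assumption for illustration
    THRESHOLD = 0.98    # ratio above which a region is "too expensive to mine"

    def symbolize(data: bytes):
        # Compress tractable regions; pass entropic extrema through as
        # opaque one-time symbols and work around them.
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            packed = zlib.compress(chunk, 9)
            if len(packed) / len(chunk) < THRESHOLD:
                out.append(("compressed", packed))
            else:
                out.append(("opaque", chunk))
        return out

    def restore(tokens):
        return b"".join(zlib.decompress(p) if kind == "compressed" else p
                        for kind, p in tokens)

DEFLATE does something similar when it falls back to "stored" blocks for
incompressible input.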
> -----Original Message-----
> From: Jim Bromer via AGI
>
> And if the concept of randomness is called into question then
> how do you think entropic extremas are going to hold up?
>
"Entropic extrema" as in computational resource expense barrier, including
chaotic boundaries, too expensive
This is such a weird statement. It reads as if you are trying to make the
human look stupid, but it is really smarter for AI production to have smart
humans. I am inclined to conclude you are not actually in the AI game
yourself.
On Mon, 8 Oct 2018 at 18:03, Matt Mahoney via AGI
wrote:
>
>
> On Mon, Oct 8, 2018, 9:44 AM Stefan Reich via AGI wrote:
> -----Original Message-----
> From: Jim Bromer via AGI
>
> Operating on compressed data without having to decompress it is the goal
> that I am thinking of, so being able to access internal relations would be
> important.
> There can be some compressed data that does not contain explicit…
On Mon, Oct 8, 2018, 9:44 AM Stefan Reich via AGI
wrote:
>
>
> Matt Mahoney via AGI wrote on Sun., 7 Oct 2018, 03:25:
>
>> I understand the desire to understand what an AGI knows. But that makes
>> you smarter than the AGI. I don't think you want that.
>>
>
> Sure I want that!
>
No you…
I understand the desire to understand what an AGI knows. But that makes you
smarter than the AGI. I don't think you want that.
A neural network learner compresses its training data lossily. It is lossy
because the training data information content can exceed the neural
network's memory capacity.
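A back-of-the-envelope version of that argument (every number below is an
assumption for illustration, including the rough two bits of effective
storage per parameter):

    n_samples = 10_000_000      # assumed training set size
    bits_per_sample = 10_000    # assumed information content per sample
    n_params = 100_000_000      # assumed parameter count
    bits_per_param = 2.0        # assumed effective storage per parameter

    data_bits = n_samples * bits_per_sample    # 1e11 bits of training data
    capacity_bits = n_params * bits_per_param  # 2e8 bits of model capacity

    # Capacity is far below the data's information content, so whatever the
    # network retains is necessarily a lossy compression of the data.
    print(data_bits > capacity_bits)   # True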
…to think about algorithms
as an enduring ecology, not as a "brilliant" event.
From: Ben Goertzel
Sent: Saturday, 06 October 2018 4:27 AM
To: AGI
Cc: agi
Subject: Re: [agi] Compressed Algorithms that can work on compressed data.
Jim,
If you look at how lossless compression works, e.g. lossless text
compression, it is mostly based on predictive probability models ...
If you have an opaque predictive model of a body of text, e.g. a deep
NN, then it's hard to manipulate the internals of the model ...
OTOH if you have a…
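To make "compression is mostly prediction" concrete (my sketch, not Ben's
code): an adaptive model assigns each next symbol a probability p, and an
ideal arithmetic coder spends -log2(p) bits on it, so better prediction
directly means fewer bits:

    import math
    from collections import Counter

    def ideal_code_length(text: str) -> float:
        # Bits an ideal arithmetic coder would emit under an adaptive,
        # Laplace-smoothed order-0 byte model: predict, charge, update.
        counts, seen, total_bits = Counter(), 0, 0.0
        for b in text.encode("utf-8"):
            p = (counts[b] + 1) / (seen + 256)
            total_bits += -math.log2(p)
            counts[b] += 1
            seen += 1
        return total_bits

    english = "the cat sat on the mat " * 50
    print(ideal_code_length(english) / (8 * len(english)))  # well under 1.0

Plug in a deep NN as the predictor and you get better probabilities but
opaque internals, which is exactly the trade-off Ben describes.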
Compressed Network Search Finds Complex Neural Controllers with a Million
Weights
First Deep Learner to learn control policies directly from high-dimensional
sensory input using reinforcement learning
Jürgen Schmidhuber, 2013
http://people.idsia.ch/~juergen/compressednetworksearch.html
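The gist of that technique, as I understand the paper (the sketch and its
sizes are mine): evolve a short vector of low-frequency DCT-style
coefficients and expand it into a full weight matrix, so the search happens
in the tiny coefficient space rather than over a million raw weights:

    import numpy as np

    def weights_from_coeffs(coeffs, n_out, n_in):
        # Expand a short coefficient vector into an (n_out x n_in) weight
        # matrix using a low-frequency 2-D cosine basis (inverse-DCT-like).
        k = int(np.ceil(np.sqrt(len(coeffs))))
        c = np.zeros((k, k))
        c.flat[:len(coeffs)] = coeffs
        y = (np.arange(n_out)[:, None] + 0.5) / n_out
        x = (np.arange(n_in)[None, :] + 0.5) / n_in
        w = np.zeros((n_out, n_in))
        for u in range(k):
            for v in range(k):
                w += c[u, v] * np.cos(np.pi * u * y) * np.cos(np.pi * v * x)
        return w

    genome = np.random.randn(16)                 # 16 numbers to evolve...
    w = weights_from_coeffs(genome, 1000, 1000)
    print(w.shape)                               # ...parameterize 1,000,000 weights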
A good goal for a next generation compression system is to allow
functional transformations to operate on some compressed data without
needing to decompress it first. (I forgot what this is called but
there is a Wikipedia entry on something similar in cryptography.)
This is how multiplication…
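The cryptography entry Jim half-remembers is presumably homomorphic
encryption. Textbook RSA, for example, is multiplicatively homomorphic:
you can multiply two ciphertexts and obtain the encryption of the product
without ever decrypting (toy, insecure parameters; my illustration):

    # Textbook RSA with toy parameters -- insecure, illustration only.
    p, q = 61, 53
    n = p * q                           # modulus 3233
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    a, b = 7, 6
    # Multiply the ciphertexts: Enc(a) * Enc(b) mod n == Enc(a * b)
    product_cipher = (enc(a) * enc(b)) % n
    print(dec(product_cipher))          # 42, computed on encrypted operands

Fully homomorphic schemes extend this so that both addition and
multiplication can be carried out on encrypted data.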