Jim

1) Via resident evolutionary algorithms for mutation, transformation,
variation, and recombination (a sketch of these operators follows after item 2).

2) Across a quantum communications network, damaged code would simply be resent
via recombined message packages and rerouting algorithms until the atoms find a
gap in the protective boundary to penetrate and reach their destination quantum
address. In theory, loop-until logic would simplify this procedure. All
environmental input would be recombined via one-step data mutation and seamlessly
incorporated into the sum of the holistic DB. Given X input, the DB would
morphogenetically adapt based on its management instructions for real-time
adaptation. Given the processing power predicted under an exponential
Moore's Law scenario, learning should theoretically become near instantaneous.
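
A minimal sketch of the resident evolutionary loop alluded to in item 1, assuming a toy bit-string genome and a caller-supplied fitness function (all names here are hypothetical, not part of any proposed system):

import random

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability (mutation/variation)."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

def recombine(parent_a, parent_b):
    """Single-point crossover (recombination)."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def evolve(population, fitness, generations=100):
    """Plain generational loop: select, recombine, mutate, repeat."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: len(ranked) // 2]  # simple truncation selection
        population = [mutate(recombine(random.choice(parents), random.choice(parents)))
                      for _ in range(len(population))]
    return max(population, key=fitness)

# Hypothetical usage: maximise the number of 1-bits in a 32-bit genome.
best = evolve([[random.randint(0, 1) for _ in range(32)] for _ in range(50)],
              fitness=sum)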

This architecture would be suitable for seamlessly mapping the subjective
internals of any computational platform to the objective realities of any
external environment. Essentially, a logical recombination at the data level
could be one of the outcomes. A singular world view would be generated and
could be symbolically stored as a conceptual map. Billions of such symbolic
worlds could be sequenced and logically manipulated. There would be no real
need (as is the case with humans) to first generate an integrated view of
relative reality before function could be applied to the relative understanding
of sensory-driven environmental input.


________________________________
From: Jim Bromer <[email protected]>
Sent: 18 June 2017 12:51 AM
To: AGI
Subject: Re: [agi] Re: Is It Feasible to Run a Computation on Compressed 
Logical Data?

How could the meta-operations be effectively (efficiently) learned (or acquired 
from experience)?

And even if an encoding was not hackable in the sense that it could not be
understood and hacked into, it still could be hackable in the sense that the
data could be damaged.

Jim Bromer

On Fri, Jun 16, 2017 at 9:14 AM, Nanograte Knowledge Technologies
<[email protected]> wrote:

A friend pointed out to me that the core of a Grape system is based on learning.
This got me thinking about the logical problem you have proposed. How to learn
without making it about learning?

So, in my layman's language, I think you could generate a symbolic schema,
which would relate back to the data architecture and an AI-heuristic DB
architecture. For example, generate a symbolic, graphical spaceship (or any
other object of desire) and associate different parts of the spaceship
architecture with different knowledge elements (not tables). The spaceship
would represent the form that the function of the database would take. When
the DB is operating normally, one could say the spaceship would be complete
and functional.
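
A minimal sketch of that kind of symbolic schema, assuming a plain dictionary stands in for the AI-heuristic DB; the part names and knowledge elements below are invented for illustration:

# Symbolic schema: each part of the "spaceship" stands for a set of knowledge
# elements rather than tables. The DB is "complete and functional" when every
# part has at least one knowledge element bound to it.
spaceship_schema = {
    "hull":      ["context-model", "boundary-rules"],
    "engine":    ["inference-procedures"],
    "cockpit":   ["user-interface-contexts"],
    "cargo-bay": [],   # unbound: the symbolic DB is not yet complete
}

def is_complete(schema):
    """The symbolic form is functional only when every part is bound."""
    return all(elements for elements in schema.values())

print(is_complete(spaceship_schema))  # False until "cargo-bay" is associated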

Further, one could associate every meta-architectural element (such as
meta-function, meta-data, meta-process, meta-context, meta-anything) with the
undercarriage of the spaceship, and allow the program logic to locate all
associated elements across the symbolic DB that have the strongest probable
cohesive factors. Build verifiable contexts from this. The resultant contexts
would become your user-interface layer. Obviously, the AI-heuristic
architectural platform would also consist of an adaptive context management
module.
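
One way to read the "strongest cohesive factors" lookup, sketched under the assumption that each meta-element carries a numeric cohesion score against the undercarriage (the scores and names are made up):

# Meta-architectural elements associated with the "undercarriage", each with a
# hypothetical cohesion score in [0, 1].
meta_elements = {
    "meta-function": 0.92,
    "meta-data":     0.87,
    "meta-process":  0.41,
    "meta-context":  0.78,
}

def build_context(elements, threshold=0.75):
    """Keep only the elements whose cohesion is strong enough, ranked."""
    strong = [(name, score) for name, score in elements.items() if score >= threshold]
    return sorted(strong, key=lambda pair: pair[1], reverse=True)

# The resultant context would feed the user-interface layer.
print(build_context(meta_elements))
# [('meta-function', 0.92), ('meta-data', 0.87), ('meta-context', 0.78)]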

The fun part is that this high level of translatable abstraction may become
traceable back to the elementary elements, which could in effect collectively
describe the database in any state or form. The design is to set up the paths
to carry messages across architectural chain reactions.

Thinking:
Encoding by translatable association.
Decoding via translatable disassociation.
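
One reading of "encoding by translatable association", sketched with an invented association table; without that table (the role of the source architecture), the encoded form is opaque:

# The association table plays the role of the source architecture/DBMS.
association = {"fuel-level": "A1", "heading": "A2", "velocity": "A3"}
disassociation = {symbol: name for name, symbol in association.items()}

def encode(record):
    """Translate data names into abstract symbols (translatable association)."""
    return {association[name]: value for name, value in record.items()}

def decode(encoded):
    """Reverse the translation (translatable disassociation)."""
    return {disassociation[symbol]: value for symbol, value in encoded.items()}

message = encode({"fuel-level": 0.62, "heading": 118})
print(message)          # {'A1': 0.62, 'A2': 118} - meaningless without the table
print(decode(message))  # original record restored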

Hackable? Not without the source architecture and DBMS, no.
How to protect the source architecture and DBMS? Self-destruction, morphic
rulesets, or an anti-tamper evolutionary step. If you cannot see it, how can
you break into it?

Transportability? Only abstract maps would be transported. The rest is an
AI-based architectural platform. From there the whole DB could theoretically be
rebuilt. In other words, an "RNA-conceptual" associative abstract is formed
from the elementary input. What about the actual data? Data counts as
"message". Data would be encrypted using the same logic, but in a different
system layer, encoding messages with light, or similar. The result would be an
encrypted, transportable symbolic map of data.

Combining the two maps in a paused state - on any AI-heuristic DB platform 
architecture - should enable running a computation on compressed, logical data.


I think it is theoretically feasible.


________________________________
From: Jim Bromer <[email protected]>
Sent: 16 June 2017 01:48 PM
To: AGI
Subject: Re: [agi] Re: Is It Feasible to Run a Computation on Compressed 
Logical Data?

I have been really busy, but I thought about the subject a little. Most programs
use concise representations of data -as they relate to the program- and so I
guess many programs do operate on these program-related representations. These
might be characterized as partial compressions, since typically the data is not
totally compressed. The other issue, whether these compressions have to be
decompressed to produce a result of some kind, is a little more intangible. But
it is easy to think of simple tasks in which the information produced by the
program was operated on in compressed form and then used to output data (for
example, how to display the data) by decompressing these symbols. Some
characteristics of the data might be initially abstracted and represented in
compressed form. Then the operations of the system could work on these
compressed abstractions, which are then represented as strings. The strings
could subsequently be used on the original data to produce some effect.
And if the program was able to see that some of these strings of operations
could refer to other relations (consisting of operations and abstracted
characteristics of the data), these relations might be translated in some way.
So the goal would be fairly simple to achieve. The program would have to be
able to detect relations in some symbols (or symbol-like representations)
produced by the program. But I was wondering if this could be used in dealing
with Logical Satisfiability problems. For that I would need to create
(computational) rules that are more powerful and flexible than the familiar
rules of elementary logic. The problem in contemporary logic is that you
typically need to decompress the data to some degree in order to use it in
elementary operations.
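
A minimal sketch of the general idea, using run-length encoding as the "partial compression" (my choice of example, not Jim's): the operation works on the compressed abstraction directly, and the data is only decompressed for output.

from itertools import groupby

def rle_compress(data):
    """Run-length encode a sequence: [('a', 3), ('b', 4), ...]."""
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decompress(pairs):
    return [value for value, count in pairs for _ in range(count)]

def count_in_compressed(pairs, target):
    """Operate on the compressed form directly - no decompression needed."""
    return sum(count for value, count in pairs if value == target)

data = list("aaabbbbccaaa")
compressed = rle_compress(data)
print(count_in_compressed(compressed, "a"))   # 6, computed on the compressed form
print("".join(rle_decompress(compressed)))    # decompressed only for display
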
I believe the first step to the smushing problem in weighted reasoning is to
track the sources of the weighted resultant that is to be subsequently used in
another computation. That way, the conflation that produces smushing might be
resolved to some degree as needed. But that situation sounds like it would need
decompression. So the source characteristics would have to be characterized as
abstractions which could be used in complicated computational operations
without decompression. If the characteristic abstraction methods and the
computational relations between them were sensitive enough, this plan might
work - at least to some degree. In innovative AI this would involve the
specialization of abstract relations created in response to the 'experiences'
the program encounters.
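
A minimal sketch of "tracking the sources of the weighted resultant", assuming weighted evidence items with made-up names; the combined value keeps pointers back to its sources so a later computation can unsmush it if needed:

from dataclasses import dataclass, field

@dataclass
class Weighted:
    value: float
    weight: float
    sources: list = field(default_factory=list)   # provenance of the resultant

def combine(items, label):
    """Weighted average that remembers which items it was smushed from."""
    total_weight = sum(item.weight for item in items)
    value = sum(item.value * item.weight for item in items) / total_weight
    return Weighted(value, total_weight, sources=[(label, items)])

a = Weighted(0.9, 2.0, sources=["sensor-A"])
b = Weighted(0.3, 1.0, sources=["sensor-B"])
merged = combine([a, b], label="a+b")
print(merged.value)     # 0.7 - the smushed resultant
print(merged.sources)   # the conflated sources remain recoverable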

Jim Bromer

On Sat, Jun 10, 2017 at 1:25 PM, Jim Bromer <[email protected]> wrote:
Ben, Thank you for your comments. I will look at Homomorphic Encryption when I
get a chance.
When I said that I did not want to use binary arithmetic as an example, I meant
that I did not want to use it as an example of operators that are able to act
on compressed data. I did not mean that I did not want to use binary arithmetic,
although my opinion is that there have to be some other fundamental
computational operations that we do not know about.

Jim Bromer

On Sat, Jun 10, 2017 at 12:46 PM, Ben Kapp <[email protected]> wrote:
Reading this, I can't help but think about homomorphic encryption, which
provides the ability to perform computation on encrypted data.  If you were to
use a compression algorithm as your encryption method, then you would be done.
Unfortunately, homomorphic encryption necessitates bitwise encryption, and
compression algorithms would most certainly take into account more than a
single bit when they perform their compression.  And so there would be quite a
bit of work to do to generalize this field of mathematics to work on that class
of encryption methods, but it seems to be where I would focus my efforts if I
wished to provide this capability.
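
A toy illustration of computing on encrypted data, using textbook (unpadded) RSA's multiplicative homomorphism with tiny, insecure parameters; this is a well-known property rather than a practical scheme, and it says nothing about compression:

# Tiny, insecure textbook RSA: n = 3233 = 61 * 53, e = 17, d = 2753.
n, e, d = 3233, 17, 2753

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(6), encrypt(7)
product_cipher = (c1 * c2) % n   # multiply the ciphertexts only
print(decrypt(product_cipher))   # 42: the plaintexts were multiplied
                                 # without ever being decrypted first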

But you seem to have dismissed homomorphic encryption for some reason when you 
said the following.

"I did not want to use binary arithmetic as an example because computers were 
designed around those principles."

Homomorphic encryption most certainly uses binary arithmetic.  Can you 
elaborate on why you wish to preclude this?


Memories in the brain are reconstructive and confabulatory, which is to say
that when you ask someone to recall something, they will not recall information
as it was, but rather as their brain is.  And one can alter the brain of others
to effectively perform CRUD operations on their memories, allowing you to alter
those memories to any extent you wish.  That seems to be a rather big problem
for memory systems which are brain-inspired.  Computers have perfect recall,
and such is highly desirable and a great improvement over humans.  I'm not
certain why you would wish to replace such a perfect system with a lossy (and
confabulatory) one.


On Sat, Jun 10, 2017 at 4:02 AM, Jim Bromer <[email protected]> wrote:
Rob,
I will look at the paper when I get a chance.

Jim Bromer

On Wed, Jun 7, 2017 at 7:16 PM, Rob Freeman <[email protected]> wrote:
Jim,

Have a look at this paper and see if you find it relevant. I understand it to
be a sketch for logic using distributed representation. RNNs still globally
optimize, so I think they will still have lossy compression (instead of partial
compression?). But the idea of using distributed representation is on the right
track:

Semantic Compositionality through Recursive Matrix-Vector Spaces
Richard Socher   Brody Huval   Christopher D. Manning    Andrew Y. Ng
https://nlp.stanford.edu/pubs/SocherHuvalManningNg_EMNLP2012.pdf

-Rob