Rob,
I am trying to make improvements in the analysis of logical statements. My
idea is that a program could analyze satisfiability statements (logical
statements where some of the logical variables have not been assigned
values) by using a system of selective compression. The problem of finding
logical values (true or false) that satisfy a statement can become very
unwieldy in some cases. It is easy to find solutions to a logical
statement like A V (B ^ C). For instance A=T, B=F, and C=T will satisfy
the statement. But with more logical variables (or literals), finding
solutions to some logical satisfiability statements can become very
complicated and time consuming. If an analytical program could operate on
the compressed data (derived from a satisfiability statement) it might be
able to find solutions to those statements faster than can be done at this
time.
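To make the "unwieldy" point concrete, here is a minimal brute-force
satisfiability checker in Python. (This is just the naive method, not the
selective-compression approach I am describing; the point is that it tries
up to 2^n assignments for n variables.)

```python
from itertools import product

def satisfy(formula, variables):
    """Brute-force search: try every True/False assignment until one
    makes the formula evaluate to True. Worst case is 2^n tries."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None  # unsatisfiable

# A V (B ^ C): any assignment with A=T satisfies it (e.g. A=T, B=F, C=T)
solution = satisfy(lambda v: v["A"] or (v["B"] and v["C"]), ["A", "B", "C"])
print(solution)  # prints the first satisfying assignment found
```

With three variables this is instant; with a few hundred variables the
2^n search space is what makes the general problem so hard.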

Logic can be used to express many kinds of significant relations in
computer programs, so advances in finding solutions to these problems
could yield great increases in speed and memory efficiency (because
logical satisfiability statements act just like compressions).

Since I am talking about selective compression, any kind of advance here
would also mean a more general advance in the field: it would show that
you can use selective strategies in compression (by auto-generating ad
hoc sub-programs which could operate with other individuated ad hoc
sub-programs).
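As a toy illustration of what I mean by operating on compressed data
without decompressing it, here is a sketch using run-length encoding. (RLE
is just a stand-in here, far simpler than the logical representations I
have in mind, but it shows the principle: the operation works directly on
the compressed form.)

```python
def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def count_in_compressed(runs, target):
    """Count occurrences of target directly on the compressed form,
    without reconstructing the original string."""
    return sum(n for ch, n in runs if ch == target)

runs = rle_encode("aaabbbbba")  # [('a', 3), ('b', 5), ('a', 1)]
print(count_in_compressed(runs, "a"))  # 4
```

The goal would be an analogous ability for logical statements: sub-programs
generated from the data itself that answer questions about it in its
compressed form.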

I was not really talking about lossy compression, but I did touch on that
subject. An advance in satisfiability could be leveraged in lossy
compression. And as I was writing in this thread I started thinking about
the somewhat mysterious discrepancy between human recognition and recall.
I did not want to call it lossy compression exactly, but suppose an AGI
program were able to create components of recognition that relied on
ranges of values. By checking some input (the data from an ongoing event)
against a system of these components to see if there was a fit, that
ability might explain the discrepancies between recognition and recall.
The components of the system would not necessarily be called lossy
compression because it might not make sense to try to decompress them.
(So they would not serve as methods to recall some event.) You would only
use them to see if an input could be fitted into them. (The system would
also need some discrimination methods as well.) The system would need some
way to check all possible combinations that might satisfy a given input,
so such a system would have some of the earmarks of a satisfiability
problem. That is why I thought about this particular issue while writing
this thread.

One other thing about this. This has been heavily studied using fuzzy
logic, neural networks and graph theory with weighted values (like
probability networks). However, most of those efforts have not taken
discrete concepts into account, and they have not resolved the problem of
using different kinds of weighted values. (I consider this pre-Cauchy
thinking, by analogy from the contemporary miasma of weighted reasoning
today to the unexplained failures in using calculus before Cauchy.)
Discrete concepts are important in recognition problems (using lossy
methods, if you want to use that term) because they are needed to find
other related knowledge about the event that is occurring. However, this
then becomes a satisfiability problem. So even if I were a modern-day
Cauchy (I don't have the time to even pursue this fantasy) who could solve
contemporary dilemmas in weighted reasoning, it would not be useful
without some advances in satisfiability problems.
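A minimal sketch of what I mean by recognition components built from
ranges of values, which can be matched against but not "decompressed" back
into any recalled event. (The feature names and ranges here are invented
placeholders, just to show the mechanism.)

```python
# Each recognition "component" is a set of acceptable ranges per feature.
# Recognition = checking whether an input fits some component; there is
# no way to reconstruct (recall) the original events from the ranges.
components = {
    "dog": {"height": (0.2, 1.1), "weight": (2, 90)},
    "cat": {"height": (0.15, 0.35), "weight": (2, 10)},
}

def recognize(observation):
    """Return the names of all components the observation fits."""
    return [name for name, ranges in components.items()
            if all(lo <= observation[feat] <= hi
                   for feat, (lo, hi) in ranges.items())]

print(recognize({"height": 0.3, "weight": 5}))  # fits both components
```

Checking which combinations of components could jointly satisfy an input
is where the satisfiability flavor of the problem comes in.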

(I am planning on using Cauchy-appreciation in my logical satisfiability
analysis.)

I hope you can get past all my off-track tangential thinking and get what I
am saying as it relates to what you were saying.

Jim Bromer


On Wed, Jun 7, 2017 at 12:30 AM, Rob Freeman <[email protected]>
wrote:
>
> Jim,
>
> It is hard to understand what you are suggesting. But I get a hunch you
may be saying something like the idea I have been promoting for a long
time. This is that for cognition all or most compressions will at best be
partial (lossy?)
>
> I suppose you could keep a list of outlier data for each partial
compression, and adjust processing based on it.
>
> Is that what you are suggesting?
>
> My guess is that we won't gain that much by compressing at the end of the
day anyway. Processing distributed data in parallel will probably be as
efficient as processing compressed data in serial.
>
> So my guess is it will be better to just work with uncompressed data, in
parallel.
>
> But I like the insight that all compressions will be partial. If that is
what you are suggesting. If that is what you are suggesting I agree with
you.
>
> In short, I think the best way to address the partial/subjective
compression problem I believe you to be addressing, will be ad-hoc
compressions relevant to specific decisions.
>
> -Rob
>
> On Wed, Jun 7, 2017 at 3:16 AM, Jim Bromer <[email protected]> wrote:
>>
>> A program could create new rules at a higher abstraction level that
could operate on the rules that exist at a lower level. (This
characterization is relative but I think it is useful.) This might allow
for greater compressive individuation.
>>
>> So creating individuated sub programs based on the data that was being
compressed (or transformed from one compressed form to another) might be
useful in devising ways to effectively work with the data without fully
decompressing it.
>>
>> Logic is (typically) a compressed representation, and you can often
work with it in compressed form. But there are many common situations
where that data has to be excessively decompressed in order to work with
it. My conjectured goal is for a program to be able to create new rules
during an analytical run so that it could individuate the data (to
compress it) and still be able to operate on the different individuations
without fully decompressing them. Even if I can't get it to work, I
should be able to show that it can work on some special cases.
>>
>>
>> Jim Bromer
>>
>> On Tue, Jun 6, 2017 at 10:02 AM, Jim Bromer <[email protected]> wrote:
>>>
>>> I meant:
>>> This would allow for greater individuation which is necessary in order
to work with the data in compressed form because the data would be too
extensive for the program to handle it in decompressed form.
>>>
>>> Jim Bromer
>>>
>>> On Tue, Jun 6, 2017 at 10:00 AM, Jim Bromer <[email protected]> wrote:
>>>>
>>>> Most programs operate on a higher level of rules which operate on the
data that is encountered. This higher level program is a kind of general
abstraction. A program can create new rules at higher abstraction that
could operate on the rules at a lower level. (This characterization is
relative but I think it is useful. Programming languages do this but the
essential part of the program produced is created by a person) This would
allow for greater individuation which is necessary in order to work with
the data in decompressed form because the data would be too extensive for
the program to handle it in decompressed form. This individuation would
mean that the data could not be decompressed without the higher level
abstractions (or sub programs) that each individual run would produce.
These higher level sub-programs would be part of the data the program would
create but I am trying to emphasize the idea that the program would create
sub-programs based on the data that it was encountering and compressing.
(My idea of working with SAT is that it would transform statements in
traditional Boolean Logical form into another form which would also be
compressed.)
>>>> So creating individuated sub programs based on the data that was being
compressed (or transformed from one compressed form to another) might be
useful in devising ways to compress the data.
>>>> This kind of program would use generalizations as components both to
directly represent the data in compressed form and to create sub-programs
that could operate on the compressed data.
>>>>
>>>> Jim Bromer


