Re: [Fis] Information as form conveyed by data--Jamie Rose

2011-10-10 Thread Pedro C. Marijuan

(message from Jamie Rose)
--

The difference that makes a difference puts the situation into a 
larger context that includes the observing system / encountering 
system.  This is irrespective of the sentience or cybernetic 
(secondary/tertiary/etc.) awareness/interpretive/reapplication 
capability of the sensing system.

Gyorgy's question therefore is very important and needs clinical 
answering in _new ways_ (beyond Bateson, Shannon, Weaver, et al.).

Simplistic action/reaction responsive systems are also worthy 
of 'information analysis'. 

A key characteristic is: is a system so constructed and organized that 
it will recognize (let alone be responsive to) 
energy/temporal/situational variances in its environment?


We humans are awash in radio signals, but don't have the 
organs/organelles sensitive enough to acknowledge their presence 
around us.  Information is there -- but so what?


In contrast, atoms'/molecules' EM fields are exquisitely sensitive to 
field variances ('information changes') all around them.  We harness 
and use this in human electronic civilization -- but we design 
thresholds into the machinery we build.  This prevents unwanted 
information (otherwise labelled as 'noise') from corrupting the data 
and effects desired.  We require that data-bit states be protected, 
until modified ('informationed') by choice and with utility concerns 
involved (cybernetic tiered information coordination, as it were).
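
As a minimal sketch of that design principle (an illustration only; 
the signal levels and names are invented, not drawn from any 
particular circuit), a Schmitt-trigger-style hysteresis threshold 
shows how sub-threshold variance -- 'noise' -- is kept from flipping a 
protected bit state:

    # Illustrative sketch: hysteresis thresholding, a standard way
    # hardware 'designs thresholds' so noise cannot corrupt bit states.
    # Levels and names are invented for the example.

    def read_bits(samples, v_high=0.7, v_low=0.3):
        """Convert noisy analog samples to bits with hysteresis.

        A sample must rise above v_high to register a 1 and fall
        below v_low to register a 0; variance between the thresholds
        leaves the stored bit state unchanged.
        """
        bit = 0
        bits = []
        for v in samples:
            if v >= v_high:
                bit = 1
            elif v <= v_low:
                bit = 0
            # else: sub-threshold variance is ignored -- unwanted
            # information does not modify the protected bit state
            bits.append(bit)
        return bits

    # A noisy rising edge: small fluctuations do not flip the bit.
    print(read_bits([0.1, 0.25, 0.35, 0.28, 0.75, 0.65, 0.72, 0.2]))
    # -> [0, 0, 0, 0, 1, 1, 1, 0]

Only variances that cross the designed thresholds count as information 
for such a system; everything below them is, by construction, ignored.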


A major change in appreciation of 'information systems' is needed.  
What are they in regard to human information processing?  What is an 
information organelle in the natural world?  What is an information 
system/component - more fundamentally - in the natural world?


But, most importantly -- what is the theoretical essence of an 
information 'mechanism/function'?


If I may be so bold, I will place something into the ether here for 
your consideration.


I made a presentation at the 1998 University of Arizona Towards a 
Science of Consciousness conference.  While attending and listening to 
presentations during the week-long conference, I began to ponder the 
variety of 'information' definitions floated about.  Wondering about 
the possibility of general shared criteria and characteristics, I 
focused on Shannon, on Taylor's discussion of the fundamentals of the 
Calculus, and on the characteristics of physical 'tuned sensitivity' 
radio equipment.

Taylor, Leibniz and Newton were particularly aware of measurement 
limits in regard to mathematics.  The notion of measured partitioning 
- heading toward the infinitely small/short - required an important 
statement/disclaimer about the full domain of math and dimensional 
and spatial measuring/sizing.  It was stated as an a priori axiom 
that no matter what size partitions were under a curve or distance 
consideration, there would always be a distance measure 'e', 
smaller/shorter than any partition size at any moment of 
consideration.


This is a critically important assertion.  It is tantamount to saying 
that no matter how small a partition (otherwise definable, in current 
vernacular, as an 'information bit'), the mathematical apparatus would 
always be more sensitive than any partition-bit and would always be 
able to 'recognize' the information values - individually and 
cumulatively.  An intangible, always-sensitive mechanism capable of 
encountering any and all information variances (smaller than any 
wavelength; shorter than any limit, a la Planck).
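
In modern notation (a reconstruction of the axiom, not its original 
wording), the claim and its consequence for the integral read:

    % For any partition width \delta there is always a smaller measure e:
    \forall\, \delta > 0 \;\; \exists\, e > 0 \;:\; e < \delta .
    % Hence the apparatus -- the limit of the Riemann sum -- registers
    % contributions finer than any fixed partition-bit:
    \int_a^b f(x)\, dx \;=\; \lim_{\|P\| \to 0} \sum_{k=1}^{n} f(x_k^{*})\, \Delta x_k .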


They were designing an intangible/virtual 'information processing 
mechanism', even if they didn't express it in those terms: the 
Shannon-esque notion of 'defining information'.  What becomes humorous 
is the tautology this presents.  Shannon built his probability 
definition of 'information' using the Calculus -- which was already an 
'information processing function/mechanism' -- built on the 
Shannon-esque concept that there is such a thing as 'information' 
(bit/variance/probability).
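
For reference (an addition here, not part of the original argument), 
the definition in question, together with the continuous form that 
leans directly on the Calculus:

    % Shannon's entropy of a discrete source:
    H(X) \;=\; -\sum_{i} p_i \log_2 p_i
    % and its differential (continuous-variable) counterpart:
    h(X) \;=\; -\int p(x)\, \log p(x)\, dx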


This extraordinary recognition of what information can be, and of what 
mathematics and the Calculus already are as manipulators and 
processors of information data, opens new vistas for appreciating 
plural, simultaneous information processing.  Material/energy systems 
engage and process 'information' -- sometimes in regard to human 
concerns, but also on an ongoing, self-pertinent and self-functional 
basis according to the nature, extent and capability of their own 
construction.

It is not incorrect to examine and evaluate any system as an 
'information system', quite apart from whatever meaning, value or 
engagement (cybernetic translation or accommodation) potential it 
holds for other systems or alternate tiers/orders of systems in the 
companion environment(s).


Jamie Rose
- 
___
fis mailing list
fis@listas.unizar.es
https://webmail.unizar.es/cgi-bin/mailman/listinfo/fis


[Fis] Fw: The General Information Theory of Sunik

2011-10-10 Thread Krassimir Markov


-Original Message- 
From: boris.sunik
Sent: Tuesday, October 04, 2011 10:26 PM
To: 'Krassimir Markov'
Subject: RE: [Fis] Fw: The General Information Theory of Sunik




Dear Krassimir,
Below are my points regarding discussed issues.

Regards,
Boris Sunik

1. I never claimed that computer algorithms could provide 'all you know, and
all you need to know' about information. On the contrary, I consider this
statement wrong.

My idea is that the relevant way of representing and explaining
information consists of viewing the real world in the same conceptual
coordinates that are used for representing computer algorithms.

IMHO, this approach exactly matches the computing experience of the modern
world. Computer languages are not able to express any information except
rules for manipulating the bits and bytes of computer storage. BUT these
very limited abilities are nevertheless sufficient not only for controlling
very different machines but also for manipulating human beings.

Why is a computer that effective? Because computer languages adequately
model the real world. Among other things, this means that data designated
in computer languages correspond to outside real objects just as names
correspond to the objects they designate.
Hence follows the idea of creating a programming-language-like notation
whose words directly designate external objects.
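
A rough sketch of the idea (an illustrative pseudo-example only, not the
actual TMI notation), using the 'Peter's shirt size' example mentioned
elsewhere in this thread:

    # Illustrative only -- not TMI notation. Program data standing in
    # the same relation to real objects as names stand to the things
    # they designate.

    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str          # the name designates the real individual
        shirt_size: str    # the datum designates a real property

    # 'peter' designates the external object; 'peter.shirt_size'
    # designates one fact about it -- a meaning-bearing phrase
    # rendered as a manipulable datum.
    peter = Person(name="Peter", shirt_size="L")
    print(f"{peter.name}'s shirt size is {peter.shirt_size}")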

2. 'The declarative statements (4 statements in 2.4.1 Digital Computer versus
Brain: Are Neurons and bits really that different?) that are the proof of
the entire premise are unable to be proved, have no tests or evidence and
are taken as self-evident.'

In my opinion, no proofs for that are necessary. The solution is to build
the knowledge system based on this premise and see whether it will work in
practice. Neither C++ nor any other practically used programming language
ever got a formal proof of its functionality. The usability of a language
depends not on formal checks but on whether it can be used effectively in
practice. I mean that TMI can be used in practice, and I hope it will be.

3. Definition of meaning

In TMI, 'semantics' and 'meaning' are synonyms. The TMI understanding of
semantics is first considered at the end of the Problem Statement. Another
place is Section 2.6, where I deliberately chose the simplest systems,
because they are best at showing the approach's basics. The approach
itself can be applied to arbitrarily complex systems.

In a few words: the meaning of a linguistic item is the branch(es) of the
algorithm(s) associated with that item.
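
For illustration (a simplified sketch, not TMI's actual formalism): the
'meaning' of each word below is exactly the branch of the control
algorithm that the word selects. The vocabulary and handlers are invented
for the example.

    # Simplified sketch of 'meaning = associated algorithm branch'.
    # Vocabulary and handlers are invented, not taken from TMI.

    def open_valve():
        print("valve opened")

    def close_valve():
        print("valve closed")

    # The meaning of each linguistic item is the branch it selects.
    MEANING = {
        "open": open_valve,
        "close": close_valve,
    }

    def interpret(word):
        branch = MEANING.get(word)
        if branch is None:
            print(f"'{word}' has no meaning for this system")
        else:
            branch()  # executing the branch is the item's meaning here

    interpret("open")    # -> valve opened
    interpret("banana")  # -> 'banana' has no meaning for this system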




-Original Message-
From: Krassimir Markov [mailto:mar...@foibg.com]
Sent: Monday, October 03, 2011 22:32 PM
To: boris.sunik
Subject: Fw: [Fis] Fw: The General Information Theory of Sunik



-Original Message-
From: Gavin Ritz
Sent: Monday, October 03, 2011 11:22 PM
To: 'Steven Ericsson-Zenith' ; 'Joseph Brenner'
Cc: 'Foundations of Information Science'
Subject: Re: [Fis] Fw: The General Information Theory of Sunik

I agree with you both.

The declarative statements (4 statements in 2.4.1, 'Digital Computer versus
Brain: Are Neurons and bits really that different?') that are the proof of
the entire premise are unable to be proved, have no tests or evidence, and
are taken as self-evident.

This path is a dead end.

Regards
Gavin



The document seems extremely confused to me. This is not least because the
author does not appear to present a clear definition of the terms in the
title or the expression of the subject in the work. In particular, I can
find no definition of 'meaning' other than the one presented in a quote
from Shannon, and the subsequent use of the term is confused, to say the
least. Similarly, the term 'semantic' is not clearly defined and is abused.
The same goes for other terms, such as 'knowledge'.

So I take an even harsher view than Joseph, since it is not even a good
representative of the view that computer algorithms can provide 'all you
know, and all you need to know'. The definitive representative of that view
is Stephen Wolfram's book 'A New Kind of Science', and while I have my
problems with the theory in the book, it is - at least - well defined.

With respect,
Steven


On Oct 3, 2011, at 9:17 AM, Joseph Brenner wrote:

 Dear Krassimir,

 Thank you for bringing this document to our attention, for completeness.
 I would have wished, however, that you had made some comment on it,
 putting it into relation with your own work and, for example, that of
 Mark Burgin, which are dismissed out of hand.

 From my point of view, Sunik's work is another one of those major
 steps backwards to an earlier, easier time when it was claimed that
 computer algorithms could provide 'all you know, and all you need to
 know' about information. One example of a phrase the author presents
 as involving meaning is 'Peter's shirt size'. . .

 From a methodological standpoint, I think it underlines, /a contrario/,
 the danger of focus on a single approach to information. My current idea,
 which I propose for discussion, 


[Fis] MDA 2012, second circular

2011-10-10 Thread Krassimir Markov


Subj: MDA 2012, second circular

/*** APOLOGIES FOR MULTIPLE OR UNWANTED RECEIPT ***/

Dear Colleague,

We are glad to send you the second circular announcing the first
International Conference on
Mathematics of Distances and Applications, July 2-5, 2012, Varna (Bulgaria).

Both pure math papers and science papers (in the broad sense) are welcome.
A non-exhaustive list of topics is available on the conference website:

http://www.foibg.com/conf/ITA2012/2012mda.htm

Papers and Proceedings:

Papers may or may not be associated with a talk request.
No poster session is planned.
An anonymous peer-review process applies.
Accepted manuscripts (surveys, regular papers, extended abstracts) will be
published in an appropriate international journal or book.
Accepted abstracts will be published online.

More information about publications, deadlines, instructions for authors,
etc.:

http://www.foibg.com/conf/ITA2012/2012_fees.htm


If you are willing to contribute, please register on the conference website
and communicate to us a provisional title of your paper and/or talk.


The MDA 2012 Committee:

Tetsuo Asano (Japan)
Stefan Dodunekov (Bulgaria)
Vladimir Gurvich (USA)
Sandi Klavzar (Slovenia)
Jacobus Koolen (South Korea)
Svetlozar Rachev (USA)
Egon Schulte (USA)
Sergey Shpectorov (UK)
Kokichi Sugihara (Japan)
Koen Vanhoof (Belgium)
Cedric Villani (France) (Fields Medal 2010)

Michel Deza (France) (michel.d...@ens.fr)
Krassimir Markov (Bulgaria) (mar...@foibg.com): contact for local
organization questions
Michel Petitjean (France) (petitjean.chi...@gmail.com): contact for other
questions


Do not hesitate to contact us if you need more information.

Thanks for your attention.

Best regards,

Michel Petitjean
MTi, INSERM UMR-S 973, University Paris 7
35 rue Helene Brion, 75205 Paris Cedex 13, France.
Phone: +331 5727 8434; Fax: +331 5727 8372
E-mail: petitjean.chi...@gmail.com (preferred),
michel.petitj...@univ-paris-diderot.fr
http://petitjeanmichel.free.fr/itoweb.petitjean.html



Re: [Fis] Fw: The General Information Theory of Sunik

2011-10-10 Thread Steven Ericsson-Zenith

 2. Brain: Are Neurons and bits really that different? 

Yes, the difference is stunning. I suggest you read a few papers on the subject.

  Neither C++ nor any other practically used programming
 language ever got a formal proof of its functionality.

This is simply not the case. Again, I suggest you search for a few papers on 
the subject and read them. There are many.

With respect,
Steven


