Dear All again,


Terry has introduced an absolutely essential concept on which we need to
focus: that of a generative logic of informational relationships. I would
just like to point out that we are not starting from zero. Some of us, for
example Mark J. and I, have already recognized the need for a new logic in
which understanding dynamic relationships is central. In Logic in Reality,
for example, Terry's suggestion of the need to avoid "the tendency to use
language-like communication as the paradigm exemplar" is already met by a
focus on the non-linguistic, dynamic process properties of information.



If Terry could expand his concept of the contours of a 'generative logic',
it might be possible to show this even more clearly.



Thank you and best wishes,



Joseph



  _____

From: Fis [mailto:fis-boun...@listas.unizar.es] On Behalf Of Terrence W.
DEACON
Sent: Saturday, 13 January 2018 19:33
To: Alex Hankey
Cc: fis@listas.unizar.es; Emanuel Diamant; Sungchul Ji
Subject: Re: [Fis] I salute to Sungchul



Hi all,



I would be very encouraged if we could develop beyond mere lists of
different uses of the term 'information' TO structured taxonomies of
distinct types of information TO a generative logic of how these distinct
modes of a complex information relationship are interrelated.



Dualistically distinguishing intrinsic properties of an informing medium
from relational properties that determine its reference provides an
important first step in growing the concept to encompass its full usefulness.
But I hope that we will also eventually begin to attend to the functional
value that the conveyed reference provides, since this too is often
implicitly part of the various uses of the term 'information' in colloquial
and even scientific use. This requires more careful parsing of the term
"meaning" that is so often invoked.



For instance, one can receive information that is unambiguously "about"
something, but where what it is about is already known and is therefore
"functionally redundant" (not to be confused with signal redundancy). Or the
information can be about something that is irrelevant to a given function or
end, while still being information about something.



An example would be telling me the time when I already know what time it is.
The statement about the time does indeed "mean" something; that is, it is not
meaningless, as gibberish would be. Similarly, if I ask to know the current
temperature and I am instead told the time, the reference provided would be
useless to me; that is, it wouldn't "make a difference" in the colloquial
English sense of that phrase. The concept of "meaning" tends to collapse or
conflate these two distinctions, reference and significance, which I think we
should endeavor to distinguish.



In this respect I like the suggestion by Alex Hankey that we consider an
example like the barely conscious "feeling" of being watched, which both
conveys information about an extrinsic state of affairs and additionally has
a functional relevance implicit in the discomfort it typically elicits. Both
the aboutness and the significance are relational, not intrinsic, properties
of information. They are distinct relations because they are asymmetrically
dependent on one another: if I am entirely unaware of being watched, I am not
discomforted by it.



Note also the difference in these relational attributes: aboutness or
reference is "in relation to" some state of affairs, whereas significance or
value is "in relation to" some telos intrinsic to an interpreting agent or
system.



Exploring such nondiscursive examples can help us to escape the tendency to
use language-like communication as the paradigm exemplar. The analysis of
the information intrinsic to and conveyed by music might in this respect
provide a useful platform for future discussion.



Are there other critical distinctions that we additionally need to
highlight?



Happy New Year, Terry



On Fri, Jan 12, 2018 at 9:24 PM, Alex Hankey <alexhan...@gmail.com> wrote:

And what about the Kinds of Information that you cannot put in a data set? 

The information that makes you turn your head and meet the gaze of someone
staring at you.

No one could do that (something we humans and all animals do constantly)

unless we had received such information at a subliminal level in the brain.

We all have that capacity; it is vital for survival in the wild. All animals
do it.

The 'Sense of Being Stared At' is a common experience for most animals;

how far down the tree of life it extends, no one yet knows.



Whatever triggers it is definitely 'A Difference that Makes a Difference',

so it fits your definition of 'Meaningful Information'; it has to!

BUT IT CANNOT BE DIGITAL INFORMATION.

Please Face Up to This Fact.



All best wishes,



Alex





On 13 January 2018 at 07:30, Sungchul Ji <s...@pharmacy.rutgers.edu> wrote:

Hi Emanuel and FISers,



Thank you, Emanuel, for your generous remarks.  It is heartening to know
that our ideas converge, although we carried out our research independently
of each other, a clear example of consilience.



(1)  I like and agree with the Kolmogorov quote you cited in [1]:



"Information is a linguistic description of structures in a given data set."




It seems to me that there are 4 key concepts embedded in the above quote,
which we may view as the definition of what may be called the "Kolmogorov
information" or the "Kolmogorov-Bateson information", for convenience of
reference:


i)   data set (e.g., ACAGTCAACGGTCCAA)

ii)  linguistic description (e.g., Threonine, Valine, Asparagine, Glycine)

iii) structure (e.g., 16 mononucleotides, 8 dinucleotides, 5 trinucleotides
plus 1)

iv) mathematical description (e.g., tensor product of two 2x2 matrices of 4
nucleotides) [2, 3].



The first three elements are obvious; the fourth is less so, but it is
justified in view of the recent work of Petoukhov [2, 3].
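
To make this concrete, here is a minimal Python sketch (my own illustration,
not taken from [1-3]) that parses the example data set of point i) into
non-overlapping n-plets and recovers the structural counts of point iii):
16 mononucleotides, 8 dinucleotides, and 5 trinucleotides plus 1 left-over
nucleotide.

data = "ACAGTCAACGGTCCAA"   # the example data set from point i)

def n_plets(seq, n):
    """Split seq into consecutive, non-overlapping n-plets; the last chunk
    may be shorter than n (the 'plus 1' remainder mentioned above)."""
    return [seq[i:i + n] for i in range(0, len(seq), n)]

for n in (1, 2, 3):
    chunks = n_plets(data, n)
    print(n, len(chunks), chunks)
# n = 1 -> 16 mononucleotides
# n = 2 -> 8 dinucleotides
# n = 3 -> 6 chunks: 5 trinucleotides plus 1 left-over mononucleotide ('A')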



(2) Based on these ideas, I have constructed Table 1 below, which lists the
various names applied to the two kinds of information that I described as
I(-) and I(+) in my previous post.








Table 1.  The arbitrariness of the signs referring to 'information'. It
doesn't matter what you call it, as long as your chosen label refers to the
right reality, thing, process, mechanisms, etc.


Row   Type I Information                  Type II Information
---   ----------------------------------  -------------------------------------------
 1    Type I information                  Type II information
 2    Physical information                Semantic information
 3    Shannon information                 Kolmogorov information, or
                                          Kolmogorov-Bateson information
 4    'Meaningless' information           'Meaningful' information
 5    I(-) information, or simply I(-)    I(+) information, or simply I(+)
 6    Quantitative information            Qualitative information
 7    Mathematical information            Linguistic information (see Statement (1))
 8    Formal information                  Phenomenological information
 9    Interpretant-less sign [4]          Triadic sign [4]


(3)  One practical application of the dual theory of information under
discussion is in deducing the structure of cell language, or the structure
of the linguistics of DNA, in a much more rigorous manner than was possible
in 1997 [5].

   It is common practice in biology to use the terms "letters", "words",
"sentences", and "texts" without any rigorous definitions.  The general rule
is to follow the rules of concatenation used in linguistics literally and to
say that



i) Just as the 26 letters of the English alphabet are combined to form words
(the process being called the second articulation [5]), so the 4 letters of
the genetic alphabet, A, C, G and T/U, combine in triplets to form genetic
codons.  Similarly, just as words form sentences and sentences form texts by
the same concatenation procedure (or, mathematically speaking, by tensor
multiplication, i.e., by linearly arranging words and sentences,
respectively; see the second column in Table 2), so the 64 nucleotide
triplets combine to form proteins and proteins combine to form metabolic
pathways by continuing the concatenation process, or the tensor
multiplication of matrices of larger and larger sizes (see the fourth column,
which is based on the physical theory of information, i.e., without any
involvement of semantics or the first articulation).  A minimal computational
sketch of this tensor construction is given after point iii) below.

ii)  In contrast to the fourth column just described, we can justify an
alternative structural assignment based on the semantic theory of
information, as shown in the fifth column of Table 2.  Here the letters of
the cell-language alphabet are not always mononucleotides but are thought to
be n-nucleotides, such as dinucleotides (when n = 2), trinucleotides (when n
= 3), tetranucleotides (when n = 4), pentanucleotides (when n = 5), etc.
That is, unlike in human language, where the letters of an alphabet usually
consist of one symbol, e.g., A, B, C, D, E, . . . , I am claiming that in
cell language the letters can be mononucleotides (i.e., A, G, C, T/U),
dinucleotides (i.e., AG, AC, . . . ), trinucleotides (i.e., ACT, GTA, . . . ),
tetranucleotides (i.e., ACTG, CCGT, . . . ), pentanucleotides (i.e., ACCTG,
TCGAT, . . . ), and so on up to n-nucleotides (also called n-plets [2, 3]),
where the upper limit of n is not yet known (at least to me).  If this
conjecture turns out to be true, then the size of the cell-language alphabet
could be much larger (10^3 - 10^9 ?) than the size of a typical human
linguistic alphabet, which is usually less than 10^2, probably due to the
limitation of the memory capacity of the human brain.

iii) From linguistics, we learn that there are at least 4 levels of
organization, each level characterized by a unique function (see the third
column).  Without presenting any detailed argument, I just wish to suggest
that the linguistic structures deduced from the semantic information theory
(i.e., the fifth column) agree with the human linguistic structures (i.e.,
the second column) better than do the linguistic structures based on the
physical/mathematical/quantitative information theory (i.e., the fourth
column), when the functional hierarchy given in the third column is taken
into account.
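
As noted under point i), here is a minimal Python sketch (my own
illustration, not code from Petoukhov [2, 3]) of the tensor construction:
repeated Kronecker products of the 2x2 genetic matrix M, with entry-wise
multiplication replaced by nucleotide concatenation, enumerate the 16
dinucleotides, 64 trinucleotides and 256 tetranucleotides of the fourth
column, and the same counting gives the conjectured n-plet alphabet sizes of
point ii).

M = [["C", "A"],
     ["T", "G"]]          # the 2x2 genetic matrix M = [C A; T G]

def kron(X, Y):
    """Kronecker product for matrices of strings: the product of two entries
    is their concatenation, mirroring the concatenation of nucleotides."""
    return [[x + y for x in row_x for y in row_y]
            for row_x in X for row_y in Y]

Mn = M
for n in range(2, 5):
    Mn = kron(Mn, M)                       # M(x)M, then M(x)M(x)M, ...
    print(n, sum(len(row) for row in Mn))  # 2 16, 3 64, 4 256

# The alphabet size conjectured in point ii), with letters of length 1
# through n, is the sum of 4^k for k = 1..n: 1364 for n = 5, and it first
# exceeds 10^9 at n = 15.
print(sum(4**k for k in range(1, 6)))      # 1364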






Table 2.  Two versions of the linguistics of DNA based on (i) the physical
information theory, and (ii) the semantic information theory [1].  M stands
for a 2x2 matrix whose elements are the 4 genetic nucleotides, A, C, G and
T/U, i.e., M = [C A; T G] (see Figure 16 in [2]).  The symbol (x) indicates
tensor multiplication [2, 3].  The I to II transition is known in
linguistics as the second articulation; the II to III transition as the
first articulation [4]; the III to IV transition was referred to as the
third articulation [5].  (The column numbers referred to in the text are:
1 = organization level, 2 = human-language structure, 3 = function/semantics,
4 = cell-language structure based on the Physical Information Theory (PIT)
[1], 5 = cell-language structure based on the Semantic Information Theory
(SIT) [1].)


Level I (Letters)
  Function/Semantics:  basic building blocks or basic physical signals
  Structure (PIT):     4 nucleotides (A, C, G, T/U); M = [C A; T G]
  Structure (SIT):     mono-, di-, trinucleotides, 4-plets, 5-plets, . . . ,
                       n-plets of nucleotides

Level II (Words)
  Function/Semantics:  to denote
  Structure (PIT):     16 dinucleotides; M(x)M or M^2
  Structure (SIT):     any combinations of the n-plets/genes/proteins

Level III (Sentences)
  Function/Semantics:  to decide
  Structure (PIT):     64 trinucleotides/amino acids; M(x)M(x)M or M^3
  Structure (SIT):     assemblies of genes/proteins, or metabolic pathways (MP)

Level IV (Texts)
  Function/Semantics:  to argue/compute/reason (e.g., syllogism)
  Structure (PIT):     256 tetranucleotides; metabolic pathways (?);
                       M(x)M(x)M(x)M or M^4
  Structure (SIT):     networks of MPs



In other words, the structure of cell language deduced from the semantic
information theory agrees better, functionally, with that of human language
than does the structure deduced from the physical information theory, thus
further supporting the 1997 postulate that cell and human languages are
isomorphic [5, 6].



If you have any questions or suggestions for improvements on the above
tables, I would appreciate hearing from you.



All the best.



Sung



References:

   [1] Emanuel Diamant, The brain is processing information, not data. Does
anybody care? ISIS Summit Vienna 2015, Extended Abstract.
http://sciforum.net/conference/isis-summit-vienna-2015/paper/2842

   [2] Petoukhov, S. (2017). Genetic coding and united-hypercomplex systems
in the models of algebraic biology. BioSystems 158: 31-46.

   [3] Petoukhov, S. (2016). The system-resonance approach in modeling
genetic structures. BioSystems 139: 1-11.

   [4] Ji, S. (2017). Neo-Semiotics: Introducing Zeroness into Peircean
Semiotics May Bridge the Knowable and the Unknowable. Prog. Biophys. Mol.
Biol. 131: 387-401. PDF at
http://www.sciencedirect.com/science/article/pii/S0079610717300858?via%3Dihub

   [5] Ji, S. (1997). Isomorphism between cell and human languages: molecular
biological, bioinformatic and linguistic implications. BioSystems 44: 17-39.
PDF at http://www.conformon.net/wp-content/uploads/2012/05/Isomorphism1.pdf

   [6] Ji, S. (2017). The Cell Language Theory: Connecting Mind and Matter.
World Scientific, New Jersey. Chapter 5.














  _____


From: Fis <fis-boun...@listas.unizar.es> on behalf of Emanuel Diamant
<emanl....@gmail.com>
Sent: Friday, January 12, 2018 11:20 AM
To: fis@listas.unizar.es
Subject: [Fis] I salute to Sungchul



Dear FISers,



I would like to express my pleasure with the current state of our discourse:
an evident attempt to reach a more common understanding of information issues
and to enrich the preliminary assessments already given.

In this regard, I would like to add my comment to Sungchul's post of January
12, 2018.



Sungchul proposes "to recognize two distinct types of information which, for
the lack of better terms, may be referred to as the "meaningless
information" or I(-)  and "meaningful information" or I(+)".

That is exactly what I have been trying to put forward for years, albeit
under more historically rooted names: Physical and Semantic information [1].
In any case, what is crucially important here is that the duality of
information becomes publicly recognized and accepted by the FIS community.



I salute Sungchul's suggestion!



Best regards, Emanuel.



[1] Emanuel Diamant, The brain is processing information, not data. Does
anybody care? ISIS Summit Vienna 2015, Extended Abstract.
http://sciforum.net/conference/isis-summit-vienna-2015/paper/2842
















--

Alex Hankey M.A. (Cantab.) PhD (M.I.T.)
Distinguished Professor of Yoga and Physical Science,
SVYASA, Eknath Bhavan, 19 Gavipuram Circle
Bangalore 560019, Karnataka, India
Mobile (Intn'l): +44 7710 534195

Mobile (India): +91 900 800 8789

____________________________________________________________



2015 JPBMB Special Issue on Integral Biomathics: Life Sciences, Mathematics
and Phenomenological Philosophy:
http://www.sciencedirect.com/science/journal/00796107/119/3









--

Professor Terrence W. Deacon
University of California, Berkeley



_______________________________________________
Fis mailing list
Fis@listas.unizar.es
http://listas.unizar.es/cgi-bin/mailman/listinfo/fis
