[Fis] I salute to Sungchul

2018-01-12 Thread Emanuel Diamant
Dear FISers, 

 

I would like to express my pleasure at the current state of our discourse
- an evident attempt to reach a more common understanding of information
issues and to enrich previously given assessments. 

In this regard, I would like to add my comment to Sungchul's post of January
12, 2018. 

 

Sungchul proposes "to recognize two distinct types of information which, for
the lack of better terms, may be referred to as the "meaningless
information" or I(-)  and "meaningful information" or I(+)". 

That is exactly what I have been trying to put forward for years, albeit
under more historically rooted names: Physical and Semantic information [1].
Never mind the names; what is crucially important here is that the duality of
information becomes publicly recognized and accepted by the FIS community.

 

I salute Sungchul's suggestion!

 

Best regards, Emanuel.

 

[1] Emanuel Diamant, The brain is processing information, not data. Does
anybody care?, ISIS Summit Vienna 2015, Extended Abstract.
http://sciforum.net/conference/isis-summit-vienna-2015/paper/2842 

 

 

 

___
Fis mailing list
Fis@listas.unizar.es
http://listas.unizar.es/cgi-bin/mailman/listinfo/fis


Re: [Fis] I salute to Sungchul

2018-01-12 Thread Loet Leydesdorff

Dear Emanuel,

The two types are sometimes also called "Shannon-type" versus 
"Bateson-type" information.


In Chinese, there are two words for "information".

Both words contain two characters, as depicted in Figure 13.1. The first, 
‘sjin sji’, corresponds to the mathematical definition of information as 
uncertainty.[1] The second, ‘tsjin bao,’ means information but also 
intelligence.[2] In other words, it means information which informs us, 
and which is thus considered meaningful.



[1] ‘sjin’ means letter of reliability, and ‘sji’ means message.


[2] ‘tsjin’ means situation or status, and ‘bao’ means report.



Wu Yishan (personal communication)

From: Leydesdorff, L. (1995). The Challenge of Scientometrics: The 
development, measurement, and self-organization of scientific 
communications. Leiden: DSWO Press, Leiden University; at 
http://www.universal-publishers.com/book.php?method=ISBN=1581126816, 
p. 295.


Best,
Loet


Loet Leydesdorff

Professor emeritus, University of Amsterdam
Amsterdam School of Communication Research (ASCoR)

l...@leydesdorff.net ; 
http://www.leydesdorff.net/
Associate Faculty, SPRU, University of 
Sussex;


Guest Professor, Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing;


Visiting Fellow, Birkbeck, University of London;

http://scholar.google.com/citations?user=ych9gNYJ=en




Re: [Fis] I salute to Sungchul

2018-01-12 Thread Sungchul Ji
Hi Emanuel and FISers,


Thank you, Emanuel, for your generous remarks.  It is heartening to know that 
our ideas converge, although we carried out our research independently of each 
other, a clear example of consilience.


(1)  I like and agree with the Kolmogorov quote you cited in [1]:


"Information is a linguistic description of structures in a given data set."

It seems to me that there are 4 key concepts embedded in the above quote, which 
we may view as the definition of what may be called the "Kolmogorov information" 
or the "Kolmogorov-Bateson information" for convenience of reference:

i)   data set (e.g., ACAGTCAACGGTCCAA)
ii)  linguistic description (e.g., Threonine, Valine, Asparagine, Glycine)
iii) structure (e.g., 16 mononucleotides, 8 dinucleotides, 5 trinucleotides plus 
1)
iv) mathematical description (e.g., tensor product of two 2x2 matrices of 4 
nucleotides) [2, 3].

The first three elements are obvious; the 4th is less obvious, but it is 
justified in view of the recent work of Petoukhov [2, 3].
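For concreteness, points (i)-(iii) can be sketched in a few lines of Python (my own illustration, not part of the original post; the codon-to-amino-acid entries are the relevant fragment of the standard genetic code, with Proline added for the fifth codon, CCA):

```python
# Illustration of points (i)-(iii): split the example data set into
# n-plets and give a "linguistic description" of its codons.
DATA = "ACAGTCAACGGTCCAA"  # (i) the data set

def nplets(seq, n):
    """Split a sequence into consecutive, non-overlapping n-nucleotide blocks."""
    return [seq[i:i + n] for i in range(0, len(seq) - n + 1, n)]

# (iii) structure: 16 mononucleotides, 8 dinucleotides, 5 trinucleotides plus 1
print(len(nplets(DATA, 1)), len(nplets(DATA, 2)), len(nplets(DATA, 3)))  # 16 8 5

# (ii) linguistic description: translate the codons using the relevant
# fragment of the standard genetic code
CODON_TABLE = {"ACA": "Threonine", "GTC": "Valine",
               "AAC": "Asparagine", "GGT": "Glycine", "CCA": "Proline"}
print([CODON_TABLE[c] for c in nplets(DATA, 3)])
```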

(2) Based on these ideas, I have constructed Table 1 below of the various names 
applied to the two kinds of information which I described as I(-) and I(+) in 
my previous post.




Table 1.  The arbitrariness of the signs referring to ‘information’. It doesn’t 
matter what you call it, as long as your chosen label refers to the right 
reality, thing, process, mechanism, etc.

1  Type I Information                  Type II information
2  Physical information                Semantic information
3  Shannon information                 Kolmogorov information, or
                                       Kolmogorov-Bateson information
4  ‘Meaningless’ information           ‘Meaningful’ information
5  I(-) information, or simply I(-)    I(+) information, or simply I(+)
6  Quantitative information            Qualitative information
7  Mathematical information            Linguistic information
                                       (see Statement (1))
8  Formal information                  Phenomenological information
9  Interpretant-less sign [4]          Triadic sign [4]


(3)  One practical application of the dual theory of information under 
discussion is in deducing the structure of cell language, or the structure of 
the linguistics of DNA, in a much more rigorous manner than was possible in 
1997 [5].
   It is common practice in biology to use the terms "letters", "words", 
"sentences", and "texts" without any rigorous definitions.  The general rule is 
to follow the rules of concatenation used in linguistics literally and say that

i) just as the 26 letters of the English alphabet are combined to form words 
(the process being called the second articulation [5]), so the 4 letters of the 
genetic alphabet, A, C, G and T/U, combine in triplets to form genetic codons.  
Similarly, just as words form sentences and sentences form texts by the same 
concatenation procedure (or, mathematically speaking, tensor multiplication, 
i.e., linearly arranging words and sentences, respectively; see the second 
column in Table 2), so the 64 nucleotide triplets combine to form proteins, and 
proteins combine to form metabolic pathways, by continuing the concatenation 
process, or the tensor multiplication of matrices of larger and larger sizes 
(see the fourth column, which is based on the physical theory of information, 
i.e., without any involvement of semantics or the first articulation).
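The growth of the matrices under concatenation can be sketched as follows (my own illustration, assuming a Petoukhov-style 2x2 arrangement of the four nucleotides; string concatenation stands in for numeric multiplication, so each matrix entry names the n-plet it represents):

```python
# Kronecker (tensor) product of two symbolic matrices: entries are
# concatenated rather than multiplied, so each product entry is an n-plet.
def kron(A, B):
    return [[a + b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

N = [["C", "A"],          # 2x2 matrix of the 4 nucleotides
     ["T", "G"]]

doublets = kron(N, N)          # 4x4 matrix: the 16 dinucleotides
triplets = kron(doublets, N)   # 8x8 matrix: the 64 codons
print(len(triplets) * len(triplets[0]))  # 64
print(triplets[0][:4])                   # first few codons of the top row
```

Each tensor multiplication by N multiplies the number of entries by 4, mirroring how concatenating one more nucleotide multiplies the number of possible n-plets by 4.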

ii)   In contrast to the fourth column just described, we can justify an 
alternative structural assignment based on the semantic theory of information, 
as shown in the fifth column of Table 2.  Here the letters of the cell-language 
alphabet are not always mononucleotides but are thought to be n-nucleotides, 
such as dinucleotides (when n = 2), trinucleotides (when n = 3), 
tetranucleotides (when n = 4), pentanucleotides (when n = 5), etc.  That is, 
unlike in human language, where the letters of an alphabet usually consist of 
one symbol, e.g., A, B, C, D, E, . . . , I am claiming that in cell language 
the letters can be mononucleotides (i.e., A, G, C, T/U), dinucleotides (e.g., 
AG, AC, . . . ), trinucleotides (e.g., ACT, GTA, . . . ), tetranucleotides 
(e.g., ACTG, CCGT, . . . ), pentanucleotides (e.g., ACCTG, TCGAT, . . . ), and 
so on up to n-nucleotides (also called n-plets [2, 3]), where n is a number 
whose upper limit is not yet known (at least to me).  If this conjecture turns 
out to be true, then the size of the cell-language alphabet can be much larger 
(10^3 - 10^9 ?) than the size of a typical human linguistic alphabet, which is 
usually less than 10^2, probably due to the limitation of the memory capacity 
of the human brain.
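The conjectured alphabet sizes can be checked directly: an n-nucleotide alphabet has 4^n distinct letters, so the quoted 10^3 - 10^9 range corresponds roughly to n between 5 and 15 (a quick arithmetic check, my own addition):

```python
# Size of an n-nucleotide (n-plet) alphabet: 4**n distinct letters.
for n in (1, 2, 3, 5, 10, 15):
    print(n, 4 ** n)
# n = 5  ->  1024        (~10^3)
# n = 15 ->  1073741824  (~10^9)
```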

(iii) From linguistics, we learn that there are at least 4 levels of 
organization, each level characterized by a unique function (see the second 
column).  Without presenting any detailed argument, I just wish to suggest that 
the linguistic structures deduced from the semantic information theory 
(i.e., the fifth column) agree with the human linguistic structures (i.e., the 
second column) better than do the linguistic structures 

Re: [Fis] I salute to Sungchul

2018-01-12 Thread Alex Hankey
And what about the kinds of information that you cannot put in a data set?
The information that makes you turn your head and meet the gaze of someone
staring at you.
No one could do that (something we humans and all animals do constantly)
unless we had received such information at a subliminal level in the brain.
We all have that capacity; it is vital for survival in the wild. All
animals do it.
The 'Sense of Being Stared At' is a common experience for most animals;
how far down the tree of life it extends, no one yet knows.

Whatever triggers it is definitely 'A Difference that Makes a Difference',
so it fits your definition of 'Meaningful Information' - it has to!
BUT IT CANNOT BE DIGITAL INFORMATION.
Please Face Up to This Fact.

All best wishes,

Alex

