Let me translate the main theses of Igor's and Loet's postings into concepts of
information management -- first thanking Joe for his excellent opening statements.

>Cognitive aspect of complexity in social
> systems can have at least three distinct dimensions. One deals with the
> virtual impossibility for humans to gather all the available information
> and compute the optimal decision among the possible alternatives. In
> literature this is usually called the problem of bounded rationality.

The scope of our sensory organs is limited. So we infer from our own case that
any system we conceptualise will have limits beyond which it cannot reach.
There is a non-recognisable outside to any system of knowledge.

> Another dimension of complexity comes from the fact that human decision
> making is not only bounded by technical constraints related to information
> gathering and processing but is also significantly constrained by a bias
> that comes from the set of basic values and beliefs about the world and a
> society that a decision maker holds in his mind. In that sense certain
> solutions to a problem, which are technically accessible and rational,
> perhaps even optimal  for an external observer, are discarded or
> unrecognized as such because they clash with certain socially shared
> beliefs and values (a worldview).

There is a background to what we recognise; the property of the background is
precisely that we do not see it. Wittgenstein drew an eye and remarked that the
eye cannot see itself.
The preconceptions we use restrict what we can recognise.

> The third dimension might refer to self-referentiality of human systems:
> we are inclined to conform our behavior to the predictions of our models of
> the world (e.g. self-fulfilling prophecies). According to Felix Geyer,
> self-referentiality in human systems (also called second-order
> cybernetics) implies that a social system collects information about its
> functioning which in turn may alter this very functioning. The outcome of
> such a process is, however, unpredictable and may be recognized as a
> semiotic problem: what signs, among many, are captured as information, and
> what is its "societal" interpretation?

The only minor point to raise is a suggested insertion of "practically" before
"... unpredictable and may be recognised ...".
In theory, every state of a closed logical system can be predicted. In reality,
we work overtime to understand the extreme unpredictability of, e.g., the
weather and genetic variations/mutations. If we believed that the behaviour of
complex systems could not be brought into a pattern of linear predictions (if
now this, then that), we could not keep on doing science.
That human behaviour is in part more complex than a weather system is only a
difference of degree, and in many cases a weather system is less predictable
than a human.
In actual social interaction, the micropantomime, closely linked to the pattern
of breathing, rules the embedding background against which a signal will be
interpreted.

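To make Geyer's feedback loop concrete, here is a toy sketch of my own (not
his model): a system publishes a statistic about itself, and its members
partially conform to that published figure, thereby altering the very quantity
being measured -- a minimal self-fulfilling prophecy. The conformity parameter
and starting values are arbitrary assumptions for illustration.

```python
# Toy illustration (my own, not Geyer's): a self-referential system.
# The system measures its mean behaviour, publishes it, and each agent
# drifts partway toward the published figure.

def step(state, conformity=0.3):
    """One feedback cycle: collect information about the system (its mean),
    then let that information alter each agent's behaviour."""
    prediction = sum(state) / len(state)            # the published statistic
    return [x + conformity * (prediction - x) for x in state]

state = [0.1, 0.9, 0.4, 0.7]                        # arbitrary starting behaviours
for _ in range(20):
    state = step(state)

# The prophecy fulfils itself: all behaviours converge on the announced mean.
print(state)
```

In this linear caricature the outcome is perfectly predictable; the point of
the quoted passage is exactly that real social sign-selection and
interpretation are not this well behaved.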
> Obviously, the specific cognitive dimensions of social systems, namely
> bounded rationality, perceptual bias that arise from a worldview, and
> self-referentiality add to the complexity of societies which may be really
> different in kind (I refer  here to Stan's remark that social complexity
> and ecological complexity look like different applications, not kinds).
> There appears simply to be more degrees of freedom in a social system which
> are also qualitatively different from ecological.

If one transposes the behaviour of complex systems into stability, uniformity
and diversity properties of a set filled with symbols and then left to its own
devices, the resulting decision paths will depict it quite faithfully. The
model is equally suitable for social, ecological, meteorological, biochemical
and economic applications. From the interregulation aspect, these are all
homeostatic or quasi-homeostatic systems, where the linear distance between two
states (in a linear enumeration) allows predictions about the composition of
symbols (then/there) based on the composition of symbols (now/here). The
statistics of large assemblies shows that they utilise a quite slight imbalance
between the counting systems. The imbalance does not show when we regard each
and every object in its turn; it shows only if we assign symbols to more than
32 objects. As I have argued so often on this list, it appears that large
assemblies of objects that carry symbols obey their own laws, which are based
on the existence of the inexactitude between the counting systems.
So, in this view, human systems do not differ qualitatively from any other kind
of complex system which is homeostatic.
Focusing on perceived differences will keep motivating us to recognise
similarities and rules that operate on complex systems. A social system is a
self-regulating, cybernetic entity. It lives. So it can be described in
rational terms.
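The domain-independence claimed above can be shown with a hedged sketch of a
generic homeostatic loop (my own illustration, not a model from this thread):
the same negative-feedback rule reads as a thermostat, a population regulator
or a price adjustment, and nearby states predict later states because the
deviation from the set point shrinks step by step. Set point, gain and the
starting value are arbitrary assumptions.

```python
# Generic quasi-homeostatic regulation: repeatedly correct part of the
# deviation from a set point. Only the labels change across domains.

def regulate(value, setpoint, gain=0.5, steps=30):
    """Return the trajectory of a negative-feedback loop. The deviation
    after n steps is (1 - gain)**n of the initial deviation, which is why
    the state now/here lets us predict the state then/there."""
    trajectory = [value]
    for _ in range(steps):
        value += gain * (setpoint - value)   # negative feedback
        trajectory.append(value)
    return trajectory

traj = regulate(value=10.0, setpoint=4.0)
print(traj[:4])          # the deviation halves on every step
print(abs(traj[-1] - 4.0) < 1e-6)
```

Whether the state variable is a temperature, a population density or a socially
shared expectation makes no difference to the mechanism -- which is the sense in
which, on this view, the difference between such systems is one of degree.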

Thanks to everybody for the excellent messages.


fis mailing list
