Dear Alberto,

Thank you for this topic – it cuts to the heart of why we think the
study of information really matters and, most importantly, brings to
the fore the thorny issue of technology.

It has become commonplace to say that our digital computers have
changed the world profoundly. Yet at a deep level this change has left
us confused and disorientated, and we struggle to articulate exactly
how the world has been transformed. Norbert Wiener once remarked in
the wake of cybernetics, “We have changed the world. Now we have to
change ourselves to survive in it”. Things haven’t got any easier in
the intervening decades; quite the reverse.

The principal manifestation of the effects of technology is confusion
and ambiguity. In this context, it seems that the human challenge on
which the topic of information has the greatest bearing is not
“information” per se, but decision. That, in large part, depends on
hypothesis and on the judgement of the human intellect.

The reaction to confusion and ambiguity is that some people and most
institutions acquire misplaced confidence in making decisions about
“the way forward”, usually invoking some new tool or device as a
solution to the problem of dealing with ambiguity (right now, it’s
blockchain and big data). We - and particularly our institutions -
remain allergic to uncertainty. To what extent is “data-ism” a
reaction to the confusion produced by technology? Von Foerster sounded
the alarm in the 1970s:

“we have, hopefully only temporarily, relinquished our responsibility
to ask for a technology that will solve existent problems. Instead we
have allowed existent technology to create problems it can solve.”
(von Foerster, H. (1981) Observing Systems)

With every technical advance, there is an institutional reaction. The
Catholic church reacted to printing; universities reacted to the
microscope and other empirical apparatus; political institutions
reacted to the steam engine; and so on. Today it is the institution of
science itself which reacts to the uncertainty it finds itself in. In
each case, technology introduces new options for doing things, and the
increased uncertainty of choosing among a greater number of options
means that an attenuative process must ensue as the institution seeks
to preserve its identity. Technology in modern universities is a
particularly powerful example: what a stupid use of technology to
reproduce the ancient practices of the “classroom” online! How
ridiculous, in an age of self-publishing, that academic journals use
technology to maintain the “scarcity” (and cost) of their publications
through paywalls! And what is it about machine learning and big data?
(I'm struggling with this in a project at the moment - the machine
learning thing is not all it's cracked up to be!)

Judgement and decision are at the heart of this. Technologies do not
make people redundant: it is the decisions of the leaders of companies
and institutions that do that. Technology does not poison the planet;
again, that process results from ineffective global political
decisions. Technology also forms part of the context for
decision-making, and as Cohen, March and Olsen pointed out in 1972,
the process of decision-making about technology is anything but
rational (see “A Garbage Can Model of Organizational Choice”,
https://www.jstor.org/stable/2392088). Today we see “blockchain” and
“big data” in Cohen, March and Olsen’s garbage can. It is the
reached-for "existent technology which creates problems it can solve".

My colleague Peter Rowlands, whom some of you know, puts the blame on
our current way of thinking in science: most scientific methodologies
are "synthetic" - they attempt to amalgamate existing theory and
manifest phenomena into a coherent whole. Peter's view is that an
analytic approach is required, one which thinks back to originating
mechanisms. Of course, our current institutions of science make such
analytic approaches very difficult, with few journals prepared to
publish the work. That's because they are struggling to manage their
own uncertainty.

So I want to ask a deeper question. Effective science and effective
decision-making go hand in hand: what does an effective society
operating in a highly ambiguous and technologically abundant
environment look like? How does it use its technology for effective
decision-making? My bet is that it doesn’t look anything like what
we’ve currently got!

Best wishes,

Mark

On 6 March 2018 at 20:23, Alberto J. Schuhmacher <ajime...@iisaragon.es> wrote:
> Dear FIS Colleagues,
>
> I very much appreciate this opportunity to discuss with all of you.
>
> My mentors and science teachers taught me that Science had a method, rules
> and procedures that should be followed and pursued rigorously and with
> perseverance. Scientific research needed to be preceded by one or several
> hypotheses that should be subjected to validation or refutation through
> experiments designed and carried out in a laboratory. Oxford Dictionaries
> Online defines the scientific method as "a method or procedure that has
> characterized natural science since the 17th century, consisting in
> systematic observation, measurement, and experiment, and the formulation,
> testing, and modification of hypotheses". An experiment is a procedure
> designed to test a hypothesis; experiments are an important tool of the
> scientific method.
>
> In our case, molecular, personalized and precision medicine aims to
> anticipate the future development of diseases in a specific individual
> through molecular markers registered in the genome, variome, metagenome,
> metabolome or any of the multiple "omes" that make up the "omics"
> language of current Biology.
>
> The possibilities of applying these methodologies to the prevention and
> treatment of diseases have increased exponentially with the rise of a new
> religion, Dataism, whose foundations are inspired by scientific agnosticism,
> a way of thinking that seems classical but which, applied to research,
> hides a profound revolution.
>
> Dataism arises from the recent human desire to collect and analyze data,
> data and more data, data of everything and data for everything - from the
> most banal social issues to those that decide the rhythms of life and death.
> “Information flow” is one of the “supreme values” of this religion. The
> next floods will be of data, as we can see just by looking at any
> electronic window.
>
> The recent development of gigantic clinical and biological databases and
> the concomitant progress in the computational capacity to handle and
> analyze these growing tides of information represent the best substrate
> for the progress of Dataism, which in turn has managed to give solid
> content to an always-evanescent scientific agnosticism.
>
> On many occasions the establishment of correlative observations seems to
> be considered sufficient to infer the relevance of a certain factor in the
> development of some human pathologies. It seems that we are heading down a
> path in which, instead of research being driven by hypotheses confirmed
> experimentally, hypotheses themselves will arise from the observation of
> data from previously performed experiments. Are we facing the end of the
> wet lab? Is Dataism the end of classical hypothesis-driven research (and
> the beginning of data-correlation-driven research)?
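>
> A minimal sketch, with invented numbers, makes the danger concrete:
> screen enough markers against an outcome that is pure noise, and a
> predictable fraction will "correlate" well enough to pass a naive
> significance threshold.
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     n_patients, n_markers = 50, 2000    # a hypothetical "omics" screen
>     markers = rng.normal(size=(n_patients, n_markers))
>     outcome = rng.normal(size=n_patients)   # pure noise: no real signal
>
>     # Pearson correlation of every marker with the outcome
>     r = (markers - markers.mean(0)).T @ (outcome - outcome.mean())
>     r /= n_patients * markers.std(0) * outcome.std()
>
>     # |r| > 0.28 is roughly p < 0.05 for n = 50, so by chance alone
>     # around 5% of the 2000 markers (about 100) will clear it
>     print((np.abs(r) > 0.28).sum(), "markers 'correlate' with the outcome")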
>
> Deep learning is based on learning data representations, as opposed to
> task-specific algorithms. Learning can be supervised, semi-supervised or
> unsupervised. Deep learning models are loosely related to information
> processing and communication patterns in a biological nervous system, such
> as neural coding that attempts to define a relationship between various
> stimuli and associated neuronal responses in the brain. Deep learning
> architectures such as deep neural networks, deep belief networks and
> recurrent neural networks have been applied to fields including computer
> vision, audio recognition, speech recognition, machine translation, natural
> language processing, social network filtering, bioinformatics and drug
> design, where they have produced results comparable to, and in some cases
> superior to, those of human experts. Will data-correlation-driven research
> be the new scientific method for unsupervised deep learning machines? Will
> computers become fundamentalists of Dataism?
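>
> (A minimal unsupervised example of "learning data representations" -
> principal component analysis rather than a deep network, but the same
> idea of extracting a representation from the data alone, with no
> task-specific labels:)
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     # toy data: a hidden 2-dimensional structure embedded in 10 dimensions
>     latent = rng.normal(size=(200, 2))
>     data = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
>
>     # unsupervised: learn a representation from the data alone (PCA via SVD)
>     centred = data - data.mean(axis=0)
>     _, s, vt = np.linalg.svd(centred, full_matrices=False)
>     representation = centred @ vt[:2].T  # two learned coordinates per sample
>
>     explained = (s[:2] ** 2).sum() / (s ** 2).sum()
>     print("two components explain %.0f%% of the variance" % (100 * explained))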
>
> Best regards,
>
> AJ
>
>
>
> ---
> Alberto J. Schuhmacher, PhD.
> Head, Molecular Oncology Group
>
> Aragon Health Research Institute (IIS Aragón)
> Biomedical Research Center of Aragon (CIBA)
> Avda. Juan Bosco 13, 50009 Zaragoza (Spain)
> email: ajime...@iisaragon.es
> Phone:(+34) 637939901
>



-- 
Dr. Mark William Johnson
Institute of Learning and Teaching
Faculty of Health and Life Sciences
University of Liverpool

Phone: 07786 064505
Email: johnsonm...@gmail.com
Blog: http://dailyimprovisation.blogspot.com

_______________________________________________
Fis mailing list
Fis@listas.unizar.es
http://listas.unizar.es/cgi-bin/mailman/listinfo/fis
