Re: [Fis] Is Dataism the end of classical hypothesis-driven research and the beginning of data-correlation-driven research?
Mark Johnson wrote:
> So I want to ask a deeper question: Effective science and effective decision-making go hand-in-hand. What does an effective society operating in a highly ambiguous and technologically abundant environment look like? How does it use its technology for effective decision-making? My betting is it doesn't look anything like what we've currently got!

These are good questions, Mark. Understanding 'science' as 'knowledge', it is plainly true that "Effective science and effective decision-making go hand-in-hand". As a gloss on that comment, I would add that there is an imbalance. Decision-making aspires to universal applicability. If the state changes the tax regime, then it expects all citizens to conform, and increasingly technology can be used to achieve that. But knowledge of the consequences of those changes to the tax regime for society and individuals is partial. The state uses a regulatory framework, which is quite easily knowable, to regulate the chaotic interactions of society, which are complex to the degree that they are unknowable. In other words, governments use policy instruments to attenuate the variety of the society that they set out to regulate, and implicit in this is a recognition of the impossibility of a complete knowledge of society.

An open question is whether the tools of data surveillance can change or adjust that equation, and, if they can, whether that is desirable. In the past you have drawn my attention to Bataille's discussion of transgression, which I think is relevant here. The question arises: is it possible for political science, with technological support, to manage the attraction of transgression? That seems to be the project that is underway in China at the moment. We can watch the results with interest (and perhaps trepidation).

> What does an effective society operating in a highly ambiguous and technologically abundant environment look like?
My working suggestion for a guiding principle would be: "An effective society should be humble about its ability to understand its own workings, and those of the people who constitute it."

Dai

--
Professor David (Dai) Griffiths
Professor of Education
School of Education and Psychology
The University of Bolton
Deane Road, Bolton, BL3 5AB
Office: M106
SKYPE: daigriffiths
Phones (please don't leave voice mail):
UK Mobile: +44 (0)7491151559
Spanish Mobile: +34 687955912
Work landline: +44 (0)1204903598
Email: d.e.griffi...@bolton.ac.uk / dai.griffith...@gmail.com

___
Fis mailing list
Fis@listas.unizar.es
http://listas.unizar.es/cgi-bin/mailman/listinfo/fis
Dear Dr. Zou: Most interesting. I enclose a recently submitted manuscript for your perusal. Cordial wishes, Otto E. Rossler

Who can program the Einstein Rocketship?

Otto E. Rossler(1) and Yaël Kolb(1,2)
(1) Faculty of Science, University of Tübingen, Auf der Morgenstelle 8, 72076 Tübingen, Germany
(2) University of Design (HfG), Lorenzstrasse 15, 76135 Karlsruhe, Germany

Abstract: A computer-game version of the famous Einstein equivalence principle of 1907 is proposed. Surprising implications predictably follow. The idea appears worth checking by the computer-game community as a contribution to science. (March 12, 2018)

The Einstein rocketship of 1907 consists of a constantly accelerating vertical paper strip (interpreted as the interior of a roaring rocketship) and an internal light ray that is continually emitted vertically along the strip from the bottom to the tip. Einstein first solved this typical computer-game problem in his mind, and in this way predicted out of the blue the famous "gravitational redshift": the ascending light ray, on arrival at the tip, is slowed in its frequency by a negative Doppler effect (like the sound of a departing ambulance), because the point of origin of the vertical light ray is constantly falling back from the tip during the time it takes the light to arrive, although the distance remains unchanged. This at-the-time absurd prediction enables accurate car navigation to date. The just-described "Einstein task" is only the first step (the one-dimensional case). It has never been simulated, even though this is of course possible and indeed desirable. The young Einstein, in the same 1907 paper, thereafter also looked at the two-dimensional case: how does a horizontal light ray that hugs the floor of the rocketship appear from the tip when made visible upwards through some smoke in the air?
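Before turning to the two-dimensional case, it is worth noting that the one-dimensional "Einstein task" is indeed easy to sketch numerically. The following is a minimal illustration (not the authors' program; the function name and the first-order approximation are my own choices): the light needs roughly t = h/c to climb a strip of height h, by which time the tip, accelerating at a, has gained speed a·t, giving the fractional redshift z ≈ a·h/c².

```python
# Minimal sketch of the 1-D "Einstein task": light climbs a rocket of
# height h that accelerates at a. To leading order, the light needs
# t = h/c to reach the tip, by which time the tip has gained speed
# dv = a*t, so the received frequency is Doppler-lowered by z = dv/c.
C = 299_792_458.0  # speed of light in vacuum, m/s

def gravitational_redshift(a: float, h: float) -> float:
    """First-order fractional frequency shift, z = a*h/c**2."""
    travel_time = h / C          # light-travel time up the strip
    delta_v = a * travel_time    # speed gained by the tip during transit
    return delta_v / C           # non-relativistic Doppler shift

# For Earth-like acceleration over a 1 m strip, z is of order 1e-16,
# the per-metre clock-rate difference that satellite navigation
# systems must correct for.
z = gravitational_redshift(9.81, 1.0)
```

This captures only the first-order kinematics; a full game would have to track the emission and arrival events tick by tick.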
This mental image would later become the "light clock": a laser pulse inside a glass tube with reflecting ends and a bit of glitter inside to make the ticking visible to the outside world. Programming this 2-D game to make it totally transparent, too, is a bit more difficult but is bound to teach something new. While the light pulse is progressing horizontally down there, the bottom is constantly falling back from the tip while keeping its distance, as we saw. Therefore, the horizontally advancing light pulse downstairs necessarily does so in a locally downwards-slanted fashion relative to the tip. This is a first post-Einsteinian Einsteinian prediction (PEEP).

Mega-consequences follow suit if the PEEP can be successfully programmed rather than remaining a mere mental fantasy. For it logically follows that the light path downstairs is increased in its length relative to the tip, owing to its being slanted everywhere locally relative to the tip, but without appearing shortened due to the slant. For special relativity, which governs the gadget, enforces preservation of optical width inside the rocketship. Hence the slowdown visible from the tip, seen in Einstein's mind in 1907, reflects the fact that all objects downstairs are, invisibly to above, enlarged in size relative to the tip by the gravitational redshift factor. This prediction, if true, entails that the speed of light downstairs is actually non-reduced despite appearances.

If the proposed computer game confirms this new prediction, surprising consequences follow suit. One of them reads: "No Big Bang" anymore, because the speed of light is rendered a global constant again by the computer game. Note that mutually very distant points in the universe could then no longer recede from each other at super-luminal speeds, as is being assumed at present. So the proposed "Einstein computer game" (ECG) is a surprisingly serious playful proposal in the realm of games.
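The locally slanted pulse can likewise be sketched with textbook small-angle kinematics (these formulas and function names are my own back-of-envelope reconstruction, not the manuscript's; whether the further "preservation of optical width" argument holds is precisely what the proposed game would have to test). In the accelerating cabin, a pulse launched horizontally falls behind by a·t²/2, so after a horizontal run x (with t = x/c) its direction is tilted downward by roughly a·x/c², and the slanted path is fractionally longer than the straight one.

```python
import math

# Sketch of the 2-D case: a light pulse crosses the floor of a cabin
# accelerating upward at a. Relative to the cabin, the pulse drops by
# a*t**2/2, so after horizontal distance x (t = x/c) it is slanted
# downward by about a*x/c**2 (small-angle regime).
C = 299_792_458.0  # speed of light in vacuum, m/s

def slant_angle(a: float, x: float) -> float:
    """Local downward slant (radians) of the pulse after distance x."""
    t = x / C                    # transit time, leading order
    return math.atan2(a * t, C)  # angle between the pulse and the floor

def path_stretch(a: float, x: float) -> float:
    """Fractional extra path length of the slanted ray, ~ theta**2 / 6,
    from integrating sqrt(1 + (a*s/c**2)**2) over 0 <= s <= x."""
    theta = slant_angle(a, x)
    return theta * theta / 6.0
```

For everyday accelerations the effect is minute (the angle is of order 1e-16 radians per metre at 1 g), which is why the authors' proposal is a simulation rather than a tabletop experiment.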
It would be especially great if it could get implemented right away by utilizing an already existing game portal like "gamelab". The race is on. We thank Wolfgang Rindler, Susan J. Feingold and Andrei Ujica for stimulation. For J.O.R.

References
A. Einstein, On the relativity principle and the conclusions drawn from it. Jahrbuch der Radioaktivität und Elektronik 4, 411-462 (1907), in German. http://www.pitt.edu/~jdnorton/teaching/GR_2007/pdf/Einstein_1907.pdf
https://code.org/educate/gamelab

On Monday, March 19, 2018, 7:26:52 AM GMT+1, ZouXiaohui <949309...@qq.com> wrote:
> Dear colleagues: The era of large-scale or big production of knowledge and small-scale or normal production of knowledge is about to come. [...]
On 15/03/18 10:11, Karl Javorszky wrote:
> To me, it does not appear necessary to make a distinction between "reality" and "data"

That's a defensible position, but it does constrain 'reality' to 'that which we can perceive'. Which would rule out the reality of things that we cannot perceive, e.g. explanatory mechanisms, or the insides of black holes.

> just like there is no necessity for musicians to distinguish between the note printed on the partiture,
> and the acoustic sound, or for Chess champions to distinguish between the description of the position
> in the protocol of the game and the actual pieces one can hold in his hands.

I do not think that these are the same case.

The description of the configuration of a chess game is lossless. I could note down the distribution of the pieces, take them off the board, mix them up and put them back again, and the game would not be changed for the players. The physical chess set and the physical context are also largely irrelevant. Players could leave one room, have a relaxed coffee or aquavit, go back into another room with a duplicate of the game with different pieces on another board, and continue with little disturbance.

But sheet music is not a lossless representation of a performance. From the starting point of the sheet music, the performer has to decide on volumes, intonation and timing, and in some cases also ornament and variations. These issues arouse deep passions and ferocious debate! Nor would we be happy to buy a recording of a symphony in which different orchestras played different movements in different concert halls (although it might be interesting to hear).

Dai
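The point about the chess description being lossless can be made concrete with a round-trip test. A minimal sketch (the encoding scheme here is my own toy format, not standard chess notation such as FEN): serialize a position to a string, rebuild the position from the string, and check that nothing was lost.

```python
# A chess position as a mapping from square name to piece letter.
# Encoding it to a string and decoding it back loses nothing: the
# round trip is exact, which is what makes the notation "lossless".
def encode(position: dict) -> str:
    # sort squares so the encoding of a given position is unique
    return ";".join(f"{sq}={pc}" for sq, pc in sorted(position.items()))

def decode(text: str) -> dict:
    return dict(item.split("=") for item in text.split(";"))

position = {"e1": "K", "e8": "k", "d1": "Q", "a8": "r"}
restored = decode(encode(position))
assert restored == position  # the game is unchanged for the players
```

No such exact round trip exists for sheet music: a score decoded into a performance and re-encoded would not reproduce the original performance, which is Dai's distinction.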
Dear Dai,

Thank you for your thoughtful comments on diversity, particularities and generalities. In my case, setting "reality" equivalent to "data" is one more little effort on my part to make all things appear enumerable. As you graciously concede, this is an acceptable perspective.

For the musician, it is irrelevant whether he sees the note *a* on a score or hears it: it is the same data element in the inventory of his mental contents. Similarly, for the chess champion it is irrelevant whether he has gained knowledge of the problem position by seeing it on the table, reading it from a protocol or having heard it narrated to him. The main point is that the *modality* of the perception is of no relevance for the idealised content, the denotation, of the idea. Since I am always talking about the identifiable element, I naturally prefer to say that the genesis, the connotations, of an element are relevant only to the extent that they do not hinder the communality of the object. We are discussing the penultimate steps of Kant peeling away the particularities of the object, where you warn that too much standardisation annihilates important properties of the mental objects. How interesting, then, that common consensus reigns that the world is best depicted by *one* kind of basic element: that faceless *i* of N, which does not even have its own place, much less fights for it.

The model being persistently presented to you deals with the positions of 136 individuals. These get constantly reorganised, and are almost always under way to positions that appear to be more nearly optimal, or towards which circumstances force the individual to migrate. In this theatre, there are sufficient role conflicts to entertain the participants: what kind of pile-up comes up again, how can one annihilate the maximum number of alternatives, which position is the most restrictive for its successors, and so forth. What I am involved with is an exercise in accounting.
No sounds, no chess, no reality, only data. We investigate the properties of data. How much reality is behind the results remains to be seen. How much reality was behind the rows of green peas of Padre Mendel, behind his tables and behind the information theory of genetics? Did Mendel's Laws exist while Mendel tried to explain them to his contemporaries? No, they were Mendel's obsession, Mendel's brain-bug, anything but Mendel's Laws.

The counting system that hopefully, little by little, evolves in your mind is made up of a few dozen individual elements, the basic shape of which has around a dozen different variants. External influences cause the inner order of the collection to be in a continuous, dynamic process. There are rules to these inner processes. These rules are demonstrated in the tables relating to *a+b=c* being subjected to sorting and ordering.

Our comprehension works by assigning the correct denotation to the perceived connotation. It is then an information-theoretical process, and a data-processing challenge: indexing, searching, filtering, classifying, categorising and identifying data elements. There are rules for doing so. The rules are given by how the natural numbers actually are. If, in the context of whatever complex question we discuss, *a+b=c* holds, then the constituents of the picture of the denotation of the question will agree with the numeric facts that are registered in the tables regarding the behaviour of elements during reorganisations.

Thank you for the opportunity of offering you my viewpoints.

Karl

2018-03-19 16:22 GMT+01:00 Dai Griffiths
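Karl's "136 individuals" reorganised under *a+b=c* admit a concrete reading (my reconstruction for illustration, not necessarily his model): the 136 pairs (a, b) with 1 ≤ a ≤ b ≤ 16, each defining a sum c. Sorting the same collection by different keys then exhibits the constant reorganisation he describes.

```python
from itertools import combinations_with_replacement

# Hypothetical reading of the "136 individuals": the pairs (a, b) with
# 1 <= a <= b <= 16. There are 16*17/2 = 136 of them, and each pair
# satisfies a + b = c for its own sum c.
pairs = list(combinations_with_replacement(range(1, 17), 2))
assert len(pairs) == 136

# Sorting by different keys reorders the collection: the "inner order"
# changes while the elements themselves stay identical.
by_sum = sorted(pairs, key=lambda p: (p[0] + p[1], p[1] - p[0]))
by_diff = sorted(pairs, key=lambda p: (p[1] - p[0], p[0] + p[1]))
assert set(by_sum) == set(by_diff)  # same elements, different order
```

The "accounting" then consists in tabulating which elements must move, and where to, when the sorting key changes.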
Dear Alex, Mark and FIS colleagues,

Thanks a lot for your comments and inputs. I am learning a lot from all of you. From my ignorance, computers are logic machines. I am not sure whether intuition could be considered a logical way of thinking; somehow yes, because it is based on our experience and learning, on our success and failure (the binary output of experiences).

All the best,
AJ

On 13-03-2018 08:38, Alex Hankey wrote:
> Dear Mark and Alberto,
>
> Let me propose a radical new input. Human intuition is far more powerful than anything anyone has previously imagined, except by those who use it regularly.
>
> It can be strengthened by particular mental practices, well described in the literature of Yoga.
>
> Digital computing machines are not capable of this, and although number crunching is a way for technology to assist, it is no substitute for the highest levels of the human mind.
>
> Alex
>
> On 13 March 2018 at 01:10, Mark Johnson
wrote:
>
>> Dear Alberto,
>>
>> Thank you for this topic - it cuts to the heart of why we think the study of information really matters, and most importantly, brings to the fore the thorny issue of technology.
>>
>> It has become commonplace to say that our digital computers have changed the world profoundly. Yet at a deep level this has left us very confused and disorientated, and we struggle to articulate exactly how the world has been transformed. Norbert Wiener once remarked in the wake of cybernetics, "We have changed the world. Now we have to change ourselves to survive in it". Things haven't got any easier in the intervening decades; quite the reverse.
>>
>> The principal manifestation of the effects of technology is confusion and ambiguity. In this context, it seems that the main human challenge to which the topic of information has the greatest bearing is not "information" per se, but decision. That, in large part, depends on hypothesis and the judgement of the human intellect.
>>
>> The reaction to confusion and ambiguity is that some people and most institutions acquire misplaced confidence in making decisions about "the way forwards", usually invoking some new tool or device as a solution to the problem of dealing with ambiguity (right now, it's blockchain and big data). We - and particularly our institutions - remain allergic to uncertainty. To what extent is "data-ism" a reaction to the confusion produced by technology? Von Foerster sounded the alarm in the 1970s:
>>
>> "we have, hopefully only temporarily, relinquished our responsibility to ask for a technology that will solve existent problems. Instead we have allowed existent technology to create problems it can solve." (in Von Foerster, H. (1981) "Observing Systems")
>>
>> With every technical advance, there is an institutional reaction. The Catholic church reacted to printing; universities reacted to the microscope and other empirical apparatus; political institutions reacted to the steam engine, and so on. Today it is the institution of science itself which reacts to the uncertainty it finds itself in. In each case, technology introduces new options for doing things, and the increased uncertainty of choice between an increased number of options means that an attenuative process must ensue as the institution seeks to preserve its identity. Technology in modern universities is a particularly powerful example: what a stupid use of technology to reproduce the ancient practices of the "classroom" online! How ridiculous, in an age of self-publishing, that academic journals seek to use technology to maintain the "scarcity" (and cost) of their publications through paywalls! And what is it about machine learning and big data? (I'm struggling with this in a project I'm doing at the moment - the machine learning thing is not all it's cracked up to be!)
>>
>> Judgement and decision are at the heart of this. Technologies do not make people redundant: it is the decisions of leaders of companies and institutions that do that. Technology does not poison the planet; again, that process results from ineffective global political decisions. Technology also sits in the context for decision-making, and as Cohen and March pointed out in 1971, the process of decision-making about technology is anything but rational (see "The Garbage Can Model of Organisational Decision-making", https://www.jstor.org/stable/2392088). Today we see "blockchain" and "big data" in Cohen and March's garbage can. It is the reached-for "existent technology which creates problems it can solve".
>>
>> My colleague Peter Rowlands, who some of you know, puts the blame on our current way of thinking in science: most scientific methodologies are "synthetic" - they attempt to amalgamate existing theory and manifest phenomena into
Dear colleagues,

The era of large-scale ("big") production of knowledge, alongside the small-scale, normal production of knowledge, is about to arrive.

Author: Zou Xiaohui. Time: 2018-03-19 08:57:37

In the age of mobile networks, where information and knowledge grow exponentially, even a small WeChat group or a circle of friends can detonate the spiritual world of any individual. This would have been incredible in ancient times. It is therefore already outdated to rely on a knowledge production method that is 2,000 years old for the processing of spiritual products. The double chessboard, based on the theory of integrated wisdom and the practice of cultural-gene system engineering, came into being in response. Its primary feature is that it is a human-machine combination that can instantly complete the knowledge production of any knowledge module. Its formation and growing popularity have gradually highlighted its unique charm. For example, any text segment imported into the word-chessboard web development environment and application platform can almost instantly yield all the language points, knowledge points, and original points contained in that segment, through worldwide super-collaboration. This not only makes it convenient for the original creators or experts to confirm their own themes, styles, or characteristics, but also provides a common platform for teachers, students and the general public to participate in the finishing of knowledge modules. Such large-scale production of knowledge is supported by three major system engineering practices: language, knowledge, and software. It is a brand-new approach to the informatization of education. At the same time, it provides a typical example of collaborative innovation centred on the combined intelligence of the human-computer "dual brain". Men, women, and children alike can discover, from the speech fragments that most interest them, their respective real interests, hobbies, and strengths, and then use these to participate in the double practice of the social system engineering of integrated teaching and learning, and the formal system engineering of soft and hard language, so as to reflect the three basic categories of object-oriented text. In this way they contribute their modest forces to the generalized project of textual cultural-gene system engineering, and gradually discover their precise position in the overall system of the construction of human knowledge.

Data, language, information, and knowledge all intersect, and are therefore often misunderstood. The text that records knowledge is a typical type of data. Obvious ambiguity can be rejected automatically by the machine; human experts resolve ambiguity easily within their respective fields; the most difficult ambiguity lies in the overlap of basic categories (basic concepts).

Best wishes!

Zou Xiaohui
iPhone

-- Original --
From: Syed Ali
Date: March 6, 2018 11:20
To: ZouXiaohui <949309...@qq.com>
Cc: fis
Subject: Re: [Fis] A Paradox

Many thanks Zou.

Syed

On Sun, Mar 4, 2018 at 6:35 PM, ZouXiaohui <949309...@qq.com> wrote:

Dear Colleagues and Syed,

Thank you for your attention! Let me answer your question, "Could you critique a view: Information is the container of meaning?" Undoubtedly, the point of view that "information is the container of meaning" is certainly wrong. First and foremost, phenomenal information is all-encompassing: in addition to carriers of mass and energy, it can be anything in the physical world, anything in the mind, anything in narrow and broad language or generalized text. Among these, there is both formal information and content information. Furthermore, ontology information, which is simplified in many ways and then focused on the same meaning or content, aims to disambiguate. Many people's cognitive errors and misunderstandings come from ambiguity. Finally, and most importantly, there is the essential information that can be calculated by using truth (this is the fundamental object or subject of information science). These are Zou Xiaohui's points of view. Please give comments or suggestions! Thank you!

Best wishes!

Zou