Dear Maximilian,

Thank you very much for your detailed answer! Makes things much clearer. Let me comment again:


On 9/1/2016 12:45 AM, Maximilian Schich wrote:
Dear Wolfgang, Martin, and all,

My Edge contribution does not put into question the CIDOC-CRM, or the hermeneutic circle in general. The argument is that quantitative measurement accelerates the process and extends beyond the reach of qualitative inquiry.
I am glad you don't put the CRM into question :-) .

The hermeneutic circle is often confused with the argument structure itself. I do not believe we learn very much from the hermeneutic circle (see also: Doerr, M., Kritsotaki, A., & Boutsika, A. (2011). Factual argumentation - a core model for assertions making <http://dl.acm.org/citation.cfm?id=1921615>. Journal on Computing and Cultural Heritage (JOCCH), 3(3), 34. New York, NY, USA: ACM). I think it distracts the focus from the actual logic of inference-making in the sciences and scholarship. It describes the accidental, the surface, rather than the substantial. I'd see it as an idealization of the actual complex interaction patterns you describe, as if a small group of researchers would spiral in their own soup, without giving insight into the scholarly logic. So, either way, there is not much more to learn from it.

The CRM has nothing to do with the hermeneutic circle. It describes a part of the factual world that cultural-historical and scientific investigations are interested in and on which they can easily agree on shared notions of identity, which enables information integration. In a hermeneutic circle, researchers may wish to modify one or more CRM instance graphs. The CRM does not prescribe research in any way.
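
To make the point about shared notions of identity concrete, here is a minimal sketch of my own (Python with rdflib; the painting, the example.org URIs and the two sources are hypothetical, not taken from any CRM-SIG dataset): two independently produced datasets fall together into one CRM instance graph as soon as they agree on the identity of the things they describe.

# Toy example: two sources describe the same (hypothetical) painting.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/")  # hypothetical local namespace

museum = Graph()   # e.g. a collection management export
museum.add((EX.painting42, RDF.type, CRM["E22_Man-Made_Object"]))
museum.add((EX.painting42, RDFS.label, Literal("Hypothetical Still Life")))

archive = Graph()  # e.g. an archival source about the production event
archive.add((EX.painting42, CRM.P108i_was_produced_by, EX.production42))
archive.add((EX.production42, RDF.type, CRM.E12_Production))
archive.add((EX.production42, CRM.P14_carried_out_by, EX.painterX))
archive.add((EX.painterX, RDF.type, CRM.E21_Person))

# Because both sources use the same URI for the painting, integration is a
# plain union of triples; no negotiation about research questions is needed.
merged = museum + archive
print(merged.serialize(format="turtle"))

The only agreement required is on identity and on the shared relationships, not on terminology or on the questions each source wants to answer.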

If you replace "measurement" with "observation", I am much more on your line. Sciences such as geology and the life sciences are full of observations that can hardly be called measurements, such as the observation of species occurrences. I was rather objecting to the often silent assumption that "measurement" per se provides objectivity and is superior to observation. I would further distinguish between quantitative measurement and systematic observation that achieves some sense of coverage of phenomena. When it comes to scientific breakthroughs, the rarest thought may win, as Paul Feyerabend describes in "Against Method". Quantitative analysis may project the ineffective or the irrelevant. Systematic observation indeed accelerates the process. What constitutes effective systematics is a highly complex question in its own right. Indeed, in the CRM-SIG we advocate systematic observation. Whether we succeed is another question ;-) .


For a more extended account with figures, please take a look at
Maximilian Schich: Figuring out Art History. DAH-Journal 2 (2016) [to appear]
Preprint: http://arxiv.org/abs/1512.03301 (22 Oct 2015)
I have read your article with interest. I completely agree that research in collective behavior has to be treated as you describe, that new insight will be gained from it, and that disciplinary narrow-mindedness is an obstacle.

I note, however, that the data collection you silently assume is based on individual facts, which you need other processes to validate. We must not confuse the parameter selection for statistical studies of any kind with ontologies in the proper sense, which provide the overall causal model of the world from which we select parameters. These parameter sets may appear as "data models", as you describe. In the "ACGT" research project, the consortium could show that all data models of clinical studies in cancer research on chemotherapy could be described as views of one global ontology, a "super data model", which increasingly stabilizes around scientific insight into the human body. From the examples you give in your paper, for instance, I do not see anything that would not be based on a CRM-compatible data collection.
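
To illustrate what I mean by parameter selections appearing as "data models", here is a toy sketch (the field names are invented for illustration and have nothing to do with the actual ACGT models): each study's "data model" is simply a view, i.e. a projection, of one richer global record structure.

# Hypothetical global record; field names are invented for illustration.
global_record = {
    "patient_id": "p1", "diagnosis": "X", "drug": "Y",
    "dose_mg": 50, "tumour_stage": "II", "followup_months": 12,
}

def view(record, fields):
    """A study-specific 'data model' as a mere selection of parameters."""
    return {f: record[f] for f in fields}

# Two studies with seemingly different data models, both of which are views
# of the same underlying model of the domain.
study_a = view(global_record, ["patient_id", "drug", "dose_mg"])
study_b = view(global_record, ["patient_id", "tumour_stage", "followup_months"])
print(study_a, study_b)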

Therefore, I do not follow your figure 5 and your claims about data models. I do not see how you could statistically feed back the working hypothesis expressed in the data model without a grounding theory. For the latter, a "speed-up" would be just the opposite of an advancement of science. From your paper, I do not see any argument for how this should work out. Please give more details on that! Note that physics has, over the centuries, stabilized more and more around fundamental laws, which increasingly improve the precision of predictions.

In general, I'd be more cautious with global claims such as "the actual history of all made things" or "a perspective
for a systematic science of art and culture" ;-) .

Please let me also answer your questions briefly...

Martin's questions:

  * Like glaciers, frozen models still move and are often beautiful
    and vital as a "reference". ;)

Well, I hope you don't see the CRM like that. For me the CRM is a domain theory, like, for instance, Newton's mechanics. If you have a clearly defined scope and precision requirement, it gives the correct answers. It is not a question of beauty or arbitrary agreement. If it does not give the correct answers, you change it. It is verified empirically. If you go beyond its initial scope, you may need to modify it, as the theory of Relativity did with Newton's mechanics. If you seek different kinds of answers, you find another model, such as Thermodynamics. If there are competing models for the same questions, either one wins by better answers, or you simply merge them. We have merged the CRM with the ABC Harmony Model and with FRBR. That is not a moving glacier. We will continue to merge the CRM with any model that provides better answers in the same scope. See above about data models.

  * The initial hypotheses may be established in a traditional way:
    Facebook for example has updated their traditional gender-model
    from male/female to a list of 250 genders, emerging from user
    input, characterized by a frequency distribution and temporal
    dynamics that can only be measured in a quantitative way. The
    emerging ENRON email structure versus ENRON's defined corporate
    hierarchy is another good example for the need of quantification.

We should not confuse data models with terminology. Terminology is justified by intensional definitions, which are based on relationships, which are more fundamental and much fewer in number. Terminology has a different epistemological role than data models. You cannot create "data" with terms, only with relationships. Terms appear in data as values. Have we understood what the 250 genders are supposed to be? When you measure the distribution and dynamics, what sort of things do you learn from that? Terminology is typically fluid.
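
To spell the distinction out with a toy sketch (the vocabulary and field names are my own invention, not Facebook's actual model): the relationship "has gender" belongs to the data model, whereas the gender terms only appear as values, so the terminology can grow from 2 to 250 entries without any change to the relationships.

# Hypothetical vocabulary; the point is only that it can grow without
# touching the structure (the relationships) of the records.
gender_vocabulary = {"male", "female"}

def record_person(name, gender_term):
    """The record structure is the data model; the gender term is merely
    a value drawn from the current vocabulary."""
    gender_vocabulary.add(gender_term)                 # terminology is fluid ...
    return {"name": name, "has_gender": gender_term}   # ... the model is not

print(record_person("Alex", "nonbinary"))
print(sorted(gender_vocabulary))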

  * Yes, after measurement, disproving hypotheses, and finding a way
    out of Uri Alon's "cloud of uncertainty" (aka science), the loop
    needs to be closed (by engineering). Therefore, the CRM-SIG will
    be even more important than before (doing both science and
    engineering).
  * I use the word "measurement" in the sense of Max Planck, who
    claimed that any observation of the "real world" is subject to
    measurement bias, either due to imperfections in our tools or our
    own sensory organs.

Sure.

  * Quantity is not indicating quality per se. But quantification can
    reveal hidden quality as "more is different".

Sure.

  * There is no confusion: Cultural research is part of the cultural
    process itself.

I think there is still confusion. Of course cultural research is part of the cultural process, but not because it researches culture; rather, because all human activities are part of culture. Studying the research process reveals the research culture, but not the culture researched. It may help improve the research process, and thereby indirectly improve the subject matter. Scientific insight comes from discrimination, not from declaring everything to be the same.

  * Yes, physicists are aiming to improve physics by studying
    interaction patterns between physicists => Sinatra et al. Nature
    Physics 11, 791-796 (2015) doi:10.1038/nphys3494 (cf. final paragraph)

Sorry for my sloppy expression! I meant that you will not find a new law of particle physics by studying the interaction patterns between physicists.

Wolfgang's questions:

  * Of course, data comes from databases old and new. In fact,
    analyzing old datasets most interesting, as their data models were
    usually formulated decades ago, without knowing the emerging
    complex patterns that result from "local activity" by curators and
    the heterogeneity of granular data collected over time.

Still, I'd need concrete examples of how the activity patterns of curators would lead to questioning data models. As a global statement it makes no sense to me, though I will not exclude that there may be a particular effect at times.

  * The data and method of our Science paper is published in the
    Supporting Online Material (free access to the Science website via
    www.cultsci.net). This allows for reuse and feedback of conceptual
    ideas by others. I assume all steps of the hypercycle will have
    their own publication stream, feeding into following steps.
  * Our conceptual reference models are out of sync with (a) a large
    number of databases with tens of thousands of entity and property
    types, and (b) massive amounts of data where the entire
    ontological structure is hidden (for example in plain
    tagging/category systems). In both cases, quantification is
    essential to model the emerging structure and dynamics, and
    eventually update our conceptual models.

Well, our research in mapping thousands of database fields to the CRM shows neither an "out of sync" nor that "emergent semantics", which may work well for taxonomic systems, can be applied to database design; and I cannot understand how the "dynamics" would eventually help update conceptual models. The problem is that the database surface structure is quite different from the mental models behind it (see Fauconnier, "The Way We Think"). But if you can show us a nice application that works, I'll be more than happy to use your methods!
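
As a rough illustration of what such mappings involve (a toy sketch of my own, with invented field names and URIs; it is not meant to stand for how such mappings are actually implemented), note how a flat "surface" record has to be unfolded into the event-centric structure of the mental model behind it:

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/")  # hypothetical namespace

# Flat database surface: no production event is visible in the schema.
flat = {"obj_id": "42", "title": "Hypothetical Still Life",
        "artist": "Painter X", "date_made": "1642"}

g = Graph()
obj = EX["object/" + flat["obj_id"]]
prod = EX["production/" + flat["obj_id"]]   # the implicit event, made explicit
g.add((obj, RDF.type, CRM["E22_Man-Made_Object"]))
g.add((obj, RDFS.label, Literal(flat["title"])))
g.add((obj, CRM.P108i_was_produced_by, prod))
g.add((prod, RDF.type, CRM.E12_Production))
g.add((prod, CRM.P14_carried_out_by, EX["actor/" + flat["artist"].replace(" ", "_")]))
g.add((prod, CRM["P4_has_time-span"], EX["timespan/" + flat["date_made"]]))

The implicit production event is precisely the part of the mental model that never shows up in the surface schema.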

In sum, I see a world with much more analytical structure, of very different kinds, in which your research covers a certain valuable area, as mine does at another edge; but I would be hesitant with too global claims, such as "a perspective for a systematic science of art and culture" ;-) .

All the best,

Martin

In sum, all stages of the hermeneutic hypercycle are essential. Quantification will play an important part. But this does not mean traditional ontology engineering will go away.

Best regards,
Max


Dr. Maximilian Schich
Associate Professor, Arts & Technology
Founding member, The Edith O'Donnell Institute of Art History

The University of Texas at Dallas
800 West Campbell Road, AT10
Richardson, Texas 75080 – USA
US phone: +1-214-673-3051
EU phone: +49-179-667-8041

www.utdallas.edu/atec/schich/
www.schich.info
www.cultsci.net

Current location: Dallas, Texas


On 2016-01-08 8:58, martin wrote:
Dear All,

Just to add to Wolfgang's remark:

The CRM is in no point a product of a priori intuition, but is exclusively based on the empirical study of database use and interpretation, and on continuous feedback into systematic updates of the CRM. More flexible mapping mechanisms and semantic Web technologies also enable the systematic update of the databases to new releases. The CRM, as an ISO standard, is not "frozen", but has the regular update cycle of 5 years, which the CRM SIG extensively uses.

How ontological relations can emerge from quantitative measurements is black magic to me: all quantitative measurement requires an a priori hypothesis, and competing hypotheses will reveal better or worse agreement with reality. That is how the sciences appear to me to work so far. So, what are the initial hypotheses about such patterns? Or is there again an ontology engineering step after the measurement?

I agree that the real ontological patterns are often not what expert intuition would suggest in the first place. This is our common experience. However, once found to be operational, they must be compatible with scientific argumentation, and experts can confirm them. I agree with Maximilian that data structures must be based on empirical research, but "measurement"?

The "quantitative" argument is equally puzzling to me. Is quantity now indicating quality? Aren't we here confusing the sociology of doing cultural research and the evolution of knowledge with nature of the subject matter and the structure and logic of the scholarly argument? Would anybody reasonably try to improve the science of physics by studying interaction patterns between physicists???

All the best,

martin

On 8/1/2016 9:12 AM, Wolfgang Schmidle wrote:
Dear All,

Let me quote from fellow list member Maximilian Schich's critique of database models and CRM:

"Over decades, database models, to embody the underlying worldview, were mostly established using formal logic and a priori expert intuition. Database curators were subsequently used to collect vast numbers of specific observations, enabling further traditional research, while failing to feed back systematic updates into the underlying database models. As a consequence, "conceptual reference models" are frozen, sometimes as ISO standards, and out of sync with the non-intuitive complex patterns that would emerge from large numbers of specific observations by quantitative measurement. A systematic data science of art and culture is now closing the loop using quantification, computation, and visualization in addition."
http://edge.org/response-detail/26784

Max, let me start by asking where the data underlying visualisations such as https://www.youtube.com/watch?v=4gIhRkCcD4U is supposed to come from, if not an old-fashioned database? How did you feed the "non-intuitive complex patterns" emerging in this visualisation back into the database or somewhere else? And why do you think CRM is out of sync with this?

Thanks
Wolfgang

_______________________________________________
Crm-sig mailing list
[email protected]
http://lists.ics.forth.gr/mailman/listinfo/crm-sig

--

--------------------------------------------------------------
 Dr. Martin Doerr              |  Vox:+30(2810)391625        |
 Research Director             |  Fax:+30(2810)391638        |
                               |  Email:[email protected]  |
                                                             |
               Center for Cultural Informatics               |
               Information Systems Laboratory                |
                Institute of Computer Science                |
   Foundation for Research and Technology - Hellas (FORTH)   |
                                                             |
               N.Plastira 100, Vassilika Vouton,             |
                GR70013 Heraklion,Crete,Greece               |
                                                             |
             Web-site:http://www.ics.forth.gr/isl            |
--------------------------------------------------------------
