Hi Julio,
 
What you said was: "A 'mapping' is just a symbolic concept to denote a
relationship or transformation."
 
I understood a mapping in mathematics to be, basically, a relationship
between two sets, such that every element of one set corresponds to an
element of the other set. We could express the relationship as some kind of
function, and that function could, I suppose, have a fixed point.
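
To make that concrete with a toy example of my own (not from your post): a mapping can be modelled as an ordinary function, and a fixed point is an input the function sends to itself. The particular function below is just an illustration, Newton's step for the square root of 2.

```python
# A mapping f: R -> R, here f(x) = (x + 2/x) / 2.
# A fixed point is an x with f(x) == x; for this f, that means x*x == 2,
# and repeatedly applying f converges to it.

def f(x: float) -> float:
    return (x + 2 / x) / 2

x = 1.0
for _ in range(20):
    x = f(x)

# x is now (approximately) the square root of 2, a fixed point of f.
```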
 
When you survey and compile e.g. national accounts aggregates, it looks very
much like an empirical or empiricist exercise. In fact, it is not quite
that, because what you are actually doing is collecting data to fit prior
concepts in order to obtain your measure. You start with a prior grid of
concepts and classifications, and you collect observations which, once
turned into data using that classification grid, can be converted into
aggregates consistent with the prior concepts.
 
So really, the process is not: 
 
Experiential data -> formed concept.
 
But:
 
Concept -> experiential data -> measure of the concept.
 
The mathematical analysis only gets off the ground once you have defined
the appropriately classified measuring units; it assumes that you have
them. Mathematical analysis can reveal the quantitative implications, and
therefore the logical coherence, of the measures, but the initial
conceptual step of defining the measurement units is not a purely
mathematical operation. You have to assume some conceptual distinctions
before you begin to perform your mathematical operations.
 
If there are disputes about what the concepts ought to be, mathematical
analysis will not settle a great deal. At most you might be able to prove
that the quantitative consequences of adopting particular concepts are such
that some concepts seem more plausible or credible than others. And indeed
this now plays a big role in e.g. producing national accounts, since
mathematical models are constructed to extrapolate the figures for the
aggregates from partial or incomplete data, which are used as valid and
relevant indicators (a cost-saving method). To obtain quarterly GDP
measures, for example, it is rarely possible to run full surveys four times
a year. You have to extrapolate, and then adjust the data as more
information becomes available.
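
A minimal sketch of that extrapolate-then-revise workflow (purely illustrative, with made-up numbers; real national accounts use far more elaborate benchmarking methods): project the quarterly figure from a partial indicator, then revise the estimate when fuller data arrive.

```python
# Illustrative only: extrapolate a quarterly aggregate from a partial
# indicator, then revise the estimate when fuller data become available.

def extrapolate(benchmark_level: float, indicator_growth: float) -> float:
    # Assume the aggregate moves in step with the indicator's growth rate.
    return benchmark_level * (1 + indicator_growth)

benchmark = 500.0  # hypothetical quarterly level from the last full survey
preliminary = extrapolate(benchmark, 0.012)  # early indicator: +1.2%
revised = extrapolate(benchmark, 0.008)      # fuller data later: +0.8%

revision = revised - preliminary  # published estimate gets adjusted down
```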
 
From a data set, we can draw synthetic or inductive conclusions which are
generalisations not fully reducible to the particular data. A "theory"
typically goes well beyond the data - there is typically much more theory
than relevant data to back it up.
 
In social science, we frequently encounter the problem that a multiplicity
of individual behaviours leads to an aggregate social result, a social
condition which can act as a "social force", such that the individual
behaviours and the total social effect coexist and influence each other.
Simply put, the parts act on the whole and the whole acts on the parts. The
whole is then not reducible to the particular parts, but coexists with
them. That is the sort of thing I had in mind. Dialectical analysis is
typically concerned with this kind of reciprocal causation.
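
A crude way to see the two-way influence numerically (a made-up toy model, not a claim about any actual social process): the aggregate is computed from the individuals, and each individual then adjusts partway toward the aggregate.

```python
# Toy model of parts/whole feedback: the "whole" (the mean) emerges from
# the individual behaviours, and then acts back on each of them.

def step(behaviours: list[float], pull: float = 0.5) -> list[float]:
    whole = sum(behaviours) / len(behaviours)            # parts -> whole
    return [b + pull * (whole - b) for b in behaviours]  # whole -> parts

state = [0.0, 4.0, 10.0]
for _ in range(30):
    state = step(state)

# The mean is preserved at every step, while the individuals converge
# toward it: neither level is simply reducible to the other.
```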
 
I personally think the vexed concept of abstract labour has been much
misunderstood, because it is thought of as a fixed condition, rather than as
an evolutionary category which gains additional dimensions as it develops.
 
J.

 

_______________________________________________
pen-l mailing list
[email protected]
https://lists.csuchico.edu/mailman/listinfo/pen-l
