I was inspired to post this by one of the posts in the "word /
PowerPoint all wrong" thread.
In my opinion, one of the pitfalls of 'our' programmatic way of working
with data is that it makes it easy to move further away from the raw data.
As a little background I typically work on biomedical imaging data (optical
coherence tomography and very high resolution images of the human retina).
In my own work I am often caught by two traps. The first is garbage in,
garbage out: I often lack suitable metrics of quality, and once poor-quality
data has been reduced to .CSV summaries that lack of quality becomes
invisible. The second trap relates to the unknown nature of disease-induced
changes: often the most interesting changes are only seen under careful
examination of the images. While these specific examples relate to imaging
data, I'm sure the problems are not limited to this modality.
My approach to addressing these issues is constant visualisation of the
data, something made easier by R and knitr, and, where possible, the
development and use of quality metrics; a rough sketch of what I mean is
below.
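To make that concrete, here is a minimal sketch of the kind of check I mean,
in base R so it can sit in a knitr chunk. The simulated scans, the SNR-style
metric and the 15 dB threshold are all made up for illustration, not taken
from my actual pipeline.

## Hypothetical example: one "good" and one "poor" scan, each a numeric matrix
set.seed(1)
scans <- list(
  good = matrix(rnorm(64 * 64, mean = 50, sd = 5),  nrow = 64),
  poor = matrix(rnorm(64 * 64, mean = 50, sd = 40), nrow = 64)
)

## Crude quality metric: mean signal over standard deviation, in dB
snr_db <- function(img) 20 * log10(mean(img) / sd(img))

qc <- data.frame(scan = names(scans), snr_db = sapply(scans, snr_db))
qc$flagged <- qc$snr_db < 15   # arbitrary threshold, for illustration only
print(qc)

## Plotted alongside the summary table in the knitr report, so a low-quality
## scan stays visible rather than being silently averaged away
barplot(qc$snr_db, names.arg = qc$scan, ylab = "SNR (dB)",
        col = ifelse(qc$flagged, "tomato", "grey70"))
abline(h = 15, lty = 2)

The numbers don't matter here; what matters is that the flag and the plot sit
next to the summary, so a bad scan is seen rather than quietly carried along.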
My question, and my hope, is that other people have already addressed these
issues. If you have any thoughts or suggestions I'd love to hear them.

Thx.