+1 on putting the focus on *workflows* (rather than specific tools).

On Fri, Sep 19, 2014 at 10:53 PM, W. Trevor King <[email protected]> wrote:
> “Learn how to use R and join the open-statistics revolution. We'll
> go over enough Excel so we can extract your data, and then talk
> about regressions and plots while introducing you to modular,
> test-backed, version-controlled development.”
I really like this pitch! :)

I don't think "Excel/VBA and R users" should be assumed to be two
different audience segments (very likely in different professional
areas). For what it's worth, my team at my former job built and used a
pipeline made of R parts and Excel parts (and other technologies
upstream). Personally, I would always find an excuse not to touch
anything Excel, but I believe it is not uncommon for data analysts to
use a few different tools--tools which may seem to belong to different
cultures. That happens whenever you need to collaborate and/or build on
existing tools and code bases.

We emphasize the notion of a *pipeline* in the section on the Unix
shell, so yes, of course we want to pipe the output of some Excel
processing into some R processing! (I've sketched what I mean in the
P.S. below.) And if not Excel, then SQL, etc.

As an attendee, that was really the main take-away for me: *thinking*
in terms of a pipeline, breaking things down, and *doing* it with
modularization and automation. It didn't matter so much whether I was
going to script in Bash or in Python, or whether I would end up using
the very tools I was shown, as long as things connected to something
familiar--and I don't mean to underestimate that aspect. Leaving the
bootcamp with this enhanced vision of pipelines, and of
faster/smoother/safer workflows, felt very empowering, both
conceptually and practically. So that's what I try to transmit as an
instructor...

Cheers,
Marianne
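P.S. Here is a minimal sketch of the kind of Excel-to-R pipe I have in
mind. It assumes csvkit's in2csv is installed; the sheet and file names
(results.xlsx, plot_regression.R) are made up for illustration:

    # Extract one worksheet from the Excel file as CSV on stdout,
    # then pipe it straight into an R script that reads from stdin.
    in2csv --sheet "Sheet1" results.xlsx | Rscript plot_regression.R

The R script on the receiving end would just begin with something like
data <- read.csv(file("stdin")) and then fit and plot as usual. The
point is less the particular tools than the shape: each stage does one
job and hands plain CSV to the next.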
