On Thu, Jan 14, 2016 at 03:19:13PM +0000, Jan Kim wrote:
> On Wed, Jan 13, 2016 at 12:51:24PM -0800, C. Titus Brown wrote:
> 
> [...]
> 
> > I also don't see a conflict between teaching SWC and using Jupyter as a
> > specific teaching mechanism, as long as what you're teaching is not just a
> > Jupyter-specific "here's the magic button you press and it all works".
> 
> One problem with using Jupyter (and similar systems) for teaching is
> that this creates a gap that learners will have to cross after the
> workshop, when they want to apply their new skills at their home
> institutions.

Sure, but let's contrast that with what we were teaching before.

(I'll be using Greg as a foil here :)

In 2012, we were teaching Python at the terminal.  We couldn't rely on
people having specific editors installed, so Greg would spend a bunch
of time teaching lists, dictionaries, etc. at the REPL.  Plotting and matplotlib
were Right Out -- too hard to install! And people would wonder why Python was
so difficult and useless.

There were two very real problems with this:

* we were unable to teach important and useful topics - topics that added
  significant value for learners - like plotting.

* no one who was actually using Python for real work did things that way, so we
  were teaching a completely irrelevant workflow.

On the flip side, we had little choice prior to IPython Notebook.  There were
essentially no robust cross-platform approaches, and everyone I knew (myself
included) had developed their own hacky workflow that depended on where they
did their compute, what software they were working with, and what front-end
they were using.

The tl;dr is that we have to confront a certain amount of real-world
complexity in order to serve learners well.  The world of computational
science has changed substantially for the better with Jupyter, Anaconda,
cloud/Docker, and RStudio, and these are tools that people are really
using in actual science.  Let's teach that.

> Gaps of this kind can be rather frustrating for those returning from
> a course or workshop, and they require substantial amounts of additional
> training to develop a more principled understanding of the "recipes"
> they took home from the course. (And this can also take up rather
> non-trivial amounts of time from local computer experts / instructors,
> myself included -- I've helped quite a number of course participants
> figure out why that "cp /data/course/example1.fastq ." command won't
> work on our computers.)
> 
> I think it would be good to identify / gauge such gaps -- is that part
> of the post-workshop feedback / evaluation, or would it make sense to
> include that?

That's a longer-term evaluation than I think anyone is doing, and we
should do it! (I certainly observe the same set of problems.)

--titus

