Thanks again to everyone who participated! Feel free to reply with your thoughts. I've also posted this on the agilepdx list <http://tech.groups.yahoo.com/group/AgilePDX/message/2568>, which you may be interested in if you want to learn more about test-driven development. They've got a meeting on test-driven database development coming up shortly.
For those who missed my pitch, "randori" is a term borrowed from martial arts, where two students practice sparring in a ring. In a coding dojo randori, the sparring pits a pair of coders against a programming problem. Here's one description of the "coding dojo" and "randori" concepts: http://web.cs.wpi.edu/~gpollice/Dojo.html

Last night I organized the first coding dojo randori at the Portland Python User Group. There were five participants (plus myself as facilitator) with a wide range of experience with Python, although all had at least *some* knowledge of the language and basic programming concepts.[1]

We settled on Madlibs[2] as the kata of the evening, which was the least complex of the options I had prepared. Good thing, too, as even that one didn't lack for difficulty. I gave a brief introduction to the randori's ground rules for the evening[3], an even briefer introduction to unittest, and then we spent a little under two hours working in five-minute rotations in circular order[4]. At the end we took about fifteen minutes for retrospection.

There are some notes from the retrospective below, but as facilitator I felt the biggest thing for me to work on is figuring out how and when to engage. I was almost entirely hands-off for most of the exercise, and I'm thrilled that it was as successful as it was in that form, but there were a number of things I'd love to resolve:

- Some of the writing I've seen about running these says to "not let anyone fall off the sled." I failed to implement that, and we got ourselves stuck when one navigator started something that their successor could not follow. Since the audience wasn't encouraged to talk much when they weren't in the pair, we didn't have a good mechanism for catching this. Give everyone permission to pull the brakes at any time? More explicit checks for understanding? Have the facilitator watch for confused faces? Do you stop the rotation timer when this happens?

- What to do when the navigator starts writing untested code? Given that I pitched the randori as an exercise in TDD, participants were generally willing to overcome their habitual reluctance to write tests, but I didn't have a productive response when they objected with "I don't know how to test!" (A sketch of the kind of first test I wish I'd had ready follows this list.) I wasn't sure how to lead them to understanding without pulling them out of the navigator chair. So we let some untested code in, but I think without exception that code ended up abandoned or deleted by the pairs that followed. ("Untested code is hard for your successors to pick up" would have been a brilliant conclusion for the group to draw in the debrief, but I don't know if any of us got that insight at the time. I didn't hear it voiced.)

- Almost no one was well-versed in Red, Green, Refactor, and very little refactoring happened. Not the worst thing in the world, but maybe a missed opportunity, especially if that's one of the skills we're here to practice.

- Should the facilitator participate in the rotation? I didn't, which may be the right call if there are enough things that need facilitator attention to keep that hat on consistently through the event. But I felt I was missing out (in both giving and receiving) by not pairing.
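For what it's worth, here's roughly the answer I wish I'd had ready for "I don't know how to test!": keep the prompting I/O out of the substitution step so there's a small, pure function to test first. This is only a sketch, not the code we wrote last night; the fill_in name is made up, and the ((blank)) placeholder syntax is just my reading of the Ruby Quiz write-up.

    import re
    import unittest

    def fill_in(template, answers):
        """Replace each ((blank)) in the template with an answer.

        `answers` is any callable that takes the blank's text and returns
        a replacement -- a real prompt function in the program, a canned
        lambda in the tests -- so this logic never touches stdin.
        """
        return re.sub(r"\(\((.+?)\)\)",
                      lambda match: answers(match.group(1)),
                      template)

    class FillInTest(unittest.TestCase):
        def test_single_blank_is_replaced(self):
            result = fill_in("I like ((a food)).", lambda blank: "pie")
            self.assertEqual("I like pie.", result)

        def test_blank_text_is_passed_to_the_prompt(self):
            seen = []
            fill_in("((an animal)) ate ((a food)).",
                    lambda blank: seen.append(blank) or "x")
            self.assertEqual(["an animal", "a food"], seen)

    if __name__ == "__main__":
        unittest.main()

Whether re.sub is the right tool is beside the point; once the blanks-and-answers step is a plain function, "test first" stops feeling impossible.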
Notes from the retrospective:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Things that worked:
-------------------

Everyone participated, stuck around to the end, and agreed they had learned something.

One participant said that TDD on greenfield development challenged them to think about tests in a way they usually don't (and reported this as a positive :)

Having to code out loud made one participant more aware of the up-front decisions they made while working on a problem.

We took some time midway through to whiteboard inputs and outputs of not-yet-implemented functions, which gave people something to refer to when they got lost.

Things that didn't work so well:
--------------------------------

The development environment I provided came with a broken "run tests" button and broken keyboard shortcuts for the 80% of participants who don't type Dvorak.

I had both the laptop (17") display and the big external display enabled. The cursor was easy to lose between displays, and while it was occasionally useful to have the second little display, the audience couldn't see it. (Mirror the displays, or disable the smaller display entirely.)

We had some periods where an entire round or two was spent thrashing. Some thrashing is probably expected, but this felt like too much. (See the concerns above about "falling off the sled.")

While people did talk while pairing, I was less successful in getting people to speak up in the small group (including the retrospective).

Suggested additions:
--------------------

Cover "Yes, And" in the ground rules.

Spend some time planning as a group before the first pair dives in.

Start with at least a skeleton file for the tests, for those less familiar with xUnit and Python unittest invocation. (Something like the sketch at the end of this message.)

Maybe get one of those utilities that makes it easier to see the mouse cursor when you lose it.

Tinker with the pairing rotation to reduce the "blind leading the blind" feeling. (Random? Popcorn? Planned interleaving of inexperienced participants between experienced ones?)

Be more explicit about the start time. At hack night people tend to drift in throughout the evening, so it took a while before we had what I felt was critical mass.

Footnotes:

1: I chickened out and did not encourage the person who had not yet done their first "hello world" to join us.

2: Madlibs as described by Ruby Quiz #28: http://rubyquiz.com/quiz28.html

3: Ground Rules:
 * A pair of navigator and driver. The navigator decides where to go and tells the driver. The driver shall not get ahead of the navigator's instructions.
 * Rotation period: 5 minutes.
 * At the bell, the navigator becomes the driver, the driver goes to the end of the queue, and the new navigator comes from the front of the queue.
 * Everyone takes a turn.
 * The randori computer is the only computer. Even when not part of the pair, your attention should be on the activity happening there. (Possible exception for reference documentation.)
 * Test first.
 * Those not in the pair should keep comments and background chatter to a minimum.
 * When the tests are GREEN (passing), the navigator may solicit advice on next steps or refactorings.

4: I was going to say "round-robin" here, but a round-robin tournament (all-play-all) is actually quite different from round-robin scheduling, so that could be ambiguous.
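P.S. For concreteness, the "skeleton file" suggestion above means something as small as this. It's a sketch under assumed names (madlibs.py and test_madlibs.py are my placeholders, not what we used last night); the only goal is that "python test_madlibs.py" already runs, and fails for an obvious reason, before the first rotation starts.

    # test_madlibs.py -- assumes an (initially empty) madlibs.py sits next to it.
    import unittest

    import madlibs  # the module the pairs will grow over the evening


    class MadlibsTest(unittest.TestCase):
        def test_placeholder(self):
            # The first pair replaces this with the evening's first real test.
            self.fail("Replace me with a real test.")


    if __name__ == "__main__":
        unittest.main()

Having the deliberately failing placeholder there means the very first action of the evening is running the tests, which helps set the rhythm.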
