Re: PPIG discuss: When agile goes bad....
Errol asks (his question 1) why supporters of particular approaches to software development are talking past each other and not necessarily hearing what each other is saying. I would hypothesize that it is because they do not understand each other. Computer scientists are trying to solve a very difficult problem, how to formalize the writing of software, without the background to deal with the fundamental cause of the problem. (I mean humans. Humans define what the software is to do, execute the creation of that software, and accept the result.) Because they are not given a formal education in the major issues (organizational behavior and related subjects), they must internalize the problem and recast it in terms they understand. The terms that one computer scientist uses and coins will most likely not match anyone else's.

On top of this, consider that the computer science discipline is not a science in the same sense as psychology. What I mean is that computer scientists tend not to observe phenomena, hypothesize about relationships, and then perform experiments designed to support or refute the hypothesis. The result is that very intelligent people propose solutions to a very difficult problem using language and concepts unique to them and their experience. When describing this to others, the lack of a shared set of terms makes it difficult to share ideas or see the merits of other approaches. (Computer scientist B reads computer scientist A's paper and then tries to map the concepts there to his own understanding rather than to a commonly shared set of terms. When they don't match, communication breaks down.) After that, they perform no science to see if their method is truly beneficial versus other methods.

Take for example the Data Flow paradigm. It has data and operators. The object-oriented paradigm also has data and operators, but clumps them together and calls them objects. They are the same concepts with different packaging, either of which may or may not have benefits to the programmer or the program to be written.

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
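To make the packaging point above concrete, here is a minimal sketch (not from the original post; the names and the computation are purely illustrative) of the same data and operators expressed first in a data-flow style and then in an object-oriented style:

    # Data-flow style: data and operators are kept separate, and values
    # flow through a pipeline of independent operations.
    def scale(values, factor):
        return [v * factor for v in values]

    def total(values):
        return sum(values)

    readings = [1.0, 2.5, 4.0]
    result_dataflow = total(scale(readings, 2.0))

    # Object-oriented style: the same data and operators, clumped
    # together into an object.
    class Readings:
        def __init__(self, values):
            self.values = values

        def scale(self, factor):
            return Readings([v * factor for v in self.values])

        def total(self):
            return sum(self.values)

    result_oo = Readings(readings).scale(2.0).total()

    assert result_dataflow == result_oo == 15.0

Either packaging computes the same result; whether one or the other helps a given programmer on a given program is exactly the kind of question that would need the observation and experiment described above.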
RE: PPIG discuss: When agile goes bad....
Alan,

Indeed this sounds very interesting, and an avenue worthy of being pursued. My colleagues and I have been researching the human and social aspects of software development, with a particular focus on software practice, for quite some time. There is renewed interest in the area and the community is gradually growing, partly fuelled by the agile movement but also partly fuelled by events such as the 20th anniversary 'celebration' of Peopleware at ICSE this year. We are hoping that there will be a workshop on this subject at ICSE 2008 (but proposals have only just been submitted, so we will have to see).

Wearing a different hat, I'm the associate editor for IEEE Software on exactly this topic (human and social aspects of software development), so if there's anyone out there with research work (or any other work, in fact) they want to publish to a practitioner audience, then please get in touch (off the mailing list, I suggest ;-)).

Thanks,
Helen

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alan Blackwell
Sent: 12 October 2007 14:19
To: Andrew Ko
Cc: Derek M Jones; Hanania Salzer; discuss@ppig.org; [EMAIL PROTECTED]
Subject: Re: PPIG discuss: When agile goes bad

A marvellous analysis of the reasons for the "mathematical" (or formal) versus human orientation in software engineering can be found in Phil Agre's chapter "Conceptions of the user in computer system design". So far as I know, his observations regarding the "user" as human person have not been extended to observations about the software engineer as human person. I offer this as a research opportunity to somebody out there ...

Agre, Philip (1995) Conceptions of the user in computer system design. In P. Thomas (ed.) Social and Interactional Dimensions of Human-Computer Interfaces. Cambridge, CUP, pp. 67-106.

Alan
--
Alan Blackwell
Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/users/afb21/
Phone: +44 (0) 1223 334418

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
Re: PPIG discuss: When agile goes bad....
A marvellous analysis of the reasons for the "mathematical" (or formal) versus human orientation in software engineering can be found in Phil Agre's chapter "Conceptions of the user in computer system design". So far as I know, his observations regarding the "user" as human person have not been extended to observations about the software engineer as human person. I offer this as a research opportunity to somebody out there ... Agre, Philip (1995) Conceptions of the user in computer system design. In P. Thomas (ed.) Social and Interactional Dimensions of Human-Computer Interfaces. Cambridge, CUP, pp. 67-106. Alan -- Alan Blackwell Computer Laboratory, University of Cambridge http://www.cl.cam.ac.uk/users/afb21/ Phone: +44 (0) 1223 334418 -- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
RE: PPIG discuss: When agile goes bad....
> > In the paper that I mentioned in a previous posting, Wieringa claimed that much of the Software Engineering (SE) research does not apply scientific methods. Not only did I agree with him, but I claimed that the situation is even worse than that; in many of the SE papers the underlying research questions are not scientific. I presented this position at the 2007 European Conference on Computing and Philosophy.
>
> I agree with what you say about the problem (I would throw out a large chunk of the mathematical approach to the solution proposed in your paper).

I will apologise for not having read the paper. I am currently away from my home university and will be travelling for a while. However, I agree that many of the "research questions are not scientific".

There was a remark in a previous email (I don't recall who from) that suggested I might be misapplying Kuhn's idea of paradigms. I will accept that this might be the case, as I tend to synthesize ideas to look for commonalities, but I would also contend that we need to ask questions about:

1) why supporters of particular approaches to software development are talking past each other and not necessarily hearing what each other is saying;

2) whether the concepts of paradigm change exposed by Kuhn are only applicable to scientific changes, or whether there is a more generic truth here related to changes in the underlying assumptions and approaches to a subject or a practice;

3) when we present a problem for a competing approach, whether we present that problem from a core set of concepts that are applicable to all software development approaches.

I am not using the term software engineering because I find that this phrase carries with it assumptions about an appropriate way to develop software. OK, I can see that this statement can also be challenged as simplistic, etc., but I see this as one of the problems within this context: so many terms carry assumptions for one group or other, and often those groups use those terms in incompatible ways.

My current research is exploring practitioners' awareness of object-oriented software development. My method isn't scientific but comes originally from an educational context. My results so far from the analysis of 31 interviews show that there is a diversity of understanding. They also show some interesting issues around some key concepts. It is easy to say that some of these variations are errors in understanding, but are they? I am not saying that what I am uncovering are different paradigms. They aren't, but it does show that there is less consistency than we might expect within this particular programming paradigm.

Errol Thompson
Massey University
Wellington 6140.
E-Mail: [EMAIL PROTECTED] [EMAIL PROTECTED]
Web: www.teach.thompsonz.net

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
Re: PPIG discuss: When agile goes bad....
As one of many working to shed light on the human side of software engineering in academia, I thought I'd raise a few points. It is true that many academics prefer the mathematical approach. But I've also spoken to several dozen over the years with other perspectives. For example, much of the funding outside the US is biased towards certain SE problems and approaches. I don't know why this bias persists, but it affects the work that gets done. Many faculty, when thinking of their tenure case, are hindered by the expectations of their CS faculty peers, choosing problems and methods that seem legitimate from an outsider's perspective.

Another interesting factor is the skillset of many academic researchers. Many realize that exploring human factors in SE requires skills that they never learned. Worse yet, disciplinary boundaries in universities prevent collaborations that would fill this gap. I think the actual number of researchers who want to explore and account for human factors in their research is far greater than the number that actually do (and succeed). The rest face a number of long-standing barriers outside their direct control. Some of us have to take the risk of breaching these barriers before others will feel safe to do the same.

Andy
• • • • • • • • • • • •
Ph.D. Candidate
HCI Institute
Carnegie Mellon University
http://www.andrewko.net
[Sent from my iPhone]

On Oct 12, 2007, at 8:18 AM, Derek M Jones <[EMAIL PROTECTED]> wrote:

Hanania,

In the paper that I mentioned in a previous posting, Wieringa claimed that much of the Software Engineering (SE) research does not apply scientific methods. Not only did I agree with him, but I claimed that the situation is even worse than that; in many of the SE papers the underlying research questions are not scientific. I presented this position at the 2007 European Conference on Computing and Philosophy.

I agree with what you say about the problem (I would throw out a large chunk of the mathematical approach to the solution proposed in your paper). There is a serious problem with the academic software engineering culture. Many of the people involved don't feel they have to change and there is no real pressure to change. So academic SE is stuck in the deep hole of being staffed by people interested in the mathematical approach and uninterested in experiments, with industry sucking out all of the practically oriented people, and a reward system that favours the status quo.

The core science that universities teach to would-be software engineers is computer science. The anecdotes presented in earlier postings of this thread indicated the existence of a problem for which computer science does not seem to be the source of a cure. The community of PPIG may be interested in the proposition that we, software engineers, must be well educated in computer science, but the field of science to which the above-mentioned problems belong is not mathematics, but psychology (which includes sociology, etc.). The extended abstract to the conference is available on request. So, Nick, the answer that I humbly offer to your question is twofold. One, Wieringa provides a rather detailed answer, which I couldn't write better. Two, what makes science what it is, is not only the methods but the questions asked.
--
Hanania

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Nick Flor
Sent: Saturday, October 06, 2007 13:46
To: Hanania Salzer; discuss@ppig.org
Subject: RE: PPIG discuss: When agile goes bad

Hanania, what scientific methods would you propose to evaluate competing software development perspectives? BTW, I think Fleck's "Genesis and Development of a Scientific Fact" is far more relevant to the discussion of method adoption than Kuhn's.

- Nick
--
Nick V. Flor, PhD
Associate Professor, Information Systems
Anderson School of Management
University of New Mexico
[EMAIL PROTECTED]

--
Derek M. Jones                 tel: +44 (0) 1252 520 667
Knowledge Software Ltd         mailto:[EMAIL PROTECTED]
Applications Standards Conformance Testing    http://www.knosof.co.uk

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
Re: PPIG discuss: When agile goes bad....
Hanania,

In the paper that I mentioned in a previous posting, Wieringa claimed that much of the Software Engineering (SE) research does not apply scientific methods. Not only did I agree with him, but I claimed that the situation is even worse than that; in many of the SE papers the underlying research questions are not scientific. I presented this position at the 2007 European Conference on Computing and Philosophy.

I agree with what you say about the problem (I would throw out a large chunk of the mathematical approach to the solution proposed in your paper). There is a serious problem with the academic software engineering culture. Many of the people involved don't feel they have to change and there is no real pressure to change. So academic SE is stuck in the deep hole of being staffed by people interested in the mathematical approach and uninterested in experiments, with industry sucking out all of the practically oriented people, and a reward system that favours the status quo.

The core science that universities teach to would-be software engineers is computer science. The anecdotes presented in earlier postings of this thread indicated the existence of a problem for which computer science does not seem to be the source of a cure. The community of PPIG may be interested in the proposition that we, software engineers, must be well educated in computer science, but the field of science to which the above-mentioned problems belong is not mathematics, but psychology (which includes sociology, etc.). The extended abstract to the conference is available on request. So, Nick, the answer that I humbly offer to your question is twofold. One, Wieringa provides a rather detailed answer, which I couldn't write better. Two, what makes science what it is, is not only the methods but the questions asked.

--
Hanania

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Nick Flor
Sent: Saturday, October 06, 2007 13:46
To: Hanania Salzer; discuss@ppig.org
Subject: RE: PPIG discuss: When agile goes bad

Hanania, what scientific methods would you propose to evaluate competing software development perspectives? BTW, I think Fleck's "Genesis and Development of a Scientific Fact" is far more relevant to the discussion of method adoption than Kuhn's.

- Nick
--
Nick V. Flor, PhD
Associate Professor, Information Systems
Anderson School of Management
University of New Mexico
[EMAIL PROTECTED]

--
Derek M. Jones                 tel: +44 (0) 1252 520 667
Knowledge Software Ltd         mailto:[EMAIL PROTECTED]
Applications Standards Conformance Testing    http://www.knosof.co.uk

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
RE: PPIG discuss: When agile goes bad....
Nick,

In the paper that I mentioned in a previous posting, Wieringa claimed that much of the Software Engineering (SE) research does not apply scientific methods. Not only did I agree with him, but I claimed that the situation is even worse than that; in many of the SE papers the underlying research questions are not scientific. I presented this position at the 2007 European Conference on Computing and Philosophy.

The core science that universities teach to would-be software engineers is computer science. The anecdotes presented in earlier postings of this thread indicated the existence of a problem for which computer science does not seem to be the source of a cure. The community of PPIG may be interested in the proposition that we, software engineers, must be well educated in computer science, but the field of science to which the above-mentioned problems belong is not mathematics, but psychology (which includes sociology, etc.). The extended abstract to the conference is available on request.

So, Nick, the answer that I humbly offer to your question is twofold. One, Wieringa provides a rather detailed answer, which I couldn't write better. Two, what makes science what it is, is not only the methods but the questions asked.

--
Hanania

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Nick Flor
Sent: Saturday, October 06, 2007 13:46
To: Hanania Salzer; discuss@ppig.org
Subject: RE: PPIG discuss: When agile goes bad

Hanania, what scientific methods would you propose to evaluate competing software development perspectives? BTW, I think Fleck's "Genesis and Development of a Scientific Fact" is far more relevant to the discussion of method adoption than Kuhn's.

- Nick
--
Nick V. Flor, PhD
Associate Professor, Information Systems
Anderson School of Management
University of New Mexico
[EMAIL PROTECTED]
Re: PPIG discuss: When agile goes bad....
Hi,

Before everyone rushes off and reads Ludwik Fleck, Thomas Kuhn etc., it might be better to start with something like Boehm and Turner's book "Balancing Agility and Discipline" [1]. It contains a risk-based model for choosing (and balancing) between agile and plan-driven methods. Like most work from Boehm, it's well worth a read.

Returning to Ruven's original story about development problems around a framework, I think the book "Extreme Programming in Action" by Lippert, Roock and Wolf [2] is relevant. In this book they draw attention to the way XP has been appropriated for types of development it was never designed for: (1) the development of application frameworks, (2) the development of eBusiness applications, (3) software product development, and (4) outsourcing. They also discuss some of the common work-arounds employed by companies who have appropriated XP for these. I'm not sure how this generalises to Scrum etc.

For anyone interested in Kuhn, I'd suggest reading Sharrock and Read's [3] discussion of his work. It will help in avoiding some common misinterpretations of Kuhn, particularly the over-emphasis put on paradigms and paradigm shifts (as we can see in this discussion, people tend to see just about anything as a paradigm).

[1] Boehm, B., and Turner, R. 2004. Balancing Agility and Discipline. Addison Wesley, Boston.
[2] Lippert, M., Roock, S., and Wolf, H. 2002. Extreme Programming in Action: Practical Examples from Real World Projects. John Wiley and Sons, New York.
[3] Sharrock, W., and Read, R. 2002. Kuhn: Philosopher of Scientific Revolution. Polity, Cambridge.

I hope these suggestions are of interest.

Regards,
John.

On 09/10/2007, Clendon Gibson <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> In reading this thread about agile methods for managing software, I can't help but wonder if the point of the method has been lost. This might explain why people schooled in a particular method could still fail to get the benefits promised.
>
> All methods are a set of heuristics to confront and manage the complex issues of software development. But what are the issues? It strikes me that a person who knows the method, but does not have a solid grasp of the issues the method is to address, will have a failing project.
>
> This would seem an easy state to arrive at. Any college student can read about Agile, but the same student will be hard pressed to have the raw experience necessary to have the insight into why Agile works. More importantly, the student is unlikely to have the insight to know when a given method will work, and when it won't.
>
> It seems to me that a book styled like "The Mythical Man-Month" is more appropriate because it discusses the issues, while not making anything more than suggestions for how to handle them.
>
> The "ideal" method would most likely vary based on the manager's experience, the resources available, and the goals of the project.
>
> I guess what I am saying is that the tool itself, whether Agile or another method, is blameless. Understanding when to use the tool, as well as how to use it, is what really counts. Unless of course you have a sea of nails and one screwdriver.
> ----- Original Message -----
> From: Hanania Salzer <[EMAIL PROTECTED]>
> To: discuss@ppig.org
> Sent: Saturday, October 6, 2007 3:45:13 AM
> Subject: RE: PPIG discuss: When agile goes bad
>
> Errol, you say that in the debate over agile methods some people fail to put aside their "own paradigm blinkers and seek to find maybe another framework for evaluating the solution". To continue along your line, I would add that both normal science and revolutionary science use the same rigor. Therefore, we have two issues here. (May I mention that my research revolves around occasional failure to identify and separate among issues ...)
>
> - One is openness to the possibility that there are "other, equally valid and possibly challenging perspectives".
>
> - Another one is that the alternative, potential perspectives should not be based on anecdotes alone, but mostly on scientific methods, which are, in this case, empiric ones.
>
> I presume that this segregation is in line with Kuhn's SSR.
>
> Hanania Salzer,
> Tel-Aviv University, School of Education
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Errol Thompson
> Sent: Saturday, October 06, 2007 08:02
> To: discuss@ppig.org
> Subject: RE: PPIG discuss: When agile goes bad
>
> From a quick look at the article, I would agree with many of its points. However, I would also suggest reading
Re: PPIG discuss: When agile goes bad....
Hi all,

In reading this thread about agile methods for managing software, I can't help but wonder if the point of the method has been lost. This might explain why people schooled in a particular method could still fail to get the benefits promised.

All methods are a set of heuristics to confront and manage the complex issues of software development. But what are the issues? It strikes me that a person who knows the method, but does not have a solid grasp of the issues the method is to address, will have a failing project.

This would seem an easy state to arrive at. Any college student can read about Agile, but the same student will be hard pressed to have the raw experience necessary to have the insight into why Agile works. More importantly, the student is unlikely to have the insight to know when a given method will work, and when it won't.

It seems to me that a book styled like "The Mythical Man-Month" is more appropriate because it discusses the issues, while not making anything more than suggestions for how to handle them.

The "ideal" method would most likely vary based on the manager's experience, the resources available, and the goals of the project.

I guess what I am saying is that the tool itself, whether Agile or another method, is blameless. Understanding when to use the tool, as well as how to use it, is what really counts. Unless of course you have a sea of nails and one screwdriver.

----- Original Message -----
From: Hanania Salzer <[EMAIL PROTECTED]>
To: discuss@ppig.org
Sent: Saturday, October 6, 2007 3:45:13 AM
Subject: RE: PPIG discuss: When agile goes bad

Errol, you say that in the debate over agile methods some people fail to put aside their "own paradigm blinkers and seek to find maybe another framework for evaluating the solution". To continue along your line, I would add that both normal science and revolutionary science use the same rigor. Therefore, we have two issues here. (May I mention that my research revolves around occasional failure to identify and separate among issues ...)

- One is openness to the possibility that there are "other, equally valid and possibly challenging perspectives".

- Another one is that the alternative, potential perspectives should not be based on anecdotes alone, but mostly on scientific methods, which are, in this case, empiric ones.

I presume that this segregation is in line with Kuhn's SSR.

Hanania Salzer,
Tel-Aviv University, School of Education

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Errol Thompson
Sent: Saturday, October 06, 2007 08:02
To: discuss@ppig.org
Subject: RE: PPIG discuss: When agile goes bad

From a quick look at the article, I would agree with many of its points. However, I would also suggest reading beyond our own domain, and I am particularly thinking of Thomas Kuhn's (1996) work on Scientific Revolutions. A key issue there is how our paradigms for our field of research can close us off to other equally valid and possibly challenging perspectives. I don't want to reduce the rigour required in research, but neither do I want to discard an alternative paradigm within my field without fully exploring its foundations and understanding whether it has anything to contribute. If I am to do this then I need to be able to put aside some of my own paradigm blinkers and seek to find maybe another framework for evaluating the solution. This, I would contend, is not happening in the debate related to agile methods.

Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.).
Chicago: University of Chicago Press.
RE: PPIG discuss: When agile goes bad....
Hanania, what scientific methods would you propose to evaluate competing software development perspectives? BTW, I think Fleck's "Genesis and Development of a Scientific Fact" is far more relevant to the discussion of method adoption than Kuhn's. - Nick -- Nick V. Flor, PhD Associate Professor, Information Systems Anderson School of Management University of New Mexico [EMAIL PROTECTED] From: [EMAIL PROTECTED] on behalf of Hanania Salzer Sent: Sat 10/6/2007 2:45 AM To: discuss@ppig.org Subject: RE: PPIG discuss: When agile goes bad Errol, you say that in the debate over agile methods some people fail to put aside their "own paradigm blinkers and seek to find maybe another framework for evaluating the solution". To continue along your line, I would add that both normal science and revolutionary science use the same rigor. Therefore, we have two issues here. (May I mention that my research revolves around occasional failure to identify and separate among issues...) - One is openness to the possibility that there are "other, equally valid and possibly challenging perspectives". - Another one is, that the alternative, potential perspectives should not be based on anecdotes alone, but mostly on scientific methods, which are, in this case, empiric ones. I presume that this segregation is in line with Kuhn's SSR. Hanania Salzer, Tel-Aviv University, School of Education CONFIDENTIALITY NOTICE "This e-mail, including all attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited unless specifically provided for under the New Mexico Inspection of Public Records Act. If you are not the intended recipient, please contact the sender and destroy all copies of this message. Any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of the Anderson School at UNM." -- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
RE: PPIG discuss: When agile goes bad....
Errol, you say that in the debate over agile methods some people fail to put aside their "own paradigm blinkers and seek to find maybe another framework for evaluating the solution". To continue along your line, I would add that both normal science and revolutionary science use the same rigor. Therefore, we have two issues here. (May I mention that my research revolves around occasional failure to identify and separate among issues.)

- One is openness to the possibility that there are "other, equally valid and possibly challenging perspectives".

- Another one is that the alternative, potential perspectives should not be based on anecdotes alone, but mostly on scientific methods, which are, in this case, empiric ones.

I presume that this segregation is in line with Kuhn's SSR.

Hanania Salzer,
Tel-Aviv University, School of Education

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Errol Thompson
Sent: Saturday, October 06, 2007 08:02
To: discuss@ppig.org
Subject: RE: PPIG discuss: When agile goes bad

From a quick look at the article, I would agree with many of its points. However, I would also suggest reading beyond our own domain, and I am particularly thinking of Thomas Kuhn's (1996) work on Scientific Revolutions. A key issue there is how our paradigms for our field of research can close us off to other equally valid and possibly challenging perspectives. I don't want to reduce the rigour required in research, but neither do I want to discard an alternative paradigm within my field without fully exploring its foundations and understanding whether it has anything to contribute. If I am to do this then I need to be able to put aside some of my own paradigm blinkers and seek to find maybe another framework for evaluating the solution. This, I would contend, is not happening in the debate related to agile methods.

Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago: University of Chicago Press.
RE: PPIG discuss: When agile goes bad....
From a quick look at the article, I would agree with many of its points. However, I would also suggest reading beyond our own domain, and I am particularly thinking of Thomas Kuhn's (1996) work on Scientific Revolutions. A key issue there is how our paradigms for our field of research can close us off to other equally valid and possibly challenging perspectives. I don't want to reduce the rigour required in research, but neither do I want to discard an alternative paradigm within my field without fully exploring its foundations and understanding whether it has anything to contribute. If I am to do this then I need to be able to put aside some of my own paradigm blinkers and seek to find maybe another framework for evaluating the solution. This, I would contend, is not happening in the debate related to agile methods.

Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago: University of Chicago Press.

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Hanania Salzer
Sent: Friday, 5 October 2007 12:38 p.m.
To: discuss@ppig.org
Subject: RE: PPIG discuss: When agile goes bad

Errol wrote: "...Let's stop knocking others because their paradigm of software development doesn't fit ours and look at ways of learning from each other's strengths and seeing our weaknesses. ..."

Yes, Errol, but that is far from being sufficient for a scientist. May I suggest the following paper by Roel Wieringa (from the University of Twente, The Netherlands)?

http://www.springerlink.com/content/l8rqr26x2530m71w/fulltext.html

Wieringa, R. J. (2005). Requirements researchers: are we really doing research? Requirements Engineering, 10(4), 304-306.

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
RE: PPIG discuss: When agile goes bad....
Errol wrote: "...Let's stop knocking others because their paradigm of software development doesn't fit ours and look at ways of learning from each other's strengths and seeing our weaknesses. ..."

Yes, Errol, but that is far from being sufficient for a scientist. May I suggest the following paper by Roel Wieringa (from the University of Twente, The Netherlands)?

http://www.springerlink.com/content/l8rqr26x2530m71w/fulltext.html

Wieringa, R. J. (2005). Requirements researchers: are we really doing research? Requirements Engineering, 10(4), 304-306.

Best regards,
Hanania Salzer
Tel-Aviv University, School of Education

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
RE: PPIG discuss: When agile goes bad....
Ruven,

They may be true stories but, from my reading of the agile literature, I would see the practices represented in the stories as inaccurate representations of agile practice. In the most recent, you had "code for today". How can that be interpreted? The coders decided, without consultation with the customer, what was good to program today. Yes, I agree, a bad practice. The naming is clearly designed to be critical of agile practices without wanting to look at any rigour or cross checks that might be in those practices. If what was selected as the stories for this iteration or today's coding were based on the priorities and risk factors worked out with the customer, is it still agile practice? Would this ensure that the project had some focus to deliver something of "benefit to the customer"? There were other aspects of the story that showed a lack of understanding of agile practices and how they may be applied to ensure maintainable design.

The thing is that any development practice can have disaster stories told that reflect badly on the practices. Also, having been on commercial projects that have used upfront design strategies and agile practices, I have come to see the strengths and weaknesses of these approaches. I have used some of these stories in my own teaching to emphasise why we want practices that will address these issues. My preference is clearly agile. It is also clear to me that many upfront design projects would benefit from the use of some agile practices such as behaviour-driven development or refactoring. Too many of the design-upfront projects that I have been involved with have only come to completion because the programmers changed the design so it worked. They often floundered in maintenance because of duplicate coding and the lack of refactoring. They also ended up with redundant documentation that no longer reflected the operating code, often because the designers had moved on to other projects or parts of the project and didn't want to talk with the programmers. Whose fault was this? No doubt the programmers, the designers, the analysts, the project managers, the changing requirements, the ...

In this message you say: "From the point of view of nearly all of the writing in the agile world, the statement that 'indeed different organizations have different needs' would be considered heresy, much less something that one could go ahead and stipulate." From what I understand of agile methods, they are tailored for each project based on the client's and the project's requirements. Yes, there are some core practices, but the team has flexibility to tailor the practices and tool set to meet the project requirements. Schwaber may not say that Scrum is unsuitable for some organisations, but I am equally sure that he doesn't say that Scrum should be implemented identically in all organisations. Sure, there are some core practices that they would want to see in all projects.

Let's stop knocking others because their paradigm of software development doesn't fit ours and look at ways of learning from each other's strengths and seeing our weaknesses. I say this because I quite like some of the model-driven ideas, but I would like to temper their thinking with some test- or behaviour-driven ideas. Isn't there something to be gained by having a well modelled system that is also proven with a solid set of automated tests that can be updated and reapplied each time a system change is requested?
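A minimal sketch of what such a reapplicable automated test might look like, using Python's built-in unittest module; the Account class and the behaviours checked are invented purely for illustration and are not from this thread:

    import unittest

    # Hypothetical code under test (illustrative only).
    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.balance += amount

    # Behaviour-style tests: each test is named after an expected behaviour
    # and is re-run, unchanged, every time the system is modified.
    class DepositBehaviour(unittest.TestCase):
        def test_deposit_increases_balance(self):
            account = Account(balance=100)
            account.deposit(25)
            self.assertEqual(account.balance, 125)

        def test_non_positive_deposit_is_rejected(self):
            account = Account()
            with self.assertRaises(ValueError):
                account.deposit(0)

    if __name__ == "__main__":
        unittest.main()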
Yes, let us not forget that software spends most of its life in maintenance and not in development. So what happens if we produce development methods that focus on maintainable software rather than on development? In my research into software development (yet to be published), a high percentage of those I interviewed saw design quality and maintainability as the critical aspects in software development. Many of these people also talked of using some agile practices to help achieve those objectives.

Let us not be blind to the possibilities presented by alternative paradigms to our own.

---
Errol Thompson
Massey University
PO Box 756
Wellington 6140
Phone +64 4 801 2794 x 6531
Mobile: +64 21 210 1662
Home: +64 4 938 5069
Skype: Kiwielt
MSN: [EMAIL PROTECTED]
Email: [EMAIL PROTECTED]
Web: www.teach.thompsonz.net
---

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ruven E Brooks
Sent: Thursday, 4 October 2007 11:27 p.m.
To: discuss@ppig.org
Subject: Re: PPIG discuss: When agile goes bad

[EMAIL PROTECTED] wrote on 10/02/2007 01:40:08 PM:
>
> I'm confused by the point of these anecdotes. Is there some study that backs up these stories?
>
> Without d
Re: PPIG discuss: When agile goes bad....
[EMAIL PROTECTED] wrote on 10/02/2007 01:40:08 PM:
>
> I'm confused by the point of these anecdotes. Is there some study that backs up these stories?
>
> Without defending the pros and cons of these (so called) agile methodologies we can stipulate that indeed different organizations have different needs. However, these anecdotes come across as an attack on non-formal methodologies and not a careful study of when and where organizations need to apply different methodologies.
>
> Frankly, it seems just an attempt to troll for controversy. Any methodology has aspects that can be abused.
>
> Cheers,
> Chris Dean

The "stories" are true stories, things that actually happened. If need be, I could have an outside auditor in to verify that. As things that actually happened in the real world, they're probably far better data than you'd get out of most controlled studies.

From the point of view of nearly all of the writing in the agile world, the statement that "indeed different organizations have different needs" would be considered heresy, much less something that one could go ahead and stipulate. I've been reading "Agile Project Management with Scrum" by Ken Schwaber. Nowhere does it say that Scrum might be unsuitable for some organizations; in fact, it attempts to suggest that when Scrum fails, it probably is due to inadequate training (Chapter 3).

Before we can do our "careful study of when and where organizations need to apply different methodologies," there has to be some awareness that this is, in fact, worth doing, because methodologies do, in fact, fail. It's also useful to have some real-world data on which to base theories of methodology suitability. I offer my "stories" as serving both purposes.

Ruven Brooks
Re: PPIG discuss: When agile goes bad....
I'm confused by the point of these anecdotes. Is there some study that backs up these stories?

Without defending the pros and cons of these (so called) agile methodologies, we can stipulate that indeed different organizations have different needs. However, these anecdotes come across as an attack on non-formal methodologies and not a careful study of when and where organizations need to apply different methodologies.

Frankly, it seems just an attempt to troll for controversy. Any methodology has aspects that can be abused.

Cheers,
Chris Dean

-- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/
Re: PPIG discuss: When agile goes bad....
Ruven E Brooks wrote: For other organizations, with a different set of pathologies, the medicine might be more often fatal than the disease. Ruven, I think this is the key insight of the story. And if you don't mind a shameless plug: in a recent study of seven small companies we found, among other things, that their practices seemed to depend considerably on the context in which they operated; and we observed that practices should not be prescribed generally, but with context in mind. We're presenting the paper in the RE'07 conference (in Delhi, on Oct 17). It's geared to requirements engineering, but it's also related to broader software project management issues. If you're interested, you can find an electronic copy of the paper here: http://www.cs.toronto.edu/~jaranda/pubs/REintheWild-RE07.pdf Thanks, Jorge -- PPIG Discuss List (discuss@ppig.org) Discuss admin: http://limitlessmail.net/mailman/listinfo/discuss Announce admin: http://limitlessmail.net/mailman/listinfo/announce PPIG Discuss archive: http://www.mail-archive.com/discuss%40ppig.org/