I'm not sure what kind of automated testing you have in mind. Of course, a Fink developer who commits a package to CVS has checked that it compiles on his/her own machine. There are several levels of further testing that can be imagined:
Question 1: Are there hidden dependencies... things which should have been listed, but which the developer didn't notice because they were already installed on his/her system?

Question 2: Are there hidden conflicts with other packages that the developer doesn't happen to have installed on his/her system?

Question 3: Does it compile on a wide range of hardware, with a wide range of different things installed?

Question 4: OK, it compiles. Does it run? Does it do what it is supposed to do?

Questions 1 and 2 could be addressed rather well by automated systems. AFAIK, only one such system is in development, by a member of the Fink team; it is intended to build as many packages as possible, as frequently as possible. It could be used to help answer question 2, I suppose. (For a rough idea of what an automated check for question 1 might look like, see the P.S. below.)

Question 3 *might* be addressed by automated systems, but you would need a lot of them. Question 4 really needs feedback from actual human beings who want to use the software.

Since we want question 4 answered, I guess we figure we can get pretty good answers to 1, 2 and 3 from that same set of human beings. In fact, our standards are rather low: feedback from users is so rare that many of us will use even minimal feedback from a single user as an excuse to move something to the stable tree. Does that endanger the overall stability of the system? Quite possibly so...

-- Dave
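
P.S. To make question 1 a bit more concrete, here is a rough sketch of the kind of check an automated system might run. Everything in it is an assumption on my part: it presumes a freshly bootstrapped Fink tree (say, a spare volume or chroot with only the essential packages installed), the package list is made up, and the exact fink invocation is my best guess, not a description of the system mentioned above.

  #!/usr/bin/env python3
  # Hypothetical sketch: rebuild packages from scratch inside a pristine
  # Fink tree and flag anything that fails to build there. A failure in
  # a clean tree often means a BuildDepends entry is missing: the
  # maintainer's machine happened to have the library installed, the
  # clean tree doesn't.
  import subprocess

  PACKAGES = ["wget", "lynx", "gimp"]  # made-up sample, not a real list

  def builds_cleanly(pkg):
      """Attempt a from-scratch build of pkg; return (ok, last stderr line)."""
      result = subprocess.run(
          ["fink", "--yes", "rebuild", pkg],
          capture_output=True, text=True,
      )
      tail = result.stderr.strip().splitlines()[-1:] or ["(no output)"]
      return result.returncode == 0, tail[0]

  for pkg in PACKAGES:
      ok, tail = builds_cleanly(pkg)
      if not ok:
          print("possible hidden dependency in %s: %s" % (pkg, tail))

A similar loop could chase question 2 by comparing the file lists of freshly built packages and flagging overlaps, since two packages that install the same file cannot coexist.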