> I think its dangerous to automatically say that libfoo-a.b.c is going to
> be the same across OS's, as that assumes the compiler is the same.
> With the case of gcc, I'd be wary of that.

Agreed -- libraries were just an example; my point covers *all*
dependencies, including the compiler and the kernel.

> As for minor bumps not being a worry, what if the part of the library
> that caused the bump is something that you really depend on?

My question is: do minor versions of a library introduce any API
changes, or are minor releases generally backward compatible? I may
be wrong in my interpretation of major vs. minor versioning, though.
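Concretely, the interpretation I have in mind could be sketched like
this (assuming the semantic-versioning convention, where a major bump
may break the API, a minor bump adds features backward-compatibly, and
a patch bump only fixes bugs; the version strings are illustrative):

```python
def bump_type(old: str, new: str) -> str:
    """Classify the version change between two 'a.b.c' strings."""
    old_parts = [int(p) for p in old.split(".")]
    new_parts = [int(p) for p in new.split(".")]
    # Walk major, then minor, then patch; the first component that
    # differs names the kind of bump.
    for name, o, n in zip(("major", "minor", "patch"), old_parts, new_parts):
        if n != o:
            return name
    return "none"

print(bump_type("1.2.3", "1.3.0"))  # minor
print(bump_type("1.2.3", "2.0.0"))  # major
```

Whether real-world libraries actually honor that convention is exactly
what I am unsure about.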

> After having done testing of software/hardware in the pre-open
> source world, I've come to the conclusion that its cutting corners to
> make assumptions, and no matter how much of a pain in the ass it
> is, testing everything, at least sometimes, is the right thing to do.
>
> --STeve Andre'

Agreed -- doing away with complete testing altogether is not an
option, but isn't the compatibility behavior of minor version bumps
predictable enough that we needn't run a full test cycle every time a
minor version is bumped?

I am not building a case against testing; I wanted to see whether the
process can be "optimized", so that the blanket rule of "test
everything if *anything* has changed" becomes "decide whether to test
*everything* based on *what* has changed".

Thanks for the feedback, folks.

-Amarendra
