"Dann Corbit" <[EMAIL PROTECTED]> writes:

>> -----Original Message-----
>> From: Jason Earl [mailto:[EMAIL PROTECTED] 
>> Sent: Friday, June 20, 2003 3:32 PM
>> To: Dann Corbit
>> Cc: Jason Earl; The Hermit Hacker; PostgreSQL-development
>> Subject: Re: [HACKERS] Two weeks to feature freeze
>> "Dann Corbit" <[EMAIL PROTECTED]> writes:
>> >> 
>> >> Why couldn't you just release the win32 version of 7.4 when
>> >> it was finished.  If it takes an extra month then that just 
>> >> gives you guys the chance to circulate *two* press releases.  
>> >> The Native Win32 port is likely to make a big enough splash 
>> >> all by itself.
>> >
>> > A formal release needs a big testing effort.  Two separate releases 
>> > will double the work of validation.
>> There are several problems with that statement.  The first is 
>> that PostgreSQL's "testing effort" happens right here on this 
>> mailing list. 
> That's not exactly reassuring.  There is no regression test that
> gets formal acceptance?!

Yes, there are regression tests, and new tests get invented all the
time whenever the real world finds new bugs.  Regression tests are
excellent for making sure that you don't make the same mistake twice,
but they aren't a substitute for handing the code over to actual end
users.

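To illustrate the point: a regression test encodes a bug that was found
and fixed once, so it can never quietly reappear.  Here is a minimal
sketch of the idea in Python; it is a hypothetical example, not part of
PostgreSQL's actual suite (which uses SQL scripts driven by pg_regress):

```python
# Sketch of the regression-test idea: once a bug is fixed, a test is
# added so the same mistake cannot slip back in.  The parser and the
# bug it guards against are both hypothetical.

def parse_bool(text):
    """Parse a boolean literal leniently."""
    # Original (hypothetical) bug report: 'TRUE ' with trailing
    # whitespace was rejected.  The fix strips whitespace and
    # compares case-insensitively.
    normalized = text.strip().lower()
    if normalized in ("true", "t", "yes", "on", "1"):
        return True
    if normalized in ("false", "f", "no", "off", "0"):
        return False
    raise ValueError("invalid boolean literal: %r" % text)

# Regression tests: each assertion pins down a bug found in the field.
assert parse_bool("TRUE ") is True   # the original bug report
assert parse_bool(" f") is False     # a related edge case
assert parse_bool("on") is True
```

Such tests are cheap insurance against repeat mistakes, but by
construction they only cover bugs someone has already hit.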
>> The various PostgreSQL hackers code stuff up, 
>> and we try to break it.  There's very little /effort/ 
>> involved.  People that want the new features go out on a limb 
>> and start using them.  If they don't work, then they bring it 
>> up on the list.  If they do work then very little gets said.
>> As it now stands Tom Lane is on the record as stating that 
>> the new Win32 version isn't going to be ready for production 
>> anyhow.  If anything the Win32 version *should* get released 
>> separately simply because we don't want people mistaking the 
>> Win32 version as being up to the PostgreSQL team's high 
>> standards.  Those people that want the Win32 version to 
>> become production ready are going to have to risk their 
>> precious data.  Otherwise, the Win32 version will likely 
>> remain a second class citizen forever.
>> The fact of the matter is that the Win32 specific bits are 
>> the parts that are likely to break in the new port.  If 
>> anything the Windows version will *benefit* from an earlier 
>> *nix release because the *nix users will chase down the bugs 
>> in the new PostgreSQL features.  Once the *nix version is up 
>> to 7.4.2 (or so) then a Windows release of 7.4.2 should allow 
>> the PostgreSQL hackers to simply chase down Windows specific problems.
> Then using the same logic, the new Windows version should wait
> indefinitely, since the *nix version will always be shaking out
> bugs.

That's not true at all.  Despite the excellent work by the PostgreSQL
team, and despite the beta testing that will be done by volunteers, if
history repeats itself, there will be problems with version 7.4.0,
even on platforms that have been well supported by PostgreSQL forever.
I am not saying that we should hold off indefinitely on the Win32
port, I am simply saying that it probably wouldn't hurt to shake out
the normal .0 release bugs before throwing the unique Win32 bugs into
the mix.

My guess is that reported Win32 bugs are going to get blamed on the Win32
specific bits at first no matter what happens.  Unless the bug can be
demonstrated on a *nix version it will be assumed that the problem is
a shortcoming of the Win32 specific code.  That's just common sense.

>> Adding a new platform--especially a platform as diverse from 
>> the rest of PostgreSQL's supported platforms as Windows--is 
>> what adds the work. Testing the new platform is relatively 
>> easy.  All you need to do is to start using the Win32 version 
>> with real live data.
> That is not testing.  Using the world as your beta team seems to be
> a business model used by a few software giants that is largely
> frowned upon.  I would think that there is an opportunity to do
> things differently. [Read 'properly'].

Hmm... I must have missed the huge corporation paying for in house
testing of PostgreSQL.  In the Free Software world the "beta team" is
all of those people that need the new features so badly that they are
willing to risk their own data and hardware testing it.  You might not
like the way that this sounds, but in practice it works astoundingly
well.  Chances are you can't name 25 pieces of commercial software
that run on as wide an array of hardware platforms and OSes as
PostgreSQL does, and PostgreSQL has earned a well-deserved reputation
for being a solid piece of software.  Clearly the PostgreSQL team is doing
*something* right.

> We (at CONNX Solutions Inc.) have a formal release procedure that
> includes many tens of thousands of automated tests using dozens of
> different platforms.  There are literally dozens of machines (I
> would guess 70 or so total) running around the clock for 7 days
> before we even know if we have a release candidate.  The QA team is
> distinct from the development team, and if they say "FLOP!" the
> release goes nowhere.  No formal release until QA passes it.

And yet when you release the software your customers invariably find
bugs, don't they?

Don't get me wrong.  I am all for testing, regression tests, and such,
but the fact of the matter is that there simply is no way that a
centralized authority could afford to really test PostgreSQL on even a
fraction of the supported platforms and configurations.  The way it
stands now the PostgreSQL team gets the best testbed you could hope
for (the world) for the price of hosting a few web and FTP servers
(thanks Marc).

PostgreSQL betas almost certainly get tested on an order of magnitude
more systems than the 70 that you boast about.  PostgreSQL gets tested
on everything from Sony Playstations to AMD Opterons to IBM
mainframes.  Heck, there are probably more than 70 machines running
CVS versions of PostgreSQL right this minute (Marc, any download
numbers to back this up?).  More importantly, PostgreSQL gets tested
on a wide variety of real world tasks, and not some lab application or
some test scripts.  As I have mentioned several times before,
PostgreSQL gets tested by folks that put their actual data into the
beta versions and try it out.  Even with this sort of testing,
however, bugs still make it into the release version.  Even with a
large group of beta testers we simply can't test all of the possible
ways that the software might get used on every available platform.

> If there is no procedure for PostgreSQL of this nature, then there
> really needs to be.  I am sure that MySQL must have something in place
> like that.  Their "Crash-Me" test suite shows (at least) that they have
> put a large effort into testing.

Yow!  Have you read the crash-me script?  It's possible that they have
improved dramatically in the year or so since I last took a look at
them, but it used to be that MySQL's crash-me scripts were the worst
amalgamation of marketeering and poor relational theory ever conceived
by the human mind.  Basically the crash-me scripts were nothing more
than an attempt to put MySQL's competition in the worst light
possible.  Any time a competitor differed from MySQL, an error would
be generated (despite the fact that it was very likely MySQL that was
wrong).

MySQL even tried to pawn this single-process monstrosity off as a
"benchmark."  What a laugh.  It was a perfectly valid benchmark if
your database was designed to be used by one user at a time and one of
your biggest criteria was the time it took to create a valid
connection from a perl script.

PostgreSQL's regression tests (IMHO) are much better than MySQL's
crash-me scripts.


---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]