Re: CRC was: Re: [HACKERS] beta testing version

2000-12-12 Thread Nathan Myers

On Thu, Dec 07, 2000 at 07:36:33PM -0500, Tom Lane wrote:
 [EMAIL PROTECTED] (Nathan Myers) writes:
  2. I disagree with the way the above statistics were computed.  That eleven 
 million-year figure gets whittled down pretty quickly when you 
 factor in all the sources of corruption, even without crashes.  
 (Power failures are only one of many sources of corruption.)  They 
 grow with the size and activity of the database.  Databases are 
 getting very large and busy indeed.
 
 Sure, but the argument still holds.  If the net MTBF of your underlying
 system is less than a day, it's too unreliable to run a database that
 you want to trust.  Doesn't matter what the contributing failure
 mechanisms are.  In practice, I'd demand an MTBF of a lot more than a
 day before I'd accept a hardware system as satisfactory...

In many intended uses (such as Landmark's original plan?) it is not just 
one box, but hundreds or thousands.  With thousands of databases deployed, 
the MTBF (including power outages) for commodity hardware is well under a 
day, and there's not much you can do about that.

In a large database (e.g. 64GB) you have 8M blocks.  Each hash covers
one block.  With a 32-bit checksum, when you check one block, you have 
a 2^(-32) likelihood of missing an error, assuming there is one.  With 
8M (= 2^23) blocks, the best you can claim overall is about a 2^(-9)
chance of an error going undetected somewhere.

This is what I meant by "whittling".  A factor of ten or a thousand
here, another there, and pretty soon the possibility of undetected
corruption is something that can't reasonably be ruled out.
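
For what it's worth, the arithmetic is easy to check.  Here is a throwaway
C fragment (illustrative only, nothing proposed for the backend; compile
with -lm) that computes both the simple 2^23 * 2^(-32) bound and the exact
value:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double p_block = ldexp(1.0, -32);   /* 2^-32: one corrupted block slips past a 32-bit sum */
      double blocks  = ldexp(1.0, 23);    /* 8M blocks = 64GB / 8K */
      double bound   = blocks * p_block;  /* union bound: 2^23 * 2^-32 = 2^-9 */
      double exact   = -expm1(blocks * log1p(-p_block));

      printf("bound %.6g, exact %.6g, 2^-9 = %.6g\n",
             bound, exact, ldexp(1.0, -9));
      return 0;
  }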


  3. Many users clearly hope to be able to pull the plug on their hardware 
 and get back up confidently.  While we can't promise they won't have 
 to go to their backups, we should at least be equipped to promise,
 with confidence, that they will know whether they need to.
 
 And the difference in odds between 2^32 and 2^64 matters here?  I made
 a numerical case that it doesn't, and you haven't refuted it.  By your
 logic, we might as well say that we should be using a 128-bit CRC, or
 256-bit, or heck, a few kilobytes.  It only takes a little longer to go
 up each step, right, so where should you stop?  I say MTBF measured in
 megayears ought to be plenty.  Show me the numerical argument that 64
 bits is the right place on the curve.

I agree that this is a reasonable question.  However, the magic of 
exponential growth makes any dissatisfaction with a 64-bit checksum
far less likely than with a 32-bit checksum.

It would forestall any such problems to arrange a configure-time
flag such as "--with-checksum crc-32" or "--with-checksum md4",
and make it clear where to plug in the checksum of one's choice.
Then, ship 7.2 with just crc-32 and let somebody else produce 
patches for alternatives if they want them.
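
To make that concrete, something along these lines would do.  None of
these names exist anywhere in the tree; they are purely a sketch of the
configure-time hook I have in mind:

  #include <stddef.h>

  typedef unsigned long long log_checksum_t;
  typedef log_checksum_t (*checksum_fn)(const void *buf, size_t len);

  /* Hypothetical implementations, provided elsewhere. */
  extern log_checksum_t checksum_crc32(const void *buf, size_t len);
  extern log_checksum_t checksum_md4(const void *buf, size_t len);

  /* Selected by the configure flag, e.g. "--with-checksum md4"
     would add -DUSE_CHECKSUM_MD4 to CFLAGS. */
  #if defined(USE_CHECKSUM_MD4)
  static const checksum_fn log_checksum = checksum_md4;
  #else                               /* default: crc-32 */
  static const checksum_fn log_checksum = checksum_crc32;
  #endif

Anyone wanting a different algorithm would then only have to supply one
function with that signature.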

BTW, I have been looking for Free 64-bit CRC codes/polynomials and 
the closest thing I have found so far was Mark Mitchell's hash, 
translated from the Modula-3 system.  All the tape drive makers
advertise (but don't publish (AFAIK)) a 64-bit CRC.

A reasonable approach would be to deliver CRC-32 in 7.2, and then
reconsider the default later if anybody contributes good alternatives.

Nathan Myers
[EMAIL PROTECTED]



Re: CRC was: Re: [HACKERS] beta testing version

2000-12-08 Thread Bruce Guenter

On Fri, Dec 08, 2000 at 01:58:12PM -0500, Tom Lane wrote:
 Bruce Guenter [EMAIL PROTECTED] writes:
  ... Taking an
  arbitrary 32 bits of a MD5 would likely be less collision prone than
  using a 32-bit CRC, and it appears faster as well.
 
 ... but that would be an algorithm that you know NOTHING about the
 properties of.  What is your basis for asserting it's better than CRC?

MD5 is a cryptographic hash, which means (AFAIK) that ideally it is
impossible to produce a collision using any other method than brute
force attempts.  In other words, for any stream of input to the hash that
is longer than the hash length (16 bytes for MD5), every possible hash
code is equally probable.

 CRC is pretty well studied and its error-detection behavior is known
 (and good).  MD5 has been studied less thoroughly AFAIK, and in any
 case what's known about its behavior is that the *entire* MD5 output
 provides a good signature for a datastream.  If you pick some ad-hoc
 method like taking a randomly chosen subset of MD5's output bits,
 you really don't know anything at all about what the error-detection
 properties of the method are.

Actually, in my reading regarding the properties of MD5, I read an
article that stated that if a smaller number of bits was desired, one
could either (and here's where my memory fails me) just select the
middle N bits from the resulting hash, or fold the hash using XOR until
the desired number of bits was reached.  I'll see if I can find a
reference...

RFC2289 (http://www.ietf.org/rfc/rfc2289.txt) includes an algorithm for
folding MD5 digests down to 64 bits by XORing the top half with the
bottom half.  See appendix A.
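
In code the fold is trivial.  Assuming a 16-byte digest already produced by
whatever MD5 implementation one links against, something like this (my own
sketch, not the RFC's code verbatim) gives the 64-bit value:

  /* Fold a 128-bit MD5 digest to 64 bits by XORing the top half
     into the bottom half, in the style of RFC 2289 appendix A. */
  unsigned long long fold_md5_64(const unsigned char digest[16])
  {
      unsigned long long result = 0;
      int i;

      for (i = 0; i < 8; i++)
          result = (result << 8) |
                   (unsigned long long) (digest[i] ^ digest[i + 8]);
      return result;
  }
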
-- 
Bruce Guenter [EMAIL PROTECTED]   http://em.ca/~bruceg/

 PGP signature


Re: CRC was: Re: [HACKERS] beta testing version

2000-12-08 Thread Tom Lane

Bruce Guenter [EMAIL PROTECTED] writes:
 MD5 is a cryptographic hash, which means (AFAIK) that ideally it is
 impossible to produce a collision using any other method than brute
 force attempts.

True but irrelevant.  What we need to worry about is the probability
that a random error will be detected, not the computational effort that
a malicious attacker would need in order to insert an undetectable
error.

MD5 is designed for a purpose that really doesn't have much to do with
error detection, when you think about it.  It says "you will have a hard
time computing a different string that produces the same hash as some
prespecified string".  This is not the same as promising
better-than-random odds against a damaged copy of some string having the
same hash as the original.  CRC, on the other hand, is specifically
designed for error detection, and for localized errors (such as a
corrupted byte or two) it does a provably better-than-random job.
For nonlocalized errors you don't get a guarantee, but you do get
same-as-random odds of detection (ie, 1 in 2^N for an N-bit CRC).
I really doubt that MD5 can beat a CRC with the same number of output
bits for the purpose of error detection; given the lack of guarantee
about short burst errors, I doubt it's even as good.  (Wild-pointer
stomps on disk buffers are an example of the sort of thing that may
look like a burst error.)

Now, if you are worried about crypto-capable gremlins living in your
file system, maybe what you want is MD5.  But I'm inclined to think that
CRC is more appropriate for the job at hand.

regards, tom lane



Re: CRC was: Re: [HACKERS] beta testing version

2000-12-08 Thread Tom Lane

Bruce Guenter [EMAIL PROTECTED] writes:
 ... Taking an
 arbitrary 32 bits of a MD5 would likely be less collision prone than
 using a 32-bit CRC, and it appears faster as well.

... but that would be an algorithm that you know NOTHING about the
properties of.  What is your basis for asserting it's better than CRC?

CRC is pretty well studied and its error-detection behavior is known
(and good).  MD5 has been studied less thoroughly AFAIK, and in any
case what's known about its behavior is that the *entire* MD5 output
provides a good signature for a datastream.  If you pick some ad-hoc
method like taking a randomly chosen subset of MD5's output bits,
you really don't know anything at all about what the error-detection
properties of the method are.

I am reminded of Knuth's famous advice about random number generators:
"Random numbers should not be generated with a method chosen at random.
Some theory should be used."  Error-detection codes, like random-number
generators, have decades of theory behind them.  Seat-of-the-pants
tinkering, even if it starts with a known-good method, is not likely to
produce an improvement.

regards, tom lane



Re: AW: [HACKERS] beta testing version

2000-12-08 Thread Daniele Orlandi

Bruce Guenter wrote:
 
 CRCs are designed to catch N-bit errors (ie N bits in a row with their
 values flipped).  N is (IIRC) the number of bits in the CRC minus one.
 So, a 32-bit CRC can catch all 31-bit errors.  That's the only guarantee
 a CRC gives.  Everything else has a 1 in 2^32-1 chance of producing the
 same CRC as the original data.  That's pretty good odds, but not a
 guarantee.

Nothing is a guarantee. Everywhere you have a non-null probability of
failure. Memories of any kind don't give you a *guarantee* that the
data you read is exactly the data you wrote. CPUs and transmission lines
are subject to errors too.

You may only be guaranteed that the overall failure probability of your
system is under a specified level. When that level is low enough, you
usually treat the absence of errors as guaranteed.

With CRC32 you considerably reduce p, and given how rarely the CRC
would actually need to reveal an error, I would consider it enough.

Bye!

-- 
 Daniele

---
 Daniele Orlandi - Utility Line Italia - http://www.orlandi.com
 Via Mezzera 29/A - 20030 - Seveso (MI) - Italy
---



Re: CRC was: Re: [HACKERS] beta testing version

2000-12-07 Thread Hannu Krosing

Horst Herb wrote:
 
  This may be implemented very fast (if someone points me where
  I can find CRC func). And I could implement "physical log"
  till next monday.
 
 I have been experimenting with CRCs for the past 6 months in our database for
 internal logging purposes. Downloaded a lot of hash libraries, tried
 different algorithms, and implemented a few myself. Which algorithm do you
 want? Have a look at the openssl libraries (www.openssl.org) for a start - if
 you don't find what you want let me know.
 
 As the logging might include large data blocks, especially now that we can
 TOAST our data, I would strongly suggest using strong hashes like RIPEMD or
 MD5 instead of CRC-32 and the like. Sure, it takes more time to calculate and
 more space on the hard disk, but then: a database without data integrity
 (and means of _proving_ integrity) is pretty worthless.

The choice of hash algorithm could be made a compile-time switch quite
easily, I guess.

-
Hannu



Re: CRC was: Re: [HACKERS] beta testing version

2000-12-07 Thread Nathan Myers

On Thu, Dec 07, 2000 at 06:40:49PM +1100, Horst Herb wrote:
  This may be implemented very fast (if someone points me where
  I can find CRC func). And I could implement "physical log"
  till next monday.
 
 As the logging might include large data blocks, especially now that
 we can TOAST our data, I would strongly suggest using strong hashes
 like RIPEMD or MD5 instead of CRC-32 and the like. 

Cryptographically-secure hashes are unnecessarily expensive to compute.
A simple 64-bit CRC would be of equal value, at much less expense.

Nathan Myers
[EMAIL PROTECTED]




RE: [HACKERS] beta testing version

2000-12-07 Thread Mikheev, Vadim

  This may be implemented very fast (if someone points me where
  I can find CRC func).
 
 Lifted from the PNG spec (RFC 2083):

Thanks! What about Copyrights/licence?

Vadim



Re: [HACKERS] beta testing version

2000-12-07 Thread Tom Lane

"Mikheev, Vadim" [EMAIL PROTECTED] writes:
 This may be implemented very fast (if someone points me where
 I can find CRC func).
 
 Lifted from the PNG spec (RFC 2083):

 Thanks! What about Copyrights/licence?

Should fit fine under our regular BSD license.  CRC as such is long
since in the public domain...

regards, tom lane



RE: CRC was: Re: [HACKERS] beta testing version

2000-12-07 Thread Mikheev, Vadim

  This may be implemented very fast (if someone points me where
  I can find CRC func). And I could implement "physical log"
  till next monday.
 
 I have been experimenting with CRCs for the past 6 months in 
 our database for internal logging purposes. Downloaded a lot of
 hash libraries, tried different algorithms, and implemented a few
 myself. Which algorithm do you want? Have a look at the openssl
 libraries (www.openssl.org) for a start - if you don't find what
 you want let me know.

Thanks.

 As the logging might include large data blocks, especially 
 now that we can TOAST our data, 

TOAST breaks data into a few 2K (or so) tuples to be inserted
separately. But the first btree split after a checkpoint will still
require logging a 2x8K record :-(

 I would strongly suggest using strong hashes like RIPEMD or
 MD5 instead of CRC-32 and the like. Sure, it takes more time 
 to calculate and more space on the hard disk, but then: a database
 without data integrity (and means of _proving_ integrity) is
 pretty worthless.

Other opinions? Also, we shouldn't forget licence issues.

Vadim



Re: CRC was: Re: [HACKERS] beta testing version

2000-12-07 Thread Tom Lane

"Mikheev, Vadim" [EMAIL PROTECTED] writes:
 I would strongly suggest using strong hashes like RIPEMD or
 MD5 instead of CRC-32 and the like.

 Other opinions? Also, we shouldn't forget licence issues.

I agree with whoever commented that crypto hashes are silly for this
application.  A 64-bit CRC *might* be enough stronger than a 32-bit
CRC to be worth the extra calculation, but frankly I doubt that too.

Remember that we are already sitting atop hardware that's really pretty
reliable, despite the carping that's been going on in this thread.  All
that we have to do is detect the infrequent case where a block of data
didn't get written due to system failure.  It's wildly pessimistic to
think that we might get called on to do so as much as once a day (if
you are trying to run a reliable database, and are suffering power
failures once a day, and haven't bought a UPS, you're a lost cause).
A 32-bit CRC will fail to detect such an error with a probability of
about 1 in 2^32.  So, a 32-bit CRC will have an MTBF of 2^32 days, or
11 million years, on the wildly pessimistic side --- real installations
probably 100 times better.  That's plenty for me, and improving the odds
to 2^64 or 2^128 is not worth any slowdown IMHO.
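
(For anyone who wants to check that conversion, a one-liner does it;
purely illustrative:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* 2^32 days, at one detection opportunity per day, in years */
      printf("%.1f million years\n", ldexp(1.0, 32) / 365.25 / 1e6);
      return 0;
  }

which comes out a bit under twelve million years.)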

regards, tom lane



Re: CRC was: Re: [HACKERS] beta testing version

2000-12-07 Thread Nathan Myers

On Thu, Dec 07, 2000 at 04:35:00PM -0500, Tom Lane wrote:
 Remember that we are already sitting atop hardware that's really
 pretty reliable, despite the carping that's been going on in this
 thread. All that we have to do is detect the infrequent case where a
 block of data didn't get written due to system failure. It's wildly
 pessimistic to think that we might get called on to do so as much as
 once a day (if you are trying to run a reliable database, and are
 suffering power failures once a day, and haven't bought a UPS, you're
 a lost cause). A 32-bit CRC will fail to detect such an error with a
 probability of about 1 in 2^32. So, a 32-bit CRC will have an MTBF of
 2^32 days, or 11 million years, on the wildly pessimistic side ---
 real installations probably 100 times better. That's plenty for me,
 and improving the odds to 2^64 or 2^128 is not worth any slowdown
 IMHO.

1. Computing a CRC-64 takes only about twice as long as a CRC-32, for
   2^32 times the confidence.  That's pretty cheap confidence.

2. I disagree with the way the above statistics were computed.  That eleven 
   million-year figure gets whittled down pretty quickly when you 
   factor in all the sources of corruption, even without crashes.  
   (Power failures are only one of many sources of corruption.)  They 
   grow with the size and activity of the database.  Databases are 
   getting very large and busy indeed.

3. Many users clearly hope to be able to pull the plug on their hardware 
   and get back up confidently.  While we can't promise they won't have 
   to go to their backups, we should at least be equipped to promise,
   with confidence, that they will know whether they need to.

4. For a way to mark the "current final" log entry, you want a lot more
   confidence, because you read a lot more of them, and reading beyond 
   the end may cause you to corrupt a currently-valid database, which 
   seems a lot worse than just using a corrupted database.

Still, I agree that a 32-bit CRC is better than none at all.  

Nathan Myers
[EMAIL PROTECTED]



Re: CRC was: Re: [HACKERS] beta testing version

2000-12-07 Thread Tom Lane

[EMAIL PROTECTED] (Nathan Myers) writes:
 2. I disagree with the way the above statistics were computed.  That eleven 
million-year figure gets whittled down pretty quickly when you 
factor in all the sources of corruption, even without crashes.  
(Power failures are only one of many sources of corruption.)  They 
grow with the size and activity of the database.  Databases are 
getting very large and busy indeed.

Sure, but the argument still holds.  If the net MTBF of your underlying
system is less than a day, it's too unreliable to run a database that
you want to trust.  Doesn't matter what the contributing failure
mechanisms are.  In practice, I'd demand an MTBF of a lot more than a
day before I'd accept a hardware system as satisfactory...

 3. Many users clearly hope to be able to pull the plug on their hardware 
and get back up confidently.  While we can't promise they won't have 
to go to their backups, we should at least be equipped to promise,
with confidence, that they will know whether they need to.

And the difference in odds between 2^32 and 2^64 matters here?  I made
a numerical case that it doesn't, and you haven't refuted it.  By your
logic, we might as well say that we should be using a 128-bit CRC, or
256-bit, or heck, a few kilobytes.  It only takes a little longer to go
up each step, right, so where should you stop?  I say MTBF measured in
megayears ought to be plenty.  Show me the numerical argument that 64
bits is the right place on the curve.

 4. For a way to mark the "current final" log entry, you want a lot more
confidence, because you read a lot more of them,

You only need to make the distinction during a restart, so I don't
think that argument is correct.

regards, tom lane



Re: AW: [HACKERS] beta testing version

2000-12-06 Thread Tom Lane

Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
 Yes, but there would need to be a way to verify the last page or
 record from txlog when running on crap hardware.

How exactly *do* we determine where the end of the valid log data is,
anyway?

regards, tom lane



Re: AW: [HACKERS] beta testing version

2000-12-06 Thread Daniele Orlandi

Tom Lane wrote:
 
 Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
  Yes, but there would need to be a way to verify the last page or
  record from txlog when running on crap hardware.
 
 How exactly *do* we determine where the end of the valid log data is,
 anyway?

Couldn't you use a CRC ?

Anyway... may I suggest adding CRCs to the data ? I just discovered that
I had a faulty HD controller and I fear that something could have been
written erroneously (this could also help to detect faulty memory,
though only in certain cases).

Bye!

--
  Daniele Orlandi
  Planet Srl



Re: AW: [HACKERS] beta testing version

2000-12-06 Thread Bruce Guenter

On Wed, Dec 06, 2000 at 11:15:26AM -0500, Tom Lane wrote:
 Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
  Yes, but there would need to be a way to verify the last page or
  record from txlog when running on crap hardware.
 How exactly *do* we determine where the end of the valid log data is,
 anyway?

I don't know how pgsql does it, but the only safe way I know of is to
include an "end" marker after each record.  When writing to the log,
append the records after the last end marker, ending with another end
marker, and fdatasync the log.  Then overwrite the previous end marker
to indicate it's not the end of the log any more and fdatasync again.

To ensure that it is written atomically, the end marker must not cross a
hardware sector boundary (typically 512 bytes).  This can be trivially
guaranteed by making the marker a single byte.
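
A rough sketch of that sequence in C might look like the following.  This
only illustrates the ordering; the names, the marker values, and the choice
of pwrite() are mine, not anything from the pgsql sources:

  #include <sys/types.h>
  #include <unistd.h>

  #define LOG_END   0xFF        /* single byte => the overwrite is atomic */
  #define LOG_MORE  0x00        /* "not the end any more" */

  /* Append 'rec' after the current end marker at offset 'end_off',
     following the two-step fdatasync protocol described above.
     Returns the offset of the new end marker, or -1 on error. */
  static off_t log_append(int fd, off_t end_off, const void *rec, size_t len)
  {
      unsigned char b = LOG_END;
      off_t new_end = end_off + 1 + (off_t) len;

      /* 1. Write the record(s) and a fresh end marker, then sync. */
      if (pwrite(fd, rec, len, end_off + 1) != (ssize_t) len ||
          pwrite(fd, &b, 1, new_end) != 1 ||
          fdatasync(fd) != 0)
          return (off_t) -1;

      /* 2. Only now invalidate the old marker and sync again.  A crash
         before this step leaves the old marker intact, so recovery
         simply ignores the half-written tail. */
      b = LOG_MORE;
      if (pwrite(fd, &b, 1, end_off) != 1 || fdatasync(fd) != 0)
          return (off_t) -1;

      return new_end;
  }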

Any other way I've seen discussed (here and elsewhere) either
- Requires atomic multi-sector writes, which are possible only if all
  the sectors are sequential on disk, the kernel issues one large write
  for all of them, and you don't powerfail in the middle of the write.
- Assume that a CRC is a guarantee.  A CRC would be a good addition to
  help ensure the data wasn't broken by flakey drive firmware, but
  doesn't guarantee consistency.

-- 
Bruce Guenter [EMAIL PROTECTED]   http://em.ca/~bruceg/

 PGP signature


Re: AW: [HACKERS] beta testing version

2000-12-06 Thread Daniele Orlandi

Bruce Guenter wrote:
 
 - Assume that a CRC is a guarantee.  A CRC would be a good addition to
   help ensure the data wasn't broken by flakey drive firmware, but
   doesn't guarantee consistency.

Even a CRC per transaction (it could be a nice END record) ?

Bye!

-- 
 Daniele

---
 Daniele Orlandi - Utility Line Italia - http://www.orlandi.com
 Via Mezzera 29/A - 20030 - Seveso (MI) - Italy
---



Re: [HACKERS] beta testing version

2000-12-06 Thread Tom Lane

"Mikheev, Vadim" [EMAIL PROTECTED] writes:
 This may be implemented very fast (if someone points me where
 I can find CRC func).

Lifted from the PNG spec (RFC 2083):


15. Appendix: Sample CRC Code

   The following sample code represents a practical implementation of
   the CRC (Cyclic Redundancy Check) employed in PNG chunks.  (See also
   ISO 3309 [ISO-3309] or ITU-T V.42 [ITU-V42] for a formal
   specification.)


  /* Make the table for a fast CRC. */
  void make_crc_table(void)
  {
    unsigned long c;
    int n, k;
    for (n = 0; n < 256; n++) {
      c = (unsigned long) n;
      for (k = 0; k < 8; k++) {
        if (c & 1)
          c = 0xedb88320L ^ (c >> 1);
        else
          c = c >> 1;
      }
      crc_table[n] = c;
    }
    crc_table_computed = 1;
  }

  /* Update a running CRC with the bytes buf[0..len-1]--the CRC
     should be initialized to all 1's, and the transmitted value
     is the 1's complement of the final running CRC (see the
     crc() routine below). */

  unsigned long update_crc(unsigned long crc, unsigned char *buf,
                           int len)
  {
    unsigned long c = crc;
    int n;

    if (!crc_table_computed)
      make_crc_table();
    for (n = 0; n < len; n++) {
      c = crc_table[(c ^ buf[n]) & 0xff] ^ (c >> 8);
    }
    return c;
  }

  /* Return the CRC of the bytes buf[0..len-1]. */
  unsigned long crc(unsigned char *buf, int len)
  {
    return update_crc(0xffffffffL, buf, len) ^ 0xffffffffL;
  }


regards, tom lane



Re: [HACKERS] beta testing version

2000-12-06 Thread Tom Lane

 Lifted from the PNG spec (RFC 2083):

Drat, I dropped the table declarations:

  /* Table of CRCs of all 8-bit messages. */
  unsigned long crc_table[256];

  /* Flag: has the table been computed? Initially false. */
  int crc_table_computed = 0;
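
For anyone pasting the two fragments above into one file, a trivial driver
like this (my addition, not part of the RFC text) is enough to check the
transcription; a conforming CRC-32 should print cbf43926 for the string
"123456789":

  #include <stdio.h>
  #include <string.h>

  unsigned long crc(unsigned char *buf, int len);  /* from the fragment above */

  int main(void)
  {
      unsigned char buf[] = "123456789";

      printf("%08lx\n", crc(buf, (int) strlen((char *) buf)) & 0xffffffffL);
      return 0;
  }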


regards, tom lane



RE: [HACKERS] beta testing version

2000-12-05 Thread Mikheev, Vadim

 As far as I know (and have tested in excess) Informix IDS 
 does survive any power loss without leaving the db in a
 corrupted state. The basic technology is that it only relies
 on writes to one "file" (raw device in that case), the txlog,
 which is directly written. All writes to the txlog are basically
 appends to that log. Meaning that all writes are sync writes to
 the currently active (== last) page. All other IO is not a problem,
 because a backup image "physical log" is kept for each page 
 that needs to be written. During fast recovery the content of the
 physical log is restored to the originating pages (thus all pending
 IO is undone) before rollforward is started.

Sounds great! We can follow this way: on the first update to a page after
the last checkpoint, the XLOG code can log the entire page (creating a
backup "physical log") instead of just the AM-specific update record.
During after-crash recovery such pages will be redone first, ensuring page
consistency for further redo ops. This means a bigger log, of course.
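
Just to illustrate the idea (pseudo-C of my own, not a patch, and all of
the names are invented):

  #include <stddef.h>

  #define BLCKSZ 8192

  typedef unsigned long Lsn;         /* stand-in for a log sequence number */

  typedef struct Buffer
  {
      char page[BLCKSZ];
      Lsn  backed_up_lsn;            /* invented bookkeeping: last full backup */
  } Buffer;

  /* Assumed to exist elsewhere; shown only as prototypes. */
  extern Lsn  last_checkpoint_lsn(void);
  extern void xlog_write(const void *rec, size_t len);

  static void
  log_page_update(Buffer *buf, const void *am_record, size_t am_len)
  {
      if (buf->backed_up_lsn < last_checkpoint_lsn())
      {
          /* First change to this page since the checkpoint: log the
             whole page so redo can restore a consistent image before
             replaying the fine-grained records that follow. */
          xlog_write(buf->page, BLCKSZ);
          buf->backed_up_lsn = last_checkpoint_lsn();
      }
      xlog_write(am_record, am_len);
  }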

Initdb will not be required for these code changes, so they can be
implemented in any 7.1.X, X >= 1.

Thanks, Andreas!

Vadim



Re: [HACKERS] beta testing version

2000-12-05 Thread Martin A. Marques

On Sunday 03 December 2000 04:00, Vadim Mikheev wrote:
  There is risk here.  It isn't so much in the fact that PostgreSQL, Inc
  is doing a couple of modest closed-source things with the code.  After
  all, the PG community has long acknowledged that the BSD license would
  allow others to co-opt the code and commercialize it with no obligations.
 
  It is rather sad to see PG, Inc. take the first step in this direction.
 
  How long until the entire code base gets co-opted?

 I totally missed your point here. How is closing the source of ERserver
 related to closing the code of the PostgreSQL DB server? Let me clear things up:

 1. ERserver isn't based on WAL. It will work with any version >= 6.5

 2. WAL was partially sponsored by my employer, Sectorbase.com,
 not by PG, Inc.

Has somebody thought about putting PG under the GPL licence instead of the BSD? 
PG Inc would still be able to make their money giving support (just like IBM, 
HP and Compaq are doing their share with Linux), without being able to close 
the code.

Only a thought... 

Saludos... :-)

-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentina    http://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [HACKERS] beta testing version

2000-12-05 Thread Martin A. Marques

On Sunday 03 December 2000 12:41, mlw wrote:
 Thomas Lockhart wrote:
  As soon as you find a business model which does not require income, let
  me know. The .com'ers are trying it at the moment, and there seems to be
  a few flaws... ;)

 While I have not contributed anything to Postgres yet, I have
 contributed to other environments. The prospect that I could create a
 piece of code, spend weeks/years of my own time on something and some
 entity can come along, take what I've written and create a product which
 is better for it, and then not share back is offensive. Under GPL it is
 illegal. (Postgres should try to move to GPL)

With you on the last statemente.

 I am working on a full-text search engine for Postgres. A really fast
 one, something better than anything else out there. It combines the
 power and scalability of a web search engine, with the data-mining
 capabilities of SQL.

If you want to make something GPL I would be more than interested in helping 
you. We could use something like that over here, and I have no problem at all 
with releasing it as GPL code.

 If I write this extension to Postgres, and release it, is it right that
 a business can come along, add a few things here and there and introduce
 a new closed source product on what I have written? That is certainly
 not what I intend. My intention was to honor the people before me for
 providing the rich environment which is Postgres. I have made real money
 using Postgres in a work environment. The time I would give back more
 than covers MSSQL/Oracle licenses.

I'm not sure, but you could introduce a piece of GPL code into the BSD code, 
but the result would have to be GPL.

Hoping to hear from you, 

-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentina    http://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [HACKERS] beta testing version

2000-12-05 Thread Martin A. Marques

On Sunday 03 December 2000 21:49, The Hermit Hacker wrote:

 I've been trying to follow this thread, and seem to have missed where
 someone arrived at the conclusion that we were proprietarizing(word?) this

I have missed that part as well.

 ... we do apologize that it didn't get out mid-October, but it is/was
 purely a schedule slip ...

I would never say something about schedules of OSS. Let it be in BSD or GPL 
license.

Saludos... :-)

-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentina    http://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [HACKERS] beta testing version

2000-12-05 Thread Alfred Perlstein

   I totally missed your point here. How is closing the source of 
   ERserver related to closing the code of the PostgreSQL DB server?
   Let me clear things up:
  
   1. ERserver isn't based on WAL. It will work with any version >= 6.5
  
   2. WAL was partially sponsored by my employer, Sectorbase.com,
   not by PG, Inc.
  
  Has somebody thought about putting PG under the GPL licence 
  instead of the BSD? 
  PG Inc would still be able to make their money giving support 
  (just like IBM, HP and Compaq are doing their share with Linux),
  without being able to close the code.

This gets brought up every couple of months, I don't see the point
in denying any of the current Postgresql developers the chance
to make some money selling a non-freeware version of Postgresql.

We can also look at it another way, let's say ER server was meant
to be closed source, if the code it was derived from was GPL'd
then that chance was gone before it even happened.  Hence no
reason to develop it.

*poof* no ER server.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] beta testing version

2000-12-05 Thread The Hermit Hacker

On Tue, 5 Dec 2000, Martin A. Marques wrote:

 On Sunday 03 December 2000 04:00, Vadim Mikheev wrote:
   There is risk here.  It isn't so much in the fact that PostgreSQL, Inc
   is doing a couple of modest closed-source things with the code.  After
    all, the PG community has long acknowledged that the BSD license would
    allow others to co-opt the code and commercialize it with no obligations.
  
   It is rather sad to see PG, Inc. take the first step in this direction.
  
   How long until the entire code base gets co-opted?
 
  I totally missed your point here. How is closing the source of ERserver
  related to closing the code of the PostgreSQL DB server? Let me clear things up:
 
  1. ERserver isn't based on WAL. It will work with any version >= 6.5
 
  2. WAL was partially sponsored by my employer, Sectorbase.com,
  not by PG, Inc.
 
 Has somebody thought about putting PG under the GPL licence instead of the BSD? 

its been brought up and rejected continuously ... in some of our opinions,
GPL is more harmful than helpful ... as has been said before many times,
and I'm sure will continue to be said "changing the license to GPL is a
non-discussable issue" ...





Re: [HACKERS] beta testing version

2000-12-05 Thread Lamar Owen

The Hermit Hacker wrote:
 its been brought up and rejected continuously ... in some of our opinions,
 GPL is more harmful than helpful ... as has been said before many times,
 and I'm sure will continue to be said "changing the license to GPL is a
 non-discussable issue" ...

I've declined commenting on this thread until now -- but this statement
bears amplification. 

GPL is NOT the be-all end-all Free Software (in the FSF/GNU sense!)
license.  There is room for more than one license -- just as there is
room for more than one OS, more than one Unix, more than one Free RDBMS,
more than one Free webserver, more than one scripting language, more
than one compiler system, more than one Linux distribution, more than
one BSD, and more than one CPU architecture.

Why make a square peg development group fit a round peg license? :-) 
Use a round peg for round holes, and a square peg for square holes.

Choice of license for PostgreSQL is not negotiable. I don't say that as
an edict from Lamar Owen (after all, I am in no position to edict
anything :-)) -- I say that as a studied observation of the last times
this subject has come up.

I personally prefer GPL.  But my personal preference and what is good
for the project are two different things. BSD is good for this project
with this group of developers -- and it should not change.

And, like any other open development effort, there will be missteps --
which missteps should, IMHO, be put behind us.  No software is perfect;
no development team is, either.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [HACKERS] beta testing version

2000-12-05 Thread Mitch Vincent

Regardless of what license is best, could the license even be changed now? I
mean, some of the initial Berkeley code is still in there in some sense and
I would think that the original license (BSD I assume) of the initial source
code release would have to be somehow honored.. I'm just wondering if the PG
team could change the license even if they wanted to.. I should go read the
license again, I know the answer to the above is in there but it's been a
long time since I've looked it over and I'm in the middle of packing, so I
haven't got the time right now.. Thanks to anyone for satisfying my
curiosity in answering this question.

I think that it's very, very good if the license is indeed untouchable, it
keeps PostgreSQL from becoming totally closed-source and/or totally
commercial.. Obviously things can be added to PG and sold commercially, but
there will always be the base PostgreSQL out there for everyone.. I
hope.

Just my $0.02 worth..

-Mitch


- Original Message -
From: "Lamar Owen" [EMAIL PROTECTED]
To: "PostgreSQL Development" [EMAIL PROTECTED]
Sent: Tuesday, December 05, 2000 1:45 PM
Subject: Re: [HACKERS] beta testing version


 The Hermit Hacker wrote:
  its been brought up and rejected continuously ... in some of our
opinions,
  GPL is more harmful than helpful ... as has been said before many times,
  and I'm sure will continue to be said "changing the license to GPL is a
  non-discussable issue" ...

 I've declined commenting on this thread until now -- but this statement
 bears amplification.

 GPL is NOT the be-all end-all Free Software (in the FSF/GNU sense!)
 license.  There is room for more than one license -- just as there is
 room for more than one OS, more than one Unix, more than one Free RDBMS,
 more than one Free webserver, more than one scripting language, more
 than one compiler system, more than one Linux distribution, more than
 one BSD, and more than one CPU architecture.

 Why make a square peg development group fit a round peg license? :-)
 Use a round peg for round holes, and a square peg for square holes.

 Choice of license for PostgreSQL is not negotiable. I don't say that as
 an edict from Lamar Owen (after all, I am in no position to edict
 anything :-)) -- I say that as a studied observation of the last times
 this subject has come up.

 I personally prefer GPL.  But my personal preference and what is good
 for the project are two different things. BSD is good for this project
 with this group of developers -- and it should not change.

 And, like any other open development effort, there will be missteps --
 which missteps should, IMHO, be put behind us.  No software is perfect;
 no development team is, either.
 --
 Lamar Owen
 WGCR Internet Radio
 1 Peter 4:11





Re: [HACKERS] beta testing version

2000-12-05 Thread Martin A. Marques

On Tuesday 05 December 2000 18:03, The Hermit Hacker wrote:
 
  Has somebody thought about putting PG under the GPL licence instead of the
  BSD?

 its been brought up and rejected continuously ... in some of our opinions,
 GPL is more harmful than helpful ... as has been said before many times,
 and I'm sure will continue to be said "changing the license to GPL is a
 non-discussable issue" ...

It's pretty clear to me, and I respect the decision (I really do).

-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentina    http://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [HACKERS] beta testing version

2000-12-05 Thread Lamar Owen

Mitch Vincent wrote:
 
 Regardless of what license is best, could the license even be changed now? I
 mean, some of the initial Berkeley code is still in there in some sense and
 I would think that the original license (BSD I assume) of the initial source
 code release would have to be somehow honored.. I'm just wondering if the PG
 team could change the license even if they wanted to.. I should go read the
 license again, I know the answer to the above is in there but it's been a

_Every_single_ copyright holder of code in the core server would have to
agree to any change.

Not a likely event.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [HACKERS] beta testing version

2000-12-05 Thread Trond Eivind Glomsrød

Lamar Owen [EMAIL PROTECTED] writes:

 Mitch Vincent wrote:
  
  Regardless of what license is best, could the license even be changed now? I
  mean, some of the initial Berkeley code is still in there in some sense and
  I would think that the original license (BSD I assume) of the initial source
  code release would have to be somehow honored.. I'm just wondering if the PG
  team could change the license even if they wanted to.. I should go read the
  license again, I know the answer to the above is in there but it's been a
 
 _Every_single_ copyright holder of code in the core server would have to
 agree to any change.

No - GPL projects can include BSD-copyrighted code, no problem
there. That being said, creating bad blood is not a good thing, so an
approach like this would hurt PostgreSQL a lot.
 

-- 
Trond Eivind Glomsrød
Red Hat, Inc.



Re: [HACKERS] beta testing version

2000-12-04 Thread Michael Fork

Judging by the information below, taken *directly* from PostgreSQL, Inc.
website, it appears that they will be releasing all code into the main
source code branch -- with the exception of "Advanced Replication and
Distributed Information capabilities" (to which capabilities they are
referring is not made clear) which may remain proprietary for up to 24
months "in order to assist us in recovering development costs and continue
to provide funding for our other Open Source contributions."

I have interpreted this to mean that basic replication (server - server,
server - client, possibly more)  will be available shortly for Postgres
(with the release of 7.1?) and that those more advanced features will
follow behind.  This is one of the last features that was missing from
Postgres (along with recordset returning functions and clusters, among
others) that was holding it back from the enterprise market -- and I do
not blame PostgreSQL, Inc. one bit for withholding some of the more
advanced features to recoup their development costs -- it was *their time*
and *their money* they spent developing the *product* and it must be
recouped for projects like this to make sense in the future (who knows,
maybe next they will implement RS returning SP's or clusters, projects
that are funded with their profit off the advanced replication and
distributed information capabilities that they *may* withhold -- would
people still be whining then?)

Michael Fork - CCNA - MCP - A+ 
Network Support - Toledo Internet Access - Toledo Ohio

(http://www.pgsql.com/press/PR_5.html)
"At the moment we are limiting our test groups to our existing Platinum
Partners and those clients whose requirements include these
features." advises Jeff MacDonald, VP of Support Services. "We expect to
have the source code tested and ready to contribute to the open source
community before the middle of October. Until that time we are considering
requests from a number of development companies and venture capital groups
to join us in this process."

Davidson explains, "These initial Replication functions are important to
almost every commercial user of PostgreSQL. While we've fully funded all
of this development ourselves, we will be immediately donating these
capabilities to the open source PostgreSQL Global Development Project as
part of our ongoing commitment to the PostgreSQL community." 

http://www.erserver.com/
eRServer development is currently concentrating on core, universal
functions that will enable individuals and IT professionals to implement
PostgreSQL ORDBMS solutions for mission critical datawarehousing,
datamining, and eCommerce requirements. These initial developments will be
published under the PostgreSQL Open Source license, and made available
through our sites, Certified Platinum Partners, and others in PostgreSQL
community.

Advanced Replication and Distributed Information capabilities are also
under development to meet specific business and competitive requirements
for both PostgreSQL, Inc. and clients. Several of these enhanced
PostgreSQL, Inc. developments may remain proprietary for up to 24 months,
with availability limited to clients and partners, in order to assist us
in recovering development costs and continue to provide funding for our
other Open Source contributions. 

On Sun, 3 Dec 2000, Hannu Krosing wrote:

 The Hermit Hacker wrote:
 IIRC, this thread woke up on someone complaining about PostgreSQL Inc
 promising to release some code for replication in mid-October and asking
 for confirmation that this is just a schedule slip and that the project
 is still going on and going to be released as open source.
 
 What seems to be the answer is: "NO, we will keep the replication code
 proprietary".
 
 I have not seen this answer myself, but I've got this impression from
 the contents of the whole discussion.
 
 Do you know if this is the case ?
 
 ---
 Hannu
 



Re: [HACKERS] beta testing version

2000-12-04 Thread Thomas Lockhart

 In fact, it might seem to be common courtesy...

An odd choice of words coming from you Don.

We are offering our services and expertise to a community outside
-hackers, as a business formed in a way that this new community expects
to see. Nothing special or sinister here. Other than it seems to have
raised the point that you expected each of us to be working for you,
gratis, on projects you find compelling, using all of our available
time, far into the future just as each of us has over the last five
years.

After your recent spewing, it irks me a little to admit that this will
not change, and that we are likely to continue to each work on OS
PostgreSQL projects using all of our available time, just as we have in
the past.

A recent example of non-sinister change in another area is the work done
to release 7.0.3. This is a release which would not have happened in
previous cycles, since we are so close to beta on 7.1. But GB paid Tom
Lane to work on it as part of *their* business plan, and he shepherded it
through the cycle. There was no outcry from you at this presumption, and
on this diversion of community resources for this effort. Not sure why,
other than you chose to pick some other fight.

And no matter which fight you chose, you're wasting the time of others
as you fight your demons.

   - Thomas



Re: [HACKERS] beta testing version

2000-12-04 Thread The Hermit Hacker

On Sun, 3 Dec 2000, Don Baccus wrote:

 At 11:59 PM 12/3/00 -0400, The Hermit Hacker wrote:
  the sanctity of the *core* server is *always*
 foremost in our minds, no matter what other projects we are working on ...
 
 What happens if financially things aren't entirely rosy with your
 company? The problem in taking itty-bitty steps in this direction is
 that you're involving outside money interests that don't necessarily
 adhere to this view.
 
 Having taken the first steps to a proprietary, closed source future,
 would you pledge to bankrupt your company rather than accept a large
 captital investment with an ROI based on proprietary extensions to the
 core that might not be likely to come out of the non-tainted side of
 the development house?

You mean sort of like Great Bridge investing in core developers?  Quite
frankly, I have yet to see anything but good come out of Tom as a result
of that, as now he has more time on his hands ... then again, maybe Outer
Joins was a bad idea? *raised eyebrow*

PgSQL is *open source* ... that means that if you don't like it, take the
code, fork off your own version if you don't like what's happening to the
current tree and build your own community *shrug*  





Re: [HACKERS] beta testing version

2000-12-04 Thread The Hermit Hacker

On Mon, 4 Dec 2000, Don Baccus wrote:

 A recent example of non-sinister change in another area is the work done
 to release 7.0.3. This is a release which would not have happened in
 previous cycles, since we are so close to beta on 7.1. But GB paid Tom
 Lane to work on it as part of *their* business plan, and he shepherded it
 through the cycle. There was no outcry from you at this presumption, and
 on this diversion of community resources for this effort. Not sure why,
 other than you chose to pick some other fight.
 
 There's a vast difference between releasing 7.0.3 in open source form
 TODAY and eRServer, which may not be released in open source form for
 up to two years after it enters the market on a closed source,
 proprietary footing. To suggest there is no difference, as you seem to
 be doing, is a hopelessly unconvincing argument.

Except, eRServer, the basic model, will be released Open Source, and, if
all goes as planned, in time for inclusion in contrib of v7.1 ... 





Re: [HACKERS] beta testing version

2000-12-04 Thread Vince Vielhaber

On Thu, 30 Nov 2000, Nathan Myers wrote:

 Second, the transaction log is not, as has been noted far too frequently
 for Vince's comfort, really written atomically.  The OS has promised
 to write it atomically, and given the opportunity, it will.  If you pull
 the plug, all promises are broken.

Say what?

Vince.
-- 
==
Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net
 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
Online Campground Directoryhttp://www.camping-usa.com
   Online Giftshop Superstorehttp://www.cloudninegifts.com
==






Re: [HACKERS] beta testing version

2000-12-03 Thread Peter Eisentraut

Don Baccus writes:

 How long until the entire code base gets co-opted?

Yeah so what?  Nobody's forcing you to use, buy, or pay attention to any
such efforts.  The market will determine whether the release model of
PostgreSQL, Inc. appeals to customers.  Open source software is a
privilege, and nobody has the right to call someone "irresponsible"
because they want to get paid for their work and don't choose to give away
their code.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/





Re: [HACKERS] beta testing version

2000-12-03 Thread Ned Lilly

Ron Chmara wrote:

 As it is, any company trying to make a closed version of an open source
 product has some _massive_ work to do. Manuals. Documentation. Sales.
 Branding. Phone support lines. Legal departments/Lawsuit prevention. Figuring
 out how to prevent open source from stealing the thunder by duplicating
 features. And building a _product_.

 Most Open Source projects are not products, they are merely code, and some
 horrid documentation, and maybe some support. The companies making money
 are not making better code, they are making better _products_

 And I really haven't seen much in the way of full-featured products, complete
 with printed docs, 24 hour support, tutorials, wizards, templates, a company
 to sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.

This kind of stuff is more along the lines of what Great Bridge is doing.  In about
a week, we'll be releasing a GB-branded release of 7.0.3 - including printed
manuals (much of which is new), a GUI installer (which is open source), support
packages including fully-staffed 24/7.  Details to follow soon on pgsql-announce.

I don't want to speak for Pgsql Inc., but it seems to me that they are pursuing a
slightly different business model than us - more focused on providing custom
development around the base PostgreSQL software.  And that's a great way to get
more people using PostgreSQL.  Some of what they create for their customers may be
open source, some not.  It's certainly their decision - and it's a perfectly
justifiable business model, followed by open source companies such as Covalent
(Apache), Zend (PHP), and TurboLinux.  I don't think it's productive or appropriate
to beat up on Pgsql Inc for developing bolt-on products in a different way -
particularly with Vadim's clarification that the bolt-ons don't require anything
special in the open source backend.

Our own business model is, as I indicated, different.  We got a substantial
investment from our parent company, whose chairman sat on the Red Hat board for
three years, and a mandate to create a *big* company that could provide the
infrastructure (human and technical) to enable PostgreSQL to go up against the
proprietary players like Oracle and Microsoft.  A fully-staffed 24/7 data center
isn't cheap, and our services won't be either.  But it's a different type of
business - we're providing the benefits of the open source development model to a
group of customers that might not otherwise get involved, precisely because they
demand to see a company of Great Bridge's heft behind a product before they buy.

I think PostgreSQL and other open source projects are big enough for lots of
different companies, with lots of different types of business models.  Indeed, from
what I've seen of Pgsql Inc (and I hope I haven't mischaracterized them), our
business models are highly complementary.  At Great Bridge, we hope and expect that
other companies that "get it" will get more involved with PostgreSQL - that can
only add to the strength of the project.

Regards,
Ned
--

Ned Lilly                        e: [EMAIL PROTECTED]
Vice President                   w: www.greatbridge.com
Evangelism / Hacker Relations    v: 757.233.5523
Great Bridge, LLC                f: 757.233.




Re: [HACKERS] beta testing version

2000-12-03 Thread Horst Herb

  Branding. Phone support lines. Legal departments/Lawsuit prevention.
Figuring
  out how to prevent open source from stealing the thunder by duplicating
 ^^
  features. And building a _product_.

Oops. You didn't really mean that, did you? Could it be that there are some
people out there thinking "let them free software fools do the hard initial
work; once things are working nicely, we take over, add a few 'secret'
ingredients, and voila - the commercial product has been created"?

After reading the statement above, I believe that surely most of the honest
developers involved in Postgres would wish they had chosen the GPL as their
licensing scheme.

I agree that most of the work is always done by a few. I also agree that it
would be nice if they could get some financial reward for it. But no dirty
tricks please. Do not betray the base. Otherwise, the broad developer base
will be gone before you can even say "free software".

I, for my part, have learned another lesson today. I was just about to give
in with the licensing scheme in our project to allow the GPL incompatible
OpenSSL to be used. After reading the above now I know it is worth the extra
effort to "roll our own" or wait for another GPL'd solution rather than
sacrificing the unique protection the GPL gives us.

Horst
coordinator gnumed project




Re: [HACKERS] beta testing version

2000-12-03 Thread Horst Herb

  How long until the entire code base gets co-opted?

 Yeah so what?  Nobody's forcing you to use, buy, or pay attention to any
 such efforts.  The market will determine whether the release model of
 PostgreSQL, Inc. appeals to customers.  Open source software is a
 privilege, and nobody has the right to call someone "irresponsible"
 because they want to get paid for their work and don't choose to give away
 their code.

Just bear in mind that although a few developers always deliver outstanding
performance in any project, those open source projects have usually seen a
huge broad developer base. Hundreds of people putting their effort into the
project. These people never ask for a cent, never even dream of some
commercial benefit. They do it for the sake of creating something good,
being part of something great.

Especially in the case of Postgres the "product" has a long heritage, and
the most active people today are not necessarily the ones who have put in
the most "total" effort (AFAIK, I might be wrong here). Anyway, Postgres would
not be where it is today without the hundreds of small cooperators &
testers. Lock them out from the source code - even if it is only a side
branch, and Postgres will die (well, at least it would die for our project)

Open source is not a mere marketing model. It is a philosophy. It is about
essential freedom, about human progress, about freedom of speech and
thought. It is about sharing and caring. Those who don't understand this,
should please stick to their ropes and develop closed source from the
beginning and not try to fool the free software community.

Horst






Re: [HACKERS] beta testing version

2000-12-03 Thread mlw

Thomas Lockhart wrote:

 As soon as you find a business model which does not require income, let
 me know. The .com'ers are trying it at the moment, and there seems to be
 a few flaws... ;)

While I have not contributed anything to Postgres yet, I have
contributed to other environments. The prospect that I could create a
piece of code, spend weeks/years of my own time on something and some
entity can come along, take what I've written and create a product which
is better for it, and then not share back is offensive. Under GPL it is
illegal. (Postgres should try to move to GPL)

I am working on a full-text search engine for Postgres. A really fast
one, something better than anything else out there. It combines the
power and scalability of a web search engine, with the data-mining
capabilities of SQL.

If I write this extension to Postgres, and release it, is it right that
a business can come along, add a few things here and there and introduce
a new closed source product on what I have written? That is certainly
not what I intend. My intention was to honor the people before me for
providing the rich environment which is Postgres. I have made real money
using Postgres in a work environment. The time I would give back more
than covers MSSQL/Oracle licenses.

Open source is a social agreement, not a business model. If you break
the social agreement for a business model, the business model will fail
because the society which fundamentally created the product you wish to
sell will crumble from mistrust (or shun you). In short, it is wrong to
sell the work of others without proper compensation and the full
agreement of everyone that has contributed. If you don't get that, get
out of the open source market now.

That said, there is a long standing business model which is 100%
compatible with Open Source and it is of the lowly 'VAR.' You do not
think for one minute that an Oracle VAR would dare to add features to
Oracle and make their own SQL do you?

As a PostgreSQL "VAR" you are in a better position that any other VAR.
You get to partner in the code development process. (You couldn't ask
Oracle to add a feature and expect to keep it to yourself, could you?)

I know this is a borderline rant, and I am sorry, but I think it is very
important that the integrity of open source be preserved at 100% because
it is a very slippery slope, and we are all surrounded by the temptation to
cheat the spirit of open source "just a little" for short-term gain. 


-- 
http://www.mohawksoft.com



Re: [HACKERS] beta testing version

2000-12-03 Thread Don Baccus

At 11:00 PM 12/2/00 -0800, Vadim Mikheev wrote:
 There is risk here.  It isn't so much in the fact that PostgreSQL, Inc
 is doing a couple of modest closed-source things with the code.  After
 all, the PG community has long acknowleged that the BSD license would
 allow others to co-op the code and commercialize it with no obligations.
 
 It is rather sad to see PG, Inc. take the first step in this direction.
 
 How long until the entire code base gets co-opted?

I totally missed your point here. How is closing the source of ERserver related
to closing the code of the PostgreSQL DB server? Let me clear things up:

(not based on WAL)

That wasn't clear from the blurb.

Still, this notion that PG, Inc will start producing closed-source products
poisons the well.  It strengthens FUD arguments of the "open source can't
provide enterprise solutions" variety.  "Look, even PostgreSQL, Inc realizes
that you must follow a close sourced model in order to provide tools for
the corporate world."




- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-03 Thread Vadim Mikheev

 I totally missed your point here. How is closing the source of ERserver related
 to closing the code of the PostgreSQL DB server? Let me clear things up:
 
 (not based on WAL)
 
 That wasn't clear from the blurb.
 
 Still, this notion that PG, Inc will start producing closed-source products
 poisons the well.  It strengthens FUD arguments of the "open source can't
 provide enterprise solutions" variety.  "Look, even PostgreSQL, Inc realizes
 that you must follow a close sourced model
 in order to provide tools for the corporate world."


Did you miss Thomas' answer? Wasn't it clear that the order is to provide
income?

Vadim





Re: [HACKERS] beta testing version

2000-12-03 Thread Gary MacDougall

I think this trend is MUCH bigger than what Postgres, Inc. is doing... it's
happening all over the community.  Heck, take a look around... Jabber,
Postgres, Red Hat, SuSE, Storm, etc.  These companies are making good money
off a business plan that was basically "hey, let's take some of that open
source and make a real product out of it...".  As long as they dribble
releases into the community, they're not in violation... It's not a bad
business model if you think about it: if you can take a product that is good
(great, as in PG), add value, sell it and make money, why not?  Hell, you
didn't have to spend the gazillion R&D dollars on the initial design and
implementation; you're basically reaping the rewards of the work of other
people.

Are you ready for hundreds upon hundreds of little projects turning into
"startup" companies?  It was bound to happen.  Why? Because money is
involved, plain and simple.

Maybe it's a natural progression of this stuff, who knows.  I just know that
I've been around the block a couple of times, and been in the industry long
enough to know that the minority voice never gets the prize... we usually set
the trend and pay for it in the end... fatalistic? Maybe.  But not far from
the truth...

Sorry to be a downer... The Red Sox didn't get Mussina

- Original Message -
From: "Don Baccus" [EMAIL PROTECTED]
To: "Ross J. Reedstrom" [EMAIL PROTECTED]
Cc: "Peter Eisentraut" [EMAIL PROTECTED]; "PostgreSQL Development"
[EMAIL PROTECTED]
Sent: Saturday, December 02, 2000 5:11 PM
Subject: Re: [HACKERS] beta testing version


 At 03:51 PM 12/2/00 -0600, Ross J. Reedstrom wrote:

 "We expect to have the source code tested and ready to contribute to
 the open source community before the middle of October. Until that time
 we are considering requests from a number of development companies and
 venture capital groups to join us in this process."
 
 Where's the damn core code? I've seen a number of examples already of
 people asking about remote access/replication function, with an eye
 toward implementing it, and being told "PostgreSQL, Inc. is working
 on that". It's almost Microsoftesque: preannounce future functionality
 suppressing the competition.

 Well, this is just all 'round a bad precedent and an unwelcome path
 for PostgreSQL, Inc to embark upon.

 They've also embarked on one fully proprietary product (built on PG),
 which means they're not an Open Source company, just a sometimes Open
 Source company.

 It's a bit ironic to learn about this on the same day I learned that
 Solaris 8 is being made available in source form.  Sun's slowly "getting
 it" and moving glacially towards Open Source, while PostgreSQL, Inc.
 seems to be drifting in the opposite direction.

 if I absolutely need
 something that's only in CVS right now, I can bite the bullet and use
 a snapshot server.

 This work might be released as Open Source, but it isn't an open development
 scenario.  The core work's not available for public scrutiny, and the details
 of what they're actually up to don't appear to be public either.

 OK, they're probably funding Vadim's work on WAL, so the indictment's
 probably not 100% accurate - but I don't know that.

 I'd be really happy with someone reiterating the commitment to an
 open release, and letting us all know how badly the schedule has
 slipped. Remember, we're all here to help! Get everyone stomping bugs
 in code you're going to release soon anyway, and concentrate on the
 quasi-propriatary extensions.

 Which makes me wonder, is Vadim's time going to be eaten up by working
 on these quasi-proprietary extensions that the rest of us won't get
 for two years unless we become customers of Postgres, Inc?

 Will Great Bridge step to the plate and fund a truly open source alternative,
 leaving us with a potential code fork?  If IB gets its political problems
 under control and developers rally around it, two years is going to be a
 long time to just sit back and wait for PG, Inc to release eRServer.

 These developments are a major annoyance.



 - Don Baccus, Portland OR [EMAIL PROTECTED]
   Nature photos, on-line guides, Pacific Northwest
   Rare Bird Alert Service and other goodies at
   http://donb.photo.net.





Re: [HACKERS] beta testing version

2000-12-03 Thread mlw

Peter Eisentraut wrote:
 
 mlw writes:
 
  There are hundreds (thousands?) of people that have contributed to the
  development of Postgres, either directly with code, or beta testing,
  with the assumption that they are benefiting a community. Many would
  probably not have done so if they had suspected that what they do is
  used in a product that excludes them.
 
 With the BSD license it has always been clear that this would be possible,
 and for as long as I've been around the core/active developers have
 frequently reiterated that this is a desirable aspect and in fact
 encouraged.  If you don't like that, then you should have read the license
 before using the product.
 
  I have said before, open source is a social contract, not a business
  model.
 
 Well, you're free to take the PostgreSQL source and start your own "social
 contract" project; but we don't do that around here.

And you don't feel that this is a misappropriation of a public trust? I
feel shame for you.

-- 
http://www.mohawksoft.com



Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sat, 2 Dec 2000, Adam Haberlach wrote:

   In any case, can we create pgsql-politics so we don't have to go over
 this issue every three months?  Can we create pgsql-benchmarks while we
 are at it, to take care of the other thread that keeps popping up?

no skin off my back:

pgsql-advocacy
pgsql-chat
pgsql-benchmarks

-advocacy/-chat are pretty much the same concept ... 





Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sat, 2 Dec 2000, Don Baccus wrote:

  I *am* one of those volunteers
 
 Yes, I well remember you screwing up PG 7.0 just before beta, without bothering
 to test your code, and leaving on vacation.  
 
 You were irresponsible then, and you're being irresponsible now.

Okay, so let me get this one straight ... it was irresponsible for him to
put code in that was broken the last time, but it wouldn't be
irresponsible for us to release code that we don't feel is ready this
time? *raised eyebrow*

Just want to get this straight, as it kinda sounds hypocritical to me, but
want to make sure that I understand before I fully arrive at that
conclusion ... :)




Re: [HACKERS] beta testing version

2000-12-03 Thread Hannu Krosing

Don Baccus wrote:
 
 At 04:42 AM 12/3/00 +, Thomas Lockhart wrote:
  This statement of yours kinda belittles the work done over the past
  few years by volunteers.
 
 imho it does not,
 
 Sure it does.  You in essence are saying that "advanced replication is so
 hard that it could only come about if someone were willing to finance a
 PROPRIETARY solution.  The PG developer group couldn't manage it if
 it were done Open Source".
 
 In other words, it is much harder than any of the work done by the
 same group of people before they started working on proprietary
 versions.
 
 And that the only way to get them doing their best work is to put them
 on proprietary, or "semi-proprietary" projects, though 24 months from
 now, who's going to care?  You've opened the door to IB prominence, not
 only shooting PG's open source purity down in flames, but probably PG, Inc's
 as well - IF IB can figure out their political problems.

 IB, as it stands, is a damned good product, in many ways ahead of PG.  You're
 giving them life by this approach, which is a kind of bizarre business strategy.
 

You (and others ;) may also be interested in SAPDB (SAP's version of Adabas),
which is soon to be released under GPL.  It is already downloadable for free
use from www.sapdb.org

-
Hannu



Re: [HACKERS] beta testing version

2000-12-03 Thread Hannu Krosing

mlw wrote:
 
 Thomas Lockhart wrote:
 
  As soon as you find a business model which does not require income, let
  me know. The .com'ers are trying it at the moment, and there seems to be
  a few flaws... ;)
 
 While I have not contributed anything to Postgres yet, I have
 contributed to other environments. The prospect that I could create a
 piece of code, spend weeks/years of my own time on something and some
 entity can come along, take what I've written and create a product which
 is better for it, and then not share back is offensive. Under GPL it is
 illegal. (Postgres should try to move to GPL)

I think that forbidding anyone else from profiting from your work is also
somewhat obscene ;)

The whole idea of open source is that in the open, ideas mature faster and
bugs are found faster.

 I am working on a full-text search engine for Postgres. A really fast
 one, something better than anything else out there.

Isn't everybody ;)

 It combines the power and scalability of a web search engine, with 
 the data-mining capabilities of SQL.

Are you doing it in a fully open-source fashion, or just planning to release
it as OS "when it somewhat works"?

 If I write this extension to Postgres, and release it, is it right that
 a business can come along, add a few things here and there and introduce
 a new closed source product on what I have written? That is certainly 
 not what I intend. 

If your intention is to later cash in on proprietary uses of your code 
you should of course use GPL.

 My intention was to honor the people before me for
 providing the rich environment which is Postgres. I have made real money
 using Postgres in a work environment. The time I would give back more
 than covers MSSQL/Oracle licenses.
 
 Open source is a social agreement, not a business model.

Not one but many (and btw. incompatible) social agreements.

 If you break the social agreement for a business model, 

You are free to put your additions under GPL; it is just a tradition in the
PG community not to contaminate the core with anything less free than BSD
(and yes, forcing your idea of freedom on other people qualifies as "less
free" ;)

 the business model will fail
 because the society which fundamentally created the product you wish to
 sell will crumble from mistrust (or shun you). In short, it is wrong to
 sell the work of others without proper compensation and the full
 agreement of everyone that has contributed. If you don't get that, get
 out of the open source market now.

So now a social contract is a market? I _am_ confused.

 That said, there is a long standing business model which is 100%
 compatible with Open Source and it is of the lowly 'VAR.' You do not
 think for one minute that an Oracle VAR would dare to add features to
 Oracle and make their own SQL do you?

But if Oracle were released under BSD license, it might benefit both the 
VAR and the customer to do so under some circumstances.

 As a PostgreSQL "VAR" you are in a better position than any other VAR.
 You get to partner in the code development process. (You couldn't ask
 Oracle to add a feature and expect to keep it to yourself, could you?)

You could ask another VAR to do that if you yourself are incapable/don't 
have time, etc.

And of course I can keep it to myself even if done by Oracle.
What I can't do is forbid others from having it too.

 I know this is a borderline rant, and I am sorry, but I think it is very
 important that the integrity of open source be preserved at 100% because
 it is a very slippery slope, and we are all surrounded by the temptation
 to cheat the spirit of open source "just a little" for short term gain.

Do you mean that anyone who has contributed to an open-source project should
be forbidden from doing any closed-source development?


---
Hannu



Re: [HACKERS] beta testing version

2000-12-03 Thread Hannu Krosing

The Hermit Hacker wrote:
 
 On Sat, 2 Dec 2000, Don Baccus wrote:
 
   I *am* one of those volunteers
 
  Yes, I well remember you screwing up PG 7.0 just before beta, without bothering
  to test your code, and leaving on vacation.
 
  You were irresponsible then, and you're being irresponsible now.
 
 Okay, so let me get this one straight ... it was irresponsible for him to
 put code in that was broken the last time, but it wouldn't be
 irresponsible for us to release code that we don't feel is ready this
 time? *raised eyebrow*
 
 Just want to get this straight, as it kinda sounds hypocritical to me, but
 want to make sure that I understand before I fully arrive at that
 conclusion ... :)

IIRC, this thread woke up on someone complaining about PostgreSQL Inc
promising to release some code for replication in mid-October, and asking for
confirmation that this is just a schedule slip and that the project is still
going on and going to be released as open source.

What seems to be the answer is: "NO, we will keep the replication code
proprietary".

I have not seen this answer myself, but I've got this impression from the
contents of the whole discussion.

Do you know if this is the case ?

---
Hannu



Re: [HACKERS] beta testing version

2000-12-03 Thread mlw

Hannu Krosing wrote:
  I know this is a borderline rant, and I am sorry, but I think it is very
  important that the integrity of open source be preserved at 100% because
  it is a very slippery slope, and we are all surrounded by the temptation
  to cheat the spirit of open source "just a little" for short term gain.
 
 Do you mean that anyone who has contributed to an opensource project
 should be forbidden from doing any closed-source development ?

No, not at all. At least for me, if I write code which is dependent on
the open source work of others, then hell yes, that work should also be
open source. That, to me, is the difference between right and wrong.

If you write a program which stands on its own, takes no work from
uncompensated parties, then you have the unambiguous right to do what
ever you want.

I honestly feel that it is wrong to take what others have shared and use
it for the basis of something you will not share, and I can't understand
how anyone could think differently.

-- 
http://www.mohawksoft.com



Re: [HACKERS] beta testing version

2000-12-03 Thread Trond Eivind Glomsrød

"Gary MacDougall" [EMAIL PROTECTED] writes:

 No offense Trond, if you were in on the Red Hat IPO from the start,
 you'd have to say those people made "good money".

I'm talking about the business as such, not the IPO where the price
went stratospheric (we were priced like we were earning 1 or 2 billion
dollars a year, which was kind of weird).


-- 
Trond Eivind Glomsrød
Red Hat, Inc.



Re: [HACKERS] beta testing version

2000-12-03 Thread mlw

Gary MacDougall wrote:
 
  No, not at all. At least for me, if I write code which is dependent on
  the open source work of others, then hell yes, that work should also be
  open source. That, to me, is the difference between right and wrong.
 
 
 Actually, you're not legally bound to anything if you write "new" additional
 code, even if it's dependent on something.  You could consider it
 "proprietary" and charge for it.  There are tons of these things going on
 right now.
 
 Having a dependency on an open source product/code/functionality does not
 make one bound to make their code "open source".
 
  If you write a program which stands on its own, takes no work from
  uncompensated parties, then you have the unambiguous right to do what
  ever you want.
 
 Thats a given.
 
  I honestly feel that it is wrong to take what others have shared and use
  it for the basis of something you will not share, and I can't understand
  how anyone could think differently.
 
 The issue isn't "fairness", the issue really is trust.  And from what I'm
 seeing, like anything else in life, if you rely solely on trust when money
 is involved, the system will fail--eventually.
 
 sad... isn't it?

That's why, as bad as it is, GPL is the best answer.


-- 
http://www.mohawksoft.com



Re: [HACKERS] beta testing version

2000-12-03 Thread Gary MacDougall

 No, not at all. At least for me, if I write code which is dependent on
 the open source work of others, then hell yes, that work should also be
 open source. That, to me, is the difference between right and wrong.


Actually, you're not legally bound to anything if you write "new" additional
code, even if it's dependent on something.  You could consider it
"proprietary" and charge for it.  There are tons of these things going on
right now.

Having a dependency on an open source product/code/functionality does not
make one bound to make their code "open source".

 If you write a program which stands on its own, takes no work from
 uncompensated parties, then you have the unambiguous right to do what
 ever you want.

That's a given.

 I honestly feel that it is wrong to take what others have shared and use
 it for the basis of something you will not share, and I can't understand
 how anyone could think differently.

The issue isn't "fairness", the issue really is trust.  And from what I'm
seeing, like anything else in life, if you rely solely on trust when money
is involved, the system will fail--eventually.

sad... isn't it?


 --
 http://www.mohawksoft.com





Re: [HACKERS] beta testing version

2000-12-03 Thread Nathan Myers

On Sun, Dec 03, 2000 at 05:17:36PM -0500, mlw wrote:
 ... if I write code which is dependent on
 the open source work of others, then hell yes, that work should also be
 open source. That, to me, is the difference between right and wrong.

This is short and I will say no more:

The entire social contract around PostgreSQL is written down in the 
license.  Those who have contributed to the project (are presumed to) 
have read it and agreed to it before submitting their changes.  Some
people have contributed intending someday to fold the resulting code 
base into their proprietary product, and carefully checked to ensure 
the license would allow it.  Nobody has any legal or moral right to 
impose extra use restrictions, on their own code or (especially!) on 
anybody else's.

If you would like to place additional restrictions on your own 
contributions, you can:

1. Work on other projects.  (Adabas will soon be GPL, but you can 
   start now.  Others are coming, too.)  There's always plenty of 
   work to be done on Free Software.

2. Fork the source base, add your code, and release the whole thing 
   under GPL.  You can even fold in changes from the original project, 
   later.  (Don't expect everybody to get along, afterward.)  A less
   drastic alternative is to release GPL'd patches.

3. Grin and bear it.  Greed is a sin, but so is envy.

Flame wars about licensing mainly distract people from writing code.  
How would *you* like the time spent?  

Nathan Myers
[EMAIL PROTECTED]




Re: [HACKERS] beta testing version

2000-12-03 Thread Jan Wieck

Adam Haberlach wrote:
In any case, can we create pgsql-politics so we don't have to go over
 this issue every three months?  Can we create pgsql-benchmarks while we
 are at it, to take care of the other thread that keeps popping up?

pgsql-yawn, where any of them can happen as often and long as
they want.


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





Re: [HACKERS] beta testing version

2000-12-03 Thread Tom Lane

 mlw wrote:  [heavily edited]
 No, not at all. At least for me, if I write code which is dependent on
 the open source work of others, then hell yes, that work should also be
 open source. That, to me, is the difference between right and wrong.
 I honestly feel that it is wrong to take what others have shared and use
 it for the basis of something you will not share, and I can't understand
 how anyone could think differently.

You're missing the point almost completely.  We've been around on this
GPL-vs-BSD discussion many many many times before, and the discussion
always ends up at the same place: we aren't changing the license.

The two key reasons (IMHO) are:

1. The original code base is BSD.  We do not have the right to
unilaterally relabel that code as GPL.  Maybe we could try to say that
all additions/changes after a certain date are GPL, but that'd become a
hopeless mess very shortly; how would you keep track of what was which?
Not to mention the fact that a mixed-license project would not satisfy
GPL partisans anyway.

2. Since Postgres is a database, and the vast majority of uses for
databases are business-related, we have to have a license that
businesses will feel comfortable with.  One aspect of that comfort is
that they be able to do things like building proprietary applications
atop the database.  If we take a purist GPL approach, we'll just drive
away a lot of potential users and contributors.  (I for one wouldn't be
here today, most likely, if Postgres had been GPL --- my then company
would not have gotten involved with it.)

I have nothing against GPL; it's appropriate for some things.  But
it's not appropriate for *this* project, because of history and subject
matter.  We've done just fine with the BSD license and I do not see a
reason to think that GPL would be an improvement.

regards, tom lane



Re: [HACKERS] beta testing version

2000-12-03 Thread Peter Bierman

At 5:17 PM -0500 12/3/00, mlw wrote:
I honestly feel that it is wrong to take what others have shared and use
it for the basis of something you will not share, and I can't understand
how anyone could think differently.

Yeah, it really sucks when companies that are in business to make money by
creating solutions and support for end users take the hard work of
volunteers, commit resources to extending and enhancing that work, and make
that work more accessible to end users (for a fee).

Maybe it's unfair that the people at the bottom of that chain don't reap a percentage 
of the revenue generated at the top, but those people were free to read the license of 
the product they were contributing to.

Ironically, the GPL protects the future income of a programmer much better
than the BSD license, because under the GPL the original author can sell the
code to a commercial enterprise who otherwise would not have been able to use
it.  Even more ironically, the GPL doesn't prevent 3rd parties from feeding
at the trough as long as they DON'T extend and enhance the product.  (Though
Red Hat and friends donate work back to maintain community support.)

To me, Open Source is about admitting that the Computer Science field is in
its infancy, and the complex systems we're building today are the fundamental
building blocks of tomorrow's systems.  It is about exchanging control for
adoption, a trade-off that has millions of case studies.

Think Different,
-pmb

--
"Every time you provide an option, you're asking the user to make a decision.
 That means they will have to think about something and decide about it.
 It's not necessarily a bad thing, but, in general, you should always try to
 minimize the number of decisions that people have to make."
 http://joel.editthispage.com/stories/storyReader$51





Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sun, 3 Dec 2000, Hannu Krosing wrote:

 The Hermit Hacker wrote:
  
  On Sat, 2 Dec 2000, Don Baccus wrote:
  
I *am* one of those volunteers
  
   Yes, I well remember you screwing up PG 7.0 just before beta, without bothering
   to test your code, and leaving on vacation.
  
   You were irresponsible then, and you're being irresponsible now.
  
  Okay, so let me get this one straight ... it was irresponsible for him to
  put code in that was broken the last time, but it wouldn't be
  irresponsible for us to release code that we don't feel is ready this
  time? *raised eyebrow*
  
  Just want to get this straight, as it kinda sounds hypocritical to me, but
  want to make sure that I understand before I fully arrive at that
  conclusion ... :)
 
 IIRC, this thread woke up on someone complaining about PostgreSQl inc
 promising 
 to release some code for replication in mid-october and asking for
 confirmation 
 that this is just a schedule slip and that the project is still going on
 and 
 going to be released as open source.
 
 What seems to be the answer is: "NO, we will keep the replication code
 proprietary".
 
 I have not seen this answer myself, but i've got this impression from
 the contents 
 of the whole discussion.
 
 Do you know if this is the case ?

If this is the impression that someone gave, I am shocked ... Thomas
himself has already posted stating that it was a schedule slip on his
part.  Vadim did up the software days before the Oracle OpenWorld
conference, but it was a very rudimentary implementation.  At the show,
Thomas dove in to build a basic interface to it, and, as time permits, has
been working on packaging to get it into contrib before v7.1 is released
...

I've been trying to follow this thread, and seem to have missed where
someone arrived at the conclusion that we were proprietarizing (word?) this
... we do apologize that it didn't get out mid-October, but it is/was
purely a schedule slip ...




Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sun, 3 Dec 2000, mlw wrote:

 Hannu Krosing wrote:
   I know this is a borderline rant, and I am sorry, but I think it is very
   important that the integrity of open source be preserved at 100% because
   it is a very slippery slope, and we are all surrounded by the temptation
   to cheat the spirit of open source "just a little" for short term gain.
  
  Do you mean that anyone who has contributed to an opensource project
  should be forbidden from doing any closed-source development ?
 
 No, not at all. At least for me, if I write code which is dependent on
 the open source work of others, then hell yes, that work should also be
 open source. That, to me, is the difference between right and wrong.
 
 If you write a program which stands on its own, takes no work from
 uncompensated parties, then you have the unambiguous right to do what
 ever you want.
 
 I honestly feel that it is wrong to take what others have shared and use
 it for the basis of something you will not share, and I can't understand
 how anyone could think differently.





Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sun, 3 Dec 2000, Gary MacDougall wrote:

  If you write a program which stands on its own, takes no work from
  uncompensated parties, then you have the unambiguous right to do what
  ever you want.
 
 Thats a given.

okay, then now I'm confused ... neither SePICK nor erServer is derived
from uncompensated parties ... they work over top of PgSQL, but are not
integrated into it, nor have they required any changes to PgSQL in order to
make them work ...

... so, where is this whole outcry coming from?





Re: [HACKERS] beta testing version

2000-12-03 Thread Ross J. Reedstrom

On Sun, Dec 03, 2000 at 08:49:09PM -0400, The Hermit Hacker wrote:
 On Sun, 3 Dec 2000, Hannu Krosing wrote:
 
  
  IIRC, this thread woke up on someone complaining about PostgreSQl inc
  promising 
  to release some code for replication in mid-october and asking for
  confirmation 
  that this is just a schedule slip and that the project is still going on
  and 
  going to be released as open source.
  

That would be me asking the question, as a reply to Don's concern regarding
the 'proprietary extension on a 24 mo. release delay'.

  What seems to be the answer is: "NO, we will keep the replication code
  proprietary".
  
  I have not seen this answer myself, but i've got this impression from
  the contents 
  of the whole discussion.
  
  Do you know if this is the case ?
 
 If this is the impression that someone gave, I am shocked ... Thomas
 himself has already posted stating that it was a scheduale slip on his
 part.  

Actually, Thomas said:

Thomas Hmm. What has kept replication from happening in the past? It
Thomas is a big job and difficult to do correctly. It is entirely my
Thomas fault that you haven't seen the demo code released; I've been
Thomas packaging it to make it a bit easier to work with.

I noted the use of the words "demo code" rather than "core code". That
bothered (and still bothers) me, but I didn't reply at the time,
since there was already enough heat in this thread. I'll take your
interpretation to mean it's just a matter of semantics.

 [...] Vadim did up the software days before the Oracle OpenWorld
 conference, but it was a very rudimentary implementation.  At the show,
 Thomas dove in to build a basic interface to it, and, as time permits, has
 been working on packaging to get it into contrib before v7.1 is released
 ...
 
 I've been trying to follow this thread, and seem to have missed where
 someone arrived at the conclusion that we were proprietarizing(word?) this
 ... we do apologize that it didn't get out mid-October, but it is/was
 purely a scheduale slip ...
 

Mixture of the silent schedule slip on the core code, and the explicit
statement on the erserver.com page regarding the 'proprietary extensions'
with a delayed source release.

The biggest problem I see with having core developers making proprietary
extensions is the potential for conflict of interest when and if some of us
donate equivalent code to the core.  The core developers who have also done
proprietary versions will have to be very cautious when working on such code.
They're in a bind, with two parts.  First, they have obligations to their
employer and their employer's partners not to release the closed work early.
Second, they may end up ignoring such independent extensions, or even
actively excluding them from the core, in favor of their own code.  The core
developers _do_ have a bit of a track record of favoring each other's code
over external code, as is natural: we all trust work more from sources we
know better, especially when that source is ourselves.  But this favoritism
could work against the earliest possible open solution.

I'm still anxious to see the core patches needed to support replication.
Since you've leaked that they work going back to v6.5, I have a feeling
the approach may not be the one I was hoping for. 

Ross



Re: [HACKERS] beta testing version

2000-12-03 Thread Thomas Lockhart

 I'm still anxious to see the core patches needed to support replication.
 Since you've leaked that they work going back to v6.5, I have a feeling
 the approach may not be the one I was hoping for.

There are no core patches required to support replication. This has been
said already, but perhaps lost in the noise.

   - Thomas



Re: [HACKERS] beta testing version

2000-12-03 Thread Gary MacDougall

I'm agreeing with the people like SePICK and erServer.
I'm only being sort of cheeky in saying that they wouldn't have had a product
had it not been for the Open Source that they are leveraging off of.
Making money? I don't know what their plans are, but at some point I would
fully expect *someone* to make money.



- Original Message -
From: "The Hermit Hacker" [EMAIL PROTECTED]
To: "Gary MacDougall" [EMAIL PROTECTED]
Cc: "mlw" [EMAIL PROTECTED]; "Hannu Krosing" [EMAIL PROTECTED]; "Thomas
Lockhart" [EMAIL PROTECTED]; "Don Baccus"
[EMAIL PROTECTED]; "PostgreSQL Development"
[EMAIL PROTECTED]
Sent: Sunday, December 03, 2000 7:53 PM
Subject: Re: [HACKERS] beta testing version


 On Sun, 3 Dec 2000, Gary MacDougall wrote:

   If you write a program which stands on its own, takes no work from
   uncompensated parties, then you have the unambiguous right to do what
   ever you want.
 
  Thats a given.

 okay, then now I'm confused ... neither SePICK or erServer are derived
 from uncompensated parties ... they work over top of PgSQL, but are not
 integrated into them, nor have required any changes to PgSQL in order to
 make it work ...

 ... so, where is this whole outcry coming from?







Re: [HACKERS] beta testing version

2000-12-03 Thread Gary MacDougall

Correct me if I'm wrong, but in the last 3 years what company that you
know of didn't consider an IPO part of the "business as such"?  Most
tech companies that have been formed in the last 4-5 years have one
thing on the brain--IPO.  It's the #1 thing (sadly) that they care about.
I only wish these companies cared as much about *creating* and innovating
as they cared about going public...

g.

 No offense Trond, if you were in on the Red Hat IPO from the start,
 you'd have to say those people made "good money".

I'm talking about the business as such, not the IPO where the price
went stratospheric (we were priced like we were earning 1 or 2 billion
dollars year, which was kindof weird).


--
Trond Eivind Glomsrød
Red Hat, Inc.






Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sun, 3 Dec 2000, Ross J. Reedstrom wrote:

  If this is the impression that someone gave, I am shocked ... Thomas
  himself has already posted stating that it was a scheduale slip on his
  part.  
 
 Actually, Thomas said:
 
 Thomas Hmm. What has kept replication from happening in the past? It
 Thomas is a big job and difficult to do correctly. It is entirely my
 Thomas fault that you haven't seen the demo code released; I've been
 Thomas packaging it to make it a bit easier to work with.
 
 I noted the use of the words "demo code" rather than "core code". That
 bothered (and still bothers) me, but I didn't reply at the time,
 since there was already enough heat in this thread. I'll take your
 interpretation to mean it's just a matter of semantics.

there is nothing that we are developing at this date that is *core* code
... the "demo code" that we are going to be putting into contrib is a
simplistic version, and the first cut, of what we are developing ... like
everything in contrib, it will be hack-on-able, extendable, etc ...

 I'm still anxious to see the core patches needed to support
 replication. Since you've leaked that they work going back to v6.5, I
 have a feeling the approach may not be the one I was hoping for.

this is where the 'confusion' appears to be arising ... there are no
*patches* ... anything that will require patches to the core server will
almost have to be put into the open source, or we hit problems where
development continues without us ... what we are doing with replication
requires *zero* patches to the server; it is purely a third-party
application ...





Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sun, 3 Dec 2000, Gary MacDougall wrote:

 I'm agreeing with the people like SePICK and erServer.
 I'm only being sort of cheeky in saying that they wouldn't have had a
 product had
 it not been for the Open Source that they are leveraging off  of.

So, basically, if I hadn't pulled together Thomas, Bruce and Vadim 5 years
ago, when Jolly and Andrew finished their graduate thesis, and continued
to provide the resources required to bring PgSQL from v1.06 to now, we
wouldn't be able to use that as a basis for third party applications
... pretty much, ya, that sums it up ...

 - Original Message -
 From: "The Hermit Hacker" [EMAIL PROTECTED]
 To: "Gary MacDougall" [EMAIL PROTECTED]
 Cc: "mlw" [EMAIL PROTECTED]; "Hannu Krosing" [EMAIL PROTECTED]; "Thomas
 Lockhart" [EMAIL PROTECTED]; "Don Baccus"
 [EMAIL PROTECTED]; "PostgreSQL Development"
 [EMAIL PROTECTED]
 Sent: Sunday, December 03, 2000 7:53 PM
 Subject: Re: [HACKERS] beta testing version
 
 
  On Sun, 3 Dec 2000, Gary MacDougall wrote:
 
If you write a program which stands on its own, takes no work from
uncompensated parties, then you have the unambiguous right to do what
ever you want.
  
   Thats a given.
 
  okay, then now I'm confused ... neither SePICK or erServer are derived
  from uncompensated parties ... they work over top of PgSQL, but are not
  integrated into them, nor have required any changes to PgSQL in order to
  make it work ...
 
  ... so, where is this whole outcry coming from?
 
 
 
 
 

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org 




Re: [HACKERS] beta testing version

2000-12-03 Thread Ross J. Reedstrom

On Sun, Dec 03, 2000 at 08:53:08PM -0400, The Hermit Hacker wrote:
 On Sun, 3 Dec 2000, Gary MacDougall wrote:
 
   If you write a program which stands on its own, takes no work from
   uncompensated parties, then you have the unambiguous right to do what
   ever you want.
  
  Thats a given.
 
 okay, then now I'm confused ... neither SePICK or erServer are derived
 from uncompensated parties ... they work over top of PgSQL, but are not
 integrated into them, nor have required any changes to PgSQL in order to
 make it work ...
 
 ... so, where is this whole outcry coming from?

This paragraph from erserver.com:

eRServer development is currently concentrating on core, universal
functions that will enable individuals and IT professionals
to implement PostgreSQL ORDBMS solutions for mission critical
datawarehousing, datamining, and eCommerce requirements. These
initial developments will be published under the PostgreSQL Open
Source license, and made available through our sites, Certified
Platinum Partners, and others in PostgreSQL community.

led me (and many others) to believe that this was going to be a tightly
integrated service, requiring code in the PostgreSQL core, since that's the
normal use of 'core' around here.

Now that I know it's a completely external implementation, I feel bad about
griping about deadlines.  I _do_ wish I'd known about this _design choice_ a
bit earlier, as it impacts how I'll try to do some things with pgsql, but
that's my own fault for over-interpreting press releases and
pre-announcements.

Ross



Re: [HACKERS] beta testing version

2000-12-03 Thread The Hermit Hacker

On Sun, 3 Dec 2000, Ross J. Reedstrom wrote:

 On Sun, Dec 03, 2000 at 08:53:08PM -0400, The Hermit Hacker wrote:
  On Sun, 3 Dec 2000, Gary MacDougall wrote:
  
If you write a program which stands on its own, takes no work from
uncompensated parties, then you have the unambiguous right to do what
ever you want.
   
   Thats a given.
  
  okay, then now I'm confused ... neither SePICK or erServer are derived
  from uncompensated parties ... they work over top of PgSQL, but are not
  integrated into them, nor have required any changes to PgSQL in order to
  make it work ...
  
  ... so, where is this whole outcry coming from?
 
 This paragraph from erserver.com:
 
 eRServer development is currently concentrating on core, universal
 functions that will enable individuals and IT professionals
 to implement PostgreSQL ORDBMS solutions for mission critical
 datawarehousing, datamining, and eCommerce requirements. These
 initial developments will be published under the PostgreSQL Open
 Source license, and made available through our sites, Certified
 Platinum Partners, and others in PostgreSQL community.
 
 led me (and many others) to believe that this was going to be a tighly
 integrated service, requiring code in the PostgreSQL core, since that's the
 normal use of 'core' around here.
 
 Now that I know it's a completely external implementation, I feel bad about
 griping about deadlines. I _do_ wish I'd known this _design choice_ a bit
 earlier, as it impacts how I'll try to do some things with pgsql, but that's
 my own fault for over interpreting press releases and pre-announcements.

Apologies from our side as well ... failings in the English language and
choice of words on our side ... the last thing that we want to do is have
to maintain patches across multiple versions for stuff that is core to the
server ... Thomas/Vadim can easily correct me if I've missed something,
but to the best of my knowledge, from our many discussions, anything that
is *core* to the PgSQL server itself will always be released like
any other project (namely, tested and open) ... including hooks for any
proprietary projects ... the sanctity of the *core* server is *always*
foremost in our minds, no matter what other projects we are working on ...





Re: [HACKERS] beta testing version

2000-12-03 Thread Gary MacDougall

bingo.

Not just third-party apps, but think of all the vertical products that
include PG...
I'm right now wondering if TiVo uses it?

You have to think that PG will show up in some pretty interesting money
making products...

So yes, had you not got the ball rolling... well, you know what I'm saying.

g.

- Original Message -
From: "The Hermit Hacker" [EMAIL PROTECTED]
To: "Gary MacDougall" [EMAIL PROTECTED]
Cc: "mlw" [EMAIL PROTECTED]; "Hannu Krosing" [EMAIL PROTECTED]; "Thomas
Lockhart" [EMAIL PROTECTED]; "Don Baccus"
[EMAIL PROTECTED]; "PostgreSQL Development"
[EMAIL PROTECTED]
Sent: Sunday, December 03, 2000 10:18 PM
Subject: Re: [HACKERS] beta testing version


 On Sun, 3 Dec 2000, Gary MacDougall wrote:

  I'm agreeing with the people like SePICK and erServer.
  I'm only being sort of cheeky in saying that they wouldn't have had a
  product had
  it not been for the Open Source that they are leveraging off  of.

 So, basically, if I hadn't pulled together Thomas, Bruce and Vadim 5 years
 ago, when Jolly and Andrew finished their graduate thesis, and continued
 to provide the resources required to bring PgSQL from v1.06 to now, we
 wouldn't be able to use that as a basis for third party applications
 ... pretty much, ya, that sums it up ...

  - Original Message -
  From: "The Hermit Hacker" [EMAIL PROTECTED]
  To: "Gary MacDougall" [EMAIL PROTECTED]
  Cc: "mlw" [EMAIL PROTECTED]; "Hannu Krosing" [EMAIL PROTECTED]; "Thomas
  Lockhart" [EMAIL PROTECTED]; "Don Baccus"
  [EMAIL PROTECTED]; "PostgreSQL Development"
  [EMAIL PROTECTED]
  Sent: Sunday, December 03, 2000 7:53 PM
  Subject: Re: [HACKERS] beta testing version
 
 
   On Sun, 3 Dec 2000, Gary MacDougall wrote:
  
 If you write a program which stands on its own, takes no work from
 uncompensated parties, then you have the unambiguous right to do
what
 ever you want.
   
Thats a given.
  
   okay, then now I'm confused ... neither SePICK or erServer are derived
   from uncompensated parties ... they work over top of PgSQL, but are
not
   integrated into them, nor have required any changes to PgSQL in order
to
   make it work ...
  
   ... so, where is this whole outcry coming from?
  
  
 
 
 

 Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
 Systems Administrator @ hub.org
 primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org






Re: [HACKERS] beta testing version

2000-12-03 Thread Nathan Myers

On Fri, Dec 01, 2000 at 12:00:12AM -0400, The Hermit Hacker wrote:
 On Thu, 30 Nov 2000, Nathan Myers wrote:
  On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:
   v7.1 should improve crash recovery ...
   ... with the WAL stuff that Vadim is producing, you'll be able to
   recover up until the point that the power cable was pulled out of 
   the wall.
  
  Please do not propagate falsehoods like the above.  It creates
  unsatisfiable expectations, and leads people to fail to take
  proper precautions and recovery procedures.  
  
  After a power outage on an active database, you may have corruption
  at low levels of the system, and unless you have enormous redundancy
  (and actually use it to verify everything) the corruption may go 
  undetected and result in (subtly) wrong answers at any future time.
  
  The logging in 7.1 protects transactions against many sources of 
  database crash, but not necessarily against OS crash, and certainly
  not against power failure.  (You might get lucky, or you might just 
  think you were lucky.)  This is the same as for most databases; an
  embedded database that talks directly to the hardware might be able
  to do better.  
 
 We're talking about transaction logging here ... nothing gets written
 to it until completed ... if I take a "known to be clean" backup from
 the night before, restore that and then run through the transaction
 logs, my data should be clean, unless my tape itself is corrupt. If
 the power goes off half way through a write to the log, then that
 transaction wouldn't be marked as completed and won't roll into the
 restore ...

Sorry, wrong.  First, the only way that your backups could have any
relationship with the transaction logs is if they are copies of the
raw table files with the database shut down, rather than the normal 
"snapshot" backup.  

Second, the transaction log is not, as has been noted far too frequently
for Vince's comfort, really written atomically.  The OS has promised
to write it atomically, and given the opportunity, it will.  If you pull 
the plug, all promises are broken.
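
To make that concrete: the usual defence is to have each log record carry
its own length and checksum, so that a record which was only partly on disk
when the power died is detected at replay time instead of being trusted.
A minimal sketch of that idea follows; the record layout, the CRC choice,
and names like RecHeader and log_append are invented for illustration and
are not PostgreSQL's actual WAL code.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Each record carries its own length and a CRC-32 of its payload. */
typedef struct {
    uint32_t len;
    uint32_t crc;
} RecHeader;

/* Plain bitwise CRC-32 (IEEE 802.3 polynomial, reflected form). */
static uint32_t crc32_buf(const void *data, size_t n)
{
    const uint8_t *p = (const uint8_t *) data;
    uint32_t crc = 0xFFFFFFFFu;

    while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Append one record; a real log would also fsync() at each commit. */
static int log_append(FILE *log, const void *payload, uint32_t len)
{
    RecHeader h;

    h.len = len;
    h.crc = crc32_buf(payload, len);
    if (fwrite(&h, sizeof h, 1, log) != 1 ||
        fwrite(payload, len, 1, log) != 1)
        return -1;
    return fflush(log);
}

/* Replay: anything after the first bad CRC is treated as never written. */
static void log_replay(FILE *log)
{
    RecHeader h;

    while (fread(&h, sizeof h, 1, log) == 1) {
        char *buf = malloc(h.len);

        if (buf == NULL ||
            fread(buf, h.len, 1, log) != 1 ||
            crc32_buf(buf, h.len) != h.crc) {
            printf("torn or corrupt record; replay stops here\n");
            free(buf);
            return;
        }
        printf("replaying %u-byte record: %.*s\n",
               (unsigned) h.len, (int) h.len, buf);
        free(buf);
    }
    printf("end of log reached cleanly\n");
}

int main(void)
{
    FILE *log = fopen("demo.wal", "w+b");

    if (log == NULL)
        return 1;
    log_append(log, "UPDATE foo", 10);
    log_append(log, "COMMIT", 6);
    rewind(log);
    log_replay(log);
    fclose(log);
    return 0;
}

A checksum like this tells you *that* the tail of the log is garbage; it
does not, by itself, get the data back.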

 if a disk goes corrupt, I'd expect that the redo log would possibly
 have a problem with corruption .. but if I pull the plug, unless I've
 somehow damaged the disk, I would expect my redo log to be clean
 *and*, unless Vadim totally messed something up, if there is any
 corruption in the redo log, I'd expect that restoring from it would
 generate some red flags ...

You have great expectations, but nobody has done the work to satisfy
them, so when you pull the plug, I'd expect that you will be left 
in the dark, alone and helpless.

Vadim has done an excellent job on what he set out to do: optimize
transaction processing.  Designing and implementing a factor-of-twenty 
speed improvement on a professional-quality database engine demanded
great effort and expertise.  To complain that he hasn't also done 
a lot of other stuff would be petty.

Nathan Myers
[EMAIL PROTECTED]




Re: [HACKERS] beta testing version

2000-12-03 Thread Don Baccus

At 01:06 PM 12/3/00 +0100, Peter Eisentraut wrote:

 Open source software is a privilege,

I admit that I don't subscribe to Stallman's "source to software is a
right" argument.  That's far off my reality map.

 and nobody has the right to call someone "irresponsible"
 because they want to get paid for their work and don't choose to give away
 their code.

However, I do have the right to make such statements, just as you have the
right to disagree.  It's called the first amendment in my country.



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-03 Thread Don Baccus

At 03:35 PM 11/30/00 -0800, Nathan Myers wrote:
On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:
 
 v7.1 should improve crash recovery ...
 ... with the WAL stuff that Vadim is producing, you'll be able to
 recover up until the point that the power cable was pulled out of 
 the wall.

Please do not propagate falsehoods like the above.  It creates
unsatisfiable expectations, and leads people to fail to take
proper precautions and recovery procedures.  

Yeah, I posted similar stuff to the PHPbuilder forum in regard to
PG.

The logging in 7.1 protects transactions against many sources of 
database crash, but not necessarily against OS crash, and certainly
not against power failure.  (You might get lucky, or you might just 
think you were lucky.)  This is the same as for most databases; an
embedded database that talks directly to the hardware might be able
to do better.  

Let's put it this way ... Oracle, a transaction-safe DB with REDO
logging, has for a very long time implemented disk mirroring.  Now,
why would they do that if you could pull the plug on the processor
and depend on REDO logging to save you?

And even then you're expected to provide adequate power backup to
enable clean shutdown.

The real safety you get is that your battery sez "we need to shut
down!" but has enough power to let you.  Transactions in progress
aren't logged, but everything else can tank cleanly, and your DB is
in a consistent state.  

Mirroring protects you against (some) disk drive failures (but not
those that are transparent to the RAID controller/driver - if your
drive writes crap to the primary side of the mirror and no errors
are returned to the hardware/driver, the other side of the mirror
can faithfully reproduce them on the mirror!)
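
The only defence against that case is a checksum computed above the storage
layer and verified when blocks are read back -- essentially the CRC idea
being argued about elsewhere in this thread.  A minimal scanning sketch,
assuming an invented layout in which each 8KB block stores a 32-bit hash of
itself in its first four bytes (this is not PostgreSQL's on-disk format, and
the hash choice here is arbitrary):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 8192

/* FNV-1a, 32-bit: small, simple, good enough to demonstrate the idea. */
static uint32_t fnv1a(const void *data, size_t n)
{
    const uint8_t *p = (const uint8_t *) data;
    uint32_t h = 2166136261u;

    while (n--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

int main(int argc, char **argv)
{
    FILE *f;
    unsigned char block[BLOCK_SIZE];
    long blkno = 0, bad = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: %s datafile\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    while (fread(block, BLOCK_SIZE, 1, f) == 1) {
        uint32_t stored;

        /* By this sketch's convention, bytes 0-3 hold the block's hash
         * and are zeroed before the hash is computed. */
        memcpy(&stored, block, sizeof stored);
        memset(block, 0, sizeof stored);
        if (fnv1a(block, BLOCK_SIZE) != stored) {
            printf("block %ld: checksum mismatch (silent corruption?)\n", blkno);
            bad++;
        }
        blkno++;
    }
    printf("%ld blocks scanned, %ld bad\n", blkno, bad);
    fclose(f);
    return bad ? 2 : 0;
}

A mismatch from a scan like this is exactly the failure the RAID layer
cannot see, because the drive never reported an error in the first place.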

But since drives contain bearings and such that are much more likely
to fail than electronics (good electronics and good designs, at least),
mechanical failure's more likely and will be known to whatever is driving
the drive.  And you're OK then...



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-02 Thread Peter Eisentraut

Don Baccus writes:

 Exactly what is PostgreSQL, Inc doing in this area?

Good question...  See http://www.erserver.com/.

 I've not seen discussions about it here, and the two of the three most
 active developers (Jan and Tom) work for Great Bridge, not PostgreSQL,
 Inc...

Vadim Mikheev and Thomas Lockhart work for PostgreSQL, Inc., at least in
some form or another.  Which *might* be construed as a reason for their
perceived inactivity.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [HACKERS] beta testing version

2000-12-02 Thread Magnus Naeslund (f)

From: "Nathan Myers" [EMAIL PROTECTED]
 On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:
 
[snip]
 The logging in 7.1 protects transactions against many sources of
 database crash, but not necessarily against OS crash, and certainly
 not against power failure.  (You might get lucky, or you might just
 think you were lucky.)  This is the same as for most databases; an
 embedded database that talks directly to the hardware might be able
 to do better.


If PG had a type of tree-based logging filesystem that it handles itself,
wouldn't that be almost perfectly safe? I mean that you might lose some data
in a transaction, but the client never gets an OK anyway...
Like a combination of raw block I/O and a tux2-like fs.
Doesn't Oracle do its own block I/O?
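
Roughly, the copy-on-write idea being described: never overwrite the live
copy of a block in place; write the new version somewhere else, flush it,
and only then flip a small pointer record.  The sketch below is purely
illustrative (invented file names, a two-slot layout, and it assumes the
tiny pointer write itself hits the disk atomically); it is not how
PostgreSQL, Oracle, or tux2 actually manage their block I/O.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ 4096

/*
 * Two slots for the same logical block live in "demo.dat"; a tiny pointer
 * file says which slot is current.  A crash before the pointer flip leaves
 * the old version intact, a crash after it leaves the new version intact.
 */

static int read_current_slot(uint32_t *slot)
{
    int fd = open("current.ptr", O_RDONLY);

    *slot = 0;                      /* first run: slot 0 is "current" */
    if (fd < 0)
        return 0;
    if (pread(fd, slot, sizeof *slot, 0) != (ssize_t) sizeof *slot)
        *slot = 0;
    return close(fd);
}

static int write_current_slot(uint32_t slot)
{
    int fd = open("current.ptr", O_WRONLY | O_CREAT, 0644);

    if (fd < 0)
        return -1;
    /* One small write at a fixed offset, then force it to disk. */
    if (pwrite(fd, &slot, sizeof slot, 0) != (ssize_t) sizeof slot ||
        fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}

static int atomic_block_update(const void *newblock)
{
    uint32_t cur, next;
    int fd;

    read_current_slot(&cur);
    next = cur ^ 1u;                /* the slot that is NOT live */

    fd = open("demo.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    /* Step 1: write the new version beside the old one and flush it. */
    if (pwrite(fd, newblock, BLKSZ, (off_t) next * BLKSZ) != BLKSZ ||
        fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);
    /* Step 2: only now flip the pointer to publish the new version. */
    return write_current_slot(next);
}

int main(void)
{
    char block[BLKSZ];

    memset(block, 'x', sizeof block);
    if (atomic_block_update(block) != 0) {
        perror("update failed");
        return 1;
    }
    puts("new block version published; old version was never overwritten");
    return 0;
}

Pull the plug at any point and you are left with either the old version or
the new one, never a half-written mix -- which is the property being asked
about here, at the cost of doing all the block management yourself.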

Magnus

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Programmer/Networker [|] Magnus Naeslund
 PGP Key: http://www.genline.nu/mag_pgp.txt
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-







Re: [HACKERS] beta testing version

2000-12-02 Thread Don Baccus

At 05:42 PM 12/2/00 +0100, Peter Eisentraut wrote:
Don Baccus writes:

 Exactly what is PostgreSQL, Inc doing in this area?

Good question...  See http://www.erserver.com/.

"Advanced Replication and Distributed Information capabilities are also under 
development to meet specific
 business and competitive requirements for both PostgreSQL, Inc. and clients. Several 
of these enhanced
 PostgreSQL, Inc. developments may remain proprietary for up to 24 months, with 
availability limited to
 clients and partners, in order to assist us in recovering development costs and 
continue to provide funding
 for our other Open Source contributions. "

Boy, I can just imagine the uproar this statement will cause on Slashdot when
the world finds out about it.



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-02 Thread Ross J. Reedstrom

On Sat, Dec 02, 2000 at 11:31:37AM -0800, Don Baccus wrote:
 At 05:42 PM 12/2/00 +0100, Peter Eisentraut wrote:
 Don Baccus writes:
 
  Exactly what is PostgreSQL, Inc doing in this area?
 
 Good question...  See http://www.erserver.com/.
 
snip
 
 Boy, I can just imagine the uproar this statement will cause on Slashdot when
 the world finds out about it.
 

That one doesn't worry me as much as this quote from the press release at

http://www.pgsql.com/press/PR_5.html

"We expect to have the source code tested and ready to contribute to
the open source community before the middle of October. Until that time
we are considering requests from a number of development companies and
venture capital groups to join us in this process."

Where's the damn core code? I've seen a number of examples already of
people asking about remote access/replication function, with an eye
toward implementing it, and being told "PostgreSQL, Inc. is working
on that". It's almost Microsoftesque: preannounce future functionality
suppressing the competition.

I realize this is probably just the typical deadline slip that we see
on the public releases of pgsql itself, not a silent retraction of the
promise to release the code (especially since some of the same core
people are involved), but there is a difference: if I absolutely need
something that's only in CVS right now, I can bite the bullet and use
a snapshot server. With erserver, I'm stuck sitting on my hands, with a
promise of future functionality. Well, not really sitting on my hands:
working on other tasks, with the assumption that erserver will be there
soon. I'd rather not roll my own in an incompatible way, and have to
port or redo the custom parts.

So, now I'm going into a couple of critical funding decision-making
meetings in the next few weeks. I was planning on being able to promise
certain systems with concrete knowledge of what I will and won't be
able to provide, and how much custom coding will be needed. Now, if the
schedule slips much more, I won't. It's even possible that erserver's
implementation won't fit my needs at all, and I'll be back rolling my own.

I realize this sounds a bit ungrateful: they're giving away the code,
after all, and potentially saving me a lot of work.

It's just the contrast between the really open work on the core server,
and the lack of a peep when the promised deadlines have rolled past that
gets under my skin.

I'd be really happy with someone reiterating the commitment to an
open release, and letting us all know how badly the schedule has
slipped. Remember, we're all here to help! Get everyone stomping bugs
in code you're going to release soon anyway, and concentrate on the
quasi-proprietary extensions.

Ross



Re: [HACKERS] beta testing version

2000-12-02 Thread Don Baccus

At 03:51 PM 12/2/00 -0600, Ross J. Reedstrom wrote:

"We expect to have the source code tested and ready to contribute to
the open source community before the middle of October. Until that time
we are considering requests from a number of development companies and
venture capital groups to join us in this process."

Where's the damn core code? I've seen a number of examples already of
people asking about remote access/replication function, with an eye
toward implementing it, and being told "PostgreSQL, Inc. is working
on that". It's almost Microsoftesque: preannounce future functionality
suppressing the competition.

Well, this is just all 'round a bad precedent and an unwelcome path
for PostgreSQL, Inc to embark upon.

They've also embarked on one fully proprietary product (built on PG),
which means they're not an Open Source company, just a sometimes Open
Source company.

It's a bit ironic to learn about this on the same day I learned that
Solaris 8 is being made available in source form.  Sun's slowly "getting
it" and moving glacially towards Open Source, while PostgreSQL, Inc.
seems to be drifting in the opposite direction.

if I absolutely need
something that's only in CVS right now, I can bite the bullet and use
a snapshot server. 

This work might be released as Open Source, but it isn't an open development
scenario.  The core work's not available for public scrutiny, and the details
of what they're actually up to don't appear to be public either.

OK, they're probably funding Vadim's work on WAL, so the indictment's probably
not 100% accurate - but I don't know that.

I'd be really happy with someone reiterating the commitment to an
open release, and letting us all know how badly the schedule has
slipped. Remember, we're all here to help! Get everyone stomping bugs
in code you're going to release soon anyway, and concentrate on the
quasi-propriatary extensions.

Which makes me wonder, is Vadim's time going to be eaten up by working
on these quasi-proprietary extensions that the rest of us won't get
for two years unless we become customers of Postgres, Inc?

Will Great Bridge step to the plate and fund a truly open source alternative,
leaving us with a potential code fork?  If IB gets its political problems
under control and developers rally around it, two years is going to be a
long time to just sit back and wait for PG, Inc to release eRServer.

These developments are a major annoyance.



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-02 Thread Adam Haberlach

On Sat, Dec 02, 2000 at 03:51:15PM -0600, Ross J. Reedstrom wrote:
 On Sat, Dec 02, 2000 at 11:31:37AM -0800, Don Baccus wrote:
  At 05:42 PM 12/2/00 +0100, Peter Eisentraut wrote:
  Don Baccus writes:
  
   Exactly what is PostgreSQL, Inc doing in this area?
  
  Good question...  See http://www.erserver.com/.
  
 snip
  
  Boy, I can just imagine the uproar this statement will cause on Slashdot when
  the world finds out about it.
  
 
 That one doesn't worry me as much as this quote from the press release at
 
 http://www.pgsql.com/press/PR_5.html
 
 "We expect to have the source code tested and ready to contribute to
 the open source community before the middle of October. Until that time
 we are considering requests from a number of development companies and
 venture capital groups to join us in this process."
 
 Where's the damn core code? I've seen a number of examples already of
 people asking about remote access/replication function, with an eye
 toward implementing it, and being told "PostgreSQL, Inc. is working
 on that". It's almost Microsoftesque: preannounce future functionality
 suppressing the competition.

For What It's Worth: In the three years (has it really been that long?)
that I've been off and on Postgres mailing lists, I've probably seen at
least 100 requests for replication, with about 40 of them mentioning
implementing it themselves.

I'm pretty sure that being told "PostgreSQL Inc. is working on that" is
not the only thing stopping it from happening.  Most people just aren't up
to making it happen.

-- 
Adam Haberlach   |"California's the big burrito, Texas is the big
[EMAIL PROTECTED]  | taco ... and following that theme, Florida is
http://www.newsnipple.com| the big tamale ... and the only tamale that 
'88 EX500| counts any more." -- Dan Rather 



Re: [HACKERS] beta testing version

2000-12-02 Thread Tom Samplonius


On Sat, 2 Dec 2000, Don Baccus wrote:

...
 Will Great Bridge step to the plate and fund a truly open source alternative,
 leaving us with a potential code fork?  If IB gets its political problems
 under control and developers rally around it, two years is going to be a
 long time to just sit back and wait for PG, Inc to release eRServer.

  I doubt that.  There is an IB (Interbase) replication option today, but
you must purchase it.  That isn't so bad actually.  PostgreSQL looks to be
going that way too:  base functionality is open source, peripheral
companies make money selling extensions.

  Besides, simple master-slave replication is old news anyhow, and not
terribly useful.  Products like FrontBase (www.frontbase.com) have full
shared-nothing cluster support too (FrontBase is commercial).  Clustering
is a much better solution for redundancy purposes than replication.


Tom




Re: [HACKERS] beta testing version

2000-12-02 Thread Ross J. Reedstrom

On Sat, Dec 02, 2000 at 03:47:19PM -0800, Adam Haberlach wrote:
  
  Where's the damn core code? I've seen a number of examples already of
  people asking about remote access/replication function, with an eye
  toward implementing it, and being told "PostgreSQL, Inc. is working
  on that". It's almost Microsoftesque: preannounce future functionality
  suppressing the competition.

Well, I'll admit that this was getting a little over the top, especially
quoted out of context. ;-)

 
   For What It's Worth: In the three years (has it really been that long?)
 that I've been off and on Postgres mailing lists, I've probably seen at
 least 100 requests for replication, with about 40 of them mentioning
 implementing it themselves.
 
   I'm pretty sure that being told "PostgreSQL Inc. is working on that" is
 not the only thing stopping it from happening.  Most people just aren't up
 to making it happen.

Indeed. And that response has only been given for less than a year.
However, it is only in that same timespan that the functionality and
performance of the core server have gotten to the point where
replication/remote access is one of the immediately fruitful itches to
scratch. We'll see what happens in the future.

Ross



Re: [HACKERS] beta testing version

2000-12-02 Thread Thomas Lockhart

 PostgreSQL, Inc perhaps has that as a game plan.
 I'm not so much concerned about exactly what PG, Inc is planning to offer
 as a proprietary piece - I'm purist enough that I worry about what this
 signals for their future direction.

Hmm. What has kept replication from happening in the past? It is a big
job and difficult to do correctly. It is entirely my fault that you
haven't seen the demo code released; I've been packaging it to make it a
bit easier to work with.

 If PG, Inc starts doing proprietary chunks, and Great Bridge remains 100%
 dedicated to Open Source, I know who I'll want to succeed and prosper.

Let me be clear: PostgreSQL Inc. is owned and controlled by people who
have lived the Open Source philosophy, which is not typical of most
companies in business today. We are eager to show how this can be done
on a full time basis, not only as an avocation. And we are eager to do
this as part of the community we have helped to build.

As soon as you find a business model which does not require income, let
me know. The .com'ers are trying it at the moment, and there seems to be
a few flaws... ;)

  - Thomas



Re: [HACKERS] beta testing version

2000-12-02 Thread Adam Haberlach

On Sat, Dec 02, 2000 at 07:32:14PM -0800, Don Baccus wrote:
 At 02:58 AM 12/3/00 +, Thomas Lockhart wrote:
  PostgreSQL, Inc perhaps has that as a game plan.
  I'm not so much concerned about exactly what PG, Inc is planning to offer
  as a proprietary piece - I'm purist enough that I worry about what this

.
.
.

 As soon as you find a business model which does not require income, let
 me know.
 
 Red herring, and you know it.  The question isn't whether or not your business
 generates income, but how it generates income.

So far, Open Source doesn't.  The VA Linux IPO made ME some income,
but I'm not sure that was part of their plan...

 Your comment is the classic one tossed out by closed-source, proprietary
 software advocates who dismiss open source software out-of-hand.  
 
 Couldn't you think of something better, at least?  Like ... something 
 original?
 
  The .com'ers are trying it at the moment, and there seems to be
 a few flaws... ;)
 
 That's a horrible analogy, and I suspect you know it, but at least it is
 original.

It wasn't an analogy.

In any case, can we create pgsql-politics so we don't have to go over
this issue every three months?  Can we create pgsql-benchmarks while we
are at it, to take care of the other thread that keeps popping up?

-- 
Adam Haberlach   |"California's the big burrito, Texas is the big
[EMAIL PROTECTED]  | taco ... and following that theme, Florida is
http://www.newsnipple.com| the big tamale ... and the only tamale that 
'88 EX500| counts any more." -- Dan Rather 



Re: [HACKERS] beta testing version

2000-12-02 Thread Thomas Lockhart

 This statement of yours kinda belittles the work done over the past
 few years by volunteers.

imho it does not, and if somehow you can read that into it then you have
a much different understanding of language than I. I *am* one of those
volunteers, and know that the hundreds of hours I have contributed are
only a small part of the whole.

My discussion on this is over; apologies to others for helping to waste
bandwidth :(

I'll be happy to continue it next over some beers, which is a much more
appropriate setting.

- Thomas



Re: [HACKERS] beta testing version

2000-12-02 Thread Ron Chmara

Thomas Lockhart wrote:
 
  PostgreSQL, Inc perhaps has that as a game plan.
  I'm not so much concerned about exactly what PG, Inc is planning to offer
  as a proprietary piece - I'm purist enough that I worry about what this
  signals for their future direction.
 Hmm. What has kept replication from happening in the past? It is a big
 job and difficult to do correctly.

Well, this has nothing whatsoever to do with open or closed source. Linux
and FreeBSD are much larger, much harder to do correctly, as they are supersets
of thousands of open source projects. Complexity is not relative to licensing.

  If PG, Inc starts doing proprietary chunks, and Great Bridge remains 100%
  dedicated to Open Source, I know who I'll want to succeed and prosper.
 Let me be clear: PostgreSQL Inc. is owned and controlled by people who
 have lived the Open Source philosophy, which is not typical of most
 companies in business today.

That's one of the reasons why it's worked... open source meant open
contribution, open collaboration, open bug fixing. The price of admission
was doing your own installs, service, support, and giving something back

PG, I assume, is pretty much the same as most open source projects, massive
amounts of contribution shepherded by one or two individuals.

 We are eager to show how this can be done
 on a full time basis, not only as an avocation. And we are eager to do
 this as part of the community we have helped to build.
 As soon as you find a business model which does not require income, let
 me know. The .com'ers are trying it at the moment, and there seems to be
 a few flaws... ;)

Well, whether or not a product is open, or closed, has very little
to do with commercial success. Heck, the entire IBM PC spec was open, and
that certainly didn't hurt Dell, Compaq, etc.; the genie coming out
of the bottle _only_ hurt IBM. In this case, however, the genie's been
out for quite a while.

BUT:
People don't buy a product because it's open, they buy it because it offers
significant value above and beyond what they can do *without* paying for
a product. Linus didn't start a new kernel out of some idealistic mantra
of freeing the world, he was broke and wanted a *nix-y OS. Years later,
the product has grown massively. Those who are profiting off of it are
unrelated to the code and to most of the developers. Why is this?

As it is, any company trying to make a closed version of an open source
product has some _massive_ work to do. Manuals. Documentation. Sales.
Branding. Phone support lines. Legal departments/Lawsuit prevention. Figuring
out how to prevent open source from stealing the thunder by duplicating
features. And building a _product_.

Most Open Source projects are not products, they are merely code, and some
horrid documentation, and maybe some support. The companies making money
are not making better code, they are making better _products_.

And I really haven't seen much in the way of full featured products, complete
with printed docs, 24 hour support, tutorials, wizards, templates, a company
to sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.

Want to make money from open source? Well, you have to find, or build,
a _product_. Right now, there are no OS db products that can compare to oh,
an Oracle product, a MSSQL product. There may be superior code, but that
doesn't make a difference in business. Business has very little to do
with building the perfect mousetrap, if nobody can easily use it.

-Bop
--
Brought to you from boop!, the dual boot Linux/Win95 Compaq Presario 1625
laptop, currently running RedHat 6.1. Your bopping may vary.



Re: [HACKERS] beta testing version

2000-12-02 Thread Peter Bierman

And I really haven't seen much in the way of full featured products, complete
with printed docs, 24 hour support, tutorials, wizards, templates, a company
to sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.

Mac OS X.

;-)

-pmb

--
[EMAIL PROTECTED]

"4 out of 5 people with the wrong hardware want to run Mac OS X because..."
http://www.newertech.com/oscompatibility/osxinfo.html





Re: [HACKERS] beta testing version

2000-12-02 Thread Don Baccus

At 09:56 PM 12/2/00 -0700, Ron Chmara wrote:
...

And I really haven't seen much in the way of full featured products, complete
with printed docs, 24 hour support, tutorials, wizards, templates, a company
to sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.

Want to make money from open source? Well, you have to find, or build,
a _product_. Right now, there are no OS db products that can compare to oh,
an Oracle product, a MSSQL product. There may be superior code, but that
doesn't make a difference in business. Business has very little to do
with building the perfect mousetrap, if nobody can easily use it.

Which of course is the business model - certainly not a "zero revenue" model
as Thomas arrogantly suggests - that OSS service companies are following.

They provide the cocoon around the code.

I buy RH releases from Fry's.  Yes, I could download, but the price is such
that I'd rather just go buy the damned release CDs.  I don't begrudge it,
they're providing me a real SERVICE, saving me time, which saves me dollars
in opportunity costs (given my $200/hr customer billing rate).  They make
money by publishing releases, I still get all the sources.  We all win.

It is not a bad model.  

Question - if this model sucks, then certainly PG, Inc's net revenue last
year was greater than any true open source software company's?  I mean, let's
see that slam against the "zero revenue business model" be proven by showing
us some real numbers.  

Just what was PG, Inc's net revenue last year, and just how does their mixed
revenue model stack up against the OSS world?

(NOT the .com world, which is in a different business, no matter what Thomas
wants to claim).



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-02 Thread Don Baccus

At 04:42 AM 12/3/00 +, Thomas Lockhart wrote:
 This statement of yours kinda belittles the work done over the past
 few years by volunteers.

imho it does not,

Sure it does.  You in essence are saying that "advanced replication is so
hard that it could only come about if someone were willing to finance a
PROPRIETARY solution.  The PG developer group couldn't manage it if
it were done Open Source".

In other words, it is much harder than any of the work done by the
same group of people before they started working on proprietary 
versions.

And that the only way to get them doing their best work is to put them
on proprietary, or "semi-proprietary" projects, though 24 months from
now, who's going to care?  You've opened the door to IB prominence, not
only shooting PG's open source purity down in flames, but probably PG, Inc's
as well - IF IB can figure out their political problems.  

IB, as it stands, is a damned good product in many ways ahead of PG.  You're
giving them life by this approach, which is a kind of bizarre business strategy.

 I *am* one of those volunteers

Yes, I well remember you screwing up PG 7.0 just before beta, without bothering
to test your code, and leaving on vacation.  

You were irresponsible then, and you're being irresponsible now.



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-02 Thread Don Baccus

At 09:29 PM 12/2/00 -0800, Adam Haberlach wrote:
 Red herring, and you know it.  The question isn't whether or not your business
 generates income, but how it generates income.

   So far, Open Source doesn't.  The VA Linux IPO made ME some income,
but I'm not sure that was part of their plan...

VA Linux is a HARDWARE COMPANY.  They sell servers.  "We've engineered 2U
performance into a 1U box" is their current line.

Dell probably makes more money on their Linux server offerings (I have to
admit that donb.photo.net is running on one of their PowerEdge servers) than
VA Linux does.

If I can show you a HARDWARE COMPANY that is diving while selling MS NT servers,
will you agree that this proves that the closed source and open source models
both must be wrong, because HARDWARE COMPANIES based on each paradigm are
losing money???
  The .com'ers are trying it at the moment, and there seems to be
 a few flaws... ;)
 
 That's a horrible analogy, and I suspect you know it, but at least it is
 original.

   It wasn't an analogy.

Sure it is.  Read, damn it.  First he makes the statement that a business
based on open source is, by definition, a zero-revenue company; then he
raises the spectre of .com companies (how many of them are open source?)
as support for his argument.  

OK, it's not an analogy, it's a disassociation with reality.  Feel better?

   In any case, can we create pgsql-politics so we don't have to go over
this issue every three months? 

Maybe you don't care about the open source aspect of this, but as a user
with about 1500 Open Source advocates using my code, I do.  If IB comes 
forth in a fully Open Source state, my user base will insist I switch.

And I will.

And I'll stop telling the world that MySQL sucks, too.  Or at least that
they suck worse than the PG world :)

There is risk here.  It isn't so much in the fact that PostgreSQL, Inc
is doing a couple of modest closed-source things with the code.  After
all, the PG community has long acknowledged that the BSD license would
allow others to co-opt the code and commercialize it with no obligations.

It is rather sad to see PG, Inc. take the first step in this direction.

How long until the entire code base gets co-opted?

(Yeah, that's extremist, but seeing PG, Inc. lay down the formal foundation
for such co-opting by taking the first step might well make the potential
reality become real.  It certainly puts some of the long-term developers
in no position to argue against such a co-opted snatch of the code).

I have to say I'm feeling pretty silly about raising such an effort to
increase PG awareness in mindshare vs. MySQL.  I mean, if PG, Inc's 
efforts somehow delineate the hopes and goals of the PG community, I'm
fairly disgusted.  





- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-02 Thread Vadim Mikheev

 There is risk here.  It isn't so much in the fact that PostgreSQL, Inc
 is doing a couple of modest closed-source things with the code.  After
 all, the PG community has long acknowledged that the BSD license would
 allow others to co-opt the code and commercialize it with no obligations.
 
 It is rather sad to see PG, Inc. take the first step in this direction.
 
 How long until the entire code base gets co-opted?

I totally missed your point here. How is closing the source of ERserver related
to closing the code of the PostgreSQL DB server? Let me clear things up:

1. ERserver isn't based on WAL. It will work with any version >= 6.5

2. WAL was partially sponsored by my employer, Sectorbase.com,
not by PG, Inc.

Vadim





Re: [HACKERS] beta testing version

2000-12-02 Thread Prasanth A. Kumar

Don Baccus [EMAIL PROTECTED] writes:

 At 04:42 AM 12/3/00 +, Thomas Lockhart wrote:
  This statement of yours kinda belittles the work done over the past
  few years by volunteers.
 
 imho it does not,
 
 Sure it does.  You in essence are saying that "advanced replication is so
 hard that it could only come about if someone were willing to finance a
 PROPRIETARY solution.  The PG developer group couldn't manage it if
 it were done Open Source".
snip
 
 - Don Baccus, Portland OR [EMAIL PROTECTED]
   Nature photos, on-line guides, Pacific Northwest
   Rare Bird Alert Service and other goodies at
   http://donb.photo.net.

Mr. Baccus,

It is funny how you rant and rave about the importance of opensource
and how Postgresql Inc. making a non-opensource product is bad. Yet I
go to your website, which is full of photographs, and you make a big
deal about how people should not steal your photographs and how someone
must buy a commercial license to use them. That doesn't sound very
'open-source' to me! Why don't you practice what you preach and allow
redistribution of those photographs?

-- 
Prasanth Kumar
[EMAIL PROTECTED]



Re: [HACKERS] beta testing version

2000-12-01 Thread Nathan Myers

On Fri, Dec 01, 2000 at 01:54:23AM -0500, Alex Pilosov wrote:
 On Thu, 30 Nov 2000, Nathan Myers wrote:
  After a power outage on an active database, you may have corruption
  at low levels of the system, and unless you have enormous redundancy
  (and actually use it to verify everything) the corruption may go 
  undetected and result in (subtly) wrong answers at any future time.

 Nathan, why are you so hostile against postgres? Is there an ax to grind?

Alex, please don't invent enemies.  It's clear what important features
PostgreSQL still lacks; over the next several releases these features
will be implemented, at great expense.  PostgreSQL is useful and usable
now, given reasonable precautions and expectations.  In the future it
will satisfy greater (albeit still reasonable) expectations.

 The conditions under which WAL will completely recover your database:

 1) OS guarantees complete ordering of fsync()'d writes. (i.e. having two
 blocks A and B, A is fsync'd before B, it could NOT happen that B is on
 disk but A is not).
 2) on boot recovery, OS must not corrupt anything that was fsync'd.
 
 Rule 1) is met by all unixish OSes in existence. Rule 2) is met by some
 filesystems, such as reiserfs, tux2, and softupdates. 

No.  The OS asks the disk to write blocks in a certain order, but 
disks normally reorder writes.  Not only that; as noted earlier, 
typical disks report the write completed long before the blocks 
actually hit the disk.

A logging file system protects against the simpler forms of OS crash,
where the OS data-structure corruption is noticed before any more disk
writes are scheduled.  It can't (by itself) protect against disk
errors.  For critical applications, you must supply that protection
yourself, with (e.g.) battery-backed mirroring.
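
To make the assumption concrete, the ordering rule WAL depends on looks
roughly like the sketch below. This is an illustration only, not PostgreSQL
source; the function and its arguments are invented, and step 2 is exactly
where a write-caching disk quietly breaks the guarantee.

    /*
     * Minimal sketch (not PostgreSQL code) of the write-ahead rule: the
     * log record must reach stable storage before the data page it
     * describes is overwritten in place.
     */
    #include <sys/types.h>
    #include <unistd.h>

    int log_then_write(int wal_fd, int data_fd, off_t blkno,
                       const void *walrec, size_t reclen,
                       const void *page, size_t pagesz)
    {
        /* 1. Append the log record describing the change. */
        if (write(wal_fd, walrec, reclen) != (ssize_t) reclen)
            return -1;

        /* 2. Force the record down.  Everything rests on fsync() meaning
         *    "on the platter" -- a drive that acks writes out of its
         *    cache silently voids this step. */
        if (fsync(wal_fd) != 0)
            return -1;

        /* 3. Only now overwrite the data page; if we crash here, redo
         *    from the log reconstructs it. */
        if (pwrite(data_fd, page, pagesz, blkno * pagesz) != (ssize_t) pagesz)
            return -1;

        return 0;
    }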

  The logging in 7.1 protects transactions against many sources of 
  database crash, but not necessarily against OS crash, and certainly
  not against power failure.  (You might get lucky, or you might just 
  think you were lucky.)  This is the same as for most databases; an
  embedded database that talks directly to the hardware might be able
  to do better.  

The best possible database code can't overcome a broken OS or a broken 
disk.  It would be unreasonable to expect otherwise.

Nathan Myers
[EMAIL PROTECTED] 



Re: [HACKERS] beta testing version

2000-12-01 Thread Don Baccus

At 11:06 PM 11/30/00 -0800, Vadim Mikheev wrote:
 As for replaying logs against a restored snapshot dump... AIUI, a 
 dump records tuples by OID, but the WAL refers to TIDs.  Therefore, 
 the WAL won't work as a re-do log to recover your transactions 
 because the TIDs of the restored tables are all different.   

True for the current way of backing up - ie saving data in "external"
(sql) format. But there is another way - saving data files in their
natural (binary) format. WAL records may be applied to
such a dump, right?

Right.  That's what's missing in PG 7.1, the existence of tools to
make such backups.  

Probably the best answer to the "what does WAL get us, if it doesn't
get us full recoverability" questions is to simply say "it's a prerequisite
to getting full recoverability, PG 7.1 sets the foundation and later
work will get us there".



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-01 Thread Don Baccus

At 12:30 AM 12/1/00 -0800, Ian Lance Taylor wrote:
For example, I would hope that EMC
disk systems handle power loss gracefully.

They must, their marketing literature says so :)

  But if you buy ordinary
off the shelf PC hardware, you really do need to arrange for a UPS,
and some sort of automatic shutdown if the UPS is running low.

Which is what disk subsystems like those from EMC do for you.  They've
got built-in battery backup that lets them guarantee (assuming the
hardware's working right) that in the case of a power outage, all blocks
the operating system thinks have been written will in actuality be written
before the disk subsystem powers itself down.



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-01 Thread Vadim Mikheev

   As for replaying logs against a restored snapshot dump... AIUI, a 
   dump records tuples by OID, but the WAL refers to TIDs.  Therefore, 
   the WAL won't work as a re-do log to recover your transactions 
   because the TIDs of the restored tables are all different.   
  
  True for the current way of backing up - ie saving data in "external"
  (sql) format. But there is another way - saving data files in their
  natural (binary) format. WAL records may be applied to
  such a dump, right?
 
 But (AIUI) you can only safely/usefully copy those files when the 
 database is shut down.

No. You can read/save datafiles at any time. But block reads must be
"atomic" - no one should be able to change any part of a block while
we read it. Cp & tar are probably not suitable for this, but an internal
BACKUP command could do this.

Restoring from such a backup will be like recovering after pg_ctl -m i stop: all
data blocks are consistent and WAL records may be applied to them.
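
For illustration, a block-at-a-time copy for such an internal BACKUP command
might look roughly like this. None of it is existing PostgreSQL code:
LockBlock()/UnlockBlock() are invented names standing in for whatever
buffer-level interlock the backend would actually use, and BLCKSZ is just the
usual 8K page size.

    /*
     * Sketch of an "atomic" block copy for a hypothetical internal BACKUP
     * command.  Plain cp or tar has no per-block interlock, which is why
     * it can capture half-updated ("torn") blocks.
     */
    #include <stdio.h>

    #define BLCKSZ 8192

    extern void LockBlock(FILE *rel, long blkno);    /* hypothetical */
    extern void UnlockBlock(FILE *rel, long blkno);  /* hypothetical */

    int backup_block(FILE *rel, FILE *out, long blkno)
    {
        char buf[BLCKSZ];
        int  ok;

        /* Hold off writers for just this one block while we read it. */
        LockBlock(rel, blkno);
        ok = (fseek(rel, blkno * (long) BLCKSZ, SEEK_SET) == 0 &&
              fread(buf, 1, BLCKSZ, rel) == BLCKSZ);
        UnlockBlock(rel, blkno);

        if (!ok)
            return -1;

        /* The copy never contains a torn block, so WAL records can later
         * be applied to it as to a crashed-but-consistent cluster. */
        return fwrite(buf, 1, BLCKSZ, out) == BLCKSZ ? 0 : -1;
    }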

 Many people hope to run PostgreSQL 24x7x365.  With vacuuming, you 
 might just as well shut down afterward; but when that goes away 
 (in 7.2?), when will you get the chance to take your backups?  

The ability to shut down 7.2 will be preserved -:))
But it's not required for backup.

   To get replaying we need an "update log", something that might be
   in 7.2 if somebody does a lot of work.
  
  What did you mean by "update log"?
  Are you sure that WAL is not "update log" ? -:)
 
 No, I'm not sure.  I think it's possible that a new backup utility 
 could be written to make a hot backup which could be restored and 
 then replayed using the current WAL format.  It might be easier to
 add another log which could be replayed against the existing form
 of backups.  That last is what I called the "update log".

Consistent read of data blocks is easier to implement, sure.

 The point is, WAL now does one job superbly: maintain a consistent
 on-disk database image.  Asking it to do something else, such as 
 supporting hot BAR, could interfere with it doing its main job.  
 Of course, only the person who implements hot BAR can say.

There will be no interference because BAR will not ask WAL to do
anything beyond what it does right now - redo-ing changes.

Vadim





Re: [HACKERS] beta testing version

2000-12-01 Thread Don Baccus

At 11:02 AM 12/1/00 -0800, Nathan Myers wrote:
On Fri, Dec 01, 2000 at 06:39:57AM -0800, Don Baccus wrote:
 
 Probably the best answer to the "what does WAL get us, if it doesn't
 get us full recoverability" questions is to simply say "it's a 
 prerequisite to getting full recoverability, PG 7.1 sets the foundation 
 and later work will get us there".

Not to quibble, but for most of us, the answer to Don's question is:
"It gives a ~20x speedup over 7.0."  That's pretty valuable to some of us.
If it turns out to be useful for other stuff, that's gravy.

Oh, but given that power failures eat disks anyway, you can just run PG 7.0
with -F and be just as fast as PG 7.1, eh?  With no theoretical loss in
safety?  Where's your faith in all that doom and gloom you've been 
spreading? :) :)

You're right, of course, we'll get roughly -F performance while maintaining
a much more comfortable level of risk than you get with -F.



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-01 Thread Nathan Myers

On Fri, Dec 01, 2000 at 09:13:28PM +1100, Philip Warner wrote:
 
 You have raised some interesting issues regarding write-order etc. Can we
 assume that when fsync *returns*, all records are written - though not
 necessarily in the order that the IO's were executed?

Not with ordinary disks.  With a battery-backed disk server, yes.

Nathan Myers
[EMAIL PROTECTED]




Re: [HACKERS] beta testing version

2000-12-01 Thread Nathan Myers

On Fri, Dec 01, 2000 at 11:48:23AM -0800, Don Baccus wrote:
 At 11:09 AM 12/1/00 -0800, Nathan Myers wrote:
 On Fri, Dec 01, 2000 at 10:01:15AM +0100, Zeugswetter Andreas SB wrote:
 
  If you need to restore from offsite backup you lose transactions
  unless you transfer the WAL synchronously with every commit. 
 
 Currently the only way to avoid losing those transactions is by 
 replicating transactions at the application layer.  That is, the
 application talks to two different database instances, and enters
 transactions into both.  That's pretty hard to retrofit into an
 existing application, so you'd really rather have replication in
 the database.  Of course, that's something PostgreSQL, Inc. is also 
 working on.
 
 Recovery alone isn't quite that difficult.  You don't need to instantiate
 your database instance until you need to apply the archived transactions,
 i.e. after catastrophic failure destroys your db server.

True, it's sufficient for the application just to log the text of 
its updating transactions off-site.  Then, to recover, instantiate 
a database from a backup and have the application re-run its 
transactions.  

 You need to do two things:

(Remember, we're talking about what you could do *now*, with 7.1.
Presumably with 7.2 other options will open.)
 
 1. Transmit a consistent (known-state) snapshot of the database offsite.

 2. Synchronously transfer the WAL as part of every commit (question: do we
wait to log a "commit" locally until after the remote site acks that
it got the WAL?)
 
 Then you take a new machine, build a database out of the snapshot, and
 apply the archived redo logs and off you go.  If you get tired of saving
 oodles of redo archives, you make a new snapshot and accumulate the
 WAL from that point forward.
 
I don't know of any way to synchronously transfer the WAL, currently.

Anyway, I would expect doing it to interfere seriously with performance.
The "wait to log a 'commit' locally until after the remote site acks that
it got the WAL" is (akin to) the familiar two-phase commit.

Nathan Myers
[EMAIL PROTECTED]



Re: [HACKERS] beta testing version

2000-12-01 Thread Don Baccus

At 12:56 PM 12/1/00 -0800, Nathan Myers wrote:

(Remember, we're talking about what you could do *now*, with 7.1.
Presumably with 7.2 other options will open.)

Maybe *you* are :)  Seriously, I'm thinking out loud about future
possibilities.  Putting a lot of work into building up a temporary
solution on top of 7.1 doesn't make a lot of sense; anyone wanting
to work on such things ought to think about 7.2, which presumably will
beta sometime mid-2001 or so???

And I don't think there are 7.1 hacks that are simple ... could be
wrong, though.

I don't know of any way to synchronously transfer the WAL, currently.

Nope.

Anyway, I would expect doing it to interfere seriously with performance.

Yep.  Anyone here have experience with replication and Oracle or others?
I've heard from one source that setting it up reliably in Oracle and
getting the switch from the dead to the backup server working properly was
something of a DBA nightmare, but that's true of just about anything in
Oracle.  Once it was up, it worked reliably, though (also typical
of Oracle).

The "wait to log a 'commit' locally until after the remote site acks that
it got the WAL" is (akin to) the familiar two-phase commit.

Right.



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-11-30 Thread The Hermit Hacker


v7.1 should improve crash recovery for situations like this ... you'll
still have to do a recovery of the data on corruption of this magnitude,
but at least with the WAL stuff that Vadim is producing, you'll be able to
recover up until the point that the power cable was pulled out of the wall
...


On Wed, 29 Nov 2000, xuyifeng wrote:

 NO, I just tested how solid PgSQL is. I ran a program busy inserting records
 into a PG table, then I suddenly pulled the power from my machine. When I
 restarted PG, I could not insert any record into the database table; all
 backends were dead without any response (not a core dump). Note that I am
 using FreeBSD 4.2, which is rock solid; it was not an OS crash, it just lost
 power. We use WindowsNT and MSSQL on our production server, and before we
 accepted MSSQL we used this method to test whether it could endure this kind
 of strike. It was OK: all databases were safely recovered and we could
 continue our work. We are a stock exchange company; our servers store
 financial numbers worth millions of dollars, and we don't want any problems
 in this case. We are using a UPS, but a UPS is not everything; if you bet
 everything on the UPS, you must be an idiot.
 I know you must be an advocate of PG, but we are professional customers,
 corporate users; we store critical data in the database, not your garbage
 data.
 
 Regards,
 XuYifeng
 
 - Original Message - 
 From: Don Baccus [EMAIL PROTECTED]
 To: Ron Chmara [EMAIL PROTECTED]; Mitch Vincent [EMAIL PROTECTED]; 
[EMAIL PROTECTED]
 Sent: Wednesday, November 29, 2000 6:58 AM
 Subject: Re: [HACKERS] beta testing version
 
 
  At 03:25 PM 11/28/00 -0700, Ron Chmara wrote:
  Mitch Vincent wrote:
   
   This is one of the not-so-stomped boxes running PostgreSQL -- I've never
   restarted PostgreSQL on it since it was installed.
   12:03pm  up 122 days,  7:54,  1 user,  load average: 0.08, 0.11, 0.09
   I had some index corruption problems in 6.5.3 but since 7.0.X I haven't
   heard so much as a peep from any PostgreSQL backend. It's superbly stable on
   all my machines..
  
  I have a 6.5.x box at 328 days of active use.
  
  Crash "recovery" seems silly to me. :-)
  
  Well, not really ... but since our troll is a devoted MySQL user, it's a bit
  of a red-herring anyway, at least as regards his own server.
  
  You know, the one he's afraid to put Postgres on, but sleeps soundly at
  night knowing the mighty bullet-proof MySQL with its full transaction
  semantics, archive logging and recovery from REDO logs and all that
  will save him? :)
  
  Again ... he's a troll, not even a very entertaining one.
  
  
  
  
  - Don Baccus, Portland OR [EMAIL PROTECTED]
Nature photos, on-line guides, Pacific Northwest
Rare Bird Alert Service and other goodies at
http://donb.photo.net.
  
 

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org 




Re: [HACKERS] beta testing version

2000-11-30 Thread Don Baccus

At 07:02 PM 11/30/00 -0400, The Hermit Hacker wrote:

v7.1 should improve crash recovery for situations like this ... you'll
still have to do a recovery of the data on corruption of this magnitude,
but at least with the WAL stuff that Vadim is producing, you'll be able to
recover up until the point that the power cable was pulled out of the wall

No, WAL won't help if an actual database file is corrupted, say by a
disk drive hosing a block or portion thereof with zeros.  WAL-based
recovery at startup works on an intact database.

Still, in the general case you need real backup and recovery tools.
Then you can apply archives of REDOs to a backup made of a snapshot
and rebuild up to the last transaction.   As opposed to your last
pg_dump.
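
Conceptually the roll-forward is nothing more than the loop sketched below.
The routines are invented names (7.1 ships no such tools), but the shape is
the point: restore a binary snapshot, then redo every archived WAL record in
order, up to the last committed transaction.

    /*
     * Conceptual sketch of "snapshot + archived REDO" recovery.  All of
     * these routines are hypothetical; PostgreSQL 7.1 provides no such
     * backup-and-recovery tooling.
     */
    #define MAX_WAL_RECORD 8192

    extern void RestoreBinarySnapshot(const char *snapdir);   /* hypothetical */
    extern int  NextArchivedRecord(char *buf, int buflen);    /* hypothetical */
    extern void RedoRecord(const char *rec);                  /* hypothetical */

    void recover_to_last_transaction(const char *snapdir)
    {
        char rec[MAX_WAL_RECORD];

        /* Start from a consistent base image of the data files. */
        RestoreBinarySnapshot(snapdir);

        /* Roll forward: each archived record reapplies one logged change,
         * ending at the last complete record -- i.e. the last committed
         * transaction, not just the last pg_dump. */
        while (NextArchivedRecord(rec, (int) sizeof rec) == 0)
            RedoRecord(rec);
    }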

So what about mirroring (RAID 1)?  As the docs tell ya, that protects
you against one drive failing but not against power failure, which can
cause bad data to be written to both mirrors if both are actively 
writing when the plug is pulled.

Power failures are evil, face it! :)



- Don Baccus, Portland OR [EMAIL PROTECTED]
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.


