Re: [mail] Re: [HACKERS] Windows Build System

2003-01-26 Thread Al Sutton
There's a script at http://ptolemy.eecs.berkeley.edu/other/makevcgen which
may work. I've not tried it, but someone may want to give it a spin.

Combining it with the software at http://unxutils.sourceforge.net could give
us an MS build environment that relies only on a few installed support
programs, rather than on installing and using the whole Cygwin environment
for the build process.

Al.

- Original Message -
From: Bruce Momjian [EMAIL PROTECTED]
To: Curtis Faith [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, January 27, 2003 12:57 AM
Subject: [mail] Re: [HACKERS] Windows Build System



 Are there no already-written converters from Makefile to VC project
 files?

 ---------------------------------------------------------------------------

 Curtis Faith wrote:
  I (Curtis Faith) previously wrote:
The Visual C++ Workspaces and Projects files are actually
text files that have a defined format. I don't think the format is
published but it looks pretty easy to figure out.
 
  Hannu Krosing replied:
   will probably change between releases
 
  Even if the format changes, the environment always has a converter that
  updates the project and workspace files to the new format. In other
  words, Visual C++ 6.0 reads 5.0 projects, 7.0 reads 6.0, etc.
 
  The format is mostly a bunch of options specifications (which wouldn't
  get touched) followed by a set of named groups of source files. Even if
  the overall format changes, it will be much more likely to change in the
  options specifications than in the way the lists of source files are
  specified.
 
  A conversion tool, call it BuildWindowsProjectFile, would only need to:
 
  1) Read in the template file (containing all the options specifications
  and Visual C++ specific stuff: debug and release target options,
  libraries to link in, etc.). This part might change with new versions of
  the IDE and would be manually created by someone with Visual C++
  experience.
 
  2) Read in the PostgreSQL group/directory map, or alternatively just
  mirror the groups with the directories.
 
  3) Output the files from the PostgreSQL directories in the appropriate
  grouping according to the project format into the appropriate space in
  the template.
 
  An excerpt of the format follows:
 
  # Begin Group "Access"
  # Begin Group "Common"
  # PROP Default_Filter "cpp;c;cxx"
  # Begin Source File
 
  SOURCE=.\access\common\heaptuple.c
  # End Source File
  # Begin Source File
 
  SOURCE=.\access\common\indextuple.c
  # End Source File
 
  ... other files in access\common go here
  # End Group
 
  # Begin Group "Index"
 
  # PROP Default_Filter "cpp;c;cxx"
  # Begin Source File
 
  SOURCE=.\access\index\genam.c
  # End Source File
  # Begin Source File
 
  SOURCE=.\access\index\indexam.c
  # End Source File
 
  ... other files in access\index go here
  # End Group
 
  # End Group
 
 
  As you can see, this is a really simple format, and the direct
  folder/group mapping onto the PostgreSQL directories is pretty natural
  and probably the way to go.
 
  Using the approach I outline, it should be possible to have the Unix
  make system automatically run the BuildWindowsProjectFile tool whenever
  any makefile changes, so the Windows projects would stay up to date
  without additional work for Unix developers.
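 
  For illustration, here is a minimal sketch of the three steps in Perl
  (the tree already uses Perl for build scripts such as create_help.pl).
  The TEMPLATE-GROUPS marker, the group-per-directory naming, and the
  command line are assumptions invented for this sketch, not part of any
  existing tool:
 
  #!/usr/bin/perl
  # Hypothetical sketch of BuildWindowsProjectFile: read a hand-written
  # template (step 1), treat each source directory given on the command
  # line as a group (step 2), and splice generated "# Begin Group" blocks
  # into the template wherever a TEMPLATE-GROUPS marker line appears
  # (step 3).
  use strict;
  use warnings;
 
  my ($template, @dirs) = @ARGV;
  die "usage: $0 template.dsp dir ...\n" unless defined $template && @dirs;
 
  my $groups = '';
  foreach my $dir (@dirs) {
      (my $group = $dir) =~ s!.*[/\\]!!;   # group named after last path part
      $groups .= qq{# Begin Group "\u$group"\n};
      $groups .= qq{# PROP Default_Filter "cpp;c;cxx"\n};
      opendir(my $dh, $dir) or die "cannot read $dir: $!\n";
      foreach my $file (sort grep { /\.c$/ } readdir $dh) {
          (my $win = "$dir/$file") =~ s!/!\\!g;  # .dsp paths use backslashes
          $groups .= "# Begin Source File\n\nSOURCE=.\\$win\n# End Source File\n";
      }
      closedir $dh;
      $groups .= "# End Group\n";
  }
 
  # Copy the template through, replacing the marker line with the groups;
  # the option specifications pass through untouched.
  open(my $in, '<', $template) or die "cannot read $template: $!\n";
  while (my $line = <$in>) {
      if ($line =~ /^TEMPLATE-GROUPS\s*$/) { print $groups; }
      else                                 { print $line;   }
  }
  close $in;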
 
  Hannu Krosing also wrote:
   (also I dont think you can easily compile C source on a
   C# compiler) ;/
 
  I don't think it makes much sense to target a compiler that won't compile
  the source; therefore, if what you say is true, we shouldn't bother with
  targeting C#.
 
  - Curtis
 
 
 
  ---(end of broadcast)---
  TIP 6: Have you searched our list archives?
 
  http://archives.postgresql.org
 

 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

 http://archives.postgresql.org





---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [mail] Re: [HACKERS] Win32 port patches submitted

2003-01-21 Thread Al Sutton
I would back keeping the Windows-specific files, and if anything moving the
code away from using the UNIX-like programs. My reasoning is that the more
Unix tools you use for compiling, the less likely you are to attract
existing Windows-only developers to work on the code. I see the Win32 patch
as a great opportunity to attract more eyes to the code, and don't want the
opportunity to be lost because of the build requirements.

Al.

- Original Message -
From: Peter Eisentraut [EMAIL PROTECTED]
To: Jan Wieck [EMAIL PROTECTED]
Cc: Postgres development [EMAIL PROTECTED]
Sent: Tuesday, January 21, 2003 5:40 PM
Subject: [mail] Re: [HACKERS] Win32 port patches submitted


 Jan Wieck writes:

  I just submitted the patches for the native Win32 port of v7.2.1 on the
  patches mailing list.

 I'm concerned that you are adding all these *.dsp files for build process
 control.  This is going to be a burden to maintain.  Every time someone
 changes an aspect of how a file is built, the Windows port needs to be
 fixed.  And since the tool that operates on these files is probably not
 freely available, this will be difficult.  I don't see a strong reason not
 to stick with good old configure; make; make install.  You're already
 requiring various Unix-like tools, so you might as well require the full
 shell environment.  A lot of the porting aspects, such as substitute
 implementations of the C library functions, could be handled nearly for
 free using the existing infrastructure, and this whole patch would become
 much less intimidating.

 --
 Peter Eisentraut   [EMAIL PROTECTED]


 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?

 http://www.postgresql.org/users-lounge/docs/faq.html




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [mail] Re: [HACKERS] Update on replication

2002-12-18 Thread Al Sutton
The reason I favour GBorg is that VA Linux (who own SourceForge) have yet
to turn a profit, and so may look to trim some of their assets in order to
improve profitability at some point in the future.

I think it would be a bad move to shift everything to SourceForge, only to
find that a year or more down the line the site disappears or degrades to a
level where it causes problems for the project, and to lose the time we
could have spent building up the reputation of GBorg.

Al.


- Original Message -
From: Neil Conway [EMAIL PROTECTED]
To: Greg Copeland [EMAIL PROTECTED]
Cc: Alvaro Herrera [EMAIL PROTECTED]; Marc G. Fournier
[EMAIL PROTECTED]; Tatsuo Ishii [EMAIL PROTECTED]; [EMAIL PROTECTED];
PostgresSQL Hackers Mailing List [EMAIL PROTECTED]
Sent: Wednesday, December 18, 2002 2:55 AM
Subject: [mail] Re: [HACKERS] Update on replication


 On Tue, 2002-12-17 at 21:33, Greg Copeland wrote:
  I do agree, GBorg needs MUCH higher visibility!

 I'm just curious: why do we need GBorg at all? Does it offer anything
 that SourceForge, or a similar service does not offer?

 Especially given that (a) most other OSS projects don't have a site for
 related projects (unless you count something like CPAN, which is
 totally different) (b) GBorg is completely unknown to anyone outside the
 PostgreSQL community and even to many people within it...

 Cheers,

 Neil
 --
 Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC




 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [mail] Re: [HACKERS] Big 7.4 items - Replication

2002-12-15 Thread Al Sutton
Many thanks for the explanation. Could you explain to me how the order of
the writesets is decided for the following scenario?

If a transaction takes 50ms to reach one database from another, and for a
specific data element (called X) the following timeline occurs:

at 0ms, T1(X) is written to system A.
at 10ms, T2(X) is written to system B.

Where T1(X) and T2(X) conflict.

My concern is that if the Group Communication Daemon (gcd) is operating on
each database, a successful result for T1(X) will be returned to the client
talking to database A because T2(X) has not reached it, and thus no
conflict is known about, and a successful result is returned to the client
submitting T2(X) to database B because it is not aware of T1(X). This would
mean that the two clients believe both T1(X) and T2(X) completed
successfully, yet they cannot both do so due to the conflict.

Thanks,

Al.

- Original Message -
From: Darren Johnson [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: Bruce Momjian [EMAIL PROTECTED]; Jan Wieck
[EMAIL PROTECTED]; [EMAIL PROTECTED];
PostgreSQL-development [EMAIL PROTECTED]
Sent: Saturday, December 14, 2002 6:48 PM
Subject: Re: [mail] Re: [HACKERS] Big 7.4 items - Replication


 
 
 
 b) The Group Communication blob will consist of a number of processes
 which need to talk to all of the others to interrogate them for changes
 which may conflict with the current write being handled and then issue
 the transaction response. This is basically the two phase commit solution
 with the phases moved into the group communication process.
 
 I can see the possibility of using solution b and having fewer group
 communication processes than databases as an attempt to simplify things,
 but this would mean the loss of a number of databases if the machine
 running the group communication process for the set of databases is lost.
 
 The group communication system doesn't just run on one system.  For
 postgres-r using spread there is actually a spread daemon that runs on
 each database server.  It has nothing to do with detecting the conflicts.
 Its job is to deliver messages in a total order for writesets or simple
 order for commits, aborts, joins, etc.

 The detection of conflicts will be done at the database level, by backend
 processes.  The basic concept is that if all databases get the writesets
 (changes) in the exact same order, apply them in a consistent order, and
 avoid conflicts, then one copy serialization is achieved.  (one copy of
 the database replicated across all databases in the replica)

 I hope that explains the group communication system's responsibility.

 Darren


 



 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?

 http://www.postgresql.org/users-lounge/docs/faq.html



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [MLIST] Re: [mail] Re: [HACKERS] Big 7.4 items - Replication

2002-12-15 Thread Al Sutton
David,

This can be resolved by requiring that, for any transaction to succeed, the
entrypoint database must receive acknowledgements from n/2 + 0.5 (rounded
up to the nearest integer) databases, where n is the total number in the
replicant set. The following cases are shown as examples:

Total Number of databases: 2
Number required to accept transaction: 2

Total Number of databases: 3
Number required to accept transaction: 2

Total Number of databases: 4
Number required to accept transaction: 3

Total Number of databases: 5
Number required to accept transaction: 3

Total Number of databases: 6
Number required to accept transaction: 4

Total Number of databases: 7
Number required to accept transaction: 4

Total Number of databases: 8
Number required to accept transaction: 5

This would prevent two replicant sub-sets forming, because it is impossible
for both sets to have over 50% of the databases.
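
In code, the rule reduces to a strict majority. A minimal Perl sketch
(the function name is invented for illustration) that reproduces the
table above:

#!/usr/bin/perl
# Sketch of the majority rule above: n/2 + 0.5 rounded up to the nearest
# integer is the same as int(n/2) + 1 for a whole number of databases.
use strict;
use warnings;

sub required_acks {
    my ($n) = @_;
    return int($n / 2) + 1;    # strict majority of the replicant set
}

foreach my $n (2 .. 8) {
    printf "Total number of databases: %d  Required: %d\n",
           $n, required_acks($n);
}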

Applications would be able to detect when a database has dropped out of the
replicant set because the database could report a state of "Unable to
obtain majority consensus". This would allow applications to differentiate
between a database out of the set, where writing to other databases in the
set could yield a successful result, and "Unable to commit due to
conflict", where trying other databases is pointless.

Al.

- Original Message -
From: David Walker [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]; Darren Johnson
[EMAIL PROTECTED]
Cc: Bruce Momjian [EMAIL PROTECTED]; Jan Wieck
[EMAIL PROTECTED]; [EMAIL PROTECTED];
PostgreSQL-development [EMAIL PROTECTED]
Sent: Sunday, December 15, 2002 2:29 PM
Subject: Re: [MLIST] Re: [mail] Re: [HACKERS] Big 7.4 items - Replication


 Another concern I have with multi-master systems is what happens if the
 network splits in 2 so that 2 master systems are taking commits for 2
 separate sets of clients.  It seems to me that re-syncing the 2 databases
 upon the network healing would be a very complex or impossible task.

 On Sunday 15 December 2002 04:16 am, Al Sutton wrote:
  Many thanks for the explanation. Could you explain to me how the order
  of the writesets is decided for the following scenario?
 
  If a transaction takes 50ms to reach one database from another, and for
  a specific data element (called X) the following timeline occurs:
 
  at 0ms, T1(X) is written to system A.
  at 10ms, T2(X) is written to system B.
 
  Where T1(X) and T2(X) conflict.
 
  My concern is that if the Group Communication Daemon (gcd) is operating
  on each database, a successful result for T1(X) will be returned to the
  client talking to database A because T2(X) has not reached it, and thus
  no conflict is known about, and a successful result is returned to the
  client submitting T2(X) to database B because it is not aware of T1(X).
  This would mean that the two clients believe both T1(X) and T2(X)
  completed successfully, yet they cannot both do so due to the conflict.
 
  Thanks,
 
  Al.
 
  - Original Message -
  From: Darren Johnson [EMAIL PROTECTED]
  To: Al Sutton [EMAIL PROTECTED]
  Cc: Bruce Momjian [EMAIL PROTECTED]; Jan Wieck
  [EMAIL PROTECTED]; [EMAIL PROTECTED];
  PostgreSQL-development [EMAIL PROTECTED]
  Sent: Saturday, December 14, 2002 6:48 PM
  Subject: Re: [mail] Re: [HACKERS] Big 7.4 items - Replication
 
   b) The Group Communication blob will consist of a number of processes
   which need to talk to all of the others to interrogate them for changes
   which may conflict with the current write being handled and then issue
   the transaction response. This is basically the two phase commit
   solution with the phases moved into the group communication process.
   
   I can see the possibility of using solution b and having fewer group
   communication processes than databases as an attempt to simplify
   things, but this would mean the loss of a number of databases if the
   machine running the group communication process for the set of
   databases is lost.
  
   The group communication system doesn't just run on one system.  For
   postgres-r using spread there is actually a spread daemon that runs on
   each database server.  It has nothing to do with detecting the
   conflicts.  Its job is to deliver messages in a total order for
   writesets or simple order for commits, aborts, joins, etc.
  
   The detection of conflicts will be done at the database level, by
   backend processes.  The basic concept is that if all databases get the
   writesets (changes) in the exact same order, apply them in a consistent
   order, and avoid conflicts, then one copy serialization is achieved.
   (one copy of the database replicated across all databases in the
   replica)
  
   I hope that explains the group communication system's responsibility.
  
   Darren
  
  
  
  
  
  
   ---(end of broadcast)---
   TIP 5: Have you checked our extensive FAQ?
  
   http://www.postgresql.org/users-lounge/docs/faq.html

Re: [mail] Re: [HACKERS] Big 7.4 items - Replication

2002-12-15 Thread Al Sutton
Jonathan,

How do the group communication daemons on systems A and B agree that T2 is
after T1?

As I understand it, the operation is performed locally before being passed
on to the group for replication; when T2 arrives at system B, system B has
no knowledge of T1 and so can perform T2 successfully.

I am guessing that system B performs T2 locally, sends it to the group
communication daemon for ordering, and then receives it back from the group
communication order queue, after its position in the order queue has been
decided, before it is written to the database.

This would indicate to me that there is a single central point which
decides that T2 is after T1.

Is this true?

Al.

- Original Message -
From: Jonathan Stanton [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: Darren Johnson [EMAIL PROTECTED]; Bruce Momjian
[EMAIL PROTECTED]; Jan Wieck [EMAIL PROTECTED];
[EMAIL PROTECTED]; PostgreSQL-development
[EMAIL PROTECTED]
Sent: Sunday, December 15, 2002 5:00 PM
Subject: Re: [mail] Re: [HACKERS] Big 7.4 items - Replication


 The total order provided by the group communication daemons guarantees
 that every member will see the transactions/writesets in the same order.
 So both A and B will see that T1 is ordered before T2 BEFORE writing
 anything back to the client. So for both servers T1 will be completed
 successfully, and T2 will be aborted because of conflicting writesets.

 Jonathan

 On Sun, Dec 15, 2002 at 10:16:22AM -, Al Sutton wrote:
  Many thanks for the explanation. Could you explain to me how the order
  of the writesets is decided for the following scenario?
 
  If a transaction takes 50ms to reach one database from another, and for
  a specific data element (called X) the following timeline occurs:
 
  at 0ms, T1(X) is written to system A.
  at 10ms, T2(X) is written to system B.
 
  Where T1(X) and T2(X) conflict.
 
  My concern is that if the Group Communication Daemon (gcd) is operating
  on each database, a successful result for T1(X) will be returned to the
  client talking to database A because T2(X) has not reached it, and thus
  no conflict is known about, and a successful result is returned to the
  client submitting T2(X) to database B because it is not aware of T1(X).
  This would mean that the two clients believe both T1(X) and T2(X)
  completed successfully, yet they cannot both do so due to the conflict.
 
  Thanks,
 
  Al.
 
  - Original Message -
  From: Darren Johnson [EMAIL PROTECTED]
  To: Al Sutton [EMAIL PROTECTED]
  Cc: Bruce Momjian [EMAIL PROTECTED]; Jan Wieck
  [EMAIL PROTECTED]; [EMAIL PROTECTED];
  PostgreSQL-development [EMAIL PROTECTED]
  Sent: Saturday, December 14, 2002 6:48 PM
  Subject: Re: [mail] Re: [HACKERS] Big 7.4 items - Replication
 
 
   
   
   
   b) The Group Communication blob will consist of a number of processes
   which need to talk to all of the others to interrogate them for changes
   which may conflict with the current write being handled and then issue
   the transaction response. This is basically the two phase commit
   solution with the phases moved into the group communication process.
   
   I can see the possibility of using solution b and having fewer group
   communication processes than databases as an attempt to simplify
   things, but this would mean the loss of a number of databases if the
   machine running the group communication process for the set of
   databases is lost.
   
   The group communication system doesn't just run on one system.  For
   postgres-r using spread there is actually a spread daemon that runs on
   each database server.  It has nothing to do with detecting the
   conflicts.  Its job is to deliver messages in a total order for
   writesets or simple order for commits, aborts, joins, etc.
  
   The detection of conflicts will be done at the database level, by
   backend processes.  The basic concept is that if all databases get the
   writesets (changes) in the exact same order, apply them in a consistent
   order, and avoid conflicts, then one copy serialization is achieved.
   (one copy of the database replicated across all databases in the
   replica)
  
   I hope that explains the group communication system's responsibility.
  
   Darren
  
  
   
  
  
  
   ---(end of broadcast)---
   TIP 5: Have you checked our extensive FAQ?
  
   http://www.postgresql.org/users-lounge/docs/faq.html
 
 
 
  ---(end of broadcast)---
  TIP 6: Have you searched our list archives?
 
  http://archives.postgresql.org

 --
 ---
 Jonathan R. Stanton [EMAIL PROTECTED]
 Dept. of Computer Science
 Johns Hopkins University
 ---




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])

Re: [mail] Re: [HACKERS] Big 7.4 items - Replication

2002-12-15 Thread Al Sutton
Jonathan,

Many thanks for clarifying the situation some more. With token passing, I
have the following concerns:

1) What happens if a server holding the token should die whilst it is in
possession of the token?

2) If I have n servers, and the time to pass the token between each server
is x milliseconds, I may have to wait for up to n times x milliseconds in
order for a transaction to be processed. If a server is limited to a single
transaction per possession of the token (in order to ensure no system hogs
the token), and the server develops a queue of length y, I will have to
wait n times x times y for the transaction to be processed. Neither
scenario, I believe, would scale well beyond a small set of servers with
low network latency between them.

If we consider the following situation, I can illustrate why I'm still in
favour of a two phase commit:

Imagine, for example, credit card details about the status of an account
replicated in real time between databases in London, Moscow, Singapore,
Sydney, and New York. If any server can talk to any other server with a
guaranteed packet transfer time of 150ms, a two phase commit could complete
in 600ms in its worst case (assuming that the two phases consist of
request/response pairs, and that each server talks to all the others in
parallel). A token passing system may have to wait for the token to pass
through every other server before reaching the one that has the transaction
committed to it, which could take about 750ms.

If you then expand the network to allow for a primary and a disaster
recovery database at each location, the two phase commit still maintains
its 600ms response time, but the token passing system doubles to 1500ms.
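
As a back-of-envelope check on those figures (modelling two phase commit
as two parallel request/response rounds and token passing as a full
circulation of the ring; both are my worst-case assumptions, not
measurements):

#!/usr/bin/perl
# Worst cases from the figures above: two phase commit costs four 150ms
# legs regardless of ring size; token passing costs one full circulation.
use strict;
use warnings;

my $hop_ms = 150;                      # guaranteed packet transfer time
foreach my $sites (5, 10) {
    my $two_phase = 4 * $hop_ms;       # 2 phases x (request + response)
    my $token     = $sites * $hop_ms;  # token visits every server once
    printf "%2d sites: two phase commit %3d ms, token passing %4d ms\n",
           $sites, $two_phase, $token;
}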

Allowing disjointed segments to continue executing is also a concern,
because any split in the replication group could effectively double the
accepted card limit for any card holder should they purchase items from
various locations around the globe.

I can see an idea where the token is passed to the system with the most
transactions in a wait state, but this would cause low volume databases to
lose out on response times to higher volume ones, which is, again,
undesirable.

Al.

- Original Message -
From: Jonathan Stanton [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: Darren Johnson [EMAIL PROTECTED]; Bruce Momjian
[EMAIL PROTECTED]; Jan Wieck [EMAIL PROTECTED];
[EMAIL PROTECTED]; PostgreSQL-development
[EMAIL PROTECTED]
Sent: Sunday, December 15, 2002 9:17 PM
Subject: Re: [mail] Re: [HACKERS] Big 7.4 items - Replication


 On Sun, Dec 15, 2002 at 07:42:35PM -, Al Sutton wrote:
  Jonathan,
 
  How do the group communication daemons on systems A and B agree that T2
  is after T1?

 Lets split this into two separate problems:

 1) How do the daemons totally order a set of messages (abstract
 messages)

 2) How do database transactions get split into writesets that are sent
 as messages through the group communication system.

 As to question 1, the set of daemons (usually one running on each
 participating server) run a distributed ordering algorithm, as well as
 distributed algorithms to provide message reliability, fault-detection,
 and membership services. These are completely distributed algorithms, no
 central controller node exists, so even if network partitions occur
 the group communication system keeps running and providing ordering and
 reliability guarantees to messages.

 A number of different algorithms exist as to how to provide a total
 order on messages. Spread currently uses a token algorithm, that
 involves passing a token between the daemons, and a counter attached to
 each message, but other algorithms exist and we have implemented some
 other ones in our research. You can find lots of details in the papers
 at www.cnds.jhu.edu/publications/ and www.spread.org.
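 
 To give a flavour of such an algorithm, here is a minimal Perl sketch of
 a token carrying a counter; it illustrates the idea only and is not
 Spread's actual algorithm:
 
 #!/usr/bin/perl
 # Sketch of token-ordered delivery: whichever daemon holds the token
 # stamps its pending messages from a counter carried on the token, so
 # every daemon can deliver all messages in one global sequence.
 use strict;
 use warnings;
 
 my %pending = (A => ['T1'], B => ['T2'], C => []);
 my $counter = 0;                  # travels with the token
 my @delivered;
 
 foreach my $daemon (qw(A B C)) {  # one circulation of the token ring
     foreach my $msg (@{ $pending{$daemon} }) {
         push @delivered, [ ++$counter, "$msg from $daemon" ];
     }
 }
 
 # Every daemon applies the same sequence, so all see T1 before T2.
 printf "%d: %s\n", @$_ for @delivered;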

 As to question 2, there are several different approaches to how to use
 such a total order for actual database replication. They all use the gcs
 total order to establish a single sequence of events that all the
 databases see. Then each database can act on the events as they are
 delivered by the gcs and be guaranteed that no other database will see a
 different order.

 In the postgres-R case, the action received from a client is performed
 partially at the originating postgres server; the writesets are then
 sent through the gcs to order them and determine conflicts. Once they
 are delivered back, if no conflicts occurred in the meantime, the
 original transaction is completed and the result returned to the client.
 If a conflict occurred, the original transaction is rolled back and
 aborted, and the abort is returned to the client.

 
  As I understand it, the operation is performed locally before being
  passed on to the group for replication; when T2 arrives at system B,
  system B has no knowledge of T1 and so can perform T2 successfully.
 
  I am guessing that system B performs T2 locally, sends it to the group

Re: [mail] Re: [HACKERS] Big 7.4 items - Replication

2002-12-14 Thread Al Sutton
I see it as very difficult to avoid a two stage process, because there will
be the following two parts to any transaction:

1) All databases must agree upon the acceptability of a transaction before
the client can be informed of its success.

2) All databases must be informed as to whether or not the transaction was
accepted by the entire replicant set, and thus whether it should be written
to the database.

If stage 1 is missed then the client application may be informed of a
successful transaction which may fail when it is replicated to other
databases.

If stage 2 is missed then databases may become out of sync, because they
have accepted transactions that were rejected by other databases.
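
A minimal Perl sketch of those two stages, with the network abstracted
away; the replica hashes and their vote/finish callbacks are hypothetical
stand-ins for the real inter-database messages, not an existing API:

#!/usr/bin/perl
use strict;
use warnings;

sub replicate {
    my ($txn, @replicas) = @_;
    # Stage 1: every database must agree the transaction is acceptable
    # before the client can be told it succeeded.
    my @accepted = grep { $_->{vote}->($txn) } @replicas;
    if (@accepted != @replicas) {
        # Roll back on the databases that had already accepted it.
        $_->{finish}->($txn, 'ABORT') for @accepted;
        return 0;                  # client is informed of the failure
    }
    # Stage 2: tell every database that the whole replicant set accepted
    # the transaction, so it can safely be written.
    $_->{finish}->($txn, 'COMMIT') for @replicas;
    return 1;                      # client is informed of the success
}

# Toy run: replicas A and B accept everything, C rejects T2.
my @replicas = map {
    my $name = $_;
    +{ vote   => sub { $_[0] ne 'T2' or $name ne 'C' },
       finish => sub { print "$name: $_[1] $_[0]\n" } };
} qw(A B C);

foreach my $txn (qw(T1 T2)) {
    printf "%s %s\n", $txn,
           replicate($txn, @replicas) ? "committed" : "aborted";
}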

From reading the PDF on Postgres-R, I can see that one of two things will
occur:

a) There will be a central point of synchronization where conflicts will be
tested and dealt with. This is not desirable because it will leave the
synchronization and replication processing load concentrated in one place,
which will limit scalability as well as leaving a single point of failure.

or

b) The Group Communication blob will consist of a number of processes which
need to talk to all of the others to interrogate them for changes which may
conflict with the current write being handled and then issue the
transaction response. This is basically the two phase commit solution with
the phases moved into the group communication process.

I can see the possibility of using solution b and having fewer group
communication processes than databases as an attempt to simplify things,
but this would mean the loss of a number of databases if the machine
running the group communication process for the set of databases is lost.

Al.

- Original Message -
From: Bruce Momjian [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: Darren Johnson [EMAIL PROTECTED]; Jan Wieck
[EMAIL PROTECTED]; [EMAIL PROTECTED];
PostgreSQL-development [EMAIL PROTECTED]
Sent: Saturday, December 14, 2002 4:59 PM
Subject: [mail] Re: [HACKERS] Big 7.4 items - Replication



 This sounds like two-phase commit. While it will work, it is probably
 slower than Postgres-R's method.

 ---------------------------------------------------------------------------

 Al Sutton wrote:
  For live replication could I propose that we consider the systems A, B,
  and C connected to each other independently (i.e. A has links to B and
  C, B has links to A and C, and C has links to A and B), and that
  replication is handled by the node receiving the write based
  transaction.
 
  If we consider a write transaction that arrives at A (called WT(A)),
  system A will then send WT(A) to systems B and C via its direct
  connections. System A will receive back either an OK response if there
  are no conflicts, a NOT_OK response if there are conflicts, or no
  response if the system is unavailable.
 
  If system A receives a NOT_OK response from any other node, it begins
  the process of rolling back the transaction from all nodes which
  previously issued an OK, and the transaction returns a failure code to
  the client which submitted WT(A). The other systems (B and C) would
  track recent transactions, and there would be a specified timeout after
  which the transaction is considered safe and could not be rolled out.
 
  Any system not returning an OK or NOT_OK state is assumed to be down;
  error messages are logged to state that the transaction could not be
  sent to the system due to its unavailability, and any monitoring system
  would alert the administrator that a replicant is faulty.
 
  There would also need to be code developed to ensure that a system
  could be brought into sync with the current state of other systems
  within the group, in order to allow new databases to be added and
  faulty databases to be re-entered to the group. This code could also be
  used for non-realtime replication, to allow databases to be
  synchronised with the live master.
 
  This would give a multi-master solution whereby a write transaction to
  any one node would guarantee that all available replicants would also
  hold the data once it is completed, and would also provide the code to
  handle scenarios where non-realtime data replication is required.
 
  This system assumes that a majority of transactions will be successful
  (which should be the case for a well designed system).
 
  Comments?
 
  Al.
 
 
 
 
 
 
  - Original Message -
  From: Darren Johnson [EMAIL PROTECTED]
  To: Jan Wieck [EMAIL PROTECTED]
  Cc: Bruce Momjian [EMAIL PROTECTED];
  [EMAIL PROTECTED]; PostgreSQL-development
  [EMAIL PROTECTED]
  Sent: Saturday, December 14, 2002 1:28 AM
  Subject: [mail] Re: [HACKERS] Big 7.4 items
 
 
   
   
   
   Lets say we have systems A, B and C.  Each one has some
   changes and sends a writeset to the group communication
   system (GCS).  The total order dictates WS(A), WS(B), and
   WS(C) and the writesets are received in that order at
   each system.  Now C gets WS

Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Al Sutton
Would it be possible to make compression an optional thing, with the
default being off?

I'm in a position that many others may be in, where the link between my app
server and my database server isn't the bottleneck, and thus any time spent
by the CPU performing compression and decompression tasks is CPU time that
is in effect wasted.

If a database is handling numerous small queries/updates and the
request/response packets are compressed individually, then the overhead of
compression and decompression may result in slower performance compared to
leaving the request/response packets uncompressed.
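
As a rough illustration of that trade-off, a back-of-envelope Perl
sketch; the link speed, per-packet CPU cost, and compression ratio are
illustrative assumptions, not measurements:

#!/usr/bin/perl
# For small packets on a fast link, the CPU time spent compressing can
# exceed the transfer time saved; for large transfers it pays off.
use strict;
use warnings;

my $bandwidth = 100_000_000 / 8;   # bytes/sec on a 100 Mbit link
my $cpu_cost  = 0.000200;          # 200 us to compress + decompress
my $ratio     = 0.5;               # compressed size / original size

foreach my $bytes (200, 2_000, 200_000) {
    my $plain      = $bytes / $bandwidth;
    my $compressed = $cpu_cost + ($bytes * $ratio) / $bandwidth;
    printf "%7d bytes: plain %.3f ms, compressed %.3f ms -> %s\n",
           $bytes, $plain * 1000, $compressed * 1000,
           $compressed < $plain ? "compress" : "leave uncompressed";
}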

Al.

- Original Message -
From: Greg Copeland [EMAIL PROTECTED]
To: Stephen L. [EMAIL PROTECTED]
Cc: PostgresSQL Hackers Mailing List [EMAIL PROTECTED]
Sent: Tuesday, December 10, 2002 4:56 PM
Subject: [mail] Re: [HACKERS] 7.4 Wishlist


 On Tue, 2002-12-10 at 09:36, Stephen L. wrote:
  6. Compression between client/server interface like in MySQL
 

 Mammoth is supposed to be donating their compression efforts back to
 this project, or so I've been told.  I'm not exactly sure of their
 time-line as I've slept since my last conversation with them.  The
 initial feedback that I've gotten back from them on this subject is that
 the compression has been working wonderfully for them with excellent
 results.  IIRC, in their last official release, they announced their
 compression implementation.  So, I'd think that it would be available
 for the 7.4 or 7.5 time frame.


 --
 Greg Copeland [EMAIL PROTECTED]
 Copeland Computer Consulting


 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Al Sutton
I'd like to show/register interest.

I can see it being very useful when combined with replication, for
situations where the replicant databases are geographically separated (i.e.
disaster recovery sites, or systems maintaining replicants in order to
reduce the distance from user to app to database). The bandwidth cost
savings from compressing the replication information would be immensely
useful.

Al.

- Original Message -
From: Joshua D. Drake [EMAIL PROTECTED]
To: Bruce Momjian [EMAIL PROTECTED]
Cc: Greg Copeland [EMAIL PROTECTED]; Al Sutton
[EMAIL PROTECTED]; Stephen L. [EMAIL PROTECTED]; PostgresSQL Hackers
Mailing List [EMAIL PROTECTED]
Sent: Tuesday, December 10, 2002 8:04 PM
Subject: Re: [mail] Re: [HACKERS] 7.4 Wishlist


 Hello,

We would probably be open to contributing it if there was interest.
 There wasn't interest initially.

 Sincerely,

 Joshua Drake


 Bruce Momjian wrote:
  Greg Copeland wrote:
 
 On Tue, 2002-12-10 at 11:25, Al Sutton wrote:
 
  Would it be possible to make compression an optional thing, with the
  default being off?
 
 
 I'm not sure.  You'd have to ask Command Prompt (Mammoth) or wait to see
 what appears.  What I originally had envisioned was a per database and
 user permission model which would better control use.  Since compression
 can be rather costly for some use cases, I also envisioned it being
 negotiated where only the user/database combo with permission would be
 able to turn it on.  I do recall that compression negotiation is part of
 the Mammoth implementation but I don't know if it's a simple capability
 negotiation or part of a larger scheme.
 
 
  I haven't heard anything about them contributing it.  Doesn't mean it
  will not happen, just that I haven't heard it.
 
  I am not excited about per-db/user compression because of the added
  complexity of setting it up, and even when set up, I can see cases where some
  queries would want it, and others not.  I can see using GUC to control
  this.  If you enable it and the client doesn't support it, it is a
  no-op.  We have per-db and per-user settings, so GUC would allow such
  control if you wish.
 
  Ideally, it would be a tri-valued parameter, that is ON, OFF, or AUTO,
  meaning it would determine if there was value in the compression and do
  it only when it would help.
 

 --
 Command Prompt - http://www.commandprompt.com
 +1.503.222-2783





---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Broadcast replication (Was Re: [HACKERS] 7.4 Wishlist)

2002-12-04 Thread Al Sutton

- Original Message -
From: Kevin Brown [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 03, 2002 8:49 PM
Subject: [mail] Re: [HACKERS] 7.4 Wishlist


 Al Sutton wrote:
  Point to Point and Broadcast replication
 
  With point to point you specify multiple endpoints; with broadcast you
  can specify a subnet address and the updates are broadcast over that
  subnet.
 
  The difference being that point to point works well for cross network
  replication, or where you have a few replicants. I have multiple
  database servers which could have a dedicated class C network that they
  are all on; by broadcasting updates you can cut down the amount of
  traffic on that net by a factor of n minus 1 (where n is the number of
  servers involved).

 Yech.  Now you can't use TCP anymore, so the underlying replication
 code has to handle all the issues that TCP deals with transparently,
 like error checking, retransmits, data windows, etc.  I don't think
 it's wise to assume that your transport layer is 100% reliable.

 Further, this doesn't even address the problem of bringing up a leaf
 server that's been down a while.  It can be significantly out of date
 relative to the other servers on the subnet.

 I suspect you'll be better off implementing a replication protocol
 that has the leaf nodes keeping each other up to date, to minimize the
 traffic coming from the next level up.  Then you can use TCP for the
 connections but minimize the traffic generated by any given node.


I wasn't saying that ALL replication traffic must be broadcast; if a
specific server needs a refresh when it comes back up then point to point
is fine, because only one machine needs the data, and thus broadcasting it
to all would load machines with data they didn't need.

The aim of using broadcast is to cut down the ongoing traffic. Say, for
example, I have a cluster of ten database servers: I can connect them onto
a dedicated LAN shared only by database servers, and I would see roughly
10% of the traffic I would get if I were using point to point (this is
assuming that the addition of error checking, retransmits, etc. to the
broadcast protocol adds a similar overhead per packet as TCP point to
point).
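
The arithmetic behind that figure: point to point sends one copy per
peer, broadcast sends a single datagram, so the ratio is strictly
1/(n-1) (about 11% for ten servers; the 10% above is that rounded). A
small Perl sketch:

#!/usr/bin/perl
use strict;
use warnings;

my $n = 10;                  # database servers on the dedicated LAN
my $p2p       = $n - 1;      # one copy to each of the other servers
my $broadcast = 1;           # one datagram reaches them all
printf "broadcast is %.0f%% of point-to-point traffic (factor of %d)\n",
       100 * $broadcast / $p2p, $p2p;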

If others wish to know more about this I can prepare an overview of how I
see it working.

[Other points snipped]



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] 7.4 Wishlist

2002-11-30 Thread Al Sutton
My list is:

Point to Point and Broadcast replication

With point to point you specify multiple endpoints; with broadcast you can
specify a subnet address and the updates are broadcast over that subnet.

The difference being that point to point works well for cross network
replication, or where you have a few replicants. I have multiple database
servers which could have a dedicated class C network that they are all on;
by broadcasting updates you can cut down the amount of traffic on that net
by a factor of n minus 1 (where n is the number of servers involved).

Ability to use raw partitions

I've not seen an install of PostgreSQL yet that didn't put the database
files onto a filesystem, so I'm assuming it's the only way of doing it. By
using the filesystem, the files are at the mercy of the filesystem handler
code as to where they end up on the disk, and thus the speed of access will
always have some dependency on the speed of the filesystem.

With a raw partition it would be possible to use two devices (e.g. /dev/hde
and /dev/hdg on an eight channel IDE Linux box), and PostgreSQL could then
ensure the WALs were located on one disk with the entries running
sequentially, and that the database files were located on the other disk in
the most appropriate locations (e.g. index data starting near the center of
the disk, and user table data starting near the outside).

Win32 Port

I've explained the reasons before. Apart from that it's always useful to
open PostgreSQL up to a larger audience.



- Original Message -
From: Daniele Orlandi [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, November 29, 2002 11:06 PM
Subject: [mail] Re: [HACKERS] 7.4 Wishlist


 Christopher Kings-Lynne wrote:
  Hi guys,
 
  Just out of interest, if someone was going to pay you to hack on
Postgres
  for 6 months, what would you like to code for 7.4?

 Replication. Replication. Replication. Replication. Replication.
 Replication. Replication. Replication. Replication. Replication.
 Replication. Replication. Replication. Replication. Replication.

 Well, jokes apart, I think this is one of the most needed features to
 me. Currently I'm using strange voodoo to replicate some tables on other
 machines in order to spread load and resiliency. Compared to what I am
 doing now a good master to slave replication would be heaven.

 I understand that a good replication is painful but in my experience, if
 you start by integrating some rude, experimental implementation in the
 mainstream PostgreSQL the rest will come by itself.

 For example, RI was something I wouldn't consider production level in
 7.2, but it was a start; now in 7.3 it is much, much better, probably complete
 in the most important parts.

 Other wishes (not as important as the replication issue) are:

 - Better granularity of security and access control, like in mysql.

 - Ability to reset the state of an open backend, including aborting open
 transaction to allow for better connection pooling and reusing, maybe
 giving the client the ability to switch between users...

 Bye!

 --
   Daniele Orlandi
   Planet Srl


 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [mail] Re: [HACKERS] Native Win32 sources

2002-11-27 Thread Al Sutton
I've posted an email to the list as to why I'm avoiding a move to Linux
(cost of training -v- cost of database (free) + money saved from recycling
current DB machines).

My experience with PostgreSQL has always been good, and I believe that we
can test any potential bugs that we may believe are in the database by
running our app in our QA environment against the Linux version of the
database (to test platform specifics), and then against the database
version used in production (to test version specifics).

I'm quite happy to spend the time doing this to gain the cost benefit of
freeing up the extra machines my developers currently have.

Al.

- Original Message -
From: Shridhar Daithankar [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, November 27, 2002 8:41 AM
Subject: Re: [mail] Re: [HACKERS] Native Win32 sources


 On 27 Nov 2002 at 8:21, Al Sutton wrote:

  The problem I have with VMWare is that for the cost of a licence plus
the
  additional hardware on the box running it (CPU power, RAM, etc.) I can
buy a
  second cheap machine, using VMWare doesn't appear to save me my biggest
  overheads of training staff on Unix and cost of equipment (software and
  hardware). I've been looking at Bochs, but 1.4.1 wasn't stable enough to
  install RedHat, PostgreSQL, etc. reliably.

 I have been reading this thread all along and I have some suggestions. They
 are not any different from those already made, but just summarise them.

 1) Move to linux.

 You can put a second linux box with postgresql on it. Anyway your app. is on
 windows so it does not make much of a difference, because developers will be
 accessing the database from their machines.

 Secondly, if you buy a good enough mid-range machine, say with 40GB SCSI and
 2G of RAM, each developer can develop on his/her own database. In the case
 of performance testing, you can schedule it just like any other shared
 resource.

 It is very easy to run multiple isolated postgresql instances on a linux
 machine. Just change the port number and use a separate data directory.
 That's it.

 Getting people familiarized with unix/linux up to a point where they can use
 their own database is a matter of half a day.

 2) Do not bank too much on windows port yet.

 With all respect to the people developing the native windows port of
 postgresql, unless you know the correct/stable behaviour of postgresql on
 unix, you might end up in a situation where you don't know whether a
 bug/problem is in postgresql or with postgresql/windows. I would not
 recommend getting into such a situation.

 Your contribution is always welcome in any branch, but IMO it is not worth
 the risk of slipping your own product development.

 Believe me, moving to linux might seem scary at first, but it is no more
 than a couple of days' work to get a box to play around with. Until you
 need a good machine for performance tests, a simple 512MB machine with
 enough disk would be sufficient for any development among the group.

  HTH

 Bye
  Shridhar

 --
 My father taught me three things: (1) Never mix whiskey with anything but
 water. (2) Never try to draw to an inside straight. (3) Never discuss
 business with anyone who refuses to give his name.


 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [spam] Re: [mail] Re: [HACKERS] Native Win32 sources

2002-11-27 Thread Al Sutton
Hannu,

Using a Win32 platform will allow them to perform relative metrics. I'm not
looking for a statement saying things are x per cent faster than
production; I'm looking for reproducible evidence that an improvement
offers y per cent faster performance than another configuration on the same
platform.

The QA environment is designed for final testing and for compiling
definitive metrics against production systems; what I'm looking for is an
easy method of allowing developers to see the relative change in
performance for a given change to the code base.

I'm fully aware that they'll still have to use the config files of
PostgreSQL on a Win32 port, but the ability to edit the config files,
modify SQL dumps to load data into new schemas, transfer files between
themselves, and perform day to day tasks such as reading email and MS-Word
formatted documents sent to us, using tools that they are currently
familiar with, is a big plus for me.

The bottom line is I can spend money training my developers on Linux and
push project deadlines back until they become familiar with it, or I can
obtain a free database on their native platform and reduce the number of
machines needed per developer, as well as making the current DB machines
available as the main machine for new staff. The latter makes the most
sense in the profit based business environment which I'm in.

Al.

- Original Message -
From: Hannu Krosing [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: scott.marlowe [EMAIL PROTECTED]; bpalmer
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, November 27, 2002 10:54 AM
Subject: [spam] Re: [mail] Re: [HACKERS] Native Win32 sources


 On Wed, 2002-11-27 at 08:21, Al Sutton wrote:
  The problem I have with VMWare is that for the cost of a licence plus the
  additional hardware on the box running it (CPU power, RAM, etc.) I can
  buy a second cheap machine; using VMWare doesn't appear to save me my
  biggest overheads of training staff on Unix and cost of equipment
  (software and hardware). I've been looking at Bochs, but 1.4.1 wasn't
  stable enough to install RedHat, PostgreSQL, etc. reliably.
 
  The database in question holds order information for over 2000 other
  companies, and is growing daily. There is also a requirement to keep the
  data indefinitely.
 
  The developers are developing two things;
 
  1- Providing an interface for the companies' employees to update customer
  information and answer customer queries.
 
  2- Providing an area for merchants to log into that allows them to
  generate some standardised reports over the order data, change passwords,
  set up repeated payment systems, etc.
 
  Developing these solutions does include the possibility of modifying the
  database schema, the configuration of the database, and the datatypes
  used to represent the data (e.g. representing encrypted data as a Base64
  string or blob), and therefore the developers may need to make
  fundamental changes to the database and perform metrics on how they have
  affected performance.

 If you need metrics and the production runs on some kind of unix, you
 should definitely do the measuring on unix as well. A developer's machine
 with different os and other db tuning parameters may give you _very_
 different results from the real deployment system.

 Also, porting postgres to win32 won't magically make it into MS Access -
 most DB management tasks will be exactly the same. If your developers are
 afraid of the command line, give them some graphical or web tool for
 managing the db.

 If they don't want to manage linux, then just set it up once and don't
 give them the root pwd ;)

 --
 Hannu







---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [spam] Re: [mail] Re: [HACKERS] Native Win32 sources

2002-11-27 Thread Al Sutton
It's an option, but I can see it being a bit of an H-bomb to kill an ant if
the Win32 source appears within the next 6 weeks.

I've used Cygwin before and I've always been uncomfortable with the way
it's integrated with Windows. It always came across as something that isn't
really for the Windows masses, but more for techies who want Unix on an MS
platform. My main dislikes about it are;

- Changing paths. If my developers install something in c:\temp they expect
to find it under /temp on Cygwin.

- Duplicating home directories. The users already have a home directory
under MS; why does Cygwin need to use a different location?

My current plan is to use the Win32 native port myself when it first
appears and thrash our app against it. Once I'm happy that the major
functionality of our app works against the Win32 port, I'll introduce it to
a limited number of developers who enjoy hacking code when it goes wrong,
and get them to log any problems they come across.

If nothing else it should mean a few more bodies testing the Win32 port
(although I expect there'll be a large number of those as soon as it hits
CVS).

Al.


- Original Message -
From: scott.marlowe [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: Hannu Krosing [EMAIL PROTECTED]; bpalmer [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Wednesday, November 27, 2002 11:08 PM
Subject: Re: [spam] Re: [mail] Re: [HACKERS] Native Win32 sources


 On Wed, 27 Nov 2002, Al Sutton wrote:

  Hannu,
 
  Using a Win32 platform will allow them to perform relative metrics. I'm
  not looking for a statement saying things are x per cent faster than
  production; I'm looking for reproducible evidence that an improvement
  offers y per cent faster performance than another configuration on the
  same platform.

 So, does cygwin offer any win?  I know it's still unix on windows but
 it's the bare minimum of unix, and it is easy to create one image of an
 install and copy it around onto other boxes in a semi-ready to go format.


 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [mail] Re: [HACKERS] Native Win32 sources

2002-11-26 Thread Al Sutton
Is there a rough date for when they'll be available?

I have a development team at work who currently have an M$-Windows box and a
Linux box each, in order to allow them to read M$-Office documents sent to us
and develop against PostgreSQL (which we use in production).

I know I could have a shared Linux box with multiple databases and have them
bind to that, but one of the important aspects of our application is
response time, and you can't accurately measure response times for code
changes on a shared system.

Having a Win32 native version would save a lot of hassles for me.

Al.

- Original Message -
From: Bruce Momjian [EMAIL PROTECTED]
To: Ulrich Neumann [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 25, 2002 5:51 PM
Subject: [mail] Re: [HACKERS] Native Win32 sources


 Ulrich Neumann wrote:
  Hello,
 
  i've read that there are 2 different native ports for Windows
  somewhere.
 
  I've searched for them but didn't find them. Is there anyone who can
  point me to a link or send me a copy of the sources?

 Oh, you are probably asking about the sources.  They are not publicly
 available yet.

 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [mail] Re: [HACKERS] Native Win32 sources

2002-11-26 Thread Al Sutton
Lee,

I wouldn't go for 7.4 in production until after it's gone gold, but being
able to cut the number of boxes per developer by giving them a Win32 native
version would save on everything from the overhead of getting the developers
familiar enough with Linux to be able to admin their own systems, to cutting
the network usage by having the DB and app on the same system, through to
cutting the cost of electricity by only having one box per developer. It
would also be a good way of testing 7.4 against our app so we can plan for
an upgrade when it's released ;).

I've tried OpenOffice 1.0.1 and had to ditch it. It had problems with font
rendering and tables that meant many of the forms that people sent as Word
documents had chunks that weren't displayed or printed. We did try it on a
box with MS-Word on it to ensure that the setup of the machine wasn't the
issue; Word had no problems, OO failed horribly.

Thanks for the ideas,

Al.

- Original Message -
From: Lee Kindness [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: Bruce Momjian [EMAIL PROTECTED]; Ulrich Neumann
[EMAIL PROTECTED]; [EMAIL PROTECTED]; Lee Kindness
[EMAIL PROTECTED]
Sent: Tuesday, November 26, 2002 9:08 AM
Subject: Re: [mail] Re: [HACKERS] Native Win32 sources


 Al, to be honest I don't think the Windows native would save hassle,
 rather it'd probably cause more! No disrespect to those doing the
 version, read on for reasoning...

 Yes, you get a beta of a Windows native version just now, yes it
 probably will not be that long till the source is available... But
 how long till it's part of a kosher PostgreSQL release? Version
 7.4... Could be up to six months... Do you want to run pre-release
 versions in the meantime? Don't think so, not in a production
 environment!

 So, the real way to save hassle is probably a cheap commodity PC with
 Linux installed... Or settle for the existing, non-native, Windows
 version.

 By the way, just to open Office documents? Have you tried OpenOffice?

 Regards, Lee Kindness.

 Al Sutton writes:
   Is there a rough date for when they'll be available?
  
   I have a development team at work who currently have an M$-Windows box
   and a Linux box each, in order to allow them to read M$-Office documents
   sent to us and develop against PostgreSQL (which we use in production).
  
   I know I could have a shared Linux box with multiple databases and have
   them bind to that, but one of the important aspects of our application
   is response time, and you can't accurately measure response times for
   code changes on a shared system.
  
   Having a Win32 native version would save a lot of hassles for me.
  
   Al.
  
   - Original Message -
   From: Bruce Momjian [EMAIL PROTECTED]
  
 Ulrich Neumann wrote:
  Hello,
 
  i've read that there are 2 different native ports for Windows
  somewhere.
 
  I've searched for them but didn't find them. Is there anyone who can
  point me to a link or send me a copy of the sources?
 
 Oh, you are probably asking about the sources.  They are not publicly
 available yet.




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [mail] Re: [HACKERS] Native Win32 sources

2002-11-26 Thread Al Sutton
D'Arcy,

In production the database servers are separate multi-processor machines
with mirrored disks, linked via Gigabit Ethernet to the app server.

In development I have people extremely familiar with MS, but not very hot
with Unix in any flavour, who are developing Java and PHP code which is
then passed into the QA phase, where it's run on a replica of the
production environment.

My goal is to allow my developers to work on the platform they know (MS),
using as many of the aspects of the production environment as possible (JVM
version, PHP version, and hopefully database version), without needing to
buy each new developer two machines, and without incurring the overhead of
them familiarising themselves with a flavour of Unix.

Hope this helps you understand where I'm coming from,

Al.

- Original Message -
From: D'Arcy J.M. Cain [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]; Lee Kindness [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, November 26, 2002 11:59 AM
Subject: Re: [mail] Re: [HACKERS] Native Win32 sources


 On November 26, 2002 06:33 am, Al Sutton wrote:
  I wouldn't go for 7.4 in production until after it's gone gold, but
  being able to cut the number of boxes per developer by giving them a
  Win32 native version would save on everything from the overhead of
  getting the developers familiar enough with Linux to be able to admin
  their own systems, to cutting the network usage by having the DB and
  app on the same system, through to cutting the cost of electricity by
  only having one box per developer. It would also be a good way of
  testing 7.4 against our app so we can plan for an upgrade when it's
  released ;).

 If your database is of any significant size you probably want a separate
 database machine anyway.  We run NetBSD everywhere and could easily put
 the apps on the database machine but choose not to.  We have 6 production
 servers running various apps and web servers and they all talk to a
 central database machine which has lots of RAM.  Forget about bandwidth.
 Just get a 100MBit switch and plug everything into it.  Network bandwidth
 won't normally be your bottleneck.  Memory and CPU will be.
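
 (As a rough sanity check on that claim: 100MBit/s is about 12MBytes/s,
 and few OLTP query streams come anywhere near that as a sustained rate.)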

 We actually have 4 database machines, 3 running transaction databases
 and 1 with an rsynced copy for reporting purposes.  We use 3 networks,
 1 for the app servers to talk to the Internet, 1 for the app servers
 to talk to the databases and one for the databases to talk amongst
 themselves.

 Even for development we keep a separate database machine that developers
 all use.  They run whatever they want - we have people using NetBSD,
 Linux and Windows - but they work on one database which is tuned for
 the purpose.  They can even create their own databases on that system
 if they want for local testing.
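
 As a rough sketch of what the access side of such a shared development
 box might look like -- hypothetical pg_hba.conf entries, assuming a
 7.3-style format and a 192.168.1.0/24 developer LAN:

     # developers connect over the office LAN with password auth
     host    all    all    192.168.1.0    255.255.255.0    md5
     # maintenance logins on the box itself
     local   all    all    trust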

 --
 D'Arcy J.M. Cain darcy@{druid|vex}.net   |  Democracy is three wolves
 http://www.druid.net/darcy/|  and a sheep voting on
 +1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.








[HACKERS] Fw: Missing file from CVS?

2002-11-20 Thread Al Sutton
Here's a patch which will create the sql_help.h file, if it doesn't already
exist, using an installed copy of Perl. I've tested it using Perl v5.6.1 from
ActiveState and all appears to work.

Can someone commit this for me, or throw back some comments.

Thanks,

Al.


--- src/bin/psql/win32.mak  2002/10/29 04:23:30 1.11
+++ src/bin/psql/win32.mak  2002/11/20 19:44:35
@@ -7,14 +7,16 @@
 !ENDIF

 CPP=cl.exe
+PERL=perl.exe

 OUTDIR=.\Release
 INTDIR=.\Release
+REFDOCDIR= ../../../doc/src/sgml/ref
 # Begin Custom Macros
 OutDir=.\Release
 # End Custom Macros

-ALL : $(OUTDIR)\psql.exe
+ALL : sql_help.h $(OUTDIR)\psql.exe

 CLEAN :
 	-@erase $(INTDIR)\command.obj
@@ -91,3 +93,7 @@
 	$(CPP) @<<
 	$(CPP_PROJ) $<
 <<
+
+sql_help.h: create_help.pl
+	$(PERL) create_help.pl $(REFDOCDIR) $@
+
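
With the patch applied, the new rule fires on a clean checkout roughly like
this (a sketch, assuming perl.exe is on the PATH and the SGML reference
pages are present under doc/src/sgml/ref):

    cd src\bin\psql
    nmake /f win32.mak

nmake sees that sql_help.h is missing (or older than create_help.pl), so it
runs the Perl script against $(REFDOCDIR) to generate the header before
help.c is compiled.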




- Original Message -
From: Al Sutton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, November 15, 2002 8:48 PM
Subject: Missing file from CVS?


 All,

 I've just tried to build the Win32 components under Visual Studio's C++
 compiler from the win32.mak CVS archive at
 :pserver:[EMAIL PROTECTED]:/projects/cvsroot and found that the
 following file was missing;

 src\bin\psql\sql_help.h

 I've copied the file from the source tree of version 7.2.3 and the
 compile works without any problems.

 Should the file be in CVS?

 Al.







Re: [HACKERS] Missing file from CVS?

2002-11-17 Thread Al Sutton
 .\Release\libpqdll.exp
cd ..\..\bin\psql
nmake /f win32.mak

Microsoft (R) Program Maintenance Utility Version 7.00.9466
Copyright (C) Microsoft Corporation.  All rights reserved.

if not exist .\Release/ mkdir .\Release
cl.exe @C:\DOCUME~1\Al\LOCALS~1\Temp\nm1CD.tmp
getopt.c
cl.exe @C:\DOCUME~1\Al\LOCALS~1\Temp\nm1CE.tmp
command.c
command.c(497) : warning C4244: 'function' : conversion from 'unsigned short' to 'bool', possible loss of data
common.c
help.c
help.c(31) : fatal error C1083: Cannot open include file: 'sql_help.h': No such file or directory
input.c
stringutils.c
mainloop.c
copy.c
startup.c
prompt.c
sprompt.c
variables.c
large_obj.c
print.c
print.c(1009) : warning C4244: 'function' : conversion from 'const unsigned short' to 'bool', possible loss of data
describe.c
tab-complete.c
mbprint.c
NMAKE : fatal error U1077: 'cl.exe' : return code '0x2'
Stop.
NMAKE : fatal error U1077: 'C:\Program Files\Microsoft Visual Studio .NET\VC7\BIN\nmake.exe' : return code '0x2'
Stop.

C:\Projects\pgsql\src




- Original Message -
From: Joe Conway [EMAIL PROTECTED]
To: Al Sutton [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, November 17, 2002 4:17 AM
Subject: Re: [HACKERS] Missing file from CVS?


 Al Sutton wrote:
  All,
 
  I've just tried to build the Win32 components under Visual Studio's C++
  compiler from the win32.mak CVS archive at
  :pserver:[EMAIL PROTECTED]:/projects/cvsroot and found that the
  following file was missing;
 
  src\bin\psql\sql_help.h
 
  I've copied the file from the source tree of version 7.2.3 and the
  compile works without any problems.
 
  Should the file be in CVS?
 

 I'm not seeing a problem here with cvs tip and VS .Net's C++, although I
 am now getting a few pedantic warnings that I wasn't seeing a few weeks
 ago.

 Where exactly are you getting an error?

 Joe

 p.s. here's my output:

 C:\Documents and Settings\jconway\My Documents\Visual Studio
 Projects\pgsql\src>nmake -f win32.mak

 Microsoft (R) Program Maintenance Utility Version 7.00.9466
 Copyright (C) Microsoft Corporation.  All rights reserved.

  cd include
  if not exist pg_config.h copy pg_config.h.win32 pg_config.h
  1 file(s) copied.
  cd ..
  cd interfaces\libpq
  nmake /f win32.mak

 Microsoft (R) Program Maintenance Utility Version 7.00.9466
 Copyright (C) Microsoft Corporation.  All rights reserved.

 Building the Win32 static library...

  if not exist .\Release/ mkdir .\Release
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm1A.tmp
 dllist.c
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm1B.tmp
 md5.c
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm1C.tmp
 wchar.c
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm1D.tmp
 encnames.c
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm1E.tmp
 win32.c
 fe-auth.c
 fe-connect.c
 fe-exec.c
 fe-lobj.c
 fe-misc.c
 fe-print.c
 fe-secure.c
 pqexpbuffer.c
  link.exe -lib @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm1F.tmp
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm20.tmp
 libpqdll.c
  rc.exe /l 0x409 /fo.\Release\libpq.res libpq.rc
  link.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm21.tmp
 Creating library .\Release\libpqdll.lib and object .\Release\libpqdll.exp
  cd ..\..\bin\psql
  nmake /f win32.mak

 Microsoft (R) Program Maintenance Utility Version 7.00.9466
 Copyright (C) Microsoft Corporation.  All rights reserved.

  if not exist .\Release/ mkdir .\Release
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm27.tmp
 getopt.c
  cl.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm28.tmp
 command.c
 command.c(497) : warning C4244: 'function' : conversion from 'unsigned short' to 'bool', possible loss of data
 common.c
 help.c
 help.c(166) : warning C4244: 'function' : conversion from 'unsigned short' to 'bool', possible loss of data
 input.c
 stringutils.c
 mainloop.c
 copy.c
 startup.c
 prompt.c
 sprompt.c
 variables.c
 large_obj.c
 print.c
 print.c(1009) : warning C4244: 'function' : conversion from 'const unsigned short' to 'bool', possible loss of data
 describe.c
 tab-complete.c
 mbprint.c
  link.exe @C:\DOCUME~1\jconway\LOCALS~1\Temp\nm29.tmp
  cd ..\..
  echo All Win32 parts have been built!
 All Win32 parts have been built!









[HACKERS] Missing file from CVS?

2002-11-16 Thread Al Sutton
All,

I've just tried to build the Win32 components under Visual Studio's C++
compiler from the win32.mak CVS archive at
:pserver:[EMAIL PROTECTED]:/projects/cvsroot and found that the
following file was missing;

src\bin\psql\sql_help.h

I've copied the file from the source tree of version 7.2.3 and the
compile works without any problems.

Should the file be in CVS?

Al.


