Re: [HACKERS] Bug in FOREIGN KEY

2001-01-23 Thread Max Khon

hi, there!

On Mon, 22 Jan 2001, Bruce Momjian wrote:

 
  This problem with foreign keys has been reported to me, and I have confirmed
  the bug exists in current sources.  The DELETE should succeed:
  
  ---
  
  CREATE TABLE primarytest2 (
 col1 INTEGER, 
 col2 INTEGER, 
 PRIMARY KEY(col1, col2)
);
  
  CREATE TABLE foreigntest2 (col3 INTEGER, 
 col4 INTEGER,
 FOREIGN KEY (col3, col4) REFERENCES primarytest2
   );
  test=> BEGIN;
  BEGIN
  test=> INSERT INTO primarytest2 VALUES (5,5);
  INSERT 27618 1
  test=> DELETE FROM primarytest2 WHERE col1 = 5 AND col2 = 5;
  ERROR:  triggered data change violation on relation "primarytest2"

I have another (slightly different) example:
--- cut here ---
test=> CREATE TABLE pr(obj_id int PRIMARY KEY);
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'pr_pkey' for
table 'pr'
CREATE
test=> CREATE TABLE fr(obj_id int REFERENCES pr ON DELETE CASCADE);
NOTICE:  CREATE TABLE will create implicit trigger(s) for FOREIGN KEY
check(s)
CREATE
test=> BEGIN;
BEGIN
test=> INSERT INTO pr (obj_id) VALUES (1);
INSERT 200539 1
test=> INSERT INTO fr (obj_id) SELECT obj_id FROM pr;
INSERT 200540 1
test=> DELETE FROM fr;
ERROR:  triggered data change violation on relation "fr"
test=>
--- cut here ---

we are running postgresql 7.1 beta3

/fjoe




AW: [HACKERS] int4 or int32

2001-01-23 Thread Zeugswetter Andreas SB


 There were only a few to fix, so I fixed them.
 
  Peter Eisentraut [EMAIL PROTECTED] writes:
   Which one of these should we use?
   int4 is a data type, int32 isn't.  c.h has DatumGetInt8, but no
   DatumGetInt64; it also has DatumGetInt32 but no 
 DatumGetInt4.  fmgr has

Wait a sec !
The patch to timestamp.h and date.h replaces int4 with int instead of int32.
At least the timestamp.h struct is on-disk stuff, so the patch is not so good :-)

Andreas



Re: [HACKERS] question

2001-01-23 Thread Robert B. Easter

On Friday 19 January 2001 20:28, Cameron Laird wrote:

  I am a Comp. Sci. student at Ryerson Polytechnic University in Toronto.  I
 am in the midst of a software engineering project that involves the
 development of a (possibly) relational database on a RedHat 6.2 development
 environment; we are coding in C.  Now my question is: how closely related
 are PostgreSQL and MySQL, and are the necessary PostgreSQL libraries included
 in RedHat 6.2?

AFAIK, PostgreSQL and MySQL are from totally different codebases (never 
shared any code).  PostgreSQL is BSD license and MySQL is now GNU GPL. They 
both implement SQL to varying levels of conformance. PostgreSQL has some 
object-oriented features, like table inheritance.  Try them both and see what 
you like, but I think you'll find PostgreSQL more interesting.  For instance, 
Postgres can load C functions from shared objects and use them as functions 
in SQL, user defined aggregates, procedural language call handlers, and to 
create user defined data types (and possibly other things). The 7.1 beta has 
implemented some great new features, like write-ahead logging (WAL) and 
complete support for SQL table joins, among other things.  A C project can do 
a lot with Postgres.

RPM packages of PostgreSQL are available at:

http://www.postgresql.org/sites.html

You'll have to check redhat.com or do an rpm query to see if it should be or 
is installed on RedHat 6.2.


 thanks,

 Cameron Laird



-- 
 Robert B. Easter  [EMAIL PROTECTED] -
-- CompTechNews Message Board http://www.comptechnews.com/ --
-- CompTechServ Tech Services http://www.comptechserv.com/ --
-- http://www.comptechnews.com/~reaster/ 



[HACKERS] Re: postgres memory management

2001-01-23 Thread Pete Forman

Justin Clift writes:
  I found the solution to this being to edit the ipcclean script and
  change the "ps x | grep -s 'postmaster'" part to "ps -e | grep -s
  'postmaster'".  This then works correctly with Mandrake 7.2.

A standard way of finding a process by name without the grep itself
appearing is to use something like "grep '[p]ostmaster'".
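
The reason this works: the grep process's own command line contains the
literal string "[p]ostmaster", which the pattern (a character class matching
only the letter p, followed by "ostmaster") does not itself match.  A minimal
Python sketch of the principle (the command lines shown are made-up examples):

```python
import re

# The character class [p] matches exactly the letter "p", so the pattern
# matches a real postmaster command line...
pattern = re.compile(r"[p]ostmaster")
assert pattern.search("postmaster -D /usr/local/pgsql/data")

# ...but not the grep command's own entry in the process list, whose
# command line contains the literal brackets.
assert pattern.search("grep -s [p]ostmaster") is None
```
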
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: AW: [HACKERS] int4 or int32

2001-01-23 Thread Bruce Momjian

[ Charset ISO-8859-1 unsupported, converting... ]
 
  There were only a few to fix, so I fixed them.
  
   Peter Eisentraut [EMAIL PROTECTED] writes:
Which one of these should we use?
int4 is a data type, int32 isn't.  c.h has DatumGetInt8, but no
DatumGetInt64; it also has DatumGetInt32 but no 
  DatumGetInt4.  fmgr has
 
 Wait a sec !
 The patch to timestamp.h and date.h replaces int4 with int instead of int32.
 At least the timestamp.h struct is on-disk stuff, so the patch is not so good :-)

Fixed to int32 now.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] Re: [PORTS] PostgreSQL pre-7.1 Linux/Alpha Status...

2001-01-23 Thread Ryan Kirkpatrick

On Mon, 22 Jan 2001, Bruce Momjian wrote:

 How are you on Alpha now?

Great! Downloaded 7.1beta3, and it works fine right out of the
box. Built it in the standard way (./configure; make all), and then ran
the regression tests. All 76 of 76 tests passed on my Alpha XLT 366
w/Debian 2.2. I think we can finally say that Linux/Alpha is a viable
platform for PostgreSQL. :)
TTYL.

---
|   "For to me to live is Christ, and to die is gain."|
|--- Philippians 1:21 (KJV)   |
---
|   Ryan Kirkpatrick  |  Boulder, Colorado  |  http://www.rkirkpat.net/   |
---





Re: PL/Python (was: Re: [GENERAL] Re: [HACKERS] Trigger)

2001-01-23 Thread Karel Zak


 Comments anyone?

 Yes Bruce, I already discussed it with Peter in private mail. Because half a
year has passed and nobody answered, I have already started work on PL/Python.

 PL/Python will be in 7.2 - as soon as I can, I will send a proposal to the
hackers list.

Karel

  
  On Tue, 19 Dec 2000, Bruce Momjian wrote:
  
    PS. -hackers: What happened with PL/Python? Half a year ago I asked if
    anyone was working on this, and the answer was: "yes, but 'he' is waiting
    for the new fmgr design". Tom's fmgr is done... IMHO it's a pity - Python
    has a very good design for integration into other programs.
   
   Good question.  I don't remember this old message, though.
  




Re: [HACKERS] realloc suggestion

2001-01-23 Thread Karel Zak


On Mon, 22 Jan 2001, Tom Lane wrote:

 Karel Zak [EMAIL PROTECTED] writes:
  I had another look at the aset code and probably found a small performance
  reserve in small-chunk (chunk <= ALLOC_CHUNK_LIMIT) reallocation.
 
 Hmm.  I wouldn't have thought that realloc got called often enough to be
 worth optimizing, but it does seem to get called a few hundred times
 during the regress tests, so maybe it's worth a little more code to do
 this.  (Looks like most of the realloc calls come from enlargeStringInfo
 while dealing with long query strings --- since in this case the string
 buffer is the only thing yet allocated in QueryContext, the special-case
 check wins.)
 
 I've committed this change.  Thanks for the suggestion!
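
The special case described above can be sketched as follows - a toy bump
allocator (purely illustrative; the names and layout are simplified stand-ins
for aset.c's real structures) in which a chunk that happens to be the last
allocation in its block is grown in place rather than copied:

```python
class Arena:
    """Toy bump allocator over one block; loosely modeled on the idea in
    aset.c, not PostgreSQL's actual code."""

    def __init__(self, size=8192):
        self.block = bytearray(size)
        self.free_ptr = 0      # offset of the first free byte in the block
        self.chunks = {}       # chunk offset -> chunk length

    def alloc(self, n):
        off = self.free_ptr
        self.free_ptr += n
        self.chunks[off] = n
        return off

    def realloc(self, off, n):
        old = self.chunks[off]
        # Special case: the chunk is the last allocation in the block, so it
        # can simply be extended in place - no copy, no wasted space.
        if off + old == self.free_ptr and off + n <= len(self.block):
            self.free_ptr = off + n
            self.chunks[off] = n
            return off, "in-place"
        # General case: allocate a fresh chunk and copy the old contents.
        new = self.alloc(n)
        self.block[new:new + old] = self.block[off:off + old]
        return new, "copied"
```

This matches the enlargeStringInfo case mentioned above: a string buffer that
is the most recent allocation in its context can keep growing without copies.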

 I love the open-source and CVS distribution model - only a couple of hours
between an idea and an official source change :-)

Karel




Re: [HACKERS] question

2001-01-23 Thread Tom Lane

"Robert B. Easter" [EMAIL PROTECTED] writes:
 You'll have to check redhat.com or do an rpm query to see if it should be or 
 is installed on RedHat 6.2.

I believe redhat does ship Postgres RPMs, but they're PG version
6.5.something, which is pretty old --- ie, fewer features and more bugs
than later versions.  You really ought to install PG 7.0.3 (use RPMs
from www.postgresql.org) or if you're feeling bleeding edge, try out the
7.1 beta distribution.

regards, tom lane



[HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Denis Perchine

Hello,

Just a small question.
I just realized that Oracle apparently stores indexed values in the index
itself. This means that it is not necessary to access the table when you need
to retrieve only indexed values.

The iso table has an index on the vin field. Here is the output for two
different queries.

SQL> explain plan for select * from iso where vin='dfgdfgdhf';

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------
| Operation                    |  Name     |  Rows | Bytes |  Cost  | Pstart| Pstop |
------------------------------------------------------------------------------------
| SELECT STATEMENT             |           |     6 |   402 |     8  |       |       |
|  TABLE ACCESS BY INDEX ROWID | ISO       |     6 |   402 |     8  |       |       |
|   INDEX RANGE SCAN           | IX_ISO_VI |     6 |       |     3  |       |       |
------------------------------------------------------------------------------------

6 rows selected.

SQL> explain plan for select vin from iso where vin='dfgdfgdhf';

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------
| Operation                    |  Name     |  Rows | Bytes |  Cost  | Pstart| Pstop |
------------------------------------------------------------------------------------
| SELECT STATEMENT             |           |     6 |    42 |     3  |       |       |
|  INDEX RANGE SCAN            | IX_ISO_VI |     6 |    42 |     3  |       |       |
------------------------------------------------------------------------------------

I think this question was already raised here, but... why does PostgreSQL not
do this? What are the pros and cons?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] question

2001-01-23 Thread Ned Lilly

Great Bridge makes PostgreSQL 7.0.3 RPMs for 8 different Linux distros 
at http://www.greatbridge.com/download ...



Tom Lane wrote:

 "Robert B. Easter" [EMAIL PROTECTED] writes:
 
 You'll have to check redhat.com or do an rpm query to see if it should be or 
 is installed on RedHat 6.2.
 
 
 I believe redhat does ship Postgres RPMs, but they're PG version
 6.5.something, which is pretty old --- ie, fewer features and more bugs
 than later versions.  You really ought to install PG 7.0.3 (use RPMs
 from www.postgresql.org) or if you're feeling bleeding edge, try out the
 7.1 beta distribution.
 
   regards, tom lane
 
 

-- 

Ned Lilly                       e: [EMAIL PROTECTED]
Vice President                  w: www.greatbridge.com
Evangelism / Hacker Relations   v: 757.233.5523
Great Bridge, LLC               f: 757.233.




[HACKERS] Re: [GENERAL] MySQL - Postgres dump converter

2001-01-23 Thread Bruce Momjian


Can some PostgreSQL people comment on this?  This person wrote a
MySQL-PostgreSQL converter too.  His version is at:

http://ziet.zhitomir.ua/~fonin/code


 Bruce,
 
 On Tue, 23 Jan 2001 09:35:49 -0500 (EST)
 Bruce Momjian [EMAIL PROTECTED] wrote:
 
  We used someone elses.  Here is a copy. Please submit any patches
  against this version.
 
 Flaws found:
 - doesn't handle mediumint; converts it to "mediuminteger".
 The same for bigint, and probably shortint and tinyint as well.
 I don't know whether the 7.1 release has such a type, but even if it does,
 it is preferable to keep compatibility with old releases (7.0.x doesn't have it, right?)
 - doesn't handle MySQL's UNIQUE (the keyword for a unique index) inside a CREATE
 TABLE block
 - it is better to create indices after the data load (it currently creates them before)
 - doesn't handle the UNSIGNED keyword (it should at least skip it or, at the user's
 option, convert it to CHECK(field >= 0))
 - doesn't convert AUTO_INCREMENT the right way, at least for my production database.
 
 I don't see conversion of MySQL's SET and ENUM types.
 
 Well, before you do any improvements on mysql2pgsql, I want to inform you that
 my converter has all the features described above. Maybe it's easier to modify
 it to fit your requirements?
 At least take a look at it.
 I don't like to do the same work twice, and this promises to be exactly that.
 
 Sending you my MySQL db dump which I used to play with it.
 
 Max Rudensky.

[ Attachment, skipping... ]

[ Attachment, skipping... ]

[ Attachment, skipping... ]


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] question

2001-01-23 Thread Lamar Owen

Tom Lane wrote:
 "Robert B. Easter" [EMAIL PROTECTED] writes:
  You'll have to check redhat.com or do an rpm query to see if it should be or
  is installed on RedHat 6.2.

 I believe redhat does ship Postgres RPMs, but they're PG version
 6.5.something, which is pretty old --- ie, fewer features and more bugs
 than later versions.  You really ought to install PG 7.0.3 (use RPMs
 from www.postgresql.org) or if you're feeling bleeding edge, try out the
 7.1 beta distribution.

RH 6.2 shipped with PostgreSQL 6.5.3, RPM release 6.  PostgreSQL 7.0 was
in beta at the time.

PostgreSQL 7.0 was first shipped as 7.0.2, release 17, in RedHat 7.0.

RPMS for PostgreSQL 7.0.3 for RedHat 6.2 are available on
ftp.postgresql.org, as Tom mentioned, in
/pub/binary/v7.0.3/RPMS/RedHat-6.2 

The upgrade from 6.5.3 RPM to 7.0.3 RPM is not the easiest in the world
-- please be sure to read the README.rpm-dist file in the main
postgresql RPM.

Also, you will need to read this file to see which packages you want --
for a full client-server install, install postgresql and
postgresql-server.  Pick and choose the other clients and development
RPM's you need from there.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



[HACKERS] WAL documentation

2001-01-23 Thread Oliver Elphick

Here is documentation for WAL, as text for immediate review and as SGML
source, generated from Vadim's original text with my editing.

Please review for correctness.

=== WAL chapter ===

Write-Ahead Logging (WAL) in Postgres

Author: Written by Vadim Mikheev and Oliver Elphick.



General description

Write Ahead Logging (WAL) is a standard approach to transaction logging.
Its detailed description may be found in most (if not all) books about
transaction processing. Briefly, WAL's central concept is that changes to
data files (where tables and indices reside) must be written only after
those changes have been logged - that is, when log records have been
flushed to permanent storage. When we follow this procedure, we do not
need to flush data pages to disk on every transaction commit, because we
know that in the event of a crash we will be able to recover the database
using the log: any changes that have not been applied to the data pages
will first be redone from the log records (this is roll-forward recovery,
also known as REDO) and then changes made by uncommitted transactions
will be removed from the data pages (roll-backward recovery - UNDO).
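
The log-before-data rule described above can be sketched as a toy in-memory
model (purely illustrative; names such as flush_log are stand-ins, not
PostgreSQL functions):

```python
# Log records go to stable storage first; data pages may be written back
# lazily, because REDO can reconstruct them from the log after a crash.
wal = []      # the log; forced to disk at commit
pages = {}    # data pages as they exist on disk
dirty = {}    # data pages changed in memory but not yet written back

def update(page, value):
    wal.append(("UPDATE", page, value))  # 1. write the log record first
    dirty[page] = value                  # 2. then change the page in memory

def commit():
    flush_log()  # only the log must reach disk at commit time

def flush_log():
    pass  # stand-in for an fsync() of the log file

def recover():
    # Roll-forward (REDO): reapply logged changes that never made it to
    # the data pages before the crash.
    for op, page, value in wal:
        if op == "UPDATE":
            pages[page] = value
```

Even if the server crashes with the dirty page never written, recover()
rebuilds it from the log, which is exactly why the data page itself need not
be flushed at commit.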



Immediate benefits of WAL

The first obvious benefit of using WAL is a significantly reduced number
of disk writes, since only the log file needs to be flushed to disk at
the time of transaction commit; in multi-user environments, commits of
many transactions may be accomplished with a single fsync() of the log
file. Furthermore, the log file is written sequentially, and so the cost
of syncing the log is much less than the cost of syncing the data pages.

The next benefit is consistency of the data pages. The truth is that,
before WAL, PostgreSQL was never able to guarantee consistency in the
case of a crash.  Before WAL, any crash during writing could result in:

1. index tuples pointing to non-existent table rows;
2. index tuples lost in split operations;
3. totally corrupted table or index page content, because of
   partially written data pages.

(Actually, the first two cases could even be caused by use of the "pg_ctl
-m {fast | immediate} stop" command.)  Problems with indices (problems
1 and 2) might have been capable of being fixed by additional fsync()
calls, but it is not obvious how to handle the last case without WAL;
WAL saves the entire data page content in the log if that is required
to ensure page consistency for after-crash recovery.
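
The full-page-image idea in the last paragraph can be sketched like this (an
illustrative model, not PostgreSQL's actual record format): the first change
to a page logs the entire page image, so a partially written page on disk can
be rebuilt wholesale and later changes replayed on top:

```python
wal = []

def log_update(page_id, full_image, change):
    # Log a full page image once per page (per checkpoint cycle, in the
    # real system), then ordinary change records after that.
    if not any(op == "IMAGE" and pid == page_id for op, pid, _ in wal):
        wal.append(("IMAGE", page_id, dict(full_image)))
    wal.append(("CHANGE", page_id, dict(change)))

def recover(disk_pages):
    # A torn page on disk is harmless: the logged image overwrites it
    # whole, and later change records are replayed on top of it.
    for op, pid, payload in wal:
        if op == "IMAGE":
            disk_pages[pid] = dict(payload)
        else:
            disk_pages[pid].update(payload)
    return disk_pages
```
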



Future benefits

In this first release of WAL, UNDO operation is not implemented, because
of lack of time. This means that changes made by aborted transactions
will still occupy disk space and that we still need a permanent pg_log
file to hold the status of transactions, since we are not able to re-use
transaction identifiers.  Once UNDO is implemented, pg_log will no longer
be required to be permanent; it will be possible to remove pg_log at
shutdown, split it into segments and remove old segments.

With UNDO, it will also be possible to implement SAVEPOINTs to allow
partial rollback of invalid transaction operations (parser errors caused
by mistyping commands, insertion of duplicate primary/unique keys and
so on) with the ability to continue or commit valid operations made by
the transaction before the error.  At present, any error will invalidate
the whole transaction and require a transaction abort.

WAL offers the opportunity for a new method for database on-line backup
and restore (BAR).  To use this method, one would have to make periodic
saves of data files to another disk, a tape or another host and also
archive the WAL log files.  The database file copy and the archived
log files could be used to restore just as if one were restoring after a
crash. Each time a new database file copy was made the old log files could
be removed.  Implementing this facility will require the logging of data
file and index creation and deletion; it will also require development of
a method for copying the data files (O/S copy commands are not suitable).



Implementation

WAL is automatically enabled from release 7.1 onwards.  No action is
required from the administrator with the exception of ensuring that the
additional disk-space requirements of the WAL logs are met, and that
any necessary tuning is done (see below).

WAL logs are stored in $PGDATA/pg_xlog, as a set of segment files, each
16Mb in size.  Each segment is divided into 8Kb pages.  The log record
headers are described in access/xlog.h; record content is dependent on the
type of event that is being logged.  Segment files are given sequential
numbers as names, starting at .  The numbers do not wrap,
at present, but it should take a very long time to exhaust the available
stock of numbers.

The WAL buffers and control structure are in shared memory, and are
handled by the backends; they are protected by spinlocks.  The demand
on shared memory is dependent on the number of buffers; the default 

Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Tom Lane

Denis Perchine [EMAIL PROTECTED] writes:
 I think this question was already raised here, but... why does PostgreSQL
 not do this? What are the pros and cons?

The reason you have to visit the main table is that tuple validity
status is only stored in the main table, not in each index.  See prior
discussions in the archives.
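
A toy model of the point above (simplified structures, not PostgreSQL's real
ones): the index knows which tuples match, but only the heap knows whether
each tuple is visible, so every index hit still requires a heap visit:

```python
heap = {
    # ctid -> (indexed value, visible to this snapshot?)
    1: ("dfgdfgdhf", False),  # deleted row: its index entry still exists
    2: ("dfgdfgdhf", True),
}
index = {"dfgdfgdhf": [1, 2]}  # value -> ctids; no visibility info here

def index_scan(value):
    # Even for a query that needs only the indexed value, each candidate
    # must be fetched from the heap to learn whether it is visible.
    return [ctid for ctid in index.get(value, []) if heap[ctid][1]]
```
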

regards, tom lane



[HACKERS] Re: [GENERAL] MySQL - Postgres dump converter

2001-01-23 Thread Thomas Lockhart

 Can some PostgreSQL people comment on this?  This person wrote a
 MySQL-PostgreSQL converter too.  His version is at:
 http://ziet.zhitomir.ua/~fonin/code

-- THIS VERSION IS EXTREMELY BUGSOME ! USE IT ON YOUR OWN RISK !!!

Hmm. My version does not have this feature, but it could be added ;)

Seriously, I haven't looked at the differences, but there is a licensing
difference (BSD vs GPL). Someone else with experience with MySQL should
evaluate both packages.

mysql2pgsql has been used to convert SourceForge, with ~90 tables and
moderately complicated schema, but that did not include enumerated types
(done with ints at SF) and "unique" keys (done with sequences at SF)
afaicr.

 Sucks found:...

Each is a one-liner to fix in mysql2pgsql. The (nonstandard) types
mentioned weren't used in the test cases I had available. I didn't
realize that we had *any* reports of troubles or lacking features in the
existing converter, but I'll leave it up to y'all to decide if the
licensing issues and feature issues are significant.

I'm willing to provide patches to address some of the concerns, but of
course will not be able to look at the GPL'd code for hints and can only
use the information posted here to help afaik.

Comments?

   - Thomas



Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Denis Perchine

 Denis Perchine [EMAIL PROTECTED] writes:
  I think this question was already raised here, but... why does PostgreSQL
  not do this? What are the pros and cons?

 The reason you have to visit the main table is that tuple validity
 status is only stored in the main table, not in each index.  See prior
 discussions in the archives.

But how does Oracle handle this?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [INTERFACES] Re: [HACKERS] PHP and PostgreSQL

2001-01-23 Thread Adam Haberlach

On Mon, Jan 22, 2001 at 11:55:56PM -0500, Bruce Momjian wrote:
  On Wed, Dec 27, 2000 at 12:56:26AM -0500, Bruce Momjian wrote:
   I have been asked by the major PHP developer Rasmus Lerdorf to see if
   the PostgreSQL/PHP interface needs any improvements.
   
   Is the current PostgreSQL interface module in PHP adequate?  Does it
   support all the current libpq features?
   
   If not, would someone submit some patches to the PHP folks.  They want
   us to work well with PHP.  They are basically encouraging us to improve
   it in any way we can.
  
  I use PHP and Postgres together quite a bit, and find the APIs
  complete enough for most things.  However, just last week I implemented
  
  pg_lolseek($loid, $offset, $whence)
  and
  pg_lotell($loid)
  
  For some stuff that we are working on.  They are pretty straightforward,
  and I can package them up and submit them if someone wants.
  

 Would you send this over to the PHP folks for inclusion?  Thanks.

I sent them patches against the at-the-time up-to-date CVS tree back
when this first came up, they said that they failed, so I sent them another
set, and have not heard back from them.  It doesn't seem to show up in the
cvs logs.

I'll bug someone again and see if I can find out what happened.  I mean,
they only have 240 people with write access to the cvs tree...

-- 
Adam Haberlach|A cat spends her life conflicted between a
[EMAIL PROTECTED]   |deep, passionate, and profound desire for
http://www.newsnipple.com |fish and an equally deep, passionate, and
'88 EX500 |profound desire to avoid getting wet.



RE: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Mikheev, Vadim

  The reason you have to visit the main table is that tuple validity
  status is only stored in the main table, not in each index. 
  See prior discussions in the archives.
 
 But how Oracle handles this?

Oracle doesn't have a non-overwriting storage manager; it uses
rollback segments to maintain MVCC. Rollback segments are used
to restore a valid version of an entire index/table page.
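
Vadim's description can be sketched roughly like this (an Oracle-style
overwriting scheme, heavily simplified; the page contents are made-up):

```python
# Pages are updated in place; the before-image is saved in a rollback
# segment so older snapshots can reconstruct the version they should see.
pages = {"IX_ISO_VI": "page v2"}               # current, overwritten in place
rollback_segment = [("IX_ISO_VI", "page v1")]  # saved before-images

def read_page(page_id, snapshot_sees_latest):
    if snapshot_sees_latest:
        return pages[page_id]
    # An older snapshot restores the prior version from the rollback segment.
    for pid, before_image in reversed(rollback_segment):
        if pid == page_id:
            return before_image
    return pages[page_id]
```

Because index pages get the same treatment as table pages, an index entry can
be trusted by itself - which is what makes Oracle's index-only scans possible.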

Vadim



Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Denis Perchine

   The reason you have to visit the main table is that tuple validity
   status is only stored in the main table, not in each index.
   See prior discussions in the archives.
 
  But how Oracle handles this?

 Oracle doesn't have non-overwriting storage manager but uses
 rollback segments to maintain MVCC. Rollback segments are used
 to restore valid version of entire index/table page.

Are there any plans to have something like this? I mean an overwriting
storage manager.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: GreatBridge RPMs (was: Re: [HACKERS] question)

2001-01-23 Thread Poul Laust Christiansen

I've just moved from Redhat to Mandrake.

But do I have to use the Mandrake RPM? Doesn't the standard RPM work on
mandrake? 

What is the difference between these two RPM's?

I'd hate to wait for the Mandrake specific RPM for every release.

Poul L. Christiansen

On Tue, 23 Jan 2001, Lamar Owen wrote:

 Ned Lilly wrote:
  
  Great Bridge makes PostgreSQL 7.0.3 RPMs for 8 different Linux distros
  at http://www.greatbridge.com/download ...
 
 For the record (with permission of Great Bridge a few months back), I
 want to thank Great Bridge for helping with the development of the
 current Official RPMs, including financial assistance (:-)), servers
 running the distributions in question for building/testing, and top-tier
 professional feedback (when they say this release has been
 professionally QA tested, they _mean_ it!) on my little project.
 
 Kudos to GreatBridge!
 --
 Lamar Owen
 WGCR Internet Radio
 1 Peter 4:11
 




Re: [HACKERS] Bug in FOREIGN KEY

2001-01-23 Thread Stephan Szabo


  Think  I misinterpreted the SQL3 specs WRT this detail. The
  checks must be made per statement,  not  at  the  transaction
  level.  I'll  try  to fix it, but we need to define what will
  happen with referential actions in the  case  of  conflicting
  actions on the same key - there are some possible conflicts:
  
  1.  DEFERRED ON DELETE NO ACTION or RESTRICT
  
  Do the referencing rows reference the new PK row with
  the  same  key  now,  or  is  this  still  a   constraint
  violation?  I  would say it's not, because the constraint
  condition is satisfied at the end of the transaction. How
  do other databases behave?
  
  2.  DEFERRED ON DELETE CASCADE, SET NULL or SET DEFAULT
  
  Again  I'd  say  that  the  action  should  be suppressed
  because a matching PK row is present at transaction end -
  it's  not  the same old row, but the constraint itself is
  still satisfied.

I'm not actually sure about cascade, set null, and set default.  The
way they are written seems to imply that the checks are based on the state
of the database before/after the command in question, as opposed to the
deferred state of the database, because of the language about updating the
state of partially matching rows immediately after the delete/update of
the row - which wouldn't really make sense when deferred.  Does anyone know
what other systems do with a case like the following, all in one
transaction:

create table a (a int primary key);
create table b (b int references a match full on update cascade
 on delete cascade deferrable initially deferred);
insert into a values (1);
insert into a values (2);
insert into b values (1);
delete from a where a=1;
select * from b;
commit;




Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Bruce Momjian

[ Charset KOI8-R unsupported, converting... ]
The reason you have to visit the main table is that tuple validity
status is only stored in the main table, not in each index.
See prior discussions in the archives.
  
   But how Oracle handles this?
 
  Oracle doesn't have non-overwriting storage manager but uses
  rollback segments to maintain MVCC. Rollback segments are used
  to restore valid version of entire index/table page.
 
 Are there any plans to have something like this? I mean overwriting storage 
 manager.

We hope to have it some day, hopefully soon.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] Re: GreatBridge RPMs (was: Re: question)

2001-01-23 Thread Thomas Lockhart

 I'd hate to wait for the Mandrake specific RPM for every release.

I've been building the Mandrake RPMs, and there is currently a small
problem in the build which I haven't had time to pursue (yet). The
Mandrake distro should be available on the postgresql.org ftp site very
soon after release.

 - Thomas



Re: [HACKERS] Patches with vacuum fixes available for 7.0.x

2001-01-23 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [010122 19:55] wrote:
 
 Vadim, did these patches ever make it into 7.1?

According to:
http://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/parser/gram.y?rev=2.217&content-type=text/x-cvsweb-markup

nope. :(

 
  We recently had a very satisfactory contract completed by
  Vadim.
  
  Basically Vadim has been able to reduce the amount of time
  taken by a vacuum from 10-15 minutes down to under 10 seconds.
  
  We've been running with these patches under heavy load for
  about a week now without any problems except one:
don't 'lazy' (new option for vacuum) a table which has just
had an index created on it, or at least don't expect it to
take any less time than a normal vacuum would.
  
  There's three patchsets and they are available at:
  
  http://people.freebsd.org/~alfred/vacfix/
  
  complete diff:
  http://people.freebsd.org/~alfred/vacfix/v.diff
  
  only lazy vacuum option to speed up index vacuums:
  http://people.freebsd.org/~alfred/vacfix/vlazy.tgz
  
  only lazy vacuum option to only scan from start of modified
  data:
  http://people.freebsd.org/~alfred/vacfix/mnmb.tgz
  
  Although the patches are for 7.0.x I'm hoping that they
  can be forward ported (if Vadim hasn't done it already)
  to 7.1.
  
  enjoy!
  
  -- 
  -Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
  "I have the heart of a child; I keep it in a jar on my desk."
  
 
 
 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 853-3000
   +  If your life is a hard drive, |  830 Blythe Avenue
   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [010123 11:17] wrote:
 [ Charset KOI8-R unsupported, converting... ]
 The reason you have to visit the main table is that tuple validity
 status is only stored in the main table, not in each index.
 See prior discussions in the archives.
   
But how Oracle handles this?
  
   Oracle doesn't have non-overwriting storage manager but uses
   rollback segments to maintain MVCC. Rollback segments are used
   to restore valid version of entire index/table page.
  
  Are there any plans to have something like this? I mean overwriting storage 
  manager.
 
 We hope to have it some day, hopefully soon.

Vadim says that he hopes it to be done by 7.2, so if things go
well it shouldn't be that far off...

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] Re: [GENERAL] MySQL - Postgres dump converter

2001-01-23 Thread Max Rudensky

Bruce,

On Tue, 23 Jan 2001 09:35:49 -0500 (EST)
Bruce Momjian [EMAIL PROTECTED] wrote:

 We used someone elses.  Here is a copy. Please submit any patches
 against this version.

Flaws found:
- doesn't handle mediumint; converts it to "mediuminteger".
The same for bigint, and probably shortint and tinyint as well.
I don't know whether the 7.1 release has such a type, but even if it does,
it is preferable to keep compatibility with old releases (7.0.x doesn't have it, right?)
- doesn't handle MySQL's UNIQUE (the keyword for a unique index) inside a CREATE
TABLE block
- it is better to create indices after the data load (it currently creates them before)
- doesn't handle the UNSIGNED keyword (it should at least skip it or, at the user's
option, convert it to CHECK(field >= 0))
- doesn't convert AUTO_INCREMENT the right way, at least for my production database.

I don't see conversion of MySQL's SET and ENUM types.

Well, before you do any improvements on mysql2pgsql, I want to inform you that my
converter has all the features described above. Maybe it's easier to modify it to fit
your requirements?
At least take a look at it.
I don't like to do the same work twice, and this promises to be exactly that.

Sending you my MySQL db dump which I used to play with it.

Max Rudensky.

 test.init
 test.mysql
 test.sql92


Re: GreatBridge RPMs (was: Re: [HACKERS] question)

2001-01-23 Thread Peter Eisentraut

Poul Laust Christiansen writes:

 I've just moved from Redhat to Mandrake.

 But do I have to use the Mandrake RPM? Doesn't the standard RPM work on
 mandrake?

In general, RPMs only work on systems that are the same as the one they
were built on, for various degrees of "same".  If you're not picking up
the RPMs from your distributor or you're sure that the builder used the
same version as you have, it's always prudent to rebuild from the source
RPM.  That should work, unless the package spec makes some unportable
assumptions, such as different file system layouts.  But that is often
only an annoyance, not a real problem.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [HACKERS] Re: AW: Re: MySQL and BerkleyDB (fwd)

2001-01-23 Thread dom

 This 'pre-commit' 'really commit' two-step (get 'yer cowboy hats, right
 here) is what's needed, and is currently missing from pgsql. 

  Hello,

  I'm very interested in this topic since I am involved in a
distributed, several-PostgreSQLs-backed, open-source,
buzzword-compliant database replication middleware (still in the draft
stage though --- this is not an announcement :-).
  I had thought that the pre-commit information could be stored in an
auxiliary table by the middleware program ; we would then have
to re-implement some sort of higher-level WAL (I thought of the list
of the commands performed in the current transaction, with a sequence
number for each of them that would guarantee correct ordering between
concurrent transactions in case of a REDO). But I fear I am missing
a number of important issues there ; so could you please comment on my
idea ? 
  * what should I try not to forget to record in the higher-level WAL
  if I want consistency ?
  * how could one collect consistent ordering information without
  impacting performance too much ? Will ordering suffice to guarantee
  correctness of the REDO ? (I mean, are there sources of
  nondeterminism in PostgreSQL such as resource exhaustion etc. that I
  should be aware of ?)
  * would it be easier or harder to help implement 2-phase commit
  inside PostgreSQL (but I am not quite a PostgreSQL hacker yet !)

  Many thanks in advance !

-- 
 Tout n'y est pas parfait, mais on y honore certainement les jardiniers 

Dominique Quatravaux [EMAIL PROTECTED]



[HACKERS] Re: Patches with vacuum fixes available for 7.0.x

2001-01-23 Thread G. Anthony Reina

Alfred,

    Is there a tarball with the updated files for the vacuum patch? Or,
is there some way to use the 'v.diff' file without the need to modify
the files by hand? I started changing the files by hand, but realized
that there is so much information that I'm bound to make a mistake in
the manual update.

Thanks.
-Tony Reina




 There's three patchsets and they are available at:

 http://people.freebsd.org/~alfred/vacfix/

 complete diff:
 http://people.freebsd.org/~alfred/vacfix/v.diff

 only lazy vacuum option to speed up index vacuums:
 http://people.freebsd.org/~alfred/vacfix/vlazy.tgz

 only lazy vacuum option to only scan from start of modified
 data:
 http://people.freebsd.org/~alfred/vacfix/mnmb.tgz





[HACKERS] pg_shadow.usecatupd attribute

2001-01-23 Thread Peter Eisentraut

While I'm at it and before I forget the 76 places one needs to edit to
add/remove a system catalog column, what are people's feelings about the
usecatupd column?

The use of this field is that, if false, it disallows any direct
modification of system catalogs, even for superusers.  In the past there
were several opinions that this field was useless/confusing/stupid/not
worthwhile.

I'm also going to remove the usetrace column, since that's not used.

(post-7.1 material, btw.)

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [HACKERS] pg_shadow.usecatupd attribute

2001-01-23 Thread Bruce Momjian


Yes, I vote for removal.

 While I'm at it and before I forget the 76 places one needs to edit to
 add/remove a system catalog column, what are people's feelings about the
 usecatupd column?
 
 The use of this field is that, if false, it disallows any direct
 modification of system catalogs, even for superusers.  In the past there
 were several opinions that this field was useless/confusing/stupid/not
 worthwhile.
 
 I'm also going to remove the usetrace column, since that's not used.
 
 (post-7.1 material, btw.)
 
 -- 
 Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/
 
 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] initdb -t destroys all databases

2001-01-23 Thread Peter Eisentraut

Tom Lane writes:

 Accordingly, I suggest that initdb -t should be flushed entirely.

I guess we won't need two separate files global.bki and template1.bki
anymore.  That would simplify some things, but maybe it's still a
stylistic thing.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: GreatBridge RPMs (was: Re: [HACKERS] question)

2001-01-23 Thread Roderick A. Anderson

On Tue, 23 Jan 2001, Peter Eisentraut wrote:

 In general, RPMs only work on systems that are the same as the one they
 were built on, for various degrees of "same".  If you're not picking up
 the RPMs from your distributor or you're sure that the builder used the
 same version as you have, it's always prudent to rebuild from the source
 RPM.  That should work, unless the package spec makes some unportable
 assumptions, such as different file system layouts.  But that is often
 only an annoyance, not a real problem.

While trying to get the FrontPage Extensions installed on a RedHat/Apache
system I ran into two different version numbering systems between RedHat
and Mandrake.  Major pain.  One called for perl 5.6.0-xxx and the other
perl 5.60-xxx.  After several hours of screwing around with it I took a
break.  Fortunately, before I spent any more time on it, the client I was
going to do it for decided not to run them with Apache.

I'm glad to see GreatBridge will be providing RPMs for many
distributions.  Though I do tend to re-compile from source, I've found that
those mdk's don't work too well with RHL.

Rod
-- 





RE: [HACKERS] Re: AW: Re: MySQL and BerkleyDB (fwd)

2001-01-23 Thread Mikheev, Vadim

   I had thought that the pre-commit information could be stored in an
 auxiliary table by the middleware program ; we would then have
 to re-implement some sort of higher-level WAL (I thought of the list
 of the commands performed in the current transaction, with a sequence
 number for each of them that would guarantee correct ordering between
 concurrent transactions in case of a REDO). But I fear I am missing

This wouldn't work for the READ COMMITTED isolation level.
But why do you want to log commands into WAL, where each modification
is already logged in, hm, correct order?
Well, it makes sense if you're looking for async replication, but
you don't need two-phase commit for that, and you should be aware of
the problems with the READ COMMITTED isolation level.

Back to two-phase commit - it's the easiest part of the work required for
distributed transaction processing.
Currently we place a single commit record in the log, and the transaction
is committed when this record (and so all other transaction records)
is on disk.
Two-phase commit:

1. For the 1st phase we'll place a "prepared-to-commit" record into the log,
   and this phase will be accomplished once the record is flushed to disk.
   At this point the transaction may be committed at any time, because
   all its modifications are logged. But it may still be rolled back
   if this phase failed on other sites of the distributed system.

2. When all sites are prepared to commit, we'll place a "committed"
   record into the log. No need to flush it, because in the event of a
   crash the recoverer will have to communicate with the other sites
   anyway to learn the status of all "prepared" transactions.

That's all! It is really hard to implement distributed lock- and
communication- managers, but there is no problem with logging two
records instead of one. Period.

Vadim



Re: [HACKERS] initdb -t destroys all databases

2001-01-23 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 I guess we won't need two separate files global.bki and template1.bki
 anymore.  That would simplify some things, but maybe it's still a
 stylistic thing.

It's probably not absolutely necessary to have two, but why change it?

regards, tom lane



Re: [HACKERS] Re: AW: Re: MySQL and BerkleyDB (fwd)

2001-01-23 Thread Bruce Momjian

[ Charset ISO-8859-1 unsupported, converting... ]
I had thought that the pre-commit information could be stored in an
  auxiliary table by the middleware program ; we would then have
  to re-implement some sort of higher-level WAL (I thought of the list
  of the commands performed in the current transaction, with a sequence
  number for each of them that would guarantee correct ordering between
  concurrent transactions in case of a REDO). But I fear I am missing
 
 This wouldn't work for the READ COMMITTED isolation level.
 But why do you want to log commands into WAL, where each modification
 is already logged in, hm, correct order?
 Well, it makes sense if you're looking for async replication, but
 you don't need two-phase commit for that, and you should be aware of
 the problems with the READ COMMITTED isolation level.
 

I believe the issue here is that while SERIALIZABLE ISOLATION means all
queries can be run serially, our default is READ COMMITTED, meaning that
open transactions see committed transactions, even if the transaction
committed after our transaction started.  (FYI, see my chapter on
transactions for help,  http://www.postgresql.org/docs/awbook.html.)

To do higher-level WAL, you would have to record not only the queries,
but the other queries that were committed at the start of each command
in your transaction.

Ideally, you could number every commit by its XID in your log, and then
when replaying each query, pass in the "committed" transaction ids that
were visible at the time each command began.

In other words, you can replay the queries in transaction commit order,
except that you have to have some transactions committed at specific
points while other transactions are open, i.e.:

XID   Open XIDs   Query
500               UPDATE t SET col = 3;
501   500         BEGIN;
501   500         UPDATE t SET col = 4;
501               UPDATE t SET col = 5;
501               COMMIT;

This is a silly example, but it shows that 500 must commit after the
first command in transaction 501, but before the second command in the
transaction.  This is because UPDATE t SET col = 5 actually sees the
changes made by transaction 500 in READ COMMITTED isolation level.

I am not advocating this.  I think WAL is a better choice.  I just
wanted to outline how replaying the queries in commit order is 
insufficient.

 Back to two-phase commit - it's the easiest part of the work required for
 distributed transaction processing.
 Currently we place a single commit record in the log, and the transaction
 is committed when this record (and so all other transaction records)
 is on disk.
 Two-phase commit:
 
 1. For the 1st phase we'll place a "prepared-to-commit" record into the log,
    and this phase will be accomplished once the record is flushed to disk.
    At this point the transaction may be committed at any time, because
    all its modifications are logged. But it may still be rolled back
    if this phase failed on other sites of the distributed system.
 
 2. When all sites are prepared to commit, we'll place a "committed"
    record into the log. No need to flush it, because in the event of a
    crash the recoverer will have to communicate with the other sites
    anyway to learn the status of all "prepared" transactions.
 
 That's all! It is really hard to implement distributed lock- and
 communication- managers, but there is no problem with logging two
 records instead of one. Period.

Great.


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: GreatBridge RPMs (was: Re: [HACKERS] question)

2001-01-23 Thread Lamar Owen

"Roderick A. Anderson" wrote:
 On Tue, 23 Jan 2001, Peter Eisentraut wrote:
  RPM.  That should work, unless the package spec makes some unportable
  assumptions, such as different file system layouts.  But that is often
  only an annoyance, not a real problem.

 I'm glad to see GreatBridge will be providing RPMs for many
 distributions.  Though I do tend to re-compile from source, I've found that
 those mdk's don't work too well with RHL.

And I _love_ to get feedback about the nonportable things I do in the
spec files (right, Peter ? :-)).

I am trying (and Great Bridge helped) to get a fully
distribution-independent source RPM working.  I am closer than I was --
the same spec file now works on RedHat, Mandrake, Turbo, and (to a
lesser extent) Caldera, and soon will work seamlessly on SuSE. It may
very well work on others.  The hooks are there now for SuSE -- just some
fill-in work left to be done.

Portability is hard.  C programmers have known this for some time -- but
the RPM specfile doesn't really lend itself to vast portability. 
Although, I am learning some real tricks that really help.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [HACKERS] LEFT JOIN in pg_dumpall is a bug

2001-01-23 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 This snippet in pg_dumpall
 $PSQL -d template1 -At -F ' ' \
   -c "SELECT datname, usename, pg_encoding_to_char(d.encoding),
 datistemplate, datpath FROM pg_database d LEFT JOIN pg_shadow u ON (datdba
 = usesysid) WHERE datallowconn;" | \
 while read DATABASE DBOWNER ENCODING ISTEMPLATE DBPATH; do

 won't actually work if there indeed happens to be a database without a
 valid owner, because the 'read' command will take ENCODING as the dba
 name.

Oops, you're right, the read won't keep the columns straight.  Come to
think of it, it would do the wrong thing for empty-string datname or
usename, too, and it's only because datpath is the last column that
we haven't noticed it doing the wrong thing on empty datpath.

Is there a more robust way of reading the data into the script?
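A more robust pattern (a sketch, not the fix that was actually adopted; the separator character and variable names here are illustrative) is to pick a separator that cannot occur in the data and set IFS to exactly that single character for the read, so empty fields are preserved:

```shell
# Sketch: parse separator-delimited psql output where fields may be empty.
# With IFS set to the single separator character, `read` keeps empty
# fields in place instead of collapsing them and shifting later columns.
line='somedb||SQL_ASCII|f|'

IFS='|' read -r DATABASE DBOWNER ENCODING ISTEMPLATE DBPATH <<EOF
$line
EOF

echo "db=$DATABASE owner=$DBOWNER enc=$ENCODING"
# prints: db=somedb owner= enc=SQL_ASCII
```

With the default whitespace IFS, consecutive separators collapse into one, so a database with no valid owner makes `read` take ENCODING as the owner; with a single non-whitespace IFS character, each `|` delimits a field and an empty owner no longer shifts the remaining columns.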

 I guess the real question is, what should be done in this case?  I think
 it might be better to error out and let the user fix his database before
 backing it up.

Possibly.  The prior state of the code (before I put in the LEFT JOIN)
would silently ignore any database with no matching user, which was
definitely NOT a good idea.

I think I'd rather see a warning, though, and let the script try to dump
the DB anyway.

 (At a glance, I think pg_dump also has some problems with these sort of
 constellations.)

Yes, there are a number of places where pg_dump should be doing outer
joins and isn't.  I think Tatsuo is at work on that.

regards, tom lane



Re: [HACKERS] pg_shadow.usecatupd attribute

2001-01-23 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 While I'm at it and before I forget the 76 places one needs to edit to
 add/remove a system catalog column, what are people's feelings about the
 usecatupd column?

Unless someone pops up and says that they're actually using it,
I'd agree with removing it.  It seems like the sort of thing that
might be a good idea but isn't actually getting used.

regards, tom lane



RE: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Mikheev, Vadim

   But how Oracle handles this?
 
  Oracle doesn't have non-overwriting storage manager but uses
  rollback segments to maintain MVCC. Rollback segments are used
  to restore valid version of entire index/table page.
 
 Are there any plans to have something like this? I mean 
 overwriting storage manager.

Well, I have plans to reimplement the storage manager to allow space
re-use without vacuum, but without switching to overwriting, at least
in the near future - the gains/drawbacks are still questionable.

We could add transaction data to index tuples, but this would increase
their size by ~ 16 bytes. To estimate how this would affect performance
for mostly static tables, one can run tests with the schema below:

create table i1 (i int, k int, l char(16));
create index i_i1 on i1 (i);
create table i2 (i int, k int, l char(16));
create index i_i2 on i2 (i, k, l);

Now fill both tables with the same data and run queries using only "i" in the
WHERE clause.

Vadim



[HACKERS] Re: [PATCHES] binary operators on integers

2001-01-23 Thread Tom Lane

I wrote:
 Given the infrequency of use of postfix operators compared to prefix,
 I am inclined to think that we should change the grammar to make the
 latter easier to use at the expense of the former.  On the other hand,
 it seems there's a pretty large risk of backwards-incompatibility here.
 Comments?

I backed away from part of the proposed patch --- changing the
precedence of all the prefix-operator productions to UMINUS would
probably break people's queries.  But I've applied the part that
changes the behavior of a_expr Op Op a_expr.  This will now be
parsed as an infix operator followed by a prefix operator.

regards, tom lane



Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Tom Lane

"Mikheev, Vadim" [EMAIL PROTECTED] writes:
 We could add transaction data to index tuples but this would increase
 their size by ~ 16bytes.

The increased space is the least of the drawbacks.  Consider also the
time needed to maintain N copies of a tuple's commit status instead of
one.  Even finding the N copies would cost a lot more than the single
disk transfer involved now ...

regards, tom lane



Re: [HACKERS] pg_shadow.usecatupd attribute

2001-01-23 Thread Alex Pilosov

Just to clarify for stupid me: do you want to remove it and forbid catalog
updates, or remove it and allow catalog updates? (I hope it's the latter :)

On Tue, 23 Jan 2001, Tom Lane wrote:

 Peter Eisentraut [EMAIL PROTECTED] writes:
  While I'm at it and before I forget the 76 places one needs to edit to
  add/remove a system catalog column, what are people's feelings about the
  usecatupd column?
 
 Unless someone pops up and says that they're actually using it,
 I'd agree with removing it.  It seems like the sort of thing that
 might be a good idea but isn't actually getting used.
 
   regards, tom lane
 
 




Re: [HACKERS] R-Tree implementation using GiST

2001-01-23 Thread Bruce Momjian

I have added the URL to the GIST SGML docs.

 Hi,
 
 I've put an R-Tree implementation using GiST (yet another test of our changes in
 the GiST code) on my GiST page http://www.sai.msu.su/~megera/postgres/gist/
 

 Also, I've put some GiST related papers for interested readers.
 The package( contrib-rtree_box_gist.tar.gz ) is built for 7.1.
 If you find it interesting you may include it in the contrib area for 7.1
 
 from README.rtree_box_gist:
 
 
  1. One interesting thing is that insertion time for the built-in R-Tree is
 about 8 times longer than for the GiST implementation of R-Tree !!!
  2. The postmaster requires much more memory for the built-in R-Tree
  3. Search time depends on the dataset. In our case we got:
  +------------+-----------+--------------+
  |Number boxes|R-tree, sec|R-tree using  |
  |            |           |   GiST, sec  |
  +------------+-----------+--------------+
  |          10|      0.002|         0.002|
  +------------+-----------+--------------+
  |         100|      0.002|         0.002|
  +------------+-----------+--------------+
  |        1000|      0.002|         0.002|
  +------------+-----------+--------------+
  |       10000|      0.015|         0.025|
  +------------+-----------+--------------+
  |       20000|      0.029|         0.048|
  +------------+-----------+--------------+
  |       40000|      0.055|         0.092|
  +------------+-----------+--------------+
  |       80000|      0.113|         0.178|
  +------------+-----------+--------------+
  |      160000|      0.338|         0.337|
  +------------+-----------+--------------+
  |      320000|      0.674|         0.673|
  +------------+-----------+--------------+
 
 
   Regards,
   Oleg
 _
 Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
 Sternberg Astronomical Institute, Moscow University (Russia)
 Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
 phone: +007(095)939-16-83, +007(095)939-23-83
 
 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] Postgresql on win32

2001-01-23 Thread Bruce Momjian


Thanks.  Applied.

[ Charset ISO-8859-1 unsupported, converting... ]
 Hello!
 
 Here is a patch to make the current snapshot compile on Win32 (native, libpq
 and psql) again. Changes are:
 1) psql requires the includes of "io.h" and "fcntl.h" in command.c in order
 to make a call to open() work (io.h for _open(), fcntl.h for the O_xxx)
 2) PG_VERSION is no longer defined in version.h[.in], but in configure.in.
 Since we don't do configure on native win32, we need to put it in
 config.h.win32 :-(
 3) Added define of SYSCONFDIR to config.h.win32 - libpq won't compile
 without it. This functionality is *NOT* tested - it's just defined as "" for
 now. May work, may not.
 4) DEF_PGPORT renamed to DEF_PGPORT_STR
 
 I have done the "basic tests" on it - it connects to a database, and I can
 run queries. Haven't tested any of the fancier functions (yet).
 
 However, I ran into a much bigger problem when fixing psql to work. It no
 longer works when linked against the .DLL version of libpq (which the
 Makefile does for it). I have left it linked against this version anyway,
 pending the comments I get on this mail :-)
 The problem is that there are strings being allocated from libpq.dll using
 PQExpBuffers (for example, initPQExpBuffer() on line 92 of input.c). These
 are being allocated using the malloc function used by libpq.dll. This
 function *may* be different from the malloc function used by psql.exe - only
 the resulting pointer must be valid. And with the default linking methods,
 it *WILL* be different. Later, psql.exe tries to free() this string, at
 which point it crashes because the free() function can't find the allocated
 block (it's on the allocated blocks list used by the runtime lib of
 libpq.dll).
 
 Shouldn't the right thing be to have psql call termPQExpBuffer() on
 the data instead? As it is now, gets_fromFile() will just return the pointer
 received from the PQExpBuffer.data (this may well be present in several
 places - this is the one I was bitten by so far). Isn't that kind of
 "accessing the internals of the PQExpBuffer structure" wrong? Instead,
 perhaps it should make a copy of the string, and then termPQExpBuffer() it?
 In that case, the string will have been allocated from within the same
 library as the free() is called.
 
 I can get it to work just fine by doing this - changing from (around line
 100 of input.c):
 if (buffer.data[buffer.len - 1] == '\n')
 {
 buffer.data[buffer.len - 1] = '\0';
 return buffer.data;
 }
 to
    if (buffer.data[buffer.len - 1] == '\n')
    {
    char *tmps;
    buffer.data[buffer.len - 1] = '\0';
    tmps = strdup(buffer.data);
    termPQExpBuffer(&buffer);  /* buffer is a PQExpBufferData, so pass its address */
    return tmps;
    }
 
 and the same a bit further down in the same function.
 
 But, as I said above, this may occur in more places in the code? Perhaps
 someone more familiar with it could comment on that?
 
 
 What do you think should be done about this? Personally, I go by "If you
 allocate a piece of memory using an interface, use the same interface to
 free it", but the question is how to make it work :-)
 
 
 Also, AFAIK this only affects psql.exe, so the changes made to the libpq
 files by this patch are required no matter how the other issue is handled.
 
 Regards,
  Magnus
 
 
  pgsql-win32.patch 

[ Attachment, skipping... ]


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] Re: [PATCHES] Small patch to replace 'idle' by 'trans' if transactionis still open

2001-01-23 Thread Bruce Momjian


Any solid consensus on this?

 
 Well, I'm still for this ... how about something simple like 'idle in
 trans'?  Short, and easy to figure out what it means?
 
 On Mon, 22 Jan 2001, Christof Petig wrote:
 
  If you don't know what is all about read the bottom (I'll requote my posting)
 
  Bruce Momjian wrote:
 
On Fri, 19 Jan 2001, Bruce Momjian wrote:
 Seems we decided against this.  Sorry.
   
  Huh?  from reading Tom's response, it sounds like it would be something
 useful?  I know I find having as much information about state in the ps
 listing helps me a lot, and knowing if it's 'idle' vs 'idle (in
 transaction)' provides at least more detailed information than just 'idle'
  
Who was the 'we' in the above decision?  Tom seemed in favor of it, I know
I'm in favor of it .. and you are not in favor of it ...
  
   There must have been some discussion about it.  I don't see it in the
   code, and I remember it was rejected for some reason.  Check the archives.
 
  The thing which comes closest to a rejection was the 'I can't decide' mail
  by you (answered by Tom). The conclusion sounded like 'since we're not clear on
  this subject we won't touch it, yet'. And there was some unsettled discussion
  about the best wording to show in 'ps'.
 
  'trans' seemed too short (and too much unlike 'idle') (as indicated by Bruce)
  and 'idle (open transaction)' might cause difficulties on platforms which limit
  the length of the string (as indicated by Tom)
 
  I'll CC Hackers (where this discussion belongs)
 
  Christof
 
  -
 
  Quoting:
 
  Subject: Re: [PATCHES] Small patch to replace 'idle' by 'trans' if transaction
  is still open
  Date: Mon, 09 Oct 2000 22:46:56 -0400
  From: Tom Lane [EMAIL PROTECTED]
 
  Bruce Momjian [EMAIL PROTECTED] writes:
   I can't decide if this is of general use.  My inclination is that
   someone in a transaction sitting at a prompt should still show as idle.
 
  The idea seemed good to me, although I didn't look at the code to see
  if the implementation was any good ;-).  I know we've frequently had
  questions on the lists where it was interesting to know if any
  transactions were being held open --- and right now there's no easy
  way to tell.
 
  regards, tom lane
 
  --
 
  Subject: Small patch to replace 'idle' by 'trans' if transaction is still open
  Date: Tue, 03 Oct 2000 21:28:36 +0200
  From: Christof Petig [EMAIL PROTECTED]
 
  If you are looking for programs which tend to hold longstanding
  transactions, this micro patch might be handy.
  Whether it is of general use is debatable. It will replace 'idle' by
  'trans' if the backend is idle but a transaction is pending.
 
  Simply use ps to view the backend's status.
 
  Christof
 
  --- src/backend/commands/async.c~   Sun May 14 05:18:35 2000
  +++ src/backend/commands/async.cTue Oct  3 10:31:54 2000
  @@ -818,7 +818,7 @@
   */
  pq_flush();
 
  -   PS_SET_STATUS("idle");
  +   PS_SET_STATUS(IsTransactionBlock()?"trans":"idle");
  TPRINTF(TRACE_NOTIFY, "ProcessIncomingNotify: done");
   }
 
  --- src/backend/tcop/postgres.c~Thu Aug 31 09:18:57 2000
  +++ src/backend/tcop/postgres.c Tue Oct  3 10:32:23 2000
  @@ -1496,7 +1496,7 @@
 
  for (;;)
  {
  -   PS_SET_STATUS("idle");
  +   PS_SET_STATUS(IsTransactionBlock()?"trans":"idle");
 
  /* 
   *   (1) tell the frontend we're ready for a new query.
 
 
 
 
 Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
 Systems Administrator @ hub.org
 primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org
 
 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] Re: [PATCHES] update to contrib/pgcrypto

2001-01-23 Thread Bruce Momjian


Thanks.  Applied.

 
 I would like to make an interface change in pgcrypto.  (Good
 timing, I know :))  At the moment the digest() function returns
 a hexadecimal-coded hash, but I want it to return pure binary.  I
 have also included functions encode() and decode() which support
 'base64' and 'hex' encodings, so if anyone needs digest() in hex
 he can do encode(digest(...), 'hex').
 
 Main reason for it is "to do one thing and do it well" :)
 
 Another reason is that if someone needs to do really a lot of digesting,
 in the end he wants to store the binary, not the hexadecimal result.
 It is really silly to convert it to hex and then back to binary
 again.  As I said, if someone needs hex he can get it.
 
 Well, the real reason is that I am doing encrypt()/decrypt()
 functions and _they_ return binary.  For testing I like to see
 it in hex occasionally, but it is really wrong to let them
 return hex.  Only now did it catch my eye that the hex-coding in
 digest() is wrong.  When doing digest() I thought about the 'common
 case', but hacking with psql is probably _not_ the common case :)
 
 -- 
 marko
 
 
 diff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/Makefile 
contrib/pgcrypto/Makefile
 --- contrib/pgcrypto.orig/MakefileTue Oct 31 15:11:28 2000
 +++ contrib/pgcrypto/Makefile Sun Jan 21 00:14:54 2001
 @@ -34,7 +34,7 @@
  endif
  
  NAME := pgcrypto
 -SRCS += pgcrypto.c
 +SRCS += pgcrypto.c encode.c
  OBJS := $(SRCS:.c=.o)
  SO_MAJOR_VERSION = 0
  SO_MINOR_VERSION = 1
 diff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/README.pgcrypto 
contrib/pgcrypto/README.pgcrypto
 --- contrib/pgcrypto.orig/README.pgcrypto Tue Oct 31 15:11:28 2000
 +++ contrib/pgcrypto/README.pgcrypto  Sun Jan 21 00:21:29 2001
 @@ -1,14 +1,21 @@
  
  DESCRIPTION
  
 -  Here is a implementation of crypto hashes for PostgreSQL.
 -  It exports 2 functions to SQL level:
 +  Here are various cryptographic and otherwise useful
 +  functions for PostgreSQL.
 +
 +encode(data, type)
 +encodes binary data into ASCII-only representation.
 + Types supported are 'hex' and 'base64'.
 +
 +decode(data, type)
 + decodes the data processed by encode()
  
  digest(data::text, hash_name::text)
 - which returns hexadecimal coded hash over data by
 + which returns cryptographic checksum over data by
   specified algorithm. eg
  
 -  select digest('blah', 'sha1');
 +  select encode(digest('blah', 'sha1'), 'hex');
   5bf1fd927dfb8679496a2e6cf00cbe50c1c87145
  
  digest_exists(hash_name::text)::bool
 diff -urNX /home/marko/misc/diff-exclude contrib/pgcrypto.orig/encode.c 
contrib/pgcrypto/encode.c
 --- contrib/pgcrypto.orig/encode.cThu Jan  1 03:00:00 1970
 +++ contrib/pgcrypto/encode.c Sun Jan 21 23:48:55 2001
 @@ -0,0 +1,345 @@
 +/*
 + * encode.c
 + *   Various data encoding/decoding things.
 + * 
 + * Copyright (c) 2001 Marko Kreen
 + * All rights reserved.
 + *
 + * Redistribution and use in source and binary forms, with or without
 + * modification, are permitted provided that the following conditions
 + * are met:
 + * 1. Redistributions of source code must retain the above copyright
 + *notice, this list of conditions and the following disclaimer.
 + * 2. Redistributions in binary form must reproduce the above copyright
 + *notice, this list of conditions and the following disclaimer in the
 + *documentation and/or other materials provided with the distribution.
 + *
 + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 + * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 + * SUCH DAMAGE.
 + *
 + * $Id$
 + */
 +
  +#include <postgres.h>
  +#include <fmgr.h>
 +
 +#include "encode.h"
 +
 +/*
 + * NAMEDATALEN is used for hash names
 + */
  +#if NAMEDATALEN < 16
  +#error "NAMEDATALEN < 16: too small"
  +#endif
 +
 +static pg_coding *
 +find_coding(pg_coding *hbuf, text *name, int silent);
 +static pg_coding *
 +pg_find_coding(pg_coding *res, char *name);
 +
 +
 +/* SQL function: encode(bytea, text) returns text */
 +PG_FUNCTION_INFO_V1(encode);
 +
 +Datum
 +encode(PG_FUNCTION_ARGS)
 +{
 + text *arg;
 + text *name;
 + uint len, rlen, rlen0;
 + pg_coding *c, cbuf;
 + text *res;
 + 
 + if (PG_ARGISNULL(0) || PG_ARGISNULL(1))
 + PG_RETURN_NULL();
 + 
 + name = PG_GETARG_TEXT_P(1); 

[HACKERS] beta3 vacuum crash

2001-01-23 Thread Frank Joerdens

I haven't tried everything to recover from this yet, but will quickly try to document
the crash before I lose track of what exactly went into it and what I did: basically I
deleted a table and then ran vacuum verbose, with the net result that I cannot connect
to this database anymore; the error is:

frank@kelis:/usr/local/httpd/htdocs  psql mpi
psql: FATAL 1:  Index 'pg_trigger_tgrelid_index' does not exist

This is, fortunately, not the production system but my development machine. I was going
to go live with this in a couple of weeks' time on beta3. Should I reconsider and move
back to 7.0.3? (I'd hate to, because I'll potentially have rows bigger than 32K . . . )

The vacuum went like this:

--- begin vacuum ---
mpi=# drop table wimis;
DROP
mpi=# vacuum verbose;
NOTICE:  --Relation pg_type--
NOTICE:  Pages 3: Changed 2, reaped 2, Empty 0, New 0; Tup 159: Vac 16, Keep/VTL 0/0,
Crash 0, UnUsed 0, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 6296/156;
EndEmpty/Avail. Pages 0/1. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u
sec.
NOTICE:  Rel pg_type: Pages: 3 --> 3; Tuple(s) moved: 1. CPU 0.01s/0.00u sec.
NOTICE:  Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u 
sec.
NOTICE:  --Relation pg_attribute--
NOTICE:  Pages 16: Changed 9, reaped 8, Empty 0, New 0; Tup 1021: Vac 160, Keep/VTL 
0/0,
Crash 0, UnUsed 0, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 16480/16480;
EndEmpty/Avail. Pages 0/8. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 160. CPU
0.00s/0.01u sec.
NOTICE:  Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 160. CPU
0.00s/0.00u sec.
NOTICE:  Rel pg_attribute: Pages: 16 --> 14; Tuple(s) moved: 43. CPU 0.01s/0.01u sec.
NOTICE:  Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 43. CPU
0.00s/0.00u sec.
NOTICE:  Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 43. CPU
0.00s/0.00u sec.
NOTICE:  --Relation pg_class--
NOTICE:  Pages 7: Changed 1, reaped 7, Empty 0, New 0; Tup 136: Vac 257, Keep/VTL 0/0,
Crash 0, UnUsed 0, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 38880/31944;
EndEmpty/Avail. Pages 0/6. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_class_oid_index: Pages 2; Tuples 136: Deleted 257. CPU 0.00s/0.01u 
sec.
NOTICE:  Index pg_class_relname_index: Pages 6; Tuples 136: Deleted 257. CPU 
0.00s/0.00u
sec.
NOTICE:  Rel pg_class: Pages: 7 --> 3; Tuple(s) moved: 76. CPU 0.01s/0.01u sec.
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!# \q
--- end vacuum ---

The log says (I'm running the backend with -d 2):

--- begin log ---
DEBUG:  query: vacuum verbose;
DEBUG:  ProcessUtility: vacuum verbose;
NOTICE:  --Relation pg_type--
NOTICE:  Pages 3: Changed 2, reaped 2, Empty 0, New 0; Tup 159: Vac 16, Keep/VTL 0/0,
Crash 0, UnUsed 0, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 6296/156;
EndEmpty/Avail. Pages 0/1. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 16. CPU 0.00s/0.00u
sec.
NOTICE:  Rel pg_type: Pages: 3 --> 3; Tuple(s) moved: 1. CPU 0.01s/0.00u sec.
NOTICE:  Index pg_type_oid_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_typname_index: Pages 2; Tuples 159: Deleted 1. CPU 0.00s/0.00u 
sec.
NOTICE:  --Relation pg_attribute--
NOTICE:  Pages 16: Changed 9, reaped 8, Empty 0, New 0; Tup 1021: Vac 160, Keep/VTL 
0/0,
Crash 0, UnUsed 0, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 16480/16480;
EndEmpty/Avail. Pages 0/8. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 160. CPU
0.00s/0.01u sec.
NOTICE:  Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 160. CPU
0.00s/0.00u sec.
NOTICE:  Rel pg_attribute: Pages: 16 --> 14; Tuple(s) moved: 43. CPU 0.01s/0.01u sec.
NOTICE:  Index pg_attribute_relid_attnam_index: Pages 16; Tuples 1021: Deleted 43. CPU
0.00s/0.00u sec.
NOTICE:  Index pg_attribute_relid_attnum_index: Pages 8; Tuples 1021: Deleted 43. CPU
0.00s/0.00u sec.
NOTICE:  --Relation pg_class--
NOTICE:  Pages 7: Changed 1, reaped 7, Empty 0, New 0; Tup 136: Vac 257, Keep/VTL 0/0,
Crash 0, UnUsed 0, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 38880/31944;
EndEmpty/Avail. Pages 0/6. CPU 0.00s/0.00u sec.
NOTICE:  Index 

[HACKERS] This script will crash the connection

2001-01-23 Thread Steve Howe

Hello all,

Please try this script; it will crash the current connection.
I'm using the 01/18/2001 PostgreSQL v7.1 beta3 snapshot.

-- Script begin -
create table blah(
  var_field varchar(8),
  n1 integer default 23,
  n2 integer,
  arr_str varchar[],
  m money,
  s text
);

create rule blah_update as
 on update to blah
   do
 notify TestEvent;

INSERT INTO blah (var_field, n1, n2, arr_str, m, s) VALUES ('aaa', 1, 2,
NULL, NULL, NULL);
UPDATE blah SET n1=n1+1;  -- Won't crash the connection
UPDATE blah SET n1=2 WHERE var_field='aaa' AND n1=1 AND n2=2 AND arr_str IS
NULL AND m IS NULL; -- Will crash the connection

-- Script end -
psql will print:

pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!#

Any comments ?

I need this kind of code working for a demo for ZeosDBO users...


Best Regards,
Steve Howe





Re: [HACKERS] C++ interface build on FreeBSD 4.2 broken?

2001-01-23 Thread Patrick Welche

On Sat, Jan 20, 2001 at 08:06:51PM -0500, Tom Lane wrote:
 What I've done to solve the immediate C++ problem is to take the
 declaration of sys_nerr out of c.h entirely, and put it into the
 two C modules that actually need it.  However, I'm still wondering
 whether we should not drop the rangecheck on errno completely.

Probably not useful, but in our errno.h, sys_nerr is defined

#if !defined(_ANSI_SOURCE) && !defined(_POSIX_C_SOURCE) && \
    !defined(_XOPEN_SOURCE)


P



[HACKERS] Strange error in PHP/Postgre on RadHat?

2001-01-23 Thread Dmitri E. Gurevich

Hi all!

I'm using PostgreSQL 7.0.3 and PHP 4.x to create a DB-driven site, which
is intended to work under Red Hat.

I often get very strange errors like:
"Warning:  PostgreSQL query failed:  pqReadData() --  read() failed:
errno=32 Broken pipe in /usr/local/home/httpd/htdocs/zyx/xyz.php on line xx"

Would somebody please tell me what the problem is?

P.S.
  In some docs I've found that an error was reported and fixed years ago
(1994, ...) in HTTPD, where using POST methods for forms caused almost the
same situation (a SIGPIPE error).

Thanks in advance,
Dmitri.





[HACKERS] Strange..

2001-01-23 Thread Patrick Welche

rfb=# insert into person (id,surname) values (2274,'Unknown!');
ERROR:  Relation 'subject' does not exist

Correct - where does subject come from?!

rfb=# \d person
Table "person"
 Attribute | Type  | Modifier 
---+---+--
 id| bigint| 
 surname   | character(20) | 
 firstname | character(30) | 
 email | character(30) | 
 phone | character(16) | 
 rfbdate   | date  | 
Index: name_idx

(in fact, there is no 'subject' in any table anywhere)

Am I going spare?

Cheers,

Patrick

 PostgreSQL 7.1beta3 on i386-unknown-netbsdelf1.5Q, compiled by GCC egcs-1.1.2




Re: [HACKERS] Re: MySQL and BerkleyDB (fwd)

2001-01-23 Thread Emmanuel Charpentier

Peter Eisentraut wrote:
 
 The Hermit Hacker writes:
 
  Is anyone looking at doing this?  Is this purely a MySQL-ism, or is it
  something that everyone else has except us?



 It's not required by SQL, that's for sure.  I think in 7.2 we'll tackle
 schema support, which will accomplish the same thing.  Many people
 (including myself) are of the opinion that not allowing cross-db access is
 in fact a feature.

I am of the opposite opinion: cross-DB joining is the only reasonable
way to cope with the unfortunate, disgraceful, unreasonable, but quite
inescapable real-life fact that data do not all live in the same server
at any but the smallest sites ...

I recently made a plea on this list ("A post-7.1 wishlist") in this
direction, and got an answer (Peter Eisentraut?) that was more or less
along the lines of "over our dead bodies!" ... Sigh ...

However, I *think* that it could be done by another tool, such as
Easysoft's (Nick Gorham's, I think) SQL Engine, which allows joins
between any ODBC-reachable data sources. This tool is unreasonably
expensive for private use ($800 + $200/year mandatory maintenance). A
PostgreSQL alternative would be, IMSAO, a huge benefit, even huger if
able to cross-join with ODBC data sources ...

M$ Access has had this since version 1, and that's a hell of a handy
feature for a lot of cases involving management of multiple data sources
...

   why not just build PgSQL, and have transaction support *with* subselects
   and everything else that mySQL doesn't have?
 
  I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at
  least I couldn't figure out how to do it.)  MySQL handles this, so
  I'm using MySQL and would also like to have transaction support...

I have to say that my daily work involves this kind of problem, with
data sources ranging from SAS datasets under MVS/XA to Excel files to
Oracle databases to you-name-it ... That's the kind of problem I would
*love* to have PostgreSQL cope with, and *not* M$ Access ...

[ Back to lurking mode ... ]

E. Charpentier

--
Emmanuel Charpentier



[HACKERS] beta4 ... almost time to wrap one ...

2001-01-23 Thread The Hermit Hacker


There have been a lot of fixes/patches going into the tree ... would like
to wrap up a beta4 before the weekend, unless there are any objections?


Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org




Re: [HACKERS] WAL documentation

2001-01-23 Thread Barry Lind

Not knowing much about WAL, but understanding a good deal about Oracle's
logs, I read the WAL documentation below.  While it is good, after
reading it I am still left with a couple of questions and therefore
believe the doc could be improved a bit.

The two questions I am left with after reading the WAL doc are:

1) In the 'WAL Parameters' section, paragraph 3 there is the following
sentence:
"After a checkpoint has been made, any log segments written before the
redo record may be removed/archived..."  What does the 'may' refer to?
Does the database administrator need to go into the directory and
remove the no longer necessary log files?  What does archiving have to
do with this?  If I archived all log files, could I roll forward a
backup made previously?  That is the only reason I can think of that you
would archive log files (at least that is why you archive log files in
Oracle).

2) The doc doesn't seem to explain how on database recovery the database
knows which log file to start with.  I think walking through an example
of how after a database crash, the log file is used for recovery, would
be useful.  At least it would make me as a user of postgres feel better
if I understood how crashes are recovered from.

thanks,
--Barry




Oliver Elphick wrote:
 
 Here is documentation for WAL, as text for immediate review and as SGML
 source, generated from Vadim's original text with my editing.
 
 Please review for correctness.
 
 === WAL chapter ==
 
 Write-Ahead Logging (WAL) in Postgres
 
 Author: Written by Vadim Mikheev and Oliver Elphick.
 
 General description
 
 Write Ahead Logging (WAL) is a standard approach to transaction logging.
 Its detailed description may be found in most (if not all) books about
 transaction processing. Briefly, WAL's central concept is that changes to
 data files (where tables and indices reside) must be written only after
 those changes have been logged - that is, when log records have been
 flushed to permanent storage. When we follow this procedure, we do not
 need to flush data pages to disk on every transaction commit, because we
 know that in the event of a crash we will be able to recover the database
 using the log: any changes that have not been applied to the data pages
 will first be redone from the log records (this is roll-forward recovery,
 also known as REDO) and then changes made by uncommitted transactions
 will be removed from the data pages (roll-backward recovery - UNDO).
 
 Immediate benefits of WAL
 
 The first obvious benefit of using WAL is a significantly reduced number
 of disk writes, since only the log file needs to be flushed to disk at
 the time of transaction commit; in multi-user environments, commits of
 many transactions may be accomplished with a single fsync() of the log
 file. Furthermore, the log file is written sequentially, and so the cost
 of syncing the log is much less than the cost of syncing the data pages.
 
 The next benefit is consistency of the data pages. The truth is that,
 before WAL, PostgreSQL was never able to guarantee consistency in the
 case of a crash.  Before WAL, any crash during writing could result in:
 
 1. index tuples pointing to non-existent table rows;
 2. index tuples lost in split operations;
 3. totally corrupted table or index page content, because of
partially written data pages.
 
 (Actually, the first two cases could even be caused by use of the "pg_ctl
 -m {fast | immediate} stop" command.)  Problems with indices (problems
 1 and 2) might have been capable of being fixed by additional fsync()
 calls, but it is not obvious how to handle the last case without WAL;
 WAL saves the entire data page content in the log if that is required
 to ensure page consistency for after-crash recovery.
 
 Future benefits
 
 In this first release of WAL, UNDO operation is not implemented, because
 of lack of time. This means that changes made by aborted transactions
 will still occupy disk space and that we still need a permanent pg_log
 file to hold the status of transactions, since we are not able to re-use
 transaction identifiers.  Once UNDO is implemented, pg_log will no longer
 be required to be permanent; it will be possible to remove pg_log at
 shutdown, split it into segments and remove old segments.
 
 With UNDO, it will also be possible to implement SAVEPOINTs to allow
 partial rollback of invalid transaction operations (parser errors caused
 by mistyping commands, insertion of duplicate primary/unique keys and
 so on) with the ability to continue or commit valid operations made by
 the transaction before the error.  At present, any error will invalidate
 the whole transaction and require a transaction abort.
 
 WAL offers the opportunity for a new method for database on-line backup
 and restore (BAR).  To use this method, one would have to make periodic
 saves of data files to another disk, a tape or another host and also
 archive the WAL log 
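The core rule the chapter describes (log first, fsync the log, only then touch the data pages; after a crash, roll forward by replaying the log) can be modelled in a few lines. This is only a toy sketch of the idea, with JSON records and a dict standing in for data pages; the class name `MiniWAL` is made up and this is in no way PostgreSQL's actual log format:

```python
import json
import os

class MiniWAL:
    """Toy write-ahead log: a change is logged and flushed to disk
    *before* it is applied to the in-memory "data pages"."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.pages = {}  # stands in for the data pages

    def write(self, key, value):
        # 1. Append the log record and force it to permanent storage.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        # 2. Only now touch the data page.  It need not be flushed on
        #    every commit: a crash here is repaired by recover() below.
        self.pages[key] = value

    def recover(self):
        # Roll-forward recovery (REDO): replay every logged change in order.
        self.pages = {}
        if os.path.exists(self.log_path):
            with open(self.log_path) as log:
                for line in log:
                    rec = json.loads(line)
                    self.pages[rec["key"]] = rec["value"]
        return self.pages
```

Because only the sequentially written log is fsync'd per commit, many commits can share one flush, which is exactly the benefit the chapter claims.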

[HACKERS] Strange.. solved

2001-01-23 Thread Patrick Welche

By comparing backups, I found

CREATE CONSTRAINT TRIGGER "unnamed" AFTER INSERT OR UPDATE ON "person"  NOT 
DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins" 
('unnamed', 'person', 'subject', 'UNSPECIFIED', 'subjectid', 'id');

Don't know where that came from, but probably operator error. There isn't
an easy way of scrubbing an unnamed trigger, is there? (I dumped/edited/reloaded.)

Cheers,

Patrick



Re: [HACKERS] pg_shadow.usecatupd attribute

2001-01-23 Thread Tom Lane

Alex Pilosov [EMAIL PROTECTED] writes:
 Just to clarify for stupid me: you want to remove it and forbid catalog
 updates or remove it and allow catalog updates? (I hope its latter :)

Right, the latter.  If anyone is actually using usecatupd to prevent
themselves from shooting themselves in the foot, speak now or forever
hold your peace ...

regards, tom lane



[HACKERS] Re: [PATCHES] Small patch to replace 'idle' by 'trans' if transactionis still open

2001-01-23 Thread Christof Petig

If you don't know what this is all about, read the bottom (I'll requote my posting).

Bruce Momjian wrote:

  On Fri, 19 Jan 2001, Bruce Momjian wrote:
   Seems we decided against this.  Sorry.
 
  Huh?  from reading Tom's response, sounds like it would be something
  useful?  I know I find having as much information about state in the ps
  listing helps me alot, and knowing if its 'idle' vs 'idle (in
  transaction)' provides at lesat more detailed information then just 'idle'

  Who was the 'we' in the above decision?  Tom seemed in favor of it, I know
  I'm in favor of it .. and you are not in favor of it ...

 There must have been some discussion about it.  I don't see it in the
 code, and I remember it was rejected for some reason.  Check the archives.

The thing which comes closest to a rejection was the 'I can't decide' mail
from you (answered by Tom). The conclusion sounded like 'since we're not clear on
this subject, we won't touch it yet'. And there was some unsettled discussion
about the best wording to show in 'ps'.

'trans' seemed too short (and too much unlike 'idle') (as indicated by Bruce)
and 'idle (open transaction)' might give difficulties on platforms which limit
the length of the string (as indicated by Tom)

I'll CC Hackers (where this discussion belongs)

Christof

-

Quoting:

Subject: Re: [PATCHES] Small patch to replace 'idle' by 'trans' if transaction
is still open
Date: Mon, 09 Oct 2000 22:46:56 -0400
From: Tom Lane [EMAIL PROTECTED]

Bruce Momjian [EMAIL PROTECTED] writes:
 I can't decide if this is of general use.  My inclination is that
 someone in a transaction sitting a prompt should still show as idle.

The idea seemed good to me, although I didn't look at the code to see
if the implementation was any good ;-).  I know we've frequently had
questions on the lists where it was interesting to know if any
transactions were being held open --- and right now there's no easy
way to tell.

regards, tom lane

--

Subject: Small patch to replace 'idle' by 'trans' if transaction is still open
Date: Tue, 03 Oct 2000 21:28:36 +0200
From: Christof Petig [EMAIL PROTECTED]

If you are looking for programs which tend to hold longstanding
transactions, this micro patch might be handy.
Whether it is of general use is debatable. It will replace 'idle' by
'trans' if the backend is idle but a transaction is pending.

Simply use ps to view the backend's status.

Christof

--- src/backend/commands/async.c~   Sun May 14 05:18:35 2000
+++ src/backend/commands/async.cTue Oct  3 10:31:54 2000
@@ -818,7 +818,7 @@
 */
pq_flush();

-   PS_SET_STATUS("idle");
+   PS_SET_STATUS(IsTransactionBlock()?"trans":"idle");
TPRINTF(TRACE_NOTIFY, "ProcessIncomingNotify: done");
 }

--- src/backend/tcop/postgres.c~Thu Aug 31 09:18:57 2000
+++ src/backend/tcop/postgres.c Tue Oct  3 10:32:23 2000
@@ -1496,7 +1496,7 @@

for (;;)
{
-   PS_SET_STATUS("idle");
+   PS_SET_STATUS(IsTransactionBlock()?"trans":"idle");

/* 
 *   (1) tell the frontend we're ready for a new query.





Re: [HACKERS] $PGDATA/base/???

2001-01-23 Thread bpalmer

On Tue, 23 Jan 2001, Bruce Momjian wrote:

 I have added this to /contrib for 7.1.

Not sure if you know this, but you checked in the compiled code along with
the .o file...

FYI.

b. palmer,  [EMAIL PROTECTED]
pgp:  www.crimelabs.net/bpalmer.pgp5




Re: [HACKERS] $PGDATA/base/???

2001-01-23 Thread Bruce Momjian

  What I will probably do is make a wrapper around it so it I can do:
 
  ls | oidmapper
 
  and see the files as table names.
 
 Hmmm I think I can add that to the code..
 
 will try..
 

It has to be pretty smart.  Consider this:

$ pwd
/u/pg/data/base/18720
$ ls -l

It has to read the directories above, looking for a directory name that
is all numbers, and then use that to find the database name.
Of course, if you are not in such a directory, you cannot determine the
database and have to require it on the command line.

It then has to process the contents of ls -l, find the oids in
there, and map them:

total 2083
-rw---  1 postgres  postgres8192 Jan 15 23:43 1215
-rw---  1 postgres  postgres8192 Jan 15 23:43 1216
-rw---  1 postgres  postgres8192 Jan 15 23:43 1219
-rw---  1 postgres  postgres   24576 Jan 15 23:43 1247
-rw---  1 postgres  postgres  114688 Jan 19 21:43 1249
-rw---  1 postgres  postgres  229376 Jan 15 23:43 1255
-rw---  1 postgres  postgres   24576 Jan 15 23:59 1259
-rw---  1 postgres  postgres8192 Jan 15 23:43 16567
-rw---  1 postgres  postgres   16384 Jan 16 00:04 16579

The numbers < 16k are system tables so you probably need code to look up
stuff < 16k, and if it doesn't begin with pg_, it is not an oid.

It also should handle 'du':

$ du
1517./1
1517./18719
2085./18720
1517./27592
20561   ./27593
27198   .

As you can see, this could be tricky.
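The mapping step itself can be sketched like this (a toy illustration: the function name is made up, `oid_to_name` stands in for a mapping that would really be built from a `SELECT oid, relname FROM pg_class` query, and real output handling would be fussier):

```python
def map_listing(lines, oid_to_name):
    """Rewrite trailing numeric file names (table OIDs) to table names.

    Only the last whitespace-separated field of each line is treated as
    a file name, so sizes and dates in `ls -l` output are left alone.
    """
    out = []
    for line in lines:
        parts = line.rsplit(None, 1)
        if len(parts) == 2:
            prefix, name = parts
            bare = name.lstrip("./")  # `du` prints names like ./18719
            if bare.isdigit():
                name = oid_to_name.get(int(bare), name)
            out.append(prefix + " " + name)
        else:
            out.append(line)
    return out
```

This dodges the worst trap (sizes like 24576 are numeric too, but never the last field), though as Bruce notes, a robust tool would still need per-command knowledge of where the file name sits.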

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] beta4 ... almost time to wrap one ...

2001-01-23 Thread Tom Lane

The Hermit Hacker [EMAIL PROTECTED] writes:
 There have been a lot of fixes/patches going into the tree ... would like
 to wrap up a beta4 before the weekend, unless there are any objections?

Agreed, we should push out beta4 before most of core leaves for
LinuxWorld.

I'd like to see if I can get that HandleDeadLock rewrite done
beforehand.  Anyone else have any "must fix" items?

regards, tom lane



[HACKERS] Re: [PATCHES] Small patch to replace 'idle' by 'trans' if transactionisstill open

2001-01-23 Thread Bruce Momjian

I liked the 'idle in trans' because people can search for just the first
word if they want.

 Bruce Momjian [EMAIL PROTECTED] writes:
  Any solid consensus on this?
 
 I'm for it (given a short status string --- "idle in tx" or "idle in trans"
 seem fine).  Marc's for it.  Who's against it?
 
   regards, tom lane
 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] This script will crash the connection

2001-01-23 Thread Tom Lane

"Steve Howe" [EMAIL PROTECTED] writes:
 Please try this script, it will crash the current connection.

Crash confirmed.  Thanks for the report --- I'm on it...

regards, tom lane



Re: [HACKERS] $PGDATA/base/???

2001-01-23 Thread Bruce Momjian

Thanks.  Removed.

 On Tue, 23 Jan 2001, Bruce Momjian wrote:
 
  I have added this to /contrib for 7.1.
 
 Not sure if you know this,  but you checked in the code compiled and w/
 the .o file...
 
 FYI.
 
 b. palmer,  [EMAIL PROTECTED]
 pgp:  www.crimelabs.net/bpalmer.pgp5
 
 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] beta4 ... almost time to wrap one ...

2001-01-23 Thread Bruce Momjian


FYI, I still have 50 open items.  I will post a list.

 
 There have been a lot of fixes/patches going into the tree ... would like
 to wrap up a beta4 before the weekend, unless there are any objections?
 
 
 Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
 Systems Administrator @ hub.org
 primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org
 
 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] beta4 ... almost time to wrap one ...

2001-01-23 Thread Bruce Momjian

 The Hermit Hacker [EMAIL PROTECTED] writes:
  There have been a lot of fixes/patches going into the tree ... would like
  to wrap up a beta4 before the weekend, unless there are any objections?
 
 Agreed, we should push out beta4 before most of core leaves for
 LinuxWorld.
 
 I'd like to see if I can get that HandleDeadLock rewrite done
 beforehand.  Anyone else have any "must fix" items?

Tons of them before final. I am about to put out an email.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] Open 7.1 items

2001-01-23 Thread Bruce Momjian

I have about 20 open 7.1 items that I need to get resolved before I can
start getting the doc TODO list started.  The issues relate to JDBC,
ODBC, and lots of other stuff that need to be settled before we can
finalize 7.1.

They cannot be easily summarized in one line.  You really have to see
the whole email to understand the issues.

How do people want to do this?  I can post them to hackers, or put them
on my web site.  I posted them to hackers during the past few days, but
many went unanswered.  These are all relatively new from the past few
months.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] beta4 ... almost time to wrap one ...

2001-01-23 Thread The Hermit Hacker

On Wed, 24 Jan 2001, Tom Lane wrote:

 The Hermit Hacker [EMAIL PROTECTED] writes:
  There have been a lot of fixes/patches going into the tree ... would like
  to wrap up a beta4 before the weekend, unless there are any objections?

 Agreed, we should push out beta4 before most of core leaves for
 LinuxWorld.

Okay, am going to aim for Friday for Beta4 ... if we can get as many fixes
in as possible before then, great ...





Re: [HACKERS] This script will crash the connection

2001-01-23 Thread Tom Lane

"Steve Howe" [EMAIL PROTECTED] writes:
 create rule blah_update as
  on update to blah
do
  notify TestEvent;

 UPDATE blah SET n1=n1+1;  -- Won't crash the connection
 UPDATE blah SET n1=2 WHERE var_field='aaa' AND n1=1 AND n2=2 AND arr_str IS
 NULL AND m IS NULL; -- Will crash the connection

The problem here is that the query rewriter tries to hang the query's
qualification (WHERE clause) onto the rule's action query, so that
the action query won't be done unless the query finds at least one
row to update.

NOTIFY commands, being utility statements, don't have qualifications.
In 7.0 and before, the qual clause just vanished into the ether, and
so in this example the NOTIFY would execute whether the UPDATE updated
any rows or not.  In 7.1 there is physically no place to hang the qual
(no jointree), and thus a crash.

Not sure what to do here.  Adding quals to utility statements is right
out, however --- even if we weren't late in beta, the concept doesn't
make any sense to me.  For one reason, utility statements don't have
FROM clauses against which to evaluate the quals.  I am leaning to the
idea that we should forbid NOTIFY in rules altogether.  Jan, what's your
thought?

Steve, your immediate move is to use a trigger rather than a rule to
execute the NOTIFY.  Meanwhile, we have to think about what to do...

regards, tom lane



RE: [HACKERS] WAL documentation

2001-01-23 Thread Christopher Kings-Lynne

Also, what happens with the size of the WAL logs?  Do they just grow forever, 
eventually filling up your hard drive, or should they reach a stable point where they 
tend not to grow any further?

I.e., will we sysadmins have to set up cron jobs to tar/gzip old WAL logs, or what???

Chris

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of Barry Lind
 Sent: Wednesday, January 24, 2001 12:32 PM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: [HACKERS] WAL documentation
 
 
 Not knowing much about WAL, but understanding a good deal about Oracle's
 logs, I read the WAL documentation below.  While it is good, after
 reading it I am still left with a couple of questions and therefore
 believe the doc could be improved a bit.
 
 The two questions I am left with after reading the WAL doc are:
 
 1) In the 'WAL Parameters' section, paragraph 3 there is the following
 sentence:
 "After a checkpoint has been made, any log segments written before the
 redo record may be removed/archived..."  What does the 'may' refer to?
 Does the database administrator need to go into the directory and
 remove the no longer necessary log files?  What does archiving have to
 do with this?  If I archived all log files, could I roll forward a
 backup made previously?  That is the only reason I can think of that you
 would archive log files (at least that is why you archive log files in
 Oracle).
 
 2) The doc doesn't seem to explain how on database recovery the database
 knows which log file to start with.  I think walking through an example
 of how after a database crash, the log file is used for recovery, would
 be useful.  At least it would make me as a user of postgres feel better
 if I understood how crashes are recovered from.
 
 thanks,
 --Barry
 
 
 
 
 Oliver Elphick wrote:
  
  Here is documentation for WAL, as text for immediate review and as SGML
  source, generated from Vadim's original text with my editing.
  
  Please review for correctness.
  
  === WAL chapter ==
  
  Write-Ahead Logging (WAL) in Postgres
  
  Author: Written by Vadim Mikheev and Oliver Elphick.
  
  General description
  
  Write Ahead Logging (WAL) is a standard approach to transaction logging.
  Its detailed description may be found in most (if not all) books about
  transaction processing. Briefly, WAL's central concept is that 
 changes to
  data files (where tables and indices reside) must be written only after
  those changes have been logged - that is, when log records have been
  flushed to permanent storage. When we follow this procedure, we do not
  need to flush data pages to disk on every transaction commit, because we
  know that in the event of a crash we will be able to recover 
 the database
  using the log: any changes that have not been applied to the data pages
  will first be redone from the log records (this is roll-forward 
 recovery,
  also known as REDO) and then changes made by uncommitted transactions
  will be removed from the data pages (roll-backward recovery - UNDO).
  
  Immediate benefits of WAL
  
  The first obvious benefit of using WAL is a significantly reduced number
  of disk writes, since only the log file needs to be flushed to disk at
  the time of transaction commit; in multi-user environments, commits of
  many transactions may be accomplished with a single fsync() of the log
  file. Furthermore, the log file is written sequentially, and so the cost
  of syncing the log is much less than the cost of syncing the data pages.
  
  The next benefit is consistency of the data pages. The truth is that,
  before WAL, PostgreSQL was never able to guarantee consistency in the
  case of a crash.  Before WAL, any crash during writing could result in:
  
  1. index tuples pointing to non-existent table rows;
  2. index tuples lost in split operations;
  3. totally corrupted table or index page content, because of
 partially written data pages.
  
  (Actually, the first two cases could even be caused by use of the "pg_ctl
  -m {fast | immediate} stop" command.)  Problems with indices (problems
  1 and 2) could probably have been fixed by additional fsync() calls, but
  it is not obvious how to handle the last case without WAL; WAL saves the
  entire data page content in the log if that is required to ensure page
  consistency for after-crash recovery.
  
  Future benefits
  
  In this first release of WAL, UNDO operation is not implemented, because
  of lack of time. This means that changes made by aborted transactions
  will still occupy disk space and that we still need a permanent pg_log
  file to hold the status of transactions, since we are not able to re-use
  transaction identifiers.  Once UNDO is implemented, pg_log will no longer
  be required to be permanent; it will be possible to remove pg_log at
  shutdown, split it into segments and remove old segments.
  
  With UNDO, it will also be 

[HACKERS] Re: [GENERAL] postgres memory management

2001-01-23 Thread Alexander Jerusalem

At 21:40 22.01.01, Peter Mount wrote:
At 13:18 21/01/01 +0100, Alexander Jerusalem wrote:
Hi all,

I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat 
Linux 7. I'm sending lots of insert statements to the postgresql server
from another machine via JDBC. During that process postgresql continues to
take up more and more memory and seemingly never returns it to the
system. Oddly, if I watch the postmaster and its subprocesses in ktop, I
can't see which process takes up this memory. ktop shows that the
postgresql-related processes have constant memory usage, but the overall
memory usage always increases as long as I keep sending insert statements.

When the database connection is closed, no memory is reclaimed, the 
overall memory usage stays the same. And when I close down all postgresql 
processes including postmaster, it's the same.
I'm rather new to Linux and postgresql so I'm not sure if I should call 
this a memory leak :-)
Has anybody experienced a similar thing?

I'm not sure myself. You can rule out JDBC (or Java) here as you say you 
are connecting from another machine.

When your JDBC app closes, does it call the connection's close() method? 
Do any messages like "Unexpected EOF from client" appear on the server side?

The only other thing that comes to mind is that possibly something weird is 
happening with IPC. After you close down postgres, does ipcclean free up 
any memory?

I'm cc'ing the hackers list and the new jdbc list.

Peter

Thanks for your answer!

Yes, I'm calling Connection.close(). I don't get any error messages, but 
maybe I just don't see them because postgresql is started automatically at 
run level 3 and I'm not sure where its output goes. (pg_log contains only 
garbage, or maybe it's a binary file.) I tried ipcclean just now and it 
doesn't free the memory, but it gives me some messages that I cannot interpret:

Shared memory 0 ... skipped. Process still exists (pid ).
Shared memory 1 ... skipped. Process still exists (pid ).
Shared memory 2 ... skipped. Process still exists (pid ).
Shared memory 3 ... skipped. Process still exists (pid ).
Semaphore 0 ... resource(s) deleted
Semaphore 1 ... resource(s) deleted

Oddly, when I try to run ipcclean a second time, it says "ipcclean: You 
still have a postmaster running", which is not the case, as ps -e proves.

Alexander Jerusalem
[EMAIL PROTECTED]
vknn




Re: [HACKERS] Re: postgres memory management

2001-01-23 Thread Alexander Jerusalem

At 22:29 22.01.01, Alfred Perlstein wrote:
* Peter Mount [EMAIL PROTECTED] [010122 13:21] wrote:
  At 13:18 21/01/01 +0100, Alexander Jerusalem wrote:
  Hi all,
  
  I'm experiencing some strange behaviour with postgresql 7.0.3 on Red Hat
  Linux 7. I'm sending lots of insert statements to the postgresql server
  from another machine via JDBC. During that process postgresql 
 continues to
  take up more and more memory and seemingly never returns it to the 
 system.
  Oddly, if I watch the postmaster and its subprocesses in ktop, I can't
  see which process takes up this memory. ktop shows that the postgresql
  related processes have a constant memory usage but the overall memory
  usage always increases as long as I continue to send insert statements.
  
  When the database connection is closed, no memory is reclaimed, the
  overall memory usage stays the same. And when I close down all postgresql
  processes including postmaster, it's the same.
  I'm rather new to Linux and postgresql so I'm not sure if I should call
  this a memory leak :-)
  Has anybody experienced a similar thing?
 
  I'm not sure myself. You can rule out JDBC (or Java) here as you say you
  are connecting from another machine.
 
  When your JDBC app closes, does it call the connection's close() method?
  Do any messages like "Unexpected EOF from client" appear on the 
 server side?
 
  The only other thing that comes to mind is that possibly something weird is
  happening with IPC. After you closed down postgres, does ipcclean free up
  any memory?

I don't know if this is valid for Linux, but it is how FreeBSD
works: for the most part, used memory is never freed, it is only
marked as reclaimable.  This is so the system can cache more data.
On a freshly booted FreeBSD box you'll have a lot of 'free' memory;
after the box has been running for a long time, the 'free' memory
will probably never go higher than 10 megs - the rest is being used
as cache.

The main things you have to worry about are:
a) really running out of memory (are you using a lot of swap?)
b) not cleaning up IPC, as Peter suggested.

Thanks for your answer!

I'm rather new to Linux, so I can't tell whether it works that way here. 
But I noticed that other programs free some memory when I quit them. It's 
true, though, that I'm not running out of memory: I have 300 MB of free RAM 
and no swap space is used. As I wrote in reply to Peter's mail, ipcclean 
doesn't change anything.

Alexander Jerusalem
[EMAIL PROTECTED]
vknn




[HACKERS] Re: postgres memory management

2001-01-23 Thread Justin Clift

Hi Alexander,

I've noticed that the PG 7.0.3 ipcclean script uses "ps x | grep -s
'postmaster'" to determine whether a postmaster daemon is still running,
which, at least on Mandrake Linux 7.2, doesn't work as expected.  On
this version of Linux, the ps/grep combination will find itself, and
ipcclean will then complain about an existing postmaster.

I found the solution was to edit the ipcclean script and change the
"ps x | grep -s 'postmaster'" part to "ps -e | grep -s 'postmaster'".
This then works correctly with Mandrake 7.2.

Regards and best wishes,

Justin Clift

snip
 
 Oddly, when I try to run ipcclean a second time, it says: ipcclean: You
 still have a postmaster running. Which is not the case as ps -e proves.
 
 Alexander Jerusalem
 [EMAIL PROTECTED]
 vknn