Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Andy Dale

Hi,

Sorry for the slight delay in my response.

I am using 3 PostgreSQL databases and writing to them using an SQL proxy.
These databases have a high write volume.  On rebooting all 3 servers for
OS/software updates, I would like to figure out which DB was written to
last (this assumes the DBs/servers are not all taken down at the same
time); the times are kept in sync with NTP.

I imagine it is possible to get this behaviour with AFTER triggers, but does this
mean I have to attach the same trigger to each table?

Thanks,

Andy

On 04/01/07, Scott Marlowe [EMAIL PROTECTED] wrote:


On Thu, 2007-01-04 at 11:11, Andy Dale wrote:
 Hi,

 I need to be able to determine the last time (and date) that a
 database was written to.  I know it could be possible just to check
 the last modified dates in the PGDATA directory, but i need to compare
 the last write time of 3 databases (connecting via JDBC).  Hopefully
 the last write date is contained somewhere in a system table
 (information schema) but i have no idea of the table(s) i would need
 to query.

Bad news, it's not generally stored.

Good news, it's not that hard to implement.

Perhaps if you give us the bigger picture we can make more logical
suggestions on how to accomplish it.



Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Bruno Wolff III
On Mon, Jan 08, 2007 at 09:22:05 +0100,
  Andy Dale [EMAIL PROTECTED] wrote:
 Hi,
 
 Sorry for the slight delay in my response.
 
 I am using 3 PostgreSQL databases and writing to them using an SQL proxy.
 These databases have a high write volume.  On rebooting all 3 servers for
 OS/Software updates, i would like to figure out which was the last written
 to DB (this is assuming the DB/Servers are not all taken down at the same
 time), the times are kept in sync with NTP.
 
 I imagine it is possible to get this behaviour with after triggers, but this
 means i have to attach the same trigger to each table ??

I think what Scott was suggesting was that you tell us what you are planning
to do with the time. Depending on what you are trying to do, there may be
better ways of doing things.

Also the time of last update for an MVCC database is a bit nebulous and to get it
in the database might not be possible with the semantics you want. For example
getting the time a transaction is committed is going to be hard without
modifying the backend, as any triggers will run before a transaction is
committed and can't know the precise time of the commit.

 
 Thanks,
 
 Andy
 
 On 04/01/07, Scott Marlowe [EMAIL PROTECTED] wrote:
 
 On Thu, 2007-01-04 at 11:11, Andy Dale wrote:
  Hi,
 
  I need to be able to determine the last time (and date) that a
  database was written to.  I know it could be possible just to check
  the last modified dates in the PGDATA directory, but i need to compare
  the last write time of 3 databases (connecting via JDBC).  Hopefully
  the last write date is contained somewhere in a system table
  (information schema) but i have no idea of the table(s) i would need
  to query.
 
 Bad news, it's not generally stored.
 
 Good news, it's not that hard to implement.
 
 Perhaps if you give us the bigger picture we can make more logical
 suggestions on how to accomplish it.
 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Andy Dale

Ok.

The SQL proxy I am using (HA-JDBC) has some limitations with regard to
getting its cluster back into sync.  If HA-JDBC uses the wrong DB (one
that has been out of action for a while) as the starting point for the
cluster, it will then try to delete stuff from the other DBs on their
introduction to the cluster.

I thought the easiest way to control a complete cluster restart would be
to extract the last write date and introduce the DB with the latest write
date first; this will make certain the above scenario does not happen.

Thanks,

Andy

On 08/01/07, Bruno Wolff III [EMAIL PROTECTED] wrote:


On Mon, Jan 08, 2007 at 09:22:05 +0100,
  Andy Dale [EMAIL PROTECTED] wrote:
 Hi,

 Sorry for the slight delay in my response.

 I am using 3 PostgreSQL databases and writing to them using an SQL
proxy.
 These databases have a high write volume.  On rebooting all 3 servers
for
 OS/Software updates, i would like to figure out which was the last
written
 to DB (this is assuming the DB/Servers are not all taken down at the
same
 time), the times are kept in sync with NTP.

 I imagine it is possible to get this behaviour with after triggers, but
this
 means i have to attach the same trigger to each table ??

I think what Scott was suggesting was that you tell us what you are
planning
to do with the time. Depending on what you are trying to do, there may be
better ways of doing things.

Also the time of last update for an MVCC is a bit nebulous and to get it
in the database might not be possible with the semantics you want. For
example
getting the time a transaction is committed is going to be hard without
modifying the backend, as any triggers will run before a transaction is
committed and can't know the precise time of the commit.


 Thanks,

 Andy

 On 04/01/07, Scott Marlowe [EMAIL PROTECTED] wrote:
 
 On Thu, 2007-01-04 at 11:11, Andy Dale wrote:
  Hi,
 
  I need to be able to determine the last time (and date) that a
  database was written to.  I know it could be possible just to check
  the last modified dates in the PGDATA directory, but i need to
compare
  the last write time of 3 databases (connecting via JDBC).  Hopefully
  the last write date is contained somewhere in a system table
  (information schema) but i have no idea of the table(s) i would need
  to query.
 
 Bad news, it's not generally stored.
 
 Good news, it's not that hard to implement.
 
 Perhaps if you give us the bigger picture we can make more logical
 suggestions on how to accomplish it.
 



[GENERAL] registering ODBC driver...

2007-01-08 Thread riki
hi,

I'm trying to register/install the Postgres ODBC driver on Windows ME.
I've copied the driver from a WinXP machine; it is a single .dll file.
I would like to get it on the list in the ODBC manager.
How can I register this file as an ODBC driver on the WinME machine?

Thanks



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] registering ODBC driver...

2007-01-08 Thread A. Kretschmer
On Mon, 08.01.2007 at 10:54:39 +0100, riki wrote the following:
 hi,
 
 i'm trying to register/install postgres ODBC driver on windowsME.
 i've copied driver from winXP machine. this is one .dll file.
 i would like to get it on the list in ODBC manager.
 how can i register this file as ODBC driver on winME machine?

I think you would be better off using the regular installer to install the
ODBC driver, rather than just copying the DLL...


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Database versus filesystem for storing images

2007-01-08 Thread Clodoaldo

2007/1/6, Maurice Aubrey [EMAIL PROTECTED]:

Clodoaldo wrote:
 But the main factor to push me in the file system direction is the
 HTTP cache management. I want the internet web clients and proxies to
  cache the images. The Apache web server has it ready and easy. If
 the images where to be stored in the DB I would have to handle the
 HTTP cache headers myself. Another code layer. Not too big a deal,
 but if Apache give me it for free...

There's a hybrid approach which has worked well for us.

You store the binary data in the database along with a signature.

On the Apache side, you write a 404 handler that, based on the request,
fetches the binary from the database and writes it locally to the
filesystem based on the signature (using a multi-level hashing scheme
possibly as detailed in previous posts).

When a request comes in to Apache, if the file exists it is served
directly without any db interaction. OTOH, if it's missing, your 404
handler kicks in to build it and you get a single trip to the db.

You get the benefits of keeping the data in the db (transaction
semantics, etc.) but also get the scalability and caching benefits
of having the front-end webservers handle delivery.

If you lose the locally cached data it's not an issue. They'll be
faulted back into existence on demand.

With multiple webservers, you can just allow the data to be cached on
each machine, or if there's too much data for that, have your load
balancer divide the requests to different webserver pools based on the
signature.

As an extension, if you need different versions of the data (like
different sizes of an image, etc.), you can modify your URLs to indicate
the version wanted and have the 404 handler take that into account when
building them. You only store the original content in the database but
could have any number of transformed versions on the webservers. Again,
losing those versions is not an issue and do not require backup.


Very interesting approach. And I think it is also original as I have
not seen any mention of it. Thanks for sharing it.

--
Clodoaldo Pinto Neto

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] TRIGGER BEFORE INSERT

2007-01-08 Thread Rafal Pietrak
Hi All!

I have some old piece of code, that worked two years ago (Postgres
version 7.2, I think), but doesn't work within Postgres 8.1.4 now.

The story is, that I have a trigger on a table (business day
statistics), that is fired before insert; it updates another table
(detailed transaction log), and saves statistics from that update within
the freshly inserted record.

Cutting a much larger (and obfuscated) schema down to its critical
section, I've come up with the following snippet:

CREATE TABLE test_days (id serial unique, dnia date not null default
current_date-'1day'::interval, total int not null);
CREATE TABLE test_utarg(tm timestamp not null, nic int, amount int not
null, dnia int references test_days(id));

INSERT INTO test_utarg (tm,nic, amount) SELECT current_timestamp -
interval_mul('1min'::interval, (random()*1)::integer),
generate_series(1,88), (random()*1)::integer; 

CREATE FUNCTION prado() RETURNS trigger AS $$ DECLARE wydano INTEGER;
BEGIN UPDATE test_utarg SET dnia=new.id WHERE tm BETWEEN new.dnia AND
new.dnia+'1day'::interval; GET DIAGNOSTICS wydano := ROW_COUNT;
new.total := wydano; RETURN new; END; $$ LANGUAGE plpgsql;
CREATE TRIGGER mocarny BEFORE INSERT ON test_days EXECUTE PROCEDURE
prado();

INSERT INTO test_days (dnia) VALUES ('2007-01-06'); 
ERROR:  record new is not assigned yet
DETAIL:  The tuple structure of a not-yet-assigned record is
indeterminate.
CONTEXT:  PL/pgSQL function prado line 1 at SQL statement


And to my ultimate surprise, this one breaks with yet another ERROR.

In the original schema, the ERROR was about TEST_UTARG.DNIA referencing
a not yet available NEW.ID. ... as if constraints within transactions
(inside the trigger) were checked on each step, and not at the end of the
transaction, as appears to have been the case in postgres v7.2.

But the ERROR quoted above worries me even more. Is it true that NEW is
really not-yet-assigned in a BEFORE INSERT trigger? Or maybe I'm not
seeing some other obvious mistake I've made in the code above?

Help, pls!

-- 
-R

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Web interface to update/change postgres data.

2007-01-08 Thread Ravi Malghan
Hi: I want to create a simple web interface that lists table data and lets me
select entries to add/delete/change table data in a table in a Postgres database.
What's the best and easiest way to do this? Wondering if there are any modules
or code out there already.

TIA
Ravi

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread Leandro Repolho

Hello everybody,
In Oracle I use the command connect by prior and I need to use it in
PostgreSQL; what is the syntax?


[GENERAL] Sorting with DISTINCT ON

2007-01-08 Thread Nico Grubert

Hi there,

I have a problem sorting a SQL result if I use DISTINCT ON.

I have a table tblcomment with these columns:
  id (serial)
  path (varchar)
  created (timestamp)
  title (varchar)

These records are in the table tblcomment:

id  pathcreated title

11  /var/black  2007-01-07 22:17:03.001837  Any title
17  /var/blue   2007-01-07 20:35:55.289713  Any title
13  /var/blue   2007-01-15 15:52:58.438375  Any title
12  /var/green  2007-01-08 19:03:50.470073  Any title
18  /var/red    2007-01-07 08:41:47.152676  Any title

Now, I want to get all results from this table and if there are 
duplicates, I want the row whose created column has the latest date.

In this example, I want to have this result:

id  pathcreated title

11  /var/black  2007-01-07 22:17:03.001837  Any title
13  /var/blue   2007-01-15 15:52:58.438375  Any title
12  /var/green  2007-01-08 19:03:50.470073  Any title
18  /var/red    2007-01-07 08:41:47.152676  Any title


My first try was this SQL query:

  SELECT DISTINCT ON (path) path, comment_id, created, title
FROM bewertungen.tblcomments

This does not allow me to append ORDER BY created since I can only 
sort on path because of  DISTINCT ON (path).


My second try was a sub query like this:

SELECT comment_id, path, created, title
 FROM
  ( SELECT DISTINCT ON (path) path, comment_id, created, title
FROM bewertungen.tblcomments
  ) foo_alias
ORDER BY created DESC

But this results into:

id  pathcreated title

11  /var/black  2007-01-07 22:17:03.001837  Any title
17  /var/blue   2007-01-07 20:35:55.289713  Any title
12  /var/green  2007-01-08 19:03:50.470073  Any title
18  /var/red    2007-01-07 08:41:47.152676  Any title

No matter if I use ORDER BY created DESC or
ORDER BY created ASC, it seems that postgres always takes the first row
of the duplicates. In this example:

  17  /var/blue   2007-01-07 20:35:55.289713  Any title.


Any idea, how I can solve my problem?


Regards,
   Nico

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Web interface to update/change postgres data.

2007-01-08 Thread Devrim GUNDUZ
Hi,

On Mon, 2007-01-08 at 05:36 -0800, Ravi Malghan wrote:
 I want to create a simple web interface that lists table data, select
 entries to add/delete/change table data from a table in postgres
 database. Whatz the best and easiest way to do this?

You can use phpPgAdmin: http://www.phpPgAdmin.org (It provides more than
you want, BTW)

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/







Re: [GENERAL] Sorting with DISTINCT ON

2007-01-08 Thread Jorge Godoy
Nico Grubert [EMAIL PROTECTED] writes:

 My first try was this SQL query:

   SELECT DISTINCT ON (path) path, comment_id, created, title
 FROM bewertungen.tblcomments

 This does not allow me to append ORDER BY created since I can only sort on
 path because of  DISTINCT ON (path).

It does allow you to sort on both columns.  

   SELECT DISTINCT ON (path) path, comment_id, created, title
 FROM bewertungen.tblcomments ORDER BY path, created

Maybe you need a more complex query to get what you want (and use
max(created)...). 
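
(For what it's worth, a sketch of how the query might look if the newest
row per path is wanted; table and column names are taken from the thread,
while the DESC ordering and the outer query are additions:)

  SELECT path, comment_id, created, title
    FROM (SELECT DISTINCT ON (path) path, comment_id, created, title
            FROM bewertungen.tblcomments
           ORDER BY path, created DESC  -- keep the newest row per path
         ) AS newest
   ORDER BY created DESC;               -- then sort the survivors by date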

-- 
Jorge Godoy  [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Advice needed on using postgres in commercial product

2007-01-08 Thread pakt sardines

Hi all,

  We're currently developing some large-scale software with the intention
of selling it (online and off-the-shelf).  The version in development uses
Postgres to store stuff in a number of large databases.

Ignoring potential licensing issues for now, the big issue for us is that
the data in the databases has significant intellectual property value.  It
has taken literally years of work to collect the data.  We do not want the
users of the commercial product to be able to fire up postgres and type
something like:  user]% pg_dump their_data > our_product

Additionally, we don't want to have to encrypt every entry in the database,
because that will cause significant overhead during processing.

My question is, what options do we have?  Can postgres store data as some
sort of unreadable binary, much like you would find in a C binary data file?

If not postgres, what other database could possibly do this, if any?  I
really don't want to have to write our own RDBMS.  :)

Thanks in advance for any advice or suggestions on this subject,

Pakt.


Re: [GENERAL] GUI tool that can reverse engineering schemas

2007-01-08 Thread nyenyec
Sorry about the typo in the title, of course I don't want to reverse
any engineering schemas. :)

I ended up using Squirrel SQL. (A tool I haven't used in a long time
and almost forgot about.)
http://squirrel-sql.sourceforge.net/

The diagraming part is not supersmart, but good enough for my purposes.
For now at least.

I also forgot to mention that I'm looking for something that runs on a
Mac.

If anyone knows a better or comparable solution, let me know.

Thanks,
nyenyec

nyenyec wrote:
 Hi,

 Can anyone suggest a free GUI tool that can reverse engineer a
 postgresql schema and show it as a diagram?

 It doesn't have to be very sophisticated, I just need to get a quick
 understanding of schemas that I'm not familiar with.
 
 Thanks,
 nyenyec


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] doubts

2007-01-08 Thread sangeetha k.s

hello,

  PostgreSQL uses bitmap indexing for indexing in databases.
  Is there any technique used to compress the bitmap indices in PostgreSQL,
like Oracle, which uses Byte-Aligned Bitmap Compression to compress bitmaps?

regards,
sangeetha.


Re: [GENERAL] Database versus filesystem for storing images

2007-01-08 Thread Andrew Chernow

apache has very good page and image caching.  You could take advantage
of that using this technique.

 I wonder why this HTTP cache headers argument didn't surface in this
 heated debate.

I did offer up this argument, by the way.

Andrew


Clodoaldo wrote:

2007/1/5, Jorge Godoy [EMAIL PROTECTED]:

Andrew Chernow [EMAIL PROTECTED] writes:
 meet those requirements.  It is far more effecient to have apache 
access

 them

Where weren't we meeting his/her requirements?  All the discussion is 
around
available means to do that.  One option is having the files on the 
database,
the other is on the filesystem.  From my understanding we're 
discussing the

benefits of each one.  Aren't we?


Yes, although I suggested two solutions I asked for anything that
would be considered the best practice. Now I think there is not a best
practice or better, there should be one best practice for each of the
solutions.

I have done an intranet application that stored images in the
database. It worked perfectly and I used the same engine in another
intranet application to store not only images but any document which
also worked perfectly.  The decision to go the dabatase only route was
easy: The filesystem space would have to be negotiated while the space
occupied by the databases were not controlled and used an advanced
storage solution that gave lots of terabytes to be used at will. Also
the any document application should not loose a single document and
access control should be strictly enforced which was much easier to do
with the database since I had no control over the webserver and even
if I had I think the database access is still easier to control than
the filesystem access. That was in a corporate intranet.

What I'm doing now is an internet application. While the FS x DB
synchronicity is very important in some kinds of document management,
it is not in this application. Indeed if a few images are lost each
day it has no meaning in a 500K to 1M inventory. The offended clients
just upload them again. No one will be sued. The images are all
public. No need to control the access.

But the main factor to push me in the file system direction is the
HTTP cache management. I want the internet web clients and proxies to
cache the images. The Apache web server has it ready and easy. If the
images where to be stored in the DB I would have to handle the HTTP
cache headers myself. Another code layer. Not too big a deal, but if
Apache give me it for free...

I wonder why this HTTP cache headers argument didn't surface in this
heated debate. Aren't DB developers/admins aware of the internet
client's bandwidth limitations? Or they just assume the application
would handle the HTTP cache headers? In the applications I created for
intranet bandwidth was almost a non issue and I didn't care to make
them bandwidth efficient, but for the internet the problem is there
and it is big.

Regards,


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Database versus filesystem for storing images

2007-01-08 Thread Maurice Aubrey

Clodoaldo wrote:
But the main factor to push me in the file system direction is the 
HTTP cache management. I want the internet web clients and proxies to

 cache the images. The Apache web server has it ready and easy. If
the images where to be stored in the DB I would have to handle the
HTTP cache headers myself. Another code layer. Not too big a deal,
but if Apache give me it for free...


There's a hybrid approach which has worked well for us.

You store the binary data in the database along with a signature.

On the Apache side, you write a 404 handler that, based on the request,
fetches the binary from the database and writes it locally to the
filesystem based on the signature (using a multi-level hashing scheme
possibly as detailed in previous posts).

When a request comes in to Apache, if the file exists it is served
directly without any db interaction. OTOH, if it's missing, your 404
handler kicks in to build it and you get a single trip to the db.

You get the benefits of keeping the data in the db (transaction
semantics, etc.) but also get the scalability and caching benefits
of having the front-end webservers handle delivery.

If you lose the locally cached data it's not an issue. They'll be
faulted back into existence on demand.

With multiple webservers, you can just allow the data to be cached on
each machine, or if there's too much data for that, have your load
balancer divide the requests to different webserver pools based on the
signature.

As an extension, if you need different versions of the data (like
different sizes of an image, etc.), you can modify your URLs to indicate
the version wanted and have the 404 handler take that into account when
building them. You only store the original content in the database but
could have any number of transformed versions on the webservers. Again,
losing those versions is not an issue, and they do not require backup.
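
(For illustration, the database side of this scheme might look roughly like
the sketch below; the table and column names are made up, and the 404
handler itself lives in the web server and is not shown:)

  -- original content plus a signature, e.g. an MD5 of the bytes;
  -- the signature doubles as the cached file name on the web servers
  CREATE TABLE images (
    signature text PRIMARY KEY,
    data      bytea NOT NULL
  );

  -- what the 404 handler would run to fault a missing file into the cache
  SELECT data FROM images WHERE signature = 'abc123';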

Maurice

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Postgres Differential backup

2007-01-08 Thread Vijayaram Boopalan - TLS , Chennai

Hi,


I do not kb=now how to take differential database backup in postgre 8.1.5.

Please help.


B. VijayaRam

Chennai.



DISCLAIMER:
---

The contents of this e-mail and any attachment(s) are confidential and intended 
for the named recipient(s) only.
It shall not attach any liability on the originator or HCL or its affiliates. 
Any views or opinions presented in 
this email are solely those of the author and may not necessarily reflect the 
opinions of HCL or its affiliates.
Any form of reproduction, dissemination, copying, disclosure, modification, 
distribution and / or publication of 
this message without the prior written consent of the author of this e-mail is 
strictly prohibited. If you have
received this email in error please delete it and notify the sender 
immediately. Before opening any mail and 
attachments please check them for viruses and defect.

---
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] TRIGGER BEFORE INSERT

2007-01-08 Thread Jerry Sievers
Rafal Pietrak [EMAIL PROTECTED] writes:

 Hi All!
 
 I have some old piece of code, that worked two years ago (Postgres
 version 7.2, I think), but doesn't work within Postgres 8.1.4 now.
 
 The story is, that I have a trigger on a table (business day
 statistics), that is fired before insert; it updates another table
 (detailed transaction log), and saves statistics from that update within
 the freshly inserted record.
 
 Cutting down much larger (and obfuscated) schema to its critical
 section, I've came with the following snippet:
 
 CREATE TABLE test_days (id serial unique, dnia date not null default
 current_date-'1day'::interval, total int not null);
 CREATE TABLE test_utarg(tm timestamp not null, nic int, amount int not
 null, dnia int references test_days(id));
 
 INSERT INTO test_utarg (tm,nic, amount) SELECT current_timestamp -
 interval_mul('1min'::interval, (random()*1)::integer),
 generate_series(1,88), (random()*1)::integer; 
 
 CREATE FUNCTION prado() RETURNS trigger AS $$ DECLARE wydano INTEGER;
 BEGIN UPDATE test_utarg SET dnia=new.id WHERE tm BETWEEN new.dnia AND
 new.dnia+'1day'::interval; GET DIAGNOSTICS wydano := ROW_COUNT;
 new.total := wydano; RETURN new; END; $$ LANGUAGE plpgsql;
 CREATE TRIGGER mocarny BEFORE INSERT ON test_days EXECUTE PROCEDURE
 prado();

Did you want a statement level trigger here?

Try adding 'for each row' to your create trigger statement above.
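
(That is, using the snippet quoted above; without FOR EACH ROW the trigger
is created at statement level, where NEW is not assigned:)

  CREATE TRIGGER mocarny BEFORE INSERT ON test_days
    FOR EACH ROW EXECUTE PROCEDURE prado();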

HTH

 INSERT INTO test_days (dnia) VALUES ('2007-01-06'); 
 ERROR:  record new is not assigned yet
 DETAIL:  The tuple structure of a not-yet-assigned record is
 indeterminate.
 CONTEXT:  PL/pgSQL function prado line 1 at SQL statement
 
 
 And to my ultimate surprise, this one breaks with yet another ERROR.
 
 In the original schema, the ERROR was abount TEST_UTARG.DNIA referencing
 a not yet available NEW.ID. ... as if constraints within transactions
 (inside trigger) were checked on each step, and not at the end of
 transaction  as it looks was the case of postgres v7.2.
 
 But the ERROR quoted abobe warries me even more. Is it true, that NEW is
 really not-yet-assigned in BEFORE INSERT trigger?? Or may be I'm not
 seeing some other obvious mistake I've done in the code above?
 
 Help, pls!
 
 -- 
 -R
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org/
 

-- 
---
Jerry Sievers   305 854-3001 (home)    Production Database Administrator
                305 321-1144 (mobile)  WWW E-Commerce Consultant

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



Re: [GENERAL] Postgres Differential backup

2007-01-08 Thread Bill Moran
In response to Vijayaram Boopalan - TLS , Chennai [EMAIL PROTECTED]:

 I do not kb=3Dnow how to take differential=
 database backup in postgre 8.1.5.

What on Earth is wrong with your MUA?

See:
http://www.postgresql.org/docs/8.1/interactive/backup-online.html

-- 
Bill Moran
Collaborative Fusion Inc.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Advice needed on using postgres in commercial product

2007-01-08 Thread Walter Vaughan

pakt sardines wrote:


Hi all,

   We're currently developing some large-scale software with the 
intention of selling it (online and off-the-shelf).  The version in 
development uses Postgres to store stuff in a number of large databases.


Ignoring potential licensing issues for now, the big issue for us is 
that the data in the databases has significant intellectual property 
value.  It has taken literally years of work to collect the data.  We do 
not want the users of the commercial product to be able to fire up 
postgres and type something like:  user]% pg_dump their_data > our_product


So you don't trust your customers.

Then the honest thing to do is not to give them the raw data. I hate the idea
of *buying* software and then being told I can't use the software I bought in any
fashion I need, rather than only in the ways the developer thinks I need.


If the data is valuable, then stop acting like it's 1994.

Do the whole thing as web service. If you really got to have something to 
install from a CD, just make it a link back to your web server.


Invest money in a quality server environment for the customer, rather than
locking the data away from your customer. If it took you years to collect this
information, make a web-enabled device the only requirement to use it...


Do you really want to deal with support questions about postgresql 
installation/support/upgrades/bugs/versions/OS's/etc... ?


--
Walter

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Sorting with DISTINCT ON

2007-01-08 Thread Nico Grubert


It does allow you to sort on both columns.  


   SELECT DISTINCT ON (path) path, comment_id, created, title
 FROM bewertungen.tblcomments ORDER BY path, created


Thank you very much. Works perfect! :-)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread Shoaib Mir

There is no default support but you can always use connectby() function from
the tablefunc contrib module
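
(A rough usage sketch, assuming a hypothetical table tree(id, parent_id);
check the tablefunc README for the exact signature and required column list:)

  -- connectby(relname, key_col, parent_key_col, start_with, max_depth)
  SELECT * FROM connectby('tree', 'id', 'parent_id', '1', 0)
              AS t(id int, parent_id int, level int);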

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/8/07, Leandro Repolho [EMAIL PROTECTED] wrote:


Hello everybody,
In Oracle i use the command connect by prior and i need to use it in
PostgreSQL, what is the sintax?



Re: [GENERAL] Postgres Differential backup

2007-01-08 Thread Alvaro Herrera
Bill Moran wrote:
 In response to Vijayaram Boopalan - TLS , Chennai [EMAIL PROTECTED]:
 
  I do not kb=3Dnow how to take differential=
  database backup in postgre 8.1.5.
 
 What on Earth is wrong with your MUA?

This:
 X-MimeOLE: Produced By Microsoft Exchange V6.5

It's fairly common that messages produced by Microsoft programs violate
recommendations etc.  For example messages from Microsoft programs are
always misplaced in threads because neither the References: nor
the In-Reply-To: headers are set.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] losing my large objects with Postgresql 8.1.4

2007-01-08 Thread Eric Davies

thank you Tom!
that did the trick!
I'm still in the dark why my test data type didn't exhibit the 
problem, but I'm certainly a much happier camper now.


Eric.
At 05:40 PM 05/01/2007, Tom Lane wrote:

Eric Davies [EMAIL PROTECTED] writes:
 Some of  my custom server functions/data types that work correctly
 under Postgresql 8.0.1 are having trouble with lost large objects
 under Postgresql 8.1.4 and Postgresql 8.1.5, but only in particular
 usages.

 When I execute the following sequence of commands:
  select   MyTypeToText( BuildMyType('asdf'));
 I get the following error
  ERROR:  large object 33016 does not exist
 \lo_list (in psql) doesn't show any new large objects.

MyTypeToText is probably referencing a start-of-statement snapshot,
in which the LO doesn't exist yet.  This is a consequence of making
read-only accesses to LOs be MVCC-compliant.  Probably your best bet
for working around it is to open the LO in read/write mode, even if
you don't intend to write it --- that will make the behavior like 8.0.

Relevant 8.1 release note:

 Read-only large object descriptors now obey MVCC snapshot semantics

 When a large object is opened with INV_READ (and not INV_WRITE),
 the data read from the descriptor will now reflect a snapshot of
 the large object's state at the time of the transaction snapshot in
 use by the query that called lo_open(). To obtain the old behavior
 of always returning the latest committed data, include INV_WRITE in
 the mode flags for lo_open().
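
(At the SQL level the same flags would look something like the sketch below;
262144 and 131072 are the usual values of INV_READ and INV_WRITE from
libpq-fs.h, but check the header shipped with your build, and the large
object id 33016 is just the one from the error message above:)

  BEGIN;
  SELECT lo_open(33016, 262144 | 131072) AS fd;  -- INV_READ | INV_WRITE
  -- ... read via loread(fd, nbytes) as before ...
  COMMIT;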


regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


**
Eric Davies, M.Sc.
Barrodale Computing Services Ltd.
Tel: (250) 472-4372 Fax: (250) 472-4373
Web: http://www.barrodale.com
Email: [EMAIL PROTECTED]
**
Mailing Address:
P.O. Box 3075 STN CSC
Victoria BC Canada V8W 3W2

Shipping Address:
Hut R, McKenzie Avenue
University of Victoria
Victoria BC Canada V8W 3W2
**




Re: [GENERAL] Postgres Differential backup

2007-01-08 Thread Chad Wagner

I have jotted down some notes on performing a hot backup (which is what Bill
is referring you to), and I included a script called pg_hotbackup that
automates the relatively simple tasks required to take a snapshot of the
cluster data directory.

http://www.postgresqlforums.com/forums/viewtopic.php?f=14&t=12

In that link there is a discussion, and a link to the Wiki article which is
located here:

http://www.postgresqlforums.com/wiki/Backup_%26_Recovery




On 1/8/07, Vijayaram Boopalan - TLS , Chennai [EMAIL PROTECTED] wrote:


 Hi,

I do not kb=now how to take differential database backup in postgre 8.1.5.
Please help.

*B. VijayaRam***
*Chennai.***

DISCLAIMER:

---

The contents of this e-mail and any attachment(s) are confidential and
intended for the named recipient(s) only.
It shall not attach any liability on the originator or HCL or its
affiliates. Any views or opinions presented in
this email are solely those of the author and may not necessarily reflect
the opinions of HCL or its affiliates.
Any form of reproduction, dissemination, copying, disclosure,
modification, distribution and / or publication of
this message without the prior written consent of the author of this
e-mail is strictly prohibited. If you have
received this email in error please delete it and notify the sender
immediately. Before opening any mail and
attachments please check them for viruses and defect.


---


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq





Re: [GENERAL] doubts

2007-01-08 Thread Alvaro Herrera
sangeetha k.s wrote:
 hello,
 
   postgresql uses bitmap indexing for indexing  in databases.
   *Is there any technique used*
 *  used to compress the bitmap indices in postgresql*,like oracle that uses
 Byte Aligned bitmap compression to compress bitmaps.

Postgres does not support bitmap indexes yet.  What it does support is
bitmap index *scanning*.  And yes, there is a lossy compression method
that is sometimes used.
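
(To see the bitmap scanning Alvaro mentions, a query along these lines, with
ordinary btree indexes on a and b, will often show a BitmapOr over two bitmap
index scans in the plan; the table and column names here are made up:)

  EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 2;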

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Scott Marlowe
On Mon, 2007-01-08 at 02:22, Andy Dale wrote:
 Hi,
 
 Sorry for the slight delay in my response.
 
 I am using 3 PostgreSQL databases and writing to them using an SQL
 proxy.  These databases have a high write volume.  On rebooting all 3
 servers for OS/Software updates, i would like to figure out which was
 the last written to DB (this is assuming the DB/Servers are not all
 taken down at the same time), the times are kept in sync with NTP. 
 
 I imagine it is possible to get this behaviour with after triggers,
 but this means i have to attach the same trigger to each table ??

H.   Still not completely clear on what exactly you're doing, but I
think I am getting an idea.

You could set each table to have a field for a timestamp, and build a
simple rule / trigger that updates it with the current time stamp for
every row as it's inserted / updated.  Then you could select
max(timestampcolumn) from a table to see which db had the latest
version.
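
(A minimal sketch of that idea; the table, column, and trigger names below
are invented for illustration:)

  ALTER TABLE mytable ADD COLUMN last_written timestamptz;

  CREATE OR REPLACE FUNCTION stamp_last_written() RETURNS trigger AS $$
  BEGIN
    NEW.last_written := now();  -- record the time of this insert/update
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER mytable_stamp
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE stamp_last_written();

  -- later, to see how current this copy of the table is:
  SELECT max(last_written) FROM mytable;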

I still think there might be a better solution to your problem.  Can we
get a more generic overview of what you're trying to do?  The 10,000-foot
view, so to speak.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Scott Marlowe
On Mon, 2007-01-08 at 03:26, Andy Dale wrote:
 Ok.
 
 The SQL Proxy i am using (HA-JDBC) has some limitations with regard to
 getting it's cluster back into sync.  If ha-jdbc uses the wrong DB
 (one that has been out of action for a while) as the starting point
 for the cluster it will then try and delete stuff from the other DB's
 on their introduction to the cluster. 
 
 I thought the easiest way to control a complete cluster restart
 would be to extract the last write date and introduce the one with the
 last write date first, this will make certain the above scenario does
 not happen. 

Sorry, I hadn't seen this post when I wrote my last one.

Yeah, I think having a timestamp column with a rule so it has the
current timestamp when written to and then selecting for the max in each
table would work out.  You could probably get fancier, but I'm guessing
that cluster startup is a pretty rare thing, so it's probably easier to
write a script that selects all the tablenames from pg_tables (???) in
your schema and checks for the highest time in each table and selects
the master from that.
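
(A rough sketch of such a script as a plpgsql function; it assumes every
user table carries the timestamp column suggested earlier, here called
last_written:)

  CREATE OR REPLACE FUNCTION newest_write() RETURNS timestamptz AS $$
  DECLARE
    t      record;
    ts     timestamptz;
    newest timestamptz := '-infinity';
  BEGIN
    FOR t IN SELECT schemaname, tablename FROM pg_tables
              WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    LOOP
      EXECUTE 'SELECT max(last_written) FROM '
              || quote_ident(t.schemaname) || '.' || quote_ident(t.tablename)
         INTO ts;
      IF ts IS NOT NULL AND ts > newest THEN
        newest := ts;
      END IF;
    END LOOP;
    RETURN newest;  -- compare this value across the candidate databases
  END;
  $$ LANGUAGE plpgsql;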

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread Scott Marlowe
On Mon, 2007-01-08 at 07:38, Leandro Repolho wrote:
 Hello everybody,
 In Oracle i use the command connect by prior and i need to use it in
 PostgreSQL, what is the sintax?

What does connect by prior do?  Reuse a connection from one db to
another?  I don't think there's anything like that in postgresql.  

You might wanna tell us what an oracle command does next time, since
many folks here aren't that familiar with Oracle.  Heck, I use oracle
everyday and I'm not familiar with connect by prior...

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread Shoaib Mir

It is used for Hierarchical queries in Oracle :)


Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/8/07, Scott Marlowe [EMAIL PROTECTED] wrote:


On Mon, 2007-01-08 at 07:38, Leandro Repolho wrote:
 Hello everybody,
 In Oracle i use the command connect by prior and i need to use it in
 PostgreSQL, what is the sintax?

What does connect by prior do?  Reuse a connection from one db to
another?  I don't think there's anything like that in postgresql.

You might wanna tell us what an oracle command does next time, since
many folks here aren't that familiar with Oracle.  Heck, I use oracle
everyday and I'm not familiar with connect by prior...

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/



Re: [GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread Thomas Kellerer

On 08.01.2007 17:24 Scott Marlowe wrote:

On Mon, 2007-01-08 at 07:38, Leandro Repolho wrote:

Hello everybody,
In Oracle i use the command connect by prior and i need to use it in
PostgreSQL, what is the sintax?


What does connect by prior do?  Reuse a connection from one db to
another?  I don't think there's anything like that in postgresql.  


You might wanna tell us what an oracle command does next time, since
many folks here aren't that familiar with Oracle.  Heck, I use oracle
everyday and I'm not familiar with connect by prior...


It's used to retrieve hierarchical data e.g. a parent/child 
relationship. The connect by operator defines the columns which define 
the hierarchy.


Thomas


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] More activity in pg_stat_activity

2007-01-08 Thread Erik Jones

Tom Lane wrote:

Erik Jones [EMAIL PROTECTED] writes:
  
Before migrating to 8.2, even during peak times, unless queries were 
seriously stacking (not completing in a timely manner), we'd see at most 
50 - 100 queries active at any given time (we did have 
stats_command_string = on).  Since the migration, during peak times it's 
not uncommon to see the query count vary radically between around 80 and 
the upper 400s (we have max_connections set to 512).  This variation is 
per second and when the count is high, the vast majority listed are 
sub-second.  It would seem that this is due to some fine tuning 
somewhere on th backside of 8.2 versus previous versions.  Was there 
previously a more limited precision to the query lengths that would  be 
picked up by the pg_stat_activity view? 



Hmm ... 8.2 has a completely rewritten pg_stat_activity implementation,
but I'm not sure why it would show more active queries.  The old code
essentially showed you snapshots taken every half second, whereas the
new code gives you The Truth as of the start of your transaction.
So it should be more up-to-date but I'm not seeing why the old method
would have capped the number of active entries to less than what you see
now.  Maybe there was some weird scheduling interaction before?  (As in,
the stats collector was more likely to be able to run and push out its
snapshot when there were fewer active backends?)  Or maybe the stats
subsystem is just reflecting reality, and some other change(s) elsewhere
in 8.2 allow it to achieve higher concurrency than you got before.
I dunno, but it would be worth looking closer to try to figure out
what the story is.  Do you have numbers from before from other
monitoring tools such as vmstat, that you could compare to 8.2?
  
Alas, no, we don't.  From a purely interrogative standpoint, I wish we
did, as understanding these things is never less than worthwhile.
Luckily, from our production standpoint, we always knew before that we
weren't seeing the whole picture when we queried pg_stat_activity: when
we clocked the number of transactions we do in an hour at over three
million (three months ago), we realized that the vast majority of
queries run through the system weren't registering, which we assumed was
due to them completing virtually instantaneously.  Back then, the only
time the active query count would come anywhere near our max connection
setting was when queries were stacking, or taking forever to complete.
I must say that we had an initial freak-out point when we saw those
numbers climbing and jumping around, until we saw that the vast majority
were running in sub-second time.


One question regarding my previous thread about the 8.2 client tools.  
We have yet to have time (personal as well as usage pattern constraints) 
to dump our schema to analyze it for any possible discrepancies and 
clock schema queries.  Is there any reason we couldn't use the 8.1 
pg_dump facility until such a time as we can figure out our issues with 
the 8.2 pg_dump client (and psql, etc...)?


--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Erik Jones

Scott Marlowe wrote:

On Mon, 2007-01-08 at 03:26, Andy Dale wrote:
  

Ok.

The SQL Proxy i am using (HA-JDBC) has some limitations with regard to
getting it's cluster back into sync.  If ha-jdbc uses the wrong DB
(one that has been out of action for a while) as the starting point
for the cluster it will then try and delete stuff from the other DB's
on their introduction to the cluster. 


I thought the easiest way to control a complete cluster restart
would be to extract the last write date and introduce the one with the
last write date first, this will make certain the above scenario does
not happen. 



Sorry, I hadn't seen this post when I wrote my lost one.

Yeah, I think having a timestamp column with a rule so it has the
current timestamp when written to and then selecting for the max in each
table would work out.  You could probably get fancier, but I'm guessing
that cluster startup is a pretty rare thing, so it's probably easier to
write a script that selects all the tablenames from pg_tables (???)

pg_class


--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Andy Dale

Hi,

I am still not so certain about adding a timestamp column to each table, as
within a few months the tables will be quite big.  My current thinking is to
have a trigger per table that overwrites a single value in a single utility
table after every write; this should be far quicker to select from when working
with large tables.  HA-JDBC does not care about the individual tables' last
write date/time, as it is currently not sophisticated enough to do per-table
sync with different masters per table; it just loops through each table in
the first DB activated, treating it as the most up to date. So HA-JDBC
(and I) just need the last write time of the database as a whole, as this will
hopefully mean it is the most up to date.
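
(Roughly what that could look like; the table, trigger, and function names
here are invented for the example, and with a high write volume the single
row will be updated very frequently:)

  -- one-row utility table holding the database-wide last write time
  CREATE TABLE last_write (ts timestamptz NOT NULL);
  INSERT INTO last_write VALUES (now());

  CREATE OR REPLACE FUNCTION note_last_write() RETURNS trigger AS $$
  BEGIN
    UPDATE last_write SET ts = now();  -- overwrite the single row
    RETURN NULL;  -- return value is ignored for AFTER statement triggers
  END;
  $$ LANGUAGE plpgsql;

  -- attach one of these to every table, e.g.:
  CREATE TRIGGER some_table_note_write
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH STATEMENT EXECUTE PROCEDURE note_last_write();

  -- at cluster restart, compare across the three databases:
  SELECT ts FROM last_write;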

Andy

On 08/01/07, Scott Marlowe [EMAIL PROTECTED] wrote:


On Mon, 2007-01-08 at 03:26, Andy Dale wrote:
 Ok.

 The SQL Proxy i am using (HA-JDBC) has some limitations with regard to
 getting it's cluster back into sync.  If ha-jdbc uses the wrong DB
 (one that has been out of action for a while) as the starting point
 for the cluster it will then try and delete stuff from the other DB's
 on their introduction to the cluster.

 I thought the easiest way to control a complete cluster restart
 would be to extract the last write date and introduce the one with the
 last write date first, this will make certain the above scenario does
 not happen.

Sorry, I hadn't seen this post when I wrote my lost one.

Yeah, I think having a timestamp column with a rule so it has the
current timestamp when written to and then selecting for the max in each
table would work out.  You could probably get fancier, but I'm guessing
that cluster startup is a pretty rare thing, so it's probably easier to
write a script that selects all the tablenames from pg_tables (???) in
your schema and checks for the highest time in each table and selects
the master from that.



[GENERAL] Sorting

2007-01-08 Thread Bart McFarling
I have a column that is a varchar(6) I need to sort it by the rows that
are integers 1st then the character ones or vice versa, I just need the
values that can be converted to integer to sort by their numeric value.
 
i.e
1, 2, 3, 4, 5, 10, 11, A, B, C
instead of
1, 10, 11, 2, 3, 4, 5, A, B, C
 
Any suggestions?


Re: [GENERAL] More activity in pg_stat_activity

2007-01-08 Thread Thomas F. O'Connell


On Jan 8, 2007, at 10:32 AM, Erik Jones wrote:


Erik Jones [EMAIL PROTECTED] writes:

One question regarding my previous thread about the 8.2 client  
tools.  We have yet to have time (personal as well as usage pattern  
constraints) to dump our schema to analyze it for any possible  
discrepencies and clock schema queries.  Is there any reason we  
couldn't use the 8.1 pg_dump facility until such a time as we can  
figure out our issues with the 8.2 pg_dump client (and psql, etc...)?


If I recall correctly, older pg_dump clients won't work at all with  
newer postmasters. In fact, I think it will error out.


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Discovering time of last database write

2007-01-08 Thread Michael Nolan

On several occasions I have thought that each row in a table should have a
SYSTEM COLUMN which gave the timestamp of the last update of that row.  This
could get a bit expensive on space and in some cases might be redundant with
(or have a slightly different value from) a user-maintained timestamp field.

I have also thought that each table should have values for:

Last DDL
Last Insert
Last Update
Last Delete
--
Mike Nolan


Re: [GENERAL] More activity in pg_stat_activity

2007-01-08 Thread Erik Jones
Ah, I'd been looking at the following from Ch. 23.5 Backup and Restore 
and was hoping it would go in both directions:


It is recommended that you use the pg_dump and pg_dumpall programs from 
the newer version of PostgreSQL, to take advantage of any enhancements 
that may have been made in these programs. Current releases of the dump 
programs can read data from any server version back to 7.0.


Thomas F. O'Connell wrote:


On Jan 8, 2007, at 10:32 AM, Erik Jones wrote:


Erik Jones [EMAIL PROTECTED] writes:

One question regarding my previous thread about the 8.2 client 
tools.  We have yet to have time (personal as well as usage pattern 
constraints) to dump our schema to analyze it for any possible 
discrepencies and clock schema queries.  Is there any reason we 
couldn't use the 8.1 pg_dump facility until such a time as we can 
figure out our issues with the 8.2 pg_dump client (and psql, etc...)?


If I recall correctly, older pg_dump clients won't work at all with 
newer postmasters. In fact, I think it will error out.


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/



--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Sorting

2007-01-08 Thread A. Kretschmer
On Mon, 08.01.2007 at 10:21:38 -0600, Bart McFarling wrote the following:
 I have a column that is a varchar(6) I need to sort it by the rows that are
 integers 1st then the character ones or vice versa, I just need the values 
 that
 can be converted to integer to sort by their numeric value.
  
 i.e
 1, 2, 3, 4, 5, 10, 11, A, B, C
 instead of
 1, 10, 11, 2, 3, 4, 5, A, B, C
  
 Any suggestions?

perhaps something like this:

test=*# select * from foo;
 w
----
 10
 1
 A
 3
 C
(5 rows)

Time: 1.349 ms
test=*# select w, case when w ~ '^[0-9]*$' then w::int else 1 end from foo 
order by 2,1;
 w  | case
----+------
 1  | 1
 3  | 3
 10 |10
 A  | 1
 C  | 1
(5 rows)


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] More activity in pg_stat_activity

2007-01-08 Thread Tom Lane
Thomas F. O'Connell [EMAIL PROTECTED] writes:
 On Jan 8, 2007, at 10:32 AM, Erik Jones wrote:
 One question regarding my previous thread about the 8.2 client  
 tools.  We have yet to have time (personal as well as usage pattern  
 constraints) to dump our schema to analyze it for any possible  
 discrepencies and clock schema queries.  Is there any reason we  
 couldn't use the 8.1 pg_dump facility until such a time as we can  
 figure out our issues with the 8.2 pg_dump client (and psql, etc...)?

 If I recall correctly, older pg_dump clients won't work at all with  
 newer postmasters. In fact, I think it will error out.

Yeah, I wouldn't recommend trying.  Instead, update to 8.2.1 (released
today) and see if it doesn't fix the problem.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread John Sidney-Woollett
Take a look at contrib/tablefunc; there is a function called connectby()
that should do what you're looking for.


Otherwise roll your own...

http://archives.postgresql.org/pgsql-sql/2003-10/msg00374.php

John

Thomas Kellerer wrote:

On 08.01.2007 17:24 Scott Marlowe wrote:

On Mon, 2007-01-08 at 07:38, Leandro Repolho wrote:

Hello everybody,
In Oracle i use the command connect by prior and i need to use it in
PostgreSQL, what is the sintax?


What does connect by prior do?  Reuse a connection from one db to
another?  I don't think there's anything like that in postgresql. 
You might wanna tell us what an oracle command does next time, since

many folks here aren't that familiar with Oracle.  Heck, I use oracle
everyday and I'm not familiar with connect by prior...


It's used to retrieve hierarchical data e.g. a parent/child 
relationship. The connect by operator defines the columns which define 
the hierarchy.


Thomas


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Sorting

2007-01-08 Thread Ragnar
On Mon, 2007-01-08 at 17:59 +0100, A. Kretschmer wrote:
 On Mon, 08.01.2007 at 10:21:38 -0600, Bart McFarling wrote the following:
  I have a column that is a varchar(6) I need to sort it by the rows that are
  integers 1st then the character ones or vice versa, I just need the values 
  that
  can be converted to integer to sort by their numeric value.
   
  i.e
  1, 2, 3, 4, 5, 10, 11, A, B, C
  instead of
  1, 10, 11, 2, 3, 4, 5, A, B, C
   
  Any suggestions?
 
 perhaps something like this:
 
 test=*# select * from foo;
  w
 
  10
  1
  A
  3
  C
 (5 rows)
 
 Time: 1.349 ms
 test=*# select w, case when w ~ '^[0-9]*$' then w::int else 1 end from 
 foo order by 2,1;

possible improvements:
  a) w ~ '^[0-9]+$'
  b) use NULL instead of 1
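
A minimal sketch of that variant, assuming the same foo table as above; with NULL as the
fallback the non-numeric rows sort after all the numeric ones, because an ascending sort
puts NULLs last by default:

  SELECT w,
         CASE WHEN w ~ '^[0-9]+$' THEN w::int END AS numkey
    FROM foo
   ORDER BY 2, 1;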


gnari



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Sorting

2007-01-08 Thread Andreas Kretschmer
Ragnar [EMAIL PROTECTED] schrieb:
  test=*# select w, case when w ~ '^[0-9]*$' then w::int else 1 end from 
  foo order by 2,1;
 
 possible improvements:
   a) w ~ '^[0-9]+$'
   b) use NULL instead of 1

Thanks, right.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Sorting

2007-01-08 Thread Bart McFarling
Thanks, 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Andreas 
Kretschmer
Sent: Monday, January 08, 2007 11:19 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Sorting

Ragnar [EMAIL PROTECTED] schrieb:
  test=*# select w, case when w ~ '^[0-9]*$' then w::int else 1 
  end from foo order by 2,1;
 
 possible improvements:
   a) w ~ '^[0-9]+$'
   b) use NULL instead of 1

Thanks, right.


Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread guillermo arias
is there a tracking trace tool in postgre? like the "SQL Analizer" in MS sqlserver.

I have downloaded the PGAdmin III and i have not found any tool like this.

Thanks

Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
Join Linux Discussions! -- http://Community.LinuxWaves.com


Re: [GENERAL] Autovacuum Improvements

2007-01-08 Thread Csaba Nagy
On Sun, 2006-12-24 at 03:03, Christopher Browne wrote:
[snip]
 Seems to me that you could get ~80% of the way by having the simplest
 2 queue implementation, where tables with size < some threshold get
 thrown at the little table queue, and tables above that size go to
 the big table queue.

That would most definitely not cut it for me, I have more than 2
categories of tables:

 - a few small but very often updated/inserted/deleted tables: these must
be continuously vacuumed, your little queue is not good enough for
that, as even the round trip between the small tables could lead to
bloat on them;

 - a few small and moderately updated, that could live with the little
queue;

 - a few big and frequently updated, but which only have a small
percentage of rows actively updated at any time: those could live with
the big queue;

 - the rest which are rarely updated, I would put those in a separate
queue so they won't affect the rest, cause vacuuming them is really
mostly not critical;

The point is that I'm not sure there couldn't be even more reasons to
split the tables in even more queues based on the importance of
vacuuming them combined with update rate and their size. If I can set up
my own queues I can experiment with what works best for me... for the
base setup you could set up some default queues. I wonder though how
would you handle dynamics of tables, I mean when will a small table
which grows start to be considered a big table for the purpose of
putting it in one queue or the other ? I guess it would be done on
analyzing the table, which is also handled by autovacuum, so tables with
no vacuum queue settings could go to one of the 2 default queues you
mention.

 That should keep any small tables from getting vacuum-starved.
 
 I'd think the next step would be to increase the number of queues,
 perhaps in a time-based fashion.  There might be times when it's
 acceptable to vacuum 5 tables at once, so you burn thru little tables
 like the blazes, and handle larger ones fairly promptly.  And other
 times when you don't want to do *any* big tables, and limit a single
 queue to just the itty bitty ones.

This is all nice and it would be cool if you could set it up per vacuum
queue. I mean how much more effort would it be to allow vacuum queues with
generic settings like time windows with max number of threads for each
window, and let the user explicitly assign tables to those queues,
instead of hard coding the queues and their settings and assign tables
to them based on size or any other heuristics ?

For the average application which needs simple settings, there could be
a default setup with the 2 queues you mention. If it would be possible
to set up some rules to assign tables to queues based on their
properties on analyze time, instead of explicitly assigning to one queue
or other, that would be nice too, and then you can completely cover the
default setup with those settings, and allow for more complex setups for
those who need it.

 This approach allows you to stay mostly heuristic-based, as opposed to
 having to describe policies in gratuitous detail.

I agree that for simple setups that would be OK, but like I said, if it
would be easy enough to code that heuristics, and provide some sane
setup as default, and then let the user optimize it, that would be a
cool solution.

Now it's true I don't really know how would you code 'assign all tables
which are smaller than x rows to vacuum queue little-tables' ... maybe
by providing a function to the queue which matches on the table ? And
you can change that function ? No idea, but it probably can be done...

 Having a mechanism that requires enormous DBA effort and where there
 is considerable risk of simple configuration errors that will be hard
 to notice may not be the best kind of feature :-).

I think most people will not want to touch the default settings unless
it will not work good enough for them. I definitely not like too much
that I had to set up some cron jobs beside autovacuum, as they are most
definitely not doing optimal job, but autovacuum was not doing that
either, and I'm afraid a 2-queue system would also not do it at least
for the queue-like tables I have, which must be vacuumed continuously,
but only if they need it... that's what I expect from autovacuum, to
vacuum all tables in the proper periodicity/time window for each of
them, but only if they need it... and I can imagine way more such
periodicity/time window settings than 2. Now if autovacuum could figure
out on itself all those settings, that would be even cooler, but if I
can set it up myself that would be good enough.

Actually I think all vacuum patterns could be automatically figured out
by looking at the statistics AND the dynamics of those statistics (i.e.
it changes in bursts, or steadily increasing over time, etc.), and
possibly also the read access statistics (there's no big reward in too
frequently vacuuming a table which is only inserted and deleted and
rarely read), and 

[GENERAL] Slony in Windows installer?

2007-01-08 Thread Raymond O'Donnell
Hello all,

When installing PostgreSQL via the Windows installer, Slony-I is one 
of the options offered. Does it install the Slony binaries, or 
just the SQL scripts/functions that Slony uses?

Thanks,

Ray.


--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Slony in Windows installer?

2007-01-08 Thread Shoaib Mir

It does install both the Slony binaries and scripts.


Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/8/07, Raymond O'Donnell [EMAIL PROTECTED] wrote:


Hello all,

When installing PostgreSQL via the Windows installer, Slony-I is one
of the options offered. Does it install the Slony binaries, or
just the SQL scripts/functions that Slony uses?

Thanks,

Ray.


--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Ian Harding

There is no GUI tool that I know of, but there is EXPLAIN which gives
the same information.
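
For example (the table and column here are purely illustrative):

  -- EXPLAIN shows the plan; EXPLAIN ANALYZE also runs the query and reports actual times
  EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 42;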

- Ian

On 1/8/07, guillermo arias [EMAIL PROTECTED] wrote:

is there a tracking trace tool in postgre? like the SQL Analizer in MS
sqlserver.

I have downloaded the PGAdmin III and i have not found any tool like this.

Thanks



Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
Join Linux Discussions! -- http://Community.LinuxWaves.com


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread Guy Rouillier

Leandro Repolho wrote:

Hello everybody,
In Oracle i use the command connect by prior and i need to use it in 
PostgreSQL, what is the sintax?


The last time I saw this discussed, Tom Lane referred to a pg-hier
modification but said this: "Beware that it makes an
incompatible change in rule representation, which means you must initdb
when installing or removing it."  I was unable to find pg-hier in a
quick cruise through PgFoundry.

--
Guy Rouillier



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Jeffrey Melloy

On 1/8/07, Jeffrey Melloy [EMAIL PROTECTED] wrote:


Not exactly.  SQL Analyzer also includes live monitoring of whatever
queries are coming into the database.  You can achieve something similar by
enabling query logging in the settings.

On 1/8/07, Ian Harding [EMAIL PROTECTED] wrote:

 There is no GUI tool that I know of, but there is EXPLAIN which gives
 the same information.

 - Ian

 On 1/8/07, guillermo arias [EMAIL PROTECTED]  wrote:
  is there a tracking trace tool in postgre? like the SQL Analizer in
 MS
  sqlserver.
 
  I have downloaded the PGAdmin III and i have not found any tool like
 this.
 
  Thanks
 
 
  
  Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
  Join Linux Discussions! -- http://Community.LinuxWaves.com

 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match





Re: [GENERAL] is there a tracking trace tool like the SQL

2007-01-08 Thread Joshua D. Drake
On Mon, 2007-01-08 at 12:56 -0600, Jeffrey Melloy wrote:
 
 
 On 1/8/07, Jeffrey Melloy [EMAIL PROTECTED] wrote:
 Not exactly.  SQL Analyzer also includes live monitoring of
 whatever queries are coming into the database.  You can
 achieve something similar by enabling query logging in the
 settings.

pg_stat_activity?
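
For instance (column names as in the 8.1/8.2 view; current_query is only populated when
stats_command_string is enabled):

  -- one row per backend: who is connected and what each one is currently running
  SELECT procpid, usename, current_query, query_start
    FROM pg_stat_activity;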

 
 
 On 1/8/07, Ian Harding [EMAIL PROTECTED] wrote:
 There is no GUI tool that I know of, but there is
 EXPLAIN which gives
 the same information.
 
 - Ian
 
 On 1/8/07, guillermo arias
 [EMAIL PROTECTED] wrote:
  is there a tracking trace tool in postgre? like the
 SQL Analizer in MS
  sqlserver.
 
  I have downloaded the PGAdmin III and i have not
 found any tool like this.
  
  Thanks
 
 
  
  Get your FREE, LinuxWaves.com Email Now! --
 http://www.LinuxWaves.com
  Join Linux Discussions! --
 http://Community.LinuxWaves.com
 
 ---(end of
 broadcast)--- 
 TIP 9: In versions below 8.0, the planner will ignore
 your desire to 
choose an index scan if your joining column's
 datatypes do not
match
 
 
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] is there a tracking trace tool like the SQL Analizer

2007-01-08 Thread Tony Caduto

guillermo arias wrote:
is there a tracking trace tool in postgre? like the SQL Analizer in 
MS sqlserver.


I have downloaded the PGAdmin III and i have not found any tool like this.

Thanks

 


Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
Join Linux Discussions! -- http://Community.LinuxWaves.com

Hi,
You can do this with Postgresql by turning on statement logging in the 
postgresql.conf file.
You can then see in the log all the statements that the server executed 
and the values that were passed.
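
A minimal sketch of the relevant settings; these are standard GUCs, shown here as
superuser-only per-session SETs, and the same names can be set in postgresql.conf:

  SET log_statement = 'all';              -- log every statement as it arrives
  SET log_min_duration_statement = 0;     -- log each completed statement with its duration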


You could also check out Lightning Admin instead of pgAdmin III. 
We have a feature that can capture the currently running statement from 
a process and list them all in a memo.

Can be handy.
You can find out more here:
http://www.amsoftwaredesign.com/onlinehelp/pgla/server_status.htm
While not free, Lightning Admin is SUPER inexpensive :-) and you will 
get great support.


--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Ian Harding

I thought that was called SQL Profiler.

http://msdn2.microsoft.com/en-us/library/ms181091.aspx

Query Analyzer is EXPLAIN with a GUI.

http://msdn2.microsoft.com/en-gb/library/aa178423(SQL.80).aspx

Anyway, I have not heard of such a thing for PostgreSQL, although I am
sure the basic information you want could be obtained from logging
queries and timing.

- Ian

On 1/8/07, Jeffrey Melloy [EMAIL PROTECTED] wrote:

Not exactly.  SQL Analyzer also includes live monitoring of whatever queries
are coming into the database.  You can achieve something similar by enabling
query logging in the settings.


 On 1/8/07, Ian Harding [EMAIL PROTECTED] wrote:

 There is no GUI tool that I know of, but there is EXPLAIN which gives
 the same information.

 - Ian

 On 1/8/07, guillermo arias [EMAIL PROTECTED]  wrote:
  is there a tracking trace tool in postgre? like the SQL Analizer in MS
  sqlserver.
 
  I have downloaded the PGAdmin III and i have not found any tool like
this.
 
  Thanks
 
 
  
  Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
  Join Linux Discussions! -- http://Community.LinuxWaves.com

 ---(end of
broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Jeffrey Melloy

Whoops, you're right.

On 1/8/07, Ian Harding [EMAIL PROTECTED] wrote:


I thought that was called SQL Profiler.

http://msdn2.microsoft.com/en-us/library/ms181091.aspx

Query Analyzer is EXPLAIN with a GUI.

http://msdn2.microsoft.com/en-gb/library/aa178423(SQL.80).aspx

Anyway, I have not heard of such a thing for PostgreSQL, although I am
sure the basic information you want could be obtained from logging
queries and timing.

- Ian

On 1/8/07, Jeffrey Melloy [EMAIL PROTECTED] wrote:
 Not exactly.  SQL Analyzer also includes live monitoring of whatever
queries
 are coming into the database.  You can achieve something similar by
enabling
 query logging in the settings.


  On 1/8/07, Ian Harding [EMAIL PROTECTED] wrote:
 
  There is no GUI tool that I know of, but there is EXPLAIN which gives
  the same information.
 
  - Ian
 
  On 1/8/07, guillermo arias [EMAIL PROTECTED]  wrote:
   is there a tracking trace tool in postgre? like the SQL Analizer
in MS
   sqlserver.
  
   I have downloaded the PGAdmin III and i have not found any tool like
 this.
  
   Thanks
  
  
   
   Get your FREE, LinuxWaves.com Email Now! --
http://www.LinuxWaves.com
   Join Linux Discussions! -- http://Community.LinuxWaves.com
 
  ---(end of
 broadcast)---
  TIP 9: In versions below 8.0, the planner will ignore your desire to
 choose an index scan if your joining column's datatypes do not
 match
 





Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Ian Harding

On 1/8/07, Joshua D. Drake [EMAIL PROTECTED] wrote:

On Mon, 2007-01-08 at 12:56 -0600, Jeffrey Melloy wrote:


 On 1/8/07, Jeffrey Melloy [EMAIL PROTECTED] wrote:
 Not exactly.  SQL Analyzer also includes live monitoring of
 whatever queries are coming into the database.  You can
 achieve something similar by enabling query logging in the
 settings.

pg_stat_activity?



The SQL Profiler tool also keeps data that meet your criteria for
later analysis as well.  It is very good.

- Ian

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Advice needed on using postgres in commercial product

2007-01-08 Thread Ron Mayer
pakt sardines wrote:
 ...the big issue for us is
 that the data in the databases has significant intellectual property
 value.  It has taken literally years of work to collect the data.  We do
 not want the users of the commercial product to be able to fire up
 postgres and type something like:  user]% pg_dump their_data > our_product

That seems more like a legal question than a technical one.

The first thing that comes to mind is a lawyer to review your license
agreements, contracts, and NDAs with your customers.  Perhaps a contract
giving you rights to audit their facilities in the extreme cases.

 Additionally, we don't want to have to encrypt every entry in the
 database, because that will cause significant overhead during processing.

That's unlikely to work anyway. Organizations protecting valuable data
using technical approaches (DVDs, etc) find it gets out anyway.
Since you'll ship a client that can decrypt the data anyway, anyone with
a debugger could decrypt it (unless you only want it to run on Trusted
computing platform / palladium computers).

 My question is, what options do we have? 

I'd say that many of the more successful companies that sell products
with valuable data (geospatial data vendors; market research
companies) use the legal options rather than the technical ones.

 Can postgres store data as
 some sort of unreadable binary, much like you would find in a C binary
 data file?

Huh?

 If not postgres, what other database could possibly do this, if any?  I
 really don't want to have to write our own RDBMS.  :)

Doesn't seem much like a database question.  I'd say ask on some trusted
computing (google Trusted Computing) list if you want a technical
solution or a lawyer if you want a legal one.

 Thanks in advance for any advice or suggestions on this subject,

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL) 8.1.4

2007-01-08 Thread Oisin Glynn

I am running postgres on Windows XP and have been for quite a while as

the database engine for our application.

On an ODBC connection the following error has started happening. The gap 
in activity is normal for our application, but the connection failing is 
a first; we have this application live in approx 5 locations, some for as 
long as 12 months (though only 2 live are on XP).


We did enable the system to check for Windows updates automatically 
recently not sure if this could be a factor.
There is another one of our apps connecting through the C++ API which is 
not seeing any loss of service.  I suspect some sort of tcp-ip issue as 
I cannot connect with PgAdminIII locally on the server once the 
condition occurs. I have had this 3 times now and it seems to be 
happening every 2-3 hours of operation. I am not getting any eventvwr or 
other Windows errors telling me I am hitting a limit or such?  And when 
I restarted the Postgresql service I got the following as part of my 
log(See attached logs)


2007-01-05 14:38:28 LOG:  pgpipe failed to bind: 10055i
2007-01-05 14:38:28 FATAL:  could not create pipe for statistics buffer: 
No error


I am attaching my postgresql.conf (perhaps I can turn on some more 
logging to make things easier to spot) and the log files I have from the 
first occurrence. These show my attempts to restart the postgresql 
service and the errors I received.


This is a snippet of my application's log file showing the error popping up at 
14:11:07


01/05/07 13:58:57.453 2296 DEBUG  - Call No: 10131  SELECT   SQL = 
SELECT * FROM zfunc_access_mailbox_cleanup(10131)
01/05/07 13:58:57.468 2296 DEBUG  - Call No: 10131  SELECT Complete   
Return Code = $VC1
01/05/07 13:58:57.468 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131

01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT Free 
Statement Handle

01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131DISCONNECT OK Voicemail
01/05/07 13:58:57.484 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131

01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Allocate DBC 
Handle Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Connect 
Voicemail
01/05/07 14:11:07.750 2296 DEBUG  - Call No: 10132CONNECT SQLConnect 
failed   08001  could not connect to server: No buffer space available 
(0x2747/10055)

   Is the server running on host 127.0.0.1 and accepting
   TCP/IP connections on port 5432?
Error Message = 165Voicemail

If there is anything I can do to generate better logging I am all ears,

Oisin





# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have 
# to SIGHUP the postmaster for the changes to take effect, or use 
# pg_ctl reload. Some settings, such as listen_addresses, require
# a postmaster shutdown and restart to take effect.


#---
# FILE LOCATIONS
#---

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'   # use data in another directory
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file

# If external_pid_file is not explicitly set, no extra pid file is written.
#external_pid_file = '(none)'   # write an extra pid file


#---
# CONNECTIONS AND AUTHENTICATION
#---

# - Connection Settings -

#listen_addresses = 'localhost' # what IP address(es) to listen on; 
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
port = 5432
max_connections = 100
# note: increasing 

Re: [GENERAL] Command connect by prior in PostgreSQL ?

2007-01-08 Thread Matt Miller
  In Oracle i use the command connect by prior and i need to use it in 
  PostgreSQL, what is the sintax?
 
 I was unable to find pg-hier in a quick cruise through PgFoundry.

Try http://gppl.moonbone.ru/

The patch is not yet updated for 8.2.0; 8.1.2 looks to be the latest.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Harald Armin Massa

Ian,


Query Analyzer is EXPLAIN with a GUI.
Anyway, I have not heard of such a thing for PostgreSQL,


On my installation of PostgreSQL from the stock windows installer
there was installed PgAdmin, which has quite a GUI for EXPLAIN.

The query plan is displayed very nicely and I find it easier to
understand than the text output.

Harald


--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Advice needed on using postgres in commercial product

2007-01-08 Thread David Fetter
On Sat, Jan 06, 2007 at 11:14:42AM +1100, pakt sardines wrote:
 
Hi all,
 We're currently developing some large-scale software with the
 intention of selling it (online and off-the-shelf).  The version in
 development uses Postgres to store stuff in a number of large
 databases.  Ignoring potential licensing issues for now,

There aren't any.  PostgreSQL is under the 3-clause BSD license.

 the big issue for us is that the data in the databases has
 significant intellectual property value.  It has taken literally
 years of work to collect the data.  We do not want the users of the
 commercial product to be able to fire up postgres and type something
 like: user]% pg_dump their_data > our_product

If you don't want your customers to see it, don't ship it to your
customers.  That's true whether you use PostgreSQL or not.

 Additionally, we don't want to have to encrypt every entry in the
 database, because that will cause significant overhead during
 processing.  My question is, what options do we have?  Can postgres
 store data as some sort of unreadable binary, much like you would
 find in a C binary data file?  If not postgres, what other database
 could possibly do this, if any?

None, no matter what they purport, and you won't be able to build one
either.  You should also consider carefully what it is you're telling
your customers by starting off with an adversary relationship.  If
your data is that secret and that valuable, you should not ship it in
the first place.

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Slony in Windows installer?

2007-01-08 Thread Dave Page


 --- Original Message ---
 From: Raymond O'Donnell [EMAIL PROTECTED]
 To: pgsql-general@postgresql.org
 Sent: 1/8/07, 5:57:59 PM
 Subject: [GENERAL] Slony in Windows installer?
 
 Hello all,
 
 When installing PostgreSQL via the Windows installer, Slony-I is one 
 of the options offered. Does it install the Slony binaries, or 
 just the SQL scripts/functions that Slony uses?

Binaries & scripts.

Regards, Dave


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] is there a tracking trace tool like the SQL Analizer

2007-01-08 Thread Dave Page


 --- Original Message ---
 From: Ian Harding [EMAIL PROTECTED]
 To: Jeffrey Melloy [EMAIL PROTECTED]
 Sent: 1/8/07, 7:06:31 PM
 Subject: Re: [GENERAL] is there a tracking trace tool like the SQL Analizer 
 in MS sqlserver.?
 
 I thought that was called SQL Profiler.
 
 http://msdn2.microsoft.com/en-us/library/ms181091.aspx
 
 Query Analyzer is EXPLAIN with a GUI.
 
 http://msdn2.microsoft.com/en-gb/library/aa178423(SQL.80).aspx
 
 Anyway, I have not heard of such a thing for PostgreSQL, although I am
 sure the basic information you want could be obtained from logging
 queries and timing.

pgAdmin has graphical explain, and basic activity monitoring. It is free, and 
has great support!

Regards, Dave

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL) 8.1.4

2007-01-08 Thread Shelby Cain
I'm just throwing this out here... but the usual suspects for strange 
connection issues like these would be a software firewall or antivirus 
(especially AV software that has anti-worm features).

Also, Windows XP Pro isn't really viable as a server OS as Microsoft has 
intentionally limited certain features of the TCP/IP stack (e.g.: low limits 
for the TCP connection backlog).  In theory, if you had a flood of incoming 
connections only some of them would be serviced correctly before the backlog 
limit was reached.

Regards,

Shelby Cain

- Original Message 
From: Oisin Glynn [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Monday, January 8, 2007 1:33:54 PM
Subject: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL) 8.1.4

I am running postgres on Windows XP and have been for quite a while as

the database engine for our application.

On an ODBC connection the following error has started happening. The gap 
in activity is normal for our application. but the connection failing is 
a first we have this application live in approx 5 locations some for as 
long as 12 months (though only 2 live are on XP)

We did enable the system to check for Windows updates automatically 
recently not sure if this could be a factor.
There is another one of our apps connecting through the C++ API which is 
not seeing any loss of service.  I suspect some sort of tcp-ip issue as 
I cannot connect with PgAdminIII locally on the server once the 
condition occurs. I have had this 3 times now and it seems to be 
happening very 2-3hours of operation. I am not getting any eventvwr or 
other Windows errors telling me I am hitting a limit or such?  And when 
I restarted the Postgresql service I got the following as part of my 
log(See attached logs)

2007-01-05 14:38:28 LOG:  pgpipe failed to bind: 10055i
2007-01-05 14:38:28 FATAL:  could not create pipe for statistics buffer: 
No error

I am attaching my postgresql.conf (perhaps I can turn on some more 
logging to make things easier to spot) and the log files I have from the 
first occurrence. These show my attempts to restart the postgresql 
service and the errors I received.

This is a snipit of my applications log file showing the error pop up at 
14:11:07

01/05/07 13:58:57.453 2296 DEBUG  - Call No: 10131  SELECT   SQL = 
SELECT * FROM zfunc_access_mailbox_cleanup(10131)
01/05/07 13:58:57.468 2296 DEBUG  - Call No: 10131  SELECT Complete   
Return Code = $VC1
01/05/07 13:58:57.468 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT Free 
Statement Handle
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131DISCONNECT OK Voicemail
01/05/07 13:58:57.484 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Allocate DBC 
Handle Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Connect 
Voicemail
01/05/07 14:11:07.750 2296 DEBUG  - Call No: 10132CONNECT SQLConnect 
failed   08001  could not connect to server: No buffer space available 
(0x2747/10055)
Is the server running on host 127.0.0.1 and accepting
TCP/IP connections on port 5432?
 Error Message = 165Voicemail

If there is anything I can do to generate better logging I am all ears,

Oisin






# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have 
# to SIGHUP the postmaster for the changes to take effect, or use 
# pg_ctl reload. Some settings, such as listen_addresses, require
# a postmaster shutdown and restart to take effect.


#---
# FILE LOCATIONS
#---

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'# use data in another directory
#hba_file = 'ConfigDir/pg_hba.conf' 

Re: [GENERAL] GUI tool that can reverse engineering schemas

2007-01-08 Thread Lenorovitz, Joel
I've been using a product called HappyFish, which does reverse
engineering on Postgres and has proven to be a great DB development
tool.  While it's not free, it is very low cost and you can easily get a
full-featured evaluation version to try out.  I've been running it
through its paces with a pretty complex Postgres project and I'm really
pleased.  The product is maturing and getting more capable all the time
and responsiveness on the part of the development team is excellent.  Check
it out here:

http://www.polderij.nl/happyfish/


-Original Message-
From: nyenyec [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 04, 2007 6:26 PM
To: pgsql-general@postgresql.org
Subject: GUI tool that can reverse engineering schemas

Hi,

Can anyone suggest a free GUI tool that can reverse engineer a
postgresql schema and show it as a diagram?

It doesn't have to be very sophisticated, I just need to get a quick
understanding of schemas that I'm not familiar with.

Thanks,
nyenyec


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] is there a tracking trace tool like the SQL Analizer

2007-01-08 Thread Jeffrey Melloy

How long has that been available for OS X?  Last time I looked at it it
wasn't.

On 1/8/07, Dave Page [EMAIL PROTECTED] wrote:




 --- Original Message ---
 From: Ian Harding [EMAIL PROTECTED]
 To: Jeffrey Melloy [EMAIL PROTECTED]
 Sent: 1/8/07, 7:06:31 PM
 Subject: Re: [GENERAL] is there a tracking trace tool like the SQL
Analizer in MS sqlserver.?

 I thought that was called SQL Profiler.

 http://msdn2.microsoft.com/en-us/library/ms181091.aspx

 Query Analyzer is EXPLAIN with a GUI.

 http://msdn2.microsoft.com/en-gb/library/aa178423(SQL.80).aspx

 Anyway, I have not heard of such a thing for PostgreSQL, although I am
 sure the basic information you want could be obtained from logging
 queries and timing.

pgAdmin has graphical explain, and basic activity monitoring. It is free,
and has great support!

Regards, Dave

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



Re: [GENERAL] is there a tracking trace tool like the SQL Analizer

2007-01-08 Thread Dave Page


 --- Original Message ---
 From: Jeffrey Melloy [EMAIL PROTECTED]
 To: Dave Page [EMAIL PROTECTED]
 Sent: 1/8/07, 8:18:02 PM
 Subject: Re: [GENERAL] is there a tracking trace tool like the SQL Analizer
 
 How long has that been available for OS X?  Last time I looked at it it
 wasn't.

2 years or so iirc.

Regards, Dav3

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL)

2007-01-08 Thread Oisin Glynn

Shelby Cain wrote:

I'm just throwing this out here... but the usual suspects for strange 
connection issues like these would be a software firewall or antivirus 
(especially AV software that has anti-worm features).

Also, Windows XP Pro isn't really viable as a server OS as Microsoft has 
intentionally limited certain features of the TCP/IP stack (e.g.: low limits 
for the TCP connection backlog).  In theory, if you had a flood of incoming 
connections only some of them would be serviced correctly before the backlog 
limit was reached.

Regards,

Shelby Cain

- Original Message 
From: Oisin Glynn [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Monday, January 8, 2007 1:33:54 PM
Subject: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL) 8.1.4

I am running postgres on Windows XP and have been for quite a while as

the database engine for our application.

On an ODBC connection the following error has started happening. The gap 
in activity is normal for our application. but the connection failing is 
a first we have this application live in approx 5 locations some for as 
long as 12 months (though only 2 live are on XP)


We did enable the system to check for Windows updates automatically 
recently not sure if this could be a factor.
There is another one of our apps connecting through the C++ API which is 
not seeing any loss of service.  I suspect some sort of tcp-ip issue as 
I cannot connect with PgAdminIII locally on the server once the 
condition occurs. I have had this 3 times now and it seems to be 
happening very 2-3hours of operation. I am not getting any eventvwr or 
other Windows errors telling me I am hitting a limit or such?  And when 
I restarted the Postgresql service I got the following as part of my 
log(See attached logs)


2007-01-05 14:38:28 LOG:  pgpipe failed to bind: 10055i
2007-01-05 14:38:28 FATAL:  could not create pipe for statistics buffer: 
No error


I am attaching my postgresql.conf (perhaps I can turn on some more 
logging to make things easier to spot) and the log files I have from the 
first occurrence. These show my attempts to restart the postgresql 
service and the errors I received.


This is a snipit of my applications log file showing the error pop up at 
14:11:07


01/05/07 13:58:57.453 2296 DEBUG  - Call No: 10131  SELECT   SQL = 
SELECT * FROM zfunc_access_mailbox_cleanup(10131)
01/05/07 13:58:57.468 2296 DEBUG  - Call No: 10131  SELECT Complete   
Return Code = $VC1
01/05/07 13:58:57.468 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131

01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT Free 
Statement Handle

01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131DISCONNECT OK Voicemail
01/05/07 13:58:57.484 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131

01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Allocate DBC 
Handle Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Connect 
Voicemail
01/05/07 14:11:07.750 2296 DEBUG  - Call No: 10132CONNECT SQLConnect 
failed   08001  could not connect to server: No buffer space available 
(0x2747/10055)

Is the server running on host 127.0.0.1 and accepting
TCP/IP connections on port 5432?
 Error Message = 165Voicemail

If there is anything I can do to generate better logging I am all ears,

Oisin






# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have 
# to SIGHUP the postmaster for the changes to take effect, or use 
# pg_ctl reload. Some settings, such as listen_addresses, require

# a postmaster shutdown and restart to take effect.


#---
# FILE LOCATIONS
#---

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'# use data in another directory

Re: [GENERAL] Newbie Constraint ?

2007-01-08 Thread Jeanna Geier
 Hello List!

 OK, so I'm new to SQL and Postgres and am working on taking over this DB
work, and ran across a command that I'm not sure of and am wondering if you
can help me with... Probably a basic SQL question, but thought more than one
person on here would be able to point me in the right direction!!

 In one of the tables in the DB, there is a constraint defined that I'm not
familiar with:

   CONSTRAINT Relationship182 FOREIGN KEY (elementid)
  REFERENCES element (elementid) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE


 Not quite sure what the: CONSTRAINT Relationship182 is exactly...  can
anyone help me with this one?  Haven't seen this one yet...


 Thanks much for your help and support!
 -Jeanna


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL) 8.1.4

2007-01-08 Thread Brandon Aiken
If you are suffering from this particular error, you will see an entry
in the event log.  Look for an error from Tcpip with an ID of 4226.  The
message will say "TCP/IP has reached the security limit imposed on the
number of concurrent (incomplete) TCP connect attempts."  If you do not
see this message, you are not hitting this limit.

See:
http://www.microsoft.com/products/ee/transform.aspx?ProdName=Windows%20Operating%20System&ProdVer=5.1.2600.2180&EvtID=4226&EvtSrc=Tcpip&FileVer=5.1.2600.2180&FileName=xpsp2res.dll&EvtType=Warning&LCID=

The limit is 10 outbound half-open connections.  Typically, you will
only see this limit if you're running a server or using P2P apps.  The
other limit is a maximum of 10 connections to the Server component
(which does file and print sharing; people were using Win2k Pro as a
file and print server).  The only way to modify the limit is to manually
modify binary files.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shelby Cain
Sent: Monday, January 08, 2007 3:12 PM
To: Oisin Glynn; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database Failure on Windows XP Pro psql
(PostgreSQL) 8.1.4

I'm just throwing this out here... but the usual suspects for strange
connection issues like these would be a software firewall or antivirus
(especially AV software that has anti-worm features).

Also, Windows XP Pro isn't really viable as a server OS as Microsoft has
intentionally limited certain features of the TCP/IP stack (e.g.: low
limits for the TCP connection backlog).  In theory, if you had a flood
of incoming connections only some of them would be serviced correctly
before the backlog limit was reached.

Regards,

Shelby Cain

- Original Message 
From: Oisin Glynn [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Monday, January 8, 2007 1:33:54 PM
Subject: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL)
8.1.4

I am running postgres on Windows XP and have been for quite a while as

the database engine for our application.

On an ODBC connection the following error has started happening. The gap

in activity is normal for our application. but the connection failing is

a first we have this application live in approx 5 locations some for as 
long as 12 months (though only 2 live are on XP)

We did enable the system to check for Windows updates automatically 
recently not sure if this could be a factor.
There is another one of our apps connecting through the C++ API which is

not seeing any loss of service.  I suspect some sort of tcp-ip issue as 
I cannot connect with PgAdminIII locally on the server once the 
condition occurs. I have had this 3 times now and it seems to be 
happening very 2-3hours of operation. I am not getting any eventvwr or 
other Windows errors telling me I am hitting a limit or such?  And when 
I restarted the Postgresql service I got the following as part of my 
log(See attached logs)

2007-01-05 14:38:28 LOG:  pgpipe failed to bind: 10055i
2007-01-05 14:38:28 FATAL:  could not create pipe for statistics buffer:

No error

I am attaching my postgresql.conf (perhaps I can turn on some more 
logging to make things easier to spot) and the log files I have from the

first occurrence. These show my attempts to restart the postgresql 
service and the errors I received.

This is a snipit of my applications log file showing the error pop up at

14:11:07

01/05/07 13:58:57.453 2296 DEBUG  - Call No: 10131  SELECT   SQL = 
SELECT * FROM zfunc_access_mailbox_cleanup(10131)
01/05/07 13:58:57.468 2296 DEBUG  - Call No: 10131  SELECT Complete   
Return Code = $VC1
01/05/07 13:58:57.468 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT Free 
Statement Handle
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131DISCONNECT OK
Voicemail
01/05/07 13:58:57.484 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Allocate DBC

Handle Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Connect 
Voicemail
01/05/07 14:11:07.750 2296 DEBUG  - Call No: 10132CONNECT SQLConnect 
failed   08001  could not connect to server: No buffer space available 
(0x2747/10055)
Is the server running on host 127.0.0.1 and accepting
TCP/IP connections on port 5432?
 Error Message = 165Voicemail

If there is anything I can do to generate better logging I am all ears,

Oisin






# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are
introduced
# with '#' anywhere on a line. The complete list of option names and
# 

Re: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL)

2007-01-08 Thread Oisin Glynn

Brandon Aiken wrote:

If you are suffering from this particular error, you will see an entry
in the event log.  Look for an error from Tcpip with an ID of 4226.  The
message will say "TCP/IP has reached the security limit imposed on the
number of concurrent (incomplete) TCP connect attempts."  If you do not
see this message, you are not hitting this limit.

See:
http://www.microsoft.com/products/ee/transform.aspx?ProdName=Windows%20Operating%20System&ProdVer=5.1.2600.2180&EvtID=4226&EvtSrc=Tcpip&FileVer=5.1.2600.2180&FileName=xpsp2res.dll&EvtType=Warning&LCID=

The limit is 10 outbound half-open connections.  Typically, you will
only see this limit if you're running a server or using P2P apps.  The
other limit is a maximum of 10 connections to the Server component
(which does file and print sharing; people were using Win2k Pro as a
file and print server).  The only way to modify the limit is to manually
modify binary files.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shelby Cain
Sent: Monday, January 08, 2007 3:12 PM
To: Oisin Glynn; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database Failure on Windows XP Pro psql
(PostgreSQL) 8.1.4

I'm just throwing this out here... but the usual suspects for strange
connection issues like these would be a software firewall or antivirus
(especially AV software that has anti-worm features).

Also, Windows XP Pro isn't really viable as a server OS as Microsoft has
intentionally limited certain features of the TCP/IP stack (e.g.: low
limits for the TCP connection backlog).  In theory, if you had a flood
of incoming connections only some of them would be serviced correctly
before the backlog limit was reached.

Regards,

Shelby Cain

- Original Message 
From: Oisin Glynn [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Monday, January 8, 2007 1:33:54 PM
Subject: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL)
8.1.4

I am running postgres on Windows XP and have been for quite a while as

the database engine for our application.

On an ODBC connection the following error has started happening. The gap

in activity is normal for our application. but the connection failing is

a first we have this application live in approx 5 locations some for as 
long as 12 months (though only 2 live are on XP)


We did enable the system to check for Windows updates automatically 
recently not sure if this could be a factor.

There is another one of our apps connecting through the C++ API which is

not seeing any loss of service.  I suspect some sort of tcp-ip issue as 
I cannot connect with PgAdminIII locally on the server once the 
condition occurs. I have had this 3 times now and it seems to be 
happening very 2-3hours of operation. I am not getting any eventvwr or 
other Windows errors telling me I am hitting a limit or such?  And when 
I restarted the Postgresql service I got the following as part of my 
log(See attached logs)


2007-01-05 14:38:28 LOG:  pgpipe failed to bind: 10055i
2007-01-05 14:38:28 FATAL:  could not create pipe for statistics buffer:

No error

I am attaching my postgresql.conf (perhaps I can turn on some more 
logging to make things easier to spot) and the log files I have from the


first occurrence. These show my attempts to restart the postgresql 
service and the errors I received.


This is a snipit of my applications log file showing the error pop up at

14:11:07

01/05/07 13:58:57.453 2296 DEBUG  - Call No: 10131  SELECT   SQL = 
SELECT * FROM zfunc_access_mailbox_cleanup(10131)
01/05/07 13:58:57.468 2296 DEBUG  - Call No: 10131  SELECT Complete   
Return Code = $VC1
01/05/07 13:58:57.468 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131

01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT Free 
Statement Handle

01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131DISCONNECT OK
Voicemail
01/05/07 13:58:57.484 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131

01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Allocate DBC

Handle Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Connect 
Voicemail
01/05/07 14:11:07.750 2296 DEBUG  - Call No: 10132CONNECT SQLConnect 
failed   08001  could not connect to server: No buffer space available 
(0x2747/10055)

Is the server running on host 127.0.0.1 and accepting
TCP/IP connections on port 5432?
 Error Message = 165Voicemail

If there is anything I can do to generate better logging I am all ears,

Oisin






# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are
introduced
# with '#' anywhere on a line. The 

Re: [GENERAL] Newbie Constraint ?

2007-01-08 Thread Bricklen Anderson

Jeanna Geier wrote:

 Hello List!

 OK, so I'm new to SQL and Postgres and am working on taking over this DB
work, and ran across a command that I'm not sure of and am wondering if you
can help me with... Probably a basic SQL question, but thought more than one
person on here would be able to point me in the right direction!!

 In one of the tables in the DB, there is a constraint defined that I'm not
familiar with:

   CONSTRAINT Relationship182 FOREIGN KEY (elementid)
  REFERENCES element (elementid) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE


 Not quite sure what the: CONSTRAINT Relationship182 is exactly...  can
anyone help me with this one?  Haven't seen this one yet...


That's the constraint name.
More detailed here: 
http://www.postgresql.org/docs/8.1/interactive/sql-createtable.html
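
A hedged sketch of how such a named constraint is typically written; the child table name
below is made up, and "Relationship182" is simply the name a modelling tool gave it:

  ALTER TABLE some_child_table
    ADD CONSTRAINT "Relationship182" FOREIGN KEY (elementid)
        REFERENCES element (elementid)
        ON UPDATE CASCADE ON DELETE CASCADE;

  -- the name is what you use to refer to the constraint later, e.g.
  ALTER TABLE some_child_table DROP CONSTRAINT "Relationship182";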


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] configure; make on cygwin doesn't produce DLLs

2007-01-08 Thread Curran Schiefelbein
If this is not the right list for this problem, could someone please 
point me to the right area?


Curran Schiefelbein wrote:


Hi,

I'm trying to compile postgres-8.2.0 from source on WinXP-SP2, for use
with libpqxx.

At first I was able to compile in cygwin with ./configure followed by
make. However, postgres-8.2.0/src/interfaces/libpq/Debug and libpqd.dll
were not created, just Release and libpq.dll, much as Brian Doyle (this
list and
http://gborg.postgresql.org/pipermail/libpqxx-general/2006-July/001360.html) 


and others have experienced. I'd like to get to the point of following
Bart Samwell's advice to use nmake, but unfortunately something else has
gone wrong upstream. After accidentally mucking up the directory, I
re-untarred the postgresql src package, uninstalled postgres packages
from cygwin and PostgreSQL from Windows, rebooted, and started over.

Now, from a clean src package, I get different behavior from configure.
Formerly, it seemed to build as for win32. I had a spot of trouble the
first time through with the compiler finding src/port/pg_config_paths.h
wile compiling in src/interfaces/ecpg/ecpglib. This problem did not
remanifest. Unfortunately I didn't capture the configure output the one
time it worked, but looking at the output now, configure is using the
cygwin template rather than the win32 one. Make doesn't error; neither
does it produce libpq.dll and Release much less libpqd.dll and Debug.

Short of a clean machine with a fresh install of cygwin (which I can't
afford to do; other software depends on it in fragile ways), I'm
stumped. Does anyone have any suggestions for getting configure/make to
produce these DLLs?

I will eventually be compiling libpqxx and then my client program with
MSVC 2005, so if I am barking up the wrong tree altogether, please do
say so :)

Curran




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] configure; make on cygwin doesn't produce DLLs

2007-01-08 Thread Scott Marlowe
On Mon, 2007-01-08 at 15:19, Curran Schiefelbein wrote:
 If this is not the right list for this problem, could someone please 
 point me to the right area?

according to this page:

http://www.postgresql.org/community/lists/

Current win32 development issues should be raised on -hackers. Cygwin
issues should be raised on -cygwin.

so, one of those.  I'm not sure you really want to be building under
cygwin, but would be looking for the native windows build.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL)

2007-01-08 Thread Brandon Aiken
Try a 180-day Win2k3 trial to see if the issue persists.  Realistically,
though, if you can't afford the proprietary software don't develop with
it.

If it's a Windows XP bug (or feature) then it's not anything we can
help with since PG is working correctly.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Oisin Glynn
Sent: Monday, January 08, 2007 3:54 PM
To: Shelby Cain
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database Failure on Windows XP Pro psql
(PostgreSQL)

I have exceptions in the FW for postgresql.
I know XP Pro is not a server OS, but it is a $$ question (I know *nix is
cheaper, but it is not an option without a major rewrite). The server is
really more like a network appliance; it is not expecting to have extra
connections all over. In fact the DB is only accessible on 127.0.0.1.

I have been checking for XP limits coming into play, but the kicker is
that there are no errors appearing (which other people are reporting
lots of with SP2). The 3 limits I am aware of are:
a) There is a new limit on the number of outbound connections created
per second (people using eMule, torrents etc. are having issues). We
are not creating masses of outbound requests per second (at least AFAIK,
unless something in PG or something else is without us noticing).
b) There is a maximum on open TCP/IP connections which is not physically
enforced but may legally be enforced, according to MS.
c) There is a hard limit on NAMED_PIPE and file shares that is allegedly
10. It is unclear exactly what is being counted here, as I had
NAMED_PIPES and 5 file shares from different PCs open earlier trying to
cause a failure.

We have run netstat to check for odd-looking connections on each
occurrence but never seen any.
Are there pieces of logging I should turn on in my .conf that would be
helpful?

Oisin



[GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread George Nychis
Hi,

I have approximately 2 billion data entries that I would like to insert into a 
database.
Each entry consists of:
INT BOOLEAN INT BOOLEAN

I want to populate a table such that it only contains the unique rows, all 
other data
should be thrown out.  I would say a significant amount of the insertions are 
going to
fail due to unique constraints.  The unique constraint is on the two integers, 
not on the
booleans.

Using mysql, I was able to do this with the following query, for all data files
(25574 data files total):
mysql -e "use connectivity; LOAD DATA INFILE '/path/to/file' IGNORE INTO TABLE ipconn
FIELDS TERMINATED BY ' ';"

What I *think* mysql did was sort each data file and do a sort of merge sort 
between the
data I was inserting and the data in the database.  It would insert the first 
unique
instance of a row it saw, and reject all other insertions that violated the 
unique
constraint due to the IGNORE.

From what I understand, this functionality is not in postgresql.  Fine, I 
certainly can't
change that.  But I am looking for a comparable solution for the size of my 
data.

One solution is to have a temporary table, insert all 2 billion rows, and then 
copy the
distinct entries to another table.  This would be like one massive sort?

Is this the only/best solution using postgresql?

Thanks!
George



Re: [GENERAL] Slony in Windows installer?

2007-01-08 Thread Raymond O'Donnell
On 8 Jan 2007 at 23:15, Shoaib Mir wrote:

 It does install both the Slony binaries and scripts.

Grand - thanks for confirming.

Ray.


--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--





Re: [GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread Scott Marlowe
On Mon, 2007-01-08 at 14:58, George Nychis wrote:
 Hi,
 
 I have approximately 2 billion data entries that I would like to insert into 
 a database.
 Each entry consists of:
 INT BOOLEAN INT BOOLEAN
 
 I want to populate a table such that it only contains the unique rows, all 
 other data
 should be thrown out.  I would say a significant amount of the insertions are 
 going to
 fail due to unique constraints.  The unique constraint is on the two 
 integers, not on the
 booleans.
 
 Using mysql, I was able to do this with the following query, for all data 
 files (25574
 data files total):
 mysql -e \use connectivity; LOAD DATA INFILE '/path/to/file' IGNORE INTO 
 TABLE ipconn
 FIELDS TERMINATED BY ' ';\

A quick question.  Could you run selects or other inserts on that table
while the load data infile was running?  Cause I'm guessing that it
basically locked the whole table while running.

 What I *think* mysql did was sort each data file and do a sort of merge sort 
 between the
 data I was inserting and the data in the database.  It would insert the first 
 unique
 instance of a row it saw, and reject all other insertions that violated the 
 unique
 constraint due to the IGNORE.

Me too.  Which would require one big lock on the table which would
mean no parallel access.

It's also likely that it used a temp table which doubled the size of the
database while you were inserting.

 From what I understand, this functionality is not in postgresql.  Fine, I 
 certainly can't
 change that.  But I am looking for a comparable solution for the size of my 
 data.
 
 One solution is to have a temporary table, insert all 2 billion rows, and 
 then copy the
 distinct entries to another table.  This would be like one massive sort?
 
 Is this the only/best solution using postgresql?

TANSTAAFL.  PostgreSQL is designed so that you can run an import process
on that table while 100 other users still access it at the same time. 
Because of that, you don't get to do dirty, nasty things under the
sheets that allow for super easy data loading and merging like you got
with MySQL.  Apples and Oranges.

Assuming you're loading into an empty table, the load to temp, select
distinct out and into the final table seems reasonable, should run
reasonably fast.  If you need to load to an existing table, it might get
a little more complex.
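
For illustration, a minimal sketch of that load-to-temp / select-distinct
approach (column names are made up; ipconn is the target table from the
original post):

  -- staging table matching the space-delimited data files
  CREATE TEMP TABLE ipconn_load (src int, src_flag boolean,
                                 dst int, dst_flag boolean);

  -- server-side bulk load of one file
  COPY ipconn_load FROM '/path/to/file' DELIMITER ' ';

  -- keep one arbitrary row per (src, dst) pair, i.e. per unique key
  INSERT INTO ipconn (src, src_flag, dst, dst_flag)
  SELECT DISTINCT ON (src, dst) src, src_flag, dst, dst_flag
  FROM ipconn_load;

At 2 billion rows the DISTINCT is essentially one big sort, so expect it to
spill to temp disk rather than run in memory.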



Re: [GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread George Nychis


Scott Marlowe wrote:
 On Mon, 2007-01-08 at 14:58, George Nychis wrote:
 Hi,

 I have approximately 2 billion data entries that I would like to insert into 
 a database.
 Each entry consists of:
 INT BOOLEAN INT BOOLEAN

 I want to populate a table such that it only contains the unique rows, all 
 other data
 should be thrown out.  I would say a significant amount of the insertions 
 are going to
 fail due to unique constraints.  The unique constraint is on the two 
 integers, not on the
 booleans.

 Using mysql, I was able to do this with the following query, for all data 
 files (25574
 data files total):
 mysql -e \use connectivity; LOAD DATA INFILE '/path/to/file' IGNORE INTO 
 TABLE ipconn
 FIELDS TERMINATED BY ' ';\
 
 A quick question.  Could you run selects or other inserts on that table
 while the load data infile was running?  Cause I'm guessing that it
 basically locked the whole table while running.
What does this have to do with my question?  I don't need to run selects or 
inserts on the
table while the load data is running...

 
 What I *think* mysql did was sort each data file and do a sort of merge sort 
 between the
 data I was inserting and the data in the database.  It would insert the 
 first unique
 instance of a row it saw, and reject all other insertions that violated the 
 unique
 constraint due to the IGNORE.
 
 Me too.  Which would require one big lock on the table which would
 mean no parallel access.
Thats fine, it doesn't matter.

 
 It's also likely that it used a temp table which doubled the size of the
 database while you were inserting.
 
 From what I understand, this functionality is not in postgresql.  Fine, I 
 certainly can't
 change that.  But I am looking for a comparable solution for the size of my 
 data.

 One solution is to have a temporary table, insert all 2 billion rows, and 
 then copy the
 distinct entries to another table.  This would be like one massive sort?

 Is this the only/best solution using postgresql?
 
 TANSTAAFL.  PostgreSQL is designed so that you can run an import process
 on that table while 100 other users still access it at the same time. 
 Because of that, you don't get to do dirty, nasty things under the
 sheets that allow for super easy data loading and merging like you got
 with MySQL.  Apples and Oranges.
 
 Assuming you're loading into an empty table, the load to temp, select
 distinct out and into the final table seems reasonable, should run
 reasonably fast.  If you need to load to an existing table, it might get
 a little more complex.
 

The goal is not to run queries while the data is being inserted... I am
wondering if the postgresql method I have mentioned, to actually insert and get
only distinct values, is optimal, and whether it would produce the same results
as the method I explained in mysql.



Re: [GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread Scott Marlowe
On Mon, 2007-01-08 at 15:52, George Nychis wrote:
 Scott Marlowe wrote:
  On Mon, 2007-01-08 at 14:58, George Nychis wrote:
  Hi,
 
  I have approximately 2 billion data entries that I would like to insert 
  into a database.
  Each entry consists of:
  INT BOOLEAN INT BOOLEAN
 
  I want to populate a table such that it only contains the unique rows, all 
  other data
  should be thrown out.  I would say a significant amount of the insertions 
  are going to
  fail due to unique constraints.  The unique constraint is on the two 
  integers, not on the
  booleans.
 
  Using mysql, I was able to do this with the following query, for all data 
  files (25574
  data files total):
  mysql -e \use connectivity; LOAD DATA INFILE '/path/to/file' IGNORE INTO 
  TABLE ipconn
  FIELDS TERMINATED BY ' ';\
  
  A quick question.  Could you run selects or other inserts on that table
  while the load data infile was running?  Cause I'm guessing that it
  basically locked the whole table while running.
 What does this have to do with my question?  I don't need to run selects or 
 inserts on the
 table while the load data is running...
 
  
  What I *think* mysql did was sort each data file and do a sort of merge 
  sort between the
  data I was inserting and the data in the database.  It would insert the 
  first unique
  instance of a row it saw, and reject all other insertions that violated 
  the unique
  constraint due to the IGNORE.
  
  Me too.  Which would require one big lock on the table which would
  mean no parallel access.
 Thats fine, it doesn't matter.
 
  
  It's also likely that it used a temp table which doubled the size of the
  database while you were inserting.
  
  From what I understand, this functionality is not in postgresql.  Fine, I 
  certainly can't
  change that.  But I am looking for a comparable solution for the size of 
  my data.
 
  One solution is to have a temporary table, insert all 2 billion rows, and 
  then copy the
  distinct entries to another table.  This would be like one massive sort?
 
  Is this the only/best solution using postgresql?
  
  TANSTAAFL.  PostgreSQL is designed so that you can run an import process
  on that table while 100 other users still access it at the same time. 
  Because of that, you don't get to do dirty, nasty things under the
  sheets that allow for super easy data loading and merging like you got
  with MySQL.  Apples and Oranges.
  
  Assuming you're loading into an empty table, the load to temp, select
  distinct out and into the final table seems reasonable, should run
  reasonably fast.  If you need to load to an existing table, it might get
  a little more complex.
  
 
 The goal is not to run queries while the data is being insertedI am 
 wondering if the
 postgresql method I have mentioned to actually insert and get only distinct 
 values is most
 optimal, which would produce the same results method I explained in mysql.

Did I fail to answer your question?

Sorry if I gave you more information than you needed.  Please feel free
to ask someone else next time.



Re: [GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread George Nychis


Scott Marlowe wrote:
 On Mon, 2007-01-08 at 15:52, George Nychis wrote:
 Scott Marlowe wrote:
 On Mon, 2007-01-08 at 14:58, George Nychis wrote:
 Hi,

 I have approximately 2 billion data entries that I would like to insert 
 into a database.
 Each entry consists of:
 INT BOOLEAN INT BOOLEAN

 I want to populate a table such that it only contains the unique rows, all 
 other data
 should be thrown out.  I would say a significant amount of the insertions 
 are going to
 fail due to unique constraints.  The unique constraint is on the two 
 integers, not on the
 booleans.

 Using mysql, I was able to do this with the following query, for all data 
 files (25574
 data files total):
 mysql -e \use connectivity; LOAD DATA INFILE '/path/to/file' IGNORE INTO 
 TABLE ipconn
 FIELDS TERMINATED BY ' ';\
 A quick question.  Could you run selects or other inserts on that table
 while the load data infile was running?  Cause I'm guessing that it
 basically locked the whole table while running.
 What does this have to do with my question?  I don't need to run selects or 
 inserts on the
 table while the load data is running...

 What I *think* mysql did was sort each data file and do a sort of merge 
 sort between the
 data I was inserting and the data in the database.  It would insert the 
 first unique
 instance of a row it saw, and reject all other insertions that violated 
 the unique
 constraint due to the IGNORE.
 Me too.  Which would require one big lock on the table which would
 mean no parallel access.
 Thats fine, it doesn't matter.

 It's also likely that it used a temp table which doubled the size of the
 database while you were inserting.

 From what I understand, this functionality is not in postgresql.  Fine, I 
 certainly can't
 change that.  But I am looking for a comparable solution for the size of 
 my data.

 One solution is to have a temporary table, insert all 2 billion rows, and 
 then copy the
 distinct entries to another table.  This would be like one massive sort?

 Is this the only/best solution using postgresql?
 TANSTAAFL.  PostgreSQL is designed so that you can run an import process
 on that table while 100 other users still access it at the same time. 
 Because of that, you don't get to do dirty, nasty things under the
 sheets that allow for super easy data loading and merging like you got
 with MySQL.  Apples and Oranges.

 Assuming you're loading into an empty table, the load to temp, select
 distinct out and into the final table seems reasonable, should run
 reasonably fast.  If you need to load to an existing table, it might get
 a little more complex.

 The goal is not to run queries while the data is being insertedI am 
 wondering if the
 postgresql method I have mentioned to actually insert and get only distinct 
 values is most
 optimal, which would produce the same results method I explained in mysql.
 
 Did I fail to answer your question?
 
 Sorry if I gave you more information than you needed.  Please feel free
 to ask someone else next time.
 

ahhh i missed your last paragraph... so much text.  Actually yeah that answers
my question, thank you.  I guess it's more a single run through than the mysql
method, which was piece-wise.

Thanks for the help/response.

- George



Re: [GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread Scott Marlowe
On Mon, 2007-01-08 at 15:59, George Nychis wrote:
 Scott Marlowe wrote:
  On Mon, 2007-01-08 at 15:52, George Nychis wrote:
  Scott Marlowe wrote:
  On Mon, 2007-01-08 at 14:58, George Nychis wrote:
  Hi,
 
  I have approximately 2 billion data entries that I would like to insert 
  into a database.
  Each entry consists of:
  INT BOOLEAN INT BOOLEAN
 

SNIP

  Assuming you're loading into an empty table, the load to temp, select
  distinct out and into the final table seems reasonable, should run
  reasonably fast.  If you need to load to an existing table, it might get
  a little more complex.
 
  The goal is not to run queries while the data is being insertedI am 
  wondering if the
  postgresql method I have mentioned to actually insert and get only 
  distinct values is most
  optimal, which would produce the same results method I explained in mysql.
  
  Did I fail to answer your question?
  
  Sorry if I gave you more information than you needed.  Please feel free
  to ask someone else next time.
  
 
 ahhh i missed your last paragraph... so much text.  Actually yeah that 
 answers my
 question, thank you.  I guess its more a single run through than the mysql 
 method which
 was piece-wise.

Note that things will go faster if you do your initial bulk data load using
copy from stdin.  Individual inserts in postgresql are quite costly compared
to mysql; it's the transactional overhead.  By grouping them together you can
make things much faster: copy from stdin does all the inserts in one big
transaction.

If you use insert statements, wrap them in a begin; end; pair to make
them one transaction.  Not as fast as copy, due to parsing, but still
much faster than individual transactions.
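
A rough sketch of both variants (ipconn as before; file paths are made up):

  -- client-side bulk load from psql; each \copy runs as one transaction
  \copy ipconn from '/path/to/file' with delimiter ' '

  -- or, if you must use INSERTs, batch them between begin and end
  BEGIN;
  INSERT INTO ipconn VALUES (1, true, 2, false);
  INSERT INTO ipconn VALUES (3, false, 4, true);
  -- ... many more rows ...
  END;  -- END is equivalent to COMMIT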



Re: [GENERAL] Autovacuum Improvements

2007-01-08 Thread Chris Browne
[EMAIL PROTECTED] (Csaba Nagy) writes:
 On Sun, 2006-12-24 at 03:03, Christopher Browne wrote:
 [snip]
 Seems to me that you could get ~80% of the way by having the simplest
 2 queue implementation, where tables with size < some threshold get
 thrown at the little table queue, and tables above that size go to
 the big table queue.

 That would most definitely not cut it for me, I have more than 2
 categories of tables:

  - a few small but very often updated/inserted/deleted table: these must
 be continuously vacuumed, your little queue is not good enough for
 that, as even the round trip between the small tables could lead to
 bloat on them;

I disagree; if we added more work processes, that could eat quickly
through the short end of the queue.

  - a few small and moderately updated, that could live with the little
 queue;

  - a few big and frequently updated, but which only have a small
 percentage of rows actively updated at any time: those could live with
 the big queue;

  - the rest which are rarely updated, I would put those in a separate
 queue so they won't affect the rest, cause vacuuming them is really
 mostly not critical;

 The point is that I'm not sure there couldn't be even more reasons
 to split the tables in even more queues based on the importance of
 vacuuming them combined with update rate and their size. If I can
 set up my own queues I can experiment with what works best for
 me... for the base setup you could set up some default queues. I
 wonder though how would you handle dynamics of tables, I mean when
 will a small table which grows start to be considered a big table
 for the purpose of putting it in one queue or the other ? I guess it
 would be done on analyzing the table, which is also handled by
 autovacuum, so tables with no vacuum queue settings could go to one
 of the 2 default queues you mention.

The heuristic I was thinking of didn't involve having two queues, but
rather just 1.  By having some size information, work processes could
eat at the queue from both ends.

If you have cases where tables need to be vacuumed *really*
frequently, then you make sure that they are being injected
frequently, and that some of the workers are tied to Just Doing Small
Tables.

I think that *does* cover your scenario quite adequately, and without
having to get into having a bunch of queues.

The heuristic is incomplete in one other fashion, namely that it
doesn't guarantee that tables in the middle will ever get gotten to.
That mandates having a third policy, namely to have a worker that goes
through tables in the (singular) queue in some form of chronological
order.

 That should keep any small tables from getting vacuum-starved.
 
 I'd think the next step would be to increase the number of queues,
 perhaps in a time-based fashion.  There might be times when it's
 acceptable to vacuum 5 tables at once, so you burn thru little tables
 like the blazes, and handle larger ones fairly promptly.  And other
 times when you don't want to do *any* big tables, and limit a single
 queue to just the itty bitty ones.

 This is all nice and it would be cool if you could set it up per vacuum
 queue. I mean how much more effort would be to allow vacuum queues with
 generic settings like time windows with max number of threads for each
 window, and let the user explicitly assign tables to those queues,
 instead of hard coding the queues and their settings and assign tables
 to them based on size or any other heuristics ?

 For the average application which needs simple settings, there could be
 a default setup with the 2 queues you mention. If it would be possible
 to set up some rules to assign tables to queues based on their
 properties on analyze time, instead of explicitly assigning to one queue
 or other, that would be nice too, and then you can completely cover the
 default setup with those settings, and allow for more complex setups for
 those who need it.

My thinking has headed more towards simplifying this; two queues seems
to be one too many :-).

 This approach allows you to stay mostly heuristic-based, as opposed to
 having to describe policies in gratuitous detail.

 I agree that for simple setups that would be OK, but like I said, if it
 would be easy enough to code that heuristics, and provide some sane
 setup as default, and then let the user optimize it, that would be a
 cool solution.

 Now it's true I don't really know how would you code 'assign all tables
 which are smaller than x rows to vacuum queue little-tables' ... maybe
 by providing a function to the queue which matches on the table ? And
 you can change that function ? No idea, but it probably can be done...

Based on the three policies I've seen, it could make sense to assign
worker policies:

1. You have a worker that moves its way through the queue in some sort of
   sequential order, based on when the table is added to the queue, to
   guarantee that all tables get processed, eventually.

2. You have workers that always 

Re: [GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread Jeremy Haile
 Note that things will go faster if you do your initial data load using
 copy from stdin for the initial bulk data load.  individual inserts in
 postgresql are quite costly compared to mysql.  It's the transactional
 overhead.  by grouping them together you can make things much faster. 
 copy from stdin does all the inserts in one big transaction.  

You could do copy from file as well right?  (no performance difference
compared to copy from stdin)  I do this all the time.

Also - maybe I misunderstand something, but why does PostgreSQL's
implementation prohibit it from ignoring insert errors during a copy? 
If you added a unique constraint to the table before copying, PostgreSQL
would generate errors due to the unique constraint violation - so I
don't think any additional locking would be required for it to simply
say "If there is an error while copying in, ignore it and continue
inserting other rows."

PostgreSQL's copy command doesn't currently support this, so the temp
table followed by a distinct select is the way to go.  But I didn't
follow all of the talk about it requiring locking the table and being
inherently impossible for PostgreSQL to support.

I've wanted a similar feature.  I select rows into a table on a regular
basis.  I'd like to be able to overlap old values and have PostgreSQL
ignore failed inserts.  SQL Server offers a flag that allows you to
ignore inserts whose primary key already exists in the table.  The only
solution in PostgreSQL is to run a query to manually delete the
duplicate rows from a temp table before inserting - which takes much
more time.
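
A sketch of that work-around, with made-up staging/target table names (it
assumes the staged data has no internal duplicates of its own):

  -- drop staged rows whose key already exists in the target...
  DELETE FROM staging
   WHERE EXISTS (SELECT 1 FROM target WHERE target.id = staging.id);

  -- ...then the remaining rows cannot violate the primary key
  INSERT INTO target SELECT * FROM staging;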



Re: [GENERAL] insert only unique values in to a table, ignore rest?

2007-01-08 Thread George Nychis


Jeremy Haile wrote:
 Note that things will go faster if you do your initial data load using
 copy from stdin for the initial bulk data load.  individual inserts in
 postgresql are quite costly compared to mysql.  It's the transactional
 overhead.  by grouping them together you can make things much faster. 
 copy from stdin does all the inserts in one big transaction.  
 
 You could do copy from file as well right?  (no performance difference
 compared to copy from stdin)  I do this all the time.
 
 Also - maybe I misunderstand something, but why does PostgreSQL's
 implementation prohibit it from ignoring insert errors during a copy? 
 If you added a unique constraint to the table before copying, PostgreSQL
 would generate errors due to the unique constraint violation - so I
 don't think any additional locking would be required for it to simply
 say If there is an error while copying in, ignore it and continue
 inserting other rows  
 
 PostgreSQL's copy command doesn't currently support this, so the temp
 table followed by a distinct select is the way to go.  But I didn't
 follow all of the talk about it requiring locking the table and being
 inherently impossible for PostgreSQL to support.
 
 I've wanted a similar feature.  I select rows into a table on a regular
 basis.  I'd like to be able to overlap old values and have PostgreSQL
 ignore failed inserts.  SQL Server offers a flag that allows you to
 ignore inserts whose primary key already exists in the table.  The only
 solution in PostgreSQL is to run a query to manually delete the
 duplicate rows from a temp table before inserting - which takes much
 more time.
 

I would also like this feature... :) (obviously)

I also didn't exactly follow the locking, I don't need it as far as I know.

- George



[GENERAL] SELECT INTO using Views?

2007-01-08 Thread Jeanna Geier
Hello List!

I have a question regarding SELECT  INTO...

Can it be used with Views?  I have a View that is populated (~35,000 rows)
from which I want to create a Table.

So, would I be able to copy the data from the View to the Table using the
SELECT INTO command?

Thanks much,
-Jeanna




Re: [GENERAL] Database versus filesystem for storing images

2007-01-08 Thread Scott Ribe
 My point is: if I need to be 100% sure that what is referenced on the database
 is accessible all the time when the reference is, then I need to have this on
 the database

Not necessarily. It does take carefully controlling access, with a good deal
of thought and error-checking on the part of the code that has write access
to the files, but it can certainly be done.

 But if there are other people touching
 things -- moving / creating / deleting / changing files and directories --
 then things get more complicated to manage.

Absolutely. But allowing the kinds of tools mentioned earlier for examining
files does not require giving anyone write access ;-)


-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice





[GENERAL] Questions about horizontal partitioning

2007-01-08 Thread John Sales
Suppose I have a table - let's say it was developed by someone with little or no
understanding of database design - and it has 230 columns.

Now, it turns out that 99% of the time only about 8 columns are required, but
all 230 columns are populated.  However, legacy applications (which are run
nightly for batch processing, but not during the day, when heavy volume occurs)
require the old table design.  New applications only select a limited number of
columns, but require the old design names.

I want to do the following:

1.  Split the table into two tables (one with 6 columns, the other with 224 
columns), using the primary key to establish a 1-1 relationship between them.
2.  Create a view that displays the join between the two tables.

By doing this, I'm hoping that the query optimizer is smart enough to see that,
if a query comes in and requests only the six columns (those in the narrower
table), PostgreSQL won't have to load the wider table into the buffer pool, and
will thereby only have to access about 10% of the amount of disk that it
presently does.

Is this a sound theory?  Is the query optimizer smart enough to ignore part of a
join when that portion of the join will have no effect on the end result set?
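
For illustration, the split would look roughly like this (column names are
made up):

  -- the handful of frequently used columns
  CREATE TABLE t_narrow (
      id integer PRIMARY KEY,
      col_a text, col_b text, col_c text, col_d text, col_e text
  );

  -- the remaining ~224 rarely used columns
  CREATE TABLE t_wide (
      id integer PRIMARY KEY REFERENCES t_narrow (id)
      -- , col_f text, col_g text, ...
  );

  -- legacy applications keep the old 230-column shape through a view
  CREATE VIEW t_legacy AS
  SELECT * FROM t_narrow JOIN t_wide USING (id);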

thanks



Re: [GENERAL] Autovacuum Improvements

2007-01-08 Thread Joris Dobbelsteen
Why not collect some information from live databases and perform some
analysis on it?

Possible values required for (to be defined) vacuum heuristic,
Human classification of tables,
Updates/Transactions done (per table/db),
Growth of tables and indexes,
(all with respect to time I believe)

Collecting real data has worked for me in the past way better than
arguing about potential situations. Now you can let your model go wild
and see if it does what you think should happen (obviously comparing
with the current autovacuum implementation).

What statistics would you need?
Who is capable and willing to capture it?
Who is willing to do some analysis...

Just my EUR 0.02...

- Joris

-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Browne
Sent: maandag 8 januari 2007 22:30
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Autovacuum Improvements

[EMAIL PROTECTED] (Csaba Nagy) writes:
 On Sun, 2006-12-24 at 03:03, Christopher Browne wrote:
 [snip]
 Seems to me that you could get ~80% of the way by having 
the simplest
 2 queue implementation, where tables with size  some 
threshold get 
 thrown at the little table queue, and tables above that 
size go to 
 the big table queue.

 That would most definitely not cut it for me, I have more than 2 
 categories of tables:

  - a few small but very often updated/inserted/deleted table: these 
 must be continuously vacuumed, your little queue is not 
good enough 
 for that, as even the round trip between the small tables could lead 
 to bloat on them;

I disagree; if we added more work processes, that could eat 
quickly through the short end of the queue.

  - a few small and moderately updated, that could live with the 
 little queue;

  - a few big and frequently updated, but which only have a small 
 percentage of rows actively updated at any time: those could 
live with 
 the big queue;

  - the rest which are rarely updated, I would put those in a 
separate 
 queue so they won't affect the rest, cause vacuuming them is really 
 mostly not critical;

 The point is that I'm not sure there couldn't be even more 
reasons to 
 split the tables in even more queues based on the importance of 
 vacuuming them combined with update rate and their size. If 
I can set 
 up my own queues I can experiment with what works best for me... for 
 the base setup you could set up some default queues. I wonder though 
 how would you handle dynamics of tables, I mean when will a small 
 table which grows start to be considered a big table for the purpose 
 of putting it in one queue or the other ? I guess it would 
be done on 
 analyzing the table, which is also handled by autovacuum, so tables 
 with no vacuum queue settings could go to one of the 2 
default queues 
 you mention.

The heuristic I was thinking of didn't involve having two 
queues, but rather just 1.  By having some size information, 
work processes could eat at the queue from both ends.

If you have cases where tables need to be vacuumed *really* 
frequently, then you make sure that they are being injected 
frequently, and that some of the workers are tied to Just 
Doing Small Tables.

I think that *does* cover your scenario quite adequately, and 
without having to get into having a bunch of queues.

The heuristic is incomplete in one other fashion, namely that 
it doesn't guarantee that tables in the middle will ever get 
gotten to.
That mandates having a third policy, namely to have a worker 
that goes through tables in the (singular) queue some form of 
chronological order.

 That should keep any small tables from getting vacuum-starved.
 
 I'd think the next step would be to increase the number of queues, 
 perhaps in a time-based fashion.  There might be times when it's 
 acceptable to vacuum 5 tables at once, so you burn thru 
little tables 
 like the blazes, and handle larger ones fairly promptly.  
And other 
 times when you don't want to do *any* big tables, and limit 
a single 
 queue to just the itty bitty ones.

 This is all nice and it would be cool if you could set it up per 
 vacuum queue. I mean how much more effort would be to allow vacuum 
 queues with generic settings like time windows with max number of 
 threads for each window, and let the user explicitly assign 
tables to 
 those queues, instead of hard coding the queues and their 
settings and 
 assign tables to them based on size or any other heuristics ?

 For the average application which needs simple settings, there could 
 be a default setup with the 2 queues you mention. If it would be 
 possible to set up some rules to assign tables to queues based on 
 their properties on analyze time, instead of explicitly assigning to 
 one queue or other, that would be nice too, and then you can 
 completely cover the default setup with those settings, and 
allow for 
 more complex setups for those who need it.

My thinking has headed more towards simplifying this; two 
queues seems to be one too 

Re: [GENERAL] SELECT INTO using Views?

2007-01-08 Thread Merlin Moncure

On 1/9/07, Jeanna Geier [EMAIL PROTECTED] wrote:

Hello List!

I have a question regarding SELECT  INTO...

Can it be used with Views?  I have a View that is populated (~35,000 rows)
that I want to create a Table from the data in it

So, would I be able to copy the data from the View to the Table using the
SELECT INTO command?


Administrator=# create temp view v as select 'postgresql r0x0r'::text;
CREATE VIEW
Administrator=# create temp table t as select * from v;
SELECT


merlin



[GENERAL] COPY FROM and sequences

2007-01-08 Thread Matthew Terenzio
Is it true that you can't use COPY FROM to fill a table with a SERIAL 
type column?


Or rather, how does one approach that situation most effectively?




Re: [GENERAL] SELECT INTO using Views?

2007-01-08 Thread Guy Rouillier

Jeanna Geier wrote:

Hello List!

I have a question regarding SELECT  INTO...

Can it be used with Views?  I have a View that is populated (~35,000 rows)
that I want to create a Table from the data in it

So, would I be able to copy the data from the View to the Table using the
SELECT INTO command?


SELECT INTO is used in procedural language functions to place column
values into program variables.  To copy the contents of a view into a
table, look instead at CREATE TABLE AS or INSERT with SELECT.
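
For example (view and table names are made up):

  -- create a new table from the view's current contents
  CREATE TABLE my_table AS SELECT * FROM my_view;

  -- or append the view's rows to a table that already exists
  INSERT INTO my_table SELECT * FROM my_view;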

--
Guy Rouillier





Re: [GENERAL] COPY FROM and sequences

2007-01-08 Thread Matthew Terenzio


On Jan 8, 2007, at 8:55 PM, Michael Glaesemann wrote:



On Jan 8, 2007, at 19:20 , Matthew Terenzio wrote:

Is it true that you can't use COPY FROM to fill a table with a SERIAL 
type column?


Or rather, how does one approach that situation most effectively?


Could you give an example of what you're trying to do? What errors are 
you getting?


Just trying to fill a large number of rows in a table that uses a 
serial for the primary key, from a comma delimited file


 invalid input syntax for integer: 

I'm not quoting anything, just leaving the delimited field empty and
expected the sequence would fill the column as it would in an insert.







Re: [GENERAL] COPY FROM and sequences

2007-01-08 Thread Matthew Terenzio


On Jan 8, 2007, at 9:13 PM, Matthew Terenzio wrote:



On Jan 8, 2007, at 8:55 PM, Michael Glaesemann wrote:



On Jan 8, 2007, at 19:20 , Matthew Terenzio wrote:

Is it true that you can't use COPY FROM to fill a table with a 
SERIAL type column?


Or rather, how does one approach that situation most effectively?


Could you give an example of what you're trying to do? What errors 
are you getting?


Just trying to fill a large number of rows in a table that uses a 
serial for the primary key, from a comma delimited file


 invalid input syntax for integer: 

I'm not quoting anything, just leaving the the delimited field empty 
and expected a seuqence would fill the column as it would in an insert


Looks like WITH NULL AS '' might work










Re: [GENERAL] COPY FROM and sequences

2007-01-08 Thread Matthew Terenzio


On Jan 8, 2007, at 9:20 PM, Matthew Terenzio wrote:



On Jan 8, 2007, at 9:13 PM, Matthew Terenzio wrote:



On Jan 8, 2007, at 8:55 PM, Michael Glaesemann wrote:



On Jan 8, 2007, at 19:20 , Matthew Terenzio wrote:

Is it true that you can't use COPY FROM to fill a table with a 
SERIAL type column?


Or rather, how does one approach that situation most effectively?


Could you give an example of what you're trying to do? What errors 
are you getting?


Just trying to fill a large number of rows in a table that uses a 
serial for the primary key, from a comma delimited file


 invalid input syntax for integer: 

I'm not quoting anything, just leaving the the delimited field empty 
and expected a seuqence would fill the column as it would in an 
insert


Looks like WITH NULL AS '' might work


I see! You've just got to specify the columns in the COPY FROM
statement, and it will use the defaults for those columns not listed.
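
Something along these lines (table and file are made up):

  CREATE TABLE items (id serial PRIMARY KEY, name text, qty integer);

  -- id is left out of the column list, so each row gets the next
  -- value from the sequence via the column default
  COPY items (name, qty) FROM '/path/to/items.csv' WITH CSV;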















Re: [GENERAL] Questions about horizontal partitioning

2007-01-08 Thread Tom Lane
John Sales [EMAIL PROTECTED] writes:
 By doing this, I'm hoping that the query optimizer is smart enough to see 
 that if a query comes in and requests only the six columns (that are in the 
 narrower table) that PostgreSQL won't have to load the wider table into the 
 buffer pool, and thereby actually have to only access about 10% the amount of 
 disk that it presently does.

 Is this a sound theory?

No.  It still has to touch the second table to confirm the existence of
rows to join to.

regards, tom lane



Re: [GENERAL] Autovacuum Improvements

2007-01-08 Thread Alvaro Herrera
Joris Dobbelsteen wrote:
 Why not collect some information from live databases and perform some
 analysis on it?

We already capture and utilize operational data from databases: for each
table, how many tuples there are, how many tuples have been inserted,
deleted, updated.  We already analyse it to determine if a table needs
vacuuming or not.

But that data alone is not sufficient.  You (the DBA) have to provide
the system with timing information (e.g., at what time is it appropriate
to vacuum huge tables).  We also need to extend the system to be able to
vacuum multiple tables in parallel, but the DBA needs to give some
constraints: for example that you don't want more than 3 vacuum
processes running at any time.

Capturing data about someone's database is not going to help someone
else's vacuuming strategy, because their usage patterns are potentially
so different; and there are as many usage patterns as Imelda Marcos had
shoes (well, maybe not that many), so any strategy that considers only
two particular pairs of shoes is not going to fly.  We need to provide
enough configurability to allow DBAs to make the vacuumer fit their
situation.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Postgres Differential backup

2007-01-08 Thread Bruce Momjian
Bill Moran wrote:
 In response to Vijayaram Boopalan - TLS , Chennai [EMAIL PROTECTED]:
 
  I do not kb=3Dnow how to take differential=
  database backup in postgre 8.1.5.
 
 What on Earth is wrong with your MUA?

For some reason I thought that was hilarious.  :-)

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


