Re: [GENERAL] PQntuples return type

2007-09-26 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> This is silly.  Have you forgotten that the max number of columns is
> >> constrained to 1600 on the backend side?
> 
> > Uh, this is the number of returned rows, right?  How does this relate to
> > columns?
> 
> Duh, brain fade on my part, sorry.  Still, I suspect the return type of
> PQntuples would be the very least of the changes we'd need to make to
> support resultsets > 2G rows.  And I would not advise changing it to an
> unsigned type, since that could cause hard-to-find breakage in
> comparison logic in applications.

Uh, yea.  Not sure if anyone has ever tried.  At least we have a marker
in the docs now that it might be a problem.  ;-)

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] df output change (in psql) after 8.2 version

2007-09-26 Thread சிவகுமார் மா
After upgrading to version 8.2.4 of PostgreSQL (Suse Linux, compiled from
source), the function display in psql has changed.

In 8.1, using the \df+ command we get the function description as
entered while creating it. In 8.2 this seems to have changed: there
are additional characters and whitespace added to the function
description.

Switching to unaligned output format (using \a) gives the same result
as in 8.1. Is there a way to restore the earlier behaviour without
using unaligned output format?

Best regards,

Ma Sivakumar

-- 
மா சிவகுமார்
Let everyone get everything
http://masivakumar.blogspot.com


Re: [GENERAL] pg_dump (8.1.9) does not output copy statements

2007-09-26 Thread Tom Lane
Jan de Visser <[EMAIL PROTECTED]> writes:
> In my world two identical pilot errors within a short timeframe are indicative
> of a bad interface.

Yeah, it's inconsistent.  How many people's dump scripts do you want to
break to make it more consistent?

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Arabic Language

2007-09-26 Thread Pavel Stehule
> Dear Sir/Madam,
>
> We have developed an accounting solution using Linux and Postgresql as an
> open source database. Storing data in English is not an issue but we need
> your assistance to guide us on how we can store multi-language
> English/Arabic characters using Postgresql.
>

Hello

simply, use UTF8 encoding:

createdb -E UTF8 yourdatabase
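
A minimal end-to-end sketch (table name and sample data are invented for illustration):

CREATE DATABASE yourdatabase ENCODING 'UTF8';   -- SQL equivalent of the createdb call above
-- then, connected to yourdatabase:
SET client_encoding TO 'UTF8';                  -- make sure the client side talks UTF-8 too
CREATE TABLE accounts (id serial, name text);
INSERT INTO accounts (name) VALUES ('Sales');
INSERT INTO accounts (name) VALUES ('حساب المبيعات');   -- an Arabic sample value
SELECT id, name FROM accounts;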

Regards
Pavel Stehule

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] datestyle question

2007-09-26 Thread Erik Jones

On Sep 26, 2007, at 5:24 PM, Scott Marlowe wrote:


On 9/26/07, Diego Gil <[EMAIL PROTECTED]> wrote:

Hi,

I have a file to import to postgresql that has an unusual date format.
For example, Jan 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that a 20072002 (YYYYMMDD) is ok, but I don't know how
to handle the DDMMYYYY dates.

I tried and tried but I can't import those dates to postgresql.

Any hint, other than editing file ?


There are two approaches.  One is to use something like sed or awk or
perl or php to read the file and rearrange those bits to a format that
makes sense to pgsql, or you can import that field into a text field,
and use something like substring() in postgresql to update a new field
that holds dates with the right numbers.


You know, this type of request is fairly common and has got me  
thinking.  If postgres had some kind of identity function a useful  
extension to the COPY syntax would be to allow the user to specify  
functions for each column that the imported data would be passed  
through.


So, say you had the following table:

CREATE TABLE test (
test_id  serial primary key,
test_val text,
test_date timestamp);

The COPY could be something like (with id being a built-in identity
function):


COPY test (test_val, test_date) VALUES (id, regexp_replace(id, '(..)(..)(....)', '\\3-\\2-\\1')) FROM '/somepath/somefile.csv' CSV;


Alternatively, if the usage of id is obtuse, the particular field  
name could be used but I think that would probably work a little  
differently on the backend although not being involved with the  
backend I'm no expert.


Just a random idea anyway.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] pg_dump (8.1.9) does not output copy statements

2007-09-26 Thread Jan de Visser
On Wednesday 26 September 2007 20:24:12 Tom Lane wrote:
> "Matthew Dennis" <[EMAIL PROTECTED]> writes:
> > Maybe I'm just missing something but I can't seem to get pg_dump to
> > output copy statements.  Regardless of the -d / --inserts flag it always
> > outputs insert statements.
>
> I'm betting this is the same type of pilot error discussed earlier
> today:
> http://archives.postgresql.org/pgsql-general/2007-09/msg01230.php
>

In my world two identical pilot errors within a short timeframe are indicative 
of a bad interface.

And yes, I've done this too (specified -d on a pg_dump command line, that is).

jan

-- 
--
Jan de Visser                     [EMAIL PROTECTED]

                Baruk Khazad! Khazad ai-menu!
--

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] RETURN NEXT on result set

2007-09-26 Thread Scott Ribe
> There is a new RETURN QUERY in 8.3 that may be what you want.

Sounds good.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_dump (8.1.9) does not output copy statements

2007-09-26 Thread Alvaro Herrera
Matthew Dennis wrote:
> Maybe I'm just missing something but I can't seem to get pg_dump to output
> copy statements.  Regardless of the -d / --inserts flag it always outputs
> insert statements.  The doc says that pg_dump will output copy statements by
> default and will only output insert statements with the -d / --inserts flag
> set.  I can't seem to find an option to explicitly state that copy should be
> used instead of insert.  I'm using 8.1.9

Let's see your command line.

-- 
Alvaro Herrera            http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_dump (8.1.9) does not output copy statements

2007-09-26 Thread Tom Lane
"Matthew Dennis" <[EMAIL PROTECTED]> writes:
> Maybe I'm just missing something but I can't seem to get pg_dump to output
> copy statements.  Regardless of the -d / --inserts flag it always outputs
> insert statements.

I'm betting this is the same type of pilot error discussed earlier
today:
http://archives.postgresql.org/pgsql-general/2007-09/msg01230.php

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] pg_dump (8.1.9) does not output copy statements

2007-09-26 Thread Matthew Dennis
Maybe I'm just missing something but I can't seem to get pg_dump to output
copy statements.  Regardless of the -d / --inserts flag it always outputs
insert statements.  The doc says that pg_dump will output copy statements by
default and will only output insert statements with the -d / --inserts flag
set.  I can't seem to find an option to explicitly state that copy should be
used instead of insert.  I'm using 8.1.9


Re: [GENERAL] PQntuples return type

2007-09-26 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> This is silly.  Have you forgotten that the max number of columns is
>> constrained to 1600 on the backend side?

> Uh, this is the number of returned rows, right?  How does this relate to
> columns?

Duh, brain fade on my part, sorry.  Still, I suspect the return type of
PQntuples would be the very least of the changes we'd need to make to
support resultsets > 2G rows.  And I would not advise changing it to an
unsigned type, since that could cause hard-to-find breakage in
comparison logic in applications.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] datestyle question

2007-09-26 Thread Diego Gil
On Wed, 2007-09-26 at 17:24 -0500, Scott Marlowe wrote:
> On 9/26/07, Diego Gil <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I have a file to import to postgresql that has an unusual date format.
> > For example, Jan 20 2007 is 20022007, in DDMMYYYY format, without any
> > separator. I know that a 20072002 (YYYYMMDD) is ok, but I don't know how
> > to handle the DDMMYYYY dates.
> >
> > I tried and tried but I can't import those dates to postgresql.
> >
> > Any hint, other than editing file ?
> 
> There are two approaches.  One is to use something like sed or awk or
> perl or php to read the file and rearrange those bits to a format that
> makes sense to pgsql, or you can import that field into a text field,
> and use something like substring() in postgresql to update a new field
> that holds dates with the right numbers.

That is what I did on a previous file some time ago. Having several
date fields now, I was trying to simplify the task, if possible. But it
seems I will have no luck!

I will explore a little what Erik Jones suggested: inserting dashes with
awk.

Thanks,
Diego.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] datestyle question

2007-09-26 Thread Diego Gil
On Wed, 2007-09-26 at 17:22 -0500, Erik Jones wrote:
> On Sep 26, 2007, at 3:42 PM, Diego Gil wrote:
> 
> > Hi,
> >
> > I have a file to import to postgresql that has an unusual date format.
> > For example, Jan 20 2007 is 20022007, in DDMMYYYY format, without any
> > separator. I know that a 20072002 (YYYYMMDD) is ok, but I don't know how
> > to handle the DDMMYYYY dates.
> >
> > I tried and tried but I can't import those dates to postgresql.
> >
> > Any hint, other than editing file ?
> >
> > Regards,
> > Diego.
> 
> Check out the following link that explains how Postgres parses date  
> inputs: http://www.postgresql.org/docs/8.2/interactive/x71187.html,  
> particularly section 1.c
> 
> The simplest thing I can think of in your case would be to do a  
> little bit of text processing on that field before inserting it.  If  
> you simply insert dashes between the different fields so that you  
> have DD-MM-YYYY, then you can do
> 
> SET DateStyle TO 'DMY';
> 
> and then your copy should be ok.
> 
> Erik Jones
> 

Thanks Erik. I was trying to avoid this, mainly because I will have to
import several different files. But having no other option, I will
start refreshing my awk knowledge.

Regards,
Diego.




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] RETURN NEXT on result set

2007-09-26 Thread Alvaro Herrera
Scott Ribe wrote:
> Feature request: allow some way to "return next" a set of values. Usage:
> recursive stored procedures to walk a tree. Example: given a table, find all
> tables that inherit from it.

There is a new RETURN QUERY in 8.3 that may be what you want.
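
For illustration, a sketch (untested) of a recursive set-returning function using it, based on the pg_inherits walk from the original request:

create or replace function "DbGetDescendantTables" (oid) returns setof oid
as $$
declare
    parentid alias for $1;
    childid oid;
begin
    return next parentid;
    for childid in select inhrelid from pg_inherits where inhparent = parentid loop
        -- RETURN QUERY appends the whole result set of the recursive call
        return query select * from "DbGetDescendantTables"(childid);
    end loop;
end;
$$ language plpgsql;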


-- 
Alvaro Herrera            http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] decode access privileges

2007-09-26 Thread John Smith
On 9/26/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
>
> On Sep 26, 2007, at 14:51 , John Smith wrote:
>
> > what does this mean?
> >
> > {postgres=arwdRxt/postgres,username=r/postgres}
>
> http://www.postgresql.org/docs/8.2/interactive/sql-grant.html

...purrrfect! thanks michael. i got there by "\dp"

a sorta follow-up:
"\dp" gets access privileges for objects inside the public schema.
how'd i find access privileges for a private schema (not objects
inside that private schema- "\dp schemaname." gets that fine)?

just did a "grant on..." a private schema and wanna see it. something
like "\z schemaname"?
cheers, jzs

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] RETURN NEXT on result set

2007-09-26 Thread Scott Ribe
Feature request: allow some way to "return next" a set of values. Usage:
recursive stored procedures to walk a tree. Example: given a table, find all
tables that inherit from it.

Right now, as far as I can tell, that requires a little extra effort to merge
the results from different levels of recursion:


create or replace function "DbGetDescendantTables" (oid) returns setof oid
as $$
declare
    parentid alias for $1;
    curid1 oid;
    curid2 oid;
    c1 refcursor;
    c2 refcursor;
begin
    -- emit the starting table, then walk pg_inherits recursively,
    -- forwarding every row of each recursive call one at a time
    return next parentid;
    open c1 for select inhrelid from pg_inherits where inhparent = parentid;
    loop
        fetch c1 into curid1;
        if found then
            open c2 for select * from "DbGetDescendantTables"(curid1);
            loop
                fetch c2 into curid2;
                if found then
                    return next curid2;
                else
                    exit;
                end if;
            end loop;
            close c2;
        else
            exit;
        end if;
    end loop;
    close c1;
end;
$$ language 'plpgsql';


But if a query result could directly be added to the result set being
accumulated, this would become:


create or replace function "DbGetDescendantTables" (oid) returns setof oid
as $$
declare
    parentid alias for $1;
    curid1 oid;
    c1 refcursor;
begin
    return next parentid;
    open c1 for select inhrelid from pg_inherits where inhparent = parentid;
    loop
        fetch c1 into curid1;
        if found then
            -- hypothetical: return the whole recursive result set at once
            return next (select * from "DbGetDescendantTables"(curid1));
        else
            exit;
        end if;
    end loop;
    close c1;
end;
$$ language 'plpgsql';


Sure, some of this could be avoided by accumulating and returning an array,
but in my case it's convenient for the procedures to produce result sets.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Arabic Language

2007-09-26 Thread TUC Solutions
Dear Sir/Madam,

 

We have developed an accounting solution using Linux and Postgresql as an
open source database. Storing data in English is not an issue but we need
your assistance to guide us on how we can store multi-language
English/Arabic characters using Postgresql.

 

Your assistance is highly appreciated.

 

Samir Faisal

G.M.

T.U.C.

Mobile:+ 966569879720

Tel:+ 966222841167

Fax: + 96626658715 ext 108

email: [EMAIL PROTECTED] 

www.tucsolutions.com

 



Re: [GENERAL] datestyle question

2007-09-26 Thread Scott Marlowe
On 9/26/07, Diego Gil <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have a file to import to postgresql that has an unusual date format.
> For example, Jan 20 2007 is 20022007, in DDMMYYYY format, without any
> separator. I know that a 20072002 (YYYYMMDD) is ok, but I don't know how
> to handle the DDMMYYYY dates.
>
> I tried and tried but I can't import those dates to postgresql.
>
> Any hint, other than editing file ?

There are two approaches.  One is to use something like sed or awk or
perl or php to read the file and rearrange those bits to a format that
makes sense to pgsql, or you can import that field into a text field,
and use something like substring() in postgresql to update a new field
that holds dates with the right numbers.
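
A minimal sketch of the second approach (table, column and file names are invented), using to_date() with an explicit format string to do the rearranging instead of substring():

CREATE TABLE import_raw (raw_date text);
COPY import_raw (raw_date) FROM '/somepath/somefile.csv' CSV;

ALTER TABLE import_raw ADD COLUMN real_date date;
UPDATE import_raw SET real_date = to_date(raw_date, 'DDMMYYYY');
-- to_date('20012007', 'DDMMYYYY') gives 2007-01-20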

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] datestyle question

2007-09-26 Thread Erik Jones

On Sep 26, 2007, at 3:42 PM, Diego Gil wrote:


Hi,

I have a file to import to postgresql that has an unusual date format.
For example, Jan 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that a 20072002 (YYYYMMDD) is ok, but I don't know how
to handle the DDMMYYYY dates.

I tried and tried but I can't import those dates to postgresql.

Any hint, other than editing file ?

Regards,
Diego.


Check out the following link that explains how Postgres parses date  
inputs: http://www.postgresql.org/docs/8.2/interactive/x71187.html,  
particularly section 1.c


The simplest thing I can think of in your case would be to do a  
little bit of text processing on that field before inserting it.  If  
you simply insert dashes between the different fields so that you  
have DD-MM-YYYY, then you can do


SET DateStyle TO 'DMY';

and then your copy should be ok.
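
A hedged sketch of that sequence, reusing the test table defined elsewhere in this thread and an invented file path (it assumes the dashes have already been inserted, e.g. 20-02-2007):

SET datestyle TO 'DMY';
COPY test (test_val, test_date) FROM '/somepath/somefile_dashed.csv' CSV;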

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PQntuples return type

2007-09-26 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Greg Sabino Mullane wrote:
> >> There may be some other safeguards in place I did not see to prevent this, 
> >> but I don't see a reason why we shouldn't use unsigned int or 
> >> unsigned long int here, both for ntups and the return value of the 
> >> function.
> 
> > On second thought, I have at least updated the function documentation:
> 
> >Returns the number of rows (tuples) in the query result.  Because
> >it returns an integer result, large result sets might overflow the
> >return value on 32-bit operating systems.
> 
> This is silly.  Have you forgotten that the max number of columns is
> constrained to 1600 on the backend side?

Uh, this is the number of returned rows, right?  How does this relate to
columns?

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] datestyle question

2007-09-26 Thread Diego Gil
Hi,

I have a file to import to postgresql that has an unusual date format.
For example, Jan 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that a 20072002 (YYYYMMDD) is ok, but I don't know how
to handle the DDMMYYYY dates.

I tried and tried but I can't import those dates to postgresql. 

Any hint, other than editing file ? 

Regards,
Diego.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Memory usage of COPY command

2007-09-26 Thread Keaton Adams
When loading (inserting) data into a table with COPY I have read in the
documentation that rows are appended to the end of the table instead of
being added to existing table pages, so I'm wondering about memory
utilization.  Our application uses a number of COPY statements in
parallel, so COPY performance is key.  Does COPY use the shared buffer
pool, or does it allocate its own block of memory to bulk-load the data
into?

 

Would tuning shared_buffers in postgresql.conf have an effect on COPY
performance, or is the buffer-pool bypassed altogether on a load?

 

Thanks,

 

Keaton

 



Re: [GENERAL] Autostart PostgreSQL in Ubuntu

2007-09-26 Thread Martijn van Oosterhout
On Wed, Sep 26, 2007 at 10:05:21PM +0200, Peter Eisentraut wrote:
> > I tried to change the location of the PID target directory in 
> > postgresql.conf, but then clients like psql still try to find the PID
> > file in /var/run/ postgresql and fail.
> 
> You must be mistaken about this.  psql shouldn't have a reason to read 
> the server's PID file.

Sounds to me like he didn't actually uninstall the Debian postgresql
installation, which would leave a whole bunch of scripts lying around doing
all sorts of interesting things...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] decode access privileges

2007-09-26 Thread Michael Glaesemann


On Sep 26, 2007, at 14:51 , John Smith wrote:


what does this mean?

{postgres=arwdRxt/postgres,username=r/postgres}


http://www.postgresql.org/docs/8.2/interactive/sql-grant.html

If you provide a bit more information (such as where and how you see  
this information), you might assist those trying to help. I ended up  
looking through the system catalogs, in particular pg_database, which  
pointed me to the GRANT and REVOKE documentation. Not the quickest  
route, but I got there. Hope this helps.
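
For reference, a small sketch of where such a string shows up and how the letters decode on 8.2 (table name is hypothetical):

SELECT relname, relacl FROM pg_class WHERE relname = 'yourtable';
-- {postgres=arwdRxt/postgres,username=r/postgres}
--   a = INSERT, r = SELECT, w = UPDATE, d = DELETE,
--   R = RULE, x = REFERENCES, t = TRIGGER
-- i.e. postgres holds all table privileges, username may only SELECT,
-- and the name after the slash is the role that granted them.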


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] decode access privileges

2007-09-26 Thread Richard Broersma Jr
--- John Smith <[EMAIL PROTECTED]> wrote:
> what does this mean?
> {postgres=arwdRxt/postgres,username=r/postgres}

This link describes each of the letters:
http://www.tldp.org/LDP/intro-linux/html/sect_03_04.html

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Autostart PostgreSQL in Ubuntu

2007-09-26 Thread Peter Eisentraut
Johann Maar wrote:
> But if I try to start PostgreSQL by running "sudo /etc/init.d/
> postgresql start" it will fail because it tries to write a PID file
> to "/var/run/postgresql" which does not exist. If I create this
> directory and set the permissions for postgres to write it works (!),
> but after the next restart of the machine the directory is already
> gone.

/var/run/ might be on a temporary file system.  So you need to adjust 
your init script to create that directory if it doesn't exist.

> I tried to change the location of the PID target directory in 
> postgresql.conf, but then clients like psql still try to find the PID
> file in /var/run/ postgresql and fail.

You must be mistaken about this.  psql shouldn't have a reason to read 
the server's PID file.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] decode access privileges

2007-09-26 Thread John Smith
what does this mean?

{postgres=arwdRxt/postgres,username=r/postgres}

cheers, jzs

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] DAGs and recursive queries

2007-09-26 Thread Oleg Bartunov

take a look at contrib/ltree

On Wed, 26 Sep 2007, paul.dorman wrote:


Hi everyone,

I would like to know the best way to implement a DAG in PostgreSQL. I
understand there has been some talk of recursive queries, and I'm
wondering if there has been much progress on this.

Are there any complete examples of DAGs which work with PostgreSQL? I
would like to be able to do the following operations for a
categorization system:

1. Given a node, get one or more field values out of every parent node
2. Given a parent node, get one or more field values out of every
child node
3. Given two or more parent nodes, identify any common children.

I do not need to determine shortest paths between parents and
children, only to be able to iterate over them as efficiently as
possible.

I'd like to keep things dynamic so changes up the hierarchy don't
require changes to any of the children (unless their direct parents
are changed). I'd also like to keep as much processing as possible in
the database to minimize the traffic between my application and the
DB, so I think I'm looking for SQL and stored procedure solutions.

Any pointers would be great, as I'm not a DBA and do not have the
experience to make judgments about the best possible approach.

Regards,
Paul Dorman


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Help tuning a large table off disk and into RAM

2007-09-26 Thread Scott Marlowe
On 9/26/07, James Williams <[EMAIL PROTECTED]> wrote:

> The last is based mostly on the observation that another tiddly
> unrelated mysql db which normally runs fast, grinds to a halt when
> we're querying the postgres db (and cpu, memory appear to have spare
> capacity).

Just a quick observation here.

When the drives are thrashing, is it straight db-storage thrashing, or
is the system swapping out a lot?

Run vmstat 10 while running this query when this happens and pay
attention to the bi/bo and si/so columns.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Autostart PostgreSQL in Ubuntu

2007-09-26 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Carlos Moreno wrote:
> Johann Maar wrote:
>> But if I try to start PostgreSQL by running "sudo /etc/init.d/
>> postgresql start" it will fail because it tries to write a PID file to
>> "/var/run/postgresql" which does not exist. If I create this directory
>> and set the permissions for postgres to write it works (!), but after
>> the next restart of the machine the directory is already gone. 
> 
> With Red Hat systems, you would do chkconfig postgresql on  if you
> installed
> the postgresql that they distribute.
> 
> On Ubuntu  (and I imagine with all Debian-based systems), AFAIK you have
> to manually adjust the init scripts for the runlevels that you want. 
> Assuming

You can use update-rc.d.

Joshua D. Drake
- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFG+pxmATb/zqfZUUQRAmHeAJ9DhNAHqrM+lYeCwYeSH+WOfVkHDgCdGlyN
/mKS9XFfGJTM2HkctVTURsM=
=hBY/
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] More on migragting the server.

2007-09-26 Thread Tom Lane
David Siebert <[EMAIL PROTECTED]> writes:
> I set up a test server using the latest 8.2 as suggest by the list and
> did pg_dump of the old data base.
> I created a new empty database with the same name an created a user with
> the same name as was on the old server.
> I then tried to do a restore using webmin just as a test and got errors.

Hm, your old version was 7.1 right?  That predates the availability of
pg_depend information, which is what pg_dump must have to ensure that
it dumps objects in an order that has no forward references.  When
dumping from such an old server, modern pg_dump versions will use some
ordering heuristics that sort of mostly work, but a few problems are
to be expected.  In this example, for instance, it seems to have dumped
function pgadmin_get_rows(oid) before table pgadmin_table_cache, which
doesn't work.

What you'll need to do to get this reloaded is to manually modify the
load order.  pg_restore has some options that help you do that ---
basically you get a listing of the TOC (table of contents) of the
archive file, and then manually rearrange that listing, and then tell
pg_restore to restore in the manually specified order.  It will probably
take a bit of trial and error before you get it right, so I'd suggest
using pg_restore's -s option to not bother trying to load data until
you have a working load order.
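
A hedged sketch of that workflow (archive, list and database names are invented):

pg_restore -l olddb.dump > olddb.list
# edit olddb.list: move entries so that, e.g., the pgadmin_table_cache type
# comes before the pgadmin_get_rows(oid) function that returns it
pg_restore -s -L olddb.list -d newdb olddb.dump    # schema only, for trial runs
pg_restore -L olddb.list -d newdb olddb.dump       # full restore once the order works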

Also, it might not be a bad idea to just omit the old pgAdmin objects
from the reload altogether, because they aren't going to be helpful
anyway for a modern pgAdmin.  Leave them out and then install a modern
pgAdmin release after you've successfully loaded your own stuff.
(The same goes for any other contrib or third-party stuff you might have
in there --- 7.1-era code is going to need replaced.)

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Help tuning a large table off disk and into RAM

2007-09-26 Thread Greg Smith

On Wed, 26 Sep 2007, James Williams wrote:


The box has 4 x Opterons, 4Gb RAM & five 15k rpm disks, RAID 5.  We
wanted fast query/lookup.  We know we can get fast disk IO.


You might want to benchmark to prove that if you haven't already.  You 
would not be the first person to presume you have fast disk I/O on RAID 5 
only to discover that's not actually true when tested. 
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm gives 
some details here.



shared_buffers  = 128MB
temp_buffers= 160MB
work_mem= 200MB
max_stack_depth = 7MB


The one you're missing is effective_cache_size, and I'd expect you'd need 
to more than double shared_buffers to have that impact things given what 
you've described of your tasks.  Take a look at 
http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm to get a 
better idea the right range for those two you should be considering; 128MB 
for shared_buffers is way low for your system, something >1GB is probably 
right, and effective_cache_size should probably be in the multiple GB 
range.


If you actually want to see what's inside the shared_buffers memory, take 
a look at the contrib/pg_buffercache module.  Installing that for your 
database will let you see how the memory is being used, to get a better 
idea how much of your indexes are staying in that part of memory.  The 
hint you already got from Bill Moran about using pg_relation_size() will 
give you some basis for figuring out what % of the index is being held 
there.
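
As a sketch, the kind of query pg_buffercache allows (run as a superuser in the database being inspected; the table name in the size check is hypothetical):

-- how many 8kB buffers each relation currently holds in shared_buffers
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

SELECT pg_relation_size('your_big_table');   -- on-disk size to compare against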


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Duplicate public schema and user tables

2007-09-26 Thread Tom Lane
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> On Wed, Sep 26, 2007 at 10:51:43AM +0200, Romain Roure wrote:
>> After checking through the logs, it doesn't appear to be a problem 
>> resulting from wrap-around OID's. Though the logs mention 
>> transaction-wraparound may have happened.

> Please show us the xmin,xmax columns of the pg_class table. But if
> you've wrapped around so far as to see old rows, then you passed the
> wraparound horizon 2 billion transactions ago. Please show us exactly
> what the logs say:

If there are complaints like that in the logs, it seems hugely
optimistic to suppose that you don't have a wraparound problem ...

If it is wraparound, it's possible that a VACUUM on pg_class would fix
this.  It will certainly not do any harm to try it.
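
A small sketch of both checks (substitute one of the duplicated table names):

SELECT oid, xmin, xmax, relname FROM pg_class WHERE relname = 'your_duplicated_table';
VACUUM VERBOSE pg_class;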

>> We're running PostgreSQL 8.0.1. Any help would be appreciated.

> You need to be running VACUUM...

Not to mention running a much newer release.  8.0 is still supported,
but it's up to 8.0.14 now.  You're not going to find a lot of sympathy
if this turns out to have been caused by a bug that was fixed since
8.0.1 --- that subrelease was obsoleted over two years ago.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Autostart PostgreSQL in Ubuntu

2007-09-26 Thread Martin Marques

Johann Maar wrote:

Hi folks,

sorry I do not get it right and I have to ask now.

I manually compiled PostgreSQL on my Kubuntu machine to /usr/local/opt/
pgsql and did all this stuff like creating a "postgres" user and I
have a startup script in /etc/init.d.
But if I try to start PostgreSQL by running "sudo /etc/init.d/
postgresql start" it will fail because it tries to write a PID file to
"/var/run/postgresql" which does not exist. If I create this directory
and set the permissions for postgres to write it works (!), but after
the next restart of the machine the directory is already gone.


You should try to find out why the /var/run/postgresql directory 
disappears after reboot.


--
 21:50:04 up 2 days,  9:07,  0 users,  load average: 0.92, 0.37, 0.18
-
Lic. Martín Marqués |   SELECT 'mmarques' ||
Centro de Telemática|   '@' || 'unl.edu.ar';
Universidad Nacional|   DBA, Programador,
del Litoral |   Administrador
-

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Filesystem crash - corupted database

2007-09-26 Thread Martijn van Oosterhout
On Wed, Sep 26, 2007 at 11:59:28AM +0200, Martin Bedná? wrote:
> Hi,
> 
> Is it possible to extract data directly from data files?
> I have two tablespaces, one for data and one for indexes.
> After a filesystem crash I lost my /var/lib/postgresql/data folder :( All
> data is in /lost+found :(, but I found folders with the data and index tablespaces
> that look ok.
> Is it possible to directly access these files and extract the data as CSV, for
> example?
> Or is it possible to "mount" these tablespaces into a new database instance?

In theory if you resurrect the schema exactly as it was, and then copy
the old files in place, you *may* be able to read them. However, you're
going to have trouble with non-existing XIDs.  You might be able to
recreate the XLOG/CLOGs but whether it's going to be consistent is
anyone's guess...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Duplicate public schema and user tables

2007-09-26 Thread Martijn van Oosterhout
On Wed, Sep 26, 2007 at 10:51:43AM +0200, Romain Roure wrote:
> Hi,
> 
> We suddenly stumbled upon duplicate entities. Some of our databases 
> ended up with two 'public' schemas and several duplicate user tables 
> (sharing the same oid).
> After checking through the logs, it doesn't appear to be a problem 
> resulting from wrap-around OID's. Though the logs mention 
> transaction-wraparound may have happened.

Please show us the xmin,xmax columns of the pg_class table. But if
you've wrapped around so far as to see old rows, then you passed the
wraparound horizon 2 billion transactions ago. Please show us exactly
what the logs say:

Oh, and do you have any backups?

> We're running PostgreSQL 8.0.1. Any help would be appreciated.

You need to be running VACUUM...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] CLUSTER = slower vacuum?

2007-09-26 Thread Alvaro Herrera
Phoenix Kiula wrote:
> After I clustered the primary key index of a table with about 300,000
> rows, my vacuum/analyze on that table is taking too long ... over 15
> mins when originally it was 15 seconds! Nothing else has been changed
> with this table. Is clustering not good for vacuums?

No.  Something else must be happening.  Maybe examine the output of
vacuum verbose to see where the time is going?
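
For example (table name substituted from the report):

VACUUM VERBOSE ANALYZE yourtable;

The per-index and per-pass lines, each with CPU and elapsed times, usually show where the time is going.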

-- 
Alvaro Herrera            http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Poor performance with ON DELETE CASCADE

2007-09-26 Thread Tom Lane
Conal <[EMAIL PROTECTED]> writes:
> I have a database schema which has a "central" table with several
> others depending on it. The dependent tables all have foreign key
> constraints with ON DELETE CASCADE so that I can remove tuples from
> the "central" table and have the dependent rows removed automatically.
> This all works, but it's very slow, and I can't see why. The dependent
> tables are all indexed by this foreign key, so the deletions should be
> very fast.

Did you recently add the required indexes?  Existing releases of
Postgres cache query plans for FK queries for the life of a session,
so it seems barely possible that you are just working with a stale
plan.  Another possibility is that you need to ANALYZE the tables
involved so that the planner knows what it's dealing with.

> Unfortunately EXPLAIN doesn't provide any information about the
> details of how it executes the cascading deletion; there's no query
> plan for this, so I can't see why it is taking so long. Is it possible
> to obtain a query plan for these "cascaded" delete queries?

If you have the log message level cranked up high enough when the FK
trigger is first fired during a session, it'll log the actual FK query,
and then you can use PREPARE and EXPLAIN EXECUTE to see how it gets
planned.  (You need to take that route because it'll be a parameterized
query --- do NOT just plug in some constants and assume you'll get the
same plan.)
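
A sketch of that route with an invented child table; the real query text should be taken from the log rather than guessed:

-- suppose the log showed something like:
--   DELETE FROM ONLY "child_table" WHERE "central_id" = $1
PREPARE fk_del (int) AS DELETE FROM ONLY child_table WHERE central_id = $1;
EXPLAIN EXECUTE fk_del (42);
DEALLOCATE fk_del;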

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Help tuning a large table off disk and into RAM

2007-09-26 Thread Tom Lane
Bill Moran <[EMAIL PROTECTED]> writes:
> Give it enough shared_buffers and it will do that.  You're estimating
> the size of your table @ 3G (try a pg_relation_size() on it to get an
> actual size)  If you really want to get _all_ of it in all the time,
> you're probably going to need to add RAM to the machine.

The table alone will barely fit in RAM, and he says he's got a boatload
of indexes too; and apparently Postgres isn't the only thing running on
the machine.  He *definitely* has to buy more RAM if he wants it all
to fit.  I wouldn't necessarily advise going to gigs of shared buffers;
you'll be putting a lot of temptation on the kernel to swap parts of
that out, and it does not sound at all like the workload will keep all
of the buffers "hot" enough to prevent that.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Help tuning a large table off disk and into RAM

2007-09-26 Thread Jimmy Choi
Have you tried clustering tables based on the most-frequently used
indexes to improve locality?

http://www.postgresql.org/docs/8.2/static/sql-cluster.html
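
For example, with the pre-8.3 syntax (index and table names are hypothetical):

CLUSTER weblog_epoch_idx ON weblog;
ANALYZE weblog;   -- refresh statistics after the rows have been rewritten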

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bill Moran
Sent: Wednesday, September 26, 2007 11:24 AM
To: James Williams
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help tuning a large table off disk and into RAM

In response to "James Williams" <[EMAIL PROTECTED]>:

> I'm stuck trying to tune a big-ish postgres db and wondering if anyone
> has any pointers.
> 
> I cannot get Postgres to make good use of plenty of available RAM and
> stop thrashing the disks.
> 
> One main table. ~30 million rows, 20 columns all integer, smallint or
> char(2).  Most have an index.  It's a table for holding webserver
> logs.  The main table is all foreign key ids.  Row size is ~100bytes.
> 
> The typical query is an aggregate over a large number of rows (~25%
say).
> 
>  SELECT COUNT(*), COUNT(DISTINCT user_id)
>  FROM table
>  WHERE epoch > ...
>  AND epoch < ...
>  AND country = ...
> 
> The box has 4 x Opterons, 4Gb RAM & five 15k rpm disks, RAID 5.  We
> wanted fast query/lookup.  We know we can get fast disk IO.
> 
> Running a typical query like above seems to:
> 
> * hardly tax a single CPU
> * plenty of RAM free
> * disks thrash about
> 
> The last is based mostly on the observation that another tiddly
> unrelated mysql db which normally runs fast, grinds to a halt when
> we're querying the postgres db (and cpu, memory appear to have spare
> capacity).
> 
> We've currently got these settings, and have tried doubling/halving
> them, restarted and benchmarked a test query.  They don't appear to
> materially alter our query time.
> 
>  shared_buffers  = 128MB

shared_buffers = 1.5GB

Unless you've got a lot of stuff other than PostgreSQL on this machine.

>  temp_buffers= 160MB
>  work_mem= 200MB
>  max_stack_depth = 7MB

These look reasonable, although I can't be sure without more details.

> 
> We're less concerned about insert speed.  Typically 1 or 2 users, but
> want fast queries.
> 
> Perhaps a little extreme, but I'm trying to find a way to express this
> in a way that Postgres understands:
> 
> * Load this table, and one or two indexes (epoch, user_id) into RAM.

Give it enough shared_buffers and it will do that.  You're estimating
the size of your table @ 3G (try a pg_relation_size() on it to get an
actual size)  If you really want to get _all_ of it in all the time,
you're probably going to need to add RAM to the machine.  With 8G, you
could allocate about 3G to shared_buffers, but that would be ignoring
the size of indexes.

However, I think you'll be surprised how much performance improves
with 1.5G of shared_buffers.  You may not need any more.  128M is
really forcing PG to work within limited space.

> * All of the table, all of those indexes.
> * Keep them there, but keep a disk based backup for integrity.
> * Run all selects against the in RAM copy.  Always.

This is what PG does if you allocate enough shared_buffers.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/



Confidentiality Notice.  This message may contain information that is 
confidential or otherwise protected from disclosure.
If you are not the intended recipient, you are hereby notified that any use, 
disclosure, dissemination, distribution, 
or copying of this message, or any attachments, is strictly prohibited.  If you 
have received this message in error, 
please advise the sender by reply e-mail, and delete the message and any 
attachments.  Thank you.




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] DAGs and recursive queries

2007-09-26 Thread Gregory Stark

"paul.dorman" <[EMAIL PROTECTED]> writes:

> Hi everyone,
>
> I would like to know the best way to implement a DAG in PostgreSQL. I
> understand there has been some talk of recursive queries, and I'm
> wondering if there has been much progress on this.

The ANSI recursive queries didn't make it into 8.3. I still hope it makes 8.4.

You could check out the tablefunc contrib which includes a function called
connectby() which implements a kind of recursive query.
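
As a sketch, assuming a table nodes(id integer, parent_id integer) and the tablefunc module installed (the AS column list is required by connectby):

SELECT * FROM connectby('nodes', 'id', 'parent_id', '1', 0, '~')
       AS t(id int, parent_id int, level int, branch text);
-- '1' is the starting key, 0 means unlimited depth, '~' separates the branch path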

Alternatively you might look at the ltree contrib module but that doesn't work
the way you describe. It denormalizes the data for very fast but less flexible
operations.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] More on migragting the server.

2007-09-26 Thread David Siebert
I set up a test server using the latest 8.2 as suggested by the list and
did a pg_dump of the old database.
I created a new empty database with the same name and created a user with
the same name as was on the old server.
I then tried to do a restore using webmin just as a test and got errors.
I am seeing some references to pgadmin... I did use pgAdmin II to manage
the database.
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 8; 1255 18860 FUNCTION
pgadmin_get_rows(oid) phone
pg_restore: [archiver (db)] could not execute query: ERROR:  type
"pgadmin_table_cache" does not exist
Command was: CREATE FUNCTION pgadmin_get_rows(oid) RETURNS
pgadmin_table_cache
AS $_$SELECT DISTINCT ON(table_oid) * FROM pgadmin_tab...
pg_restore: [archiver (db)] could not execute query: ERROR:  function
public.pgadmin_get_rows(oid) does not exist
Command was: ALTER FUNCTION public.pgadmin_get_rows(oid) OWNER TO phone;
pg_restore: [archiver (db)] Error from TOC entry 9; 1255 18861 FUNCTION
pgadmin_get_sequence(oid) phone
pg_restore: [archiver (db)] could not execute query: ERROR:  type
"pgadmin_seq_cache" does not exist
Command was: CREATE FUNCTION pgadmin_get_sequence(oid) RETURNS
pgadmin_seq_cache
AS $_$SELECT DISTINCT ON(sequence_oid) * FROM pgadmi...
pg_restore: [archiver (db)] could not execute query: ERROR:  function
public.pgadmin_get_sequence(oid) does not exist
Command was: ALTER FUNCTION public.pgadmin_get_sequence(oid) OWNER
TO phone;
pg_restore: [archiver (db)] Error from TOC entry 837; 1259 18862 VIEW
pgadmin_databases phone
pg_restore: [archiver (db)] could not execute query: ERROR:  column
d.datpath does not exist
LINE 2: oid AS database_oid, d.datname AS database_name, d.datpath ...
 ^
Command was: CREATE VIEW pgadmin_databases AS
SELECT d.oid AS database_oid, d.datname AS database_name, d.datpath
AS database_path, p...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
"public.pgadmin_databases" does not exist
Command was: ALTER TABLE public.pgadmin_databases OWNER TO phone;
pg_restore: [archiver (db)] Error from TOC entry 838; 1259 18877 VIEW
pgadmin_checks phone
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
"pg_relcheck" does not exist
Command was: CREATE VIEW pgadmin_checks AS
SELECT r.oid AS check_oid, r.rcname AS check_name, c.oid AS
check_table_oid, c.relname AS ...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
"public.pgadmin_checks" does not exist
Command was: ALTER TABLE public.pgadmin_checks OWNER TO phone;
pg_restore: [archiver (db)] Error from TOC entry 840; 1259 18914 VIEW
pgadmin_groups phone
pg_restore: [archiver (db)] could not execute query: ERROR:  column
pg_group.oid does not exist
LINE 2: SELECT pg_group.oid AS group_oid, pg_group.groname AS gr...
   ^
Command was: CREATE VIEW pgadmin_groups AS
SELECT pg_group.oid AS group_oid, pg_group.groname AS group_name,
pg_group.grosysid AS gro...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
"public.pgadmin_groups" does not exist
Command was: ALTER TABLE public.pgadmin_groups OWNER TO phone;
pg_restore: [archiver (db)] Error from TOC entry 841; 1259 18928 VIEW
pgadmin_indexes phone
pg_restore: [archiver (db)] could not execute query: ERROR:  column
x.indislossy does not exist
LINE 2: ...t_userbyid(i.relowner) AS index_owner, CASE WHEN (x.indislos...
 ^
Command was: CREATE VIEW pgadmin_indexes AS
SELECT i.oid AS index_oid, i.relname AS index_name, c.relname AS
index_table, pg_get_user...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
"public.pgadmin_indexes" does not exist
Command was: ALTER TABLE public.pgadmin_indexes OWNER TO phone;
pg_restore: [archiver (db)] Error from TOC entry 842; 1259 18956 VIEW
pgadmin_languages phone
pg_restore: [archiver (db)] could not execute query: ERROR:  column
l.lancompiler does not exist
LINE 2: oid AS language_oid, l.lanname AS language_name, l."lancomp...
 ^
Command was: CREATE VIEW pgadmin_languages AS
SELECT l.oid AS language_oid, l.lanname AS language_name,
l."lancompiler" AS language_c...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
"public.pgadmin_languages" does not exist
Command was: ALTER TABLE public.pgadmin_languages OWNER TO phone;
pg_restore: [archiver (db)] Error from TOC entry 843; 1259 18972 VIEW
pgadmin_sequences phone
pg_restore: [archiver (db)] could not execute query: ERROR:  syntax
error at or near "."
LINE 2: ...elacl AS sequence_acl, pgadmin_get_sequence(c.oid).sequence_...
 ^
Command was: CREATE VIEW pgadmin_sequences AS
SELECT c.oid AS sequence_oid, c.rel

Re: [GENERAL] Help tuning a large table off disk and into RAM

2007-09-26 Thread Alban Hertroys
James Williams wrote:
> The box has 4 x Opterons, 4Gb RAM & five 15k rpm disks, RAID 5.  We
> wanted fast query/lookup.  We know we can get fast disk IO.

RAID 5 is usually advised against here. It's not particularly fast or
safe, IIRC. Try searching the ML archives for RAID 5 ;)

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Dumping from older version

2007-09-26 Thread Raymond O'Donnell

On 26/09/2007 16:26, Carlos Moreno wrote:

Maybe you used the switch -d to specify the database?  (like with psql 
and some other client applications).


Duhhh! I've just realised my mistake - here's my command line:

pg_dump -h 192.168.200.2 -U postgres -d assetreg -f assetreg.txt -E utf8

I had thought that the -d option was to specify the database, but of 
course not.


Thanks all - I'm sadder and wiser!

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Autostart PostgreSQL in Ubuntu

2007-09-26 Thread Carlos Moreno

Johann Maar wrote:

But if I try to start PostgreSQL by running "sudo /etc/init.d/
postgresql start" it will fail because it tries to write a PID file to
"/var/run/postgresql" which does not exist. If I create this directory
and set the permissions for postgres to write it works (!), but after
the next restart of the machine the directory is already gone. 


With Red Hat systems, you would do  chkconfig postgresql on  if you
installed the postgresql that they distribute.

On Ubuntu (and I imagine with all Debian-based systems), AFAIK you have
to manually adjust the init scripts for the runlevels that you want.
Assuming that you want postgresql started at runlevels 3 and 5, there
should be symlinks in /etc/rc3.d and /etc/rc5.d pointing to
/etc/init.d/postgresql (so that the boot sequence runs
/etc/init.d/postgresql start  for you --- so to speak).

These symlinks should be named S??postgresql (where ?? is a two-digit
code that indicates the order in which the service is started --- maybe
99 or some high number would be convenient, so that it is started after
other services like networking).

A  ls -l /etc/rc?.d  should help you visualize what you need to do.  If
you're not familiar at all with all this, do a search on runlevels and
init scripts; I'm sure you'll find plenty of documents/tutorials out
there.

Or, the big "brute force" hammer would be to add a line to the
/etc/rc.local file with the very command that you're running to start it
(without sudo, of course, since that's already being run by root).

HTH,

Carlos
--


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Dumping from older version

2007-09-26 Thread Carlos Moreno

Raymond O'Donnell wrote:
Just wondering - when using a newer pg_dump to dump from an older 
Postgres, does pg_dump automatically generate INSERT statements for 
the data rather than using COPY?


I noticed this today when transferring data to a newer server - 
pg_dump generated INSERTs although I didn't ask for them. Not a 
problem, but I was curious.


Maybe you used the switch -d to specify the database?  (like with psql
and some other client applications).

The switch -d in pg_dump goes for "Generate inserts instead of COPY 
commands"


Double-check the syntax/switches for pg_dump  (pg_dump --help)
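
A short sketch of the difference, reusing the host and database from the command line quoted earlier in this thread:

pg_dump -h 192.168.200.2 -U postgres -f assetreg.sql assetreg       # default: COPY statements
pg_dump -h 192.168.200.2 -U postgres -d -f assetreg.sql assetreg    # -d / --inserts: INSERT statements

In both cases the database name is the trailing positional argument, not the value of -d.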

HTH,

Carlos
--


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Help tuning a large table off disk and into RAM

2007-09-26 Thread Bill Moran
In response to "James Williams" <[EMAIL PROTECTED]>:

> I'm stuck trying to tune a big-ish postgres db and wondering if anyone
> has any pointers.
> 
> I cannot get Postgres to make good use of plenty of available RAM and
> stop thrashing the disks.
> 
> One main table. ~30 million rows, 20 columns all integer, smallint or
> char(2).  Most have an index.  It's a table for holding webserver
> logs.  The main table is all foreign key ids.  Row size is ~100bytes.
> 
> The typical query is an aggregate over a large number of rows (~25% say).
> 
>  SELECT COUNT(*), COUNT(DISTINCT user_id)
>  FROM table
>  WHERE epoch > ...
>  AND epoch < ...
>  AND country = ...
> 
> The box has 4 x Opterons, 4Gb RAM & five 15k rpm disks, RAID 5.  We
> wanted fast query/lookup.  We know we can get fast disk IO.
> 
> Running a typical query like above seems to:
> 
> * hardly tax a single CPU
> * plenty of RAM free
> * disks thrash about
> 
> The last is based mostly on the observation that another tiddly
> unrelated mysql db which normally runs fast, grinds to a halt when
> we're querying the postgres db (and cpu, memory appear to have spare
> capacity).
> 
> We've currently got these settings, and have tried doubling/halving
> them, restarted and benchmarked a test query.  They don't appear to
> materially alter our query time.
> 
>  shared_buffers  = 128MB

shared_buffers = 1.5GB

Unless you've got a lot of stuff other than PostgreSQL on this machine.

>  temp_buffers= 160MB
>  work_mem= 200MB
>  max_stack_depth = 7MB

These look reasonable, although I can't be sure without more details.

> 
> We're less concerned about insert speed.  Typically 1 or 2 users, but
> want fast queries.
> 
> Perhaps a little extreme, but I'm trying to find a way to express this
> in a way that Postgres understands:
> 
> * Load this table, and one or two indexes (epoch, user_id) into RAM.

Give it enough shared_buffers and it will do that.  You're estimating
the size of your table @ 3G (try a pg_relation_size() on it to get an
actual size)  If you really want to get _all_ of it in all the time,
you're probably going to need to add RAM to the machine.  With 8G, you
could allocate about 3G to shared_buffers, but that would be ignoring
the size of indexes.

However, I think you'll be surprised how much performance improves
with 1.5G of shared_buffers.  You may not need any more.  128M is
really forcing PG to work within limited space.

> * All of the table, all of those indexes.
> * Keep them there, but keep a disk based backup for integrity.
> * Run all selects against the in RAM copy.  Always.

This is what PG does if you allocate enough shared_buffers.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Dumping from older version

2007-09-26 Thread Alvaro Herrera
Raymond O'Donnell wrote:
> Just wondering - when using a newer pg_dump to dump from an older Postgres, 
> does pg_dump automatically generate INSERT statements for the data rather 
> than using COPY?

No.

> I noticed this today when transferring data to a newer server - pg_dump 
> generated INSERTs although I didn't ask for them. Not a problem, but I was 
> curious.

Perhaps you included -d in the command line options?

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
"El miedo atento y previsor es la madre de la seguridad" (E. Burke)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PG_DUMP not working

2007-09-26 Thread Alvaro Herrera
Dan99 escribió:

> > Update your pgsql.  7.4.2 is old in two ways.  the 7.4 branch is
> > pretty old.  plan an upgrade as soon as you can get this backup to
> > work.  Secondly, pg 7.4 is up to a number near 20 now, i.e. 7.4.18.
> > There are over two years of bug fixes you're missing, and one of them
> > could well be the solution to your problem.
> >
> > Upgrading from 7.4 to 8.2 requires a dump and reload, but 7.4.2 to
> > 7.4.18 is just an rpm -Uvh or apt-get update away
> 
> I currently am running apache + pgsql 7.4.2 + php 4.2.  Do you think I
> would have any troubles when upgrading to pgsql 8.2 ~ ie. existing php
> queries not working?

There could be.  However, upgrading to 7.4.18 (?) should work without
issues, so short-term it is a very good idea to upgrade to that.  Then
you can plan an upgrade to 8.2 or 8.3 in a longer-term future.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] CLUSTER = slower vacuum?

2007-09-26 Thread Phoenix Kiula
After I clustered the primary key index of a table with about 300,000
rows, my vacuum/analyze on that table is taking too long ... over 15
mins when originally it was 15 seconds! Nothing else has been changed
with this table. Is clustering not good for vacuums?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] UNIQUE_VIOLATION, Finding out which index would have been violated

2007-09-26 Thread Petri Simolin
Greetings list,

I have created a function which inserts a row in a table which has 2 unique
indexes on two different columns. 

I was wondering if there is a way in case of UNIQUE_VIOLATION exception to
find out which index would have been violated?
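
For illustration, a stripped-down sketch of the kind of function involved
(all table, column and parameter names here are made up; on 8.1 or later
the SQLERRM string in the handler does at least contain the name of the
violated constraint):

CREATE OR REPLACE FUNCTION add_item(p_code text, p_label text) RETURNS void AS $$
BEGIN
    INSERT INTO items (code, label) VALUES (p_code, p_label);
EXCEPTION
    WHEN unique_violation THEN
        -- SQLERRM looks something like:
        --   duplicate key violates unique constraint "items_code_key"
        RAISE NOTICE 'unique violation: %', SQLERRM;
END;
$$ LANGUAGE plpgsql;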

Petri Simolin


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PG_DUMP not working

2007-09-26 Thread Dan99
On Sep 25, 10:32 am, [EMAIL PROTECTED] ("Scott Marlowe") wrote:
> On 9/18/07, Dan99 <[EMAIL PROTECTED]> wrote:
>
>
>
> > Hi,
>
> > I found out this morning that I cannot get pg_dump to work at all on
> > my database.  It refuses to create a dump and instead just freezes.
> > When using the verbose option (-v) i get the following output and then
> > it stops (it at one point ran for days on end before i even noticed)
>
> > pg_dump: saving encoding
> > pg_dump: saving database definition
> > pg_dump: reading schemas
> > pg_dump: reading user-defined types
> > pg_dump: reading user-defined functions
> > pg_dump: reading user-defined aggregate functions
> > pg_dump: reading user-defined operators
> > pg_dump: reading user-defined operator classes
> > pg_dump: reading user-defined tables
>
> > I think this problem is somehow related to a VIEW problem that I
> > have.  I created a VIEW of semi-large tables, which did not come back
> > with any errors.  However, when I go to view it, it never finishes
> > loading.  Also, I cannot drop this view as this as well never
> > finishes.
>
> > PGSQL version: 7.4.2
>
> Two things.
>
> What's in the postgresql logs (if you're not logging pgsql output,
> then turn it on and watch it while you're running pg_dump.
>
> Update your pgsql.  7.4.2 is old in two ways.  the 7.4 branch is
> pretty old.  plan an upgrade as soon as you can get this backup to
> work.  Secondly, pg 7.4 is up to a number near 20 now, i.e. 7.4.18.
> There are over two years of bug fixes you're missing, and one of them
> could well be the solution to your problem.
>
> Upgrading from 7.4 to 8.2 requires a dump and reload, but 7.4.2 to
> 7.4.18 is just an rpm -Uvh or apt-get update away
>
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend

I currently am running apache + pgsql 7.4.2 + php 4.2.  Do you think I
would have any troubles when upgrading to pgsql 8.2 ~ ie. existing php
queries not working?


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PG_DUMP not working

2007-09-26 Thread Dan99
On Sep 25, 11:02 am, [EMAIL PROTECTED] (Alvaro Herrera)
wrote:
> Dan99 escribió:
>
>
>
> > Hi,
>
> > I found out this morning that I cannot get pg_dump to work at all on
> > my database.  It refuses to create a dump and instead just freezes.
> > When using the verbose option (-v) i get the following output and then
> > it stops (it at one point ran for days on end before i even noticed)
>
> > pg_dump: saving encoding
> > pg_dump: saving database definition
> > pg_dump: reading schemas
> > pg_dump: reading user-defined types
> > pg_dump: reading user-defined functions
> > pg_dump: reading user-defined aggregate functions
> > pg_dump: reading user-defined operators
> > pg_dump: reading user-defined operator classes
> > pg_dump: reading user-defined tables
>
> > I think this problem is somehow related to a VIEW problem that I
> > have.  I created a VIEW of semi-large tables, which did not come back
> > with any errors.  However, when I go to view it, it never finishes
> > loading.  Also, I cannot drop this view as this as well never
> > finishes.
>
> Perhaps somebody has a lock on a table or view.  Try
>
> select relation::regclass, database, transaction, pid, mode, granted
> from pg_locks;
>
> Do you see anything related to the view you created?
>
> --
> Alvaro Herrerahttp://www.amazon.com/gp/registry/DXLWNGRJD34J
> "This is a foot just waiting to be shot"(Andrew Dunstan)
>
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match

I think I fixed my problem.  I waited until the weekend when
practically nobody was going to be using my site and then just
restarted the server.  After that I could delete the view and pg_dump
worked.


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Autostart PostgreSQL in Ubuntu

2007-09-26 Thread Johann Maar
Hi folks,

sorry, I can't figure this out on my own, so I have to ask now.

I manually compiled PostgreSQL on my Kubuntu machine into
/usr/local/opt/pgsql, did the usual steps like creating a "postgres" user,
and I have a startup script in /etc/init.d.
But if I try to start PostgreSQL by running "sudo /etc/init.d/postgresql
start", it fails because it tries to write a PID file to
"/var/run/postgresql", which does not exist. If I create this directory
and give postgres write permission on it, it works (!), but after the
next restart of the machine the directory is gone again. I tried to
change the location of the PID file directory in postgresql.conf, but
then clients like psql still look for the PID file in
/var/run/postgresql and fail.

What is a good solution for this problem? It also seems that the
postgres user won't ever have permissions to write to /var/run so does
anybody know how to solve this?

With best regards,

Sebastian


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Help tuning a large table off disk and into RAM

2007-09-26 Thread James Williams
I'm stuck trying to tune a big-ish postgres db and wondering if anyone
has any pointers.

I cannot get Postgres to make good use of plenty of available RAM and
stop thrashing the disks.

One main table. ~30 million rows, 20 columns all integer, smallint or
char(2).  Most have an index.  It's a table for holding webserver
logs.  The main table is all foreign key ids.  Row size is ~100bytes.

The typical query is an aggregate over a large number of rows (~25% say).

 SELECT COUNT(*), COUNT(DISTINCT user_id)
 FROM table
 WHERE epoch > ...
 AND epoch < ...
 AND country = ...

The box has 4 x Opterons, 4Gb RAM & five 15k rpm disks, RAID 5.  We
wanted fast query/lookup.  We know we can get fast disk IO.

Running a typical query like above seems to:

* hardly tax a single CPU
* plenty of RAM free
* disks thrash about

The last is based mostly on the observation that another tiddly
unrelated mysql db which normally runs fast, grinds to a halt when
we're querying the postgres db (and cpu, memory appear to have spare
capacity).

We've currently got these settings, and have tried doubling/halving
them, restarted and benchmarked a test query.  They don't appear to
materially alter our query time.

 shared_buffers  = 128MB
 temp_buffers= 160MB
 work_mem= 200MB
 max_stack_depth = 7MB

We're less concerned about insert speed.  Typically 1 or 2 users, but
want fast queries.

Perhaps a little extreme, but I'm trying to find a way to express this
in a way that Postgres understands:

* Load this table, and one or two indexes (epoch, user_id) into RAM.
* All of the table, all of those indexes.
* Keep them there, but keep a disk based backup for integrity.
* Run all selects against the in RAM copy.  Always.

Aka, I know we're hitting this table (and a couple of columns) lots
and lots, so just get it into RAM and stop thrashing disks.

Pointers welcome.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Poor performance with ON DELETE CASCADE

2007-09-26 Thread Conal
I have a database schema which has a "central" table with several
others depending on it. The dependent tables all have foreign key
constraints with ON DELETE CASCADE so that I can remove tuples from
the "central" table and have the dependent rows removed automatically.
This all works, but it's very slow, and I can't see why. The dependent
tables are all indexed by this foreign key, so the deletions should be
very fast.

I have tried to use EXPLAIN and ANALYZE, and this tells me that the
deletion from the central table is not a problem, but that the
cascading deletions are very slow. e.g.

   Trigger for constraint topic_map_object_parent_uid_fkey:
time=93063.837 calls=115

Unfortunately EXPLAIN doesn't provide any information about the
details of how it executes the cascading deletion; there's no query
plan for this, so I can't see why it is taking so long. Is it possible
to obtain a query plan for these "cascaded" delete queries?
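
For reference, one manual check on a child table would look like this (the
id value is made up, and it is wrapped in a transaction so nothing is
actually deleted); of course this ad-hoc plan may differ from whatever
cached plan the FK trigger itself uses:

  BEGIN;
  EXPLAIN ANALYZE
    DELETE FROM topic_map_object WHERE parent_uid = 12345;
  ROLLBACK;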


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] could not [extend relation|write block N of temporary file|write to hash-join temporary file]

2007-09-26 Thread Alessandra Bilardi


Tom Lane-2 wrote:
> 
> Alessandra Bilardi <[EMAIL PROTECTED]> writes:
>> ERROR:  could not write to hash-join temporary file: No space left on
>> device
> 
> Check your queries.  I suspect you've written an incorrectly constrained
> join that is producing many more rows than you expect.
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
> 
> 

Sorry, I replied only to Tom Lane earlier:

From: Alessandra Bilardi <[EMAIL PROTECTED]>
To: Tom Lane <[EMAIL PROTECTED]>
Subject: Re: [GENERAL] could not [extend relation|write block N of temporary
file|write to hash-join temporary file] 
Date: Tue, 7 Aug 2007 11:23:55 +0200

Dear Tom,
I am attaching the sql script and the .log and .err files obtained with the command:
psql -d sgdlite < sgdlite_mart.sql > sgdlite_mart.log 2> sgdlite_mart.err
The first server machine is: AMD Opteron(tm) Processor 244, 1790.535MHz, 1024MB
The second server machine is: AMD Opteron(tm) Bi Processor 244, 1804.153MHz,
8192MB

I had "could not write to hash-join temporary file" one time with
second server machine.
Thanks for you help.
Regards,

Alessandra Bilardi

http://www.nabble.com/file/p12884512/sgdlite_mart.sql sgdlite_mart.sql 
http://www.nabble.com/file/p12884512/sgdlite_mart1.err sgdlite_mart1.err 
http://www.nabble.com/file/p12884512/sgdlite_mart1.log sgdlite_mart1.log 
http://www.nabble.com/file/p12884512/sgdlite_mart2.err sgdlite_mart2.err 
http://www.nabble.com/file/p12884512/sgdlite_mart2.log sgdlite_mart2.log 

-- 
View this message in context: 
http://www.nabble.com/could-not--extend-relation%7Cwrite-block-N-of-temporary-file%7Cwrite-to-hash-join-temporary-file--tf4216684.html#a12884512
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Duplicate public schema and user tables

2007-09-26 Thread Romain Roure

Hi,

We suddenly stumbled upon duplicate entities. Some of our databases 
ended up with two 'public' schemas and several duplicate user tables 
(sharing the same oid).
After checking through the logs, it doesn't appear to be a problem
resulting from OID wraparound, though the logs do mention that
transaction ID wraparound may have happened.


For the moment, we are trying to get rid of the duplicates and can't
manage to pull that off. We tried restarting the server in single-user
mode with the -P option to reindex the database, but it failed,
complaining about the uniqueness of the index on 'pg_class'.


We're running PostgreSQL 8.0.1. Any help would be appreciated.

Regards,

Romain

The two public schemas :

bddreco=# SELECT *,oid from pg_namespace ;
      nspname       | nspowner |               nspacl                |   oid
--------------------+----------+-------------------------------------+----------
 pg_toast           |        1 |                                     |       99
 pg_temp_1          |        1 |                                     |    16847
 pg_catalog         |        1 | {postgres=UC/postgres,=U/postgres}  |       11
 public             |        1 | {postgres=UC/postgres,=UC/postgres} |     2200
 information_schema |        1 | {postgres=UC/postgres,=U/postgres}  |    17057
 public             |        1 | {postgres=UC/postgres,=UC/postgres} | 73794132

(6 rows)


Duplicate user table :

bddreco=# SELECT *,oid from pg_class where relname='series';
 relname | relnamespace | reltype  | relowner | relam | relfilenode | 
reltablespace | relpages | reltuples | reltoastrelid | reltoastidxid | 
relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers 
| relukeys | relfkeys | relrefs | relhasoids | relhaspkey | relhasrules 
| relhassubclass |  relacl 
|   oid


 series  | 73794132 | 73794327 |  102 | 0 |73794326 | 
   0 |0 | 0 | 0 | 0 | t 
  | f   | r   |4 | 0 |   5 
|0 |0 |   0 | t  | t  | f 
| f  | 
{granitadmin=arwdRxt/granitadmin,granitguest=r/granitadmin,granitserie=arwd/granitadmin} 
| 73794326
 series  | 73794132 | 73794327 |  102 | 0 |73794326 | 
   0 |0 | 0 | 0 | 0 | t 
  | f   | r   |4 | 0 |   5 
|0 |0 |   0 | t  | t  | f 
| f  | 
{granitadmin=arwdRxt/granitadmin,granitguest=r/granitadmin,granitserie=arwd/granitadmin} 
| 73794326
 series  | 73794132 | 73794327 |  102 | 0 |73794326 | 
   0 |0 | 0 | 0 | 0 | t 
  | f   | r   |4 | 0 |   5 
|0 |0 |   0 | t  | t  | f 
| f  | 
{granitadmin=arwdRxt/granitadmin,granitguest=r/granitadmin,granitserie=arwd/granitadmin} 
| 73794326
 series  | 73794132 | 73794327 |  102 | 0 |73794326 | 
   0 |1 | 1 | 0 | 0 | t 
  | f   | r   |4 | 0 |   5 
|0 |0 |   0 | t  | t  | f 
| f  | 
{granitadmin=arwdRxt/granitadmin,granitguest=r/granitadmin,granitserie=arwd/granitadmin} 
| 73794326

(4 rows)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] DAGs and recursive queries

2007-09-26 Thread paul.dorman
Hi everyone,

I would like to know the best way to implement a DAG in PostgreSQL. I
understand there has been some talk of recursive queries, and I'm
wondering if there has been much progress on this.

Are there any complete examples of DAGs which work with PostgreSQL? I
would like to be able to do the following operations for a
categorization system:

1. Given a node, get one or more field values out of every parent node
2. Given a parent node, get one or more field values out of every
child node
3. Given two or more parent nodes, identify any common children.

I do not need to determine shortest paths between parents and
children, only to be able to iterate over them as efficiently as
possible.

I'd like to keep things dynamic so changes up the hierarchy don't
require changes to any of the children (unless their direct parents
are changed). I'd also like to keep as much processing as possible in
the database to minimize the traffic between my application and the
DB, so I think I'm looking for SQL and stored procedure solutions.

Any pointers would be great, as I'm not a DBA and do not have the
experience to make judgments about the best possible approach.
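
To make the question concrete, here is a minimal adjacency-list sketch (all
names are made up) covering only the direct parent/child lookups; the part I
can't see is how to get *all* ancestors or descendants of a node efficiently:

  CREATE TABLE node (
      node_id serial PRIMARY KEY,
      label   text NOT NULL
  );

  CREATE TABLE edge (
      parent_id integer NOT NULL REFERENCES node (node_id),
      child_id  integer NOT NULL REFERENCES node (node_id),
      PRIMARY KEY (parent_id, child_id)
  );

  -- direct parents of node 42
  SELECT n.* FROM node n JOIN edge e ON e.parent_id = n.node_id WHERE e.child_id = 42;

  -- direct children of node 42
  SELECT n.* FROM node n JOIN edge e ON e.child_id = n.node_id WHERE e.parent_id = 42;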

Regards,
Paul Dorman


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Dumping from older version

2007-09-26 Thread Raymond O'Donnell
Just wondering - when using a newer pg_dump to dump from an older 
Postgres, does pg_dump automatically generate INSERT statements for the 
data rather than using COPY?


I noticed this today when transferring data to a newer server - pg_dump 
generated INSERTs although I didn't ask for them. Not a problem, but I 
was curious.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Filesystem crash - corupted database

2007-09-26 Thread Scott Marlowe
On 9/26/07, Martin Bednář <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is it possible to extract data directly from the data files?
> I have two tablespaces, one for data and one for indexes.
> After a filesystem crash I lost my /var/lib/postgresql/data folder :( All
> the data is in /lost+found :(, but I found the folders with the data and
> index tablespaces and they look ok.
> Is it possible to directly access these files and extract the data as CSV,
> for example?
> Or is it possible to "mount" these tablespaces into a new database instance?

You can read them with a hex dump program, but that's about it.  The
data may well be too corrupted to use anyway.  OTOH, if the files are
intact and you can rename them back to what they were you might be
able to get your db back up.  But I wouldn't count on it.

> btw: I know - restore from backups, but the DB is too big, and my
> backups are from 6 days ago.

That's why there's PITR.  If you had been backing up with PITR, then
you could roll forward your standby machine and be right back up.
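
(For the archives, a minimal sketch of what taking a base backup looks like
once archive_command is set in postgresql.conf to copy WAL segments somewhere
safe -- the label here is made up; see the continuous archiving docs for the
real procedure.)

  SELECT pg_start_backup('nightly');
  -- copy the contents of the data directory to the backup location (tar, rsync, ...)
  SELECT pg_stop_backup();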

Use this to make a case to the powers that be that you need better
backup procedures and hardware to handle your data.  The cost of your
data is much higher than the cost of the hardware it lives on.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PQntuples return type

2007-09-26 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Greg Sabino Mullane wrote:
>> There may be some other safeguards in place I did not see to prevent this, 
>> but I don't see a reason why we shouldn't use unsigned int or 
>> unsigned long int here, both for ntups and the return value of the 
>> function.

> On second thought, I have at least updated the function documentation:

>Returns the number of rows (tuples) in the query result.  Because
>it returns an integer result, large result sets might overflow the
>return value on 32-bit operating systems.

This is silly.  Have you forgotten that the max number of columns is
constrained to 1600 on the backend side?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] subquery/alias question

2007-09-26 Thread Madison Kelly

Gregory Stark wrote:

"Madison Kelly" <[EMAIL PROTECTED]> writes:


SELECT d.dom_id, d.dom_name FROM domains d WHERE (SELECT COUNT(*) FROM users u
WHERE u.usr_dom_id=d.dom_id) > 0 ORDER BY d.dom_name ASC;

  Which gives me just the domains with at least one user under them, but not
the count. This is not ideal, and I will have to come back to it next week. In
the meantime, any idea what the GROUP BY error is? If not, I'll read through
the docs on 'GROUP'ing once I get this deadline out of the way.


I think you just want simply:

SELECT dom_id, dom_name, count(*) 
  FROM users 
  JOIN domains ON (usr_dom_id=dom_id) 
 GROUP BY dom_id, dom_name

 ORDER BY dom_name

You don't actually need the HAVING (though it wouldn't do any harm either)
since only domains which match a user will come out of the join anyways.

You can also write it using a subquery instead of a join

SELECT * 
  FROM (
SELECT dom_id, dom_name, 
   (SELECT count(*) FROM users WHERE user_dom_id = dom_id) as nusers

  FROM domains
   ) as subq
 WHERE nusers > 0
 ORDER BY dom_name

But that will perform worse in many cases.



You are right, the 'HAVING' clause does seem to be redundant. I removed 
it and ran several 'EXPLAIN ANALYZE's on it with and without the 
'HAVING' clause and found no perceivable difference. I removed the 
'HAVING' clause anyway, since I like to keep queries as minimal as possible.


Thank you!

Madi

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] subquery/alias question

2007-09-26 Thread Michael Glaesemann


On Sep 26, 2007, at 7:41 , Madison Kelly wrote:


  Unfortunately, in both cases I get the error:


Um, the two cases could not be giving the same error as they don't  
both contain the syntax that the  error is complaining about: the  
first case uses count in a subquery so it couldn't throw this exact  
error.


nmc=> SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count FROM  
domains JOIN users ON (usr_dom_id=dom_id) HAVING COUNT (usr_dom_id)  
> 0 ORDER BY dom_name;

ERROR:  syntax error at or near "COUNT" at character 25
LINE 1: SELECT dom_id, dom_name COUNT(usr_dom_id) AS usr_count  
FROM ...


The error message doesn't match the query you've provided. Note that  
in the line marked LINE 1, there's no comma after dom_name, which I  
assume is what the server is complaining about. However, the query  
you show *does* have this comma. Something isn't right. Is this an  
exact copy and paste from psql?



  I've been struggling with some deadlines, so for now I'm using just:

SELECT d.dom_id, d.dom_name FROM domains d WHERE (SELECT COUNT(*)  
FROM users u WHERE u.usr_dom_id=d.dom_id) > 0 ORDER BY d.dom_name ASC;


  Which gives me just the domains with at least one user under  
them, but not the count. This is not ideal, and I will have to come  
back to it next week. In the meantime, any idea what the GROUP BY  
error is?


Ah. You haven't actually shown us a GROUP BY error. A GROUP BY clause  
is needed when you've got columns that aren't included in the  
aggregate (COUNT in this case), e.g.,


select dom_id,
   dom_name,
   usr_count
  from domains
  natural join (select usr_dom_id as dom_id,
   count(usr_dom_id) as usr_count
  from users
  group by dom_id) u
  where usr_count > 0
  order by dom_name;

select dom_id, dom_name, count(usr_dom_id) as usr_count
  from domains
  join users on (usr_dom_id = dom_id)
  group by dom_id, dom_name
  having count(usr_dom_id) > 0
  order by dom_name;

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Solved! Was: (subquery/alias question)

2007-09-26 Thread Madison Kelly

Alvaro Herrera wrote:

Madison Kelly wrote:


Thanks for your reply!

  Unfortunately, in both cases I get the error:

nmc=> SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count FROM domains 
JOIN users ON (usr_dom_id=dom_id) HAVING COUNT (usr_dom_id) > 0 ORDER BY 
dom_name;

ERROR:  syntax error at or near "COUNT" at character 25
LINE 1: SELECT dom_id, dom_name COUNT(usr_dom_id) AS usr_count FROM ...


Try to avoid missing the comma before the COUNT (and do not cheat when
cut'n pasting ...)

Also it seems you will need a GROUP BY clause:
GROUP BY dom_id, dom_name
(placed just before the HAVING clause).


Bingo!

Now to answer the performance questions (using my actual queries, 
unedited so they are a little longer):


-=-=-=-=-=-
nmc=> EXPLAIN ANALYZE SELECT dom_id, dom_name, dom_note, 
COUNT(usr_dom_id) AS usr_count FROM domains JOIN users ON 
(usr_dom_id=dom_id) GROUP BY dom_id, dom_name, dom_note HAVING COUNT 
(usr_dom_id) > 0 ORDER BY dom_name;
   QUERY PLAN 


-
 Sort  (cost=10.70..10.78 rows=31 width=72) (actual time=2.107..2.133 
rows=17 loops=1)

   Sort Key: domains.dom_name
   ->  HashAggregate  (cost=9.39..9.93 rows=31 width=72) (actual 
time=1.899..1.956 rows=17 loops=1)

 Filter: (count(usr_dom_id) > 0)
 ->  Hash Join  (cost=7.20..9.00 rows=31 width=72) (actual 
time=0.942..1.411 rows=96 loops=1)

   Hash Cond: ("outer".dom_id = "inner".usr_dom_id)
   ->  Seq Scan on domains  (cost=0.00..1.31 rows=31 
width=68) (actual time=0.227..0.321 rows=31 loops=1)
   ->  Hash  (cost=6.96..6.96 rows=96 width=4) (actual 
time=0.673..0.673 rows=96 loops=1)
 ->  Seq Scan on users  (cost=0.00..6.96 rows=96 
width=4) (actual time=0.010..0.371 rows=96 loops=1)

 Total runtime: 2.454 ms
(10 rows)
-=-=-=-=-=-

  Versus:

-=-=-=-=-=-
nmc=> EXPLAIN ANALYZE SELECT d.dom_id, d.dom_name, d.dom_note, (SELECT 
COUNT(*) FROM users u WHERE u.usr_dom_id=d.dom_id) AS usr_count FROM 
domains d WHERE (SELECT COUNT(*) FROM users u WHERE 
u.usr_dom_id=d.dom_id) > 0 ORDER BY d.dom_name ASC;
  QUERY PLAN 


--
 Sort  (cost=297.37..297.39 rows=10 width=68) (actual 
time=10.171..10.196 rows=17 loops=1)

   Sort Key: dom_name
   ->  Seq Scan on domains d  (cost=0.00..297.20 rows=10 width=68) 
(actual time=0.508..10.013 rows=17 loops=1)

 Filter: ((subplan) > 0)
 SubPlan
   ->  Aggregate  (cost=7.21..7.21 rows=1 width=0) (actual 
time=0.203..0.204 rows=1 loops=31)
 ->  Seq Scan on users u  (cost=0.00..7.20 rows=1 
width=0) (actual time=0.127..0.189 rows=3 loops=31)

   Filter: (usr_dom_id = $0)
   ->  Aggregate  (cost=7.21..7.21 rows=1 width=0) (actual 
time=0.184..0.186 rows=1 loops=17)
 ->  Seq Scan on users u  (cost=0.00..7.20 rows=1 
width=0) (actual time=0.058..0.164 rows=6 loops=17)

   Filter: (usr_dom_id = $0)
 Total runtime: 10.593 ms
(12 rows)
-=-=-=-=-=-

  So using the JOIN you all helped me with, the query returns in 2.454 
ms compared to my early query of 10.593 ms!


  I have not yet looked into any indexing either. I am waiting until 
the program is done and then will go back and review queries to look for 
bottlenecks.


  Thanks to all of you!!

Madi

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] subquery/alias question

2007-09-26 Thread Gregory Stark
"Madison Kelly" <[EMAIL PROTECTED]> writes:

> SELECT d.dom_id, d.dom_name FROM domains d WHERE (SELECT COUNT(*) FROM users u
> WHERE u.usr_dom_id=d.dom_id) > 0 ORDER BY d.dom_name ASC;
>
>   Which gives me just the domains with at least one user under them, but not
> the count. This is not ideal, and I will have to come back to it next week. In
> the meantime, any idea what the GROUP BY error is? If not, I'll read through
> the docs on 'GROUP'ing once I get this deadline out of the way.

I think you just want simply:

SELECT dom_id, dom_name, count(*) 
  FROM users 
  JOIN domains ON (usr_dom_id=dom_id) 
 GROUP BY dom_id, dom_name
 ORDER BY dom_name

You don't actually need the HAVING (though it wouldn't do any harm either)
since only domains which match a user will come out of the join anyways.

You can also write it using a subquery instead of a join

SELECT * 
  FROM (
SELECT dom_id, dom_name, 
   (SELECT count(*) FROM users WHERE user_dom_id = dom_id) as nusers
  FROM domains
   ) as subq
 WHERE nusers > 0
 ORDER BY dom_name

But that will perform worse in many cases.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] regds bulk of records

2007-09-26 Thread manju arumugam
Hi friends,
   I am new to this group and I need some help.

I want to fetch a lakh (100,000) records from the database, but it
is taking too much time. What can I do about this?

Thanks in advance.


  Get the freedom to save as many mails as you wish. To know how, go to 
http://help.yahoo.com/l/in/yahoo/mail/yahoomail/tools/tools-08.html

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Compare Content in Multidimensional Array [PHP/SQL]

2007-09-26 Thread Stefan Schwarzer

Hi there,

I have some global national statistical data sets.

The table design is like this for each variable:

name          2001   2002   2003   2004   2005
-----------------------------------------------
Afghanistan
Albania


I would like to offer the possibility to compare two (or more)  
variables for a given country. The data values need to be set to 100  
at the first year which both variables have in common.


So, one variable could have an annual vallues between 1960 and 2005;  
another one only from 1975 to 2005.


So, the values for the country would be set to 100 based on the year  
1975.


At the moment I have a loop over the number of variables, selecting
all the data and putting it into an array:



   $data[$number_of_dataset][$year] = $value_of_that_year;


Probably there could be a better approach to finding the
"smallest_common_year"?!


If not, I wonder how, by using PHP, I can get that information...

Can anyone give me a hint?

Thank you very much!

Stef


  

  Stefan Schwarzer

  Lean Back and Relax - Enjoy some Nature Photography:
  http://photoblog.la-famille-schwarzer.de

  Or: Appetite for Global Data? UNEP GEO Data Portal:
  http://geodata.grid.unep.ch

  




Re: [GENERAL] subquery/alias question

2007-09-26 Thread Alvaro Herrera
Madison Kelly wrote:

> Thanks for your reply!
>
>   Unfortunately, in both cases I get the error:
>
> nmc=> SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count FROM domains 
> JOIN users ON (usr_dom_id=dom_id) HAVING COUNT (usr_dom_id) > 0 ORDER BY 
> dom_name;
> ERROR:  syntax error at or near "COUNT" at character 25
> LINE 1: SELECT dom_id, dom_name COUNT(usr_dom_id) AS usr_count FROM ...

Try to avoid missing the comma before the COUNT (and do not cheat when
cut'n pasting ...)

Also it seems you will need a GROUP BY clause:
GROUP BY dom_id, dom_name
(placed just before the HAVING clause).
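
Putting the two fixes together, something like this should work (using the
column names from your query):

SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count
  FROM domains
  JOIN users ON (usr_dom_id = dom_id)
 GROUP BY dom_id, dom_name
HAVING COUNT(usr_dom_id) > 0
 ORDER BY dom_name;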

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
"Some men are heterosexual, and some are bisexual, and some
men don't think about sex at all... they become lawyers" (Woody Allen)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] subquery/alias question

2007-09-26 Thread Madison Kelly

Michael Glaesemann wrote:


On Sep 25, 2007, at 17:30 , Alvaro Herrera wrote:


Michael Glaesemann wrote:


select dom_id,
   dom_name,
   usr_count
  from domains
  natural join (select usr_dom_id as dom_id,
   count(usr_dom_id) as usr_count
  from users) u
  where usr_count > 0
  order by dom_name;


Maybe the usr_count should be tested in a HAVING clause instead of
WHERE?  And put the count(*) in the result list instead of a subselect.
That feels more natural to me anyway.


I believe you'd have to write it like

select dom_id, dom_name, count(usr_dom_id) as usr_count
  from domains
  join users on (usr_dom_id = dom_id)
  having count(usr_dom_id) > 0
  order by dom_name;

I don't know how the performance would compare. I think the backend is 
smart enough to know it doesn't need to perform two seq scans to 
calculate count(usr_dom_id), but I wasn't sure.


Madison, how do the two queries compare with explain analyze?


Thanks for your reply!

  Unfortunately, in both cases I get the error:

nmc=> SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count FROM 
domains JOIN users ON (usr_dom_id=dom_id) HAVING COUNT (usr_dom_id) > 0 
ORDER BY dom_name;

ERROR:  syntax error at or near "COUNT" at character 25
LINE 1: SELECT dom_id, dom_name COUNT(usr_dom_id) AS usr_count FROM ...

  I've been struggling with some deadlines, so for now I'm using just:

SELECT d.dom_id, d.dom_name FROM domains d WHERE (SELECT COUNT(*) FROM 
users u WHERE u.usr_dom_id=d.dom_id) > 0 ORDER BY d.dom_name ASC;


  Which gives me just the domains with at least one user under them, 
but not the count. This is not ideal, and I will have to come back to it 
next week. In the meantime, any idea what the GROUP BY error is? If not, 
I'll read through the docs on 'GROUP'ing once I get this deadline out of 
the way.


  Thank you all for your help! I am sure I will have more question(s) 
next week as soon as I can get back to this.


Madi

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] libpq.dll problem while changing from version 8.2.4 no-installer to 8.2.5 no-installer

2007-09-26 Thread pawo509
Hi,
I was using PostgreSQL in version postgresql-8.2.4-1-binaries-no-installer.zip 
under Windows. I did the following:
1. I unzipped PostgreSQL into D:\PostgreSQL and created directory named 
"database" inside.
2. I executed (on the non-administrator account "postgres"): initdb -D 
"D:\PostgreSQL\database" -U root --encoding=UTF8 --no-locale
3. I executed (as an administrator): pg_ctl -D "D:\PostgreSQL\database" 
register -N myPostgreSQLService
Everything worked ok. 
I changed to postgresql-8.2.5-1-binaries-no-installer.zip and tried to
repeat the procedure above. "initdb" failed because libpq.dll was not found. I
discovered that the location of this library changed from the /bin to the /lib
directory. When I added this /lib directory to PATH, everything worked OK.
Does anyone know the reason for that change? According to PostgreSQL's version
numbering, changing the minor version number (here: 4->5) means small
bug fixes and no enhancements. Changing a library's location is rather not a
bug fix, and there should be a note about it in the release notes.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pgpoolAdmin:No such file or directory in /var/www/html/pgpoolAdmin-1.0.0/libs/Smarty.class.php on line 1258

2007-09-26 Thread Moiz Kothari
Hi Ashish,

Looks like a smarty issue and not a pgpooladmin issue. Check your smarty
global variables for folder paths.

Regards,
Moiz Kothari
-- 
Hobby Site : http://dailyhealthtips.blogspot.com

On 9/26/07, Ashish Karalkar <[EMAIL PROTECTED]> wrote:
>
>  Hello All,
>
> I have installed pgpoolAdmin on a Linux box. At first the pgpool configuration
> setting pages work properly, but after that, when I try to log in, nothing happens.
>
> Following is the error in the apache server log
>
> PHP Warning: fetch(templates_c/%%6A^6A5^6A537DD8%%login.tpl.php): failed
> to open stream: No such file or directory in /var/www/html/pgpoolAdmin-
> 1.0.0/libs/Smarty.class.php on line 1258
>
> [client 172.17.2.23] PHP Warning: fetch(): Failed opening
> 'templates_c/%%6A^6A5^6A537DD8%%login.tpl.php' for inclusion
> (include_path='/usr/local/lib/php') in 
> /var/www/html/pgpoolAdmin-1.0.0/libs/Smarty.class.php
> on line 1258
>
>
>
> Can anybody suggest what is going wrong.
>
> I have physically verified that the file
> 'templates_c/%%6A^6A5^6A537DD8%%login.tpl.php' is there, and also that its
> permissions are 777 for the apache user.
>
> Thanks in advance
>
>
>
> with regards
>
> Ashish
>
>
>
>
>
>
>


[GENERAL] Filesystem crash - corupted database

2007-09-26 Thread Martin Bednář

Hi,

Is it possible to extract data directly from the data files?
I have two tablespaces, one for data and one for indexes.
After a filesystem crash I lost my /var/lib/postgresql/data folder :( All
the data is in /lost+found :(, but I found the folders with the data and
index tablespaces and they look ok.
Is it possible to directly access these files and extract the data as CSV,
for example?

Or is it possible to "mount" these tablespaces into a new database instance?

Thnx for advice
Bedy

btw: I know - restore from backups, but the DB is too big, and my
backups are from 6 days ago.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] pgpoolAdmin:No such file or directory in /var/www/html/pgpoolAdmin-1.0.0/libs/Smarty.class.php on line 1258

2007-09-26 Thread Ashish Karalkar
Hello All,

I have installed pgpoolAdmin on a Linux box. At first the pgpool configuration setting
pages work properly, but after that, when I try to log in, nothing happens.

Following is the error in the apache server log

PHP Warning: fetch(templates_c/%%6A^6A5^6A537DD8%%login.tpl.php): failed to 
open stream: No such file or directory in 
/var/www/html/pgpoolAdmin-1.0.0/libs/Smarty.class.php on line 1258

[client 172.17.2.23] PHP Warning: fetch(): Failed opening 
'templates_c/%%6A^6A5^6A537DD8%%login.tpl.php' for inclusion 
(include_path='/usr/local/lib/php') in 
/var/www/html/pgpoolAdmin-1.0.0/libs/Smarty.class.php on line 1258



Can anybody suggest what is going wrong.

I have physically verified that the file
'templates_c/%%6A^6A5^6A537DD8%%login.tpl.php' is there, and also that its
permissions are 777 for the apache user.

Thanks in advance



with regards

Ashish








[GENERAL] 8.3devel, csvlog, missing info?

2007-09-26 Thread hubert depesz lubaczewski
hi,
i just fetched newest 8.3 from cvs head, compiled, ran.
when i set logs to "stderr", and enter query with error, i get this
information in logs:
ERROR:  subquery in FROM must have an alias
HINT:  For example, FROM (SELECT ...) [AS] foo.
STATEMENT:  select count(*) from (select x from q order by x);

but when logging is set to csvlog, only this is logged:
2007-09-26 10:47:15.522
CEST,"depesz","depesz",46fa1c02.2fa6,[local],12198,9,"idle",2007-09-26
10:44:50 CEST,0,ERROR,,"subquery in FROM must have an alias"

there is no hint, and no statement.

will it stay that way? i think it's bad because it makes csvlogging less
useful.

best regards,

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PQntuples return type

2007-09-26 Thread Bruce Momjian
Greg Sabino Mullane wrote:
> > So, my doubt is: if the return type is int instead of unsigned int, 
> > is this function testable for negative return values?
> 
> A quick glance at the code in fe-exec.c and fe-protocol3.c shows that 
> the underlying variable starts at 0 as an int and in incremented by 
> one every row, so it seems possible that it could wrap around for 
> very large results sets and/or boxes with a low representation of 'int'. 
> There may be some other safeguards in place I did not see to prevent this, 
> but I don't see a reason why we shouldn't use unsigned int or 
> unsigned long int here, both for ntups and the return value of the 
> function.

On second thought, I have at least updated the function documentation:

   Returns the number of rows (tuples) in the query result.  Because
   it returns an integer result, large result sets might overflow the
   return value on 32-bit operating systems.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PQntuples return type

2007-09-26 Thread Bruce Momjian
Greg Sabino Mullane wrote:
> > So, my doubt is: if the return type is int instead of unsigned int, 
> > is this function testable for negative return values?
> 
> A quick glance at the code in fe-exec.c and fe-protocol3.c shows that 
> the underlying variable starts at 0 as an int and in incremented by 
> one every row, so it seems possible that it could wrap around for 
> very large results sets and/or boxes with a low representation of 'int'. 
> There may be some other safeguards in place I did not see to prevent this, 
> but I don't see a reason why we shouldn't use unsigned int or 
> unsigned long int here, both for ntups and the return value of the 
> function.

I think we need more use cases before we break the API on this one.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match