Re: [GENERAL] standard LOB support

2007-06-21 Thread EBIHARA, Yuichiro
Thomas,

Thank you for your comment.

> I found that using getBinaryStream(), setBinaryStream(), getCharacterStream()
> and setCharacterStream() to handle LOBs across different DBMS is much more
> portable (and reliable) than using the Clob()/Blob() methods.

According to the JDBC 3.0 specification, those 4 methods are not guaranteed to be
compatible with BLOB/CLOB. Some databases may support them for accessing LOB data,
but not all databases do.

But my target databases are actually only PostgreSQL, Oracle and DB2, and there is
no problem with PostgreSQL. Also, according to the Oracle JDBC driver manual, Oracle
supports stream access to LOBs through the 4 methods.

I'll also try DB2 soon.

Thanks,

ebi



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] standard LOB support

2007-06-21 Thread Tom Lane
"EBIHARA, Yuichiro" <[EMAIL PROTECTED]> writes:
> Using Large Objects may solve my issue but I have to note that a large
> object is not automatically deleted when the record referring to it is
> deleted.

The contrib/lo module can help with this.
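For illustration, a minimal sketch of how contrib/lo is typically wired up once
the module's SQL has been loaded (the table, column and trigger names here are
invented):

CREATE TABLE image (title text, raster lo);

CREATE TRIGGER t_raster
BEFORE UPDATE OR DELETE ON image
FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster);

The lo_manage trigger unlinks the referenced large object when the row that
points to it is updated or deleted, which addresses the cleanup concern quoted
above.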

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] standard LOB support

2007-06-21 Thread Thomas Kellerer

EBIHARA, Yuichiro wrote on 22.06.2007 06:09:

It seems like the PG JDBC driver CANNOT handle 'bytea' as BLOB or 'text' as CLOB.
getBlob()/setBlob()/getClob()/setClob() work only with Large Objects (at least with
postgresql-8.1-405.jdbc3.jar).

org.postgresql.util.PSQLException: Bad Integer Z\273\330x\336\335\226\243
at 
org.postgresql.jdbc1.AbstractJdbc1ResultSet.toInt(AbstractJdbc1ResultSet.java:862)
at 
org.postgresql.jdbc1.AbstractJdbc1ResultSet.getInt(AbstractJdbc1ResultSet.java:287)
at org.postgresql.jdbc3.Jdbc3ResultSet.getBlob(Jdbc3ResultSet.java:42)
at PgTest.main(PgTest.java:33)

The same exception occurs when using getClob() against a text column.

Using Large Objects may solve my issue, but I have to note that a large object is
not automatically deleted when the record referring to it is deleted.

I found that using getBinaryStream(), setBinaryStream(), getCharacterStream()
and setCharacterStream() to handle LOBs across different DBMS is much more
portable (and reliable) than using the Clob()/Blob() methods.


The Postgres JDBC driver handles the stream/writer methods just fine to read and 
write text and bytea columns.


Thomas



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Excell

2007-06-21 Thread Bob Pawley

Yes please send me a copy.

Bob


- Original Message - 
From: "Harvey, Allan AC" <[EMAIL PROTECTED]>
To: "Joshua D. Drake" <[EMAIL PROTECTED]>; "Scott Marlowe" 
<[EMAIL PROTECTED]>
Cc: "Csaba Nagy" <[EMAIL PROTECTED]>; "David Gardner" 
<[EMAIL PROTECTED]>; "Postgres general mailing list" 


Sent: Thursday, June 21, 2007 9:01 PM
Subject: Re: [GENERAL] Excell



> Because I'm delivering reports to dozens of people who have windows, no
> psql client, and just want to go to a web page, click a button, and get
> their report (or was that a banana?)

I do exactly this with bog basic HTML and bash scripts.
Can send you a copy if you want examples.

Allan


The material contained in this email may be confidential, privileged or 
copyrighted. If you are not the intended recipient, use, disclosure or 
copying of this information is prohibited. If you have received this 
document in error, please advise the sender and delete the document. 
Neither OneSteel nor the sender accept responsibility for any viruses 
contained in this email or any attachments.


---(end of broadcast)---
TIP 6: explain analyze is your friend




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Excell

2007-06-21 Thread Harvey, Allan AC
> > Because I'm delivering reports to dozens of people who have windows, no 
> > psql client, and just want to go to a web page, click a button, and get 
> > their report (or was that a banana?)
I do exactly this with bog basic HTML and bash scripts.
Can send you a copy if you want examples.

Allan


The material contained in this email may be confidential, privileged or 
copyrighted. If you are not the intended recipient, use, disclosure or copying 
of this information is prohibited. If you have received this document in error, 
please advise the sender and delete the document. Neither OneSteel nor the 
sender accept responsibility for any viruses contained in this email or any 
attachments.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] standard LOB support

2007-06-21 Thread EBIHARA, Yuichiro
Hi,

I found my understanding was incorrect.

> > > Is there any plan to support BLOB and CLOB in future releases?
> > >   
> > Looking at the spec, and postgresql's implementation, I can't
> > see much reason you couldn't just use a domain to declare that
> > a bytea is a blob and varchar is a clob.
> 
> That sounds a good idea!
> My application accesses databases via JDBC and PostgreSQL 
> JDBC driver can handle 'bytea' as BLOB. I'm not sure if 
> 'text' is compatible with CLOB, but I guess it'll also work well.

It seems like the PG JDBC driver CANNOT handle 'bytea' as BLOB or 'text' as CLOB.
getBlob()/setBlob()/getClob()/setClob() work only with Large Objects (at least with
postgresql-8.1-405.jdbc3.jar).

org.postgresql.util.PSQLException: Bad Integer Z\273\330x\336\335\226\243
at 
org.postgresql.jdbc1.AbstractJdbc1ResultSet.toInt(AbstractJdbc1ResultSet.java:862)
at 
org.postgresql.jdbc1.AbstractJdbc1ResultSet.getInt(AbstractJdbc1ResultSet.java:287)
at org.postgresql.jdbc3.Jdbc3ResultSet.getBlob(Jdbc3ResultSet.java:42)
at PgTest.main(PgTest.java:33)

The same exception occurs when using getClob() against a text column.

Using Large Objects may solve my issue, but I have to note that a large object is
not automatically deleted when the record referring to it is deleted.

Thanks,

ebi



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] ORDER BY with exception

2007-06-21 Thread brian

Michael Glaesemann wrote:

On Jun 21, 2007, at 17:35 , brian wrote:


I have a lookup table with a bunch of disciplines:



To answer your ordering question first:

SELECT id, name
FROM discipline
ORDER BY name = 'other'
, name;
id |name
+-
  8 | community
  4 | dance
  5 | film and television
  9 | fine craft
  7 | media arts
  3 | music
  6 | theatre
  2 | visual arts
  1 | writing
10 | other
(10 rows)

This relies on the fact that FALSE orders before TRUE. I don't always  
remember which way, so I often have to rewrite it using <> or = to  get 
the behavior I want.




Of course! (slaps forehead)

I don't think you really need to use a function for this. I believe  you 
should be able to do this all in one SQL statement, something  like (if 
I've understood your query and intent correctly):


SELECT discipline.name, COUNT(showcase_id) AS total
FROM discipline
LEFT JOIN (
SELECT DISTINCT discipline_id, showcase.id as showcase_id
FROM showcase
JOIN showcase_item on (showcase.id = showcase_id)
WHERE accepted) AS accepted_showcases
ON (discipline.id = discipline_id)
GROUP BY discipline.name
ORDER BY discipline.name = 'other'
, discipline.name;
name | total
-+---
community   | 0
dance   | 0
film and television | 0
fine craft  | 0
media arts  | 0
music   | 0
theatre | 0
visual arts | 1
writing | 2
other   | 0
(10 rows)



That's bang on, Michael, thanks a bunch. I never remember to explore 
joining on a select. I'm forever thinking in terms of joining on a 
table. Things to study this evening.


As a general rule, it's better to let the server handle the data in sets (i.e.,
tables) as much as possible rather than using procedural code.


Hope this helps.


It helped lots, thanks again.

brian

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ORDER BY with exception

2007-06-21 Thread brian

Josh Tolley wrote:
It seems to me you could replace it all with one query, something like 
this:


SELECT discipline, COUNT(1) FROM showcase WHERE EXISTS (SELECT * FROM
showcase_item WHERE showcase_id = showcase.id LIMIT 1) GROUP BY
discipline ORDER BY (discipline != 'other'), discipline;



discipline is its own table, not a column, so i'd need to throw a join 
in there. I have another table, showcase_discipline to relate from 
showcases. I did it this way because there are several other tables that 
rely on disciplines (one to one and many to one).


And thanks for the ORDER BY tip! I had no idea i could do that. I'll see 
if i can work that in.


brian

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ORDER BY with exception

2007-06-21 Thread Michael Glaesemann


On Jun 21, 2007, at 17:35 , brian wrote:


I have a lookup table with a bunch of disciplines:


To answer your ordering question first:

SELECT id, name
FROM discipline
ORDER BY name = 'other'
, name;
id |name
+-
  8 | community
  4 | dance
  5 | film and television
  9 | fine craft
  7 | media arts
  3 | music
  6 | theatre
  2 | visual arts
  1 | writing
10 | other
(10 rows)

This relies on the fact that FALSE orders before TRUE. I don't always  
remember which way, so I often have to rewrite it using <> or = to  
get the behavior I want.
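If you'd rather not rely on remembering which way booleans sort, an equivalent
and more explicit spelling (just a sketch, same table as above) is:

SELECT id, name
FROM discipline
ORDER BY CASE WHEN name = 'other' THEN 1 ELSE 0 END
, name;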



and a function that returns each discipline name along with the  
total number of records in another table (showcase) that are  
related to each discipline. Each showcase entry may have 0 or more  
items (showcase_item) related to it, so ones that have no items are  
disregarded here. Also, only showcases that have been accepted  
should be counted.


First, here's the working function:


I don't think you really need to use a function for this. I believe  
you should be able to do this all in one SQL statement, something  
like (if I've understood your query and intent correctly):


SELECT discipline.name, COUNT(showcase_id) AS total
FROM discipline
LEFT JOIN (
SELECT DISTINCT discipline_id, showcase.id as showcase_id
FROM showcase
JOIN showcase_item on (showcase.id = showcase_id)
WHERE accepted) AS accepted_showcases
ON (discipline.id = discipline_id)
GROUP BY discipline.name
ORDER BY discipline.name = 'other'
, discipline.name;
name | total
-+---
community   | 0
dance   | 0
film and television | 0
fine craft  | 0
media arts  | 0
music   | 0
theatre | 0
visual arts | 1
writing | 2
other   | 0
(10 rows)

This should give you the total number of showcases that have been  
accepted for each discipline. (DDL and data below.)


As a general rule, it's better to let the server handle the data in sets (i.e.,
tables) as much as possible rather than using procedural code.


Hope this helps.

Michael Glaesemann
grzm seespotcode net

CREATE TABLE discipline
(
id INTEGER NOT NULL UNIQUE
, name TEXT PRIMARY KEY
);

INSERT INTO discipline (id, name) VALUES
(1, 'writing')
, (2, 'visual arts')
, (3, 'music')
, (4, 'dance')
, (5, 'film and television')
, (6, 'theatre')
, (7, 'media arts')
, (8, 'community')
, (9, 'fine craft')
, (10, 'other');

SELECT *
FROM discipline
ORDER BY name;

SELECT *
FROM discipline
ORDER BY name = 'other', name;


CREATE TABLE showcase
(
id INTEGER NOT NULL UNIQUE
, name TEXT PRIMARY KEY
, discipline_id INTEGER NOT NULL
REFERENCES discipline(id)
, accepted BOOLEAN NOT NULL
);

INSERT INTO showcase (id, name, discipline_id, accepted)
VALUES
(1, 'foo', 1, true)
, (2, 'bar', 2, true)
, (3, 'baz', 1, true)
, (4, 'quux', 1, false)
, (5, 'blurfl', 2, false);

CREATE TABLE showcase_item
(
id INTEGER NOT NULL UNIQUE
, description TEXT NOT NULL
, showcase_id INTEGER NOT NULL
REFERENCES showcase (id)
, PRIMARY KEY (description, showcase_id)
);

INSERT INTO showcase_item (id, description, showcase_id)
VALUES
(1, 'a', 1)
, (2, 'b', 1)
, (3, 'c', 1)
, (4, 'd', 2)
, (5, 'e', 2)
, (6, 'f', 2)
, (7, 'g', 3)
, (8, 'h', 3)
, (9, 'i', 4)
, (10, 'j', 5);

SELECT *
FROM showcase;
 id |  name  | discipline_id | accepted
----+--------+---------------+----------
  1 | foo    |             1 | t
  2 | bar    |             2 | t
  3 | baz    |             1 | t
  4 | quux   |             1 | f
  5 | blurfl |             2 | f
(5 rows)

SELECT *
FROM showcase
JOIN showcase_item ON (showcase.id = showcase_id);
 id |  name  | discipline_id | accepted | id | description | showcase_id
----+--------+---------------+----------+----+-------------+-------------
  1 | foo    |             1 | t        |  1 | a           |           1
  1 | foo    |             1 | t        |  2 | b           |           1
  1 | foo    |             1 | t        |  3 | c           |           1
  2 | bar    |             2 | t        |  4 | d           |           2
  2 | bar    |             2 | t        |  5 | e           |           2
  2 | bar    |             2 | t        |  6 | f           |           2
  3 | baz    |             1 | t        |  7 | g           |           3
  3 | baz    |             1 | t        |  8 | h           |           3
  4 | quux   |             1 | f        |  9 | i           |           4
  5 | blurfl |             2 | f        | 10 | j           |           5
(10 rows)




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Excell

2007-06-21 Thread Joshua D. Drake

Scott Marlowe wrote:

Csaba Nagy wrote:

On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
 
Another option is to use your favorite scripting language and throw 
an excel header then the data in tab delimited format.  Or even in 
excel xml format.



Why would you need any scripting language? COPY supports CSV output
pretty well; it can even put a header on top.


Because I'm delivering reports to dozens of people who have windows, no 
psql client, and just want to go to a web page, click a button, and get 
their report (or was that a banana?)


If you have psql output HTML with the \H option, Excel will open it directly.
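For example, a rough psql session along these lines (the view and file names
are placeholders):

\H
\o report.html
SELECT * FROM my_report_view;
\o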




I guess I could give them an account on the reporting server and a copy 
of pgadmin or something, but most of them are happier with a web page 
and a set of buttons.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ORDER BY with exception

2007-06-21 Thread Josh Tolley

On 6/21/07, brian <[EMAIL PROTECTED]> wrote:

I have a lookup table with a bunch of disciplines:

# SELECT id, name FROM discipline;
  id |name
+-
   1 | writing
   2 | visual arts
   3 | music
   4 | dance
   5 | film and television
   6 | theatre
   7 | media arts
   8 | community
   9 | fine craft
  10 | other
(10 rows)

and a function that returns each discipline name along with the total
number of records in another table (showcase) that are related to each
discipline. Each showcase entry may have 0 or more items (showcase_item)
related to it, so ones that have no items are disregarded here. Also,
only showcases that have been accepted should be counted.

First, here's the working function:

CREATE FUNCTION getshowcasetotalsbydiscipline(OUT name text, OUT total
integer) RETURNS SETOF record
AS $$

DECLARE
  rec record;

BEGIN
  FOR rec IN
   EXECUTE 'SELECT id, name, 1 AS total FROM discipline'
   LOOP
 name := rec.name;

 SELECT INTO rec.total

   -- a showcase may be in the DB but not accepted by an admin
   SUM(CASE s.accepted WHEN TRUE THEN 1 ELSE 0 END)
   FROM showcase AS s
   WHERE s.id IN

 -- a showcase may exist with no items, so should be ignored
 (SELECT si.showcase_id FROM showcase_item AS si
   WHERE si.discipline_id = rec.id);

  -- If no showcase items have this discipline,
  -- give it a total of zero

 IF rec.total IS NULL THEN
   SELECT INTO total 0;
 ELSE
   total := rec.total;
 END IF;

   RETURN NEXT;
   END LOOP;

   RETURN;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

test=# SELECT * FROM getShowcaseTotalsByDiscipline();
 name | total
-+---
  writing |130
  visual arts |252
  music   |458
  dance   |131
  film and television |102
  theatre |271
  media arts  | 83
  community   | 20
  fine craft  | 78
  other   | 59
(10 rows)

Works fine, but i'd like to order the disciplines alphabetically
*except* have 'other' fall at the end. So, should i loop a second time,
after summing the totals, and keep the 'other' row aside, then add it to
the end?

(btw, the output of this function is cached until a new showcase is
accepted)

Or, should i re-order the disciplines alphabetically in the lookup
table, keeping 'other' last?

I could do the latter, although it would mean a fair bit of work because
the disciplines table relates to a bunch of other stuff, as well. Also,
there's always the chance that a new discipline will be added in the
future. I suppose i could write a trigger that bumped the 'other' id
above that of the new entry, then re-relate everything else in the DB
that's connected to the 'other' discipline. But that strikes me as kind
of a hack.

The third option is to re-order the resultset in the PHP script that
displays this. But that wasn't why i chose Postgres for this app ;-)

brian

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



It seems to me you could replace it all with one query, something like this:

SELECT discipline, COUNT(1) FROM showcase WHERE EXISTS (SELECT * FROM
showcase_item WHERE showcase_id = showcase.id LIMIT 1) GROUP BY
discipline ORDER BY (discipline != 'other'), discipline;

- Josh

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Throwing exceptions

2007-06-21 Thread Scott Marlowe

Germán Hüttemann Arza wrote:

Hi,

I need a way to throw a message in a function when an exception occurs, but I
don't want to write the same message again and again in every place I need to
throw it. So, is there a way to handle this situation in a more general
manner?


Why not create a table of error messages / numbers, then you can just 
pull the message from the table?
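A rough sketch of that idea (the table, function and error code here are
invented for the example):

CREATE TABLE error_message (code integer PRIMARY KEY, msg text NOT NULL);

CREATE OR REPLACE FUNCTION raise_app_error(err_code integer) RETURNS void AS $$
DECLARE
  m text;
BEGIN
  SELECT msg INTO m FROM error_message WHERE code = err_code;
  RAISE EXCEPTION '%', m;
END;
$$ LANGUAGE plpgsql;

Then any function that needs to throw the shared message can just do
PERFORM raise_app_error(42);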


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Excell

2007-06-21 Thread Scott Marlowe

Csaba Nagy wrote:

On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
  
Another option is to use your favorite scripting language and throw an 
excel header then the data in tab delimited format.  Or even in excel 
xml format.



Why would you need any scripting language? COPY supports CSV output
pretty well; it can even put a header on top.


Because I'm delivering reports to dozens of people who have windows, no 
psql client, and just want to go to a web page, click a button, and get 
their report (or was that a banana?)


I guess I could give them an account on the reporting server and a copy 
of pgadmin or something, but most of them are happier with a web page 
and a set of buttons.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] ORDER BY with exception

2007-06-21 Thread brian

I have a lookup table with a bunch of disciplines:

# SELECT id, name FROM discipline;
 id |name
+-
  1 | writing
  2 | visual arts
  3 | music
  4 | dance
  5 | film and television
  6 | theatre
  7 | media arts
  8 | community
  9 | fine craft
 10 | other
(10 rows)

and a function that returns each discipline name along with the total 
number of records in another table (showcase) that are related to each 
discipline. Each showcase entry may have 0 or more items (showcase_item) 
related to it, so ones that have no items are disregarded here. Also, 
only showcases that have been accepted should be counted.


First, here's the working function:

CREATE FUNCTION getshowcasetotalsbydiscipline(OUT name text, OUT total 
integer) RETURNS SETOF record

AS $$

DECLARE
 rec record;

BEGIN
 FOR rec IN
  EXECUTE 'SELECT id, name, 1 AS total FROM discipline'
  LOOP
name := rec.name;

SELECT INTO rec.total

  -- a showcase may be in the DB but not accepted by an admin
  SUM(CASE s.accepted WHEN TRUE THEN 1 ELSE 0 END)
  FROM showcase AS s
  WHERE s.id IN

-- a showcase may exist with no items, so should be ignored
(SELECT si.showcase_id FROM showcase_item AS si
  WHERE si.discipline_id = rec.id);

 -- If no showcase items have this discipline,
 -- give it a total of zero

IF rec.total IS NULL THEN
  SELECT INTO total 0;
ELSE
  total := rec.total;
END IF;

  RETURN NEXT;
  END LOOP;

  RETURN;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

test=# SELECT * FROM getShowcaseTotalsByDiscipline();
name | total
-+---
 writing |130
 visual arts |252
 music   |458
 dance   |131
 film and television |102
 theatre |271
 media arts  | 83
 community   | 20
 fine craft  | 78
 other   | 59
(10 rows)

Works fine, but i'd like to order the disciplines alphabetically 
*except* have 'other' fall at the end. So, should i loop a second time, 
after summing the totals, and keep the 'other' row aside, then add it to 
the end?


(btw, the output of this function is cached until a new showcase is 
accepted)


Or, should i re-order the disciplines alphabetically in the lookup
table, keeping 'other' last?


I could do the latter, although it would mean a fair bit of work because 
the disciplines table relates to a bunch of other stuff, as well. Also, 
there's always the chance that a new discipline will be added in the 
future. I suppose i could write a trigger that bumped the 'other' id 
above that of the new entry, then re-relate everything else in the DB 
that's connected to the 'other' discipline. But that strikes me as kind 
of a hack.


The third option is to re-order the resultset in the PHP script that 
displays this. But that wasn't why i chose Postgres for this app ;-)


brian

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Dynamic Log tigger (plpgsql)

2007-06-21 Thread Noah Heusser

Sergey Konoplev schrieb:

My Question:
How can I do "OLD.columnName != NEW.columnName" if I don't know what the
columnNames are at Compile Time?
I have the columnName in a variable.



I suggest you use plpython. In this case you'll be able to do it.

TD['old'][colNameVar] != TD['new'][colNameVar]


Thanks, you are right. These languages are trusted, like plpgsql.

Did it in Perl: 


 foreach $key (keys %{$_TD->{old}}) {
   if ($_TD->{old}{$key} ne $_TD->{new}{$key}) {
     # the value of column $key differs between OLD and NEW;
     # this is where the change would be logged
   }
 }




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-21 Thread Hannes Dorbath
Christan Josefsson wrote:
> Any guess when 8.4 could be production ready? A year or more?

Why don't you just use Bizgres?

Right, they don't release that often, and 0.9 misses various fixes that
went into PostgreSQL. But if it has what you are after and works for you..


-- 
Best regards,
Hannes Dorbath

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Hannes Dorbath
Henk - CityWEB wrote:
> I can't wait to get a decent master/multi-slave setup going where I can
> turn fsync on and still get semi-decent performance...

I don't see how replication can help you with fsync performance
problems. Controllers with battery backed write cache are cheap. What is
the point of disabling fsync these days?


-- 
Best regards,
Hannes Dorbath

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Aggregates

2007-06-21 Thread John D. Burger

Richard Huxton wrote:


Ah, but this just includes the time of the last message, not its data.


Oops, I read the OP's question as "date and time", rather than "data  
and time".  Nevermind. :)


- John D. Burger
  MITRE



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] How to install Postgresql on MS Vista?

2007-06-21 Thread Lee Keel
The first thing you have to do is disable User Account Control (UAC).

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of dfx
> Sent: Thursday, June 21, 2007 12:58 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] How to install Postgresql on MS Vista?
> 
> I tried it but get errors on creating the postgres user.
> 
> Is there some workaround?
> 
> Thank you
> 
> Domenico
> 
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the sender. This message 
contains confidential information and is intended only for the individual 
named. If you are not the named addressee you should not disseminate, 
distribute or copy this e-mail.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How to install Postgresql on MS Vista?

2007-06-21 Thread Andreas Kretschmer
Joshua D. Drake <[EMAIL PROTECTED]> schrieb:

> >>I tried it but get errors on creating the postgres user.
> >>Is there some workaround?
> >I'm not familiar with this crappy OS, but maybe you should disable UAC.
> 
> In your mind, it may be crappy but it is indeed an officially supported 
> operating system by this project. Let's keep our responses kind shall we.

Yes, of course, no problem.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How to install Postgresql on MS Vista?

2007-06-21 Thread Joshua D. Drake

Andreas Kretschmer wrote:

dfx <[EMAIL PROTECTED]> schrieb:


I tried it but get errors on creating the postgres user.

Is there some workaround?


I'm not familiar with this crappy OS, but maybe you should disable UAC.



In your mind, it may be crappy but it is indeed an officially supported 
operating system by this project. Let's keep our responses kind shall we.


Sincerely,

Joshua D. Drake




Andreas



--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] How to install Postgresql on MS Vista?

2007-06-21 Thread Andreas Kretschmer
dfx <[EMAIL PROTECTED]> schrieb:

> I tried it but get errors on creating the postgres user.
> 
> Is there some workaround?

I'm not familiar with this crappy OS, but maybe you should disable UAC.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] How to install Postgresql on MS Vista?

2007-06-21 Thread dfx
I tried it but get errors on creating the postgres user.

Is there some workaround?

Thank you

Domenico

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Henk - CityWEB


On Thu, 21 Jun 2007, Gregory Stark wrote:
> Ugh. The worst part is that you won't even know that there's anything wrong
> with your data. I would actually suggest that if you run with fsync off and
> have a power failure or kernel crash you should just immediately restore from
> your last backup and not risk running with the possibly corrupt database.
>
> Honestly this seems like a weird error to occur as a result of crashing with
> fsync off but anything's possible. More likely is you have records that you
> have partial transactions in your database, ie, records which were inserted or
> deleted in a transaction but missing other records that were inserted or
> deleted in the same transaction.
>
> You could probably fix this particular problem by reindexing the corrupted
> index. But you may never know if some of the data is incorrect.

Thanks, Greg.  Luckily the data is for internal/behind-the-scenes
use only, with no customer access.  So the situation isn't dire.

I can't wait to get a decent master/multi-slave setup going where I can
turn fsync on and still get semi-decent performance...

Regards
Henry

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Henk - CityWEB

On Thu, 21 Jun 2007, Tom Lane wrote:

> "Henka" <[EMAIL PROTECTED]> writes:
> > I happened to notice this error in the log when my application was refused
> > a db connection (quite unexpectedly):
>
> > PANIC:  corrupted item pointer: offset = 3308, size = 28
> > LOG:  autovacuum process (PID 18165) was terminated by signal 6
>
> FWIW, the only occurrences of that specific message text are in
> PageIndexTupleDelete and PageIndexMultiDelete, so you can be pretty sure
> that this is just a corrupted-index problem.  Once you've identified
> which table has the problem, a REINDEX should fix it.
>
> I concur with the other comments that a crash with fsync off might well
> have allowed more corruption than just this to sneak in, though :-(

Thanks for the comments, Tom.  I've started the reindex on the offending
index, so let's see how it goes.

I think once that's done, and I'm able to dump-all without error, I'll
upgrade to 8.2.4 and perform a restore.  That process should
uncover any other funnies.

Regards
Henry

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Throwing exceptions

2007-06-21 Thread Germán Hüttemann Arza
Hi,

I need a way to throw a message in a function when an exception occurs, but I
don't want to write the same message again and again in every place I need to
throw it. So, is there a way to handle this situation in a more general
manner?

Thanks in advance,

-- 
Germán Hüttemann Arza
CNC - Centro Nacional de Computación
UNA - Universidad Nacional de Asunción
Campus Universitario, San Lorenzo - Paraguay
http://www.cnc.una.py - Tel.: +595 21 585550

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Erik Jones


On Jun 21, 2007, at 5:16 AM, Bruce McAlister wrote:

That's exactly what I think. There is something strange going on. At the
moment I think it is the disk I am writing the data to that is slow,
possibly due to the fact that it is mounted up as "forcedirectio", so as
not to interfere with the file system cache, which we want to have mainly
pg datafiles in, and the RAID controller has this particular logical
drive configured as write-through, so there is no buffering in-between.

The cpu's and network are not the problem here (2 x Dual Core Opterons
and Quad Gigabit Ethernet, total cpu usage is around 10%, NIC's are
pushing around 3Mbit/s over each).

It's not all that big to be honest, the total database size is around
11GB, and I'm currently wrecking my head to find out how to improve the
backup times without adversely affecting our running instance. I just
recently tried to use UFS snapshots, but the backing store filled up
before I could complete a backup of the snapshot. I need to find a way
to improve the write speed of our destination disk. I have another
question in this pg group about autovacuum not running on one of
our database tables, which adds an average of around 2.1GB of bloat to
the database each day. I've now (today) scheduled a cron job every 10
minutes to get around this in the meantime. Hopefully that should reduce
the amount of data backed up by 2GB when we get to the bottom of that
issue (autovacuum)



You said in your other thread that you're on Solaris 10, right? We are as
well, and we just discovered that having stats_block_level set to on
increases write volume a lot; we noticed a significant drop when we turned
it off, as well as a significant drop in wal file traffic. The same goes
for stats_row_level (wrt write volume at least), but you need that if you
want query information to come through pg_stat_activity (we left that on).
We just migrated off of a server wherein forcedirectio actually helped us
a lot, but now we're wondering if that was due to us having forcedirectio
on. We're only at the beginning of a lot of systems migrations and
restructuring, so now that we have some new avenues and room to
experiment, I'll try to post our results in a couple weeks.
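For reference, the two settings mentioned above are ordinary postgresql.conf
parameters in the 8.2 era; a sketch of the combination described here:

stats_block_level = off   # turned off, per the write-volume observation above
stats_row_level = on      # left on, per the pg_stat_activity note above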


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] finding items with 0 rels for a 0 to many relationship

2007-06-21 Thread Michael Glaesemann


On Jun 21, 2007, at 11:57 , Josh Tolley wrote:


On 6/21/07, danmcb <[EMAIL PROTECTED]> wrote:

Hi

I have two tables, say A and B,  that have a many-to-many
relationship, implemented in the usual way with a join table A_B.

How can I economically find all the rows in table A whose id's are  
not

in A_B at all (i.e. they have zero instances of B associated)?


Use a left join. For instance, say there are a.id and b.id columns,
which are the primary keys in A and B respectively. Also say A_B
contains columns aid and bid which reference a.id and b.id
respectively.

SELECT * FROM A LEFT JOIN A_B ON (A.ID = A_B.AID) WHERE A_B.BID IS  
NULL;


Alternatively you can use EXCEPT. Using Josh's schema:

SELECT id
FROM A
EXCEPT
SELECT aid
FROM A_B;

You'll want to check with EXPLAIN ANALYZE, but in general I suspect  
the outer join is faster.
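A third spelling of the same query, in case it reads more naturally, is NOT
EXISTS (again using Josh's column names):

SELECT *
FROM A
WHERE NOT EXISTS (
SELECT 1
FROM A_B
WHERE A_B.aid = A.id);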


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] finding items with 0 rels for a 0 to many relationship

2007-06-21 Thread Josh Tolley

On 6/21/07, danmcb <[EMAIL PROTECTED]> wrote:

Hi

I have two tables, say A and B,  that have a many-to-many
relationship, implemented in the usual way with a join table A_B.

How can I economically find all the rows in table A whose id's are not
in A_B at all (i.e. they have zero instances of B associated)?


Use a left join. For instance, say there are a.id and b.id columns,
which are the primary keys in A and B respectively. Also say A_B
contains columns aid and bid which reference a.id and b.id
respectively.


SELECT * FROM A LEFT JOIN A_B ON (A.ID = A_B.AID) WHERE A_B.BID IS NULL;


- Josh

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] finding items with 0 rels for a 0 to many relationship

2007-06-21 Thread danmcb
Hi

I have two tables, say A and B,  that have a many-to-many
relationship, implemented in the usual way with a join table A_B.

How can I economically find all the rows in table A whose id's are not
in A_B at all (i.e. they have zero instances of B associated)?

Thanks

Daniel


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Tom Lane
"Henka" <[EMAIL PROTECTED]> writes:
> I happened to notice this error in the log when my application was refused
> a db connection (quite unexpectedly):

> PANIC:  corrupted item pointer: offset = 3308, size = 28
> LOG:  autovacuum process (PID 18165) was terminated by signal 6

FWIW, the only occurrences of that specific message text are in
PageIndexTupleDelete and PageIndexMultiDelete, so you can be pretty sure
that this is just a corrupted-index problem.  Once you've identified
which table has the problem, a REINDEX should fix it.
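For example (the object names are placeholders for whatever turns out to be
damaged):

REINDEX INDEX the_corrupt_index;
-- or, if you only know the table:
REINDEX TABLE the_affected_table;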

I concur with the other comments that a crash with fsync off might well
have allowed more corruption than just this to sneak in, though :-(

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-21 Thread Andrew Sullivan
On Thu, Jun 21, 2007 at 10:39:29AM +0200, Christan Josefsson wrote:
> Any guess when 8.4 could be production ready? A year or more?

"In the future" is what I'd be willing to state out loud ;-)  8.3
hasn't finished development yet.  I wouldn't hold my breath.

You can find out more about bizgres at http://bizgres.org/home.php.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The plural of anecdote is not data.
--Roger Brinner

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] persistent db connections in PHP

2007-06-21 Thread Scott Marlowe

Raymond O'Donnell wrote:


[EMAIL PROTECTED] wrote:

However, with this new Postgres site, I don't have access to my temp
tables after I've traversed another pg_connect. So PHP is either
creating a new connection, or giving me another session, not the one
which I created my tables in.


You wouldn't expect to be given back the same connection (and hence 
the same temp tables) from a pool of connections - they're returned 
randomly.


Scott Marlowe wrote:
MySQL reuses old connections within the same script.  PostgreSQL's 
php extension does not, it starts a new connection each time.


Isn't pg_pconnect supposed to recycle a pooled connection?


I wasn't speaking of pg_pconnect.  I was speaking of mysql_connect and 
pg_connect.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgresql and solaris 10: pitch to sysadmins

2007-06-21 Thread Robert Treat
On Monday 18 June 2007 16:27, John Smith wrote:
> guys
> need to pitch postgresql to some hard-to-budge solaris sysadmins- they
> don't even know about the postgresql-solaris 10 package, just used to
> oracle and don't want to break their backs over postgresql. plus i
> don't know enough slony yet.
>
> can someone point me to some standard backup/restore etc sysadmin
> stuff/scripts/processes? also what's best left to the sysadmins that i
> shouldn't take responsibility for? any trainings/books for
> sysadmins/myself?
> thanks, jzs

We run a number of pg systems on various versions of solaris.  We did a case 
study on the instance available here: 
http://images.omniti.net/omniti.com/media/c_06-1081_SunOmniTl_SB-Fhr.pdf

We don't really offer PostgreSQL/Solaris-specific training (though maybe we 
should, Theo is a dtrace wizard); we're more of a professional services firm, 
helping with setup and ongoing maintenance.  I don't think Sun offers that 
level of training either, though you might want to contact your local Sun rep 
to find out.

Oh, I do have some pg-specific bookmarks from Sun you can pass on to your 
admins; really, setting up postgresql is pretty simple if your guys already 
know their way around the OS.
http://www.sun.com/software/solaris/howtoguides/postgresqlhowto.jsp
http://docs.sun.com/app/docs/doc/819-5578?q=+postgresql&a=expand

Hmm... one other thing, we've been dumping some of our stuff onto our labs 
site, you can see some of it here: https://labs.omniti.com/trac/pgsoltools

HTH
-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL question re select for update (locking??)

2007-06-21 Thread Richard Huxton

Reid Thompson wrote:


Each server process claims a jobq record by selecting for update a
jobq record where the pid column is null, then rewrites the record with
the pid set in the pid column.

The "distilled" sql select statement is:

* SELECT J.*, C.name, C.client_id, C.priority
* FROM jobq J, campaign C
* WHERE J.pid IS NULL
* AND 'my_MMDDhhmmss'>=J.due_date
* AND J.campaign_id=C.id
* ORDER BY C.priority,J.due_date,J.item_id
* LIMIT 1 FOR UPDATE;


What we are seeing is:

- We dump hundreds of records into the jobq table, expecting that a
  server process will each claim a record (and process it), but some of
  the processes call the claim function but get nothing returned by the
  select; they sleep for a short time (eg 10 seconds - this sleep
  differs on a per process basis) then attempt to claim again.
  Sometimes this claim returns a jobq record, other times it may take 4
  or 5 attempted claims before a record is actually returned by the
  select.

  Our expectation was that with hundreds of selectable records available,
  that each server process would 'get' the next available record -- 
  Is this a false expectation on our part?

  Is the currently 'active' select for update limit 1 blocking the others?
  If not, can someone give us some direction..??


The problem is that the order of events isn't quite what you expect.

You can think of the SELECT ... LIMIT 1 as identifying a set containing 
one row. Since it's FOR UPDATE, the backend tries to get a lock, and if 
necessary waits until it can. Of course, all the other processors are 
doing the same thing.


When another process updates that row, the backend re-checks the 
conditions and you end up with a set of zero rows, which it helpfully 
gains a lock on and then returns to you!


So - what to do? Well, there are several options.

1. Keep the lock/update period short by just marking the row as 
"claimed" and committing. You still need to loop trying to get a job to 
process, but the process is smoother because the possible window for 
conflicts is short.


2. Select "random" jobs from the list if you don't care about order. 
Again, you're just making things smoother, not removing the requirement 
to loop checking.


3. Have a "master" process that allocates jobs to processors. Or 
simulate such a thing by having processors call an update_jobs() 
function. That way you can pre-allocate jobs to processors (at perhaps a 
cost in efficiency if the number of jobs in-queue falls below the number 
of processors).
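As a rough illustration of option 1, the claim step can be kept very short
(the pid value and ordering here are placeholders; the real query would keep
your priority/due_date ordering):

BEGIN;
SELECT item_id FROM jobq
WHERE pid IS NULL
ORDER BY due_date
LIMIT 1 FOR UPDATE;
-- immediately mark the row just returned as claimed, then commit
UPDATE jobq SET pid = 12345 WHERE item_id = <id from the SELECT>;
COMMIT;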


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Excell

2007-06-21 Thread Csaba Nagy
On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
> Another option is to use your favorite scripting language and throw an 
> excel header then the data in tab delimited format.  Or even in excel 
> xml format.

Why would you need any scripting language? COPY supports CSV output
pretty well; it can even put a header on top. And 8.2 can COPY a
SELECT too, so you can generate the CSV dump directly from joins too...
on older systems I usually create a temporary table as the result of the
join and then COPY that - plain SQL is all you need... I always did it
this way when it came to occasional data from/to excel.
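For instance, on 8.2 something along these lines (the query and file name are
just examples) produces a CSV with a header row that Excel opens directly:

COPY (SELECT * FROM my_report_view) TO '/tmp/report.csv' WITH CSV HEADER;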

Now if it is about regularly exchanging data with excel, possibly using
excel as a DB interface, probably ODBC is the only viable choice, but if
the OP really needs a DB for the data, I would say using excel as the
interface to it is likely a bad decision...

Cheers,
Csaba.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Aggregates

2007-06-21 Thread Richard Huxton

John D. Burger wrote:


On Jun 21, 2007, at 09:22, Richard Huxton wrote:


Naz Gassiep wrote:

Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra column
with the number of messages they have posted and the data and time of
the last message? At the moment I am using a subquery to do this,
however it seems suboptimal. Is there a better way?


Not really. You have three separate queries really:
1. User details
2. Total number of messages posted
3. Details on last message posted

Unless you have a messaging-summary table that you keep up-to-date 
with triggers you're looking at a three-part query.


Certainly except for the user details it could be a single GROUP BY with 
several aggregate functions, something like:


  select user.userid, count(*), max(message.datetime)
from user join message using (userid)
group by user.userid;


Ah, but this just includes the time of the last message, not its data.
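If the body of the last message is wanted too, one way (only a sketch, and the
table/column names are guesses at the original schema) is to join the
aggregate back against the message table:

SELECT u.*, agg.msg_count, m.datetime AS last_time, m.body AS last_body
FROM users u
JOIN (SELECT userid, count(*) AS msg_count, max(datetime) AS last_time
  FROM message GROUP BY userid) agg ON (agg.userid = u.userid)
JOIN message m ON (m.userid = u.userid AND m.datetime = agg.last_time);

(If two messages from the same user share the same timestamp this returns
both, so it's only a starting point.)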

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Excell

2007-06-21 Thread Scott Marlowe

David Gardner wrote:
Agreed, ODBC is the way to go; depending on what you are doing, Access 
may be helpful as an intermediate step.


Joshua D. Drake wrote:

Bob Pawley wrote:


Hi All
 
Is there a fast and easy method of transferring information between 
MS Excel and PostgreSQL??


odbc?



Another option is to use your favorite scripting language and throw an 
excel header then the data in tab delimited format.  Or even in excel 
xml format.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] PostgreSQL question re select for update (locking??)

2007-06-21 Thread Reid Thompson
Hello list,
We are using PostgreSQL 8.0.3.  Some background, and a couple of
questions..

We have a database table called "jobq" on the database machine,
and 2 networked server machines.
One of the network server machines has around 20 server processes
connecting over the network using ODBC.
These servers all attempt to "claim" jobs off the jobq to process
them.

Each server process claims a jobq record by selecting for update a
jobq record where the pid column is null, then rewrites the record with
the pid set in the pid column.

The "distilled" sql select statement is:

* SELECT J.*, C.name, C.client_id, C.priority
* FROM jobq J, campaign C
* WHERE J.pid IS NULL
* AND 'my_MMDDhhmmss'>=J.due_date
* AND J.campaign_id=C.id
* ORDER BY C.priority,J.due_date,J.item_id
* LIMIT 1 FOR UPDATE;


What we are seeing is:

- We dump hundreds of records into the jobq table, expecting that a
  server process will each claim a record (and process it), but some of
  the processes call the claim function but get nothing returned by the
  select; they sleep for a short time (eg 10 seconds - this sleep
  differs on a per process basis) then attempt to claim again.
  Sometimes this claim returns a jobq record, other times it may take 4
  or 5 attempted claims before a record is actually returned by the
  select.

  Our expectation was that with hundreds of selectable records available,
  that each server process would 'get' the next available record -- 
  Is this a false expectation on our part?
  Is the currently 'active' select for update limit 1 blocking the others?
  If not, can someone give us some direction..??

- We do not see any SQL error in our server process diagnostic output
  (but I will revisit this to double check).

- We do not see any SQL error in the postgres logs.


Here is an example / snapshot of postgres processes on the database machine:
(ps -ef |grep "postgres.*192" |grep -v idle)

postgres 27866 6434 0 05:05 ? 00:03:21 postgres: obc obc 192.168.100.164(34499) 
SELECT
postgres 27871 6434 0 05:08 ? 00:05:57 postgres: obc obc 192.168.100.171(37066) 
SELECT waiting
postgres 27887 6434 0 05:09 ? 00:01:44 postgres: obc obc 192.168.100.171(37130) 
SELECT



Here is an example / snapshot of sessions:

 datid | procpid | usesysid |current_query | query_start
---+-+--+--+-
 17231 |   23540 |  100 |  |
 17231 |   23541 |  100 |  |
 17231 |   23542 |  100 |  |
 17231 |   23543 |  100 |  |
 17231 |   23544 |  100 |  |
 17231 |   23545 |  100 |  |
 17231 |   23546 |  100 |  |
 17231 |   23547 |  100 |  |
 17231 |   23548 |  100 |  |
 17231 |   23549 |  100 |  |
 17231 |   23550 |  100 |  |
 17231 |   23551 |  100 |  |
 17231 |   23552 |  100 |  |
 17231 |   23553 |  100 |  |
 17231 |   23554 |  100 |  |
 17231 |   23555 |  100 |  |
 17231 |   23556 |  100 |  |
 17231 |   23557 |  100 |  |
 17231 |   23558 |  100 |  |
 17231 |   23559 |  100 |  |
 17231 |   23560 |  100 |  |
 17231 |   23561 |  100 |  |
 17231 |   23562 |  100 |  |
 17231 |   23563 |  100 |  |
 17231 |   23564 |  100 |  |
 17231 |   23565 |  100 |  |
 17231 |   23566 |  100 |  |
 17231 |   23567 |  100 |  |
 17231 |   23568 |  100 |  |
 17231 |   23569 |  100 |  |
 17231 |   23570 |  100 |  |
 17231 |   23571 |  100 |  |
 17231 |   23572 |  100 |  |
 17231 |   23573 |  100 |  |
 17231 |   23574 |  100 |  |
 17231 |   23575 |  100 |  |
(36 rows)



Here is an example / snapshot of pg_locks:

 relation | database | transaction |  pid  |   mode   | granted
--+--+-+---+--+-
18384 |17231 | | 23544 | AccessShareLock  | t
18267 |17231 | | 23556 | AccessShareLock  | t
18267 |17231 | | 23556 | RowShareLock | t
  |  |32900999 | 23564 | ShareLock| f
  |  |32900999 | 23556 | ExclusiveLock| t
  |  |32901005 | 23568 | ExclusiveLock| t
18418 |17231 | | 23556 | AccessShareLock  | t
18384 |17231 | | 23556 | AccessShareLock  | t
18357 |17231 | | 23555 | AccessShareLock  | t
18256 |17231 | | 23544 | AccessShareLock  | t
18352 |17231 | | 23554 | AccessShareLock  | t
18352 |17231 | | 23554 | RowShareLock | t

Re: [GENERAL] Aggregates

2007-06-21 Thread John D. Burger


On Jun 21, 2007, at 09:22, Richard Huxton wrote:


Naz Gassiep wrote:

Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra  
column

with the number of messages they have posted and the data and time of
the last message? At the moment I am using a subquery to do this,
however it seems suboptimal. Is there a better way?


Not really. You have three separate queries really:
1. User details
2. Total number of messages posted
3. Details on last message posted

Unless you have a messaging-summary table that you keep up-to-date  
with triggers you're looking at a three-part query.


Certainly except for the user details it could be a single GROUP BY  
with several aggregate functions, something like:


  select user.userid, count(*), max(message.datetime)
from user join message using (userid)
group by user.userid;

But if userid is UNIQUE, then so is user.*.  You can't do something  
like GROUP BY USER.*, but you can group by all the user columns  
you're actually interested in selecting:


  select userid, user.name, user.address, count(*), max(message.datetime)
from user join message using (userid)
group by userid, user.name, user.address;

As to whether this is faster or prettier than a subquery, I dunno.

- John D. Burger
  MITRE



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [PGSQL 8.2.x] INSERT+INSERT

2007-06-21 Thread Dawid Kuroczko

On 6/21/07, Vincenzo Romano <[EMAIL PROTECTED]> wrote:

Hi all.
I'd like to do the following:

insert into t1
  values (
'atextvalue',(
  insert into t2
values ( 'somethingelse' )
returning theserial
)
  )
;

that is, I first insert data into t2, getting back the newly created
serial value, then I insert this value in another table.
I get an error message:
ERROR:  syntax error at or near "into"
referring to the second inner "into".
Is there a way to do this?
The inner insert...returning should be the "expression" to be used in
the outer insert.
My objective is to create an SQL script to load some 20+ million
records and avoiding function calls would save some time.


I'm afraid INSERT ... RETURNING cannot be used where a (sub)select
could be.  It returns data to the calling application only.

Given tables:

qnex=# CREATE TABLE t1 (t text, id int);
qnex=# CREATE TABLE t2 (id serial, sth text);
NOTICE:  CREATE TABLE will create implicit sequence "t2_id_seq" for
serial column "t2.id"

You want to:

qnex=# INSERT INTO t2 (sth) VALUES ('somethingelse');
INSERT 0 1
qnex=# INSERT INTO t1 VALUES ('atextvalue', currval('t2_id_seq'));
INSERT 0 1

Or wrap it around SQL function:

qnex=# CREATE OR REPLACE FUNCTION t_insert(sth text, t text) RETURNS VOID AS $$
 INSERT INTO t2 (sth) VALUES ($1);
 INSERT INTO t1 (t,id) VALUES ($2, currval('t2_id_seq'));
$$ LANGUAGE SQL;
CREATE FUNCTION
qnex=# SELECT t_insert('foo', 'bar');

...which should be inlined nicely, without PL/PgSQL overhead.

  Regards,
Dawid

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Aggregates

2007-06-21 Thread Richard Huxton

Naz Gassiep wrote:

Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra column
with the number of messages they have posted and the data and time of
the last message? At the moment I am using a subquery to do this,
however it seems suboptimal. Is there a better way?


Not really. You have three separate queries really:
1. User details
2. Total number of messages posted
3. Details on last message posted

Unless you have a messaging-summary table that you keep up-to-date with 
triggers you're looking at a three-part query.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Accent insensitive search

2007-06-21 Thread Albe Laurenz
PFC wrote:
>> Hi. I have a few databases created with UNICODE encoding, and I would  
>> like to be able to search with accent insensitivity. There's something  
>> in Oracle (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do  
>> this, but I found nothing in PostgreSQL, just the 'to_ascii' function,  
>> which AFAIK, doesn't work with UNICODE.
> 
>   The easiest way is to create an extra column which will hold a copy of  
> your text, with all accents removed. You can also convert it to lowercase  
> and remove apostrophes, punctuation etc. Said column is kept up to date  
> with a trigger.

Creating an extra column which holds a copy of the data is certainly not
a very good idea, as it will create unnecessary redundancy (breaking
normal forms), bloat your table, and the trigger will have a certain
performance impact.

My suggestion is to write a function that removes the accents in a string
for your language, let's call it noaccents(text).

Then you can index the column atext with

CREATE INDEX atable_atext_idx ON atable ((noaccents(atext)))

Then every query of the form

... WHERE noaccents(atext) = noaccents('SOMÉTHING')

can use the index.
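As a very small sketch of such a function (the character lists would need to
be extended for your language, and it must be IMMUTABLE so it can be used in
the index):

CREATE OR REPLACE FUNCTION noaccents(text) RETURNS text AS $$
SELECT translate($1,
 'áàâäéèêëíìîïóòôöúùûüÁÀÂÄÉÈÊËÍÌÎÏÓÒÔÖÚÙÛÜ',
 'aaaaeeeeiiiioooouuuuAAAAEEEEIIIIOOOOUUUU')
$$ LANGUAGE sql IMMUTABLE;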

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Gregory Stark

"Henka" <[EMAIL PROTECTED]> writes:

>> Other than that it might be interesting to know the values of some server
>> parameters: "fsync" and "full_page_writes". Have you ever had this machine
>> crash or had a power failure? And what kind of i/o controller is this?
>
> fsync = off
> full_page_writes = default
>
> Sadly yes, the machine has experienced a power failure about 3 weeks ago
> (genset startup problem).  With fsync=off this presents a problem wrt safe
> recovery, I know...

Ugh. The worst part is that you won't even know that there's anything wrong
with your data. I would actually suggest that if you run with fsync off and
have a power failure or kernel crash you should just immediately restore from
your last backup and not risk running with the possibly corrupt database.

Honestly this seems like a weird error to occur as a result of crashing with
fsync off, but anything's possible. More likely is that you have partial
transactions in your database, i.e., records which were inserted or
deleted in a transaction while other records inserted or deleted in the
same transaction are missing.

You could probably fix this particular problem by reindexing the corrupted
index. But you may never know if some of the data is incorrect.
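
For the reindex, something like this (the names are placeholders for
whatever object turns out to be corrupted):

REINDEX INDEX the_corrupted_index;
-- or, to rebuild every index on the table:
REINDEX TABLE the_suspect_table;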

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Henka


>> I'm using PG 8.2.3:
>
> You should update to 8.2.4, it includes a security fix and several bug
> fixes.

That was my next option.  My last backup dump looks suspiciously small,
but the day before that looks about right.


> My first thought is bad memory. It's always good to rule that out since
> it's
> quite common and can cause a lot of hair-pulling. If you can schedule some
> downtime download memtest86 and run it overnight.

Thanks for the suggestion - will give it a try.


> Other than that it might be interesting to know the values of some server
> parameters: "fsync" and "full_page_writes". Have you ever had this machine
> crash or had a power failure? And what kind of i/o controller is this?

fsync = off
full_page_writes = default

Sadly yes, the machine has experienced a power failure about 3 weeks ago
(genset startup problem).  With fsync=off this presents a problem wrt safe
recovery, I know...


> Ideally it would be good to get a dump of this block, it looks like it's
> probably a block of an index (unless you have a table with extremely
> narrow
> rows?). But there doesn't seem to be enough information in this error to
> tell
> which block it happened on.
>
> If you manually "vacuum verbose" each table does it cause a crash? If so
> send

Giving that a try now on the suspect table.




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Bruce McAlister
Albe Laurenz wrote:
> Richard Huxton wrote:
>>> In our environment it takes approx 2 hours to perform a PIT backup of
>>> our live system:
>>>
>>> [1] select pg_start_backup('labe;')
>>> [2] cpio & compress database directory (exclude wals)
>>> [3] select pg_stop_backup()
>>>
>>> However, if we perform a plain dump (pg_dump/pg_dumpall) we 
>>> can dump the whole lot in 15 minutes. For us this is more efficient.
>> It sounds like there's something strange with your setup if it's quicker
>> for pg_dump to read your data than cp. Do you have *lots* of indexes, or
>> perhaps a lot of dead rows?
> 
> That sounds like a good bet.
> 
> Did you ever do a VACUUM FULL?
> What is the size of the online backup?
> What is the size of the pg_dumpall?
> 

I have autovacuum configured on the system, but have only just recently
(yesterday) found out that the autovacuum daemon only appears to be
selecting one database to work on; I have another thread open on this
mailing list about that particular issue.

Online backup size compressed is around 3.4GB.
pg_dumpall size compressed is around 2GB.

We do have a couple of indexes; however, the only one with a large
amount of bloat is the one referenced in the autovacuum post on this
mailing list. Hopefully, if I can get autovacuum working with that
database, that will be one less worry :)


> Yours,
> Laurenz Albe
> 
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
> 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Bruce McAlister
Richard Huxton wrote:
> Bruce McAlister wrote:
>> That's exactly what I think. There is something strange going on. At the
>> moment I think it is the disk I am writing the data to that is slow,
>> possibly due to the fact that it is mounted up as "forcedirectio", so as
>> not to interfere with the file system cache, which we want to contain
>> mainly pg datafiles, and the RAID controller has this particular logical
>> drive configured as write-through, so there is no buffering in-between.
>> The CPUs and network are not the problem here (2 x Dual Core Opterons
>> and Quad Gigabit Ethernet, total CPU usage is around 10%, NICs are
>> pushing around 3Mbit/s over each).
>>
>> It's not all that big to be honest; the total database size is around
>> 11GB and I'm currently racking my brains to find out how to improve the
>> backup times without adversely affecting our running instance.
> 
> 11GB in 2 hours? Let's see, that's ~1.5MB/sec. Something is horribly
> wrong there - you could do better than that with a USB 1 drive.
> 

LOL, now that you put it that way, I'll be sure to put some time in to
get to the bottom of the issue. Jeez, not even USB 2 speeds. Maybe I
should hook my thumb drive into the back of the box and send backups
there to prove a point :)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Gregory Stark
"Henka" <[EMAIL PROTECTED]> writes:

> Hello all,
>
> I'm using PG 8.2.3:

You should update to 8.2.4; it includes a security fix and several bug fixes.
However, AFAIK none of them looks like this.

> PANIC:  corrupted item pointer: offset = 3308, size = 28
> LOG:  autovacuum process (PID 18165) was terminated by signal 6

Huh, that's pretty strange.

My first thought is bad memory. It's always good to rule that out since it's
quite common and can cause a lot of hair-pulling. If you can schedule some
downtime, download memtest86 and run it overnight.

Other than that it might be interesting to know the values of some server
parameters: "fsync" and "full_page_writes". Have you ever had this machine
crash or had a power failure? And what kind of i/o controller is this?

Ideally it would be good to get a dump of this block; it looks like it's
probably a block of an index (unless you have a table with extremely narrow
rows?). But there doesn't seem to be enough information in this error to tell
which block it happened on.

If you manually "vacuum verbose" each table, does it cause a crash? If so, send
that along and at least we'll know which table or index has the corrupted
data. You probably don't want to do this during peak production time though...

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Accent insensitive search

2007-06-21 Thread Gregory Stark

"PFC" <[EMAIL PROTECTED]> writes:

>> Hi. I have a few databases created with UNICODE encoding, and I would like to
>> be able to search with accent insensitivity. There's something  in Oracle
>> (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do  this, but I found
>> nothing in PostgreSQL, just the 'to_ascii' function,  which AFAIK, doesn't
>> work with UNICODE.

Postgres supports localized collation orders but what it doesn't currently
support is having multiple collation orders within the same server.

So when you initialize the server with initdb it takes the setting of
LC_COLLATE (usually from LC_ALL) and stores that permanently. If you initdb
with LC_COLLATE set to a locale like en_GB.UTF-8 or something like that you
may find one that has the behaviour you want. I think they won't be entirely
accent insensitive but they'll consider accents only if the rest of the string
is identical.

You can test the sort order of a locale by writing a file with sample words
and sorting with something like:

LC_ALL=en_GB.UTF-8 sort /tmp/data

>   The easiest way is to create an extra column which will hold a copy of
> your text, with all accents removed. You can also convert it to lowercase  and
> remove apostrophes, punctuation etc. Said column is kept up to date  with a
> trigger.

That's another alternative which is useful if you need multiple collations in
your database. This gives you control over which collation is used when and
exactly what the rules are. The downside is that you have to reinvent the
collation rules which the localized collations already provide.

You don't necessarily have to keep a column in your table with the normalized
strings. You can normalize "on-the-fly" using an expression index as long as
your function always returns the same data given the same inputs (and is
therefore marked "immutable").

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Aggregates

2007-06-21 Thread Naz Gassiep
Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra column
with the number of messages they have posted and the date and time of
the last message? At the moment I am using a subquery to do this,
however it seems suboptimal. Is there a better way?
- Naz.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Albe Laurenz
Richard Huxton wrote:
>> In our environment it takes approx 2 hours to perform a PIT backup of
>> our live system:
>> 
>> [1] select pg_start_backup('labe;')
>> [2] cpio & compress database directory (exclude wals)
>> [3] select pg_stop_backup()
>> 
>> However, if we perform a plain dump (pg_dump/pg_dumpall) we 
>> can dump the whole lot in 15 minutes. For us this is more efficient.
> 
> It sounds like there's something strange with your setup if it's quicker
> for pg_dump to read your data than cp. Do you have *lots* of indexes, or
> perhaps a lot of dead rows?

That sounds like a good bet.

Did you ever do a VACUUM FULL?
What is the size of the online backup?
What is the size of the pg_dumpall?

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] 8.2.3 PANIC with "corrupted item pointer"

2007-06-21 Thread Henka
Hello all,

I'm using PG 8.2.3:

PostgreSQL 8.2.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.6

I happened to notice this error in the log when my application was refused
a db connection (quite unexpectedly):

PANIC:  corrupted item pointer: offset = 3308, size = 28
LOG:  autovacuum process (PID 18165) was terminated by signal 6
LOG:  terminating any other active server processes
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted at 2007-06-21 12:27:13 SAST
LOG:  checkpoint record is at D6/F00F418C
LOG:  redo record is at D6/F00C24B8; undo record is at 0/0; shutdown FALSE
LOG:  next transaction ID: 0/2090506603; next OID: 101300203
LOG:  next MultiXactId: 35676; next MultiXactOffset: 85365
LOG:  database system was not properly shut down; automatic recovery in
progress
LOG:  redo starts at D6/F00C24B8
FATAL:  the database system is starting up
LOG:  record with zero length at D6/F138DDA4
LOG:  redo done at D6/F138DD7C
LOG:  database system is ready

Just prior to the panic, an INSERT was executed three times and rejected
as expected because of a unique constraint index.

Database size is about 180GB and growing.

Any comments would be appreciated.

Regards
Henry


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Richard Huxton

Bruce McAlister wrote:

That's exactly what I think. There is something strange going on. At the
moment I think it is the disk I am writing the data to that is slow,
possibly due to the fact that it is mounted up as "forcedirectio", so as
not to interfere with the file system cache, which we want to contain
mainly pg datafiles, and the RAID controller has this particular logical
drive configured as write-through, so there is no buffering in-between.
The CPUs and network are not the problem here (2 x Dual Core Opterons
and Quad Gigabit Ethernet, total CPU usage is around 10%, NICs are
pushing around 3Mbit/s over each).

It's not all that big to be honest; the total database size is around
11GB and I'm currently racking my brains to find out how to improve the
backup times without adversely affecting our running instance.


11GB in 2 hours? Let's see, that's ~1.5MB/sec. Something is horribly 
wrong there - you could do better than that with a USB 1 drive.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Bruce McAlister
Richard Huxton wrote:
> Bruce McAlister wrote:
>> Hi All,
>>
>> Is it at all possible to "roll forward" a database with archive logs
>> when it has been recovered using a dump?
>>
>> Assuming I have the archive_command and archive_timeout parameters set
>> on our "live" system, then I follow these steps:
>>
>> [1] pg_dump -d database > /backup/database.dump,
>> [2] initdb new instance on recovery machine,
>> [3] psql -f ./database.dump,
>> [4] shutdown new recovered db,
>> [5] create recovery.conf,
>> [6] copy WAL's from time of backup till time of recovery to temp dir
>> [7] start postgresql
> 
> No. WALs track disk blocks not table-rows, so you need a file-level
> backup of the original installation.

Ahh okay, that makes a little more sense now. I thought they actually
contained the queries, which were replayed against the database being recovered.

> 
>> In my mind I think I will have some problems somewhere along the way,
>> however I don't know enough about the internals of PostgreSQL to
>> actually see if there are additional steps I need to follow.
>>
>> In our environment it takes approx 2 hours to perform a PIT backup of
>> our live system:
>>
>> [1] select pg_start_backup('labe;')
>> [2] cpio & compress database directory (exclude wals)
>> [3] select pg_stop_backup()
>>
>> However, if we perform a plain dump (pg_dump/pg_dumpall) we can dump the
>> whole lot in 15 minutes. For us this is more efficient.
> 
> It sounds like there's something strange with your setup if it's quicker
> for pg_dump to read your data than cp. Do you have *lots* of indexes, or
> perhaps a lot of dead rows? What's the bottleneck with cpio+compress -
> cpu/disk/network?

That's exactly what I think. There is something strange going on. At the
moment I think it is the disk I am writing the data to that is slow,
possibly due to the fact that it is mounted up as "forcedirectio", so as
not to interfere with the file system cache, which we want to contain
mainly pg datafiles, and the RAID controller has this particular logical
drive configured as write-through, so there is no buffering in-between.
The CPUs and network are not the problem here (2 x Dual Core Opterons
and Quad Gigabit Ethernet, total CPU usage is around 10%, NICs are
pushing around 3Mbit/s over each).

It's not all that big to be honest; the total database size is around
11GB and I'm currently racking my brains to find out how to improve the
backup times without adversely affecting our running instance. I just
recently tried to use UFS snapshots, but the backing store filled up
before I could complete a backup of the snapshot. I need to find a way
to improve the write speed of our destination disk. I have another
question in this pg group about autovacuum not running on one of our
database tables, which adds an average of around 2.1GB of bloat to the
database each day. I've now (today) scheduled a cron job every 10
minutes to work around this in the meantime. Hopefully that should
reduce the amount of data backed up by around 2GB once we get to the
bottom of that issue (autovacuum).

> 
>> The question is, how can we roll forward from our time of pg_dump, to
>> our most recent WAL (in case of failure - touch wood).
> 
> Can't be done I'm afraid.

Thanks, I'll have to stick with PIT backups and find a way to improve
the speed.

> 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] [PGSQL 8.2.x] INSERT+INSERT

2007-06-21 Thread Vincenzo Romano
Hi all.
I'd like to do the following:

insert into t1
  values (
'atextvalue',(
  insert into t2
values ( 'somethingelse' )
returning theserial
)
  )
;

that is, I first insert data into t2, getting back the newly created
serial value, then I insert this value into another table.
I get an error message:
ERROR:  syntax error at or near "into"
referring to the second inner "into".
Is there a way to do this?
The inner insert...returning should be the "expression" to be used in
the outer insert.
My objective is to create an SQL script to load some 20+ million
records, and avoiding function calls would save some time.

Thanks in advance.

-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Accent insensitive search

2007-06-21 Thread PFC


Hi. I have a few databases created with UNICODE encoding, and I would  
like to be able to search with accent insensitivity. There's something  
in Oracle (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do  
this, but I found nothing in PostgreSQL, just the 'to_ascii' function,  
which AFAIK, doesn't work with UNICODE.


	The easiest way is to create an extra column which will hold a copy of  
your text, with all accents removed. You can also convert it to lowercase  
and remove apostrophes, punctuation etc. Said column is kept up to date  
with a trigger.

Python is suitable for this (use unicodedata.normalize).
	Keeping a copy of the processed data will speed up search versus WHERE  
remove_accents( blah ) = 'text', even with a function index.
	Note that this function could be written in C and use a table on the  
first 64K unicode symbols for speedup.
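
For the trigger approach, a minimal sketch (table and column names are
made up, and it assumes a noaccents()-style function like the one
discussed elsewhere in this thread, plus plpgsql being installed):

CREATE FUNCTION docs_normalize() RETURNS trigger AS $$
BEGIN
    -- keep the search column in step with the original text
    NEW.body_plain := lower(noaccents(NEW.body));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER docs_normalize_trg
    BEFORE INSERT OR UPDATE ON docs
    FOR EACH ROW EXECUTE PROCEDURE docs_normalize();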


See attached file.

create_ft_functions.sql
Description: Binary data

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Richard Huxton

Bruce McAlister wrote:

Hi All,

Is it at all possible to "roll forward" a database with archive logs
when it has been recovered using a dump?

Assuming I have the archive_command and archive_timeout parameters set
on our "live" system, then I follow these steps:

[1] pg_dump -d database > /backup/database.dump,
[2] initdb new instance on recovery machine,
[3] psql -f ./database.dump,
[4] shutdown new recovered db,
[5] create recovery.conf,
[6] copy WAL's from time of backup till time of recovery to temp dir
[7] start postgresql


No. WALs track disk blocks not table-rows, so you need a file-level 
backup of the original installation.



In my mind I think I will have some problems somewhere along the way,
however I don't know enough about the internals of PostgreSQL to
actually see if there are additional steps I need to follow.

In our environment it takes approx 2 hours to perform a PIT backup of
our live system:

[1] select pg_start_backup('labe;')
[2] cpio & compress database directory (exclude wals)
[3] select pg_stop_backup()

However, if we perform a plain dump (pg_dump/pg_dumpall) we can dump the
whole lot in 15 minutes. For us this is more efficient.


It sounds like there's something strange with your setup if it's quicker 
for pg_dump to read your data than cp. Do you have *lots* of indexes, or 
perhaps a lot of dead rows? What's the bottleneck with cpio+compress - 
cpu/disk/network?



The question is, how can we roll forward from our time of pg_dump, to
our most recent WAL (in case of failure - touch wood).


Can't be done I'm afraid.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Accent insensitive search

2007-06-21 Thread Diego Manilla Suárez
Hi. I have a few databases created with UNICODE encoding, and I would 
like to be able to search with accent insensitivity. There's something 
in Oracle (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do 
this, but I found nothing in PostgreSQL, just the 'to_ascii' function, 
which AFAIK, doesn't work with UNICODE.


Thanks in advance.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Recovery/Restore and Roll Forward Question.

2007-06-21 Thread Bruce McAlister
Hi All,

Is it at all possible to "roll forward" a database with archive logs
when it has been recovered using a dump?

Assuming I have the archive_command and archive_timeout parameters set
on our "live" system, then I follow these steps:

[1] pg_dump -d database > /backup/database.dump,
[2] initdb new instance on recovery machine,
[3] psql -f ./database.dump,
[4] shutdown new recovered db,
[5] create recovery.conf (see the sketch below),
[6] copy WAL's from time of backup till time of recovery to temp dir
[7] start postgresql
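
For step [5], the recovery.conf I have in mind is just the one required
parameter, pointing at wherever the WALs from step [6] were copied
(the path below is made up):

restore_command = 'cp /backup/wal_temp/%f "%p"'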

In my mind I think I will have some problems somewhere along the way;
however, I don't know enough about the internals of PostgreSQL to
actually see if there are additional steps I need to follow.

In our environment it takes approx 2 hours to perform a PIT backup of
our live system:

[1] select pg_start_backup('labe;')
[2] cpio & compress database directory (exclude wals)
[3] select pg_stop_backup()

However, if we perform a plain dump (pg_dump/pg_dumpall) we can dump the
whole lot in 15 minutes. For us this is more efficient.

The question is, how can we roll forward from our time of pg_dump, to
our most recent WAL (in case of failure - touch wood).

Any comments/suggestions are most welcome. If anyone knows of any docs
or reference info about the approach we're trying to follow, could you
please let me know?

Thanks
Bruce

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-21 Thread Christan Josefsson

Ok.

Big thanks for the information.

You mentioned Bizgres. Do you have any more information in that direction,
or do you know whom to contact regarding Bizgres bitmap indexes? If there
is a bitmap index patch in Bizgres that can be applied to the latest
stable PostgreSQL source, then I have a solution until 8.4 (which,
according to your answers, is the assumed release for introducing on-disk
bitmap indexes).

Any guess when 8.4 could be production ready? A year or more?

Regards,
Christian


2007/6/21, Alexander Staubo <[EMAIL PROTECTED]>:


On 6/20/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Just so there is no confusion. These WILL NOT be in 8.3:
>
> http://developer.postgresql.org/index.php/Todo:PatchStatus

Apologies. I didn't know they had been put on hold.

Alexander.



Re: [GENERAL] [pgadmin-support] Problem editing tables (geom columns)

2007-06-21 Thread Dave Page
Pedro Doria Meunier wrote:
> (First of all sorry for cross-posting but I feel this is a matter that
> interests all recipients)
> Thread on pgadmin support:
> http://www.pgadmin.org/archives/pgadmin-support/2007-06/msg00046.php
> 
> Hello Dave,

Hi Pedro

> This behavior (trying to show the entire geometry field) in Fedora
> Core 6, pgAdmin3 1.6.2, doesn't happen...
> In that scenario the geom field is simply shown blank.

There have been no changes in pgAdmin between 1.6.2 and 1.6.3 that might
account for this behavioural change afaict. I imagine it's probably some
difference in the platform's grid control between the two Fedora versions.

> Nevertheless, if I recall it correctly, proj4, geos, postgis versions,
> in the above scenario, were older than the ones I'm using...
> So I wonder... could it be that pgadmin's code changed for showing
> large fields?
> Could one of proj4, geos, postgis code changed that really interferes
> with pgadmin3?

Unlikely. The data is essentially just text, and pgAdmin treats it as such.

> IMHO, regardless of the scenario involved, the output for large geom
> fields should be suppressed in table edition... (as seen in the past)
> The current behavior hogs way too much processor time.

Well, the suppression you saw wasn't us - I would guess it was simply
that the grid control on the older Fedora just ignores the longer data.

The problem we have trying to suppress it ourselves is that we'd either
need to do it on a per-row basis (because we only actually look at the
data when a row is rendered on screen), or pre-scan all the data in
every column before displaying it and make a decision on whether or not
we should suppress the entire column.

The problem with the first idea is that it would be very confusing for
the user - we might have some rows that we suppress and thus aren't
editable, and others that we don't and therefore can be edited. The
second method takes us back towards the <= v1.4 behaviour where we had
query runtime + display time in the query tool. It would significantly
slow things down :-(

What I'm currently thinking is just that we add a tab to the Filter/Sort
dialog to allow non-primary key columns to be switched on or off, so you
can disable the larger columns before the query is even run. We can make
that more convenient by saving those options (as well as the sort/filter
options) on a per-table basis.

I'd like some more thoughts from some of the other pgAdmin devs before
we settle on that though.

Regards, Dave

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] AutoVacuum Behaviour Question

2007-06-21 Thread Bruce McAlister
Hi All,

I have enabled autovacuum in our PostgreSQL cluster of databases. What I
have noticed is that the autovacuum process keeps selecting the same
database to perform autovacuums on and does not select any of the others
within the cluster. Is this normal behaviour or do I need to do
something more elaborate with my settings?

Our main concern is the "blueface-service" database. The sipaccounts
table has some high traffic, mainly updates. At the end of an average
day's run without autovacuum, this table, which is normally around 20MB,
gets bloated to around 2.2GB (now, imagine a busy day), at which point
our nightly "cluster" cleans it up. However, we would like the
autovacuum to be more stringent with this particular table and keep the
bloat to a minimum.
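
From what I can see in the 8.2 docs, per-table thresholds can be set by
inserting a row into the pg_autovacuum system catalog; something like
the following is what I'm considering for that table (the values are
guesses, and the column list should be double-checked against the
documentation before use):

INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit,
     freeze_min_age, freeze_max_age)
VALUES ('sipaccounts'::regclass, true,
        200, 0.05,
        100, 0.02,
        -1, -1,
        -1, -1);  -- -1 means fall back to the system-wide setting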

Our setup is as follows:

OS version: Solaris 10 Update 3
DB version: PostgreSQL 8.2.4

I have checked the pg_catalog.pg_stat_all_tables view in each database
and the autovacuum/autoanalyze field is null for all our databases
except the blueface-crm one.

The autovacuum does appear to be running, but only selecting one
database each time.

--
Log Excerpt
--
DEBUG:  autovacuum: processing database "blueface-crm"
DEBUG:  autovacuum: processing database "blueface-crm"
DEBUG:  autovacuum: processing database "blueface-crm"
DEBUG:  autovacuum: processing database "blueface-crm"
DEBUG:  autovacuum: processing database "blueface-crm"
DEBUG:  autovacuum: processing database "blueface-crm"


Auto Vacuum Settings:

autovacuum = on
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 500
autovacuum_analyze_threshold = 250
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.1
autovacuum_freeze_max_age = 2
autovacuum_vacuum_cost_delay = -1
autovacuum_vacuum_cost_limit = -1

stats_command_string = on
update_process_title = on
stats_start_collector = on
stats_block_level = on
stats_row_level = on
stats_reset_on_server_start = off

vacuum_cost_delay = 0
vacuum_cost_limit = 200

log_min_messages = debug1

If you require any additional info I'd be happy to pass it along.

Thanks
Bruce

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] "Failed to create process: 2!" during installation in windows 2000

2007-06-21 Thread filipe paiva

Hi there!

First of all, sorry if this is not the correct place to send my question,
but I didn't find any installation mailing list. I'd appreciate it if you
could tell me which is the correct mailing list.


My question is: can PostgreSQL 8.2 be installed on Windows 2000? The
installation file claims it can, but when I tried, the following error
appeared: "Failed to create process: 2!"


What's going on with that? Can you explain it to me?

Cheers,

Filipe




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq