Re: [GENERAL] Windows file path for copy

2005-08-03 Thread Magnus Hagander
> Windows XP SP2 with Postgresql 8.0.3
> Two commands; one fails, the other succeeds:
> 
> Fails :
> 
> select import_sharedata('C:\\Documents and 
> Settings\\Richard\\Desktop\\EzyChart-20050721');
> 
> Succeeds:
> 
> select import_sharedata('C:\\EzyChart-20050721');
> 
> is it the spaces in the path that postgres does not like? If 
> so how do I format the enquiry pls?
> 
> Failure message indicates that It can not access the file. 
> However the file is downloaded to my desktop and thus has been 
> created with my permissions.

I assume import_sharedata() is a server-side function. In that case, the
*service account* needs permissions, not you. And don't grant it access to
your desktop - that's generally a bad idea :-) Use a shared dir somewhere
that both you and the service account have permissions on.

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Need some help creating a database sandbox...

2005-08-03 Thread Magnus Hagander
> I'm trying to setup a "safe" testing database environment for 
> some unit testing of a web application.  I would like to have 
> the unit tests restore the database to a known state before 
> every test.  The simplest way I thought would be to have the 
> testing tools drop/create the testing database on every test 
> case, and then populate the database from a specified file.  
> However I don't want to give the test user superuser 
> privileges.  Thus I don't think I can restrict it to only 
> drop/create a single named DB.

No, AFAIK there is no way to do that.


> My next thought was to code up a "DELETE ALL" script that 
> would delete all entities in the database.  However it seems 
> easy to miss something, and it's not robust against schema 
> changes, even though it can be locked down to the test_db_owner.

If you're going to drop *everything* in the db, you can drive something
off the system tables or the information schema. For example:

SELECT 'DROP TABLE ' || table_schema || '.' || table_name || ';'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');

And then feed the generated script back through a different psql prompt.

Similar scripts apply to other object types, of course (views, functions,
etc.). It might be easier to drive it off the system tables directly
instead of the information schema, if you can live with possible backend
version dependencies.
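As a sketch, the two-pass approach above could look like this from the shell. The database name "testdb" and role "test_db_owner" are assumptions (the poster didn't name them), and the psql calls are skipped when psql isn't installed:

```shell
# Build the generator query once. psql's -t (tuples-only) flag strips
# headers and row-count footers, so the output is a clean SQL script.
GEN_SQL="SELECT 'DROP TABLE ' || table_schema || '.' || table_name || ' CASCADE;'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');"

if command -v psql >/dev/null 2>&1; then
    # Pass 1: generate the DROP script; pass 2: execute it.
    psql -t -U test_db_owner -d testdb -c "$GEN_SQL" > drop_all.sql
    psql -U test_db_owner -d testdb -f drop_all.sql
fi
```

CASCADE is added here so dependent views don't block the drops; leave it off if you'd rather be warned about dependencies.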


> A third thought would be to run a second cluster containing 
> only the test database(s).  Then the users wouldn't be 
> shared, so even if it someone connected to the wrong DB it 
> would lack any permissions.  I don't have much experience 
> running multiple clusters, however.  So I don't know if thats 
> opening up another whole can of worms.

Just make them run in completely different directories, and use
different accounts to start each cluster (each only having permissions on
its own data directory, of course). It's going to mean two sets of
shared buffer caches etc., so you may need to trim the memory values in
your postgresql.conf, and of course run them on different ports, but it
should work just fine.

//Magnus

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Questions about anonymous procedure/function.

2005-08-03 Thread Richard Huxton

Ying Lu wrote:
> Greetings,
>
> I have a question about whether I am able to create an *anonymous*
> procedure/function under PostgreSQL 8. Something like:
>
> BEGIN
>   ... ...
>   UPDATE ...
>   IF ... THEN
>       ROLLBACK
>   END IF;
>   ... ...
> END;

No.

> I'd like to do a series of operations. If any one of the operations
> fails, all previous operations will roll back.

Well, that's just a transaction.

> Actually, I do not want to create named functions. I just use the same
> structure for different tables with minor changes.

Create your function, call it, drop it.
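A minimal sketch of that create/call/drop pattern, written as a SQL script; the function body, table, and column names here are placeholders, not from the original post:

```shell
# Write a throwaway script that creates the function, runs it once, and
# drops it again, so no named function lingers in the database.
cat > run_once.sql <<'EOF'
CREATE FUNCTION tmp_fixup() RETURNS boolean AS '
BEGIN
    UPDATE mytable SET flag = true WHERE flag IS NULL;
    RETURN true;
END;
' LANGUAGE plpgsql;

SELECT tmp_fixup();
DROP FUNCTION tmp_fixup();
EOF
# then: psql -d mydb -f run_once.sql
```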

> Another question is about how to create a function/procedure that does
> not need a return value (void as the return value). It seems that
> PostgreSQL forces it to return some value?

Just return a boolean.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] views to behave differently according to user and schema

2005-08-03 Thread Samuel Thoraval

Hi,

I'd like views to behave differently according to the user and schema.

For instance, having 3 schemas corresponding to 3 users :

-- AS USER nsp1 :
CREATE TABLE nsp1.test (id integer);
CREATE VIEW nsp2.test AS SELECT * FROM nsp1.test WHERE id <= 10;
CREATE VIEW nsp3.test AS SELECT * FROM nsp1.test WHERE id <= 50;
CREATE or REPLACE VIEW public.vtest AS SELECT * FROM test WHERE id <=5;

Here, test in VIEW vtest is actually bound to the first 
relation/view found in the search path (nsp1, public):

\d public.vtest;
View "public.vtest"
Column |  Type   | Modifiers
+-+---
id | integer |
View definition:
SELECT test.id
  FROM nsp1.test
 WHERE test.id <= 10;

Is it possible to change this behavior so that the view will dynamically 
use the search_path variable (when the schema was not specified explicitly)?
In that case, when user nsp2 logs in, the view vtest would use 
VIEW nsp2.test; when user nsp3 logs in it would use VIEW nsp3.test; and for 
user nsp1, TABLE nsp1.test ...


Cheers,

--
Samuel Thoraval
LIBROPHYT, Bioinformatique
Centre de Cadarache


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] Hello

2005-08-03 Thread Vahe Ghorghorian

Hello Sir,

I am using PostgreSQL. I want an OLE DB provider for PostgreSQL in order to 
connect through .NET to a PostgreSQL database. I installed pgoledb.dll, 
registered it, and in the connection string I am using 
"Provider=PostgreSQL.1 . . .   , but the following error occurs.


Error: System.Data.OleDb.OleDbException: FATAL:  invalid command-line 
arguments for server process HINT:  Try "postgres --help" for more 
information.




I need help to fix this problem. Thank you.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] views to behave differently according to user and schema

2005-08-03 Thread Richard Huxton

Samuel Thoraval wrote:
> Is it possible to change this behavior so that the view will dynamically
> use the search_path variable (when the schema was not specified
> explicitly)? In that case, when user nsp2 logs in, the view vtest would
> use VIEW nsp2.test; when user nsp3 logs in it would use VIEW nsp3.test;
> and for user nsp1, TABLE nsp1.test ...


I don't think so. There's no guarantee that nsp1 and nsp2 have the same 
definition, which means the view could break depending on search_path.


You could create a separate view in each schema of course.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Hello

2005-08-03 Thread Richard Huxton

Vahe Ghorghorian wrote:
> Hello Sir,
>
> I am using PostgreSQL. I want an OLE DB provider for PostgreSQL in order
> to connect through .NET to a PostgreSQL database. I installed
> pgoledb.dll, registered it, and in the connection string I am using
> "Provider=PostgreSQL.1 . . .   , but the following error occurs.
>
> Error: System.Data.OleDb.OleDbException: FATAL:  invalid command-line
> arguments for server process HINT:  Try "postgres --help" for more
> information.


It seems to be saying the problem is with the server, not the oledb 
provider. Can you access PostgreSQL from psql or pgadmin? Do the server 
logs show anything interesting?


--
  Richard Huxton
  Archonet Ltd


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Note on scalar subquery syntax

2005-08-03 Thread Kevin Murphy
I thought this might be helpful in the future to other duffers such as 
myself.


The following is my big contribution to the documentation of the use of 
scalar subqueries ;-):


You have to wrap a scalar subquery in its own parentheses even where you 
might think it to be unnecessary, such as when the subquery is the sole 
argument to a function.


As an example, I wrote a function to explode, or unpack, the elements of 
an array onto separate rows (its return type is SETOF whatever), but it 
took me a while to figure out how to use it effectively in queries.


You have to use it like this:

RIGHT--> select * from array_explode((select array_col from table1 where 
col2 = 'something'));


Note the "extra" set of parentheses.  These are crucial: the syntax is 
invalid without them, as in:


WRONG--> select * from array_explode(select array_col from table1 where 
col2 = 'something');


And no, as mentioned in many archived list messages, you can NOT do the 
following, which is what a lot of people (including me) seem to try first:


WRONG--> select array_explode(array_col) from table1 where col2 = 
'something';


(The previous command results in the error message: "set-valued function 
called in context that cannot accept a set").


-Kevin Murphy


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Hello

2005-08-03 Thread Jaime Casanova
On 8/3/05, Vahe Ghorghorian <[EMAIL PROTECTED]> wrote:
> Hello Sir,
> 
> I am using PostGreSql ,I want an oledb provider for the postgre in order to
> connect through.net to a postgre Database, I installed the pgoledb.dll and I
> registered it and in the connection string I am using the
> "Provider=PostgreSQL.1 . . .   ,but the following error is occurring.
> 
> Error: System.Data.OleDb.OleDbException: FATAL:  invalid command-line
> arguments for server process HINT:  Try "postgres --help" for more
> information.
> 
> 
> 
> I need help how I can fix this problem. Thank you
> 
> 

Maybe what you want is npgsql:
http://gborg.postgresql.org/project/npgsql/projdisplay.php


-- 
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Cost problem

2005-08-03 Thread Victor

Hello.

I use psql (PostgreSQL) 7.4.5.

I have a cost problem.

A simple select lasts too long.
I have a table with approx. 900 rows.
All rows are deleted and reinserted once a minute.

EXPLAIN SELECT * FROM logati;
   QUERY PLAN
---
Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)


EXPLAIN ANALYZE SELECT * FROM logati;
 QUERY PLAN
---
Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340) 
(actual time=35927.945..35944.272 rows=842 loops=1)

Total runtime: 35945.840 ms
(2 rows)


The problem is that EXPLAIN reports that the table has 
1198722 rows.

But the table has only 836 rows, and the select takes a long time.

If I recreate the table the number of rows is OK and the select is quick 
for some time, but it becomes slower over time.


How can I fix this?

PS: The table has an index but no primary key.

Thank you

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Cost problem

2005-08-03 Thread Jaime Casanova
On 8/2/05, Victor <[EMAIL PROTECTED]> wrote:
> Hello.
> 
> I use psql (PostgreSQL) 7.4.5.
> 
> I have a cost problem.
> 
> A simple select lasts too long.
> I have a table with approx. 900 rows.
> All rows are deleted and reinserted once a minute.
> 
> EXPLAIN SELECT * FROM logati;
>QUERY PLAN
> ---
>  Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)
> 
> 
> EXPLAIN ANALYZE SELECT * FROM logati;
>  QUERY PLAN
> ---
>  Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340)
> (actual time=35927.945..35944.272 rows=842 loops=1)
>  Total runtime: 35945.840 ms
> (2 rows)
> 
> 
> The problem is that EXPLAIN reports that the table has
> 1198722 rows.
> But the table has only 836 rows, and the select takes a long time.
> 
> If I recreate the table the number of rows is OK and the select is quick
> for some time, but it becomes slower over time.
> 
> How can I fix this?
> 
> PS: The table has an index but no primary key.
> 
> Thank you
> 
> 

Execute ANALYZE on the table from time to time... and, better yet,
VACUUM it as well.


-- 
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Cursor Issue??

2005-08-03 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-08-02 10:01:34 -0400:
> I made it happen in MicrosoftSQL using the first code below.  The only 
> difference is I had to create variables.  Which I'm having a hard time 
> trying to replicate it in psql.
> 
> __Microsoft Code___
> USE test
> GO
> DECLARE @qty INT, @Length varchar(20), @Width varchar(40)
> DECLARE cursor1 SCROLL CURSOR FOR
> SELECT * from Parts
> OPEN cursor1
>  FETCH FIRST FROM cursor1
>  INTO @qty, @Length, @Width
>  INSERT INTO PartsTemp (qty, Length, Width)
>   VALUES (@qty, @Length, @Width)
> CLOSE cursor1
> DEALLOCATE cursor1
> GO

The code above puts a single, arbitrarily chosen tuple from Parts into
PartsTemp. If that's all you need, you can do it with:

INSERT INTO PartsTemp (SELECT * FROM Parts LIMIT 1);

> __psql Code___
> (These variable declarations don't work)
> DECLARE c_qty INT;
> DECLARE c_Length FLOAT;
> DECLARE c_Width FLOAT;
> 
> BEGIN;
> DECLARE cursor1 CURSOR FOR SELECT * FROM Parts;
> FETCH FIRST FROM cursor1 INTO c_qty, c_Length, c_Width;
> INSERT INTO partstemp VALUES (c_qty, c_Length, c_Width);
> CLOSE cursor1;
> COMMIT;
> 
> Got any ideas using variable to transfer singular rows?

If you need to do more (you aren't telling much), and want/need to
use cursors, you'll have to resort to using PL/pgSQL. This hack
would do it:

CREATE TABLE t1 (t1i INT, t1c CHAR(1));
CREATE TABLE t2 (t2i INT, t2c CHAR(1));

INSERT INTO t1 VALUES (1, 'a');
INSERT INTO t1 VALUES (2, 'b');
INSERT INTO t1 VALUES (3, 'c');

CREATE FUNCTION do_it()
RETURNS BOOLEAN
VOLATILE
LANGUAGE plpgsql
AS '
  DECLARE
_ti INTEGER;
_tc CHAR(1);
_c1 CURSOR FOR SELECT t1i, t1c FROM t1;
  BEGIN
OPEN _c1;
FETCH _c1 INTO _ti, _tc;
INSERT INTO t2 VALUES (_ti, _tc);
CLOSE _c1;
RETURN TRUE;
  END;
';

SELECT do_it();
DROP FUNCTION do_it();


But watch out, because PL/pgSQL doesn't provide a way to create
SCROLLable cursors, FETCH more than one tuple at a time, or specify a
FETCH orientation.

I urge you to read about functions and PL/pgSQL in the manual:

http://www.postgresql.org/docs/current/static/server-programming.html
http://www.postgresql.org/docs/current/static/plpgsql.html
http://www.postgresql.org/docs/current/static/sql-createfunction.html

and note that SQL DECLARE is a different beast from PL/pgSQL DECLARE,
etc for other statements.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Failure to use indexes (fwd)

2005-08-03 Thread Dr NoName
post your table definitions. I suspect you are
indexing the parent table but not the children.

btw, we tried using inherited tables in our
application and quickly found out that they are more
trouble than they are worth (at least the way they are
implemented in postgresql). There are other, more
portable ways of mapping a class hierarchy to
table(s). A few techniques are described in Fowler's
Patterns of Enterprise Application Architecture.

hope this helps,

Eugene


--- Edmund Dengler <[EMAIL PROTECTED]> wrote:

> Greetings!
> 
> I have already increased the stats from 10 to 100.
> In addition, if I
> specify individual tables, then the indexes are
> used. However, when I go
> through the parent table, then indexes are not used. I
> will try and expand
> the statistics, but suspect it is not the root cause
> of the problem.
> 
> Regards!
> Ed
> 
> 
> On Tue, 2 Aug 2005, Scott Marlowe wrote:
> 
> > On Tue, 2005-08-02 at 16:06, Dr NoName wrote:
> > > The solution to my problem was to increase
> statistics
> > > value and do another analyze. You can also
> change
> > > default_statistics_target parameter in
> > > postgresql.conf. Don't know if that's related to
> the
> > > problem you're seeing, but it's worth a try.
> >
> > Cool postgresql trick:
> >
> > alter database test set
> default_statistics_target=200;
> >
> > You can change the default for a databases's new
> tables too.
> >
> 



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cost problem

2005-08-03 Thread Ragnar Hafstað
On Tue, 2005-08-02 at 17:32 +0300, Victor wrote:
> Hello.
> 
> I use psql (PostgreSQL) 7.4.5.
> 
> I have a cost problem.
> 
> A simple select lasts too long.
> I have a table with aprox 900 rows.
> All rows are deleted and reinserted once a minute.
> 

> EXPLAIN ANALYZE SELECT * FROM logati;
>   QUERY PLAN
> ---
>  Seq Scan on logati  (cost=0.00..100741.22 rows=1198722 width=340) 
> (actual time=35927.945..35944.272 rows=842 loops=1)
>  Total runtime: 35945.840 ms

The rows are not actually removed when the DELETE happens.
The next VACUUM will put them into the Free Space Map
if they are older than the oldest running transaction,
so that their space can be reused.

This means that you must VACUUM this table frequently
to keep it from bloating. If the data distribution keeps
changing, you should also ANALYZE frequently (but not
when the table is empty), but that is another matter.

To fix an already severely bloated table, you might
need to do a VACUUM FULL on it once, but after that
frequent-enough regular (non-FULL) VACUUMs should do.

Depending on the nature of your processes, you might
want to use TRUNCATE to empty your table.
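As a sketch, the once-a-minute refresh could be written with TRUNCATE instead of DELETE, so no dead tuples accumulate (TRUNCATE is transaction-safe as of 7.4). The table name is from the post; the data file path and database name are assumptions:

```shell
# Refresh script: empty the table without generating dead tuples, then
# reload it, all inside one transaction.
cat > refresh_logati.sql <<'EOF'
BEGIN;
TRUNCATE TABLE logati;
COPY logati FROM '/tmp/logati.dat';
COMMIT;
EOF
# then, e.g. from cron once a minute:
#   psql -d mydb -f refresh_logati.sql
```

Note that TRUNCATE takes an exclusive lock on the table, so concurrent readers will block briefly during each refresh.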

gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Failure to use indexes (fwd)

2005-08-03 Thread Edmund Dengler
Greetings!

Table definitions were included in the original post. I can repost (or
send directly) if desired.

I am using inherited tables to implement database partitioning based on the day.
Postgresql 8.0.1 (and previous) has a number of issues when it comes to
very large tables. Currently we have anywhere from 4,000,000 to 10,000,000
rows per day to deal with. With a  running, we can't seem to
cancel it until it finishes (which takes a bit of time when you have over
3 months of data). Insert/search performance also degrades with one large
table (this gets into the issues of dealing with large volumes of time
series data - really wish Postgresql would have the concept of a queue
table where rows are always inserted at the end, and deletes only happen
at the beginning, with block allocation). By using inherited tables, we can
truncate a day's worth of data very quickly, and only vacuum changing
tables.

Hopefully, the use of constraints in the query optimizer will make it into
8.1, so it will help some of our queries. I could hand optimize queries,
but then I am essentially implementing an optimizer in our application
code, which is definitely the wrong place to put it (also, if I was to go
the full way, then I might switch to a database that supports time series
data better, but would lose the ad-hoc query abilities of SQL).

Indexes are on all the children (as per the post). In addition, when I use
the child tables directly, the indexes are used in the queries (which is one
of the reasons why I suspect a bug related to pushing optimization
information through inheritance). Note that I also posted a followup that
showed how a 1-row set would not use indexes when going through the parent,
whereas a simple = would.

Regards!
Ed


On Wed, 3 Aug 2005, Dr NoName wrote:

> post your table definitions. I suspect you are
> indexing the parent table but not the children.
>
> btw, we tried using inherited tables in our
> application and quickly found out that they are more
> trouble than they are worth (at least the way they are
> implemented in postgresql). There are other, more
> portable ways of mapping a class hierarchy to
> table(s). A few techniques are described in Fowler's
> Patterns of Enterprise Application Architecture.
>
> hope this helps,
>
> Eugene
>
>
> --- Edmund Dengler <[EMAIL PROTECTED]> wrote:
>
> > Greetings!
> >
> > I have already increased the stats from 10 to 100.
> > In addition, if I
> > specify individual tables, then the indexes are
> > used. However, when I go
> > through the , then indexes are not used. I
> > will try and expand
> > the statistics, but suspect it is not the root cause
> > of the problem.
> >
> > Regards!
> > Ed
> >
> >
> > On Tue, 2 Aug 2005, Scott Marlowe wrote:
> >
> > > On Tue, 2005-08-02 at 16:06, Dr NoName wrote:
> > > > The solution to my problem was to increase
> > statistics
> > > > value and do another analyze. You can also
> > change
> > > > default_statistics_target parameter in
> > > > postgresql.conf. Don't know if that's related to
> > the
> > > > problem you're seeing, but it's worth a try.
> > >
> > > Cool postgresql trick:
> > >
> > > alter database test set
> > default_statistics_target=200;
> > >
> > > You can change the default for a databases's new
> > tables too.
> > >
> >
>
>
>
>

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Windows file path for copy

2005-08-03 Thread Richard Sydney-Smith




Thanks. I can see the logic, but many users are going to presume that they
can load from "their" desktop. For now I can work around the issue,
but I will have to place instructions in big letters unless I want to
answer this ad nauseam.


Magnus Hagander wrote:
>> Windows XP SP2 with Postgresql 8.0.3
>> Two commands; one fails, the other succeeds:
>>
>> Fails :
>>
>> select import_sharedata('C:\\Documents and
>> Settings\\Richard\\Desktop\\EzyChart-20050721');
>>
>> Succeeds:
>>
>> select import_sharedata('C:\\EzyChart-20050721');
>>
>> is it the spaces in the path that postgres does not like? If
>> so how do I format the enquiry pls?
>>
>> Failure message indicates that It can not access the file.
>> However the file is downloaded to my desktop and thus has been
>> created with my permissions.
>
> I assume import_sharedata() is a server-side function. In that case, the
> *service account* needs permissions, not you. And don't grant it access
> to your desktop - that's generally a bad idea :-) Use a shared dir
> somewhere that both you and the service account have permissions on.
>
> //Magnus

[GENERAL] postgresql.conf value need advice

2005-08-03 Thread marcelo Cortez
folks

  what is the preferable value for
stats_reset_on_server_start?

what is the default value?

 best regards 
  MDC


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] ssl problem with postgres 8.0

2005-08-03 Thread Luca Stancapiano
Hello. I use PostgreSQL 8.0. I've created the server.key and
server.crt in this manner:

openssl req -new -nodes -keyout server.key -out server.csr

openssl req -x509 -key /home/data/server.key -in /home/data/server.csr -out
server.crt

and I put these in my data directory.

I launch postgres in this manner:

postmaster -il -p '$PGPORT' -D '$PGDATA_LOCAL' ${PGOPTS} &

and postgres starts fine. When I try to connect locally like this:

psql -h localhost

I have this error:

psql: could not open certificate file "/root/.postgresql/postgresql.crt":
No such file or directory

my pg_hba.conf is:

local     all    all                                ident sameuser
hostssl   all    all    0.0.0.0    0.0.0.0          md5


Can anyone tell me where I went wrong?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Note on scalar subquery syntax

2005-08-03 Thread Peter Fein
Kevin Murphy wrote:
> I thought this might be helpful in the future to other duffers such as
> myself.
> 
> The following is my big contribution to the documentation of the use of
> scalar subqueries ;-):
> 
> You have to wrap a scalar subquery in its own parentheses even where you
> might think it to be unnecessary, such as when the subquery is the sole
> argument to a function.
> 
> As an example, I wrote a function to explode, or unpack, the elements of
> an array onto separate rows (its return type is SETOF whatever), but it
> took me a while to figure out how to use it effectively in queries.

Mind posting it?  I know I've had need of such I thing & IIRC others
have asked as well...

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql.conf value need advice

2005-08-03 Thread Ragnar Hafstað
On Wed, 2005-08-03 at 13:30 -0300, marcelo Cortez wrote:
> folks
> 
>   what is preferible value for
> stats_reset_on_server_start ?

depends on whether you want stats to be accumulated for longer periods
than between restarts. I imagine that 'on' is what most people need.

In any case, you can reset the stats with the function pg_stat_reset().

> what is default value?
The default is 'on'.

see  
http://www.postgresql.org/docs/8.0/static/runtime-config.html#RUNTIME-CONFIG-STATISTICS

gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Note on scalar subquery syntax

2005-08-03 Thread Martijn van Oosterhout
On Wed, Aug 03, 2005 at 09:40:26AM -0400, Kevin Murphy wrote:
> You have to wrap a scalar subquery in its own parentheses even where you 
> might think it to be unnecessary, such as when the subquery is the sole 
> argument to a function.

At first guess, I imagine it is because the syntax becomes ambiguous,
especially if you have multiple arguments to the function.

Say you have a function "func" and your query was: SELECT * FROM x ORDER BY y

Then this isn't parsable, obviously:

SELECT func( SELECT * FROM x ORDER BY y, 1, 1 )

since you don't know where the ORDER BY list ends and the function's
argument list continues. Adding parentheses at the appropriate point
removes the ambiguity.

Hope this helps,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




[GENERAL] pg_dump - dump specific functions and other items?

2005-08-03 Thread CSN
Is it possible to dump specific function definitions
using pg_dump? Any other items that can be
specifically dumped, besides just tables?

Thanks,
CSN




Start your day with Yahoo! - make it your home page 
http://www.yahoo.com/r/hs 
 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Note on scalar subquery syntax

2005-08-03 Thread Kevin Murphy

Martijn van Oosterhout wrote:
> On Wed, Aug 03, 2005 at 09:40:26AM -0400, Kevin Murphy wrote:
>> You have to wrap a scalar subquery in its own parentheses even where you
>> might think it to be unnecessary, such as when the subquery is the sole
>> argument to a function.
>
> At first guess, I imagine it is because the syntax becomes ambiguous,
> especially if you have multiple arguments to the function.

Thanks, Martijn.  Yes, I don't consider it to be entirely unreasonable; 
it was just surprising and strange-looking to me.


-Kevin


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Note on scalar subquery syntax

2005-08-03 Thread Kevin Murphy

Peter Fein wrote:
> Kevin Murphy wrote:
>> As an example, I wrote a function to explode, or unpack, the elements of
>> an array onto separate rows (its return type is SETOF whatever), but it
>> took me a while to figure out how to use it effectively in queries.
>
> Mind posting it?  I know I've had need of such a thing & IIRC others
> have asked as well...

I'm no expert, but per Peter's request, here is a generic 
array-unpacking function that works in PostgreSQL 8.0.  It can't be 
invoked if the argument doesn't have an explicit type, i.e. you have 
to use it as: "select * from 
array_explode('{apple,banana,cherry}'::text[]);" or "select * 
from array_explode('{1,2,3}'::integer[]);".


CREATE OR REPLACE FUNCTION array_explode(an_array anyarray) RETURNS 
SETOF anyelement AS $$
DECLARE
    idx integer;
BEGIN
    FOR idx IN 1 .. ARRAY_UPPER(an_array, 1) LOOP
        RETURN NEXT an_array[idx];
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

I would imagine that a type-specific version would be faster.  For that, 
replace "anyarray" with, e.g. "integer[]", and "anyelement" with, e.g. 
"integer".


-Kevin Murphy


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Note on scalar subquery syntax

2005-08-03 Thread Tom Lane
Kevin Murphy <[EMAIL PROTECTED]> writes:
> I'm no expert, but per Peter's request, here is a generic 
> array-unpacking function that works in PostgreSQL 8.0.
> [snip]

> I would imagine that a type-specific version would be faster.

No, actually it'd be exactly the same.  What happens under the hood with
a plpgsql "anyarray" function is that Postgres instantiates a copy for
each specific datatype you call it with during the life of your session.
So there's no real point in doing the same thing manually.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] feeding big script to psql

2005-08-03 Thread Peter Wilson

Tom Lane wrote:

Peter Wilson <[EMAIL PROTECTED]> writes:

Tom Lane wrote:

Oh?  Could you provide a test case for this?  I can certainly believe
that the planner might choose a bad plan if it has no statistics, but
it shouldn't take a long time to do it.



On investigation the problems occurs on 'EXPLAIN ANALYZE' - which is
what pgadminIII does when you press the explain button.


Ah.  Well, this is an ideal example of why you need statistics ---
without 'em, the planner is more or less flying blind about the number
of matching rows.  The original plan had


   ->  Index Scan using ca_pk on contact_att subb  (cost=0.00..6.01 
rows=1 width=8) (actual time=0.207..234.423 rows=3 loops=2791)
 Index Cond: ((instance = '0'::bpchar) AND 
((client_id)::text = 'gadget'::text))
 Filter: ((contact_id)::numeric = 3854.00)


while your "after a vacuum" (I suppose really a vacuum analyze) plan has


 ->  Index Scan using ca_pk on contact_att subb  
(cost=0.00..1433.95 rows=78 width=8) (actual time=0.367..259.617 rows=3 loops=1)
   Index Cond: ((instance = '0'::bpchar) AND 
((client_id)::text = 'gadget'::text))
   Filter: ((contact_id)::numeric = 3854.00)


This is the identical scan plan ... but now that the planner realizes
it's going to be pretty expensive, it arranges the join in a way that
requires only one scan of contact_att and not 2791 of 'em.

The key point here is that the index condition on instance/client_id
is not selective --- it'll pull out a lot of rows.  All but 3 of 'em are
then discarded by the contact_id condition, but the damage in terms
of runtime was already done.  With stats, the planner can realize this
--- without stats, it has no chance.

Looking at your table definition, I suppose you were expecting the
contact_id condition to be used with the index, but since contact_id is
bigint, comparing it to a numeric-type constant is not considered indexable.
You want to lose the ".00" in the query.
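
A sketch of the difference (hypothetical EXPLAIN calls; actual plans depend on your schema and statistics):

```sql
-- contact_id is bigint; a numeric literal forces the comparison to be
-- done as numeric, which is not indexable, so it ends up as a Filter:
EXPLAIN SELECT * FROM contact_att WHERE contact_id = 3854.00;

-- An integer literal keeps the comparison in an indexable form,
-- so the index on contact_id can be used:
EXPLAIN SELECT * FROM contact_att WHERE contact_id = 3854;
```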

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Thanks for that Tom - especially the bit about removing the .00 from
the numbers. I'm pretty new to some of this database stuff - even newer at
trying to optimise queries and 'think like the planner'. It never occurred
to me that the number format would have that effect.

Removing the zeroes actually knocked a few ms off the execution times in
real-life queries :-)

Just out of interest - is there an opportunity for the planner to realise
the sub-select is basically invariant for the outer query and execute it
once, regardless of stats? Seems like the loop-invariant optimisation in a
'C' compiler. If you have to do something once v. doing it 2791 times then
I'd plump for the once!

Thanks again Tom, much appreciated for that little nugget
Pete
--
Peter Wilson. YellowHawk Ltd, http://www.yellowhawk.co.uk

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ssl problem with postgres 8.0

2005-08-03 Thread Tom Lane
"Luca Stancapiano" <[EMAIL PROTECTED]> writes:
> I have this error:
> psql: could not open certificate file "/root/.postgresql/postgresql.crt":
> No such file or directory

You need a client certificate (if you don't want the server demanding
client certificates, remove its root.crt file).  See
http://www.postgresql.org/docs/8.0/static/libpq-ssl.html
http://www.postgresql.org/docs/8.0/static/ssl-tcp.html

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] oids and pg_class_oid_index constraint violations

2005-08-03 Thread Aaron Harsh
We've just recently started seeing sporadic constraint violations on system 
tables.  For example:

  duplicate key violates unique constraint "pg_class_oid_index" [for Statement 
"CREATE TEMPORARY TABLE...

and

  duplicate key violates unique constraint "pg_toast_4292803267_index" [for 
Statement "INSERT INTO...

and the occasional

  unexpected chunk number 0 (expected 1) for toast value

I suspect that the problem is due to oid reuse, but I'd like to get a second 
opinion.  We've just recently grown to a billion tuples in our database (with 
all our tables built with oids), and the volume of activity makes me think it's 
likely that the majority of records have been updated three or four times.

Our current plan for dealing with the problem is to 'SET WITHOUT OIDS' on our 
tables, then dump & restore the database.

Does it seem reasonable that oid reuse could cause the constraint violations?  
Does it seem likely that this would fix the problem?  Is there an easier way to 
solve the problem?

Thanks in advance

-- 
Aaron Harsh
[EMAIL PROTECTED]
503-284-7581 x347


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Windows file path for copy

2005-08-03 Thread Magnus Hagander
Where does import_sharedata() come from? AFAIK, it's not a part of
standard PostgreSQL. The PostgreSQL command for this is COPY, and just
this fact is documented at
http://www.postgresql.org/docs/8.0/static/sql-copy.html, under "Notes".

//Magnus 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> Richard Sydney-Smith
> Sent: Wednesday, August 03, 2005 6:10 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Windows file path for copy
> 
> Thanks. I can see the logic, but many users are going to presume 
> that they can load from "their" desktop. For now I can 
> work around the issue, but will have to place instructions 
> in big letters unless I want to answer this ad nauseam.
> 
> 
> Magnus Hagander wrote: 
> 
>   Windows XP SP2 with Postgresql 8.0.3
>   Two commands: one fails, the other succeeds:
>   
>   Fails :
>   
>   select import_sharedata('C:\\Documents and 
>   Settings\\Richard\\Desktop\\EzyChart-20050721');
>   
>   Succeeds:
>   
>   select import_sharedata('C:\\EzyChart-20050721');
>   
>   Is it the spaces in the path that postgres does not like?
>   If so, how do I format the query please?
>   
>   The failure message indicates that it cannot access the file.
>   However, the file was downloaded to my desktop and thus was
>   created with my permissions.
>   
> 
>   
>   I assume import_sharedata() is a server-side function.
>   In this case, the *service account* needs permissions, not you.
>   And don't grant it to your desktop - that's generally a bad
>   idea :-) Use a shared dir somewhere that both you and the
>   service account have permissions on.
>   
>   //Magnus
>   
>   ---(end of 
> broadcast)---
>   TIP 1: if posting/reading through Usenet, please send 
> an appropriate
>  subscribe-nomail command to 
> [EMAIL PROTECTED] so that your
>  message can get through to the mailing list cleanly
>   
>   
>   
>   
> 
> 
> 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] oids and pg_class_oid_index constraint violations

2005-08-03 Thread Tom Lane
"Aaron Harsh" <[EMAIL PROTECTED]> writes:
> We've just recently started seeing sporadic constraint violations on system 
> tables.  For example:
>   duplicate key violates unique constraint "pg_class_oid_index" [for 
> Statement "CREATE TEMPORARY TABLE...

OID wraparound would explain that ...

> and the occasional
>   unexpected chunk number 0 (expected 1) for toast value

... but not that.  The latter might possibly be due to a corrupt index
on a toast table.  If you have a reproducible way of causing it, I'd
definitely love to see it.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] oids and pg_class_oid_index constraint

2005-08-03 Thread Aaron Harsh
> Tom Lane <[EMAIL PROTECTED]> 08/03/05 1:33 PM >>>
> "Aaron Harsh" <[EMAIL PROTECTED]> writes:
> > We've just recently started seeing sporadic constraint violations on system 
> > tables.  For example:
> >   duplicate key violates unique constraint "pg_class_oid_index" [for 
> > Statement "CREATE TEMPORARY TABLE...
> 
> OID wraparound would explain that ...

Fantastic.  Will our plan ('set without oids', pg_dump, pg_restore) take care 
of the problem?

> > and the occasional
> >   unexpected chunk number 0 (expected 1) for toast value
> 
> ... but not that.  The latter might possibly be due to a corrupt index
> on a toast table.  If you have a reproducible way of causing it, I'd
> definitely love to see it.

We've seen the error show up twice in our server log, but I'm not sure what 
caused the toast tables to get in that state.  Is there anything helpful we 
could do with the table next time (save the relevant data/base files; run a 
query against the toast tables)?

Thanks for the advice

-- 
Aaron Harsh
[EMAIL PROTECTED]
503-284-7581 x347


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] initskript after db start; locks on transactions

2005-08-03 Thread Thomas Chille
Hi!

I have two questions:

1.
What is the best approach to trigger a service script which will clean
up something in the db after every db (re-)start? Does PG have its own
mechanism for such things, or do I have to use my /etc/init.d/postgresql
script?

2.
Sometimes I read about locks on transactions. Is this only an
internal thing, or can I set them on my own, and if yes, for what? With
the LOCK command I can only lock tables, right?

Thanks for any help,
Thomas!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Instalation batch file

2005-08-03 Thread Zlatko Matić



Hello.

I would like to install a database schema on a server using files with a
dumped schema and globals (produced by pg_dumpall) that are placed on a
CD. The installation script (batch file) that restores these two files is
also placed on the same CD (let's say E:), in the same folder:

cd C:\Program Files\PostgreSQL\8.0\bin
psql -f E:\MYBASE_SHEMA.dumpall template1 -U zmatic
psql -f E:\MYBASE_SHEMA.dumpall template1 -U zmatic
vacuumdb -d MYBASE -U zmatic

But, what if the CD is F: or some other unit?
How could I change this batch file (Windows XP) to recognize the actual
path of the folder that contains these three files?

Thanks in advance,

Zlatko


Re: [GENERAL] feeding big script to psql

2005-08-03 Thread Havasvölgyi Ottó

Tom,

My queries were written in multi-line mode like this:

insert into t1 values(1,
2,
3);

I don't know what effect this has on performance..

Regards,
Otto



- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "Havasvölgyi Ottó" <[EMAIL PROTECTED]>
Cc: 
Sent: Wednesday, August 03, 2005 1:03 AM
Subject: Re: [GENERAL] feeding big script to psql



Havasvölgyi Ottó <[EMAIL PROTECTED]> writes:

Thanks for the suggestion. I have just applied both switches, -f (I had
applied this in the previous case too) and -n, but it becomes slow again.
At the beginning it reads about 300 KB a second, and when it has read 1.5
MB, it reads only about 10 KB a second; it slows down gradually. Maybe
others should also try this scenario. Can I help with anything?


Well, I don't see it happening here.  I made up a script consisting of a
whole lot of repetitions of

insert into t1 values(1,2,3);

with one of these inserted every 1000 lines:

\echo 1000 `date`

so I could track the performance.  I created a table by hand:

create table t1(f1 int, f2 int, f3 int);

and then started the script with

psql -q -f big.sql testdb

At the beginning I was seeing about two echoes per second.  I let it run
for an hour, and I was still seeing about two echoes per second.  That's
something close to 170MB of script file read (over 5.7 million rows
inserted by the time I stopped it).

So, either this test case is too simple to expose your problem, or
there's something platform-specific going on.  I don't have a windows
machine to try it on ...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] oids and pg_class_oid_index constraint

2005-08-03 Thread Tom Lane
"Aaron Harsh" <[EMAIL PROTECTED]> writes:
>> Tom Lane <[EMAIL PROTECTED]> 08/03/05 1:33 PM >>>
>> OID wraparound would explain that ...

> Fantastic.  Will our plan ('set without oids', pg_dump, pg_restore) take care 
> of the problem?

Only temporarily (ie, till the counter wraps around again).  If you can
reduce your consumption of OIDs enough that that's longer than your next
planned Postgres upgrade, maybe it's enough.

I'm a bit surprised that you are seeing the problem often enough to be
worried about it.  In a normal database with say less than a thousand
tables, the odds against a collision with an existing OID ought to be
several million to one.  Of course this analysis is too simplistic,
since the existing OIDs are probably not randomly scattered --- usually
they'll be tightly clumped at the point where you set up the database
and created all your persistent tables.  You might try looking at the
distribution of OIDs in your catalogs; it could be that you are going
through a period where the odds of collision are much more than
millions-to-one, but once you got past the range of OIDs initially
assigned, it'd drop down to much less than that.  If so, you don't have
to passively wait for that to happen --- you can force the nextOID
counter up past the "dense" range of OIDs (see pg_resetxlog).
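
One way to eyeball that distribution (a sketch only; pg_class is just one catalog, and the bucket size is arbitrary):

```sql
-- Overall OID range of existing relations:
SELECT min(oid), max(oid), count(*) FROM pg_class;

-- Bucket the catalog's OIDs to spot a "dense" range that the
-- nextOID counter may be about to collide with:
SELECT oid::bigint / 100000 AS bucket, count(*)
FROM pg_class
GROUP BY bucket
ORDER BY bucket;
```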

>>> and the occasional
>>> unexpected chunk number 0 (expected 1) for toast value
>> 
>> ... but not that.  The latter might possibly be due to a corrupt index
>> on a toast table.  If you have a reproducible way of causing it, I'd
>> definitely love to see it.

> We've seen the error show up in twice in our serverlog, but I'm not
> sure what caused the toast tables to get in that state.  Is there
> anything helpful we could do with the table next time (save the
> relevant data/base files; run a query against the toast tables)?

You could try something like
select chunk_seq from pg_toast.pg_toast_NNN where chunk_id = 
and see if you get the same results from both seqscan and indexscan
plans (use enable_seqscan and enable_indexscan to force it to be done
both ways).  Also see if reindexing the toast table makes the error go
away.
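
The checks above, sketched concretely (pg_toast_NNN and the chunk_id value are placeholders for your actual toast table and the OID from the error message):

```sql
-- Force an index scan:
SET enable_seqscan = off;
SET enable_indexscan = on;
SELECT chunk_seq FROM pg_toast.pg_toast_NNN WHERE chunk_id = 12345;

-- Force a sequential scan and compare the results:
SET enable_indexscan = off;
SET enable_seqscan = on;
SELECT chunk_seq FROM pg_toast.pg_toast_NNN WHERE chunk_id = 12345;

-- If the two differ, try rebuilding the toast table's index:
REINDEX TABLE pg_toast.pg_toast_NNN;
```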

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Instalation batch file

2005-08-03 Thread Glenn Davy
On Wed, 2005-08-03 at 23:29 +0200, Zlatko Matić wrote:
> Hello.
>  
> I would like to install database schema on a server using  files with
> dumped schema and globals (produced by dumpall), that are placed on
> CD. The installation script (batch file) that restores these two files
> is also placed on the same CD, (let's say E:) in the same folder.
>  
> cd C:\Program Files\PostgreSQL\8.0\bin
> psql -f E:\MYBASE_SHEMA.dumpall template1 -U zmatic
> psql -f E:\MYBASE_SHEMA.dumpall template1 -U zmatic
> vacuumdb -d MYBASE -U zmatic
> 
> But, what if CD is F: or some other unit ?  
> How could I change this batch file (Windows XP) to be able to
> recognize what is the actual path of the folder that containes these
> three files ? 

I'm not sure how to determine the path to the media (especially if there
is more than one CD/DVD drive), but you could take a couple of different
approaches:
1) Run the script from the CD drive and build the path to pg's
bin with %HOMEDRIVE%. Type 'SET' to see what other shell variables
there are.
2) Run the script from anywhere and take %1-style command line parameters
to ask for the location of either bin and/or the CD/DVD.

It's been years since I've had the misfortune to have to use Windows shell
scripts, but there are probably many other approaches - best to do a
little research into Windows shell scripting and determine the best one
for your scenario.

Glenn
>  
> Thanks in advance,
>  
> Zlatko


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Instalation batch file

2005-08-03 Thread SCassidy

Hi,

If the psql program is always going to be in C:\Program
Files\PostgreSQL\8.0\bin, couldn't you eliminate the "cd" and just specify
the full path (if psql is not in the current PATH) as needed?  Or, add it
to the PATH:

PATH %path%;C:\Program Files\PostgreSQL\8.0\bin

If you're running the .bat file from the CD, you won't have to know the
drive letter.

E.g.:

psql -f MYBASE_SHEMA.dumpall template1 -U zmatic
psql -f MYBASE_SHEMA.dumpall template1 -U zmatic
vacuumdb -d MYBASE -U zmatic

You could add the path for psql if you need to.  E.g.:

@ECHO off

if exist test_path.pl GOTO foundit
echo "placing psql in path"
PATH %path%;C:\Program Files\PostgreSQL\8.0\bin

:foundit
echo "now doing the psql"

 ...   then continue with your psql commands.
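
A related sketch for avoiding hard-coded drive letters entirely: cmd.exe expands %~dp0 to the drive and directory of the running batch file, so if the .bat sits next to the dump files on the CD, the drive letter never needs to be known in advance (untested here; treat it as an assumption to verify):

```bat
@ECHO off
REM %~dp0 = drive + path of this script, with a trailing backslash
SET SCRIPTDIR=%~dp0
PATH %PATH%;C:\Program Files\PostgreSQL\8.0\bin
psql -f "%SCRIPTDIR%MYBASE_SHEMA.dumpall" template1 -U zmatic
vacuumdb -d MYBASE -U zmatic
```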


Hope this helps.


Susan



   
Zlatko Matić wrote on 08/03/2005 02:29 PM (Subject: [GENERAL] Instalation batch file):



Hello.

I would like to install database schema on a server using  files with
dumped schema and globals (produced by dumpall), that are placed on CD. The
installation script (batch file) that restores these two files is also
placed on the same CD, (let's say E:) in the same folder.

cd C:\Program Files\PostgreSQL\8.0\bin
psql -f E:\MYBASE_SHEMA.dumpall template1 -U zmatic
psql -f E:\MYBASE_SHEMA.dumpall template1 -U zmatic
vacuumdb -d MYBASE -U zmatic
But, what if the CD is F: or some other unit?
How could I change this batch file (Windows XP) to recognize
the actual path of the folder that contains these three files?

Thanks in advance,

Zlatko








---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Internal catalogs error in log file

2005-08-03 Thread CSN
I saw this in my log file:

ERROR:  invalid regular expression: quantifier operand
invalid
STATEMENT:  SELECT n.nspname as "Schema",
  p.proname as "Name",
  CASE WHEN p.proretset THEN 'setof ' ELSE ''
END ||
  pg_catalog.format_type(p.prorettype, NULL)
as "Result data type",
  pg_catalog.oidvectortypes(p.proargtypes) as
"Argument data types"
FROM pg_catalog.pg_proc p
 LEFT JOIN pg_catalog.pg_namespace n ON
n.oid = p.pronamespace
WHERE p.prorettype <>
'pg_catalog.cstring'::pg_catalog.regtype
  AND p.proargtypes[0] <>
'pg_catalog.cstring'::pg_catalog.regtype
  AND NOT p.proisagg
  AND
pg_catalog.pg_function_is_visible(p.oid)
  AND p.proname ~ '^+$'
ORDER BY 1, 2, 3, 4;

It's Greek to me ;). Any idea what caused this error,
and whether there's some sort of problem?

Thanks,
CSN
PG 8.0.2

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] feeding big script to psql

2005-08-03 Thread Tom Lane
Havasvölgyi Ottó <[EMAIL PROTECTED]> writes:
> My queries were written in multi-line mode like this:

> insert into t1 values(1,
> 2,
> 3);

> I don't know, what effect this has to performace..

I tried my test again that way, and it made no difference at all.

Can anyone else replicate this problem?

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Internal catalogs error in log file

2005-08-03 Thread Michael Fuhr
On Wed, Aug 03, 2005 at 06:51:42PM -0700, CSN wrote:
>   AND p.proname ~ '^+$'
> ORDER BY 1, 2, 3, 4;
> 
> It's greek to me ;). Any idea what caused this error,
> and if there's some sort of problem?

Looks like somebody ran the command "\df +" in psql.  Shouldn't
be anything to worry about.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] untrusted languages and non-global superusers?

2005-08-03 Thread CSN
I'm using plphpu and I'd like to allow the regular
database user to use it, but since it's "untrusted" it
requires users to be superusers. If I have to do this,
I don't want the user to be a superuser for all
databases. Is it possible to grant superuser status to
a user for a specific database? All I'm familiar with
is "alter user joe createuser." Or is there a better
way of handling this language and permission issue?
(The function uses mail(), so IIRC that necessitates
using plphpu).

Thanks,
CSN

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] untrusted languages and non-global superusers?

2005-08-03 Thread Tom Lane
CSN <[EMAIL PROTECTED]> writes:
> I'm using plphpu and I'd like to allow the regular
> database user to use it, but since it's "untrusted" it
> requires users to be superusers. If I have to do this,
> I don't want the user to be a superuser for all
> databases. Is it possible to grant superuser status to
> a user for a specific database?

Exactly how would you prevent him from converting that into global
access?  Especially if you're going to give him use of an untrusted
language?  He could easily rewrite any configuration file you might
think is going to lock him out of your other databases.

> (The function uses mail(), so IIRC that necessitates
> using plphpu).

Sending mail from a database function (or doing anything else that
involves external side-effects) is generally A Bad Idea, for reasons
that have been covered many times in the list archives.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] SELECT count(*) Generating Lots of Write Activity

2005-08-03 Thread Logan Bowers
Thanks for the quick responses, everyone. It did turn out to be the
commit hint bits, as the table was just loaded (never accessed) and
subsequent SELECTs did not incur any write activity.

As a side note, I saw in the archives a past conversation about adding
an option to disable touching the commit hint bit.  There were some
conversations about possible uses for such a feature and I'd like to
propose this as a common one:
-Load a whole bunch of "raw" data into big table
-Munge/Transform data and insert it into existing, normalized schema
-Drop table of "raw" data

In my case, the "raw" data is on the order of hundreds of gigabytes and
the increased write activity is a HUGE penalty.  Even with smaller data
sets, this relatively common usage pattern could benefit greatly.  

Logan Bowers

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 01, 2005 7:09 PM
To: Logan Bowers
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] SELECT count(*) Generating Lots of Write Activity


"Logan Bowers" <[EMAIL PROTECTED]> writes:
> I'm potentially having a strange performance problem.  I have a BIG
> table: ~100M, ~1KB rows.  I do a SELECT count(*) from it (I know it
will
> be slow) and as I watch procinfo on my DB server I see a huge amount
of
> write activity.  Thus,

> 1)   Why does this statement generate any writes at all?

It could be that it's evicting unrelated dirty pages from cache
(although PG 8.0 is supposed to try to avoid doing that during a simple
seqscan).  Another theory is that the table has a whole lot of
recently-changed rows, and the writes are a side effect of the SELECT
setting commit hint bits to tell future transactions what it found out
about the commit status of the rows.

I dunno what procinfo is --- personally I would use strace and see
exactly which file(s) the database processes are issuing writes against.
Also check whether a second SELECT against the same table continues
to issue writes...

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] SELECT count(*) Generating Lots of Write Activity

2005-08-03 Thread Tom Lane
"Logan Bowers" <[EMAIL PROTECTED]> writes:
> In my case, the "raw" data is on the order of hundreds of gigabytes and
> the increased write activity is a HUGE penalty.

And you think the extra activity from repeated clog tests would not be a
huge penalty?

AFAICS this would only be likely to be a win if you were sure that no
row would be visited more than once before you drop (or truncate) the
containing table.  Which leads me to wonder why you inserted the row
into the database in the first place, instead of doing the data
aggregation on the client side.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] untrusted languages and non-global superusers?

2005-08-03 Thread CSN


--- Tom Lane <[EMAIL PROTECTED]> wrote:

> CSN <[EMAIL PROTECTED]> writes:
> > I'm using plphpu and I'd like to allow the regular
> > database user to use it, but since it's
> "untrusted" it
> > requires users to be superusers. If I have to do
> this,
> > I don't want the user to be a superuser for all
> > databases. Is it possible to grant superuser
> status to
> > a user for a specific database?
> 
> Exactly how would you prevent him from converting
> that into global
> access?  Especially if you're going to give him use
> of an untrusted
> language?  He could easily rewrite any configuration
> file you might
> think is going to lock him out of your other
> databases.

You lost me - how is any of that possible?

> 
> > (The function uses mail(), so IIRC that
> necessitates
> > using plphpu).
> 
> Sending mail from a database function (or doing
> anything else that
> involves external side-effects) is generally A Bad
> Idea, for reasons
> that have been covered many times in the list
> archives.

Why, exactly? In this situation I just set up a
trigger that sends a welcome email to newly inserted
members. Very convenient.

CSN




Start your day with Yahoo! - make it your home page 
http://www.yahoo.com/r/hs 
 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] What happens when wal fails?

2005-08-03 Thread Joseph Shraibman
If I put the pg_xlog directory on its own disk and then that disk fails, 
does that mean postgres is hosed, or just that postgres is no longer safe 
from a power outage?  Does pg detect a problem with the WAL and then call 
fsync() on the database files if the WAL isn't working?


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Tsearch2 doesn't work

2005-08-03 Thread Supiah Mustaffa
Hi,

I want to install Tsearch2 and followed these commands:
1. cd contrib/tsearch2
2. make; make install

It doesn't work and displays these messages:
Makefile:5: ../../src/Makefile.global: No such file or directory
Makefile:47: /contrib/contrib-global.mk: No such file or directory
make: *** No rule to make target `/contrib/contrib-global.mk'.  Stop.
Makefile:5: ../../src/Makefile.global: No such file or directory
Makefile:47: /contrib/contrib-global.mk: No such file or directory
make: *** No rule to make target `/contrib/contrib-global.mk'.  Stop.

What does this mean? What should I do? Please help me.

Thanks.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match