Re: [GENERAL] UTF8 conversion differences from v8.1.3 to v8.1.4

2006-07-18 Thread Tom Lane
Eric Faulhaber [EMAIL PROTECTED] writes:
 Can anyone help me understand why converting the NULL code point (0x00)
 from UTF8 to ISO8859_1 is no longer legal in v8.1.4?

Embedded nulls in text strings have never behaved sanely in PG ... or
hadn't you noticed?  You'd have been better off passing an empty string,
because that was effectively what you were getting.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] PostgreSQL on Embeded Systems

2006-07-18 Thread Adem HUR
Hi,

I am sorry for my bad English :)


We are trying to use PostgreSQL on an embedded system.

We plan to use this PC/104 module:
http://www.icop.com.tw/products_detail.asp?ProductID=106


and this EmbedDisk:
http://www.icop.com.tw/products_detail.asp?ProductID=185


There is a Linux version (X-Linux) for the PC/104 module.
http://www.dmp.com.tw/tech/os-xlinux/

This Linux looks sufficient for our project.

We have installed X-Linux on the PC/104 module.

Now we want to install PostgreSQL,

but there are some problems:

1 - Firstly, can we install PostgreSQL on this X-Linux?

2 - Our EmbedDisk capacity is 128 MB;
X-Linux takes approximately 32 MB.

We expect PostgreSQL to run standard queries on basic tables
and listen on port 5432.

One program collects data from peripheral devices and inserts it
into the PostgreSQL database. Another program accesses this database through
port 5432 to read and delete data.

We want to use PostgreSQL on the PC/104 module because we already use
PostgreSQL for other parts of the project.

We have a capacity problem, so the PostgreSQL installation and
database tables must be configured to a smaller size.

How can we do this? Any suggestions, information, documents?

How large is the PostgreSQL installation?


Thanks,



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] permission to create user

2006-07-18 Thread Rafal Pietrak
On Mon, 2006-07-17 at 07:54 -0400, John DeSoi wrote:
 On Jul 17, 2006, at 2:56 AM, Timothy Smith wrote:
 
  is it possible to give a non super user the ability to create  
  another user of a different group?
  i'm looking for a way to assign a special group of admin's just  
  enough rights to create other lowbie users without letting them  
  bypass all other access restrictions.
 
 You could create a function with the SECURITY DEFINER option which  
 allows the function to be executed with the privileges of the user  
 that created it.
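
For illustration, the SECURITY DEFINER approach suggested above might be sketched roughly as follows. The function name, group names, and parameters are invented for the example; the function should be created by a role that actually has the CREATEROLE/CREATEUSER attribute, and this is a sketch, not a tested implementation:

```sql
-- Hypothetical sketch only: create as a privileged user, then grant
-- EXECUTE to the admin group. SECURITY DEFINER makes the body run with
-- the creator's privileges, not the caller's.
CREATE OR REPLACE FUNCTION create_lowbie_user(username text, pwd text)
RETURNS void AS $$
BEGIN
    -- build the statement dynamically; quote_ident/quote_literal guard
    -- against SQL injection
    EXECUTE 'CREATE USER ' || quote_ident(username)
         || ' PASSWORD ' || quote_literal(pwd)
         || ' IN GROUP lowbies';
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION create_lowbie_user(text, text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION create_lowbie_user(text, text) TO GROUP admins;
```

The REVOKE/GRANT pair at the end is the important part: without it, any role could call the function and create users.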

I've been trying to do the same thing, and it works even without the
function. Still, it works with a 'glitch', and the reason for that
'glitch' is not quite clear to me. When I have:
CREATE GROUP masters;
ALTER ROLE masters CREATEUSER;
CREATE USER user_one IN GROUP MASTERS;
CREATE TABLE test1 (stamp timestamp, thing text);
REVOKE ALL ON test1 FROM PUBLIC;
GRANT INSERT ON test1 TO MASTERS;

Then, when I do:
system_prompt$ psql -U user_one mydb
mydb=> INSERT INTO test1 (stamp) VALUES (current_timestamp);
-- this works OK!!
mydb=> CREATE USER user_two;
-- this fails unless I do:
mydb=> SET ROLE masters;
mydb=> CREATE USER user_two;
-- this works OK, user_two gets created.

Does anyone know why I have to explicitly SET ROLE when I try to
exercise the group privilege of role creation, while I don't need that
when accessing tables? Is this a feature, or a bug?

-- 
-R

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL on Embeded Systems

2006-07-18 Thread Terry Lee Tucker
On Tuesday 18 July 2006 05:50 am, Adem HUR [EMAIL PROTECTED] thus 
communicated:
-- Hi,
--
-- I am sory for my bad English :)
--
--
-- We try to use PostgreSQL on an Embeded System.
--
-- Plan to use this PC/104 Module
-- http://www.icop.com.tw/products_detail.asp?ProductID=106
--
--
-- and this EmbedDisk
-- http://www.icop.com.tw/products_detail.asp?ProductID=185
--
--
-- There is a Linux version (X-Linux) for the PC/104 Module.
-- http://www.dmp.com.tw/tech/os-xlinux/
--
-- This Linux looks enough for our project.
--
-- We instal X-Linux on PC/104 Module.
--
-- Now, We want to install PostgreSQL
--
-- but there are some problems
--
-- 1 - Firstly,  Can we install PostgreSQL on this X-Linux?
--
-- 2 - Our EmbedDisk capacity is 128 Mb.
-- X-Linux size approximately 32Mb.
--
-- Our expectation, PostgreSQL operate standart query on basic tables
-- and listen 5432 port.
--
-- A program collect data from peripheral devices, insert this data
-- into PostgreSQL database. Another program access this database by use
 5432 -- port, read and delete data.
--
-- We want to use PostgreSQL on PC/104 Module  because we already use
-- PostgreSQL for other part of the project.
--
-- We have a capacity problem, so PostgreSQL installation and
-- database tables must be configure to smaller size.
--
-- How can we do this? suggestion,information, document...
--
-- How much the PostgreSQL installation size?
--
--
-- Thanks,
--
--
--
-- ---(end of broadcast)---
-- TIP 3: Have you checked our extensive FAQ?
--
--http://www.postgresql.org/docs/faq
--
From the 7.4.6 documentation:
Also check that you have sufficient disk space. You will need about 65 MB for 
the source tree during compilation and about 15 MB for the installation 
directory. An empty database cluster takes about 25 MB, databases take about 
five times the amount of space that a flat text file with the same data would 
take. If you are going to run the regression tests you will temporarily need 
up to an extra 90 MB. Use the df command to check for disk space.

This includes space required for compiling the application. Looks like about 
40 MB for the installation and an empty cluster.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Antw: Re: [GENERAL] Performance problem with query

2006-07-18 Thread Christian Rengstl
Now, finally, after a long time I have the query plan for the whole filled
table. I hope somebody can explain to me why it takes so much longer...

                                   QUERY PLAN
------------------------------------------------------------------------------
 Result  (cost=0.09..254643.09 rows=8044000 width=39) (actual time=0.043..0.043 rows=0 loops=1)
   One-Time Filter: ((((split_part(($1)::text, '_'::text, 2))::smallint = 1) IS NOT TRUE) AND (((split_part(($2)::text, '_'::text, 2))::smallint = 22) IS NOT TRUE))
   InitPlan
     ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
           ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
     ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.006 rows=1 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
     ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
           ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
   ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (never executed)
 Total runtime: 0.238 ms

 Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=0.056..655772.273 rows=8044000 loops=1)
   One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
   InitPlan
     ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.002..0.002 rows=1 loops=1)
     ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.007 rows=1 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
   ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (actual time=0.002..191672.344 rows=8044000 loops=1)
 Total runtime: 62259544.896 ms

 Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=1.245..1.245 rows=0 loops=1)
   One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 22)
   InitPlan
     ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
           ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
     ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.013..0.014 rows=1 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.009..0.009 rows=1 loops=1)
   ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (never executed)
 Total runtime: 22.270 ms
(31 rows)



 Merlin Moncure [EMAIL PROTECTED] 13.07.06 15.20 Uhr 
On 7/13/06, Christian Rengstl
[EMAIL PROTECTED] wrote:
 Good morning list,

 the following query takes about 15 to 20 minutes for around 2 million
lines in the file myfile.txt, but with 8 million lines it takes around 5
hours and i just don't understand why there is such a huge discrepancy
in performance.


on the surface it doesn't make sense, can you post an explain analyze?

merlin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] function taking a long time. any options o speed it up.

2006-07-18 Thread Rhys Stewart

Hi all,

I created a function and it takes a long time to run. I was testing it
as I wrote it, and up to the first DROP TABLE it takes just short of a
second. However, when the rest of the code is added, it takes
upwards of 5 minutes. Not the most appropriate thing. Are there any
tips out there for making functions go faster?

the code follows:

CREATE OR REPLACE FUNCTION public.interp_point(character varying)
RETURNS record
LANGUAGE plpgsql
VOLATILE
AS
$not_in_route$
-- Note: the function builds a record of statistics and returns it, so
-- the return type is declared as record rather than varchar.

DECLARE
rout ALIAS FOR $1;
onlygeom geometry;
mathai record;
minthresh integer;
maxthresh integer;
tempname varchar;
amount integer;
total integer;
recset record;
route_len float8;
route_time integer;
prem_time integer;
cur_perc numeric(5,2) default 50;
perc_old numeric(5,2) default 50;
dif numeric(5,2) default 0;
BEGIN
   tempname := 'tmp_' || rout;
   EXECUTE 'create table ' || tempname || ' as select route, centroid(geomunion(the_geom)) as centroid from route where route = '
     || quote_literal(rout) || ' group by route';
   EXECUTE 'SELECT distance(the_geom, (select centroid from ' || tempname
     || '))::int as dist from route where route = ' || quote_literal(rout)
     || ' order by dist limit 1' INTO minthresh;
   EXECUTE 'SELECT distance(the_geom, (select centroid from ' || tempname
     || '))::int as dist from route where route = ' || quote_literal(rout)
     || ' order by dist desc limit 1' INTO maxthresh;
   -- "&&" (the PostGIS bounding-box overlap operator) was stripped from
   -- the original posting in transit and is restored below.
   EXECUTE 'SELECT count(prem) from route where route = ' || quote_literal(rout)
     || ' AND the_geom && expand((select centroid from ' || tempname
     || '),' || minthresh || ')' INTO amount;
   SELECT INTO total count(prem) from route where route = rout;
   SELECT INTO cur_perc ((amount::float/total::float)*100)::numeric(5,2);
   LOOP
     minthresh := minthresh + 90;
     perc_old := cur_perc;
     EXECUTE 'SELECT count(prem) from route where route = '
       || quote_literal(rout)
       || ' AND the_geom && expand((select centroid from '
       || tempname || '),' || minthresh || ')' INTO amount;
     SELECT INTO cur_perc ((amount::float/total::float)*100)::numeric(5,2);
     dif := abs(cur_perc - perc_old);
     -- "<" and ">" were likewise stripped; this reconstruction exits once
     -- coverage stops growing noticeably and is above 40%
     IF dif < 3.25 AND cur_perc > 40 THEN
       EXIT;
     END IF;
   END LOOP;

   EXECUTE 'UPDATE route SET the_geom = null, matchtype = ' || quote_literal('4GEOCODE')
     || ' where route = ' || quote_literal(rout)
     || ' AND the_geom is null OR (matchtype ~* ' || quote_literal('route')
     || ' OR matchtype ~* ' || quote_literal('t[e]*mp')
     || ' OR matchtype ~* ' || quote_literal('place')
     || ' OR matchtype ~* ' || quote_literal('fuzzy')
     || ' OR matchtype ~* ' || quote_literal('cent')
     || ') OR prem NOT in (select prem from route where route = ' || quote_literal(rout)
     || ' and the_geom && expand((select centroid from ' || tempname
     || '),' || minthresh || '))';
   EXECUTE 'DROP TABLE ' || tempname;
   EXECUTE 'create table ' || tempname
     || ' as select makeline(the_geom) from (SELECT the_geom from route where route = '
     || quote_literal(rout) || ' order by timestamp) as the_geom';
   EXECUTE 'SELECT length(makeline) FROM ' || tempname INTO route_len;
   EXECUTE 'SELECT time2sec((select timestamp from route where route = '
     || quote_literal(rout) || ' order by timestamp desc limit 1) - (select timestamp from route where route = '
     || quote_literal(rout) || ' order by timestamp limit 1))' INTO route_time;
   FOR mathai IN EXECUTE 'SELECT * FROM route WHERE route = '
     || quote_literal(rout) || ' AND matchtype = ' || quote_literal('4GEOCODE')
     || ' order by timestamp' LOOP
     EXECUTE 'SELECT time2sec((select timestamp from route where route = '
       || quote_literal(rout) || ' order by timestamp desc limit 1) - (select timestamp from route where prem = '
       || quote_literal(mathai.prem) || '))' INTO prem_time;
     -- cast to float8 so the division is not truncated by integer arithmetic
     perc_old := ((route_time - prem_time)::float8 / route_time)::numeric(5,2);
     EXECUTE 'SELECT line_interpolate_point((SELECT makeline from ' || tempname
       || '), ' || perc_old || ')' INTO onlygeom;
     EXECUTE 'UPDATE route SET the_geom = ' || quote_literal(onlygeom)
       || ' WHERE prem = ' || quote_literal(mathai.prem);
   END LOOP;
   EXECUTE 'DROP TABLE ' || tempname;
   SELECT INTO recset route_len, amount, total, cur_perc, minthresh, maxthresh, dif;
   RETURN recset;

END;

$not_in_route$
;

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] permission to create user

2006-07-18 Thread Michael Fuhr
On Tue, Jul 18, 2006 at 01:45:01PM +0200, Rafal Pietrak wrote:
 Any one knows, why do I have to explicitly SET ROLE, when I try to
 exercise the group priviledge of role creation, while I don't need that
 when accessing tables? Is this a feature, or a bug?

http://www.postgresql.org/docs/8.1/interactive/role-membership.html

The role attributes LOGIN, SUPERUSER, CREATEDB, and CREATEROLE can
be thought of as special privileges, but they are never inherited
as ordinary privileges on database objects are.  You must actually
SET ROLE to a specific role having one of these attributes in order
to make use of the attribute.  Continuing the above example, we
might well choose to grant CREATEDB and CREATEROLE to the admin
role.  Then a session connecting as role joe would not have these
privileges immediately, only after doing SET ROLE admin.
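
Applied to the example earlier in this thread, the workaround is simply to switch into the role that carries the attribute before creating the user (a sketch using the role names from that example):

```sql
-- user_one is a member of masters, and masters has the CREATEUSER
-- attribute; role attributes are never inherited, so the session must
-- switch into the role explicitly first:
SET ROLE masters;
CREATE USER user_two;
RESET ROLE;  -- return to user_one's own privileges
```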

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Fuzzy Querys

2006-07-18 Thread Werner Echezuria
Hi,
Does anyone know if there is a project to support FSQL queries in PostgreSQL?

Thanks


[GENERAL] Please explain the gin index

2006-07-18 Thread Kevin Murphy

PG tsearch2 users,

I notice there is a backport to 8.1 of the 8.2 tsearch2
(http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch2WhatsNew).


Has anybody used this?

What are the performance differences between the new GIN index and GIST?

Thanks,
Kevin Murphy


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] permission to create user

2006-07-18 Thread Rafal Pietrak
On Tue, 2006-07-18 at 07:31 -0600, Michael Fuhr wrote:
 On Tue, Jul 18, 2006 at 01:45:01PM +0200, Rafal Pietrak wrote:
  Any one knows, why do I have to explicitly SET ROLE, when I try to
  exercise the group priviledge of role creation, while I don't need that
  when accessing tables? Is this a feature, or a bug?
 
 http://www.postgresql.org/docs/8.1/interactive/role-membership.html
 
 The role attributes LOGIN, SUPERUSER, CREATEDB, and CREATEROLE can
 be thought of as special privileges, but they are never inherited
 as ordinary privileges on database objects are.  You must actually
 SET ROLE to a specific role having one of these attributes in order
 to make use of the attribute.  Continuing the above example, we
 might well choose to grant CREATEDB and CREATEROLE to the admin
 role.  Then a session connecting as role joe would not have these
 privileges immediately, only after doing SET ROLE admin.

Thanks. So it's a feature (it is documented).

My apologies if the following question is naive, but digging into it a bit
more:

Is it a feature because it should be that way (and if so, why? does the
standard say so?), or is it a feature merely because it is documented, as
in: although we'd like it to work the way privileges work on tables, the
current server-side framework does not allow us to implement it that way.

In other words:
1) is the discrepancy by design (and if so, why?), or
2) is it by accident - does it just result from the development history?

-- 
-R

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] custom installer

2006-07-18 Thread Kevin Flanders



I am looking for someone who might help us build a custom pgsql installer
for a project of ours. Anyone interested? I see this as a small project,
with other support to follow.

Kevin



Re: [GENERAL] permission to create user

2006-07-18 Thread Timothy Smith

Rafal Pietrak wrote:

On Mon, 2006-07-17 at 07:54 -0400, John DeSoi wrote:
  

On Jul 17, 2006, at 2:56 AM, Timothy Smith wrote:


is it possible to give a non super user the ability to create  
another user of a different group?
i'm looking for a way to assign a special group of admin's just  
enough rights to create other lowbie users without letting them  
bypass all other access restrictions.
  
You could create a function with the SECURITY DEFINER option which  
allows the function to be executed with the privileges of the user  
that created it.



I've been trying to do that same thing, and it works even without the
function. Still, it works with a 'glitch' but the reason for that
'glitch' is not quite clear to me. When I have:
CREATE GROUP masters;
ALTER ROLE masters CREATEUSER;
CREATE USER user_one IN GROUP MASTERS;
CREATE TABLE test1 (stamp timestamp, thing text);
REVOKE ALL ON test1 FROM PUBLIC;
GRANT INSERT ON test1 TO MASTERS;

Then, then I do:
system_prompt$ psql -U user_one mydb
mydb INSERT INTO test1 (stamp) VALUES (current_timestamp);
-- this works OK!!
mydb CREATE USER user_two;
-- this fails unless I do:
mydb SET ROLE masters;
mydb CREATE USER user_two;
-- this works OK, user_two gets created.

Any one knows, why do I have to explicitly SET ROLE, when I try to
exercise the group priviledge of role creation, while I don't need that
when accessing tables? Is this a feature, or a bug?

  
I got it to work using the earlier advice: setting CREATEROLE for the 
group of users I wanted to have that permission.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Simple webuser setup

2006-07-18 Thread msiner
I have a db named archive, a schema named library, and a table named
book.  I have granted USAGE on library to webuser and SELECT on book to
webuser.  When I try to do a select I get an error saying "permission
denied for schema library".  I have tried every combination I can think
of, but none seems to work.  My real question is: what should I do
to get this working?  What permissions do I need on the
db, schema, and table? I have a user named webuser belonging to a
group named webgroup, and I am granting the above privileges to
webgroup.
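
For reference, the minimal grants that should make the SELECT work, using the names from the question above (a sketch; it assumes webuser really is a member of webgroup and that the grants are issued in the archive database itself):

```sql
-- Both steps are needed: USAGE on the schema lets the role look inside
-- it, and SELECT on the table lets it read the rows.
GRANT USAGE ON SCHEMA library TO GROUP webgroup;
GRANT SELECT ON library.book TO GROUP webgroup;

-- then, connected to archive as webuser:
--   SELECT * FROM library.book;
```

A common pitfall is issuing the grants in a different database than the one the web app connects to, since grants are per-database.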


Tino Wildenhain wrote:
 [EMAIL PROTECTED] wrote:
  There must be something simple that I am missing, but here is my
  problem.  I am setting up a standard pg install as a backend to a small
  webapp.  I want to create a user webuser with only enough privileges
  to query all of the tables in my database.  It has not been working for
  me.  What is the simplest way to do this?  Do I need to start at the

 what does "is not working for me" mean in detail?

  top and then work down (db -> schema -> table) or is there any cascading
  effect?  I am still pretty new to web development, so is there a
  better/easier way to achieve the same effect?

 There is no cascading effect when it comes to permissions, but if you
 don't have permission for a schema you can't access the objects in it :-)

 Maybe you can use your schema script and edit it semi-automatically.

 Regards
 Tino



 ---(end of broadcast)---
 TIP 6: explain analyze is your friend


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] cannot un-install postgresql

2006-07-18 Thread Jim
PostgreSQL will not completely uninstall from my Windows XP computer!
Yes, I used the wizard provided (Control Panel > Add/Remove Programs >
Remove). So I uninstalled using the wizard, deleted the Postgres
directory in Program Files, and rebooted. But the server that I had
previously added was still visible. Not only that, but the user Jim is
still listed there. And now my password doesn't work anymore!

When I re-installed, I went back to postgres for the account name and
superuser, and "Jim" was never entered. But it's there now. Same with
the server, even after the entire uninstall - reboot - install process.
I am using the Windows installer postgresql-8.1.4-1 zip file, extracted
to a clean folder. I need to get PostgreSQL completely off my hard drive
so I can re-install it from scratch.

Re: [GENERAL] permission to create user

2006-07-18 Thread Rafal Pietrak
Hi,

I've been trying to do the same thing, and it works.

Still, one point in the process is not quite clear to me. When I have:
CREATE GROUP masters;
ALTER ROLE masters CREATEUSER;
CREATE USER user_one IN GROUP MASTERS;
CREATE TABLE test1 (stamp timestamp, thing text);
REVOKE ALL ON test1 FROM PUBLIC;
GRANT INSERT ON test1 TO MASTERS;

Then, when I do:
system_prompt$ psql -U user_one mydb
mydb=> INSERT INTO test1 (stamp) VALUES (current_timestamp);
-- this works OK!!
mydb=> CREATE USER user_two;
-- this fails unless I do:
mydb=> SET ROLE masters;
mydb=> CREATE USER user_two;
-- this works OK, user_two gets created.

Does anyone know why I have to explicitly SET ROLE when I try to
exercise the group privilege of role creation, while I don't need that
when accessing tables? Is this a feature, or a bug?

-R

On Mon, 2006-07-17 at 07:54 -0400, John DeSoi wrote:
 On Jul 17, 2006, at 2:56 AM, Timothy Smith wrote:
 
  is it possible to give a non super user the ability to create  
  another user of a different group?
  i'm looking for a way to assign a special group of admin's just  
  enough rights to create other lowbie users without letting them  
  bypass all other access restrictions.
 
 You could create a function with the SECURITY DEFINER option which  
 allows the function to be executed with the privileges of the user  
 that created it.
 
 
 http://www.postgresql.org/docs/8.1/interactive/sql-createfunction.html
 
 
 
 
 John DeSoi, Ph.D.
 http://pgedit.com/
 Power Tools for PostgreSQL
 
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
-- 
Rafal Pietrak [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] postgreSQL installation

2006-07-18 Thread polder_udo
Thanks, Martijn.

I'll have a look in the documentation now that I know it is possible.


Martijn van Oosterhout schrieb:

 On Mon, Jul 17, 2006 at 02:25:20AM -0700, polder_udo wrote:
  Hello
 
  i am planning to use postgreSQL in a new project and have some very
  basic questions reguarding installing the product.
  The project will be written in C# and will be running only on windows.
  The software will be installed via a standard setup.
 
  1) Is it possible to install postgreSQL in quite mode from an other
  setup?

 Yes, please read the installer documentation.

  2) Is it possible to configure postgreSQL automaticly/scripted from
  installation?

 Also covered in installer docs.

 http://pginstaller.projects.postgresql.org/silent.html

  3) Is it possible to create all tables/views/porcs scripted from an
  external app/setup?

 psql < setup.sql

 Have a nice day,
 --
 Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
  From each according to his ability. To each according to his ability to 
  litigate.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Simple webuser setup

2006-07-18 Thread Raymond O'Donnell
On 17 Jul 2006 at 6:08, [EMAIL PROTECTED] wrote:

 I have a db named archive, a schema named library, and a table named
 book.  I have granted USAGE on library to webuser and SELECT on book
 to webuser.  When I try to do a select I get an error saying
 permission denied for schema library.  I have tried every combination

How are you accessing the DB from your web app? What 
language/environment are you using?

Are you sure that the username/password combination is being passed 
along to the DB? Have you tried connecting directly to the DB using 
psql, logging in as webuser, and running queries against the DB?

--Ray.


--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Simple webuser setup

2006-07-18 Thread Raymond O'Donnell
On 17 Jul 2006 at 6:08, [EMAIL PROTECTED] wrote:

 to get this effect for the above?  What permissions do I need for the
 db, schema, and table? I have a user named webuser belonging to a
 group named webgroup and I am granting the above privileges to
 webgroup.

All you really need is the appropriate permissions on the 
table(s)... Personally, I don't understand what permissions mean on 
databases or schemas, but I've never needed to grant any.

--Ray.

--

Raymond O'Donnell
Director of Music, Galway Cathedral, Galway, Ireland
[EMAIL PROTECTED]
--



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] How to pass array of values to a pgplsql function

2006-07-18 Thread Curtis Scheer
Does anyone have any examples of how I would write a stored procedure in
plpgsql that allows passing a list or array of values to be used
in an SQL IN clause?  Like so: select * from table where field1 in (values).
Is this possible? 
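
For what it's worth, a common way to get this effect is to pass an array and compare with = ANY, which behaves like IN over the array's elements. The sketch below uses invented names (mytable is hypothetical; field1 is from the question) and is untested:

```sql
-- Sketch: an SQL-language function taking an integer array.
CREATE OR REPLACE FUNCTION get_rows(ids integer[])
RETURNS SETOF mytable AS $$
    SELECT * FROM mytable WHERE field1 = ANY($1);
$$ LANGUAGE sql;

-- Called as:
--   SELECT * FROM get_rows(ARRAY[1, 2, 3]);
```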


Thanks,

Curtis

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] permission to create user

2006-07-18 Thread Tom Lane
Rafal Pietrak [EMAIL PROTECTED] writes:
 1) is the discrepancy by design (why?) or

Yes.  I think we were mostly concerned about superuserness being too
dangerous to inherit.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] custom installer

2006-07-18 Thread Tony Caduto

Kevin Flanders wrote:
I am looking for someone that might help us build a custom pgsql 
installer for a project of ours.  Anyone interested.  I see this as a 
small project, with other support to follow.

Kevin
I built a complete installer with Inno Setup that you can use to embed 
your application with.

You can get it here:
http://www.amsoftwaredesign.com/downloads/pg_installer_setup.zip

It uses a Delphi DLL to handle things like the cluster init.

All source is included.

This is of course for win32.

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] number of distinct values in tsearch2 gist index

2006-07-18 Thread Kevin Murphy
In the output of gist_stat() in the gevel contrib module, is the number 
of tuples the number of distinct values stored in the index?  Is the 
number (6M) so large because the positions are part of the values?  I'm 
guessing I can't determine the number of distinct lexemes from the 
output of gist_stat()?


select gist_stat('idxfti_idex');
 gist_stat
--
Number of levels:  5
Number of pages:   244881
Number of leaf pages:  234515
Number of tuples:  6017003
Number of invalid tuples:  0
Number of leaf tuples: 5772123
Total size of tuples:  1846434636 bytes
Total size of leaf tuples: 1779845516 bytes
Total size of index:   2006065152 bytes

(1 row)

Time: 193023.012 ms

Thanks,
Kevin Murphy


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Background Writer and performances

2006-07-18 Thread Jan Wieck

On 7/10/2006 9:49 AM, Martijn van Oosterhout wrote:

On Mon, Jul 10, 2006 at 02:56:48PM +0200, DANTE Alexandra wrote:

**
I would like to send charts to show you exactly what happens on the 
server but, with the pictures, this e-mail is not posted on the mailing 
list.

I can send charts to a personal e-mail adress if needed.
**


The best idea is to upload them to a website.

By comparing the charts, I can see that the checkpoints are less 
expensive in terms of disk activity, IO/s and disk write throughput when 
the parameters are set to the maximum values, but I don't manage to 
get constant disk IO/s, disk activity and disk write throughput before and 
after a checkpoint. I was expecting to see more activity on the disks 
during the bench (and not only a peak during the checkpoint) when the 
parameters are set to the maximum values. Is that possible?


I have very little experience with the bgwriter, but on the whole, I
don't think the bgwriter will change the total number of I/Os. Rather,
it changes the timing to make them more consistant and the load more
even.


The bgwriter can only increase the total amount of IO. What it does is 
to write dirty pages out before a checkpoint or another backend (due to 
eviction of the buffer) has to do it. This means that without the 
bgwriter doing so, there would be a chance that a later update to the 
same buffer would hit an already dirty buffer as opposed to a now clean 
one. The upside of this increased write activity is that it happens all 
the time, spread out between the checkpoints and that this doesn't allow 
for large buffer cache configurations to accumulate tens of thousands of 
dirty buffers.


The latter is a typical problem with OLTP type benchmarks that are 
designed more closely to real world behaviour, like the TPC-C and TPC-W. 
In those benchmarks, hundreds or thousands of simulated users basically 
go through dialog steps of an application, and just like a real user 
they don't fill in the form in milliseconds and slam ASAP onto the 
submit button, they need a bit of time to think or type. In that 
scenario, the performance drop caused by a checkpoint will let more and 
more users finish their think/type phase and actually submit the 
next transaction (dialog step), causing a larger and larger number of 
concurrent DB requests and basically sending the DB server into a 
downward spiral.


The default settings are not sufficient for update intense applications.
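
For anyone wanting to experiment, these are the 8.1-era background writer knobs in postgresql.conf; the values below are illustrative starting points only, not recommendations, and need tuning against the actual workload:

```ini
# Background writer settings (PostgreSQL 8.1 parameter names;
# values are examples, not recommendations)
bgwriter_delay = 200            # ms to sleep between rounds
bgwriter_lru_percent = 5.0      # % of LRU buffers scanned per round
bgwriter_lru_maxpages = 100     # max dirty LRU buffers written per round
bgwriter_all_percent = 0.333    # % of the whole pool scanned per round
bgwriter_all_maxpages = 10      # max dirty buffers written per round
```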

I am not familiar with BenchmarkSQL, but 9 terminals with a 200 
warehouse configuration doesn't sound like it is simulating real user 
behaviour as outlined above.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] PostgreSQL on Embeded Systems

2006-07-18 Thread Robert Treat
On Tuesday 18 July 2006 07:18, Terry Lee Tucker wrote:
 On Tuesday 18 July 2006 05:50 am, Adem HUR [EMAIL PROTECTED] thus
 communicated:
 -- Hi,
 --
 -- I am sorry for my bad English :)
 --
 --
 -- We try to use PostgreSQL on an Embedded System.
 --
 -- Plan to use this PC/104 Module
 -- http://www.icop.com.tw/products_detail.asp?ProductID=106
 --
 --
 -- and this EmbedDisk
 -- http://www.icop.com.tw/products_detail.asp?ProductID=185
 --
 --
 -- There is a Linux version (X-Linux) for the PC/104 Module.
 -- http://www.dmp.com.tw/tech/os-xlinux/
 --
 -- This Linux looks enough for our project.
 --
 -- We install X-Linux on the PC/104 Module.
 --
 -- Now, We want to install PostgreSQL
 --
 -- but there are some problems
 --
 -- 1 - Firstly,  Can we install PostgreSQL on this X-Linux?
 --
 -- 2 - Our EmbedDisk capacity is 128 Mb.
 -- X-Linux size approximately 32Mb.
 --
 -- Our expectation: PostgreSQL runs standard queries on basic tables
 -- and listen 5432 port.
 --
 -- A program collect data from peripheral devices, insert this data
 -- into PostgreSQL database. Another program access this database by
 use 5432 -- port, read and delete data.
 --
 -- We want to use PostgreSQL on PC/104 Module  because we already use
 -- PostgreSQL for other part of the project.
 --
 -- We have a capacity problem, so PostgreSQL installation and
 -- database tables must be configured to a smaller size.
 --
 -- How can we do this? suggestion,information, document...
 --
 -- How much the PostgreSQL installation size?
 --
 --
 -- Thanks,
 --
 --
 --
 -- ---(end of
 broadcast)--- -- TIP 3: Have you checked our
 extensive FAQ?
 --
 --http://www.postgresql.org/docs/faq
 --
 From the 7.4.6 documentation:
 Also check that you have sufficient disk space. You will need about 65 MB
 for the source tree during compilation and about 15 MB for the installation
 directory. An empty database cluster takes about 25 MB, databases take
 about five times the amount of space that a flat text file with the same
 data would take. If you are going to run the regression tests you will
 temporarily need up to an extra 90 MB. Use the df command to check for disk
 space.

 This includes space required for compiling the application. Looks like
 about 40 MB for the installation and an empty cluster.

Well, you certainly aren't going to want to compile from source and run the 
full regression tests on each install... I'd think you would want some type 
of imaged configuration set up... for example dropping the template0, 
template1, and postgres databases once you have your database installed. 
There are other things that can be done too... check the archives, as others 
have weighed in on this topic before. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Log actual params for prepared queries: TO-DO item?

2006-07-18 Thread Bruce Momjian
Ed L. wrote:
 
 We'd like to attempt some log replay to simulate real loads, but 
 in 8.1.2, it appears the formal parameters are logged ('$') 
 instead of the actuals for prepared queries, e.g.:
 
 EXECUTE unnamed  [PREPARE:  UPDATE sessions SET a_session = $1 
 WHERE id = $2]
 
 Thoughts on making this a to-do item?

It is already a TODO item:

* Allow protocol-level BIND parameter values to be logged

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Antw: [GENERAL] Performance problem with query

2006-07-18 Thread Christian Rengstl
Now, finally, after a long time I have the query plan for the whole filled table. 
I hope somebody can explain to me why it takes so much longer...

QUERY PLAN
--------------------------------------------------------------------------------
Result  (cost=0.09..254643.09 rows=8044000 width=39) (actual time=0.043..0.043 rows=0 loops=1)
  One-Time Filter: ((((split_part(($1)::text, '_'::text, 2))::smallint = 1) IS NOT TRUE) AND (((split_part(($2)::text, '_'::text, 2))::smallint = 22) IS NOT TRUE))
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.006 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (never executed)
Total runtime: 0.238 ms

Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=0.056..655772.273 rows=8044000 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.002..0.002 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.007 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (actual time=0.002..191672.344 rows=8044000 loops=1)
Total runtime: 62259544.896 ms

Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=1.245..1.245 rows=0 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 22)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.013..0.014 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.009..0.009 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (never executed)
Total runtime: 22.270 ms
(31 rows)



 Christian Rengstl [EMAIL PROTECTED] 13.07.06 8.37 Uhr 
Good morning list,

the following query takes about 15 to 20 minutes for around 2 million lines in 
the file myfile.txt, but with 8 million lines it takes around 5 hours, and I 
just don't understand why there is such a huge discrepancy in performance.

COPY public.temp_table FROM 'myfile.txt' WITH DELIMITER '\t';

INSERT INTO public.master(pid,smid, val1, val2, chr)
SELECT pid, smid, val1, val12, CAST(split_part((SELECT chr from 
public.temp_table LIMIT 1), '_', 2) as int2)
   FROM public.temp_table; 

INSERT INTO public.values(smid, pos, chr)
SELECT DISTINCT smid, pos, CAST(split_part((SELECT chr from 
public.temp_table LIMIT 1), '_', 2) as int2) 
FROM public.temp_table; 

I came up with this query because I wanted to use the COPY command to load 
huge files into the db, but I don't want to load all the columns contained in 
the file into only one table; instead I copy some of them into one table and some 
into a second table. As I found out with logging, the data is loaded into temp_table 
within 15 minutes, but transferring it from the temp_table took again only 
something like 10 minutes. Can it be that the cast takes up so much more time 
than when reading and transferring 2 million lines?
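To make the cast phrase less mysterious, here is what it evaluates to step by step; the literal 'chr_7' stands in for a sample value of the chr column:

```sql
-- split_part() splits its input on '_' and returns the n-th field (1-based)
SELECT split_part('chr_7', '_', 2);                -- returns '7'

-- the surrounding CAST then turns that text field into a smallint
SELECT CAST(split_part('chr_7', '_', 2) AS int2);  -- returns 7
```

Since the inner SELECT ... LIMIT 1 picks one arbitrary row, the whole expression is effectively a per-file constant.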

Thanks for any advice!

--
Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How to pass array of values to a pgplsql function

2006-07-18 Thread Erik Jones

Curtis Scheer wrote:

Does anyone have any examples of how I would make a stored procedure in
plpgsql that would allow for passing a list or arrays of values to be used
in an sql IN clause?  Like so: select * from table where field1 in (values).
Is this possible? 

  
Well, a good thing to note here is that there is a very distinct 
semantic difference between an array in postgres and what IN clauses 
take as input: an array
is a data type, whereas IN clauses take a parenthesized list of comma-
separated values.  So, if you pass an array into a function and then need
to use those values in an IN clause, you can build yourself a string of 
the values in the array, comma separated of course.


e.g.

CREATE OR REPLACE FUNCTION somefunc(ids int[]) RETURNS boolean AS $sf$
DECLARE
   in_values varchar := '';  -- start non-NULL so the concatenation below works
   good int;
BEGIN
   FOR i IN 1 .. array_upper(ids, 1) LOOP  -- assumes a non-empty array
      in_values := in_values || ids[i] || ',';
   END LOOP;
   in_values := substring(in_values FROM 1 FOR 
character_length(in_values) - 1); -- this will chop off the last comma

   EXECUTE 'SELECT 1 FROM blah WHERE blah.id IN (' || in_values || ');' 
INTO good;

   IF good = 1 THEN
      RETURN TRUE;
   ELSE
      RETURN FALSE;
   END IF;
END;
$sf$ LANGUAGE plpgsql;

Or, it may be easier, given your situation, to simply use the 
array as the argument to an ANY or SOME expression.
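A minimal sketch of that alternative (the table blah and the id values are hypothetical):

```sql
-- matches any row whose id appears in the array; no string building or EXECUTE needed
SELECT * FROM blah WHERE id = ANY (ARRAY[1, 2, 3]);

-- SOME is a synonym for ANY in this context
SELECT * FROM blah WHERE id = SOME (ARRAY[1, 2, 3]);
```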


--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] How to pass array of values to a pgplsql function

2006-07-18 Thread Erik Jones

Erik Jones wrote:

Curtis Scheer wrote:

Does anyone have any examples of how I would make a stored procedure in
plpgsql that would allow for passing a list or arrays of values to be 
used
in an sql IN clause?  Like so: select * from table where field1 in 
(values).

Is this possible?
  
Well, a good thing to note here is that there is a very distinct 
semantic difference between an array in postgres and what IN clauses 
take as input: an array
is a data type, whereas IN clauses take a parenthesized list of comma-
separated values.  So, if you pass an array into a function and then need
to use those values in an IN clause, you can build yourself a string 
of the values in the array, comma separated of course.


e.g.

CREATE OR REPLACE FUNCTION somefunc(ids int[]) RETURNS boolean AS $sf$
DECLARE
   in_values varchar := '';  -- start non-NULL so the concatenation below works
   good int;
BEGIN
   FOR i IN 1 .. array_upper(ids, 1) LOOP  -- assumes a non-empty array
      in_values := in_values || ids[i] || ',';
   END LOOP;
   in_values := substring(in_values FROM 1 FOR 
character_length(in_values) - 1); -- this will chop off the last comma

   EXECUTE 'SELECT 1 FROM blah WHERE blah.id IN (' || in_values || ');' 
INTO good;

   IF good = 1 THEN
      RETURN TRUE;
   ELSE
      RETURN FALSE;
   END IF;
END;
$sf$ LANGUAGE plpgsql;

Or, it may be easier, given your situation, to simply use the 
array as the argument to an ANY or SOME expression.


Whoa, replied to this out of the General mailing list before I saw the 
other answers on the SQL list...  Sorry guys


--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] How to pass array of values to a pgplsql function

2006-07-18 Thread Curtis Scheer

Thanks, Erik. Another possible solution is the one below; here it is in case 
you are interested.

On 7/18/06, Tony Wasson [EMAIL PROTECTED] wrote:
 On 7/18/06, Curtis Scheer [EMAIL PROTECTED] wrote:
  Does anyone have any examples of how I would make a stored procedure 
  in plpgsql that would allow for passing a list or arrays of values 
  to be used in an sql IN clause?  Like so: select * from table where
field1 in (values).
 

Ahhh... Here's an example using Tom's recommended field=ANY (arrayvalue)
SQL.


CREATE TABLE ids
(
id  INTEGER
, PRIMARY KEY (id)
);

INSERT INTO ids VALUES (1);
INSERT INTO ids VALUES (2);
INSERT INTO ids VALUES (3);

CREATE OR REPLACE FUNCTION example_array_input(INT[]) RETURNS SETOF ids AS
$BODY$ DECLARE
in_clause ALIAS FOR $1;
clause  TEXT;
rec RECORD;
BEGIN
FOR rec IN SELECT id FROM ids WHERE id = ANY(in_clause)
LOOP
RETURN NEXT rec;
END LOOP;
-- final return
RETURN;
END
$BODY$ language plpgsql;

SELECT * FROM example_array_input('{1,2,4,5,6}'::INT[]);

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] UTF8 conversion differences from v8.1.3 to v8.1.4

2006-07-18 Thread Eric Faulhaber
Tom Lane wrote:
 Eric Faulhaber [EMAIL PROTECTED] writes:
 Can anyone help me understand why converting the NULL code point ()
 from UTF8 to ISO8859_1 is no longer legal in v8.1.4?
 
 Embedded nulls in text strings have never behaved sanely in PG ... or
 hadn't you noticed?  You'd have been better off passing an empty string,
 because that was effectively what you were getting.
 
   regards, tom lane

OK, but this particular issue is something quite new to the latest
version.  From utf8_and_iso8859_1.c (utf8_to_iso8859_1 function):

...

unsigned char *src = (unsigned char *) PG_GETARG_CSTRING(2);
unsigned char *dest = (unsigned char *) PG_GETARG_CSTRING(3);
int  len = PG_GETARG_INT32(4);
unsigned short c,
   c1;

...

while (len > 0)
{
c = *src;
if (c == 0)
report_invalid_encoding(PG_UTF8, (const char *) src, len);

...

This is new code in 8.1.4;  the 8.1.3 version did not screen explicitly
for null bytes.

This has some troubling implications for our runtime layer.  Since the
null byte represents a valid code point in both the database's encoding
(in this case LATIN1) and in the client's encoding (UNICODE/UTF8), I
cannot simply strip out null bytes before handing strings to PG;  they
may well have special meaning to application developers.

Converting varchar/text columns into bytea because they may require
embedded nulls is not an option either, since these are valid strings
and need to be treated as such in our runtime.

Am I stuck at 8.1.3 for the time being?  I'd be happy to create a patch
to resolve this for a future version, but if it is not considered a
defect, it doesn't make sense for me to do that.

Thanks,
Eric


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Just want to verify Posgresql works on DRDB

2006-07-18 Thread Xn Nooby
Hi, I just wanted to verify that there is nothing wrong with storing a PostgreSQL database on a DRBD partition. I read some old emails about there being problems, but the more recent ones seem to imply it should work fine.

I'm using two SUSE SLES 9 servers with Heartbeat to fail over some custom services and PostgreSQL. I am using type C communication on DRBD. There have been no modifications to PostgreSQL; I'm not sure if I'm supposed to force fsyncs. So far everything works fine, though I haven't done much testing on indexed tables.

Are there any extra precautions I should be taking? (I posted this on the DRBD list, and thought I'd better ask here, too; no responses there yet.) Thanks!


Re: [GENERAL] databases hidden in phppgadmin

2006-07-18 Thread Robert Treat
On Friday 14 July 2006 11:22, Tom Lane wrote:
 Darren [EMAIL PROTECTED] writes:
  When I login to phppgadmin, the list of databases does not include any
  databases that are owned by a 'group' (i.e. a role with NOLOGIN set).

 I'm betting that phppgadmin is using something like an inner join of
 pg_database and pg_user to produce its display.  As of PG 8.1 they need
 to be using pg_roles instead ... and if I were them, I'd make it a LEFT
 JOIN so that databases don't disappear completely if the owner can't be
 found.
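The query Tom describes might look roughly like this (a sketch only; the column list is abridged, and the join through pg_roles.oid is an assumption about how phppgadmin resolves owners):

```sql
SELECT d.datname, r.rolname AS owner
FROM pg_database d
LEFT JOIN pg_roles r ON r.oid = d.datdba;  -- LEFT JOIN: databases survive a missing owner
```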

You know Tom, if you're interested in hacking on phpPgAdmin, I'd be happy to 
send you a copy of Beginning PHP & PostgreSQL 8   
:-)

Darren,  this is fixed in ppa's CVS HEAD.  If you want a specific patch, drop 
me a note off list, though I am thinking of putting out a 4.0.2 release 
sometime this week.  

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Performance problem with query

2006-07-18 Thread Merlin Moncure

On 7/18/06, Christian Rengstl
[EMAIL PROTECTED] wrote:

Now, finally, after a long time I have the query plan for the whole filled table. 
I hope somebody can explain to me why it takes so much longer...

QUERY PLAN
--------------------------------------------------------------------------------
Result  (cost=0.09..254643.09 rows=8044000 width=39) (actual time=0.043..0.043 rows=0 loops=1)
  One-Time Filter: ((((split_part(($1)::text, '_'::text, 2))::smallint = 1) IS NOT TRUE) AND (((split_part(($2)::text, '_'::text, 2))::smallint = 22) IS NOT TRUE))
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.006 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (never executed)
Total runtime: 0.238 ms

Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=0.056..655772.273 rows=8044000 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.002..0.002 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.007 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (actual time=0.002..191672.344 rows=8044000 loops=1)
Total runtime: 62259544.896 ms

Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=1.245..1.245 rows=0 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 22)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (never executed)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (never executed)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.013..0.014 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.009..0.009 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (never executed)
Total runtime: 22.270 ms
(31 rows)



 Christian Rengstl [EMAIL PROTECTED] 13.07.06 8.37 Uhr 
Good morning list,

the following query takes about 15 to 20 minutes for around 2 million lines in 
the file myfile.txt, but with 8 million lines it takes around 5 hours and i 
just don't understand why there is such a huge discrepancy in performance.

COPY public.temp_table FROM 'myfile.txt' WITH DELIMITER '\t';

INSERT INTO public.master(pid,smid, val1, val2, chr)
SELECT pid, smid, val1, val12, CAST(split_part((SELECT chr from 
public.temp_table LIMIT 1), '_', 2) as int2)
   FROM public.temp_table;

INSERT INTO public.values(smid, pos, chr)
SELECT DISTINCT smid, pos, CAST(split_part((SELECT chr from 
public.temp_table LIMIT 1), '_', 2) as int2)
FROM public.temp_table;



what is this phrase doing exactly?
CAST(split_part((SELECT chr from public.temp_table LIMIT 1), '_', 2)

it looks fishy.
merlin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] function taking a long time. any options o speed it up.

2006-07-18 Thread Michael Fuhr
On Tue, Jul 18, 2006 at 08:11:40AM -0500, Rhys Stewart wrote:
 I created a function and it takes a long time to run. I was testing it
 as I wrote it, and up to the first drop table it takes just short of a
 second. However, when the rest of the code is added on, it takes
 upwards of 5 minutes. Not the most appropriate thing. Are there any
 tips out there for making functions go faster?

Find out what parts of the function are slow.  Have you used RAISE
to display the function's progress?  Have you executed any of the
queries by hand to see how fast they run?  Have you used EXPLAIN
ANALYZE to see if you could benefit from rewriting a query, adding
indexes, or tuning configuration settings?

The UPDATE statement with the ORs and regular expression matches
looks like it might be slow.  Is it?
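As a sketch of the RAISE technique (the function name and step labels are made up):

```sql
CREATE OR REPLACE FUNCTION my_slow_func() RETURNS void AS $$
BEGIN
    RAISE NOTICE 'update starting at %', timeofday();
    -- ... the suspect UPDATE with the ORs and regex matches goes here ...
    RAISE NOTICE 'update finished at %', timeofday();
END;
$$ LANGUAGE plpgsql;
```

Watching the NOTICE timestamps in psql narrows the slowness down to a single statement, which can then be run through EXPLAIN ANALYZE on its own.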

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] postmaster: StreamConnection: accept: No such device or address

2006-07-18 Thread Reid Thompson
Using a legacy installation ( 7.2.3 ).
Occasionally the system will reach a state where attempted psql
connection attempts fail, with the following error in the postgresql
log:
postmaster: StreamConnection: accept: No such device or address

Will also occasionally get no connection to server errors from ecpg
programs ( which I believe are probably due to the same issue ).

Can anyone shed some light on what could be causing this?  OS resource
limitation?  Known bug in ancient version of PG?

thanks,
reid

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] postmaster: StreamConnection: accept: No such device or address

2006-07-18 Thread Larry Rosenman
Reid Thompson wrote:
 Using a legacy installation ( 7.2.3 ).
 Occasionally the system will reach a state where attempted psql
 connection attempts fail, with the following error in the postgresql
 log:
 postmaster: StreamConnection: accept: No such device or address
 
 Will also occasionally get no connection to server errors from ecpg
 programs ( which I believe are probably due to the same issue ).
 
 Can anyone shed some light on what could be causing this?  OS resource
 limitation?  Known bug in ancient version of PG?
 
 thanks,
 reid


Is something cleaning /tmp of the unix socket?  Older versions didn't keep
the timestamp updated, and /tmp cleaning utilities could remove the socket. 


-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] apparent wraparound

2006-07-18 Thread Reece Hart
Greg, Florian, Joshua, Tom-

On Fri, 2006-07-14 at 17:02 -0700, Gregory S. Williamson wrote:
 You need to edit the postgresql.conf file and increase the
 max_fsm_pages and max_fsm_relations parameters and then restart
 postgres 

I did this and vacuumed.  I didn't need to up shmmax.  The problem's
disappeared (and, fortunately, my data are still intact).

Thanks everyone for your help.

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postmaster: StreamConnection: accept: No such device

2006-07-18 Thread Reid Thompson

Larry Rosenman wrote:

Reid Thompson wrote:
  

Using a legacy installation ( 7.2.3 ).
Occasionally the system will reach a state where attempted psql
connection attempts fail, with the following error in the postgresql
log:
postmaster: StreamConnection: accept: No such device or address

Will also occasionally get no connection to server errors from ecpg
programs ( which I believe are probably due to the same issue ).

Can anyone shed some light on what could be causing this?  OS resource
limitation?  Known bug in ancient version of PG?

thanks,
reid




Is something cleaning /tmp of the unix socket?  Older versions didn't keep
the 
timestamp updated, and /tmp cleaning utilities could remove the socket. 



  
The errors occur when psql is used with -h also, and the ecpg program is 
connecting from a different host.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] UTF8 conversion differences from v8.1.3 to v8.1.4

2006-07-18 Thread Tom Lane
Eric Faulhaber [EMAIL PROTECTED] writes:
 OK, but this particular issue is something quite new to the latest
 version.

Again, PG has never stored such data correctly.

 Am I stuck at 8.1.3 for the time being?  I'd be happy to create a patch
 to resolve this for a future version, but if it is not considered a
 defect, it doesn't make sense for me to do that.

It's not a defect ... or at least, it doesn't make sense to change it
unless you are willing to go through the entire system to make it able
to store null bytes in text.  We've looked at that in the past and
always concluded that it was completely impractical :-(

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] UTF8 conversion differences from v8.1.3 to v8.1.4

2006-07-18 Thread Eric Faulhaber
Tom Lane wrote:
 Eric Faulhaber [EMAIL PROTECTED] writes:
 OK, but this particular issue is something quite new to the latest
 version.
 
 Again, PG has never stored such data correctly.
 

Perhaps not, but it silently tolerated such data until this release, at
least at the encoding conversion level.  I don't know what happened to
the embedded nulls beyond that point (ignorance is bliss), but our JDBC
queries were working as expected...

BTW, any idea why we don't see this problem when issuing the same query
from psql?  I've set psql's encoding to UTF8 to try to trigger the
conversion when running against the LATIN1-encoded database.  It happily
returns the result we previously achieved with JDBC on 8.1.3.  Is psql
filtering out embedded nulls before the backend sees them?

 Am I stuck at 8.1.3 for the time being?  I'd be happy to create a patch
 to resolve this for a future version, but if it is not considered a
 defect, it doesn't make sense for me to do that.
 
 It's not a defect ... or at least, it doesn't make sense to change it
 unless you are willing to go through the entire system to make it able
 to store null bytes in text.  We've looked at that in the past and
 always concluded that it was completely impractical :-(
 
   regards, tom lane

:-( indeed, though I appreciate the dialog, Tom.  Sadly, this would not
be the first completely impractical task on my todo list ;-)

Thanks,
Eric


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: Antw: [GENERAL] Performance problem with query

2006-07-18 Thread Q


On 19/07/2006, at 4:24 AM, Christian Rengstl wrote:

now finally after a long time i have the query plan for the whole  
filled table. I hope somebody can explain me why it takes so much  
longer...



These explain analyze results don't appear to be from the queries you  
posted previously. For these results to mean anything you need to  
include the EXACT queries you used to generate them.


Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=0.056..655772.273 rows=8044000 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.002..0.002 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.007 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (actual time=0.002..191672.344 rows=8044000 loops=1)
Total runtime: 62259544.896 ms



This is the query you want to be interested in, the others took no  
time at all.


As a guess I would say the query is an INSERT INTO ... FROM  
SELECT ... WHERE (split_part(???, '_', 2))::smallint = 1 statement.  
The majority of the time appears to be taken up on the actual INSERT  
and not the SELECT part.


How many rows are in the target table and what indexes does it have?  
Does it have any triggers, check constraints, or rules applied to it?  
All these things can make the insert take longer as the number of  
rows you have already in the table increases.


More than likely you have a target table with a LOT of rows and a  
bunch of indexes on it and your disks are being thrashed because the  
indexes are not able to stay cached in RAM. At this point you should  
ensure your machine is not swapping to disk, and at the very least  
you should go through one of the many tuning guidelines available and  
ensure you have allocated the appropriate amount of memory to  
postgresql for your needs.


You may also want to consider dropping the indexes before you do the  
INSERT and recreate them afterwards.
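A hedged sketch of that approach, reusing the INSERT from earlier in the thread (the index name master_smid_idx is hypothetical):

```sql
DROP INDEX master_smid_idx;            -- drop before the bulk load

INSERT INTO public.master (pid, smid, val1, val2, chr)
    SELECT pid, smid, val1, val2,
           CAST(split_part((SELECT chr FROM public.temp_table LIMIT 1), '_', 2) AS int2)
    FROM public.temp_table;            -- runs without per-row index maintenance

CREATE INDEX master_smid_idx ON public.master (smid);
ANALYZE public.master;                 -- refresh planner statistics afterwards
```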




Christian Rengstl [EMAIL PROTECTED]  
13.07.06 8.37 Uhr 

Good morning list,

the following query takes about 15 to 20 minutes for around 2  
million lines in the file myfile.txt, but with 8 million lines it  
takes around 5 hours and i just don't understand why there is such  
a huge discrepancy in performance.


COPY public.temp_table FROM 'myfile.txt' WITH DELIMITER '\t';

INSERT INTO public.master(pid,smid, val1, val2, chr)
SELECT pid, smid, val1, val12, CAST(split_part((SELECT  
chr from public.temp_table LIMIT 1), '_', 2) as int2)

   FROM public.temp_table;

INSERT INTO public.values(smid, pos, chr)
SELECT DISTINCT smid, pos, CAST(split_part((SELECT chr from  
public.temp_table LIMIT 1), '_', 2) as int2)

FROM public.temp_table;

I came up with this query, because i wanted to use the COPY command  
to load huge files into the db, but i don't want to load all the  
columns contained in the file in only one table but copy some of  
them into one table and some in a second table. As i found out with  
logging, the data is loaded into temp_table within 15 minutes, but  
to transfer it from the temp_table toagain only something like 10  
minutes. Can it be that the cast takes up so much more time than  
when reading and transferring 2 million lines?



--
Seeya...Q

   -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
   Quinton Dolan - [EMAIL PROTECTED]
   Gold Coast, QLD, Australia
   Ph: +61 419 729 806




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] shortcircuit logic in plpsql

2006-07-18 Thread Joseph Shraibman

I'm trying to do this:

 IF TG_OP = \'INSERT\' OR (TG_OP = \'UPDATE\' AND OLD.status <> NEW.status) THEN


..but pg is complaining:

ERROR:  record old is not assigned yet
DETAIL:  The tuple structure of a not-yet-assigned record is indeterminate.
CONTEXT:  PL/pgSQL function set_dir_count line 4 at if

Does plpgsql not short circuit its logic?

=select version();
 version
-
 PostgreSQL 8.0.8 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 
20030222 (Red Hat Linux 3.2.2-5)

(1 row)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] shortcircuit logic in plpsql

2006-07-18 Thread Tom Lane
Joseph Shraibman jks@selectacast.net writes:
 Does plpgsql not short circuit its logic?

We make no guarantees about evaluation order.  In the particular 
case at hand, you're losing because plpgsql has to evaluate all
the variables that it's going to pass into the SQL engine for
that expression.  Break it into two statements ...
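For instance, the check could be restructured so that OLD is only referenced once TG_OP is known to be 'UPDATE' (a fragment only; it assumes a dollar-quoted function body with a boolean variable do_it declared):

```sql
IF TG_OP = 'INSERT' THEN
    do_it := true;
ELSIF TG_OP = 'UPDATE' THEN
    do_it := (OLD.status <> NEW.status);  -- OLD is only evaluated for UPDATEs
ELSE
    do_it := false;
END IF;

IF do_it THEN
    -- shared trigger work goes here
END IF;
```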

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] is there any dataware housing tools for postgresql

2006-07-18 Thread vamsee movva
Hello all,
could you please tell me if there are any data warehousing tools for PostgreSQL?
Thanks in advance,
vamsee