Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Richard Huxton

Michael Glaesemann wrote:


On Dec 12, 2006, at 16:43 , Richard Huxton wrote:


Anton wrote:

While without DESC query goes faster... But not so fast!
=# explain analyze SELECT DISTINCT ON (login_id) login_id,
collect_time AS dt FROM n_traffic ORDER BY login_id, collect_time;
QUERY PLAN
---------------------------------------------------------------------------
Unique  (cost=0.00..29843.08 rows=532 width=12) (actual
time=0.045..5146.768 rows=798 loops=1)
  ->  Index Scan using n_traffic_login_id_collect_time on n_traffic
(cost=0.00..27863.94 rows=791656 width=12) (actual
time=0.037..3682.853 rows=791656 loops=1)
Total runtime: 5158.735 ms
(3 rows)
Why? 798 rows is about 1000 times smaller than the entire n_traffic. And
why is an Index Scan used without DESC but not with DESC?


For the DESC version to use the index try "login_id DESC, collect_time
DESC" - so both are reversed.


I'm also not sure what this query is meant to do precisely. ORDER BY 
is usually the last stage in a query, so it might be applied *after* 
the DISTINCT ON.


My understanding is that DISTINCT ON requires the ORDER BY, so I'd be 
surprised if ORDER BY is applied after. (Though I'm happy to hear more 
about this.)


(goes away and tests) Ah, you're quite right. I was worried about 
ill-defined results, but it prevents you from doing that.
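
For reference, a minimal illustration of the restriction being discussed,
using the table from this thread (the exact error wording is from memory
and may differ between versions):

=# SELECT DISTINCT ON (login_id) login_id, collect_time AS dt
-# FROM n_traffic ORDER BY collect_time;
ERROR:  SELECT DISTINCT ON expressions must match initial ORDER BY expressions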


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread SunWuKung
Ok, I understand this difference now.
Knowing that, what is the standard way to copy a single database to
another server?

- can I use pg_dumpall to dump a single db? or
- do I have to use pg_dump, and is there a procedure to ensure that old and
new dbs are the same, like

1. create new db
2. check or create user
3. run pg_restore
4. do alter db

and is there a way to automate this? If there is could you point me
there?

If there isn't:
Do you think that automating the complete backup and restore of a
single database would be database specific, or could it work with
different databases? Don't you think that this is something that would
be generally useful?

Thank you for the help.
Balázs


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Anton

 Why? 798 rows is about 1000 times smaller than the entire n_traffic. And
 why is an Index Scan used without DESC but not with DESC?

For the DESC version to use the index try "login_id DESC, collect_time
DESC" - so both are reversed.


Yes, it helps!

But

If you want the most recent collect_time for each login I'd use
something like:
SELECT login_id, MAX(collect_time) AS most_recent
FROM n_traffic
GROUP BY login_id
ORDER BY login_id DESC, collect_time DESC

is not so good:
=# SELECT login_id, MAX(collect_time) AS most_recent
-# FROM n_traffic
-# GROUP BY login_id
-# ORDER BY login_id DESC, collect_time DESC;
ERROR:  column n_traffic.collect_time must appear in the GROUP BY
clause or be used in an aggregate function

If I correct this error (add collect_time to GROUP BY) I'll just get the
full table, sorted. And I tried not to use aggregate functions because
they force a full table scan...

So,
=# explain analyze SELECT DISTINCT ON (login_id) login_id,
collect_time AS dt FROM n_traffic ORDER BY login_id DESC, collect_time
DESC;

QUERY PLAN
---------------------------------------------------------------------------
Unique  (cost=0.00..29843.08 rows=532 width=12) (actual
time=60.656..9747.985 rows=796 loops=1)
 ->  Index Scan Backward using n_traffic_login_id_collect_time on
n_traffic  (cost=0.00..27863.94 rows=791656 width=12) (actual
time=60.645..8221.891 rows=789934 loops=1)
Total runtime: 9750.189 ms
(3 rows)

Indexes are used, which is good, but the speed is still not so good for
2 x PIII 1GHz + 1GB RAM + RAID5 on SCSI...
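
One approach that sometimes helps with this kind of "latest row per key"
query (just a sketch, assuming n_logins holds one row per login as the
foreign key suggests) is to drive it from the logins table, so each login
becomes a separate probe of the (login_id, collect_time) index:

SELECT l.login_id,
       (SELECT max(t.collect_time)
          FROM n_traffic t
         WHERE t.login_id = l.login_id) AS most_recent
FROM n_logins l;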

Anyhow, thank you!
--
engineer

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Richard Huxton

Anton wrote:

SELECT login_id, MAX(collect_time) AS most_recent
FROM n_traffic
GROUP BY login_id
ORDER BY login_id DESC, collect_time DESC

is not so good:
=# SELECT login_id, MAX(collect_time) AS most_recent
-# FROM n_traffic
-# GROUP BY login_id
-# ORDER BY login_id DESC, collect_time DESC;
ERROR:  column n_traffic.collect_time must appear in the GROUP BY
clause or be used in an aggregate function

If I correct this error (add collect_time to GROUP BY) I'll just get the
full table, sorted. And I tried not to use aggregate functions because
they force a full table scan...


Sorry - my typo. The order-by doesn't need collect_time of course.
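
Presumably the intended query was simply:

SELECT login_id, MAX(collect_time) AS most_recent
FROM n_traffic
GROUP BY login_id
ORDER BY login_id DESC;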

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] any api in libpq to get the number of connections

2006-12-12 Thread surabhi.ahuja
 Hi,
is there any API available in libpq
so that I can get the number of connections that exist to the database?
 
thanks,
regards
surabhi




[GENERAL] returning parameters from function

2006-12-12 Thread Rikard Pavelic

Hi!

Is there any plan to add implicit declaration of returning parameters 
for functions?

Something like:
create function list(in a int) returns setof implicit record as
$$
if a=1 then select * from table1;
else select * from table2;
end if;
$$
language sql;

which would then dynamically create output parameters depending on the
selected query inside.

I think this would be very helpful, and is one of the
most missing features from MSSQL for me.

One other feature that I miss, but not so much, is:
storing the entire function, not just the body
(because of this pg can't remember exactly how I wrote the
function, and if there are many parameters inside I get a looong first line)


Regards,
Rikard

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] any api in libpq to get the number of connections

2006-12-12 Thread Shoaib Mir

This might be of some help using a query like:

select count(*) as NumberOfConn from pg_stat_activity where datname =
'dbname';
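
If you want the total across all databases, or a per-database breakdown,
variations along these lines should also work:

select count(*) from pg_stat_activity;
select datname, count(*) as numberofconn from pg_stat_activity group by datname;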

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/12/06, surabhi.ahuja [EMAIL PROTECTED] wrote:


 Hi,
is there any API available in libpq
so that I can get the number of connections that exist to the database?

thanks,
regards
surabhi





Re: [GENERAL] out of memory error on 3 table join

2006-12-12 Thread Martijn van Oosterhout
On Mon, Dec 11, 2006 at 05:50:53PM -0600, Kirk Wythers wrote:
 met_data=# SELECT count(*) FROM climate, sites, solar WHERE  
 climate.id = sites.id AND solar.id = sites.id AND climate.year = 1999;
 --
 33061700
 (1 row)

snip

 psql(394) malloc: *** vm_allocate(size=396742656) failed (error code=3)
 psql(394) malloc: *** error: can't allocate region
 psql(394) malloc: *** set a breakpoint in szone_error to debug
 out of memory

Basically, it failed to allocate nearly 400MB of memory to store the
results. The question is: do you want to store all 400MB at once?

If you don't, use a cursor.
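
A rough sketch of the cursor approach, reusing the join quoted above
(the column list and fetch size here are only placeholders):

BEGIN;
DECLARE met_cur CURSOR FOR
    SELECT climate.id, climate.year
    FROM climate, sites, solar
    WHERE climate.id = sites.id AND solar.id = sites.id
      AND climate.year = 1999;
FETCH 1000 FROM met_cur;   -- repeat until no more rows come back
CLOSE met_cur;
COMMIT;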

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Metadata from NEW and OLD constructs?

2006-12-12 Thread Martijn van Oosterhout
On Mon, Dec 11, 2006 at 06:19:33PM -0700, Lenorovitz, Joel wrote:
 Greetings,
 
 I was wondering if it's possible to get any of the metadata from the NEW
 and OLD constructs in a trigger or view rule?  Specifically, I'd like to
 get the column name or identifier anywhere the new record differs from
 the old record (i.e. NEW.column_X  OLD.column_X).  Any advice would be
 greatly appreciated.

If your trigger is written in C, definitely. If the trigger is in
pl/pgsql, no. For other languages I don't know. Which language are you
using?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Martijn van Oosterhout
On Tue, Dec 12, 2006 at 10:30:07AM +0100, Rikard Pavelic wrote:
 Hi!
 
 Is there any plan to add implicit declaration of returning parameters 
 for functions?
 Something like:
 create function list(in a int) returns setof implicit record as

snip

Just setof record will do. As for the implicit declaration of
variable names, that's harder. I don't know if you can do that without
making things very ambiguous.

 One other feature that I miss, but no so much is:
 storing the entire function, not just the body
 (because of this pg cant remember exactly how I wrote the
 function, and if there are many parameters inside I get a looong first line)

You should get out what you put in, but it won't remember spacing
outside the function body because the lexer eats that.

I have a file with the function as I want it and edit that. That way I
can have the function exactly how I like it.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Shoaib Mir

On Tue, Dec 12, 2006 at 10:30:07AM +0100, Rikard Pavelic wrote:

Hi!

Is there any plan to add implicit declaration of returning parameters
for functions?
Something like:
create function list(in a int) returns setof implicit record as


You can use a SETOF function as:

CREATE OR REPLACE FUNCTION get_test_data (numeric)
   RETURNS SETOF RECORD AS
$$
DECLARE
   temp_rec RECORD;
BEGIN
   FOR temp_rec IN (SELECT ename FROM emp WHERE sal > $1)
   LOOP
   RETURN NEXT temp_rec;
   END LOOP;
   RETURN;
END;
$$ LANGUAGE plpgsql;

---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)


[GENERAL] server speed question

2006-12-12 Thread Tomi N/A

I'm trying to optimize performance on my development laptop, one of
the main bottlenecks being a 4200 rpm disk.
It's a fairly good machine (Pentium M, 1.73GHz, 1GB RAM), but pg
doesn't seem to use the processing power: the disk works all of the
time.

I'm working with a database with a couple of hundred thousand rows
and a couple of redundant tables with dozens of columns.
The operations that take ages are cache updates. I'd like pg to use
more memory and CPU and work as much as possible with no disk writes:
data integrity isn't a very high priority here.

Aside from setting shared_buffers to 30200 (close to 250 MB), fsync to
off and autovacuum to on, and setting a bigger max shared memory space
(echo 25600 > /proc/sys/kernel/shmmax), I'm lost as to what else
to do.
My disk doesn't stop working when I update several thousand records in
the redundant table and it's kind of surprising, given that the entire
database could (theoretically) fit into the 250 MB shared memory
space.

Any hints?
Cheers,
t.n.a.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Database Synchronization

2006-12-12 Thread Rohit Prakash Khare
I developed an app. using VB.NET and MS Access. The Access MDB has 27 tables. I 
want to develop the reporting module using VB.NET and PostgreSQL. I want to 
know whether there are any tools available that can migrate existing Access 
database to PostgreSQL and later synchronize changes in Access database to its 
PostgreSQL copy on scheduled basis.


 


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] where can i find posges sql include directory...FC5.installing rpms

2006-12-12 Thread Faqeer ALI


I have installed the postgres which comes with the distribution of Fedora Core
5.

But when I compiled opennms it needs to get the include directory of
postgres:

error: cannot find postgres.h

Can anybody please help me?










  ALi





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] where can i find posges sql include directory...FC5.installing rpms

2006-12-12 Thread Shoaib Mir

You can get the include files as part of the source (
http://www.postgresql.org/ftp/source/).


Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/12/06, Faqeer ALI [EMAIL PROTECTED] wrote:



i have installed post gres which comes with the distribution of Fedora
Core
5..

but when i compiled opennms it needs to get the include directory of
postgres..
error cannot find postgres.h..

Any body plz help me...










   ALi



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



Re: [GENERAL] where can i find posges sql include directory...FC5.installing

2006-12-12 Thread Richard Huxton

Faqeer ALI wrote:


i have installed post gres which comes with the distribution of Fedora 
Core 5..


but when i compiled opennms it needs to get the include directory of 
postgres..

error cannot find postgres.h..


Firstly try "locate postgres.h" and see if that finds it. If not, you
probably need to install the devel package for postgresql.


You'll need to do this to get headers for most packages.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Rikard Pavelic

Shoaib Mir wrote:

You can use a SETOF function as:

CREATE OR REPLACE FUNCTION get_test_data (numeric)
RETURNS SETOF RECORD AS
$$
DECLARE
temp_rec RECORD;
BEGIN
FOR temp_rec IN (SELECT ename FROM emp WHERE sal > $1)
LOOP
RETURN NEXT temp_rec;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql;


This doesn't work. ;(
I get ERROR: a column definition list is required for functions 
returning record

SQL state: 42601
when running this function

Regards,
Rikard

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Matthias . Pitzl
You have to call the function in the following form:

SELECT * FROM get_test_data(1) AS (field1 type, field2 type, ...)

In other words, you have to tell the database how the data returned by the
function has to be interpreted.
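
With the get_test_data() example from earlier in the thread that would be
something like (the result column name and type here are assumptions, based
on the emp table it selects from):

SELECT * FROM get_test_data(1000) AS t (ename varchar);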

Greetings,
Matthias

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 Rikard Pavelic
 Sent: Tuesday, December 12, 2006 3:06 PM
 To: Shoaib Mir; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] returning parameters from function
 
 This doesn't work. ;(
 I get ERROR: a column definition list is required for functions 
 returning record
 SQL state: 42601
 when running this function
 
 Regards,
 Rikard
 
 ---(end of 
 broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] 
 so that your
message can get through to the mailing list cleanly
 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Rikard Pavelic

Martijn van Oosterhout wrote:

snip

Just setof record will do. As for the implicit declaration of
variable names, that's harder. I don't know if you can do that without
making things very ambiguous.
  

I know setof record will do if I explicitly name OUT parameters.
But I want Postgres to figure out by itself what parameters to return
as out parameters.


I don't see why it would make things very ambiguous.
Postgres now allows only one function with the same set of input parameters.
So if you have function1(in int, out varchar), of course you can't have
another function1(in int).
What it could lead to is problems at run time that were not seen when
designing the function.



You should get out what you put in, but it won't remember spacing
outside the function body because the lexer eats that.

I have a file with the function as I want it and edit that. That way I
can have the function exactly how I like it.

Have a nice day,
  


Yeah, I understand pg needs only the body to execute the function, and
having another field in pg_catalog.pg_proc besides prosrc maybe seems like
wasting space, but it would make my life as a developer a lot easier.

Regards,
Rikard

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Richard Huxton

Rikard Pavelic wrote:

Martijn van Oosterhout wrote:

snip

Just setof record will do. As for the implicit declaration of
variable names, that's harder. I don't know if you can do that without
making things very ambiguous.
  

I know setof record will do if I explicitly name OUT parameters.
But I want Postgre to figure out for himself what parameters to return 
as out parameters.


I don't see why it would make things very ambiguous.


Think about what happens if you use such a function in a join. How does 
the planner know what to do? What about a prepared query?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] (Perl) script to set up an instance for regression tests

2006-12-12 Thread Florian Weimer
For regression tests, I'd like to automatically set up a fresh
PostgreSQL instance.  Has anybody automated the task (initdb, setting
a password, choosing a port at random, starting the server, and after
running the tests, stopping the server and deleting all the
directories)?  I know, it's a straightforward Perl script, but perhaps
someone else has already written it. 8-)

And:

$ fakeroot /usr/lib/postgresql/8.1/bin/postgres -D . postgres
root execution of the PostgreSQL server is not permitted.
[...]

This is a major problem when autobuilding Debian packages. 8-(

-- 
Florian Weimer[EMAIL PROTECTED]
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread Brandon Aiken
The standard method is to use a pg_dumpall for the initial copy and whenever 
globals or the schema changes, and use pg_dump when you just want to get the 
data from a single database.

Globals and schema should not change very often.  In fact, they should be fixed 
except between software revisions.  If they aren't, you might be doing 
something wrong.

What you can do is pg_dumpall --schema-only or pg_dumpall --globals-only (I'm 
not sure if one of those is a subset of the other) and then use pg_dump 
--data-only for the databases you want data for.  I'm not sure offhand which 
options you'll need to use to be sure of getting what you want.  Double check 
the docs and be sure to test it a few times.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of SunWuKung
Sent: Tuesday, December 12, 2006 3:19 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] search_path when restoring to new db

Ok, I understand this difference now.
Knowing that, what is the standard way to copy a single database to
another server?

- can I use pg_dumpall to dump a single db? - or
- I have to use pg_dump and there is a procedure to ensure that old and
new dbs are the same, like

1. create new db
2. check or create user
3. run pg_restore
4. do alter db

and is there a way to automate this? If there is could you point me
there?

If there isn't:
Do you think that automating the complete backup and restore of a
single database would be database specific or it could work with
different databases? Don't you think that this is something that would
be generally useful?

Thank you for the help.
Balázs


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] PostgreSQL 8.2 : IPO link warning with ICC 9.1.045

2006-12-12 Thread DANTE Alexandra

Hello List,

I'm back with my questions on compilation of PostgreSQL 8.2. :-)
I work with an IA-64 server with Red Hat Enterprise Linux 4 AS update 2 
on which the Intel compiler icc 9.1.045 is available.

I try to build PostgreSQL 8.2 with this compiler.

To do that :
- I generated RPM from the postgresql-8.2.0-2PGDG.src.rpm with the 
command rpmbuild -ba postgresql-8.2.spec
- then from the directory BUILD/postgresql-8.2.0/src/test/regress/, I
launched "make check" => /*All 103 tests passed*/


But when I checked the log of the "rpmbuild -ba" command, I found
this warning:
/opt/intel/cc/9.1.045/bin/icc -O2 -pipe -override_limits -mp1 
-fno-strict-aliasing  -fpic -shared -Wl,-soname,libpq.so.5 
-Wl,--version-script=exports.list  fe-auth.o fe-connect.o fe-exec.o 
fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o 
pqexpbuffer.o pqsignal.o fe-secure.o md5.o ip.o wchar.o encnames.o 
noblock.o pgstrcasecmp.o thread.o strlcpy.o  -L../../../src/port 
-L/usr/lib -lssl -lcrypto -lkrb5 -lcrypt -o libpq.so.5.0

ranlib libpq.a
*IPO link: Warning unknown option '--version-script=exports.list'.*

I must admit that I am quite a newbie with the Intel compiler icc, so
could someone tell me if this warning is dangerous for the use of
PostgreSQL?

I have done some searches on Google but didn't find a good explanation.

Thank you in advance.
Regards,
Alexandra DANTE


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread Richard Huxton

Brandon Aiken wrote:

PostgreSQL is simply very granular about what it lets you dump.


True enough, but I'd think you could make a good argument that dumping a 
database should dump any ALTER commands that are attached to it.


Users are shared between databases, so I can see it doesn't necessarily 
make sense to dump them. However, an ALTER DATABASE is meaningless 
without the database concerned.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Restore database from files (not dump files)?

2006-12-12 Thread Tom Lane
wheel [EMAIL PROTECTED] writes:
 I guess the reason is that the pg system database etc are lodged in the 
 hive under \base\, and the system db contains the metadata about the db 
 to be restored? 

No, the reason why selective restore doesn't work is that all tables in
a database cluster depend on the same commit log (pg_clog) to determine
which rows are valid and which are not.  What you were trying to do is
described in the manual (with a warning not to do it) under
backup/restore procedures:
http://www.postgresql.org/docs/8.2/static/backup-file.html

Also, if you would like to know more about the physical layout of the
database contents, it's described here:
http://www.postgresql.org/docs/8.2/static/storage.html

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Unrecognized time zone name error.

2006-12-12 Thread Brad Nicholson
This seems odd.  Any idea what's going on here?

template1=# SET TimeZone TO 'GMT';
ERROR:  unrecognized time zone name: GMT
template1=# SELECT version();
   version
-
 PostgreSQL 8.1.5 on powerpc-ibm-aix5.3.0.0, compiled by GCC gcc (GCC)
3.3.2
(1 row)

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] forcing compression of text field

2006-12-12 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/11/06 10:18, Jonathan Ellis wrote:
 I have a table of log messages.  They are mostly in the 100-200
 character length, which apparently isn't large enough for PG to want
 to compress it (length == octet_length).  I really need to save disk
 space.  I can store it as a bytea and compress it manually (zlib level
 1 compression gives about 50% savings), but is there a way to force
 pg's own compression before I resort to this?

What can be compressed?  Trailing whitespace or repeating substrings?

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFfsnvS9HxQb37XmcRAuaeAKDRv9IUDH4zenHoVQPST0vFbpHLkwCdED9k
rvvOkNCRx/J8EkGtBF2Bs9Y=
=XANI
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/12/06 01:28, Anton wrote:
 Hi. With this table (about 800 000 rows):
 
 =# \d n_traffic
 Table public.n_traffic
Column|Type |  Modifiers
 --+-+--
 login_id | integer | not null
 traftype_id  | integer | not null
 collect_time | timestamp without time zone | not null default now()
 bytes_in | bigint  | not null default (0)::bigint
 bytes_out| bigint  | not null default (0)::bigint
 Indexes:
n_traffic_collect_time btree (collect_time)
n_traffic_login_id btree (login_id)
n_traffic_login_id_collect_time btree (login_id, collect_time)
 Foreign-key constraints:
n_traffic_login_id_fkey FOREIGN KEY (login_id) REFERENCES
 n_logins(login_id) ON UPDATE CASCADE
n_traffic_traftype_id_fkey FOREIGN KEY (traftype_id) REFERENCES
 n_traftypes(traftype_id) ON UPDATE CASCADE

Why do you have indexes on both LOGIN_ID *and* LOGIN_ID + COLLECT_TIME?

ISTM that you can drop the LOGIN_ID index.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFfsqwS9HxQb37XmcRAssSAKDYkQc0VlF7nuEcuMbe6Eub9T++egCgwNec
2ZT0LmH/iDaotUyKi/4hQjg=
=5y2t
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Rikard Pavelic

Richard Huxton wrote:

Rikard Pavelic wrote:

I know setof record will do if I explicitly name OUT parameters.
But I want Postgre to figure out for himself what parameters to 
return as out parameters.


I don't see why it would make things very ambiguous.


Think about what happens if you use such a function in a join. How 
does the planner know what to do? What about a prepared query?


Good point.

Well, so much about that ;(

Regards,
Rikard

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] where can i find posges sql include directory...FC5.installing rpms

2006-12-12 Thread Tom Lane
Faqeer ALI [EMAIL PROTECTED] writes:
 i have installed post gres which comes with the distribution of Fedora Core 
 5..
 but when i compiled opennms it needs to get the include directory of 
 postgres..
 error cannot find postgres.h..

Did you install postgresql-devel RPM?  It's in there:

...
/usr/include/pgsql/server/port/win32/sys
/usr/include/pgsql/server/port/win32/sys/socket.h
/usr/include/pgsql/server/port/win32/sys/wait.h
/usr/include/pgsql/server/postgres.h
/usr/include/pgsql/server/postgres_ext.h
/usr/include/pgsql/server/postgres_fe.h
/usr/include/pgsql/server/regex
/usr/include/pgsql/server/regex/regcustom.h
...

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] PostgreSQL Developer Full-time Position

2006-12-12 Thread Andrew Chernow

Our Company:
eSilo is a privately held Storage Service Provider, providing offsite 
backup and storage management solutions to businesses of all sizes. 
eSilo built its backup technology in house and continues to expand and 
innovate.  For more information about eSilo, please visit our website: 
http://www.esilo.com/ .



Job Description:
eSilo is looking for a full-time PostgreSQL DBA/Developer with strong C 
skills.  This is an onsite position.  Job duties would include:


1. Design, develop and maintain a 3TB+ PostgreSQL database on linux. 
Database will be the heart of a new storage management system for eSilo.


2. Develop BASH and C programs for automation and application logic.

3. Provide test suites, documentation and time lines

4. Internal PHP-based system for management and statistics


Job Skills:
A high level of general administration and programming experience is 
required for this position.  5+ years of experience is required for all 
skills listed below, unless otherwise specified.  In regards to 
education, a bachelor's degree or relevant field experience will suffice.


1. Linux Administration (via SSH)

2. C/C++ Programming

3. SQL Programming (strong PostgreSQL)

4. PHP, HTML, JavaScript (2+ years)

5. BASH, sed, awk, grep, etc... (PERL a plus)

6. General knowledge of other databases a plus (Oracle, MySQL, MSSQL, DB2)

7. Creativity and Innovation

8. Socket programming a plus


--
Andrew Chernow
Chief Technology Officer
eSilo, LLC.
1530 Cypress Drive, Suite H
Jupiter, FL. 33469
561.512.1298 Cell
561.747.6880 Office
800.747.9255 Toll Free
561.747.6488 Fax
[EMAIL PROTECTED]

-- every bit counts

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] (Perl) script to set up an instance for regression tests

2006-12-12 Thread Tom Lane
Florian Weimer [EMAIL PROTECTED] writes:
 For regression tests, I'd like to automatically set up a fresh
 PostgreSQL instance.  Has anybody automated the task (initdb, setting
 a password, choosing a port at random, starting the server, and after
 running the tests, stopping the server and deleting all the
 directories)?

make check?

 $ fakeroot /usr/lib/postgresql/8.1/bin/postgres -D . postgres
 root execution of the PostgreSQL server is not permitted.
 This is a major problem when autobuilding Debian packages. 8-(

Surely you don't build random packages as root.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Shoaib Mir

You can use it as:

SELECT * FROM get_test_data(1000) AS t1 (emp_name VARCHAR);

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/12/06, Rikard Pavelic [EMAIL PROTECTED] wrote:


Shoaib Mir wrote:
 You can use a SETOF function as:

 CREATE OR REPLACE FUNCTION get_test_data (numeric)
 RETURNS SETOF RECORD AS
 $$
 DECLARE
 temp_rec RECORD;
 BEGIN
 FOR temp_rec IN (SELECT ename FROM emp WHERE sal > $1)
 LOOP
 RETURN NEXT temp_rec;
 END LOOP;
 RETURN;
 END;
 $$ LANGUAGE plpgsql;

This doesn't work. ;(
I get ERROR: a column definition list is required for functions
returning record
SQL state: 42601
when running this function

Regards,
Rikard



Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread Tom Lane
Richard Huxton dev@archonet.com writes:
 True enough, but I'd think you could make a good argument that dumping a 
 database should dump any ALTER commands that are attached to it.

Let's suppose pg_dump did that, so "pg_dump foo > foo.dump" includes
commands like

ALTER DATABASE foo SET ...

Now what happens when you try to load the script in database bar?
Nothing very good ...

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Running initdb from another process

2006-12-12 Thread Greg Fairbanks
Hi,

 

I am looking at trying to integrate PostgreSQL into our software.
Basically, I want to have our software spawn postgres so the user doesn't
have to concern themselves with managing the database. I am using the
Windows calls LogonUser and CreateProcessAsUser to start initdb as an
unprivileged user. However, I am running into trouble on Windows Vista.
Initdb will start up, but when it starts postgres.exe, postgres.exe seems
to get hung up on the CreateFileMapping call while setting up shared
memory. This only happens on Vista. It works perfectly on Windows
NT4/2000/XP/2003. I was wondering if anyone had any experience with this
type of situation and could lend any help.

 

Thanks,

Greg



Re: [GENERAL] Unrecognized time zone name error.

2006-12-12 Thread Tom Lane
Brad Nicholson [EMAIL PROTECTED] writes:
 This seems odd.  Any idea what's going on here?

 template1=# SET TimeZone TO 'GMT';
 ERROR:  unrecognized time zone name: GMT

Worksforme.  Perhaps you are missing the /usr/share/pgsql/timezone/
directory (your path might vary)?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread Richard Huxton

Tom Lane wrote:

Richard Huxton dev@archonet.com writes:
True enough, but I'd think you could make a good argument that dumping a 
database should dump any ALTER commands that are attached to it.


Let's suppose pg_dump did that, so pg_dump foo foo.dump includes
commands like

ALTER DATABASE foo SET ...

Now what happens when you try to load the script in database bar?
Nothing very good ...


Which would point to the need to allow ALTER DATABASE 
current_database() or similar I'd say.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] PostgreSQL Developer Full-time Position

2006-12-12 Thread Merlin Moncure

On 12/12/06, Andrew Chernow [EMAIL PROTECTED] wrote:

Our Company:
eSilo is a privately held Storage Service Provider, providing offsite
backup and storage management solutions to businesses of all sizes.
eSilo built its backup technology in house and continues to expand and
innovate.  For more information about eSilo, please visit our website:
http://www.esilo.com/ .


Just to let everybody know, this is a job that I am vacating for
personal reasons not related to work.  In fact, eSilo is a fantastic,
growing company to work for with some great people (in fact, the best
I have ever worked for).  I am personally vouching for them and would
advise any PostgreSQL developer with strong SQL and C skills to give
this opening strong consideration, especially if the thought of
designing complex systems excites you.  eSilo is no dot com, they have
built a solid network services business from the ground up and have
some amazing technology.

Also, eSilo is located in beautiful South Florida, one of the nicest
places in the country to live...and becoming affordable, now that the
real estate craziness has died down.

merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Erik Jones

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/12/06 01:28, Anton wrote:
  

Hi. With this table (about 800 000 rows):

=# \d n_traffic
Table public.n_traffic
   Column|Type |  Modifiers
--+-+--
login_id | integer | not null
traftype_id  | integer | not null
collect_time | timestamp without time zone | not null default now()
bytes_in | bigint  | not null default (0)::bigint
bytes_out| bigint  | not null default (0)::bigint
Indexes:
   n_traffic_collect_time btree (collect_time)
   n_traffic_login_id btree (login_id)
   n_traffic_login_id_collect_time btree (login_id, collect_time)
Foreign-key constraints:
   n_traffic_login_id_fkey FOREIGN KEY (login_id) REFERENCES
n_logins(login_id) ON UPDATE CASCADE
   n_traffic_traftype_id_fkey FOREIGN KEY (traftype_id) REFERENCES
n_traftypes(traftype_id) ON UPDATE CASCADE



Why do you have indexes on both LOGIN_ID *and* LOGIN_ID + COLLECT_TIME?

ISTM that you can drop the LOGIN_ID index.
  
Hmm...   Will queries that use only login_id and not collect_time use 
the (login_id, collect_time) index?


--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Are updateable view as a linked table in ms-access a bad idea?

2006-12-12 Thread Richard Broersma Jr
I expect what I am reporting is already a known problem, but since I haven't 
heard anyone else
report it I thought I should at least mention it.  I am also looking for 
suggestions from anyone
about how they prefer to bind forms to multiple hierarchical tables in access.
I tried to wrap
them in a view but it doesn't work all that well as you will see below.

I have implemented update rules on a test view in postgresql.  Whenever I
insert/update/delete from the view using ONLY THE SURROGATE KEY in the where
clause, all data modifications work perfectly.  However, if you reference
actual column values in the where clause, inconsistent updates occur.

For example vwife is the view.

postgres=# select * from vwife;
 id |  name   | dresssize
---+-+---
  3 | dodie   |13
  4 | heather |10
  2 | katie   |11
(3 rows)

postgres=# update vwife 
   set name = 'Katheryn', 
   dresssize = 12 
   where (id,name,dresssize)=(2,'katie',11);
UPDATE 0

postgres=# select * from vwife;
 id |   name   | dresssize
+--+---
  3 | dodie|13
  4 | heather  |10
  2 | Katheryn |11
    --  the UPDATE 0 above is false: the row was in fact changed

Unfortunately when connecting to an update-able view as a linked table in 
MS-access, access
identifies which tuple to update by specifying every element in the view in the 
where clause. 
This obviously makes an update-able view unusable as a linked table in 
MS-access.

Hiroshi Inoue on the ODBC list suggested some work arounds that really helped, 
but pointed out
that it was not a perfect solution and other inconsistent changes may occur.
http://archives.postgresql.org/pgsql-odbc/2006-12/msg00036.php

View and rule definitions are found here:
http://archives.postgresql.org/pgsql-odbc/2006-12/msg00034.php

Is this a known bug?  Does anyone have suggestions for how I can get around
this problem?

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] resetting sequence to cur max value

2006-12-12 Thread developer
I am migrating a system from hsqldb to postgresql.  I have a bunch of
installs of this system live so moving the data is a headache.  I was
using identities in hsqldb and now I am using sequences.  I was able to
move all my data over however I am having an issue with the sequences.  I
default them all to start at a certain number; this works great for a
fresh install.

However when working with existing data the default is < the current
number.  And I have several installs and they are all different numbers.

Is there a way to set it up so it knows to skip past existing ids?

I would prefer an automated solution, but I was even trying something like
this:

ALTER SEQUENCE seq_address restart with (select max(id) from address)

I guess that doesn't work because it wants a constant.

Any suggestions?
thanks


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] date comparisons

2006-12-12 Thread Belinda M. Giardine
This should be simple but I am missing something.  I am trying to extract
all records entered after a given date.  The table has a field
date_entered which is a timestamp.  In this particular case I am not
worried about time.

I have tried:
select id from main_table where
date_entered > to_timestamp('January 2006', 'Month YYYY');

select id from main_table where
(to_timestamp('January 2006', 'Month YYYY'), now()) overlaps (date_entered,
date_entered);

Both of these return all the rows in the table.  Half of the rows are
dated 2000-06-22 12:00:00.

PostgreSQL version 8.1.4

What am I missing?
Belinda

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Brandon Aiken
If you have, say, an index(x, y) then that index will often double as an
index(x).  It will generally not double as an index(y).

I'm not sure if that's how all RDBMSs work, but I'm pretty sure that's
how Oracle works.  It never surprises me when PostgreSQL mimics Oracle.
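
A quick way to check this against the table in question (plan output not
shown here, since it depends on your data and statistics):

EXPLAIN SELECT * FROM n_traffic WHERE login_id = 42;
EXPLAIN SELECT * FROM n_traffic WHERE collect_time > now() - interval '1 day';

The first query can use the (login_id, collect_time) index through its
leading column; the second generally cannot.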

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik Jones
Sent: Tuesday, December 12, 2006 11:33 AM
To: Ron Johnson
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Why DISTINCT ... DESC is slow?

Ron Johnson wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 12/12/06 01:28, Anton wrote:
   
 Hi. With this table (about 800 000 rows):

 =# \d n_traffic
 Table public.n_traffic
Column|Type |  Modifiers

--+-+---
---
 login_id | integer | not null
 traftype_id  | integer | not null
 collect_time | timestamp without time zone | not null default now()
 bytes_in | bigint  | not null default
(0)::bigint
 bytes_out| bigint  | not null default
(0)::bigint
 Indexes:
n_traffic_collect_time btree (collect_time)
n_traffic_login_id btree (login_id)
n_traffic_login_id_collect_time btree (login_id, collect_time)
 Foreign-key constraints:
n_traffic_login_id_fkey FOREIGN KEY (login_id) REFERENCES
 n_logins(login_id) ON UPDATE CASCADE
n_traffic_traftype_id_fkey FOREIGN KEY (traftype_id) REFERENCES
 n_traftypes(traftype_id) ON UPDATE CASCADE
 

 Why do you have indexes on both LOGIN_ID *and* LOGIN_ID +
COLLECT_TIME?

 ISTM that you can drop the LOGIN_ID index.
   
Hmm...   Will queries that use only login_id and not collect_time use 
the (login_id, collect_time) index?

-- 
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Running initdb from another process

2006-12-12 Thread Tony Caduto

Greg Fairbanks wrote:


Hi,

I am looking at trying to integrate PostgreSQL into our software.
Basically, I want to have our software spawn postgres so the user doesn't
have to concern themselves with managing the database. I am using the
Windows calls LogonUser and CreateProcessAsUser to start initdb as an
unprivileged user. However, I am running into trouble on Windows Vista.
Initdb will start up, but when it starts postgres.exe, postgres.exe seems
to get hung up on the CreateFileMapping call while setting up shared
memory. This only happens on Vista. It works perfectly on Windows
NT4/2000/XP/2003. I was wondering if anyone had any experience with this
type of situation and could lend any help.


I created an Inno Setup installer (http://www.jrsoftware.org/isinfo.php) that does
all that; I don't know if it will work on Vista, but it should.
You can grab it here: 
http://www.amsoftwaredesign.com/downloads/pg_installer_setup.zip

Full source included.

I also use those APIs, but I do it all from a native win32 DLL created
with Delphi. You don't have to concern yourself with shared memory, so it
may work for you.

The setup completely installs postgresql and starts it, all you have to 
do is add the files to the setup for your application.


Good luck,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] date comparisons

2006-12-12 Thread Erik Jones

Belinda M. Giardine wrote:

This should be simple but I am missing something.  I am trying to extract
all records entered after a given date.  The table has a field
date_entered which is a timestamp.  In this particular case I am not
worried about time.

I have tried:
select id from main_table where
date_entered > to_timestamp('January 2006', 'Month YYYY');

select id from main_table where
(to_timestamp('January 2006', 'Month YYYY'), now()) overlaps (date_entered,
date_entered);

Both of these return all the rows in the table.  Half of the rows are
dated 2000-06-22 12:00:00.

PostgreSQL version 8.1.4
  

I think people often make date comparisons too difficult in postgres.

select id
from main_table
where date_entered >= '2006-01-01';

There are built in conversions for formatted date strings.
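
Equivalently, with an explicit cast if you prefer to spell out the type:

select id from main_table where date_entered >= date '2006-01-01';
select id from main_table where date_entered >= '2006-01-01'::timestamp;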

--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] a question on SQL

2006-12-12 Thread Tomi N/A

Don't really know where to ask this...the general mailing list sounds
like the closest.

Let's say I have three tables: owner, factory and product with a 1:N
relationship at each step.
Assuming that a product has a production date, how would you go about
returning a factory for every owner, where the returned factory is the
factory that produced the oldest product of its owner?

I'm perplexed by the simplicity of the task and the fact that what
solutions I did come up with fail to perform very well (a subselect
with ORDER BY MIN(production_date) LIMIT 1) or are rather complex
(using temp tables).

Hints?

TIA,
t.n.a.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Unrecognized time zone name error.

2006-12-12 Thread Brad Nicholson
On Tue, 2006-12-12 at 11:13 -0500, Tom Lane wrote:
 Brad Nicholson [EMAIL PROTECTED] writes:
  This seems odd.  Any idea what's going on here?
 
  template1=# SET TimeZone TO 'GMT';
  ERROR:  unrecognized time zone name: GMT
 
 Worksforme.  Perhaps you are missing the /usr/share/pgsql/timezone/
 directory (your path might vary)?
 
   regards, tom lane

So it's documented in the archives, the GMT file was missing from the
directory you mentioned.  Thx.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] resetting sequence to cur max value

2006-12-12 Thread Marc Mamin
I would create a small function with the sequence_name and
reference_table as parameters

(not tested)

...
DECLARE
    rec record;

BEGIN
    FOR rec IN EXECUTE 'SELECT max(id) AS m FROM ' || $2
    LOOP
        EXECUTE 'ALTER SEQUENCE ' || $1 || ' restart with ' || rec.m;
    END LOOP;
    RETURN 0;
END;
...


Cheers,

marc 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Tuesday, December 12, 2006 5:39 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] resetting sequence to cur max value

I am migrating a system from hsqldb to postgresql.  I have a bunch of
installs of this system live so moving the data is a headache.  I was
using identities in hsqldb and now I am using sequences.  I was able to
move all my data over however I am having an issue with the sequences.
I default them all to start at a certain number; this works great for a
fresh install.

However when working with existing data the default is < the current
number.  And I have several installs and they are all different numbers.

Is there a way to set it up so it knows to skip past existing ids?

I would rather an automated solution  but I was even trying something
like
this:

ALTER SEQUENCE seq_address restart with (select max(id) from address)

I guess that doesn't work because it wants a constant.

Any suggestions?
thanks


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] date comparisons

2006-12-12 Thread Belinda M. Giardine


On Tue, 12 Dec 2006, Erik Jones wrote:

 Belinda M. Giardine wrote:
  This should be simple but I am missing something.  I am trying to extract
  all records entered after a given date.  The table has a field
  date_entered which is a timestamp.  In this particular case I am not
  worried about time.
 
  I have tried:
  select id from main_table where
  date_entered > to_timestamp('January 2006', 'Month YYYY');
 
  select id from main_table where
  (to_timestamp('January 2006', 'Month YYYY'), now()) overlaps (date_entered,
  date_entered);
 
  Both of these return all the rows in the table.  Half of the rows are
  dated 2000-06-22 12:00:00.
 
  PostgreSQL version 8.1.4
 
 I think people often make date comparisons too difficult in postgres.

 select id
 from main_table
 where date_entered >= '2006-01-01';

 There are built in conversions for formatted date strings.

 --
 erik jones [EMAIL PROTECTED]
 software development
 emma(r)


Thanks that works.  But I am trying to understand why the others did not,
especially my first attempt.  Further testing shows that

select id, date_entered from main_table where
date_entered >= to_timestamp('2006 January', 'YYYY Month');

works, but

select id, date_entered from main_table where
date_entered >= to_timestamp('January 2006', 'Month YYYY');

does not.  The order of the fields in the to_timestamp function changes
the timestamp produced.  Should it be this way?

hbvar=# select to_timestamp('January 2006', 'Month YYYY');
      to_timestamp
------------------------
 0006-01-01 00:00:00-05
(1 row)

hbvar=# select to_timestamp('2006 January', 'YYYY Month');
      to_timestamp
------------------------
 2006-01-01 00:00:00-05
(1 row)

Belinda


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] a question on SQL

2006-12-12 Thread Ragnar
On þri, 2006-12-12 at 16:47 +, Tomi N/A wrote:
 Don't really know where to ask this...the general mailing list sounds
 like the closest.
 
 Let's say I have three tables: owner, factory and product with a 1:N
 relationship at each step.
 Assuming that a product has a production date, how would you go about
 returning a factory for every owner, where the returned factory is the
 factory that produced the oldest product of it's owner?


 Hints?

someting like this maybe:
  select distinct on (owner.id, factory.factoryid) *
  from owner, factory, product
  where <your join conditions>
  order by owner.id, factory.factoryid, production_date

gnari




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] a question on SQL

2006-12-12 Thread Marc Mamin
Seems that a nested use of DISTINCT ON will do it:




create table factories (id int, factory varchar(10), ownerid int);
create table products (id int, product varchar(10), atime int
,factory_id int);

--owner 1 : factory 1
insert into products values(1,'p1',123,1);
insert into products values(2,'p2',124,1);
insert into products values(3,'p3',125,1);

--owner 1 : factory 2
insert into products values(4,'p4',1,2);-- expected

--owner 2 : factory 3
insert into products values(5,'p5',127,3);-- expected
insert into products values(6,'p6',128,3);
insert into products values(7,'p7',129,3);


insert into factories values(1,'f1',1);
insert into factories values(2,'f2',1);
insert into factories values(3,'f3',2);

select distinct on (foo.ownerid)
foo.ownerid,foo.factory,foo.atime
from 
  (select distinct on (f.ownerid, p.factory_id)
  f.ownerid,factory,atime
  from factories f,products p
  where p.factory_id=f.id
  order by f.ownerid, p.factory_id, atime 
  )foo
order by foo.ownerid,  foo.atime 
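
Since only one row per owner is wanted, a single level of DISTINCT ON over
the join may already be enough; a sketch against the same sample tables:

select distinct on (f.ownerid)
       f.ownerid, f.factory, p.atime
from factories f, products p
where p.factory_id = f.id
order by f.ownerid, p.atime;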

Cheers,

Marc

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] date comparisons

2006-12-12 Thread Richard Huxton

Belinda M. Giardine wrote:

Thanks that works.  But I am trying to understand why the others did not,
especially my first attempt.  Further testing shows that

select id, date_entered from main_table where
date_entered >= to_timestamp('2006 January', 'YYYY Month');

works, but

select id, date_entered from main_table where
date_entered >= to_timestamp('January 2006', 'Month YYYY');

does not.  The order of the fields in the to_timestamp function changes
the timestamp produced.  Should it be this way?


No. Good testing, you've found a bug. Seems to be a problem with the
Month formatting if there are more fields after it.



hbvar=# select to_timestamp('January 2006', 'Month YYYY');
      to_timestamp
------------------------
 0006-01-01 00:00:00-05
(1 row)

hbvar=# select to_timestamp('2006 January', 'YYYY Month');
      to_timestamp
------------------------
 2006-01-01 00:00:00-05
(1 row)


SELECT to_timestamp('January 22 2006','Month DD YYYY');
      to_timestamp
------------------------
 0005-06-28 00:00:00+00
(1 row)

SELECT to_timestamp('Jan 22 2006','Mon DD YYYY');
      to_timestamp
------------------------
 2006-01-22 00:00:00+00
(1 row)

If you report this bug using the form below, I'm sure one of the 
developers will have a patch out shortly.

  http://www.postgresql.org/support/submitbug

Good catch!
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] resetting sequence to cur max value

2006-12-12 Thread Tom Lane
[EMAIL PROTECTED] writes:
 Is there a way to set it up so it knows to skip past existing ids?

Usually you do something like

select setval('seq_name', (select max(idcol) from table) + 1);

after loading data into the table.
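
Applied to the sequence and table from the original post, that would be
something along the lines of:

select setval('seq_address', (select max(id) from address) + 1);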

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] resetting sequence to cur max value

2006-12-12 Thread developer
Awesome.  Thanks tom.

By the way I am still trying to find a yum install for 8.2 for
centos...anyone?

 [EMAIL PROTECTED] writes:
 Is there a way to set it up so it knows to skip past existing ids?

 Usually you do something like

   select setval('seq_name', (select max(idcol) from table) + 1);

 after loading data into the table.

   regards, tom lane




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Restore database from files (not dump files)?

2006-12-12 Thread Bruce Momjian
Tom Lane wrote:
 wheel [EMAIL PROTECTED] writes:
  I guess the reason is that the pg system database etc are lodged in the 
  hive under \base\, and the system db contains the metadata about the db 
  to be restored? 
 
 No, the reason why selective restore doesn't work is that all tables in
 a database cluster depend on the same commit log (pg_clog) to determine
 which rows are valid and which are not.  What you were trying to do is
 described in the manual (with a warning not to do it) under
 backup/restore procedures:
 http://www.postgresql.org/docs/8.2/static/backup-file.html
 
 Also, if you would like to know more about the physical layout of the
 database contents, it's described here:
 http://www.postgresql.org/docs/8.2/static/storage.html

This is the contents of /data:

PG_VERSION      base/           global/         pg_clog/        pg_hba.conf
pg_ident.conf   pg_multixact/   pg_subtrans/    pg_tblspc/      pg_twophase/
pg_xlog/        postgresql.conf postmaster.opts postmaster.pid

None of these are optional for restoring a database.  They are all
interconnected.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] date comparisons

2006-12-12 Thread Tom Lane
Belinda M. Giardine [EMAIL PROTECTED] writes:
 Should it be this way?

Well, to_timestamp() is apparently designed not to complain when the
input doesn't match the format, which is not my idea of good behavior
... but your example is in fact wrong.  'Month' means a 9-character
field, so you are short a couple of spaces.

regression=# select to_timestamp('January 2006', 'Month YYYY');
  to_timestamp

 0006-01-01 00:00:00-05
(1 row)

regression=# select to_timestamp('January   2006', 'Month YYYY');
  to_timestamp

 2006-01-01 00:00:00-05
(1 row)

You probably want

regression=# select to_timestamp('January 2006', 'FMMonth YYYY');
  to_timestamp

 2006-01-01 00:00:00-05
(1 row)

Or, as suggested upthread, forget to_timestamp and just use the native
timestamp or date input conversion, which on the whole is a lot more
robust (it *will* throw an error if it can't make sense of the input,
unlike to_timestamp).
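
For instance (a quick sketch; the exact output depends on your DateStyle
setting):

    select 'January 22 2006'::timestamp;   -- 2006-01-22 00:00:00
    select '22 January 2006'::date;        -- 2006-01-22
    select 'not a date'::timestamp;        -- ERROR: rejected instead of silently guessing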

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] shell script to populate array values

2006-12-12 Thread Paul Silveira



Thanks Susan...  I really appreciate your answer and helping me do what I
wanted to do...

I posted the code I was working on here...
http://people.planetpostgresql.org/paul/

It's a pretty cool script and although it's not even done yet, I'm sure it
could be useful to anyone wanting to do the same thing...

-Paul







Paul Silveira wrote:
 
 Hello,
 
 I would like to create a shell script that would populate two variables
 with the return of a SELECT statement that would return two attributes... 
 
 For example...
 
 #!/bin/bash 
 SERVER_NAMES=`psql Admin -Upostgres -hMYSERVER -t -c"SELECT servername,
 instanceport from server where serverclass = 3 and isactive = 'True' ;"` 
 
 
 As you can see, I'm returning the servername and the instanceport from
 the server table.  This will later allow me to create psql commands to
 connect to each server dynamically.  
 
 I had the script working correctly when I was just running it for the
 server name as below...
 #!/bin/bash 
 SERVER_NAMES=`psql Admin -Upostgres -hMYSERVER -t -c"SELECT servername
 from server where serverclass = 3 and isactive = 'True' ;"` 
 
 
 Does anyone know the easiest way to get both values out of some variables
 that I could set?  
 
 Later in the script, I'm creating a for loop and iterating through the
 server names and would like to build connection strings dynamically with
 the results from the select string...
 
 
 
   for i in $SERVER_NAMES 
   do 
psql -Upostgres -h$i -p$i -A -t -c"SELECT '$i', '$BATCHTIME', name,
 setting, category, short_desc, context, vartype, source, min_val, max_val
 FROM pg_settings;" | psql Admin -Upostgres -hMYSERVER -t -c"COPY
 serverconfigsetting FROM STDIN WITH DELIMITER '|' NULL AS '';"
echo Done with $i 
   done 
 
 
 As you can see I have -h$i -p$i in the script for the host and port. 
 Again the script worked fine when I just had the [EMAIL PROTECTED] in there...
 
 I know that the current forloop is incorrect specifiying the $i twice but
 I just put that in there to show an example of what I was hoping to do... 
 It would probably be more accessing the array value like -h$i[0:0]
 -p$i[0:1] in pseudo code for accessing array values. 
 
 Thanks in advance,
 
 Paul
 
 
 
 
 
 

-- 
View this message in context: 
http://www.nabble.com/shell-script-to-populate-array-values-tf2796502.html#a7837622
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Tom Lane
Brandon Aiken [EMAIL PROTECTED] writes:
 If you have, say, an index(x, y) then that index will often double as an
 index(x).  It will generally not double as an index(y).

It's not hard to understand why, if you think about the sort ordering of
a double-column index:

x   y

1   1
1   2
1   3
2   1
2   2
2   3
3   1
...

All similar values of x are brought together, so scanning the index for
x alone works just the same as it would in a one-column index ... the
index entries are bigger so it's marginally less efficient, but only
marginally.  On the other hand, the entries for a specific value or
range of y will be scattered all over the index, so it's almost useless
to use the index for a search on y alone.

As of PG 8.1 or 8.2 (I forget) the optimizer will *consider* using such
an index for a y-only query, but it'll nearly always decide it's a bad
idea.
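
To see the effect for yourself, a quick sketch (table and index names are
made up):

    create table tab (x int, y int);
    create index tab_x_y_idx on tab (x, y);

    -- can use tab_x_y_idx: all the x = 2 entries are adjacent in the index
    explain select * from tab where x = 2;

    -- usually won't: the y = 2 entries are scattered throughout the index,
    -- so a seq scan normally wins
    explain select * from tab where y = 2;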

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] date comparisons

2006-12-12 Thread Belinda M. Giardine


On Tue, 12 Dec 2006, Tom Lane wrote:

 Belinda M. Giardine [EMAIL PROTECTED] writes:
  Should it be this way?

 Well, to_timestamp() is apparently designed not to complain when the
 input doesn't match the format, which is not my idea of good behavior
 ... but your example is in fact wrong.  'Month' means a 9-character
 field, so you are short a couple of spaces.

 regression=# select to_timestamp('January 2006', 'Month YYYY');
   to_timestamp
 
  0006-01-01 00:00:00-05
 (1 row)

 regression=# select to_timestamp('January   2006', 'Month YYYY');
   to_timestamp
 
  2006-01-01 00:00:00-05
 (1 row)

 You probably want

 regression=# select to_timestamp('January 2006', 'FMMonth YYYY');
   to_timestamp
 
  2006-01-01 00:00:00-05
 (1 row)

Thanks.  I wanted to understand the reason for my attempt not working no
matter which method I used in the end.  Help to prevent future errors.


 Or, as suggested upthread, forget to_timestamp and just use the native
 timestamp or date input conversion, which on the whole is a lot more
 robust (it *will* throw an error if it can't make sense of the input,
 unlike to_timestamp).

   regards, tom lane

Good to know.
Belinda


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/12/06 11:30, Tom Lane wrote:
 Brandon Aiken [EMAIL PROTECTED] writes:
 If you have, say, an index(x, y) then that index will often double as an
 index(x).  It will generally not double as an index(y).
 
 It's not hard to understand why, if you think about the sort ordering of
 a double-column index:
 
   x   y
 
   1   1
   1   2
   1   3
   2   1
   2   2
   2   3
   3   1
   ...
 
 All similar values of x are brought together, so scanning the index for
 x alone works just the same as it would in a one-column index ... the
 index entries are bigger so it's marginally less efficient, but only
 marginally.  On the other hand, the entries for a specific value or
 range of y will be scattered all over the index, so it's almost useless
 to use the index for a search on y alone.

Some DBMSs call this an index scan.

 As of PG 8.1 or 8.2 (I forget) the optimizer will *consider* using such
 an index for a y-only query, but it'll nearly always decide it's a bad
 idea.

Scanning segment-2 of a 2-segment index seems like it would be
faster than scanning the table, if for no other reason than
locality of data: the index will be smaller than the table, so
scanning it looking for record pointers should be faster than
scanning the table.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFfun7S9HxQb37XmcRAvaqAJ0X4m933xqHaKBfdYEM0KHaMST/TgCfQsEA
4dBgCERRzIlBrkUK18gfZ08=
=PGjb
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] date comparisons

2006-12-12 Thread Richard Huxton

Tom Lane wrote:

Belinda M. Giardine [EMAIL PROTECTED] writes:

Should it be this way?


Well, to_timestamp() is apparently designed not to complain when the
input doesn't match the format, which is not my idea of good behavior
... but your example is in fact wrong.  'Month' means a 9-character
field, so you are short a couple of spaces.


The padding is on *input* too? Is this an Oracle compatibility feature?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] date comparisons

2006-12-12 Thread Tom Lane
Richard Huxton dev@archonet.com writes:
 The padding is on *input* too? Is this an Oracle compatibility feature?

I assume so.  If Oracle does not work like that, then it'd be a bug ...
but the whole purpose of that function is to be Oracle-compatible,
so we're sort of stuck doing what Oracle does.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Are updateable view as a linked table in ms-access a bad idea?

2006-12-12 Thread Richard Broersma Jr
 Please show us your exact view, table and rule definitions
 used by this example.

  --  update 0 is false
 I guess what you are seeing are partial updates of the view
 caused by a multi-action rule which doesn't see the updated
 tuple in its subsequent actions anymore. This happens if you try
 to update the referencing key field of a parent table which doesn't 
 get propagated to the joined tables, for example. Again, provide
 your object definitions and we could say more (didn't find the
 vwife view and its update rules by following your provided links).

Here are the table,view, and update rule definitions: thanks for the help

CREATE SEQUENCE public.person_seq
INCREMENT BY 1
START WITH 1;

CREATE TABLE public.person
(
 id         integer primary key not null
            default nextval('public.person_seq'),
 name       varchar(30) unique not null
);
ALTER SEQUENCE public.person_seq OWNED BY public.person.id;


CREATE TABLE public.wife
(
 id         integer primary key
            references person(id),
 dresssize  integer not null
);


CREATE OR REPLACE VIEW public.vwife (id, name, dresssize)  AS
SELECT
A.id, A.name, B.dresssize
FROM
public.person as A
INNER JOIN
public.wife as B
ON
A.id = B.ID;


CREATE OR REPLACE RULE 
vwife_update 
AS ON UPDATE TO 
public.vwife
DO INSTEAD
(
UPDATE 
public.person
SET 
name = NEW.name
WHERE 
id = OLD.id;

UPDATE
public.wife
SET
dresssize = NEW.dresssize
WHERE
id = OLD.id
);

Thanks for the consideration :-)

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] a question on SQL

2006-12-12 Thread Tomi N/A

Ragnar, Marc, thanks so much for the help: DISTINCT ON was *exactly*
what I needed.
It's not a part of any SQL standard I know of, but does the job _wonderfully_.

Cheers,
t.n.a.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Database-based alternatives to tsearch2?

2006-12-12 Thread Wes
I've seen questions asked on the list about alternatives to tsearch2, but
not for the type of full text indexing I'm looking for.

I'm looking for a non index-based full text indexing - one that stores the
information as table data instead of index data.  I do not need to implement
SQL operators for searches.  The application library would need to implement
the actual word search.

Indexes are too fragile.  Our documents will be offline, and re-indexing
would be impossible.  Additionally, as I understand it, tsearch2 doesn't
scale to the numbers I need (hundreds of millions of documents).

Is anyone aware of any such solutions for PostgreSQL, open source or
otherwise?

Thanks

Wes



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Database-based alternatives to tsearch2?

2006-12-12 Thread Jeff Davis
On Tue, 2006-12-12 at 12:19 -0600, Wes wrote:
 I'm looking for a non index-based full text indexing - one that stores the
 information as table data instead of index data.  I do not need to implement
 SQL operators for searches.  The application library would need to implement
 the actual word search.
 

Store the tsvector (a custom type provided by tsearch2) as a separate
column in the table. This data type holds all the important information
about the indexed text, such as distinct words and some position
information, but it takes up much less space than a large document.

The tsearch2 package provides a lot of functionality even without the
index. But after you have a tsvector column, you can create an index on
it if you want.
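
A minimal sketch of that approach, assuming the tsearch2 contrib module is
installed (table and column names here are made up):

    -- keep the parsed representation next to the document text
    alter table documents add column doc_vector tsvector;
    update documents set doc_vector = to_tsvector('default', body);

    -- searches work against the column, with or without an index on it
    select id from documents
    where doc_vector @@ to_tsquery('default', 'postgres & fulltext');

    -- optional: a GIN index (8.2) to make those searches fast
    create index documents_doc_vector_idx on documents using gin (doc_vector);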

 Indexes are too fragile.  Our documents will be offline, and re-indexing
 would be impossible.  Additionally, as I undertstand it, tsearch2 doesn't
 scale to the numbers I need (hundreds of millions of documents).
 

Try PostgreSQL 8.2 with tsearch2 using GIN. As I understand it, that's
very scalable.

Also, as I understand it, a GIN index should not need to be reindexed
unless there is a huge shift in the set of distinct words you're using.
However, if you do need to reindex, you can if you have the tsvector
column. 

Regards,
Jeff Davis




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Asynchronous replication of a PostgreSQL DB to

2006-12-12 Thread Bruce Momjian

I think Sequoia (open source) and Continuent (proprietary) do this.

---

Markus Wollny wrote:
 Hi!
 
 I'd like to export schema and data from a PostgreSQL database to a
 remote MySQL database; any changes to the PG-master should be reflected
 on the MySQL target in a matter of a few minutes to one hour max.
 
 Has anybody done something like this before?
 
 Here's some more background: We've got an Oracle database as our backend
 and a couple of PostgreSQL-DBs as our frontend databases; the schema of
 the backend DB is stable. There are so called publishing jobs running
 every few minutes; these jobs not only update the frontend databases
 with any changes in the backend, they also make changes to the frontend
 dbs schemas whenever the backend says so - the frontend schemas differ
 from the backend's, the DDL of the frontend dbs is partly defined by
 data in the backend.
 
 The logical thing to do would be to create another set of publishing
 jobs for the MySQL databases; however our current network layout makes
 this quite difficult, so I'd rather try and keep the MySQL db and one of
 the PostgreSQL dbs in near sync.
 
 My first problem is that the PostgreSQL DBs' schema is not stable, so if I
 simply write a couple of jobs to transport the data, I need to alter
 these jobs and the MySQL schema whenever there are changes to the PG
 schema. The second problem lies in PostgreSQL-specifics such as tsearch2
 - I actually do not need nor want to replicate such metadata. Custom
 datatypes and functions should also be exempt from this kind of
 replication.
 
 My hopes aren't all too high that there's an easy way to accomplish what
 I wish to do, so any advice would be very much welcome - even a "can't
 be done that way" by somebody who has tried to travel that path before
 :)
 
 Kind regards
 
Markus
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Database-based alternatives to tsearch2?

2006-12-12 Thread Richard Huxton

Wes wrote:


Indexes are too fragile.  Our documents will be offline, and re-indexing
would be impossible.  Additionally, as I undertstand it, tsearch2 doesn't
scale to the numbers I need (hundreds of millions of documents).


Jeff's right about tsvector - sounds like it's what you're looking for.

If you're worried about reindexing costs, perhaps look at partitioning the 
table, or using partial indexes (so you could have multiple indexes for 
each table, based on (id mod 100) or some such).
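
Concretely, the partial-index idea might look like this (sketch only, reusing
the made-up documents/doc_vector names from the earlier sketch; queries have
to repeat the predicate so the planner can pick the matching slice):

    create index documents_tsv_p00 on documents using gin (doc_vector)
        where id % 100 = 0;
    create index documents_tsv_p01 on documents using gin (doc_vector)
        where id % 100 = 1;
    -- ... one per slice; each slice can then be rebuilt independently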


Obviously, partitioning over multiple machines is usually quite do-able 
for this sort of task too.



Is anyone aware of any such solutions for PostgreSQL, open source or
otherwise?


Without wishing to discourage a potential large user from PG, it might 
be worth checking if Google/Yahoo/etc have a non-relational server that 
meets your needs off-the-shelf.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Database-based alternatives to tsearch2?

2006-12-12 Thread Daniel Verite
Wes wrote:

 I've seen questions asked on the list about alternatives to tsearch2, but
 not for the type of full text indexing I'm looking for.
 
 I'm looking for a non index-based full text indexing - one that stores the
 information as table data instead of index data.  I do not need to implement
 SQL operators for searches.  The application library would need to implement
 the actual word search.

Not sure if it would fit your needs, but DBIx-TextIndex implements
FTI using table data:
http://search.cpan.org/~dkoch/DBIx-TextIndex-0.25/lib/DBIx/TextIndex.pm

-- 
 Daniel
 PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL 8.2 : IPO link warning with ICC 9.1.045

2006-12-12 Thread Martijn van Oosterhout
On Tue, Dec 12, 2006 at 04:06:55PM +0100, DANTE Alexandra wrote:
 But when I check the log of the rpmbuild -ba command, I have found 
 this warning :

snip

 *IPO link: Warning unknown option '--version-script=exports.list'.*

That's the gcc command-line switch used to stop exporting unnecessary
symbols. It should only be used for gcc, I wonder how it selected it
for your compiler? Did you run configure with the right compiler?

In any case, it's harmless.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] a question on SQL

2006-12-12 Thread Martijn van Oosterhout
On Tue, Dec 12, 2006 at 06:29:07PM +, Tomi N/A wrote:
 Ragnar, Marc, thanks so much for the help: DISTINCT ON was *exactly*
 what I needed.
 It's not a part of any SQL standard I know of, but does the job 
 _wonderfully_.

It's the single most useful non-standard SQL feature postgresql has. It
is thus simultaneously bad (from a portability aspect) and brilliant
(because it's a million times easier and faster than the alternatives).
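
For the archives, a rough side-by-side on a made-up table t(grp, ts, val):

    -- the PostgreSQL way: one row per grp, chosen by the ORDER BY
    select distinct on (grp) grp, ts, val
    from t
    order by grp, ts desc;

    -- a portable rewrite: clunkier, and it can return several rows per grp on ties
    select grp, ts, val
    from t t1
    where ts = (select max(ts) from t t2 where t2.grp = t1.grp);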

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Online index builds

2006-12-12 Thread Bruce Momjian
Jeff Davis wrote:
 On Thu, 2006-12-07 at 18:11 -0500, Tom Lane wrote:
  Jeff Davis [EMAIL PROTECTED] writes:
   I think all you need to do what you want is something like:
   ALTER TABLE foo DROP CONSTRAINT foo_pkey KEEP INDEX;
  
   Because then you could drop the primary key status on a column without
   affecting the column or the index, then use my suggested syntax to
   switch the primary key status to a different index like so:
   ALTER TABLE foo SET PRIMARY KEY INDEX foo_othercolumn_index;
  
  That seems like an awful lot of uglification simply to let the index be
  marked as primary key rather than just unique.
  
 
 Agreed. It's just a thought.
 
 The reason it came to my mind is because some applications, like Slony,
 use the primary key by default.
 
 After reading through the archives, it looks like Gregory Stark
 suggested a REINDEX CONCURRENTLY, which would certainly solve the
 awkwardness of maintenance on a primary key. I didn't see much
 objection, maybe it's worth consideration for 8.3?

Added to TODO:

* Allow REINDEX CONCURRENTLY

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Online index builds

2006-12-12 Thread Bruce Momjian
bruce wrote:
 Jeff Davis wrote:
  On Thu, 2006-12-07 at 18:11 -0500, Tom Lane wrote:
   Jeff Davis [EMAIL PROTECTED] writes:
I think all you need to do what you want is something like:
ALTER TABLE foo DROP CONSTRAINT foo_pkey KEEP INDEX;
   
Because then you could drop the primary key status on a column without
affecting the column or the index, then use my suggested syntax to
switch the primary key status to a different index like so:
ALTER TABLE foo SET PRIMARY KEY INDEX foo_othercolumn_index;
   
   That seems like an awful lot of uglification simply to let the index be
   marked as primary key rather than just unique.
   
  
  Agreed. It's just a thought.
  
  The reason it came to my mind is because some applications, like Slony,
  use the primary key by default.
  
  After reading through the archives, it looks like Gregory Stark
  suggested a REINDEX CONCURRENTLY, which would certainly solve the
  awkwardness of maintenance on a primary key. I didn't see much
  objection, maybe it's worth consideration for 8.3?
 
 Added to TODO:
 
   * Allow REINDEX CONCURRENTLY

Oops, removed.  Seems there is a deadlock issue.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL 8.2 : IPO link warning with ICC 9.1.045

2006-12-12 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 That the gcc command-line switch used to stop exporting unnessesary
 symbols. It should only be used for gcc, I wonder how it selected it
 for your compiler? Did you run configure with the right compiler?

icc pretends to be gcc ... not very well, but it pretends anyway.

 In any case, it's harmless.

I believe so, yes.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread SunWuKung
I think there should be an easier way to backup a single database and
restore it on another server.

In my case we are developing a db so there are many schema changes to
that. When there is a significant change we find it easier to drop and
recreate the db from the backup - withouth affecting the other
databases that are running on the server. For that now I am using a
batch file that

creates the db from template0,
creates language pgsql,
runs pg_restore,
alters the db to set searchpath
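
Spelled out, that batch amounts to something like the following (untested
sketch; database, owner, schema and dump file names are placeholders, and the
language is plpgsql):

    #!/bin/sh
    createdb -T template0 -O myowner mydb
    createlang plpgsql mydb
    pg_restore -d mydb mydb.dump
    psql -d mydb -c "ALTER DATABASE mydb SET search_path = myschema, public"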

The way I found out that I have to create language before and set the
searchpath after running restore is that I found that the restored db
doesn't work - which was quite scary at first. The fact that you have
to create roles before restore is well documented, but I didn't find
the others, so maybe there are still steps missing I just didn't find
the error in the restored db yet - which is still scary. If you are
saying that I should read the backup-restore in the manual more
carefully I can't argue with that, but I have to say for myself that I
have read it several times by now.

On one hand it seems that 'PostgreSQL is simply very granular about
what it lets you dump' on the other hand it seems that the only unit
that you can restore on a clean server without scripting is the whole
cluster - which if I understand it correctly is not something you can
define, but is simply given as all the databases on the server.

I appreciate that my case is special and once things settled people
aren't messing so much with their schemas, but how do providers do
this?
Users of a provider surely can only create the backup of their own db?
How would the user restore that backup on his own server?

Thanks for the help.
Balázs


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] a question on SQL

2006-12-12 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


 Ragnar, Marc, thanks so much for the help: DISTINCT ON was *exactly*
 what I needed.
 It's not a part of any SQL standard I know of, but does the job
 _wonderfully_.

 It's the single most useful non-standard SQL feature postgresql has. It
 is thus simultaneously bad (from a portatbility aspect) and brilliant
 (because it's a million times easier and faster than the alternatives).

You mean second-most useful. LIMIT/OFFSET is the champion, hands down. :)

- --
Greg Sabino Mullane [EMAIL PROTECTED]
End Point http://www.endpoint.com/
PGP Key: 0x14964AC8 200612121616
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8

-BEGIN PGP SIGNATURE-

iD8DBQFFfxxjvJuQZxSWSsgRAokYAKDbmzEdfi3B/Fp0L62C6Fn48saMigCfeANo
PFT+tLmygoaZpAqfDO241AQ=
=n0xI
-END PGP SIGNATURE-



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Postgres friendly RegEx?

2006-12-12 Thread Jon Asher

Anyone care to correct this regex for PostgreSQL?  It works in C++ but
Postgres has no love for it:

-{0,1}\d*\.{0,1}\d+\^{0,1}\d*\.{0,1}\d+

This regex accepts  (any num)^(pos num)  such as:
45.2^3
-45.2^3
10^2.5


Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread Tom Lane
SunWuKung [EMAIL PROTECTED] writes:
 The way I found out that I have to create language before and set the
 searchpath after running restore is that I found that the restored db
 doesn't work - which was quite scary at first.

You should not need to create the language --- that *is* part of the
pg_dump data.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] a question on SQL

2006-12-12 Thread Tom Lane
Greg Sabino Mullane [EMAIL PROTECTED] writes:
 It's the single most useful non-standard SQL feature postgresql has. It
 is thus simultaneously bad (from a portatbility aspect) and brilliant
 (because it's a million times easier and faster than the alternatives).

 You mean second-most useful. LIMIT/OFFSET is the champion, hand down. :)

Yeah, but that one's only quasi-non-standard ... several other DBMSes
have it too.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Postgres friendly RegEx?

2006-12-12 Thread Tom Lane
Jon Asher [EMAIL PROTECTED] writes:
 Anyone care to correct this regex for PostgreSQL?  It works in C++ but
 Postgres have no love for it:

 -{0,1}\d*\.{0,1}\d+\^{0,1}\d*\.{0,1}\d+

It works fine in Postgres, AFAICT.  Maybe you forgot to double the
backslashes in a string literal?  Otherwise, be more specific about
your problem.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Postgres friendly RegEx?

2006-12-12 Thread Jon Asher

In Postgres, it appears to be returning false positives:

select * from
(select '52'::varchar As val) d
where d.val ~ '-{0,1}\\d*\\.{0,1}\\d+\\^{0,1}\\d*\\.{0,1}\\d+'

returns a record.
In C++ only such values match:   45.2^3  or  -45.2^3  or  10^2.5


On 12/12/06, Tom Lane [EMAIL PROTECTED] wrote:


Jon Asher [EMAIL PROTECTED] writes:
 Anyone care to correct this regex for PostgreSQL?  It works in C++ but
 Postgres have no love for it:

 -{0,1}\d*\.{0,1}\d+\^{0,1}\d*\.{0,1}\d+

It works fine in Postgres, AFAICT.  Maybe you forgot to double the
backslashes in a string literal?  Otherwise, be more specific about
your problem.

   regards, tom lane



Re: [GENERAL] Postgres friendly RegEx?

2006-12-12 Thread Jon Asher

Please ignore, my mistake in the translation to Pg regex !
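
For anyone hitting the same thing later: the false positive comes from two
things. The ~ operator succeeds on any substring match, and with \\^{0,1}
the exponent marker is optional, so the digits '5' and '2' alone satisfy the
two \\d+ groups. Anchoring the pattern and making the caret mandatory gives
the intended behaviour (untested sketch):

    select '52'      ~ '^-?\\d*\\.?\\d+\\^\\d*\\.?\\d+$' as m1,  -- false
           '45.2^3'  ~ '^-?\\d*\\.?\\d+\\^\\d*\\.?\\d+$' as m2,  -- true
           '-45.2^3' ~ '^-?\\d*\\.?\\d+\\^\\d*\\.?\\d+$' as m3,  -- true
           '10^2.5'  ~ '^-?\\d*\\.?\\d+\\^\\d*\\.?\\d+$' as m4;  -- true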


On 12/12/06, Jon Asher [EMAIL PROTECTED] wrote:


In Postgres, it appears to be returning false positives:

select * from
(select '52'::varchar As val) d
where d.val ~ '-{0,1}\\d*\\.{0,1}\\d+\\^{0,1}\\d*\\.{0,1}\\d+'

returns a record.
In C++ only such values match:   45.2^3  or  -45.2^3  or  10^2.5


On 12/12/06, Tom Lane [EMAIL PROTECTED] wrote:

 Jon Asher [EMAIL PROTECTED] writes:
  Anyone care to correct this regex for PostgreSQL?  It works in C++ but

  Postgres have no love for it:

  -{0,1}\d*\.{0,1}\d+\^{0,1}\d*\.{0,1}\d+

 It works fine in Postgres, AFAICT.  Maybe you forgot to double the
 backslashes in a string literal?  Otherwise, be more specific about
 your problem.

regards, tom lane





[GENERAL] indexed function performance

2006-12-12 Thread mikelin
I'm trying to do a complicated ordering of a table with ~40k rows.

I have an IMMUTABLE plpgsql function that returns an integer that I'll
be sorting by, but the function is slow, so I want to cache it somehow.

I found in the docs:
the index expressions are not recomputed during an indexed search,
since they are already stored in the index.
- http://www.postgresql.org/docs/8.1/static/indexes-expressional.html

which sounds like caching, so I created an index on that function,
expecting stellar performance, but the performance turned out to be
pretty bad:

words=# explain analyse select * from word order by
word_difficulty(word) limit 100;

QUERY PLAN

 Limit  (cost=0.00..90.57 rows=100 width=48) (actual
time=43.718..3891.817 rows=100 loops=1)
   ->  Index Scan using word_word_difficulty_idx on word
(cost=0.00..37989.19 rows=41946 width=48) (actual time=43.711..3891.251
rows=100 loops=1)
 Total runtime: 3892.253 ms
(3 rows)


I wouldn't have expected that Index Scan to be so slow. In comparison,
I added another column to the table, and cached the results there, and
the index scan on the new column is way faster:

words=# alter table word add column difficulty integer;
ALTER TABLE
words=# update word set difficulty=word_difficulty(word);
UPDATE 41946
words=# create index word_difficulty_idx on word(difficulty);
CREATE INDEX

words=# explain analyse select * from word order by difficulty limit
100;
QUERY
PLAN
---
 Limit  (cost=0.00..89.89 rows=100 width=48) (actual time=0.028..0.646
rows=100 loops=1)
   ->  Index Scan using word_difficulty_idx on word
(cost=0.00..37706.32 rows=41946 width=48) (actual time=0.023..0.341
rows=100 loops=1)
 Total runtime: 0.870 ms
(3 rows)


So I'll probably just end up using the latter approach, but I'm
curious, so I ask if anyone can explain why the indexed function is so
slow.

Thanks!
Mikelin


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Online index builds

2006-12-12 Thread Tom Lane
Gregory Stark [EMAIL PROTECTED] writes:
 You could create a whole new index concurrently, then in a completely new
 (third) transaction drop the old one. The problem there is that there could be
 other things (namely foreign key constraints) depending on the old index.
 Fixing them all to depend on the new one may not be a problem or it may, I
 haven't thought it through. Nor have I thought through whether it would be
 possible to keep the original name.

If the idea is to do REINDEX CONCURRENTLY then ISTM you could just swap
the relfilenodes of the two indexes and then zap the new catalog entries
(and old index contents).  The problem is exactly the same as before,
though: you need exclusive lock to do that.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] indexed function performance

2006-12-12 Thread Tom Lane
mikelin [EMAIL PROTECTED] writes:
 which sounds like caching, so I created an index on that function,
 expecting stellar performance, but the performance turned out to be
 pretty bad:

 words=# explain analyse select * from word order by
 word_difficulty(word) limit 100;

 I wouldn't have expected that Index Scan to be so slow.

The index scan actually is not slow; the problem is that the planner
neglects to suppress the computation of the sort-key columns in the
output rows, even though they're not referenced anywhere.  Normally
that doesn't matter a whole lot, but if it's a really expensive function
then it does matter.  We just noticed this problem a few weeks ago:
http://archives.postgresql.org/pgsql-performance/2006-11/msg00054.php

I'm hoping to fix this for 8.3, but suspect that the fix will be too
invasive to consider back-patching to older releases.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] TIMESTAMP WITHOUT TIME ZONE

2006-12-12 Thread Randy Shelley

I am stuck, I am getting two different times from the database depending on
the timezone of the system I am querying from.

The story is this:
I have a table name request. It has a column create_dt of type TIMESTAMP
WITHOUT TIME ZONE.

When I query this from jdbc into a java.sql.Timestamp and output it like
this

 java.sql.Timestamp ts= rs.getTimestamp(1);
 System.out.println(ts.getTime());

I get different results if I query it from my workstation (US/Eastern timezone)
and from the server (GMT timezone).

How can this be?? Please help!

A data type of timestamp without time zone should not do any conversions.
The java.sql.Timestamp does not store any timezone info, just nanoseconds
from a date. Somewhere there is a timezone conversion happening. Why and
how do I prevent it?

My idea is this:
What I save to the database (date and time) should be what I get back no
matter what timezone I save or retrieve it in.

Randy


[GENERAL] Statement timeout not working on broken connections with active queries

2006-12-12 Thread Brendan O'Shea
We have discovered a situation where the statement_timeout is not honored for 
broken connections.  If a connection is in the process of returning results to 
the client and the connection is severed (for example, network cable on client 
is unplugged) then the query continues to run on the server even after the 
statement_timeout is exceeded.  The connection will eventually close on its own 
after about 18-19 minutes and the following log lines will be generated in the 
postgresql log file:

2006-12-12 04:03:22 LOG:  could not send data to client: No route to host
2006-12-12 04:03:22 ERROR:  canceling statement due to statement timeout
2006-12-12 04:03:22 LOG:  could not send data to client: Broken pipe
2006-12-12 04:03:22 LOG:  unexpected EOF on client connection

Our server setup is:
Linux 2.4
Postgresql 8.1.4

Our client setup is:
Windows XP
Java 1.5
postgresql-8.1.jdbc2ee.jar

This behavior appears to be a bug with the statement_timeout.  I'd like to know 
if there is a way to get the connection to close once the statement_timeout is 
exceeded even if the connection to the client has been severed.  I'd also like 
to know what is causing the connection to close on its own after 18-19 minutes 
and if this can be adjusted.  Any help here would be greatly appreciated.

I tried adjusting the tcp_keepalives_idle setting and related settings but 
this had no effect on the time it took for the connection to close on its own.

I have also tried cancelling the active query via a call to select 
pg_cancel_backend(pid), but this has no effect.  I then tried killing the 
connection by running the command ./pg_ctl kill TERM pid, but this also has 
no effect (I realize 'kill TERM' isn't considered safe yet, I see it's still on 
the pg todo list).  The connection can be killed with a QUIT signal, but this 
is not recommended because it causes the database to restart in an unclean way. 
 I'd prefer that the statement_timeout setting simply cancelled the query and 
the connection was closed without any manual intervention, but does anyone know 
of a way to manually kill or cancel connections of this sort in a clean manner?

You can duplicate the problem with other clients besides java.  For example, 
you can use PG Admin III following these steps:

1) execute "set statement_timeout = 15000"
2) run a query that will return a large number of rows that will take more than 
15 seconds to retrieve
3) a few seconds after you execute the query unplug your network cable
4) wait about 10 seconds
5) plug your network cable back in
6) query the pg_stat_activity view and you will see a non idle connection 
running your query


Below is the Java code used to duplicate the error.  You need to sever your 
network connection once you see the output "set statement_timeout = "

Thanks, Brendan



import java.sql.*;

public class TestStatementTimeout {

private static final String URL = "jdbc:postgresql://hostname/db_name";
private static final String DB_USER = "user";
private static final String DB_PASSWORD = "password";
private static final int STMT_TIMEOUT = 15 * 1000;
 
public static void main(String[] args) throws Exception {
String sql = "SELECT * FROM table_with_many_rows"; 
try {
 System.out.println("Connecting to " + URL);
  Class.forName("org.postgresql.Driver");
  Connection conn = java.sql.DriverManager.getConnection(URL, DB_USER, 
DB_PASSWORD);
  
  Statement stmt = 
conn.createStatement(java.sql.ResultSet.TYPE_SCROLL_INSENSITIVE,java.sql.ResultSet.CONCUR_READ_ONLY);
  stmt.execute("set statement_timeout = " + STMT_TIMEOUT);
  System.out.println("set statement_timeout = " + STMT_TIMEOUT);
  ResultSet rs=stmt.executeQuery(sql);
  System.out.println("executed query");

  while (rs.next())
  {
 System.out.print("column 1 = " + rs.getInt(1) + "\015");
  }

  System.out.println("Closing Connection");
  rs.close(); stmt.close();
  conn.close();
}
catch (Exception e) {
e.printStackTrace();
}
}
}

Re: [GENERAL] Postgres friendly RegEx?

2006-12-12 Thread Jeremy Harris

You don't give a pg version.

It looks legal to me as of 8.1.

Try replacing all the {0,1} with ? - but
check the manual for regex_flavor too.
Is there any chance you're in basic mode?

- Jeremy

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] TIMESTAMP WITHOUT TIME ZONE

2006-12-12 Thread Tom Lane
Randy Shelley [EMAIL PROTECTED] writes:
 The java.sql.Timestamp does not store any timezone info, just nano seconds
 from a date.

One would hope that it's implicitly referenced to GMT, though, not some
free-floating value that means who-knows-what.

I think your fundamental error is in using timestamp without time zone
in the database.  Try with-time-zone if you want consistent results
across clients in different zones.
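
A small illustration (psql sketch; the comments show what I'd expect rather
than a captured run):

    set timezone = 'GMT';
    create table ts_demo (plain timestamp, withtz timestamptz);
    insert into ts_demo values ('2006-12-12 10:00:00', '2006-12-12 10:00:00');
    select * from ts_demo;
    -- plain: 2006-12-12 10:00:00      withtz: 2006-12-12 10:00:00+00

    set timezone = 'US/Eastern';
    select * from ts_demo;
    -- plain: 2006-12-12 10:00:00      withtz: 2006-12-12 05:00:00-05

The with-time-zone column is an absolute instant rendered in whatever zone
the client session has set; the without-time-zone column is taken at face
value, which is why clients in different zones see different things once the
JDBC driver applies its own zone.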

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Statement timeout not working on broken connections with active queries

2006-12-12 Thread Tom Lane
Brendan O'Shea [EMAIL PROTECTED] writes:
 We have discovered a situation where the statement_timeout is not
 honored for broken connections.  If a connection is in the process of
 returning results to the client and the connection is severed (for
 example, network cable on client is unplugged) then the query continues
 to run on the server even after the statement_timeout is exceeded.

Well, the backend is blocked on a write() to the socket and cannot abort
that without bollixing the connection completely (i.e., loss of message
synchronization).  So I think waiting until the TCP stack declares
failure is an appropriate response.  If you want faster TCP failure,
see whether your TCP stack allows timeout adjustments.
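
On Linux the relevant knob appears to be net.ipv4.tcp_retries2, which controls
how many times an established connection retransmits before the stack gives
up (the default of 15 tries works out to roughly the 15-20 minutes you're
seeing). It's system-wide and I haven't tested it against this exact case, so
treat it as a pointer rather than a fix:

    # show the current value (default is 15)
    sysctl net.ipv4.tcp_retries2
    # lower it so a dead peer is declared sooner; affects every TCP connection on the box
    sysctl -w net.ipv4.tcp_retries2=8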

Note that the query is not running in the sense of consuming any
meaningful CPU or I/O resources in this state ...

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Online index builds

2006-12-12 Thread Jeff Davis
On Tue, 2006-12-12 at 18:08 -0500, Tom Lane wrote:
 Gregory Stark [EMAIL PROTECTED] writes:
  You could create a whole new index concurrently, then in a completely new
  (third) transaction drop the old one. The problem there is that there could 
  be
  other things (namely foreign key constraints) depending on the old index.
  Fixing them all to depend on the new one may not be a problem or it may, I
  haven't thought it through. Nor have I thought through whether it would be
  possible to keep the original name.
 
 If the idea is to do REINDEX CONCURRENTLY then ISTM you could just swap
 the relfilenodes of the two indexes and then zap the new catalog entries
 (and old index contents).  The problem is exactly the same as before,
 though: you need exclusive lock to do that.
 

My point was that, because we can run it in multiple transactions, can't
we drop the nonexclusive lock before acquiring the exclusive lock,
thereby eliminating the possibility of losing the index we just made to
a deadlock?

In other words, why would the following not work:

CREATE UNIQUE INDEX CONCURRENTLY foo_pkey_tmp ON foo (id);
BEGIN;
UPDATE pg_class SET relfilenode=relfilenode_of_foo_pkey WHERE
relname='foo_pkey_tmp';
UPDATE pg_class SET relfilenode=relfilenode_of_foo_pkey_tmp WHERE
relname='foo_pkey';
COMMIT;
DROP INDEX foo_pkey_tmp;

Or is there something more sophisticated we need to do to swap the
relfilenodes?

Regards,
Jeff Davis


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] shell script to populate array values

2006-12-12 Thread Paul Silveira

I wonder if I could ask another question on this thread...

How would I get the latest ID value of a table in psql and then use that
value as part of an insert statement...

For example...

I would like to declare a variable in a shell script and then use that value
in the insert statement later in the script...
1)  set the variable...
SERVERCOLLECTIONTIMEID=`psql Admin -Upostgres -hMYSERVER -t -c"INSERT INTO
servercollectiontime(batchtime) VALUES('$BATCHTIME'); SELECT
currval('servercollectiontime_servercollectiontimeid_seq');"`

2)  use the variable...
psql -Upostgres -hMYSERVER -t -c"SELECT '$SERVERCOLLECTIONTIMEID', column1
FROM mytable;"

The reason why I want to use the variable is because I want to eventually
insert that data into a table that is looking for that ID value.  

Thanks in Advance...









Paul Silveira wrote:
 
 Hello,
 
 I would like to create a shell script that would populate two variables
 with the return of a SELECT statement that would return two attributes... 
 
 For example...
 
 #!/bin/bash 
 SERVER_NAMES=`psql Admin -Upostgres -hMYSERVER -t -cSELECT servername,
 instanceport from server where serverclass = 3 and isactive = 'True' ;` 
 
 
 As you can see, I'm returning the servername and the instanceport from
 the server table.  This will later allow me to create psql commands to
 connect to each server dynamically.  
 
 I had the script working correctly when I was just running it for the
 server name as below...
 #!/bin/bash 
 SERVER_NAMES=`psql Admin -Upostgres -hMYSERVER -t -cSELECT servername
 from server where serverclass = 3 and isactive = 'True' ;` 
 
 
 Does anyone know the easiest way to get both values out of some variables
 that I could set?  
 
 Later in the script, I'm creating a for loop and iterating through the
 server names and would like to build connection strings dynamically with
 the results from the select string...
 
 
 
   for i in $SERVER_NAMES 
   do 
psql -Upostgres -h$i -p$i -A -t -cSELECT '$i', '$BATCHTIME', name,
 setting, category, short_desc, context, vartype, source, min_val, max_val
 FROM pg_settings; | psql Admin -Upostgres -hMYSERVER -t -cCOPY
 serverconfigsetting FROM STDIN WITH DELIMITER '|' NULL AS '';
echo Done with $i 
   done 
 
 
 As you can see I have -h$i -p$i in the script for the host and port. 
 Again the script worked fine when I just had the [EMAIL PROTECTED] in there...
 
 I know that the current forloop is incorrect specifiying the $i twice but
 I just put that in there to show an example of what I was hoping to do... 
 It would probably be more accessing the array value like -h$i[0:0]
 -p$i[0:1] in pseudo code for accessing array values. 
 
 Thanks in advance,
 
 Paul
 
 
 
 
 
 

-- 
View this message in context: 
http://www.nabble.com/shell-script-to-populate-array-values-tf2796502.html#a7844387
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Online index builds

2006-12-12 Thread Gregory Stark
Jeff Davis [EMAIL PROTECTED] writes:

 I think what I'm confused about is how these non-transactional commands
 work (like VACUUM, etc). Are they still transactions, and just can't be
 run in a block?

In the case of CREATE INDEX CONCURRENTLY it can't be run in a transaction
block because it itself consists of two transactions. First it builds an
index, then it has to commit that and start a second transaction that
completes the index.

 My original thinking was that the shared lock could be unlocked before
 the exclusive lock is taken to switch the relfilenodes and to drop the
 index. However, if it is a real transaction, clearly you can't unlock in
 the middle.

Well you can't play games with the relfilenode if it's concurrent or else
other transactions executing inserts and updates won't be updating your new
index.

You could create a whole new index concurrently, then in a completely new
(third) transaction drop the old one. The problem there is that there could be
other things (namely foreign key constraints) depending on the old index.
Fixing them all to depend on the new one may not be a problem or it may, I
haven't thought it through. Nor have I thought through whether it would be
possible to keep the original name.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Statement timeout not working on broken connections with active queries

2006-12-12 Thread Brian Wipf

On 12-Dec-06, at 4:30 PM, Tom Lane wrote:

Brendan O'Shea [EMAIL PROTECTED] writes:

We have discovered a situation where the statement_timeout is not
honored for broken connections.  If a connection is in the process of
returning results to the client and the connection is severed (for
example, network cable on client is unplugged) then the query continues
to run on the server even after the statement_timeout is exceeded.


Well, the backend is blocked on a write() to the socket and cannot abort
that without bollixing the connection completely (i.e., loss of message
synchronization).  So I think waiting until the TCP stack declares
failure is an appropriate response.  If you want faster TCP failure,
see whether your TCP stack allows timeout adjustments.

Note that the query is not running in the sense of consuming any
meaningful CPU or I/O resources in this state ...

Of course, the query may have locks that block other meaningful
operations. When a hung connection like this occurs on our server, I
have resorted to using gdb to return from the write() method. Is this
an acceptable way to kill the connection on the server side?



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Online index builds

2006-12-12 Thread Tom Lane
Jeff Davis [EMAIL PROTECTED] writes:
 My point was that, because we can run it in multiple transactions, can't
 we drop the nonexclusive lock before acquiring the exclusive lock,

No.  What happens if someone renames the table out from under you, to
mention just one possibility?  If you've been holding nonexclusive lock
for a long time (as you would've been) there's a nontrivial chance that
someone is already queued up for an exclusive lock and will get in
before you do.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PITR and moving objects between table spaces

2006-12-12 Thread Glen Parker

You can probably make this work if you don't issue any CREATE TABLESPACE
commands while PITR logging is active, but you'll want to test your
procedures pretty carefully.


That's what I thought, and after your message, I went ahead with it and 
had no problems.  Thx, Tom.


-Glen

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Online index builds

2006-12-12 Thread Jeff Davis
On Tue, 2006-12-12 at 18:40 -0500, Tom Lane wrote:
 Jeff Davis [EMAIL PROTECTED] writes:
  My point was that, because we can run it in multiple transactions, can't
  we drop the nonexclusive lock before acquiring the exclusive lock,
 
 No.  What happens if someone renames the table out from under you, to
 mention just one possibility?  If you've been holding nonexclusive lock
 for a long time (as you would've been) there's a nontrivial chance that
 someone is already queued up for an exclusive lock and will get in
 before you do.
 

I'm trying to understand what would actually happen. I assume you mean
change the name of the index, because after we create the index
concurrently, it doesn't matter what the table name is.

(1) We create the new index concurrently
(2) someone gets an exclusive lock before we do, and they rename the old
index (foo_pkey is now known as bar_pkey).
(3) We don't find the index, throw an error, and have an extra index
hanging around. Same for any other situation that makes us unable to
continue in a well-defined way.

Even if we deleted the extra index on step 3, we could consider that
reasonable behavior because the user went out of their way to rename an
index with a concurrent REINDEX. They could then try again, albeit with
some wasted effort.

Even thinking about strange edge cases, like if they decide to use their
exclusive lock to swap the names of two indexes in step 2, we could
probably detect whether it was the same old index or not; perhaps by
remembering the relfilenode of the index we're REINDEXing.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Online index builds

2006-12-12 Thread Tom Lane
Jeff Davis [EMAIL PROTECTED] writes:
 On Tue, 2006-12-12 at 18:40 -0500, Tom Lane wrote:
 No.  What happens if someone renames the table out from under you, to
 mention just one possibility?

 I'm trying to understand what would actually happen. I assume you mean
 change the name of the index, because after we create the index
 concurrently, it doesn't matter what the table name is.

Well, if you don't like that one, consider ALTER OWNER revoking your
privilege to perform the REINDEX.  Without an explicit check for the
case, the code would proceed to do it anyway.  (And even if it did
check, what then?  You don't really have the right anymore to undo what
you did so far, either.)

Yeah, we could add defenses one by one for the cases we could think of,
but I'd never feel very secure that we'd covered them all.

Another point here is that I think you are assuming that an OID is a
unique-for-all-time identifier for a table or index.  It's not; as soon
as someone drops the table or index, the OID is up for grabs and could
be re-used for an unrelated table or index.  Admittedly one would have
to be quite unlucky to get burnt that way, but deliberately introducing
race conditions in the name of convenience is not my idea of the way to
design a database.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] shell script to populate array values

2006-12-12 Thread SCassidy
What's stopping you from using the variable?  It works fine for me.

The only problem I see is that you are quoting an integer value (SELECT
'$SERVERCOLLECTIONTIMEID', column1 FROM mytable;) for no reason (leave off
the single quotes around $SERVERCOLLECTIONTIMEID),  although it does not
seem to keep it from working on my system, and the actual SELECT statement
is probably not what you really intended.  I assume you really meant to use
something with a WHERE clause in it, instead of using the value as an
embedded literal.  Something like:

ANSWER=`psql -U postgres --quiet --no-align --field-separator ' ' -t -c
"select txtval1 from test1 where id1 = $SERVERCOLLECTIONTIMEID;" Admin`


Full sample tested (without showing my declarations for user and database)
in a sample table in one of my db's:

SOMEVAL='some example text'
SERVERCOLLECTIONTIMEID=`psql -U $MYUSR -t -c "insert into test1 (txtval1)
values ('$SOMEVAL');select currval('test1_id1_seq');" $MYDB `
echo "SERVERCOLLECTIONTIMEID is $SERVERCOLLECTIONTIMEID"

#simple retrieve:
ANSWER=`psql -U $MYUSR --quiet --no-align  -t -c "select id1, txtval1 from
test1 where id1 = $SERVERCOLLECTIONTIMEID;" $MYDB`
echo "ANSWER is $ANSWER"

#another way to retrieve the data:
IFS=\|
psql -U $MYUSR --quiet --no-align  -t -c "select id1, txtval1 from test1
where id1 = $SERVERCOLLECTIONTIMEID;" $MYDB |
while read COL1 COL2; do
  echo "Col1: $COL1,  Col2: $COL2"
done

Produces output:
SERVERCOLLECTIONTIMEID is   16
ANSWER is 16|some example text
Col1: 16,  Col2: some example text

This is really not a PostgreSQL question, just a bash-scripting question.

You could also SELECT one column at a time into one variable, without
having to worry about splitting the columns into separate variables.

Susan Cassidy



   
Paul Silveira wrote on 12/12/2006 03:35 PM:





I wonder if I could ask another question on this thread...

How would i get the latest ID value of a table in psql and then use that
value as part of an insert statement...

For example...

I would like ot declare a variable in a shell script and then use that
value
in the insert statement later in the script...
1)  set the variable...
SERVERCOLLECTIONTIMEID = `psql Admin -Upostgres -hMYSERVER -t -cINSERT
INTO
servercollectiontime(batchtime) VALUES('$BATCHTIME'); SELECT
currval('servercollectiontime_servercollectiontimeid_seq');`

2)  use the variable...
psql -Upostgres -hMYSERVER -t -cSELECT '$SERVERCOLLECTIONTIMEID', column1
FROM mytable;

The reason why I want to use the variable is because I want to eventually
insert that data into a table that is looking for that ID value.

Thanks in Advance...









Paul Silveira wrote:

 Hello,

 I would like to create a shell script that would populate two variables
 with the return of a SELECT statement that would return two attributes...


 For example...

 #!/bin/bash
 SERVER_NAMES=`psql Admin -Upostgres -hMYSERVER -t -cSELECT servername,
 instanceport from server where serverclass = 3 and isactive = 'True' ;`


 As you can see, I'm returning the servername and the instanceport from
 the server table.  This will later allow me to create psql commands to
 connect to each server dynamically.

 I had the script working correctly when I was just running it for the
 server name as below...
 #!/bin/bash
 SERVER_NAMES=`psql Admin -Upostgres -hMYSERVER -t -cSELECT servername
 from server where serverclass = 3 and isactive = 'True' ;`


 Does anyone know the easiest way 

Re: [GENERAL] Online index builds

2006-12-12 Thread Jeff Davis
On Tue, 2006-12-12 at 19:13 -0500, Tom Lane wrote:
 Jeff Davis [EMAIL PROTECTED] writes:
  On Tue, 2006-12-12 at 18:40 -0500, Tom Lane wrote:
  No.  What happens if someone renames the table out from under you, to
  mention just one possibility?
 
  I'm trying to understand what would actually happen. I assume you mean
  change the name of the index, because after we create the index
  concurrently, it doesn't matter what the table name is.
 
 Well, if you don't like that one, consider ALTER OWNER revoking your
 privilege to perform the REINDEX.  Without an explicit check for the
 case, the code would proceed to do it anyway.  (And even if it did
 check, what then?  You don't really have the right anymore to undo what
 you did so far, either.)
 
 Yeah, we could add defenses one by one for the cases we could think of,
 but I'd never feel very secure that we'd covered them all.
 

Ok, fair enough. I just wanted to make sure I understood the reason why
we couldn't (shouldn't?) do it.

 Another point here is that I think you are assuming that an OID is a
 unique-for-all-time identifier for a table or index.  It's not; as soon
 as someone drops the table or index, the OID is up for grabs and could
 be re-used for an unrelated table or index.  Admittedly one would have
 to be quite unlucky to get burnt that way, but deliberately introducing
 race conditions in the name of convenience is not my idea of the way to
 design a database.
 

It essentially does boil down to just convenience. In general we don't
have much ability to change primary key status for columns without
creating/dropping indexes non-concurrently. Admittedly, that isn't
important, but would be convenient.

Regards,
Jeff Davis



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


  1   2   >