[SQL] A generic trigger?

2003-09-13 Thread ow
Hi,

I'm looking for a way to minimize the number of functions that support triggers.
E.g., there are "company" and "company_backup" tables. An update trigger on the
"company" table puts a record into the "company_backup" table whenever a
"company" record is updated.

The problem is that there are quite a few other tables for which similar backup
logic has to be done (e.g. "customer" and "customer_backup", etc). The backup
logic is the same, only the structure of the tables changes.

Is there a way to write a generic trigger/function that would handle the backup
regardless of the table structure?

Thanks in advance.

Re: [SQL] A generic trigger?

2003-09-14 Thread ow
--- Peter Childs <[EMAIL PROTECTED]> wrote:
>   Yes, it is possible and I've done it. The reason I'm not using it is because
> I wrote it in PL/Python, and if you attach the same trigger to more than one
> table in the same transaction, PL/Python crashes (actually the entire server
> crashes, but that's not the point). Well, it did when I last tested it in early
> versions. I'm still thinking of getting around to rewriting it in a language
> without this bug, since nobody sounds like they are going to fix it. C might
> be best!

[snip]

Hi,

In my case, "company" and "company_backup" tables have the *same* structure, so
I was hoping for a simpler solution using just plpgsql.

Any ideas? Thanks
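
On modern PostgreSQL (8.4+ for EXECUTE ... USING, 9.1+ for format()) a single
PL/pgSQL function can serve all such tables by deriving the backup table name
from TG_TABLE_NAME. A minimal sketch, assuming every table "foo" has a
same-structure backup table named "foo_backup" in the same schema:

CREATE OR REPLACE FUNCTION generic_backup() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Copy the pre-update row into <table>_backup, which is assumed to have
    -- exactly the same column list as the table the trigger is attached to.
    EXECUTE format('INSERT INTO %I SELECT ($1).*', TG_TABLE_NAME || '_backup')
    USING OLD;
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$;

CREATE TRIGGER company_backup_trg
    AFTER UPDATE ON company
    FOR EACH ROW EXECUTE PROCEDURE generic_backup();
-- the same function can be attached to customer, etc.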

[SQL] SET database TO ...?

2003-09-15 Thread ow
Hi,

Suppose one server/postmaster is running several databases. Is there something
like "SET database TO whatever" (and I mean *database*, not schema)? I need this
to ensure that a script runs only against the db for which it was designed.

Thanks

Re: [SQL] SET database TO ...?

2003-09-15 Thread ow
--- Achilleus Mantzios <[EMAIL PROTECTED]> wrote:
[...]
> dynacom=# \c bab
> You are now connected to database bab.
> bab=# 

I'm actually looking for a way to set the database from a *script*, not the psql
console, if this is possible (similar to "SET search_path TO whatever;").

Any ideas? Thanks
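
One way to get this effect from a plain SQL script is to check current_database()
at the top and abort if it is not the expected database. A minimal sketch (the DO
block needs PostgreSQL 9.0 or later; "intended_db" is a placeholder):

\set ON_ERROR_STOP on

DO $$
BEGIN
    IF current_database() <> 'intended_db' THEN
        RAISE EXCEPTION 'this script must run against intended_db, not %',
                        current_database();
    END IF;
END;
$$;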

[SQL] A simple way to Create type ...?

2003-09-16 Thread ow
Hi,

I had a look at the "create type" docs and it seems somewhat complex, involving
creation of functions, etc. I hope there's a simpler way for the following:

How should one declare a new custom type, say "AddressType", that corresponds
internally to "varchar(50)"? In other words, all columns that are assigned
"AddressType" would internally be "varchar(50)".

Example:
create type AddressType ... -- how to do it in a simple way?

create table ADDRESS
(
  address_id      int          not null,
  street_address1 AddressType  not null,
  street_address2 AddressType  not null,
  ...
);

Thanks

Re: [SQL] A simple way to Create type ...?

2003-09-16 Thread ow
--- ow <[EMAIL PROTECTED]> wrote:
> Hi,
[...]
> How should one declare a new custom type, say, "AddressType" that corresponds
> internally to "varchar(50)". In other words, all columns that are assigned
> "AddressType" would internally be "varchar(50)".
> 
> Example:
> create type AddressType ... -- how to do it in a simple way?

I guess ideally it'd be:
  create type AddressType AS varchar(50);
but it does not work.

Any ideas? Thanks

Re: [SQL] A simple way to Create type ...?

2003-09-16 Thread ow
--- Rod Taylor <[EMAIL PROTECTED]> wrote:
> Only one keyword off. SQL calls this a domain.
> 
> They're limited in 7.3, but much improved (not yet perfect) for 7.4.
> 
> http://www.postgresql.org/docs/7.3/interactive/sql-createdomain.html

Excellent news! Thanks
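
For reference, the domain version of the earlier example then becomes roughly:

CREATE DOMAIN AddressType AS varchar(50);

CREATE TABLE address
(
  address_id      int          NOT NULL,
  street_address1 AddressType  NOT NULL,
  street_address2 AddressType  NOT NULL
);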

[SQL] psql output and password Qs

2003-09-19 Thread ow
Hi,

1) When psql is run from a script, how would one save *all* output generated by
psql (including errors, info messages, etc.) into a file? I tried redirecting
output with ">" but that did not save error/confirmation messages.

2) When psql is run from a script, how would one pass a password to psql?

Thanks
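
A rough shell sketch of both pieces (file, database, and user names are
placeholders): stderr has to be captured explicitly, and the password can come
from the PGPASSWORD environment variable or a ~/.pgpass file instead of a prompt.

# 1) capture regular output plus errors/notices/confirmations in one file
psql -d mydb -U myuser -f script.sql > run.log 2>&1

# 2) non-interactive password: an environment variable ...
PGPASSWORD=secret psql -d mydb -U myuser -f script.sql > run.log 2>&1

# ... or a ~/.pgpass file (mode 0600) with lines of the form
#    hostname:port:database:username:password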

[SQL] FK Constraints, indexes and performance

2003-10-05 Thread ow
Pg 7.3.3 on i386 compiled by GCC 2.96

Hi,

It's understood that FK constraints carry some performance hit. However, the
performance hit I observe is huge. My situation is illustrated by the table
structures below.

Parent table has 20,000 rows and Child table has about 60,000.

Without fk_child_parentid constraint, it takes about 9 seconds to insert 10,000
records into the Child table. WITH fk_child_parentid constraint, it takes about
300 (!) seconds to insert the same 10,000 into the Child table.

The reason for such poor performance with the fk_child_parentid constraint is,
I think, that when verifying the constraint PG is doing a sequential scan of
the Parent table instead of using the implicit index created by the pk_parent
constraint. That many sequential scans (one per inserted row) against a
20,000-row table really hurt performance.

The reason I think PG is doing sequential scans is that the execution plan
for the following query shows two sequential scans:
explain select *
from parent, child
where child.parentId = parent.id

With reference to the above, two (2) questions:

1) Is there anything that can be done to significantly improve Child insert
performance when fk_child_parentid is in place?

2) Why wouldn't PG use the implicit index pk_parent when resolving
"where child.parentId = parent.id" in the query above?

Thanks


 Test table structures
Domains
test.did = int
test.dname = varchar(30)
test.dstringlong = varchar(50)

CREATE TABLE test.parent
(
  id test.did NOT NULL,
  name test.dname NOT NULL,
  CONSTRAINT pk_parent PRIMARY KEY (id),
  CONSTRAINT ak_parent_name UNIQUE (name)
) WITH OIDS;

CREATE TABLE test.child
(
  id test.didlong NOT NULL,
  parentid test.did NOT NULL,
  name test.dstringlong NOT NULL,
  CONSTRAINT pk_child PRIMARY KEY (id),
  CONSTRAINT fk_child_parentid FOREIGN KEY (parentid) REFERENCES test.parent (id)
    ON UPDATE RESTRICT ON DELETE RESTRICT
) WITH OIDS;
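
One way to look at the plan the FK check itself gets is to mimic the RI query
with a prepared statement over the domain type (the same trick used later in
this archive on 7.4); a sketch, with an arbitrary parameter value:

PREPARE fk_check(test.did) AS
    SELECT 1 FROM ONLY test.parent x WHERE id = $1 FOR UPDATE OF x;

EXPLAIN ANALYZE EXECUTE fk_check(1);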

Re: [SQL] FK Constraints, indexes and performance

2003-10-06 Thread ow

--- Tom Lane <[EMAIL PROTECTED]> wrote:
> It looks to me like the 7.3 planner will not choose indexscans for the FK
> check queries in this example, because the comparison operators are
> misconstrued as shown in this thread:
> http://archives.postgresql.org/pgsql-hackers/2003-03/msg00997.php
> The equality operator on your domain is taken to be "oideq" which won't
> be the same operator associated with the index on the column.
> 
> This seems to be fixed in 7.4.  In 7.3 I'd counsel not introducing
> domains unnecessarily.
> 
>   regards, tom lane

It looks like it worked. I moved to 7.4.b4 and, in my case, the performance
improvement on insert is drastic (about 30x). Thanks

[SQL] pg 7.4.rc1, Range query performance

2003-11-08 Thread ow
Postgresql 7.4.rc1 on i686 pc linux compiled by gcc 2.06.
All configuration settings are default.


Hi,

Trying to find a way to improve range query performance.

The table Test has about 30 million records.

-- DLong, Dtimestamp, Dint, etc are domains of the respective types.
create table Test (
id  DLong   not null,
a   Dtimestamp  null,
b   Dintnot null,
c   Dintnot null,
d   Dstring null,
constraint PK_id primary key (id),
constraint AK_abc unique (a, b, c)
);

The following query retrieves a single record; it runs against the AK index and
is extremely fast (30-150 ms) for a table of this size:

-- returns result in 30-150 ms
select * from Test
where a = '2002-09-01'
  and b = 5
  and c = 255

OTOH, the following range query that returns 30 records performs much slower,
about 33000 ms. The query is using the AK index, as it should, but why does it
take so much longer to scan the index for a range of just 30 records? I see that
PG is hitting the disk very intensively for this query. Can the query be
rewritten, etc., to improve performance? Thanks

select * from Test
where a >= '2002-09-01'
  and a <= '2002-09-30'
  and b = 5
  and c = 255

QUERY PLAN
Index Scan using ak_abc on test  (cost=0.00..106.27 rows=30 width=53) (actual
time=33.536..33200.998 rows=30 loops=1)
  Index Cond: (((a)::timestamp without time zone >= '2002-09-01
00:00:00'::timestamp without time zone) AND ((a)::timestamp without time zone
<= '2002-09-30 00:00:00'::timestamp without time zone) AND ((b)::integer 
= 5) AND ((c) (..)
Total runtime: 33201.219 ms

Re: [SQL] pg 7.4.rc1, Range query performance

2003-11-10 Thread ow
Hi,

I tried CLUSTER and it did improve performance somewhat. The query against the
"clustered" table performs about five (5) times better than against the same
table "non-clustered". However, even after the table was clustered, the
difference in performance between the single-record query and the range query
is significant:

table Test (see below) has 10M records
  single record - 31 ms and remains mostly constant as the table grows
  range query returning 30 records - about 10 secs and grows together with
  the table

Also, CLUSTER is locking the table (in our case this also means locking the
database), so it may be impossible to use it in production on large tables
(impossible in our case).

It feels like I really have a problem here. Any ideas? Thanks

P.S. For the future I would consider implementing "CREATE [CLUSTERED] INDEX"


--- Bruce Momjian <[EMAIL PROTECTED]> wrote:
> 
> Try CLUSTER --- that usually helps with index scans on ranges.
> 
> ---
> 
> ow wrote:
> > [snip]

Re: [SQL] pg 7.4.rc1, Range query performance

2003-11-10 Thread ow

--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> I'm not sure that AK_abc is the best index for check a range on a and
> single values on b and c.  I'd think that something like an index
> on (b,c,a) would probably be better for this purpose (without doing any
> testing ;) ).

That would not work for us since most of the time users are working with the
most recent data, hence column "a" is first in the index.

Thanks

Re: [SQL] pg 7.4.rc1, Range query performance

2003-11-10 Thread ow
--- Bruce Momjian <[EMAIL PROTECTED]> wrote:
> Strange 30 records takes 30x the time than one record.  Can you run
> ANALYZE and send us an EXPLAIN of the query to make sure it hasn't
> changed?
> 

explain analyze select * from Test
where a >= '2002-06-18'
  and a <= '2002-07-18'
  and b = 5
  and c = 255

QUERY PLAN
Index Scan using ak_abc on test  (cost=0.00..121.23 rows=34 width=53) (actual
time=18.060..10726.387 rows=31 loops=1)
  Index Cond: (((a)::timestamp without time zone >= '2002-06-18
00:00:00'::timestamp without time zone) AND ((a)::timestamp without time zone
<= '2002-07-18 00:00:00'::timestamp without time zone) AND ((b)::integer = 5)
AND ((c) (..)
Total runtime: 10726.663 ms

Thanks

Re: [SQL] pg 7.4.rc1, Range query performance

2003-11-11 Thread ow

--- Bruce Momjian <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > Sorry there isn't a simple solution.
> > 
> > But there is: make an index with the column order (b,c,a).
> 
> Oh, yea, right.  If he puts the columns he is doing a straight equals
> comparison first, the 'a' comparison will work fine.  Great.
> 

Yes, I think Stephan Szabo was trying to convince me of something like that for
quite some time; it was not clear to me then. I'll give it a try.

I still think "CREATE [CLUSTERED] INDEX" (or whatever) is an important feature
and should be implemented in pgSql. Locking large (or huge) tables for
prolonged periods of time may not be acceptable in many setups.

Thanks
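
For reference, the suggested index would look something like this (the name is
arbitrary); with the equality columns b and c leading, the range condition on a
then scans one contiguous slice of the index:

create index I_test_bca on Test (b, c, a);
analyze Test;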

[SQL] Programmatically switching database

2003-11-14 Thread ow
Hi,

Suppose a pgSql server is running databases "db1" and "db2" that are in the same
db cluster. A client app establishes connection "conn1" to db1. Is there a way
to programmatically switch conn1 to use db2 without doing
disconnect-from-db1-connect-to-db2? Something like what "\c" does, but to be
used independently of psql? I need this to be able to reuse a pool of
connections to db1 for actions on db1, db2 ... dbn.

Thanks

Re: [SQL] Programmatically switching database

2003-11-15 Thread ow

---  Peter Eisentraut wrote:
> Nothing prevents you from keeping the connection to db1 open when you open
> a connection to db2.  By the way, psql's "\c" command does exactly
> disconnect-from-db1-connect-to-db2.

That's the whole point: I'm trying to avoid maintaining a *separate* connection
pool for each db. In other words, instead of having, say, 5 connection pools
to 5 dbs with a total of 1000 connections, I could use just one (1) pool with
200 connections, if there were a way to "switch db" (or to do a cross-db query).

Any ideas? Thanks
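
Not a true "switch db", but one partial workaround is the contrib dblink module,
which lets a session connected to db1 run a query against db2 over a second,
server-side connection. A minimal sketch (database, table, and column names are
placeholders; the AS clause must spell out the result columns):

-- dblink is a contrib module (CREATE EXTENSION dblink on 9.1+; older releases
-- load the contrib SQL script instead)
SELECT *
FROM dblink('dbname=db2', 'SELECT id, name FROM some_table')
     AS remote(id integer, name text);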

Re: [SQL] Programmatically switching database

2003-11-15 Thread ow

--- Peter Eisentraut <[EMAIL PROTECTED]> wrote:
> I'm afraid that what you want to do is not possible.  Perhaps you want to
> organize your data into schemas, not databases.

There's too much data to put into one db. If anything happens to it, I'll
never be able to restore (or dump) it in time. BTW, MySQL has cross-db queries.

Thanks

Re: [SQL] Programmatically switching database

2003-11-15 Thread ow

--- Peter Eisentraut <[EMAIL PROTECTED]> wrote:
> You could just dump individual schemas.

How? The doc only mentions db: pg_dump [option...] [dbname]

Then, how would I lock users out from the schema while it's being loaded?

Thanks

Re: [SQL] Programmatically switching database

2003-11-15 Thread ow

--- ow <[EMAIL PROTECTED]> wrote:
> How? The doc only mentions db: pg_dump [option...] [dbname]
> 
> Then, how would I lock users out from the schema while it's being loaded?

Never mind how; I see there's a "-n namespace" option in 7.4. But still, how
would I lock users out from the schema while it's being loaded?

Thanks
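
On the locking question, one rough approach (not from this thread) is to revoke
the schema's USAGE privilege for the duration of the reload; it assumes ordinary
users reach the schema only via PUBLIC and are not superusers, and the schema
name is a placeholder:

revoke usage on schema my_schema from public;
-- ... reload the schema here ...
grant usage on schema my_schema to public;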

Re: [SQL] Programmatically switching database

2003-11-15 Thread ow
--- Jan Wieck <[EMAIL PROTECTED]> wrote:
> #!/bin/sh
> 
> (
>  echo "start transaction;"
>  cat $2
>  echo "commit transaction;"
> ) | psql $1
> 
> 
> 
> then call it as
> 
> reload_in_transaction my_db my_namespace.dump
> 
> Since the whole dump will be restored inside of one transaction, nobody 
> will see it while it's reloading.

Interesting idea. I know some RDBMSs that are very picky about DDL in
transactions but it appears pgSql handles it without any problems.

My concern though ... wouldn't the pgSql server collapse when faced with a
transaction spanning 100M+ records? And how would that affect overall
restore time? I should've tried it myself but I can't; my "regular"
pg_restore has been running for 4+ hours, can't kill it now ...

Thanks

[SQL] WITHOUT OIDS by default

2003-11-16 Thread ow
Hi,

Is there a way to specify that all tables should be created WITHOUT OIDS by
default?

Thanks

[SQL] Seq Scans when index expected to be used

2003-11-29 Thread ow
pgSql 7.4.0

Hi,

I'm trying to find duplicate values in a large table (about 80M records).
Somehow, neither of the two (2) queries (see below) is using the index "I_bca"
that, I believe, should've been used.

Any ideas? Thanks

--

CREATE TABLE te.test
(
  id te.didlong NOT NULL,
  a te.dtimestamp,
  b te.didint NOT NULL,
  c te.didint NOT NULL,
  d te.dstring
) WITHOUT OIDS;

-- attempt to create AK_bca failed due to duplicate values
-- alter table te.test
--   add constraint AK_bca unique (b, c, a);

create index I_bca on te.test (
b, c, a
);

alter table te.test
   add constraint PK_id primary key (id);

analyze te.test;

--
--  first attempt to find duplicate values
explain select b, c, a
from test
group by b, c, a
having count(*) > 1

QUERY PLAN
"GroupAggregate  (cost=19644987.88..21026410.30 rows=78938424 width=16)"
"  Filter: (count(*) > 1)"
"  ->  Sort  (cost=19644987.88..19842333.94 rows=78938424 width=16)"
"Sort Key: b, c, a"
"->  Seq Scan on test  (cost=0.00..1589706.24 rows=78938424 width=16)"


--
--  second attempt to find duplicate values
explain select DV1.b, DV1.c, DV1.a
from test DV1, test DV2
where DV1.b = DV2.b
  and DV1.c = DV2.c
  and DV1.a = DV2.a
  and DV1.id <> DV2.id

QUERY PLAN
"Merge Join  (cost=42373495.75..45309925.87 rows=95424260 width=16)"
"  Merge Cond: (("outer"."?column5?" = "inner"."?column5?") AND
("outer"."?column6?" = "inner"."?column6?") AND ("outer"."?column7?" =
"inner"."?column7?"))"
"  Join Filter: (("outer".id)::bigint <> ("inner".id)::bigint)"
"  ->  Sort  (cost=21186747.88..21384093.94 rows=78938424 width=24)"
"Sort Key: (dv1.a)::timestamp without time zone, (dv1.c)::integer,
(dv1.b)::integer"
"->  Seq Scan on test dv1  (cost=0.00..1589706.24 rows=78938424
width=24)"
"  ->  Sort  (cost=21186747.88..21384093.94 rows=78938424 width=24)"
"Sort Key: (dv2.a)::timestamp without time zone, (dv2.c)::integer,
(dv2.b)::integer"
"->  Seq Scan on test dv2  (cost=0.00..1589706.24 rows=78938424
width=24)"

Re: [SQL] Seq Scans when index expected to be used

2003-11-29 Thread ow

--- Joe Conway <[EMAIL PROTECTED]> wrote:
> > 
> >>explain select b, c, a
> >>from test
> >>group by b, c, a
> >>having count(*) > 1
> 
> Why would you expect this to use an index scan when it needs to read the 
> entire table? If you read the whole table (or even a significant 
> fraction of it), a seq scan is faster.

My impression was that the index "I_bca" covers the query, hence there should
not be a need to go to the table itself. Why would it?

P.S.
explain analyze vs explain: normally I would've used "explain analyze", but in
this case it's taking way too long, so I used "explain".

Thanks

[SQL] 7.4 - FK constraint performance

2004-02-11 Thread ow
PostgreSQL 7.4 on i386-linux-gnu, compiled by GCC 2.96

-- about 10 records
CREATE TABLE my.Small
(
  id my.dint NOT NULL,
  code my.dvalue NOT NULL,
  CONSTRAINT pk_1 PRIMARY KEY (id)
) WITHOUT OIDS;

-- about 80M rows
CREATE TABLE my.Large 
(
  id my.dlong NOT NULL, 
  small_id my.dint NOT NULL,
  value my.value,
  CONSTRAINT pk_2 PRIMARY KEY (id),
  CONSTRAINT fk_2 FOREIGN KEY (small_id) REFERENCES my.small (id)
    ON UPDATE RESTRICT ON DELETE RESTRICT
) WITHOUT OIDS;

CREATE INDEX small_fk  ON my.Large USING btree (small_id);

-
The following queries run in less than 40 ms.
1) select 1 from Large where small_id = 239
2) SELECT 1 FROM ONLY "my"."Large" x WHERE "small_id" = 239 FOR UPDATE OF x

3) delete from Small where id = 239 
Feb 11 18:07:36 srv07 postgres[2091]: [91-1] LOG:  statement: SELECT 1 FROM
  ONLY "my"."Large" x WHERE "small_id" = $1 FOR UPDATE OF x
Feb 11 18:10:41 srv07 postgres[2091]: [92-1] LOG:  duration: 185273.262 ms

When I try to delete a record, it takes > 3 min. Why is it taking so long if
practically the same select query (see (2)) runs very quickly? Is there anything
that can be done to fix it?

Thanks

Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow

--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
> case?  As a random question, does increasing the statistics target on
> Large.small_id and re-analyzing change its behavior?

Ran analyze, the result is the same. Here's more info:

1) There's 1 row in "Large" for "small_id" = 239 
SELECT 1 FROM ONLY "my"."Large" x WHERE "small_id" = 239 FOR UPDATE OF x

Quick query. Explain shows index scan.

2) There are many rows in "Large" for "small_id" = 1
SELECT 1 FROM ONLY "my"."Large" x WHERE "small_id" = 1 FOR UPDATE OF x

Runs for about 3 min. Explain shows table scan.

3) delete from Small where id = 239
Runs for about 3 min. It does appear that a table scan is used for FK
verification. But why? I am deleting "id = 239", not "id = 1", and the query in
(1) runs very quickly. I had a suspicion that the wrong id was passed during FK
verification, but the FK constraint DOES work.

4) Domain types used in the example above
my.dint = int
my.dlong = int8
my.dvalue =  varchar(15)

Thanks

Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow

--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> You also did the alter table to up the statistics target on the column,
> right?

Not really. I did not change the default stats settings in
postgresql.conf. Not sure what needs to be changed; can you clarify?

Thanks
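
For reference, the per-column statistics target Stephan refers to is set with
ALTER TABLE, not postgresql.conf; roughly:

ALTER TABLE my.Large ALTER COLUMN small_id SET STATISTICS 100;
ANALYZE my.Large;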

Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> ow <[EMAIL PROTECTED]> writes:
> > When I try to delete record, it takes > 3 min.
> 
> I think it must be using a seqscan for the foreign key check query.
> Could you try this and show the results? 

1) SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = 201 FOR UPDATE OF x;

QUERY PLAN
Index Scan using small_fk on large x  (cost=0.00..6.01 rows=1 width=6) (actual
time=0.251..0.251 rows=0 loops=1)
  Index Cond: ((small_id)::integer = 201)
Total runtime: 0.338 ms

2) prepare foo(my.dint) as
SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;

explain analyze execute foo(201);

QUERY PLAN
Seq Scan on large x  (cost=0.00..1787052.30 rows=7893843 width=6) (actual
time=210566.301..210566.301 rows=0 loops=1)
  Filter: ((small_id)::integer = ($1)::integer)
Total runtime: 210566.411 ms

Thanks

Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow

--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Well, there's the smoking gun all right.  Why does it think there are
> going to be 7893843 matching rows!?  Could we see the pg_stats row for
> the large.small_id column?
> 
>   regards, tom lane

schemaname        | my
tablename         | large
attname           | small_id
null_frac         | 0
avg_width         | 4
n_distinct        | 10
most_common_vals  | {7,3,5,1,4,2,8,10,6,9}
most_common_freqs | {0.108667,0.105,0.104333,0.101333,0.100667,0.098,0.098,0.098,0.094,0.091}
histogram_bounds  |
correlation       | 0.0597573

Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> According to this entry, your small_id column only contains the ten
> values 1..10, roughly evenly distributed.  So why are you probing for
> 239??

Let's say we have City (small) and Person (large) tables. A new city was added
(mistakenly) with id=239; it does not have any "persons" assigned yet. Hence,
we want to remove the wrong "city" record.

In any case, one can't remove a record from "small" unless there are NO matching
records in "large"; RI will not allow it. The initial problem was that I tried to
delete a record from "small" and it was taking about 3 min to do that.

> The planner is certainly going to estimate a probe for an unspecified
> value as retrieving 10% of the table, and under that assumption it's
> quite right to use a seqscan.

Sounds pretty bad for my case. Any way to avoid the 10% scan?

Thanks

[SQL] currval() without specifying the sequence name

2004-03-09 Thread ow
Hi,

Is there a way to retrieve the current value (i.e. the last used value) of the
last used sequence without specifying the sequence name? Something like the
following:

create table test ( id serial, value varchar(10));
insert into someOthertest (value) values ('100');
insert into test (value) values ('10');
insert into test (value) values ('20');

select currval() should return 2 (the last value of the test table's sequence)

Thanks

Re: [SQL] currval() without specifying the sequence name

2004-03-09 Thread ow

--- Achilleus Mantzios <[EMAIL PROTECTED]> wrote:
> 
> You could use the same sequence in many tables.
> 

I guess I could but then:

1) I'd still have to know the sequence name. When integrating pgSql with 3rd-party
frameworks I'd either have to pass it there (could be a very difficult task) or
make some assumptions (also not very good).

2) There may be a need to use different sequences in an app.

3) If one sequence is used, in many cases it'll have to be of type int8, and
pgSql does not handle searches with int8 very nicely.

Thanks

Re: [SQL] currval() without specifying the sequence name

2004-03-09 Thread ow
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> or that the last one that's touched is the one you want?

Correct.
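
For the record, a later release added exactly this: PostgreSQL 8.1 introduced
lastval(), which returns the value most recently produced by nextval() for any
sequence in the current session. With the earlier example:

create table test (id serial, value varchar(10));
insert into test (value) values ('10');
insert into test (value) values ('20');

select lastval();  -- returns 2, the last value generated for test's sequence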

[SQL] Follow-up: FK constraint performance

2004-04-20 Thread ow

Hi,

This issue appears to be present in 7.4.2. I'm not sure if the plan was to fix
it or leave it like this. Any ideas? Thanks

See http://archives.postgresql.org/pgsql-sql/2004-02/msg00104.php
or 
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=20040212140516.65859.qmail%40web60807.mail.yahoo.com&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DISO-8859-1%26q%3D%2B%25227.4%2B-%2BFK%2Bconstraint%2Bperformance%2522%26btnG%3DSearch%26meta%3Dgroup%253Dcomp.databases.postgresql.*

Re: [SQL] Follow-up: FK constraint performance

2004-04-20 Thread ow
>
> Hi,
> 
> This issue appears to be present in 7.4.2  Am not sure if the plan was to fix
> it or leave like this. Any ideas? Thanks
> 
> See http://archives.postgresql.org/pgsql-sql/2004-02/msg00104.php

Never mind. I think it works. Thanks

Re: [SQL] 7.4 - FK constraint performance

2004-05-31 Thread ow

--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Can't see how we optimize your case without pessimizing more-common cases.

My case appears to be pretty common, i.e. 1 small and 1 large table with an RI
constraint between them. In order to delete a record from the small table, the
large table must not have records that depend on the deleted row.

I think other RDBMSs simply use a preset value instead of a partial table scan
when there's not enough stats info. That might be a better way.

Thanks

[SQL] Temp tables, reports in Postgresql (and other RDBMS)

2006-10-15 Thread ow

Hi,

We are considering moving some reports from *** to Postgres. Our reports are
written as stored procs in Transact-SQL and usually have the following
structure:

CREATE PROCEDURE someReportProc AS
/* Purpose: Creates a report based on Table1.
 *
 * Overview of what will be done:
 * 1) create a temp table based on Table1 (that has 3 columns) + 2 extra columns
 *    (col4 and col5) and populate the temp table with data from Table1
 * 2) run some logic to populate 1st extra column (col4)
 * 3) run some logic to populate 2nd extra column (col5)
 * 4) run select to return results to the client
 */
BEGIN

  -- step 1) create temp table #tempReportData
  SELECT Table1.*, space(1) as col4, 0 as col5
  INTO #tempReportData
  FROM Table1
  WHERE 

  -- step 2)
  UPDATE #tempReportData
  SET col4 = Table4.someColumn
  FROM Table4
  WHERE #tempReportData.id = Table4.id AND ...
 
  -- step 3)
  UPDATE #tempReportData
  SET col5 = Table5.someColumn + 123
  FROM Table5
  WHERE #tempReportData.id = Table5.id AND ...

  -- step 4)
  -- return data to the client, #tempReportData will be automatically dropped
  -- once this stored proc execution is completed
  SELECT * from #tempReportData

END


How would one rewrite the above logic in Postgresql? It should be noted that:
1) The real report logic may be much more complex. In other words, writing the
report's logic as one SQL statement should be assumed impossible.

2) The reports are usually written to work on result sets, as in the example
above. It's possible to rewrite the above logic with cursors, etc., though
keeping the result-set approach would be preferable.

Thanks in advance

Re: [SQL] Temp tables, reports in Postgresql (and other RDBMS)

2006-10-15 Thread ow

--- Tom Lane <[EMAIL PROTECTED]> wrote:

> If you need some procedural logic (if-then-else stuff)

Yes

> then you'd need
> to go over to plpgsql, which would be a bit of a pain because its habit
> of caching query plans doesn't play nice with temp tables.

Going to plpgsql is fine. What I'm trying to understand is how one goes about
writing reports in Postgres as per our scenario, be that with temp tables,
cursors, RECORDs, ROWTYPEs, etc.

In our RDBMS we use temp tables because it's the easiest way to take some table
as a base for your report and then extend it, as needed, with additional
columns, as I showed in step (1). Also, once the stored proc has run, the temp
table is automatically dropped, no need to worry about it. How does one go
about it in Postgres? Do people use cursors or whatever instead of temp tables?

> could work some trick with ON COMMIT DELETE ROWS temp tables that are
> created once at the start of a session and are auto-emptied after each
> function by the ON COMMIT rule.  Since the tables themselves don't
> get dropped, there's no problem with plan invalidation.

Not sure I understand. Our plan was actually to drop the temp table ON COMMIT,
because it's the stored proc that creates the temp table. If the session already
has the temp table, then creating it in the stored proc again will fail, no?

Thanks
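
For completeness, a rough PL/pgSQL translation of the Transact-SQL skeleton,
written for modern PostgreSQL (plan invalidation in 8.3+ makes temp tables safe
to use from PL/pgSQL, and RETURN QUERY / RETURNS TABLE need 8.3/8.4). Table and
column names are placeholders (table1 is assumed to be (id int, col2 text,
col3 text)); ON COMMIT DROP removes the temp table when the surrounding
transaction ends, much like the automatic drop described above:

CREATE OR REPLACE FUNCTION some_report_proc()
RETURNS TABLE (r_id int, r_col2 text, r_col3 text, r_col4 text, r_col5 int)
LANGUAGE plpgsql AS $$
BEGIN
    -- step 1: base the temp table on table1 (3 columns) plus two extra columns
    CREATE TEMP TABLE temp_report_data ON COMMIT DROP AS
    SELECT t.*, ''::text AS col4, 0 AS col5
    FROM table1 t
    WHERE t.id > 0;              -- placeholder filter

    -- step 2: populate the first extra column from table4
    UPDATE temp_report_data r
    SET col4 = t4.some_column
    FROM table4 t4
    WHERE r.id = t4.id;

    -- step 3: populate the second extra column from table5
    UPDATE temp_report_data r
    SET col5 = t5.some_column + 123
    FROM table5 t5
    WHERE r.id = t5.id;

    -- step 4: hand the result set back to the client
    RETURN QUERY SELECT * FROM temp_report_data;
END;
$$;

-- usage: SELECT * FROM some_report_proc();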