Personally I think it would be neat. For example the admin-tool guys
would be able to get a dump without invoking an external program.
Second, it would really be independent of core releases (other than being
tied to the output format). pg_dump would be just a simple caller of
such a library, and
It probably wouldn't be terribly difficult to put the guts of pg_dump
into a library that you could interface with via C. I'm not sure if the
community would accept such a patch, though; I seem to recall other
people asking for this on occasion.
I think script support is a bit risky because if an
I'm executing the queries from phpPgAdmin.
The above are for explain analyse. I was referring to the pure query
execution time.
Does anyone have an idea why the OR-query takes so long?
Any server-side tuning possibilities? I wouldn't like to change the code of
ldap's back-sql...
If you're using
That guy doesn't actually have the foggiest idea what he's doing.
The reason there is no built-in capability to do that is that it *does
not work well*. Search the list archives for "priority inversion" to
find out why not.
http://en.wikipedia.org/wiki/Priority_inversion
Yes, but there are definitely programming cases where
memoization/caching helps. And it's easy to tell for a given
function whether or not it really helps by simply trying it with CACHED
and without.
Would this be a simple thing to implement?
It's called a "table" :)
Scott Marlowe <[EMAIL PROTECTED]> writes:
It's the refusal of people to stop using MyISAM table types that's the
real issue.
Isn't MyISAM still the default over there? It's hardly likely that the
average MySQL user would use anything but the default table type ...
Since MySQL 5, InnoDB table
Suppose you have a table codes:
(
    game_id INT,
    code    TEXT,
    used    BOOL NOT NULL DEFAULT 'f',
    prize   ...
    ...
    PRIMARY KEY (game_id, code)
)
Just UPDATE codes SET used='t' WHERE used='f' AND game_id=... AND
code=...
Then check the rowcount: if it is 1, the code existed, was unused, and
is now claimed; if it is 0, the code was invalid or already used.
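A minimal sketch of that check inside a PL/pgSQL function (the function and
parameter names are invented; assumes 8.0+ for dollar quoting):

CREATE OR REPLACE FUNCTION claim_code(p_game_id int, p_code text)
RETURNS boolean AS $$
DECLARE
    n integer;
BEGIN
    UPDATE codes SET used = 't'
     WHERE used = 'f' AND game_id = p_game_id AND code = p_code;
    GET DIAGNOSTICS n = ROW_COUNT;
    -- exactly one row updated: the code existed, was unused, and is now claimed
    RETURN n = 1;
END;
$$ LANGUAGE plpgsql;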
Well, your pg_dump command lost your BLOBs since the plain text
format doesn't support them.
Well, no... they are stored as BYTEA, not Large Objects. They are encoded
in ASCII in the pg_dump output.
As a side note: the plain text dump format in 8.1 supports LOBs
a problem in the network configuration
of the machine.
--
Christopher Kings-Lynne
Technical Manager
CalorieKing
Tel: +618
how about something like:
DELETE FROM cds.cds_mspecxx WHERE NOT EXISTS (SELECT 1 FROM
cds_stage.cds_Catalog stage where stage.countryCode = 'us' and
stage.ProdId=cds.cds_mspecxx.ProdId) and countryCode = 'us';
Run explain on it first to see how it will be planned. Both tables
should have an index on the join columns.
The pgAdmin query tool is known to give an answer about 5x the real
answer - don't believe it!
ryan groth wrote:
Hmm, it came from the timer on the pgAdmin III SQL query tool. I guess
the 1,000ms includes the round-trip? See, the weird thing is that
mysqlserver is running default configuration o
Relating to this. If anyone can find govt or other free db's and
convert them into pgsql format, I will host them on the dbsamples page.
The dbsamples are _really_ popular!
Chris
Scott Marlowe wrote:
On Fri, 2006-02-17 at 10:51, Ron wrote:
I assume we have such?
Depends on what you wanna
Not really, but you can check out the sample databases project:
http://pgfoundry.org/projects/dbsamples/
Chris
Ron wrote:
I assume we have such?
Ron
Ouch! That confirms my problem. I generated the random test case because
it was easier than including the dump of my tables, but you can
appreciate that tables 20 times the size are basically crippled when it
comes to creating an index on them.
I have to say that I restored a few gigabyte dump
Reading about this issue further in the FAQ, it seems that I should
ensure that Postgres has adequate and accurate information about the
tables in question by regularly running VACUUM ANALYZE, something I
don't do currently.
Well, then you'll always get rubbish performance in PostgreSQL...
I s
So my question is: what's the use of an autovacuum daemon if I still
have to use a cron job to do full vacuums? Wouldn't it just be a minor
job to enhance autovacuum to be able to perform full vacuums, if one
really wants it to do that - even if some developers think that it's the
wrong approach
UNLIKELY string LIKE '%% PREEMPT %%'
or:
LIKELY string NOT LIKE '%% PREEMPT %%'
You should be using contrib/tsearch2 for an un-anchored text search perhaps?
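For the record, a rough sketch of the contrib/tsearch2 route (the module
must be installed first; the table and column names here are invented):

ALTER TABLE messages ADD COLUMN body_fts tsvector;
UPDATE messages SET body_fts = to_tsvector(body);
CREATE INDEX messages_fts_idx ON messages USING gist (body_fts);

-- an indexed word search instead of an unindexable %...% LIKE
SELECT * FROM messages WHERE body_fts @@ to_tsquery('preempt');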
... This seems like a case where PostgreSQL's current optimiser should
easily know what to do if your config settings are correct and you've
been running ANALYZE, so I'd like to see your settings and the explain
analyze plan...
I could, but it would divert us from the main topic of this discus
Boy, you picked a *really* bad example ;-)
The problem is that Postgres decided to filter on myfunc() *first*, and
then filter on row_num, resulting in a query time that jumped from
seconds to hours. And there's no way for me to tell Postgres not to do
that!
Can you paste explain analyze an
> Right on. Some of these "coerced" plans may perform much better.
> If so, we can look at tweaking your runtime config: e.g.
>
> effective_cache_size
> random_page_cost
> default_statistics_target
>
> to see if said plans can be chosen "naturally".
I see this over and over.
select * from my_table where row_num >= 5 and row_num < 10
and myfunc(foo, bar);
You just create an index on myfunc(foo, bar)
only if myfunc(foo, bar) is immutable...
And if it's not then the best any database can do is to index scan
row_num - so you still have no problem.
select * from my_table where row_num >= 5 and row_num < 10
and myfunc(foo, bar);
You just create an index on myfunc(foo, bar)
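A sketch, assuming myfunc really is (and is declared) IMMUTABLE:

CREATE INDEX my_table_myfunc_idx ON my_table (myfunc(foo, bar));
-- or, targeted at exactly this query shape, a partial index:
CREATE INDEX my_table_row_num_part_idx ON my_table (row_num)
    WHERE myfunc(foo, bar);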
Chris
I have been using PostgreSQL (currently 7.4.7) for several years now and
am very happy with it but I currently run a website that has had a
little bit of a boost and I am starting to see some performance problems
(Not necessarily PostgreSQL).
PostgreSQL 8.1.1 should give you greater performance
You might find it faster to install contrib/tsearch2 for text indexing
sort of purposes...
Nörder-Tuitje wrote:
Hi,
I am breaking up huge texts (between 25K and 250K words) into single
words using PL/pgSQL.
For this I am using a temp table in the first step :
LOOP
No, my problem is that using TSearch2 interferes with other core
components of postgres like (auto)vacuum or dump/restore.
That's nonsense...seriously.
The only trick with dump/restore is that you have to install the
tsearch2 shared library before restoring. That's the same as all
contribs t
If you're trying to retrieve 26 million rows into RAM in one go of
course it'll be trouble.
Just use a cursor. (DECLARE/FETCH/MOVE)
Chris
Howard Oblowitz wrote:
Hi …
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql th
...
So you'll avoid a non-core product and instead only use another non-core
product...?
Chris
Michael Riess wrote:
Has anyone ever compared TSearch2 to Lucene, as far as performance is
concerned?
I'll stay away from TSearch2 until it is fully integrated in the
postgres core (like "cre
I do not really see why all the distributions couldn't do something like this,
instead of mucking around with special statically compiled pg_dumps and the
like...
Contrib modules and tablespaces.
Plus, no version of pg_dump before 8.0 is able to actually perform such
reliable dumps and reloads (d
That way if someone wanted to upgrade from 7.2 to 8.1, they
can just grab the latest dumper from the website, dump their
old database, then upgrade easily.
But if they're upgrading to 8.1, don't they already have the new
pg_dump? How else are they going to dump their *new* database?
Erm. Usu
Isn't your distribution supposed to do this for you? Mine does these days...
A distribution that tries to automatically do a major postgresql update
is doomed to fail - spectacularly...
Chris
Perhaps we should put a link on the home page underneath LATEST RELEASEs
saying
7.2: de-supported
with a link to a scary note along the lines of the above.
ISTM that there are still too many people on older releases.
We probably need an explanation of why we support so many releases (in
Update to 7.4 or later ;-)
Quite seriously, if you're still using 7.2.4 for production purposes
you could justifiably be accused of negligence. There are three or four
data-loss-grade bugs fixed in the later 7.2.x releases, not to mention
security holes; and that was before we abandoned support
transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001
AND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;
Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)
(actual time=0.161..0.167 rows=1 loops=1)
Index Cond: (test_a = 9091150001::bigint)
Filter: ((t
I'd set up a trigger to maintain summary tables perhaps...
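A rough sketch of that idea against the quoted query below (the summary
table and its maintenance are invented; insert-if-missing handling is
omitted, and least() needs 8.1+):

CREATE FUNCTION maintain_tlid_summary() RETURNS trigger AS $$
BEGIN
    -- keep a per-tlid minimum ogc_fid up to date as rows arrive
    UPDATE tlid_summary
       SET min_ogc_fid = least(min_ogc_fid, NEW.ogc_fid)
     WHERE tlid = NEW.tlid;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER completechain_summary AFTER INSERT ON completechain
    FOR EACH ROW EXECUTE PROCEDURE maintain_tlid_summary();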
Chris
Charlie Savage wrote:
Thanks everyone for the feedback.
I tried increasing work_mem:
set work_mem to 30;
select tlid, min(ogc_fid)
from completechain
group by tlid;
The results are:
"GroupAggregate (cost=9041602.80..1000
Or you could just run the 'vacuumdb' utility...
Put something like this in cron:
# Vacuum full local pgsql database
30 * * * * postgres vacuumdb -a -q -z
You really should read the manual.
Chris
Christian Paul B. Cosinas wrote:
I see.
But How Can I put this in the Cron of my Linux Server?
In what directory in my linux server will I find these 3 tables?
Directory? They're tables in your database...
Ummm...they're SQL commands. Run them in PostgreSQL, not on the unix
command line...
Christian Paul B. Cosinas wrote:
I try to run this command in my linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it give me the following error:
-bash: VACUUM: command not found
Now *I* am confused. What does PgAdmin do more than giving the query to
the database?
It builds it into the data grid GUI object.
Chris
Who needs a paginated view with 100,000 pages?
- Select min(date) and max(date) from your table
- Present a nifty date selector to choose the records from any day,
hour, minute, second
- show them, with "next day" and "previous day" buttons
- It's more useful to the user
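In SQL terms, something like this (table and column names assumed):

-- bounds for the date selector
SELECT min(date), max(date) FROM records;
-- fetch one day's worth instead of OFFSETting deep into the table
SELECT * FROM records
 WHERE date >= '2005-06-28' AND date < '2005-06-29'
 ORDER BY date;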
We have a GUI that lets the user browse through the records page by page,
about 25 records at a time. (Don't ask me why, but we have to have this
GUI). This translates to something like
select count(*) from table <-- to give feedback about the DB size
select * from table order by date limit 25 offset N
I am using PHP's PDO PGSQL interface - I haven't read up enough on it
to determine whether a persistent connection can re-use server-side
prepared queries as an option. Anybody know?
It re-uses server-side prepared queries by default, if you are going
through PDO's prepare/execute interface.
Chris
What could I possibly do to make this fast?
Here is the code inside my function:
FOR temp_rec IN SELECT * FROM item_qc_doer LOOP
DELETE FROM qc_session WHERE item_id = temp_rec.item_id;
DELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;
END LOOP;
The item_qc_doer table contains 22,000 re
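For what it's worth, a set-based sketch of the same loop (same tables;
untested) avoids the row-at-a-time round trips entirely:

DELETE FROM qc_session
 WHERE item_id IN (SELECT item_id FROM item_qc_doer);
DELETE FROM item_qc_doer;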
Yes, you're right, it really boosted things a little.
I want to improve the system performance. Are there any more tips?
The rest of the numbers look vaguely ok...
Only a webserver with a PHP application runs on this server, using a
Postgres DB. Should I give more memory to Postgres? From what I noticed
A lot of them are too large. Try:
Andy wrote:
Hi to all,
I have the following configuration:
Dual Xeon 2.8 GHz, 1GB RAM and Postgres 8.0.3 installed.
Modified configuration parameters:
max_connections = 100
shared_buffers = 64000 # 500MB = 500 x 1024 x 1024 / (8 x 1024) (8KB)
shared_b
CREATE SEQUENCE ai_id;
CREATE TABLE badusers (
id int DEFAULT nextval('ai_id') NOT NULL,
UserName varchar(30),
Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
Reason varchar(200),
Admin varchar(30) DEFAULT '-',
PRIMARY KEY (id),
KEY UserName (UserName),
KEY Date (Date)
);
Generate them all into a table and just delete them as you use them.
It's only 10,000 rows...
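A sketch of that (assumes 8.0+ for generate_series, and a codes table with
a text code column):

-- generate ID0000 .. ID9999 once, up front
INSERT INTO codes (code)
SELECT 'ID' || to_char(n, 'FM0000')
  FROM generate_series(0, 9999) AS g(n);

-- hand one out at random, then delete the one you picked
SELECT code FROM codes ORDER BY random() LIMIT 1;
DELETE FROM codes WHERE code = ...;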
Chris
Choe, Cheng-Dae wrote:
> I need to generate an unused random id with the format ID[0-9]{4},
> so I write the query below but it seems to be too slow
>
> SELECT * FROM (
> SELECT user_id FROM (
>
Unfortunately there's no very simple way to determine which FK is the
problem. (In 8.1 it'll be possible to do that with EXPLAIN ANALYZE,
but in existing releases EXPLAIN doesn't break out the time spent in
each trigger ...) You have to just eyeball the schema :-(.
phpPgAdmin has a handy info
I forgot to say that it's a 12GB database...
That's actually not that large.
Ok, I'll set shared_buffers to 30,000 pages, but even so, shouldn't
"meminfo" and "top" show some shared pages?
Yeah. The reason for not setting buffers so high is because PostgreSQL
cannot efficiently manage huge sh
I've configured PostgreSQL to use 1GB of shared buffers, but meminfo and
"top" are indicating 0 shared buffer pages. Why?
1GB of shared buffers is far too much. Set it back to something like
30,000 buffers max...
Chris
You could use a 1 column/1 row table perhaps. Use some sort of locking
mechanism.
Also, check out contrib/userlock
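A sketch of the one-row-table idea (table and proc names invented): each
caller queues on the row lock, so only one runs the proc at a time.

CREATE TABLE proc_gate (id int PRIMARY KEY);
INSERT INTO proc_gate VALUES (1);

BEGIN;
SELECT id FROM proc_gate WHERE id = 1 FOR UPDATE;  -- blocks until the row is free
SELECT my_stored_proc();
COMMIT;  -- releases the lock

With N rows instead of one, and each caller grabbing any free row, you get
a crude N-concurrent-callers limit.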
Chris
Alan Stange wrote:
Hello all,
is there a simple way to limit the number of concurrent callers to a
stored proc?
The problem we have is that about 50 clients come and perfor
No, unless you use some custom triggers.
prasanna s wrote:
Does postgres support indexed views/materialised views that some of the
other databases support?
Thanks
Prasanna S
Roberto Germano Vieweg Neto wrote:
My application is using Firebird 1.5.2
I have at my database:
- 150 Domains
- 318 tables
- 141 Views
- 365 Procedures
- 407 Triggers
- 75 generators
- 161 Exceptions
- 183 UDFs
- 1077 Indexes
My question is:
Will PostgreSQL be faster than Firebird? Ho
Insert into a temp table then use INSERT INTO...SELECT FROM to insert
all rows into the proper table that don't have a relationship.
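Roughly, with invented names (target is the real table, parent the one the
new rows must not already relate to):

CREATE TEMP TABLE staging AS SELECT * FROM target WHERE false;  -- clone the structure
COPY staging FROM '/tmp/data.copy';                             -- fast bulk load
INSERT INTO target
SELECT s.* FROM staging s
 WHERE NOT EXISTS (SELECT 1 FROM parent p WHERE p.id = s.parent_id);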
Chris
Dan Harris wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for
Try Slony: www.slony.info
Shashi Kanth Boddula wrote:
Hi,
I have one customer who is using PostgreSQL 7.4.8 on Linux. He has some
problems with database mirroring. The details follow.
The customer is using Linux, on which PostgreSQL 7.4.8 along with JBoss
3.2.3 is running. He has 2 ser
Try turning on query logging and using the 'pqa' utility on pgfoundry.org.
Chris
Agha Asif Raza wrote:
Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A
profiler is a tool that monitors the database server and outputs a
detailed trace of all the transactions/queries that a
Is there a different kind of 'prepared' statements
that we should be using in the driver to get logging
to work properly? What is the 'new' protocol?
The 8.0.2 jdbc driver uses real prepared statements instead of faked
ones. The problem is the new protocol (that the 8.0.2 driver uses) has
a
we are using jdbc -- the "log_min_duration_statement = 3000 "
statement works fine for me. Looks like there's no other work around
for the bug(?). Not sure since I have no interest in logging a
million statements a day, I only want to see the poorly performing hits.
Doesn't it depend on wha
I'm a bit surprised by that behavior though, since it means that if we
delete a row from table A, all tables (B, C, D) with FKs pointing to
table A must be scanned.
If there is no index on those tables, that means sequential
scans, which can cause a significant performance problem!
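Hence the usual advice: index the referencing columns yourself, since
PostgreSQL does not create those indexes automatically. For example (names
invented):

-- b.a_id has a foreign key referencing a(id); index the referencing side
-- so the FK check fired by DELETE FROM a is an index scan, not a seq scan
CREATE INDEX b_a_id_idx ON b (a_id);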
database=> explain select date_trunc('hour', time),count(*) as total from
test where p1=53 and time > now() - interval '24 hours' group by
date_trunc order by date_trunc ;
Try going:
time > '2005-06-28 15:34:00'
ie. put in the time 24 hours ago as a literal constant.
Chris
What's wrong with Slony?
Because it's not multi-master. Our mirroring package is.
I'm curious - how did you write a multi-master replication package in
pgsql, when pgsql doesn't have 2 phase commits or any kind of
distributed syncing or conflict resolution in a release version?
Chris
What's wrong with Slony?
David Mitchell wrote:
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction (pe
PostgreSQL 8 for Windows is faster AND more reliable :)
Chris
Scott Goldstein wrote:
I'm currently trying to make a decision on whether to use the Cygwin
port of Postgres 7.4 or Postgres 8.0 for a windows installation. Can
someone provide some comparison info from a performance point of view?
Just read the docs in contrib/tsearch2 in the PostgreSQL distribution.
Pierre A. Fortier wrote:
I search for particular strings using regular expressions (e.g. where
column ~* $query) through a text data type column which contains notes
(some html code like bold is included).
It works but my qu
Ok, if all 21 are affected, I can understand the problem.
But allow me to say that this is a "functional error"
No, it's normal MVCC design...
We're hoping PostgreSQL can match or beat Sybase performance, and
preliminary tests look good. We should be able to get some load testing
going within a week, and we're shooting for slipping these machines into
the mix around the end of this month. (We've gone to some lengths to
keep our code po
f the same
database is running on.
One instance (the live one) just insists on doing a seq scan of the 50k
records in Price_Tbl and the 6.5k records in SCT2SubCatType_Tbl.
Seems weird
Cheers
Jona
Christopher Kings-Lynne wrote:
Thank you for the swift reply, the following is the output of the
SH
Thank you for the swift reply, the following is the output of the SHOW
ALL for shared_buffers and effective_cache_size.
shared_buffers: 13384
effective_cache_size: 4000
server memory: 2GB
effective_cache_size should be 10-100x larger perhaps...
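As rough arithmetic (assuming most of the 2GB ends up as OS disk cache):
1.5GB of cache divided by 8KB pages is about 196,000 pages, so a value in
the 100,000-200,000 range would describe this machine far better than 4000.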
Chris
contrib/dbsize in the postgresql distribution.
Shanmugasundaram Doraisamy wrote:
Dear Group!
Thank you for all the support you all have been
providing from time to time. I have a small question: How do I find the
actual size of the database? Awaiting your replies,
Shan.
So, our problem in installing is we don't know a cluster or SSL from a
hole in the ground. Things get confusing about contexts- are we
talking about a user of the system or the database? Yikes, do I need
to write down the 30+ character autogenerated password?
No, you don't need to write it down
Without reading too hard, I suggest having a quick look at contrib/ltree
module in the PostgreSQL distribution. It may or may not help you.
Chris
hubert lubaczewski wrote:
hi
first let me draw the outline.
we have a database which stores "adverts".
each advert is in one category, and one or m
Seq Scan on sensor (cost=0.00..1.25 rows=1 width=6) (actual
time=0.055..0.068 rows=1 loops=1)
Filter: (sensor_id = 12)
Total runtime: 801641.333 ms
(3 rows)
Can anybody help me out? Thanks so much!
Does your table have millions of dead rows? Do you vacuum once an hour?
Run VACUUM FULL.
When I do an EXPLAIN ANALYZE on the above query, the result is:
Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) (actual
time=73.369..3330.281 rows=407 loops=1)
Filter: ((name)::text ~~ '%FRANCISCO'::text)
Total runtime: 3330.524 ms
(3 rows)
this is a query that our system n
I have some queries that have significan't slowed down in the last
couple days. It's gone from 10 seconds to over 2 mins.
The CPU has never gone over 35% in the server's lifetime, but the load
average is over 8.0 right now. I'm assuming this is probably due to
disk I/O.
You sure it's not a severe
My Next Task: Finding a Stress Test Harness to Load, and Query Data.
Anyone have ideas?
I am eagerly awaiting the *DESTRUCTION* of Oracle around here, and
"yes" I am an Oracle DBA and think it's *very* good technology.
Have you tried the simple 'gmake test'?
Other than that, try htt
Would CLUSTER / REINDEX still have an effect if our queries were done
via sequential scan?
SELECTS don't write to the database, so they have no effect at all on
vacuuming/analyzing. You only need to worry about that with writes.
This is an old database (as in, built by me when I
was just sta
Can anyone explain why this may be occurring and how I might be able to
keep the original database running at the same speed as "tempdb"?
You're not vacuuming anywhere near often enough. Read up the database
maintenance section of the manual. Then, set up contrib/pg_autovacuum
to vacuum your
If I pg_dump that database then create a new database (e.g. "tempdb")
and upload the dump file (thus making a duplicate) then the same query
only takes 190ms !!
Vacuum, vacuum analyse, and vacuum full analyse does not seem to have an
impact on these times.
Damn, for some reason I didn't read t
--As Chris pointed out, how real-world is this test?
His point is valid. The database we're planning will
have a lot of rows and require a lot of summarization
(hence my attempt at a "test"), but we shouldn't be
pulling a million rows at a time.
If you want to do lots of aggregate analysis, I su
select count(*) from mtable where day='Mon'
Results:
1. P3 600 512MB RAM MSSQL. It takes about 4-5 secs to
run. If I run a few queries and everything is cached,
it is sometimes just 1 second.
2. Athlon 1.3 Ghz 1GB RAM. PostgreSQL takes 7 seconds.
I have played with the buffers setting and cu
I'm doing the writes individually. Is there a better way? Combining
them all into a transaction or something?
Use COPY of course :)
Or at worst bundle 1000 inserts at a time in a transation...
And if you seriously do not care about your data at all, set fsync = off
in your postgresql.conf for a
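For example, instead of row-by-row INSERTs (table and columns invented;
COPY data is tab-separated):

COPY mytable (id, val) FROM STDIN;
1	one
2	two
\.

Failing that, wrapping each batch of ~1000 INSERTs in a single
BEGIN/COMMIT avoids paying one fsync per row.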
This time it worked! But VACUUM FULL requires an exclusive lock on the
table, which I don't really want to grant. So my question is: why didn't
VACUUM ANALYZE do the job? Is there any setting I can tweak to
make VACUUM work without taking an exclusive lock?
You need to run normal vacuum analyze
Acceptable Answers to 'So, when/is PG meant to be getting a decent
partitioning system?':
1. Person X is working on it I believe.
2. It's on the list, but nobody has done anything about it yet
3. You're welcome to take a stab at it; I expect the community would
support your efforts a
Another trick you can use with large data sets like this, when you want
results back in seconds, is to have regularly updated tables that
aggregate the data along each column normally aggregated against the
main data set.
Maybe some bright person will prove me wrong by posting some working
information
*laff*
Yeah, like they've been working on views for the last 5 years, and
still haven't released them :D :D :D
?
http://dev.mysql.com/doc/mysql/en/create-view.html
...for MySQL 5.0.1+ ?
Give me a call when it's RELEASED.
Chris
This is why I mention partitioning. It solves this issue by storing
different data sets on different machines under the same schema. These
separate chunks of the table can then be replicated as well for data
redundancy and so on. MySQL are working on these things
*laff*
Yeah, like they've bee
You didn't do analyze.
Chris
Jona wrote:
Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO: --Relation public.statcon_tbl--
INFO: Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04 se
So what's the difference between a COPY and a batch of INSERT
statements? Also, surely fsyncs only occur at the end of a
transaction; no need to fsync before a commit has been issued, right?
With COPY, the data being inserted itself does not have to pass through
the postgresql parser.
Chris
conn.setAutoCommit(false);
pst = conn.prepareStatement("INSERT INTO tmp (...) VALUES (?,?)");
for (int i = 0; i < len; i++) {
    pst.setInt(1, 2);        // JDBC parameter indices start at 1, not 0
    pst.setString(2, "xxx");
    pst.addBatch();
}
pst.executeBatch();
conn.commit();
This snippet takes 1.3 secs in postgresql. How can I lower that?
You're
Yay! You're right! I wonder why I had forgotten about LANGUAGE SQL. :)
It's 30 seconds vs 5 seconds for CASE ... END inside PL/pgSQL vs CASE ... END
LANGUAGE SQL. :) I.e. it's almost the same as in-place entered SQL.
Regards,
Dawid
Yay! You're right! I wonder why I had forgotten about LANGUAGE SQL. :)
It's 30 seconds vs 5 seconds for CASE ... END inside PL/pgSQL vs CASE ... END
LANGUAGE SQL. :) I.e. it's almost the same as in-place entered SQL.
Probably because simple SQL functions get inlined by the optimiser.
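For illustration, a function of this shape (an invented example) can be
expanded in place by the planner, whereas the PL/pgSQL version is run by
the interpreter for every row:

CREATE FUNCTION sign_label(integer) RETURNS text AS $$
    SELECT CASE WHEN $1 > 0 THEN 'positive'
                WHEN $1 < 0 THEN 'negative'
                ELSE 'zero'
           END;
$$ LANGUAGE SQL IMMUTABLE;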
Chris
Is there a way to look at the stats tables and tell what is jamming up
your postgres server the most? Other than seeing long running queries
and watch top, atop, iostat, vmstat in separate xterms...I'm wondering
if postgres keeps some stats on what it spends the most time doing or if
there's a
Try increasing checkpoint_segments. In our case, the database restore
took 75 minutes. After increasing checkpoint_segments to 20, we cut
it down to less than 30 minutes.
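In postgresql.conf terms (the value 20 is just the one reported above):

checkpoint_segments = 20   # fewer, larger checkpoints; speeds up bulk loads and restores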
Increasing maintenance_work_mem might help too ... or several other
settings ... with no information about exactly *what* is
Am I correct in assuming that the timings are calculated locally by psql
on my client, thus including network latency?
No, explain analyze is timed on the server...
Chris
My problem with this really is that in my database it is hard to predict
which inserts will be huge (and thus need FKs disabled), so I would
have to code around all inserts. Instead I can code my own integrity
logic and avoid using FKs altogether.
Just drop the fk and re-add it, until post
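Roughly (constraint, table, and column names invented):

ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
-- ... do the huge insert ...
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id);

Re-adding the constraint validates all rows in one query instead of firing
a per-row check during the insert.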
Thanks for the pointer. I got this from the archives:
update pg_class set reltriggers=0 where relname = 'YOUR_TABLE_NAME';
to enable them after you are done, do
update pg_class set reltriggers = (select count(*) from pg_trigger
where pg_class.oid = tgrelid) where relname = 'YOUR_TABLE_NAME';
Deferring makes no difference to FK checking speed...
But why then is the speed acceptable if I copy and then manually add the
FK? Is the check done by the FK so much different from when it is done
automatically using an active deferred FK?
Yeah I think it uses a different query formulation...
I am new to cross references between tables, and I am trying to
understand how they impact performance. From reading the documentation I
was under the impression that deferring foreign keys would yield about
the same performance as dropping them before a copy, and adding them
after. However, I cann
To be more explicit, let's say I have table with two fields a and b. If
I have an index on (a,b) and I do a request like "SELECT b FROM table
WHERE a=x", will Postgresql use only the index, or will it need to also
read the table page for that (those) row(s)?
It must read the table because of visibility: the index does not store
whether a tuple is visible to your transaction, so each index hit must
be checked against the table row.
I'd like to know whether indexed views are supported in PostgreSQL 7.1.3.
No...
Is there any performance analysis tool for PostgreSQL?
No, we keep telling you to upgrade to a newer PostgreSQL. Then you can
use EXPLAIN ANALYZE.
Chris