It probably wouldn't be terribly difficult to put the guts of pg_dump
into a library that you could interface with via C. I'm not sure if the
community would accept such a patch, though; I seem to recall other
people asking for this on occasion.
I think script support is a bit risky because if
Personally I think it would be neat. For example, the admin-tool guys
would be able to get a dump without invoking an external program.
Second, it would really be independent of core releases (other than being
tied to the output format); pg_dump would be just a simple caller of
such a library.
I'm executing the queries from phpPgAdmin.
The above are for explain analyse. I was referring to the pure query
execution time.
Does anyone have an idea why the OR-query takes so long?
Any server-side tuning possibilities? I wouldn't like to change the code of
ldap's back-sql...
If you're
That guy doesn't actually have the foggiest idea what he's doing.
The reason there is no built-in capability to do that is that it *does
not work well*. Search the list archives for priority inversion to
find out why not.
http://en.wikipedia.org/wiki/Priority_inversion
Suppose you have a table codes :
(
game_id INT,
code TEXT,
used BOOL NOT NULL DEFAULT 'f',
prize...
...
PRIMARY KEY (game_id, code)
)
Just UPDATE codes SET used='t' WHERE used='f' AND game_id=... AND
code=...
Then check the rowcount.
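Wrapped in a function it might look like this (a minimal sketch; redeem_code
and the example values are made up):
CREATE FUNCTION redeem_code(int, text) RETURNS boolean AS '
BEGIN
    UPDATE codes SET used = ''t''
     WHERE used = ''f'' AND game_id = $1 AND code = $2;
    -- FOUND is true only if this call actually claimed the code
    RETURN FOUND;
END;
' LANGUAGE plpgsql;
-- usage: SELECT redeem_code(42, 'ABC123');  -- true if the code was free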
--
Christopher Kings-Lynne
Technical Manager
CalorieKing
Tel: +618.9389.8777
Fax: +618.9389.8444
[EMAIL PROTECTED]
www.calorieking.com
how about something like:
DELETE FROM cds.cds_mspecxx WHERE NOT EXISTS (SELECT 1 FROM
cds_stage.cds_Catalog stage where stage.countryCode = 'us' and
stage.ProdId=cds.cds_mspecxx.ProdId) and countryCode = 'us';
Run explain on it first to see how it will be planned. Both tables
should have
The pgAdmin query tool is known to give an answer about 5x the real
answer - don't believe it!
ryan groth wrote:
Hmm, it came from the timer on the pgAdmin III sql query tool. I guess
the 1,000ms includes the round-trip? See, the weird thing is that the
mysql server is running default configuration
Not really, but you can check out the sample databases project:
http://pgfoundry.org/projects/dbsamples/
Chris
Ron wrote:
I assume we have such?
Ron
Relating to this. If anyone can find govt or other free db's and
convert them into pgsql format, I will host them on the dbsamples page.
The dbsamples are _really_ popular!
Chris
Scott Marlowe wrote:
On Fri, 2006-02-17 at 10:51, Ron wrote:
I assume we have such?
Depends on what you
Ouch! That confirms my problem. I generated the random test case because
it was easier than including the dump of my tables, but you can
appreciate that tables 20 times the size are basically crippled when it
comes to creating an index on them.
I have to say that I restored a few gigabyte
So my question is: What's the use of an autovacuum daemon if I still
have to use a cron job to do full vacuums? Wouldn't it just be a minor
job to enhance autovacuum to be able to perform full vacuums, if one
really wants it to do that - even if some developers think that it's the
wrong
UNLIKELY string LIKE '%% PREEMPT %%'
or:
LIKELY string NOT LIKE '%% PREEMPT %%'
You should be using contrib/tsearch2 for an un-anchored text search perhaps?
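Very roughly, from memory of the tsearch2 contrib API (the table and column
names here are assumptions):
ALTER TABLE logs ADD COLUMN fts tsvector;
UPDATE logs SET fts = to_tsvector('default', string);
CREATE INDEX logs_fts_idx ON logs USING gist (fts);
-- then, instead of LIKE '% PREEMPT %':
SELECT * FROM logs WHERE fts @@ to_tsquery('default', 'preempt');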
select * from my_table where row_num >= 5 and row_num < 10
and myfunc(foo, bar);
You just create an index on myfunc(foo, bar)
Chris
select * from my_table where row_num >= 5 and row_num < 10
and myfunc(foo, bar);
You just create an index on myfunc(foo, bar)
only if myfunc(foo, bar) is immutable...
And if it's not then the best any database can do is to index scan
row_num - so still you have no problem.
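e.g., with the names from the query above (a sketch; 7.4+ syntax):
-- only legal if myfunc is declared IMMUTABLE:
CREATE INDEX my_table_myfunc_idx ON my_table ((myfunc(foo, bar)));
-- a partial index on row_num is another option (the predicate also
-- needs an immutable myfunc):
CREATE INDEX my_table_rownum_idx ON my_table (row_num) WHERE myfunc(foo, bar);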
Right on. Some of these coerced plans may perform much better.
If so, we can look at tweaking your runtime
config: e.g.
effective_cache_size
random_page_cost
default_statistics_target
to see if said plans can be chosen naturally.
I see this over and over. Tweak the
... This seems like a case where PostgreSQL's current optimiser should
easily know what to do if your config settings are correct and you've
been running ANALYZE, so I'd like to see your settings and the explain
analyze plan...
I could, but it would divert us from the main topic of this
You might find it faster to install contrib/tsearch2 for text indexing
sort of purposes...
Nörder-Tuitje wrote:
Hi,
I am breaking up huge texts (between 25K and 250K words) into single
words using PL/pgSQL.
For this I am using a temp table in the first step :
LOOP
No, my problem is that using TSearch2 interferes with other core
components of postgres like (auto)vacuum or dump/restore.
That's nonsense...seriously.
The only trick with dump/restore is that you have to install the
tsearch2 shared library before restoring. That's the same as all
contribs
...
So you'll avoid a non-core product and instead only use another non-core
product...?
Chris
Michael Riess wrote:
Has anyone ever compared TSearch2 to Lucene, as far as performance is
concerned?
I'll stay away from TSearch2 until it is fully integrated in the
postgres core (like
If you're trying to retrieve 26 million rows into RAM in one go of
course it'll be trouble.
Just use a cursor. (DECLARE/FETCH/MOVE)
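e.g. (a sketch; bigtable is a placeholder):
BEGIN;
DECLARE rows_cur CURSOR FOR SELECT * FROM bigtable;
FETCH FORWARD 1000 FROM rows_cur;  -- repeat until no rows come back
CLOSE rows_cur;
COMMIT;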
Chris
Howard Oblowitz wrote:
Hi …
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql
Isn't your distribution supposed to do this for you? Mine does these days...
A distribution that tries to automatically do a major postgresql update
is doomed to fail - spectacularly...
Chris
That way if someone wanted to upgrade from 7.2 to 8.1, they
can just grab the latest dumper from the website, dump their
old database, then upgrade easily.
But if they're upgrading to 8.1, don't they already have the new
pg_dump? How else are they going to dump their *new* database?
Erm.
I do not really see why; all the distributions could do something like this,
instead of mucking around with special statically compiled pg_dumps and the
like...
Contrib modules and tablespaces.
Plus, no version of pg_dump before 8.0 is able to actually perform such
reliable dumps and reloads
Update to 7.4 or later ;-)
Quite seriously, if you're still using 7.2.4 for production purposes
you could justifiably be accused of negligence. There are three or four
data-loss-grade bugs fixed in the later 7.2.x releases, not to mention
security holes; and that was before we abandoned support
Perhaps we should put a link on the home page underneath LATEST RELEASEs
saying
7.2: de-supported
with a link to a scary note along the lines of the above.
ISTM that there are still too many people on older releases.
We probably need an explanation of why we support so many releases
transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001
AND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;
Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)
(actual time=0.161..0.167 rows=1 loops=1)
Index Cond: (test_a = 9091150001::bigint)
Filter:
I'd set up a trigger to maintain summary tables perhaps...
Chris
Charlie Savage wrote:
Thanks everyone for the feedback.
I tried increasing work_mem:
set work_mem to 30;
select tlid, min(ogc_fid)
from completechain
group by tlid;
The results are:
GroupAggregate
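A sketch of the summary-table trigger idea from above, using the
completechain/tlid/ogc_fid names in the quoted query (assumes integer
columns and insert-only traffic, and ignores races between concurrent
inserts):
CREATE TABLE tlid_summary (
    tlid        integer PRIMARY KEY,
    min_ogc_fid integer NOT NULL
);

CREATE FUNCTION completechain_summary() RETURNS trigger AS '
BEGIN
    -- keep the running minimum per tlid
    UPDATE tlid_summary SET min_ogc_fid = NEW.ogc_fid
     WHERE tlid = NEW.tlid AND min_ogc_fid > NEW.ogc_fid;
    IF NOT EXISTS (SELECT 1 FROM tlid_summary WHERE tlid = NEW.tlid) THEN
        INSERT INTO tlid_summary VALUES (NEW.tlid, NEW.ogc_fid);
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER completechain_summary_trg
    AFTER INSERT ON completechain
    FOR EACH ROW EXECUTE PROCEDURE completechain_summary();
-- the big GROUP BY then becomes: SELECT tlid, min_ogc_fid FROM tlid_summary;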
Ummm...they're SQL commands. Run them in PostgreSQL, not on the unix
command line...
Christian Paul B. Cosinas wrote:
I try to run this command in my linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it give me the following error:
-bash:
In what directory in my linux server will I find these 3 tables?
Directory? They're tables in your database...
Or you could just run the 'vacuumdb' utility...
Put something like this in cron:
# Vacuum full local pgsql database
30 * * * * postgres vacuumdb -a -q -z
You really should read the manual.
Chris
Christian Paul B. Cosinas wrote:
I see.
But how can I put this in the cron of my Linux
Now *I* am confused. What does PgAdmin do more than giving the query to
the database?
It builds it into the data grid GUI object.
Chris
We have a GUI that lets the user browse through the records page by page,
about 25 records at a time. (Don't ask me why but we have to have this
GUI). This translates to something like
select count(*) from table -- to give feedback about the DB size
select * from table order by date limit 25
Who needs a paginated view with 100.000 pages ?
- Select min(date) and max(date) from your table
- Present a nifty date selector to choose the records from any day,
hour, minute, second
- show them, with next day and previous day buttons
- It's more useful to the user
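In SQL terms, the two queries behind that UI might be (a sketch; t stands
for the table above):
SELECT min(date), max(date) FROM t;   -- bounds for the date selector
SELECT * FROM t
 WHERE date >= '2005-06-28' AND date < '2005-06-29'  -- the chosen day
 ORDER BY date;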
I am using PHP's PDO PGSQL interface - I haven't read up enough on it
to determine whether a persistent connection can re-use server-side
prepared queries as an option. Anybody know?
It re-uses server-side prepared queries by default, if you are using the
PDO prepare/execute stuff.
Chris
Here is the code inside my function:
FOR temp_rec IN SELECT * FROM item_qc_doer LOOP
DELETE FROM qc_session WHERE item_id = temp_rec.item_id;
DELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;
END LOOP;
item_qc_doer table contains 22,000
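For what it's worth, that loop can usually be collapsed into two set-based
statements (a sketch over the same tables):
DELETE FROM qc_session
 WHERE item_id IN (SELECT item_id FROM item_qc_doer);
DELETE FROM item_qc_doer;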
Yes, you're right, it really boosts it a little.
I want to improve the system performance. Are there any more tipps?
The rest of the numbers look vaguely ok...
On this server runs only a webserver with a php application which uses
a postgres DB. Should I give more memory to postgres? From what I noticed
CREATE SEQUENCE ai_id;
CREATE TABLE badusers (
id int DEFAULT nextval('ai_id') NOT NULL,
UserName varchar(30),
Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
Reason varchar(200),
Admin varchar(30) DEFAULT '-',
PRIMARY KEY (id),
KEY UserName (UserName),
KEY Date (Date)
);
Generate them all into a table and just delete them as you use them.
It's only 10,000 rows...
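e.g. (a sketch; unused_ids is a made-up table, and generate_series needs
8.0+):
CREATE TABLE unused_ids (user_id text PRIMARY KEY);
INSERT INTO unused_ids
SELECT 'ID' || lpad(n::text, 4, '0')
  FROM generate_series(0, 9999) AS g(n);
-- hand codes out at random and DELETE each one as it gets used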
Chris
Choe, Cheng-Dae wrote:
I need to generate unused random id with format is ID[0-9]{4}
so i write below query but it seems to be too slow
SELECT * FROM (
SELECT user_id FROM (
Unfortunately there's no very simple way to determine which FK is the
problem. (In 8.1 it'll be possible to do that with EXPLAIN ANALYZE,
but in existing releases EXPLAIN doesn't break out the time spent in
each trigger ...) You have to just eyeball the schema :-(.
phpPgAdmin has a handy info
I forgot to say that it's a 12GB database...
That's actually not that large.
Ok, I'll set shared buffers to 30,000 pages but even so meminfo and
top shouldn't show some shared pages?
Yeah. The reason for not setting buffers so high is because PostgreSQL
cannot efficiently manage huge
You could use a 1 column/1 row table perhaps. Use some sort of locking
mechanism.
Also, check out contrib/userlock
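The one-row-table variant might look like this (a sketch; proc_gate and
my_stored_proc are placeholders):
CREATE TABLE proc_gate (id int PRIMARY KEY);
INSERT INTO proc_gate VALUES (1);
-- each caller then serializes on the row:
BEGIN;
SELECT id FROM proc_gate WHERE id = 1 FOR UPDATE;
SELECT my_stored_proc();  -- only one caller holds the row lock at a time
COMMIT;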
Chris
Alan Stange wrote:
Hello all,
is there a simple way to limit the number of concurrent callers to a
stored proc?
The problem we have is about 50 clients come and
No, unless you use some custom triggers.
prasanna s wrote:
Does postgres support indexed views/materialised views that some of the
other databases support?
Thanks
Prasanna S
Insert into a temp table then use INSERT INTO...SELECT FROM to insert
all rows into the proper table that don't have a relationship.
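Something along these lines (a sketch; every name is a placeholder):
CREATE TEMP TABLE staging AS SELECT * FROM target WHERE false;
COPY staging FROM stdin;  -- bulk-load the incoming rows here
INSERT INTO target
SELECT s.* FROM staging s
 WHERE NOT EXISTS (SELECT 1 FROM related r WHERE r.id = s.related_id);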
Chris
Dan Harris wrote:
I am working on a process that will be inserting tens of million rows
and need this to be as quick as possible.
The catch is that for
Roberto Germano Vieweg Neto wrote:
My application is using Firebird 1.5.2
I have at my database:
- 150 Domains
- 318 tables
- 141 Views
- 365 Procedures
- 407 Triggers
- 75 generators
- 161 Exceptions
- 183 UDFs
- 1077 Indexes
My question is:
Will PostgreSQL be faster than Firebird?
Try Slony: www.slony.info
Shashi Kanth Boddula wrote:
Hi,
I have one customer who is using PostgreSQL 7.4.8 on Linux. He has some
problems with database mirroring. The details are as follows.
The customer is using Linux on which PostgreSQL 7.4.8 along with JBoss
3.2.3 is running. He has 2
Try turning on query logging and using the 'pqa' utility on pgfoundry.org.
Chris
Agha Asif Raza wrote:
Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A
profiler is a tool that monitors the database server and outputs a
detailed trace of all the transactions/queries that
we are using jdbc -- the log_min_duration_statement = 3000
statement works fine for me. Looks like there's no other work around
for the bug(?). Not sure since I have no interest in logging a
million statements a day, I only want to see the poorly performing hits.
Doesn't it depend on what
Is there a different kind of 'prepared' statements
that we should be using in the driver to get logging
to work properly? What is the 'new' protocol?
The 8.0.2 jdbc driver uses real prepared statements instead of faked
ones. The problem is the new protocol (that the 8.0.2 driver uses) has
I'm a bit surprised by that behavior though, since it means that if we
delete a row from table A all tables (B,C,D) with FKs pointing to this
table (A) must be scanned.
If there is no index on those tables it means we are going to do sequential
scans. That can cause significant performance
database=> explain select date_trunc('hour', time), count(*) as total from
test where p1=53 and time > now() - interval '24 hours' group by
date_trunc order by date_trunc;
Try going:
time > '2005-06-28 15:34:00'
ie. put in the time 24 hours ago as a literal constant.
Chris
What's wrong with Slony?
David Mitchell wrote:
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction
What's wrong with Slony?
Because it's not multi-master. Our mirroring package is.
I'm curious - how did you write a multi-master replication package in
pgsql, when pgsql doesn't have 2 phase commits or any kind of
distributed syncing or conflict resolution in a release version?
Chris
PostgreSQL 8 for windows faster AND more reliable :)
Chris
Scott Goldstein wrote:
I'm currently trying to make a decision on whether to use the Cygwin
port of Postgres 7.4 or Postgres 8.0 for a windows installation. Can
someone provide some comparison info from a performance point of view?
Just read the docs in contrib/tsearch2 in the PostgreSQL distribution.
Pierre A. Fortier wrote:
I search for particular strings using regular expressions (e.g. where
column ~* $query) through a text data type column which contains notes
(some html code like <b>bold</b> is included)
It works but
Ok, if all 21 are affected, I can understand the problem.
But allow me to say that this is a functional error
No, it's normal MVCC design...
Thank you for the swift reply, the following is the output of the SHOW
ALL for shared_buffers and effective_cache_size.
shared_buffers: 13384
effective_cache_size: 4000
server memory: 2GB
effective_cache_size should be 10-100x larger perhaps...
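For a 2GB machine, something in this range might be reasonable (a rough
sketch; assumes most of the RAM is free for the OS disk cache, and that
the value is counted in 8KB pages):
# postgresql.conf (131072 * 8KB pages = 1GB)
effective_cache_size = 131072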
Chris
database is running on.
One instance (the live) just insists on doing the seq scan of the 50k
records in Price_Tbl and the 6.5k records in SCT2SubCatType_Tbl.
Seems weird
Cheers
Jona
Christopher Kings-Lynne wrote:
Thank you for the swift reply, the following is the output of the
SHOW ALL
We're hoping PostgreSQL can match or beat Sybase performance, and
preliminary tests look good. We should be able to get some load testing
going within a week, and we're shooting for slipping these machines into
the mix around the end of this month. (We've gone to some lengths to
keep our code
contrib/dbsize in the postgresql distribution.
Shanmugasundaram Doraisamy wrote:
Dear Group!
Thank you for all the support you all have been
providing from time to time. I have a small question: How do I find the
actual size of the Database? Awaiting your replies,
So, our problem in installing is we don't know a cluster or SSL from a
hole in the ground. Things get confusing about contexts - are we
talking about a user of the system or the database? Yikes, do I need
to write down the 30+ character autogenerated password?
No you don't need to write it
Without reading too hard, I suggest having a quick look at contrib/ltree
module in the PostgreSQL distribution. It may or may not help you.
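A tiny ltree sketch, in case it does (names are made up; needs
contrib/ltree installed):
CREATE TABLE category (path ltree);
CREATE INDEX category_path_idx ON category USING gist (path);
-- everything under Top.Adverts.Cars:
SELECT * FROM category WHERE path <@ 'Top.Adverts.Cars';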
Chris
hubert lubaczewski wrote:
hi
first let me draw the outline.
we have a database which stores adverts.
each advert is in one category, and one or
When I do an EXPLAIN ANALYZE on the above query, the result is:
Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) (actual
time=73.369..3330.281 rows=407 loops=1)
Filter: ((name)::text ~~ '%FRANCISCO'::text)
Total runtime: 3330.524 ms
(3 rows)
this is a query that our system
My Next Task: Finding a Stress Test Harness to Load, and Query Data.
Anyone have ideas?
I am eagerly awaiting the *destruction* of Oracle around here, and
yes I am an Oracle DBA and think it's *very* good technology.
Have you tried the simple 'gmake test'?
Other than that, try
select count(*) from mtable where day='Mon'
Results:
1. P3 600 512MB RAM MSSQL. It takes about 4-5 secs to
run. If I run a few queries and everything is cached,
it is sometimes just 1 second.
2. Athlon 1.3 Ghz 1GB RAM. PostgreSQL takes 7 seconds.
I have played with the buffers setting and
--As Chris pointed out, how real-world is this test?
His point is valid. The database we're planning will
have a lot of rows and require a lot of summarization
(hence my attempt at a test), but we shouldn't be
pulling a million rows at a time.
If you want to do lots of aggregate analysis, I
If I pg_dump that database then create a new database (e.g. tempdb)
and upload the dump file (thus making a duplicate) then the same query
only takes 190ms !!
Vacuum, vacuum analyse, and vacuum full analyse does not seem to have an
impact on these times.
Damn, for some reason I didn't read
Would CLUSTER / REINDEX still have an effect if our queries were done
via sequential scan?
SELECTS don't write to the database, so they have no effect at all on
vacuuming/analyzing. You only need to worry about that with writes.
This is an old database (as in, built by me when I
was just
I'm doing the writes individually. Is there a better way? Combining
them all into a transaction or something?
Use COPY of course :)
Or at worst bundle 1000 inserts at a time in a transaction...
And if you seriously do not care about your data at all, set fsync = off
in your postgresql.conf for
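For reference, the COPY form looks like this (a sketch; mytable and its
columns are placeholders, and the data lines are tab-separated):
COPY mytable (a, b) FROM stdin;
1	foo
2	bar
\.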
This time it worked! But VACUUM FULL requires an exclusive lock on the
table which I don't really want to grant. So my question is: why didn't
VACUUM ANALYZE do the job? Is there any setting I can tweak to
make a VACUUM without taking an exclusive lock?
You need to run normal vacuum analyze
Acceptable Answers to 'So, when/is PG meant to be getting a decent
partitioning system?':
1. Person X is working on it I believe.
2. It's on the list, but nobody has done anything about it yet
3. You're welcome to take a stab at it, I expect the community would
support your efforts
This is why I mention partitioning. It solves this issue by storing
different data sets on different machines under the same schema. These
separate chunks of the table can then be replicated as well for data
redundancy and so on. MySQL are working on these things
*laff*
Yeah, like they've been working on views for the last 5 years, and
still haven't released them :D :D :D
?
http://dev.mysql.com/doc/mysql/en/create-view.html
...for MySQL 5.0.1+ ?
Give me a call when it's RELEASED.
Chris
You didn't do analyze.
Chris
Jona wrote:
Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO: --Relation public.statcon_tbl--
INFO: Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04
So what's the difference between a COPY and a batch of INSERT
statements? Also, surely, fsyncs only occur at the end of a
transaction; no need to fsync before a commit has been issued, right?
With COPY, the data being inserted itself does not have to pass through
the postgresql parser.
Chris
conn.setAutoCommit(false);
pst = conn.prepareStatement("INSERT INTO tmp (...) VALUES (?,?)");
for (int i = 0; i < len; i++) {
pst.setInt(1, 2);      // JDBC parameter indexes start at 1
pst.setString(2, xxx);
pst.addBatch();
}
pst.executeBatch();
conn.commit();
This snip takes 1.3 secs in postgresql. How can I lower that?
You're
Yay! You're right! I wonder why I had forgotten about LANGUAGE SQL. :)
It's 30 seconds vs 5 seconds for CASE ... END inside PLpgsql vs CASE...END
LANGUAGE SQL. :) I.e. it's almost the same as in-place entered SQL.
Probably because simple SQL functions get inlined by the optimiser.
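For example, something this simple can be inlined into the calling query
(a sketch; grade is a made-up function):
CREATE FUNCTION grade(int) RETURNS text AS
'SELECT CASE WHEN $1 >= 90 THEN ''A'' ELSE ''B'' END'
LANGUAGE sql IMMUTABLE;
-- the plpgsql equivalent is a black box to the planner, so it cannot
-- be inlined the same way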
Chris
Is there a way to look at the stats tables and tell what is jamming up
your postgres server the most? Other than seeing long running queries
and watch top, atop, iostat, vmstat in separate xterms...I'm wondering
if postgres keeps some stats on what it spends the most time doing or if
there's
Try increasing your checkpoint_segments. In my example, our database
restore took 75mins. After increasing checkpoint_segments to 20, we cut
it down to less than 30 minutes.
Increasing maintenance_work_mem might help too ... or several other
settings ... with no information about exactly *what*
Am I correct is assuming that the timings are calculated locally by psql
on my client, thus including network latency?
No, explain analyze is done on the server...
Chris
To be more explicit, let's say I have table with two fields a and b. If
I have an index on (a,b) and I do a request like SELECT b FROM table
WHERE a=x, will Postgresql use only the index, or will it need to also
read the table page for that (those) row(s)?
It must read the table because of
I am new to cross references between tables, and I am trying to
understand how they impact performance. From reading the documentation I
was under the impression that deferring foreign keys would yield about
the same performance as dropping them before a copy, and adding them
after. However, I
Deferring makes no difference to FK checking speed...
But why then is the speed acceptable if I copy and then manually add the
FK? Is the check done by the FK so much different from when it is done
automatically using an active deferred FK?
Yeah I think it uses a different query formulation...
My problem with this really is that in my database it is hard to predict
which inserts will be huge (and thus need FKs disabled), so I would
have to code it around all inserts. Instead I can code my own integrity
logic and avoid using FKs altogether.
Just drop the fk and re-add it, until
I'd like to know whether indexed views are supported in psql 7.1.3?
No...
Is there any performance analysis tool for psql?
No, we keep telling you to upgrade to newer PostgreSQL. Then you can
use EXPLAIN ANALYZE.
Chris
I didn't find the analyse option in the explain command...
How do I get the time taken by an SQL procedure/query?
Explain analyze was added in 7.2 - you really need to upgrade...
You can use \timing in psql to get an approximation...
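e.g. (my_table is a placeholder):
\timing
select count(*) from my_table;
-- psql prints the elapsed time after each statement from then on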
Chris
How do I find the time taken by a query/stored procedure?
In psql, use \timing for an approximate time.
I am using psql 7.1.3 in linux 7.2
How do I execute 'explain analyse' in psql? Is it supported in 7.1.3?
Explain analyze is NOT supported in PostgreSQL 7.1. You really should
upgrade your
Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,
I accidentally strung together several \n-terminated input lines,
and sent them to the server with a single putline.
To my (happy) surprise, I ended up with exactly that number of rows
in the target table.
Is this a bug? Is this
There is clear benefit from forcing them to be the same. In logical data
terms, they *should* be the same. I don't check fruit.apple_grade
against fruit_type.orange_grade. When would I want to make a check of
that nature? If there is a reason, that's great, let's keep the status quo
then.
I respect the
Hmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show
the runtime expended in each trigger when the statement is of a kind
that has triggers. We couldn't break down the time *within* the
triggers, but even this info would help a lot in terms of finger
pointing ...
Seq
In that case there's a datatype mismatch between the referencing and
referenced columns, which prevents the index from being used for the
FK check.
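On 8.0+ the usual fix is to alter the referencing column so the types
match (a sketch with made-up names):
-- assuming child.parent_id is integer while parent.id is bigint:
ALTER TABLE child ALTER COLUMN parent_id TYPE bigint;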
Is creating such a foreign key a WARNING yet?
Chris
Watch your pg_stat_* views before and after the delete and check what
related tables have had lots of seqscans.
Chris
Mark Lewis wrote:
Tom,
I've got a similar problem with deletes taking a very long time. I know
that there are lots of foreign keys referencing this table, and other
foreign
May be it's time to revisit an old proposal that has failed to catch
anybody's attention during the 7.4 beta period:
http://archives.postgresql.org/pgsql-hackers/2003-08/msg00937.php
I'm not sure I'd store index correlation in a separate table today.
You've invented something better for functional
You will still need to use double quotes in 8.0.1...
Chris
Gourish Singbal wrote:
Thanks a lot,
we might be upgrading to 8.0.1 soon.. till than using double quotes
should be fine.
regards
gourish
On Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith [EMAIL PROTECTED] wrote:
On Mon, 14 Mar 2005 06:14
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I
use 'enum'
2) in PostgreSQL in some cases I use connection fields that are not of
the same type (smallint - integer (SERIAL)), in MySQL I use the same
types
Well both those things will make PostgreSQL slower...
Chris
Put "" (double quotes) around the column name, eg:
insert into "limit" values (1, 2, 3);
Chris
Gourish Singbal wrote:
Guys,
I am having a problem firing queries on one of the tables which is
having limit as the column name.
If I run an insert/select/update command on that table I get the below error.
ERROR: syntax
I'm doing VACUUM ANALYZE once a night. Before the tests I did VACUUM and
then ANALYZE.
I'd suggest once an hour on any reasonably active database...
Chris
Unfortunately we will need a rowtype with all the departaments:
CREATE DOMAIN departaments AS (a int, b int, c int, d int, ...);
I think you mean CREATE TYPE departments...
Chris
with about 8000 rows. For this table query:
SELECT MAX(MsgSeqNum),MAX(LogTimestamp) FROM ELT_tcli_MessageLog
WHERE LogTimestamp >= '0' AND IsFromCounterParty = 'Y' AND
IsOutOfSequence = 'N'
AND ConnectionName = 'DB_BENCHMARK'
AND LogTimestamp IN (SELECT MAX(LogTimestamp)