We have had four databases serving our web site, but due to licensing issues we have had to take two out of production. We are looking to bring those two onto PostgreSQL very quickly, with an eye toward moving everything in the longer term. The central web DBs are all copies of
This is a pattern which I've seen many times. I call it a best
choice query -- you can easily match a row from one table against any
of a number of rows in another; the trick is to pick the one that
matters most. I've generally found that I want the query results to
show more than the columns
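The "best choice" pattern described above can be sketched with a toy example. This uses sqlite3 only because it is self-contained; the table and column names are invented, not taken from the thread:

```python
import sqlite3

# Hypothetical schema: each case should be matched to its single most
# relevant note (highest priority). Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cases (case_id INTEGER PRIMARY KEY);
    CREATE TABLE notes (case_id INTEGER, priority INTEGER, body TEXT);
    INSERT INTO cases VALUES (1), (2);
    INSERT INTO notes VALUES
        (1, 10, 'low'), (1, 20, 'high'),
        (2, 5,  'only');
""")

# "Best choice": select the candidate rows, then exclude any candidate
# for which a better choice exists within the same candidate set.
rows = conn.execute("""
    SELECT c.case_id, n.body
      FROM cases c
      JOIN notes n ON n.case_id = c.case_id
     WHERE NOT EXISTS (
               SELECT 1 FROM notes better
                WHERE better.case_id = n.case_id
                  AND better.priority > n.priority
           )
     ORDER BY c.case_id
""").fetchall()
print(rows)  # -> [(1, 'high'), (2, 'only')]
```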
load testing
going within a week, and we're shooting for slipping these machines into
the mix around the end of this month. (We've gone to some lengths to
keep our code portable.)
-Kevin
Richard Huxton dev@archonet.com 06/09/05 3:06 AM
Kevin Grittner wrote:
The manager of the DBA team
appreciate if you have further ideas to troubleshoot this issue.
Thank you!
On 6/8/05, Kevin Grittner [EMAIL PROTECTED] wrote:
This is a pattern which I've seen many times. I call it a best
choice query -- you can easily match a row from one table against any
of a number of rows in another
I've done a lot of work with a bookkeeping system where we have such
redundancy built in. The auditors, however, need to be able to generate
lists of the financial transaction detail to support balances. These
reports are among the most demanding in the system. I shudder to think
how
It sure would be nice if the optimizer would consider that it had the
leeway to add any column which was restricted to a single value to any
point in the ORDER BY clause. Without that, the application programmer
has to know what indexes are on the table, rather than being able to
just worry about
I agree that ignoring useless columns in an ORDER BY clause is less
important than ignoring index columns where the value is fixed. There
is one use case for ignoring useless ORDER BY columns that leaps to
mind, however -- a column is added to the ORDER BY clause of a query to
help out the
I just tried this on 8.0.3. A query which runs very fast through an
index on a 25 million row table blocked when I dropped the index within
a database transaction. As soon as I rolled back the database
transaction, the query completed, the index appears fine, and the query
runs fast, as usual.
I was hesitant to jump in on this because I am new to PostgreSQL and
haven't seen this problem with _it_, but I have seen this with the
Sybase database products. You can configure Sybase to disable the Nagle
algorithm. If you don't, any query which returns rows too big to fit in
their network
Just out of curiosity, does it do any better with the following?
SELECT ...
FROM a
JOIN b ON (a.key = b.key)
LEFT JOIN c ON (c.key = a.key)
LEFT JOIN d ON (d.key=a.key)
WHERE (b.column = 100)
Dario Pudlo [EMAIL PROTECTED] 07/06/05 4:54 PM
(first at all, sorry for
Hi Oliver,
We had low resource utilization and poor throughput on inserts of
thousands of rows within a single database transaction. There were a
lot of configuration parameters we changed, but the one which helped the
most was wal_buffers -- we wound up setting it to 1000. This may be
higher
This is sounding suspiciously similar to behavior I've seen with other types of
TCP database connections when the tcp-no-delay option is not on. Is it
possible that the ODBC driver for Windows is not successfully setting this up?
-Kevin
Dalibor Sramek [EMAIL PROTECTED] 09/13/05 9:34 AM
(1) Latency and throughput don't necessarily correlate well. When blasting
quantities of data to test throughput, TCP_NODELAY might not matter
much -- a full buffer will be sent without a delay anyway. What do you get
on a ping while running the throughput test?
(2) Besides the TCP_NODELAY
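The TCP_NODELAY option discussed above is set per socket; a minimal sketch over a loopback connection (the listener is only there to give the client something to connect to):

```python
import socket

# Disable Nagle's algorithm on a client socket so small packets are
# sent immediately instead of being coalesced.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# A nonzero value confirms the option is set on this connection.
nodelay = client.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay != 0)  # -> True

client.close()
listener.close()
```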
Have you tried the best choice pattern -- where you select the set of
candidate rows and then exclude those for which a better choice
exists within the set? I often get better results with this pattern than
with the alternatives. Transmuting your query to use this pattern gives:
select
First off, Mr. Trainor's response proves nothing about anyone or
anything except Mr. Trainor.
I'm going to offer an opinion on the caching topic. I don't have
any benchmarks; I'm offering a general sense of the issue based on
decades of experience, so I'll give a short summary of that.
I've
Human feedback from testers and users has proven pretty effective
at catching errors in the human assisted cache configuration. When
people setting up the servers have missed the named cache configuration,
and all they had was the single general purpose cache, it has been
In addition to what Mark pointed out, there is the possibility that a
query
is running which is scanning a large table or otherwise bringing in a
large number of pages from disk. That would first use up all available
unused cache space, and then may start replacing some of your
frequently used
In this particular case both outer joins are to the same table, and
the where clause is applied to one or the other, so it's pretty easy
to prove that they should generate identical results. I'll grant that
this is not generally very useful; but then, simple test cases often
don't look very
The ANSI/ISO specs are not at all ambiguous on this. An
ORDER BY is not allowed for the SELECT statements within
a UNION. It must come at the end and be applied to the resulting
UNION.
Similarly, the column names in the result come from the first
query in the UNION. Column names in the query on
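Both rules can be demonstrated directly; sqlite3 is used here as a conveniently embeddable SQL engine (PostgreSQL behaves the same way on these points):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# An ORDER BY attached to one branch of a UNION is a syntax error;
# it must follow the whole compound statement.
try:
    conn.execute("SELECT 1 AS n ORDER BY n UNION SELECT 2")
    branch_order_by_ok = True
except sqlite3.OperationalError:
    branch_order_by_ok = False
print(branch_order_by_ok)  # -> False

# The trailing ORDER BY sorts the combined result, and it can refer to
# the column name established by the first query in the UNION.
cur = conn.execute("SELECT 2 AS n UNION SELECT 1 AS m ORDER BY n")
rows = cur.fetchall()
names = [d[0] for d in cur.description]
print(names, rows)
```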
That sure seems to bolster the theory that performance is degrading
because you exhaust the cache space and need to start reading
index pages. When inserting sequential data, you don't need to
randomly access pages all over the index tree.
-Kevin
Kelly Burkhart [EMAIL PROTECTED]
I modified
We're converting from a commercial database product to PostgreSQL, and
generally things are going well. While the licensing agreement with the
commercial vendor prohibits publication of benchmarks without their
written consent, I'll just say that on almost everything, PostgreSQL is
faster.
We do
On Wed, Feb 1, 2006 at 1:34 pm, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
We do have a few queries where PostgreSQL is several orders of
magnitude slower. It appears that the reason it is choosing a bad
plan
On Wed, Feb 1, 2006 at 2:36 pm, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
I'm interested to poke at this ... are you in a position to provide
a
test case?
I can't supply the original data
On Wed, Feb 1, 2006 at 2:43 pm, in message
[EMAIL PROTECTED], Kevin Grittner
[EMAIL PROTECTED] wrote:
I took out the OR in the
where clause, without eliminating that last outer join, and it
optimized
fine.
FYI, with both sides of the OR separated:
explain analyze
SELECT C.*, P.partyNo
I hesitate to raise this issue again, but I've noticed something which I
thought might be worth mentioning. I've never thought the performance
of count(*) on a table was a significant issue, but I'm prepared to say
that -- for me, at least -- it is officially and totally a NON-issue.
We are
On Wed, Feb 22, 2006 at 9:52 pm, in message
[EMAIL PROTECTED], Greg Stark [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
There have been several times that I have run a SELECT COUNT(*) on
an entire
table on all central machines. On identical hardware, with identical
Just a "wouldn't it be nice if" sort of feature request. I'm not sure
how practical it is.
Someone in our organization wrote a data fix query, which has sort of
odd logic, but it does what they need. The problem is that it ran for
14 hours in a test against a copy of the data. I looked at it and
On Tue, Feb 28, 2006 at 11:05 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
The issue at hand really has nothing to do with temp indexes, it's
with
the constrained way that the planner deals with EXISTS subplans.
Yet when the index exists, the query is optimized well.
On Tue, Feb 28, 2006 at 11:05 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
The limiting factor is that EXISTS subplans
aren't flattened ... and once that's fixed, I doubt the example
would
need any new kind of join support.
I rewrote the query to use IN predicates
On Tue, Feb 28, 2006 at 3:02 pm, in message
[EMAIL PROTECTED], Jim C. Nasby
[EMAIL PROTECTED]
wrote:
Maybe it's just the way my twisted mind thinks, but I generally
prefer
using a JOIN when possible...
Definitely. But sometimes you don't want one row from a table for each
qualifying row
Attached is a simplified example of a performance problem we have seen,
with a workaround and a suggestion for enhancement (hence both the
performance and hackers lists).
Our software is allowing users to specify the start and end dates for a
query. When they enter the same date for both, the
On Wed, Mar 15, 2006 at 12:17 pm, in message
[EMAIL PROTECTED], Andreas Kretschmer
[EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] schrieb:
Attached is a simplified example of a performance problem we have
seen,
Odd. Can you tell us your PG-Version?
I know we really should
We were seeing clusters of query timeouts with our web site, which were
corrected by adjusting the configuration of the background writer. I'm
posting just to provide information which others might find useful -- I
don't have any problem I'm trying to solve in this regard.
The web site gets 1 to
On Wed, Mar 15, 2006 at 1:17 pm, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
8.1 is certainly capable of devising the plan you want, for example
in the regression database:
regression=# explain select * from tenk1 where thousand = 10 and
tenthous
between 42 and 144;
On Wed, Mar 15, 2006 at 1:54 pm, in message
[EMAIL PROTECTED], Joshua D. Drake
[EMAIL PROTECTED] wrote:
I then did some calculations, based on the sustained write speed of
our
drive array (as measured by copying big files to it), and we tried
this:
bgwriter_lru_percent = 20.0
On Wed, Mar 15, 2006 at 5:05 pm, in message
[EMAIL PROTECTED], Simon Riggs
[EMAIL PROTECTED] wrote:
On Wed, 2006-03-15 at 11:56 -0600, Kevin Grittner wrote:
(One obvious way to fix it would be to
rewrite BETWEEN a AND b as = a when a is equal to b, but it
seems
like there is some
On Thu, Mar 16, 2006 at 12:15 pm, in message
[EMAIL PROTECTED], Evgeny Gridasov
[EMAIL PROTECTED] wrote:
please, could you post other settings from your postgresql.conf?
Everything in postgresql.conf which is not commented out:
listen_addresses = '*' # what IP interface(s) to
On Fri, Mar 17, 2006 at 6:24 am, in message
[EMAIL PROTECTED], Evgeny Gridasov
[EMAIL PROTECTED] wrote:
I've made some tests with pgbench
If possible, tune the background writer with your actual application
code under normal load. Optimal tuning is going to vary based on usage
patterns.
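As a rough illustration of the knobs involved, here is an 8.1/8.2-era background writer fragment for postgresql.conf. The only value taken from this thread is bgwriter_lru_percent; the rest are illustrative starting points, not recommendations:

```
bgwriter_delay = 200ms          # time between bgwriter rounds
bgwriter_lru_percent = 20.0     # portion of the LRU buffers scanned per round
bgwriter_lru_maxpages = 200     # cap on LRU pages written per round
bgwriter_all_percent = 10.0     # portion of all buffers scanned per round
bgwriter_all_maxpages = 600     # cap on total pages written per round
```

Tune against the actual application under normal load, as noted above.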
To support migration of existing queries, it would be nice not to have
to rewrite EXISTS clauses as IN clauses. Here is one example of a query
which optimizes poorly:
DELETE FROM CaseDispo
WHERE EXISTS
(
SELECT * FROM Consolidation C
WHERE
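The EXISTS-to-IN rewrite mentioned above can be sketched with stand-in tables (the original query is truncated, so names and data here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE case_dispo (case_no INTEGER);
    CREATE TABLE consolidation (case_no INTEGER);
    INSERT INTO case_dispo VALUES (1), (2), (3);
    INSERT INTO consolidation VALUES (2), (3);
""")

# EXISTS form: a correlated subquery per candidate row.
n_exists = conn.execute("""
    SELECT count(*) FROM case_dispo d
     WHERE EXISTS (SELECT 1 FROM consolidation c
                    WHERE c.case_no = d.case_no)
""").fetchone()[0]

# Equivalent IN form, which the planner of that era often handled better.
n_in = conn.execute("""
    SELECT count(*) FROM case_dispo d
     WHERE d.case_no IN (SELECT case_no FROM consolidation)
""").fetchone()[0]
print(n_exists, n_in)  # -> 2 2
```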
On Sun, Nov 26, 2006 at 5:24 AM, in message
[EMAIL PROTECTED], Joost Kraaijeveld
[EMAIL PROTECTED] wrote:
Are there guidelines (or any empirical data) available how to
determine
how often a table should be vacuumed for optimum performance or is
this
an experience / trial- and- error thing?
On Mon, Dec 11, 2006 at 8:58 AM, in message
[EMAIL PROTECTED], Axel
Waggershauser [EMAIL PROTECTED] wrote:
I'm out of ideas here, maybe someone could try to reproduce this
behavior or could point me to the thread containing relevant
information
No guarantees that this is the problem, but
As I understand it, the log space accumulates for the oldest transaction
which is still running, and all transactions which started after it. I
don't think there is any particular limit besides available disk space.
Long running transactions can cause various problems, including table
and index
On Fri, Dec 22, 2006 at 12:14 PM, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
As I understand it, the log space accumulates for the oldest
transaction
which is still running, and all transactions which started after
I'm posting this to performance in case our workaround may be of benefit to
someone with a similar issue. I'm posting to hackers because I hope we can
improve our planner in this area so that a workaround is not necessary. (It
might make sense to reply to one group or the other, depending on
On Fri, Mar 23, 2007 at 4:49 PM, in message [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
explain analyze
SELECT A.adjustmentNo, A.tranNo, A.countyNo, H.date,
H.userId, H.time
FROM Adjustment A
JOIN TranHeader H ON (H.tranId
I don't understand -- TRUE OR UNKNOWN evaluates to TRUE, so why would the IN
need to continue? I'm not quite following the rest; could you elaborate or
give an example? (Sorry if I'm lagging behind the rest of the class here.)
-Kevin
Martijn van Oosterhout kleptog@svana.org 03/23/07
On Fri, Mar 23, 2007 at 5:26 PM, in message
[EMAIL PROTECTED], Kevin Grittner
[EMAIL PROTECTED] wrote:
I tried something which seems
equivalent, but it is running for a very long time. I'll show it with just
the explain while I wait to see how long the explain analyze takes
On Fri, Mar 23, 2007 at 6:04 PM, in message
[EMAIL PROTECTED], Peter Kovacs
[EMAIL PROTECTED] wrote:
On 3/23/07, Kevin Grittner [EMAIL PROTECTED] wrote:
[...]
That's the good news. The bad news is that I operate under a management
portability dictate which doesn't currently allow
On Mon, Apr 23, 2007 at 10:52 AM, in message
[EMAIL PROTECTED], Nelson
Kotowski [EMAIL PROTECTED] wrote:
I don't get how creating only the indexes I cluster on would improve my
cluster command performance. I believed that all other indexes wouldn't
interfere because so far they're created
On Thu, Jun 14, 2007 at 6:14 PM, in message [EMAIL PROTECTED],
Craig James [EMAIL PROTECTED] wrote:
Looking for replication solutions, I find:
Slony-I
Slony-II
PGReplication
PGCluster
You wouldn't guess it from the name, but pgpool actually supports replication:
On Thu, Jun 28, 2007 at 1:54 AM, in message
[EMAIL PROTECTED], Ho Fat Tsang
[EMAIL PROTECTED] wrote:
I don't know why the server occasionally
slows down a bit every 3 minutes.
If the problem is checkpoints, try making your background writer more
aggressive. This allows more of the
These query times are the fully cached times for both, from doing a previous
run of the same query. (The first one took 193.772 ms on its first run; I
don't have a good uncached timing for the second one at this point.)
It seems like the first query could move the searchName filter to the
On Fri, Aug 10, 2007 at 8:08 AM, in message
[EMAIL PROTECTED], Brian Hurt [EMAIL PROTECTED]
wrote:
runic wrote:
I have a table with ca. 1,250,000 records. When I execute
a select count(*) from the table (with pgAdmin III) it takes about 40
secs.
I think that takes much too long. Can you please
On Mon, Aug 13, 2007 at 10:35 AM, in message
[EMAIL PROTECTED], Relyea,
Mike [EMAIL PROTECTED] wrote:
I'm running 8.2.4 on Windows XP with 1.5 GB memory.
shared_buffers = 12288
effective_cache_size = 1
For starters, you might want to adjust one or both of these. It looks to me
like
On Mon, Aug 13, 2007 at 1:48 PM, in message
[EMAIL PROTECTED], Relyea,
Mike [EMAIL PROTECTED] wrote:
I've increased shared_buffers to 128MB, and restarted the server. My
total run time didn't really change.
Please forgive me if this guess doesn't help either, but could you try
eliminating
On Mon, Aug 13, 2007 at 4:00 PM, in message
[EMAIL PROTECTED], Relyea,
Mike [EMAIL PROTECTED] wrote:
Re-writing the view like this maybe bought me something.
Tough to tell because I also increased some of the statistics.
I don't know whether it was the finer-grained statistics or the
On Mon, Aug 13, 2007 at 4:25 PM, in message
[EMAIL PROTECTED], Kevin Grittner
[EMAIL PROTECTED] wrote:
On Mon, Aug 13, 2007 at 4:00 PM, in message
[EMAIL PROTECTED], Relyea,
Mike [EMAIL PROTECTED] wrote:
From what Tom
says, it sounds like if I want the data returned faster I'm likely
On Fri, Aug 10, 2007 at 11:57 AM, in message
[EMAIL PROTECTED], Relyea,
Mike [EMAIL PROTECTED] wrote:
I have the following view as part of a larger, aggregate query that is
running slower than I'd like.
. . .
HAVING PrintSamples.MachineID = 4741 OR PrintSamples.MachineID =
4745 AND
On Fri, Aug 10, 2007 at 11:57 AM, in message
[EMAIL PROTECTED], Relyea,
Mike [EMAIL PROTECTED] wrote:
HAVING PrintSamples.MachineID = 4741 OR PrintSamples.MachineID =
4745 AND . . .
On top of the issue in my prior email, I don't see any test for 4745 in the
EXPLAIN ANALYZE output, which
On Tue, Aug 21, 2007 at 9:40 PM, in message
[EMAIL PROTECTED], Stephen Davies [EMAIL PROTECTED]
wrote:
Is there any way to make the larger queries more efficient?
People would be in a better position to answer that if you posted the table
structure and the results of EXPLAIN ANALYZE (rather
On Fri, Aug 24, 2007 at 2:57 PM, in message
[EMAIL PROTECTED], Kevin Kempter
[EMAIL PROTECTED] wrote:
c) setup WAL archiving on the 8.1.4 cluster
d) do a full dump of the 8.1.4 cluster and restore it to the new 8.2.4
cluster
e) stop the 8.2.4 cluster and bring it up in
Decibel! [EMAIL PROTECTED] 08/27/07 4:00 PM
They're running version 8.1.4
As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *
FROM bloated_table? That would likely be much faster than messing around
with pg_dump.
He wanted to upgrade to 8.2.4. CREATE TABLE AS
On Mon, Aug 27, 2007 at 11:13 PM, in message
[EMAIL PROTECTED], Kevin Kempter
[EMAIL PROTECTED] wrote:
Each night during the nightly batch processing several of the servers (2 in
particular) slow to a crawl - they are dedicated postgres database servers.
There is a lot of database activity
On Tue, Aug 28, 2007 at 10:22 AM, in message
[EMAIL PROTECTED], Evan Carroll
[EMAIL PROTECTED] wrote:
Yes, I ran vacuum full after loading both dbs.
Have you run VACUUM ANALYZE or ANALYZE?
-Kevin
On Sat, Sep 1, 2007 at 12:29 PM, in message
[EMAIL PROTECTED], Shawn
[EMAIL PROTECTED] wrote:
update shawns_data set alias = null;
Even after VACUUM this simple line takes 35 sec to complete.
Would any rows already have a null alias when you run this?
If so, try adding 'where alias is not
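The effect of that restriction can be sketched with a toy table (layout invented; only the column name comes from the thread). Only the rows that would actually change are touched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shawns_data (id INTEGER PRIMARY KEY, alias TEXT)")
conn.executemany("INSERT INTO shawns_data VALUES (?, ?)",
                 [(1, "a"), (2, None), (3, "b"), (4, None)])

# Restricting the UPDATE avoids rewriting rows whose alias is
# already NULL; in PostgreSQL that also avoids creating dead tuples
# for unchanged rows.
cur = conn.execute(
    "UPDATE shawns_data SET alias = NULL WHERE alias IS NOT NULL")
changed = cur.rowcount
print(changed)  # -> 2
```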
On Mon, Sep 3, 2007 at 11:15 AM, in message
[EMAIL PROTECTED], Shawn
[EMAIL PROTECTED] wrote:
On Sun, 02 Sep 2007 10:49:09 -0500 Kevin Grittner
[EMAIL PROTECTED] wrote:
On Sat, Sep 1, 2007 at 12:29 PM, in message
[EMAIL PROTECTED], Shawn
[EMAIL PROTECTED] wrote:
update shawns_data
On Mon, Sep 3, 2007 at 11:57 AM, in message
[EMAIL PROTECTED], Shawn
[EMAIL PROTECTED] wrote:
Also it runs a lot faster by itself
Given the context of the run, there is a possibility that a checkpoint tends
to fall at this point in the script because you're filling your WAL files.
There is a
On Mon, Sep 3, 2007 at 6:53 PM, in message
[EMAIL PROTECTED], Shawn
[EMAIL PROTECTED] wrote:
vacuum verbose analyze shawns_data;
INFO: vacuuming public.shawns_data
INFO: scanned index shawns_data_pkey to remove 21444 row versions
DETAIL: CPU 0.24s/0.12u sec elapsed 8.35 sec.
INFO:
On Wed, Sep 5, 2007 at 5:41 PM, in message [EMAIL PROTECTED],
Thomas Finneid [EMAIL PROTECTED] wrote:
how does pg utilise multi cpus/cores, i.e. does it use more than one
core? And possibly, how? Is there any documentation about this?
For portability reasons PostgreSQL doesn't use
On Mon, Sep 10, 2007 at 2:25 PM, in message [EMAIL PROTECTED],
Carlo Stonebanks [EMAIL PROTECTED] wrote:
is the default setting of 4.0 realistic or could it be lower?
Wow, such a simple, innocent question.
As you may have inferred, it can't be answered in isolation. Make sure that
you
On Mon, Sep 17, 2007 at 2:49 AM, in message
[EMAIL PROTECTED], valgog
[EMAIL PROTECTED] wrote:
What about saying?:
TBL1.CATEGORY = TBL2.CATEGORY
Are you sure you understood what the question was?
Is the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &
TBL2.CATEGORY <> 0?
On Mon, Sep 17, 2007 at 8:37 AM, in message [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
On Mon, Sep 17, 2007 at 2:49 AM, in message
[EMAIL PROTECTED], valgog
[EMAIL PROTECTED] wrote:
Are you sure you understood what was the question
On Thu, Sep 20, 2007 at 4:36 PM, in message
[EMAIL PROTECTED], Kamen Stanev
[EMAIL PROTECTED] wrote:
Is there a way to find which query is doing large io operations and/or which
is using cached data and which is reading from disk.
A big part of your cache is normally in the OS, which
On Fri, Sep 21, 2007 at 12:30 PM, in message
[EMAIL PROTECTED], [EMAIL PROTECTED]
wrote:
This is the plan for the old server:
Hash Join (cost=449.55..8879.24 rows=136316 width=904) (actual
time=50.734..1632.491 rows=136316 loops=1)
. . .
Total runtime: 2022.293 ms
And this is the plan
On Fri, Sep 21, 2007 at 3:40 PM, in message [EMAIL PROTECTED],
Luiz K. Matsumura [EMAIL PROTECTED] wrote:
but suddenly the performance slowed down. We noticed that the problem
was with the time to connect to the server, which was very slow.
I think that was some DNS problem (but not
On Thu, Oct 4, 2007 at 10:28 AM, in message
[EMAIL PROTECTED], Josh Trutwin
[EMAIL PROTECTED] wrote:
running postgres 8.1.4
# cat /proc/meminfo
total:used:free: shared: buffers: cached:
Mem: 3704217600 3592069120 1121484800 39460864 2316271616
shared_buffers =
I have a situation where a query is running much slower than I would
expect. The ANALYZE showed that it is hashing some information which
is rarely needed. When I set enable_hashjoin = off for the
connection, the query ran in 1/1000 the time.
This isn't a debilitating level of performance, but
On Wed, Oct 10, 2007 at 1:31 AM, in message
[EMAIL PROTECTED], Simon Riggs [EMAIL PROTECTED]
wrote:
On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:
I have a situation where a query is running much slower than I would
expect. The ANALYZE showed that it is hashing some information
On Wed, Oct 10, 2007 at 1:07 PM, in message [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Basically the planner doesn't ever optimise for the possibility of the
never-executed case because even a single row returned would destroy
that assumption.
On Wed, Oct 10, 2007 at 1:54 PM, in message
[EMAIL PROTECTED], Simon Riggs [EMAIL PROTECTED]
wrote:
But the planner doesn't work on probability. It works on a best-guess
selectivity, as known at planning time.
The point I'm trying to make is that at planning time the
pg_statistic row for
On Wed, Oct 10, 2007 at 2:52 PM, in message
[EMAIL PROTECTED], Simon Riggs [EMAIL PROTECTED]
wrote:
The fast plan is an all-or-nothing plan. It is *only* faster when the
number of matched rows is zero. You know it is zero, but currently the
planner doesn't, nor is it able to make use of
On Wed, Oct 10, 2007 at 3:32 PM, in message [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
I'd be very hesitant
to make it choose a plan that is fast only if there were exactly zero
such rows and is slow otherwise.
I'm not sure why it looks at the slow option at all; it seems like
On Wed, Oct 10, 2007 at 3:48 PM, in message
[EMAIL PROTECTED], Kevin Grittner
[EMAIL PROTECTED] wrote:
This ten times faster
That understates it -- I forgot to get things cached, as I had done
for all the other tests. When cached, this is sub-millisecond,
although not quite the 1000-fold
On Wed, Oct 10, 2007 at 3:48 PM, in message
[EMAIL PROTECTED], Kevin Grittner
[EMAIL PROTECTED] wrote:
I'm not sure why it looks at the slow option at all; it seems like a
remaining weakness in the OUTER JOIN optimizations. If I change the query to
use an inner join between the CaseHist
On Fri, Oct 12, 2007 at 9:57 AM, in message
[EMAIL PROTECTED], Theo Kramer
[EMAIL PROTECTED] wrote:
select * from foo where
(a = a1 and b = b1 and c >= c1) or
(a = a1 and b < b1) or
(a > a1)
order by a, b desc, c;
I have, however, found that transforming the above into a union
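The UNION transformation referred to above can be sketched as follows: each OR arm becomes its own UNION ALL branch, so each branch can use the (a, b desc, c) index independently. Data and the a1/b1/c1 values here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (a INTEGER, b INTEGER, c INTEGER);
    CREATE INDEX foo_abc ON foo (a, b DESC, c);
    INSERT INTO foo VALUES (1, 9, 1), (2, 7, 1), (2, 5, 3),
                           (2, 5, 9), (3, 1, 1);
""")

# Original OR form, resuming the scan after position (a1=2, b1=5, c1=3).
or_rows = conn.execute("""
    SELECT * FROM foo
     WHERE (a = 2 AND b = 5 AND c >= 3)
        OR (a = 2 AND b < 5)
        OR (a > 2)
     ORDER BY a, b DESC, c
""").fetchall()

# UNION ALL form: one indexable range per branch, then a final sort.
union_rows = conn.execute("""
    SELECT * FROM (
        SELECT * FROM foo WHERE a = 2 AND b = 5 AND c >= 3
        UNION ALL
        SELECT * FROM foo WHERE a = 2 AND b < 5
        UNION ALL
        SELECT * FROM foo WHERE a > 2
    ) ORDER BY a, b DESC, c
""").fetchall()
print(or_rows == union_rows)  # -> True
```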
On Fri, Oct 12, 2007 at 3:41 PM, in message
[EMAIL PROTECTED], henk de wit
[EMAIL PROTECTED] wrote:
I have a table with some 50 millions rows in PG 8.2. The table has indexes
on relevant columns. My problem is that most everything I do with this table
(which are actually very basic
On Mon, Oct 22, 2007 at 11:10 AM, in message
[EMAIL PROTECTED], Lee Keel
[EMAIL PROTECTED] wrote:
there will probably be no more than 50 simultaneous requests.
Dual-Quad Core 2.33GHz
My benchmarks have indicated that you want to keep the number of
active queries at or below four times the
On Wed, Oct 24, 2007 at 7:15 AM, in message
[EMAIL PROTECTED], Honza Novak
[EMAIL PROTECTED] wrote:
But sometimes (I don't know exactly for what reason) some queries get
stuck (mostly they are inserts or updates, but really simple) and
postgres is unable to answer in time
In addition
On Wed, Oct 31, 2007 at 11:45 AM, in message
[EMAIL PROTECTED], Ketema
[EMAIL PROTECTED] wrote:
Who has built the biggest baddest Pg server out there and what do you
use?
I don't think that would be us, but I can give you an example of
what can work. We have a 220 GB database which is a
On Mon, Nov 5, 2007 at 10:42 AM, in message
[EMAIL PROTECTED], Campbell,
Lance [EMAIL PROTECTED] wrote:
How can I [. . .] get rid of some unnecessary indexes
Here's what I periodically run to look for unused indexes:
select relname, indexrelname
from pg_stat_user_indexes
where
On Thu, Nov 8, 2007 at 2:14 PM, in message
[EMAIL PROTECTED], Dimitri Fontaine
[EMAIL PROTECTED] wrote:
The Dell 2900 5U machine has 10 spindles max, that would make 2 for the OS
(raid1) and 8 for mixing WAL and data... not enough to benefit from the
move,
or still to test?
From our
On Fri, Nov 16, 2007 at 4:01 PM, in message
[EMAIL PROTECTED], Merlin Moncure
[EMAIL PROTECTED] wrote:
On Nov 16, 2007 10:56 AM, Brad Nicholson [EMAIL PROTECTED] wrote:
On Wed, 2007-11-14 at 17:46 -0500, Tom Lane wrote:
Russell Smith [EMAIL PROTECTED] writes:
It is possible that analyze
On Mon, Dec 10, 2007 at 6:29 PM, in message [EMAIL PROTECTED],
kelvan [EMAIL PROTECTED] wrote:
I need a more powerful DBMS, one that can
handle multithreading.
If you're looking to handle a lot of concurrent users, PostgreSQL
has the power. The threading issues really only impact the
On Mon, Dec 10, 2007 at 6:15 PM, in message
[EMAIL PROTECTED], Kevin Grittner wrote:
with 6 MB of RAM
Obviously a typo -- that should read 6 GB of RAM.
Yesterday we moved a 300 GB table containing document images (mostly
raster-scanned from paper), into a 215 GB PostgreSQL 8.2.5 database which
contains the related case management data. (This separation was never right,
since there were links from one to the other, but was necessary under our
On Thu, Dec 13, 2007 at 10:11 AM, in message
[EMAIL PROTECTED], Simon Riggs [EMAIL PROTECTED]
wrote:
On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:
The data was inserted through a Java program using a prepared
statement with no indexes on the table. The primary key
On Thu, Dec 13, 2007 at 10:35 AM, in message [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
... although to a naive user it's not clear what
is known at vacuum time that the INSERT into the empty table
couldn't have inferred.
The fact
On Thu, Dec 13, 2007 at 10:11 AM, in message
[EMAIL PROTECTED], Simon Riggs [EMAIL PROTECTED]
wrote:
On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:
The data was inserted through a Java program using a prepared
statement with no indexes on the table. The primary key
On Thu, Dec 13, 2007 at 12:12 PM, in message
[EMAIL PROTECTED], Simon Riggs [EMAIL PROTECTED]
wrote:
On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:
If the hint bit changes are written to the WAL ...
They're not.
So one would expect a write-intensive initial vacuum after a
PITR
On Fri, Dec 14, 2007 at 1:42 AM, in message [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
My feeling is that vacuum's purpose in life is to offload maintenance
cycles from foreground queries, so we should be happy to have it setting
all the hint bits.
Absolutely.
If Kevin
1 - 100 of 974 matches