> Sent: Monday, April 13, 2009 8:41 AM
> To: Rainer Mager
> Cc: Ognjen Blagojevic; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Postgres 8.x on Windows Server in production
>
> On Sun, Apr 12, 2009 at 5:13 PM, Rainer Mager wrote:
> > We use Postgres 8.x
We use Postgres 8.x in production on Windows Server 2003. We have not done a
direct head-to-head comparison against any *nix environment, so I can't
really compare them, but I can still give a few comments.
First of all, it seems that some of the popular file systems in *nix are
more robust at pre
Thanks for all of the suggestions so far. I've been trying to reduce the
number of indices I have, but I'm running into a problem. I need to run
queries on this table with criteria applied to the date and possibly to any
or all of the other key columns. As a reminder, here's my table:
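For a query pattern like this (a date predicate plus any subset of the other keys), one common approach is separate single-column indexes that the planner can combine with a bitmap AND, rather than one multicolumn index per combination. A sketch, with column names assumed since the table definition was cut off here:

```sql
-- Column names are hypothetical; the original table definition was truncated.
-- Separate indexes let the planner bitmap-AND whichever predicates appear,
-- instead of needing a multicolumn index for every combination.
CREATE INDEX ad_log_start_time_idx ON ad_log (start_time);
CREATE INDEX ad_log_channel_idx    ON ad_log (channel);
CREATE INDEX ad_log_player_idx     ON ad_log (player);
```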
So, I defragged my disk and reran my original query and it got a little
better, but still far higher than I'd like. I then rebuilt (dropped and
recreated) the ad_log_date_all index and reran the query and it is quite a
bit better:
# explain analyze select * from ad_log where date(start_time) <
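The drop-and-recreate step described above can be sketched as follows; the exact index definition is assumed (and an expression index on `date(start_time)` requires `start_time` to be `timestamp without time zone`, since `date()` is only immutable for that type):

```sql
-- Rebuild a bloated index by dropping and recreating it
-- (definition assumed from the index name):
DROP INDEX ad_log_date_all;
CREATE INDEX ad_log_date_all ON ad_log (date(start_time));

-- Or let PostgreSQL rebuild it in place:
REINDEX INDEX ad_log_date_all;
```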
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> "Rainer Mager" writes:
> >> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> >> Hmm ... it's pretty unusual to see the index fetch portion of a
> bitmap
> >> scan take t
Thanks for all the replies, I'll try to address the follow up questions:
> From: David Wilson [mailto:david.t.wil...@gmail.com]
>
> The stats look good and it's using a viable index for your query. What
> kind of hardware is this on, and what are the relevant postgresql.conf
> lines? (Or, for tha
I have a somewhat large table (more than 100 million rows) that contains log
data with start_time and end_time columns. When I run queries on this table
I always find them slower than I need, and slower than I believe should be
possible.
For example, I limited the following query to just a s
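The original query was truncated above, but the typical shape of such a time-window query, and a plain index that supports it, might look like this (table and column names assumed from context):

```sql
-- A btree index on the timestamp supports range scans on a time window:
CREATE INDEX ad_log_start_time_idx ON ad_log (start_time);

-- Half-open range avoids boundary duplication and is index-friendly:
SELECT *
FROM ad_log
WHERE start_time >= '2009-04-01'
  AND start_time <  '2009-04-02';
```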
I have an interesting performance problem. As part of the automated test
suite we run in our development environment, we re-initialize our test
database a number of times to ensure it is clean before running a
test. We currently do this by dropping the public schema and then recre
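The drop-and-recreate reset described above, in SQL (the final `GRANT` restores the default privileges that `CREATE SCHEMA` alone does not):

```sql
-- Wipe everything in the public schema and start clean:
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO public;  -- restore default access
```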
I have two identical queries except for the date range. In the first case,
with the wider date range, the correct (I believe) index is used. In the
second case where the date range is smaller a different index is used and a
less efficient plan is chosen. In the second query the problem seems to be
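When the planner picks a worse index only for narrow date ranges, a common cause is a row-count misestimate for that range. One hedged remedy (table and column names assumed) is to raise the statistics target for the column and re-analyze:

```sql
-- More detailed statistics on the date column can improve selectivity
-- estimates for narrow ranges (names assumed from context):
ALTER TABLE ad_log ALTER COLUMN start_time SET STATISTICS 500;
ANALYZE ad_log;
```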
Sent: Tuesday, September 09, 2008 1:16 PM
To: Scott Marlowe
Cc: Rainer Mager; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] best use of another drive
On Mon, 8 Sep 2008 22:11:25 -0600
"Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager <[EMA
I've recently installed another drive in my db server and was wondering what
the best use of it is. Some thoughts I have are:
1. Move some of the databases to the new drive. If this is a good idea, is
there a way to do this without a dump/restore? I'd prefer to move the folder
if possible since t
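Tablespaces are the standard way to put objects on a new drive without a dump/restore, and `ALTER TABLE ... SET TABLESPACE` (available since 8.0) moves the underlying files for you. A sketch, with the path and table name as examples:

```sql
-- Register the new drive as a tablespace (path is an example):
CREATE TABLESPACE fastdisk LOCATION '/mnt/newdrive/pgdata';

-- Move a large table's files onto it; PostgreSQL copies the data itself:
ALTER TABLE ad_log SET TABLESPACE fastdisk;
```

Note that moving a whole existing database this way means moving its tables and indexes individually on 8.x; the folder cannot simply be relocated by hand while the server is running.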
Thanks for the suggestion. This seems to work pretty well on 8.3, but not so
well on 8.2. We were planning on upgrading to 8.3 soon anyway, we just have
to move up our schedule a bit.
I think that this type of algorithm would make sense in core. I suspect that,
being in core, some further optimi
50 characters),
are certainly longer than a simple foreign key reference.
--Rainer
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Scott Carey
Sent: Friday, August 29, 2008 8:02 AM
To: David Rowley
Cc: Rainer Mager; pgsql-performance@postgresql.org
Subject: Re: [PERFORM
I'm looking for some help in speeding up searches. My table is pretty simple
(see below), but somewhat large, and continuously growing. Currently it has
about 50 million rows.
The table is as follows (I know I have excessive indexes; I'm trying to
identify the appropriate ones and drop the extras):
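To decide which of the excess indexes are safe to drop, the statistics collector's per-index scan counts are a good starting point. A sketch, with the table name assumed since the definition was truncated:

```sql
-- Indexes that are never (or rarely) scanned are candidates for dropping.
-- idx_scan counts index scans since statistics were last reset.
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'ad_log'   -- table name assumed
ORDER BY idx_scan;
```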