Sent: March 5, 2013 4:21 PM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Are bitmap index scans slow to start?
On Thu, Feb 28, 2013 at 12:13 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Could you use CLUSTER on the table after it had been closed off? If
appropriate, that should make the queries run much faster, as related
entries will be in the same or nearby blocks on disk.
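As a sketch of that suggestion, using the monthly partition and index names that appear later in this thread (the exact names in the real schema are an assumption):

```sql
-- One-time physical reorder of the closed-off monthly partition by its
-- session index, so rows for the same session land in nearby blocks.
-- CLUSTER takes an ACCESS EXCLUSIVE lock, so it needs a maintenance window.
CLUSTER log_2013_01 USING log_2013_01_session_idx;

-- Refresh planner statistics after the rows have moved.
ANALYZE log_2013_01;
```

CLUSTER orders the table once; it does not maintain that order as new rows arrive, which is why it is suggested only for partitions that are closed off.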
On Wed, Feb 27, 2013 at 1:38 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Is the original query you posted part of the transform process, rather
than being the production query you run after the ETL is over?
Neither, it is part of our auditing and maintenance processes.
To: Jeff Janes; Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: AW: [PERFORM] Are bitmap index scans slow to start?
Rebuilding the index might help, as it would put all the leaf pages holding
values for session_id=27 adjacent to each other, so they would read from disk
faster. But with a name like session_id, I don't know how long such
clustering would last.
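A hedged sketch of that rebuild, assuming the index is on session_id alone and reusing names from elsewhere in this thread:

```sql
-- Plain REINDEX rewrites the index compactly but blocks writes:
REINDEX INDEX log_2013_01_session_idx;

-- On PostgreSQL versions of this era, rebuilding without blocking writes
-- means creating a replacement index and swapping it in:
CREATE INDEX CONCURRENTLY log_2013_01_session_idx_new
    ON log_2013_01 (session_id);
DROP INDEX log_2013_01_session_idx;
ALTER INDEX log_2013_01_session_idx_new
    RENAME TO log_2013_01_session_idx;
```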
If I'm right about the index disk-read
Could you use CLUSTER on the table after it had been closed off? If
appropriate, that should make the queries run much faster, as related entries
will be in the same or nearby blocks on disk.
Technically, yes. That would really help, but the issue is scheduling.
Although the logs are closed
On Tue, Feb 26, 2013 at 4:33 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Is each of these write operations just covering a single row? Does this
description apply to just one of the many (how many?) databases, so that
there are really 14*N concurrent sessions?
pgbouncer is more for making connections line up single-file when the
line is moving at a very fast clip, say 0.01 second per turn. If I were
trying to make tasks that can each last for hours or days line up and take
turns, I don't think pgbouncer would be the way to go.
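For contrast, a minimal illustrative pgbouncer.ini for the fast-turnaround case pgbouncer is built for (database name, addresses, and pool sizes are placeholders, not taken from this thread):

```ini
[databases]
; placeholder target database
etl_db = host=127.0.0.1 port=5432 dbname=etl_db

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; release the server connection at each transaction boundary --
; only useful when each turn is short, not for hour-long jobs
pool_mode = transaction
default_pool_size = 5
max_client_conn = 100
```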
The recommendation
I had thought you were saying that any one ETL procedure into one database
used 14 concurrent threads. But really, each ETL procedure is
single-threaded, and there can be up to 5 (or theoretically up to 14) of
them running at a time into different databases?
Sorry, just caught this.
On Mon, Feb 25, 2013 at 9:04 AM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Hi Jeff, thanks for the insight.
Is each of these write operations just covering a single row? Does this
description apply to just one of the many (how many?) databases, so that
there are really 14*N concurrent sessions?
All writes are single row. All DB's have exactly the same structure, only
the content is different.
And then the next question would be, once they are in the cache, why
don't they stay there? For that you would have to know what other types of
activities are going on that might be driving the data out of the cache.
To give you an idea of the activity
On Friday, February 22, 2013, Carlo Stonebanks wrote:
My understanding of PG’s cluster is that this is a one-time command that
creates a re-ordered table and doesn’t maintain the clustered order until
the command is issued again. During the CLUSTER, the table is read and
write locked. So,
Hi Jeff, thanks for the reply.
What is going on during the interregnum? Whatever it is, it seems to be
driving the log_2013_01_session_idx index out of the cache, but not the
log_2013_01 table. (Or perhaps the table visit is getting the benefit of
effective_io_concurrency?)
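effective_io_concurrency only affects prefetching during the bitmap heap scan's visit to the table, not the index descent, which fits the observation above. A quick way to check and experiment with it (the value 4 is purely illustrative):

```sql
SHOW effective_io_concurrency;     -- 1 by default on most platforms
SET effective_io_concurrency = 4;  -- per-session; mainly helps striped or SSD storage
```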
Rebuilding the index might help, as it would put all the leaf pages holding
values for session_id=27 adjacent to each other, so they would read from
disk faster. But with a name like session_id, I
– an interesting
idea for more specific challenges.
From: Marc Mamin [mailto:m.ma...@intershop.de]
Sent: February 21, 2013 2:41 PM
To: Jeff Janes; Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: AW: [PERFORM] Are bitmap index scans slow to start?
I can't really help, but I can make it more clear why postgres is choosing a
_bitmap_ index scan rather than a regular index scan. With a regular index
scan it pumps the index for the locations of the rows that it points
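That plan shape is visible in EXPLAIN output. A schematic example using the session_id=27 predicate mentioned elsewhere in the thread (the real query from the original post is not reproduced in this excerpt, so the SELECT below is a stand-in):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM log_2013_01 WHERE session_id = 27;

-- Expect a plan of roughly this shape:
--   Bitmap Heap Scan on log_2013_01
--     Recheck Cond: (session_id = 27)
--     ->  Bitmap Index Scan on log_2013_01_session_idx
--           Index Cond: (session_id = 27)
```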
(Sorry moderators for any double posts, I keep making subscription errors.
Hopefully this one gets through)
Hi speed freaks,
Can anyone tell me why the bitmap heap scan takes so long to start for this
query? (SQL and EXPLAIN ANALYZE follow.)
The big culprit in this appears to be:
- Bitmap