Tom Lane wrote:
Gavin Hamill [EMAIL PROTECTED] writes:
If I replace the
(allocation0_.Date between '2006-06-09 00:00:00.00' and
'2006-06-09 00:00:00.00')
with
allocation0_.Date ='2006-04-09 00:00:00.00'
then the query comes back in a few milliseconds (as I'd expect :)
This should be fixed by the changes I made recently in
choose_bitmap_and
--- it wasn't being aggressive about pruning overlapping AND conditions
when a sub-OR was involved. It's possible the new coding is *too*
aggressive, and will reject indexes that it'd be profitable to include;
but at least
Hi, Francisco,
Francisco Reyes wrote:
I only wonder what is safer.. using a second or two in commit_delay or
using fsync = off.. Anyone cares to comment?
It might be that you misunderstood commit_delay. It will not only delay
the disk write, but also block your connection until the write
Hi, Tom,
Tom Lane wrote:
Well, the other thing that's going on here is that we know we are
overestimating the cost of nestloop-with-inner-indexscan plans.
The current estimation for that is basically outer scan cost plus N
times inner scan cost where N is the estimated number of outer
For now, I only could get good performance with bacula and
postgresql
when disabling fsync...
Isn't that less safe?
Most definitely.
FWIW, I'm getting pretty good speeds with Bacula and PostgreSQL on a
reasonably small db (file table about 40 million rows, filename about
5.2 million
For the purpose of the application I need to establish some form of
serialization, therefore I use FOR UPDATE. The query, inside the
function, is like this:
pulitzer2=# explain analyze select id FROM messages JOIN
ticketing_codes_played ON id = message_id WHERE service_id = 1102 AND
Mario Splivalo [EMAIL PROTECTED] writes:
For the purpose of the application I need to establish some form of
serialization, therefore I use FOR UPDATE. The query, inside the
function, is like this:
pulitzer2=# explain analyze select id FROM messages JOIN
ticketing_codes_played ON id =
Mikael Carneholm [EMAIL PROTECTED] writes:
Ok, cool. I don't have time to test this right now as the project has to
move on (and I guess testing the fix would require a dump+build CVS
version+restore), but as a temporary workaround I simply dropped the
xda_dat index (all queries on that table
Hi!
I am having trouble with like statements on one of my tables.
I already tried a vacuum and analyze but with no success.
The database is PostgreSQL Database Server 8.1.3 on i686-pc-mingw32
I get the following explain and I am troubled by the very high
startup_cost ... does anyone have
-Original Message-
From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
[EMAIL PROTECTED] On Behalf Of Tarabas (Manuel Rorarius)
Subject: [PERFORM] Problem with LIKE-Performance
Hi!
I am having trouble with like statements on one of my tables.
It looks like you are getting a
Hi Dave,
DD It looks like you are getting a sequential scan instead of an index
DD scan. What is your locale setting? As far as I know Postgres doesn't
DD support using indexes with LIKE unless you are using the C locale.
Actually no, I am using de_DE as locale because I need the german
Tarabas (Manuel Rorarius) [EMAIL PROTECTED] writes:
I get the following explain and I am troubled by the very high
startup_cost ... does anyone have any idea why that value is so
high?
{SEQSCAN
:startup_cost 1.00
You have enable_seqscan = off, no?
Please refrain from posting
Mario Splivalo [EMAIL PROTECTED] writes:
If there is concurrent locking,
you're also running a big risk of deadlock because two processes might
try to lock the same rows in different orders.
I think there is no risk of a deadlock, since that particular function
is called from the middleware
Hi Tom,
TL Tarabas (Manuel Rorarius) [EMAIL PROTECTED] writes:
I get the following explain and I am troubled by the very high
startup_cost ... does anyone have any idea why that value is so
high?
{SEQSCAN
:startup_cost 1.00
TL You have enable_seqscan = off, no?
You were right,
Hi,
I remember something that you need a special index with locale 'C'.
You need a different operator class for this index, something like:
CREATE INDEX idx_image_title
ON image
USING btree
(title varchar_pattern_ops);
You can find the details here:
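A quick sketch of how the pattern-ops index above gets used (assuming the image table from the example): a left-anchored LIKE can become an index scan even in a non-C locale, while a leading wildcard still cannot use any btree index.

```sql
-- Assumes the idx_image_title index from above exists.
-- Left-anchored pattern: the planner can use the varchar_pattern_ops index.
EXPLAIN SELECT * FROM image WHERE title LIKE 'sunset%';

-- Leading wildcard: no btree index can help; expect a sequential scan.
EXPLAIN SELECT * FROM image WHERE title LIKE '%sunset%';
```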
Dave Dutcher wrote:
It looks like you are getting a sequential scan instead of an index
scan. What is your locale setting? As far as I know Postgres doesn't
support using indexes with LIKE unless you are using the C locale.
It does if you create your index this way:
CREATE INDEX
Hi Hakan,
HK I remember something that you need a special index with locale 'C'.
HK You need a different operator class for this index, something like:
HK CREATE INDEX idx_image_title
HK ON image
HK USING btree
HK (title varchar_pattern_ops);
I also forgot that, thanks a lot for the hint. that
On 18.04.2006, at 17:16, Tarabas (Manuel Rorarius) wrote:
Is there any way to speed the like's up with a different locale than C
or to get an order by in a different Locale although using the
default C locale?
Sure. Just create the index with
create index tabname_column_index on tabname
Hi Tom,
TL As already noted, it might be worth your while to add an index using the
TL pattern-ops opclass to help with queries like this.
I have done that now and it works very well, as expected.
The problem with the high startup_costs disappeared somehow after the
change of the enable_seqscan
Tarabas (Manuel Rorarius) [EMAIL PROTECTED] writes:
After removing the enable_seqscan = off and making sure it was gone,
it is a lot faster again.
Now it takes about 469.841 ms for the select.
Um, no, enable_seqscan would certainly not have had any effect on the
*actual* runtime of this query.
Sriram Dandapani [EMAIL PROTECTED] writes:
Got an explain analyze output..Here it is
Seq Scan on c_chkpfw_hr_tr a (cost=0.00..225975659.89 rows=11000
width=136) (actual time=2.345..648070.474 rows=22001 loops=1)
Filter: (subplan)
SubPlan
->  Bitmap Heap Scan on chkpfw_tr_hr_dimension
Thx Tom
I guess I have to abandon the bulk update. The columns in the where
clause comprise 80% of the table columns..So indexing all may not help.
The target table will have on average 60-180 million rows.
I will attempt the in instead of exist and let you know the result
-Original
Tarabas (Manuel Rorarius) wrote:
Hi Tom,
TL As already noted, it might be worth your while to add an index using the
TL pattern-ops opclass to help with queries like this.
I have done that now and it works very well, as expected.
The problem with the high startup_costs disappeared somehow
Hi Richard,
RH As you can see, the plan is still scanning all the rows. In any case,
RH you've changed the query - this has % at the beginning and end, which no
RH index will help you with.
I realize that; the index definitely helped a lot with the query where
the % is just at the end. The time
Suppose you have a table codes :
(
game_id INT,
code TEXT,
used BOOL NOT NULL DEFAULT 'f',
prize ...
...
PRIMARY KEY (game_id, code)
)
Just UPDATE codes SET used='t' WHERE used='f' AND game_id=... AND
Thanks for all responses! I agree with most of you, and say that RI is
best maintained by the database! Performance must be improved in other ways
(indexes, hardware, etc)!
- Original Message -
From: Jim C. Nasby [EMAIL PROTECTED]
To: Craig A. James [EMAIL PROTECTED]
Cc: PFC [EMAIL
Gavin Hamill [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm thinking the planner is misestimating something, but it's hard
to tell what without breaking it down.
(allocation0_.Date between '2006-06-10 00:00:00.00' and
'2006-06-10 00:00:00.00');
On Tue, 18 Apr 2006 13:31:48 -0400
Tom Lane [EMAIL PROTECTED] wrote:
There should be a fix for this by the time PG 8.2 comes out, but in
the meantime you might find that it helps to write the range check in
a way that doesn't have identical bounds, eg
date >= '2006-06-10'::date AND date
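The truncated suggestion above is presumably the usual half-open-range idiom. A hedged sketch, using the column name from the earlier messages; the upper bound here is my assumption, not from the original message:

```sql
-- Range check without identical bounds: half-open interval covering one day.
-- Upper bound ('2006-06-11') is assumed for illustration.
SELECT ...
  FROM allocation allocation0_
 WHERE allocation0_."Date" >= '2006-06-10'::date
   AND allocation0_."Date" <  '2006-06-11'::date;
```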
Gavin Hamill [EMAIL PROTECTED] writes:
On Tue, 18 Apr 2006 13:31:48 -0400
Tom Lane [EMAIL PROTECTED] wrote:
There should be a fix for this by the time PG 8.2 comes out, but in
the meantime you might find that it helps to write the range check in
a way that doesn't have identical bounds, eg
Interesting. What does EXPLAIN ANALYZE show if you SET
enable_seqscan=off; ?
You should also consider upgrading to 8.1...
On Thu, Apr 13, 2006 at 12:25:02PM +0200, Cris Carampa wrote:
Hello, postgresql 7.4.8 on SuSE Linux here.
I have a table called DMO with a column called ORA_RIF defined
On Fri, Apr 14, 2006 at 11:12:55AM -0400, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
In my case it would be helpful to break the heap access numbers out
between seqscans and index scans, since each of those represents very
different access patterns. Would adding that be a mess?
On Fri, Apr 14, 2006 at 08:05:39AM +0200, Terje Elde wrote:
Jim Nasby wrote:
While working on determining a good stripe size for a database, I
realized it would be handy to know what the average request size is.
Getting this info is a simple matter of joining pg_stat_all_tables and
On Tue, 18 Apr 2006 15:51:44 -0400
Tom Lane [EMAIL PROTECTED] wrote:
Fair enough, no reason to replace one workaround with another. But
would you try it on your test case, just to verify the diagnosis?
Yup I can confirm it from testing earlier today - as soon as
the two dates are non-equal,
On Fri, Apr 14, 2006 at 03:15:33PM -0500, Scott Marlowe wrote:
On Fri, 2006-04-14 at 15:09, Francisco Reyes wrote:
Michael Stone writes:
I still don't follow that. Why would the RAID level matter? IOW, are you
actually wanting 2 spares, or are you just stick with that because you
On Tue, Apr 18, 2006 at 01:56:44PM +0200, Magnus Hagander wrote:
Bacula already serializes access to the database (they have to support
mysql/myisam), so this shouldn't help. Actually, it might well hurt by
introducing extra delays.
You have any contact with the developers? Maybe they're a
On Fri, Apr 14, 2006 at 03:13:43PM +0200, Julien Drouard wrote:
Hi everyone,
I've seen my pg_toast tables are becoming bigger and bigger. After googling I
would like to modify my max_fsm_pages parameter to prevent that kind of
problem. So I'm wondering if changing this parameter is enough
On Tue, Apr 18, 2006 at 12:51:59PM +0200, Markus Schaber wrote:
In my mind this is tied into another issue, which is that the planner
always costs on the basis of each query starting from zero. In a real
environment it's much cheaper to use heavily-used indexes than this cost
model
Jim C. Nasby [EMAIL PROTECTED] writes:
Actually, if you run with stats_block_level turned on you have a
first-order approximation of what is and isn't cached.
Only if those stats decayed (pretty fast) with time; which they don't.
regards, tom lane
Markus Schaber [EMAIL PROTECTED] writes:
Hmm, how does effective_cache_size correspond with it? Shouldn't a high
effective_cache_size have a similar effect?
It seems reasonable to suppose that effective_cache_size ought to be
used as a number indicating how much stuff would hang around from
You might try rewriting the coalesces into a row comparison...
WHERE row($4, $5, ...) IS NOT DISTINCT FROM row(interface_id, source_id, ...)
See
http://www.postgresql.org/docs/8.1/interactive/functions-comparisons.html#AEN13408
Note that the docs only show IS DISTINCT FROM, so you might have to
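Since the cited docs page only shows IS DISTINCT FROM, the rewrite would presumably be spelled with a negation. A sketch using the parameter and column names from the quoted fragment; the table name is made up:

```sql
-- NOT (... IS DISTINCT FROM ...) treats NULLs as equal, unlike plain "=",
-- which is the point of the coalesce rewrite being discussed.
SELECT *
  FROM measurements   -- hypothetical table name
 WHERE NOT (row($4, $5) IS DISTINCT FROM row(interface_id, source_id));
```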
Hi
Apologies if this has already been raised...
PostgreSQL 8.1.3 and prior versions. Vacuum done.
Assuming a single table with columns named c1 to cn and a requirement to
select from a particular position in multiple column order.
The column values in my simple example below denoted by 'cnv'
Jim C. Nasby [EMAIL PROTECTED] writes:
You might try rewriting the coalesces into a row comparison...
WHERE row($4, $5, ...) IS NOT DISTINCT FROM row(interface_id, source_id, ...)
That would be notationally nicer, but no help performance-wise; I'm
fairly sure that IS DISTINCT doesn't get
Theo Kramer [EMAIL PROTECTED] writes:
select * from mytable where
(c1 = 'c1v' and c2 = 'c2v' and c3 = 'c3v') or
(c1 = 'c1v' and c2 > 'c2v') or
(c1 > 'c1v')
order by c1, c2, c3;
Yeah ... what you really want is the SQL-spec row comparison operator
select ... where (c1,c2,c3) >=
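Spelled out, the row-comparison form of the position query above might look like this (the operator direction is assumed from the "select from a particular position" description):

```sql
-- Row-wise comparison: equivalent to the OR chain above, and able to use
-- a single multicolumn index on (c1, c2, c3).
SELECT * FROM mytable
 WHERE (c1, c2, c3) >= ('c1v', 'c2v', 'c3v')
 ORDER BY c1, c2, c3;
```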
Assuming stats are accurate, you're reading through 5.5M index rows in
order to run that limit query. You didn't say what the index was
actually on, but you might want to try giving each column its own
index. That might make a bitmap scan feasible.
I know this doesn't help right now, but 8.2
On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
Actually, if you run with stats_block_level turned on you have a
first-order approximation of what is and isn't cached.
Only if those stats decayed (pretty fast) with time; which they don't.
On Tue, Apr 18, 2006 at 06:26:48PM -0400, Tom Lane wrote:
Markus Schaber [EMAIL PROTECTED] writes:
Hmm, how does effective_cache_size correspond with it? Shouldn't a high
effective_cache_size have a similar effect?
It seems reasonable to suppose that effective_cache_size ought to be
used
Markus Schaber [EMAIL PROTECTED] writes:
An easy first approach would be to add a user tunable cache probability
value to each index (and possibly table) between 0 and 1. Then simply
multiply random_page_cost with (1-that value) for each scan.
That's not the way you'd need to use it. But on
Hi all,
I've been struggling with some performance issues with certain
SQL queries. I was prepping a long-ish overview of my problem
to submit, but I think I'll start out with a simple case of the
problem first, hopefully answers I receive will help me solve
my initial issue.
Consider the
patrick keshishian [EMAIL PROTECTED] writes:
I've been struggling with some performance issues with certain
SQL queries. I was prepping a long-ish overview of my problem
to submit, but I think I'll start out with a simple case of the
problem first, hopefully answers I receive will help me
Suppose you have a table codes :
(
game_id INT,
code TEXT,
used BOOL NOT NULL DEFAULT 'f',
prize ...
...
PRIMARY KEY (game_id, code)
)
Just UPDATE codes SET used='t' WHERE used='f' AND game_id=... AND
code=...
Then check the rowcount :
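The check-the-rowcount idiom above can be sketched in PL/pgSQL: if the UPDATE touched one row, this transaction claimed the code; zero rows means it was already used (or never existed). Table and column names are from the example above; the function name is made up.

```sql
CREATE OR REPLACE FUNCTION claim_code(p_game_id INT, p_code TEXT)
RETURNS BOOLEAN AS $$
DECLARE
    n INT;
BEGIN
    -- The WHERE used='f' condition makes concurrent claimers serialize on
    -- the row lock; only one of them sees ROW_COUNT = 1.
    UPDATE codes SET used = 't'
     WHERE used = 'f' AND game_id = p_game_id AND code = p_code;
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n = 1;
END;
$$ LANGUAGE plpgsql;
```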
Jim C. Nasby wrote:
That said, it's the transactions against disk that typically matter. On
FreeBSD, you can get an impression of this using 'systat -vmstat', and
watch the KB/t column for your drives.
On a related note, you know of any way to determine the breakdown
between read
Jim C. Nasby wrote:
On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
Actually, if you run with stats_block_level turned on you have a
first-order approximation of what is and isn't cached.
Only if those stats decayed (pretty fast) with time;
On Wed, Apr 19, 2006 at 04:35:11AM +0200, Terje Elde wrote:
Jim C. Nasby wrote:
That said, it's the transactions against disk that typically matter. On
FreeBSD, you can get an impression of this using 'systat -vmstat', and
watch the KB/t column for your drives.
On a related note,
On Wed, Apr 19, 2006 at 04:47:40PM +1200, Mark Kirkwood wrote:
Jim C. Nasby wrote:
On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
Actually, if you run with stats_block_level turned on you have a
first-order approximation of what is and isn't
Mark Kirkwood [EMAIL PROTECTED] writes:
Jim C. Nasby wrote:
Good point. :/ I'm guessing there's no easy way to see how many blocks
for a given relation are in shared memory, either...
contrib/pg_buffercache will tell you this -
I think the key word in Jim's comment was easy, ie, cheap.
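For reference, a sketch of the kind of pg_buffercache query Mark is alluding to (column names per the contrib module of that era; the relation name is just an example):

```sql
-- How many shared buffers currently hold pages of a given relation:
SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 WHERE c.relname = 'file'
 GROUP BY c.relname;
```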
Jim C. Nasby wrote:
Yeah... not gonna happen...
It's completely mind-boggling that FBSD doesn't track writes and reads
separately.
'iostat' does not tell you this, but 'gstat' does - it's the geom
system monitor (a bit annoying that the standard tool is lacking in this
regard...).
Cheers
Tom Lane wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
Jim C. Nasby wrote:
Good point. :/ I'm guessing there's no easy way to see how many blocks
for a given relation are in shared memory, either...
contrib/pg_buffercache will tell you this -
I think the key word in Jim's comment was