On Wed, Nov 12, 2008 at 9:27 AM, - - <[EMAIL PROTECTED]> wrote:
> I've been searching for performance metrics and tweaks for a few weeks now.
> I'm trying to determine if the length of time to process my queries is
> accurate or not and I'm having a difficult time determining that. I know
> postgres
Well, you're obviously right - I didn't know this. I guess I've found
that the index is not used for null values, and deduced somehow that
NULL values are not stored in the index.
Thanks, it's nice to find out a 'bug' before it's too late :-)
regards
Tomas
Are you sure NULL values are not stored in the index?
The explain plan tree only shows the time to fetch/compute the new rows,
not to actually perform the update, update indexes, or fire triggers.
If there is a big discrepancy then the extra time must be going into
one of those steps.
8.1 does show trigger execution time separately, so the most obvious of
those candidates can be ruled in or out directly.
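For what it's worth, trigger time is visible directly (a sketch using this
thread's table; on 8.1+ EXPLAIN ANALYZE prints trigger timings on separate
lines after the plan tree, while index-update time remains invisible):

EXPLAIN ANALYZE UPDATE songs SET views = 0;
-- If triggers account for the gap, lines like
--   Trigger for constraint <name>: time=... calls=...
-- will appear below the plan.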
This is the critical point. You have this line:
There were 132969 unused item pointers.
Which says there's 132k or so dead rows in your table. Which means
vacuum / autovacuum isn't keeping up. Did you try to stop the update
several times? Each time it starts and then gets killed, it creates dead rows.
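A quick way to watch this (a sketch; the n_dead_tup column needs 8.3 or
later):

-- Compare live vs. dead tuples to see whether (auto)vacuum keeps up.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'songs';

-- If dead rows dominate, a plain VACUUM makes that space reusable again.
VACUUM VERBOSE songs;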
On Wed, Nov 12, 2008 at 10:02 AM, Franck Routier
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have to manage a database that is getting way too big for us.
> Currently db size is 304 GB.
>
> One table is accounting for a third of this space.
> The table itself has 68,800,000 tuples, taking 28GB.
>
> There are 39 indices on the table, and many of them use multiple columns.
hi,
select count(*) from songs;
count
-------
54909
(1 row)
Time: 58.182 ms
update songs set views = 0;
UPDATE 54909
Time: 101907.837 ms
The time is actually less than 10 minutes, but it is still very long :(
VACUUM said:
VACUUM VERBOSE songs;
INFO: vacuuming "public.songs"
INFO: index "pk_so
Hi,
I have to manage a database that is getting way too big for us.
Currently db size is 304 GB.
One table is accounting for a third of this space.
The table itself has 68,800,000 tuples, taking 28GB.
There are 39 indices on the table, and many of them use multiple
columns. A lot of these indices
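With 39 indices, it is worth finding out which ones actually account for
the space; a sketch (the table name is a placeholder):

-- List every index on the table with its on-disk size, largest first.
SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS index_size
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE i.indrelid = 'my_big_table'::regclass
ORDER BY pg_relation_size(c.oid) DESC;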
- - <[EMAIL PROTECTED]> writes:
> One of my tables has 660,000 records and doing a SELECT * from that table
> (without any joins or sorts) takes 72 seconds. Ordering the table based on 3
> columns almost doubles that time to an average of 123 seconds. To me, those
> numbers are crazy slow and I
On Mon, Nov 10, 2008 at 9:30 AM, <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have a table with about 60,000 rows and
> when I run a query such as:
> UPDATE table SET column = 0;
> after 10 minutes I had to stop the query, but it was still running :(
What does
vacuum verbose table;
say? I'm wondering if it's gotten overly bloated.
Richard Huxton <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] wrote:
>> I tried explain on the query with this result,
>> for 10,000 rows: update songs set views = 0 where sid > 2 and sid < 3
>>
>> Bitmap Heap Scan on songs (cost=151.59..6814.29 rows=8931 width=526)
>> (actual time=4.848..167.855 rows=8945 loops=1)
Depesz,
do you always have this: "(cr='00' or db='00')"? or do the values (00)
change?
if they don't change, or *most* of the queries have "(cr='00' or
db='00')", then the biggest improvement you will get is from creating
this index:
create index test on bilkaib (kuupaev) where cr='00' or db='00';
Vladimir,
I am afraid PostgreSQL is not smart enough to rewrite a query with "or"
into two separate index scans. There is no way to improve the query
significantly without rewriting it.
Note: for this case indices on (datecol), (cr) and (db) are not very
helpful.
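The rewrite in question, quoted again further down the thread, splits the
OR into two aggregates so that each branch can use its own index (shown
here with the thread's actual column name):

select greatest(
  (select max(kuupaev) from bilkaib
    where kuupaev <= date '2008-11-01' and cr = '00'),
  (select max(kuupaev) from bilkaib
    where kuupaev <= date '2008-11-01' and db = '00'));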
Thank you very much.
I added
On Wed, Nov 12, 2008 at 07:02:10PM +0200, Andrus wrote:
> explain analyze select max(kuupaev) from bilkaib where
> kuupaev<=date'2008-11-01' and (cr='00' or db='00')
>
> This query finds the initial balance date before a given date.
If you are not interested in other balances except initial ones (the ones
that have '00') the best way is to create partial indices that I have
suggested.
That will keep the size of the indices small, while providing good
performance (constant lookup time).
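As DDL, the suggested partial indices would look something like this
(index names are illustrative):

-- Each index contains only the '00' (initial balance) rows, so it stays
-- tiny and max(kuupaev) becomes a single descending index probe.
create index bilkaib_cr00_idx on bilkaib (kuupaev) where cr = '00';
create index bilkaib_db00_idx on bilkaib (kuupaev) where db = '00';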
Matthew,
Thank you.
The bilkaib table contains GL transactions for every day.
00 records are initial balance records and they appear only at the start
of a year or the start of a month.
They may or may not be present for a given month if the initial balance
has not been calculated yet.
If 00 records are present, u
> >
> > Recheck Cond: ((sid > 2) AND (sid < 3))
> >
> > -> Bitmap Index Scan on pk_songs2 (cost=0.00..151.59 rows=8931 width=0)
> > (actual time=4.071..4.071 rows=9579 loops=1)
> >
> > Index Cond: ((sid > 2) AND (sid < 3))
> >
> > Is there a way to run this query on single
[EMAIL PROTECTED] wrote:
> Hi,
>
> I've changed settings,
> but with no effect on speed.
>
> I tried explain on the query with this result,
> for 10,000 rows: update songs set views = 0 where sid > 2 and sid < 3
>
> Bitmap Heap Scan on songs (cost=151.59..6814.29 rows=8931 width=526) (actual
> time=4.848..167.855 rows=8945 loops=1)
On Wed, 12 Nov 2008, Vladimir Sitnikov wrote:
And rewrite query as follows:
select greatest(
(select max(datecol) from bilkaib where datecol<=date'2008-11-01' and cr='00'),
(select max(datecol) from bilkaib where datecol<=date'2008-11-01' and db='00'))
Oops, yes, I missed the "OR" in the query.
On Wed, Nov 12, 2008 at 9:02 AM, Andrus <[EMAIL PROTECTED]> wrote:
> There are columns
> kuupaev date, cr char(10), db char(10)
> and regular indexes for all those fields.
> The bilkaib table contains a large number of rows.
>
> The following query takes too much time.
> How can I make it faster?
> I think PostgreSQL should use multiple indexes as bitmaps to speed it up.
Firstly, please upgrade to Postgres 8.3 if possible.
On Wed, 12 Nov 2008, Andrus wrote:
There are columns
kuupaev date, cr char(10), db char(10)
and regular indexes for all those fields.
Create a single index on (cr, db, datecol).
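In DDL form (index name assumed; kuupaev is the date column in this table):

create index bilkaib_cr_db_kuupaev_idx on bilkaib (cr, db, kuupaev);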
Matthew
--
Those who do not understand Unix are condemned to reinvent it, poorly.
Hi,
I've changed settings,
but with no effect on speed.
I tried explain on the query with this result,
for 10,000 rows: update songs set views = 0 where sid > 2 and sid < 3
Bitmap Heap Scan on songs (cost=151.59..6814.29 rows=8931 width=526) (actual
time=4.848..167.855 rows=8945 loops=1)
Recheck Cond: ((sid > 2) AND (sid < 3))
Increase shared_buffers to 1024MB, set effective_cache_size to 6000MB,
and test again.
To speed up sort operations, increase work_mem until you notice an
improvement.
Experiment with different values for those settings.
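work_mem can be raised per session, so this experiment is cheap (a sketch;
the unit syntax needs 8.2+, and the table and columns are placeholders for
the slow sort):

-- Try progressively larger values and re-run the sort-heavy query;
-- once the sort fits in memory, times should drop sharply.
SET work_mem = '128MB';
EXPLAIN ANALYZE SELECT * FROM mytable ORDER BY col1, col2, col3;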
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On behalf of - -
There are columns
kuupaev date, cr char(10), db char(10)
and regular indexes for all those fields.
The bilkaib table contains a large number of rows.
The following query takes too much time.
How can I make it faster?
I think PostgreSQL should use multiple indexes as bitmaps to speed it up.
I can re-write
> max_connections = 100
> shared_buffers = 16MB
> work_mem = 64MB
> everything else is set to the default
OK, but what about effective_cache_size for example?
Anyway, we need more information about the table itself - the number of
rows is nice, but it does not say how large the table is. The rows
There are a few things you didn't mention...
First off, what is the context this database is being used in? Is it the
backend for a web server? Data warehouse? Etc?
Second, you didn't mention the use of indexes. Do you have any indexes on
the table in question, and if so, does EXPLAIN ANALYZE show them being used?
Hi,
so the table occupies about 50 MB, i.e. each row has about 1 kB, right?
Updating 1000 rows should mean about 1MB of data to be updated.
There might be a problem with the execution plan of the updates - I guess
the 100-row update uses an index scan and the 1000-row update might use a
seq scan.
Anyway
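That guess is cheap to verify (statement shapes taken from this thread;
plain EXPLAIN shows the chosen plan without executing the update):

EXPLAIN UPDATE songs SET views = 0 WHERE sid > 2 AND sid < 3;  -- small range
EXPLAIN UPDATE songs SET views = 0;                            -- whole table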
I've been searching for performance metrics and tweaks for a few weeks now. I'm
trying to determine if the length of time to process my queries is accurate or
not and I'm having a difficult time determining that. I know postgres
performance is very dependent on hardware and settings and I understand
Hi,
thank you for your reply.
Here is some additional information:
The problem occurs on all tables, with both small and large rows.
Autovacuum is running.

 relpages | reltuples
----------+-----------
     6213 |     54743

The tables are almost write-only.
Munin graphs show that the problem is an I/O bottleneck.
I found
Hello, Joshua,
I ran different test cases and here are the results (numbers in seconds);
the index is used in the sub-queries case and not in the outer join case:

Rows (main table)   Outer join   Sub-queries   Setting
1396163 rows        39.2         19.6          work_mem=256Mb
3347443 rows        72.2
On Wed, Nov 12, 2008 at 02:22:47PM +0100, Julien Theulier wrote:
> QUESTION: Why does the planner choose a seq scan in the first case & index
> scans in the second case? More generally, I have observed that the planner
> has difficulty selecting index scans & does seq scans in almost all cases,
>
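One knob worth testing here (a sketch; 4.0 is the default, and lower values
make the planner cost index scans more favourably on well-cached data):

SET random_page_cost = 2.0;
-- Then re-run both versions and compare the chosen plans:
-- EXPLAIN ANALYZE <outer join version>;
-- EXPLAIN ANALYZE <sub-query version>;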
Hello,
I am doing some performance testing on Postgres & I discovered the
following behavior when using 2 different ways of writing selects (but
doing the same aggregations at the end):
1. test case 1, using outer join:
create table test2 as
select
soj_session_log_id, pv_timestamp, vi_pv_id,a.i
On Tue, 11 Nov 2008, Tom Lane wrote:
Index is not used for "is null".
How to fix?
Update to something newer than 8.1 (specifically, you'll need 8.3).
Oooh, that's useful to know. We can get rid of all our extra NULL
indexes. Thanks.
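For the archives, the two approaches side by side (table and column names
are hypothetical):

-- Pre-8.3 workaround: a partial index holding only the NULL rows, since
-- a plain btree index could not be used to answer "col IS NULL".
create index t_nulls_idx on t (id) where col is null;

-- On 8.3 and later, an ordinary index on the column is enough:
create index t_col_idx on t (col);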
Matthew