Richard Neill wrote:
The key issue for short, fast transactions seems to be
how fast an fdatasync() call can run, forcing the commit to disk, and
allowing the transaction to return to userspace.
Attached is a short C program which may be of use.
Right. I call this the "commit rate" of the storage
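Richard's C program itself isn't attached to this preview. As a rough stand-in, the commit rate can also be approximated from plain SQL; a minimal sketch (the table name is made up, and this assumes autocommit with synchronous_commit left on, so every INSERT waits for its own WAL flush):

  CREATE TABLE commit_rate_test (id serial PRIMARY KEY, seen timestamptz DEFAULT now());
  -- in psql, turn on \timing and run the INSERT a few thousand times from a script;
  -- each statement is its own transaction, so each commit must reach disk:
  INSERT INTO commit_rate_test DEFAULT VALUES;
  -- commits per second ~= number of INSERTs / elapsed seconds
  DROP TABLE commit_rate_test;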
Richard Neill wrote:
Likewise, is there any way to check whether, for example, postgres is
running out of work memory?
It doesn't work like that; it's not an allocation. What happens is that
the optimizer estimates how much memory a sort is going to need, and
then uses work_mem to decide wheth
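A hedged illustration of what that means in practice (big_table and some_col are stand-ins, not from the thread): the effect of work_mem shows up in EXPLAIN ANALYZE as the sort method, not as a fixed allocation you could watch run out.

  SET work_mem = '1MB';
  EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
  -- Sort Method:  external merge  Disk: ...kB    (the sort exceeded work_mem and spilled to disk)
  SET work_mem = '256MB';
  EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
  -- Sort Method:  quicksort  Memory: ...kB       (the sort fit in work_mem and stayed in memory)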
Richard Neill wrote:
Am I missing something though, or is this project dormant, without
having released any files?
My bad--gave you the wrong url.
http://git.postgresql.org/gitweb?p=pg_top.git;a=summary has the project
I meant to point you toward.
What I really want to know is, how far th
Justin Pitts wrote:
Set work_mem in postgresql.conf down to what the 200 clients need, which
sounds to me like the default setting.
In the session which needs more work_mem, execute:
SET SESSION work_mem TO '256MB'
Isn't that terribly ugly? It seems to me less hackish to rely on the
many cli
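If the session-wide SET feels ugly, one variant (my sketch, not something proposed in the thread) is to scope the override to a single transaction with SET LOCAL, so nothing has to be reset afterwards:

  BEGIN;
  SET LOCAL work_mem = '256MB';   -- visible only inside this transaction
  -- run the one query that actually needs the big sort memory here
  COMMIT;                         -- work_mem reverts to the postgresql.conf value
  SHOW work_mem;                  -- confirms the override is gone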
>>> max_connections = 500 # (change requires restart)
>>> work_mem = 256MB # min 64kB
>>
>> Not that it has to do with your current problem but this combination could
>> bog your server if enough clients run sorted queries simultaneously.
>> You prob
> -Original message-
> From: Richard Neill
>
> Fernando Hevia wrote:
> >> -Original message-
> >> From: Richard Neill
> >>
> >> max_connections = 500 # (change requires restart)
> >> work_mem = 256MB # min 64kB
Fernando Hevia wrote:
-Original message-
From: Richard Neill
max_connections = 500 # (change requires restart)
work_mem = 256MB # min 64kB
Not that it has to do with your current problem but this combination could
bog your server if
> -Original message-
> From: Richard Neill
>
> max_connections = 500 # (change requires restart)
> work_mem = 256MB # min 64kB
Not that it has to do with your current problem but this combination could
bog your server if enough clients
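To put a number on "bog your server" (my arithmetic, not Fernando's): work_mem is granted per sort or hash operation, so 500 connections each running a single 256 MB sort could ask for up to 500 × 256 MB = 128,000 MB, roughly 125 GB, and a complex query can run several such operations at once.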
Thom Brown wrote:
2009/11/20 Richard Neill <rn...@cam.ac.uk>:
It might also help if you posted your postgresql.conf too.
Below (have removed the really non-interesting bits).
Thanks,
Richard
I can't actually see anything in your config that would cause this
pr
2009/11/20 Richard Neill
>> It might also help if you posted your postgresql.conf too.
>
> Below (have removed the really non-interesting bits).
>
> Thanks,
> Richard
I can't actually see anything in your config that would cause this problem. :/
As for seeing the progress of an upd
Brian Modra wrote:
I had a similar problem: I did a large delete, and then a select which
"covered" the previous rows.
It took ages, because the index still had those deleted rows.
Possibly the same happens with update.
Try this:
vacuum analyse
reindex database
(your database name instead of
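Spelled out as statements (a sketch; the database and index names are placeholders, not from the thread):

  VACUUM ANALYZE;                  -- reclaim dead tuples and refresh the planner's statistics
  REINDEX DATABASE mydb;           -- rebuild every index, dropping entries for deleted rows
  -- or, more surgically, rebuild just the index that covered the deleted rows:
  REINDEX INDEX my_covering_index;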
Sam Jas wrote:
Are there any idle connections?
I didn't see any, I'll look better next time.
Thom Brown wrote:
Okay, have you tried monitoring the connections to your database?
Try: select * from pg_stat_activity;
Tried that - it's very useful as far as it goes. I can see that in most
cases, the DB is running just the one query.
What I really want to know is, how far through that
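A slightly richer query than select * (a sketch using the 8.3/8.4-era column names procpid and current_query; newer releases rename them pid and query) at least shows how long each statement has been running, though Postgres doesn't expose a "percent complete" for a statement:

  SELECT procpid,
         usename,
         now() - query_start AS running_for,
         current_query
  FROM pg_stat_activity
  WHERE current_query <> '<IDLE>'
  ORDER BY query_start;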
2009/11/20 Richard Neill
> Thom Brown wrote:
>> It looks like your statistics are way out of sync with the real data.
>>
>> > Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual
>> > time=248577.879..253168.466 rows=347308 loops=1)
>>
>> This shows that it thinks there
Thom Brown wrote:
It looks like your statistics are way out of sync with the real data.
> Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual
> time=248577.879..253168.466 rows=347308 loops=1)
This shows that it thinks there will be 8,686 rows, but actually
traverses 347
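The usual remedies when the estimate (8,686) and reality (347,308) diverge that badly, sketched against the table from Richard's query (which column actually needs the larger sample is a guess on my part):

  ANALYZE core.demand;                                          -- refresh the statistics
  ALTER TABLE core.demand ALTER COLUMN id SET STATISTICS 1000;  -- bigger sample for the join column
  ANALYZE core.demand;                                          -- re-sample with the new target
  -- then re-run EXPLAIN ANALYZE and compare estimated vs actual row counts again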
Kevin Grittner wrote:
Richard Neill wrote:
SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )
FROM
core.demand,
viwcs.previous_wave
LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )
WHERE core.demand.id = viwcs.wave_end_demand.demand_id;
For comparison, how
Axel Rau wrote:
On 13.11.2009 at 14:57, Laszlo Nagy wrote:
I was thinking about ARECA 1320 with 2GB memory + BBU. Unfortunately,
I cannot find information about using ARECA cards with SSD drives.
They told me: currently not supported, but they have positive customer
reports. No date yet for
I don't think so. I actually dumped the tables involved into stripped down
versions of themselves in a new database for testing, so the data involved
should be completely fresh. I ran a vacuum analyze after the dump of
course.
Just for paranoia's sake though I did do the following:
explain analy
On Wed, Nov 18, 2009 at 8:24 PM, Tom Lane wrote:
> Scott Carey writes:
>> For your database DATA disks, leaving the write cache on is 100% acceptable,
>> even with power loss, and without a RAID controller. And even in high write
>> environments.
>
> Really? How hard have you tested that config
It was Thursday 19 November 2009 11:08:10 pm that the wise Tom Lane thus
wrote:
> writes:
> > To make make the retrieval faster, I'm using a
> > partitioning scheme as follows:
> >
> > stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2
> > (where t2 - t1 = 2 hrs), i.e. 12 tabl
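For concreteness, a hedged sketch of what one of those two-hour children might look like (the parent definition, the ts column, and the epoch bounds are my assumptions, not the poster's schema); with constraint_exclusion enabled the planner can skip children whose CHECK range cannot match the query:

  CREATE TABLE stats_300_1258605000_1258612200 (
      CHECK ( ts >= to_timestamp(1258605000)
          AND ts <  to_timestamp(1258612200) )   -- a two-hour window
  ) INHERITS (stats_300);
  CREATE INDEX stats_300_1258605000_1258612200_ts_idx
      ON stats_300_1258605000_1258612200 (ts);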
On 20 November 2009 at 17:01, Jonathan Foy wrote:
> This seems to result in the same problem; should I attempt to pull for a
> specific id_nbr/id_qfr, postgres uses the index without a problem. If I try
> to join the two tables/views however, it insists on doing a sequential scan
> (actually two i
Richard Neill wrote:
> SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )
> FROM
> core.demand,
> viwcs.previous_wave
> LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )
> WHERE core.demand.id = viwcs.wave_end_demand.demand_id;
For comparison, how does this do?:
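Kevin's actual comparison query is cut off in this preview; one plausible rewrite in the same spirit, and not necessarily what he posted, makes both joins explicit (the WHERE clause already forces the outer join to behave as an inner one):

  SELECT d.qty - wed.qty_remaining
  FROM core.demand d
  JOIN viwcs.wave_end_demand wed ON wed.demand_id = d.id
  JOIN viwcs.previous_wave pw ON pw.wid = wed.wid;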
On Fri, 20 Nov 2009, Lorenzo Allegrucci wrote:
performance is degrading...
In normal conditions the postgres process uses about 3% of cpu time
but when it is in "degraded" condition it can use up to 25% of cpu time.
You don't really give enough information to determine what is going on
here.
This seems to result in the same problem; should I attempt to pull for a
specific id_nbr/id_qfr, postgres uses the index without a problem. If I try
to join the two tables/views however, it insists on doing a sequential scan
(actually two in this case) and will not use the index. Any other
ideas/e
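One common way to tell whether that is a costing problem rather than a missing index (a generic diagnostic, not something suggested in this preview):

  SET enable_seqscan = off;   -- per-session only; makes sequential scans look very expensive
  -- EXPLAIN ANALYZE <the joining query>;   does it now use the index, and is it faster?
  SET enable_seqscan = on;    -- switch it back; this is a diagnostic, not a fix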
Lorenzo Allegrucci writes:
> So, my main question is.. how can just a plain simple restart of postgres
> restore the original performance (3% cpu time)?
Are you killing off any long-running transactions when you restart?
regards, tom lane
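A hedged way to look for what Tom is asking about before the next restart (8.x column names procpid and current_query assumed; xact_start exists from 8.3 on):

  SELECT procpid,
         usename,
         now() - xact_start AS xact_age,
         current_query
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
  ORDER BY xact_start
  LIMIT 10;
  -- sessions sitting "<IDLE> in transaction" with a large xact_age stop VACUUM from
  -- reclaiming dead rows, which would match performance that decays until a restart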
Sorry for top-posting -- challenged mail client.
Thom's suggestion that the estimates are off seems like a useful line of
inquiry, but ANALYZE is what builds statistics. If it is not run often enough
the planner will base its idea of what a good plan is on bad data. So
ANALYZE <table>; is your friend.
2009/11/20 Richard Neill
>> Greg Williamson wrote:
>>> Richard --
>>>
>>> You might post the results of "EXPLAIN ANALYZE <query>;" ... be
>>> sure to run it in a transaction if you want to be able to roll it back. Perhaps
>>> try "EXPLAIN <query>;" first as it is faster, but EXPLAIN ANALYZE shows
>>> what t
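A minimal sketch of the roll-it-back advice (the UPDATE is a made-up stand-in for the slow statement): EXPLAIN ANALYZE really executes the statement, so wrap anything that writes in a transaction you can undo.

  BEGIN;
  EXPLAIN ANALYZE UPDATE core.demand SET qty = qty WHERE id = 1;  -- runs the update for real
  ROLLBACK;  -- the data change is discarded; the plan and timings were already printed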
Richard Neill wrote:
As far as I know, the only tools that exist are
pg_stat_activity, top, and iotop
Have I missed one?
The ui for pgTop might be easier for what you're trying to do:
http://pgfoundry.org/projects/pgtop/
--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Serv
On Thu, 19 Nov 2009, Greg Smith wrote:
This is why turning the cache off can tank performance so badly--you're going
to be writing a whole 128K block no matter what if it's forced to disk without
caching, even if it's just to write an 8K page to it.
Theoretically, this does not need to be the ca
Lorenzo Allegrucci writes:
> A. Kretschmer wrote:
>> In response to Lorenzo Allegrucci :
>>> Hi all,
>>>
>>> I'm experiencing a strange behavior with my postgresql 8.3:
>>> performance is degrading after 3/4 days of running time but if I
>>> just restart it, performance returns to its normal
On 13.11.2009 at 14:57, Laszlo Nagy wrote:
I was thinking about ARECA 1320 with 2GB memory + BBU.
Unfortunately, I cannot find information about using ARECA cards
with SSD drives.
They told me: currently not supported, but they have positive customer
reports. No date yet for implementatio
A. Kretschmer wrote:
In response to Lorenzo Allegrucci :
Hi all,
I'm experiencing a strange behavior with my postgresql 8.3:
performance is degrading after 3/4 days of running time but if I
just restart it, performance returns to its normal value...
In normal conditions the postgres process
How about
CREATE OR REPLACE VIEW value_codes_view AS
select * from (
SELECT value_codes.id_nbr,
value_codes.id_qfr,
(ARRAY[val_1_cd_1, ... , val_2_cd_12])[i] as value_code,
(ARRAY[val_1_amt_1, ... , val_2_amt_12])[i] as value_amount,
FROM value_codes, generate_series(1,24) i)
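The preview cuts the view definition off; a self-contained version of the same array-plus-generate_series unpivot trick, shrunk to two of the 24 columns (column names beyond those shown in the fragment are guessed by pattern):

  CREATE OR REPLACE VIEW value_codes_view AS
  SELECT v.id_nbr,
         v.id_qfr,
         (ARRAY[v.val_1_cd_1,  v.val_1_cd_2 ])[i] AS value_code,
         (ARRAY[v.val_1_amt_1, v.val_1_amt_2])[i] AS value_amount
  FROM value_codes v,
       generate_series(1, 2) AS i;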
In response to Lorenzo Allegrucci :
>
> Hi all,
>
> I'm experiencing a strange behavior with my postgresql 8.3:
> performance is degrading after 3/4 days of running time but if I
> just restart it, performance returns to its normal value...
> In normal conditions the postgres process uses abo
Hi all,
I'm experiencing a strange behavior with my postgresql 8.3:
performance is degrading after 3/4 days of running time but if I
just restart it, performance returns to its normal value...
In normal conditions the postgres process uses about 3% of cpu time
but when it is in "degraded" condi