Peter Childs
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
insert and not found loads of
space in the fsm.
I'm using 8.3.1 (I thought I'd upgraded to 8.3.3, but it does not look
like the upgrade worked). I'm more than happy to upgrade; I just have to
find the downtime (even a few seconds can be difficult).
Any help would be appreciated.
Regards
Peter Childs
2008/10/3 Peter Eisentraut [EMAIL PROTECTED]:
Peter Childs wrote:
I have a problem where by an insert on a large table will sometimes
take longer than usual.
I think the problem might have something to do with checkpoints,
Then show us your checkpointing-related parameters. Or try to set
2008/4/28 Gauri Kanekar [EMAIL PROTECTED]:
All,
We have a table, table1, which gets inserts and updates daily in high
numbers, because of which its size is increasing and we have to vacuum it every
alternate day. Vacuuming table1 takes almost 30 minutes, and during that time the
site is down.
We need
On 03/01/2008, Tom Lane [EMAIL PROTECTED] wrote:
Peter Childs [EMAIL PROTECTED] writes:
Using Postgresql 8.1.10 every so often I get a transaction that takes a
while to commit.
I log everything that takes over 500 ms, and quite regularly it says things like
707.036 ms statement
Using Postgresql 8.1.10 every so often I get a transaction that takes a
while to commit.
I log everything that takes over 500 ms, and quite regularly it says things like
707.036 ms statement: COMMIT
Is there any way to speed this up?
Peter Childs
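Intermittent slow COMMITs like the one quoted are often checkpoint stalls. As a sketch only (parameter names from the 8.x postgresql.conf; the values are illustrative assumptions, not tuned recommendations), these are the settings usually examined first:

```
# postgresql.conf -- checkpoint/WAL settings often behind occasional slow COMMITs
checkpoint_segments = 16      # default is 3; too low forces very frequent checkpoints
checkpoint_timeout = 300      # seconds between timed checkpoints
wal_buffers = 64              # shared WAL buffer pages flushed at commit
#checkpoint_completion_target = 0.7   # 8.3+ only: spreads checkpoint I/O over time
```

Turning on log_checkpoints (8.3+) shows whether the slow commits line up with checkpoint activity.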
On 25/11/2007, Pablo Alcaraz [EMAIL PROTECTED] wrote:
Tom Lane wrote:
Peter Childs [EMAIL PROTECTED] writes:
On 25/11/2007, Erik Jones [EMAIL PROTECTED] wrote:
Does pg_dump create this kind of consistent backup? Or do I
need to do the backups using another program?
Yes
On 25/11/2007, Erik Jones [EMAIL PROTECTED] wrote:
On Nov 25, 2007, at 10:46 AM, Pablo Alcaraz wrote:
Hi all,
I read that pg_dump can run while the database is being used and makes
consistent backups.
I have a huge and *heavily* selected, inserted, and updated database.
Currently I
config variables that would be great. I'm
just trying to work out what figures are worth trying, to see if I can reduce
them.
From time to time I get commits that take 6 or 7 seconds but not all the
time.
I'm currently working with the defaults.
Peter Childs
On 14/09/2007, Peter Childs [EMAIL PROTECTED] wrote:
On 13/09/2007, Greg Smith [EMAIL PROTECTED] wrote:
Every time the all scan writes a buffer that is frequently used, that
write has a good chance that it was wasted because the block will be
modified again before checkpoint time
On 05/09/07, Gregory Stark [EMAIL PROTECTED] wrote:
Gregory Stark [EMAIL PROTECTED] writes:
JS Ubei [EMAIL PROTECTED] writes:
I need to improve a query like:
SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;
...
I don't think you'll find anything much faster for
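A rewrite often suggested for this shape of query replaces the single GROUP BY scan with two index probes per id (one for the min, one for the max). A minimal sketch, using sqlite3 from the Python standard library as a stand-in for PostgreSQL; the table and column names (`my_table`, `id`, `the_date`) come from the quoted query, and the sample rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER, the_date TEXT)")
conn.execute("CREATE INDEX my_table_idx ON my_table (id, the_date)")
conn.executemany(
    "INSERT INTO my_table VALUES (?, ?)",
    [(1, "2007-01-01"), (1, "2007-06-15"), (2, "2007-03-03"), (2, "2007-02-02")],
)

# Straightforward form: a single scan with one aggregate pass per group.
plain = conn.execute(
    "SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id ORDER BY id"
).fetchall()

# Rewrite: list the distinct ids, then answer min and max for each id with
# correlated subqueries, i.e. two index probes per id instead of a full scan.
rewritten = conn.execute("""
    SELECT t.id,
           (SELECT min(the_date) FROM my_table m WHERE m.id = t.id),
           (SELECT max(the_date) FROM my_table m WHERE m.id = t.id)
    FROM (SELECT DISTINCT id FROM my_table) t
    ORDER BY t.id
""").fetchall()

assert plain == rewritten
```

Whether the rewrite wins depends on the number of distinct ids versus total rows: with few ids and many rows per id, the index probes avoid reading most of the table.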
On 30/05/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Wed, 30 May 2007, Jonah H. Harris wrote:
On 5/29/07, Luke Lonergan [EMAIL PROTECTED] wrote:
AFAIK you can't RAID1 more than two drives, so the above doesn't make
sense
to me.
Yeah, I've never seen a way to RAID-1 more than 2
On 22 May 2007 01:23:03 -0700, valgog [EMAIL PROTECTED] wrote:
I found several posts about INSERT/UPDATE performance in this group,
but actually it was not really what I am searching for an answer to...
I have a simple reference table WORD_COUNTS that contains the count of
words that appear in a
On 26/02/07, Pallav Kalva [EMAIL PROTECTED] wrote:
Hi,
I am in the process of cleaning up one of our big tables; this table
has 187 million records and we need to delete around 100 million of them.
I am deleting around 4-5 million of them daily in order to catch up
with vacuum and also
On 12/01/07, Tobias Brox [EMAIL PROTECTED] wrote:
We have a table with a timestamp attribute (event_time) and a state flag
which usually changes value around the event_time (it goes to 4). Now
we have more than two years of events in the database, and around 5k of
future events.
It is
On 20/12/06, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
On Tue, Dec 19, 2006 at 11:19:39PM -0800, Brian Herlihy wrote:
Actually, I think I answered my own question already. But I want to
confirm: is the GROUP BY faster because it doesn't have to sort results,
whereas DISTINCT must
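The two forms return the same rows; only the plan differs. In PostgreSQL releases of that era, SELECT DISTINCT was always implemented with a sort, while GROUP BY could also be planned as a hash aggregate. A minimal sketch of the equivalence, using Python's sqlite3 as a stand-in (table `t` and its contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), ("b",), ("a",)])

# Both queries deduplicate; only the execution strategy can differ.
distinct_rows = sorted(conn.execute("SELECT DISTINCT col FROM t"))
grouped_rows = sorted(conn.execute("SELECT col FROM t GROUP BY col"))

assert distinct_rows == grouped_rows
```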
On 24/11/06, Arnau [EMAIL PROTECTED] wrote:
Hi all,
I have a table with statistics with more than 15 million rows. I'd
like to delete the oldest statistics, and this can be about 7 million
rows. Which method would you recommend to do this? I'd also be
interested in calculating some kind of
On 28/08/06, Michal Taborsky - Internet Mall [EMAIL PROTECTED] wrote:
Markus Schaber wrote:
Hi, Michal,
Michal Taborsky - Internet Mall wrote:
When using this view, you are interested in tables which have the
bloat column higher than, say, 2.0 (in freshly dumped/restored/analyzed
to reclaim
the free space. (A CLUSTER of the relevant tables may work.
If you run VACUUM VERBOSE regularly, you can check that you are vacuuming
often enough and that your free space map is big enough to hold your
free space.
Peter Childs
but then you would have to include Access's vacuum too. I think you
will find Tools - Database Utils - Compact Database performs
a similar purpose and is just as important, as I've seen many Access
databases bloat in my time.
Peter Childs
in later
versions...) so you might want to try REINDEX.
They are all worth a try; it's a brief summary of what's been on
the performance list for weeks and weeks now.
Peter Childs
---(end of broadcast)---
TIP 7: don't forget to increase your free space map
the optimized form from inside a function (you can write aggregate functions
yourself if you wish) to do something slightly differently.
for my large table
select max(field) from table; (5264.21 msec)
select field from table order by field limit 1; (54.88 msec)
Peter Childs
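The timing gap reflects that min()/max() in PostgreSQL before 8.1 could not use an index and always scanned the whole table, while the ORDER BY ... LIMIT 1 form reads the index in order and stops at the first row (for max() the sort must be descending; the ascending form quoted above corresponds to min()). A sketch of the equivalence, using Python's sqlite3 as a stand-in with an invented table and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field INTEGER)")
conn.execute("CREATE INDEX t_field_idx ON t (field)")
conn.executemany("INSERT INTO t VALUES (?)", [(n,) for n in range(1000)])

# Aggregate form: pre-8.1 PostgreSQL executed this as a full sequential scan.
agg_min = conn.execute("SELECT min(field) FROM t").fetchone()[0]

# Rewrite from the post: walk the index in order and stop after one row.
limit_min = conn.execute("SELECT field FROM t ORDER BY field LIMIT 1").fetchone()[0]

assert agg_min == limit_min
```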
group by instead. I think this is an old bug; it's fixed in
7.3.2, which I'm using.
Peter Childs
[EMAIL PROTECTED]:express=# explain select distinct region from region;
QUERY PLAN
that many vacuums may be slowing down my database
Peter Childs
, what is it called? I can't see it.
Is it part of gborg?
Peter Childs