all user_indexes is 1100. About two-thirds of them are
obsolete? I did do an ETL from MySQL -> PostgreSQL, but that's still a
ridiculous number of (potentially) unused indexes.
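For what it's worth, a query along these lines (using the standard pg_stat_user_indexes view) can help spot indexes that have never been scanned since the statistics were last reset:

```sql
-- Indexes with zero scans since the last stats reset: candidates for review.
-- (An index can show idx_scan = 0 and still be needed, e.g. to enforce
-- uniqueness or to support a rare but important query.)
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
```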
Regards
Stef
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
Stef Telford wrote:
> Mark Kirkwood wrote:
>> Scott Carey wrote:
>>> A little extra info here >> md, LVM, and some other tools do
>>> not allow the file system to use write barriers properly So
>>> th
Matthew Wakeling wrote:
> On Wed, 1 Apr 2009, Stef Telford wrote:
>>Good UPS, a warm PITR standby, offsite backups and regular checks is
>> "good enough" for me, and really, that's what it all comes down to.
>> Mitigating risk and factors into an 'accep
Scott Marlowe wrote:
> On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford wrote:
>
>> I do agree that the benefit is probably from write-caching, but I
>> think that this is a 'win' as long as you have a UPS or BBU adaptor,
>> and really, in a prod enviro
Greg Smith wrote:
> On Wed, 1 Apr 2009, Stef Telford wrote:
>
>> I have -explicitly- enabled sync in the conf...In fact, if I turn
>> -off- sync commit, it gets about 200 -slower- rather than
>> faster.
>
> Yo
ologyforum.com/forum/showthread.php?t=54038 )
Curiously, I think with SSDs there may have to be an 'off' flag
if you put the xlog onto an SSD. It seems to complain about 'too
frequent checkpoints'.
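For anyone hitting the same warning, the usual knobs in that era of postgresql.conf were checkpoint_segments and checkpoint_warning; the values here are illustrative, not recommendations:

```
# postgresql.conf (8.x-era settings; checkpoint_segments was later
# replaced by max_wal_size in 9.5)
checkpoint_segments = 16    # allow more WAL segments between checkpoints
checkpoint_warning = 30s    # warn if checkpoints come closer together than this
```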
I can't wait for -either- of the drives to arrive. I want to see
i
essed: 288000/288000
tps = 1398.907206 (including connections establishing)
tps = 1399.233785 (excluding connections establishing)
It's also running ext4dev, but this is the 'playground' server,
not the real iron (and I dread to do that on the real iron). In short,
tracer bullets aside, I have heard that -that- can be dangerous ;p
Regards
Stef
Will Reese wrote:
> I'm preparing for an upgrade from PostgreSQL 7.4.5 to 8.1.3, and I
> noticed a potential performance issue.
>
> I have two servers, a dual proc Dell with raid 5 running Postgr
ecurity fix in 7.4.8.
Since the db is in a state of 'down' or repair, why not do it now?
Two birds, one stone.
Regards
Stef
---(end of broadcast)---
TIP 6: explain analyze is your friend
, minor point releases aren't adding new features or changing
basic functionality; they are pure and simple bugfixes. If I were in
-your- position, I would run (don't walk ;) and install up to 8.1.3.
Of course, that's just my 2c, feel free to ignore :D
Regards
Stef
Chris wrote:
george y
Tino Wildenhain mentioned :
=> Well, text columns are automatically compressed via the toast mechanism.
=> This is handled transparently for you.
OK, I misread the documentation, and I forgot to mention that
I'm using postgres 7.3 and 8.0.
It's actually the EXTERNAL storage type that is larger, not
I have a table in the databases I work with
that contains two text columns with XML data
stored inside them.
This table is by far the biggest table in the databases,
and the text columns use up the most space.
I saw that the default storage type for text columns is
"EXTENDED" which, according t
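For reference, the per-column storage strategy can be changed with ALTER TABLE; the table and column names below are made up:

```sql
-- EXTENDED (the default for text): compressible, may be stored out of line.
-- EXTERNAL: may be stored out of line but is never compressed, which is
-- why it can end up larger on disk.
ALTER TABLE xml_docs ALTER COLUMN payload SET STORAGE EXTERNAL;
```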
Bruno Wolff III mentioned :
=> > => If you have a proper FSM setting you shouldn't need to do vacuum fulls
=> > => (unless you have an older version of postgres where index bloat might
=> > => be an issue).
Thanks Alvaro and Bruno
I just want to clarify something that I also couldn't
find a cle
Bruno Wolff III mentioned :
=> If you have a proper FSM setting you shouldn't need to do vacuum fulls
=> (unless you have an older version of postgres where index bloat might
=> be an issue).
What version of postgres was the last version that had
the index bloat problem?
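For context, the free space map referred to here was governed by two postgresql.conf settings in pre-8.4 releases (they were removed in 8.4, when the FSM moved to disk); the numbers are only illustrative:

```
# postgresql.conf, pre-8.4
max_fsm_pages = 200000    # total pages with free space to track; should be
                          # at least as large as VACUUM VERBOSE says is needed
max_fsm_relations = 1000  # number of tables and indexes tracked
```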
[EMAIL PROTECTED] mentioned :
=> Try ANALYZE after loading the referenced tables, but before loading the
=> main table
I attached a new script for creating the load file...
Analyze didn't help, it actually took longer to load.
I set autocommit to off, and put a commit after every 100
inserts, chat
Hi all,
I suspect this problem/bug has been dealt with already, but I couldn't
find anything in the mail archives.
I'm using postgres 7.3, and I managed to recreate the problem using the
attached files.
The database structure is in slow_structure.sql
After creating the database, using this s
Hello Andrew,
Everything that Shridhar says makes perfect
sense, and, speaking from experience in dealing with
this type of 'problem', everything you say does as
well. Such is life really :)
I would not be at -all- surprised if Sybase
and Oracle did query re-writing behind the sc
Pierre-Frédéric Caillaud mentioned :
=> http://www.postgresql.org/docs/7.4/static/jdbc-query.html#AEN24298
My question is:
Is this only true for postgres versions >= 7.4?
I see that the same section about "Setting fetch size to turn cursors on and off"
is not in the postgres 7.3.7 docs. Does this me
Tom Lane mentioned :
=> Not if you haven't got the RAM to support it :-(
=>
=> Another thing you might look at is ANALYZEing the tables again after
=> you've loaded all the new data. The row-count estimates seem way off
=> in these plans. You might need to increase the statistics target,
=> too,
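The statistics-target suggestion from the quote can be applied per column; the identifiers below are hypothetical:

```sql
-- Collect more detailed statistics for a column with skewed data,
-- then refresh the planner's estimates.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```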
Hi all,
I've attached the full query in query.sql
I'm using postgres 7.3.4 on Linux version 2.4.26-custom
( /proc/sys/vm/overcommit_memory = 0 this time )
free:
            total       used       free     shared    buffers     cached
Mem:      1810212    1767384      42828          0
Tom Lane mentioned :
=> Turn off
=> memory overallocation in your kernel to get more stable behavior when
=> pushing the limits of available memory.
I think this will already help a lot.
Thanks!!
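On Linux, the knob the quote refers to is vm.overcommit_memory; the strict-accounting value below is for 2.6 kernels (the 2.4 kernel mentioned here supported different values, so check the kernel's Documentation/vm/overcommit-accounting):

```
# /etc/sysctl.conf  (or: echo 2 > /proc/sys/vm/overcommit_memory)
vm.overcommit_memory = 2   # strict accounting: refuse allocations up front
                           # rather than overcommit and OOM-kill later
```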
=> If your concern is with a single nightly process, then that quad Xeon is
=> doing squat for you, b
Hi all,
I'm running postgres 7.3.4 on a quad Xeon 2.8 GHz with
Mem: 1057824768 309108736 748716032 0 12242944 256413696
Swap: 518053888 8630272 509423616
on Linux version 2.4.26-custom
Data directory is mounted with noatime.
Nothing else but one 11GB database is running on this mach
Tom Lane mentioned :
=> Please try it on 7.4.2 and see if you still have a problem.
Will do, and I'll post the results
Thanks!
Hi all,
I've been dealing with a problem for the past two days
where a certain SQL statement works 2 out of 5 times, and
the other 3 times, it causes the machine (quad Xeon 2.8GHz
+ 792543232 bytes mem, linux kernel 2.4.26-custom, pg ver 7.3.4)
to slow down, and finally grind to a halt. It looks
Forgot to mention that I use postgres 7.3.4
Stef mentioned :
=> Hi all ,
=>
=> I'm trying to find out if there is a specific setting
=> to make transactions time out faster in a scenario
=> where there's an update on a table in a transaction
=> block, and another up
Hi all,
It seems I always find a solution just after
panicking a little bit.
Anyway, I found that statement_timeout solved
my problem. When I tested it earlier, I actually
made an error, and skipped it as a possible
solution.
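For anyone finding this thread later, statement_timeout (available from 7.3 on) takes a value in milliseconds; 5000 here is only an example:

```sql
-- Abort any statement in this session that runs for more than 5 seconds.
SET statement_timeout = 5000;
-- A value of 0 (the default) disables the timeout.
```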
Cheers
Stef
Stef mentioned :
=> Forgot to mention that I
Hi all ,
I'm trying to find out if there is a specific setting
to make transactions time out faster in a scenario
where there's an update on a table in a transaction
block, and another update process tries to update
the same column.
It looks like the second process will wait until you
end the t
major
difference in memory usage.
Regards
Stef
On Mon, 6 Oct 2003 09:55:51 +0200
Stef <[EMAIL PROTECTED]> wrote:
=> Thanks for the replies,
=>
=> On Fri, 3 Oct 2003 11:08:48 -0700
=> Josh Berkus <[EMAIL PROTECTED]> wrote:
=> => 1. Make sure that the WAL files (pg_xlo
M. I've gathered that it is theoretically
possible, so now to go try it.
Regards
Stef
possible. (Which I know it is, but not how)
Regards
Stef
uch from normal operation on a machine like that.
Any tips on how to achieve the most diminutive vmem and conf settings?
I tried to figure this out from the docs, and played around with
backend/storage , but I'm not really winning.
Regards
Stef
s are expected and
will be accepted. I will need the
functionality of >= 7.3.4, at least.
Any help will be much appreciated.
Regards
Stef
Hi all,
I posted this problem on the sql list, and was referred to this list instead.
I have attached an sql statement that normally runs under 3 minutes.
That is, until I vacuum analyze the database (or just the tables in the query),
then the same query runs longer than 12 hours, and I have to k
Hi Tom,
Thanks for responding.
I got as much info as I could :
On Mon, 11 Aug 2003 11:43:45 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
=> Could we see the results of "EXPLAIN ANALYZE", rather than just EXPLAIN,
=> for the un-analyzed case?
Attached the output of this.
=> Also, what do you see
On Mon, 11 Aug 2003 14:25:03 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
=> set enable_mergejoin to off;
=> explain analyze ... query ...
=>
=> If it finishes in a reasonable amount of time, send the explain output.
Hi again,
I did this on the 7.3.1 database, and attached the output.