Scott Marlowe wrote:
I'm guessing that you just had more data in the table or something by
the time you tested that, or some cron job was running in the
background, or some other issue, not the index.
It starts from scratch and builds up. Every insert has constant time
from the first to the
On Sun, 25 Jan 2009, Scott Marlowe wrote:
More cores is more important than faster but fewer
Again, more slower disks beat fewer faster ones.
Not necessarily. It depends what you are doing. If you're going to be
running only one database connection at a time, doing really big complex
queries,
On Mon, Jan 26, 2009 at 10:10:13AM +0100, Thomas Finneid wrote:
Scott Marlowe wrote:
I'm guessing that you just had more data in the table or something by
the time you tested that, or some cron job was running in the
background, or some other issue, not the index.
It starts from scratch and
Kenneth Marshall wrote:
It may be that the smaller index has update contention for the same
blocks that the larger index does not.
Is that an assumption based on both indexes existing? If so I might
agree, but if you are talking about only one index existing at a time
then could you explain
On Mon, Jan 26, 2009 at 03:49:00PM +0100, Thomas Finneid wrote:
Kenneth Marshall wrote:
It may be that the smaller index has update contention for the same
blocks that the larger index does not.
Is that an assumption based on both indexes existing? If so I might agree,
but if you are
Matthew Wakeling wrote:
On Sun, 25 Jan 2009, M. Edward (Ed) Borasky wrote:
Actually, this isn't so much a 'pgbench' exercise as it is a source of
'real-world application' data for my Linux I/O performance visualization
tools. I've done 'iozone' tests, though not recently. But what I'm
M. Edward (Ed) Borasky wrote:
At the CMG meeting I asked the disk drive engineers, well, if the
drives are doing the scheduling, why does Linux go to all the trouble?
One big reason is that Linux knows more about the relative importance of
I/O operations than the individual drives do. Linux's
Craig Ringer wrote:
M. Edward (Ed) Borasky wrote:
At the CMG meeting I asked the disk drive engineers, well, if the
drives are doing the scheduling, why does Linux go to all the trouble?
One big reason is that Linux knows more about the relative importance of
I/O operations than the
On Mon, 26 Jan 2009, M. Edward (Ed) Borasky wrote:
Is there a howto somewhere on disabling this on a Seagate Barracuda?
http://inferno.slug.org/cgi-bin/wiki?Western_Digital_NCQ is a good
discussion of disabling NCQ support under Linux (both in user-space and
directly in the kernel itself).
On Mon, Jan 26, 2009 at 4:09 AM, Matthew Wakeling matt...@flymine.org wrote:
On Sun, 25 Jan 2009, Scott Marlowe wrote:
More cores is more important than faster but fewer
Again, more slower disks beat fewer faster ones.
Not necessarily. It depends what you are doing. If you're going to be
On Jan 26, 2009, at 2:42 PM, David Rees wrote:
Lots of people have databases much, much, bigger - I'd hate to imagine
having to restore from backup from one of those monsters.
If you use PITR + rsync you can create a binary snapshot of the db, so
restore time is simply how long it takes to
On Mon, 2009-01-26 at 14:58 -0500, Jeff wrote:
voila. I have 2 full copies of the db. You could even expand it a bit
and after the rsync & friends have it, fire up the instance and run
pg_dump against it for a pg_restore compatible dump just in case.
It takes a long time to restore a
The technique Jeff is speaking of below is exactly how we do it,
except we use file-system snapshots vs rsync.
The problem is how slow log application is when recovering since it's
a single process, and very slow at that.
-kg
On Jan 26, 2009, at 11:58 AM, Jeff wrote:
On Jan 26, 2009,
On Mon, Jan 26, 2009 at 11:58 AM, Jeff thres...@torgo.978.org wrote:
On Jan 26, 2009, at 2:42 PM, David Rees wrote:
Lots of people have databases much, much, bigger - I'd hate to imagine
having to restore from backup from one of those monsters.
If you use PITR + rsync you can create a binary
On Jan 26, 2009, at 3:00 PM, Joshua D. Drake wrote:
On Mon, 2009-01-26 at 14:58 -0500, Jeff wrote:
voila. I have 2 full copies of the db. You could even expand it a bit
and after the rsync & friends have it, fire up the instance and run
pg_dump against it for a pg_restore compatible dump
Greg Smith wrote:
It's a tough time to be picking up inexpensive consumer SATA disks right
now. Seagate's drive reliability has been falling hard the last couple
of years, but all the WD drives I've started trying out instead have
just awful firmware. At least they're all cheap, I guess.
I
On Mon, Jan 26, 2009 at 12:27 PM, Jeff thres...@torgo.978.org wrote:
I'm quite excited about the feature. I'm still on 8.2 mostly because of the
downtime of the dump/restore. I wrote up some plans a while back on doing
the poor-mans parallel restore, but I haven't had the time to actually do
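For reference, the built-in parallel restore being anticipated here arrived in 8.4 as pg_restore's -j option. A hedged sketch (the database and file names are made up for illustration):

```shell
# Dump in custom format, which is required for parallel restore
pg_dump -Fc -f mydb.dump mydb

# Restore using 4 parallel jobs (the PostgreSQL 8.4 feature)
pg_restore -j 4 -d mydb_new mydb.dump
```

A reasonable starting point for the job count is the number of CPU cores on the restoring machine.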
M. Edward (Ed) Borasky wrote:
At the CMG meeting I asked the disk drive engineers, well, if the
drives are doing the scheduling, why does Linux go to all the trouble?
Their answer was something like, smart disk drives are a relatively
recent invention. But
One more reason?
I imagine the disk
Ron Mayer wrote:
M. Edward (Ed) Borasky wrote:
At the CMG meeting I asked the disk drive engineers, well, if the
drives are doing the scheduling, why does Linux go to all the trouble?
Their answer was something like, smart disk drives are a relatively
recent invention. But
One more reason?
Jeff wrote:
If you use PITR + rsync you can create a binary snapshot of the db, so
restore time is simply how long it takes to untar / whatever it into
place. Our backup script basically does:
archive backup directory
pg_start_backup
rsync
pg_stop_backup
voila. I have 2 full copies of the
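The four steps Jeff lists might look roughly like this as a script. The paths and the backup label are illustrative assumptions, not his actual code, and WAL archiving is assumed to already be configured:

```shell
#!/bin/sh
# Sketch of a PITR base backup via rsync, per the procedure above.
BACKUP=/backups/base          # assumed backup destination
DATADIR=/var/lib/pgsql/data   # assumed cluster data directory

mv "$BACKUP" "$BACKUP.old"                     # archive the previous backup directory
psql -c "SELECT pg_start_backup('nightly');"   # mark the start of the base backup
rsync -a --delete "$DATADIR/" "$BACKUP/"       # copy the cluster while it runs
psql -c "SELECT pg_stop_backup();"             # mark the end; archived WAL makes the copy consistent
```

The copy is not consistent on its own; replaying the WAL archived between pg_start_backup and pg_stop_backup is what makes it a usable snapshot.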
On Mon, 26 Jan 2009, Matthew Wakeling wrote:
On Sun, 25 Jan 2009, Scott Marlowe wrote:
RAID-10 is almost always the right choice.
Agreed. Unless you don't care about the data and need the space, where RAID 0
might be useful, or if you really don't need the space, where RAID 1 might be
On Mon, 26 Jan 2009, David Rees wrote:
And yes, the more memory you can squeeze into the machine, the better,
though you'll find that after a certain point, price starts going up
steeply. Of course, if you only have a 15GB database, once you reach
16GB of memory you've pretty much hit the
Craig Ringer wrote:
These devices would be interesting for a few uses, IMO. One is temp
table space and sort space in Pg. Another is scratch space for apps
(like Photoshop) that do their own VM management. There's also potential
Surely temp tables and sort space aren't subject to fsync and
On Tue, 27 Jan 2009, James Mansion wrote:
Craig Ringer wrote:
These devices would be interesting for a few uses, IMO. One is temp
table space and sort space in Pg. Another is scratch space for apps
(like Photoshop) that do their own VM management. There's also potential
Surely temp tables and
Folks,
I turned on temp file logging for PostgreSQL to see if I needed to
adjust work_mem. Oddly, it's logging file usage up to 33 times per
executed query (and no, the query isn't large enough to need 33 separate
sorts).
Any idea what's going on here?
--Josh
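For anyone reproducing this, the two settings involved are these; the values shown are examples, not recommendations:

```
# postgresql.conf
log_temp_files = 0      # log every temp file created; a value in kB logs only files larger than that
work_mem = 16MB         # per-sort / per-hash memory before PostgreSQL spills to temp files
```

Note that work_mem applies per sort or hash operation, so a single query can legitimately create multiple temp files.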