On Wed, May 03, 2006 at 04:30:32PM -0500, Scott Marlowe wrote:
If you've not tried bonnie++ on a limited memory machine, you really
should.
Yes, I have. I also patched bonnie to handle large files and other such
nifty things before bonnie++ was forked. Mostly I just didn't get much
value out of it.
On Wed, 2006-05-03 at 15:53, Michael Stone wrote:
> On Wed, May 03, 2006 at 02:40:15PM -0500, Scott Marlowe wrote:
> >Note that I'm referring to bonnie++ as was an earlier poster. It
> >certainly seems capable of giving you a good idea of how your hardware
> >will behave under load.
>
> IME it gives fairly useless results. YMMV.
On Wed, May 03, 2006 at 02:40:15PM -0500, Scott Marlowe wrote:
Note that I'm referring to bonnie++ as was an earlier poster. It
certainly seems capable of giving you a good idea of how your hardware
will behave under load.
IME it gives fairly useless results. YMMV. Definitely the numbers posted
On Wed, 2006-05-03 at 14:26, Michael Stone wrote:
> On Wed, May 03, 2006 at 01:08:21PM -0500, Jim C. Nasby wrote:
> >Well, in this case the question was about random write access, which dd
> >won't show you.
>
> That's the kind of thing you need to measure against your workload.
Of course, the fi
On Wed, May 03, 2006 at 01:08:21PM -0500, Jim C. Nasby wrote:
Well, in this case the question was about random write access, which dd
won't show you.
That's the kind of thing you need to measure against your workload.
Mike Stone
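For what it's worth, a crude random-write test can be improvised on
top of dd by seeking to scattered offsets; this is only a sketch (the
path and sizes are made up, and oflag=direct needs a reasonably recent
GNU dd):

    # scatter 1000 synchronous 8kB writes across a 256MB test file
    for i in $(seq 1 1000); do
        dd if=/dev/zero of=/mnt/test/rndfile bs=8k count=1 \
           seek=$((RANDOM % 32768)) conv=notrunc oflag=direct 2>/dev/null
    done

Even then, nothing beats replaying something shaped like your actual
workload.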
On Wed, May 03, 2006 at 01:06:06PM -0400, Michael Stone wrote:
> On Wed, May 03, 2006 at 11:07:15AM -0500, Scott Marlowe wrote:
> >I have often used the mem=xxx arguments to lilo when needing to limit
> >the amount of memory for testing purposes. Just google for limit memory
> >and your bootloader
On Wed, May 03, 2006 at 11:07:15AM -0500, Scott Marlowe wrote:
I have often used the mem=xxx arguments to lilo when needing to limit
the amount of memory for testing purposes. Just google for limit memory
and your bootloader to find the options.
Or, just don't worry about it. Even if you get b
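For reference, the lilo version of that looks something like the
following (a sketch; grub takes the same mem= argument on its kernel
line):

    # /etc/lilo.conf: a boot entry capped at 512MB for benchmarking
    image=/boot/vmlinuz
        label=linux-512m
        append="mem=512M"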
On Wed, 2006-05-03 at 10:59, Michael Stone wrote:
> On Wed, May 03, 2006 at 09:19:52AM -0400, Jeff Trout wrote:
> >Bonnie++ is able to use very large datasets. It also tries to figure
> >out the size you want (2x RAM) - the original bonnie is limited to 2GB.
>
> Yes, and once you get into large datasets like that the quality of
> the data is fairly poor
On Wed, May 03, 2006 at 09:19:52AM -0400, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to figure
out the size you want (2x RAM) - the original bonnie is limited to 2GB.
Yes, and once you get into large datasets like that the quality of the
data is fairly poor b
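(If the 2x-RAM autodetection picks a size you don't want, it can be
overridden; a sketch, with a placeholder directory and user:

    # force a 4GB dataset instead of the 2x-RAM default
    bonnie++ -d /mnt/test -s 4096 -u nobody

-s is in megabytes; -u is only needed when running as root.)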
On May 3, 2006, at 10:16 AM, Vivek Khera wrote:
On May 3, 2006, at 9:19 AM, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to
figure out the size you want (2x RAM) - the original bonnie is
limited to 2GB.
but you have to be careful building bonnie++ since it has bad
assumptions about which systems can do large files.
On May 3, 2006, at 9:19 AM, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to
figure out the size you want (2x RAM) - the original bonnie is
limited to 2GB.
but you have to be careful building bonnie++ since it has bad
assumptions about which systems can do large files.
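One way to guard against that at build time is to force the standard
large-file macros yourself; a sketch (the exact knobs vary by bonnie++
version):

    # build bonnie++ with large-file support forced on
    CFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure
    make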
On May 3, 2006, at 8:18 AM, Michael Stone wrote:
On Tue, May 02, 2006 at 08:09:52PM -0600, Brendan Duddridge wrote:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB
On Tue, May 02, 2006 at 08:09:52PM -0600, Brendan Duddridge wrote:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
Tom Lane wrote:
Eric Lam <[EMAIL PROTECTED]> writes:
what is the quickest way of dumping a DB and restoring it? I have done a
"pg_dump -D database | split --line-bytes 1546m part"
Don't use "-D" if you want fast restore ...
regards, tom lane
Hi Jim,
The output from bonnie on my boot drive is:
File './Bonnie.27964', size: 0
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 2...Seeker 1...Seeker 3...start 'em...done...done...done...
---
BTW, you should be able to check to see what the controller is actually
doing by pulling one of the drives from a running array. If it only
hammers 2 drives during the rebuild, it's RAID10. If it hammers all the
drives, it's 0+1.
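A sketch of watching that from the OS side while the rebuild runs
(Linux sysstat syntax; other platforms' iostat flags differ):

    # extended per-device stats every 5 seconds; the busy drives stand out
    iostat -x 5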
As for the Xserve RAID, it is possible to eliminate most (or maybe even
RAID 10 is better than RAID 0+1. There is a lot of information on
the net about this, but here is the first one that popped up on
google for me.
http://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html
The quick summary is that performance is about the same between the
two, but RAID 10 degrades far more gracefully when a drive fails.
They are not equivalent. As I understand it, RAID 0+1 performs about
the same as RAID 10 when everything is working, but degrades much less
nicely when a single drive fails, and is more likely to suffer
catastrophic data loss if multiple drives fail. For example, with eight
drives and one already dead, RAID 10 loses data only if that drive's
mirror partner (1 of the remaining 7) also fails, while RAID 0+1 loses
data if any drive in the surviving stripe set (4 of the remaining 7)
fails.
-- Mark
On Tue, 2006-05-02 at
Everyone here always says that RAID 5 isn't good for Postgres. We
have an Apple Xserve RAID configured with RAID 5. We chose RAID 5
because Apple said their Xserve RAID was "optimized" for RAID 5. Not
sure if we made the right decision though. They give an option for
formatting as RAID 0+1.
On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:
> all dumpfiles total about 17Gb. It has been running for 50ish hrs and up
> to about the fourth file (5-6 ish Gb) and this is on a raid 5 server.
RAID 5 generally doesn't bode well for write performance (each small
write turns into a read-modify-write cycle across the parity set); that
could be part of the issue.
--
Tom Lane <[EMAIL PROTECTED]> wrote:
> Eric Lam <[EMAIL PROTECTED]> writes:
> > what is the quickest way of dumping a DB and restoring it? I have done a
>
> >"pg_dump -D database | split --line-bytes 1546m part"
>
> Don't use "-D" if you want fast restore ...
hehe, yes ;-)
http://people.p
Eric Lam <[EMAIL PROTECTED]> writes:
> what is the quickest way of dumping a DB and restoring it? I have done a
>"pg_dump -D database | split --line-bytes 1546m part"
Don't use "-D" if you want fast restore ...
regards, tom lane
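That is, the same pipeline minus -D dumps in COPY format, which
restores far faster:

    pg_dump database | split --line-bytes 1546m part
    cat part* | psql database 2> errors 1>/dev/null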
Hello list,
what is the quickest way of dumping a DB and restoring it? I have done a
"pg_dump -D database | split --line-bytes 1546m part"
Restoration as
"cat part* | psql database 2> errors 1>/dev/null"
all dumpfiles total about 17Gb. It has been running for 50ish hrs and up
to about the fourth file (5-6 ish Gb) and this is on a raid 5 server.
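Beyond dropping -D, a bulk restore usually goes much faster with a few
server settings relaxed for the duration; a sketch using 8.x-era
postgresql.conf names with illustrative values (fsync = off is only
acceptable here because a failed restore can simply be rerun from the
dumps):

    fsync = off                    # no crash safety during the load
    checkpoint_segments = 64       # fewer checkpoints during bulk COPY
    maintenance_work_mem = 524288  # in kB; speeds index rebuilds

Remember to put the normal values back and reload afterwards.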