On Wed, Feb 18, 2009 at 12:52 AM, Rajesh Kumar Mallah <mallah.raj...@gmail.com> wrote:
the raid10 volume was benchmarked again, taking into consideration the above points
Effect of ReadAhead Settings (disabled, 256 default, 512, 1024), SEQUENTIAL, all tests on sda6:

xfs_ra0     414741,  66144
xfs_ra256   403647, 545026
xfs_ra512   411357, 564769
xfs_ra1024  404392, 431168

looks like 512 was the best
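If 512 does come out best, the setting can be applied per block device with blockdev. This is only a sketch of how one might apply the result, not something from the original mails; the device name is the one from this thread:

```shell
# Readahead is given in 512-byte sectors; 256 (the kernel default) = 128 KB.
blockdev --getra /dev/sda6      # show the current readahead value
blockdev --setra 512 /dev/sda6  # set readahead to 512 sectors (256 KB)
blockdev --getra /dev/sda6      # confirm the new value
```

Note that the setting does not persist across reboots, so it would need to be re-applied from an init script.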
On Wed, Feb 18, 2009 at 1:44 AM, Rajesh Kumar Mallah <mallah.raj...@gmail.com> wrote:
Effect of ReadAhead Settings (disabled, 256 default, 512, 1024), SEQUENTIAL, all tests on sda6:
xfs_ra0    414741,  66144
xfs_ra256  403647, 545026
xfs_ra512
have you tried hanging a bunch of raid1 arrays off linux's md, and letting it do the raid0 for you?
I have heard plenty of stories where this actually sped up performance. One notable case is YouTube's servers.
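For an 8-drive box, the layered setup being suggested might look roughly like this. The device names are assumptions, not from the thread; this is a sketch of the md-on-md approach, not a tested recipe:

```shell
# Build RAID1 pairs first (mirrors), then stripe them with RAID0.
# The result is equivalent to RAID10 built as md-on-md.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdg /dev/sdh

# Stripe the three mirrors together:
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3
```

mdadm also supports --level=10 directly, which builds the same layout in a single array and avoids the nesting.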
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz <gryz...@gmail.com> wrote:
have you tried hanging a bunch of raid1 arrays off linux's md, and letting it do the raid0 for you?
Hmmm, I will have only 3 bunches in that case, as the system has to boot from the first bunch and has only 8 drives. I think reducing
Well, that does sound weird... can you post the full definition of the images_meta table? Are there any other triggers on that table? Is it referenced by any foreign keys? How fast is the insert if you drop the trigger?
...Robert
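Those checks can be run from psql. The table name comes from the thread, but the commands themselves are only a sketch of one way to answer the questions above:

```shell
# Show the full table definition, including triggers and indexes:
psql -c '\d images_meta'

# List triggers on the table from the system catalogs:
psql -c "SELECT tgname FROM pg_trigger WHERE tgrelid = 'images_meta'::regclass;"

# Find foreign keys in other tables that reference this table:
psql -c "SELECT conname, conrelid::regclass AS referencing_table
         FROM pg_constraint WHERE confrelid = 'images_meta'::regclass;"
```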
Yes, weird. Something was wrong in my own code, after
Ross J. Reedstrom reeds...@rice.edu writes:
On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:
Try running tests with ttcp to eliminate any PostgreSQL overhead and
find out the real bandwidth between the two machines. If its results
are also slow, you know the problem is
2009/2/18 Rajesh Kumar Mallah mallah.raj...@gmail.com:
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz <gryz...@gmail.com> wrote:
have you tried hanging a bunch of raid1 arrays off linux's md, and letting it do the raid0 for you?
Hmmm, I will have only 3 bunches in that case, as the system has to boot from
On 2/18/09 12:31 AM, Scott Marlowe scott.marl...@gmail.com wrote:
Effect of ReadAhead Settings (disabled, 256 default, 512, 1024), all tests on sda6:
xfs_ra0     414741,  66144
xfs_ra256   403647, 545026
xfs_ra512   411357, 564769
xfs_ra1024
One thing to note is that linux's md sets the readahead to 8192 by default instead of 128. I've noticed that in many situations, a large chunk of the reported performance boost is due to this alone.
On 2/18/09 12:57 AM, Grzegorz Jaśkiewicz gryz...@gmail.com wrote:
have you tried hanging
On 2/17/09 11:52 PM, Rajesh Kumar Mallah <mallah.raj...@gmail.com> wrote:
the raid10 volume was benchmarked again, taking into consideration the above points
Effect of ReadAhead Settings (disabled, 256 default, 512, 1024):
xfs_ra0    414741,  66144
xfs_ra256  403647, 545026
There was an error in the tests: the dataset size was not 2*MEM, it was 0.5*MEM.
I shall redo the tests and post the results.