There has been an error in the tests: the dataset size was not 2*MEM, it
was 0.5*MEM.
I shall redo the tests and post the results.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 2/17/09 11:52 PM, "Rajesh Kumar Mallah" wrote:
the raid10 volume was benchmarked again,
taking into consideration the above points
Effect of ReadAhead Settings
disabled, 256 (default), 512, 1024
xfs_ra0     414741, 66144
xfs_ra256   403647, 545026   (all tests on sda6)
One thing to note is that Linux's md sets the readahead to 8192 by default
instead of 128. I've noticed that in many situations, a large chunk of the
reported performance boost is due to this alone.
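For reference, the readahead of a block device can be inspected and changed at runtime with blockdev. A minimal sketch; the device name /dev/md0 is an assumption for illustration, not taken from the thread:

```shell
# Readahead is expressed in 512-byte sectors:
# 128 sectors = 64 KB (kernel default), 8192 sectors = 4 MB (md default).
blockdev --getra /dev/md0        # print the current readahead, in sectors
blockdev --setra 512 /dev/md0    # set readahead to 512 sectors (256 KB)
```

Note that --setra is not persistent across reboots; it has to be reapplied from an init script or similar.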
On 2/18/09 12:57 AM, "Grzegorz Jaśkiewicz" wrote:
have you tried hanging a bunch of raid1 arrays off Linux's md, and letting it do raid0 for you?
On 2/18/09 12:31 AM, "Scott Marlowe" wrote:
> Effect of ReadAhead Settings
> disabled, 256 (default), 512, 1024
>
> xfs_ra0     414741, 66144
> xfs_ra256   403647, 545026   (all tests on sda6)
> xfs_ra512   411357, 564769
> xfs_ra1024  404392, 431168
2009/2/18 Rajesh Kumar Mallah :
> On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz
> wrote:
>> have you tried hanging a bunch of raid1 arrays off Linux's md, and letting
>> it do raid0 for you?
>
> Hmmm, I will have only 3 bunches in that case, as the system has to boot
> from the first bunch,
> and the system has only 8 drives.
"Ross J. Reedstrom" writes:
> On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:
>>
>> Try running tests with ttcp to eliminate any PostgreSQL overhead and
>> find out the real bandwidth between the two machines. If its results
>> are also slow, you know the problem is TCP related.
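The suggested raw-bandwidth check could look like this. A sketch assuming the classic BSD ttcp flags (-r receive, -t transmit, -s source/sink generated pattern data instead of stdin/stdout); the hostname db-host is a placeholder:

```shell
# On the receiving machine: sink the data and report throughput when done
ttcp -r -s

# On the sending machine: transmit generated pattern data to the receiver
# (db-host is a placeholder for the real hostname)
ttcp -t -s db-host
```

Compare the reported throughput against the link's nominal speed; if it is far below, the problem is in the network path rather than in PostgreSQL.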
> Well, that does sound weird... can you post the full definition for
> the images_meta table? Are there any other triggers on that table?
> Is it referenced by any foreign keys? How fast is the insert if you
> drop the trigger?
>
> ...Robert
Yes, weird. Something was wrong in my own code, aft
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz wrote:
> have you tried hanging a bunch of raid1 arrays off Linux's md, and letting
> it do raid0 for you?
Hmmm, I will have only 3 bunches in that case, as the system has to boot
from the first bunch,
and the system has only 8 drives. I think reducing spindles will red
have you tried hanging a bunch of raid1 arrays off Linux's md, and letting
it do raid0 for you?
I've heard plenty of stories where this actually sped up performance. One
notable case is that of the YouTube servers.
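The layered setup being suggested can be sketched with mdadm. A sketch only; the device names /dev/sd[b-e] and array names /dev/md[0-2] are assumptions for illustration, not taken from the thread:

```shell
# Build two raid1 mirrors (device names are assumptions for illustration)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

# Stripe (raid0) across the mirrors, giving raid1+0 managed by md
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```

mdadm can also build the equivalent layout in a single array with --level=10, which avoids managing the nested arrays by hand.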
On Wed, Feb 18, 2009 at 1:44 AM, Rajesh Kumar Mallah
wrote:
>>> Effect of ReadAhead Settings
>>> disabled, 256 (default), 512, 1024
>>>
>>> SEQUENTIAL
>>> xfs_ra0     414741, 66144
>>> xfs_ra256   403647, 545026   (all tests on sda6)
>>> xfs_ra512   411357
>> Effect of ReadAhead Settings
>> disabled, 256 (default), 512, 1024
>>
>> SEQUENTIAL
>> xfs_ra0     414741, 66144
>> xfs_ra256   403647, 545026   (all tests on sda6)
>> xfs_ra512   411357, 564769
>> xfs_ra1024  404392, 431168
>>
>> looks like 512
On Wed, Feb 18, 2009 at 12:52 AM, Rajesh Kumar Mallah
wrote:
> the raid10 volume was benchmarked again,
> taking into consideration the above points
> Effect of ReadAhead Settings
> disabled, 256 (default), 512, 1024
>
> xfs_ra0     414741, 66144
> xfs_ra256   403647, 545026