Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
On Wed, Feb 18, 2009 at 12:52 AM, Rajesh Kumar Mallah wrote:
> the raid10 volume was benchmarked again,
> taking into consideration the above points
>
> Effect of ReadAhead Settings
> disabled, 256 (default), 512, 1024
>
> xfs_ra0     414741,  66144
> xfs_ra256   403647, 545026    (all tests on sda6)
> xfs_ra512   411357, 564769
> xfs_ra1024  404392, 431168
>
> looks like 512 was the best setting for this controller

That's only known for sequential access. How did it perform under random access, or did the numbers not change much?

> Considering these two figures
>
> xfs25 350661, 474481 (/dev/sda7)
> 25xfs 404291, 547672 (/dev/sda6)
>
> looks like the beginning of the drives is 15% faster
> than the ending sections; considering this, is it worth
> creating a special tablespace at the beginning of the drives?

It's also good because you will be short-stroking the drives. They will naturally have a smaller space to move back and forth in, and this can increase random access speed at the same time.
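For anyone reproducing this, a minimal sketch of how the readahead values above are applied (/dev/sda6 matches the tests; the value is in 512-byte sectors):

  # check the current readahead setting
  /sbin/blockdev --getra /dev/sda6

  # apply the value that benchmarked best above, then re-run the sequential test
  /sbin/blockdev --setra 512 /dev/sda6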
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
>> Effect of ReadAhead Settings
>> disabled, 256 (default), 512, 1024
>>
>> SEQUENTIAL
>> xfs_ra0     414741,  66144
>> xfs_ra256   403647, 545026    (all tests on sda6)
>> xfs_ra512   411357, 564769
>> xfs_ra1024  404392, 431168
>>
>> looks like 512 was the best setting for this controller
>
> That's only known for sequential access.
> How did it perform under random access, or did the numbers not
> change much?

RANDOM SEEKS /sec

xfs_ra0     6341.0
xfs_ra256  14642.7
xfs_ra512  14415.6
xfs_ra1024 14541.6

the value does not seem to have much effect unless it is totally disabled.

regds
mallah.
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
On Wed, Feb 18, 2009 at 1:44 AM, Rajesh Kumar Mallah wrote:
>>> Effect of ReadAhead Settings
>>> disabled, 256 (default), 512, 1024
>>>
>>> SEQUENTIAL
>>> xfs_ra0     414741,  66144
>>> xfs_ra256   403647, 545026    (all tests on sda6)
>>> xfs_ra512   411357, 564769
>>> xfs_ra1024  404392, 431168
>>>
>>> looks like 512 was the best setting for this controller
>>
>> That's only known for sequential access.
>> How did it perform under random access, or did the numbers not
>> change much?
>
> RANDOM SEEKS /sec
>
> xfs_ra0     6341.0
> xfs_ra256  14642.7
> xfs_ra512  14415.6
> xfs_ra1024 14541.6
>
> the value does not seem to have much effect
> unless it is totally disabled.

Excellent. And yes, you have to dump and reload to go from 32 to 64 bit.
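For the archives, the 32-to-64-bit move boils down to something like this sketch (mydb is a placeholder; pg_dump's custom format lets pg_restore do the reload):

  # on the old 32-bit server
  pg_dump -Fc -f mydb.dump mydb

  # on the new 64-bit server
  createdb mydb
  pg_restore -d mydb mydb.dump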
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
Have you tried hanging a bunch of RAID1 pairs off Linux's md and letting it do RAID0 for you? I have heard plenty of stories where this actually sped up performance. One notable case is the YouTube servers.
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz wrote:
> Have you tried hanging a bunch of RAID1 pairs off Linux's md and letting it do
> RAID0 for you?

Hmmm, I will have only 3 bunches in that case, as the system has to boot
from the first bunch and the system has only 8 drives. I think reducing spindles will reduce perf.

I also have a SATA SAN, though, from which I can boot!
But the server would need to be rebuilt in that case too.
I (may) give it a shot.

regds
--
mallah.

> I have heard plenty of stories where this actually sped up performance. One
> notable case is the YouTube servers.
Re: [PERFORM] Call of function inside trigger much slower than explicit function call
> Well, that does sound weird... can you post the full definition for
> the images_meta table? Are there any other triggers on that table?
> Is it referenced by any foreign keys? How fast is the insert if you
> drop the trigger?
>
> ...Robert

Yes, weird. Something was wrong in my own code; after I rewrote it to send you the full sources of the problem example, the execution times of the image insertion and the direct scaling function call became the same. Insertion of a 4000x2667px (2MB) image and a direct function call downscaling the original image to 800x600px and 128x128px both take 1.6 sec. Sorry for the confusion. And that is almost the same time the command-line utility takes to do the task. So, practically, there is no overhead to using triggers for such purposes. Nevertheless, here are my sources; maybe there is a better way to solve the task? http://www.filedropper.com/imscalepgexample
Re: [PERFORM] TCP network cost
"Ross J. Reedstrom" writes: > On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote: >> >> Try running tests with ttcp to eliminate any PostgreSQL overhead and >> find out the real bandwidth between the two machines. If its results >> are also slow, you know the problem is TCP related and not PostgreSQL >> related. > > I did in fact run a simple netcat client/server pair and verified that I > can transfer that file on 0.12 sec localhost (or hostname), 0.35 over the > net, so TCP stack and network are not to blame. This is purely inside > the postgresql code issue, I believe. There's not much Postgres can do to mess up TCP/IP. The only things that come to mind are a) making lots of short-lived connections and b) getting caught by Nagle when doing lots of short operations and blocking waiting on results. What libpq (or other interface) operations are you doing exactly? [also, your Mail-Followup-To has a bogus email address in it. Please don't do that] -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
2009/2/18 Rajesh Kumar Mallah:
> On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz wrote:
>> Have you tried hanging a bunch of RAID1 pairs off Linux's md and letting it do
>> RAID0 for you?
>
> Hmmm, I will have only 3 bunches in that case, as the system has to boot
> from the first bunch and the system has only 8 drives.
> I think reducing spindles will reduce perf.
>
> I also have a SATA SAN, though, from which I can boot!
> But the server would need to be rebuilt in that case too.
> I (may) give it a shot.

Sure, if you do play with that - make sure to tweak the 'chunk' size too. The default one is way too small (IMO).

--
GJ
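A sketch of what the md-over-hardware-RAID1 layout with an explicit chunk size might look like (device names are hypothetical; each sdX here would be a two-disk RAID1 volume exported by the controller, and 256 is the chunk size in KB):

  # stripe three hardware RAID1 volumes together with md
  mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mkfs.xfs /dev/md0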
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
On 2/18/09 12:31 AM, "Scott Marlowe" wrote:

>> Effect of ReadAhead Settings
>> disabled, 256 (default), 512, 1024
>>
>> xfs_ra0     414741,  66144
>> xfs_ra256   403647, 545026    (all tests on sda6)
>> xfs_ra512   411357, 564769
>> xfs_ra1024  404392, 431168
>>
>> looks like 512 was the best setting for this controller
>
> That's only known for sequential access. How did it perform under
> random access, or did the numbers not change much?

In my tests, I have never seen the readahead value affect random access performance (kernel 2.6.18+). At the extreme, I tried a 128MB readahead, and random I/O rates were the same. This was with CentOS 5.2; other confirmation of this would be useful. The Linux readahead algorithm is smart enough to only seek ahead after detecting sequential access.

The readahead algorithm has had various improvements that reduce the need to tune it from 2.6.18 to 2.6.24, but from what I gather, this tuning is skewed towards desktop/workstation drives and not large RAID arrays.

The readahead value DOES affect random access as a side effect, in favor of sequential reads when there is a mixed random/sequential load, by decreasing the 'read fragmentation' effect of mixing random seeks into a sequential request stream. For most database loads this is a good thing, since it increases total bytes read per unit of time, effectively 'getting out of the way' of a sequential read rather than making it drag on for a long time by splitting it into non-sequential I/Os while other random access is concurrent.
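A quick way to confirm this on your own hardware is to run the same random-read job at two readahead extremes; a sketch, assuming a job file read-rand.fio holding a random-read profile like the ones posted elsewhere in this thread (the readahead value is in 512-byte sectors):

  /sbin/blockdev --setra 256 /dev/sda6      # default readahead
  fio read-rand.fio
  /sbin/blockdev --setra 262144 /dev/sda6   # ~128MB readahead
  fio read-rand.fio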
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
One thing to note is that Linux's md sets the readahead to 8192 by default instead of 128. I've noticed that in many situations, a large chunk of the performance boost reported is due to this alone.

On 2/18/09 12:57 AM, "Grzegorz Jaśkiewicz" wrote:
> Have you tried hanging a bunch of RAID1 pairs off Linux's md and letting it do
> RAID0 for you? I have heard plenty of stories where this actually sped up
> performance. One notable case is the YouTube servers.
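This is easy to check; a sketch (device names are examples):

  /sbin/blockdev --getra /dev/md0   # md device, typically 8192
  /sbin/blockdev --getra /dev/sda   # plain device, kernel default (much lower)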
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
On 2/17/09 11:52 PM, "Rajesh Kumar Mallah" wrote:

> the raid10 volume was benchmarked again,
> taking into consideration the above points
>
> Effect of ReadAhead Settings
> disabled, 256 (default), 512, 1024
>
> xfs_ra0     414741,  66144
> xfs_ra256   403647, 545026    (all tests on sda6)
> xfs_ra512   411357, 564769
> xfs_ra1024  404392, 431168
>
> looks like 512 was the best setting for this controller

Try 4096 or 8192 (or, just to see, 32768); with a sufficient readahead value you should get numbers very close to a raw partition with xfs. It is controller dependent for sure, but I usually see a "small peak" in performance at 512 or 1024, followed by a dip, then a larger peak and plateau somewhere near (# of drives * the small peak). The higher quality the controller, the less you need to fiddle with this.

I use a script that runs fio benchmarks with the following profiles, with readahead values from 128 to 65536 (a sketch of such a wrapper follows the profiles below). The single-reader STR test peaks with a smaller readahead value than the concurrent-reader one (2 or 8 concurrent sequential readers), and the mixed random/sequential read loads become more biased towards sequential transfer (and thus higher overall throughput in bytes/sec) with larger readahead values. The choice between the cfq and deadline schedulers, however, will affect the priority of random vs. sequential reads more than the readahead - cfq favors random access due to dividing I/O by time slice. The FIO profiles I use for benchmarking are at the end of this message.

> Considering these two figures
>
> xfs25 350661, 474481 (/dev/sda7)
> 25xfs 404291, 547672 (/dev/sda6)
>
> looks like the beginning of the drives is 15% faster
> than the ending sections; considering this, is it worth
> creating a special tablespace at the beginning of the drives?

For SAS drives, it's typically a ~15% to 25% degradation (the last 5% is definitely slow). For SATA 3.5" drives, the STR of the last 5% is 50% of that at the front. Graphs about half way down this page show what it looks like for a typical SATA drive: http://www.tomshardware.com/reviews/Seagate-Barracuda-1-5-TB,2032-5.html
And a couple of figures for some SAS drives here: http://www.storagereview.com/ST973451SS.sr?page=0%2C1

> If testing STR, you will also want to tune the block device readahead value
> (example: /sbin/blockdev --getra /dev/sda6). This has a very large impact on
> sequential transfer performance (and no impact on random access). How large
> an impact depends quite a bit on what kernel you're on, since the readahead
> code has been getting better over time and requires less tuning. But it still
> defaults out-of-the-box to settings more optimal for a single drive than RAID.
> For SAS, try 256 or 512 * the number of effective spindles (spindles * 0.5
> for RAID 10). For SATA, try 1024 or 2048 * the number of effective
> spindles. The value is in blocks (512 bytes).
> There is documentation on the blockdev command, and here is a little
> write-up I found with a couple of web searches:
> http://portal.itauth.com/2007/11/20/howto-linux-double-your-disk-read-performance-single-command

FIO benchmark profile examples (long, posting here for the archives):

*Read benchmarks, sequential:

[read-seq]
; one sequential reader reading one 64g file
rw=read
size=64g
directory=/data/test
fadvise_hint=0
blocksize=8k
direct=0
ioengine=sync
iodepth=1
numjobs=1
nrfiles=1
runtime=1m
group_reporting=1
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

[read-seq]
; two sequential readers, each concurrently reading a 32g file, for a total of 64g max
rw=read
size=32g
directory=/data/test
fadvise_hint=0
blocksize=8k
direct=0
ioengine=sync
iodepth=1
numjobs=2
nrfiles=1
runtime=1m
group_reporting=1
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

[read-seq]
; eight sequential readers, each concurrently reading an 8g file, for a total of 64g max
rw=read
size=8g
directory=/data/test
fadvise_hint=0
blocksize=8k
direct=0
ioengine=sync
iodepth=1
numjobs=8
nrfiles=1
runtime=1m
group_reporting=1
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

*Read benchmarks, random 8k reads.

[read-rand]
; random access on 2g file by single reader, best case scenario.
rw=randread
size=2g
directory=/data/test
fadvise_hint=0
blocksize=8k
direct=0
ioengine=sync
iodepth=1
numjobs=1
nrfiles=1
group_reporting=1
runtime=1m
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

[read-rand]
; 8 concurrent random readers, each to its own 1g file
rw=randread
size=1g
directory=/data/test
fadvise_hint=0
blocksize=8k
direct=0
ioengine=sync
iodepth=1
numjobs=8
nrfiles=1
group_reporting=1
runtime=1m
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

*Mixed Load:

[global]
; one random reader concurrently with one sequential reader.
directory=/data/test
fadvise_hint=0
blocksize=8k
direct=0
ioengine=sync
iodepth=1
runtime=1m
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

[seq-read]
rw=read
size=64g
numjobs=1
nrfiles
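The sweep wrapper mentioned above boils down to something like this sketch (read-seq.fio is a hypothetical file name holding one of the profiles above; the readahead value is in 512-byte sectors):

  #!/bin/sh
  # run an fio job file at a range of readahead values
  DEV=/dev/sda6
  for ra in 128 256 512 1024 2048 4096 8192 16384 32768 65536; do
      /sbin/blockdev --setra $ra $DEV
      echo "=== readahead=$ra ==="
      fio read-seq.fio
  done

Each profile already drops the page cache via exec_prerun, so runs at different readahead values stay comparable.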
Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller
There has been an error in the tests: the dataset size was not 2*MEM, it was 0.5*MEM. I shall redo the tests and post the results.